REVIEW article

Front. Comput. Sci., 09 March 2023
Sec. Human-Media Interaction
This article is part of the Research Topic Horizons in Computer Science 2022.

Do you like me? Behavioral and physical features for socially and emotionally engaging interactive systems

  • 1Dipartimento di Psicologia, Università della Campania “Luigi Vanvitelli”, Caserta, Italy
  • 2Institute for Advanced Scientific Studies (IIASS), Vietri sul Mare, Italy
  • 3Osservatorio Vesuviano, Sezione di Napoli, Naples, Italy

With the aim of giving an overview of the most recent findings in the field of socially engaging interactive systems, the present paper discusses the features affecting users' acceptance of virtual agents, robots, and chatbots. In addition, the questionnaires used in several investigations to assess the acceptance of virtual agents, robots, and chatbots (voice only) are discussed and reported in the Supplementary material to make them available to the scientific community. These questionnaires were developed by the authors as a scientific contribution to the H2020 projects EMPATHIC (http://www.empathic-project.eu/) and MENHIR (https://menhir-project.eu/) and the Italian-funded projects SIROBOTICS (https://www.exprivia.it/it-tile-6009-si-robotics/) and ANDROIDS (https://www.psicologia.unicampania.it/android-project), to guide the design and implementation of the promised assistive interactive dialog systems. They quantitatively evaluate the acceptance of virtual agents (Virtual Agent Acceptance Questionnaire, VAAQ), robots (Robot Acceptance Questionnaire, RAQ), and synthetic virtual agent voices (Virtual Agent Voice Acceptance Questionnaire, VAVAQ).

1. Introduction

Socially engaging interactive systems can be virtual agents, robots that physically occupy the user's space, and chatbots (also intended as conversational voice interfaces). Several factors affect the way these different technological entities are accepted by their users. User acceptance can be defined "…as the demonstrable willingness within a user group to employ information technology for the tasks it is designed to support" (Dillon, 1996). User acceptance differs from related concepts such as user experience (UX), system quality, and usability, since acceptance can be considered their result: something that comes after and contains them. It has been shown over time that user attraction cannot be reduced to the perceived usefulness of a system and its ease of use (Davis, 1989). In fact, theoretical constructs such as a user's social influence and the accomplishment of significant user goals, together with hedonic motivations (the fun or pleasure of using a technology), price values (a trade-off between perceived benefits and monetary costs), and users' habits, must be considered further determinants of users' intentions to use such systems (Venkatesh et al., 2003, 2012). These constructs have been operationalized through well-known theoretical models such as the Technology Acceptance Model (TAM; Davis, 1989), which evolved into the Unified Theory of Acceptance and Use of Technology (UTAUT; Venkatesh et al., 2003) and later into UTAUT2 (Venkatesh et al., 2012), as well as the Almere model, developed as a further evolution of UTAUT2 upon the criticism that the latter does not account for variables related to social interactions with robots or virtual agents and does not consider seniors as potential users (Heerink et al., 2010; Tsiourti et al., 2014). However, these theoretical formulations are not able, in our opinion, to explain behavioral intention in different contexts, especially considering that contemporary interactive systems are increasingly complex, showing humanoid and human appearances. To this aim, a systematic investigation was conducted to assess the effects of behavior- and appearance-related features of virtual agents, robots, and synthetic voices on users' acceptance; specific user domain preferences were investigated in the context of healthcare, and particularly in the context of the H2020 projects EMPATHIC (http://www.empathic-project.eu/) and MENHIR (https://menhir-project.eu/) and the Italian-funded projects SIROBOTICS (https://www.exprivia.it/it-tile-6009-si-robotics/) and ANDROIDS (https://www.psicologia.unicampania.it/android-project), to guide the design and implementation of the promised assistive interactive technologies. These projects brought with them the promise of guiding the implementation of virtual coaches that simplify the lives of elderly people living alone and help them remain independent, while at the same time monitoring their mental health status. What is fundamental is the possibility of exploiting intelligent and socially believable Information Communication Technology (ICT) interfaces that support seniors in living autonomously, simplify their management of daily tasks, and lighten workloads for caregivers. Moreover, such technologies, whether robots, chatbots, or virtual agents, can help not only elders but anyone requiring support in daily life activities.
For instance, conversational technologies in the shape of virtual agents, social robots, and chatbots can be used to improve users' mental wellbeing and lifestyles. These systems can serve as diagnostic tools for monitoring and treating symptoms of mental health conditions (Lovejoy, 2019), assessing users' tendency to engage in risky health behaviors (Elmasri and Maeder, 2016), encouraging users to adopt behaviors that increase wellbeing and reduce stress (Gardiner et al., 2017), and monitoring conversations with users to detect the presence of depressive symptoms (Delahunty et al., 2018). Conversational agents can be exploited as mental health tools providing support to people living with post-traumatic stress disorder (PTSD; Tielman et al., 2017), schizophrenia (Huckvale et al., 2013), phobias (Brinkman et al., 2008), and major depression (Pérez Díaz de Cerio et al., 2011), as well as to children with autism (Bernardini et al., 2013). These aspects were investigated more deeply in the context of the MENHIR project (https://menhir-project.eu/), aimed at researching and developing conversational technologies to promote mental health and assist people with mental health conditions (e.g., depression and anxiety) in managing their conditions. Since the successful incorporation of assistive technologies in everyday life depends mainly on how users perceive and accept them (De Graaf et al., 2015), the authors' work focused on investigating these issues from a user-centered perspective. Therefore, the present work summarizes these investigations, providing:

• an overview of the factors affecting users' perception of virtual agents (Section 2).

• an overview of the factors affecting users' perception of robots (Section 3).

• an overview of the factors affecting users' perception of chatbots (Section 4).

• a detailed description of the Virtual Agents Acceptance Questionnaire (VAAQ), the Robot Acceptance Questionnaire (RAQ), and the Synthetic Virtual Agent Voice Acceptance Questionnaire (VAVAQ) developed for these investigations (Section 5).

2. Features for accepting virtual agents

Virtual agents are cyber entities capable of communicating through human-like modalities such as voice, facial expressions, and body movements (Pelachaud, 2009). The appearance of virtual agents has a strong impact on the degree of users' acceptance. Appearance includes the physical and social features of the agent, such as its face, voice, gender, dressing style, and personality (Díaz-Boladeras et al., 2013; Esposito et al., 2021). An agent's voice and face have a strong impact on users' perceptions, as shown by studies highlighting people's skeptical reactions toward agents combining a human face with a synthetic voice or a synthetic face with a human voice (Gong and Nass, 2007). Studies have highlighted that senior users prefer human-like agents rather than machine- or animal-like ones (Straßmann and Krämer, 2017) and that people consider humanoid agents with a cartoon-like appearance more pleasant than realistic humanoid agents (Ring et al., 2014). Even children, when required to recognize realistic and stylized facial emotional expressions, seem to prefer stylized faces for the identification of surprise (Esposito et al., 2013). Esposito et al. (2019a) observed that voice seems to be a fundamental factor in increasing senior users' acceptance of virtual agents, while young adults and adolescents seem not to be strongly influenced by agents' voices. Moreover, interfaces endowed with a human face are able to improve employees' productivity (Kong, 2013), and virtual agents with human-like faces induce more positive user reactions than agents with animal-like or cartoon-like faces (Forlizzi et al., 2007; Oh et al., 2016). Gender has also been found to impact users' willingness to interact and is a factor capable of strongly influencing users' beliefs and expectations (Niculescu et al., 2010; Esposito et al., 2018b). One study (Ashby Plant et al., 2009) showed that students had higher performance, increased interest, and stronger feelings of self-efficacy while interacting with a female agent. Other studies involving seniors highlighted that they noticeably enjoyed interacting with a synthetic speaking voice produced by a static female agent (Cordasco et al., 2014). A further study investigating seniors' preferences (Esposito et al., 2018b) highlighted that they assessed female humanoid agents as more pleasant, practical, and attractive than male agents and were more prone to engage in long-lasting interactions with them. Dressing style is another variable affecting users' attitudes toward virtual agents, and it significantly interacts with gender. To this point, Lunardo (2016) showed that female virtual agents wearing corporate clothing were evaluated as more attractive than male agents, which increased social presence and trust and had a positive effect on online consumer behavior. Virtual agents' gender also seems to interact with other variables such as the agents' level of realism: more specifically, users seemed to prefer interacting with female virtual agents characterized by a more realistic appearance (Payne et al., 2013). Agents' behavior is another aspect influencing users; indeed, a study on persuasion showed that virtual agents characterized by higher behavioral realism were more convincing than those with lower behavioral realism (Guadagno et al., 2007).
Even an agent's perceived personality has an impact on users, as shown in work by Esposito et al. (2018a) involving seniors, who expressed preferences for interacting with virtual agents showing joyful and practical personalities rather than sad and aggressive traits. A further crucial aspect related to the appearance and design of a virtual assistant concerns its ability to manifest emotional expressions. Emotions are crucial for humans' survival and social adaptation, and they also represent a fundamental component of human-machine interaction. In fact, people prefer to interact with virtual agents that can show emotional facial expressions rather than with unemotional ones (Gobron et al., 2013). It has been shown (de Melo et al., 2014) that during negotiation processes with virtual agents, people tended to concede more if the agent expressed anger or blame than when the agent expressed happiness. However, when participants could choose between accepting or rejecting an offer, they tended to accept offers from agents expressing emotions such as joy, and tended to reject offers and withdraw from the negotiation when a virtual agent expressed anger or sadness. Other studies investigated the effect that facial emotional expressions conveyed by virtual agents can exert on the user within the interaction process. Bartneck et al. (2007) conducted an investigation in which participants performed a negotiation task interacting either with a screen-based or a robotic character. In both conditions, participants rated the interaction with characters expressing emotions as more pleasant than that with emotionless characters.

In summary, an assistive technology embodied in a virtual agent is more appealing to a population ranging from 14 to 65+ years old when implemented as a female virtual agent, appearing to be aged between 29 and 35 years, with a pragmatic and/or joyful personality.

3. Features for accepting robots

As with virtual agents, users' acceptance of socially assistive robots (SARs) is affected by several features. To the same extent as in human interactional exchanges, facial features, gender, age, and ethnicity are sources that users rely on to understand and accept the assistance of a robot (Smarr et al., 2011). Robots' appearance is one of the most important factors determining people's preferences. A major distinction is between humanoid robots, characterized by a human-like appearance, and android robots, which instead mimic a realistic human appearance. Since the formulation of the uncanny valley theory by Mori (1970), which describes people's reactions to robots and how these reactions vary with the perceived level of human likeness, several studies have focused on testing the effects of different levels of human likeness on users' acceptance of robots. Regarding the feeling of eeriness that a robot can cause in users, different explanations have been proposed: according to some studies, it is the stimulus category (human vs. non-human) that causes uncanny valley effects rather than the level of human likeness (Burleigh et al., 2013); other studies have highlighted that this effect is due to the inadequate rendering of some human-like characteristics of the robot, for instance, slow movements or poor lexicons (Wang et al., 2015), as well as inconsistency and reduced realism in human eyes–eyelashes–mouth and skin–nose–eyebrows (MacDorman and Chattopadhyay, 2015). Other studies investigating potential user attitudes toward robots identified a clear uncanny valley effect, since humanoid robots were evaluated as more friendly and pleasant (MacDorman and Ishiguro, 2006; Wu et al., 2012; Mara and Appel, 2015; Ferrari et al., 2016), as well as more suitable for performing assistive duties, protection and security tasks, and front desk occupations (Esposito et al., 2020a), than androids. Nevertheless, in contrast with this trend in the literature, some studies (Esposito et al., 2019c, 2020b) have highlighted seniors' preference for androids over humanoid robots. It has also been shown that people tend to attribute racial/ethnic identities to robots (Sparrow, 2020); it follows that a robot's ethnicity can strongly impact users' acceptance, as shown by Esposito et al. (2020b), where seniors preferred female android robots with Asian traits and male androids with Caucasian traits. Nevertheless, these factors (e.g., level of human likeness, gender, and ethnicity) do not seem to affect users' acceptance independently; rather, user acceptance appears to be a non-linear combination of all of them (Esposito et al., 2022). User acceptance of robots depends not only on characteristics the robot should have but also on features the user prefers the robot not to have, as in a study (de Graaf et al., 2019) in which participants negatively evaluated the sociability and companionship possibilities of domestic robots, suggesting that people seem not to want robots to behave socially.

To summarize, the acceptance of assistive technologies embodied in social robots is harder to obtain because of the difficulty (up to now) of implementing robots that adequately render human appearance in movements, facial expressions (eyes–eyelashes–mouth, skin–nose–eyebrows), and language.

4. Features for accepting chatbots

A chatbot is an interactive interface based on computer software able to simulate human conversations through natural language (Beilby et al., 2014). To be successfully accepted by users, chatbots should possess certain features: for instance, the ability to easily start an interaction, to precisely understand a user's words, to be trustworthy, and to provide correct and relevant answers, as well as the ability to express emotions (Tatai et al., 2003; Zamora, 2017; Zumstein and Hundertmark, 2017). Rietz et al. (2019) examined the influence of anthropomorphic chatbot design features on user acceptance, highlighting that such design characteristics increase a chatbot's perceived usefulness. Language style is a fundamental feature to consider when designing chatbots, as shown in the study of Gnewuch et al. (2020), in which chatbots with different language styles, dominant and submissive, were deployed. The study highlighted that when users perceived a similarity between their own and the chatbot's language style, their degree of self-disclosure and their acceptance of the chatbot increased. A further way to increase chatbot acceptance consists of providing the chatbot with a synthetic voice, as in well-known speech-based technologies such as Alexa and Google Assistant. Some guidelines concerning the characteristics a synthetic voice should have in order to meet users' expectations can be derived from studies investigating the role the voice plays in the acceptance of virtual agents. These studies highlighted that potential users prefer to interact with synthetic voices, even when not equipped with a visual interface or virtual avatar, rather than with mute agents (Esposito et al., 2021). A recent study (Amorese et al., 2023) analyzed the effect of synthetic voices' gender and quality on the preferences of mental health experts and participants living with depression and/or anxiety. The results showed that participants' preferences were affected by both the gender and the quality of the synthetic voice: more specifically, participants preferred female voices and high-quality voices, and voice quality in particular seemed to have a stronger impact on users' evaluations than voice gender.

Table 1 summarizes all the factors affecting users' acceptance as discussed above.

Table 1. The main factors affecting acceptance of each interactive system.

5. Questionnaires to assess acceptance of virtual agents, robots, and chatbots (voice only)

With the aim of testing the previously mentioned factors and providing information concerning the perception of virtual agents, robots, and synthetic voices, as well as the degree of technology acceptance among users, questionnaires were developed to explore potential users' satisfaction while interacting with virtual agents, robots, and synthetic voices, respectively named the Virtual Agents Acceptance Questionnaire (VAAQ), the Robot Acceptance Questionnaire (RAQ), and the Virtual Agent Voice Acceptance Questionnaire (VAVAQ). With regard to the VAVAQ, we did not use other standard questionnaires measuring voice quality because we wanted data concerning synthetic voices that could be compared with data concerning robots and virtual agents collected with the same tool. The questionnaires took inspiration from Hassenzahl's AttrakDiff questionnaire (2003, 2004, and 2014), designed to test the usability and appearance of interactive products (i.e., enterprise software, consumer products, websites, or medical devices) and distinguishing between pragmatic and hedonic factors. The VAAQ, RAQ, and VAVAQ (reported in the Supplementary material) are each composed of seven sections. Within the Supplementary material, only one questionnaire is reported: this single questionnaire can be used to measure any of the three systems, since the questions are the same and only the type of system being evaluated changes. Moreover, the Supplementary material contains the abridged version of the originally developed questionnaire: this version has been modified over time, and non-descriptive items have been eliminated to make administration less burdensome for participants. Since this shortening can be considered an improvement of the questionnaire, only the abridged version is reported. The first section is composed of four items collecting socio-demographic information about participants and three items investigating participants' experience with technology and difficulties using devices such as smartphones, tablets, and laptops. The second section, composed of one item, evaluates participants' willingness to interact with the proposed systems. The third section investigates how participants perceive the system and consists of four sub-sections, each composed of six items:

Subsection 1 is devoted to assessing the pragmatic qualities (PQ) of the system, regarding the system's usefulness, effectiveness, practicality, and ease of use.

Subsection 2 is devoted to assessing the hedonic qualities-identity (HQI) of the system, regarding the system's originality, professionalism, creativeness, and pleasantness.

Subsection 3 is devoted to assessing the hedonic qualities-feeling (HQF) of the system, regarding the system's ability to arouse both positive and negative feelings.

Subsection 4 is devoted to assessing the attractiveness (ATT) of the system, regarding the system's attractiveness and ability to encourage increased use and long-term relationships.

The fourth section is composed of three items assessing the impact that the perceived age attributed to the agent, robot, or voice could have on the user. Section five investigates the systems' perceived suitability for performing tasks in: (a) welfare occupations for seniors, children, and disabled people; (b) housework; (c) protection and security occupations; and (d) public relations and front office occupations. Section six is specifically devoted to assessing the system's voice, in particular its intelligibility, expressiveness, and naturalness. Section seven, lastly, is devoted to evaluating the possible effect of exploiting Wizard of Oz (WoZ) techniques during the interactions, and thus has to be administered only when WoZ procedures are involved.
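For reference, the section layout just described can be captured in a compact data structure. The sketch below is only an illustrative Python encoding: the key names and nesting are our own choices for exposition and do not come from the questionnaires themselves.

```python
# Hypothetical encoding of the VAAQ/RAQ/VAVAQ layout described above.
# Item counts follow the text; all labels are ours, not the instrument's.
QUESTIONNAIRE_LAYOUT = {
    "1_sociodemographics": {"items": 4, "technology_experience_items": 3},
    "2_willingness_to_interact": {"items": 1},
    "3_system_perception": {           # four sub-sections, six items each
        "PQ": 6,    # pragmatic qualities
        "HQI": 6,   # hedonic qualities - identity
        "HQF": 6,   # hedonic qualities - feeling
        "ATT": 6,   # attractiveness
    },
    "4_perceived_age": {"items": 3},
    "5_task_suitability": ["welfare", "housework", "protection/security",
                           "public relations/front office"],
    "6_voice": ["intelligibility", "expressiveness", "naturalness"],
    "7_wizard_of_oz": {"administer_only_with_woz": True},
}
```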

For each item, participants' answers were given on a 5-point Likert scale: 1 = strongly agree, 2 = agree, 3 = I don't know, 4 = disagree, and 5 = strongly disagree. Since sections two, three, six, and seven of the questionnaires contain both positive and negative items, scores from negative items are reverse-scored, so that low scores correspond to positive evaluations and high scores to negative ones.
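As a concrete illustration of this scoring rule, the short Python sketch below reverse-scores the negatively worded items of a six-item subscale and averages the result per participant. Which items are reversed, and the use of the mean as the aggregate, are assumptions made for the example; the text specifies only that negative items are corrected in reverse.

```python
import numpy as np

def score_subscale(responses, reverse_items):
    """Score a six-item subscale (e.g., PQ) answered on the 5-point scale
    1 = strongly agree ... 5 = strongly disagree.

    responses:     (n_participants, 6) array of raw answers
    reverse_items: column indices of negatively worded items (assumed)
    """
    scored = np.asarray(responses, dtype=float).copy()
    # Reverse-score negative items: 1 <-> 5, 2 <-> 4, 3 stays 3, so that
    # low scores always correspond to positive evaluations.
    scored[:, reverse_items] = 6 - scored[:, reverse_items]
    # Aggregate per participant (the mean is our choice for this example).
    return scored.mean(axis=1)

# Example: two participants; items 2 and 5 assumed negatively worded.
raw = np.array([[1, 2, 5, 1, 2, 4],
                [3, 3, 3, 3, 3, 3]])
print(score_subscale(raw, reverse_items=[2, 5]))  # -> [1.5, 3.0]
```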

The RAQ was recently validated using principal component analysis (PCA), and its internal consistency was checked; this work is currently under submission. We are also planning to extend the validation work to the other questionnaires (VAAQ and VAVAQ) and publish the results.
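For readers who wish to run this kind of validation on their own data, the sketch below shows the two standard steps mentioned above: internal consistency via Cronbach's alpha and an exploratory PCA. It is a generic outline under our own assumptions (e.g., the number of components), not the procedure of the study under submission.

```python
import numpy as np
from sklearn.decomposition import PCA

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def explore_components(items, n_components=4):
    """Exploratory PCA: the variance explained by each component and the
    item loadings, from which a latent factor structure can be read."""
    pca = PCA(n_components=n_components)
    pca.fit(np.asarray(items, dtype=float))
    return pca.explained_variance_ratio_, pca.components_
```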

6. Conclusions

In this article, we presented a brief overview of the features that socially and emotionally engaging interactive systems should possess in order to meet users' needs and expectations. We focused in particular on three typologies of interactive systems: virtual agents, robots, and chatbots. It emerged that several physical and behavioral features are capable of affecting users' acceptance and that these features also interact with each other. Considering this, and the authors' involvement, as previously mentioned, in the H2020 projects EMPATHIC and MENHIR, a systematic investigation was conducted assessing the behavioral and appearance-related features affecting users' acceptance of virtual agents, robots, and synthetic voices in the context of healthcare. The hope was to provide guidelines, as emphasized in the aims of the EMPATHIC project, to "develop causal models of [agent] coach-user interactional exchanges, which engage elderly [sic] in emotionally believable interactions keeping off loneliness, sustaining health status, enhancing quality of life and simplifying access to future telecare services." Among the initial research steps, priority was given to the development of a dedicated questionnaire to assess seniors' preferences toward the developed empathic virtual coach. During the project's midterm period, the researchers from Università della Campania L. Vanvitelli developed the Virtual Agent Acceptance Questionnaire (VAAQ), which changed dynamically during the project to better fit the observed final users' requirements; it also gave rise to corresponding versions of the questionnaire dedicated to robots [the Robot Acceptance Questionnaire (RAQ)] and synthetic voices [the Virtual Agent Voice Acceptance Questionnaire (VAVAQ)]. This paper, in fact, also reports the final (shortened) versions of the questionnaires, along with results from testing a large population of users, including adolescents, young adults, middle-aged adults, and seniors, assessing their acceptance not only of virtual agents but also of interactive systems such as conversational voice interfaces and humanoid and android robots.

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding

This work has been partially funded by the following projects: Project EMPATHIC, N. 769872, Horizon 2020; Project MENHIR, N. 823907, Horizon 2020; Project SIROBOTICS, Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR); and Project ANDROIDS, N. 157264, V:ALERE 2019.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcomp.2023.1138501/full#supplementary-material

References

Amorese, T., Mc Convey, G., Cuciniello, M., Cordasco, G., Bond, R., Mulvenna, M., et al. (2023). “Assessing synthetic voices for mental health chatbots,” in 8th International Congress on Information and Communication Technology (ICICT 2023) (London).

Ashby Plant, E., Baylor, A. L., Doerr, C. E., and Rosenberg-Kima, R. B. (2009). Changing middle school students' attitudes and performance regarding engineering with computer-based social models. Comput. Educ. 53, 209–215. doi: 10.1016/j.compedu.2009.01.013

Bartneck, C., Kanda, T., Ishiguro, H., and Hagita, N. (2007). “Is the uncanny valley an uncanny cliff?” in RO-MAN 2007-The 16th IEEE International Symposium on Robot and Human Interactive Communication (Piscataway, NJ: IEEE), 368–373. doi: 10.1109/ROMAN.2007.4415111

Beilby, L. J., Zakos, J., and McLaughlin, G. A. (2014). U.S. Patent No. 8,630,961. Washington, DC: U.S. Patent and Trademark Office.

Bernardini, S., Porayska-Pomsta, K., and Sampath, H. (2013). “Designing an intelligent virtual agent for social communication in autism,” in Proceedings of the Ninth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (Boston, MA: AAAI), 9–15. doi: 10.1609/aiide.v9i1.12688

Brinkman, W. P., van der Mast, C. A. P. G., and de Vliegher, D. (2008). "Virtual reality exposure therapy for social phobia: A pilot study in evoking fear in a virtual world," in Proceedings of HCI2008 Workshop – HCI for Technology Enhanced Learning (Liverpool).

Burleigh, T. J., Schoenherr, J. R., and Lacroix, G. L. (2013). Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces. Comput. Hum. Behav. 29, 759–771. doi: 10.1016/j.chb.2012.11.021

Cordasco, G., Esposito, M., Masucci, F., Riviello, M. T., Esposito, A., Chollet, G., et al. (2014). "Assessing voice user interfaces: The vAssist system prototype," in Proceedings of the 5th IEEE International Conference on Cognitive InfoCommunications, 5-7 Nov. (Vietri sul Mare), 91–96. doi: 10.1109/CogInfoCom.2014.7020425

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 13, 319–340. doi: 10.2307/249008

De Graaf, M. M., Allouch, S. B., and Klamer, T. (2015). Sharing a life with Harvey: Exploring the acceptance of and relationship-building with a social robot. Comput. Hum. Behav. 43, 1–14.

de Graaf, M. M., Ben Allouch, S., and Van Dijk, J. A. (2019). Why would I use this in my home? A model of domestic social robot acceptance. Hum. Comput. Interact. 34, 115–173. doi: 10.1080/07370024.2017.1312406

de Melo, C., Gratch, J., and Carnevale, P. (2014). Humans versus computers: Impact of emotion expressions on people's decision making. IEEE Trans. Affect. Comput. 6, 127–136. doi: 10.1109/TAFFC.2014.2332471

Delahunty, F., Wood, I. D., and Arcan, M. (2018). “First insights on a passive major depressive disorder prediction system with incorporated conversational chatbot,” in AICS (Dublin), 327–338.

Díaz-Boladeras, M., Saez-Pons, J., Heerink, M., and Angulo, C. (2013). “Emotional factors in robot-based assistive services for elderly at home,” in IEEE RO-MAN: The 22nd IEEE International Symposium on Robot and Human Interactive Communication. Gyeongju. doi: 10.1109/ROMAN.2013.6628396

Dillon, A. (1996). User acceptance of information technology: Theories and models. Ann. Rev. Inform. Sci. Technol. 31, 3–32.

Elmasri, D., and Maeder, A. (2016). “A conversational agent for an online mental health intervention,” in International Conference on Brain Informatics (Cham: Springer), 243–251. doi: 10.1007/978-3-319-47103-7_24

Esposito, A., Amorese, T., Cuciniello, M., Esposito, A. M., Troncone, A., Torres, M. I., et al. (2018b). “Seniors' acceptance of virtual humanoid agents,” in Italian Forum of Ambient Assisted Living (Cham: Springer), 429–443. doi: 10.1007/978-3-030-05921-7_35

Esposito, A., Amorese, T., Cuciniello, M., Pica, I., Riviello, M. T., Troncone, A., et al. (2019c). “Elders prefer female robots with a high degree of human likeness,” in 23rd International Symposium on Consumer Technologies (ISCT). Piscataway, NJ: IEEE. doi: 10.1109/ISCE.2019.8900983

Esposito, A., Amorese, T., Cuciniello, M., Riviello, M. T., Esposito, A. M., Troncone, A., et al. (2021). Elder user's attitude toward assistive virtual agents: the role of voice and gender. J. Ambient. Intell. Human. Comput. 12, 4429–4436. doi: 10.1007/s12652-019-01423-x

Esposito, A., Amorese, T., Cuciniello, M., Riviello, M. T., and Cordasco, G. (2020b). “How human likeness, gender and ethnicity affect elders' acceptance of assistive robots,” in Proceedings of 1st IEEE International Conference on Human-Machine Systems (ICHMS). doi: 10.1109/ICHMS49158.2020.9209546

Esposito, A., Amorese, T., Cuciniello, M., Riviello, M. T., Esposito, A. M., Troncone, A., et al. (2019a). “The dependability of voice on elders' acceptance of humanoid agents,” in Interspeech, eds G. Kubin and Z. Kacic (Graz: ISCA), 31–35. doi: 10.21437/Interspeech.2019-1734

Esposito, A., Cuciniello, M., Amorese, T., Esposito, A. M., Troncone, A., Maldonato, M. N., et al. (2020a). "Seniors' appreciation of humanoid robots," in Neural Approaches to Dynamics of Signal Exchanges. Smart Innovation, Systems and Technologies, Vol 151, eds A. Esposito, M. Faundez-Zanuy, F. Morabito, and E. Pasero (Singapore: Springer). doi: 10.1007/978-981-13-8950-4_30

Esposito, A., Cuciniello, M., Amorese, T., Vinciarelli, A., and Cordasco, G. (2022). Humanoid and android robots in the imaginary of adolescents, young adults, and seniors. J. Ambient Intellig. Human. Comput. 2022, 1–20. doi: 10.1007/s12652-022-03806-z

Esposito, A., Riviello, M. T., and Capuano, V. (2013). “Discriminating human vs. stylized emotional faces: Recognition accuracy in young children,” in Neural Nets and Surroundings. Smart Innovation, Systems and Technologies, vol 19, eds B. Apolloni, S. Bassis, A. Esposito, and F. Morabito (Berlin; Heidelberg: Springer), 39. doi: 10.1007/978-3-642-35467-0_39

Esposito, A., Schlögl, S., Amorese, T., Esposito, A., Torres, M. I., Masucci, F., et al. (2018a). “Seniors' sensing of agents' personality from facial expressions,” in International Conference on Computers Helping People with Special Needs (Cham: Springer), 438–442. doi: 10.1007/978-3-319-94274-2_63

Ferrari, F., Paladino, M. P., and Jetten, J. (2016). Blurring human–machine distinctions: Anthropomorphic appearance in social robots as a threat to human distinctiveness. Int. J. Soc. Robot. 8, 287–302. doi: 10.1007/s12369-016-0338-y

Forlizzi, J., Zimmerman, J., Mancuso, V., and Kwak, S. (2007). "How interface agents affect interaction between humans and computers," in Proceedings of the 2007 Conference on Designing Pleasurable Products and Interfaces (New York, NY: ACM). doi: 10.1145/1314161.1314180

Gardiner, P. M., McCue, K. D., Negash, L. M., Cheng, T., White, L. F., Yinusa- Nyahkoon, L., et al. (2017). Engaging women with an embodied conversational agent to deliver mindfulness and lifestyle recommendations: A feasibility randomized control trial. Pat. Educ. Counsel. 100, 1720–1729. doi: 10.1016/j.pec.2017.04.015

Gnewuch, U., Yu, M., and Maedche, A. (2020). “The effect of perceived similarity in dominance on customer self-disclosure to chatbots in conversational commerce,” in Proceedings of the 28th European Conference on Information Systems (ECIS), An Online AIS Conference. Available online at: https://aisel.aisnet.org/ecis2020_rp/53

Gobron, S., Ahn, J., Thalmann, D., Skowron, M., and Kappas, A. (2013). Impact study of nonverbal facial cues on spontaneous chatting with virtual humans. J. Virtual Reality Broadcast. 19, 1-17. Available online at: https://www.jvrb.org/past-issues/10.2013/3823/1020136.pdf

Gong, L., and Nass, C. (2007). When a talking-face computer agent is half-human and half-humanoid: Human identity and consistency preference. Hum. Commun. Res. 33, 163–193. doi: 10.1111/j.1468-2958.2007.00295.x

Guadagno, R. E., Blascovich, J., Bailenson, J. N., and Mccall, C. (2007). Virtual humans and persuasion: The effects of agency and behavioral realism. Media Psychol. 10, 1–22. doi: 10.1080/15213260701300865

Heerink, M., Kröse, B., Evers, V., and Wielinga, B. (2010). Assessing acceptance of assistive social agent technology by older adults: The Almere model. Int. J. Soc. Robot. 2, 361–375. doi: 10.1007/s12369-010-0068-5

Huckvale, M., Leff, J., and Williams, G. (2013). "Avatar therapy: An audiovisual dialogue system for treating auditory hallucinations," in Interspeech, eds F. Bimbot, C. Cerisara, C. Fougeron, G. Gravier, L. Lamel, F. Pellegrino, and P. Perrier (Lyon: ISCA), 392–396. doi: 10.21437/Interspeech.2013-107

Kong, H. (2013). Face interface will empower employee. IJACT 5, 193–199.

Lovejoy, C. A. (2019). Technology and mental health: The role of artificial intelligence. Eur. Psychiatr. 55, 1–3. doi: 10.1016/j.eurpsy.2018.08.004

Lunardo, R. (2016). The interacting effect of virtual agents' gender and dressing style on attractiveness and subsequent consumer online behavior. J. Retail Consum. Serv. 30, 59–66. doi: 10.1016/j.jretconser.2016.01.006

MacDorman, K. F., and Chattopadhyay, D. (2015). Reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not. Cognition 146, 190–205. doi: 10.1016/j.cognition.2015.09.019

MacDorman, K. F., and Ishiguro, H. (2006). The uncanny advantage of using androids in cognitive and social science research. Interact. Stud. 7, 297–337. doi: 10.1075/is.7.3.03mac

Mara, M., and Appel, M. (2015). Effects of lateral head tilt on user perceptions of humanoid and android robots. Comput. Hum. Behav. 44, 326–334. doi: 10.1016/j.chb.2014.09.025

Mori, M. (1970). The uncanny valley. Energy 7, 33–35.

Niculescu, A., Hofs, D., van Dijk, B., and Nijholt, A. (2010). “How the agent's gender influence users' evaluation of a QA system,” in Proceedings of the International Conference on User Science and Engineering (i-USEr 2010). December 13–15, 2010. Shah Alam, Selangor, Malaysia. (Piscataway, NJ: IEEE), 16–20. doi: 10.1109/IUSER.2010.5716715

Oh, S. Y., Bailenson, J., Krämer, N., and Li, B. (2016). Let the avatar brighten your smile: Effects of enhancing facial expressions in virtual environments. PLoS ONE 11, e0161794. doi: 10.1371/journal.pone.0161794

Payne, J., Szymkowiak, A., Robertson, P., and Johnson, G. (2013). "Gendering the machine: Preferred virtual assistant gender and realism in self-service," in Intelligent Virtual Agents. IVA 2013. Lecture Notes in Computer Science, vol 8108, eds R. Aylett, B. Krenn, C. Pelachaud, and H. Shimodaira (Berlin: Springer), 9. doi: 10.1007/978-3-642-40415-3_9

Pelachaud, C. (2009). Modelling multimodal expression of emotion in a virtual agent. Philos. Trans. Royal Soc. B 364, 3539–3548. doi: 10.1098/rstb.2009.0186

Pérez Díaz de Cerio, D., Valenzuela González, J. L., Ruiz Boqué, S., García Lozano, M., and Colomé, J. M. (2011). “Help4Mood: A computational distributed system to support the treatment of patients with major depression,” in COST IC1004: Cooperative Radio Communications for Green Smart Environments (Lund). Available online at: http://hdl.handle.net/2117/15129

Rietz, T., Benke, I., and Maedche, A. (2019). "The impact of anthropomorphic and functional chatbot design features in enterprise collaboration systems on user acceptance," in Internationale Tagung Wirtschaftsinformatik (WI 2019) (Siegen).

Ring, L., Utami, D., and Bickmore, T. (2014). “The right agent for the job? The effects of agent visual appearance on task domain,” in Proceedings of International Conference on Intelligent Virtual Agents (IVA 2014), LNCS. 8637 (Cham: Springer International Publishing), 374–384. doi: 10.1007/978-3-319-09767-1_49

Smarr, C. A., Fausset, C. B., and Rogers, W. A. (2011). Understanding the potential for robot assistance for older adults in the home environment. Technical Report HFA-TR-1102. Georgia Institute of Technology, Atlanta, GA, United States.

Sparrow, R. (2020). Do robots have race? Race, social construction, and HRI. IEEE Robot. Automat. Magazine 27, 144–150. doi: 10.1109/MRA.2019.2927372

Straßmann, C., and Krämer, N. C. (2017). "A categorization of virtual agent appearances and a qualitative study on age-related user preferences," in Proceedings of International Conference on Intelligent Virtual Agents (IVA 2017), LNCS. 10498 (Cham: Springer International Publishing), 413–422. doi: 10.1007/978-3-319-67401-8_51

Tatai, G., Csordás, A., Kiss, Á., Szaló, A., and Laufer, L. (2003). "Happy chatbot, happy user," in International Workshop on Intelligent Virtual Agents (Berlin, Heidelberg: Springer), 5–12. doi: 10.1007/978-3-540-39396-2_2

Tielman, M. L., Neerincx, M. A., Bidarra, R., Kybartas, B., and Brinkman, W. P. (2017). A therapy system for post-traumatic stress disorder using a virtual agent and virtual storytelling to reconstruct traumatic memories. J. Med. Syst. 41, 125. doi: 10.1007/s10916-017-0771-y

Tsiourti, C., Joly, E., Wings, C., Ben Moussa, M., and Wac, K. (2014). “Virtual assistive companions for older adults: Qualitative field study and design implications,” in 8th International Conference on Pervasive Computing Technologies for Healthcare, ICST, eds H. Andreas, B. Susanne, K. Friedrich (Brussels), 57–64. doi: 10.4108/icst.pervasivehealth.2014.254943

Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quart. 27, 425–478. doi: 10.2307/30036540

Venkatesh, V., Thong, J. Y., and Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quart. 36, 157–178. doi: 10.2307/41410412

Wang, S., Lilienfeld, S. O., and Rochat, P. (2015). The uncanny valley: Existence and explanations. Rev. Gen. Psychol. 19, 393–407. doi: 10.1037/gpr0000056

Wu, Y. H., Fassert, C., and Rigaud, A. S. (2012). Designing robots for the elderly: Appearance issue and beyond. Archiv. Gerontol. Geriatr. 54, 121–126. doi: 10.1016/j.archger.2011.02.003

Zamora, J. (2017). "I'm sorry, Dave, I'm afraid I can't do that: Chatbot perception and expectations," in Proceedings of the 5th International Conference on Human Agent Interaction (HAI '17) (New York, NY: Association for Computing Machinery), 253–260. doi: 10.1145/3125739.3125766

Zumstein, D., and Hundertmark, S. (2017). Chatbots–An interactive technology for personalized communication, transactions and services. Iadis Int. J. WWW/Internet 15, 96–109.

Keywords: interactive systems, acceptance, virtual agents, robots, chatbots, evaluation questionnaire

Citation: Esposito A, Amorese T, Cuciniello M, Esposito AM and Cordasco G (2023) Do you like me? Behavioral and physical features for socially and emotionally engaging interactive systems. Front. Comput. Sci. 5:1138501. doi: 10.3389/fcomp.2023.1138501

Received: 05 January 2023; Accepted: 17 February 2023;
Published: 09 March 2023.

Edited by:

Anton Nijholt, University of Twente, Netherlands

Reviewed by:

Carl Vogel, Trinity College Dublin, Ireland
Andreea I. Niculescu, Institute for Infocomm Research (A*STAR), Singapore

Copyright © 2023 Esposito, Amorese, Cuciniello, Esposito and Cordasco. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Anna Esposito, anna.esposito@unicampania.it
