1 Introduction
Algorithmic platforms, as a form of intelligent digital infrastructure underpinned by algorithms, have fundamentally transformed how individuals connect and interact with one another (Yin and Lin, 2023). The integration of emotional intelligence into these algorithms further deepens the relational connection between users and platforms. As the algorithm's affective capabilities, such as emotion perception and feedback, are enhanced (Wu et al., 2022; Bie and Zeng, 2024; Peng, 2024), the attributes of the user-platform relationship undergo qualitative change in the affective dimension. Consequently, human-computer interaction (HCI) evolves toward humanization, as articulated by Paul (2017). To some extent, this not only realizes the technical possibility of platform personification but also addresses modern individuals' emotional needs within cyborg space, fostering and sustaining potential emotional connections between users and platforms (Lai, 2023; Hong and Huang, 2024). As a result, emotion has emerged as a significant focus in research on interactions between users and algorithmic platforms.
Looking back, Marvin Minsky, regarded as the father of artificial intelligence (AI), proposed the groundbreaking idea that "AI should possess emotions" as early as 1985 (Marvin, 2006). Subsequently, Rosalind Picard, in her seminal work Affective Computing, elucidated the technical possibility of endowing computers with emotional capabilities (Rosalind, 1997). Numerous studies in media psychology have demonstrated that individuals often mindlessly equate virtual media featuring anthropomorphic cues with real-life experiences (Reeves and Nass, 1996), leading to para-social interactions and relationships with these entities (Rubin et al., 1985; Bickmore and Picard, 2005). At the time, however, these conclusions were constrained by the limits of affective computing technology within computer science and AI; they therefore existed primarily at the level of academic experiment and discussion, without widespread empirical evidence in real-world contexts. In recent years, the rapid advancement and extensive application of emotional AI have transcended these academic boundaries. Para-social relationships are becoming increasingly normalized within society, a development now garnering significant attention from scholars across the humanities and social sciences.
Unlike scholars in affective computing, who focus primarily on experimental research involving HCI, scholars in the humanities and social sciences tend to explore the social and ethical risks of these technologies from philosophical and sociological perspectives. Marx, for instance, famously asserted that the essence of humanity is "the sum of all social relations" (Marx and Engels, 2009); understanding what it means to be human therefore requires examining the relationships between individuals and others. Building on this foundation, some scholars have suggested that, within the current dynamics of user-platform interaction, integrating emotional intelligence into the development of human-like AI (such as algorithms) may give rise to a phenomenon termed "human alienation," in which AI, a product of human creation, threatens the evolution of human subjectivity across three dimensions: communication, cognition, and labor (Xie and Liu, 2023). They therefore call on society at large to recognize the developmental limits of AI and advocate creating controllable, safe, and reliable AI systems while promoting the collaborative evolution of human-machine society and general AI (Huang and Lv, 2023).
Against this backdrop, I found that, despite numerous studies exploring emotional interactions between humans and computers (Rosalind, 1997; Reeves and Nass, 1996; McStay, 2018; Marcos-Pablos and García-Peñalvo, 2022; Peng, 2024; Lai, 2023; Gan and Wang, 2024; Zhao and Li, 2023) and the ethical issues arising from the development of emotional AI (Gossett, 2023; Tretter, 2024; Nyholm and Frank, 2019; Xiao and Zhang, 2024; Yin and Liu, 2021; Zhang, 2024), no research has yet approached the issue from a theoretical and speculative standpoint: defining the dynamic emotional relationship between users and AI platforms as emotional AI develops, and exploring how that relationship reshapes human interaction paradigms. In particular, there is a lack of theoretical discussion of the profound changes that the advancement of emotional AI brings to existing societal paradigms.
My goal is to fill this gap and to argue that, when algorithms integrate emotional intelligence, a new type of relationship, pseudo-intimacy, emerges between users and platforms, serving as a new paradigm of human interaction that coexists with face-to-face relationships in the real world. In this pseudo-intimacy relationship, users and platforms achieve instantaneous emotional interaction, partially satisfying the human desire for intimacy. At the same time, the relationship is constrained by the limited development of emotional AI and by human irrationality, making the human social environment more fraught with contradiction and tension. Consequently, the advancement of emotional AI should focus not only on technological innovation and subjective human experience but also on its impact on human social interaction paradigms. Yet, if appropriate measures are taken to address these ethical risks, I argue, nothing can fundamentally stand in the way of the progress of emotional AI.
To elaborate on this thesis, I first examine how the pseudo-intimacy relationship emerges and develops, and offer a realistic account of its definition. I then discuss the associated ethical risks and present my position and recommendations on the future development of emotional AI.
2 The user-platform pseudo-intimacy relationship becomes a new paradigm for human interaction
The human-computer society could not exist without emotion serving as its glue (Gan and Wang, 2024). Although emotions in HCI have received attention from affective computing scholars such as Rosalind Picard since the end of the last century, and the "para-social relationship" has been discussed in media studies for decades, emotions were long marginalized in the sociological study of the user-platform relationship, largely owing to the stereotype of an "emotion-rationality" dichotomy among some scholars (Yuan, 2021). In recent years, research in social robotics has made significant strides in enhancing robots' emotional capabilities to improve their capacity for empathy and social engagement with humans (Marcos-Pablos and García-Peñalvo, 2022). Sociological theorists are increasingly recognizing that, with the algorithmic platform's anthropomorphic development (Wu et al., 2022; Zhao and Li, 2023), the most distinct boundary between HCI and interpersonal social interaction, namely the authenticity of the interaction object (Giles, 2002), has been broken. The user-platform relationship has moved beyond the "para-social relationship" defined by HCI scholars, resulting in a "pseudo-intimacy relationship" between humans and humanlike entities. This is evident in current HCI, where users anthropomorphize and idealize computers on the basis of their emotional intelligence, forming social relationships that can feel more satisfying than face-to-face ones.
2.1 The user-platform emotional relationship unfolds in immediate interaction and partially satisfies the human need for intimacy
Research on anthropomorphism posits that humans have an inherent tendency to anthropomorphize non-human entities, driven in part by the desire to engage and connect with society (Epley et al., 2007). In Alone Together, Sherry Turkle, a professor at the Massachusetts Institute of Technology (MIT), examined the psychological phenomenon whereby individuals forge intimate connections with computers. She argues that humans can develop emotional relationships with computers and may even regard them as significant others akin to family and friends (Sherry, 2014). The human-computer relationship established on this premise, particularly in the context of social media, mimics the emotional bonds found among humans; however, it lacks the depth and complexity characteristic of genuine human interaction, constrained in part by the technology available at the time. Since then, scholars have increasingly suggested that individuals may integrate computers into their interpersonal networks and become emotionally reliant on their presence (Thomas and Julia, 2018; Wang, 2023; Wang et al., 2024; Gan and Wang, 2024).
With the rapid advancement of AI's emotional capabilities and the widespread adoption of intelligent algorithmic platforms, this perspective is increasingly validated. Algorithmic technologies endowed with emotional intelligence enable instantaneous, bidirectional emotional interaction between users and platforms (Ke and Song, 2021; Hong and Huang, 2024). On the basis of the emotional purpose of human communication, I characterize this as a "pseudo-intimacy relationship." Within it, because the non-verbal social cues of face-to-face interaction are absent, instant emotional exchanges mediated by affective AI may lead users to overinterpret the limited information available (Walther et al., 2015), steering the relationship toward unhealthy development.
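To make this mechanism concrete, the following is a minimal, purely illustrative Python sketch of such an instantaneous affective feedback loop. Everything in it is a hypothetical stand-in: the AFFECT_LEXICON, the estimate_affect and respond functions, the thresholds, and the canned replies are toy substitutes for the trained emotion-recognition models and generative components that real platforms deploy. The sketch shows only the loop's structure: estimate the user's affect from a message, then return an immediately affect-matched response.

```python
# Illustrative sketch only: a toy affective feedback loop.
# Real platforms use trained emotion-recognition models; this keyword
# lexicon and the canned replies are hypothetical stand-ins.

AFFECT_LEXICON = {
    "lonely": -1.0, "sad": -1.0, "anxious": -0.8, "tired": -0.5,
    "glad": 0.8, "happy": 1.0, "excited": 1.0,
}

def estimate_affect(message: str) -> float:
    """Crude valence estimate in [-1, 1] based on lexicon hits."""
    hits = [v for word, v in AFFECT_LEXICON.items() if word in message.lower()]
    return sum(hits) / len(hits) if hits else 0.0

def respond(message: str) -> str:
    """Return an affect-matched reply, closing the loop within one turn."""
    valence = estimate_affect(message)
    if valence < -0.2:   # negative affect -> consoling reply
        return "That sounds hard. I'm here with you."
    if valence > 0.2:    # positive affect -> celebratory reply
        return "That's wonderful to hear!"
    return "Tell me more about how you are feeling."  # neutral or unknown

if __name__ == "__main__":
    print(respond("I feel so lonely tonight"))    # consoling reply
    print(respond("I'm happy about my new job"))  # celebratory reply
```

Even this toy loop hints at why overinterpretation is possible: the platform's reply is generated from a thin signal, yet it arrives instantly and in an intimate register.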
In terms of emotional interaction, the enhancement of algorithmic emotional intelligence has not only made algorithmic platforms novel objects of human interaction but also awakened, and partially satisfied, the latent human need for intimacy. Some researchers have noted that this enhancement mobilizes human emotions for immediate user-platform emotional interaction (Bie, 2023). With emotional intelligence in play, users display strong conscious or unconscious emotions during interactions (Nagy and Neff, 2015), continuously motivating themselves to engage while eliciting immediate emotional feedback from the platform, thereby accelerating the emotional flow between the two.
In addition, from the point of view of the "looking-glass self" theory put forward by the American sociologist Charles Horton Cooley, the essence of user-platform emotional interaction is an extension of human emotional projection and of the construction of the ideal self in social interaction (Gan and Wang, 2024). In the dynamic interplay of human emotional projection and computer affective computing, a recursive effect akin to an "infinite mirror" emerges between the two entities (Panaite and Bogdanffy, 2019), in which emotions are continuously iterated and refined. This process fosters the evolution of user-platform communication forms and experiences, with pseudo-intimacy becoming a defining characteristic of the user-platform relationship. It further deepens the emotional exchange between users and platforms, potentially elevating it to a cultural level, generating social consensus on granting platforms the status of "interaction subjects," and even inviting visions of a future in which user-platform emotional exchange becomes fully reciprocal.
However, it must be pointed out that, in contrast to emotional AI's essence as a technological object, only human beings are truly emotional animals. Emotions, as a reflection of collective human intentions, are expressed through, and serve to rationalize, human behavior (Swallow, 2009). Accordingly, in the evolution of the user-platform relationship's attributes, emotional intelligence in AI systems is an external factor, while the human need for intimacy is the initial driving force that elevates the pseudo-intimacy relationship into a new paradigm of human interaction.
2.2 User-platform emotional interactions have become more real and tangible, while the human social environment is marked by heightened contradictions and tensions
From the perspective of social relationships, before algorithmic emotional capabilities were developed, the user-platform relationship was fundamentally an HCI. Even when it carried emotional undertones, it remained a one-sided contribution from users, who received no emotional response from platforms, only feedback on usage and satisfaction, captured by the notion of "user stickiness" (Periaiya and Nandukrishna, 2024). Today, with further developments in emotional AI technology, algorithmic platforms are endowed with emotional capabilities, and the anthropomorphic affective attributes of the user-platform relationship have become more pronounced in communicative contexts (Zhejiang Lab and Deloitte, 2023). This evolution has introduced a degree of warmth into these interactions, leading to the emergence and deployment of conversational and companionable AI.
However, akin to two sides of the same coin, the development of emotional intelligence in AI systems has also introduced a range of risks and sparked extensive discussion of their ethical implications (McStay, 2018; Greene, 2020; Gremsl and Hödl, 2022; Gossett, 2023; Tretter, 2024). These discussions highlight the potential benefits of emotionally capable AI systems while addressing the challenges posed by the technological uncontrollability of AI companions and by human irrationality in emotional interactions with intelligent systems (Yang and Wu, 2024; Chen and Tang, 2024). Scholars contend that as long as emotional AI technologies can influence human emotions, they possess the potential to serve as instruments of emotional deception (Bertolini and Arian, 2020). In light of these concerns, many researchers advocate protective measures across fields such as education, healthcare, and justice to regulate AI systems capable of interpreting and responding to human emotions and to prevent their irrational use (McStay, 2020; Vagisha and Harendra, 2023; Crawford, 2021).
In the context of the user-platform relationship examined here, the advancement of emotional AI technology has also exacerbated ethical concerns related to private data security, algorithmic bias leading to discrimination, and information cocooning (Mei, 2024; Yan et al., 2024). As the user-platform relationship becomes increasingly emotional, the relational attributes between a given platform and its different users may differ significantly, and to sustain stable relationships platforms must collect extensive data on users' emotional preferences and private information (Lu et al., 2022). However, legislative frameworks for data protection in several countries with advanced platform technology, such as China and the United States, remain incomplete: there are no uniform norms or standards governing how the interest groups backing algorithmic platforms may protect or utilize such data. Drawing on their media literacy, users have responded with a degree of self-reflexivity, developing concerns about risks to personal information security and about emotional manipulation, commonly referred to as algorithmic anxiety (Cha et al., 2022).
In addition, from the perspective of the overall social environment, the current user-platform relationship can be characterized as a pseudo-intimacy relationship that does not exist in a sealed enclosure or isolated space created solely by algorithmic platforms, AI, and other emotional agents. Instead, it coexists with genuine interpersonal socialization in real society, and together they form a social environment rife with contradictions and tensions for individuals. While the user-platform pseudo-intimacy relationship may enrich an individual's social life and alleviate loneliness to some extent (Yuan et al., 2024), it also affects users' real-life interpersonal relationships. This influence can adversely affect their social skills and attitudes, hindering their understanding of interpersonal emotions and their significance and reducing opportunities to establish more meaningful interactions (Sharkey and Sharkey, 2011; Nyholm and Frank, 2019). The negative impact arises because algorithmic platforms, despite being programmed to understand and react to human emotions, still lack the innate capacity for empathy inherent in human beings (Morgante et al., 2024). Furthermore, the natural divide between humans and computers leads users to perceive algorithmic platforms as a "quasi-other" (Mu and Wu, 2024). In this context, truly reciprocal emotional communication between users and platforms has yet to be realized, and equalizing emotional communication between the two will remain a lengthy and challenging endeavor, hindered by both technical and ethical constraints.
3 Conclusions and future research
In summary, I argue that as emotional AI continues to develop, the user-platform relationship has shifted from traditional HCI to a negotiated pseudo-intimacy between humans and humanlike entities. This is not only an imaginative, anthropomorphic, and social quality that emotional AI has bestowed on HCI, but also an important supplement to humanity's existing interaction paradigms. The emergence and development of pseudo-intimacy relationships partially satisfy human needs for intimacy in modern society; however, owing to the limits of technological development and other factors, they are not entirely beneficial: they raise ethical issues such as privacy and data security and increase contradiction and tension in the human social environment.
Therefore, we should agree that the advancement of emotional AI must focus not only on technological innovation but also on the ethical constraints imposed by existing social norms, such as privacy and security; technological progress that violates ethical norms is never acceptable. Moreover, in the face of algorithms' increasing emotional capabilities, we should abandon the binary thinking of technology versus humanity and rationality versus emotion, and explore the future harmonious coexistence of the humanistic spirit and technological rationality (Peng, 2021).
The discussion in this paper also has limitations. Research that integrates emotional intelligence into algorithms and explores the development of emotional functions within AI systems through methods such as human-computer experiments holds greater practical application value; owing to the constraints of its genre, however, this paper examines these concepts at a theoretical level without engaging in large-scale experimental study. Furthermore, as noted above, the discourse on ethical issues tacitly assumes that we ought to allocate ethical responsibilities to AI technologies integrated with emotional intelligence. In reality, the questions of whether and how to continue refining these technologies, whether and how to assign ethical responsibilities to them, and how humans should respond to the humanlike qualities of these technologies when interacting with them remain open and hotly debated. The answers will need to be synthesized from data, theory, and other explorations by future researchers in computing science, the humanities, the social sciences, and other fields. It is also possible that no definite answer will emerge for a long time, which is itself part of the charm of academic research.
Author contributions
JW: Writing – original draft, Writing – review & editing.
Funding
The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work has been supported by the Outstanding Innovative Talents Cultivation Funded Programs 2023 of Renmin University of China (grant number 22RXW190).
Conflict of interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Bertolini, A., and Arian, S. (2020). Do Robots Care? Towards an Anthropocentric Framework in the Caring of Frail Individuals through Assistive Technologies. Berlin: Walter de Gruyter GmbH.
Bickmore, T. W., and Picard, R. W. (2005). Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput. Hum. Interact. 12, 293–327. doi: 10.1145/1067860.1067867
Bie, J. H. (2023). Platformized digital interactions: affective practices based on the availability of technology. Young Journal. 4, 22–25. doi: 10.15997/j.cnki.qnjz.2023.04.007
Bie, J. H., and Zeng, Y. T. (2024). Algorithmic imagination of platform participation and affective networks: an analysis of users on Xiaohongshu. China Youth Res. 2, 15–23. doi: 10.19633/j.cnki.11-2579/d.2024.0018
Cha, D. L., Jiang, Z. H., and Cao, G. H. (2022). A study of user-perceived algorithmic anxiety and its structural dimensions in information systems. Intell. Sci. 6, 66–73. doi: 10.13833/j.issn.1007-7634.2022.06.009
Chen, S. H., and Tang, L. (2024). Human-machine love: emotional interchain and emotional intelligence coupling of artificial intelligence partners. J. Hainan Univ. 9, 1–9. doi: 10.15886/j.cnki.hnus.202405.0437
Crawford, K. (2021). Time to regulate AI that interprets human emotions. Nature 592:7853. doi: 10.1038/d41586-021-00868-5
Epley, N., Waytz, A., and Cacioppo, J. T. (2007). On seeing human: a three-factor theory of anthropomorphism. Psychol. Rev. 114:864. doi: 10.1037/0033-295X.114.4.864
Gan, L. H., and Wang, H. (2024). From emotional projection to digital emotion: emotional transformation of human-computer interaction in digital landscapes. Mod. Publish. 3, 27–38. Available at: https://kns.cnki.net/kcms2/article/abstract?v=64ENavj7QCDmCc1OEm2hxvh6X0oFChoroZAEOKw16Ydf4Nuh6PvnuaXoedaO7_9n5dh1kWt2tD2u33p4GFjpUaVrOcWpal7QLsmPovVfdlbAKpDm7zdIF21SETl2I9Q5m19sL9JVYZUBVCNu6i2QYQbPrEonkZ3nFAtAkaZeP_7aOJQHx_nxFduevAu-eKQVczw&uniplatform=NZKPT&language=CHS
Giles, D. C. (2002). Parasocial interaction: a review of the literature and a model for future research. Media Psychol. 4, 279–305. doi: 10.1207/S1532785XMEP0403_04
Gossett, S. (2023). Emotion AI: 3 Experts on the Possibilities and Risks. Available at: https://builtin.com/artificial-intelligence/emotion-ai (accessed September 27, 2024).
Greene, G. (2020). The Ethics of AI and Emotional Intelligence. Available at: https://partnershiponai.org/paper/the-ethics-of-ai-and-emotional-intelligence/ (accessed September 27, 2024).
Gremsl, T., and Hödl, E. (2022). Emotional AI: legal and ethical challenges. Inf. Polity 27, 163–174. doi: 10.3233/IP-211529
Hong, J. W., and Huang, Y. (2024). “Making” emotions: the logic of human-computer emotion generation and the dilemma of invisibility. Journal. Univ. 1, 61–121. doi: 10.20050/j.cnki.xwdx.2024.01.008
Huang, R., and Lv, S. B. (2023). ChatGPT: ontology, impact and trends. Contempor. Commun. 2, 33–44. Available at: https://kns.cnki.net/kcms2/article/abstract?v=64ENavj7QCDOIGKAp9OFuZLyR4yvYC43G7dFd2r6wwu6GUBFvd-uUu57d8SGfpBJDSDzCbINfbIIFqu04MVonzb_e-lHoSCyQTIgc9AypbD2ZtoSitaKGAQxil3NrzlT3p8mdMg4UWN9aC1xbVp7ssP5y2NqpalKaug-wVL1KQDKVMTGA3nS-0jjoAGO7lLhcfk&uniplatform=NZKPT&language=CHS
Ke, Z., and Song, X. K. (2021). From “me in the mirror” to “me in the fog”: the aberration and theoretical crisis of social interaction in virtual reality. Journal. Writ. 8, 75–83. doi: 10.20050/j.cnki.xwdx.2023.02.008
Lai, C. Y. (2023). Recursive negotiation: the symbiosis and interaction between users and algorithms on short video platforms: an ethnographic study centered on “influencers”. Journalist 10, 3–15. doi: 10.16057/j.cnki.31-1171/g2.2023.10.002
Lu, Y. Y., Sun, Y. T., Zhang, Y., and Li, X. G. (2022). Impact of artificial intelligence, machine learning, automation and robotics on the information industry—overview and implications of CILIP symposium 2021. Libr. Intell. 66, 143–152. doi: 10.13266/j.issn.0252-3116.2022.19.014
Marcos-Pablos, S., and García-Peñalvo, F. J. (2022). Emotional Intelligence in Robotics: a Scoping Review. Cham: Springer.
Marx, K., and Engels, F. (2009). Collected Works of Marx and Engels, Volume 4. Beijing: People's Publishing House.
McStay, A. (2020). Emotional AI and EdTech: serving the public good? Learn. Media Technol. 45, 270–283. doi: 10.1080/17439884.2020.1686016
Mei, A. (2024). Innovation of algorithmic discrimination governance model under positive ethics. Polit. Law 2, 113–126. doi: 10.15984/j.cnki.1005-9512.2024.02.007
Morgante, E., Susinna, C., Culicetto, L., Quartarone, A., and Lo, B. V. (2024). Is it possible for people to develop a sense of empathy toward humanoid robots and establish meaningful relationships with them? Front. Psychol. 15:1391832. doi: 10.3389/fpsyg.2024.1391832
Mu, Y., and Wu, Y. H. (2024). From quasi-social relation to human-machine relation: a two-dimensional classification model based on authenticity and interactivity. Contempor. Commun. 3, 9–14. Available at: https://kns.cnki.net/kcms2/article/abstract?v=64ENavj7QCBTNAmv8qsiWRsZaefY95bL9wUDOq7Tvt0fUkC7sujULEcQa-N0yxrQ2HqNNyw8k7SqhCkmpZm_SFQDl_BTLArm74g8pr_azVZ6orv7M9tzXFRWMTmSSrt1GEbNKX6tFAMm09O3a3mdKN09m-jpHth_dxNfE80dnBlT7o1w_pgQhqnRw43GF0Vo&uniplatform=NZKPT&language=CHS
Nagy, P., and Neff, G. (2015). Imagined affordance: reconstructing a keyword for communication theory. Soc. Media Soc. 1, 1–9. doi: 10.1177/2056305115603385
Nyholm, S., and Frank, L. E. (2019). It loves me, it loves me not: is it morally problematic to design sex robots that appear to love their owners? Techne Res. Philos. Technol. 23:122110. doi: 10.5840/techne2019122110
Panaite, A. F., and Bogdanffy, L. (2019). Reimagining vision with infinity mirrors. MATEC Web Conf. 290:e01011. doi: 10.1051/matecconf/201929001011
Paul, L. (2017). Replaying the Human Journey: Media Evolution. Chongqing: Southwest Normal University Press.
Peng, L. (2021). Survival, cognition, relationships: how algorithms will change us. Journalism 3, 45–53. doi: 10.15897/j.cnki.cn51-1046/g2.2021.03.002
Peng, L. (2024). “Mirror” and “Other”: an examination of the relationship between intelligent machines and humans. Journal. Univ. 3, 18–118. doi: 10.20050/j.cnki.xwdx.2024.03.001
Periaiya, S., and Nandukrishna, A. T. (2024). What drives user stickiness and satisfaction in OTT video streaming platforms? a mixed-method exploration. Int. J. Hum. Comput. Interact. 40, 2326–2342. doi: 10.1080/10447318.2022.2160224
Reeves, B., and Nass, C. I. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge: Cambridge University Press.
Rubin, A. M., Perse, E. M., and Powell, R. A. (1985). Loneliness, parasocial interaction, and local television news viewing. Hum. Commun. Res. 12, 155–180. doi: 10.1111/j.1468-2958.1985.tb00071.x
Sharkey, A., and Sharkey, N. (2011). Children, the elderly, and interactive robots. IEEE Robot. Automat. Mag. 18:940151. doi: 10.1109/MRA.2010.940151
Swallow (2009). Emotional Culture in Chinese History: An Interdisciplinary Textual Study of Ming and Qing Texts. Beijing: The Commercial Press.
Thomas, H. D., and Julia, K. (2018). Human-Machine Symbiosis. Hangzhou: Zhejiang People's Publishing House.
Tretter, M. (2024). Equipping AI-decision-support-systems with emotional capabilities? Ethical perspectives. Front. Artif. Intell. 7:1398395. doi: 10.3389/frai.2024.1398395
Vagisha, S., and Harendra, K. (2023). Emotional intelligence in the era of artificial intelligence for medical professionals. Int. J. Med. Grad. 2:112. doi: 10.56570/jimgs.v2i2.112
Walther, J. B., Van Der Heide, B., Ramirez, A., Burgoon, J. K., and Peña, J. (2015). Interpersonal and hyperpersonal dimensions of computer-mediated communication. Handb. Psychol. Commun. Technol. 1:22. doi: 10.1002/9781118426456.ch1
Wang, H., Hu, F. Z., Liu, I. T., Gan, L. H., and Liu, T. (2024). Has the digital landscape beautified our lives? (Academic Dialogue). Res. Cult. Art 1, 56–114. Available at: https://kns.cnki.net/kcms2/article/abstract?v=64ENavj7QCBMlpFdshFmLBSZaXN8v3vovBTQphAfVf-EnfMRlTHUerJ3nVonChNvSvmzmM9oiiGe29RJPYhh0u7GxPsHtIzMAZUZHxlDEy3ZR7eguR6l4EJQ-3VPhn3OGjFSwlg-rg-Yfw7W86bJO6OHiWpvUsuFiQ2_5ePl1ufu2D0Got2m0BXM94TJGXIA&uniplatform=NZKPT&language=CHS
Wang, Z. W. (2023). Marx's three metaphors on machines—a study based on the perspective of human-machine relationship. Monthly J. Theory 9:19. doi: 10.14180/j.cnki.1004-0544.2023.09.002
Wu, J. W., Yang, P. C., and Ding, Y. H. (2022). The questioning of technology: an examination of the human-technology relationship in smart news production. Journal. Writ. 10, 29–42. Available at: https://kns.cnki.net/kcms2/article/abstract?v=64ENavj7QCCCyKoUtw7W4wVgMSKHUm5n17Mhzm0_xOf82fri1SP9WR8isOGCNorKMiZJdngUpJo_M2K3aLin4d4LWxTuRajI2seiugCrbxgW8nTgkz-k4glU-S7o-s0wOPBuL8vjzdmNhl81ICDZwVD_DycWRJzLz1bzoFyc_eCkCO9yMur7iF2Zit-HlmlB&uniplatform=NZKPT&language=CHS
Xiao, H. J., and Zhang, L. L. (2024). Theoretical deconstruction and governance innovation of big model ethical misconduct. Res. Fin. Iss. 5, 15–32. doi: 10.19654/j.cnki.cjwtyj.2024.05.002
Xie, J., and Liu, R. L. (2023). ChatGPT: generative artificial intelligence causes human alienation crisis and its rethinking. J. Chongqing Univ. 5, 111–124. Available at: https://kns.cnki.net/kcms2/article/abstract?v=64ENavj7QCBfxrU5noED1Y8e1Q7L3BP_ozGl6n6FWryHTMp-LJIXOELosG08rwmhn7e3JGH41P1_a8wkqKOx-5ZxsfKA0lBoj9g3V5pzDrQHeelJo4-eWk7_QJOgycM0SmrgOCfCOvTp7J5g-t3gFmLiaLj1NvZ3na3d5lN06GZrQ6lUALQnTqMnx5SB8rPf&uniplatform=NZKPT&language=CHS
Yan, W. W., Wang, Y. Y., and Song, J. H. (2024). A study on the impact of user data collection on the willingness to abandon the use of artificial intelligence services - based on information sensitivity and privacy perspectives. Intell. Sci. 5, 1–21. Available at: https://kns.cnki.net/kcms2/article/abstract?v=64ENavj7QCD4p611K7fU3WTCrRqcvYN3tUaVFIdG8jKJhm6z8kFVkn4U3WKXsHDVrq3CT2eUyttUSEvAx6wQopLsAV_xW7uhhkm9NPWb2ThIp4Eh8CZ7xBxkKI9PScb-tTPAI7SxlKVDvhmTAYDLI4huuETb2wy00EvxdpCkbaXbXmpVPKWrQ62FicBKtKWx&uniplatform=NZKPT&language=CHS
Yang, J., and Wu, N. (2024). Social problems of brain-computer interface technology and its countermeasures. Ideol. Theoret. Front. 1, 80–88. doi: 10.13231/j.cnki.jnip.2024.01.009
Yin, L. G., and Liu, Y. L. (2021). Technological empowerment and visible labor in short video platforms—an examination based on the political economy of communication. Fut. Commun. 6, 41–121. doi: 10.13628/j.cnki.zjcmxb.2021.06.010
Yin, Q., and Lin, Y. (2023). “Pigeons that drag the shift”: elastic relations in platform content production labor—an exploratory study based on Beili Beili. Journal. Commun. Res. 12, 69–128. Available at: https://kns.cnki.net/kcms2/article/abstract?v=64ENavj7QCDJnxrwLhTYYsE0ZSBmyvMAm-M9DrJf_laMyHoMDKBKYj_cDZjIWqBYVV2BQ5cREWdOJqr6kZzfxsijpx_YFLGUot6HYy1SYKqlDn98VdBprdnYwY1NSB0SYXXfrwDVQAIVD0PNUtC3XCF1ux1KeUoj79174C2YtuCHLrnDbW63rSKG7NjTQPbO&uniplatform=NZKPT&language=CHS
Yuan, G. F. (2021). Toward a theoretical path to “practice”: understanding emotional expression in public opinion. Journal. Int. 6, 55–72. doi: 10.13495/j.cnki.cjjc.2021.06.004
Yuan, Z., Cheng, X., and Duan, Y. (2024). Impact of media dependence: how emotional interactions between users and chat robots affect human socialization? Front. Psychol. 15:1388860. doi: 10.3389/fpsyg.2024.1388860
Zhang, L. H. (2024). Algorithmic security risks and their resolution strategies in the construction of digital society. J. Northeast Norm. Univ. 2, 134–144. doi: 10.16164/j.cnki.22-1062/c.2024.02.014
Zhao, Y., and Li, M. Q. (2023). Virtual anchor practice and human-computer emotional interaction under the trend of anthropomorphism. Mod. Commun. 1, 110–116. doi: 10.19997/j.cnki.xdcb.2023.01.012
Keywords: emotion, algorithm, user-platform relationship, pseudo-intimacy relationship, paradigm for human interaction
Citation: Wu J (2024) Social and ethical impact of emotional AI advancement: the rise of pseudo-intimacy relationships and challenges in human interactions. Front. Psychol. 15:1410462. doi: 10.3389/fpsyg.2024.1410462
Received: 01 April 2024; Accepted: 21 October 2024;
Published: 05 November 2024.
Edited by:
Simone Belli, Complutense University of Madrid, Spain
Reviewed by:
Meisam Dastani, Gonabad University of Medical Sciences, Iran
Grant Bollmer, University of Maryland, United States
Copyright © 2024 Wu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jie Wu, wujie98@ruc.edu.cn