OPINION article

Front. Psychol., 08 December 2020
Sec. Auditory Cognitive Neuroscience
This article is part of the Research Topic "The Effects of Music on Cognition and Action."

Come on Baby, Light My Fire: Sparking Further Research in Socio-Affective Mechanisms of Music Using Computational Advancements

Ilana Harris1,2* and Mats B. Küssner2

  • 1Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom
  • 2Department of Musicology and Media Studies, Humboldt-Universität zu Berlin, Berlin, Germany

Musical Engagement: Socio-affective Underpinnings

Socio-affective behavior is entangled in our experience of music (Devroop, 2012; Koelsch, 2014; Aucouturier and Canonne, 2017; Saarikallio, 2019). Joint musical engagement, that is, making and listening to music with others, has been found to increase prosocial tendencies (Kirschner and Tomasello, 2010; Rabinowitch et al., 2013; Cirelli et al., 2014), an effect thought to arise from overlapping mechanisms underpinning interactive musical behavior and empathically driven prosocial behaviors (Rabinowitch et al., 2012; Clarke et al., 2015; Saarikallio, 2019). In this paper, we present opportunities for experimental investigation of emotional contagion, a specific subprocess hypothesized to lie at this overlap, and highlight ways to improve understanding of how joint musical engagement may promote prosocial behaviors.

Socio-affective components of joint musical engagement have been postulated following empirical investigation of joint music-making and group music-listening (Egermann et al., 2011; Rabinowitch et al., 2013) and hinge on subprocesses such as affective alignment, in which joint expression of emotion among interlocutors facilitates the transfer of semantic and affective content (Cross, 2005; Bharucha et al., 2012; Rabinowitch et al., 2012; Vesper et al., 2017). Affective alignment may thus contribute to higher-level processes of musical interaction such as shared intentionality: by upregulating constituent socio-affective behaviors (e.g., other-directed behaviors) that help individuals ascertain their interlocutors' internal states and align their own behavior accordingly (Cross et al., 2012), it helps ensure that members are working toward a common musical goal in real time and have "coordinated action roles for pursuing that shared goal" (Tomasello et al., 2005, p. 680). Joint music-making's positive influence on socio-affective behaviors in non-musical contexts suggests that psychosocial processes underpinning musical interaction may overlap with those involved in non-musical interaction, and that co-activation of these overlapping structures may produce prosocial transfer effects (Kirschner and Tomasello, 2010; Cross et al., 2012; Saarikallio, 2019).

Scientific inquiry into the effects of musical engagement on prosociality has grown in recent years; in particular, musical engagement's influence on prosocial behaviors underpinned by empathy has gained considerable traction in music psychology and related fields (King and Waddington, 2017; Davis, 2018; Riess et al., 2018). Empathy may be defined as "the ability to produce emotional and experiential responses to the situations of others that approximate their responses and experiences" (Rabinowitch et al., 2013, p. 485) and is a core component of social cognition comprising both slow (controlled) and fast (automatic) psychological subprocesses (i.e., dual process theory; Lieberman, 2007; Batson, 2009) that ultimately "constitute a causal force in motivating prosociality towards other conspecifics" (Decety et al., 2016, p. 371). Slow processes are "evaluative," requiring top-down cognitive assessment, whereas fast processes involve immediate, "automatic detection" of social signals; separate neural representations for fast and slow processing of social information have been proposed accordingly (i.e., the "mirror neuron system" and "mentalizing system"; Vogeley, 2017). Emotional contagion, a subprocess of empathy, is defined as automatic mimicry of another's behavioral cues associated with a particular affective state; it is thought to foster survival by improving recognition of and communication between conspecifics, and it underpins the capacity to build and maintain human attachment bonds (de Vignemont and Singer, 2006; Feldman, 2017; Prochazkova and Kret, 2017). Although theoretical study of emotional contagion in music has begun, experimental work causally explaining how automatic detection of socio-affective signals influences our experience of music is lacking (Miu and Vuoskoski, 2017). Investigating emotional contagion during musical interaction is critical to understanding how relationships between co-performers may resemble other types of social relationships (e.g., through attachment bonds) and, consequently, how joint musical engagement may lead to upregulated other-directed behaviors such as those that arise within a particular social relationship.

Experimental investigation of emotional contagion is practically difficult because it requires simultaneous measurement of interlocutors' complex emotion states in interactionist paradigms; the matter is further complicated in the context of music, where substantial ecological validity is needed to elicit the behaviors of interest (e.g., empathy-promoting musical components; Rabinowitch et al., 2013). In the following two sections, we introduce research from related fields that incorporates computational techniques for measuring behavioral and physiological correlates of emotional contagion; we situate these techniques in the context of music psychology and suggest ways in which they may be incorporated into existing experimental paradigms to triangulate investigation of socio-affective processes using behavioral measures, physiological measures, and social signal processing, as has been done across numerous subfields of psychology (Pantic and Vinciarelli, 2015; Azevedo et al., 2016; Sutherland et al., 2017; de Barbaro, 2019; Oswald et al., 2020).

The Frontier: Behavioral Cues

The following section outlines possibilities for investigating socio-affective components in music using computational methods for behavior recognition drawn from research in computer science and the behavioral sciences. First, facial expressions are an important behavioral cue for emotional expression in music (Thompson et al., 2008, 2010; Livingstone et al., 2009; Waddell and Williamon, 2017). Approximately 95% of the automatic emotion recognition literature relies on facial cues, and these techniques are now applicable to an expanding number of datasets (Noroozi et al., 2018). Computational emotion recognition using facial cues has been incorporated into the study of empathic behavior in group settings. For instance, Kumano et al. (2011, 2014, 2015) conducted a series of experiments to test whether empathic interactions could be predicted from facial data in video recordings of four-person meetings. Their naive Bayes network model predicted empathy states from facial expression information over time, and its performance improved when features such as the reaction time of mirrored expressions between interlocutors and head-gesture annotations were added. Scientific study of music has not yet incorporated computational determination of joint emotional expression epochs from facial cues; this is likely to be a fruitful area of inquiry, involving relay of complex affective information at the intersection of individually, socially, and musically driven systems.
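
To illustrate the general approach, the following minimal Python sketch predicts a binary "empathic interaction" label from the co-occurring facial expression categories of two interlocutors using a naive Bayes classifier. This is not a reproduction of Kumano et al.'s model: the data are synthetic, and the category coding, mirroring heuristic, and variable names are assumptions for demonstration purposes only.

```python
# A minimal sketch (not Kumano et al.'s model): predict a binary
# "empathic interaction" label from the co-occurring facial expression
# categories of two interlocutors with a naive Bayes classifier.
# All data below are synthetic and the labels are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB

rng = np.random.default_rng(0)
n = 1000

# Facial expression category per person per frame: 0=neutral, 1=smile, 2=frown
face_a = rng.integers(0, 3, n)
face_b = rng.integers(0, 3, n)

# Hypothetical ground truth: mirrored non-neutral expressions tend to be
# rated "empathic" (1), plus 10% label noise.
empathy = ((face_a == face_b) & (face_a != 0)).astype(int)
flip = rng.random(n) < 0.1
empathy[flip] = 1 - empathy[flip]

X = np.column_stack([face_a, face_b])
X_tr, X_te, y_tr, y_te = train_test_split(X, empathy, random_state=0)

model = CategoricalNB().fit(X_tr, y_tr)
print(f"Held-out accuracy: {model.score(X_te, y_te):.2f}")
```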

Although the literature examining communication of emotion through bodily rather than facial cues in non-musical contexts is sparse, several studies have found that judgments of emotion state are modulated by body posture and movement (Aviezer et al., 2008a,b; Martinez et al., 2016). In musical settings, visual content, often in the form of body movement, plays a critical role in conveying affective information (Vines et al., 2011; Vuoskoski et al., 2014, 2016). Computational analysis of body position and gesture in motion capture experiments has yielded important findings with respect to joint emotional expression and audience-perceived emotion (Burger et al., 2013; Chang et al., 2019). Still, analyses of body postures and gestures across various cultural and developmental contexts, and further determination of the indices that convey socio-affective information, are needed to better understand their role in both musical interaction and potential transfer to non-musical interaction.
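
A simple starting point for such analyses, shown below as a Python sketch with simulated signals, is to compute windowed correlations between two performers' sway trajectories; note that this is an illustrative simplification rather than the Granger-causality analysis used by Chang et al. (2019).

```python
# A minimal sketch: locate epochs of coupled body sway between two
# performers via windowed correlation of motion-capture signals.
# Synthetic data; an illustrative simplification of published approaches.
import numpy as np

rng = np.random.default_rng(0)
fs = 100                                  # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)              # 60 s of data

# Two sway signals sharing a slow common component plus individual noise
common = np.sin(2 * np.pi * 0.25 * t)
sway_a = common + 0.5 * rng.standard_normal(t.size)
sway_b = 0.8 * common + 0.5 * rng.standard_normal(t.size)

win = 5 * fs                              # 5-s analysis windows
corrs = [np.corrcoef(sway_a[i:i + win], sway_b[i:i + win])[0, 1]
         for i in range(0, t.size - win + 1, win)]

# Windows with high correlation are candidate joint-expression epochs
print(np.round(corrs, 2))
```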

Following research on prosocial behaviors as a consequence of joint music-making, similar outcomes of music listening have begun to be studied (Ruth and Schramm, 2020). Continuous self-report of emotion by audience members in live concert settings is a promising experimental tool (Egermann, 2019). These measures collect rating data simultaneously with and continuously throughout the presentation of the stimuli and may achieve the temporal specificity needed to identify instances of affective alignment between participants. Moreover, continuous measurement of self-reported affect supports various rating interfaces, including linear potentiometers (Vines et al., 2011; Baytaş et al., 2016), binary trigger buttons (Baytaş et al., 2019), and four-quadrant valence-arousal joysticks (Sharma et al., 2020; or their digital analog in Egermann, 2019). Furthermore, such interfaces may be attached to the participant (i.e., wearables), making implementation in paradigms involving movement possible. In addition, top-down cameras can provide useful visual records of crowd behavior, as demonstrated in analyses of pedestrian movement (Xu et al., 2020); with respect to non-coordinated movement of audiences, this line of research within computer science could nicely complement existing methods in motion tracking (e.g., analysis of head movement in Swarbrick et al., 2019).
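
The following Python sketch illustrates one way such continuous rating streams might be handled: two simulated valence traces, logged at irregular device-specific rates, are resampled onto a common time base and compared with a simple alignment index. The sampling behavior and the correlation-based index are our assumptions, not features of any particular published interface.

```python
# A minimal sketch: resample two audience members' continuous valence
# ratings (logged at irregular, device-specific rates) onto a shared 1-Hz
# time base and compute a simple pairwise affective-alignment index.
# The rating streams here are simulated.
import numpy as np

rng = np.random.default_rng(0)
duration = 180.0                                   # seconds of music

def simulate_stream(n_samples):
    """Timestamps plus valence in [-1, 1] tracking a slow shared trend."""
    t = np.sort(rng.uniform(0, duration, n_samples))
    v = np.sin(2 * np.pi * t / 60) + 0.3 * rng.standard_normal(n_samples)
    return t, v

t1, v1 = simulate_stream(400)
t2, v2 = simulate_stream(350)

# Linear interpolation onto a common 1-Hz grid
grid = np.arange(0.0, duration, 1.0)
r1 = np.interp(grid, t1, v1)
r2 = np.interp(grid, t2, v2)

print(f"Pairwise alignment (Pearson r): {np.corrcoef(r1, r2)[0, 1]:.2f}")
```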

The Future: Emerging Data Sources and Analyses

Music serves a number of social functions in everyday listening (Sloboda and O'Neill, 2001). Recently, social surrogacy has been proposed as a further reason for music listening, extrapolating from online listener behavior to internal processes (Schäfer and Eerola, 2020). Greenberg and Rentfrow (2017) list a number of avenues by which social media and streaming data can be used within music psychology; implementing several of these analyses in tandem could be well-suited to studying socio-affective behavior. For example, combining song-specific emotion data from Spotify's APIs, listeners' comments on social media, and self-report data gathered from online surveys could help determine individual differences in socio-emotional components of music listening. In addition, experience sampling methods (ESMs) have become increasingly popular for administering repeated surveys of everyday musical experience (e.g., Juslin et al., 2008; de Barbaro, 2019), with more recent ESM interfaces allowing for user mobility and nuanced user input (e.g., Randall and Rickard, 2017). Housing state-of-the-art digital self-assessment scales for emotion within existing ESMs could help bolster outcome evaluations and uncover relationships between socio-affective components of everyday musical behaviors (Betella and Verschure, 2016; Juslin, 2016).
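
As an illustration of the first step of such a pipeline, the sketch below retrieves Spotify's track-level "valence," "energy," and "tempo" audio features, which could then be joined with survey responses; it assumes a valid OAuth access token in the SPOTIFY_TOKEN environment variable, and the track ID shown is a placeholder.

```python
# A minimal sketch: retrieve song-level affect estimates (Spotify's
# "valence", "energy", and "tempo" audio features) for tracks named in a
# survey, ready to be joined with self-report data. Requires a valid OAuth
# access token; the track ID below is a placeholder.
import os
import requests

TOKEN = os.environ["SPOTIFY_TOKEN"]       # obtained via Spotify's OAuth flow
track_ids = ["3n3Ppam7vgaVa1iaRUc9Lp"]    # hypothetical example ID

resp = requests.get(
    "https://api.spotify.com/v1/audio-features",
    params={"ids": ",".join(track_ids)},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

for feat in resp.json()["audio_features"]:
    if feat:                              # entries are None for unknown IDs
        print(feat["id"], feat["valence"], feat["energy"], feat["tempo"])
```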

Detecting emotion from acoustic properties of music has been extensively researched; for instance, tempo and mode tend to be good indicators of perceived emotion (Eerola et al., 2013). However, computational methods for music emotion recognition have tended to favor certain features over others (e.g., timbre features accounting for over 60%; Yang et al., 2019). Recently, researchers have begun to develop software packages for emotion recognition in music that include fine-grained features such as specific textural shifts and articulations (Panda et al., 2018). Such software could provide important contextual affective information in existing joint music-making paradigms. In addition, natural language processing (NLP) of song lyrics is a burgeoning area of research in music psychology (Anglada-Tort et al., 2019). NLP could be useful for identifying empathic tendencies in group songwriting, a prevalent music therapy intervention (e.g., using language style synchrony as a proxy for empathy, as in Lord et al., 2015).
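
The sketch below illustrates a bare-bones version of language style matching as an empathy proxy, in the spirit of Lord et al. (2015); the function-word categories here are small illustrative stand-ins for the full LIWC dictionaries used in that work, and the lyric fragments are invented.

```python
# A minimal sketch of language style matching (LSM), a synchrony metric
# related to empathy, applied to two lyric contributions in a hypothetical
# group songwriting session. The function-word sets are tiny stand-ins
# for full LIWC categories.
import re

CATEGORIES = {
    "pronouns":     {"i", "you", "we", "they", "it", "me", "us"},
    "articles":     {"a", "an", "the"},
    "conjunctions": {"and", "but", "or", "so"},
    "negations":    {"no", "not", "never"},
}

def category_rates(text):
    """Proportion of words falling in each function-word category."""
    words = re.findall(r"[a-z']+", text.lower())
    return {c: sum(w in ws for w in words) / max(len(words), 1)
            for c, ws in CATEGORIES.items()}

def lsm(text_a, text_b):
    """Mean per-category matching score; 1 = identical style profiles."""
    ra, rb = category_rates(text_a), category_rates(text_b)
    scores = [1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + 1e-4)
              for c in CATEGORIES]
    return sum(scores) / len(scores)

verse_a = "I never thought the road would bend, but you and I walk on"
verse_b = "We never knew the night would end, and so we carry on"
print(f"LSM: {lsm(verse_a, verse_b):.2f}")
```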

Lastly, behavioral tasks are needed that can robustly quantify socio-emotional components in various populations after engagement in social music activities. Several studies to date have developed novel tasks or adapted tasks from other disciplines to this end (Rabinowitch et al., 2013; Reddish et al., 2014; Brown, 2017). In social neuroscience, researchers have investigated mechanisms underlying evolutionarily advantageous socio-affective behavior through experimental paradigms targeting social modulation of the threat response (DeVries et al., 2003; Coan et al., 2006). Automated stress recognition via analysis of multimodal physiological and motion data has begun to show potential for validated use in social science research (Hovsepian et al., 2015). Further, higher-order pattern detection in heart rate variability (HRV) has been used to predict interpersonal affective alignment at levels above chance (McCraty, 2017). In the near future, such methods could be incorporated into behavioral cooperation tasks following joint music-making paradigms in order to assess transfer effects on socio-affective processing in non-musical contexts.
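
By way of illustration, the following sketch computes a standard time-domain HRV index (RMSSD) over sliding windows for two participants and correlates the resulting series as a crude dyadic alignment measure; the data are synthetic, and this simple coupling index is our assumption, far simpler than the pattern analyses McCraty (2017) describes.

```python
# A minimal sketch: compute RMSSD, a standard time-domain HRV index, over
# sliding windows of inter-beat intervals (IBIs) for two participants, then
# correlate the two HRV series as a crude dyadic alignment measure.
# IBIs are synthetic; real pipelines require artifact correction.
import numpy as np

def rmssd(ibi_ms):
    """Root mean square of successive IBI differences (ms)."""
    return np.sqrt(np.mean(np.diff(ibi_ms) ** 2))

rng = np.random.default_rng(1)
ibi_a = 800 + 50 * rng.standard_normal(600)   # ~600 beats per person, in ms
ibi_b = 820 + 50 * rng.standard_normal(600)

win, step = 60, 30                            # window and hop, in beats
hrv_a = [rmssd(ibi_a[i:i + win]) for i in range(0, ibi_a.size - win, step)]
hrv_b = [rmssd(ibi_b[i:i + win]) for i in range(0, ibi_b.size - win, step)]

print(f"Dyadic HRV coupling (r): {np.corrcoef(hrv_a, hrv_b)[0, 1]:.2f}")
```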

Concluding Thoughts

Several precautions should be taken when incorporating emotion detection techniques into the scientific study of music. A general framework for the ethical (e.g., intrinsic biases due to demographically limited training data) and practical (e.g., overfitted algorithms) considerations of using computational techniques in social science research is provided in a review by Martinez (2019). Concerns specific to music psychology include the following. First, non-verbal displays of emotion in musical settings may present differently than in idealized non-musical settings (e.g., differing behavioral cues for real vs. acted emotions, as in Wilting et al., 2006; overlapping basic emotions, as in Juslin et al., 2011; Juslin, 2013; Akkermans et al., 2019); an affective taxonomy appropriate for the given research question should therefore be carefully determined and cross-checked at each round of algorithmic fine-tuning. Furthermore, it is likely advantageous to limit stimuli to a particular genre or song in order to keep behavioral cue utilization consistent among both performers and listeners (Juslin, 2000) and to reduce computational complexity (Lange and Frieler, 2018).

This article has summarized recent developments in music psychology and related fields that may be applied to detecting emotional contagion in music. We have discussed how this research may be incorporated into existing experimental paradigms in the scientific study of music. We hope to encourage further findings on the means by which various forms of musical engagement can yield prosocial benefits for a broader population.

Author Contributions

IH conducted literature reviews to gather the necessary evidence and generated initial drafts. MBK assisted in developing the argument and expanded the source material. Both authors approved the final version of the manuscript.

Funding

This research was supported by the Konrad Adenauer Stiftung for Postgraduate Research at Humboldt-Universität zu Berlin, and the Herchel Smith Fellowship for Postgraduate Research at Cambridge.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors would like to acknowledge the Konrad Adenauer Stiftung for the financial and administrative support provided throughout the conception of this article. In addition, the first author would like to thank the Institut für Musikwissenschaft und Medienwissenschaft, in particular Professor Sebastian Klotz and the members of the Transkulturelle Musikwissenschaft Kolloquium, as well as members of the Cognitive Neurobiology Current Topics Seminar, particularly Dr. Vladislav Nachev, Dr. Katharina Stumpenhorst, and Professor York Winter, for providing crucial feedback during the development of this research. Lastly, the first author would like to thank her current mentor, Professor Ian Cross, for greatly helping to consolidate this research through critical feedback and discussion.

References

Akkermans, J., Schapiro, R., Müllensiefen, D., Jakubowski, K., Shanahan, D., Baker, D., et al. (2019). Decoding emotions in expressive music performances: a multi-lab replication and extension study. Cogn. Emot. 33, 1099–1118. doi: 10.1080/02699931.2018.1541312

Anglada-Tort, M., Krause, A. E., and North, A. C. (2019). Popular music lyrics and musicians' gender over time: a computational approach. Psychol. Music. doi: 10.1177/0305735619871602

Aucouturier, J.-J., and Canonne, C. (2017). Musical friends and foes: the social cognition of affiliation and control in improvised interactions. Cognition 161, 94–108. doi: 10.1016/j.cognition.2017.01.019

Aviezer, H., Hassin, R. R., Bentin, S., and Trope, Y. (2008a). “Putting facial expressions back in context,” in First Impressions, eds N. Ambady, and J. J. Skowronski (New York, NY: Guilford Publications), 255–286.

Aviezer, H., Hassin, R. R., Ryan, J., Grady, C., Susskind, J., Anderson, A., et al. (2008b). Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychol. Sci. 19, 724–732. doi: 10.1111/j.1467-9280.2008.02148.x

Azevedo, R., Taub, M., Mudrick, N., Farnsworth, J., and Martin, S. A. (2016). “Interdisciplinary research methods used to investigate emotions with advanced learning technologies,” in Methodological Advances in Research on Emotion and Education, eds M. Zembylas, and P. A. Schutz (Cham: Springer International Publishing), 231–243. doi: 10.1007/978-3-319-29049-2_18

Batson, C. D. (2009). These Things Called Empathy: Eight Related but Distinct Phenomena. The MIT Press. Available online at: https://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262012973.001.0001/upso-9780262012973-chapter-2 (accessed April 5, 2020).

Baytaş, M. A., Göksun, T., and Özcan, O. (2016). The perception of live-sequenced electronic music via hearing and sight. Proc. Int. Conf. New Interfaces Music. Expr. 16, 194–199. doi: 10.5281/zenodo.1175978

Baytaş, M. A., Obaid, M., La Delfa, J., Yantaç, A. E., and Fjeld, M. (2019). “Integrated apparatus for empirical studies with embodied autonomous social drones,” in 1st International Workshop on Human-Drone Interaction (Glasgow: Ecole Nationale de l'Aviation Civile [ENAC]). Available online at: https://hal.archives-ouvertes.fr/hal-02128387 (accessed February 19, 2020).

Betella, A., and Verschure, P. F. M. J. (2016). The affective slider: a digital self-assessment scale for the measurement of human emotions. PLoS ONE 11:e0148037. doi: 10.1371/journal.pone.0148037

Bharucha, J., Curtis, M., and Paroo, K. (2012). “Musical communication as alignment of brain states,” in Language and Music as Cognitive Systems (New York, NY: Oxford University Press), 139–155. doi: 10.1093/acprof:oso/9780199553426.003.0016

Brown, L. S. (2017). The influence of music on facial emotion recognition in children with autism spectrum disorder and neurotypical children. J. Music Ther. 54, 55–79. doi: 10.1093/jmt/thw017

Burger, B., Saarikallio, S., Luck, G., Thompson, M. R., and Toiviainen, P. (2013). Relationships between perceived emotions in music and music-induced movement. Music Percept. Interdiscip. J. 30, 517–533. doi: 10.1525/mp.2013.30.5.517

Chang, A., Kragness, H. E., Livingstone, S. R., Bosnyak, D. J., and Trainor, L. J. (2019). Body sway reflects joint emotional expression in music ensemble performance. Sci. Rep. 9:205. doi: 10.1038/s41598-018-36358-4

Cirelli, L. K., Einarson, K. M., and Trainor, L. J. (2014). Interpersonal synchrony increases prosocial behavior in infants. Dev. Sci. 17, 1003–1011. doi: 10.1111/desc.12193

Clarke, E., DeNora, T., and Vuoskoski, J. (2015). Music, empathy and cultural understanding. Phys. Life Rev. 15, 61–88. doi: 10.1016/j.plrev.2015.09.001

Coan, J. A., Schaefer, H. S., and Davidson, R. J. (2006). Lending a hand: social regulation of the neural response to threat. Psychol. Sci. 17, 1032–1039. doi: 10.1111/j.1467-9280.2006.01832.x

Cross, I. (2005). “Music and meaning, ambiguity and evolution,” in Musical Communication, eds D. Miell, R. MacDonald, and D. Hargreaves (Oxford, UK: Oxford University Press), 27–43.

Cross, I., Laurence, F., and Rabinowitch, T.-C. (2012). “Empathy and creativity in group musical practices: towards a concept of empathic creativity,” in The Oxford Handbook of Music Education, Vol. 2, eds G. E. McPherson, and G. F. Welch (New York, NY: Oxford University Press), 336–353. doi: 10.1093/oxfordhb/9780199928019.013.0023

Davis, M. H. (2018). “Chapter 1: history and definitions,” in Empathy: A Social Psychological Approach, eds G. E. McPherson, and G. F. Welch (New York, NY; Oxon: Routledge). doi: 10.4324/9780429493898-1

de Barbaro, K. (2019). Automated sensing of daily activity: a new lens into development. Dev. Psychobiol. 61, 444–464. doi: 10.1002/dev.21831

de Vignemont, F., and Singer, T. (2006). The empathic brain: how, when and why? Trends Cogn. Sci. 10, 435–441. doi: 10.1016/j.tics.2006.08.008

Decety, J., Bartal, I. B.-A., Uzefovsky, F., and Knafo-Noam, A. (2016). Empathy as a driver of prosocial behaviour: highly conserved neurobehavioural mechanisms across species. Philos. Trans. R. Soc. B Biol. Sci. 371. doi: 10.1098/rstb.2015.0077

DeVries, A. C., Glasper, E. R., and Detillion, C. E. (2003). Social modulation of stress responses. Physiol. Behav. 79, 399–407. doi: 10.1016/S0031-9384(03)00152-5

Devroop, K. (2012). The social-emotional impact of instrumental music performance on economically disadvantaged South African students. Music Educ. Res. 14, 407–416. doi: 10.1080/14613808.2012.685456

Eerola, T., Friberg, A., and Bresin, R. (2013). Emotional expression in music: contribution, linearity, and additivity of primary musical cues. Front. Psychol. 4:487. doi: 10.3389/fpsyg.2013.00487

Egermann, H. (2019). Aesthetic Judgement and Emotional Processing in Contemporary Music (Cambridge).

Egermann, H., Sutherland, M. E., Grewe, O., Nagel, F., Kopiez, R., and Altenmüller, E. (2011). Does music listening in a social context alter experience? A physiological and psychological perspective on emotion. Music. Sci. 15, 307–323. doi: 10.1177/1029864911399497

Feldman, R. (2017). The neurobiology of human attachments. Trends Cogn. Sci. 21, 80–99. doi: 10.1016/j.tics.2016.11.007

Greenberg, D. M., and Rentfrow, P. J. (2017). Music and big data: a new frontier. Curr. Opin. Behav. Sci. 18, 50–56. doi: 10.1016/j.cobeha.2017.07.007

Hovsepian, K., al'Absi, M., Ertin, E., Kamarck, T., Nakajima, M., and Kumar, S. (2015). cStress: towards a gold standard for continuous stress assessment in the mobile environment. Proc. ACM Int. Conf. Ubiquitous Comput. 2015, 493–504. doi: 10.1145/2750858.2807526

Juslin, P. N. (2000). Cue utilization in communication of emotion in music performance: relating performance to perception. J. Exp. Psychol. Hum. Percept. Perform. 26, 1797–1812. doi: 10.1037/0096-1523.26.6.1797

Juslin, P. N. (2013). What does music express? Basic emotions and beyond. Front. Psychol. 4:596. doi: 10.3389/fpsyg.2013.00596

Juslin, P. N. (2016). “Emotional reactions to music,” in The Oxford Handbook of Music Psychology, eds I. Cross, S. Hallam, and M. Thaut (Oxford: Oxford University Press), 197–214. doi: 10.1093/oxfordhb/9780198722946.013.17

Juslin, P. N., Liljeström, S., Laukka, P., Västfjäll, D., and Lundqvist, L.-O. (2011). Emotional reactions to music in a nationally representative sample of Swedish adults: prevalence and causal influences. Music Sci. 15, 174–207. doi: 10.1177/1029864911401169

Juslin, P. N., Liljeström, S., Västfjäll, D., Barradas, G., and Silva, A. (2008). An experience sampling study of emotional reactions to music: listener, music, and situation. Emotion 8, 668–683. doi: 10.1037/a0013505

King, E., and Waddington, C. (Eds.). (2017). Music and Empathy. Abingdon, Oxon; New York, NY: Routledge. doi: 10.4324/9781315596587

Kirschner, S., and Tomasello, M. (2010). Joint music making promotes prosocial behavior in 4-year-old children. Evol. Hum. Behav. 31, 354–364. doi: 10.1016/j.evolhumbehav.2010.04.004

Koelsch, S. (2014). Brain correlates of music-evoked emotions. Nat. Rev. Neurosci. 15, 170–180. doi: 10.1038/nrn3666

Kumano, S., Otsuka, K., Matsuda, M., and Yamato, J. (2014). Analyzing perceived empathy based on reaction time in behavioral mimicry. IEICE Trans. Inf. Syst. E97.D, 2008–2020. doi: 10.1587/transinf.E97.D.2008

Kumano, S., Otsuka, K., Mikami, D., Matsuda, M., and Yamato, J. (2015). Analyzing interpersonal empathy via collective impressions. IEEE Trans. Affect. Comput. 6, 324–336. doi: 10.1109/TAFFC.2015.2417561

Kumano, S., Otsuka, K., Mikami, D., and Yamato, J. (2011). Analyzing empathetic interactions based on the probabilistic modeling of the co-occurrence patterns of facial expressions in group meetings. Face Gesture 2011, 43–50. doi: 10.1109/FG.2011.5771440

Lange, E. B., and Frieler, K. (2018). Challenges and opportunities of predicting musical emotions with perceptual and automatized features. Music Percept. Interdiscip. J. 36, 217–242. doi: 10.1525/mp.2018.36.2.217

Lieberman, M. D. (2007). Social cognitive neuroscience: a review of core processes. Annu. Rev. Psychol. 58, 259–289. doi: 10.1146/annurev.psych.58.110405.085654

Livingstone, S. R., Thompson, W. F., and Russo, F. A. (2009). Facial expressions and emotional singing: a study of perception and production with motion capture and electromyography. Music Percept. Interdiscip. J. 26, 475–488. doi: 10.1525/mp.2009.26.5.475

Lord, S. P., Sheng, E., Imel, Z. E., Baer, J., and Atkins, D. C. (2015). More than reflections: empathy in motivational interviewing includes language style synchrony between therapist and client. Behav. Ther. 46, 296–303. doi: 10.1016/j.beth.2014.11.002

Martinez, A. M. (2019). The promises and perils of automated facial action coding in studying children's emotions. Dev. Psychol. 55, 1965–1981. doi: 10.1037/dev0000728

Martinez, L., Falvello, V. B., Aviezer, H., and Todorov, A. (2016). Contributions of facial expressions and body language to the rapid perception of dynamic emotions. Cogn. Emot. 30, 939–952. doi: 10.1080/02699931.2015.1035229

McCraty, R. (2017). New frontiers in heart rate variability and social coherence research: techniques, technologies, and implications for improving group dynamics and outcomes. Front. Public Health 5:267. doi: 10.3389/fpubh.2017.00267

Miu, A., and Vuoskoski, J. (2017). “The social side of music listening: empathy and contagion in music-induced emotions,” in Music and Empathy, eds E. King and C. Waddington (New York, NY: Routledge), 124–138. doi: 10.4324/9781315596587-6

Noroozi, F., Corneanu, C., Kamińska, D., Sapiński, T., Escalera, S., and Anbarjafari, G. (2018). Survey on emotional body gesture recognition. IEEE Trans. Affect. Comput. 9:1. doi: 10.1109/TAFFC.2018.2874986

Oswald, F. L., Behrend, T. S., Putka, D. J., and Sinar, E. (2020). Big data in industrial-organizational psychology and human resource management: forward progress for organizational research and practice. Annu. Rev. Organ. Psychol. Organ. Behav. 7, 505–533. doi: 10.1146/annurev-orgpsych-032117-104553

Panda, R., Malheiro, R. M., and Paiva, R. P. (2018). Novel audio features for music emotion recognition. IEEE Trans. Affect. Comput. 9:1. doi: 10.1109/TAFFC.2018.2820691

Pantic, M., and Vinciarelli, A. (2015). “Social signal processing,” in The Oxford Handbook of Affective Computing, eds R. Calvo, S. D'Mello, J. Gratch, and A. Kappas (Oxford: Oxford University Press).

Prochazkova, E., and Kret, M. E. (2017). Connecting minds and sharing emotions through mimicry: a neurocognitive model of emotional contagion. Neurosci. Biobehav. Rev. 80, 99–114. doi: 10.1016/j.neubiorev.2017.05.013

Rabinowitch, T.-C., Cross, I., and Burnard, P. (2012). “Musical group interaction, intersubjectivity and merged subjectivity,” in Kinesthetic Empathy in Creative and Cultural Practices, eds D. Reynolds and M. Reason (Bristol; Chicago, IL: Intellect Books), 109–120.

Rabinowitch, T.-C., Cross, I., and Burnard, P. (2013). Long-term musical group interaction has a positive influence on empathy in children. Psychol. Music 41, 484–498. doi: 10.1177/0305735612440609

Randall, W. M., and Rickard, N. S. (2017). Personal music listening: a model of emotional outcomes developed through mobile experience sampling. Music Percept. Interdiscip. J. 34, 501–514. doi: 10.1525/mp.2017.34.5.501

Reddish, P., Bulbulia, J., and Fischer, R. (2014). Does synchrony promote generalized prosociality? Relig. Brain Behav. 4, 3–19. doi: 10.1080/2153599X.2013.764545

Riess, H., Neporent, L., and Alda, A. (2018). “Shared mind intelligence,” in The Empathy Effect: Seven Neuroscience-Based Keys for Transforming the Way We Live, Love, Work, and Connect Across Differences (Boulder, CO: Sounds True).

Ruth, N., and Schramm, H. (2020). Effects of prosocial lyrics and musical production elements on emotions, thoughts and behavior. Psychol. Music. doi: 10.1177/0305735620902534

Saarikallio, S. (2019). Access-awareness-agency (AAA) model of music-based social-emotional competence (MuSEC). Music Sci. 2:2059204318815421. doi: 10.1177/2059204318815421

Schäfer, K., and Eerola, T. (2020). How listening to music and engagement with other media provide a sense of belonging: an exploratory study of social surrogacy. Psychol. Music 48, 232–251. doi: 10.1177/0305735618795036

Sharma, K., Castellini, C., Stulp, F., and van den Broek, E. L. (2020). Continuous, real-time emotion annotation: a novel joystick-based analysis framework. IEEE Trans. Affect. Comput. 11, 78–84. doi: 10.1109/TAFFC.2017.2772882

Sloboda, J. A., and O'Neill, S. A. (2001). “Emotions in everyday listening to music,” in Music and Emotion: THEORY and Research. Series in Affective Science, eds P. N. Juslin, and J. A. Sloboda (New York, NY: Oxford University Press), 415–429.

Sutherland, C. A. M., Rhodes, G., and Young, A. W. (2017). Facial image manipulation: a tool for investigating social perception. Soc. Psychol. Personal. Sci. 8, 538–551. doi: 10.1177/1948550617697176

Swarbrick, D., Bosnyak, D., Livingstone, S., Bansal, J., Marsh-Rollo, S., Woolhouse, M., et al. (2019). How live music moves us: head movement differences in audiences to live versus recorded music. Front. Psychol. 9:2682. doi: 10.3389/fpsyg.2018.02682

Thompson, W. F., Russo, F. A., and Livingstone, S. R. (2010). Facial expressions of singers influence perceived pitch relations. Psychon. Bull. Rev. 17, 317–322. doi: 10.3758/PBR.17.3.317

Thompson, W. F., Russo, F. A., and Quinto, L. (2008). Audio-visual integration of emotional cues in song. Cogn. Emot. 22, 1457–1470. doi: 10.1080/02699930701813974

Tomasello, M., Carpenter, M., Call, J., Behne, T., and Moll, H. (2005). Understanding and sharing intentions: the origins of cultural cognition. Behav. Brain Sci. 28, 675–691. doi: 10.1017/S0140525X05000129

Vesper, C., Abramova, E., Bütepage, J., Ciardo, F., Crossey, B., Effenberg, A., et al. (2017). Joint action: mental representations, shared information and general mechanisms for coordinating with others. Front. Psychol. 7:2039. doi: 10.3389/fpsyg.2016.02039

Vines, B., Krumhansl, C., Wanderley, M., Dalca, I., and Levitin, D. (2011). Music to my eyes: cross-modal interactions in the perception of emotions in musical performance. Cognition 118, 157–170. doi: 10.1016/j.cognition.2010.11.010

Vogeley, K. (2017). Two social brains: neural mechanisms of intersubjectivity. Philos. Trans. R. Soc. B Biol. Sci. 372:20160245. doi: 10.1098/rstb.2016.0245

Vuoskoski, J. K., Gatti, E., Spence, C., and Clarke, E. F. (2016). Do visual cues intensify the emotional responses evoked by musical performance? A psychophysiological investigation. Psychomusicol. Music Mind Brain 26, 179–188. doi: 10.1037/pmu0000142

Vuoskoski, J. K., Thompson, M. R., Clarke, E. F., and Spence, C. (2014). Crossmodal interactions in the perception of expressivity in musical performance. Atten. Percept. Psychophys. 76, 591–604. doi: 10.3758/s13414-013-0582-2

Waddell, G., and Williamon, A. (2017). Eye of the beholder: stage entrance behavior and facial expression affect continuous quality ratings in music performance. Front. Psychol. 8:513. doi: 10.3389/fpsyg.2017.00513

Wilting, J., Krahmer, E., and Swerts, M. (2006). Real vs. acted emotional speech. Int. J. Web Eng. Technol. 3, 805–808.

Xu, T., Shi, D., Chen, J., Li, T., Lin, P., and Ma, J. (2020). Dynamics of emotional contagion in dense pedestrian crowds. Phys. Lett. A 384:126080. doi: 10.1016/j.physleta.2019.126080

Yang, F., Zhao, X., Jiang, W., Gao, P., and Liu, G. (2019). Multi-method fusion of cross-subject emotion recognition based on high-dimensional EEG features. Front. Comput. Neurosci. 13:53. doi: 10.3389/fncom.2019.00053

Keywords: socio-affective behavior, computational methods, joint music-making, prosocial behavior, emotional contagion

Citation: Harris I and Küssner MB (2020) Come on Baby, Light My Fire: Sparking Further Research in Socio-Affective Mechanisms of Music Using Computational Advancements. Front. Psychol. 11:557162. doi: 10.3389/fpsyg.2020.557162

Received: 29 April 2020; Accepted: 30 October 2020;
Published: 08 December 2020.

Edited by:

Marta Olivetti Belardinelli, Sapienza University of Rome, Italy

Reviewed by:

Jonna Katariina Vuoskoski, University of Oslo, Norway

Copyright © 2020 Harris and Küssner. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ilana Harris, ilanaharris@alumni.harvard.edu