1 Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom
2 Department of Musicology and Media Studies, Humboldt-Universität zu Berlin, Berlin, Germany
Musical Engagement: Socio-affective Underpinnings
Socio-affective behavior is entangled in our experience of music (Devroop, 2012; Koelsch, 2014; Aucouturier and Canonne, 2017; Saarikallio, 2019). Joint musical engagement, or making and listening to music with others, has been found to increase prosocial tendencies (Kirschner and Tomasello, 2010; Rabinowitch et al., 2013; Cirelli et al., 2014), an effect thought to arise from overlapping mechanisms underpinning interactive musical behavior and empathically driven prosocial behavior (Rabinowitch et al., 2012; Clarke et al., 2015; Saarikallio, 2019). In this paper, we present opportunities for experimental investigation of emotional contagion, a specific subprocess hypothesized to lie at this overlap, and highlight ways to improve understanding of how joint musical engagement may promote prosocial behaviors.
Socio-affective components of joint musical engagement have been postulated following empirical investigation of joint music-making and group music-listening (Egermann et al., 2011; Rabinowitch et al., 2013). These components hinge on subprocesses such as affective alignment, in which joint expression of emotion among interlocutors facilitates the transfer of semantic and affective content (Cross, 2005; Bharucha et al., 2012; Rabinowitch et al., 2012; Vesper et al., 2017). In this sense, affective alignment may contribute to higher-level processes of musical interaction such as shared intentionality: by upregulating constituent socio-affective behaviors (e.g., other-directed behaviors) that help individuals ascertain their interlocutor's internal state and align their behavior accordingly (Cross et al., 2012), it helps ensure that members are working toward a common musical goal in real time and have “coordinated action roles for pursuing that shared goal” (Tomasello et al., 2005, p. 680). Joint music-making's positive influence on socio-affective behaviors in non-musical contexts suggests that psychosocial processes underpinning musical interaction may overlap with those involved in non-musical interaction, and that co-activation of these overlapping structures may result in prosocial transfer effects (Kirschner and Tomasello, 2010; Cross et al., 2012; Saarikallio, 2019).
Scientific inquiry into the effects of musical engagement on prosociality has grown in recent years; in particular, musical engagement's influence on prosocial behaviors underpinned by empathy has gained considerable traction in music psychology and related fields (King and Waddington, 2017; Davis, 2018; Riess et al., 2018). Empathy may be defined as “the ability to produce emotional and experiential responses to the situations of others that approximate their responses and experiences” (Rabinowitch et al., 2013, p. 485). It is a core component of social cognition comprising both slow (controlled) and fast (automatic) psychological subprocesses (i.e., dual process theory; Lieberman, 2007; Batson, 2009) that ultimately “constitute a causal force in motivating prosociality towards other conspecifics” (Decety et al., 2016, p. 371). Slow processes are “evaluative,” requiring top-down cognitive assessment, whereas fast processes perform immediate, “automatic detection” of social signals; separate neural representations for fast and slow processing of social information have been proposed accordingly (i.e., the “mirror neuron system” and the “mentalizing system”; Vogeley, 2017). Emotional contagion, a subprocess of empathy, is defined as automatic mimicry of another's behavioral cues associated with a particular affective state; it is thought to foster survival by improving recognition of and communication between conspecifics, and it underpins the capacity to build and maintain human attachment bonds (de Vignemont and Singer, 2006; Feldman, 2017; Prochazkova and Kret, 2017). Although theoretical study of emotional contagion in music has begun, experimental work that causally explains how automatic detection of socio-affective signals shapes our experience of music is lacking (Miu and Vuoskoski, 2017). Investigating emotional contagion during musical interaction is critical to understanding how relationships between co-performers may resemble other types of social relationships (e.g., through attachment bonds) and, consequently, how joint musical engagement may lead to upregulated other-directed behaviors such as those that arise within a particular social relationship.
Experimental investigation of emotional contagion is practically difficult because it requires simultaneous measurement of interlocutors' complex emotion states in interactionist paradigms; the matter is further complicated in the context of music, where substantial ecological validity is needed to elicit the behaviors of interest (e.g., empathy-promoting musical components; Rabinowitch et al., 2013). In the following two sections, we introduce research from related fields that incorporates computational techniques for measuring behavioral and physiological correlates of emotional contagion. We situate these techniques in the context of music psychology and suggest avenues by which they may be incorporated into existing experimental paradigms to triangulate socio-affective processes across behavioral, physiological, and social-signal measures, as has been done across numerous subfields of psychology (Pantic and Vinciarelli, 2015; Azevedo et al., 2016; Sutherland et al., 2017; de Barbaro, 2019; Oswald et al., 2020).
The Frontier: Behavioral Cues
The following section outlines possibilities for investigating socio-affective components of music using computational methods for behavior recognition drawn from computer science and the behavioral sciences. First, facial expressions are an important behavioral cue for emotional expression in music (Thompson et al., 2008, 2010; Livingstone et al., 2009; Waddell and Williamon, 2017). Approximately 95% of the automatic emotion recognition literature relies on facial cues, and these techniques have consequently been applied to an expanding number of datasets (Noroozi et al., 2018). Computational emotion recognition from facial cues has been incorporated into the study of empathic behavior in group settings. For instance, Kumano et al. (2011, 2014, 2015) conducted a series of experiments testing whether empathic interactions could be predicted from facial data in video recordings of four-person meetings. Their naive Bayes network model predicted empathy states from facial expression information over time, and its performance improved when parameters such as the reaction time of mirrored expressions between interlocutors and head gesture annotations were added. Scientific study of music has not yet incorporated computational identification of joint emotional expression epochs from facial cues; this is likely to be a fruitful area of inquiry, involving the relay of complex affective information at the intersection of individually, socially, and musically driven systems.
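For illustration, the minimal sketch below shows the general shape of such a pipeline: a naive Bayes classifier (here scikit-learn's CategoricalNB) predicting a dyad's annotated empathy state from co-occurring facial-expression labels. The coding scheme and data are hypothetical, and the model is far simpler than Kumano et al.'s.

```python
# Minimal sketch (hypothetical data and coding scheme; far simpler than
# Kumano et al.'s model): predicting a dyad's empathy state from
# co-occurring facial-expression labels with a naive Bayes classifier.
import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Facial-expression category per frame for two interlocutors
# (hypothetical coding: 0 = neutral, 1 = smile, 2 = frown).
expr_a = np.array([1, 1, 0, 2, 1, 0, 2, 1])
expr_b = np.array([1, 0, 0, 2, 1, 1, 2, 2])
# Hand-annotated empathy state per frame: 1 = empathic, 0 = not.
empathy = np.array([1, 0, 1, 1, 1, 0, 1, 0])

# Each observation is the co-occurrence pattern (expression pair).
X = np.column_stack([expr_a, expr_b])
model = CategoricalNB().fit(X, empathy)

# Posterior over empathy states for a new frame in which both smile.
print(model.predict_proba([[1, 1]]))
print(model.predict([[1, 1]]))  # most likely state
```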
Although literature examining communication of emotion through bodily as opposed to facial cues in non-musical contexts is sparse, several studies have found that determination of emotion state is modulated by body posture and movement (Aviezer et al., 2008a,b; Martinez et al., 2016). In musical settings, visual content, often in the form of body movement, plays a critical role in conveying affective information (Vines et al., 2011; Vuoskoski et al., 2014, 2016). Computational analysis of body position and gesture in motion capture experiments has yielded important findings with respect to joint emotional expression and audience-perceived emotion (Burger et al., 2013; Chang et al., 2019). Still, analyses of body postures and gestures across various cultural and developmental contexts, and further determination of the indices that convey socio-affective information, are necessary to better understand their role in both musical interaction and potential transfer to non-musical interaction.
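As an illustration of how motion capture recordings might be reduced to an index of joint expressive movement, the sketch below derives a simple quantity-of-motion descriptor per performer and correlates the two resulting time series. The sampling rate and data are assumptions, and the descriptor is far cruder than the feature sets of Burger et al. (2013) or Chang et al. (2019).

```python
# Minimal sketch (simulated data; assumes 3D marker positions sampled at
# 100 Hz): reduce each performer's motion capture recording to a
# quantity-of-motion time series and correlate the two as a crude index
# of coordinated expressive movement.
import numpy as np

FS = 100  # sampling rate in Hz (assumed)

def quantity_of_motion(markers):
    """markers: array (n_frames, n_markers, 3) of positions in metres.
    Returns per-frame speed summed over markers (m/s)."""
    velocity = np.diff(markers, axis=0) * FS
    speed = np.linalg.norm(velocity, axis=2)  # (n_frames - 1, n_markers)
    return speed.sum(axis=1)

rng = np.random.default_rng(0)
# Random-walk stand-ins for two performers' 20-marker recordings.
performer_a = np.cumsum(rng.normal(0, 1e-3, (1000, 20, 3)), axis=0)
performer_b = np.cumsum(rng.normal(0, 1e-3, (1000, 20, 3)), axis=0)

qom_a = quantity_of_motion(performer_a)
qom_b = quantity_of_motion(performer_b)
print(np.corrcoef(qom_a, qom_b)[0, 1])  # movement-coupling index
```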
Following research on prosocial behaviors as a consequence of joint music-making, similar outcomes of music listening have begun to be studied (Ruth and Schramm, 2020). Continuous self-report of emotion by audience members in live concert settings is a promising experimental tool (Egermann, 2019). These measures collect rating data simultaneously with, and continuously throughout, the presentation of stimuli and may achieve the temporal specificity needed to identify instances of affective alignment between participants. Moreover, continuous measurement of self-reported affect supports various rating interfaces, including linear potentiometers (Vines et al., 2011; Baytaş et al., 2016), binary trigger buttons (Baytaş et al., 2019), and four-quadrant valence-arousal joysticks (Sharma et al., 2020; or their digital analog in Egermann, 2019). Such interfaces may also be attached to the participant (i.e., wearables), making implementation possible in paradigms involving movement. In addition, top-down cameras can provide useful visual displays of crowd behavior, as evinced in analyses of pedestrian movement (Xu et al., 2020); concerning the non-coordinated movement of audiences, this line of research within computer science could complement existing methods in motion tracking (e.g., the analysis of head movement in Swarbrick et al., 2019).
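With respect to continuous ratings, windowed correlation between two audience members' valence traces offers one simple way to flag candidate epochs of affective alignment. The sketch below uses simulated data; the sampling rate, window length, and threshold are illustrative choices rather than validated parameters.

```python
# Minimal sketch (simulated data; sampling rate, window length, and
# threshold are illustrative): locating candidate affective-alignment
# epochs in two continuous valence traces via windowed correlation.
import numpy as np

FS = 2          # rating samples per second (assumed interface rate)
WIN = 10 * FS   # 10-second analysis window
THRESH = 0.7    # correlation above which a window counts as "aligned"

def alignment_epochs(val_a, val_b):
    """Return (start, end) sample indices of windows in which the two
    continuous valence traces correlate above THRESH."""
    epochs = []
    for start in range(0, len(val_a) - WIN, WIN):
        a, b = val_a[start:start + WIN], val_b[start:start + WIN]
        if np.std(a) > 0 and np.std(b) > 0:
            if np.corrcoef(a, b)[0, 1] > THRESH:
                epochs.append((start, start + WIN))
    return epochs

rng = np.random.default_rng(1)
shared = np.sin(np.linspace(0, 12, 600))   # common affective contour
a = shared + rng.normal(0, 0.3, 600)       # listener A's ratings
b = shared + rng.normal(0, 0.3, 600)       # listener B's ratings
print(alignment_epochs(a, b))              # aligned windows
```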
The Future: Emerging Data Sources and Analyses
Music serves a number of social functions in everyday listening (Sloboda and O'Neill, 2001). Recently, social surrogacy has been proposed as an additional reason for music listening, extrapolating from online listener behavior to internal processes (Schäfer and Eerola, 2020). Greenberg and Rentfrow (2017) list a number of avenues by which social media and streaming data can be used within music psychology; implementing several of these analyses in tandem could be well-suited to studying socio-affective behavior. For example, combining song-level emotion data from Spotify's APIs, listeners' comments on social media, and self-report data gathered from online surveys could help determine individual differences in socio-emotional components of music listening. In addition, experience sampling methods (ESMs) have become increasingly popular for administering repeated surveys of everyday musical experience (e.g., Juslin et al., 2008; de Barbaro, 2019), with more recent ESM interfaces allowing for user mobility and nuanced user input (e.g., Randall and Rickard, 2017). Housing state-of-the-art digital self-assessment scales for emotion within existing ESMs could help bolster outcome evaluations and uncover relationships between socio-affective components of everyday musical behaviors (Betella and Verschure, 2016; Juslin, 2016).
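As a concrete starting point, the sketch below retrieves song-level affect estimates such as valence and energy through the spotipy client for Spotify's Web API, ready to be merged with survey or social media data. It assumes valid API credentials are available in the environment, and the track URI is a placeholder.

```python
# Minimal sketch (assumes Spotify API credentials are set in the
# environment; the track URI is a placeholder): retrieving song-level
# affect estimates via the spotipy client for Spotify's Web API.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

TRACK_URI = "spotify:track:PLACEHOLDER_ID"  # replace with a real track URI
features = sp.audio_features([TRACK_URI])[0]
print({key: features[key] for key in ("valence", "energy", "tempo")})
```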
Detecting emotion from the acoustic properties of music has been extensively researched; for instance, tempo and mode tend to be good indicators of perceived emotion (Eerola et al., 2013). However, computational methods for music emotion recognition have tended to favor certain features over others (e.g., timbre accounting for over 60% of the features used; Yang et al., 2019). Recently, researchers have begun to develop software packages for emotion recognition in music that include fine-grained features such as specific textural shifts and articulations (Panda et al., 2018). Such software could provide important contextual affective information in existing joint music-making paradigms. In addition, natural language processing (NLP) of song lyrics is a burgeoning area of research in music psychology (Anglada-Tort et al., 2019). NLP could be useful for identifying empathic tendencies in group songwriting, a prevalent music therapy intervention (e.g., using language style synchrony as a proxy for empathy, as in Lord et al., 2015).
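To illustrate the language style synchrony idea, the sketch below computes a language style matching (LSM) score between two texts, using the standard formula (one minus the normalized absolute difference in function word usage, averaged over categories). The word lists are abbreviated stand-ins for the full function word categories employed in work such as Lord et al. (2015).

```python
# Minimal sketch of language style matching (LSM) as a crude proxy for
# stylistic synchrony between two writers' lyrics. The function-word
# lists are abbreviated illustrations, not full LIWC categories.
import re

CATEGORIES = {
    "pronouns": {"i", "you", "we", "they", "it", "he", "she"},
    "articles": {"a", "an", "the"},
    "conjunctions": {"and", "but", "or", "because"},
}

def rates(text):
    """Proportion of words falling in each function-word category."""
    words = re.findall(r"[a-z']+", text.lower())
    return {c: sum(w in vocab for w in words) / max(len(words), 1)
            for c, vocab in CATEGORIES.items()}

def lsm(text_a, text_b):
    """LSM score in [0, 1]; higher means more matched style."""
    ra, rb = rates(text_a), rates(text_b)
    scores = [1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + 1e-4)
              for c in CATEGORIES]
    return sum(scores) / len(scores)

print(lsm("I know you and I will sing because we can",
          "You and I could sing the night away"))
```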
Lastly, behavioral tasks that can robustly quantify socio-emotional components in various populations after engagement in social music activities are needed. Several studies to date have developed novel tasks, or adapted tasks from other disciplines, to this end (Rabinowitch et al., 2013; Reddish et al., 2014; Brown, 2017). In social neuroscience, researchers have investigated mechanisms underlying evolutionarily advantageous socio-affective behavior through experimental paradigms targeting social modulation of the threat response (DeVries et al., 2003; Coan et al., 2006). Automated stress recognition via analyses of multimodal physiological and motion data has begun to show potential for validated use in social science research (Hovsepian et al., 2015). Further, higher-order pattern detection in heart rate variability (HRV) has been used to predict interpersonal affective alignment at above-chance levels (McCraty, 2017). In the near future, such methods could be incorporated into behavioral cooperation tasks following joint music-making paradigms to assess transfer effects on socio-affective processing in non-musical contexts.
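As a sketch of an HRV-based alignment index of the kind referred to above, the code below computes windowed RMSSD, a standard time-domain HRV measure, for two partners' inter-beat-interval series and correlates the results. The data are simulated, and the analysis is far simpler than the higher-order pattern detection described by McCraty (2017).

```python
# Minimal sketch (simulated inter-beat intervals): windowed RMSSD, a
# standard time-domain HRV index, computed for two partners and
# correlated as a crude physiological-alignment measure.
import numpy as np

def rmssd(ibi_ms):
    """Root mean square of successive differences of inter-beat intervals."""
    return float(np.sqrt(np.mean(np.diff(ibi_ms) ** 2)))

def windowed_rmssd(ibi_ms, win=30):
    """RMSSD over consecutive non-overlapping windows of `win` beats."""
    return np.array([rmssd(ibi_ms[i:i + win])
                     for i in range(0, len(ibi_ms) - win, win)])

rng = np.random.default_rng(2)
ibi_a = 800 + rng.normal(0, 40, 600)   # partner A, ms between beats
ibi_b = 820 + rng.normal(0, 40, 600)   # partner B

hrv_a = windowed_rmssd(ibi_a)
hrv_b = windowed_rmssd(ibi_b)
print(np.corrcoef(hrv_a, hrv_b)[0, 1])  # physiological-alignment index
```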
Concluding Thoughts
Several precautions should be taken when incorporating emotion detection techniques into the scientific study of music. A general framework for the ethical (e.g., intrinsic biases due to demographically limited training data) and practical (e.g., overfitted algorithms) considerations of using computational techniques in social science research is provided in a review by Martinez (2019). Concerns specific to music psychology include the following. First, non-verbal displays of emotion in musical settings may present differently than in idealized non-musical settings (e.g., differing behavioral cues for real vs. acted emotions, as in Wilting et al., 2006; overlapping basic emotions, as in Juslin et al., 2011; Juslin, 2013; Akkermans et al., 2019); an affective taxonomy appropriate for the given research question should therefore be carefully determined and cross-checked with each round of algorithmic fine-tuning. Furthermore, it is likely advantageous to limit stimuli to a particular genre or song in order to keep behavioral cue utilization consistent among both performers and listeners (Juslin, 2000) and to reduce computational complexity (Lange and Frieler, 2018).
This article has summarized recent developments in music psychology and related fields that may be applied to detecting emotional contagion in music. We have discussed this research in terms of how it may be incorporated into existing experimental paradigms in scientific studies of music. We hope to encourage further research into the means by which various forms of musical engagement can yield prosocial benefits for a broader population.
Author Contributions
IH conducted literature reviews to gather the necessary evidence and generated initial drafts. MBK assisted with argument development and expanded the source material. Both authors approved the final version of the manuscript.
Funding
This research was supported by the Konrad Adenauer Stiftung for Postgraduate Research at Humboldt-Universität zu Berlin, and the Herchel Smith Fellowship for Postgraduate Research at Cambridge.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
The authors would like to acknowledge the Konrad Adenauer Stiftung for the financial and administrative support provided throughout the process in which this article was conceived. In addition, the first author would like to thank the Institut für Musikwissenschaft und Medienwissenschaft, in particular Professor Sebastian Klotz and the members of the Transkulturelle Musikwissenschaft Kolloquium, as well as members of the Cognitive Neurobiology Current Topics Seminar, particularly Dr. Vladislav Nachev, Dr. Katharina Stumpenhorst, and Professor York Winter, for providing crucial feedback during the development of this research. Lastly, the first author would like to thank her current mentor, Professor Ian Cross, for greatly helping to consolidate this research through critical feedback and discussion.
References
Akkermans, J., Schapiro, R., Müllensiefen, D., Jakubowski, K., Shanahan, D., Baker, D., et al. (2019). Decoding emotions in expressive music performances: a multi-lab replication and extension study. Cogn. Emot. 33, 1099–1118. doi: 10.1080/02699931.2018.1541312
Anglada-Tort, M., Krause, A. E., and North, A. C. (2019). Popular music lyrics and musicians' gender over time: a computational approach. Psychol. Music. doi: 10.1177/0305735619871602
Aucouturier, J.-J., and Canonne, C. (2017). Musical friends and foes: the social cognition of affiliation and control in improvised interactions. Cognition 161, 94–108. doi: 10.1016/j.cognition.2017.01.019
Aviezer, H., Hassin, R. R., Bentin, S., and Trope, Y. (2008a). “Putting facial expressions back in context,” in First Impressions, eds N. Ambady, and J. J. Skowronski (New York, NY: Guilford Publications), 255–286.
Aviezer, H., Hassin, R. R., Ryan, J., Grady, C., Susskind, J., Anderson, A., et al. (2008b). Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychol. Sci. 19, 724–732. doi: 10.1111/j.1467-9280.2008.02148.x
Azevedo, R., Taub, M., Mudrick, N., Farnsworth, J., and Martin, S. A. (2016). “Interdisciplinary research methods used to investigate emotions with advanced learning technologies,” in Methodological Advances in Research on Emotion and Education, eds M. Zembylas, and P. A. Schutz (Cham: Springer International Publishing), 231–243. doi: 10.1007/978-3-319-29049-2_18
Batson, C. D. (2009). “These things called empathy: eight related but distinct phenomena,” in The Social Neuroscience of Empathy, eds J. Decety and W. Ickes (Cambridge, MA: MIT Press), 3–15. Available online at: https://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262012973.001.0001/upso-9780262012973-chapter-2 (accessed April 5, 2020).
Baytaş, M. A., Göksun, T., and Özcan, O. (2016). The perception of live-sequenced electronic music via hearing and sight. Proc. Int. Conf. New Interfaces Music. Expr. 16, 194–199. doi: 10.5281/zenodo.1175978
Baytaş, M. A., Obaid, M., La Delfa, J., Yantaç, A. E., and Fjeld, M. (2019). “Integrated apparatus for empirical studies with embodied autonomous social drones,” in 1st International Workshop on Human-Drone Interaction (Glasgow: Ecole Nationale de l'Aviation Civile [ENAC]). Available online at: https://hal.archives-ouvertes.fr/hal-02128387 (accessed February 19, 2020).
Betella, A., and Verschure, P. F. M. J. (2016). The affective slider: a digital self-assessment scale for the measurement of human emotions. PLoS ONE 11:e0148037. doi: 10.1371/journal.pone.0148037
Bharucha, J., Curtis, M., and Paroo, K. (2012). “Musical communication as alignment of brain states,” in Language and Music as Cognitive Systems (New York, NY: Oxford University Press), 139–155. doi: 10.1093/acprof:oso/9780199553426.003.0016
Brown, L. S. (2017). The influence of music on facial emotion recognition in children with autism spectrum disorder and neurotypical children. J. Music Ther. 54, 55–79. doi: 10.1093/jmt/thw017
Burger, B., Saarikallio, S., Luck, G., Thompson, M. R., and Toiviainen, P. (2013). Relationships between perceived emotions in music and music-induced movement. Music Percept. Interdiscip. J. 30, 517–533. doi: 10.1525/mp.2013.30.5.517
Chang, A., Kragness, H. E., Livingstone, S. R., Bosnyak, D. J., and Trainor, L. J. (2019). Body sway reflects joint emotional expression in music ensemble performance. Sci. Rep. 9:205. doi: 10.1038/s41598-018-36358-4
Cirelli, L. K., Einarson, K. M., and Trainor, L. J. (2014). Interpersonal synchrony increases prosocial behavior in infants. Dev. Sci. 17, 1003–1011. doi: 10.1111/desc.12193
Clarke, E., DeNora, T., and Vuoskoski, J. (2015). Music, empathy and cultural understanding. Phys. Life Rev. 15, 61–88. doi: 10.1016/j.plrev.2015.09.001
Coan, J. A., Schaefer, H. S., and Davidson, R. J. (2006). Lending a hand: social regulation of the neural response to threat. Psychol. Sci. 17, 1032–1039. doi: 10.1111/j.1467-9280.2006.01832.x
Cross, I. (2005). “Music and meaning, ambiguity and evolution,” in Musical Communication, eds D. Miell, R. MacDonald, and D. Hargreaves (Oxford, UK: Oxford University Press), 27–43.
Cross, I., Laurence, F., and Rabinowitch, T.-C. (2012). “Empathy and creativity in group musical practices: towards a concept of empathic creativity,” in The Oxford Handbook of Music Education, Vol. 2, eds G. E. McPherson, and G. F. Welch (New York, NY: Oxford University Press), 336–353. doi: 10.1093/oxfordhb/9780199928019.013.0023
Davis, M. H. (2018). “History and definitions,” in Empathy: A Social Psychological Approach (New York, NY; Oxon: Routledge). doi: 10.4324/9780429493898-1
de Barbaro, K. (2019). Automated sensing of daily activity: a new lens into development. Dev. Psychobiol. 61, 444–464. doi: 10.1002/dev.21831
de Vignemont, F., and Singer, T. (2006). The empathic brain: how, when and why? Trends Cogn. Sci. 10, 435–441. doi: 10.1016/j.tics.2006.08.008
Decety, J., Bartal, I. B.-A., Uzefovsky, F., and Knafo-Noam, A. (2016). Empathy as a driver of prosocial behaviour: highly conserved neurobehavioural mechanisms across species. Philos. Trans. R. Soc. B Biol. Sci. 371:20150077. doi: 10.1098/rstb.2015.0077
DeVries, A. C., Glasper, E. R., and Detillion, C. E. (2003). Social modulation of stress responses. Physiol. Behav. 79, 399–407. doi: 10.1016/S0031-9384(03)00152-5
Devroop, K. (2012). The social-emotional impact of instrumental music performance on economically disadvantaged South African students. Music Educ. Res. 14, 407–416. doi: 10.1080/14613808.2012.685456
Eerola, T., Friberg, A., and Bresin, R. (2013). Emotional expression in music: contribution, linearity, and additivity of primary musical cues. Front. Psychol. 4:487. doi: 10.3389/fpsyg.2013.00487
Egermann, H. (2019). Aesthetic Judgement and Emotional Processing in Contemporary Music (Cambridge).
Egermann, H., Sutherland, M. E., Grewe, O., Nagel, F., Kopiez, R., and Altenmüller, E. (2011). Does music listening in a social context alter experience? A physiological and psychological perspective on emotion. Music. Sci. 15, 307–323. doi: 10.1177/1029864911399497
Feldman, R. (2017). The neurobiology of human attachments. Trends Cogn. Sci. 21, 80–99. doi: 10.1016/j.tics.2016.11.007
Greenberg, D. M., and Rentfrow, P. J. (2017). Music and big data: a new frontier. Curr. Opin. Behav. Sci. 18, 50–56. doi: 10.1016/j.cobeha.2017.07.007
Hovsepian, K., al'Absi, M., Ertin, E., Kamarck, T., Nakajima, M., and Kumar, S. (2015). cStress: towards a gold standard for continuous stress assessment in the mobile environment. Proc. ACM Int. Conf. Ubiquitous Comput. 2015, 493–504. doi: 10.1145/2750858.2807526
Juslin, P. N. (2000). Cue utilization in communication of emotion in music performance: relating performance to perception. J. Exp. Psychol. Hum. Percept. Perform. 26, 1797–1812. doi: 10.1037/0096-1523.26.6.1797
Juslin, P. N. (2013). What does music express? Basic emotions and beyond. Front. Psychol. 4:596. doi: 10.3389/fpsyg.2013.00596
Juslin, P. N. (2016). “Emotional reactions to music,” in The Oxford Handbook of Music Psychology, eds I. Cross, S. Hallam, and M. Thaut (Oxford: Oxford University Press), 197–214. doi: 10.1093/oxfordhb/9780198722946.013.17
Juslin, P. N., Liljeström, S., Laukka, P., Västfjäll, D., and Lundqvist, L.-O. (2011). Emotional reactions to music in a nationally representative sample of Swedish adults: prevalence and causal influences. Music. Sci. 15, 174–207. doi: 10.1177/1029864911401169
Juslin, P. N., Liljeström, S., Västfjäll, D., Barradas, G., and Silva, A. (2008). An experience sampling study of emotional reactions to music: listener, music, and situation. Emotion 8, 668–683. doi: 10.1037/a0013505
King, E., and Waddington, C. (Eds.). (2017). Music and Empathy. Abingdon, Oxon; New York, NY: Routledge. doi: 10.4324/9781315596587
Kirschner, S., and Tomasello, M. (2010). Joint music making promotes prosocial behavior in 4-year-old children. Evol. Hum. Behav. 31, 354–364. doi: 10.1016/j.evolhumbehav.2010.04.004
Koelsch, S. (2014). Brain correlates of music-evoked emotions. Nat. Rev. Neurosci. 15, 170–180. doi: 10.1038/nrn3666
Kumano, S., Otsuka, K., Matsuda, M., and Yamato, J. (2014). Analyzing perceived empathy based on reaction time in behavioral mimicry. IEICE Trans. Inf. Syst. E97.D, 2008–2020. doi: 10.1587/transinf.E97.D.2008
Kumano, S., Otsuka, K., Mikami, D., Matsuda, M., and Yamato, J. (2015). Analyzing interpersonal empathy via collective impressions. IEEE Trans. Affect. Comput. 6, 324–336. doi: 10.1109/TAFFC.2015.2417561
Kumano, S., Otsuka, K., Mikami, D., and Yamato, J. (2011). Analyzing empathetic interactions based on the probabilistic modeling of the co-occurrence patterns of facial expressions in group meetings. Face Gesture 2011, 43–50. doi: 10.1109/FG.2011.5771440
Lange, E. B., and Frieler, K. (2018). Challenges and opportunities of predicting musical emotions with perceptual and automatized features. Music Percept. Interdiscip. J. 36, 217–242. doi: 10.1525/mp.2018.36.2.217
Lieberman, M. D. (2007). Social cognitive neuroscience: a review of core processes. Annu. Rev. Psychol. 58, 259–289. doi: 10.1146/annurev.psych.58.110405.085654
Livingstone, S. R., Thompson, W. F., and Russo, F. A. (2009). Facial expressions and emotional singing: a study of perception and production with motion capture and electromyography. Music Percept. Interdiscip. J. 26, 475–488. doi: 10.1525/mp.2009.26.5.475
Lord, S. P., Sheng, E., Imel, Z. E., Baer, J., and Atkins, D. C. (2015). More than reflections: empathy in motivational interviewing includes language style synchrony between therapist and client. Behav. Ther. 46, 296–303. doi: 10.1016/j.beth.2014.11.002
Martinez, A. M. (2019). The promises and perils of automated facial action coding in studying children's emotions. Dev. Psychol. 55, 1965–1981. doi: 10.1037/dev0000728
Martinez, L., Falvello, V. B., Aviezer, H., and Todorov, A. (2016). Contributions of facial expressions and body language to the rapid perception of dynamic emotions. Cogn. Emot. 30, 939–952. doi: 10.1080/02699931.2015.1035229
McCraty, R. (2017). New frontiers in heart rate variability and social coherence research: techniques, technologies, and implications for improving group dynamics and outcomes. Front. Public Health 5:267. doi: 10.3389/fpubh.2017.00267
Miu, A., and Vuoskoski, J. (2017). “The social side of music listening: empathy and contagion in music-induced emotions,” in Music and Empathy, eds E. King and C. Waddington (New York, NY: Routledge), 124–138. doi: 10.4324/9781315596587-6
Noroozi, F., Corneanu, C., Kamińska, D., Sapiński, T., Escalera, S., and Anbarjafari, G. (2018). Survey on emotional body gesture recognition. IEEE Trans. Affect. Comput. 9:1. doi: 10.1109/TAFFC.2018.2874986
Oswald, F. L., Behrend, T. S., Putka, D. J., and Sinar, E. (2020). Big data in industrial-organizational psychology and human resource management: forward progress for organizational research and practice. Annu. Rev. Organ. Psychol. Organ. Behav. 7, 505–533. doi: 10.1146/annurev-orgpsych-032117-104553
Panda, R., Malheiro, R. M., and Paiva, R. P. (2018). Novel audio features for music emotion recognition. IEEE Trans. Affect. Comput. 9:1. doi: 10.1109/TAFFC.2018.2820691
Pantic, M., and Vinciarelli, A. (2015). “Social signal processing,” in The Oxford Handbook of Affective Computing, eds R. Calvo, S. D'Mello, J. Gratch, and A. Kappas (Oxford: Oxford University Press).
Prochazkova, E., and Kret, M. E. (2017). Connecting minds and sharing emotions through mimicry: a neurocognitive model of emotional contagion. Neurosci. Biobehav. Rev. 80, 99–114. doi: 10.1016/j.neubiorev.2017.05.013
Rabinowitch, T.-C., Cross, I., and Burnard, P. (2012). “Musical group interaction, intersubjectivity and merged subjectivity,” in Kinesthetic Empathy in Creative and Cultural Practices, eds D. Reynolds and M. Reason (Bristol; Chicago, IL: Intellect Books), 109–120.
Rabinowitch, T.-C., Cross, I., and Burnard, P. (2013). Long-term musical group interaction has a positive influence on empathy in children. Psychol. Music 41, 484–498. doi: 10.1177/0305735612440609
Randall, W. M., and Rickard, N. S. (2017). Personal music listening: a model of emotional outcomes developed through mobile experience sampling. Music Percept. Interdiscip. J. 34, 501–514. doi: 10.1525/mp.2017.34.5.501
Reddish, P., Bulbulia, J., and Fischer, R. (2014). Does synchrony promote generalized prosociality? Relig. Brain Behav. 4, 3–19. doi: 10.1080/2153599X.2013.764545
Riess, H., Neporent, L., and Alda, A. (2018). “Shared mind intelligence,” in The Empathy Effect: Seven Neuroscience-Based Keys for Transforming the Way We Live, Love, Work, and Connect Across Differences (Boulder, CO: Sounds True).
Ruth, N., and Schramm, H. (2020). Effects of prosocial lyrics and musical production elements on emotions, thoughts and behavior. Psychol. Music. doi: 10.1177/0305735620902534
Saarikallio, S. (2019). Access-awareness-agency (AAA) model of music-based social-emotional competence (MuSEC). Music Sci. 2:2059204318815421. doi: 10.1177/2059204318815421
Schäfer, K., and Eerola, T. (2020). How listening to music and engagement with other media provide a sense of belonging: an exploratory study of social surrogacy. Psychol. Music 48, 232–251. doi: 10.1177/0305735618795036
Sharma, K., Castellini, C., Stulp, F., and van den Broek, E. L. (2020). Continuous, real-time emotion annotation: a novel joystick-based analysis framework. IEEE Trans. Affect. Comput. 11, 78–84. doi: 10.1109/TAFFC.2017.2772882
Sloboda, J. A., and O'Neill, S. A. (2001). “Emotions in everyday listening to music,” in Music and Emotion: Theory and Research. Series in Affective Science, eds P. N. Juslin, and J. A. Sloboda (New York, NY: Oxford University Press), 415–429.
Sutherland, C. A. M., Rhodes, G., and Young, A. W. (2017). Facial image manipulation: a tool for investigating social perception. Soc. Psychol. Personal. Sci. 8, 538–551. doi: 10.1177/1948550617697176
Swarbrick, D., Bosnyak, D., Livingstone, S., Bansal, J., Marsh-Rollo, S., Woolhouse, M., et al. (2019). How live music moves us: head movement differences in audiences to live versus recorded music. Front. Psychol. 9:2682. doi: 10.3389/fpsyg.2018.02682
Thompson, W. F., Russo, F. A., and Livingstone, S. R. (2010). Facial expressions of singers influence perceived pitch relations. Psychon. Bull. Rev. 17, 317–322. doi: 10.3758/PBR.17.3.317
Thompson, W. F., Russo, F. A., and Quinto, L. (2008). Audio-visual integration of emotional cues in song. Cogn. Emot. 22, 1457–1470. doi: 10.1080/02699930701813974
Tomasello, M., Carpenter, M., Call, J., Behne, T., and Moll, H. (2005). Understanding and sharing intentions: the origins of cultural cognition. Behav. Brain Sci. 28, 675–691. doi: 10.1017/S0140525X05000129
Vesper, C., Abramova, E., Bütepage, J., Ciardo, F., Crossey, B., Effenberg, A., et al. (2017). Joint action: mental representations, shared information and general mechanisms for coordinating with others. Front. Psychol. 7:2039. doi: 10.3389/fpsyg.2016.02039
Vines, B., Krumhansl, C., Wanderley, M., Dalca, I., and Levitin, D. (2011). Music to my eyes: cross-modal interactions in the perception of emotions in musical performance. Cognition 118, 157–170. doi: 10.1016/j.cognition.2010.11.010
Vogeley, K. (2017). Two social brains: neural mechanisms of intersubjectivity. Philos. Trans. R. Soc. B Biol. Sci. 372:20160245. doi: 10.1098/rstb.2016.0245
Vuoskoski, J. K., Gatti, E., Spence, C., and Clarke, E. F. (2016). Do visual cues intensify the emotional responses evoked by musical performance? A psychophysiological investigation. Psychomusicol. Music Mind Brain 26, 179–188. doi: 10.1037/pmu0000142
Vuoskoski, J. K., Thompson, M. R., Clarke, E. F., and Spence, C. (2014). Crossmodal interactions in the perception of expressivity in musical performance. Atten. Percept. Psychophys. 76, 591–604. doi: 10.3758/s13414-013-0582-2
Waddell, G., and Williamon, A. (2017). Eye of the beholder: stage entrance behavior and facial expression affect continuous quality ratings in music performance. Front. Psychol. 8:513. doi: 10.3389/fpsyg.2017.00513
Wilting, J., Krahmer, E., and Swerts, M. (2006). “Real vs. acted emotional speech,” in Proceedings of Interspeech 2006 (Pittsburgh, PA), 805–808.
Xu, T., Shi, D., Chen, J., Li, T., Lin, P., and Ma, J. (2020). Dynamics of emotional contagion in dense pedestrian crowds. Phys. Lett. A 384:126080. doi: 10.1016/j.physleta.2019.126080
Keywords: socio-affective behavior, computational methods, joint music-making, prosocial behavior, emotional contagion
Citation: Harris I and Küssner MB (2020) Come on Baby, Light My Fire: Sparking Further Research in Socio-Affective Mechanisms of Music Using Computational Advancements. Front. Psychol. 11:557162. doi: 10.3389/fpsyg.2020.557162
Received: 29 April 2020; Accepted: 30 October 2020;
Published: 08 December 2020.
Edited by:
Marta Olivetti Belardinelli, Sapienza University of Rome, Italy
Reviewed by:
Jonna Katariina Vuoskoski, University of Oslo, Norway
Copyright © 2020 Harris and Küssner. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Ilana Harris, ilanaharris@alumni.harvard.edu