
EDITORIAL article

Front. Psychol., 21 March 2024
Sec. Cognitive Science
This article is part of the Research Topic Crossmodal Correspondence

Editorial: Crossmodal correspondence

  • 1The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
  • 2Department of Communication and Psychology, Aalborg University, Aalborg, Denmark
  • 3Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom

Editorial on the Research Topic
Crossmodal correspondence

Over the last decade or so, there has been an explosion of scientific research interest in the crossmodal correspondences, the name given to the often surprising, yet consensual, associations that have been documented between a growing number of basic sensory features, attributes, and dimensions in different sensory modalities (see Spence, 2011, for a review). For example, people have been shown to match auditory pitch with visual elevation, size, and lightness, among other features. Over the years, several different mechanisms have been put forward to help explain the existence of such crossmodal correspondences, including the statistical account, the structural (or neurophysiological) account, the semantic/lexical account, and an account in terms of emotional mediation. It is, however, important to note that these various explanations should not be treated as mutually exclusive. Indeed, several, or perhaps all, of them may help to explain a variable proportion of the various correspondences that have been documented to date.

The 13 papers that were eventually accepted for this Research Topic effectively highlight the current global growth of research interest in this phenomenon. What is also striking is how the articles collected here range well beyond the pitch-based audiovisual correspondences that previously attracted so much of the interest in correspondences research (see Spence and Sathian, 2020, for a review). Recently, researchers have increasingly started to investigate group differences in crossmodal correspondences, as well as studying when in development humans start to express a sensitivity to such correspondences (see Spence, 2022, for a review). The latter approach is illustrated in this Research Topic by an exploratory study reported by Meng et al., in which crossmodal correspondences between visual features (such as shape/angularity and color) and tastes (e.g., bitter, sweet, sour, salty) were assessed in a group of pre-schoolers. Several factors have been suggested to affect/constrain the crossmodal correspondences, including their context-dependence (Motoki and Velasco, 2021), their automaticity (Spence and Deroy, 2013b; Getz and Kubovy, 2018), and their bidirectionality (Motoki et al., 2023; Yang et al., 2023; Chen and Huang). These topics are addressed in several of the contributions here. Chen and Huang showed that sound-shape correspondences are not completely automatic, but that their modulation was bidirectionally symmetrical once it occurred. The bidirectionality of crossmodal correspondences means that effects on visual perception can be elicited by stimuli from other sensory modalities, such as gustatory and/or olfactory stimuli, as highlighted by a couple of the intriguing submissions in this Research Topic (e.g., Ward et al.; Yang et al., 2023). Ward et al. used an achromatic adjustment task to show that the presence of odors modulates color perception. Such crossmodal effects on vision are surprising inasmuch as vision is so often found to be the dominant sense in multisensory research (Hutmacher, 2019).

“Sonic seasoning” is the generic name for the modification of taste and flavor that results from listening to music that corresponds crossmodally to the dominant tastes/flavors of a food or drink; such correspondences between sound and the chemical senses continue to attract much research interest. The experimental paper from Xu et al. investigates the role of self-construal priming in the effectiveness of sonic seasoning. Meanwhile, the paper from Mesz et al. investigates emotional associations with music-based crossmodal correspondences.

Historically, the role of crossmodal correspondences in helping to solve the crossmodal binding problem has occupied the attention of many researchers (Chen and Spence, 2017). At the same time, the paper by Yang et al. (2023) demonstrates that the crossmodal correspondence between color and taste influences performance on the well-established Stroop task. However, what is striking about so many of the articles collected here is how they attempt to apply our growing understanding of the crossmodal correspondences to a range of real-world applications: these range from the potential facilitatory role of crossmodal correspondences in the design of human augmentation systems (see Pinardi et al.), through to their use in helping to describe, communicate about, and market wine (Crichton-Fock, Spence, Mora, et al.; Crichton-Fock, Spence, and Pettersson). At the same time, the crossmodal correspondences clearly also help to provide guidelines for the design of multisensory experiential events (see Velasco and Spence, 2022, for a review). Meanwhile, the article from Ogata et al. reports some intriguing results concerning the impact of the shape of chocolate on taste ratings, building on the literature on shape-taste crossmodal correspondences (see Spence, 2014).

One other area of continuing research interest concerns the relationship between crossmodal correspondences, synaesthesia, and mental imagery (Rader and Tellegen, 1987; Martino and Marks, 2001; Spence and Deroy, 2013a; Nanay, 2020). Relevant here, the paper by Hitsuwari and Nomura details research validating a Japanese version of the Plymouth Sensory Imagery Questionnaire which provides researchers with a means of assessing the strength of mental imagery in each of the senses.

Extending the scope of crossmodal correspondences research, this Research Topic also includes a couple of papers that might best be classified as addressing sensory-conceptual/categorical correspondences (Chen et al.) and intramodal visual correspondences (Zelazny et al.). The first of these two papers demonstrates that the color red biases sex categorization of human bodies. One theme that emerges from the latter study, as well as from several other recently published studies (e.g., Velasco et al., 2023), is the importance of providing participants with a wide enough range of response options if one's goal is to identify the strongest correspondences, given that older studies offering only a narrow range of colors, say, may merely have picked up the best color among the options provided to the participant. Such methodological developments should help ensure that future theorizing about the correspondences rests on firm empirical foundations.

Taken together, the research papers that have been gathered together in this Research Topic clearly highlight the vibrant state of crossmodal correspondences research in both the theoretical and applied arenas. One exciting area in correspondences research that is not represented here relates to the emergence of studies assessing the sensitivity of various animals to crossmodal correspondences. So, for example, Loconsole et al. (2021, 2022) have recently published several studies demonstrating audiovisual crossmodal correspondences in both chicks and the tortoise (Testudo hermanni; Loconsole et al., 2023).

Author contributions

NC: Writing – review & editing. TS: Writing – review & editing. CS: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Chen, Y.-C., and Spence, C. (2017). Assessing the role of the ‘unity assumption' on multisensory integration: a review. Front. Psychol. 8:445. doi: 10.3389/fpsyg.2017.00445


Getz, L. M., and Kubovy, M. (2018). Questioning the automaticity of audiovisual correspondences. Cognition 175, 101–108. doi: 10.1016/j.cognition.2018.02.015


Hutmacher, F. (2019). Why is there so much more research on vision than on any other sensory modality? Front. Psychol. 10:2246. doi: 10.3389/fpsyg.2019.02246


Loconsole, M., Gasparini, A., and Regolin, L. (2022). Pitch–luminance crossmodal correspondence in the baby chick: an investigation on predisposed and learned processes. Vision 6:24. doi: 10.3390/vision6020024


Loconsole, M., Pasculli, M. S., and Regolin, L. (2021). Space-luminance crossmodal correspondences in domestic chicks. Vision Res. 188, 26–31. doi: 10.1016/j.visres.2021.07.001


Loconsole, M., Stancher, G., and Versace, E. (2023). Crossmodal association between visual and acoustic cues in a tortoise (Testudo hermanni). Biol. Lett. 19:20230265. doi: 10.1098/rsbl.2023.0265


Martino, G., and Marks, L. E. (2001). Synesthesia: strong and weak. Curr. Dir. Psychol. Sci. 10, 61–65. doi: 10.1111/1467-8721.00116


Motoki, K., Marks, L. E., and Velasco, C. (2023). Reflections on cross-modal correspondences: current understanding and issues for future research. Multisens. Res. 37, 1–23. doi: 10.1163/22134808-bja10114


Motoki, K., and Velasco, C. (2021). Taste-shape correspondences in context. Food Qual. Prefer. 88:104082. doi: 10.1016/j.foodqual.2020.104082


Nanay, B. (2020). Synesthesia as (multimodal) mental imagery. Multisens. Res. 34, 281–296. doi: 10.1163/22134808-bja10027


Rader, C. M., and Tellegen, A. (1987). An investigation of synesthesia. J. Pers. Soc. Psychol. 52, 981–987. doi: 10.1037/0022-3514.52.5.981


Spence, C. (2011). Crossmodal correspondences: a tutorial review. Attent. Percept. Psychophys. 73, 971–995. doi: 10.3758/s13414-010-0073-7


Spence, C. (2014). Assessing the influence of shape and sound symbolism on the consumer's response to chocolate. New Food 17, 59–62.


Spence, C. (2022). Exploring group differences in the crossmodal correspondences. Multisens. Res. 35, 495–536. doi: 10.1163/22134808-bja10079


Spence, C., and Deroy, O. (2013a). “Crossmodal mental imagery,” in Multisensory Imagery: Theory and Applications, eds S. Lacey, and R. Lawson (New York, NY: Springer), 157–183.


Spence, C., and Deroy, O. (2013b). How automatic are crossmodal correspondences? Conscious. Cogn. 22, 245–260. doi: 10.1016/j.concog.2012.12.006


Spence, C., and Sathian, K. (2020). “Audiovisual crossmodal correspondences: behavioural consequences and neural underpinnings,” in Multisensory Perception: From Laboratory to Clinic, eds K. Sathian, and V. S. Ramachandran (San Diego, CA: Elsevier), 239–258.


Velasco, C., Barbosa Escobar, F., Spence, C., and Olier, J. S. (2023). The taste of colours revisited. Food Qual. Prefer. 112:105009. doi: 10.1016/j.foodqual.2023.105009


Velasco, C., and Spence, C. (2022). “Capitalizing on the crossmodal correspondences between audition and olfaction in the design of multisensory experiences,” in Experiential Marketing in an Era of Hyper-Connectivity, eds N. Pomirleanu, B. J. Mariadoss, and J. Schibrowsky (Newcastle upon Tyne: Cambridge Scholars Publishing), 85–113.


Yang, Y., Chen, N., Kobayashi, M., and Watanabe, K. (2023). Color-taste correspondence tested by the Stroop task. Front. Psychol. 15:1250781. doi: 10.3389/fpsyg.2024.1250781


Keywords: crossmodal correspondences, sound-shape, music-taste, development, color-taste

Citation: Chen N, Sørensen TA and Spence C (2024) Editorial: Crossmodal correspondence. Front. Psychol. 15:1385480. doi: 10.3389/fpsyg.2024.1385480

Received: 12 February 2024; Accepted: 11 March 2024;
Published: 21 March 2024.

Edited and reviewed by: Snehlata Jaswal, Sikkim University, India

Copyright © 2024 Chen, Sørensen and Spence. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Na Chen, imminana7@gmail.com

ORCID: Na Chen orcid.org/0000-0003-4558-2393
Thomas Alrik Sørensen orcid.org/0000-0002-2429-6550
Charles Spence orcid.org/0000-0003-2111-072X