- 1 Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- 2 Rehabilitation Engineering Laboratory, Eidgenössische Technische Hochschule Zürich, Zurich, Switzerland
- 3 Department of Neurology, University Hospital, Geneva, Switzerland
Scientific investigations of the nature of the self have so far focused on high-level mechanisms. Recent evidence, however, suggests that low-level bottom-up mechanisms of multi-sensory integration play a fundamental role in encoding specific components of bodily self-consciousness, such as self-location and first-person perspective (Blanke and Metzinger, 2009). Self-location and first-person perspective are abnormal in neurological patients suffering from out-of-body experiences (Blanke et al., 2004), and can be manipulated experimentally in healthy subjects by imposing multi-sensory conflicts (Lenggenhager et al., 2009). Activity of the temporo-parietal junction (TPJ) reflects experimentally induced changes in self-location and first-person perspective (Ionta et al., 2011), and dysfunctions of the TPJ are causally associated with out-of-body experiences (Blanke et al., 2002). We argue that the TPJ is one of the key areas for the multi-sensory integration underlying bodily self-consciousness, that its levels of activity reflect the experience of the conscious “I” as embodied and localized within bodily space, and that these mechanisms can be systematically investigated using state-of-the-art technologies such as robotics, virtual reality, and non-invasive neuroimaging.
Bodily Self
Some of the most important brain systems of humans are dedicated to maintaining the balance between the self and the external environment, by processing and integrating many different bodily sensory inputs (visual, auditory, vestibular, somatosensory, motor, visceral, etc.), and providing an online representation of the body in the world (Damasio, 1999; Gallagher, 2005; Jeannerod, 2006; Blanke and Metzinger, 2009). In this view, the body representation in the brain is a complex crossroads where multi-sensory information is combined in order to build the basis for bodily self-consciousness (Haggard et al., 2003; Jeannerod, 2007; Metzinger, 2008). Many behavioral studies over the last two decades have used techniques imposing multi-sensory conflict as a means to manipulate some components of self-consciousness. For example, the “rubber hand illusion” paradigm showed that by manipulating local aspects of body perception, it is possible to induce an illusory sense of ownership of a fake hand (e.g., Botvinick and Cohen, 1998; Pavani et al., 2000; Ehrsson et al., 2004; Tsakiris and Haggard, 2005; Tsakiris et al., 2007; Aimola Davies et al., 2010). In particular, if participants observe a rubber hand being stroked synchronously with their own (hidden) hand, they tend to report self-attribution of the rubber hand, as if it were their own hand. This illusory self-attribution is often accompanied by a “proprioceptive drift” toward the location of the rubber hand. Specifically, participants report a change in where they feel their real hand to be located (review in Tsakiris, 2010). Similarly, if a participant holds one palm against that of someone else and simultaneously strokes the dorsal side of both her/his own and the other’s index finger, an illusory feeling of numbness for the other person’s finger can be perceived: the so-called “numbness” illusion (Dieguez et al., 2009). Furthermore, it has recently been shown that illusory self-attribution is not limited to the hands, but extends to other body parts including the face (Sforza et al., 2010). For example, the experience of having one’s own face touched whilst simultaneously (in both the spatial and the temporal sense) seeing the same action applied to the face of another elicits the so-called “enfacement” illusion: that is, an illusory sense of face ownership is induced and the other’s facial features are incorporated into the participant’s face (Sforza et al., 2010). All of these findings on illusory self-attribution support the idea that low-level multi-sensory processes can influence bodily self-consciousness. However, the self and bodily self-consciousness are globally associated with the body, rather than with multiple different body parts (Lenggenhager et al., 2007; Metzinger, 2008; Blanke and Metzinger, 2009). Recent behavioral studies showed that, beyond local aspects of body perception and self-attribution (rubber hand illusion, numbness illusion, face illusion), multi-sensory conflicts can also be used to manipulate more global aspects of body perception (Ehrsson, 2007; Lenggenhager et al., 2007, 2009; Petkova and Ehrsson, 2008; Aspell et al., 2009, 2010). These studies showed that it is possible to investigate more global aspects of bodily self-consciousness and described several different components thereof, such as self-location, first-person perspective, and self-identification.
Abnormal Bodily Self-Consciousness
A central aspect of global bodily self-consciousness is the sense of where the self is perceived to be located in space, or “self-location.” This apparently obvious link between the self and the body can be altered and experienced as being non-body centered. Patients suffering from out-of-body experiences (OBEs) of neurological origin experience themselves as located outside their own bodily boundaries (abnormal self-location), and report looking at their real body from an elevated perspective in extrapersonal space (abnormal first-person perspective; Irwin, 1985; Blanke et al., 2004; Blanke and Mohr, 2005; De Ridder et al., 2007). Investigations into the neural correlates of OBEs provide insights into the multi-sensory nature of self-consciousness (Irwin, 1985; Brugger et al., 1997; Blanke et al., 2002, 2004; Brugger, 2002; Blanke and Mohr, 2005). Clinical studies showed that OBEs are linked to dysfunctions of the temporo-parietal junction (TPJ; Blanke et al., 2004; Blanke and Mohr, 2005), but also of frontal and parietal cortices (Lopez et al., 2010; Heydrich et al., 2011). Furthermore, electrical stimulation of the TPJ induces OBE-like experiences (Penfield, 1955; Blanke et al., 2002; De Ridder et al., 2007), and the TPJ is activated during mental imagery of “disembodied” self-location (Arzy et al., 2006; Blanke et al., 2010). Based on these findings, an association between TPJ dysfunction and OBEs has been proposed (Blanke et al., 2002, 2004; Maillard et al., 2004; Blanke and Mohr, 2005; Brandt et al., 2005; De Ridder et al., 2007; see also Ionta et al., 2011). The TPJ is an excellent candidate for integrating multi-sensory bodily information (and self-consciousness), because it is involved in many self-related processes, such as first-person perspective (Ruby and Decety, 2001; Vogeley and Fink, 2003; Vogeley et al., 2004), self/other discrimination (Farrer et al., 2003; Frith, 2005), theory-of-mind (ToM; review in Frith and Frith, 2003), and self-regulation (Heatherton, 2011). Accordingly, a selective impairment in self-other tasks, such as understanding others’ beliefs, has been reported in patients with lesions of the TPJ (Samson et al., 2005). Together with other brain regions, the TPJ has also been considered part of a brain network involved in ToM, that is, the ability to understand others’ intentions, beliefs, and desires (review in Frith and Frith, 2003). In particular, the right TPJ is believed to play a crucial role in the attribution of mental states (e.g., “she wants to be a teacher”), and both the left and the right TPJ are recruited when participants are asked to imagine another person’s mind (Saxe and Wexler, 2005). Furthermore, activity of the left TPJ seems to be selective for verbal descriptions of another person’s beliefs, while the right TPJ seems to respond more selectively to non-verbal stimuli (Saxe and Kanwisher, 2003). In addition, the TPJ also plays a central role in processing vestibular information, with a right-hemispheric predominance for otolithic inputs and a left-hemispheric predominance for inputs from the semicircular canals (see Lopez et al., 2008 for review). In monkeys, neurons in the TPJ discharge during vestibular stimulation, during tactile stimulation of the face and trunk, and when a stimulus is in close proximity to the body (Grusser et al., 1990; Duhamel et al., 1998; Bremmer et al., 2002).
It is likely that bi- and tri-modal neurons in the TPJ encode the multi-sensory matching of vestibular, visual, and tactile information for the full body, similar to the visuo-tactile bimodal neurons in premotor cortex and the intraparietal sulcus that are anchored to body parts, including the hand (Iriki et al., 1996; Graziano et al., 2000; Maravita and Iriki, 2004).
Jointly, the reviewed data on the role of the TPJ in self-location and first-person perspective, as well as in processes related to self-other distinction and ToM, reveal that cognitive and multi-sensory perceptual aspects of the self recruit at least partly overlapping neural substrates. More work is necessary to investigate how these crucial aspects of the self (conscious-perceptual as well as cognitive-conceptual mechanisms of the self) interrelate behaviorally and neurally at the TPJ and beyond (Blanke and Metzinger, 2009).
Full-Body Illusions and Self-Consciousness
The nature of abnormal self-location and self-identification during OBEs provides a unique opportunity to investigate self-consciousness, but generalization of the results is rendered difficult by several methodological issues (e.g., sample size, lesion homogeneity, differences in etiology and/or phenomenology, and generalization to the normal brain). In order to better control manipulations of self-consciousness with standardized and repeatable experimental protocols, several studies have recently induced OBE-like illusions in large samples of healthy participants by presenting ambiguous multi-sensory information. In particular, self-location, first-person perspective, and self-identification have been experimentally manipulated in healthy subjects using visuo-tactile conflicts (e.g., Ehrsson, 2007; Lenggenhager et al., 2007, 2009; Petkova and Ehrsson, 2008; Aspell et al., 2009).
Pioneering studies by Lenggenhager et al. (2007) and Ehrsson (2007) induced changes in self-location and self-identification using congruent and incongruent visuo-tactile multi-sensory inputs. Their general approach was adapted and extended from the original procedure of the rubber hand illusion (review in Tsakiris, 2010), with a particular emphasis on the synchrony between visual and tactile information. In the setup used by Lenggenhager et al. (2007), participants viewed their own back through a head-mounted display (HMD) connected to a video-camera positioned behind their body. In this way they could see their back from a visuo-spatial third-person point of view, as if it was a virtual body. Their own back was then touched with a wooden stick (tactile stimulation) and the HMD showed the movement either with or without a delay (synchronous/asynchronous visual stimulation). Thus, the touch (tactile experience) perceived by participants was either synchronous or asynchronous with respect to that viewed on the visually presented body. The congruence between the visual and the tactile stimulation determined changes in bodily self-consciousness. In particular, subjective reports indicated that when the visual and tactile stimulation were synchronous, stronger self-identification with the virtual body and stronger illusory touch were experienced (Lenggenhager et al., 2007). Furthermore, behavioral measurements of self-location were acquired by displacing the participants (blindfolded) from the position where they were standing during the visuo-tactile stimulation, and asking them to return to the initial position. Importantly, the indicated positions shifted away from participants’ actual starting location and toward that of the virtual body (Lenggenhager et al., 2007) only after synchronous stimulation. Based on these findings, the authors defined the complex of changes in bodily self-consciousness including self-identification and illusory touch, as well as the self-location change toward the virtual body, as a “full-body illusion.”
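To make this drift measure concrete, the following minimal sketch (in Python, with hypothetical numbers and variable names of our own choosing, not taken from the original study) illustrates how such a change in self-location can be quantified: the position a blindfolded participant walks back to is compared with the position at which he or she actually stood during stimulation, measured along the axis toward the filmed body.

```python
# Minimal illustrative sketch (hypothetical values, not the original data or
# analysis code): quantifying self-location drift in a full-body-illusion
# paradigm as the difference between the position a blindfolded participant
# returns to and the position where he or she actually stood during stimulation.

def self_location_drift_cm(returned_position_cm, actual_position_cm):
    """Drift along the axis toward the virtual body seen in front of the
    participant; positive values mean a shift toward the virtual body."""
    return returned_position_cm - actual_position_cm

# Hypothetical group means for the two stroking conditions.
drift_synchronous = self_location_drift_cm(returned_position_cm=24.0, actual_position_cm=0.0)
drift_asynchronous = self_location_drift_cm(returned_position_cm=3.0, actual_position_cm=0.0)

print(f"synchronous stroking:  {drift_synchronous:.0f} cm toward the virtual body")
print(f"asynchronous stroking: {drift_asynchronous:.0f} cm toward the virtual body")
```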
Ehrsson (2007) used a slightly different setup. Similar to the previous study, participants sat on a chair and wore an HMD connected to two cameras positioned behind their back, affording them a third-person perspective in extrapersonal space. Unlike in the previous study, though, the site of tactile stimulation was the chest, and another stick (identical to the one used for the tactile stimulation) was moved up and down in front of the cameras. The seen and the felt movement were again either synchronous or asynchronous. After 2 min of visuo-tactile multi-sensory stimulation, participants completed a questionnaire. Results indicated that only after the synchronous stroking did participants report the experience of “sitting behind their back” and “looking at themselves from this location.” Control questions did not show differences in responses across synchronous and asynchronous conditions. Furthermore, physiological responses (skin conductance responses) to a threat toward the virtual body were higher after the synchronous than after the asynchronous stroking (Ehrsson, 2007).
In both experiments participants looked at their own body from an external perspective, and only after synchronous stroking did they report stronger self-identification with the virtual body (Ehrsson, 2007; Lenggenhager et al., 2007), and changes in self-location biased toward the position of the virtual body (Lenggenhager et al., 2007). A direct comparison between these two approaches (back vs. chest-stroking, standing vs. sitting position, presence vs. absence of view of the contact between the stick and the virtual body, etc.) has recently been provided (Lenggenhager et al., 2009). In this study, the participants’ body position was held constant whilst the experimenters measured three different components of bodily self-consciousness: self-location, self-identification, and first-person visuo-spatial perspective. To that end, participants were placed in a prone position and wore an HMD connected to a camera such that they could see their body from above. In one condition participants received the tactile stimulation on their chest and saw a moving stick in front of the camera (with the virtual body in the background). In another condition they felt the stroking on their back and saw the virtual body being touched by the same stick. In both conditions the visual and the tactile stimulations were either synchronous or asynchronous. Participants completed the usual questionnaires on self-identification. Furthermore, self-location was measured by asking participants to imagine dropping a ball from their “felt” location and to estimate the amount of time the ball would need to “hit the ground.” The response times (RTs) of this “mental ball dropping” (MBD) task were recorded. Lenggenhager et al. (2009) showed that during the back-stroking, self-identification and illusory touch (as indicated by the questionnaires) were stronger after synchronous than after asynchronous visuo-tactile stimulation. During the chest-stroking, self-identification and illusory touch were weaker during the synchronous than the asynchronous visuo-tactile stimulation. Results of the MBD task indicated that RTs were shorter in the synchronous back-stroking than in the comparable chest-stroking condition, suggesting that the felt “height” was lower in the back-stroking condition; that is, self-location was biased more toward the virtual body (below) during the back-stroking, and more toward the camera (above) during the chest-stroking.
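Although the MBD task is used as a relative index rather than an absolute measurement, its logic can be made explicit with a short sketch under the assumption (ours, not a claim of the cited studies) that the imagined drop is treated as idealized free fall: longer response times then correspond to a higher felt self-location.

```python
import math

# Illustrative sketch of the logic behind the mental ball dropping (MBD) task,
# assuming the imagined drop behaves like idealized free fall from rest
# (h = g * t^2 / 2). The RT values below are hypothetical, not from the study.

G = 9.81  # gravitational acceleration in m/s^2

def implied_height_m(rt_s):
    """Height (m) a ball would fall during an imagined drop lasting rt_s seconds."""
    return 0.5 * G * rt_s ** 2

def implied_fall_time_s(height_m):
    """Inverse relation: fall duration (s) for a drop from height_m meters."""
    return math.sqrt(2.0 * height_m / G)

# Shorter RTs (e.g., synchronous back-stroking) imply a lower felt self-location
# than longer RTs (e.g., the corresponding chest-stroking condition).
print(f"{implied_height_m(0.55):.1f} m")  # ~1.5 m
print(f"{implied_height_m(0.70):.1f} m")  # ~2.4 m
```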
These data corroborated pioneering self-observations by G. M. Stratton, who described his own experiences with a similar experimental setup. This classical setup allowed him to induce changes in how he saw and felt his body. He reported changes in the visual first-person perspective and self-location when walking with a portable device made of mirrors aligned in such a way that the walker (Stratton himself) could see a projection of his body below and in front of him (Stratton, 1899; see also Blanke et al., 2008). The setup projected an online image of his body in his anterior peripersonal space while he was walking in the countryside of California. He reported progressively increasing changes in self-location and self-identification over the time of exposure, further associated with feelings of “being out-of-body” (Stratton, 1899). Similarly, a comparable spatial conflict between the visual information relative to the moving body and the multi-sensory cues from the real body can be elicited by asking participants to wear an HMD onto which their body, filmed from an elevated perspective, is projected, so that they can see their own body while walking in the room (Mizumoto and Ishikawa, 2005). Using this setup, participants report experiencing the self as located at the position of the visual perspective and simultaneously at the location of the visually presented body (Mizumoto and Ishikawa, 2005). Somewhat comparably, in the experimental setups used by Lenggenhager et al. (2009) and Ehrsson (2007), participants saw their own body being stroked synchronously or asynchronously. This induced changes in self-identification and self-location that were further modulated by the synchrony between visual and tactile stimulation. On that basis it has been proposed that self-location and self-identification are strongly influenced by the location of the seen touch, and that embodied self-location and the first-person perspective can be transformed into a disembodied or outside-body self-location and third-person perspective as a function of how and where the visuo-tactile stimulation occurs (Lenggenhager et al., 2009). We argue that experimental designs based on visuo-tactile multi-sensory disintegration might lead to alterations of the first-person perspective, and that this could be further facilitated by a more extended use of virtual reality (Tarr and Warren, 2002; Sanchez-Vives and Slater, 2005; Riva, 2007; Slater et al., 2010), and perhaps through repeated and prolonged exposure to such artificial bodily signals (Stratton, 1899).
The work on perturbation of the visual field – prism adaptation (PA) – has provided important insights into visuo-spatial processing that may be related to the reviewed experiments (Striemer and Danckert, 2010). In the classic PA procedure developed by Richard Held and colleagues, participants are asked to repeatedly perform goal-directed movements while wearing prismatic goggles (Held and Freedman, 1963; Redding and Wallace, 1997). Prismatic goggles allow researchers to induce variable optical deviations between the seen and the real target position. Thus, goal-directed reaching or pointing movements are initially shifted in the direction of the visual deviation. Adaptation progressively increases with practice and – when the prismatic goggles are removed – generally leads to an error in the opposite direction (Held and Freedman, 1963). The PA procedure thus affects the everyday correlation between motor signals and sensory feedback. The reviewed visuo-tactile procedures, which use video and virtual reality techniques in order to manipulate bodily self-consciousness, share several similarities with such prism-induced adaptations, as they affect the everyday correlation between tactile, visual, and vestibular signals. More systematic work is needed to evaluate whether adaptations such as those observed during prism studies also occur during visuo-tactile stroking (as is suggested, for example, by the changes in self-location) and whether comparable after-effects exist.
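As an illustration of this adaptation logic, the following sketch implements a textbook-style error-correction model under our own assumptions (it is not a model taken from the cited studies): pointing errors during prism exposure shrink as an internal visuomotor offset is updated, and removing the prisms leaves an error of opposite sign.

```python
# Illustrative sketch (not from the cited studies): a simple error-correcting
# model of prism adaptation. During exposure, pointing error decreases as an
# internal visuomotor offset adapts to the optical shift; after prism removal,
# the residual offset produces an error in the opposite direction.

def prism_exposure(prism_shift_deg, n_trials, learning_rate=0.2):
    """Return per-trial pointing errors (deg) and the adapted internal offset."""
    internal_offset = 0.0
    errors = []
    for _ in range(n_trials):
        error = prism_shift_deg - internal_offset  # target appears displaced
        errors.append(error)
        internal_offset += learning_rate * error   # gradual visuomotor adaptation
    return errors, internal_offset

errors, offset = prism_exposure(prism_shift_deg=10.0, n_trials=20)
post_removal_error_deg = 0.0 - offset  # prisms removed: error now has the opposite sign

print(f"first exposure error: {errors[0]:+.1f} deg")
print(f"last exposure error:  {errors[-1]:+.1f} deg")
print(f"post-removal error:   {post_removal_error_deg:+.1f} deg")
```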
Neuroscience Robotics and the Neural Bases of Self-Location and First-Person Perspective
The different setups that investigated self-location and first-person perspective by using video projections and visuo-tactile conflicts showed that it is possible to manipulate some sub-components of bodily self-consciousness (Ehrsson, 2007; Lenggenhager et al., 2007, 2009). However, the temporal and spatial correspondence between the visual and the tactile stimulation in these setups was always applied manually by the experimenter. As such, precise and repeatable manipulations, free from any possible experimenter bias, were difficult to achieve. It was therefore necessary to develop more reliable methodological approaches and to precisely monitor and control what the participants feel and see. Moreover, even though it had been shown that self-location and first-person perspective could be experimentally studied, their neural underpinnings had not been investigated, probably due to the difficulty of applying the visuo-tactile multi-sensory conflict in a well-controlled and repeatable manner during brain imaging data acquisition. Robotic systems and virtual reality are ideal tools to realize such standardized stimulation, and can therefore improve the control in such experimental studies (Blanke and Gassert, 2009). The rapid evolution of computer and virtual-reality technology over the past decades has provided researchers with novel tools to explore different modalities of human perception and cognitive function. This has allowed researchers to revisit long-known phenomena and sensory illusions in behavioral studies with well-controlled and repeatable stimuli that can easily be manipulated in order to introduce multi-sensory conflicts. These conditions can further be manipulated to explore how humans integrate information from different sensory modalities and how they react to perceptual conflicts (Ellis et al., 1999; Ernst and Banks, 2002; Bertelson et al., 2003; Ernst and Bulthoff, 2004). Such environments have found increasing applications in clinics, e.g., for phobia treatment and neurorehabilitation (Jang et al., 2002; Holden, 2005).
In order to expand the variety of sensory modalities and include haptic perception, researchers performed studies in mixed environments, combining virtual reality with real objects. For example, Carlin et al. (1997) used tactile stimulation and virtual reality to treat arachnophobia. More recently, robotic systems, in the form of haptic displays, have been added to such environments, taking advantage of their unique ability to precisely apply tactile stimuli – both temporally and spatially – or to render variable dynamic environments for physical interaction under computer control (Wolpert and Flanagan, 2010). Combined with virtual reality, such systems offer the potential to systematically investigate haptic perception and sensorimotor control with the ability to precisely control and modulate factors such as intensity, location, type, and congruency of stimuli. Flanagan and Wing (1997) used a servo-controlled linear actuator to investigate whether the central nervous system (CNS) uses internal models to adjust grip force when stabilizing hand-held loads during arm movements. Ernst and Banks used a haptic interface and virtual reality to measure the variance of visual and haptic percepts, and to explore how these percepts are optimally integrated based on their reliability (Ernst and Banks, 2002). While these developments have provided greater control over experimental conditions with reduced variability in the presentation of stimuli, they have so far been limited to behavioral studies, and the associated neural correlates and mechanisms have remained unexplored. More recent advances combining virtual reality and/or robotics with non-invasive neuroimaging have therefore opened a whole new range of technology- and neuroscience-driven avenues to investigate sensory processing and multi-sensory integration (Gassert et al., 2008a,b; Blanke and Gassert, 2009; Annett and Bischof, 2010; Dueñas et al., 2011). The first functional studies with robotic interfaces were carried out over a decade ago with positron emission tomography (PET; Shadmehr and Holcomb, 1997; Krebs et al., 1998), and took advantage of the fact that PET is not susceptible to electromagnetic interference from conventional robotic systems. However, PET requires the injection of radioactive tracers, has low temporal resolution (on the order of a minute for oxygen-based studies), and low spatial resolution, making it difficult to differentiate between activation in functionally different areas. The rapid spread and evolution of functional magnetic resonance imaging (fMRI) over the past years, providing whole-brain coverage with high spatial and good temporal resolution, have made this imaging method attractive for neuroscience investigations.
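The reliability-based integration reported by Ernst and Banks (2002) can be summarized with a short sketch of reliability-weighted (maximum-likelihood) cue combination; the numbers below are hypothetical and the function names are our own, chosen only for illustration.

```python
# Illustrative sketch of reliability-weighted (maximum-likelihood) cue
# combination in the spirit of Ernst and Banks (2002). Values are hypothetical.

def combine_cues(estimate_a, var_a, estimate_b, var_b):
    """Fuse two noisy estimates of the same quantity: each cue is weighted by
    its reliability (inverse variance); the fused variance is smaller than
    either single-cue variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    fused_estimate = w_a * estimate_a + w_b * estimate_b
    fused_variance = (var_a * var_b) / (var_a + var_b)
    return fused_estimate, fused_variance

# Hypothetical visual and haptic size estimates (mm) with different noise levels.
fused, fused_var = combine_cues(estimate_a=55.0, var_a=1.0,   # visual (more reliable)
                                estimate_b=52.0, var_b=4.0)   # haptic (less reliable)
print(fused, fused_var)  # the fused estimate is pulled toward the more reliable cue
```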
The MR environment precludes the use of conventional robotic devices with fMRI, both for safety and for compatibility reasons. However, despite these constraints, a study using fMRI, MR-compatible robotics, and visuo-tactile multi-sensory conflict has recently investigated the neural mechanisms of self-location and first-person perspective (Ionta et al., 2011). A robotic device built from MR-compatible materials, sensors, and actuators was embedded in the MR-scanner bed. Participants lay on an ergonomic mattress divided into two parts that held a robotic stimulator in the center, between the two halves. Based on Lenggenhager et al. (2009), the robotic device moved a tactile stimulator along a linear guide located below the back of the subject, driven by an ultrasonic motor via a rack-and-pinion gear. A tactile stimulation sphere was attached at the output via a flexible spring blade. This ensured a constant pressure on the participants’ back and allowed the tactile stimulus to be presented according to a precisely repeatable movement profile. While feeling the tactile stimulation on the back, participants watched videos through MR-compatible video goggles placed in front of their eyes. The videos showed the back of a human body in a prone position, filmed from an elevated perspective, being stroked (visual stroking) synchronously or asynchronously with respect to the tactile stroking performed by the robotic device on the participants’ back. In a control condition the human body was hidden, and participants could see only the rod moving up and down in an empty room. By virtue of this computer-controlled robotic device, the spatial and temporal aspects of the visuo-tactile stimulation were precisely controlled during the fMRI sessions within and across participants. After the visuo-tactile stimulation, self-location was estimated using the MBD task (Lenggenhager et al., 2009). Furthermore, participants completed the questionnaire on self-identification (Ehrsson, 2007; Lenggenhager et al., 2007, 2009) adapted from the original one used for the rubber hand illusion (Botvinick and Cohen, 1998). Confirming pilot testing, it was found that some participants felt as if they were looking up at the virtual body (concordant with their real orientation), whilst others felt as if they were looking down on their virtual body (even though they were facing upward). This finding indicated that two different directions of the first-person perspective were adopted by participants: those forming the “up-group” had the impression of looking upward, those in the “down-group” of looking downward at the virtual body. In addition to this difference in the experienced direction of the first-person perspective between the two groups (as indicated by subjective reports), behavioral results showed that RTs in the MBD task were significantly different between the synchronous and the asynchronous visuo-tactile stroking only when a human body was observed (not during control conditions). Most importantly, the direction of this effect was different for the two groups: in the up-group self-location was higher during the synchronous condition (longer RTs in the MBD) with respect to the asynchronous condition; in the down-group self-location was lower during the synchronous condition (shorter RTs in the MBD) with respect to the asynchronous condition.
Moreover, independently of the synchrony of stroking, participants from the up-group had faster RTs than those in the down-group, suggesting further differences in self-location between the two groups: subjects in the up-group experienced a lower height than those in the down-group. These findings indicated that self-location as measured by the MBD was altered in opposite directions in the two groups, depending on the experienced direction of the first-person perspective (subjective reports). fMRI results showed that the activation patterns in the TPJ reflected changes in self-location and first-person perspective. In particular, in both groups the magnitude of the BOLD response was lower in conditions with a higher self-location as quantified by the MBD task, whereas conditions with a lower self-location were associated with a higher BOLD response. Thus, TPJ activity reflected synchrony-related changes in the level of self-location and further depended on the direction of the first-person perspective. Comparable changes in self-location and in the direction of the first-person perspective reported by patients with OBEs due to TPJ damage (Ionta et al., 2011) also concur with these behavioral and fMRI data, and the effects of stroking synchrony and especially of first-person perspective argue against a mere attentional modulation. OBE patients classically report an elevated, downward-looking perspective that is distant from the body (comparable with participants from the down-group). The results obtained in healthy participants are therefore compatible with clinical data in neurological patients with OBEs (Blanke et al., 2004; De Ridder et al., 2007) and reveal that the temporo-parietal cortex, especially in the right hemisphere, encodes these aspects of bodily self-consciousness.
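To make the logic of this computer-controlled stimulation protocol concrete, the following minimal sketch (our own illustration, not the actual control software used by Ionta et al., 2011) generates a repeatable stroking profile for the tactile stimulator and a time-shifted copy of the same profile for the visual stream; the stroking parameters and the delay chosen for the asynchronous condition are hypothetical.

```python
import math

# Minimal sketch (not the actual control code of the MR-compatible robot):
# a repeatable stroking-position profile for the tactile stimulator and a
# time-shifted copy of the same profile for the visual stream, so that
# synchronous and asynchronous conditions differ only in visuo-tactile delay.

def stroke_position_cm(t_s, period_s=2.0, amplitude_cm=8.0):
    """Back-and-forth position of the stimulation sphere along the linear guide."""
    return 0.5 * amplitude_cm * (1.0 + math.sin(2.0 * math.pi * t_s / period_s))

def stimulation_profiles(duration_s=10.0, dt_s=0.05, visual_delay_s=0.0):
    """Sample tactile and visual stroking positions; a nonzero delay yields the
    asynchronous condition."""
    n_samples = int(duration_s / dt_s)
    times = [i * dt_s for i in range(n_samples)]
    tactile = [stroke_position_cm(t) for t in times]
    visual = [stroke_position_cm(t - visual_delay_s) for t in times]
    return times, tactile, visual

# Synchronous condition: zero delay; asynchronous condition: a fixed (hypothetical) lag.
_, tactile_sync, visual_sync = stimulation_profiles(visual_delay_s=0.0)
_, tactile_async, visual_async = stimulation_profiles(visual_delay_s=0.6)
```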
Finally, the (right-lateralized) TPJ has also been considered part of the brain network involved in visuo-spatial attention (review in Corbetta and Shulman, 2011). Interestingly, improvements in visuo-spatial neglect, a pathological condition that typically affects egocentric spatial relationships with extrapersonal space and the visuo-spatial perspective (Karnath, 1994; Farrell and Robertson, 2000; Vogeley and Fink, 2003), are reported following exposure to prisms (Rode et al., 2006), and these improvements further extend to other sensory modalities such as touch (Maravita et al., 2003) and hearing (Jacquin-Courtois et al., 2010). Based on these findings, it has been proposed that PA may influence the activity of visuomotor structures in the dorsal visual stream that are thought to mediate both motor and attentional processes (Corbetta and Shulman, 2002; Milner and Goodale, 2006). This interpretation supports the idea that PA influences perceptual processes through interactions between areas of the dorsal and ventral visual streams, such as the superior temporal gyrus (STG) and the inferior parietal lobe (Sarri et al., 2006). Indeed, visuo-spatial neglect has been linked to the TPJ, including the STG (Karnath et al., 2001; Halligan et al., 2003), and neglect patients with TPJ lesions also show deficits in stimulus-driven reorienting of attention (Rengachary et al., 2011). Yet the exact role of the TPJ in spatial attention is still controversial: data in healthy subjects show that stimulus-driven attentional processes recruit not only the right TPJ (Shulman et al., 2010) but also the insula and the inferior and medial frontal gyri (Corbetta and Shulman, 2002). Conversely, it has been reported that TPJ activity may also decrease in visual attention tasks (Shulman et al., 1997; Gusnard and Raichle, 2001). On the other hand, the activation of the TPJ during egocentric visuo-spatial perspective changes (Maguire et al., 1998; Vallar et al., 1999; Ruby and Decety, 2001) and during social perception tasks (Narumoto et al., 2001; Winston et al., 2002) is consistent with clinical and experimental data on self-related processes (Blanke et al., 2004; Blanke and Arzy, 2005; Blanke et al., 2005). In summary, there seems to be a functional overlap in the TPJ between processes related to attention and to bodily self-consciousness, with bilateral recruitment in experimental work in healthy subjects and right-lateralized TPJ recruitment in patient studies.
Conclusion and Perspectives
Here we have reviewed behavioral (Ehrsson, 2007; Lenggenhager et al., 2007, 2009), brain imaging (Arzy et al., 2006; Ionta et al., 2011), and clinical evidence (Brugger et al., 1997; Blanke et al., 2004; Blanke and Mohr, 2005; De Ridder et al., 2007) about three aspects of bodily self-consciousness: self-location, first-person perspective, and self-identification. Clinical findings showed that these three components are dissociable, suggesting that they rely on different neural bases. Behavioral studies showed that such dissociations can also be experimentally induced in healthy subjects via the imposition of multi-sensory conflicts. Brain imaging evidence showed that, as a multi-sensory body-related integration area, the TPJ is involved in all three of these aspects of bodily self-consciousness. However, it is worth noting that other areas, including the precuneus (Northoff and Bermpohl, 2004) as well as the prefrontal (Gusnard et al., 2001; Ionta et al., 2010), somatosensory (Ruby and Decety, 2001), and vestibular cortices (Lopez et al., 2008), are also expected to contribute to bodily self-consciousness. Furthermore, recent studies showed the importance of proprioception (Palluel et al., 2011), acoustic information (Aspell et al., 2010), and pain perception (Hansel et al., 2011). Based on the reviewed findings, we conclude that multi-sensory integration is a key brain mechanism for self-consciousness. We suggest that future work should not only investigate mechanisms of visuo-tactile integration, but also their interaction with vestibular, proprioceptive, and cognitive motor signals (e.g., Kannape et al., 2010). We finally suggest that only by using a multi-disciplinary approach combining behavioral and cognitive neuroscience, engineering, and virtual reality with neuroimaging will it become possible to unravel the detailed mechanisms of bodily self-consciousness and other aspects of self-consciousness.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References
Aimola Davies, A. M., White, R. C., Thew, G., Aimola, N. M., and Davies, M. (2010). Visual capture of action, experience of ownership, and the illusion of self-touch: a new rubber hand paradigm. Perception 39, 830–838.
Annett, M. K., and Bischof, W. F. (2010). Investigating the application of virtual reality systems to psychology and cognitive neuroscience research. Presence Teleop. Virt. 19, 131–141.
Arzy, S., Thut, G., Mohr, C., Michel, C. M., and Blanke, O. (2006). Neural basis of embodiment: distinct contributions of temporoparietal junction and extrastriate body area. J. Neurosci. 26, 8074–8081.
Aspell, J. E., Lavanchy, T., Lenggenhager, B., and Blanke, O. (2010). Seeing the body modulates audiotactile integration. Eur. J. Neurosci. 31, 1868–1873.
Aspell, J. E., Lenggenhager, B., and Blanke, O. (2009). Keeping in touch with one’s self: multisensory mechanisms of self-consciousness. PLoS ONE 4, e6488. doi:10.1371/journal.pone.0006488
Bertelson, P., Vroomen, J., and De Gelder, B. (2003). Visual recalibration of auditory speech identification: a McGurk aftereffect. Psychol. Sci. 14, 592–597.
Blanke, O., and Arzy, S. (2005). The out-of-body experience: disturbed self-processing at the temporo-parietal junction. Neuroscientist 11, 16–24.
Blanke, O., and Gassert, R. (2009). Total control in virtual reality and robotics. Front. Neurosci. 3, 110–111.
Blanke, O., Ionta, S., Fornari, E., Mohr, C., and Maeder, P. (2010). Mental imagery for full and upper human bodies: common right hemisphere activations and distinct extrastriate activations. Brain Topogr. 23, 321–332.
Blanke, O., Landis, T., Spinelli, L., and Seeck, M. (2004). Out-of-body experience and autoscopy of neurological origin. Brain 127, 243–258.
Blanke, O., and Metzinger, T. (2009). Full-body illusions and minimal phenomenal selfhood. Trends Cogn. Sci. (Regul. Ed.) 13, 7–13.
Blanke, O., Metzinger, T., and Lenggenhager, B. (2008). Olaf Blanke et al.’s response to Kaspar Meyer’s E-letter. Sci. E Lett.
Blanke, O., and Mohr, C. (2005). Out-of-body experience, heautoscopy, and autoscopic hallucination of neurological origin implications for neurocognitive mechanisms of corporeal awareness and self-consciousness. Brain Res. Brain Res. Rev. 50, 184–199.
Blanke, O., Mohr, C., Michel, C. M., Pascual-Leone, A., Brugger, P., Seeck, M., Landis, T., and Thut, G. (2005). Linking out-of-body experience and self processing to mental own-body imagery at the temporoparietal junction. J. Neurosci. 25, 550–557.
Blanke, O., Ortigue, S., Landis, T., and Seeck, M. (2002). Stimulating illusory own-body perceptions. Nature 419, 269–270.
Brandt, C., Brechtelsbauer, D., Bien, C. G., and Reiners, K. (2005). Out-of-body experience as possible seizure symptom in a patient with a right parietal lesion. Nervenarzt 76, 1259, 1261–1262.
Bremmer, F., Klam, F., Duhamel, J. R., Ben Hamed, S., and Graf, W. (2002). Visual-vestibular interactive responses in the macaque ventral intraparietal area (VIP). Eur. J. Neurosci. 16, 1569–1586.
Brugger, P. (2002). Reflective mirrors: perspective-taking in autoscopic phenomena. Cogn. Neuropsychiatry 7, 179–194.
Brugger, P., Regard, M., and Landis, T. (1997). Illusory reduplication of one’s own body: phenomenology and classification of autoscopic phenomena. Cogn. Neuropsychiatry 2, 19–38.
Carlin, A. S., Hoffman, H. G., and Weghorst, S. (1997). Virtual reality and tactile augmentation in the treatment of spider phobia: a case report. Behav. Res. Ther. 35, 153–158.
Corbetta, M., and Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nat. Rev. Neurosci. 3, 201–215.
Corbetta, M., and Shulman, G. L. (2011). Spatial neglect and attention networks. Annu. Rev. Neurosci. 34, 569–599.
De Ridder, D., Van Laere, K., Dupont, P., Menovsky, T., and Van De Heyning, P. (2007). Visualizing out-of-body experience in the brain. N. Engl. J. Med. 357, 1829–1833.
Dieguez, S., Mercier, M. R., Newby, N., and Blanke, O. (2009). Feeling numbness for someone else’s finger. Curr. Biol. 19, R1108–R1109.
Dueñas, J., Chapuis, C., Pfeiffer, C., Martuzzi, R., Ionta, S., Blanke, O., and Gassert, R. (2011). Neuroscience robotics to investigate multisensory integration and bodily awareness. Proc. IEEE Eng. Med. Biol. Conf. 8348–8352.
Duhamel, J. R., Colby, C. L., and Goldberg, M. E. (1998). Ventral intraparietal area of the macaque: congruent visual and somatic response properties. J. Neurophysiol. 79, 126–136.
Ehrsson, H. H., Spence, C., and Passingham, R. E. (2004). That’s my hand! Activity in premotor cortex reflects feeling of ownership of a limb. Science 305, 875–877.
Ellis, R. R., Flanagan, J. R., and Lederman, S. J. (1999). The influence of visual illusions on grasp position. Exp. Brain Res. 125, 109–114.
Ernst, M. O., and Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 429–433.
Ernst, M. O., and Bulthoff, H. H. (2004). Merging the senses into a robust percept. Trends Cogn. Sci. (Regul. Ed.) 8, 162–169.
Farrell, M. J., and Robertson, I. H. (2000). The automatic updating of egocentric spatial relationships and its impairment due to right posterior cortical lesions. Neuropsychologia 38, 585–595.
Farrer, C., Franck, N., Georgieff, N., Frith, C. D., Decety, J., and Jeannerod, M. (2003). Modulating the experience of agency: a positron emission tomography study. Neuroimage 18, 324–333.
Flanagan, J. R., and Wing, A. M. (1997). The role of internal models in motion planning and control: evidence from grip force adjustments during movements of hand-held loads. J. Neurosci. 17, 1519–1528.
Frith, C. (2005). The self in action: lessons from delusions of control. Conscious. Cogn. 14, 752–770.
Frith, U., and Frith, C. D. (2003). Development and neurophysiology of mentalizing. Philos. Trans. R. Soc. Lond. B Biol. Sci. 358, 459–473.
Gassert, R., Burdet, E., and Chinzei, K. (2008a). MRI-compatible robotics. IEEE Eng. Med. Biol. Mag. 27, 12–14.
Gassert, R., Burdet, E., and Chinzei, K. (2008b). Opportunities and challenges in MR-compatible robotics: reviewing the history, mechatronic components, and future directions of this technology. IEEE Eng. Med. Biol. Mag. 27, 15–22.
Gassert, R., Yamamoto, A., Chapuis, D., Dovat, L., Bleuler, H., and Burdet, E. (2006). Actuation methods for applications in MR environments. Concepts Magn. Reson. Part B Magn. Reson. Eng. 29B, 191–209.
Graziano, M. S., Cooke, D. F., and Taylor, C. S. (2000). Coding the location of the arm by sight. Science 290, 1782–1786.
Grusser, O. J., Pause, M., and Schreiter, U. (1990). Localization and responses of neurones in the parieto-insular vestibular cortex of awake monkeys (Macaca fascicularis). J. Physiol. (Lond.) 430, 537–557.
Gusnard, D. A., Akbudak, E., Shulman, G. L., and Raichle, M. E. (2001). Medial prefrontal cortex and self-referential mental activity: relation to a default mode of brain function. Proc. Natl. Acad. Sci. U.S.A. 98, 4259–4264.
Gusnard, D. A., and Raichle, M. E. (2001). Searching for a baseline: functional imaging and the resting human brain. Nat. Rev. Neurosci. 2, 685–694.
Haggard, P., Taylor-Clarke, M., and Kennett, S. (2003). Tactile perception, cortical representation and the bodily self. Curr. Biol. 13, R170–R173.
Halligan, P. W., Fink, G. R., Marshall, J. C., and Vallar, G. (2003). Spatial cognition: evidence from visual neglect. Trends Cogn. Sci. (Regul. Ed.) 7, 125–133.
Hansel, A., Lenggenhager, B., von Kanel, R., Curatolo, M., and Blanke, O. (2011). Seeing and identifying with a virtual body decreases pain perception. Eur. J. Pain. 15, 874–879.
Heatherton, T. F. (2011). Neuroscience of self and self-regulation. Annu. Rev. Psychol. 62, 363–390.
Held, R., and Freedman, S. J. (1963). Plasticity in human sensorimotor control. Science 142, 455–462.
Heydrich, L., Lopez, C., Seeck, M., and Blanke, O. (2011). Partial and full own-body illusions of epileptic origin in a child with right temporoparietal epilepsy. Epilepsy Behav. 20, 583–586.
Holden, M. K. (2005). Virtual environments for motor rehabilitation: review. Cyberpsychol. Behav. 8, 187–211; discussion 212–219.
Ionta, S., Ferretti, A., Merla, A., Tartaro, A., and Romani, G. L. (2010). Step-by-step: the effects of physical practice on the neural correlates of locomotion imagery revealed by fMRI. Hum. Brain Mapp. 31, 694–702.
Ionta, S., Heydrich, L., Lenggenhager, B., Mouthon, M., Fornari, E., Chapuis, D., Gassert, R., and Blanke, O. (2011). Multisensory mechanisms in temporo-parietal cortex support self-location and first-person perspective. Neuron 70, 363–374.
Iriki, A., Tanaka, M., and Iwamura, Y. (1996). Coding of modified body schema during tool use by macaque postcentral neurones. Neuroreport 7, 2325–2330.
Irwin, H. J. (1985). Flight of Mind: A Psychological Study of the Out-of-Body Experience. Metuchen, NJ: Scarecrow Press.
Jacquin-Courtois, S., Rode, G., Pavani, F., O’Shea, J., Giard, M. H., Boisson, D., and Rossetti, Y. (2010). Effect of prism adaptation on left dichotic listening deficit in neglect patients: glasses to hear better? Brain 133(Pt 3), 895–908.
Jang, D. P., Ku, J. H., Choi, Y. H., Wiederhold, B. K., Nam, S. W., Kim, I. Y., and Kim, S. I. (2002). The development of virtual reality therapy (VRT) system for the treatment of acrophobia and therapeutic case. IEEE Trans. Inf. Technol. Biomed. 6, 213–217.
Jeannerod, M. (2006). Motor Cognition: What Actions Tell the Self. Oxford, NY: Oxford University Press.
Kannape, O. A., Schwabe, L., Tadi, T., and Blanke, O. (2010). The limits of agency in walking humans. Neuropsychologia 48, 1628–1636.
Karnath, H. O. (1994). Subjective body orientation in neglect and the interactive contribution of neck muscle proprioception and vestibular stimulation. Brain 117(Pt 5), 1001–1012.
Karnath, H. O., Ferber, S., and Himmelbach, M. (2001). Spatial awareness is a function of the temporal not the posterior parietal lobe. Nature 411, 950–953.
Krebs, H. I., Brashers-Krug, T., Rauch, S. L., Savage, C. R., Hogan, N., Rubin, R. H., Fischman, A. J., and Alpert, N. M. (1998). Robot-aided functional imaging: application to a motor learning study. Hum. Brain Mapp. 6, 59–72.
Lenggenhager, B., Mouthon, M., and Blanke, O. (2009). Spatial aspects of bodily self-consciousness. Conscious. Cogn. 18, 110–117.
Lenggenhager, B., Tadi, T., Metzinger, T., and Blanke, O. (2007). Video ergo sum: manipulating bodily self-consciousness. Science 317, 1096–1099.
Lopez, C., Halje, P., and Blanke, O. (2008). Body ownership and embodiment: vestibular and multisensory mechanisms. Neurophysiol. Clin. 38, 149–161.
Lopez, C., Heydrich, L., Seeck, M., and Blanke, O. (2010). Abnormal self-location and vestibular vertigo in a patient with right frontal lobe epilepsy. Epilepsy Behav. 17, 289–292.
Maguire, E. A., Burgess, N., Donnett, J. G., Frackowiak, R. S., Frith, C. D., and O’Keefe, J. (1998). Knowing where and getting there: a human navigation network. Science 280, 921–924.
Maillard, L., Vignal, J. P., Anxionnat, R., and Taillandier Vespignani, L. (2004). Semiologic value of ictal autoscopy. Epilepsia 45, 391–394.
Maravita, A., and Iriki, A. (2004). Tools for the body (schema). Trends Cogn. Sci. (Regul. Ed.) 8, 79–86.
Maravita, A., McNeil, J., Malhotra, P., Greenwood, R., Husain, M., and Driver, J. (2003). Prism adaptation can improve contralesional tactile perception in neglect. Neurology 60, 1829–1831.
Metzinger, T. (2008). Empirical perspectives from the self-model theory of subjectivity: a brief summary with examples. Prog. Brain Res. 168, 215–245.
Milner, A. D., and Goodale, M. A. (2006). The Visual Brain in Action. New York: Oxford University Press.
Mizumoto, M., and Ishikawa, M. (2005). Immunity to error through misidentification and the bodily illusion experiment. J. Conscious. Stud. 12, 3–19.
Narumoto, J., Okada, T., Sadato, N., Fukui, K., and Yonekura, Y. (2001). Attention to emotion modulates fMRI activity in human right superior temporal sulcus. Brain Res. Cogn. Brain Res. 12, 225–231.
Northoff, G., and Bermpohl, F. (2004). Cortical midline structures and the self. Trends Cogn. Sci. (Regul. Ed.) 8, 102–107.
Palluel, E., Aspell, J. E., and Blanke, O. (2011). Leg muscle vibration modulates bodily self-consciousness: integration of proprioceptive, visual, and tactile signals. J. Neurophysiol. 105, 2239–2247.
Pavani, F., Spence, C., and Driver, J. (2000). Visual capture of touch: out-of-the-body experiences with rubber gloves. Psychol. Sci. 11, 353–359.
Penfield, W. (1955). The twenty-ninth Maudsley lecture: the role of the temporal cortex in certain psychical phenomena. J. Ment. Sci. 101, 451–465.
Petkova, V. I., and Ehrsson, H. H. (2008). If I were you: perceptual illusion of body swapping. PLoS ONE 3, e3832. doi:10.1371/journal.pone.0003832
Redding, G. M., and Wallace, B. (1997). Prism adaptation during target pointing from visible and nonvisible starting locations. J. Mot. Behav. 29, 119–130.
Rengachary, J., He, B. J., Shulman, G. L., and Corbetta, M. (2011). A behavioral analysis of spatial neglect and its recovery after stroke. Front. Hum. Neurosci. 5:29. doi:10.3389/fnhum.2011.00029
Rode, G., Klos, T., Courtois-Jacquin, S., Rossetti, Y., and Pisella, L. (2006). Neglect and prism adaptation: a new therapeutic tool for spatial cognition disorders. Restor. Neurol. Neurosci. 24, 347–356.
Ruby, P., and Decety, J. (2001). Effect of subjective perspective taking during simulation of action: a PET investigation of agency. Nat. Neurosci. 4, 546–550.
Samson, D., Apperly, I. A., Kathirgamanathan, U., and Humphreys, G. W. (2005). Seeing it my way: a case of a selective deficit in inhibiting self-perspective. Brain 128(Pt 5), 1102–1111.
Sanchez-Vives, M. V., and Slater, M. (2005). From presence to consciousness through virtual reality. Nat. Rev. Neurosci. 6, 332–339.
Sarri, M., Kalra, L., Greenwood, R., and Driver, J. (2006). Prism adaptation changes perceptual awareness for chimeric visual objects but not for chimeric faces in spatial neglect after right-hemisphere stroke. Neurocase 12, 127–135.
Saxe, R., and Kanwisher, N. (2003). People thinking about thinking people. The role of the temporo-parietal junction in “theory of mind.” Neuroimage 19, 1835–1842.
Saxe, R., and Wexler, A. (2005). Making sense of another mind: the role of the right temporo-parietal junction. Neuropsychologia 43, 1391–1399.
Sforza, A., Bufalari, I., Haggard, P., and Aglioti, S. M. (2010). My face in yours: Visuo-tactile facial stimulation influences sense of identity. Soc. Neurosci. 5, 148–162.
Shadmehr, R., and Holcomb, H. H. (1997). Neural correlates of motor memory consolidation. Science 277, 821–825.
Shulman, G. L., Corbetta, M., Buckner, R. L., Raichle, M. E., Fiez, J. A., Miezin, F. M., and Petersen, S. E. (1997). Top-down modulation of early sensory cortex. Cereb. Cortex 7, 193–206.
Shulman, G. L., Pope, D. L., Astafiev, S. V., McAvoy, M. P., Snyder, A. Z., and Corbetta, M. (2010). Right hemisphere dominance during spatial selective attention and target detection occurs outside the dorsal frontoparietal network. J. Neurosci. 30, 3640–3651.
Slater, M., Spanlang, B., Sanchez-Vives, M. V., and Blanke, O. (2010). First person experience of body transfer in virtual reality. PLoS ONE 5, e10564. doi:10.1371/journal.pone.0010564
Striemer, C. L., and Danckert, J. A. (2010). Through a prism darkly: re-evaluating prisms and neglect. Trends Cogn. Sci. (Regul. Ed.) 14, 308–316.
Tarr, M. J., and Warren, W. H. (2002). Virtual reality in behavioral neuroscience and beyond. Nat. Neurosci. 5(Suppl.), 1089–1092.
Tsakiris, M. (2010). My body in the brain: a neurocognitive model of body-ownership. Neuropsychologia 48, 703–712.
Tsakiris, M., and Haggard, P. (2005). The rubber hand illusion revisited: visuotactile integration and self-attribution. J. Exp. Psychol. Hum. Percept. Perform. 31, 80–91.
Tsakiris, M., Hesse, M. D., Boy, C., Haggard, P., and Fink, G. R. (2007). Neural signatures of body ownership: a sensory network for bodily self-consciousness. Cereb. Cortex 17, 2235–2244.
Vallar, G., Lobel, E., Galati, G., Berthoz, A., Pizzamiglio, L., and Le Bihan, D. (1999). A fronto-parietal system for computing the egocentric spatial frame of reference in humans. Exp. Brain Res. 124, 281–286.
Vogeley, K., and Fink, G. R. (2003). Neural correlates of the first-person-perspective. Trends Cogn. Sci. (Regul. Ed.) 7, 38–42.
Vogeley, K., May, M., Ritzl, A., Falkai, P., Zilles, K., and Fink, G. R. (2004). Neural correlates of first-person perspective as one constituent of human self-consciousness. J. Cogn. Neurosci. 16, 817–827.
Winston, J. S., Strange, B. A., O’Doherty, J., and Dolan, R. J. (2002). Automatic and intentional brain responses during evaluation of trustworthiness of faces. Nat. Neurosci. 5, 277–283.
Keywords: self consciousness, body, multi-sensory integration, neuroscience robotics
Citation: Ionta S, Gassert R and Blanke O (2011) Multi-sensory and sensorimotor foundation of bodily self-consciousness – an interdisciplinary approach. Front. Psychology 2:383. doi: 10.3389/fpsyg.2011.00383
Received: 12 July 2011;
Paper pending published: 28 September 2011;
Accepted: 05 December 2011;
Published online: 23 December 2011.
Edited by:
Angelo Maravita, University of Milano Bicocca, Italy
Copyright: © 2011 Ionta, Gassert and Blanke. This is an open-access article distributed under the terms of the Creative Commons Attribution Non Commercial License, which permits non-commercial use, distribution, and reproduction in other forums, provided the original authors and source are credited.
*Correspondence: Olaf Blanke, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne (EPFL), Station 19, 1015 Lausanne, Switzerland. e-mail: olaf.blanke@epfl.ch