
ORIGINAL RESEARCH article

Front. Hum. Neurosci., 30 July 2009
Sec. Cognitive Neuroscience

Brain mechanisms underlying human communication

1 Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
2 Department of Cognitive Psychology and Ergonomics, University of Twente, Enschede, The Netherlands
3 Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
4 Centre for Cognition, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
Human communication has been described as involving the coding-decoding of a conventional symbol system, which could be supported by parts of the human motor system (i.e. the “mirror neuron system”). However, this view does not explain how these conventions could develop in the first place. Here we target the neglected but crucial issue of how people organize their non-verbal behavior to communicate a given intention without pre-established conventions. We have measured behavioral and brain responses in pairs of subjects during communicative exchanges occurring in a real, interactive, on-line social context. In two fMRI studies, we found robust evidence that planning new communicative actions (by a sender) and recognizing the communicative intention of the same actions (by a receiver) relied on spatially overlapping portions of their brains (the right posterior superior temporal sulcus). The response of this region was lateralized to the right hemisphere, modulated by the ambiguity in meaning of the communicative acts, but not by their sensorimotor complexity. These results indicate that the sender of a communicative signal uses his own intention recognition system to make a prediction of the intention recognition performed by the receiver. This finding supports the notion that our communicative abilities are distinct from both sensorimotor processes and language abilities.

Introduction

We tend to think of human communication as basically involving the coding-decoding of a conventional symbol system, but framing human communication in terms of shared codes neglects its inferential nature (Levinson, 2000 ; Sperber and Wilson, 2001 ). Human communication rides on a large background of pragmatic inference – otherwise ironies, sarcasms, hints, and indirections would pass us by. Nor are we troubled by the vagueness or multiple ambiguities and semantic generalities in every utterance. The same system that resolves the coded messages probably lies behind our ability to communicate without any pre-existing conventions at all, as in the gestures one might use behind the boss’ back, or to signal to others out of earshot. A number of converging paths of evidence suggest that this faculty is distinct from our language abilities, and is ontogenetically and phylogenetically primitive to language (Levinson, 2006 ), yet at the same time constitutes the foundation for effective language use. This paper investigates the cognitive and cerebral bases of this faculty.
Given the pervasive ambiguity of communicative signals (Levinson, 1995 ; Sperber and Wilson, 2001 ), effective communication requires heuristics for selecting and interpreting the communicative intention of an observable behavior from a potentially infinite search-space. A recent and influential suggestion assumes that communication could occur “without any cognitive mediation” by means of an automatic sensorimotor resonance between the sender of a message and its receiver (Rizzolatti and Craighero, 2004 ). This framework, grounded in the discovery of ‘mirror neurons’ responding to both execution and observation of a given behavior (di Pellegrino et al., 1992 ), postulates that the intention conveyed by an observed behavior can be understood by means of a sensorimotor simulation (Gallese et al., 2004 ). However, communicative actions cannot be exclusively guided by predictions of the sensory consequences of motor commands acting on one’s own body (Wolpert et al., 2003 ), since they need to be selected by taking into account the receiver’s knowledge (Clark and Carlson, 1982 ) and they are designed to trigger a mental state, not an observable sensory event.
Here, we address the generation of human communicative actions, testing the hypothesis that effective communicative behavior relies on a predictive mechanism constrained by conceptual knowledge (Goldman, 2006 ; Nichols and Stich, 2003 ), rather than by sensorimotor routines. The problem facing a sender is how to select a communicative action appropriate to convey a specific intention to a receiver. A sender could solve this problem by predicting the intention that a receiver would attribute to the sender’s action (Levinson, 2006 ). Crucially, we hypothesize that this prediction relies on the sender’s intention recognition system, taking knowledge and beliefs of the receiver into account. This hypothesis implies a computational overlap between selection and recognition of a communicative behavior in senders and receivers, respectively. A stringent test of this cognitive scenario is that the same cerebral structures support the planning of communicative acts (in the sender) and the recognition of the intentions conveyed by those acts (in the receiver), and that these cerebral activities are modulated by the ambiguity in meaning of the communicative acts, rather than by their sensorimotor complexity.
Most human communication is a compound of coded conventional symbolic meaning (as in language), and inferences about communicator intent and recipients’ abilities to infer it. In order to focus on the latter system, we tested the hypothesis of shared computational overlap in communicators and recipients in the context of a controlled and unfamiliar communication system that prevented the participants from using pre-established linguistic conventions, forcing them to generate and interpret new communicative visuomotor behaviors. Using a novel interactive game set in a real, on-line social context, we could study the mechanisms that create new communicative conventions, rather than the utilization of such conventions, while manipulating the communicative ambiguity of different experimental trials. Furthermore, by using a controlled communicative setting, it becomes possible to design control trials devoid of communicative purposes, but matched in sensory and motor features with the communicative trials. We have called this new experimental protocol the Tacit Communication Game (TCG).
Pairs of participants [labeled as sender (male) and receiver (female), each controlling one token on a common game board (Figure 1)] were asked to jointly reproduce the spatial configuration of two target tokens (goals). When the goals were shown to the sender only (COMMUNICATIVE trials), solving the game required him to communicate the receiver’s goal to her, i.e. the position and the orientation of her token. The sender could achieve this only by moving his own token over the game board (Figure 1, phase 4). The receiver could then move to her own goal position, as inferred from the sender’s movements.
Figure 1. Sequence of events in a COMMUNICATIVE trial of the tacit communication game (TCG).
  1. Sender and receiver see a 3 × 3 game board (in grey) on separate screens, with their own tokens (yellow, blue) positioned below and above the board, respectively.
  2. The goal configuration appears on the board. During COMMUNICATIVE trials, the sender, but not the receiver, can see the goal configuration to be achieved at the end of the trial. The sender needs to share this information with the receiver, and he can do so only by moving his token over the board.
  3. When ready to move, the sender presses a start button and his token moves to the centre of the board, being visible to both players.
  4. Within 5 s, the sender needs to move his token on the board (with the controller shown) to inform the receiver about her goal position and to reach his own goal position. The sender’s token was visible to both players. The double arrow indicates repeated (vertical) movements of the sender’s token.
  5. The receiver can plan her movements while the sender’s token remains visible to both players.
  6. When ready to move, the receiver presses a start button and her token moves to the centre of the board, being visible to both players.
  7. Within 5 s, the receiver needs to move her token on the board (with the controller shown). The receiver’s token was visible to both players. The curved arrow indicates a 90° rotation of the receiver’s token.
  8. A green (correct) or red (incorrect) box appears indicating if both players successfully matched the goal configuration.
By using time-resolved event-related functional Magnetic Resonance Imaging (fMRI), we could measure neurophysiological correlates of planning a communicative action (in senders) and recognizing its communicative intention (in receivers), comparing these effects to the activity evoked during planning and observation of non-communicative actions, and distinguishing these effects from sensory and motor events occurring during the same trials. Given that sensorimotor- and conceptually-based accounts of mind-reading make opposite predictions on the involvement of the motor system during communicative behavior, it becomes possible to distinguish between these general frameworks by examining the sensory/motor characteristics of the cerebral activity evoked during the selection of a communicative action.

Materials and Methods

Participants

We recruited 56 right-handed participants, aged between 18 and 26 years, with normal or corrected-to-normal vision. Participants gave informed consent according to the institutional guidelines of the local ethics committee (CMO region Arnhem-Nijmegen, Netherlands), and were either offered financial payment or credits towards completing a course requirement. Twenty-four male–female pairs participated in the first experiment. Eight participants (six males) took part in the second experiment, in which the sender (male, 31 years old) was a confederate. For ease of explanation, in the following sections we consider a male sender and a female receiver.
Cognitive abilities, stress responses, and the corresponding regional patterns of cerebral activity are known to be heavily influenced by the menstrual cycle (see for instance Fernandez et al., 2003). To control this source of variability, we chose to scan almost exclusively males. Although this choice limits the generality of our inferences, an extensive assessment of gender differences in communicative abilities and their cerebral correlates is beyond the scope of this study.

Materials

The tacit communication game (TCG) involves two players, a sender and a receiver, moving a token on a game board displayed on a monitor. Participants were trained in front of two 19-inch computer monitors, playing the TCG with hand-held controllers (Figure 1 ). The spatial lay-out of the buttons on the hand-held controller allowed for unique mappings between finger and token movements: four face buttons moved the token to the left, right, up and down; two shoulder buttons rotated the token clockwise and counter-clockwise; a third shoulder button was used as a start button (see below). Players sat on opposing sides of a long table, each facing their own computer monitor, wearing sound-proof head sets and ear plugs (to minimize the influence of sounds in the environment and incidental noises produced by the players). The game was programmed using Presentation version 9.2 and was run on a Windows XP personal computer.
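The controller-to-token mapping can be summarized in a short sketch; the button labels below are hypothetical placeholders, and only the structure of the mapping (four translations, two rotations, one start button) follows the description above.

```python
# Hypothetical sketch of the controller mapping described in the text.
# Button names are placeholders, not the actual hardware labels.
TOKEN_ACTIONS = {
    "face_left":      ("translate", (-1, 0)),   # move token one cell left
    "face_right":     ("translate", (+1, 0)),   # move token one cell right
    "face_up":        ("translate", (0, +1)),   # move token one cell up
    "face_down":      ("translate", (0, -1)),   # move token one cell down
    "shoulder_left":  ("rotate", -90),          # rotate counter-clockwise
    "shoulder_right": ("rotate", +90),          # rotate clockwise
    "shoulder_start": ("start", None),          # bring token to board centre
}
```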
During scanning, one participant lay supine in the bore of the magnetic resonance (MR) scanner, playing the TCG with an MR-compatible hand-held controller. The other participant played the game from another room while wearing a sound-proof head set. Experiment 1 lasted about two and a half hours (30 min training, 45 min first fMRI session, 10 min rest, 10 min training, 45 min second fMRI, and 10 min anatomical scan). Experiment 2 lasted about 1 h and 30 min (30 min training, 20 min first fMRI session, 10 min rest, 20 min second fMRI session, 10 min anatomical scan).

Procedures

Procedures – Experiment 1

The fMRI experiment consisted of two sessions: In one session 40 COMMUNICATIVE trials were presented, and in the other session 40 NON-COMMUNICATIVE trials were presented. The exact same stimuli (including tokens and goal configurations) were used in the COMMUNICATIVE and the NON-COMMUNICATIVE sessions. As described above (see also Figure 1 ), the COMMUNICATIVE trials required the sender to move his token over the game board to indicate to the receiver where she needed to go. During the NON-COMMUNICATIVE trials it was made clear to the sender that the receiver also saw the goal configuration. Hence, there was no need for the sender to communicate to the receiver the position and orientation of her token. Further instructions ensured that the actions of the sender were similar during the COMMUNICATIVE and NON-COMMUNICATIVE trials. Namely, during NON-COMMUNICATIVE trials, the sender was instructed to first move his token to the target position of the receiver, match the rotation of the target token as closely as possible, and then move to his own position. This procedure ensured that, during both types of trials, senders planned similar actions with their tokens. In contrast, during COMMUNICATIVE trials, senders were meant to plan actions with a communicative value (for the receiver). The order of the two sessions was counterbalanced over participants.
To allow for comparisons between COMMUNICATIVE and NON-COMMUNICATIVE trials across fMRI sessions, both sessions also included 40 basic CONTROL trials, identical in each session. This construction allowed events from the two sessions to be compared via a shared control event. For a CONTROL trial it was made clear to the sender that the receiver could see the goal configuration (as in a NON-COMMUNICATIVE trial). The CONTROL trials were simplified by asking senders to move their token directly to the correct location, completely ignoring the token of the receiver (in contrast to the NON-COMMUNICATIVE and COMMUNICATIVE trials, where the position and orientation of the receiver’s token played a crucial role).
Two further aspects were considered in the design: motoric complexity and communicative ambiguity. Motoric complexity varied naturally, as some trials required planning more moves (i.e. more button presses), which could take longer to execute. Communicative ambiguity was varied by subdividing the COMMUNICATIVE trials into 30 EASY COMMUNICATIVE trials and 10 DIFFICULT COMMUNICATIVE trials. The difference between these two types of trials was that in easy trials senders did not face “orientation” problems. An example of an orientation problem is depicted in Figure 1: the sender has to indicate that the rectangle needs to rotate, using a token (a circle) whose orientation is not visible. Pilot studies had shown that, without the orientation problem, sender–receiver pairs quickly build up a set of successful, unambiguous communicative actions. In contrast, with the orientation problem the actions of the sender remain ambiguous to the receiver for longer.
The experimental design was originally conceived to optimize the contrast between COMMUNICATIVE and NON-COMMUNICATIVE conditions. As will become clear in the “Results” section, the most relevant comparison turned out to lie in the contrast between DIFFICULT COMMUNICATIVE and EASY COMMUNICATIVE trials. Although the numerical disparity between difficult and easy trials might lead to imbalances in the estimation of the signal associated with each condition, the effects we report are assessed relative to the reliability of their estimates.
Participants were trained extensively before playing the TCG. In a first training session, the participants were individually familiarized with the procedure of translating and rotating their token around the game board. There were three different tokens: circles, triangles, and rectangles. In the second training session (10 trials), participants learned that the token placed below the game board was controlled by the sender, whereas the token placed above the game board was controlled by the receiver. During this training, both sender and receiver could see two further (target) tokens inside the game board. These target tokens indicated the position and orientation that the sender’s and receiver’s (playing) tokens should have at the end of the trial. The goal of this training was for each player to position their own token in the designated location.
In the third training session (15 trials), the participants were familiarized with the condition-specific procedures of the TCG. There were two separate training sessions for COMMUNICATIVE and NON-COMMUNICATIVE trials, counterbalanced according to the order used for the ensuing fMRI sessions. The sender was informed about the trial type (COMMUNICATIVE/NON-COMMUNICATIVE or CONTROL) by the hue of the receiver’s token: a bright token indicated a CONTROL trial, a semi-transparent token a COMMUNICATIVE or NON-COMMUNICATIVE trial.

Procedures – Experiment 2

In Experiment 2, we were interested in localizing brain regions involved in the observation/interpretation of communicative actions in the context of the TCG. Accordingly, in this experiment we measured BOLD fMRI signals evoked in the receiver. The fMRI experiment consisted of two sessions: in one session 40 COMMUNICATIVE trials were presented, and in the other session 40 NON-COMMUNICATIVE trials were presented. The communicative trials were the same as in Experiment 1 (labelled COMMUNICATIVERECEIVER trials). Furthermore, we introduced a new type of non-communicative trial (labelled NON-COMMUNICATIVERECEIVER trials). These trials ensured that receivers were monitoring the actions of the sender, but without attaching any communicative value to the movements. This was achieved by asking the receiver to move her token to the position on the game board where the sender last moved (i.e. translated or rotated) his token at least twice, and then to rotate her token twice. We also informed the receiver that the sender moved according to our specific instructions, rather than trying to communicate her target configuration. In fact, unbeknownst to the receiver, we replayed the sender’s actual movements from the communicative scanning session. This procedure ensured that the receiver was presented with identical visual input (from the sender) and followed the actions of the sender in both scanning sessions. Most importantly, receivers observed actions with a communicative value (from the sender) only during COMMUNICATIVERECEIVER trials.
Unlike Experiment 1, here we compared COMMUNICATIVERECEIVER and NON-COMMUNICATIVERECEIVER trials across fMRI sessions. We achieved this by relying on events shared by both sessions to serve as a baseline (in this case, the execution phase of the receiver).
Participants were trained extensively on the game procedures before playing the TCG during fMRI acquisition. The first two training sessions were identical to those of Experiment 1. The third training session consisted of 20 COMMUNICATIVERECEIVER trials, during which the receiver could not see the target configuration (Figure 1). Following the first scanning session, the receiver was informed that she needed to solve a different problem (the visual following task described above). The receiver practiced this new task for 20 trials, and during the ensuing scanning session performed 40 NON-COMMUNICATIVERECEIVER trials. There were 30 DIFFICULT COMMUNICATIVERECEIVER trials and 10 EASY COMMUNICATIVERECEIVER trials. We changed the ratio of easy to difficult trials from Experiment 1 to Experiment 2 because we had learned from Experiment 1 (as will be shown below) that the difficult trials were the most relevant.

Behavioral Data Analysis

For Experiment 1 we calculated mean planning times (senders: time between the onset of the goal configuration and the moment the sender pressed start), mean number of moves (senders), and mean accuracy scores (sender–receiver pairs). These dependent variables were analyzed using paired t-tests (threshold, p < 0.05) for COMMUNICATIVE vs. NON-COMMUNICATIVE trials, and EASY COMMUNICATIVE vs. DIFFICULT COMMUNICATIVE trials. We considered a highly conservative level of chance performance by taking into account only the position (and not the orientation) of the target token of the receiver. If the receiver randomly places her token on the game board, there is a chance of one out of eight (12.5%) that it is correct (in terms of position), given that the sender knows where to position his token, the receiver cannot place her token on top of the sender’s token, and there are nine possible positions on the game board. In reality, tokens often also had to be rotated, adding further options for the receiver; this would lower the chance level further, but the conservative estimate of 12.5% sufficed to show that sender–receiver pairs scored far above chance (see below).
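As a minimal sketch of this chance-level reasoning and the test against it, assuming hypothetical per-pair accuracy scores rather than the actual behavioural data:

```python
import numpy as np
from scipy import stats

# Conservative chance level: 9 board positions, minus the one occupied by the
# sender's token, leaves 8 equally likely positions for the receiver.
chance_level = 1.0 / (9 - 1)          # = 0.125, i.e. 12.5%

# Hypothetical accuracy scores (proportion correct) for 24 sender-receiver
# pairs; real values would come from the behavioural logs.
accuracy = np.random.default_rng(0).uniform(0.6, 0.9, size=24)

# One-sample t-test of group accuracy against the conservative chance level.
t_val, p_val = stats.ttest_1samp(accuracy, popmean=chance_level)
print(f"chance = {chance_level:.3f}, t(23) = {t_val:.2f}, p = {p_val:.3g}")
```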
During Experiment 2, mean planning times (receivers, time between the end of the visible movements of the sender and the moment the receiver pressed start), mean number of moves (receivers), and mean accuracy scores (sender–receiver pairs) were analyzed using paired t-tests (threshold, p < 0.05) for COMMUNICATIVERECEIVER vs. NON-COMMUNICATIVERECEIVER trials and EASY COMMUNICATIVERECEIVER vs. DIFFICULT COMMUNICATIVERECEIVER trials. Only the trials in which the receiver moved to the correct position were taken into account for calculating planning times and number of moves.

Image Acquisition

Images were acquired using a 3-Tesla Trio scanner (Siemens, Erlangen, Germany). Blood oxygenation level dependent (BOLD) sensitive functional images were acquired using a single-shot gradient echo planar imaging (EPI) sequence (TR/TE 2.50 s/40 ms, 34 transversal slices, interleaved acquisition, voxel size 3.5 × 3.5 × 3.5 mm). At the end of the scanning session, structural images were acquired using an MP-RAGE sequence (TR/TE/TI 2300 ms/3.9 ms/1100 ms, voxel size 1 × 1 × 1 mm).

Image Analysis

Functional data were pre-processed and analyzed with SPM2 (Statistical Parametric Mapping) 1 . The first four volumes of each participant’s timeseries were discarded to allow for T1 equilibration. The image timeseries were spatially realigned using a sinc interpolation algorithm that estimates rigid body transformations (translations, rotations) by minimizing head-movements between each image and the reference image (Friston et al., 1995 ). The timeseries for each voxel was realigned temporally to acquisition of the middle slice. Subsequently, images were normalized onto a custom Montreal Neurological Institute (MNI)-aligned EPI template (based on 28 male brains acquired on the Siemens Trio at the Donders Centre) using both linear and nonlinear transformations and resampled at an isotropic voxel size of 2 mm. Finally, the normalized images were spatially smoothed using an isotropic 8 mm full-width-at-half-maximum Gaussian kernel. Each participant’s structural image was spatially coregistered to the mean of the functional images (Ashburner and Friston, 1997 ) and spatially normalized by using the same transformation matrix applied to the functional images. The fMRI timeseries were analyzed using an event-related approach in the context of the General Linear Model (GLM).
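As a small sketch of the final smoothing step, assuming the normalized data are available as a NumPy array at the 2 mm isotropic resolution mentioned above, an 8 mm FWHM Gaussian kernel can be applied after converting FWHM to the standard deviation expected by a Gaussian filter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_volume(volume, fwhm_mm=8.0, voxel_size_mm=2.0):
    """Apply isotropic Gaussian smoothing to a 3D volume.

    FWHM is converted to the Gaussian standard deviation via
    sigma = FWHM / (2 * sqrt(2 * ln 2)), then expressed in voxels.
    """
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    sigma_vox = sigma_mm / voxel_size_mm
    return gaussian_filter(volume, sigma=sigma_vox)

# Toy example; a real analysis would load the normalized EPI volumes.
smoothed = smooth_volume(np.random.rand(96, 114, 96))
```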

Statistical Model and Inference – Experiment 1

We considered 10 event types 2 for each scanning session (see Table 1 ), following the sequence of events described in Figure 1 .
During the non-communicative session, we defined the same 10 event types, replacing COMMUNICATIVE with NON-COMMUNICATIVE trials. In both sessions, each event timeseries was convolved with a canonical hemodynamic response function and used as a regressor in the SPM multiple regression analysis. In addition, we considered the modulatory effects of two further parameters, adding four further regressors to the statistical model (for each session). First, we considered the effect of planning movements with different numbers of moves on the planning-related activity of the sender. This was modelled as a parametric modulation, by the number of moves the sender executed on a given trial, of each of the three planning periods of the sender (events 1, 2, 3 in Table 1). Second, we considered the effect of executing movements of different duration on the execution-related activity of the sender. This was modelled as a parametric modulation, by the sender’s movement time, of the execution effect (event 4 in Table 1). We assumed a linear relation between the number of sender moves and the BOLD signal, as well as between the duration of the sender’s movement phase and the BOLD signal. The corresponding regressors were introduced in the GLM on a subject-by-subject basis. Finally, the statistical model also included separate covariates describing head-related movements (as estimated by the spatial realignment procedure) and their first and second derivatives over time. Several studies have included these derivatives of the realignment parameters to improve the sensitivity of their statistical analyses (e.g. Lund et al., 2005; Salek-Haddadi et al., 2003; Verhagen et al., 2008). Data were high-pass filtered (cut-off 128 s) to remove low-frequency confounds, such as scanner drifts. Temporal autocorrelation was modelled as an AR(1) process.
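As an illustration of the regressor construction described above, the sketch below builds an event regressor by convolving a stick function at hypothetical onset times with a canonical (double-gamma) haemodynamic response function, and adds a mean-centred parametric modulator (e.g. the number of moves) as a separate regressor. Onsets, trial counts, and modulator values are made up for the example, and SPM’s actual implementation differs in detail:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.5       # repetition time in seconds (as in the acquisition parameters)
n_scans = 400  # hypothetical number of volumes in one session
dt = 0.1       # temporal resolution (s) used to build the regressors

def canonical_hrf(dt, duration=32.0):
    """Double-gamma canonical HRF (peak around 5 s, undershoot around 15 s)."""
    t = np.arange(0, duration, dt)
    h = gamma.pdf(t, a=6) - gamma.pdf(t, a=16) / 6.0
    return h / h.sum()

def event_regressor(onsets, modulator=None):
    """Convolve event onsets (in s) with the HRF and resample at the TR."""
    n_hi = int(round(n_scans * TR / dt))
    hi_res = np.zeros(n_hi)
    if modulator is None:
        weights = np.ones(len(onsets))
    else:
        weights = np.asarray(modulator, float)
        weights -= weights.mean()          # mean-centred parametric modulation
    for onset, w in zip(onsets, weights):
        hi_res[int(round(onset / dt))] += w
    conv = np.convolve(hi_res, canonical_hrf(dt))[:n_hi]
    return conv[::int(round(TR / dt))]     # sample at scan times

# Hypothetical planning-phase onsets (s) and per-trial number of moves.
onsets = [12.0, 48.0, 95.0, 140.0]
n_moves = [2, 5, 3, 6]
main_effect = event_regressor(onsets)                     # planning event
parametric = event_regressor(onsets, modulator=n_moves)   # modulation by moves
design = np.column_stack([main_effect, parametric, np.ones(n_scans)])
```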
The main focus of interest in this experiment was the difference (in the sender) between planning COMMUNICATIVE and NON-COMMUNICATIVE actions. This difference was isolated by testing planning-related activity for COMMUNICATIVE versus NON-COMMUNICATIVE trials (having subtracted out CONTROL trials from each condition). This contrast can be expressed in terms of events in the fMRI design (see Table 1): the activity that was greater for events 1 and 2 (versus event 3) in the communicative sessions than for events 1 and 2 (versus event 3) in the non-communicative sessions. We also tested the reverse contrast to examine possible de-activations for COMMUNICATIVE trials. Finally, we also considered a more generic effect, related to the difference between planning actions that required the sender to take into account the target configuration of the receiver (i.e., COMMUNICATIVE and NON-COMMUNICATIVE trials) and planning actions that were independent of the receiver (i.e., CONTROL trials). This difference was isolated by testing for a difference in planning-related activity between COMMUNICATIVE and NON-COMMUNICATIVE trials, as compared to the CONTROL trials.
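As a sketch of the interaction contrast just described, assuming a simplified column order of [event 1, event 2, event 3] for the communicative session followed by the same three columns for the non-communicative session (the full design contains additional regressors, which would receive zero weights):

```python
import numpy as np

# Hypothetical column order: [event1, event2, event3] for the communicative
# session, then [event1, event2, event3] for the non-communicative session.
communicative     = np.array([0.5, 0.5, -1.0])   # events 1+2 versus event 3
non_communicative = np.array([0.5, 0.5, -1.0])
contrast = np.concatenate([communicative, -non_communicative])
# contrast = [0.5, 0.5, -1, -0.5, -0.5, 1]: positive where planning-related
# activity is larger in the communicative than the non-communicative session.
```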
Session-specific parameter estimates were calculated at each voxel for each subject, and contrasts of the parameter estimates were calculated for the effects of sender planning COMMUNICATIVE actions, sender planning NON-COMMUNICATIVE actions, and sender planning CONTROL actions. These contrasts were entered into a one-way, repeated measures analysis of variance (ANOVA), treating subjects as a random variable. The degrees of freedom were corrected for nonsphericity at each voxel.
We report the results of a random effects analysis, with inferences drawn at the cluster level, corrected for multiple comparisons over the whole brain using family-wise error correction (p < 0.05) (Friston et al., 1996). Furthermore, to improve the sensitivity of the crucial test of the hypotheses described above, we assessed the results of the first two contrasts on the basis of independent anatomical information (Friston, 1997), i.e. published stereotactical coordinates of areas related to conceptually-based accounts of mind-reading (as studied with Theory of Mind tasks – first cerebral network) or to sensorimotor-based accounts (as implemented in the Mirror Neuron System – second cerebral network). Whenever necessary, we converted the published coordinates into MNI space. This procedure constrained our search space to a series of volumes of interest (VOI) 3 , ensuring increased and matched sensitivity for each cerebral network. We defined the first cerebral network on the basis of Saxe and Wexler (2005) and Saxe et al. (2004), positioning six VOIs along the left and right temporo-parietal junction (TPJ) (−48, −69, 21; 54, −54, 24), the left and right posterior superior temporal sulcus (pSTS) (−54, −42, 9; 54, −42, 9), the medial prefrontal cortex (mPFC) (0, 60, 12), and the posterior cingulate (3, −60, 24). We defined the second cerebral network on the basis of Iacoboni et al. (1999), positioning six VOIs along the left and right inferior frontal gyrus (−51, 12, 14; 51, 12, 14), the left and right parietal operculum (−59, −26, 33; 59, −26, 33), and the left and right intraparietal sulcus (−37, −44, 60; 37, −44, 60). We determined our VOIs on the basis of those particular reports because, in our judgement, they provided landmark references for defining the cerebral correlates of Theory of Mind and Mirror Neuron responses. Those studies were also instrumental in defining the tasks most commonly associated with Theory of Mind (i.e. false belief stories, recognizing intentions from human actions) and with the Mirror Neuron System (action observation and execution), and they have been highly influential in determining the theoretical positions of these two accounts of intention understanding.
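To illustrate the volume-of-interest approach described above, the sketch below builds a boolean mask of 10 mm spheres around the published MNI coordinates of the first (Theory of Mind) network, assuming a 2 mm isotropic grid and a simplified diagonal affine; the actual analysis used the WFU PickAtlas toolbox:

```python
import numpy as np

# Published MNI coordinates (mm) used for the Theory of Mind network above.
TOM_COORDS_MM = [(-48, -69, 21), (54, -54, 24),    # left / right TPJ
                 (-54, -42, 9), (54, -42, 9),       # left / right pSTS
                 (0, 60, 12), (3, -60, 24)]         # mPFC, posterior cingulate

def sphere_mask(shape, affine, centers_mm, radius_mm=10.0):
    """Boolean mask of voxels within radius_mm of any centre (in MNI mm)."""
    ii, jj, kk = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    vox = np.stack([ii, jj, kk, np.ones_like(ii)], axis=-1)
    mm = vox @ affine.T                      # voxel indices -> mm coordinates
    mask = np.zeros(shape, dtype=bool)
    for c in centers_mm:
        d2 = np.sum((mm[..., :3] - np.asarray(c)) ** 2, axis=-1)
        mask |= d2 <= radius_mm ** 2
    return mask

# Simplified affine for a 2 mm isotropic grid centred on the MNI origin.
shape = (91, 109, 91)
affine = np.array([[2, 0, 0, -90], [0, 2, 0, -126], [0, 0, 2, -72],
                   [0, 0, 0, 1]], dtype=float)
voi = sphere_mask(shape, affine, TOM_COORDS_MM)
```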

Statistical Model and Inference – Experiment 2

We considered five event types for each scanning session (see Table 2 ), following the sequence of events described in Figure 1 .
During the non-communicative session, we defined the same five event types, replacing COMMUNICATIVERECEIVER with NON-COMMUNICATIVERECEIVER trials. In both sessions, each event timeseries was convolved with a canonical hemodynamic response function and used as a regressor in the SPM multiple regression analysis. We also included separate covariates describing head-related movements (as estimated by the spatial realignment procedure) and their first and second derivatives over time. We left out the movement-related parametric modulations in Experiment 2 because the receiver’s moves, unlike the sender’s, simply brought her token to the target position. Data were high-pass filtered (cut-off 128 s), and temporal autocorrelation was modelled as an AR(1) process.
The main focus of interest in this experiment was the difference (in the receiver) between observing COMMUNICATIVE and NON-COMMUNICATIVE actions. This difference was isolated by testing planning-related activity for COMMUNICATIVE versus NON-COMMUNICATIVE trials (each having subtracted out common movement-related activity). This contrast can be expressed in terms of events in the fMRI design (see Table 2): the activity that was greater for event 2R (versus event 4R) in the COMMUNICATIVERECEIVER sessions than for event 2R (versus event 4R) in the NON-COMMUNICATIVERECEIVER sessions.
Session-specific parameter estimates were calculated at each voxel for each subject, and contrasts of the parameter estimates were calculated as described above. These subject-specific contrasts were entered into a non-parametric test, treating subjects as a random variable. We report the results of a random effects analysis, corrected for multiple comparisons over the whole brain using family-wise error correction (p < 0.05). We employed the non-parametric variant of SPM (SnPM; Nichols and Holmes, 2002), using a locally pooled variance estimate (pseudo-t) with a Gaussian kernel of 8 mm FWHM (Nichols and Holmes, 2002). To optimize statistical sensitivity for both spatially extended clusters and high-intensity signals, we used a combined threshold based on voxel intensity and cluster size (Hayasaka and Nichols, 2004), with a pseudo-t value of 3 (corresponding to p ≈ 0.002) for the identification of supra-threshold clusters.
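The permutation logic described above can be illustrated with a simplified sign-flipping sketch: under the null hypothesis each subject’s contrast map can have its sign flipped, and the maximum statistic across voxels in each permutation yields a family-wise-error-corrected threshold. The sketch uses an ordinary t statistic rather than the variance-smoothed pseudo-t of SnPM, and omits the cluster-extent component:

```python
import numpy as np

def sign_flip_max_t(contrast_maps, n_perm=5000, seed=0):
    """One-sample permutation test by sign flipping subject contrast maps.

    contrast_maps: array of shape (n_subjects, n_voxels).
    Returns the observed t-map and a family-wise-error-corrected threshold
    read off the null distribution of the maximum t.
    """
    rng = np.random.default_rng(seed)
    n_sub = contrast_maps.shape[0]

    def t_map(data):
        mean = data.mean(axis=0)
        se = data.std(axis=0, ddof=1) / np.sqrt(n_sub)
        return mean / se

    observed = t_map(contrast_maps)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))  # flip each subject
        max_null[i] = t_map(contrast_maps * signs).max()
    fwe_threshold = np.quantile(max_null, 0.95)           # p < 0.05 corrected
    return observed, fwe_threshold

# Toy example: 8 receivers, 1000 voxels of simulated contrast values.
# (With 8 subjects there are only 2**8 unique sign patterns, so a full
# enumeration would give an exact test; random resampling is used here.)
maps = np.random.default_rng(1).normal(size=(8, 1000))
t_obs, thr = sign_flip_max_t(maps, n_perm=1000)
```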

Results

fMRI Data

Table 3 provides an overview of the results obtained from the main contrasts in the random effects analysis. We found that planning novel communicative acts and understanding the communicative intention of these acts relied on the same cerebral tissue, namely the posterior part of the superior temporal sulcus (pSTS) of the right hemisphere (Figures 2 A–C,E). During the planning phase of the TCG there was no change in sensory input or motor output (phase 2 in Figure 2 ). Therefore, this differential planning-related activity cannot be driven by visual motion or hand movements. Yet, it is possible that this particular pSTS cluster is sensitive to sensory stimuli or to motor responses, having been implicated in the perception of biological motion (Peelen et al., 2006 ) and in receiving reafferent motor-related activity (Iacoboni et al., 2001 ). We tested this possibility by assessing the BOLD activity measured during the execution of the communicative movements by the Sender (phase 4 in Figure 2 ). During this period the Sender moved his fingers over the controller and perceived his token moving over the game board. There was no reliable activity during this phase (Figure 2 D, “Sender moves” bar). Taken together, these responses indicate that this portion of the right pSTS is not responsive to visual motion or to hand movements per se. Rather, this cluster appears to be involved in processing visual motion when this becomes relevant for inferring the communicative intentions of the agent (i.e. for the Receiver).
Figure 2. Sequence of events in a COMMUNICATIVE trial of the TCG (all conventions as in Figure 1), and cerebral activity evoked in the sender and in the receiver during relevant trial epochs. (A) Planning communicative actions increased metabolic activity in the posterior part of the pSTS of the sender’s brain (in red, MNI coordinates: 50, −42, 14, p < 0.05 corrected for multiple comparisons), as compared to planning similar movements during NON-COMMUNICATIVE trials [(C); effect size: parameter estimates of a multiple regression analysis in standard error (SE) units], i.e. trials in which both players could see the goal configuration. The execution of the movements evoked no significant changes in the right pSTS activity of the sender’s brain (D). The observation of the same communicative actions increased right pSTS activity in the receiver’s brain [(B), in red, MNI coordinates: 56, −38, 6, p < 0.05 corrected for multiple comparisons], as compared to observing the same movements during non-communicative trials (E). The execution of the movements evoked no significant changes in the pSTS activity of the receiver’s brain (F).
We reasoned that if the right pSTS is specifically involved in planning communicative intentions, then the response of this region should be modulated by communicative ambiguity, and not by the motoric complexity of the action used to convey the relevant information to the Receiver. Therefore, we tested whether the pSTS activity (measured during the planning phase) was sensitive to (1) the different communicative complexity of EASY and DIFFICULT COMMUNICATIVE trials; (2) the number of moves performed in the execution phase of the COMMUNICATIVE trials; or (3) the time spent moving during the execution phase of the same trials. There was no reliable linear relationship between pSTS activity and motoric complexity (Figure 2 D, “Number of moves” and “Movement Time” bars), but there was a strong effect of communicative complexity (Figure 3 ).
Figure 3. Planning-related activity in the right pSTS of the sender’s brain was modulated by the difficulty of selecting an adequate communicative behaviour. (A) When the sender could indicate the goal position of the receiver’s token (blue rectangle) by aligning his token (yellow rectangle) with her goal position, there was no significant right pSTS activity (in magenta). (B) When the sender could not match the goal orientation of the receiver’s token (blue rectangle) with his token (yellow circle), there was robust right pSTS activity (in orange), both for correct and incorrect trials. Other conventions as in Figures 1 and 2 .
The pSTS activity was co-activated with the medial prefrontal cortex (mPFC) (Figure 4 A) previously associated with conceptually-based accounts of the human ability to make inferences about mental states of other agents (Frith and Frith, 2006b ; Saxe et al., 2004 ). The mirror-system regions hypothesized to provide efferent copies of motor commands to the pSTS (Iacoboni et al., 2001 ) showed increased de-activations during the planning of communicative actions (Figure 4 B). The difference between COMMUNICATIVE and NON-COMMUNICATIVE trials was present in the right pSTS, but not the left pSTS (Figure 4 C).
Figure 4. Planning-related activity in cerebral regions of the sender’s brain during COMMUNICATIVE and NON-COMMUNICATIVE trials. (A) There was robust activity during both trial types in the mPFC (MNI coordinates: 4, 54, 18, p < 0.05 corrected for multiple comparisons), a region previously associated with conceptually-based accounts of human mind-reading abilities. (B) There were robust decreases in activity in the left parietal operculum (MNI coordinates: −52, −24, 44, p < 0.05, corrected for multiple comparisons) and intraparietal sulcus (MNI coordinates: −38, −34, 64, p < 0.05, corrected for multiple comparisons), regions previously associated with sensorimotor accounts of human mind-reading abilities. These decreases in metabolic activity, a sign of reduced neural activity, were significantly stronger during the planning of communicative actions. (C) The plots show effect sizes for the left and right pSTS (in standard error units) of cerebral activity from volumes of interest (VOIs – 10 mm centered on published stereotactical coordinates for the pSTS and adjusted for effects of interest) evoked in the sender during the planning epochs of COMMUNICATIVE and NON-COMMUNICATIVE trials.
In receivers, the same right pSTS activity was evoked during the recognition of the communicative intentions of the senders (Figures 2 B,E). Similar to the sender, this activity was modulated by communicative ambiguity (Figure 5 ) and not present during motor execution (Figure 2 F). The anatomical overlap between the activity evoked in the sender during the planning of communicative actions and the activity evoked in the receiver during the observation of the communicative actions (Figure 6 ) was unlikely (p = 0.017) to have occurred by chance, even at the relatively coarse spatial scale of group fMRI studies 4 .
Figure 5. Activity evoked in the receiver during EASY and DIFFICULT COMMUNICATIVE trials in the right pSTS (MNI coordinates: 56, −38, 6). The plots show effect sizes (in standard error units) of cerebral activity evoked in the receiver during the observation epochs of EASY and DIFFICULT COMMUNICATIVE trials. The effect sizes were larger for DIFFICULT than EASY COMMUNICATIVE trials.
Figure 6. Illustration of the anatomical overlap of brain activity in the right pSTS of the sender and the receiver. Cerebral activity evoked in the sender during the planning of communicative actions is shown in yellow/red, together with the cerebral activity evoked in the receiver during the observation of the communicative actions, in (A) a 3D rendering and (B) a sagittal section.

Behavioral Data

Senders had longer planning times, made more errors, and made more moves on COMMUNICATIVE than on NON-COMMUNICATIVE trials, t(23) = 4.9, p < 0.001, t(23) = 7.9, p < 0.001, and t(23) = 4.7, p = 0.01, respectively (Figure 7 A). Receivers made more moves on NON-COMMUNICATIVERECEIVER than on COMMUNICATIVERECEIVER trials, t(7) = 5.9, p = 0.001 (Figure 7 B). There were no differences in planning times or accuracy scores between these two types of trials [both t(7) < 1]. These results suggest that the receivers were able to infer the communicative intentions of the sender, and that the motoric demands of the NON-COMMUNICATIVERECEIVER trials were actually larger than those of the COMMUNICATIVERECEIVER trials.
Figure 7. Performance during COMMUNICATIVE and NON-COMMUNICATIVE trials – behavioral data for the fMRI experiment. (A) Planning times (ms), accuracy scores (%), and number of moves of the sender for COMMUNICATIVE and NON-COMMUNICATIVE trials. (B) Planning times (ms), accuracy scores (%), and number of moves of the receiver for COMMUNICATIVE and NON-COMMUNICATIVE trials. Error bars indicate standard errors.
Senders had longer planning times, made more errors, and made more moves on DIFFICULT COMMUNICATIVE than on EASY COMMUNICATIVE trials, t(23) = 6.8, p < 0.001, t(23) = 13.4, p < 0.001, and t(23) = 3.6, p = 0.01, respectively (Figure 8 A). Crucially, performance remained well above chance level even in the DIFFICULT COMMUNICATIVE trials [t(23) = 7.2, p < 0.001, given the most conservative estimate of chance for every trial (12.5%, see Materials and Methods)], despite a significant increase in error rate compared to the other trial types. Receivers made more moves on DIFFICULT COMMUNICATIVE than on EASY COMMUNICATIVE trials, t(7) = 21.5, p < 0.001 (Figure 8 B). There were no significant differences in planning times or accuracy scores, t(7) = 1.5, p = 0.19, and t(7) = 2, p = 0.09, respectively. The behavioral difference between easy and difficult trials was thus less pronounced for the receiver (significant only for the number of moves) than for the sender, because in both trial types the receiver needed to move her token to a single target position on the board, and in this experiment the sender was a confederate, ensuring reliable and homogeneous communicative behavior across trial types. The difference in the number of moves used by the receiver to provide the response is trivially accounted for by the additional rotations of the receiver’s token required to solve the difficult trials.
Figure 8. Performance during EASY and DIFFICULT COMMUNICATIVE trials – behavioral data for the fMRI experiment. (A) Planning times (ms), accuracy scores (%), and number of moves of the sender for EASY COMMUNICATIVE and DIFFICULT COMMUNICATIVE trials (error bars indicate standard errors). (B) Planning times (ms), accuracy scores (%), and number of moves of the receiver for EASY COMMUNICATIVE and DIFFICULT COMMUNICATIVE trials (error bars indicate standard errors).

Discussion

We draw two main conclusions from these results. First, and most important, the findings indicate that the same cerebral region, the pSTS, is involved in recognizing communicative intentions and in planning communicative actions, supporting the hypothesis that both types of cognition involve similar conceptual inferences. These findings fit with known properties of this region, namely the involvement of the pSTS in inferring the intentionality of observed actions, both in humans and macaques (Barraclough et al., 2005; Jellema et al., 2000; Pelphrey and Morris, 2006). We extend the scope of those findings, showing that the contribution of the pSTS to intention recognition is not bound to the processing of biological cues, for example gaze or eye-hand joint movements (Barraclough et al., 2005; Puce et al., 1998), or point-light displays (Puce and Perrett, 2003). Crucially, here we show that the contributions of the pSTS extend beyond perceiving social cues, and include the generation of communicative actions. We suggest that this generative component is similar to what happens when people predict what type of action another person will do next (Aichhorn et al., 2006; Castelli et al., 2000; Jellema et al., 2000, 2004; Saxe et al., 2004). However, instead of making a prediction concerning the goals and intentions behind the actions of others, a sender of a communicative action predicts the intentions that might be attributed by another person (i.e. the receiver) to that particular communicative action. In other words, the pSTS is involved in the prediction of a forthcoming intention attribution, possibly on the basis of previous experience with how people have interpreted one’s actions (Frith and Frith, 2006a; Schultz et al., 2005; Zilbovicius et al., 2006). Such a mechanism of predicting forthcoming intention recognition processes could explain the involvement of this region in different phenomena, from the recognition of biological motion (Allison et al., 2000) to the attribution of intentions (Castelli et al., 2000; Saxe et al., 2004) and the parsing of observed behaviors into conceptually relevant units (Zacks et al., 2001). Accordingly, it might not be surprising that developmental alterations in these basic perceptual mechanisms, as found in autism-spectrum disorder patients (Dakin and Frith, 2005; Zilbovicius et al., 2006), have serious consequences for human social behavior. Further research will be needed to determine how biological motion and nonverbal communication are related. Perhaps the same mechanisms putatively involved in the processing of biological motion, i.e. the extraction of posture sequences (Giese and Poggio, 2003), could also support the parsing of a string of object motions into communicative segments. In addition, although the present fMRI data support our a priori hypothesis of a computational overlap between recognizing communicative intentions and planning communicative actions, we cannot exclude the possibility that different parts of the pSTS support qualitatively different functions. In this respect, future single-subject electrophysiological studies might be able to test whether the same neuronal populations are involved in planning communicative acts and in recognizing the intentions conveyed by those acts, avoiding the anatomical scatter associated with the residual anatomical variability of fMRI comparisons (Petersson et al., 1999).
A second basic conclusion is that these findings do not support the view that the mirror-system provides the foundations for human communication (Rizzolatti and Craighero, 2004). The pSTS activity was indifferent to sensory input and motor output, but it was sensitive to the ambiguity in meaning of the communicative acts (Figure 3). In addition, the mirror-system regions hypothesized to provide efferent copies of motor commands to the pSTS (Iacoboni et al., 2001) showed increased de-activations during the planning of communicative actions (Figure 4 B), an indication that generating intentional communicative behavior actually reduces metabolic activity (Shmuel et al., 2006) in the mirror-system (Rizzolatti and Craighero, 2004). These observations are not consistent with the idea that sensorimotor simulations can account for human communicative abilities. Nevertheless, it is theoretically conceivable that, in the pSTS, there is a non-linear relationship between the number of moves/moving time and the BOLD signal. In this case, our current approach would not capture this relationship, i.e. we cannot exclude the presence of higher-order curvilinear relationships between motoric complexity and pSTS activity. However, given that these higher-order relationships would be devoid of any linear trend (since that would have been captured by our current analysis), it is not immediately obvious how they could be functionally interpreted.
Other interpretations of the findings seem ruled out by details of the experiment. For example, the pSTS response cannot simply reflect visual imagery of the movements, since it did not occur in non-communicative trials. Similarly, the pSTS response cannot be a consequence of its putative role in matching efferent copies of motor commands with visual inputs (Iacoboni, 2005 ; Iacoboni et al., 2001 ; Keysers and Perrett, 2004 ). That hypothesis predicts that the pSTS should be metabolically active during both action execution and action observation (Keysers and Perrett, 2004 ), but in this study the pSTS responses were strong during movement observation and absent during movement execution (Figures 2 D,F).
It might be argued that the pSTS response reflects subjects’ verbalizations, but the right-hemispheric lateralization of the effect (Figures 2 A and 4 C) is not consistent with the left-hemispheric dominance for phonological and syntactic processing (Frost et al., 1999 ). Actually, the present findings confirm the crucial role of the right pSTS for processing pragmatic aspects of linguistic material (Jung-Beeman et al., 2004 ; Mashal et al., 2007 ), an instance of the right-hemispheric dominance for inferring the communicative intentions of a conversational partner (Sabbagh, 1999 ). It might be argued that the differential pSTS response during easy and difficult communicative trials (Figures 3 and 5 ) is driven by differential communicative success (Figure 8 ). However, this interpretation is not consistent with the finding that both correct and incorrect outcomes evoke significantly positive and equivalent responses in the right pSTS of senders during difficult communicative trials (Figure 3 B). In fact, analysis of the senders’ movements during communicative trials suggests a different possibility. In the easy communicative trials, senders generated the same type of communicative action (namely, move to the target position of the receiver, pause, and then move to his own target position). This behavior was consistent across trials and across participants, and the senders developed it during the task familiarization period prior to the scanning session. In contrast, during the difficult trials, the sender generated different communicative movements according to the geometry of his token, of the receiver’s token, and the past history of communicative trials. Accordingly, we suggest that the right pSTS distinguishes between trials requiring the formation of new semiotic conventions (difficult communicative trials), and trials exploiting a recently established communicative behaviour (easy communicative trials). Finally, it should be emphasized that our findings pertain to the generation of non-verbal communicative behaviors that are independent from shared conventional codes. It could be argued that this experimentally controlled scenario is artificial, failing to capture relevant aspects of human linguistic communication. However, we all start out as infants without access to the local communication conventions. In that respect, by addressing the human ability to quickly build new semiotic conventions, we deal with a crucial pre-condition for using gestures and language as a communicative tool (Galantucci, 2005 ; Levinson, 2006 ). Accordingly, the current findings are in line with ancillary evidence that points to distinct origins, in both ontogeny and phylogeny, of the foundational mechanisms for communication on the one hand, and language on the other (Levinson, 2006 ; Tomasello and Carpenter, 2005 ).
It has been argued that communicative actions are special (Frith, 2007 ). Their planning is special, since the immediate goal of the movement does not overlap with its actual communicative purpose. Their understanding is special, since they rely on the ability to recognize their communicative value, such that sender and receiver can share an intention (Tomasello et al., 2005 ). The present findings provide a cognitive and cerebral ground for this special human faculty, supporting the notion that our communicative abilities are distinct both from sensorimotor processes (with distinct areas of activation) and language abilities (with their largely left-hemisphere localization).

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgements

The present study was supported by the EU-Project “Joint Action Science and Technology” (IST-FP6-003747). We would like to thank Roger Newman-Norlund, Bram Daams, and Paul Gaalman for technical advice and assistance.

Footnotes

  1. ^ www.fil.ion.ucl.ac.uk/spm
  2. ^ During a pilot phase, and after the collection of the imaging data, we tested whether there were high correlations between relevant regressors in the design matrix described above. The experimental design and the statistical model described above ensured that the maximum correlation between planning-related and execution-related regressors was <30%. Previous experience with these types of experimental designs has shown the validity of this approach, and its ability to effectively dissociate planning- and execution-related effects (Thoenissen et al., 2002; Toni et al., 1999, 2002).
  3. ^ The radius of each VOI was set at 10 mm, using the WFU PickAtlas software toolbox (Maldjian et al., 2003 , 2004 ), correcting for multiple comparisons over the joint volume spanned by the individual VOIs. This resulted in a t-value of 3 (corresponding to p ≈ 0.002) for identification of supra-threshold clusters. Note that this threshold is only used to define clusters, and does not denote the threshold for significance of activations.
  4. ^ A conservative estimate of this probability was obtained by using a coarse but automated parcellation of the human brain in 116 unique structures (Tzourio-Mazoyer et al., 2002 ). Given that we performed independent experiments, on different groups of subjects, the probability of finding activity in the same region for the receiver as for the sender is given by the number of regions found for the receiver [2] divided by the number of regions in which activity might have occurred [116] (2/116 ≈ 0.017).

References

Aichhorn, M., Perner, J., Kronbichler, M., Staffen, W., and Ladurner, G. (2006). Do visual perspective tasks need theory of mind? NeuroImage 30, 1059–1068.
Allison, T., Puce, A., and McCarthy, G. (2000). Social perception from visual cues: role of the STS region. Trends Cogn. Sci. 4, 267–278.
Ashburner, J., and Friston, K. (1997). Multimodal image coregistration and partitioning – a unified framework. Neuroimage 6, 209–217.
Barraclough, N. E., Xiao, D., Baker, C. I., Oram, M. W., and Perrett, D. I. (2005). Integration of visual and auditory information by superior temporal sulcus neurons responsive to the sight of actions. J. Cogn. Neurosci. 17, 377–391.
Castelli, F., Happe, F., Frith, U., and Frith, C. (2000). Movement and mind: a functional imaging study of perception and interpretation of complex intentional movement patterns. Neuroimage 12, 314–325.
Clark, H. H., and Carlson, T. B. (1982). Speech acts and hearers’ beliefs. In Mutual Knowledge, N. V. Smith, ed (New York, Academic Press), pp. 1–36.
Dakin, S., and Frith, U. (2005). Vagaries of visual perception in autism. Neuron 48, 497–507.
di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., and Rizzolatti, G. (1992). Understanding motor events: a neurophysiological study. Exp. Brain Res. 91, 176–180.
Fernandez, G., Weis, S., Stoffel-Wagner, B., Tendolkar, I., Reuber, M., Beyenburg, S., Klaver, P., Fell, J., de Greiff, A., Ruhlmann, J., Reul, J., and Elger, C. E. (2003). Menstrual cycle-dependent neural plasticity in the adult human brain is hormone, task, and region specific. J. Neurosci. 23, 3790–3795.
Friston, K. J. (1997). Testing for anatomically specified regional effects. Hum. Brain Mapp. 5, 133–136.
Friston, K. J., Holmes, A., Poline, J. B., Price, C. J., and Frith, C. D. (1996). Detecting activations in PET and fMRI: levels of inference and power. Neuroimage 4(Pt 1), 223–235.
Friston, K. J., Holmes, A. P., Worsley, K. J., Poline, J. B., Frith, C., and Frackowiak, R. S. (1995). Statistical parametric maps in functional imaging: a general linear approach. Hum. Brain Mapp. 2, 189–210.
Frith, C. D. (2007). The social brain? Philos. Trans. R. Soc. Lond. B Biol. Sci. 362, 671–678.
Frith, C. D., and Frith, U. (2006a). How we predict what other people are going to do. Brain Res. 1079, 36–46.
Frith, C. D., and Frith, U. (2006b). The neural basis of mentalizing. Neuron 50, 531–534.
Frost, J. A., Binder, J. R., Springer, J. A., Hammeke, T. A., Bellgowan, P. S. F., Rao, S. M., and Cox, R. W. (1999). Language processing is strongly left lateralized in both sexes: evidence from functional MRI. Brain 122, 199–208.
Galantucci, B. (2005). An experimental study of the emergence of human communication systems. Cogn. Sci. 25, 737–767.
Gallese, V., Keysers, C., and Rizzolatti, G. (2004). A unifying view of the basis of social cognition. Trends Cogn. Sci. 8, 396–403.
Giese, M. A., and Poggio, T. (2003). Neural mechanisms for the recognition of biological movements. Nat. Rev. Neurosci. 4, 179–192.
Goldman, A. I. (2006). Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading. Oxford, Oxford University Press.
Hayasaka, S., and Nichols, T. E. (2004). Combining voxel intensity and cluster extent with permutation test framework. Neuroimage 23, 54–63.
Iacoboni, M. (2005). Neural mechanisms of imitation. Curr. Opin. Neurobiol. 15, 632–637.
Iacoboni, M., Koski, L. M., Brass, M., Bekkering, H., Woods, R. P., Dubeau, M.-C., Mazziotta, J. C., and Rizzolatti, G. (2001). Reafferent copies of imitated actions in the right superior temporal cortex. Proc. Natl. Acad. Sci. U.S.A. 98, 13995–13999.
Iacoboni, M., Woods, R. P., Brass, M., Bekkering, H., Mazziotta, J. C., and Rizzolatti, G. (1999). Cortical mechanisms of human imitation. Science 286, 2526–2528.
Jellema, T., Baker, C. I., Wicker, B., and Perrett, D. I. (2000). Neural representation for the perception of the intentionality of actions. Brain Cogn. 44, 280–302.
Jellema, T., Maassen, G., and Perrett, D. I. (2004). Single cell integration of animate form, motion and location in the superior temporal cortex of the macaque monkey. Cereb. Cortex 14, 781–790.
Jung-Beeman, M., Bowden, E. M., Haberman, J., Frymiare, J. L., Arambel-Liu, S., Greenblatt, R., Reber, P. J., and Kounios, J. (2004). Neural activity when people solve verbal problems with insight. PLoS Biol. 2, 500–510.
Keysers, C., and Perrett, D. I. (2004). Demystifying social cognition: a Hebbian perspective. Trends Cogn. Sci. 8, 501–507.
Levinson, S. C. (1995). Interactional biases in human thinking. In Social Intelligence and Interaction, E. N. Goody, ed (Cambridge, Cambridge University Press), pp. 221–260.
Levinson, S. C. (2000). Presumptive Meanings. Cambridge, MA, MIT Press.
Levinson, S. C. (2006). On the human “interactional engine”. In Roots of Human Sociality: Culture, Cognition, and Interaction, N. J. Enfield and S. C. Levinson, eds (Oxford, Berg), pp. 39–69.
Lund, T. E., Norgaard, M. D., Rostrup, E., Rowe, J. B., and Paulson, O. B. (2005). Motion or activity: their role in intra- and inter-subject variation in fMRI. Neuroimage 26, 960–964.
Maldjian, J. A., Laurienti, P. J., and Burdette, J. H. (2004). Precentral gyrus discrepancy in electronic versions of the Talairach atlas. Neuroimage 21, 450–455.
Maldjian, J. A., Laurienti, P. J., Kraft, R. A., and Burdette, J. H. (2003). An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets. Neuroimage 19, 1233–1239.
Mashal, N., Faust, M., Hendler, T., and Jung-Beeman, M. (2007). An fMRI investigation of the neural correlates underlying the processing of novel metaphoric expressions. Brain Lang. 100, 115–126.
Nichols, S., and Stich, S. P. (2003). Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding Other Minds. Oxford, Clarendon Press.
Nichols, T. E., and Holmes, A. P. (2002). Nonparametric permutation tests for functional neuroimaging: a primer with examples. Hum. Brain Mapp. 15, 1–25.
Peelen, M. V., Wiggett, A. J., and Downing, P. E. (2006). Patterns of fMRI activity dissociate overlapping functional brain areas that respond to biological motion. Neuron 49, 815–822.
Pelphrey, K. A., and Morris, J. P. (2006). Brain Mechanisms for interpreting the actions of others from biological-motion cues. Curr. Dir. Psychol. Sci. 15, 136–140.
Petersson, K. M., Nichols, T. E., Poline, J. B., and Holmes, A. P. (1999). Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models. Philos. Trans. R. Soc. Lond., B, Biol. Sci. 354, 1239–1260.
Puce, A., Allison, T., Bentin, S., Gore, J. C., and McCarthy, G. (1998). Temporal cortex activation in humans viewing eye and mouth movements. J. Neurosci. 18, 2188–2199.
Puce, A., and Perrett, D. (2003). Electrophysiology and brain imaging of biological motion. Philos. Trans. R. Soc. Lond., B, Biol. Sci. 358, 435–445.
Rizzolatti, G., and Craighero, L. (2004). The mirror-neuron system. Annu. Rev. Neurosci. 27, 169–192.
Sabbagh, M. A. (1999). Communicative intentions and language: evidence from right-hemisphere damage and autism. Brain Lang. 70, 29–69.
Salek-Haddadi, A., Lemieux, L., Merschhemke, M., Friston, K. J., Duncan, J. S., and Fish, D. R. (2003). Functional magnetic resonance imaging of human absence seizures. Ann. Neurol. 53, 663–667.
Saxe, R., Carey, S., and Kanwisher, N. (2004). Understanding other minds: linking developmental psychology and functional neuroimaging. Annu. Rev. Psychol. 55, 87–124.
Saxe, R., and Wexler, A. (2005). Making sense of another mind: the role of the right temporo-parietal junction. Neuropsychologia 43, 1391–1399.
Saxe, R., Xiao, D.-K., Kovacs, G., Perrett, D. I., and Kanwisher, N. (2004). A region of right posterior superior temporal sulcus responds to observed intentional actions. Neuropsychologia 42, 1435–1446.
Schultz, J., Friston, K. J., O’Doherty, J., Wolpert, D. M., and Frith, C. D. (2005). Activation in posterior superior temporal sulcus parallels parameter inducing the percept of animacy. Neuron 45, 625–635.
Shmuel, A., Augath, M., Oeltermann, A., and Logothetis, N. K. (2006). Negative functional MRI response correlates with decreases in neuronal activity in monkey visual area V1. Nat. Neurosci. 9, 569–577.
Sperber, D., and Wilson, D. (2001). Relevance: Communication and Cognition, 2nd Edn. Oxford, Blackwell Publishers.
Thoenissen, D., Zilles, K., and Toni, I. (2002). Differential involvement of parietal and precentral regions in movement preparation and motor intention. J. Neurosci. 22, 9024–9034.
Tomasello, M., and Carpenter, M. (2005). The emergence of social cognition in three young chimpanzees. Monogr. Soc. Res. Child Dev. 70, vii-132.
Tomasello, M., Carpenter, M., Call, J., Behne, T., and Moll, H. (2005). Understanding and sharing intentions: the origins of cultural cognition. Behav. Brain Sci. 28, 675–691; discussion 691–735.
Toni, I., Schluter, N. D., Josephs, O., Friston, K., and Passingham, R. E. (1999). Signal-, set- and movement-related activity in the human brain: an event-related fMRI study [published erratum appears in Cereb. Cortex 1999 Mar; 9, 196]. Cereb. Cortex 9, 35–49.
Toni, I., Shah, N. J., Fink, G. R., Thoenissen, D., Passingham, R. E., and Zilles, K. (2002). Multiple movement representations in the human brain: an event-related fMRI study. J. Cogn. Neurosci. 14, 769–784.
Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello, F., Etard, O., Delcroix, N., Mazoyer, B., and Joliot, M. (2002). Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 15, 273–289.
Verhagen, L., Dijkerman, H. C., Grol, M. J., and Toni, I. (2008). Perceptuo-motor interactions during prehension movements. J. Neurosci. 28, 4726–4735.
Wolpert, D. M., Doya, K., and Kawato, M. (2003). A unifying computational framework for motor control and social interaction. Philos. Trans. R. Soc. Lond., B, Biol. Sci. 358, 593–602.
Zacks, J. M., Braver, T. S., Sheridan, M. A., Donaldson, D. I., Snyder, A. Z., Ollinger, J. M., Buckner, R. L., and Raichle, M. E. (2001). Human brain activity time-locked to perceptual event boundaries. Nat. Neurosci. 4, 651–655.
Zilbovicius, M., Meresse, I., Chabane, N., Brunelle, F., Samson, Y., and Boddaert, N. (2006). Autism, the superior temporal sulcus and social perception. Trends Neurosci. 29, 359–366.
Keywords: social neuroscience, interactive game, fMRI, superior temporal sulcus

Citation: Noordzij ML, Newman-Norlund SE, de Ruiter JP, Hagoort P, Levinson SC and Toni I (2009). Brain mechanisms underlying human communication. Front. Hum. Neurosci. 3:14. doi: 10.3389/neuro.09.014.2009

Received: 15 January 2009; Paper pending published: 23 February 2009; Accepted: 08 July 2009; Published online: 30 July 2009.

Edited by: Jennifer S. Beer, University of Texas at Austin, USA

Reviewed by: Malia Mason, Columbia University (NYC), USA; Stephanie D. Preston, University of Michigan, USA
Copyright: © 2009 Noordzij, Newman-Norlund, de Ruiter, Hagoort, Levinson and Toni. This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.
*Correspondence: Matthijs L. Noordzij, Department of Cognitive Psychology and Ergonomics, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands. e-mail: m.l.noordzij@utwente.nl