Although it is widely accepted that nouns and verbs are functionally independent linguistic entities, it is less clear whether their processing recruits different brain areas. This issue is particularly relevant for those theories of lexical semantics (and, more generally, of cognition) that suggest that abstract concepts are embodied, i.e., grounded strongly in perceptual and motoric representations. This paper presents a formal meta-analysis of the neuroimaging evidence on noun and verb processing in order to address this dichotomy more effectively at the anatomical level. We used a hierarchical clustering algorithm that grouped fMRI/PET activation peaks solely on the basis of spatial proximity. Cluster specificity for grammatical class was then tested on the basis of the noun-verb distribution of the activation peaks included in each cluster. Thirty-two clusters were identified: three were associated with nouns across different tasks (in the right inferior temporal gyrus, the left angular gyrus, and the left inferior parietal gyrus); one with verbs across different tasks (in the posterior part of the right middle temporal gyrus); and three showed verb specificity in some tasks and noun specificity in others (in the left and right inferior frontal gyrus and the left insula). These results do not support the popular tenets that verb processing is predominantly based in the left frontal cortex and that noun processing relies specifically on temporal regions; nor do they support the idea that verb lexical-semantic representations are heavily based on embodied motoric information. Our findings suggest instead that the cerebral circuits dedicated to noun and verb processing lie in close spatial proximity within a wide network including frontal, parietal, and temporal regions. The data also indicate a predominant, but not exclusive, left lateralization of the network.
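To make the clustering step concrete, the sketch below groups simulated activation-peak coordinates by spatial proximity with an agglomerative (hierarchical) procedure and then tests each resulting cluster for noun/verb specificity. It is an illustration only: the coordinates, the Ward linkage criterion, the distance cut-off, and the binomial test are assumptions made for the example, not the parameters or statistics of the meta-analysis itself.

```python
# Illustrative sketch only: hierarchical clustering of activation peaks by
# spatial proximity, followed by a test of noun/verb specificity per cluster.
# Coordinates, linkage criterion, and cut-off are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import binomtest

rng = np.random.default_rng(0)
# Toy activation peaks: MNI-like (x, y, z) coordinates plus a grammatical-class label
peaks = rng.normal(loc=[[-45, 20, 15]] * 30 + [[50, -55, 5]] * 30, scale=6)
labels = np.array(["noun"] * 20 + ["verb"] * 10 + ["noun"] * 8 + ["verb"] * 22)

# Group peaks solely on the basis of spatial (Euclidean) proximity
Z = linkage(peaks, method="ward")                     # assumed linkage criterion
clusters = fcluster(Z, t=20, criterion="distance")    # assumed cut-off on linkage distance

# Test each cluster's grammatical-class distribution against a 50/50 baseline
for c in np.unique(clusters):
    in_c = labels[clusters == c]
    n_noun = int((in_c == "noun").sum())
    p = binomtest(n_noun, n=len(in_c), p=0.5).pvalue
    print(f"cluster {c}: {n_noun}/{len(in_c)} noun peaks, p = {p:.3f}")
```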
This study harnessed ratings from control participants of the contribution of different types of information (sensation, action, emotion, thought, social interaction, morality, time, space, quantity, and polarity) to 400 individual abstract and concrete verbal concepts. These abstract conceptual feature (ACF) ratings were used to generate a high-dimensional semantic space, from which Euclidean distances between individual concepts were extracted as a metric of the semantic relatedness of those words. The validity of these distances as a marker of semantic relatedness was then tested by evaluating whether they could predict the comprehension performance of a patient with global aphasia on two verbal comprehension tasks. It was hypothesized that if the high-dimensional space generated from the control ACF ratings approximates the organization of abstract conceptual space, then words separated by small distances should be more semantically related than words separated by greater distances, and should therefore be more difficult to distinguish for the comprehension-impaired patient, SKO. SKO was significantly worse at identifying targets presented within word pairs with low ACF distances. Response accuracy was not predicted by Latent Semantic Analysis (LSA) cosines, by any of the individual feature ratings, or by any of the background variables. It is argued that this novel rating procedure provides a window onto the semantic attributes of individual abstract concepts, and that multiple cognitive systems may influence the acquisition and organization of abstract conceptual knowledge. More broadly, it is suggested that cognitive models of abstract conceptual knowledge must account for the representation not only of the relationships between abstract concepts but also of the attributes that constitute those individual concepts.
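As a rough illustration of the distance metric described above, the sketch below represents each concept as a vector of feature-contribution ratings over the ten dimensions listed and takes the Euclidean distance between vectors as a proxy for semantic relatedness. The concepts and rating values are invented for the example and are not taken from the ACF norms.

```python
# Illustrative sketch only: concepts as vectors of feature-contribution
# ratings, with Euclidean distance as a proxy for semantic relatedness.
# The ratings below are made up for the example.
import numpy as np

features = ["sensation", "action", "emotion", "thought", "social interaction",
            "morality", "time", "space", "quantity", "polarity"]

# Hypothetical mean ratings (e.g., on a 0-6 scale) for three concepts
ratings = {
    "justice":  np.array([1.0, 1.5, 3.5, 4.5, 4.0, 5.5, 1.0, 0.5, 1.0, 3.0]),
    "fairness": np.array([0.8, 1.2, 3.8, 4.6, 4.2, 5.2, 0.8, 0.6, 1.2, 3.2]),
    "velocity": np.array([3.5, 4.0, 0.5, 2.0, 0.5, 0.2, 4.5, 4.0, 4.5, 1.0]),
}

def acf_distance(a, b):
    """Euclidean distance between two concepts in the rating space."""
    return float(np.linalg.norm(ratings[a] - ratings[b]))

# Smaller distances are taken to index greater semantic relatedness
print(acf_distance("justice", "fairness"))  # small: closely related concepts
print(acf_distance("justice", "velocity"))  # large: distantly related concepts
```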
Two views on the semantics of concrete words hold that their core mental representations are either feature-based or reconstructions of sensory experience. We argue that neither of these approaches is capable of representing the semantics of abstract words, which involve the representation of possibly hypothetical physical and mental states, the binding of entities within a structure, and the possible use of embedding (or recursion) in such structures. Brain-based evidence, in the form of dissociations between deficits related to concrete and abstract semantics, corroborates this hypothesis. Neuroimaging evidence suggests that the left lateral inferior frontal cortex supports those processes responsible for the representation of abstract words.
Comprehension of words is an important part of the language faculty, involving the joint activity of frontal and temporo-parietal brain regions. Transcranial Magnetic Stimulation (TMS) enables the controlled perturbation of brain activity, and thus offers a unique tool to test specific predictions about the causal relationship between brain regions and language understanding. This potential has been exploited to better define the role of regions that are classically accepted as part of the language-semantic network. For instance, TMS has helped to establish the semantic relevance of the left anterior temporal lobe and to resolve the ambiguity between the semantic and phonological functions assigned to the left inferior frontal gyrus (LIFG). Here we consider more closely the results from studies in which the same technique, similar paradigms (lexical-semantic tasks), and similar materials (words) have been used to assess the relevance of regions outside the classically defined language-semantic network, i.e., precentral motor regions, for the semantic analysis of words. This research shows that different portions of the left precentral gyrus (primary motor and premotor sites) are sensitive to the action/non-action distinction in word meaning. However, the behavioral changes induced by TMS over these sites are incongruent with what would be expected after perturbation of a task-relevant brain region. Thus, the relationship between motor activity and language-semantic behavior remains far from clear. A better understanding of this issue could be achieved by investigating functional interactions between motor sites and semantically relevant regions.
Theories of embodied cognition propose that language comprehension is based on perceptual and motor processes. More specifically, it is hypothesized that neurons processing verbs describing bodily actions and neurons processing the corresponding physical actions fire simultaneously during action verb learning, so that the concept and its motor activation become strongly linked. According to this view, the language-induced activation of the neural substrates for action is automatic. By contrast, a weak view of embodied cognition proposes that activation of these motor regions is modulated by context. Recent studies have found that action verbs in literal sentences activate the motor system, while mixed results have been observed for action verbs in non-literal sentences. Thus, whether the recruitment of motor regions is automatic or context-dependent remains an open question. We investigated functional magnetic resonance imaging (fMRI) activation in response to non-literal and literal sentences containing arm- and leg-related actions. The sentence structure was such that the action verb was the last word in the subordinate clause; the constraining context was therefore presented well before the verb. Region-of-interest analyses showed that action verbs in a literal context engage motor regions to a greater extent than non-literal action verbs. There was no evidence for a semantic somatotopic organization of the motor cortex. Taken together, these results indicate that the degree to which motor regions are recruited during comprehension is context-dependent, supporting the weak view of embodied cognition.
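For readers unfamiliar with region-of-interest (ROI) analyses, the sketch below shows the general shape of such a comparison: per-subject mean activation estimates within a motor ROI are contrasted between literal and non-literal action-verb sentences with a paired test. The numbers and the ROI are hypothetical and do not reproduce the study's actual pipeline.

```python
# Illustrative sketch only: a paired ROI comparison of the kind described
# above, contrasting literal vs. non-literal action-verb sentences.
# The per-subject values and the ROI are invented for the example.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n_subjects = 20

# Hypothetical per-subject mean contrast estimates (e.g., beta values)
# within a left precentral motor ROI for each sentence condition
literal = rng.normal(loc=0.6, scale=0.3, size=n_subjects)
nonliteral = rng.normal(loc=0.3, scale=0.3, size=n_subjects)

t, p = ttest_rel(literal, nonliteral)
print(f"literal vs. non-literal in motor ROI: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```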
We examined the effect of hand grip on object recognition by studying the modulation of the mu rhythm while participants made object decisions about objects and non-objects shown with congruent or incongruent hand-grip actions. Although the hand grips were irrelevant to the task, mu rhythm activity on the scalp over motor and premotor cortex was sensitive to the congruency of the hand grip; in particular, the event-related desynchronization of the mu rhythm was more pronounced for familiar objects grasped with an appropriate grip than for objects given an inappropriate grasp. In addition, the power of mu activity correlated with reaction times (RTs) to congruently gripped objects. The results suggest that familiar motor responses evoked by the appropriateness of a hand grip facilitate recognition responses to objects.
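As a pointer to how mu-rhythm desynchronization is typically quantified, the sketch below computes event-related desynchronization (ERD) as the percentage power change in an assumed 8-13 Hz mu band relative to a pre-stimulus baseline. The band limits, epoch lengths, and toy signals are assumptions made for the example, not the parameters of this study.

```python
# Illustrative sketch only: event-related desynchronization (ERD) of the mu
# rhythm as percentage power change from a pre-stimulus baseline.
# Band limits and epoch lengths are assumptions.
import numpy as np

def band_power(signal, fs, band=(8.0, 13.0)):
    """Mean power in the mu band, estimated from the FFT of one epoch."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def erd_percent(baseline_epoch, task_epoch, fs):
    """ERD as percent change relative to baseline (negative = desynchronization)."""
    p_base = band_power(baseline_epoch, fs)
    p_task = band_power(task_epoch, fs)
    return 100.0 * (p_task - p_base) / p_base

# Toy single-trial example at 250 Hz sampling: task epoch has weaker mu power
fs = 250
t = np.arange(0, 1.0, 1.0 / fs)
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
task = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
print(f"mu ERD: {erd_percent(baseline, task, fs):.1f}%")
```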