The use and neural representation of egocentric spatial reference frames are well documented. In contrast, whether the brain represents spatial relationships between objects in allocentric, object-centered, or world-centered coordinates remains debated. Here, I review behavioral, neuropsychological, neurophysiological (neuronal recording), and neuroimaging evidence for and against allocentric, object-centered, and world-centered spatial reference frames. Based on theoretical considerations, simulations, and empirical findings from spatial navigation, spatial judgments, and goal-directed movements, I suggest that all spatial representations may in fact depend on egocentric reference frames.
The development of reaching depends crucially on the progressive control of the trunk, yet the interrelation between the two has not been addressed in detail. Previous studies of seated reaching evaluated infants under fully supported or unsupported conditions; however, trunk control develops progressively, starting from the cervical/thoracic regions and proceeding to the lumbar/pelvic regions as independent sitting is acquired. Providing external trunk support at different levels, thereby testing the effects of controlling the upper versus the lower regions of the trunk, offers insight into the mechanisms by which trunk control shapes reaching in infants. Ten healthy infants were recruited at 2.5 months of age and tested longitudinally until 8 months. During the reaching test, infants were placed in an upright seated position and an adjustable support device provided trunk fixation at the pelvic or thoracic level. Kinematic and electromyographic data were collected. Results showed that, prior to independent sitting, postural instability was higher when infants were provided with pelvic compared with thoracic support. The associated reaches were more circuitous, less smooth, and less efficient. In response to the instability, postural muscle activity and arm muscle co-activation increased. Differences between levels of support were no longer observed once infants had acquired independent sitting. These results suggest that trunk control is acquired in a segmental sequence across the development of upright sitting and is tightly linked to reaching performance.
Research studies in psychology typically use two-dimensional (2D) images of objects as proxies for real-world three-dimensional (3D) stimuli. There are, however, a number of important differences between real objects and images that could influence cognition and behavior. Although human memory has been studied extensively, only a handful of studies have used real objects in the context of memory, and virtually none have directly compared memory for real objects with memory for their 2D counterparts. Here we examined whether episodic memory is influenced by the format in which objects are displayed. We conducted two experiments asking participants to freely recall, and to recognize, a set of 44 common household objects. Critically, the exemplars were displayed to observers in one of three viewing conditions: real-world objects, colored photographs, or black-and-white line drawings. Stimuli were closely matched across conditions for size, orientation, and illumination. Surprisingly, recall and recognition performance was significantly better for real objects than for colored photographs or line drawings (for which memory performance was equivalent). We replicated this pattern in a second experiment comparing memory for real objects with memory for color photographs, with the stimuli matched for viewing angle across conditions. Again, recall and recognition performance was significantly better for the real objects than for matched color photographs of the same items. Taken together, our data suggest that real objects are more memorable than pictorial stimuli. Our results highlight the importance of studying real-world object cognition and point to applied uses in developing effective strategies for education and marketing, as well as for further research on object-related cognition.
Prehension, the capacity to reach and grasp objects, comprises two main components: reaching, i.e., moving the hand towards an object, and grasping, i.e., shaping the hand with respect to the object's properties. Knowledge of this topic has advanced considerably in recent years, dramatically changing our view of how prehension is represented within the dorsal stream. While our understanding of the various nodes coding the grasp component is rapidly progressing, little is known about the integration of grasping with reaching. With this Mini Review we aim to provide an up-to-date overview of recent developments in the coding of prehension. We will start with a description of the regions coding various aspects of grasping in humans and monkeys, delineating where grasping might be integrated with reaching. To gain insight into the causal role of these nodes in the coding of prehension, we will link this functional description to lesion studies. Finally, we will discuss future directions that may prove promising for unveiling new insights into the coding of prehension movements.
When interacting with our environment, we generally make use of both egocentric and allocentric object information, coding object positions relative to the observer or relative to the environment, respectively. Bayesian theories suggest that the brain integrates both sources of information optimally for perception and action. However, experimental evidence for egocentric and allocentric integration is sparse and has so far come only from abstract stimuli lacking ecological relevance. Here, we investigated the use of egocentric and allocentric information during memory-guided reaching to images of naturalistic scenes. Participants encoded a breakfast scene containing six objects on a table (local objects) and three objects in the environment (global objects). After a 2 s delay, a visual test scene reappeared for 1 s in which one local object was missing (the target) and, of the remaining objects, one, three, or five local objects or one of the global objects were shifted to the left or to the right. The offset of the test scene prompted participants to reach to the target as precisely as possible. Only local objects served as potential reach targets and thus were task-relevant. When shifting objects, we predicted accurate reaching if participants used only egocentric coding of object position, and systematic shifts of reach endpoints if allocentric information was used for movement planning. We found that reaching movements were largely affected by allocentric shifts, with endpoint errors increasing in the direction of the object shifts as the number of shifted local objects increased. No effect occurred when only one local or one global object was shifted. Our findings suggest that allocentric cues are indeed used by the brain for memory-guided reaching towards targets in naturalistic visual scenes. Moreover, the integration of egocentric and allocentric object information appears to depend on the extent of the changes in the scene.
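As an illustration of what "optimal integration" typically means in the Bayesian cue-combination framework invoked above, the reliability-weighted (maximum-likelihood) rule below is a minimal sketch; the symbols are generic stand-ins for the egocentric and allocentric position estimates and their variances, not the specific model fitted in this study.

\[
\hat{x} = w_{\mathrm{ego}}\, x_{\mathrm{ego}} + w_{\mathrm{allo}}\, x_{\mathrm{allo}},
\qquad
w_{\mathrm{ego}} = \frac{1/\sigma_{\mathrm{ego}}^{2}}{1/\sigma_{\mathrm{ego}}^{2} + 1/\sigma_{\mathrm{allo}}^{2}},
\qquad
w_{\mathrm{allo}} = 1 - w_{\mathrm{ego}},
\]
\[
\sigma_{\hat{x}}^{2} = \frac{\sigma_{\mathrm{ego}}^{2}\,\sigma_{\mathrm{allo}}^{2}}{\sigma_{\mathrm{ego}}^{2} + \sigma_{\mathrm{allo}}^{2}}
\;\le\; \min\!\left(\sigma_{\mathrm{ego}}^{2},\, \sigma_{\mathrm{allo}}^{2}\right).
\]

On this reading, a growing influence of the displaced local objects on reach endpoints would correspond to a larger allocentric weight, i.e., a more reliable (lower-variance) allocentric cue when more of the scene shifts coherently; this is offered only as an interpretive sketch of the abstract's framework.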
In many daily activities, and especially in sport, it is necessary to predict the effects of others' actions in order to initiate appropriate responses. Recently, researchers have suggested that the action observation network (AON), including the cerebellum, plays an essential role in such anticipation, particularly in expert sport performers. In the present study, we examined the influence of task-specific expertise on the AON by investigating differences between two expert groups trained in different sports while they anticipated action effects. Altogether, 15 tennis and 16 volleyball experts anticipated the direction of observed tennis and volleyball serves while undergoing functional magnetic resonance imaging (fMRI). The expert group in each sport acted as novice controls for the other sport, with which they had little experience. Contrasting anticipation in the trained sport with anticipation in the corresponding untrained sport revealed stronger activation of AON areas (the superior parietal lobule, SPL, and the supplementary motor area, SMA) and particularly of cerebellar structures. Furthermore, neural activation within the cerebellum and the SPL was linearly correlated with participants' anticipation performance, irrespective of their specific expertise. For the SPL, this relationship also held when experts performed the anticipation task in their own domain. Notably, the stronger activation of the cerebellum, as well as of the SMA and the SPL, in the expertise conditions suggests that, when anticipating the effects of others' actions in their preferred sport, experts rely on the more fine-tuned perceptual-motor representations they have refined over years of training. The association of activation within the SPL and the cerebellum with task performance suggests that these areas are the predominant brain sites involved in fast motor predictions: the SPL reflects the processing of domain-specific contextual information, and the cerebellum the use of a predictive internal model to solve the anticipation task.