1. The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
2. Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
3. Hap2u, Saint-Martin-d’Hères, France
4. Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland
5. Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
6. Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
Sensory substitution is an effective means to rehabilitate many visual functions after visual impairment or blindness. Tactile information, for example, is particularly useful for functions such as reading, mental rotation, shape recognition, or exploration of space. Extant haptic technologies typically rely on real physical objects or pneumatically driven renderings and thus provide a limited library of stimuli to users. New developments in digital haptic technologies now make it possible to actively simulate an unprecedented range of tactile sensations. We provide a proof-of-concept for a new type of technology (hereafter haptic tablet) that renders haptic feedback by modulating the friction of a flat screen through ultrasonic vibrations of varying shapes to create the sensation of texture when the screen is actively explored. We reasoned that participants should be able to create mental representations of letters presented in normal and mirror-reversed haptic form without the use of any visual information and to manipulate such representations in a mental rotation task. Healthy sighted, blindfolded volunteers were trained to discriminate between two letters (either L and P, or F and G; counterbalanced across participants) on a haptic tablet. They then tactually explored all four letters in normal or mirror-reversed form at different rotations (0°, 90°, 180°, and 270°) and indicated letter form (i.e., normal or mirror-reversed) by pressing one of two mouse buttons. We observed the typical effect of rotation angle on object discrimination performance (i.e., greater deviation from 0° resulted in worse performance) for trained letters, consistent with mental rotation of these haptically-rendered objects. We likewise observed generally slower and less accurate performance with mirror-reversed compared to prototypically oriented stimuli. Our findings extend existing research in multisensory object recognition by indicating that a new technology simulating active haptic feedback can support the generation and spatial manipulation of mental representations of objects. Thus, such haptic tablets can offer a new avenue to mitigate visual impairments and train skills dependent on mental object-based representations and their spatial manipulation.
Introduction
In everyday life, vision supports crucial functions that enable us to successfully interact with our environment, such as manipulation of objects as well as spatial orientation and navigation in space. These functions depend on the correct acquisition and maintenance of mental representations of our environment and the objects within it. In sighted individuals, vision typically dominates these functions and spatial abilities more generally (e.g., Welch and Warren, 1980; Knudsen and Knudsen, 1989; Schutz and Lipscomb, 2007). However, visual impairments affect nearly 300 million people globally, with another ~36 million suffering from complete loss of vision (World Health Organization, 2000). This calls for effective rehabilitation methods, including sensory substitution approaches.
Studies in visually impaired individuals document extensive neuroplasticity both of non-visual functions and within visual cortices. For example, visual deprivation enhances tactile acuity not only in sighted individuals (Pascual-Leone and Hamilton, 2001; Merabet et al., 2007; Norman and Bartholomew, 2011), but also in blind and visually impaired patients (Goldreich and Kanics, 2003; Lederman and Klatzky, 2009). It is now well-established that cross-modal plasticity can promote functions that are supported predominantly by vision. Tactile information has been most widely utilized to date to train functions such as reading (e.g., Braille) and exploration of space (e.g., white cane). Specifically, object geometry and form judgments based on haptics have been demonstrated to activate visual areas along the so-called dorsal pathway (Prather et al., 2004; Sathian, 2005). Furthermore, visual areas have been found to be activated during Braille reading in functional imaging studies (Sadato et al., 1996, 1998, 2002; Burton et al., 2002; Amedi et al., 2003). Sathian et al. (1997) were the first to demonstrate, via haemodynamic imaging, that discrimination of the orientation of tactile gratings activates the same extrastriate areas as those active during visual orientation discrimination. This cross-modal functional recruitment of nominally visual cortices for tactile perceptual functions most likely results from cross-modal plasticity operating via the interplay between unisensory and multisensory neurons (Amedi et al., 2001; Kitada et al., 2006). More generally, there is now convergent and consistent evidence for visual cortex activation during tactile perception in both blind and sighted individuals (reviewed in Lacey and Sathian, 2014).
In addition to evidence pointing to the involvement of visual cortices in tactile discrimination, spatial functions can also be achieved in a modality-independent fashion, including based solely on tactile information. For example, studies of mental rotation where participants need to judge whether an image is portrayed in its normal or mirror-reversed form demonstrate a typical increase in reaction times (RTs) with increasing rotation of the image (Shepard and Metzler, 1971; Lacey et al., 2007a,b). Marmor and Zaback (1976) showed that the same mental rotation effect occurs with tactile stimuli. This and other findings have led to the belief that spatial properties can be encoded in a modality-independent format (Lacey and Campbell, 2006), and may thus engage a common spatial representational system (Lacey and Sathian, 2012; Lee Masson et al., 2016).
The discovery of the modality-independence of spatial representations has opened a new avenue for vision rehabilitation, i.e., tactile-based sensory substitution. One particularly striking example is the successful use of haptic stimulation of the tongue with the tongue-display unit (TDU) to retrain “tactile-visual” acuity (Chebat et al., 2007). The TDU is a sensory substitution device (SSD) that converts a visual stimulus into electro-tactile pulses delivered to the tongue via a grid of electrodes. Visually impaired individuals were able to discriminate various orientations of the letter E (i.e., the Snellen E test) based solely on stimulation with the TDU (Chebat et al., 2007). While such efforts are impressive, they risk remaining limited in their applications. This limitation is at least partially addressed by new technologies for the digitization of information, such as tablets digitally rendering tactile information (e.g., Xplore Touch). Such digitization of information has led to significant improvements in healthcare, including reduced costs and increased accessibility and reliability of treatments (Noffsinger and Chin, 2000; Dwivedi et al., 2002). Currently, visually impaired individuals require persistent training for the rehabilitation of visual functions that support basic everyday activities such as cooking, cleaning, and navigating one’s environment. This involves numerous hours of work together with therapists. Digitizing the delivery of therapeutic procedures would likely allow visually impaired patients to be more independent and thus more successful in their training. For one, therapeutic programs could be created online and then easily downloaded onto a digital device. Second, patients would be able to practice and improve their tactile acuity, as well as their form and object perception abilities, without the constant presence of a therapist.
It is known that spatial operations such as mental rotation can be supported solely by tactile stimuli such as Plexiglas forms or wooden blocks (Marmor and Zaback, 1976; Carpenter and Eisenberg, 1978; for recent reviews, see Prather and Sathian, 2002; Lacey et al., 2007a). By contrast, it is unknown whether individuals can create and manipulate mental representations of objects based solely on simulated haptic input. If spatial functions can be rehabilitated with digital devices, this should substantially improve both the speed and the extent of recovery, as well as the independence of visually impaired patients. Haptic tablets thus promise to open up unprecedented possibilities for the recovery of visual functions in blind and visually impaired individuals, given the ease of delivering digital information and of transferring learned information from the tablet to veridical environments. Being able to mentally rotate digitally presented haptic objects would serve as an important proof-of-concept for the successful acquisition of a representation of a simulated haptic space.
To this end, the present study investigated whether participants would be able to successfully mentally rotate representations of letters in their normal and mirror-reversed forms, experienced solely via digitally-rendered haptic feedback. We focused on the discrimination of letter forms (i.e., normal vs. mirror-reversed) because judgments of letter identity (for example, the distinction between a letter and a number) do not necessarily implicate mental rotation (White, 1980). We hypothesized that normally-sighted participants would show the prototypical mental rotation effect, with accuracy steadily decreasing (and RTs increasing) with increasing angular disparity from the prototypical upright letter orientation, which would translate into a main effect of angle. Moreover, we expected participants to perform better with letters in their normal form than with mirror-reversed letters, due to the well-documented effect of stimulus familiarity on mental rotation (White, 1980; Bethell-Fox and Shepard, 1988; Prather and Sathian, 2002). We also expected a main effect of training, i.e., better performance with letters on which participants had been trained compared to untrained letters.
Materials and Methods
Participants
All participants provided written informed consent to procedures approved by the cantonal ethics committee in accordance with the Declaration of Helsinki. We tested 17 adults (12 women and 5 men; age range 25–37 years, mean ± SD: 28.9 ± 3.5 years), who volunteered for our experiment. Participants reported normal or corrected-to-normal vision. No participant had a history of, or current, neurological or psychiatric illness. Handedness was assessed via the short form of the Edinburgh Handedness Inventory (Oldfield, 1971). Two participants were left-handed; the remainder were right-handed. We also asked participants about their experience playing a musical instrument, given evidence of increased cortical representation of the hands of musical instrument players (see e.g., Elbert et al., 1995). Nine participants were active instrument players (i.e., actively playing at the time of the testing session), five had formerly played instruments (i.e., during childhood, adolescence, or early adulthood, but were not actively practicing at the time of testing), and three played no instruments.
Apparatus
Haptic stimulation was delivered via a tablet with a capacitive 7-inch TFT touchscreen (WaveShare, Rev C) with a resolution of 1,024 × 600 pixels. The tablet is controlled by a Raspberry Pi 3-based system running Raspbian (Linux), with a Broadcom ARMv7 quad-core 1.2 GHz processor and 1 GB of RAM. The tablet comes with a haptic creation tool, software that allows users to control haptic textures. Several APIs based on C++ or Java are also installed, including library tools for implementing haptics in other applications. Figures in JPEG format were re-coded into haptic format using a kit written in C++. For more technical details on the rendering of the haptic feedback, see Vezzoli et al. (2016, 2017) and Rekik et al. (2017).
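For illustration, the cell-based conversion that the kit performs (see Figure 1B) can be sketched in Python. This is a reconstruction under our stated assumptions (8 × 8 pixel cells, one shared texture for all non-white pixels), not the Hap2U C++ code:

```python
# Illustrative sketch of the image-to-haptic conversion (not the Hap2U kit):
# scan the image in 8 x 8 pixel cells and flag any cell containing non-white
# pixels as carrying the single square-wave texture.
import numpy as np
from PIL import Image

CELL = 8  # cell size in pixels (see Figure 1B)

def image_to_haptic_map(path, white_threshold=250):
    """Return a boolean grid: True where a cell should carry the texture."""
    img = np.asarray(Image.open(path).convert("L"))  # grayscale pixel values
    rows, cols = img.shape[0] // CELL, img.shape[1] // CELL
    grid = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            cell = img[r * CELL:(r + 1) * CELL, c * CELL:(c + 1) * CELL]
            grid[r, c] = bool((cell < white_threshold).any())  # any non-white pixel
    return grid
```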
Stimuli
Stimuli consisted of four capital letters—L, P, F, and G—created in Paint (see e.g., Carpenter and Eisenberg, 1978; see also Figure 1). We chose these capital letters because their mirror images cannot be confused with other letters, unlike, for example, the lower-case “d,” whose mirror image is “b,” and vice versa (Corballis and McLaren, 1984). Moreover, these letters have previously been used in mental rotation tasks (Cohen and Polich, 1989; Rusiak et al., 2007; Weiss et al., 2009), including tasks with tactile objects (e.g., Carpenter and Eisenberg, 1978). The letters were resized to always be presented centrally on the screen of the haptic tablet, which has a pixel resolution smaller than that used to generate the images. Letters were then rotated to 0°, 90°, 180°, and 270° and mirrored in Matlab. Letter size was 935 × 509 pixels. With regard to the image-to-haptic conversion, the letters appeared centrally on a white background. White pixels did not produce the feeling of a texture on the finger (i.e., “empty” pixels). All non-white pixels were coded with the same haptic texture, created using the Hap2U pre-installed Texture Editor software. The ultrasonic vibration was given a square waveform, as this offers the most intense and rapid reduction of the friction of the screen under the finger, conferring a rather sharp, pointy sensation, in contrast to a sinusoidal waveform, which would confer a rather smooth percept. The spatial period of the square-wave signal was 3,500 μm (considered a “coarse” texture; see Hollins and Risner, 2000), and the amplitude was set at 100%, i.e., roughly 2 μm (as the friction reduction plateaus at this value; see e.g., Sednaoui et al., 2017).
Figure 1. (A) Stimuli used in the experiment. These images are based on reverse translation of the haptic “image.” The checkered portions denote regions with no haptic texture. The letters were created to have the same proportions on the haptic tablet screen, and thus they appear slightly distorted. Normal stimuli and their mirror images were rotated to 90°, 180°, and 270° and were presented individually to participants on the tablet. (B) Transformation of the stimuli into haptic renderings was achieved via a pre-installed kit. The transformation takes a cell (8 × 8 pixels) from the picture file and codes each cell into a texture with the help of a haptic library in which different textures are defined. Participants could then feel the vibrations on the tablet screen only at those places where cells had been transformed. (C) Experimental setup. Participants were blindfolded and wore noise-canceling headphones in order to prevent any external stimulation from interfering with the haptic sensation. After exploring the letter on the tablet for 30 s with a single finger, they indicated whether the letter was normal or mirrored via a computer mouse button-press with their non-dominant hand, which then initiated the next trial.
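As noted above, the rotations and mirror reversals were produced in Matlab; for illustration, an equivalent transformation in Python with Pillow (file names hypothetical) could read:

```python
# Generate the 16 stimuli: 4 letters x (normal, mirrored) x 4 rotation angles.
from PIL import Image

for letter in ("L", "P", "F", "G"):
    base = Image.open(f"{letter}.jpg")  # 935 x 509 px letter on a white ground
    for mirrored in (False, True):
        img = base.transpose(Image.FLIP_LEFT_RIGHT) if mirrored else base
        for angle in (0, 90, 180, 270):
            out = img.rotate(angle, expand=True, fillcolor="white")
            form = "mirror" if mirrored else "normal"
            out.save(f"{letter}_{form}_{angle}.jpg")
```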
Procedure and Task
Participants were tested in a sound-attenuated, darkened room (WhisperRoom MDL 102126E). They were blindfolded and wore noise-canceling headphones (Bose QuietComfort 2) in order to block any residual light and the sounds of the ultrasonic vibrations produced by the tablet. None of the participants had any prior visual or haptic exposure to the stimuli used in the paradigm, minimizing any cross-modal facilitation (Lacey et al., 2007a,b). The task was a two-alternative forced choice requiring discrimination of normal from mirror-reversed letters via a mouse click (left button for the normal form, right button for the mirrored form; identical for all participants). Participants were instructed to use a finger of their dominant hand for tablet exploration and their non-dominant hand for responses. The task was to feel the letter on the haptic tablet for 30 s, recognize the letter, and, if needed, mentally rotate it to its 0° form in order to decide whether the normal or the mirror-reversed form had been presented. We used explicit instructions to mentally rotate, since explicit instruction has been reported not to determine whether a mental rotation effect is observed (reviewed in Prather and Sathian, 2002). Stimuli were presented for 30 s. Participants then had 20 s to respond and were instructed to respond as quickly and as accurately as possible. After the response, the next trial was initiated, preceded by an inter-trial interval ranging randomly between 500 and 1,000 ms.

Each participant completed three blocks of training, each comprising 16 trials (two per condition; informed by a pilot study). Participants were trained on a pair of letters—either L and P or F and G—assigned in a counterbalanced manner across individuals. We paired these letters given their perceptual closeness, which allowed a progressive learning procedure, and we restricted training to one pair in order to investigate skill transfer to new, untrained stimuli. Participants were first trained to explore the tablet screen via lateral sweeps (Stilla and Sathian, 2008; see Lederman and Klatzky, 1993, for a discussion of which tactile exploration strategies are particularly appropriate for disclosing specific object characteristics, and Hollins and Risner, 2000, for a discussion of how dynamic vs. static exploration affects coarse (>100 μm) as compared to fine texture discrimination), using only one finger at a time. Participants were allowed to change the finger used for exploration, owing to a common complaint, during piloting and the training blocks, of adaptation of the tactile sensation; they were not, however, allowed to change the hand used for exploration. Participants were then taught to discriminate horizontal from vertical lines and, finally, to discriminate between the two letters on which they were trained. The experimenter gave verbal instructions and feedback throughout the training session.

The testing phase comprised four blocks of 32 trials, i.e., 128 trials in total per participant (eight trials for each of the 16 conditions). Participants were allowed to take regular breaks between blocks to maintain concentration and prevent fatigue. Stimulus delivery and behavioral response collection were controlled by PsychoPy software (Peirce, 2007).
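The trial logic described above can be summarized in a short PsychoPy-style sketch. This is illustrative rather than our actual experiment script; in particular, present_haptic_letter() is a hypothetical stand-in for the tablet-side rendering call:

```python
# Illustrative trial loop (not the actual experiment script).
import random
from psychopy import core, event

EXPLORE_S, RESPOND_S = 30.0, 20.0   # exploration and response windows (s)

def present_haptic_letter(stimulus):
    """Hypothetical stand-in for the tablet-side rendering call."""
    pass

def run_trial(mouse, stimulus):
    present_haptic_letter(stimulus)            # stimulus onset on the tablet
    core.wait(EXPLORE_S)                       # 30 s of one-finger exploration
    present_haptic_letter(None)                # stimulus offset
    clock = core.Clock()
    mouse.clickReset()
    while clock.getTime() < RESPOND_S:         # up to 20 s to respond
        left, _, right = mouse.getPressed()
        if left:
            return "normal", clock.getTime()   # left button = normal form
        if right:
            return "mirror", clock.getTime()   # right button = mirrored form
    return None, None                          # missed trial

mouse = event.Mouse(visible=False)
for stimulus in ["L_normal_90", "P_mirror_180"]:  # stand-in for the trial list
    response, rt = run_trial(mouse, stimulus)
    core.wait(random.uniform(0.5, 1.0))        # inter-trial interval 500-1,000 ms
```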
Behavioral Analysis
Data were pre-processed in Matlab and analyzed in R (R Core Team, 2018) and SPSS (IBM Corp, 2017). First, we excluded all trials with RTs longer than 15 s (5% of trials), as well as missed trials (2.5% of trials), i.e., trials where no response was given within 20 s. We then excluded remaining outlier trials on a single-subject basis (i.e., for each subject and condition), applying a mean ± 2 standard deviations criterion to the RTs (2.7% of trials; see Ratcliff, 1993; Field et al., 2012). Accuracy was then calculated. RT data were not analyzed further, since responses were only provided after stimulus offset and a subsequent cue. Data from three participants were excluded due to very low accuracy in the 0° condition (<50%). Having found no significant deviation from normality or homoscedasticity, we analyzed accuracy with a 2 × 2 × 4 repeated-measures ANOVA with factors Training (trained/untrained), Condition (normal/mirror), and Angle (0°, 90°, 180°, 270°).
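For illustration, this exclusion-and-ANOVA pipeline (run in Matlab, R, and SPSS in our case) could be re-implemented along the following lines in Python; file and column names are hypothetical:

```python
# Sketch of the trial-exclusion and repeated-measures ANOVA pipeline
# (pandas/statsmodels stand-ins for the Matlab/R/SPSS code used here).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

trials = pd.read_csv("trials.csv")    # subject, training, condition, angle, rt, correct
trials = trials[trials["rt"].notna()]  # drop missed trials (no response within 20 s)
trials = trials[trials["rt"] <= 15.0]  # drop trials with RTs longer than 15 s

# Per-subject, per-condition outlier rejection at mean +/- 2 SD of RT.
grp = trials.groupby(["subject", "training", "condition", "angle"])["rt"]
mu, sd = grp.transform("mean"), grp.transform("std")
trials = trials[(trials["rt"] - mu).abs() <= 2 * sd]

# Accuracy per cell of the 2 (Training) x 2 (Condition) x 4 (Angle) design.
acc = (trials.groupby(["subject", "training", "condition", "angle"])["correct"]
             .mean().reset_index(name="accuracy"))

anova = AnovaRM(acc, depvar="accuracy", subject="subject",
                within=["training", "condition", "angle"]).fit()
print(anova)
```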
Results
Mean accuracy rates are displayed in Figure 2. The 2 × 2 × 4 ANOVA with factors Training (trained/untrained), Condition (normal/mirror), and Angle (0°, 90°, 180°, 270°) revealed a significant interaction and two main effects. The Angle × Training interaction was significant (F(1,13) = 4.912, p < 0.05, ηp² = 0.274), and there were main effects of Training (F(1,13) = 5.88, p = 0.03, ηp² = 0.314), with generally higher accuracy for trained vs. untrained letters, and of Condition (F(1,13) = 6.02, p = 0.02, ηp² = 0.317), with generally higher accuracy for normal compared to mirrored stimuli. Given this significant interaction, we carried out separate 2 × 4 ANOVAs (Condition × Angle) for trained and untrained letters. Untrained letters revealed no interactions or main effects (all F ≤ 0.6). By contrast, trained letters exhibited main effects of Condition (F(1,13) = 11.46, p < 0.01, ηp² = 0.470) and of Angle (F(3,13) = 6.625, p = 0.02, ηp² = 0.338). Trained letters in their normal form yielded higher accuracy than trained letters in their mirrored form, and accuracy generally decreased with increasing angular disparity. Performance on untrained normal letters was more similar to performance on mirrored letters than to that on normal trained letters.
Figure 2. Group-averaged (N = 14) accuracy data for normal and mirrored stimuli (SEM indicated). The left column displays results for normal stimuli, while the right displays results for mirrored stimuli. Red lines refer to trained stimuli, while the blue lines represent untrained stimuli.
Discussion
We provide the first demonstration that digitally-rendered haptic stimuli can support the creation of mental representations of objects that can then be spatially manipulated. Participants’ accuracy decreased with greater angular disparity of the presented letters from upright, indicating a prototypical mental rotation effect for trained letters (Shepard and Metzler, 1971). Moreover, letters in their mirrored form were discriminated less accurately than letters in their normal form, consistent with the stimulus familiarity effect previously found to influence mental rotation with real visual stimuli (White, 1980). Specifically, normally sighted participants performed significantly better when tested on previously trained compared to untrained letters. This effect was observed for letters presented in their canonical form, and less so for letters in their mirrored form. In addition, our results show that a short training session of about 45 min on the haptic tablet was sufficient to significantly improve the ability to identify the form of haptic letters. These results extend previous efforts to support rehabilitation of spatial functions using SSDs and open new avenues for applications of digital haptic technology.
Mental rotation of objects created by haptic feedback successfully modulated the accuracy of object recognition: increasing angular disparity away from the prototypical orientation linearly reduced recognition accuracy. As expected, performance was significantly higher for normal letters compared to mirrored letters, and for trained letters compared to untrained letters. Accuracy for letters in their normal upright form decreased up to 180°, with a slight increase for stimuli rotated to 270°. Similar results have previously been found in mental rotation tasks with various kinds of stimuli (see e.g., Kosslyn et al., 1998; Hyun and Luck, 2007; Milivojevic et al., 2011; Zeugin et al., 2017), further corroborating that our experimental manipulation was effective and that mental rotation of our haptic letter stimuli indeed took place. The significant interaction between the factors Condition and Angle indicates that mental rotation was more successful for familiar than for unfamiliar stimuli. To be precise, given that the stimuli were letters, they can generally be considered familiar; however, only letters presented in their normal form can be considered overlearned (White, 1980), whereas letters in their mirrored form can be considered unfamiliar, as individuals seldom encounter mirrored letters in everyday life. In addition, the significant effect of the factor Training indicates that, with only a little training on the task and limited exposure to haptic stimulation before testing, participants were able to improve their performance, which was not the case for untrained letters.
Our findings replicate and extend prior studies of mental rotation based on haptic information. Mental rotation has been studied with Plexiglas letters and objects (Carpenter and Eisenberg, 1978; Hunt et al., 1989), abstract Braille-like stimuli (Röder et al., 1997), and haptic versions of the Shepard and Metzler (1971) stimuli (Robert and Chevrier, 2003). These and similar works have likewise shown that performance worsens with increasing angular displacement from upright, independently of whether participants were explicitly instructed to use a mental rotation strategy (reviewed in Prather and Sathian, 2002). By contrast, evidence of mental rotation with tactile stimuli does appear to vary with task: tasks requiring mirror-image discrimination yield mental rotation effects, whereas those requiring identification of isolated stimuli generally do not (Prather and Sathian, 2002). Our study required participants to discriminate whether each stimulus was normal vs. mirror-reversed, and we indeed observed a mental rotation effect for trained letters. Our accuracy rates are consistent with, albeit somewhat lower than, those reported for sighted participants presented with physical objects (~80%–90% in Marmor and Zaback, 1976; Röder et al., 1997; Robert and Chevrier, 2003). However, two important distinctions of our study are the use of digital haptics and the fact that participants could only use a single finger to explore the stimulus. Ongoing efforts aim to enhance the haptic perceptual qualia as well as to permit exploration with multiple fingers simultaneously. This limitation may nonetheless help us home in on specific exploration and haptic learning strategies. Minimally, our results demonstrate that mental representations of haptic objects, and their discrimination, can be achieved using information acquired with a single digit.
To summarize, our results indicate that participants were able to mentally manipulate internal representations of familiar stimuli that they had learned solely haptically, through interaction with a digitally created texture. While our results have potential applications in the simulation of tactile perception in virtual reality, we do not have the space to discuss these at length here. Instead, we focus below on the important implications of our results for cognitive models of spatial functions, as well as for the rehabilitation of these functions in patients suffering from impairments due to vision loss.
Implications for Models of Spatial Functions
Our results have implications for current models of the cortical mechanisms that decode spatial characteristics of objects. Evidence has been accumulating for a decoding mechanism that is modality-independent, with spatial features of objects and spaces being communicated through visual (Koenderink et al., 1992; Erens et al., 1993), haptic (Kappers and Koenderink, 1999; Prather et al., 2004; Snow et al., 2014; Lee Masson et al., 2018), and auditory (Amedi et al., 2002, 2007) information alone, as well as through multisensory information (Lacey et al., 2009; Sathian et al., 2011; Lacey and Sathian, 2014; Lee Masson et al., 2016, 2017). Moreover, it has been demonstrated that multisensory regions, such as V1, IPS, and LOC, specifically encode spatial characteristics such as shape, but not object identity (Amedi et al., 2002). Our results further support such modality-independent models of spatial representations. In particular, we were able to convey the shapes of haptic objects (i.e., letters) to participants through unisensory haptic stimuli. This indicates that spatial features of objects, and specifically object shape, can be decoded from a variety of stimulus formats—be they visual, auditory, or somatosensory. Admittedly, sensory impressions from haptic and visual information are very different (Rose, 1994), and vision and touch use different metrics and geometries (Kappers and Koenderink, 1999). Nevertheless, there is substantial neuroimaging evidence showing that vision and touch are intimately connected even if there is no direct, one-to-one mapping (see Amedi et al., 2005; Sathian, 2005 for reviews). For one, cerebral cortical areas previously regarded as exclusively unisensory in nature are activated by sensory inputs in a task- and stimulus-specific manner (Lacey et al., 2007a). New evidence also supports high similarity between visual and haptic representations of object perceptual spaces (Cooke et al., 2007; Wallraven et al., 2014; Lee Masson et al., 2016). These results have been further complemented by neuroimaging studies that corroborated the high correlations between perceptual spaces reconstructed using tactile vs. visual information (Snow et al., 2014; Smith and Goodale, 2015). Indeed, clinical lesion studies demonstrate that lesions of visual brain areas, such as the inferior occipito-temporal cortex or the anterior intraparietal sulcus, are accompanied by tactile agnosia for objects despite intact somatosensory cortical areas (Feinberg et al., 1986; James et al., 2002). Collectively, our results support a task-specific, rather than stimulus-specific, organization of spatial functions.
Implications for Rehabilitation of Spatial Functions
Our study further validates efforts toward the rehabilitation of spatial functions through SSDs. Cross-modal and multisensory integration are the drivers of neuroplasticity in visual areas (Kirkwood et al., 1996; Amedi et al., 2004; Merabet et al., 2005; Pascual-Leone et al., 2005; Murray et al., 2015), promoting a task-selective and modality-independent re-specialization of these cortical structures. Besides the known applications of tactile sensory substitution, such as the Braille alphabet, the white cane, or the TDU, our results open new avenues for mitigating deficits of spatial functions in the blind and visually impaired. Indeed, it has been demonstrated numerous times that tactile information can support spatial functions in blind, visually impaired, and sighted subjects (Marmor and Zaback, 1976; Carpenter and Eisenberg, 1978; Grant et al., 2000; Ptito et al., 2005; Sathian, 2005; Chebat et al., 2007; Wan et al., 2010; Rovira et al., 2011; Vinter et al., 2012). The main innovation of our study, however, is the digitally simulated nature of the tactile stimuli. As digital information is easily recoded and reproduced, our results open exciting new avenues for increasing the accessibility of traditionally visual functions, such as reading and navigation, to visually impaired people.
In addition, such tactile substitution and multisensory techniques can also be used to retrain spatial functions after sight restoration. Specifically, patients with long-lasting cataracts show deficient depth perception after cataract removal (Hartung, 1962; Gregory, 2003; McKyton et al., 2015), despite normal low-level visual perception. Thus, as auditory information appears unable to convey such spatial information (Amedi et al., 2002), one could imagine complementing rehabilitation programs with tactile spatial information, in order to convey distance relations in a multisensory manner. Another exciting endeavor for further research, which we are now also pursuing in the laboratory, is the extent to which simulated haptic information can support the encoding of entire familiar and new spaces. In short, simulated tactile information has critical implications for applications in rehabilitation regimes. Besides being able to convey spatial relations, unlike auditory information, simulated tactile stimuli have the added value of accessibility. This benefit renders haptic tablets a promising solution for the mitigation of complete or partial loss of spatial abilities due to sensory loss or deprivation.
Conclusion
We trained normally-sighted participants on a haptic mirror-image discrimination task, using a new technology that digitally simulates texture. After only a short exposure and habituation to the new sensation, and relatively little training on the task, participants were able to mentally manipulate internal representations of the trained letters. This indicates that spatial functions and attributes such as object shape rely on a modality-independent mechanism, and that multiple sensory modalities are capable of supporting spatial computations. Furthermore, our results have important implications for research on virtually simulated sensory perception, as well as for neural plasticity and visual rehabilitation, and highlight the merit of restoring visual functions through SSDs.
Data Availability
The datasets generated for this study are available on request to the corresponding author.
Author Contributions
RT and MM designed the study. TR and CC provided essential equipment. RT, MM, and J-FK set up the stimuli and paradigm. RT, MM and NT completed the data analysis. RT, FA, JR, and MM interpreted the results. All authors contributed to the writing and final revisions to the manuscript.
Funding
This research has been supported by a grantor advised by Carigest (to MM) and by grants from the Swiss National Science Foundation (Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung, grants 320030-149982 and 320030-169206 to MM and grant PZ00P1_174150 to PM), the Fondation Asile des Aveugles (to MM and PM), and by the Pierre Mercier Foundation (to PM).
Conflict of Interest Statement
CC is the CEO and Founder of Hap2U. CC thus has commercial, proprietary and financial interest in Hap2U, which provided the haptic tablet device instrument related to this article. TR is a paid employee of Hap2U.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
We thank Dr. Olivier Lorentz for comments on an earlier version of this work.
References
Amedi, A., Floel, A., Knecht, S., Zohary, E., and Cohen, L. G. (2004). Transcranial magnetic stimulation of the occipital pole interferes with verbal processing in blind subjects. Nat. Neurosci. 7, 1266–1270. doi: 10.1038/nn1328
Amedi, A., Jacobson, G., Hendler, T., Malach, R., and Zohary, E. (2002). Convergence of visual and tactile shape processing in the human lateral occipital complex. Cereb. Cortex 12, 1202–1212. doi: 10.1093/cercor/12.11.1202
Amedi, A., Malach, R., Hendler, T., Peled, S., and Zohary, E. (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nat. Neurosci. 4, 324–330. doi: 10.1038/85201
Amedi, A., Merabet, L. B., Bermpohl, F., and Pascual-Leone, A. (2005). The occipital cortex in the blind: lessons about plasticity and vision. Curr. Dir. Psychol. Sci. 14, 306–311. doi: 10.1111/j.0963-7214.2005.00387.x
Amedi, A., Raz, N., Pianka, P., Malach, R., and Zohary, E. (2003). Early ‘visual’ cortex activation correlates with superior verbal memory performance in the blind. Nat. Neurosci. 6, 758–766. doi: 10.1038/nn1072
Amedi, A., Stern, W. M., Camprodon, J. A., Bermpohl, F., Merabet, L., Rotman, S., et al. (2007). Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nat. Neurosci. 10, 687–689. doi: 10.1038/nn1912
Bethell-Fox, C. E., and Shepard, R. N. (1988). Mental rotation: effects of stimulus complexity and familiarity. J. Exp. Psychol. Hum. Percept. Perform. 14, 12–23. doi: 10.1037//0096-1523.14.1.12
Burton, H., Snyder, A. Z., Conturo, T. E., Akbudak, E., Ollinger, J. M., and Raichle, M. E. (2002). Adaptive changes in early and late blind: a fMRI study of Braille reading. J. Neurophysiol. 87, 589–607. doi: 10.1152/jn.00285.2001
Carpenter, P. A., and Eisenberg, P. (1978). Mental rotation and the frame of reference in blind and sighted individuals. Percept. Psychophys. 23, 117–124. doi: 10.3758/BF03208291
Chebat, D. R., Rainville, C., Kupers, R., and Ptito, M. (2007). Tactile-‘visual’ acuity of the tongue in early blind individuals. Neuroreport 18, 1901–1904. doi: 10.1097/WNR.0b013e3282f2a63
Cohen, W., and Polich, J. (1989). No hemispheric differences for mental rotation of letters or polygons. Bull. Psychon. Soc. 27, 25–28. doi: 10.3758/BF03329887
Cooke, T., Jäkel, F., Wallraven, C., and Bülthoff, H. H. (2007). Multimodal similarity and categorization of novel, three-dimensional objects. Neuropsychologia 45, 484–495. doi: 10.1016/j.neuropsychologia.2006.02.009
Corballis, M. C., and McLaren, R. (1984). Winding one’s Ps and Qs: mental rotation and mirror-image discrimination. J. Exp. Psychol. Hum. Percept. Perform. 10, 318–327. doi: 10.1037/0096-1523.10.2.318
Dwivedi, A. N., Bali, R. K., James, A. E., Naguib, R., and Johnston, D. (2002). “Merger of knowledge management and information technology in healthcare: opportunities and challenges,” in IEEE CCECE2002. Canadian Conference on Electrical and Computer Engineering. Conference Proceedings (Cat. No.02CH37373) (Winnipeg, Canada: IEEE), 1194–1199.
Elbert, T., Pantev, C., Wienbruch, C., Rockstroh, B., and Taub, E. (1995). Increased cortical representation of the fingers of the left hand in string players. Science 270, 305–307. doi: 10.1126/science.270.5234.305
Erens, R., Kappers, A., and Koenderink, J. (1993). Perception of local shape from shading. Percept. Psychophys. 54, 145–156. doi: 10.3758/bf03211750
Feinberg, T. E., Rothi, L. J. G., and Heilman, K. M. (1986). Multimodal agnosia after unilateral left hemisphere lesion. Neurology 36, 864. doi: 10.1212/wnl.36.6.864
Field, A., Miles, J., and Field, Z. (2012). Discovering Statistics Using R. London: Sage Publications.
Goldreich, D., and Kanics, I. M. (2003). Tactile acuity is enhanced in blindness. J. Neurosci. 23, 3439–3445. doi: 10.1523/JNEUROSCI.23-08-03439.2003
Grant, A. C., Thiagarajah, M. C., and Sathian, K. (2000). Tactile perception in blind braille readers: a psychophysical study of acuity and hyperacuity using gratings and dot patterns. Percept. Psychophys. 62, 301–312. doi: 10.3758/bf03205550
Hartung, F. E. (1962). Review of M. von Senden, Space and Sight: The Perception of Space and Shape in the Congenitally Blind Before and After Operation (P. Heath, Trans.; Glencoe, IL: The Free Press, 1960). Soc. Forces 41, 91–93. doi: 10.2307/2572931
Hollins, M., and Risner, S. R. (2000). Evidence for the duplex theory of tactile texture perception. Percept. Psychophys. 62, 695–705. doi: 10.3758/bf03206916
Hunt, L. J., Janssen, M., Dagostino, J., and Gruber, B. (1989). Haptic identification of rotated letters using the left or right hand. Percept. Mot. Skills 68, 899–906. doi: 10.2466/pms.1989.68.3.899
Hyun, J. S., and Luck, S. J. (2007). Visual working memory as the substrate for mental rotation. Psychon. Bull. Rev. 14, 154–158. doi: 10.3758/bf03194043
IBM Corp. (2017). IBM SPSS Statistics for Windows. Armonk, NY: IBM Corp.
James, T. W., Humphrey, G. K., Gati, J. S., Servos, P., Menon, R. S., and Goodale, M. A. (2002). Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia 40, 1706–1714. doi: 10.1016/s0028-3932(02)00017-9
Kappers, A. M. L., and Koenderink, J. J. (1999). Haptic perception of spatial relations. Perception 28, 781–795. doi: 10.1068/p2930
Kirkwood, A., Rioult, M. G., and Bear, M. F. (1996). Experience-dependent modification of synaptic plasticity in visual cortex. Nature 381, 526–528. doi: 10.1038/381526a0
Kitada, R., Kito, T., Saito, D. N., Kochiyama, T., Matsumura, M., Sadato, N., et al. (2006). Multisensory activation of the intraparietal area when classifying grating orientation: a functional magnetic resonance imaging study. J. Neurosci. 26, 7491–7501. doi: 10.1523/JNEUROSCI.0822-06.2006
Knudsen, E. I., and Knudsen, F. (1989). Vision calibrates sound localization in developing barn owls. J. Neurosci. 9, 3306–3313. doi: 10.1523/JNEUROSCI.09-09-03306.1989
Koenderink, J. J., van Doorn, A. J., and Kappers, A. M. L. (1992). Surface perception in pictures. Percept. Psychophys. 52, 487–496. doi: 10.3758/bf03206710
Kosslyn, S. M., Di Girolamo, G. J., Thompson, W. L., and Alpert, N. M. (1998). Mental rotation of objects versus hands: neural mechanisms revealed by positron emission tomography. Psychophysiology 35, 151–161. doi: 10.1111/1469-8986.3520151
Lacey, S., and Campbell, C. (2006). Mental representation in visual/haptic crossmodal memory: evidence from interference effects. Q. J. Exp. Psychol. 59, 361–376. doi: 10.1080/17470210500173232
Lacey, S., Campbell, C., and Sathian, K. (2007a). Vision and touch: multiple or multisensory representations of objects? Perception 36, 1513–1521. doi: 10.1068/p5850
Lacey, S., Peters, A., and Sathian, K. (2007b). Cross-modal object recognition is viewpoint-independent. PLoS One 2:e890. doi: 10.1371/journal.pone.0000890
Lacey, S., and Sathian, K. (2012). “Representation of object form in vision and touch,” in The Neural Bases of Multisensory Processes, eds M. M. Murray and M. T. Wallace (Boca Raton, FL: CRC Press/Taylor and Francis), 179–190.
Lacey, S., and Sathian, K. (2014). Visuo-haptic multisensory object recognition, categorization, and representation. Front. Psychol. 5:730. doi: 10.3389/fpsyg.2014.00730
Lacey, S., Tal, N., Amedi, A., and Sathian, K. (2009). A putative model of multisensory object representation. Brain Topogr. 21, 269–274. doi: 10.1007/s10548-009-0087-4
Lederman, S. J., and Klatzky, R. L. (1993). Extracting object properties through haptic exploration. Acta Psychol. Amst. 84, 29–40. doi: 10.1016/0001-6918(93)90070-8
Lederman, S. J., and Klatzky, R. L. (2009). Haptic perception: a tutorial. Atten. Percept. Psychophys. 71, 1439–1459. doi: 10.3758/app.71.7.1439
Lee Masson, H., Bulthé, J., Op De Beeck, H. P., and Wallraven, C. (2016). Visual and haptic shape processing in the human brain: unisensory processing, multisensory convergence, and top-down influences. Cereb. Cortex 26, 3402–3412. doi: 10.1093/cercor/bhv170
Lee Masson, H., Kang, H. M., Petit, L., and Wallraven, C. (2018). Neuroanatomical correlates of haptic object processing: combined evidence from tractography and functional neuroimaging. Brain Struct. Funct. 223, 619–633. doi: 10.1007/s00429-017-1510-3
Lee Masson, H., Wallraven, C., and Petit, L. (2017). “Can touch this”: cross-modal shape categorization performance is associated with microstructural characteristics of white matter association pathways. Hum. Brain Mapp. 38, 842–854. doi: 10.1002/hbm.23422
Marmor, G. S., and Zaback, L. A. (1976). Mental rotation by the blind: does mental rotation depend on visual imagery? J. Exp. Psychol. Hum. Percept. Perform. 2, 515–521. doi: 10.1037//0096-1523.2.4.515
McKyton, A., Ben-Zion, I., Doron, R., and Zohary, E. (2015). The limits of shape recognition following late emergence from blindness. Curr. Biol. 25, 2373–2378. doi: 10.1016/j.cub.2015.06.040
Merabet, L. B., Rizzo, J. F., Amedi, A., Somers, D. C., and Pascual-Leone, A. (2005). What blindness can tell us about seeing again: merging neuroplasticity and neuroprostheses. Nat. Rev. Neurosci. 6, 71–77. doi: 10.1038/nrn1586
Merabet, L. B., Swisher, J. D., McMains, S. A., Halko, M. A., Amedi, A., Pascual-Leone, A., et al. (2007). Combined activation and deactivation of visual cortex during tactile sensory processing. J. Neurophysiol. 97, 1633–1641. doi: 10.1152/jn.00806.2006
Milivojevic, B., Hamm, J. P., and Corballis, M. C. (2011). About turn: how object orientation affects categorisation and mental rotation. Neuropsychologia 49, 3758–3767. doi: 10.1016/j.neuropsychologia.2011.09.034
Murray, M. M., Matusz, P. J., and Amedi, A. (2015). Neuroplasticity: unexpected consequences of early blindness. Curr. Biol. 25, R998–R1001. doi: 10.1016/j.cub.2015.08.054
Noffsinger, R., and Chin, S. (2000). Improving the delivery of care and reducing healthcare costs with the digitization of information. J. Healthc. Inf. Manag. 14, 23–30.
Norman, J. F., and Bartholomew, A. N. (2011). Blindness enhances tactile acuity and haptic 3-D shape discrimination. Atten. Percept. Psychophys. 73, 2323–2331. doi: 10.3758/s13414-011-0160-4
Oldfield, R. C. (1971). The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9, 97–113. doi: 10.1016/0028-3932(71)90067-4
Pascual-Leone, A., Amedi, A., Fregni, F., and Merabet, L. B. (2005). The plastic human brain cortex. Annu. Rev. Neurosci. 28, 377–401. doi: 10.1146/annurev.neuro.27.070203.144216
Pascual-Leone, A., and Hamilton, R. (2001). “The metamodal organization of the brain,” in Vision: From Neurons to Cognition Progress in Brain Research, eds C. Casanova and M. Ptito (Amsterdam: Elsevier Science), 427–445.
Peirce, J. W. (2007). PsychoPy—psychophysics software in Python. J. Neurosci. Methods 162, 8–13. doi: 10.1016/j.jneumeth.2006.11.017
Prather, S. C., and Sathian, K. (2002). Mental rotation of tactile stimuli. Cogn. Brain Res. 14, 91–98. doi: 10.1016/s0926-6410(02)00063-0
Prather, S. C., Votaw, J. R., and Sathian, K. (2004). Task-specific recruitment of dorsal and ventral visual areas during tactile perception. Neuropsychologia 42, 1079–1087. doi: 10.1016/j.neuropsychologia.2003.12.013
Ptito, M., Moesgaard, S. M., Gjedde, A., and Kupers, R. (2005). Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind. Brain 128, 606–614. doi: 10.1093/brain/awh380
Ratcliff, R. (1993). Methods for dealing with reaction time outliers. Psychol. Bull. 114, 510–532. doi: 10.1037//0033-2909.114.3.510
R Core Team. (2018). R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Available online at: https://www.R-project.org
Rekik, Y., Vezzoli, E., and Grisoni, L. (2017). “Understanding users’ perception of simultaneous tactile textures,” in MobileHCI’17 Proceedings of the 19th International Conference on Human-Computer Interaction With Mobile Devices and Services (Vienna, Austria: ACM), 229–238.
Robert, M., and Chevrier, E. (2003). Does men’s advantage in mental rotation persist when real three-dimensional objects are either felt or seen? Mem. Cognit. 31, 1136–1145. doi: 10.3758/BF03196134
Röder, B., Rösler, F., and Hennighausen, E. (1997). Different cortical activation patterns in blind and sighted humans during encoding and transformation of haptic images. Psychophysiology 34, 292–307. doi: 10.1111/j.1469-8986.1997.tb02400.x
Rose, S. A. (1994). “From hand to eye: findings and issues in infant cross-modal transfer,” in The Development of Intersensory Perception: Comparative Perspectives, eds D. J. Lewkowicz and R. Lickliter (Hove: Lawrence Erlbaum Associates Ltd.), 265–284.
Rovira, K., Deschamps, L., and Baena-Gomez, D. (2011). Mental rotation in blind and sighted adolescents: the effects of haptic strategies. Rev. Eur. Psychol. Appl. 61, 153–160. doi: 10.1016/j.erap.2011.05.001
Rusiak, P., Lachmann, T., Jaskowski, P., and Van Leeuwen, C. (2007). Mental rotation of letters and shapes in developmental dyslexia. Perception 36, 617–631. doi: 10.1068/p5644
Sadato, N., Okada, T., Honda, M., and Yonekura, Y. (2002). Critical period for cross-modal plasticity in blind humans: a functional MRI study. Neuroimage 16, 389–400. doi: 10.1006/nimg.2002.1111
Sadato, N., Pascual-Leone, A., Grafman, J., Deiber, M. P., Ibañez, V., and Hallett, M. (1998). Neural networks for Braille reading by the blind. Brain 121, 1213–1229. doi: 10.1093/brain/121.7.1213
Sadato, N., Pascual-Leone, A., Grafman, J., Ibañez, V., Deiber, M. P., Dold, G., et al. (1996). Activation of the primary visual cortex by Braille reading in blind subjects. Nature 380, 526–528. doi: 10.1038/380526a0
Sathian, K. (2005). Visual cortical activity during tactile perception in the sighted and the visually deprived. Dev. Psychobiol. 46, 279–286. doi: 10.1002/dev.20056
Sathian, K., Lacey, S., Stilla, R., Gibson, G. O., Deshpande, G., Hu, X., et al. (2011). Dual pathways for haptic and visual perception of spatial and texture information. Neuroimage 57, 462–475. doi: 10.1016/j.neuroimage.2011.05.001
Sathian, K., Zangaladze, A., Hoffman, J. M., and Grafton, S. T. (1997). Feeling with the mind’s eye. Neuroreport 8, 3877–3881. doi: 10.1097/00001756-199712220-00008
Schutz, M., and Lipscomb, S. (2007). Hearing gestures, seeing music: vision influences perceived tone duration. Perception 36, 888–898. doi: 10.1068/p5635
Sednaoui, T., Vezzoli, E., Dzidek, B., Lemaire-Semail, B., Chappaz, C., and Adams, M. (2017). Friction reduction through ultrasonic vibration part 2: experimental evaluation of intermittent contact and squeeze film levitation. IEEE Trans. Haptics 10, 208–216. doi: 10.1109/toh.2017.2671376
Shepard, R. N., and Metzler, J. (1971). Mental rotation of three-dimensional objects. Science 171, 701–703. doi: 10.1126/science.171.3972.701
Smith, F. W., and Goodale, M. A. (2015). Decoding visual object categories in early somatosensory cortex. Cereb. Cortex 25, 1020–1031. doi: 10.1093/cercor/bht292
Snow, J. C., Strother, L., and Humphreys, G. W. (2014). Haptic shape processing in visual cortex. J. Cogn. Neurosci. 26, 1154–1167. doi: 10.1162/jocn_a_00548
Stilla, R., and Sathian, K. (2008). Selective visuo-haptic processing of shape and texture. Hum. Brain Mapp. 29, 1123–1138. doi: 10.1002/hbm.20456
Vezzoli, E., Sednaoui, T., Amberg, M., Giraud, F., and Lemaire-Semail, B. (2016). “Texture rendering strategies with a high fidelity-capacitive visual-haptic friction control device,” in Proceedings, Part I, of the 10th International Conference on Human Haptic: Sensing and Touch Enabled Computer Applications (Cham: Springer), 251–260.
Vezzoli, E., Vidrih, Z., Giamundo, V., Lemaire-Semail, B., Giraud, F., Rodic, T., et al. (2017). Friction reduction through ultrasonic vibration part 1: modelling intermittent contact. IEEE Trans. Haptics 10, 196–207. doi: 10.1109/toh.2017.2671432
Vinter, A., Fernandes, V., Orlandi, O., and Morgan, P. (2012). Exploratory procedures of tactile images in visually impaired and blindfolded sighted children: how they relate to their consequent performance in drawing. Res. Dev. Disabil. 33, 1819–1831. doi: 10.1016/j.ridd.2012.05.001
Wallraven, C., Bülthoff, H. H., Waterkamp, S., van Dam, L., and Gaißert, N. (2014). The eyes grasp, the hands see: metric category knowledge transfers between vision and touch. Psychon. Bull. Rev. 21, 976–985. doi: 10.3758/s13423-013-0563-4
Wan, C. Y., Wood, A. G., Reutens, D. C., and Wilson, S. J. (2010). Congenital blindness leads to enhanced vibrotactile perception. Neuropsychologia 48, 631–635. doi: 10.1016/j.neuropsychologia.2009.10.001
Weiss, M. M., Wolbers, T., Peller, M., Witt, K., Marshall, L., Buchel, C., et al. (2009). Rotated alphanumeric characters do not automatically activate frontoparietal areas subserving mental rotation. Neuroimage 44, 1063–1073. doi: 10.1016/j.neuroimage.2008.09.042
Welch, R. B., and Warren, D. H. (1980). Immediate perceptual response to intersensory discrepancy. Psychol. Bull. 88, 638–667. doi: 10.1037//0033-2909.88.3.638
White, M. J. (1980). Naming and categorization of tilted alphanumeric characters do not require mental rotation. Bull. Psychon. Soc. 15, 153–156. doi: 10.3758/bf03334494
World Health Organization. (2000). Blindness: Vision 2020-The Global Initiative for the Elimination of Avoidable Blindness, Fact Sheet 213. Available online at: https://www.who.int/mediacentre/factsheets/fs213/en/
Keywords: haptic, object, multisensory, mental rotation, sensory substitution, low vision, vision impairment
Citation: Tivadar RI, Rouillard T, Chappaz C, Knebel J-F, Turoman N, Anaflous F, Roche J, Matusz PJ and Murray MM (2019) Mental Rotation of Digitally-Rendered Haptic Objects. Front. Integr. Neurosci. 13:7. doi: 10.3389/fnint.2019.00007
Received: 17 December 2018; Accepted: 25 February 2019;
Published: 14 March 2019.
Edited by:
Andreas K. Engel, University Medical Center Hamburg-Eppendorf, Germany
Reviewed by:
Giuseppe Pellizzer, University of Minnesota, United States
Yong Gu, Shanghai Institutes for Biological Sciences (CAS), China
Copyright © 2019 Tivadar, Rouillard, Chappaz, Knebel, Turoman, Anaflous, Roche, Matusz and Murray. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Ruxandra I. Tivadar, ruxandra-iolanda.tivadar@chuv.ch
Micah M. Murray, micah.murray@chuv.ch