Since the beginning of linguistics, the main object of study has been words, their composition, and their arrangement. In this vein, linguists have produced vast amounts of illuminating research on the history, typology, and theory of language structure, using words and their arrangement as data. The idea that the object of study is instantiated entirely in words and their arrangement is in large part a function of the technology of writing systems, which made it possible to record these parts of language so that they could be studied scientifically. As a result, the elements that can be recorded in writing, the principles behind them, and the meanings associated with them became the primary data, and the technology of writing systems facilitated linguistic analysis. But by the same token, writing systems hampered linguistic exploration, since many communicative signals of the body defy written representation and have therefore been excluded from the domain of linguistics.
Interestingly, much of the influential research on sign languages – languages which are visual and not written – has adopted approaches from spoken language research. Although sign language research has inevitably included description of bodily actions, the prevailing linguistic paradigms ignore the visual side of language, so that even sign language research has been indirectly influenced by writing systems.
In recent decades, technology for recording language has greatly expanded. It has now become possible to capture and study the auditory and visual signals that are the physical substance of language: that which we actually produce and perceive. Technology has further allowed us to transcend the written word and to include visual signals, in the form of gestures and facial expressions, in the study of language, spawning the new and burgeoning fields of co-speech gesture and multimodal language research.
Our observation is that language theory has not caught up with these advances. Linguistics continues to ignore the visual, while gesture studies are primarily interested in cognitive issues and typically do not attempt to interface directly with linguistic form and organization.
The traditional distinctions drawn by the mentalist-verbalist paradigm in linguistics, between verbal and nonverbal and between mind and body, are too strong, and they lead us to ignore visually perceived expressions of the body. We suggest that it is precisely these bodily actions that reflect the substance and organization of human language as a communication system. The production and interpretation of these gestures and facial expressions are crucial to conveying our messages, and they are therefore important components of the language faculty.
Sign language is the most highly sophisticated and self-contained language system of the body. Instead of searching in sign language for linguistic structures that are derived from spoken language theory, the sign language papers in the proposed Topic will begin to determine what systematic bodily actions in sign language can contribute to general theory.
Gesture researchers have refrained from defining gestures as ‘linguistic’ (though they correctly insist that gestures are part of ‘language’), because gestures do not conform to certain properties that linguists consider definitional, such as strict compositional structure and syntactic rules. But if we take a step back, we see that such definitions and the exclusions that follow from them are not necessarily enlightening. First, not all of language is strictly compositional. Second, not all languages have the fancy syntax that we are so fond of analyzing. Third, in real linguistic interaction, actions of the body (including the face) are a crucial part of the message; in many cases the message cannot stand alone, or would be misunderstood, without these bodily actions. Most importantly, without bodily and facial gestures, we would not have natural human language. So, rather than excluding them from the party, we ought to see how we can incorporate these elements into models of human language.
We propose a Frontiers Research Topic on Visual Language that will begin to bridge the theoretical gap between linguistics, sign linguistics, gesture studies, and language evolution. We intend to welcome papers from scholars in sign language, linguistics, gesture and facial expression, multimodal communication, language evolution, and other relevant disciplines, who will rethink their own work in this context. The goal is to arrive at more than a loosely connected collection of papers, and instead to pave new common ground in the field of visual language.