© Springer-Verlag Berlin Heidelberg 2009

Margaritis Z. Pimenidis, The Neurobiology of Orthodontics, DOI 10.1007/978-3-642-00396-7_5

5. Language and Speech

Margaritis Z. Pimenidis
Marathonos Street 22, 152 33 Halandri, Athens, Greece
5.1 Introduction

Lines of evidence indicate that there are no innate language representations in the cortex, and that regions of the cerebral cortex that normally support language can support other functions. On the other hand, high-density event-related potential (HD-ERP) studies in infants have suggested that some regions of the cortex, such as the left temporal lobe, may be particularly efficient at processing speech input from the first few months of life. These findings have sometimes been characterized as evidence of an “innate language neural module,” but they are also consistent with a more probabilistic, epigenetic view of language function [102]. Accordingly, the prevailing view is that speech and language are learned sensorimotor functions. This means that the child does not progress from crying, to babbling, to talking without help. The child must be taught to talk, and the parents are the child’s first teachers [132, 214].
Speech is amplified and filtered in the mouth, which provides a resonating cavity that helps shape the sound for speech or song. Almost any intervention that a dentist, orthodontist, or prosthodontist undertakes in a patient’s oral cavity can have temporary or permanent effects, of varying degree, on oral proprioceptive perception and oral motor control, and therefore on speech production. Most people, however, adapt surprisingly well to structural modifications of their oral cavity.
On the other hand, patients may seek help from their orthodontist to improve their speech, if the speech problem is related to the dentition or to the occlusion of the teeth. A basic understanding of how language and speech development is accomplished may therefore be useful to practitioners working in the mouth.

5.2 Language and Speech Development

A few months after birth the infant engages in a considerable amount of oral activity. For example, from the age of 5 months everything the infant picks up is taken into the mouth. This mouth-and-hand experience provides stimulation to all the senses with which the baby introduces himself to the world. The mouth-and-hand sensory experience (mouthing) persists until the baby is truly adept with his hands [70]. At 6 months the oral cavity has grown considerably and the tongue no longer fills the whole mouth. The larynx begins to descend in the neck, and the velum, or soft palate, can now be elevated to close the velopharyngeal sphincter, which regulates the nasality of the speech sound [148].
Speech is generated in the oral-pharyngeal cavity, which filters the rough sound signal produced by the vocal cords. In most people the sound is amplified and filtered in the mouth, which provides a resonating cavity that helps shape it for speech or song. For these sounds the velum must be elevated to close the velopharyngeal sphincter so that no air escapes through the nose [148]. Thus the low position of the larynx in the human neck, the control of the velopharyngeal sphincter mechanism, and, most important, a brain preprogrammed to develop language when there is linguistic input in the environment are the main special features that predispose humans to speech production [102, 148].
The central functions of language comprise the cognitive functions and the motor aspects of vocalization. The cognitive functions involve the processing of linguistic input in Wernicke’s cortical area in the temporal lobe of the left cerebral hemisphere, which serves as the center of the semantic meaning of words, or the unspoken-word function center. The ability to produce spoken words (the linguistic articulation center) resides in the premotor cortex of the same hemisphere, specifically in areas 44 and 45 of the inferior frontal gyrus, known as Broca’s area. Isolated lesions of Broca’s area result in motor aphasia, manifested as motor apraxia, that is, loss of articulate speech without demonstrable paralysis of the muscles associated with speech [36].
At birth the cerebral cortex is an “unwritten” page. The eyes and occipital lobes are not yet developed; the eyes are open, but the infant cannot yet see, hear, smell, or touch. Thus the child’s sphere of sensations is limited at this early age. These acts require the activity of definite groups of muscles, which the newborn child cannot yet control. For example, to see an object the optical axes of both eyes must converge on it, which can only be effected with the help of the muscles that turn the eyes in all directions. The newborn child cannot do this; its gaze is always uncertain, that is, not fixed on definite objects. After one or two months the child learns to perform these movements through sensations that impart to the cerebral cortex the musculovisual reflex association. At the same time the neurons of the optic tracts become ready to transmit impulses from the retina to the occipital lobes, so that the infant can see the mother [68].
Similarly, a long time passes before the infant learns to hear sounds and words, that is, to listen. This prelistening stage of development is expressed in the onset of babbling: the infant produces simple, repetitive consonant-vowel sequences (mamama … bababa) without meaning, with coordinated movements of the lips and mandible. Up to this stage a baby’s vocalizations are largely unrelated to its race or hearing [148]. But soon the factor of imitation of learned sounds appears, through memory development, usually between six and ten months of age [68].
In the listening stage of speech development the brain is in an attentive state during which learning occurs [15]. A decisive role is played by the child’s desire to imitate the articulation of the sounds, and their combinations into words, that act on its eardrum. The process of articulating sounds consists in associating the sensations caused by the contraction of the muscles participating in speech with the auditory sensations induced by the sounds of the individual’s own speech (babbling). These acts are reflexes that arise in the brain, through memory stimulation, and end in muscular movements. They are acquired through frequent repetition, and their reproduction requires memory [15, 68].
The act of auditory attention, or listening, is similar to the convergence of the optical axes on an object described above for clear vision. Listening is manifestly confined to this external act when separate simple sounds are perceived. In the vocabulary of a child there is not a single word that has not been acquired by learning.
The process of articulating sounds is actually the same in the child and in the parrot. What, then, is the difference in their faculty of speech? While the parrot learns to pronounce only a few phrases in ten years, the child learns thousands of words. The parrot pronounces words in a purely mechanical way, whereas the speech of a child, even at a very early age, bears, so to speak, an intelligent character [68].