Speech Perception Research - Essay Example

Summary
The paper "Speech Perception Research" produces the study of perception closely related to areas such as phonetics, cognitive psychology, linguistics, and perception in psychology. The process of speech perception begins with the sound signal and the process of auditioning…

Speech Perception

Speech perception has many objects, including speech itself, which is characterized by the production of a stream of meaningful sounds. Empirical studies have examined the physical level of speech using the spectrogram, which reveals the frequency and amplitude patterns underlying audible features. The first empirical research on speech perception aimed to create an automated reading machine that would assist the blind by replacing letters with sounds; the failure of this project helped develop essential theory for speech perception. The study of speech perception is closely related to fields such as phonetics, cognitive psychology, linguistics, and the psychology of perception. Research on speech perception seeks to understand how a listener recognizes speech sounds and uses this information to understand the language being spoken. Such study also aids the improvement of speech recognition for hearing-impaired listeners and for foreign-language teaching.

The process of speech perception begins with the sound signal and the process of audition. After the initial auditory signal has been processed, the speech sounds are further processed to extract acoustic cues and phonetic information. This information can then be used by higher-level language processes, for instance word recognition. Some models address speech perception or speech production alone, while others combine perception and production. According to Massaro (1989, pp. 398-421), some of these models were produced in the 1900s and others are continually being modified; they include the TRACE model, the motor theory, and categorical perception. In categorical perception, the phonemes in speech are perceived as divided into categories along dimensions such as voice onset time and place of articulation. Chiat (2000, p. 15) studies the difficulties in children's language and exposes the challenges that individual children face.

At the physical level of speech production, a spectrogram reveals the patterns of amplitude and frequency that ground the audible features. The stream of sound is a complex acoustic structure involving patterns of audible qualities over time. To a familiar listener the stream appears segmented, but in an unfamiliar language it always seems like an unsegmented stream. Words are the most salient elements and are the meaningful units. The segments that can be discerned correspond to syllables in the stream. Syllables do not have meaning on their own, but they combine to form words in a way that is loosely analogous to the way words combine to form sentences. Syllables in turn contain distinguishable sounds; for instance, the word "bad" has three sounds, /b/, /æ/, /d/. Such units, or phonemes, which form patterns that can be recognized and distinguished, have been the major focus of research on speech perception. The main objects of speech perception are the phonemes. Phonemes are language-specific units: they are perceptual equivalence classes over phones, which comprise all the speech sounds that can be distinguished.
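To make the spectrogram description above concrete, here is a minimal sketch, assuming Python with NumPy and SciPy and a short recording at the hypothetical path speech.wav; it simply computes the time-frequency pattern that a spectrogram displays and prints the dominant frequency of the first few analysis frames.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    # Load a short speech recording (hypothetical file path).
    rate, samples = wavfile.read("speech.wav")
    if samples.ndim > 1:          # keep one channel if the file is stereo
        samples = samples[:, 0]

    # Compute the spectrogram: frequency/amplitude patterns over time,
    # using a 25 ms analysis window with 10 ms hops.
    nperseg = int(0.025 * rate)
    noverlap = nperseg - int(0.010 * rate)
    freqs, times, power = spectrogram(samples, fs=rate,
                                      nperseg=nperseg, noverlap=noverlap)

    # The strongest frequency in each frame gives a rough text-only trace
    # of the audible pattern a spectrogram image makes visible.
    peak_freqs = freqs[np.argmax(power, axis=0)]
    for t, f in zip(times[:5], peak_freqs[:5]):
        print(f"t = {t:.3f} s, dominant frequency ~ {f:.0f} Hz")

Plotting the power array with a graphics library would reproduce the familiar spectrogram image; the printed dominant frequencies are only a stand-in for that visual pattern.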
The English inventory of phonemes differs from the Japanese inventory, even though some phonemes are shared. Whereas English treats /l/ and /r/ as distinct phonemes, Japanese treats them as allophones, or variants, of a single phoneme. Chinese, in turn, distinguishes phonemes that correspond to allophones of the English phoneme /p/. Writing can be seen as a translation of audible speech into written form; consequently, teachers teach young children to sound out written words. The pioneers of speech perception research aimed to develop an automated reading machine for the blind that worked by replacing individual letters with specific sounds. Listeners, however, could not resolve the sequence of individual sounds quickly enough to detect words at normal rates of speech.

There are two main theories of speech perception: the motor theory and the direct realist theory. The motor theory is the hypothesis that listeners perceive spoken words by identifying the vocal tract gestures with which they are pronounced, rather than by identifying the sound patterns that the speech produces. It claims that speech perception is carried out by a distinctive module that is specific to human beings, whose major function is to detect and also to produce speech articulations. The theory has attracted more interest outside the field of speech perception than inside it, and since the discovery of mirror neurons, which link the perception and production of motor movements, including those made by the vocal tract, it has gained further interest. The theory was proposed at the Haskins Laboratories in the 1950s by Franklin S. Cooper and Alvin Liberman and was developed further by Douglas Whalen, Michael Studdert-Kennedy, Donald Shankweiler, and Ignatius Mattingly.

The direct realist theory arose as an alternative to the motor theory and was developed by Carol Fowler, also working at the Haskins Laboratories. Like the motor theory, the direct realist theory claims that the objects of speech perception are articulatory rather than acoustic events. It asserts that the articulatory objects of perception are actual, phonetically structured vocal tract movements, or gestures, and not events that are causally antecedent to those movements. The direct realist theory differs from the motor theory in denying that specialized mechanisms play a major role in speech perception. Viewed from this general perspective, speech perception can be characterized in the same way as visual perception of surface layout: perceptual systems serve universal functions, they are the means by which animals come to know their niches, and they do so by using structure that has been lawfully caused by events in the environment and that therefore becomes information about those events. According to Klatt (1976, pp. 1208-1221), the major puzzle of speech perception is that there is no direct, obvious, and consistent correspondence between the surface properties of the physical acoustic signal and the phonemes that are perceived when one listens to speech.
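As a toy illustration of language-specific phoneme inventories, the sketch below uses invented, heavily simplified phone-to-phoneme mappings (real inventories are far richer) to show how the same pair of phones can fall into one perceptual category in one language and two categories in another.

    # Hypothetical, simplified phone-to-phoneme mappings, for illustration only.
    ENGLISH = {"l": "/l/", "r": "/r/"}      # /l/ and /r/ are distinct phonemes
    JAPANESE = {"l": "/r/", "r": "/r/"}     # both phones map to one phoneme category

    def same_phoneme(inventory, phone_a, phone_b):
        """True if two phones count as the same phoneme in the given inventory."""
        return inventory[phone_a] == inventory[phone_b]

    print("English distinguishes [l]/[r]:", not same_phoneme(ENGLISH, "l", "r"))
    print("Japanese distinguishes [l]/[r]:", not same_phoneme(JAPANESE, "l", "r"))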
This puzzle has various manifestations. First, there is no invariant property of the sound signal that corresponds to the phoneme. What sounds like one phoneme may have very different acoustic correlates depending on the context, the speaker, or even the speaker's mood. For example, /da/ and /de/, when pronounced aloud, share the phoneme /d/, yet the acoustic signal corresponding to /d/ is very different in the two cases: in /da/ the /d/ begins with a high-frequency formant that has a rising transition, while in /de/ it contains a lower-frequency formant with a falling transition. This lack of invariance arises from co-articulation: how a speaker articulates a phoneme is determined by what precedes and what follows it. When the phoneme /d/ is followed by /a/ rather than /e/, the following vowel affects the pronunciation of the consonant, and beginning with /d/ in turn affects the vowel. Because of co-articulation, the signal lacks a clear segmentation into categorically perceived phonemes strung together like beads on a string.

According to Garnes and Bond (1976, pp. 285-293), another approach holds that aspects of the gestures used to pronounce phonemes, the ways in which one moves the throat, tongue, and mouth, are invariant across contexts. When one pronounces the phoneme /d/, the tip of the tongue touches the alveolar ridge on the upper jaw behind the teeth. Alveolar consonants differ in that some are voiced and others voiceless, some involving vocal fold vibration and others not, and this alters the overall acoustic signature of the gestures associated with /d/. It is the gestures that are produced, rather than the acoustic signals, that make it intelligible how one individuates phonemes. Some therefore conclude that perceiving phones is a matter of receiving information about articulatory objects from the acoustic signal; the motor theory and the direct realist theory are two versions of this approach. Articulatory gestures are thus plausible candidates for the objects of phoneme perception. They are imperfect candidates, however, because they do not escape the worries about context dependence and the lack of discrete segmentation that stem from fluid co-articulation. The claim has been supported by the finding that visual processes have an impact on the auditory experience of speech: if perceiving speech involves perceiving gestures, it is not surprising that visual evidence for articulatory gestures should be weighed against the auditory evidence. Some researchers hold that intended or actual gestures are simply the best candidates because they think speech perception is special. The claim that the objects of speech perception differ from the sounds and acoustic structures heard in general audition motivates the claim that the perception of speech involves distinctive perceptual processes. The motor theory, as against other auditory theories, has the role of integrating explanations of speech production and speech perception.
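To illustrate the lack-of-invariance example above, here is a minimal sketch assuming Python with NumPy; the formant frequencies are made-up, illustrative values matching the essay's description, not measured data. It synthesises two different formant transitions for the same /d/ depending on the following vowel.

    import numpy as np

    RATE = 16000                                              # sample rate in Hz
    T = np.linspace(0, 0.05, int(0.05 * RATE), endpoint=False)   # 50 ms transition

    def formant_transition(start_hz, end_hz):
        """Synthesise a sine sweep standing in for a single formant transition."""
        freqs = np.linspace(start_hz, end_hz, len(T))
        phase = 2 * np.pi * np.cumsum(freqs) / RATE
        return np.sin(phase)

    # Same phoneme /d/, different acoustic shapes (illustrative values only):
    # before /a/ a higher-frequency rising transition, before /e/ a lower falling one.
    d_before_a = formant_transition(2000, 2400)
    d_before_e = formant_transition(1400, 1000)

    # The two signals are acoustically distinct even though both are heard as /d/.
    print("Correlation between the two /d/ transitions:",
          round(float(np.corrcoef(d_before_a, d_before_e)[0, 1]), 3))

A listener nevertheless reports the same /d/ in both cases, which is exactly the puzzle of the missing acoustic invariant.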
One might accept that the phonemes we perceive can be identified with articulatory gestures and yet reject the claim that this makes speech special. Auditory perception in general implicates happenings in the environment and the sources of sounds, so the gestures and other activities associated with speech production are not distinctive among the objects of audition. The processes involved in speech perception therefore need not be understood as serving a distinct function among those of audition. On this view, human beings are special in possessing the capacity to individuate the sounds of speech, but the processes associated with speech need not be discontinuous with those of general audition. This claim is compatible with high acuity for speech sounds, and it also allows for special selectivity for speech. If hearing speech recruits perceptual resources that are continuous with those devoted to hearing ordinary sounds and events in one's environment, it would be surprising if no resources and processes at all were devoted to speech perception.

Research also supports a special status for speech among the things we perceive auditorily. According to Ingram (2007, pp. 113-127), the evidence shows, first, that human neonates prefer speech sounds to non-speech. Second, adults can distinguish speech from non-speech on the basis of visual cues alone. Third, infants can detect a difference between languages auditorily. Finally, infants of four to six months can detect, on the basis of visual cues alone, when a speaker shifts from one language to another; infants who are not raised in a household of bilingual speakers lose this ability by eight months. Deafened subjects who undergo cochlear implantation as adults show very positive gains in speech perception when both the auditory information from the cochlear implant and the visual information from lip-reading are available. This shows that some of these adults are able to integrate the auditory information provided by the cochlear implant with what they see.

In conclusion, there are no obvious acoustic properties that correspond to the phonetic segments heard in speech; complex acoustic cues trigger the perceptual experiences of phonemes. Articulatory gestures are good candidates for the objects of speech perception. This does not mean that speech perception involves kinds of processes and objects different from those of ordinary, non-linguistic audition, nor does it mean that speech perception is a uniquely human capacity. Speech is nevertheless special for humans in that they have a special sensitivity to speech sounds. Speech perception promises to reward future philosophical attention.

References

Chiat, S. (2000). Understanding Children with Language Problems. Cambridge: Cambridge University Press.
Garnes, S., & Bond, Z. S. (1976). "The relationship between acoustic information and semantic expectation." Phonologica 1976. Innsbruck. 285-293.
Ingram, J. C. L. (2007). Neurolinguistics: An Introduction to Spoken Language Processing and its Disorders. Cambridge: Cambridge University Press. 113-127.
Klatt, D. H. (1976). "Linguistic uses of segmental duration in English: Acoustic and perceptual evidence." Journal of the Acoustical Society of America, 59, 1208-1221.
Massaro, D. W. (1989). "Testing between the TRACE Model and the Fuzzy Logical Model of Speech Perception." Cognitive Psychology, 21, 398-421.