Laterality Differences in Native Arabic Speakers and English Speakers Measured by Dichotic Listening Test

Introduction

1. The greater superiority of the left hemisphere in processing linguistic stimuli and laterality issues

It has been shown that the superior temporal gyri in both hemispheres of the brain are engaged bilaterally during the processing of speech as well as melody (see Hickok & Poeppel, 2000). However, evidence from functional neuroimaging (fMRI) studies shows that a greater share of auditory processing occurs in left superior temporal areas. Researchers in cognitive science (e.g. Price et al., 2005) have found that left superior temporal areas are involved to a greater degree in recognizing meaningful linguistic stimuli than in tracking rapid changes in temporal cues, when the task requires identifying speech (meaningful stimuli) as opposed to non-speech (meaningless stimuli). Although the processing of intelligible speech is predominantly confined to the left hemisphere (LH), this hemisphere is also active in certain non-linguistic processes. It is generally agreed, however, that prosodic features are the province of the right hemisphere. Scott et al. (2000) pointed out that posterior cortical regions in the left hemisphere are involved in non-linguistic processes when stimuli require tracking changes in temporal cues (acoustic parameters) at the conceptual level of decision, based on evidence from readers' ability to recognize word categories such as plurals.

2. Processing of prosodic features in the right hemisphere and laterality issues

Although the left hemisphere is most active during speech processing, a right anterior temporal area and a right mid-temporal region are also involved and contribute to speech processing, especially in "tasks that tap voice relative to verbal content" (Price et al., 2005). After all, speech involves not only conceptual and syntactic processes but also prosodic processes (see Patel & Daniele, 2003). However, as Price et al. (2005) point out, despite the complex characteristics that distinguish speech from other acoustic signals, there do not appear to be any regions of the brain dedicated entirely to speech processing.

3. Importance of the left hemisphere in language processing

The left hemisphere (LH) is specialized for the symbolic neural transmission that underlies the processing of auditory input, which is vital to recognizing perceived speech sounds. The left hemisphere is therefore involved in the perception of specific sounds, especially consonants. Two issues should be considered regarding LH superiority for perceiving some sounds as speech and others as non-speech among native and non-native speakers of different languages: the ability of the LH auditory mechanism to process the identifiable acoustic properties of native speech sounds for native listeners, and the natural processing of the language's stored phoneme system. Several studies have investigated this phenomenon, a selection of which is reviewed below. For example, Zulu speakers use phonologically contrastive oral clicks as speech sounds in their language, where they take the place of consonant sounds. Researchers compared English speakers to Zulu speakers in terms of listeners' ability to discriminate these oral clicks.
They found that English speakers perceive these oral clicks as non-speech sounds (Best et al., 1988), signifying their inability to associate these rapid acoustic variations with any linguistic significance that could then be interpreted by their LH. Zulu speakers, on the other hand, showed an LH advantage in perceiving these oral clicks and identifying them as recognisable consonants. According to Schwartz and Tallal (1980), the auditory mechanism specializes in the perception and discrimination of the complex and rapidly changing acoustic properties of stop consonants and vowels. Liberman and Mattingly (1989), for example, are of the view that the LH deals only with processing and interpreting the linguistic features of sounds, including the phonetic features of consonants. Differences in LH processing among speakers are linked to differences in the acoustic properties of the phonological segments of each language (Schwartz & Tallal, 1980). Interestingly, English speakers showed higher LH performance in processing the Zulu oral clicks once they had experienced hearing them as isolated clicks rather than in syllables. Zulu speakers, for their part, would naturally be expected to show higher LH performance in processing these clicks, whether in isolated position or in syllable context. Nonetheless, it is evident that "LH superiority for consonant perception is not determined simply by the acoustic properties of the stimuli" (Best & Avery, 1999); rather, as the dichotic matching task showed, it depends more on linguistically significant information.

4. The relationship between the acoustic properties of each language (pitch) and the functional differences in lateralization of perceiving tone

Each language has unique acoustic properties that help to differentiate and classify languages into groups. Pitch difference is a characteristic of all languages, but the functional use of pitch differs from one language to another. Some languages, classified as tone languages, employ variation in pitch for lexical purposes, i.e. differences in pitch are used to convey differences in word meaning, as in Thai and Chinese. Other 'pitch-accent' languages (such as Japanese and the Scandinavian languages) use stress, tonic accent, and emphasis within syllables or words. In many other languages, "differences in pitch are not tied to the lexicon, but are associated with phrases or sentences" (Moen, 1993); these are called 'intonation languages'. With the notable exception of Norwegian, most European languages are intonation languages. Studies on the functional lateralization of pitch variation suggest that the right hemisphere is responsible for interpreting the emotional use of pitch variation (Ross et al., 1981; Tucker et al., 1977; Weniger, 1984), but lateralization in the processing of the grammatical functions of pitch is less clear. Dichotic listening tests of the perception of intonation contours in different English sentence types indicated "a significant left-ear advantage in the perception of intonation contours" (Moen, 1993), thus suggesting right hemisphere (RH) lateralization, even amongst those with a damaged RH (Weintraub et al., 1981). However, a similar study by Behrens (1985) of lexical stress, which depends on pitch amongst other auditory features, found the converse: LH lateralization was exhibited.
Further, scholars who investigated LH- and RH-damaged patients for their ability to identify the exact position of stress found that the two groups showed no difference in lateralization to either hemisphere (Blumstein & Goodglass, 1972). Nevertheless, a right hemisphere preference in perceiving tone does not necessarily mean there is no advantage for left hemisphere lateralization within a listener. Moreover, lateralization to a particular hemisphere in perceiving tone differs from one language to another. For example, Thai and Mandarin Chinese listeners both appear to have a left hemisphere preference in processing tonal variation in their own languages, whereas English listeners tend to exhibit a right hemisphere advantage in processing the emotional use of intonation and pitch variation for grammatical use (Moen, 1993). To explain these lateralisation differences, scholars have hypothesised the existence of "a scale of pitch contrasts from the least grammatical use of pitch, associated with the right hemisphere, to the most grammatical use of pitch, associated with the left hemisphere" (Van Lancker, 1980), but the functional lateralization of perceiving intonation and accentuation has also been linked to specific features of each language's lexical representation (Packard, 1986). The latter is supported by evidence from studies of brain-damaged people. There is much evidence to support the association between left hemisphere preference and the phonological linguistic features of sound in language (see Moen, 1993:403). Investigations of patients with left hemisphere damage indicate that these patients are unable to distinguish the linguistic features of sounds in their own language. In contrast, there is no association between pitch variation and the formation of lexical representations in English. Thus English speakers do not tend to show a right hemisphere advantage in perceiving tone, though they demonstrate more of a left hemisphere advantage than a right hemisphere advantage (Moen, 1993:403). Moen's (1993) study of the two word tones of Norwegian showed LH lateralisation in perceiving their distinction, supporting the hypothesis that "prosodic features which are specified in the mental lexicon are controlled by the LH" (Packard, 1986). Ryalls and Reinvang (1986) pointed out that processing the acoustic properties of pitch is the domain of the right hemisphere, while the left hemisphere formulates the distinctive informational features of sounds. This makes lexical tone useful in studying lateralisation, and researchers have constructed two competing hypotheses to investigate whether the LH or RH predominates: the acoustic or 'cue-dependent' hypothesis and the functional or 'task-dependent' hypothesis (Gandour et al., 2003; Wong, 2002). The first posits lateralisation as dependent on acoustic properties (see Robin, Tranel, & Damasio, 1990), while the second proposes that lateralisation is functionally determined (see Van Lancker, 1980). The latter therefore predicts LH specialisation for lexical tone wherever pitch conveys distinctive linguistic information. Lateralization to the left hemisphere occurs when a listener is focused on processing the essential linguistic features within words or syllables, such as lexical tone, rather than depending on general pitch (Van Lancker, 1980). Also related to hemisphere preference is left or right ear advantage (LEA or REA).
Researchers have found that speakers of languages with distinctive tonal features, such as Mandarin, which phonemically distinguishes otherwise similar words using four distinct tones, show a consistent REA. This suggests a dominant LH as far as the perception and processing of lexical tones is concerned. For example, Thai speakers demonstrate a steady LH advantage in processing the tonal characteristics (i.e. distinctive levels of pitch) of their own language (Van Lancker & Fromkin, 1973). This has also been shown in languages such as Norwegian (Moen, 1993) and Mandarin Chinese (Wang et al., 2001). English speakers, however, do not exhibit a right ear advantage in processing Thai tone (Van Lancker & Fromkin, 1973), since English speakers do not need to process tone to understand the meaning of words. Investigations using functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) showed that native speakers of tone languages utilize left hemisphere regions when they perceive tone to understand words (e.g. Klein et al., 2001; Wang et al., 2003; Gandour et al., 2000), which implies that the left hemisphere is responsible for perceiving the distinctive features of pitch within words. However, non-native speakers of tone languages do not necessarily engage these left hemisphere regions in processing what they perceive; in other words, the left hemisphere is not dominant for non-native speakers of tone languages. Moen (1993) pointed out that Norwegian has variations of lexical tone within words and syllables, and that these tonal characteristics are mainly analyzed in the left hemisphere. There are several conflicting findings regarding how bilingual speakers who speak a tone language as a second language lateralize language. Mildner (1999) pointed out that bilingual speakers process language primarily in the left hemisphere, with a greater demand on the right hemisphere than in monolinguals. Other researchers, in contrast, have reported that bilingual speakers process language equally in both hemispheres (Ke, 1992). Yet other researchers have pointed out that early bilinguals show greater use of the left hemisphere, while late bilinguals utilize both hemispheres equally (Sussman, Franklin, & Simon, 1982). However, as Wang et al. (2004) concluded from studying four different groups of listeners of Mandarin tones, there was "consistent support for the functional hypothesis" (Van Lancker, 1980). That is, the LH was found to be dominant in processing lexical tones in native tone-language speakers, whereas different lateralisation patterns were observed among different non-native speakers.

5. Evidence explaining the natural biological lateralization of the brain to the left hemisphere in perceiving and processing linguistic stimuli

Based on electrophysiological measurements, Näätänen et al. (1997) and Breier et al. (1999) concluded that all native speakers have natural biological responses to linguistic stimuli, which are primarily processed and controlled by the left hemisphere. However, there is debate as to whether functional lateralization to the left hemisphere results from special physical characteristics of particular linguistic stimuli that lend themselves to left hemisphere processing (Gandour et al., 2004; Shtyrov et al., 2005).
On the physical-characteristics hypothesis, Tallal et al. (1993) and Fitch et al. (1997) argued that "slow acoustic transitions, such as pitch change, are preferentially processed in the right hemisphere", while consonants whose sounds change rapidly are processed mainly in the left hemisphere. However, as Best and Avery (1999) and Gandour et al. (2002, 2004) show, the special characteristics of individual languages have a direct impact on the exact place of lateralization in the brain (hemispheric specialization). For example, tonal languages such as Mandarin Chinese use special pitch patterns as a lexically determining factor. Neural evidence from second language acquisition also supports these findings. Native speakers of these languages process tone in the left hemisphere rather than in the right hemisphere (which is responsible for processing pitch changes). This lateralisation towards the LH is attributable to the development of neural circuits in native speakers arising from greater experience with the language. In contrast, non-native speakers of tonal languages tend to show either bilateral or inconsistent activation of both hemispheres (Bottini et al., 1994; Perani et al., 1996; Dehaene et al., 1997; Gandour et al., 2004).

6. The developmental responses to linguistic stimuli during the normal stages of language acquisition

Unlike the visual system, the auditory system in young infants develops rapidly, such that they are able to detect even subtle changes in sounds. This justifies an examination of the neural attunement process in infants, particularly during the acquisition of a language-specific phonemic contrast. Evidence has shown that all infants before the age of 6 months have the ability to distinguish a wide range of non-native acoustic properties of linguistic stimuli, even though the acquisition of their native language is not yet complete (Minagawa-Kawai et al., 2007). But as Cheour et al. (1998) pointed out, once infants reach the age of 11-12 months, they are no longer able to discriminate non-native sounds, including vowel sounds. Similarly, Rivera-Gaxiola et al. (2005) reported that infants are unable even to discriminate non-native consonant sounds at the age of 11 months. This suggests that the normal acquisition of native phonemic contrasts is completed within the first year of life (Kuhl et al., 1992). However, not all infants between the ages of 6 and 7 months show consistent responses to specific sounds, whether consonants or vowels. Based on electrophysiological response measures, Japanese infants exhibit responses to consonantal contrast but not to vowel contrast (Dehaene-Lambertz & Baillet, 1998). Amano (1986) explained that vowels differ in distinctive duration (some vowels are short while others are long), which can make an infant unable to detect the contrast. The variation in responses during the first year of life provides insight into the developmental stages of the neural networks involved, although no studies have yet been able to satisfactorily ascertain the developmental course of hemispheric specialisation (Minagawa-Kawai et al., 2007). Studies show that infants between 11 and 12 months of age exhibit specific responses to sounds and are able to discriminate the distinctive duration of vowel contrasts, using auditory processing (sensory input) regions in the temporal lobes of both hemispheres.
The re-organisation process in infants may not be uniform, but there is nonetheless a progression "from general acoustic processing to language specific processing" (Minagawa-Kawai et al., 2007). Furthermore, the neural responses to specific sounds gradually become more unilateral rather than bilateral, and confined to Wernicke's area, as has been observed in Japanese adults (Minagawa-Kawai et al., 2002; Jacquemot et al., 2003). However, Hayashi et al. (2001) point out that the decrease in phoneme-specific responses is only temporary, as they reappear at 13 to 14 months of age, albeit under LH dominance. As already mentioned, it is generally agreed that neural responses to specific linguistic stimuli in native speakers of a language become more dominant in the left hemisphere (e.g. Breier et al., 1999; Furuya & Mori, 2003; Zevin & McCandliss, 2005) as a result of established experience in a native language. Non-native speakers, on the other hand, usually fail to show left-dominant responses (Gandour et al., 2004). Other factors also explain differences in responses to specific linguistic stimuli, such as the form and type of the stimuli (syllables or words) and the kind of measurements used to present them. Researchers posit that the form of linguistic stimuli affects the lateralization of brain responses. It is unclear whether functional lateralisation is determined by the physical or the linguistic properties of the stimuli, but Shtyrov et al. (2005) proposed that physical features of linguistic stimuli might determine neural responses to isolated syllables more than linguistic features do. Researchers have also observed that neural responses to linguistic stimuli differ between early and later infancy. During the early part of the first year of life, left-dominant responses are linked to perception involving higher cognitive ability, because this period is critical for normal language acquisition. The crucial developmental point at which an infant becomes able to distinguish the acoustic properties of sounds integrated into syllable structures thus explains the changes in neural transmission during language acquisition in the first year of life (Minagawa-Kawai et al., 2007).

7. Relationship between the functional lateralization of brain responses and the voice onset time of syllables

Consonant-vowel (CV) syllables are commonly used as stimuli for dichotic listening (DL) tests, in which several elements are presented for simultaneous processing so as to detect an overall left or right ear advantage (LEA or REA), associated with dominance of the RH and LH respectively. Listeners typically show an REA with CV syllables that contain plosive sounds plus the vowel /a/, with either long or short voice onset time (VOT) (Rimol et al., 2006). According to Kimura (1967), these REAs result from speech-processing regions of the temporal lobe in the left hemisphere. With DL it is possible to modulate the acoustic properties of the stop CV syllables presented to listeners so as to discover more about the REA. Darwin (1971) pointed out that listeners show a reduced REA (or none at all) with syllables containing fricative sounds, and Haggard (1971) found similar results with syllables containing liquid sounds. Voiced stop syllables usually contain short VOTs, while unvoiced stop syllables contain long VOTs; an exception is Norwegian, in which voiced CV syllables contain long VOTs lasting up to 30 ms.
The effect of VOT on performance in DL has been investigated in the past, but short and long VOT stimuli had not previously been compared directly for accuracy (Rimol et al., 2006). For example, Schwartz and Tallal (1980) demonstrated, in separate sessions, that listeners show a greater REA with short transitions than with long transitions when presented with artificial CV syllables lasting either 40 or 80 ms. They used this finding to argue that "the left temporal lobe specialization for speech is based on a more fundamental specialization for analysis of rapidly changing stimuli" (Rimol et al., 2006). Earlier, Studdert-Kennedy and Shankweiler (1970) had similarly found a greater REA for voiced as compared to unvoiced stop syllables. The classic REA effect has been explained in many studies. For example, Hugdahl (2003) reported that the REA is a consequence of the interaction between the acoustic properties of certain speech stimuli and the structure of the auditory system, and termed this the 'bottom-up' or 'stimulus-driven' effect. Two other bottom-up factors can be important in determining the REA. One is the type of speech stimulus, since listeners generally show an REA with linguistic stimuli; the other is the interval between consonant and vowel within CV syllables, called a 'sub-phonemic parameter of the stimuli'. Rimol et al. (2006) conducted their own study "to systematically compare effects of all possible combinations of short and long VOTs on the ear advantage, on a within-trial basis" (p.193). They selected a number of healthy right-handed Norwegian speakers, all with an overall REA, presented them with dichotic stimuli as described above in thirty different combinations, and controlled for unwanted effects of biased attention to either ear. Short-short (SS), long-long (LL), short-long (SL), and long-short (LS) VOT pairings were used. The effects of VOT were found to be significant: in three of the four conditions (SS, LL, and SL) there was a significant REA, with a LEA only in the LS condition. Thus, listeners perceive syllables containing long VOTs more frequently and more easily than short VOTs, regardless of the ear used. Berlin et al. (1973) reached similar findings. Rimol et al. (2006) remark that "VOT turned out to be a more powerful determinant of DL performance than the classic REA effect" (p.194). It has been suggested that long VOT is easy to perceive because it requires less temporal precision than short VOT syllables, and is not affected by whether the speech signal comes via the indirect contralateral path (from the left ear) or the direct path (from the right ear). But if long VOT syllables were simply easier to perceive, SL should be the easiest condition, i.e. have the fewest errors, which is not substantiated by the study (Rimol et al., 2006); instead, the lowest error rates occurred in the SS and LL conditions. Alternatively, it has been suggested that the delayed or extended transition phase in long VOT syllables may be responsible, as it creates a backward masking of the shorter syllables. Various studies, such as Widrig (2000), report findings consistent with this suggestion. Given that unvoiced stop syllables are aspirated, the temporal lobe may become more attuned to processing the broadband noise caused by aspirated sounds than to processing rapidly changing frequencies in the linguistic stimulus itself (Rimol et al., 2006). However, other studies, such as Schwartz and Tallal (1980), which showed a larger REA for short-transition than for long-transition syllables, conflict with the findings of Rimol et al. (2006). It is still debated whether the left hemisphere is dominant in processing the linguistic features of speech stimuli or their sub-phonemic properties (Tallal, Miller and Fitch, 1993). But this does not necessarily mean that the LH does not specialise in processing rapidly changing stimuli, as is evident in Zatorre and Belin (2001). Rimol et al. (2006) point out that it may be irrelevant to distinguish between short and long VOT for this purpose; instead, they suggest it may be better "to study the two classes of syllables in direct competition, comparing stimuli that are presented simultaneously on a trial-by-trial basis" (p.195). On the other hand, language differences may also be held to account. A sketch of how such a within-trial design can be scored is given below.
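To make the within-trial comparison concrete, the short Python sketch below tallies which ear's syllable was reported for each of the four VOT pairings (SS, SL, LS, LL) described by Rimol et al. (2006) and scores each pairing with a common laterality index, LI = 100 * (R - L) / (R + L), where positive values indicate an REA. The trial records are invented placeholders rather than data from any cited study; only the scoring logic is illustrated.

```python
# Illustrative sketch only: scoring a within-trial dichotic listening design
# with four VOT pairings (SS, SL, LS, LL). The trial list below is invented
# placeholder data; only the analysis logic is shown.
from collections import defaultdict

# Each record: (VOT pairing of the left-ear/right-ear syllables,
#               ear whose syllable the listener reported: "L" or "R").
trials = [
    ("SS", "R"), ("SS", "R"), ("SS", "L"), ("SS", "R"),
    ("LL", "R"), ("LL", "R"), ("LL", "L"),
    ("SL", "R"), ("SL", "R"), ("SL", "L"),
    ("LS", "L"), ("LS", "L"), ("LS", "R"),
]

counts = defaultdict(lambda: {"L": 0, "R": 0})
for pairing, reported_ear in trials:
    counts[pairing][reported_ear] += 1

for pairing in ("SS", "SL", "LS", "LL"):
    left, right = counts[pairing]["L"], counts[pairing]["R"]
    # Laterality index: positive values indicate a right-ear advantage (REA).
    li = 100 * (right - left) / (right + left)
    label = "REA" if li > 0 else "LEA" if li < 0 else "no advantage"
    print(f"{pairing}: L={left} R={right} LI={li:+.1f} ({label})")
```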
8. The purpose of the dichotic listening test

The dichotic listening test is well suited to examining the functional lateralization of brain responses. The procedure involves two different speech stimuli presented simultaneously, one to the right ear and one to the left. An REA indicates a dominant LH, whereas a LEA indicates a dominant RH, because of the contralateral organisation of the auditory pathways in the brain. Listeners have been shown to exhibit an REA in response to speech stimuli such as stop and liquid sounds (Cutting, 1974), nasal and fricative sounds (Bryden & Murray, 1985), and vowels (Dwyer, Blumstein, & Ryalls, 1982). Van Lancker and Fromkin's (1973) research using the dichotic listening test determined that ear preference is revealed in perceiving the distinctive level of pitch within words. They examined speakers of two different languages, a tone language (Thai) and an intonation language (English), and presented listeners with three different types of speech stimuli: the first consisted of changes in pitch level within words (tone words), the second contained a change only in the initial consonant of each word (consonant words), and the third contained changes only in pitch (hums). In short, Thai speakers are able to discriminate tone words because the change in pitch level within words allows them to identify Thai words. As a result, they show an REA for lexical tone rather than for tone itself, and show no ear advantage for hum tones because these are not lexically salient in their language. Thus they process lexical tone in the LH rather than the right, likely because the variation in pitch level is linked to word meaning, so it is more than just tone. Similarly, other studies on the dichotic perception of Mandarin tones amongst Mandarin listeners showed an REA in processing tonal word contrasts, which means the left hemisphere is dominant in processing tone words for native speakers of a tone language. It is generally agreed that prosodic processing is a property of the right hemisphere. However, Packard (1986) pointed out that the lateralization of prosodic features depends on the specific linguistic characteristics of a language. English speakers, on the other hand, speak an intonation language, which does not have distinctive tone features. Changes in pitch level are therefore not important for understanding English words, as tone is used for stress and pauses, which are not linked to word meanings. Consequently, English speakers show a right ear preference only for consonant words and do not show an REA overall. We can infer from this study (Van Lancker & Fromkin, 1973) that the left hemisphere is dominant in processing different lexical tones. However, non-native speakers of tone languages can show a right ear preference in response to tone words when such tones exist in their own linguistic system, which means the left hemisphere can be dominant in processing non-native tone words (Van Lancker & Fromkin, 1978). The findings described above for speakers of tone and intonation languages were made possible by the dichotic listening test. Hence, the dichotic listening test is a useful tool for investigating functional lateralisation: it identifies left or right hemisphere characteristics by testing for right or left ear preferences. Dichotic tone perception experiments do, however, often have a 'ceiling effect' that may need to be taken into account (Wang et al., 2001). A sketch of how a single dichotic trial is assembled is given below.
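To make the procedure concrete, the sketch below assembles one dichotic trial as a stereo sound file, with one CV syllable routed to the left channel (left ear) and a different syllable to the right channel (right ear), which is how the competing stimuli of a DL test are delivered over headphones. The file names "ba.wav" and "ga.wav" are hypothetical mono recordings assumed for illustration; this is not the stimulus-preparation code of any study cited above.

```python
# Hypothetical sketch: building one dichotic trial as a stereo WAV file.
# "ba.wav" and "ga.wav" are assumed to be mono recordings at the same sample
# rate; they stand in for whatever CV syllables an experiment actually uses.
import numpy as np
from scipy.io import wavfile

def make_dichotic_trial(left_syllable, right_syllable, out_path):
    rate_l, left = wavfile.read(left_syllable)    # heard by the left ear
    rate_r, right = wavfile.read(right_syllable)  # heard by the right ear
    assert rate_l == rate_r, "both syllables must share one sample rate"

    # Align onsets and equalise length by padding the shorter file with silence.
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))

    # Column 0 feeds the left channel, column 1 the right channel.
    stereo = np.column_stack([left, right])
    wavfile.write(out_path, rate_l, stereo)

make_dichotic_trial("ba.wav", "ga.wav", "trial_ba_ga.wav")
```

In an actual experiment the two syllables would also be matched for duration, intensity, and onset alignment, and many such trials would be presented in randomised order; those details are omitted here.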
References (Harvard)

Primary Sources (in order of use)

1-2. Price, Cathy, Thierry, Guillaume, and Griffiths, Tim. 2005. Speech-specific auditory processing: where is it? Trends in Cognitive Sciences. Vol.9, No.6, June 2005. Elsevier Ltd.
3. Best, Catherine and Avery, Robert. 1999. Left-hemisphere advantage for click consonants is determined by linguistic significance and experience. Psychological Science. Vol.10, No.1.
4. Moen, Inger. 1993. Functional lateralization of the perception of Norwegian word tones - Evidence from a dichotic listening experiment. Brain and Language. Vol.44, pp.400-413.
4. Wang, Yue et al. 2004. The role of linguistic experience in the hemispheric processing of lexical tone. Applied Psycholinguistics. Vol.25, pp.449-466.
5-6. Minagawa-Kawai, Yasuyo et al. 2007. Neural attunement processes in infants during the acquisition of a language-specific phonemic contrast. The Journal of Neuroscience. Vol.27, No.2, pp.315-321.
7. Rimol, Lars et al. 2006. The effect of voice-onset-time on dichotic listening with consonant-vowel syllables. Neuropsychologia. Vol.44, pp.191-196.
8. Wang, Yue et al. 2001. Dichotic perception of Mandarin tones by Chinese and American listeners. Brain and Language. Vol.78, pp.332-348.

Secondary Sources (in alphabetical order)

Amano, K. 1986. Acquisition of phonemic analysis and literacy in children. Ann Rep Ed Psychol Jpn. Vol.27, pp.142-164. Quoted in Minagawa-Kawai et al., 2007.
Behrens, S.J. 1985. The perception of stress and lateralization of prosody. Brain and Language. Vol.26, pp.332-348. Quoted in Moen, 1993.
Berlin, C.I. et al. 1973. Dichotic speech perception: An interpretation of right-ear advantage and temporal offset effects. The Journal of the Acoustical Society of America. Vol.53, No.3, pp.699-709. Quoted in Rimol et al., 2006.
Best, C.T., McRoberts, G.W., & Sithole, N.M. 1988. Examination of perceptual reorganization for nonnative speech contrasts: Zulu click discrimination by English-speaking adults and infants. Journal of Experimental Psychology: Human Perception and Performance. Vol.4, pp.45-60. Quoted in Best & Avery, 1999.
Best, C.T. & Avery, R.A. 1999. Left-hemisphere advantage for click consonants is determined by linguistic significance and experience. Psychol Sci. Vol.10, pp.65-70. Quoted in Minagawa-Kawai et al., 2007.
Blumstein, S. & Goodglass, H. 1972. The perception of stress as a semantic cue in aphasia. Journal of Speech and Hearing Research. Vol.15, pp.800-806. Quoted in Moen, 1993.
Bottini, G. et al. 1994. The role of the right hemisphere in the interpretation of figurative aspects of language. A positron emission tomography activation study. Brain. Vol.117, pp.1241-1253. Quoted in Minagawa-Kawai et al., 2007.
Breier, J.I. et al. 1999. Temporal course of regional brain activation associated with phonological decoding. J Clin Exp Neuropsychol. Vol.21, pp.465-476. Quoted in Minagawa-Kawai et al., 2007.
Bryden, M.P., & Murray, J.E. 1985. Toward a model of dichotic listening performance. Brain and Cognition. Vol.4, pp.241-257. Quoted in Wang, 2001.
Cheour, M. et al. 1998. Development of language-specific phoneme representations in the infant brain. Nat Neurosci. Vol.1, pp.351-353. Quoted in Minagawa-Kawai et al., 2007.
Cutting, J.E. 1974. Two left-hemisphere mechanisms in speech perception. Perception and Psychophysics. Vol.16, pp.601-612. Quoted in Wang, 2001.
Darwin, C.J. 1971. Ear differences in the recall of fricatives and vowels. The Quarterly Journal of Experimental Psychology. Vol.23, No.1, pp.46-62. Quoted in Rimol et al., 2006.
Dehaene, S. et al. 1997. Anatomical variability in the cortical representation of first and second language. NeuroReport. Vol.8, pp.3809-3815. Quoted in Minagawa-Kawai et al., 2007.
Dehaene-Lambertz, G., & Baillet, S. 1998. A phonological representation in the infant brain. NeuroReport. Vol.9, pp.1885-1888. Quoted in Minagawa-Kawai et al., 2007.
Dwyer, J., Blumstein, S.E., & Ryalls, J. 1982. The role of duration and rapid temporal processing on the lateral perception of consonants and vowels. Brain and Language. Vol.17, pp.272-286. Quoted in Wang, 2001.
Fitch, R.H., Miller, S., & Tallal, P. 1997. Neurobiology of speech perception. Annu Rev Neurosci. Vol.20, pp.331-353. Quoted in Minagawa-Kawai et al., 2007.
Furuya, I. & Mori, K. 2003. Cerebral lateralization in spoken language processing measured by multi-channel near-infrared spectroscopy (NIRS). Brain Nerve. Vol.55, pp.226-231. Quoted in Minagawa-Kawai et al., 2007.
Gandour, J. et al. 2000. A crosslinguistic PET study of tone perception. Journal of Cognitive Neuroscience. Vol.12, pp.207-222. Quoted in Wang et al., 2004.
Gandour, J. et al. 2002. Neural circuitry underlying perception of duration depends on language experience. Brain Lang. Vol.83, pp.268-290. Quoted in Minagawa-Kawai et al., 2007.
Gandour, J. et al. 2003. Temporal integration of speech prosody is shaped by language experience: An fMRI study. Brain and Language. Vol.84, pp.318-336. Quoted in Wang et al., 2004.
Gandour, J. et al. 2004. Hemispheric roles in the perception of speech prosody. NeuroImage. Vol.23, pp.344-357. Quoted in Minagawa-Kawai et al., 2007.
Haggard, M.P. 1971. Encoding and the REA for speech signals. The Quarterly Journal of Experimental Psychology. Vol.23, No.1, pp.34-45. Quoted in Rimol et al., 2006.
Hayashi, A., Tamekawa, Y. & Kiritani, S. 2001. Developmental change in auditory preferences for speech stimuli in Japanese infants. J Speech Lang Hear Res. Vol.44, pp.1189-1200. Quoted in Minagawa-Kawai et al., 2007.
Hickok, G. & Poeppel, D. 2000. Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences. Vol.4, pp.131-138. Quoted in Price et al., 2005.
Hugdahl, K. 2003. Dichotic listening in the study of auditory laterality. In K. Hugdahl & R. Davidson (Eds.), The asymmetrical brain. Cambridge, MA: MIT Press. Quoted in Rimol et al., 2006.
Jacquemot, C. et al. 2003. Phonological grammar shapes the auditory cortex: a functional magnetic resonance imaging study. J Neurosci. Vol.23, pp.9541-9546. Quoted in Minagawa-Kawai et al., 2007.
Ke, C. 1992. Dichotic listening with Chinese and English tasks. Journal of Psycholinguistic Research. Vol.21, pp.463-471. Quoted in Wang et al., 2004.
Kimura, D. 1967. Functional asymmetry of the brain in dichotic listening. Cortex. Vol.3, pp.163-168. Quoted in Rimol et al., 2006.
Klein, D. et al. 2001. A cross-linguistic PET study of tone perception in Mandarin Chinese and English speakers. Neuroimage. Vol.13, pp.646-653. Quoted in Wang et al., 2004.
Kuhl, P.K. et al. 1992. Linguistic experience alters phonetic perception in infants by 6 months of age. Science. Vol.255, pp.606-608. Quoted in Minagawa-Kawai et al., 2007.
Liberman, A.M., & Mattingly, I.G. 1989. A specialization for speech perception. Science. Vol.243, pp.489-494. Quoted in Best & Avery, 1999.
Mildner, V. 1999. Functional cerebral asymmetry for verbal stimuli in a foreign language. Brain and Cognition. Vol.40, pp.197-201. Quoted in Wang et al., 2004.
Minagawa-Kawai, Y. et al. 2002. Assessing cerebral representations of short and long vowel categories by NIRS. NeuroReport. Vol.13, pp.581-584. Quoted in Minagawa-Kawai et al., 2007.
Moen, I. 1993. Functional lateralization of the perception of Norwegian word tones - Evidence from a dichotic listening experiment. Brain and Language. Vol.44, pp.400-413. Quoted in Wang et al., 2004.
Näätänen, R. et al. 1997. Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature. Vol.385, pp.432-434. Quoted in Minagawa-Kawai et al., 2007.
Packard, J.L. 1986. Tone production deficits in nonfluent aphasic Chinese speech. Brain and Language. Vol.29, pp.212-223. Quoted in Moen, 1993; Wang, 2001.
Patel, A.D. and Daniele, J.R. 2003. An empirical comparison of rhythm in language and music. Cognition. Vol.87, pp.B35-B45. Quoted in Price et al., 2005.
Perani, D. et al. 1996. Brain processing of native and foreign languages. NeuroReport. Vol.7, pp.2439-2444. Quoted in Minagawa-Kawai et al., 2007.
Rivera-Gaxiola, M., Silva-Pereyra, J., & Kuhl, P.K. 2005. Brain potentials to native and non-native speech contrasts in 7- and 11-month-old American infants. Dev Sci. Vol.8, pp.162-172. Quoted in Minagawa-Kawai et al., 2007.
Robin, D., Tranel, D., & Damasio, H. 1990. Auditory perception of temporal and spectral events in patients with focal left and right cerebral lesions. Brain and Language. Vol.39, pp.539-555.
Ross, E.D. et al. 1981. How the brain integrates affective and propositional language into a unified behavioral function. Archives of Neurology. Vol.38, pp.745-748. Quoted in Moen, 1993.
Ryalls, J., & Reinvang, I. 1986. Functional lateralization of linguistic tones: Acoustic evidence from Norwegian. Language and Speech. Vol.29, pp.389-398. Quoted in Wang et al., 2004.
Scott, S.K. et al. 2000. Identification of a pathway for intelligible speech in the left temporal lobe. Brain. Vol.123, pp.2400-2406. Quoted in Price et al., 2005.
Schwartz, J., & Tallal, P. 1980. Rate of acoustic change may underlie hemispheric specialization for speech perception. Science. Vol.207, pp.1380-1381. Quoted in Best & Avery, 1999; Rimol et al., 2006.
Shtyrov, Y., Pihko, E., & Pulvermuller, F. 2005. Determinants of dominance: is language laterality explained by physical or linguistic features of speech? NeuroImage. Vol.27, pp.37-47. Quoted in Minagawa-Kawai et al., 2007.
Studdert-Kennedy, M., & Shankweiler, D. 1970. Hemispheric specialization for speech perception. The Journal of the Acoustical Society of America. Vol.48, No.2, pp.579-594. Quoted in Rimol et al., 2006.
Sussman, H.M., Franklin, P., & Simon, T. 1982. Bilingual speech: Bilingual control? Brain and Language. Vol.15, pp.125-142. Quoted in Wang et al., 2004.
Tallal, P., Miller, S., & Fitch, R.H. 1993. Neurobiological basis of speech: a case for the preeminence of temporal processing. Ann NY Acad Sci. Vol.682, pp.27-47. Quoted in Minagawa-Kawai et al., 2007; Rimol et al., 2006.
Tucker, D.M., Watson, M.D., & Heilman, K.M. 1977. Discrimination and evocation of affectively intoned speech in patients with right parietal disease. Neurology. Vol.27, pp.947-950. Quoted in Moen, 1993.
Van Lancker, D., & Fromkin, V.A. 1973. Hemispheric specialization for pitch and 'tone': Evidence from Thai. Journal of Phonetics. Vol.1, pp.101-109. Quoted in Wang et al., 2004; Wang, 2001.
Van Lancker, D., & Fromkin, V.A. 1978. Cerebral dominance for pitch contrasts in tone language speakers and in musically untrained and trained English speakers. Journal of Phonetics. Vol.6, pp.19-23. Quoted in Wang, 2001.
Van Lancker, D. 1980. Cerebral lateralization of pitch cues in the linguistic signal. Papers in Linguistics: International Journal of Human Communication. Vol.13, No.2, pp.201-277. Quoted in Moen, 1993; Wang et al., 2004.
Wang, Y., Jongman, A., & Sereno, J.A. 2001. Dichotic perception of Mandarin tones by Chinese and American listeners. Brain and Language. Vol.78, pp.332-348. Quoted in Wang et al., 2004.
Wang, Y. et al. 2003. fMRI evidence for cortical modification during learning of Mandarin lexical tone. Journal of Cognitive Neuroscience. Vol.15, pp.1019-1027. Quoted in Wang et al., 2004.
Weintraub, S., Mesulam, M.M., & Kramer, L. 1981. Disturbances in prosody. Archives of Neurology. Vol.38, pp.742-744. Quoted in Moen, 1993.
Weniger, D. 1984. Dysprosody. In F.C. Rose (Ed.), Advances in neurology. Vol.42. New York: Raven Press. Quoted in Moen, 1993.
Widrig, M. 2000. See Wood, S. et al., 2000.
Wood, S., Hiscock, M., & Widrig, M. 2000. Selective attention fails to alter the dichotic listening lag effect: Evidence that the lag effect is preattentional. Brain and Language. Vol.71, No.3, pp.373-390. Quoted in Rimol et al., 2006.
Wong, P.C.M. 2002. Hemispheric specialization of linguistic pitch patterns. Brain Research Bulletin. Vol.59, pp.83-95. Quoted in Wang et al., 2004.
Zevin, J.D. & McCandliss, B.D. 2005. Dishabituation of the BOLD response to speech sounds. Behav Brain Funct. Vol.1, p.4. Quoted in Minagawa-Kawai et al., 2007.
Zatorre, R.J., & Belin, P. 2001. Spectral and temporal processing in human auditory cortex. Cerebral Cortex. Vol.11, No.10, pp.946-953. Quoted in Rimol et al., 2006.