The Differences Between Arabic and English Syntactic Structure


Introduction

Once the research questions are formulated, the researcher is tasked with selecting an appropriate research methodology. According to Lia Litosseliti, the choice of research design or methodology rests on the researcher asking, "What data do I need?" and "How do I analyse it?" The answers to these questions are: I need the data that will best enable me to answer the posed research questions, and I need to analyse it in a manner that makes it possible to address them (Sunderland 2010, p.10). Selecting a research methodology typically means choosing between qualitative and quantitative methods of data collection and analysis, or a combination of both. In this paper, the author looks into those methods of data analysis that will benefit the future research and discusses the possibility of their combination.

Overview of Qualitative Methods of Data Analysis

Because linguistic research is generally descriptive, it often uses qualitative methods of data analysis. These are preferred in particular when the researcher wants to discover why and how specific linguistic phenomena happen, and how people's behaviour influences their use of language. The use of qualitative methods, according to Kuntjara, proves rather productive for particular topics, especially those which seek explanations of human behaviour (Kuntjara 2005; Kuntjara 2006). According to Lincoln and Guba (1985), the advantage of qualitative methods is their adaptability to dealing with a multitude of realities. Generally speaking, qualitative methods of data analysis are understood as non-mathematical procedures of analysis (Silverman 2006). Qualitative analysis is interested not in the number of samples but in their quality, i.e. in samples able to produce rich answers to the given research question. Another characteristic of qualitative research is that it is not used for generalization.
If the researcher wishes to make generalizations, he should complement the study with quantitative research. Analysis of qualitative data is characterized by the need to work through piles of material collected in the form of journal writings, transcripts, documents, and field notes, most of which are much harder to organize than quantitative findings, which are analysed with the help of a computer. While qualitative data may come in the form of written texts (field notes or documents), they can also be visual or audible (e.g. recordings of interviews). Records of data may be either dynamic or static; a dynamic record changes through time, takes the place of specific data, and consequently becomes the data. The recordings used are video (for recording sign language), audio (for recording spoken language), or video plus audio (for recording spoken language together with accompanying body language: gestures, facial expressions, etc.). Recordings are then transcribed from spoken into written form and studied in detail, either linked to analytic codes or coded directly.

Transcribing Spoken Data

Transcription has long been used in qualitative research. In the past, data was transcribed to make it available to broader audiences, since distribution of tapes was either impractical or impossible; transcribed spoken data also gave the researcher a good overview while reading and browsing. Today, researchers transcribe spoken data because they want a good overview of it, want it searchable (thanks to previously unavailable software), and want to tag their data grammatically and embark on more interesting searches; making data available to broader audiences is now less important. The transcribed corpus has been used for different purposes in linguistic research.
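The overview-and-search use of transcripts described above can be sketched in a few lines. The snippet below is a minimal, illustrative sketch only: the speaker labels, utterances, and field names are invented, and the `search` helper stands in for the far richer querying that real transcription software provides.

```python
# Illustrative sketch: store transcribed utterances as plain records and
# search them by word, mirroring the "searchable transcripts" idea above.
transcripts = [
    {"speaker": "A", "utterance": "I had already seen it"},
    {"speaker": "B", "utterance": "you never told me that"},
    {"speaker": "A", "utterance": "I had meant to"},
]

def search(records, word):
    """Return the records whose utterance contains `word` (case-insensitive)."""
    return [r for r in records if word.lower() in r["utterance"].lower().split()]

for hit in search(transcripts, "had"):
    print(hit["speaker"], ":", hit["utterance"])
```

In practice the same records would also carry timestamps and grammatical tags, which is what makes the "more interesting searches" mentioned above possible.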
A corpus (plural corpora) is "a collection of linguistic data, either compiled as written texts or as a transcription of recorded speech", whose key purpose is "to verify a hypothesis about language - for example, to determine how the usage of a particular sound, word, or syntactic construction varies" (Crystal 1992, p.85). A related term is "computer corpus", an extensive body of machine-readable texts. McArthur (1992, pp.265-266) defines a corpus as a body of specimens thought to represent a particular language (texts, various utterances, etc.), typically stored in electronic databases. McEnery (2001, p.17) observes that today the term "corpus" is almost synonymous with "machine-readable corpus". Transcribed corpora are used in pragmatic, socio-linguistic, syntactic, conversation-analytic, morphological, semantic, and phonological/phonetic research. In the linguistic research of recent years, computerized corpora have been explored in two basic forms: native speaker corpora and learner corpora. Granger (1998) describes the latter as compiled with the purpose of obtaining objective data that would assist in describing the language of learners; according to Granger (2003), learner corpora are interlanguage or, in other words, L2 corpora. Recent investigations of learner corpora include part-of-speech (POS) tagging, discoursal tagging, parsing, error tagging, and morpho-syntactic tagging. In the context of the suggested research into differences between Arabic and English structures, the concept of learner corpora is highly relevant.
It may be used to discover the learning patterns of non-native speakers/learners of English, in particular certain features of "sounding foreign" which become evident as non-native speakers overuse and underuse structures and words found in the target language, as well as certain reflections of a pragmatic nature within the language learning paradigm (Flowerdew 1998).

Transliteration

Transliteration is used when one spelling/writing system must represent another, for example when the Roman alphabet is used to write Arabic, Japanese, Russian, or Chinese. Transliteration differs from transcription, whether phonetic or phonemic: transliteration deals with spelling, i.e. the representation of lexical items (words or morphemes, the units of meaning), whereas phonetic/phonemic transcription represents how words are pronounced, i.e. units of sound. Significant differences between the phonologies of Arabic and English, along with the lack of a consistently used universal system of phonological representation in Arabic, make transliteration a highly applicable method of representing Arabic lexical units (Kharusi & Salman 2011). An example of Arabic-English transliteration: the Arabic letter ﺡ = IPA [ħ] is transliterated as or .

Translating Data

Using translational phenomena as a ground for linguistic description assumes that one can rightly speak of a particular translational relation existing between the two languages. Comparing descriptions of two given language systems rests on the assumption that some amount of information is provided about the translational relation between the target language and the source (Thunes 1998, p.25). Translational correspondences fall into four basic types.
The first type is word-by-word correspondence (certain morphological discrepancies, e.g. gender differences, are tolerated); in the second type, word-by-word translation is not possible, but the translation is still quite close; in the third type, bigger structural discrepancies hold between the target and source strings than in type 2; finally, type 4 correspondences are found when discrepancies occur not just on the structural level but also on the level of semantics (Thunes 1998, p.28). These four categories make up a hierarchy that grows more complex towards the fourth level.

In recent years, language documentation has attracted researchers' attention. Scholars agree that the core of language documentation ought to comprise, above all, texts that are recorded and transcribed. Not only should they be translated, they also have to be annotated, in other words glossed, to enable a broader range of purposes to be pursued. To provide a gloss generally means to give a morpheme-by-morpheme translation with information on grammatical categories. Indeed, the most widespread format in grammatical description is the interlinear gloss, or interlinear morphemic translation, first offered by Lehmann back in 1982, who defined the principal purpose of the format as "to make the grammatical structure transparent" (Lehmann 1982, p.202). In practice, glossing renders the function and meaning of individual morphemes. For example:

le-s    grand-s      livre-s       sont         arrivé-s
the-pl  big.masc-pl  book.masc-pl  be.pres.3pl  arrive.ppart.masc-pl
'The big books have arrived'

As is evident from the description above, the traditional glossing format is based on one line per tier. It contains predominantly morphological and semantic information, as well as some information of unclear status, and it is used to describe languages (Drude 2002).
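The column alignment that makes interlinear glosses readable can be produced programmatically. Below is a minimal illustrative sketch, not part of any cited glossing toolchain: the function name `format_gloss` and the uppercase tag spellings are my own assumptions, and the example reuses the French sentence discussed above.

```python
# Illustrative sketch: column-align a two-tier interlinear gloss by padding
# each source morpheme and its gloss to a common width.
def format_gloss(source_words, gloss_words, translation):
    """Return source tier, gloss tier, and free translation as one string,
    with each word padded so the two tiers line up in columns."""
    widths = [max(len(s), len(g)) for s, g in zip(source_words, gloss_words)]
    source_line = "  ".join(s.ljust(w) for s, w in zip(source_words, widths))
    gloss_line = "  ".join(g.ljust(w) for g, w in zip(gloss_words, widths))
    return "\n".join([source_line, gloss_line, f"'{translation}'"])

print(format_gloss(
    ["le-s", "grand-s", "livre-s", "sont", "arrivé-s"],
    ["the-PL", "big.MASC-PL", "book.MASC-PL", "be.PRES.3PL", "arrive.PPART.MASC-PL"],
    "The big books have arrived",
))
```

Real glossing tools (such as the Shoebox implementation Drude describes) manage many more tiers, but the alignment principle is the same.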
At the same time, glossing is recognized not as a tool but as a means of general orientation in data-model development; it is in effect a schema of linguistic annotation, including annotation that deals with syntax. Non-English data is glossed in the following way: first comes the line with the romanised representation; second, the line with a literal translation, where necessary at the level of morphemes; third, a grammatical translation. To illustrate (Kambera):

I Ama na-kei-ngga-nya
the father 3s.nom-buy-1s.dat-3s.dat
'Father buys it for me'

To mark a sentence as ungrammatical, the symbol * is used. For example:

(1) a. I thanked him. b. *Me thanked him.
(2) a. S/He thanked me. b. *S/He thanked I.
(3) a. I thanked him/her. b. *I thanked s/he.

Performing Corpus Analysis

Once the researcher has obtained his recordings, it is time to analyse them. The choice of analysis depends on which area of linguistic behaviour he intends to examine. Analysis of lexis involves analysing how vocabulary is used; grammar analysis concerns the use of syntactic and morphological constructions in the corpus; analysis of pronunciation concerns how sounds are produced and used; analysis of style explores formal versus informal and standard versus non-standard aspects of language; when interaction is studied, the researcher focuses on turn-taking and speaker convergence/divergence. Corpus analysis is the focus of corpus linguistics, whose key point is to discover patterns of authentic language use through analysis of how language is actually used. In this respect, the goal of corpus-based analysis is to identify usage patterns within particular empirically collected data and to discover what those patterns reveal about language behaviour.
Thus, corpus-based analysis is not preoccupied with generating theories of what is possible in the language - like, for example, Chomsky's phrase structure grammar, which produces an unlimited number of different sentences - but is rather concerned with the probable choices speakers make in their language behaviour (Krieger 2003). Corpus analysis within the domain of corpus linguistics requires access to a particular corpus and a concordancing program. As already mentioned, a corpus is a databank of a multitude of natural texts, compiled from writing or from transcription of recorded speech. A concordancing program, or concordancer, is software that analyses corpora and generates the results. Concordancers take their name from the term 'concordance', which basically means "an alphabetical arrangement of the principal words contained in a book, with citations of the passages in which they occur" (King's College London n.d.). Concordances are empirical tools of in-text research. Initially created by hand, they have become, with the advent of software technology, the most basic tools of textual analysis. Their popularity is explained by the fact that they show every place in a given text where a specific word or word-form is used; hence they also allow patterns of meaning to be detected (King's College London n.d.). A central focus of corpus analysis is register variation. Register comprises language varieties that can be used within a variety of situations; it is hard to keep track of unless corpus analysis is used. It is believed that language evolves as a combination of different registers, ranging from more general to highly specific varieties.
Specifically, a general register may include fiction, casual conversation, academic prose, or newspapers; in its turn, a specific register consists of subdivisions of such a variety - within academic prose, for instance, scientific texts, studies in linguistics, or works of literary criticism - each possessing characteristics specific to its field. The role of corpus analysis is to reveal the patterns of language behaviour within these registers, and it often discovers that language behaves differently depending on the register, following unique rules and patterns (Krieger 2003). The advantages of corpus-based analysis have been clearly outlined by scholars. It is considered able to provide a view of the language that is more objective than introspection, anecdotes, and intuition (Krieger 2003). Sinclair (1998, cited in Krieger 2003) explains that this may be attributed to the fact that speakers have no access to the subliminal patterns running through a language. Moreover, corpus-based analysis is capable of investigating virtually all language patterns: it applies to lexical, lexico-grammatical, phonological, structural, morphological, and discourse areas. It also allows discovering special patterns or, better, agendas - for example, male versus female use of a particular kind of question (e.g. tag questions), or error patterns in Chinese students' counterfactual statements. Adequate analytical tools enable the researcher to reveal not just the patterns of language use, but also the extent to which those patterns are used, as well as the contextual factors that affect variability. To illustrate, a researcher may examine the past perfect tense to reveal how often it is used in fiction versus newspaper language, or in writing versus speaking.
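The frequency comparisons just described can be sketched with a few lines of code. The two-sentence "registers" below are invented toy data, not real corpora, and `relative_frequency` is a hypothetical helper; the point is only the mechanics of normalised counting, so that samples of different sizes remain comparable.

```python
# Toy sketch: compare how often a form occurs in two invented "registers",
# normalising raw counts to occurrences per 1,000 tokens.
from collections import Counter

def relative_frequency(tokens, target):
    """Return occurrences of `target` per 1,000 tokens (case-insensitive)."""
    counts = Counter(t.lower() for t in tokens)
    return 1000 * counts[target.lower()] / len(tokens)

fiction = "He had seen the letter before he had left".split()
news = "The report said the figures had risen".split()

print(relative_frequency(fiction, "had"))
print(relative_frequency(news, "had"))
```

A real study would of course draw on corpora of thousands of texts and would tag "had" for its grammatical role (past perfect auxiliary versus main verb) before counting.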
On a similar note, researchers may explore the use of synonyms such as start and begin, or small/little/tiny, to define their particular contextual preferences and frequency distribution (Krieger 2003). As Ludeling and Kyto observe, the advantage of using concordancers to generate concordances lies in the availability of computer software and its capacity "to search quickly and efficiently through large amounts of language data for examples of words and other linguistic items" (Ludeling & Kyto 2008, p.707). They further explain that once the results of such a search are displayed as a concordance, the researcher can view the data in a comfortable format and conduct various types of analyses (see Figure 1).

Figure 1. A sample concordance (King's College London, n.d.)

The theoretical background of text analysis through concordancing has been explored by Tognini-Bonelli (2001), who focuses on the difference between a researcher reading a text in the common linear manner (i.e. from start to end) and a researcher reading the lines of a concordance generated from a corpus. While reading concordances, researchers search for patterns of contrast or similarity in the words around the search item. Structurally, Tognini-Bonelli explains, when the researcher reads a particular text under analysis, he or she works with parole, researching how meaning is created in that very text; when doing corpus-based analysis of concordances, the researcher can gain insight into the way the language works as a system, in other words into langue.
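The core operation of a concordancer, the keyword-in-context (KWIC) display, is simple to sketch. The snippet below is a minimal illustration and not a model of any concordancer cited above; the sentence is invented and the window size is arbitrary.

```python
# Minimal KWIC sketch: for every occurrence of the search item (the "node"),
# emit a line showing a fixed window of words on either side.
def kwic(tokens, node, window=3):
    """Return one 'left [node] right' context line per occurrence of node."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok.lower() == node.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{tok}] {right}")
    return lines

text = "the cat sat on the mat and the dog sat by the door".split()
for line in kwic(text, "sat"):
    print(line)
# the cat [sat] on the mat
# and the dog [sat] by the door
```

Scanning such aligned lines is exactly the non-linear, pattern-oriented reading Tognini-Bonelli contrasts with ordinary start-to-end text reading.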
From the functionalist perspective, text reading enables readers to focus on the poetic, rhetorical, phatic, emotive, and referential functions, whereas concordancing a particular corpus provides a chance to foreground another function of the text, the metalingual one (Jakobson 1960). Applying corpus analysis as a data analysis method does not require it to be the primary method of research. Scholars agree that concordances may well be used simply to find data that supports some hypothesis in research that is not corpus-driven. Ludeling and Kyto, for example, admit that concordance programs can be used to obtain data supporting a hypothesis arrived at through means other than corpus analysis; interestingly, most research that utilizes concordances is of this very type. In this case, qualitative analysis of the obtained concordances takes place (Ludeling & Kyto 2008, p.712). In the context of the suggested study of the differences between the syntactic structures used in Arabic and English, corpus analysis may be utilized as a convenient research tool. McEnery and Wilson (1996, p.93) suggest that corpora and grammatical/syntactic studies connect well, so that corpora may effectively be used as a research tool: as a valid tool of syntactic research, corpora make it possible to quantify the representation of grammar, and they provide empirical data with representative and quantifiable properties for testing hypotheses derived from grammatical theory. One of three approaches to the analysis of syntactic data may be chosen. First, the indirect approach uses the tools provided by concordance and collocation software, which has already described the contexts the words appear in.
Second, the collected data may be manually tagged with syntactic markers as it is entered, which is a time-consuming process. Third, automatic tagging of the text may be performed by a computer parser, which leaves out only those parts of the data that the computer can hardly process (Higgins & Johns 1984, p.93). The use of corpus analysis in syntactic research has been demonstrated in the study by Singh (n.d.), "Syntax in a Business Context: A Learner Corpus Analysis", which shows that a corpus obtained from the output of L2 learners provides the data needed to study syntactic structures. Singh's experience may be applied on the basis of the relatively simple approach to syntactic analysis this scholar suggests. Typically, computer-based studies in syntax are rather limited, since they demand hard work and lengthy hours spent keying in data and applying a complex method of data analysis to process the findings. Singh's research demonstrates an uncomplicated analytical method and the effective use of POS (part-of-speech) tagging software. Below is a table of POS tags used by concordance software to conduct vertical tagging; a similar one was used by Singh to develop syntactic fragmentations of sentences and thus to conduct syntactic analysis at the sentence level.

Figure 2: Table of POS Tags (Adapted from Thomas, 2002)

NOUN - macro tag marking any noun tag, e.g. sigh/NOUN
VERB - macro tag marking any verb tag, e.g. dog@/VERB
NN - common noun, e.g. friend/NN
NNS - plural noun, e.g. wants/NNS (will not show the word as a 3rd person singular verb)
JJ - adjective, e.g. like/JJ (not as a verb or noun)
DT - definite and indefinite article; used in word strings, gives a, an, and the
IN - preposition; marks word strings where there is a word + preposition
RB - adverb, e.g. prohibit*/RB (is there an adverb derived from prohibit? or from ration*/RB?)
VB - base-form verb, e.g. trigger/VB
VBN - past participle verb, e.g. seen/VBN (relevant when studying the passive voice or the perfect aspect)
VBG - -ing form verb, e.g. search/VBG (relevant when studying the continuous aspect)
VBD - past tense verb; put can be present or past, and put/VBD only shows concordances where it is a past tense verb
CC - coordinating conjunction, e.g. but
CS - subordinating conjunction, e.g. because
PPS - personal pronoun, subject case, e.g. I
PPO - personal pronoun, object case, e.g. me
PPP - possessive pronoun, e.g. mine
DTG - determiner-pronoun, e.g. many, some, both, all

Conclusion

Since the suggested study will focus on syntax, the methodology should be selected accordingly. Syntax describes the rules by which the words in a sentence relate to one another so as to form that sentence. The descriptive research into the differences between Arabic and English syntactic structure may utilize a qualitative paradigm with some quantification provided by the use of concordances. While recording and transcription make up the core of language documentation, translation and glossing should be used to annotate the language in order to describe it effectively. At the same time, corpus analysis may be used to look for the frequencies of syntactic uses and the prevailing syntactic patterns; this could be done on the basis of learner corpora gathered among non-native, Arabic-speaking learners of English. The combination of two or more approaches and methods to solve a research problem, known as triangulation, will increase the validity of the research and make it broader in scope.

Bibliography

Crystal, D 1992, An encyclopedic dictionary of language and languages, Oxford University Press.
Drude, S 2002, Advanced glossing: a language documentation format and its implementation with Shoebox, viewed 26 March 2013, http://www.mpi.nl/lrec/2002/papers/lrec-pap-10-ag.pdf.
Higgins, J & Johns, T 1984, Computers in language learning, Taylor & Francis.
Jakobson, R 1960, Closing statement: linguistics and poetics, in T Sebeok (ed.), Style in language, 350-377, Cambridge, MIT Press.
Kharusi, N & Salman, A 2011, The English transliteration of place names in Oman, Journal of Academic and Applied Studies, 1(3), 1-27.
King's College London n.d., Fundamentals of the digital humanities: the basics of concording, viewed 26 March 2013, http://www.cch.kcl.ac.uk/legacy/teaching/av1000/textanalysis/concord.html.
Krieger, D 2003, Corpus linguistics: what it is and how it can be applied to teaching, The Internet TESL Journal, viewed 26 March 2013, http://iteslj.org/Articles/Krieger-Corpus.html.
Kuntjara, E 2005, Sociolinguistic study of politeness and its problems in using a qualitative research approach, paper presented at the 3rd Qualitative Research Convention, Johor Baru, Malaysia.
Kuntjara, E 2006, Using qualitative method in linguistic research, viewed 24 March 2013, http://www.academia.edu/903868/When_to_use_Qualitative_Method_in_Linguistic_Research.
Lehmann, C 1982, Directions for interlinear morphemic translations, Folia Linguistica 16, 199-224.
Lincoln, YS & Guba, EG 1985, Naturalistic inquiry, London, Sage Publications.
Litosseliti, L 2010, Research methods in linguistics, Continuum International Publishing Group.
Ludeling, A & Kyto, M 2008, Corpus linguistics, Walter de Gruyter.
McArthur, T (ed.) 1992, The Oxford companion to the English language, Oxford University Press.
McEnery, T 2001, Corpus linguistics: an introduction, Edinburgh University Press.
Silverman, D 2006, Interpreting qualitative data: methods for analysing talk, text, and interaction, SAGE.
Singh, M n.d., Syntax in a business context: a learner corpus analysis, viewed 26 March 2013, http://www.academia.edu/1993288/Syntax_in_a_Business_Context_A_Learner_Corpus_Analysis.
Sunderland, J 2010, Research questions in linguistics, in L Litosseliti (ed.), Research methods in linguistics, 9-28, Continuum International Publishing Group.
Thomas, J 2002, A ten-step introduction to concordancing through the Collins COBUILD Corpus Concordance Sampler, viewed 26 March 2013, http://web.quick.cz/jaedth/Introduction%20to%20CCS.htm.
Thunes, M 1998, Classifying translational correspondences, in S Johansson & S Oksefjell (eds), Corpora and cross-linguistic research: theory, method, and case studies, Amsterdam-Atlanta, Rodopi.
Tognini-Bonelli, E 2001, Corpus linguistics at work, John Benjamins Publishing.
Source: "The Differences Between Arabic and English Syntactic Structure Essay", n.d., https://studentshare.org/humanitarian/1796591-research-methods