What Is Speaking Ability - Assignment Example


The Advantages and Disadvantages of Interview Format and Paired Format Speaking Tests

INTRODUCTION

The basic question involved with speaking tests is "what is speaking ability?" (Johnson, 2002). This might seem self-evident, but it is an essential quandary for those attempting to design tests for a number of purposes. Older theories of communicative competence and proficiency have been replaced by an explanatory framework known as "interactional competence" (Johnson, 2002). These theorists argue that speaking ability should be judged not cognitively but as a social construct. The exact use of such ideas to those involved in the very practical (and often pragmatic) process of developing speaking tests is unclear.

In the interview format of language test, often called the Oral Proficiency Interview (OPI), the following set-up occurs. The OPI is a face-to-face or telephonic interview that consists of three phases: a warm-up phase, a series of level checks and probes, and a wind-down phase. This is one of the most widely accepted tests of speaking ability and is used by government agencies (the Defense Language Institute, the Peace Corps), testing institutions (Educational Testing Service) and the Federal Interagency Language Roundtable.

There are many advantages to the OPI system of testing. It is easy, quick and apparently forecasts accurately the degree to which a foreign speaker will be able to communicate in English. Unlike written tests, it actually tests English speaking ability which, as with all languages, is quite separate from the ability to read and write. The test can be performed quickly and the tester can interview multiple people in a single session. This is particularly important within the context in which this test is often given: graduate students from foreign countries are often given the test before they can perform grading and/or teaching duties in American universities.
Resources for such testing are limited, and so the ability of one tester to perform multiple tests in one day is vital.

There are, however, detractors who criticize the test, both for what it purports to test and for the accuracy of its findings. Messick (1998) suggests that the test is "invalid", defining validity as the degree to which theoretical and empirical evidence justifies the claims made for the test scores. Basically, Messick suggests that OPI tests do not actually represent real-life conversations. Part of the problem with OPI tests is that they do not reflect the sheer range of speaking that occurs in actual life: there is monologic speaking (one person), dialogic speaking (two people) and speaking among multiple participants, such as in a meeting with several colleagues. The OPI tests tend to test only one of these: the dialogue. As Brown (2003) and Bonk (2003) have suggested, some speakers do better with dialogue and some with discussion activities. A test that tests one over the other is bound to be somewhat limited in its scope.

Another basic problem with this type of test (although it may in fact be shared by all speaking tests) is the variability of the interviewer and his or her effect upon the test results. Each interviewer has a unique speech style, pattern and intonation that may help (or hinder) the interviewee (Brown, 2003). Thus the test result may be seen as a co-constructed score reached by both the tester and the subject, rather than an accurate measure of the non-native speaker's communicative prowess. This tendency may be countered by careful training of the tester together with an equally careful process of self-evaluation and objective supervision. Within one center, periodic test interviews can be undertaken in which the same candidate is tested by all the testers (with suitable remuneration, of course) and the results then compared.
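This calibration procedure — every tester scores the same candidate, each score is compared with the group mean, and outliers are flagged for further training — can be sketched in a few lines. This is a hypothetical illustration only: the names, the 0-10 scale and the 1.0-band threshold are assumptions, not part of any published testing protocol.

```python
# Sketch of the tester-calibration check: every tester rates the same
# candidate, and any tester whose score strays too far from the group
# mean is flagged for further training.

def flag_outlier_raters(scores, max_deviation=1.0):
    """Return, sorted, the names of raters whose score deviates from
    the group mean by more than max_deviation (in the test's own bands)."""
    mean = sum(scores.values()) / len(scores)
    return sorted(name for name, score in scores.items()
                  if abs(score - mean) > max_deviation)

# One candidate, rated by four testers on a 0-10 scale (mean is 6.5):
ratings = {"Alice": 7.0, "Bob": 7.0, "Carol": 7.5, "Dan": 4.5}
print(flag_outlier_raters(ratings))  # ['Dan'] — well below the group mean
```

In practice the threshold would be set by the testing center itself, in the units of its own rating scale.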
If a tester's results vary too much from the mean, then some additional training is perhaps needed. As McNamara (1997, 2002) suggests, the more educated, skillful and eloquent the interviewer is, the more she may actually 'raise' the performance of the interviewee.

Another problem is that interviewees tend to do far better on the second test than on the first (Aarts, 1995). It seems that familiarity with the test enables those taking it to do far better than they would have before; thus familiarity with the test may become a greater influence on the results than actual verbal ability. Again, this is perhaps inevitable within all testing types. One might argue that the result of the second test is in fact more accurate, because it tests the candidate's actual knowledge rather than the ability to adapt to a strange and often stressful situation. In order to test language skills, as non-stressful an environment as possible is needed so the speaker can show his true worth. Of course, if the test is designed to test English skills under pressure, such as negotiating and interviewing skills, then this can be made part of the test.

Paired format language tests also have their advantages and disadvantages. The paired format does what its name suggests: two people are tested simultaneously by an interviewer.

The move towards using a paired (or group) format for assessing speaking ability directly reflected changes which were taking place during the 1980s in the teaching and learning of English as a Foreign Language. Developments in applied linguistics during the 1970s had led to a better understanding of the communicative role of language and this in turn influenced approaches to language teaching; the focus shifted away from the teaching of knowledge about language towards developing the ability to use language for communicative purposes (Taylor, 2004).
The advantages of this method are obvious: it enables the evaluator to rate the interviewees in a more complex and, some would argue, more accurate type of communication than appears in the singleton type of test. In paired interviews there is often a second assessor involved who remains outside the actual discussion and merely observes and evaluates. Two different vantage points are thus gained within the test, and so the actual measurement may be more accurate. The Interlocutor has taken part in the conversation, and can therefore judge what it is like to be part of a communicative process with the candidates. This is a vital perspective, because it is communication that is being tested. The Assessor, however, also has an important role: she provides the more "objective" viewpoint, both on the candidates' performance and on the effect that the Interlocutor may have had on the conversation and thus on the individual performances.

This seems a very powerful model, but it suffers from being more complex than the previous type of test. The difference in viewpoints between the Interlocutor and the Assessor may be difficult to mediate; if they differ greatly from pairing to pairing, then an analysis of the reasons needs to be undertaken. Because more information is gathered, and is evaluated in more than one way, there is also more opportunity for error.

From this, a detailed comparison of the two types of testing can be gleaned. Hughes (1989) drew attention to "at least one potentially serious drawback" of the traditional interview format: the power relationship which exists between tester and candidate. He also suggested that 'only one style of speech is elicited and many functions...are not represented in the candidate's performance', adding that 'discussions between candidates can be a valuable source of information' (pp. 104-108).
A defender of the traditional format might sensibly suggest that including more than one tester merely increases the number of power relationships involved: between Interlocutor and candidate, between Interlocutor and Assessor, and between candidate and Assessor. Ross and Berwick's study (1992) showed how oral interviewers use features of control (e.g. topic nomination) and accommodation (e.g. speech modifications) for different purposes. Young and Milanovic's analysis of the one-to-one FCE interview (1992) indicated that the resulting examiner-candidate discourse was highly asymmetrical in terms of features of dominance, contingency and goal-orientation. The fixed role relationship (examiner-candidate) in a one-to-one test format makes it difficult for the candidate to escape this asymmetry. The paired candidate format, on the other hand, provides the potential for various interaction patterns: between candidate and examiner, between the two candidates, and among the three participants. The asymmetrical nature of the discourse is therefore considerably reduced.

THE GRADING OF SPEAKING TESTS

The problems and quandaries associated with the grading of speaking tests are closely related to some of the problems stemming from the nature of the tests themselves. The most basic problem is that a test of speaking ability is essentially a qualitative rather than a quantitative test, with few objective measurement standards. The creation of criteria, and the grading against them, is essentially subjective and almost a "matter of opinion". Thus in singleton tests, where one interviewer tests one candidate, the assessor is involved in the test and so may influence the results. Communication through language is an immensely complex process that may depend upon personal attractiveness, size, race and a whole host of other factors that are beyond the variables being measured by the test.
One other problem is what should actually be tested with a speaking test. Older tests tended to attempt some objective analysis of the amount of vocabulary that the candidate possesses, their skill with grammar and where they can be placed on a continuum of other learners. More recent studies tend to concentrate upon the actual communicative process as it occurs in real, everyday life. Both sets of criteria have problems. The old type of measurement suffered from the fact that it was not testing actual language as it is spoken day to day. It is perhaps easier to evaluate whether a candidate understands and knows how to use a certain list of words, but does this necessarily show how they will communicate in real life? It does not. The more recent stress on everyday communication suffers from the fact that it yields even less "objective", i.e. empirical, data. Conversational language is so complex, and so variable according to the type of conversation taking place, that a single conversation subject will have an infinite variety of possible courses with different interviewers and different candidates. Objective grading of such conversations is very difficult, as the criteria and standards for that grading must be very complex in order to reflect the nature of the test.

Upshur and Turner (1995) reveal both the difficulties and perhaps the opportunities offered by the process of creating grading criteria and a rubric for scoring them. The authors outline the history of grading as related to speech testing:

With the arrival of the communicative era, the criteria for language assessment have been expanding within educational contexts. Rather than simply being asked to respond to discrete point items (e.g., multiple-choice questions), students are increasingly being asked to perform tasks that involve extended speaking and writing.
The new requirements bring with them the need for rating scales to assess these performances. Scales have been with us for many decades, but their criteria have been evolving as language ability models are enhanced through research. The basis for rating performances has expanded to include not only formal accuracy of language but also its appropriate use and effectiveness. Increasingly, teachers must make qualitative judgments with the help of such criteria. In addition, they need to be informed about such development procedures and the components that affect these procedures. (Upshur & Turner, 1995)

The change from tests using discrete points (easy to grade) to more complex assignments coincides with the increased difficulty of grading the students. The need to make "qualitative" judgments is by nature a complex and subjective task. But some headway can be made in creating a rubric and criteria for grading. Upshur and Turner (1995) suggest that having groups of teachers and testers grade speech tests, then discussing what the differences were and why they occurred, is a good start to the process of creating grading guidelines. In this way it is similar to what occurs in exams such as the AP English exam, in which essays are graded by a bank of testers: if they all grade a paper within a certain set margin then no discussion occurs, but if there are large differences then that particular paper is discussed. A similar process could occur with speeches. The question raised here is: even if the graders do concur on the grade, does this necessarily imply that the testing and grading are accurate? It does not. If the speech test is not based upon empirical and carefully calculated criteria, then any grading, however consistent, will be of little use. The most important point within the speech test process is the actual design and creation of the test, rather than the grading.
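The AP-style moderation just described — each performance is scored by several graders, the grade stands if their scores fall within a set margin, and the performance is set aside for discussion otherwise — can be sketched as follows. The candidate labels, the score values and the one-band margin are illustrative assumptions, not figures from any actual examination board.

```python
# Sketch of margin-based moderation: a performance whose scores spread
# more widely than the agreed margin is set aside for discussion.

def needs_discussion(scores, margin=1.0):
    """True when the spread between the highest and lowest score
    exceeds the agreed margin (in the test's own bands)."""
    return max(scores) - min(scores) > margin

performances = {
    "candidate_1": [4.0, 4.5, 4.0],   # spread of 0.5 bands: grade stands
    "candidate_2": [3.0, 5.5, 4.0],   # spread of 2.5 bands: discuss
}
flagged = [name for name, scores in performances.items()
           if needs_discussion(scores)]
print(flagged)  # ['candidate_2']
```

As the surrounding text notes, such a check only measures agreement among graders; it says nothing about whether the underlying test criteria are themselves sound.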
The grading of an accurate test can be adapted, but an inaccurate test's grading is fatally flawed from the first.

In the future, better computer programs may be able to test speaking skills at a far higher level than the essentially subjective and qualitative approach that is now offered. Analysis of the type of words that the candidate uses, together with sentence construction and other analyses, may provide an individualized speaking program for the person rather than just a generalized score. Internet-based tests may also provide even more accurate and convenient language testing. This is already occurring, as in the TOEFL (Test of English as a Foreign Language) testing service:

The Internet-based TOEFL Test. The TOEFL Internet-based Test (TOEFL iBT) tests all four language skills that are important for effective communication: speaking, listening, reading, and writing. The test helps students demonstrate that they have the English skills needed for success. What is the benefit of an Internet-based test? TOEFL iBT emphasizes integrated skills and provides better information to institutions about students' ability to communicate in an academic setting and their readiness for academic coursework. With Internet-based testing, ETS can capture speech and score responses in a standardized manner. Online registration and online score reporting make it easier for students to register for TOEFL iBT and receive their test scores. When will TOEFL iBT be available? TOEFL iBT was introduced in the United States, Canada, France, Germany, Italy, and Puerto Rico in 2005. The second phase of the rollout began on March 25, 2006, when test centers in selected cities in Africa, the Americas, Europe, Eurasia, the Middle East, and North Africa offered TOEFL iBT for the first time.
(TOEFL, 2006)

One new type of test allows a permanent record of the speech to be made, both for checking the grading and for planning future study:

The Simulated Oral Proficiency Interview (SOPI) is a type of tape-mediated test of speaking proficiency. All SOPI items are based on the speaking proficiency guidelines of the American Council on the Teaching of Foreign Languages (ACTFL). The test is presented to examinees via a test booklet and a master tape. It can be administered individually by anyone using two tape recorders. It can also be used in a language laboratory setting to test groups. During testing, the examinee listens to directions for speaking tasks from a master tape while following along in a test booklet. As the examinee responds to each task, his or her speaking performance is recorded on a separate response tape. Each examinee's response tape is later evaluated by a trained rater who scores the performance according to the ACTFL proficiency guidelines, and each examinee receives a rating on the ACTFL proficiency scale. Professional test developers at the Center for Applied Linguistics, working with leaders in the field of foreign language education, have carefully designed the SOPI to elicit a representative sample of an examinee's speech in a short period of time. The test is intended for students at proficiency levels from Novice-High to Superior. (www.cal.org, 2006)

This test combines the flexibility of verbal testing with the permanence of a quantitative technique, which makes it ideal.

CONCLUSION

To conclude, the testing of speech skills is of growing importance in a world in which business is becoming increasingly international in nature and in which many people live and work in countries outside their own.
Speech tests need to be both fair and accurate: they should offer a degree of objective, empirical testing as well as the more qualitative approach of actually testing conversational English. The latter is very difficult to grade, but this does not mean that it should not be attempted. One-on-one interview techniques can be combined with the more complex, multi-participant tests to provide an overall picture of the student's speaking ability. One test delivered a single time is unlikely to produce an accurate and thus useful result.

______________________________________________

Works Cited

Aarts, Flor, & Schils, Erik. "Relative Clauses, the Accessibility Hierarchy, and the Contrastive Analysis Hypothesis." IRAL, 33, 1, 47-63. 1995.

Bonk, W.J. "A Many-Facet Rasch Analysis of the Second Language Group Oral Discussion Task." Language Testing, 20, 1, 89-110. 2003.

Brown, A. "Interviewer Variation and the Co-construction of Speaking Proficiency." Language Testing, 20, 1, 1-25. 2003.

Hughes, A. Testing for Language Teachers. Cambridge University Press, Cambridge: 1989.

Johnson, Marysia. "The Art of Non-Conversation: A Reexamination of the Validity of the Oral Proficiency Interview." Canadian Modern Language Review, Vol. 59, No. 2, December 2002.

McNamara, T. "'Interaction' in Second Language Performance Assessment: Whose Performance?" Applied Linguistics, 18, 446-466. 1997.

-----------. "Discourse and Assessment." Annual Review of Applied Linguistics, 22, 221-242. 2002.

Messick, S.J. Assessment in Higher Education: Issues of Access, Quality, Student Development and Public Policy. Erlbaum, New York: 1998.

Ross, S. and Berwick, R. "The Discourse of Accommodation in Oral Proficiency Interviews." Studies in Second Language Acquisition, 14, 2, 159-176. 1992.

Taylor, Lynda. "Investigating the Paired Speaking Test Format." Research Notes. 2004.

TOEFL, Test of English as a Foreign Language, www.toefl.com

Upshur, J. & Turner, C. "Constructing Rating Scales for Second Language Tests." ELT Journal, 49, 1, 3-12. 1995.

www.cal.org. "The Simulated Oral Proficiency Interview." Center for Applied Linguistics, 2006.

Young, R. and Milanovic, M. "Discourse Variation in Oral Proficiency Interviews." Studies in Second Language Acquisition, 14, 4, 403-424. 1992.