Improvement of English Language Teaching - Research Proposal Example

Summary
This research proposal "Improvement of English Language Teaching" focuses on the development of a testing instrument that is in line with the decision of the Ministry of Education of the Kingdom of Saudi Arabia to improve the teaching of the English language to secondary students…

Evidence of each stage and of engagement with the principles of test production: comment on the drafting and development process

The development of the testing instrument is in line with the decision of the Ministry of Education of the Kingdom of Saudi Arabia to improve the teaching of the English language to the nation's secondary students (see Saudi Cabinet Weekly Meeting 2003). Aiming to serve the rapidly expanding population of young Saudi nationals who seek education and employment in and out of the country, this government initiative includes, among other measures, the introduction into the national curriculum of new textbooks published by British educational companies such as MacMillan, Pearson and Oxford. The project began by inviting international experts in language pedagogy to provide strategic advice and practical assistance to the Ministry of Education on developing English language training, and to work with local decision makers and senior education practitioners. These experts also trained a cadre of local teachers so that their competence could be replicated, and they established standards by which the Ministry's English Language Committee would gauge the implementation of the program to transform English teaching. The Saudi government has since received bids and presentations from British publishers for the supply of textbooks for classroom instruction (The English for work and study project, Saudi Arabia, 2008).

To help the Ministry's English Language Committee and the British companies select the right textbooks for each year level, it has been suggested that a diagnostic test be administered to the students to determine precisely "where they are", that is, each individual learner's level of English proficiency. The testing instrument this portfolio concerns was designed to diagnose the English proficiency level of first-year secondary students. It was written and rewritten with the learners' level in mind, based on the Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR), a guideline for assessing the achievements of foreign language learners across Europe whose six reference levels are widely accepted as a standard for grading an individual's language proficiency. A task-based language performance assessment (TBLPA) (see Bachman 2002; see also Hughes [2002, p. 5] on formative assessment and McNamara [2000, p. 6] on paper-and-pencil language tests and performance tests) corresponding to the B1 level of the CEFR, the instrument sets out to measure the students' ability to understand meanings implied in a text (see Bachman 2003, pp. 78-79, on the crucial importance of setting the purpose of a testing instrument). Consisting of general English topics, with varying levels of difficulty across its sets of questions, the instrument is intended for first-year students of secondary school in Saudi Arabia.
The instrument is likewise based on the stated objectives of the new curriculum. It specifies broad skills, in particular the ability to scan written texts for pieces of information and to use context clues, syntax and structural analysis (e.g., prefixes, suffixes and roots) to establish the meaning of unknown words, as well as underlying skills: guessing the meaning of unfamiliar words from their context, inferring the effects of causes given in the text, recognizing the attitude or opinion the author expresses, and identifying pronoun referents.

The instrument consists of a number of separate sub-sets; as Bachman (2003, p. 120) puts it, the test is a battery of tests. It contains two reading passages of about four hundred and fifty words each, both related to the students' previous lessons. The first sub-set measures the students' ability to scan for the meanings of unfamiliar words: taking cues from the first reading passage, the students answer the four short-answer items that follow. Ten minutes are allocated for this sub-test. The second sub-set deals with lexis and features thirteen multiple-choice items based on both reading passages; three structural analysis questions, specifically, are based on the second passage. As with the first sub-set, ten minutes are allocated. The third sub-set tests comprehension and is based mainly on the second reading passage; students have fifteen minutes to answer its single question and are given leeway to paraphrase and put forward their own point of view. Finally, the fourth sub-set carries referent identification items drawn from the first reading passage and has the shortest time allocation, merely five minutes. (This blueprint is summarized in the sketch below.)

A detailed answer key accompanies the testing instrument. The scoring, to be done by an independent team of three to five scorers, will be mostly objective, except for the reading comprehension question in sub-set three. A criterial level of sixty per cent (60%) is set: should 60% or more of the students pass the test, a higher-level English textbook series will be adopted for the students. Students who can comprehend the main ideas of a text, analyze it, and form and express their own viewpoints are expected to surpass this level. The results of the testing will be the point of departure for the textbook writers, who are expected to build on the students' existing English proficiency. Similarly, the weak points the test uncovers will serve as a basis for designing training courses to enhance teachers' performance.

Before it was administered to its intended examinees, the testing instrument was presented in class: the class was divided into groups, and each member gave feedback on the instruments prepared by the others. The tutor's remarks mostly concerned technical enhancements, such as improving the layout of the test paper, limiting the number of questions, adding rubrics for the written questions, and removing hints to answers found in some questions.
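As a summary aid, the four sub-sets and their time allocations described above can be laid out as a simple blueprint. The sketch below is illustrative only: the item count for the referent sub-set is not stated in this report, and the check against a forty-minute total reflects the time limit mentioned in the administration report that follows.

```python
# Illustrative blueprint of the four sub-sets described above.
# The referent sub-set's item count is not stated in the report.
subsets = [
    {"name": "Scanning for word meanings (short answers)", "items": 4,    "minutes": 10},
    {"name": "Lexis (multiple choice)",                    "items": 13,   "minutes": 10},
    {"name": "Reading comprehension (open response)",      "items": 1,    "minutes": 15},
    {"name": "Referent identification",                    "items": None, "minutes": 5},
]

# The allocations sum to the forty-minute limit applied at administration.
assert sum(s["minutes"] for s in subsets) == 40
```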
The classmates, too, thought the testing instrument was sound; even so, their own instruments became the basis for further improvement of this testing device, as it was from theirs that ideas such as inserting pagination were learned.

Short report on the implementation of the test

When the testing instrument was ready, it was sent to Saudi Arabia to be administered to fifteen secondary students of the Fourth Secondary School. Reputed to be one of the best public secondary schools in the Kingdom of Saudi Arabia, the Fourth Secondary School is home to Common European Framework B2-level students. The students who took the diagnostic test were aged 15 to 16. On the day of the test only thirteen students turned up; two of the originally designated examinees had fallen ill and were absent. The test was administered on 18 January 2010 within the instrument's forty-minute time limit.

The examinees' motivation was observed to be characteristically high, reportedly out of excitement that the diagnostic test would be marked in the UK rather than in their own country; they were visibly eager for the results and feedback. At the same time, and despite this high motivation, the students showed anxiety about the test, on account of knowing that it had been designed and sent from the UK. To help them deal with this "fear", they were provided with useful information: they were told clearly when and where the test would be administered; they were briefed on the subject materials of the test; they were familiarized with the types of question the instrument employed; they were notified of the time limits they had to follow strictly; the rationale of the diagnostic test was explained succinctly; they were assured about who could and would see their results; and they were told what the results would be used for. During the testing itself, the examinees' physical environment was regulated: the lighting was sufficient, the temperature was controlled, ventilation was assured, and the ambience of the examination room was relaxing and conducive to thinking and concentration, with noise kept to a minimum.

Evaluation of the test and the results, including item analysis where appropriate, and discussion of the overall effectiveness of the test and planned revisions

When the thirteen results were plotted, they yielded a rather symmetrical bell curve (see Smith 2003 for a discussion of the bell curve). The examinees fell into three groups: the strong students, those who scored from 70% up into the 90s; the middle group, those who scored from 60% to 69%; and the weak group, those who scored below 60%, the pre-determined passing mark. The strong group had four members: one score in the 90s, two in the 80s and one in the 70s. On the opposite side of the graph was the weak group, the four examinees who failed. In between was the middle group of five examinees who scored in the 60s. What these scores and the mean of 64.46154 indicate is that the test was pitched just within the level of competency of the majority of the students.
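For concreteness, the grouping and the mean can be reproduced with a short script. The individual scores below are hypothetical: the report gives only the group counts and the mean, so these values were chosen merely to be consistent with both.

```python
# Hypothetical raw scores consistent with the reported distribution:
# four strong (70%+), five middle (60-69%), four weak (<60%), mean 64.46154.
scores = [92, 85, 82, 74, 68, 66, 64, 62, 60, 55, 50, 45, 35]

mean = sum(scores) / len(scores)
strong = [s for s in scores if s >= 70]
middle = [s for s in scores if 60 <= s < 70]
weak = [s for s in scores if s < 60]  # below the 60% criterial level

print(f"mean={mean:.5f} strong={len(strong)} middle={len(middle)} weak={len(weak)}")
# -> mean=64.46154 strong=4 middle=5 weak=4
```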
This is a significant general evaluation of the testing instrument, since it suggests the instrument is a reliable diagnostic test for its purpose: it meets the students where they are, so to speak. Clearly, the instrument has given us a picture of the students' competence in and mastery of the English language. But while statistical analysis of a language testing instrument surely helps, it should not be allowed to completely determine the development of a language test (see Davidson 2000). Hence an item-by-item analysis is undertaken to supplement and complement the statistical treatment.

In the item analysis, which provides objective data on items to inform decisions about the test, the middle group (those whose marks range from 60% to 69%) is ignored; only the strong and weak groups are useful for this purpose. Item facility is the percentage of students who answered an item correctly, while item discrimination, which helps to establish the reliability of the items, is the difference between the item facilities of the strong and weak groups on that item. (A short computational sketch of these two statistics is given at the end of this analysis.)

The item analysis revealed that items 1-3, 6-7, 10-13, 16-20, 22, 25, 27-28 and 30 are reasonable items; they did not seem to pose any problem to the students. Items 4, 15, 21, 23 and 29 are easy and straightforward: on these, both the strong and the weak groups scored perfectly. On item 5, two examinees from the strong group and none from the weak group answered correctly; the dismal item facility here must stem from the understanding the item requires. Going through the students' answers, it is observable that some students tried to find a word from the question and simply copied the sentence in which that word occurs in the passage. Items 8 and 9 showed that the students are adept in the use of referent pronouns. The weaker group scored better on item 14, which concerns an unfamiliar synonym; examining the item, the weak group's members had the upper hand over the strong group's. On item 24 the item discrimination was 100%, as the members of the weak group missed the meaning of the suffix altogether. The most difficult item proved to be item 26: examinees in both the strong and the weak groups had no idea about it. Finally, item 31 was a writing question. Notably, two members of the weaker group did not write a single word; the examination proctor reported that they refused to write and submitted their papers after just thirty minutes.

Because an element of subjectivity would very likely color the marking of item 31 (the interpretation of answers is always limited, since it is not possible to specify all the factors affecting examinees' performance, and even the examiner's observations of performance are indirect, incomplete, imprecise and relative; Bachman 2003, p. 50), a writing rubric was devised. A second marker of chosen samples was also designated, to provide another viewpoint on the marks given to the students. With the results of the item analysis as a backdrop, recommendations may now be made for further improving the diagnostic testing instrument.
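The following is a minimal sketch of the two statistics defined above. The response matrix is hypothetical (the actual response data are not reproduced in this report): rows are examinees, columns are items, 1 = correct, 0 = incorrect; the middle group is excluded, as in the analysis.

```python
# Item facility and item discrimination as defined in the analysis above,
# computed over a hypothetical strong/weak response matrix.
responses = {
    "strong": [[1, 1, 0], [1, 1, 1], [1, 0, 1], [1, 1, 0]],
    "weak":   [[1, 0, 0], [0, 0, 0], [1, 0, 1], [0, 0, 0]],
}

def facility(rows, item):
    """Proportion of the given examinees answering the item correctly."""
    return sum(row[item] for row in rows) / len(rows)

for item in range(3):
    overall = facility(responses["strong"] + responses["weak"], item)
    # Discrimination: strong-group facility minus weak-group facility.
    disc = facility(responses["strong"], item) - facility(responses["weak"], item)
    print(f"item {item + 1}: facility={overall:.2f} discrimination={disc:.2f}")
```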
First of all, the writing question, which was afforded only five minutes, needs a more generous time allotment, for the amount of time allocated to a test or its parts is likely to affect test performance. This is all the more compelling given that the test is diagnostic in character and, as such, seeks to gauge the test taker's level of ability, not the speed at which the test can be completed (Bachman 2003, pp. 122-123). Another consideration for improving the instrument is the test taker's perception of the test, which is shaped by the salience of its parts and the descriptions of each part. In this test of separate sub-tests, the labeling of the different sub-tests needs improvement: it would help the examinees if the labels more clearly described what each sub-test intends to measure, for example, labeling "Test A: Reading comprehension" (or listening comprehension, or composition) with a description such as "A test of how well you can recognize correct grammar" (Bachman 2003, p. 120). It is also proposed that, for power tests that seek to establish examinees' level of ability, such as this diagnostic test, the questions follow the usual sequencing from easy to difficult (Bachman 2003, p. 121). Along this line, the proposal to allot more time to the writing question becomes more urgent: since (oral and) written communication most completely reveals the examinees' command of the language, and since it appears to be the most difficult part of the diagnostic test, it needs more time.

Overall, while task-based language performance assessment (TBLPA), which claims to be able to predict performance on future language use tasks outside the test itself, is obviously complex and has its share of problems (Bachman 2002), this particular testing instrument has given results that would support the Saudi Ministry of Education's decision to provide a higher-level textbook for the use of first-year secondary students.

References

Bachman, L. 2002. Some reflections on task-based language performance assessment. Language Testing, [Online] 19 (4). Abstract from Sage Journals Online database. Available at: http://ltj.sagepub.com/cgi/content/abstract/19/4/453 [Accessed 3 February 2010].

Bachman, L. 2003. Fundamental considerations in language testing. Oxford: Oxford University Press.

Davidson, F. 2000. The language tester's statistical toolbox. System, [Online] 28 (4). Abstract from Elsevier database. Available at: http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VCH-41JTS93-B&_user=10&_coverDate=12%2F31%2F2000&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1210460097&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=70b530caf5e1f1ea64bae4910fb7b746 [Accessed 17 February 2010].

Hughes, A. 2002. Testing for language teachers. Cambridge: Cambridge University Press.

McNamara, T. 2000. Language testing. Oxford: Oxford University Press.

Saudi cabinet weekly meeting, 2003. Available at: http://www.saudinf.com/main/y6067.htm [Accessed 8 February 2010].

Smith, S.E. 2003. What is a bell curve? Available at: http://www.wisegeek.com/what-is-a-bell-curve.htm [Accessed 14 February 2010].

Teacher's guide to the Common European Framework, n.d. Available at: http://www.pearsonlongman.com/ae/cef/cefguide.pdf [Accessed 7 February 2010].
The English for work and study project, Saudi Arabia, 2008. Available at: http://www.teachingenglish.org.uk/elt-projects/english-work-study-project-saudi-arabia [Accessed 8 February 2010].