Education: Evaluating the Research of Others


This paper analyses and compares two articles relating to aspects of computer-mediated teaching and learning, with an emphasis on the methodology chosen in each. The first paper (Cain and Pitre, 2008) deals mainly with the student learning side of this topic, while the second paper (Buzzard et al., 2011) looks at both the teaching side and the learning side. Both papers were published within the last three years and represent aspects of current thinking on this topic. The two articles are analysed in turn, followed by a comparison evaluating the strengths and weaknesses of their respective methodological approaches. The paper concludes with a personal reflection presenting my own views on the two articles and how useful I anticipate they will be in my future professional career.

The first article, entitled “The Effect of Computer Mediated Conferencing and Computer Assisted Instruction on Student Learning Outcomes” (Cain and Pitre, 2008), was published in the Journal of Asynchronous Learning Networks. This is a specialised journal, which states on its website that it “adheres to traditional standards of double-blind peer review and authors are encouraged to provide quantitative data” (JALN, 2011). This assurance gives the reader confidence that original fieldwork with clear data will be provided, and that a checking procedure has been carried out to ensure that the paper meets proper academic and stylistic standards. The website also indicates a potential bias in favour of asynchronous learning networks, as opposed to other forms of learning, since it states: “The original objective of the Journal was to establish ALN as a field by publishing articles by reliable and authoritative sources...” (JALN, 2011).

The starting point of the article is a perceived gap in the literature so far: “Even with the tremendous growth in the use of web-based technologies in traditional college courses... 
not much attention has been paid to the effectiveness of these new technologies in contributing to the college classroom experience. Few studies have examined whether these new pedagogical tools increase actual learning.” (Cain and Pitre, 2008, p. 32). The authors then proceed to examine the effects of using web-based technology “utilizing a constructivist framework for analysis” (Cain and Pitre, 2008). After explaining what constructivism entails, with reference to the Soviet Russian psychologist Vygotsky (1978), the authors summarise some recent research on student performance and how it is affected by email contact with instructors. They conclude that “findings related to the effectiveness of learning through the use of email appear to be mixed, though there are slightly more studies showing the use of email can improve learning” (Cain and Pitre, 2008, p. 33). One declared aim of the Cain and Pitre study is therefore to explore this further. Another aim is to look at the use of technologies such as the internet and various interactive media, and to determine how the use of Computer Assisted Instruction (CAI) contributes to and enhances learning. The literature in this area is cited as being more uniformly positive about the effects of CAI.

An important feature of this article is its clear definition of the two main areas to be studied, namely “1) The frequency of use of online collaboration (i.e. email, threaded discussion, or relay chat) and whether collaboration contributes significantly to student outcomes and 2) The frequency of use of computer assisted instruction (i.e. computer and Internet) and its contribution to student outcomes” (Cain and Pitre, 2008). This is a good approach, which limits the research to collecting the frequency of very specific actions, rather than any vague qualitative impressions.
At the outset it is clear that the frequency information is likely to produce hard data which can be measured and compared, while the contribution to student outcomes is much more difficult to measure, and therefore tricky to assess. A crucial factor in this study is the way that the hard data and supposed outcomes are linked with each other, since it is possible that changes in outcomes are not necessarily attributable to the frequency-of-use figures.

The method is clearly explained in a separate section. The data used in this article consists of a United States national survey collected between May and September 2003 and entitled “2003 College Student Experiences Questionnaire.” This data was not collected specifically for this study, but was a fairly comprehensive general survey consisting of 190 Likert-type items relating to students’ academic learning experience in general. Out of the total items surveyed, 53 were selected, namely those asking students to estimate the educational gains made and their own engagement. The sample was gained by sending out emails to enrolled students and checking them against official records. Demographic variables were also gathered, such as “age, martial status [no doubt a typographical error for marital status], gender, race, living situation, educational status, parents’ educational level and major field of study” (Cain and Pitre, 2008, p. 36). Out of a total of 87,855 student responses, the study used a random sub-sample of 2,000 students. This sample excluded students who had not used online courses, but otherwise it maps onto the bigger survey sample quite closely. There is an imbalance between male and female student numbers, which reflects the national picture of higher numbers of women than men studying at college level.
These details are clearly explained, so that a future researcher could repeat this method, using the same basic survey data, perhaps from a later year to see if there has been any change, or using similar survey data from another source, for example another country. The authors correlate demographic variables with the students’ answers on how well they have achieved particular outcomes. They also correlate the frequency of using particular computer-aided technologies, such as email, discussion boards and internet searches, with these same student answers on learning outcomes. The article provides an appendix containing extensive tables showing these correlations, so that the reader can check the detail. The methods of calculation using regression are also outlined, which is useful for those who may wish to replicate the study.

This methodology has the advantage of using a huge sample, reduced by randomisation to a manageable core sample, and of covering several CAI technologies. Because of this large sample, the findings are likely to be widely applicable. These findings are expressed in disappointingly vague terms, however, for example “the Internet can be a somewhat effective tool in facilitating student learning” (Cain and Pitre, 2008, p. 41). In one area there was an outcome that the researchers did not anticipate: “It is somewhat surprising for the researchers not to find a significant relationship between the variable of computer use and learning, particularly since it is the most popular learning tool of all the technology variables” (Cain and Pitre, 2008, p. 41). The authors reflected on this and concluded that the wording of the questionnaire, which used the terms “word processor” and “computer” as separate variables, may have introduced an area of uncertainty in the minds of students, since the students may have used these terms interchangeably.
Alternatively, the authors speculate, there may have been variation in instructional design, but there was no way of determining this using the data provided. The authors conclude that these factors may have distorted the results. In methodological terms, this shows that the survey design was not perfectly matched with the study intentions. The article should perhaps have considered the drawbacks of using existing data collected for general purposes. More useful data would have been collected if the authors had designed, tested, and used their own tailored questions, though the drawback of doing this would be the practical difficulty of finding a sufficiently large and broadly drawn sample. One other issue, namely the possible existence of incalculable variables due to instructor input, is a much more serious drawback, and one which this study simply failed to address.

Another issue not discussed by the authors, but nevertheless very important, is the fact that the data on learning outcomes appears to be based on the students’ own perceptions. Actual performance data (such as course completion rates or data on grade achievements) was not gathered, so there remains an unanswered question about how reliable the data on student learning outcomes actually is. There may have been a tendency to inflate these answers, since students are reporting on themselves and personal vanity may be a factor, or there may even be a built-in variation between male and female reporting tendencies. Previous studies on gender in education, and specifically on learning outcomes with computer assistance, have noted that females and males rate their own ability differently: “The confidence expressed by males and the apprehension felt by many of the women was not substantially reflected in the grades they achieved” (Gunn et al., 2003, p. 22). This factor was also not considered by the authors.
The second article, entitled “The use of Digital Technologies in the Classroom: A Teaching and Learning Perspective” (Buzzard et al., 2011), was published in the Journal of Marketing Education, which claims on its website to be “the leading peer-reviewed, international scholarly journal publishing articles on the latest techniques in marketing education, emphasizing new course content and effective teaching methods” (JME, 2011). This statement gives reassurance that quality checks are carried out by peers before any articles are published, and it also shows an emphasis on practical matters, such as techniques and teaching methods, rather than theoretical ideas. Buzzard et al. provide an overview of educational scholarship related to technology usage in the marketing classroom and then present the results of two separate exploratory studies (Buzzard et al., 2011, pp. 131-132). The overview considers both a broad perspective on the inclusion of technology in the classroom in general and a narrower perspective on “how particular Web 2.0 technologies have been used in the marketing classroom via online activities and projects” (Buzzard et al., 2011, p. 132). The broad perspective produces a generally positive evaluation of the use of technology in education, but the authors cite Strauss and Hill (2007) as a dissenting voice, claiming that about half of marketing students did not embrace web-based technology, perhaps because they prefer the simplicity of traditional methods, or perhaps because of a saturation point at which students find that the time and effort required to master the technology does not result in sufficient gains in achievement. A particular strength of this article is that it is up to date with the latest technological advances, and it provides full and clear definitions throughout. The term “Web 2.0” is defined as “including social networking sites (e.g. Facebook, MySpace, Twitter), blogs, mobile devices (e.g. 
cell phones, PDAs), user-generated content (e.g. YouTube) and virtual worlds (e.g. Second Life)” (Buzzard et al., 2011, p. 132). There is even mention of “more than 100 universities” which have experimented with classes conducted in the virtual world of Second Life (Rzewnicki, 2007). The level of detail here suggests that the authors are familiar with current developments. The literature survey is truly international, citing contexts ranging from American/Chinese (Hu, 2009) and United Kingdom/United States (Newman and Hermans, 2008) to Scandinavian (Barner-Rasmussen, 1999), and many more. This suggests that the authors have an eye for the general applicability of their work to the global context, which is also a strength.

Buzzard et al. present the results of two studies, and the first of these “attempts to understand technology as related to the demands of the instructors” (Buzzard et al., 2011, p. 133). A survey was carried out on “a convenience sample of instructors at colleges and universities across the United States” (Buzzard et al., 2011, p. 134), covering several major discipline areas and resulting in 1,717 usable responses. Since the organisation carrying out the survey was Cengage Learning, a company which produces learning materials mainly in book form, but also with some web support, it is possible that a bias has crept into the sample. There may be a difference between instructors who use, or whose employers use, Cengage materials and those who do not, but this point was not discussed. The findings were that instructors placed a higher utility value on technology, but also valued a mixture of print and electronic instructional materials. The focus on materials betrays a market research angle on the part of the Cengage Learning company. The second part of the study looked at both instructor and student preferences, and the sample here consisted of 765 students and 308 instructors.
The full data is not released, due to “the proprietary nature of the research” (Buzzard et al., 2011, p. 135), and this is a serious weakness in the article. Readers have to take the writers’ results on trust, without being able to check any detail. Large variations in preferences were found to exist between subject areas, with students and instructors in Life Sciences, and especially Fine Arts, being the least keen to use technology in learning. Gender differences were found between male and female students, the former expressing more preference for technology than the latter; this difference was not present in the instructor sample. Student and instructor perceptions of the extent to which technology was used effectively produced largely positive responses of 61% (instructors) and 75% (students). The authors acknowledge that there may have been an element of self-selection in the student sample, resulting in a large proportion who actually chose high-technology learning classes because of a pre-existing preference. The article concludes that in general students are more interested in the technology than instructors, that teachers are not sufficiently aware of the meta-teaching needs in the area of technology skills, and that the tools available appear to be adequate for all concerned.

In both of these articles the way that the sample was derived is an important research decision, and reading them made me aware of the need to obtain as wide a range as possible while at the same time achieving some focus. The sample sizes of 1,000 to 2,000 in these papers entail a great deal of analysis work, but they do give some confidence that the results have wider significance. The questionnaire method appears to be a good choice for obtaining preference data, but a crucial element is to ensure that there is a match between the actual questions asked and the aims of the study.
In both of these articles it appears to me that the questionnaire part of the work was designed for a different purpose than the declared aims of the analysis. This weakens the results, because chances are missed to find out relevant details, while some of the material gathered is not very relevant to the research question. The most interesting point in the Cain and Pitre article was its focus on what students think about their learning. The more that is known about this, the better institutions and instructors can devise materials and programs that suit student needs. I would have liked to see, however, some clear linkage between student perception of performance and actual student performance. If I were to do a study of this kind, I would try to correlate these two factors, because I suspect perception and grades do not match exactly. I was also intrigued to reflect on the fact that perhaps students are learning skills and techniques in their web-based learning that are not formally assessed. This is an area which I think might be worthy of more research.

In the Buzzard et al. article I found the writing style more pleasant to read, because it used vivid language such as “Today’s college students are described as technologically savvy and the most visually sophisticated of any generation, with technology as familiar as a knife and fork to this group” (Buzzard et al., 2011, p. 131). On reflection, however, I think that this inventiveness can distract from the main point, and can create unfounded impressions rather than views based on clear evidence. I was wary of the Buzzard et al. article because I suspected interference from the Cengage marketing angle, and this detracted from the article’s usefulness. On the positive side, however, Buzzard et al. did try to encompass both instructor and student views, and this was an important strength.
Both of these articles contain valuable insights, and I will remember them in future mainly because of the weaknesses they share in the match between survey questions and aims. One important lesson that I will retain for future use is the need to define terms very carefully in questions to survey respondents, so that no confusion arises later at the analysis stage. I was disappointed that so little information was given on the statistical formulae, especially in the Buzzard et al. article, and I learned that it is much more helpful to the reader if some tables with raw data are provided as an appendix (as in the Cain and Pitre article).

References

Barner-Rasmussen, M. (1999) Virtual interactive learning environments for higher education institutions: A strategic overview. In U. Nulden and C. Hardless (Eds.), Papers from the Nordic Workshop on Computer Supported Collaborative Learning, Gothenburg, pp. 1-10.

Buzzard, C., Crittenden, V.L., Crittenden, W.F. and McCarty, P. (2011) The Use of Digital Technologies in the Classroom: A Teaching and Learning Perspective. Journal of Marketing Education 33 (2), pp. 131-139.

Cain, D.L. and Pitre, P.E. (2008) The Effect of Computer Mediated Conferencing and Computer Assisted Instruction on Student Learning Outcomes. Journal of Asynchronous Learning Networks 12 (3), pp. 31-52.

Gunn, C., McSporran, M., Macleod, H. and French, S. (2003) Dominant or Different? Gender Issues in Computer Supported Learning. Journal of Asynchronous Learning Networks 7 (1), pp. 14-30.

Hu, H. (2009) An international virtual team based project at undergraduate level: Design and assessment. Marketing Education Review 19 (1), pp. 17-22.

JALN Website (2011) Journal of Asynchronous Learning Networks. Available online at: http://sloanconsortium.org/publications/jaln_main

JME Website (2011) Journal of Marketing Education. Available online at: http://jmd.sagepub.com/

Newman, A.J. and Hermans, C.M. (2008) Breaking the MBA delivery mould: A virtual international multi-group MBA/practitioner collaborative project. Marketing Education Review 18 (1), pp. 9-14.

Rzewnicki, A. (2007) Professors in avatar mode begin to work in Second Life. Online article available at: http://www.poole.ncsu.edu/index-exp.php/profiles/feature/professors-second-life/

Strauss, J. and Hill, D.J. (2007) Student use and perceptions of web-based instructional tools: Laggards in traditional classrooms. Marketing Education Review 17 (3), pp. 65-67.

Vygotsky, L.S. (1978) Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.