Planning and Conduction of Evaluation


Comparison between Decision-Based Evaluation and Utilization-Focused Evaluation

Introduction

Fetterman and Wandersman (2007) define evaluation as a systematic method of determining the merit of a subject. The worth and significance of the subject are determined using criteria set by a given level of standards. Evaluation aims to support constructive decisions about a given situation, programme or project through the later proposal of intervention measures. Among other purposes, it helps stakeholders reflect on and gain insight into the subject, which supports future change. Evaluation takes different forms depending on what is to be evaluated (Pesaran & Scouras, 2000). This paper considers two such forms: decision-based evaluation (DBE) and utilization-focused evaluation (UFE). UFE, developed by Michael Quinn Patton, is an approach built on the assertion that an evaluation should be judged by its usefulness to its intended users; the planning and conduct of an evaluation should therefore aim at the utilization of both its process and its findings to improve performance (Patton, 1997). Decision-based evaluation, on the other hand, according to Saks, Haccoun and Belcourt (2010), is a method used to provide value and knowledge for making and defending decisions; in most cases the decisions, plans and actions taken under this method need to be justified. This paper seeks to compare decision-based evaluation and utilization-focused evaluation. It analyses the components of the two models in terms of their intended purposes and their involvement in the final level of decision making.

Preamble to the Evaluation Comparison

According to Hall (2010), evaluation is a very important aspect of educational inquiry. Before comparing which method is best for a given measurement criterion, the purpose and objectives of the education system in place must be established. The objects of an evaluation comprise a collection of components: programmes, policies, procedures, organizational units and people's performance are among the components that may constitute an evaluation. These are the parties to an evaluation and the issues to be evaluated. For an evaluation to be considered effective, it has to be relevant and efficient. Relevance means it bears a proportionate relationship to the objectives set beforehand; efficiency means it is workable under the prevailing conditions of operation. In essence, these are the qualifications for either decision-based evaluation or utilization-focused evaluation to be considered suitable in any situation. Evaluation starts with a pilot study at the pre-evaluation stage, proceeds to the actual evaluation and ends with the post-evaluation stage. This yields fuller information about the objectives as a whole (Ramírez & Brodhead, 2013). A well-designed evaluation programme must therefore serve the purposes of the stakeholders in place.

Utilization-Focused Evaluation

As stated, this type of evaluation bases its strength on the view that "Utilization-Focused Evaluation (UFE) begins with the premise that evaluations should be judged by their utility and actual use" (Ramírez & Brodhead, 2013). Its main practitioners are education practitioners acting as both evaluators and implementers.
UFE is learnt through mentored practice. Evaluators apply knowledge based on how the people who receive it might apply it in real-life situations. UFE admits a wide variety of evaluation methods and is therefore open to broad participation. Decision making is the most important aspect of this method, followed by consultation. The UFE model is not linear, because its aspects interact, interconnect and depend on one another. It can be summarised in twelve interconnected steps, with exploration moving back and forth between them (Patton, 1997). The purpose of the process, and hence its involvement in decision making, is recognisable throughout its procedural phases. The framework, which also constitutes the model as it unfolds in this evaluation type, takes the following steps.

Assessment of programme readiness. Guidance is always required for those who would like the programme implemented; it lets them decide whether or not they are ready for it. Because of the step-by-step procedure, the process requires guidance from well-skilled professionals. The starting point is establishing readiness and defining the intended primary users.

Assessment of evaluator readiness. As noted in the introduction, all stakeholders in an evaluation must be ready to take part in the programme; here it is the readiness of the evaluators that matters. Readiness means possessing the right knowledge and creating conditions that make carrying out the programme feasible. Reviewing the managers' skills and willingness is important to the programme's success (Nicol & Macfarlane-Dick, 2006).

Identification of the intended primary users. This process is strongly shaped by the primary users (Kraiger, 2006). Their stake in the evaluation's very existence is so high that it cannot be set aside. They are engaged in the evaluation on an ongoing basis and can influence how it is conducted. The evaluator's task is to establish who the primary intended users are, confirm their objectives and secure their participation as far as possible.

Analysis of the situation. Evaluation in general is highly dependent on people and context; the situation determines the level of enhancement. Situational factors such as previous evaluation experience, available resources and the priority given to the evaluation play a large role, as do the organization's culture, the power and politics at the helm, and turbulence within the organization.

Identification of the primary intended uses. Since these guide the project, they are established at its initial stages. The intended uses include a combination of processes as well as findings.

Focus and design of the evaluation. The focus of the evaluation is set in line with the primary intended users (PIUs), and a set of evaluation questions is compiled. Formulating the questions is usually the harder task, but they must always align with the PIUs. The design is based on the required focus and must respond to the key questions; these must satisfy the user at the end of the channel.
Simulation of use. This is usually done before data is collected: fabricated findings are used to verify that the expected data will yield findings the PIUs consider usable. During this process the key evaluation questions are modified to suit the situation and stage.

Data collection and analysis. In this type of evaluation, data is collected with effective use in mind. The primary users remain involved throughout the course of these actions. They also take part in the analysis and are very important in setting up a useful delivery, especially during interpretation.

Facilitation of use. In UFE, use does not simply happen; it has to be facilitated. Other factors that will inevitably encourage or inhibit use of the findings are recognised and given effective attention. Prioritising among recommendations and developing a dissemination strategy facilitate the use of the information obtained. Facilitation is central to UFE, and it requires time and resources allocated to facilitating use from the beginning and throughout the process.

Meta-evaluation. UFEs are themselves evaluated by whether the PIUs used the evaluation in the way that was intended. Users and evaluators are allowed to learn from the experience they have gained.

By extension, every step of UFE is cognisant of the level of involvement of the different stakeholders. Most importantly, the goal of the method remains the satisfaction of the intended users; that goal is the one fixed point, for the methods may shift and intermingle but the focus on intended users stays the same. The evaluation is judged by its utility, and the utility is so important that evaluators must keep everything that happens keenly in mind and consider how it will affect actual use. There are very many stakeholders in this type of evaluation, and each has his or her own level of use. UFE moves from abstract potential use, through concrete primary use, to specified explicit uses. The evaluator is not a distant, independent judge but is highly involved in facilitating judgement and decision making. UFE is highly personal and situational and is not supposed to be left to the evaluators alone; the contribution of other people at the primary level is very important. The users determine the kind of evaluation they need, so UFE can be called a negotiation-oriented method. The evaluator's work is to offer a menu of possibilities within the given standardised principles and standards, and to create utility through accuracy and feasibility. Moreover, much as the user's needs must be adhered to, the evaluators are expected to maintain professional standards throughout the evaluation. These professional ethics cover the performance of systematic, data-based enquiries, competency, honesty, and the integrity of the entire process, with respect for the people involved, the process and their personal ability to perform the evaluation (Newton, 2007). The interconnected, back-and-forth character of the steps above is illustrated in the sketch that follows.
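To make the non-linear character of the framework concrete, the sketch below models the steps as a small workflow whose completed steps can be reopened. This is a minimal illustration written for this paper only: the condensed step names and the "revisit" mechanic are assumptions of the sketch, not part of Patton's published checklist.

```python
# A minimal sketch of the UFE workflow discussed above. The step list is
# condensed from this paper's discussion; the revisit mechanic is an
# illustrative assumption about tracking progress, not Patton's own model.

UFE_STEPS = [
    "Assess programme readiness",
    "Assess evaluator readiness",
    "Identify primary intended users (PIUs)",
    "Analyse the situation",
    "Identify primary intended uses",
    "Focus and design the evaluation",
    "Simulate use with fabricated findings",
    "Collect and analyse data with the PIUs",
    "Facilitate use of the findings",
    "Meta-evaluate: did the PIUs use it as intended?",
]

class UFEWorkflow:
    """Tracks progress through the steps while allowing the back-and-forth
    movement the framework calls for (the model is not linear)."""

    def __init__(self):
        self.completed = set()

    def complete(self, step: str):
        self.completed.add(step)

    def revisit(self, step: str):
        # Any earlier step may be reopened when the situation changes,
        # e.g. when a new primary intended user joins the evaluation.
        self.completed.discard(step)

    def remaining(self):
        return [s for s in UFE_STEPS if s not in self.completed]

workflow = UFEWorkflow()
workflow.complete("Assess programme readiness")
workflow.complete("Identify primary intended users (PIUs)")
workflow.revisit("Identify primary intended users (PIUs)")  # a new PIU appears
print(workflow.remaining()[0])  # -> "Assess evaluator readiness"
```

The revisit operation captures the point the framework itself makes: identifying a new primary intended user, for example, can reopen steps that had already been completed.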
Like any method involving a collection of people, UFE encounters its own challenges. The first concerns how commitment is engendered: evaluators work to engender commitment between the evaluation and its use, yet the funders and other stakeholders who rightly mandate the evaluation usually have little idea of what the process involves, and where they do, the knowledge is unspecific. For instance, terms used during evaluation such as assess, measure, judge, rate and compare may not be clearly distinguished.

Decision-Based Evaluation (DBE)

The decision-based model takes into account a mixture of procedures that work together to reach a concrete evaluation conclusion. The model considers three important aspects of the planning: the delivery of the training content, the organizational payoff, and the changes in the learners due to the evaluation. The idea is that these are interdependent and can be very helpful in developing an ideal evaluation procedure. This type of evaluation depends heavily on the credibility of the results obtained by the evaluation team. The model's unique features make its competence, importance and purpose well manifested. One unique feature, according to Hall (2005), is that the process is a comprehensive set of decisions. A second is that the decisions made are dictated by clarity of purpose. Moreover, the method is strongly grounded in the decisions made at any particular time, and those decisions directly influence the targeted audience. The comprehensiveness of the outcomes is dictated by the measurement strategies put in place. This type of evaluation has four major components that shape all of its characteristics:

- the purpose for evaluating;
- preparation for obstacles through planning;
- the content to be evaluated;
- the evaluation design.

Bearing in mind the obstacles that are bound to be encountered during evaluation, an effort is always made to ensure that the evaluation professionals meet the qualification requirements for the task. The model is nonetheless simple, even though it involves a collection of steps: the prime background step is where the objectives are declared, which sets the pace for making and implementing the other parts of the evaluation process. Achieving the objectives effectively requires a proper design that supports a sound programme, and the final part monitors the achieved results against the original objectives. DBE is a comprehensive system for evaluating education. It covers the contractual relationship between the different stakeholders that are party to the organization under evaluation; the stakeholders agree upon a collection of outcomes that are needed for the final achievement of the project (Fetterman & Wandersman, 2007). The evaluators have the freedom to develop a programme suited to the structure of the organization to be evaluated. Regardless of that structure or the evaluators' qualifications, some conventional steps can be used, and as stated earlier, these steps dictate the purpose and level of involvement of the different stakeholders. In the first step, the organization's goals are stated and broken down into a set of objectives specific to the areas to be evaluated.
This first step establishes the actual goals of the organization and is not an independent section: the findings are based on the goals as originally stipulated for what is being evaluated. In the second step, the model to be used in the evaluation is devised; it contains the objectives, tasks and outputs to be evaluated, with performance indicators playing an important part in setting up a comprehensive baseline against which the evaluation will be made. In the third step, the information collected is analysed against the content set out in the second step's background. Here the level of success is determined against the achieved objectives, and an eventual report is produced that meets the requirements and criteria of the objectives-evaluation-findings relationship.

The focus of decision-based evaluation is the effectiveness of the things under evaluation. This matters because an evaluation may end up focusing on unintended issues, giving rise to different effects or side effects on the same object, which may alter the level of success or the accuracy of the findings. The intention is to plug loose processes or to clarify the running of a given process; specificity is therefore paramount in this type of evaluation. Some of the objects that may be subject to evaluation are:

- policies and programmes;
- organizational units;
- outcomes or outputs;
- individuals and groups, in terms of their values, beliefs and behaviour.

Effectiveness is one of the most important aspects of this method of evaluation, and adopting it requires satisfying the different stakeholders associated with the organization. Several aspects of effectiveness are important in establishing the actual reality of the evaluation process. The first is relevance, used here to show whether the method is useful under the given conditions, because the process has to match the cultural context in which it operates (Bouyssou, Marchant, Vincke, Tsoukiàs, Perny, & Pirlot, 2010). Efficiency is another measure of effectiveness; as with relevance, it focuses on the cost and effort put into implementing the evaluation policies. The evaluation process must be very efficient in producing the desired results; when the results are well taken care of, they generate a well-structured consistency running from the original objectives to the end product of the evaluation process. Monitoring has always been a process through which transparency is generated, and it has a strong focus on the satisfaction of the stakeholders involved. The evaluators must maintain professionalism, as must the members of the organization being evaluated, taking into account whether what is being evaluated meets the criteria for evaluation and aligns with the set objectives of the company or organization; some evaluators might otherwise use the process to victimise members of the organization. The monitoring of achieved results against the declared objectives is sketched below.
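The third step of the model, determining the level of success against the declared objectives, can likewise be illustrated with a small sketch. The objective names, indicator values and the 90% success threshold below are invented for illustration; DBE itself prescribes no particular indicators or thresholds.

```python
# A hypothetical sketch of DBE's final step: monitoring achieved results
# against the originally declared objectives. Objective names, measured
# values and the success threshold are invented for illustration.

objectives = {
    # objective: (target, achieved) on some agreed performance indicator
    "Training content delivered": (100, 92),
    "Learner behaviour change":   (80, 61),
    "Organizational payoff":      (75, 78),
}

THRESHOLD = 0.9  # an objective counts as met at 90% of target (assumed)

def evaluate(objectives):
    """Return per-objective success ratios and an overall verdict,
    mirroring the objectives-evaluation-findings relationship."""
    report = {}
    for name, (target, achieved) in objectives.items():
        ratio = achieved / target
        report[name] = (round(ratio, 2), ratio >= THRESHOLD)
    met = sum(1 for _, ok in report.values() if ok)
    report["overall"] = f"{met}/{len(objectives)} objectives met"
    return report

for item, result in evaluate(objectives).items():
    print(item, result)
```

A report of this kind makes the objectives-evaluation-findings relationship traceable: each finding points back to a declared objective and to a measurement strategy put in place beforehand.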
Impact is a very important aspect of this type of evaluation: it dictates the final view of all the stakeholders on the work that has been evaluated, and it refers to the truth of the evaluation process that has been realised. Impact makes it possible for everyone to assess the difference between the level of the project before it was evaluated and after.

Comparison between the Two

Evaluation seeks to verify an issue and is supposed to give feedback to the stakeholders involved. Nicol and Macfarlane-Dick (2006) indicate that the characteristics of a good feedback system have to be verified in the right way, showing whether the methods have been properly used and whether the stakeholders are satisfied with the procedures followed. Comparing the two evaluation methods in terms of their feedback systems brings out the first comparative aspect. The meta-evaluation stage of UFE indicates that the feedback is good if the PIUs used the information in the intended ways. The same can be seen in DBE, where the work, as assented to by the different stakeholders, is acceptable if agreed upon at the key levels of the donor and the evaluators (Hall, 2010); the donor sponsors the project and the evaluators produce the information from the evaluation. This in turn brings out the aspect of stakeholders. The stakeholders must be available and in agreement for the work to be accepted as valid and well done. Some of these stakeholders are internal while others are external; in either case there will need to be a mutual agreement on the evaluation process and results, since an evaluation report might incriminate some of them. This aspect of participatory evaluation, especially in the education sector, has been fully supported by Newton (2007), who states that the purpose of evaluation is to arrive at a standard-referenced judgement. This judgement is supposed to incorporate all the stakeholders' input into the systematic flow of the work being done. As both methods show, evaluations also support selection decisions. Selection decisions are never the same for any two organizations, but they are supposed to be expressed in a way that satisfies all the stakeholders, especially the major stakeholder. The next comparative aspect lies in the procedures. The UFE procedures indicate that those who need the evaluation done require guidance on the procedures because of their complexity. The same holds for DBE, whose first integrated step is all about training and delivery; training can never be given to an individual who does not understand the situation. The subjects at this point are, in fact, the non-professional stakeholders in the process (Guskey, 2000). This is handled through a comprehensive decision-making process that involves all the stakeholders in the two evaluation processes. The procedures can be adjusted to favour the inclusion of all the stakeholders, because their sound inclusion makes the process worthwhile. In both cases, the evaluation is people- and context-dependent: the situational-analysis step of UFE implies that developing the method further must refer specifically to the people and the context. From a professional perspective, the evaluation process is supposed to depend on collaboration, participation and empowerment.
All these aspects are important in pooling the expertise and experience of everyone involved in the process. There is always considerable confusion when the levels of participation are not clarified. Participation and collaboration create a continuum in which all matters related to the work carry on continuously; when all the stakeholders understand the work they are supposed to do, the work does not stall because one of them is unable to discharge their duties (Hall, 2005). The purpose of carrying out an evaluation under UFE is the satisfaction of the primary intended users (PIUs), meaning that the use of the information rests with users who are partly stakeholders in, and implementers of, the project. The purpose of DBE is the same; however, the level of participation differs with the complexity of the information obtained from each of the two models. In DBE, the user tends to be more advanced given the type of information involved: the information is fully processed and turned into interpretive data for use, which requires a complex level of involvement carried out by professional evaluators. In UFE, on the other hand, as described in the processes above, information goes to primary users who may not need a complex level of understanding of the information presented. This is a small logical difference, but it shows that both serve the same purpose in being included in the evaluation (Sanga, 2005). Lastly, both models are fully integrated: there is an interdependence of issues in which no process or stakeholder can stand alone and remain stable. All the procedures and stakeholders depend on each other throughout the evaluation.

Conclusion

Evaluation plays the role of verification. This paper has discussed the comparison between two evaluation methods, decision-based evaluation and utilization-focused evaluation, which are two separate methods of assessing the viability of procedures, people, developments and performances. The paper developed a discussion of utilization-focused evaluation, examining the purposes, procedures and intentions of the method, and then discussed decision-based evaluation under similar headings. A cohesive section then sought the comparative similarity between the two procedures. It was found that stakeholders play a key role in any evaluation procedure if it is to be successful.

References

Bouyssou, D., Marchant, T., Vincke, P., Tsoukiàs, A., Perny, P., & Pirlot, M. (2010). Evaluation and Decision Models: A Critical Perspective. London: Kluwer Academic Publishers.

Fetterman, D., & Wandersman, A. (2007). Empowerment evaluation: Yesterday, today, and tomorrow. American Journal of Evaluation, 172-197.

Guskey, T. (2000). Evaluating Professional Development. Thousand Oaks, CA: Sage.

Hall, C. (2005). Outcome-based approach to aid education. In K. Sanga & C. Hall (Eds.), Rethinking Evaluation in Pacific Education (pp. 293-311). Wellington: He Parekereke.

Hall, C. (2010). Evaluation as a Method of Educational Enquiry: A Guide on Programme and Course Evaluation. Wellington: Victoria University of Wellington.

Kraiger, K. (2006). Creating a Compelling Case for Training: Decision-Based Evaluation. Colorado: Colorado State University.

Newton, P. E. (2007). Clarifying the purposes of educational assessment. Assessment in Education: Principles, Policy and Practice, 14(2), 149-170.

Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning. Studies in Higher Education, 31(2), 199-218.

Patton, M. Q. (1997). What Is Utilization-Focused Evaluation? How Do You Get Started? Thousand Oaks, CA: Sage.

Pesaran, H., & Scouras, S. (2000). Decision-Based Method for Forecast Evaluation. Cambridge: Trinity College.

Ramírez, R., & Brodhead, D. (2013). Utilization Focused Evaluation. Penang, Malaysia: Southbound Sdn. Bhd.

Saks, A. M., Haccoun, R. R., & Belcourt, M. (2010). Managing Performance Through Training and Development. London: Cengage Learning.

Sanga, K. (2005). Self-evaluating a donor-funded initiative. In K. Sanga (Ed.), Rethinking Aid Relationships in Pacific Education (pp. 105-115). Wellington: He Parekereke.