Synthesizing Qualitative and Quantitative Health Evidence - Assignment Example

Summary
This assignment "Synthesizing Qualitative and Quantitative Health Evidence" presents the data collection tools. The age, race, and race of the participants were self-reported at the start of the exercise. Since the three factors are easy to determine, they did not require any tool…

Extract of sample "Synthesizing Qualitative and Quantitative Health Evidence"

Quality assessment. Name and student number; second reviewer, name and number; pair number. Study assessed: Active by Choice Today (ACT), D. Wilson, principal investigator.

QUALITY ASSESSMENT TOOL FOR QUANTITATIVE STUDIES: COMPONENT RATINGS

A) SELECTION BIAS

(Q1) Are the individuals selected to participate in the study likely to be representative of the target population? (1. Very likely; 2. Somewhat likely; 3. Not likely; 4. Can't tell)

Very likely. The individuals were selected from schools that met the criteria under study: schools made up mainly of students from low-income families and ethnic minorities (Redmond, 2001). In addition, the schools were categorized to ensure heterogeneity in the sample selected for study. Students in the 24 schools were also assessed for eligibility, and only those who met the conditions set in the study were selected to participate.

(Q2) What percentage of selected individuals agreed to participate? (1. 80-100% agreement; 2. 60-79% agreement; 3. Less than 60% agreement; 4. Not applicable; 5. Can't tell)

There was 80-100 percent agreement to participate. In the 24 selected schools, 1563 students were assessed for eligibility. Of these, 16 did not meet the criteria, leaving 1547 eligible students. However, 55 of the eligible students refused to participate, leaving 1492 who agreed, which is about 96 percent agreement among those eligible (Brody, 2011). The arithmetic is illustrated in the short sketch at the end of this component.

RATE THIS SECTION (Strong / Moderate / Weak; see dictionary): Strong. This section meets the conditions required for the selection of participants, and there is limited bias in how they were selected. The schools selected met the criteria required by the study: they consisted of students from minority groups and low-income families (Hulley, 2007), in line with the study's aim of determining how the ACT intervention could affect MVPA among low-income adolescents. The sample size was large enough to cover all aspects under study, so the findings reflect near-accurate characteristics of the whole population.
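To make the agreement figures above concrete, the following minimal Python sketch reproduces the participation-rate arithmetic reported for this component; the variable names are illustrative and are not taken from the study's own analysis.

# Participation figures reported for component A, Q2
assessed = 1563        # students assessed for eligibility across the 24 schools
ineligible = 16        # students who did not meet the inclusion criteria
refused = 55           # eligible students who declined to participate

eligible = assessed - ineligible       # 1547 eligible students
participated = eligible - refused      # 1492 students enrolled

agreement_rate = participated / eligible * 100
print(f"Agreement: {agreement_rate:.1f}% of eligible students")   # about 96.4%

Because roughly 96 percent falls within the 80-100 percent band, the component is scored at the strongest level of the tool.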
B) STUDY DESIGN

(1. Randomized controlled trial; 2. Controlled clinical trial; 3. Cohort analytic (two group pre + post); 4. Case-control; 5. Cohort (one group pre + post, before and after); 6. Interrupted time series; 7. Other, specify; 8. Can't tell)

Randomized cohort design.

Was the study described as randomized? (If no, go to Component C.) Yes.
If yes, was the method of randomization described? (See dictionary.) Yes.
If yes, was the method appropriate? (See dictionary.) The method was appropriate for this study. To determine the effect of the ACT intervention, the groups selected for study needed to be tested before and after the intervention. In this study, the two groups, that is, the intervention group and the control group, were tested before the study, at mid-intervention, and at post-intervention (Wang & Bakhai, 2006). This helped the researchers determine the effect of the intervention on the participants.

RATE THIS SECTION (Strong / Moderate / Weak; see dictionary): Strong. This section of the paper is strong because it states and describes the design used in the study. The design is also appropriate because it helps determine how the ACT intervention affects behaviour related to MVPA among low-income adolescents (Polit & Beck, 2010). The use of randomization ensured that there was no bias in selecting the participants.

C) CONFOUNDERS

(Q1) Were there important differences between groups prior to the intervention? (1. Yes; 2. No; 3. Can't tell)

No. The groups were formed of people with relatively similar characteristics. The participants fell within a certain range of age, height, and weight, and came from a relatively similar economic background (Webb & Roe, 2007). The only differences between the groups were race and sex, which had no significant effect on the outcome.

The following are examples of confounders: 1. Race; 2. Sex; 3. Marital status/family; 4. Age; 5. SES (income or class); 6. Education; 7. Health status; 8. Pre-intervention score on the outcome measure.

The above confounders were considered and addressed in this study. Race was addressed by the composition of the sample, with African Americans making up 71 percent of the participants, which ensured that race did not affect the study on a large scale (Pope, Mays & Popay, 2007). In terms of sex, there were relatively equal numbers of participants of both genders, so sex was unlikely to distort the outcome even if it were related to the study (Berger, 2005). Having almost equal numbers of male and female participants also avoided gender bias and made it possible to examine how the ACT intervention affects both genders. The majority of the participants came from low-income families, which meant that family background could have had an impact on the MVPA results, especially at post-intervention (Clark, Rothstein & Schanzenbach, 2007). Age did not have a large effect on the outcome because the study involved students with an average age of 11. The small age difference among participants ensured that they were at the same developmental level and therefore tended to behave in the same manner (Krueger & Zhu, 2004), making the outcome easy to compare when age is considered. Most of the students who took part came from a low-income class: 71 percent of the participants qualified for reduced-price or free lunch. Having the majority of participants in the same economic class ensured that the study was not affected by economic differences among them (Savitz, 2003). Regarding education, the study used only 6th-grade students in all the schools, which ensured that the participants had relatively the same level of cognitive development and could be exposed to an intervention matched to that level (Katz, 2006). The health status of the participants was analysed, and those with health problems were excluded (Parry, Parrott, & Institute of Clinical Research, 2004), ensuring that the measured physical activity was not affected by health conditions. At the pre-intervention stage there was no difference in the outcome measure between the intervention group and the control group, indicating that all participants were at relatively the same level of MVPA. The absence of any difference also showed that the two groups started at the same level and that no single group had an undue advantage over the other. A sketch of how such a baseline comparison could be checked follows.
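The claim that the arms did not differ before the intervention is the kind of statement usually backed by a simple baseline comparison. The sketch below illustrates one way such a check could be run; the data are invented for illustration and are not the study's measurements, and the study's actual analysis may have used a different method.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical baseline MVPA values for the two arms (illustrative only)
intervention_baseline = rng.normal(loc=40, scale=8, size=746)
control_baseline = rng.normal(loc=40, scale=8, size=746)

# Two-sample t-test: a large p-value is consistent with balanced groups
t_stat, p_value = stats.ttest_ind(intervention_baseline, control_baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")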
(Q2) If yes, indicate the percentage of relevant confounders that were controlled, either in the design (e.g. stratification, matching) or in the analysis. (1. 80-100% (most); 2. 60-79% (some); 3. Less than 60% (few or none); 4. Can't tell)

RATE THIS SECTION (Strong / Moderate / Weak; see dictionary): Strong. This section of the study is strong because of how the researchers strove to control the relevant confounders that could affect the outcome. Most of the confounders, such as age, education, income level, and race, were controlled in this study (Briggs, Claxton & Sculpher, 2006). Their effect was cancelled out by recruiting participants with largely the same age, race, economic status, and health status.

D) BLINDING

(Q1) Was (were) the outcome assessor(s) aware of the intervention or exposure status of participants? (1. Yes; 2. No; 3. Can't tell)

No. The outcome assessor was not aware of the intervention status of the participants, because the study used an independent process evaluator who had no knowledge of the participants or of which group had been exposed to the intervention and which was the control group (Sim & Wright, 2002).

(Q2) Were the study participants aware of the research question? (1. Yes; 2. No; 3. Can't tell)

No. The participants were not aware of the research question. They were simply asked to take part in the program without being told why it was being undertaken, so they accepted or refused depending on their interest in the program rather than any interest in taking part in a study (Pawar, 2004). In addition, the participants were too young to tell whether it was a study or just a program of play and games. During the study, participants were asked questions related to the research without being told why they were being asked (Axinn & Pearce, 2006). The researchers also used observation and recording devices to analyse the outcome of the program. The participants therefore had no knowledge of the research question and did not act in any way that could jeopardize the objectivity of the results.

RATE THIS SECTION (Strong / Moderate / Weak; see dictionary): Strong. There was objectivity in blinding. Neither the participants nor the evaluator had prior knowledge of the components of the study. The participants were not aware of the existence of the study or its research questions; they only knew that they were part of a program, not that it was meant to study the effect of the ACT intervention on their MVPA (Hagger & Chatzisarantis, 2007). In addition, the age of the participants made it unlikely that they could work out exactly what the study was about, and the use of recording devices and observation made it even harder for them to infer the research questions or the existence of the study itself (Bernard, 2011). The use of an independent process evaluator and outcome assessor ensured that the assessor was not biased in his or her assessment by knowledge of which participants had been exposed to the intervention.
E) DATA COLLECTION METHODS

(Q1) Were data collection tools shown to be valid? (1. Yes; 2. No; 3. Can't tell)

Yes. The data collection tools were shown to be valid. The age, race, and sex of the participants were self-reported at the start of the exercise; since these three factors are easy to determine, they did not require any tool. Factors such as enjoyment of the physical exercises, felt pressure and tension, effort, perceived choice in performing an activity, and value or usefulness were measured using the Intrinsic Motivation Inventory. The instrument has six subscale scores that are used to determine the level of the various emotional factors mentioned (Jadad & Enkin, 2007). The scales are positive predictors of behavioural measures and of self-reported intrinsic motivation, and their continued use since their inception has shown that they have adequate validity and reliability. Physical activity was monitored using accelerometers. Actical accelerometers have been found to show significant correlations between an individual's energy expenditure and activity counts (Brown-Chidsey & Steege, 2010), and the validity of the Actical accelerometer was established by comparing its results with those of other empirically tested accelerometers. The participants wore the accelerometers so that MVPA could be calculated. The ability of these tools to produce data comparable to data produced by other tools shows that they are valid.

(Q2) Were data collection tools shown to be reliable? (1. Yes; 2. No; 3. Can't tell)

Yes. The data collection tools were reliable because of the consistency of the results obtained. For instance, the mean level of activity recorded by the Actical accelerometers ranged between 35 and 48, and there were no large differences between any two days of activity, showing that the tool is reliable (Brown-Chidsey & Andren, 2013). Consistency was shown in both groups, that is, the group under treatment and the control group. A simple illustration of such a day-to-day consistency check appears at the end of this component.

RATE THIS SECTION (Strong / Moderate / Weak; see dictionary): Strong. The data collection methods and tools were efficient, valid, and reliable, which ensured that the data collected were relevant. Tools such as the Intrinsic Motivation Inventory and the Actical accelerometer were valid and reliable, which made the data collection exercise reliable.
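A simple way to see what "consistency between days" means for accelerometer output is to compare each participant's values on two monitoring days. The sketch below uses invented values drawn around the 35-48 range of mean activity reported in the paper; the correlation it prints is purely illustrative and is not a result from the study.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical accelerometer-derived activity values for 20 participants
# on two monitoring days, centred on the 35-48 range reported in the study
day1 = rng.uniform(35, 48, size=20)
day2 = day1 + rng.normal(0, 2, size=20)   # small day-to-day variation

# Pearson correlation between days as a crude test-retest reliability indicator
r = np.corrcoef(day1, day2)[0, 1]
print(f"Day-to-day correlation: r = {r:.2f}")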
F) WITHDRAWALS AND DROP-OUTS

(Q1) Were withdrawals and drop-outs reported in terms of numbers and/or reasons per group? (1. Yes; 2. No; 3. Can't tell; 4. Not applicable, i.e. one-time surveys or interviews)

Yes. The withdrawals and dropouts were reported in terms of both numbers and reasons per group. In the group allocated to the ACT intervention, 43 participants dropped out because they moved, 33 withdrew due to absenteeism, 14 refused to take part in the exercise, 10 dropped out without any known reason, and one dropped out for a reason other than those mentioned (Phillips & Stawarski, 2008). For the control group, 28 participants withdrew because they moved, 58 were absent from the exercise, 23 refused to take part, 6 withdrew for unknown reasons, and 5 dropped out for other reasons. The numbers of withdrawals and dropouts were indicated at both the mid-intervention and post-intervention stages.

(Q2) Indicate the percentage of participants completing the study (if the percentage differs by group, record the lowest). (1. 80-100%; 2. 60-79%; 3. Less than 60%; 4. Can't tell; 5. Not applicable, i.e. retrospective case-control)

The percentage of participants who completed the study was between 80 and 100 percent. The majority of participants in both groups took part in the study, especially at the post-intervention stage, although the number of participants at the mid-intervention stage was somewhat smaller than at post-intervention (Jupp & Sapsford, 2006). The high completion rate helped maintain the integrity of the study in terms of the number of participants required to reach valid conclusions.

RATE THIS SECTION (Strong / Moderate / Weak / Not applicable; see dictionary): Strong. Although a number of withdrawals and dropouts occurred during the study, the number was not large enough to affect the overall outcome. In addition, the withdrawals and dropouts were included in the calculations and analysis of the outcome, meaning that their effects were accounted for, and the reasons for the withdrawals and dropouts were stated (Ashby, 2011). Because this section was analysed thoroughly and every factor considered, it is rated one (strong).

G) INTERVENTION INTEGRITY

(Q1) What percentage of participants received the allocated intervention or exposure of interest? (1. 80-100%; 2. 60-79%; 3. Less than 60%; 4. Can't tell)

The percentage of participants who received the allocated intervention or exposure of interest was 51 percent, slightly higher than the percentage of participants in the control group. The balance between the two groups ensured that neither group gained an undue advantage over the other (Härdle & Simar, 2011).

(Q2) Was the consistency of the intervention measured? (1. Yes; 2. No; 3. Can't tell)

Yes. The consistency of the intervention was measured. Before implementation, the program was analysed and each component assessed to determine whether it met the criteria needed in the study. A process evaluator was employed to assess the fidelity, dose, and reach of the intervention program. Fidelity concerned the extent to which the social environment was autonomy-supportive; the dose delivered concerned the completeness of all the components in the program, that is, whether all the components required by the study had been supplied; and reach concerned the percentage of participants who attended the program on a daily basis. Apart from analysing the components before implementation, the process evaluator observed the program daily for two weeks at each stage of the intervention (Phillips & Stawarski, 2008). The assessment measured the consistency of the intervention in terms of its mechanisms and the number of days the program was administered.

(Q3) Is it likely that subjects received an unintended intervention (contamination or co-intervention) that may influence the results? (1. Yes; 2. No; 3. Can't tell)

No. No participant in the control group received any unintended intervention that may have influenced the results, because the two groups were in different schools where contamination was not possible. The study involved 24 schools divided in half, with students from half of the schools receiving the intervention while students from the other half served as the control group (Briggs, Claxton & Sculpher, 2006).
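Because allocation was done at the school level, randomization here amounts to splitting the 24 participating schools into two arms of 12. The following minimal sketch shows what such a cluster allocation looks like; the school identifiers are placeholders, not the study's actual schools, and the study's own allocation procedure may have differed in detail.

import random

random.seed(42)

schools = [f"school_{i:02d}" for i in range(1, 25)]   # 24 eligible schools
random.shuffle(schools)

intervention_schools = schools[:12]   # arm receiving the ACT intervention
control_schools = schools[12:]        # comparison arm

print(len(intervention_schools), len(control_schools))   # 12 12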
H) ANALYSES

(Q1) Indicate the unit of allocation (circle one): community; organization/institution; practice/office; individual. Answer: institution/practice.

(Q2) Indicate the unit of analysis (circle one): community; organization/institution; practice/office; individual. Answer: organization/institution, with each group of schools treated as a community organization.

Explanation for Q1 and Q2: The unit of allocation was institution/practice because the groups were allocated according to institutions, namely schools. The study involved 24 schools divided into two groups of 12 schools each. Community organization also applied to the study through the selection of eligible participants (Pawar, 2004): the researchers chose participants with similar characteristics, including age, economic status or income level, race, and education (Clark, Rothstein & Schanzenbach, 2007). The unit of analysis used in this study was the community organization (Katz, 2006). Each group was treated as a community organization, the average values for each group were obtained and analysed on the basis of the group's performance, and comparisons were also carried out in terms of group performance.

(Q3) Are the statistical methods appropriate for the study design? (1. Yes; 2. No; 3. Can't tell)

Yes. The statistical methods were appropriate for this study because quantitative data were needed on the number of participants taking part and the amount of time participants spent on physical activity over that duration. This was important because it helped determine the effect of the intervention on MVPA (Pope, Mays & Popay, 2007). Recording statistical data was therefore important for determining the differences in the characteristics of the two groups under study.

(Q4) Is the analysis performed by intervention allocation status (i.e. intention to treat) rather than the actual intervention received? (1. Yes; 2. No; 3. Can't tell)

Yes. The analysis was performed by intention to treat rather than by the actual intervention received: although some of the participants dropped out or withdrew from the study because of moving, refusal, absenteeism, or other known and unknown reasons, the whole number of participants was used in the analysis (Brody, 2011).

GLOBAL RATING

Component ratings (transcribed from the component sections above; see dictionary on how to rate this section): A) Selection bias: strong; B) Study design: strong; C) Confounders: strong; D) Blinding: strong; E) Data collection methods: strong; F) Withdrawals and dropouts: strong.

Global rating for this paper (circle one): 1. Strong (no weak ratings); 2. Moderate (one weak rating); 3. Weak (two or more weak ratings). Answer: Strong.
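The global rating follows a simple counting rule over the component ratings: no weak ratings gives strong, one gives moderate, and two or more give weak. A minimal sketch of that rule, applied to the component ratings assigned above:

# Component ratings assigned in sections A-F above
component_ratings = {
    "selection_bias": "strong",
    "study_design": "strong",
    "confounders": "strong",
    "blinding": "strong",
    "data_collection": "strong",
    "withdrawals_dropouts": "strong",
}

weak_count = sum(1 for rating in component_ratings.values() if rating == "weak")

if weak_count == 0:
    global_rating = "STRONG"
elif weak_count == 1:
    global_rating = "MODERATE"
else:
    global_rating = "WEAK"

print(global_rating)   # STRONG, since no component was rated weak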
Now discuss your report with your partner, working through each item and the epidemiological principles. With both reviewers discussing the ratings: Is there a discrepancy between the two reviewers with respect to the component (A-F) ratings? (Yes / No) Answer: No.

If yes, indicate the reason for the discrepancy: 1. Oversight; 2. Differences in interpretation of criteria; 3. Differences in interpretation of the study.

Describe what happened and the outcome (you may describe what you learned during the process). If you were unable to complete the above comparison, describe the reason and state the risk of failing to undertake a comparative assessment after the initial independent assessment.

Final decision of both reviewers (circle one): 1. Strong; 2. Moderate; 3. Weak. Provide your explanation.

References

Hagger, M., & Chatzisarantis, N. (2007). Intrinsic motivation and self-determination in exercise and sport. Leeds: Human Kinetics.
Redmond, C. K. (2001). Biostatistics in clinical trials. Chichester: Wiley.
Brody, T. (2011). Clinical trials: Study design, endpoints and biomarkers, drug safety, and FDA and ICH guidelines. Burlington: Elsevier Science.
Hulley, S. B. (2007). Designing clinical research: An epidemiological approach. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins.
Wang, D., & Bakhai, A. (2006). Clinical trials: A practical guide to design, analysis, and reporting. London: Remedica.
Polit, D. F., & Beck, C. T. (2010). Essentials of nursing research: Appraising evidence for nursing practice. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins.
Webb, C., & Roe, B. H. (2007). Reviewing research evidence for nursing practice: Systematic reviews. Oxford: Blackwell Publishing.
Pope, C., Mays, N., & Popay, J. (2007). Synthesizing qualitative and quantitative health evidence: A guide to methods. Maidenhead, England: Open University Press/McGraw-Hill Education.
Berger, V. (2005). Selection bias and covariate imbalances in randomized clinical trials. Chichester: John Wiley & Sons.
Clark, M. A., Rothstein, J., & Schanzenbach, D. (2007). Selection bias in college admissions test scores. Princeton, NJ: Education Research Section, Princeton University.
Krueger, A. B., & Zhu, P. (2004). Inefficiency, subsample selection bias, and nonrobustness. Princeton, NJ: Industrial Relations Section, Department of Economics, Princeton University.
Savitz, D. A. (2003). Interpreting epidemiologic evidence: Strategies for study design and analysis. Oxford: Oxford University Press.
Katz, M. H. (2006). Study design and statistical analysis: A practical guide for clinicians. Cambridge: Cambridge University Press.
Parry, T., Parrott, A., & Institute of Clinical Research. (2004). Statistics in clinical research. Marlow, England: Institute of Clinical Research.
Bacchieri, A., & Della, C. G. (2007). Fundamentals of clinical research: Bridging medicine, statistics, and operations. Milano: Springer.
Ospina, M., University of Alberta Evidence-based Practice Center, & United States. (2009). Meditation practices for health: State of the research. Darby, PA: Diane Publishing.
Furberg, B., & Furberg, C. (2007). Evaluating clinical research: All that glitters is not gold. New York: Springer Verlag.
McPhaul, M. J., & Toto, R. D. (2011). Clinical research: From proposal to implementation. Philadelphia: Wolters Kluwer/Lippincott Williams & Wilkins Health.
Briggs, A., Claxton, K., & Sculpher, M. J. (2006). Decision modelling for health economic evaluation. Oxford: Oxford University Press.
Sim, J., & Wright, C. C. (2002). Research in health care: Concepts, designs and methods. Cheltenham: Nelson Thornes.
Pawar, M. S. (2004). Data collecting methods and experiences: A guide for social researchers. Elgin, IL: New Dawn Press.
Axinn, W. G., & Pearce, L. D. (2006). Mixed method data collection strategies. Cambridge: Cambridge University Press.
Jadad, A. R., & Enkin, M. (2007). Randomized controlled trials: Questions, answers, and musings. Malden, MA: Blackwell Publishing.
Bernard, H. R. (2011). Research methods in anthropology: Qualitative and quantitative approaches. Lanham, MD: AltaMira Press.
Brown-Chidsey, R., & Steege, M. W. (2010). Response to intervention: Principles and strategies for effective practice. New York: Guilford Press.
Brown-Chidsey, R., & Andren, K. J. (2013). Assessment for intervention: A problem-solving approach. New York: The Guilford Press.
Phillips, P. P., & Stawarski, C. A. (2008). Data collection: Planning for and collecting all types of data. San Francisco: Pfeiffer.
Jupp, V., & Sapsford, R. (2006). Data collection and analysis. London: SAGE.
Ashby, F. G. (2011). Statistical analysis of fMRI data. Cambridge, MA: MIT Press.
Härdle, W., & Simar, L. (2011). Applied multivariate statistical analysis. Berlin: Springer.