The Benefits and Risks of Statistical Methods - Assignment Example


Math Reflections

Chapters one and two examine the benefits and risks of statistical methods and explain seven critical components that every research experiment should contain in order to yield meaningful data. Statistical techniques are particularly important in cases where the variables under investigation do not exhibit obvious differences. In order to conduct a successful study, a researcher needs to understand how the experiment should be carried out: identify the sample to be investigated, determine an appropriate sample size for the study, and decide whether to run a randomized or an observational experiment. A good researcher should also be able to notice when essential information is missing from the data, what additional information should be included in the study, and how to interpret the results (Utts 3-14).

In chapters three and four, the writer discusses the differences between closed-ended and open-ended questions and gives the components of a good sample (Utts 15-46). Data are observations (measurements) and can be quantitative or qualitative. Closed-ended questions may consciously or subconsciously introduce bias into the data or ignore important areas of the study; open-ended questions, on the other hand, may gather information that is illogical or irrelevant to the study. A researcher should weigh these question types carefully before conducting research, because obtaining the right information is critical to drawing conclusions from the data. To obtain a good sample, the researcher must also identify an appropriate sampling technique, and this section presents and explains the different sampling methods available to researchers (Utts 15-46). Chapter five presents case studies that explore the relationship between a predictor variable and a response variable.
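To make the sampling techniques concrete, here is a minimal Python sketch of two of the methods such a survey chapter typically surveys, simple random sampling and systematic sampling. The population of 1,000 numbered subjects is invented purely for illustration, not data from Utts:

```python
import random

random.seed(0)                        # fixed seed so the sketch is reproducible
population = list(range(1000))        # hypothetical population of 1,000 subjects

# Simple random sample: every subject has the same chance of selection.
srs = random.sample(population, k=50)

# Systematic sample: every 20th subject after a random starting point.
start = random.randrange(20)
systematic = population[start::20]    # 1000 / 20 = 50 subjects

print(len(srs), len(systematic))
```

Either scheme yields a probability sample; what makes the sample "random" is the selection mechanism, not the particular subjects that happen to be drawn.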
To determine whether a treatment is effective, most experimenters create control groups that are handled identically to the treatment groups except that they receive no treatment. In some experiments a person or unit serves as his or her own control, and this plan can be extended so that several people or treatments act as their own control groups, a design described as a randomized block design (Utts 90-96). The main aim of any randomized design is to even out confounding variables: by randomly assigning treatments to blocks, a cause-and-effect conclusion can be drawn that would not be possible in an observational study (Utts 81-98).

Gathering useful insight from collected data starts with summarizing the raw responses. How to summarize and interpret data may not always be immediately obvious, and there are several methods of reviewing and displaying numerical information available to a researcher. Using visual representations to explain data and surveys makes them easier to understand. Chapter seven presents ways that raw data can be condensed and displayed in graphs, along with guidelines for creating graphs that can be interpreted easily to determine particular characteristics of the population (Utts 124-140).

A density curve is the most popular way of representing a population. It is created by connecting the tops of the bars of a histogram with a smooth curve (Utts 471-4). A bell-shaped curve implies that the population under study is approximately normally distributed, while a skewed density curve leans to the left or the right depending on the nature of the population. Density curves are useful for determining the percentage or proportion of the population that falls within a particular interval. Chapter eight discusses the meaning of differently shaped density curves and their importance to any study (Utts 164).
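Reading a proportion off a bell-shaped density curve can be sketched in a few lines of Python using the standard library's normal distribution. The exam-score figures (mean 70, standard deviation 10) are invented for illustration:

```python
from statistics import NormalDist

# Hypothetical bell-shaped population: exam scores with mean 70, sd 10.
scores = NormalDist(mu=70, sigma=10)

# Proportion of the population scoring between 60 and 80, i.e. within
# one standard deviation of the mean -- about 68% for a normal curve.
within_one_sd = scores.cdf(80) - scores.cdf(60)
print(round(within_one_sd, 3))   # ~0.683
```

The same subtraction of cumulative proportions works for any interval, which is exactly the use of density curves the chapter describes.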
Before attempting to interpret data displayed in a graph, the researcher needs to ensure that the proper display technique was chosen; the display type depends on the variable in the study. Chapter nine explains the criteria for selecting a suitable display for the variable involved and provides a checklist for statistical pictures. As with other methods of interpreting data, there are difficulties and disasters a researcher may meet while displaying data. The most common include misleading units of measurement, erroneous information, changes in labeling, and graphs that do not start from zero (Utts 162-173).

Chapter ten examines the difference between statistical and deterministic relationships. A relationship is statistically significant if it has a probability of 0.95 or more of being stronger than the relationship we would expect to see by chance (Utts 180-195). Statistical significance can, however, be distorted by a very large or a very small sample. Correlation measures the strength of the linear relationship between two variables, and only the linear relationship, by gauging how close the points in a scatter plot lie to a straight line. Positive correlation indicates that the two variables under study rise and fall together, while negative correlation indicates that as one variable increases the other decreases (Utts 180-195).

Chapter eleven uses case studies to explain further reasons why two variables may be related. A positive correlation does not necessarily imply a genuine relationship between two variables. Illegitimate correlation can produce a deceptive linear relationship between sets of quantitative variables, and correlation influenced by outliers or computed from means tends to exaggerate the strength of a linear relationship.
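The correlation measure described above can be sketched directly from its definition. The hours-studied and exam-score data below are invented for illustration; reversing one variable flips the sign of the correlation without changing its strength:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation: how close the (x, y) points lie to a straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# Hypothetical data: hours studied vs. exam score (a positive association).
hours = [1, 2, 3, 4, 5]
score = [52, 60, 61, 70, 75]
print(round(pearson_r(hours, score), 3))        # close to +1

# Pairing hours with the scores in reverse order gives the mirror-image
# negative association: same strength, opposite sign.
print(round(pearson_r(hours, score[::-1]), 3))  # close to -1
```

Note that the statistic says nothing about curved relationships or about causation, which is precisely the caution the chapter raises.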
A relationship between two variables can occur because the predictor variable actually causes the response variable, because confounding variables exist in the data, because both variables are changing over time, or because the association is merely a coincidence (Utts 200-211).

Situations can arise where a researcher is required to perform a hypothesis test on contingency tables. Contingency tables are used when all the variables are categorical and the samples are independent (Utts 219-57). Hypothesis tests on contingency tables are based on the chi-square statistic and proceed like other hypothesis tests: the statistic is computed from the observed counts and compared with the counts that would be expected if no relationship existed. In chapter twelve the author first reviews contingency tables, then discusses the chi-square distribution, and then proceeds to the hypothesis test itself, giving full computational illustrations that involve contingency tables (Utts 219-57).

Personal probabilities are the values individuals assign based on how likely they think an event is to occur. Psychological influences on risk perception mentioned in the book include the certainty effect, the pseudocertainty effect, the availability heuristic, and anchoring; further influences include the conjunction fallacy, forgotten base rates, optimism, and overconfidence. Chapter sixteen provides more insight into these effects. Because many professionals make decisions using personal probabilities, this section also gives the researcher tips for improving personal probabilities and judgment (Utts 297).

The sampling technique, rather than the outcome of the survey, is what makes a sample random. Random samples, particularly small random samples, are not necessarily representative of the whole population (Utts 349).
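The chi-square computation on a contingency table can be sketched as follows. The 2x2 counts (a hypothetical treatment-vs-outcome table) are invented for illustration, not an example from Utts:

```python
# Hypothetical 2x2 contingency table: rows are groups, columns are outcomes.
#                 improved   not improved
observed = [[30, 20],      # treatment group
            [18, 32]]      # control group

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Expected count for each cell if the variables were unrelated:
# (row total * column total) / grand total.
expected = [[r * c / grand for c in col_totals] for r in row_totals]

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected.
chi_sq = sum((o - e) ** 2 / e
             for orow, erow in zip(observed, expected)
             for o, e in zip(orow, erow))

# For a 2x2 table (1 degree of freedom) the 0.05 cutoff is about 3.84.
print(round(chi_sq, 2), chi_sq > 3.84)
```

Here the statistic exceeds the cutoff, so the hypothetical data would be judged statistically significant at the 0.05 level; with smaller counts the same table shape could easily fall below it.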
Small sample proportions and sample means are likewise not representative of the entire population proportion and population mean (Utts 357). Larger samples, as well as repeating the experiment, yield more accurate estimates of the respective population values than smaller samples do; in that case, the density curve for the means and proportions from different samples will be approximately bell-shaped. This concept is explained and illustrated with cases in chapter eighteen (Utts 349).

Other statistical techniques for estimating population values are hypothesis testing and the use of a confidence interval. A confidence interval is a range of values, with an upper and a lower limit calculated from the experiment, that is expected to contain the true population value. In statistics the confidence level is typically stated as 95%, with a 5% chance that the population value will not lie in the interval (Utts 381): about 95% of all confidence intervals calculated in this manner will include the population value of the proportion, and about 5% will miss it. Chapter nineteen explains how the confidence interval for proportions is constructed and how the result is interpreted (Utts 361).

Many experimenters encounter situations that require them to compare the population means of two groups, or of one group under two dissimilar conditions, and draw a conclusion. Chapter twenty describes how such an experiment is conducted by clarifying the role of the confidence interval in the comparison (Utts 379-81). One approach is to construct separate confidence intervals for the two groups and compare them; a more straightforward and efficient method is to construct a single confidence interval for the difference between the two means. This section presents different case studies that illustrate the approach (Utts 379-81). All areas of quantitative research have some issue or dilemma that they are trying to resolve.
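The construction of a 95% confidence interval for a proportion can be sketched in a few lines; the survey counts below are invented for illustration, and 1.96 is the usual normal-curve multiplier for 95% confidence:

```python
from math import sqrt

def proportion_ci(successes, n, z=1.96):
    """Confidence interval for a population proportion (z = 1.96 gives 95%)."""
    p_hat = successes / n
    margin = z * sqrt(p_hat * (1 - p_hat) / n)   # z standard errors
    return p_hat - margin, p_hat + margin

# Hypothetical survey: 520 of 1,000 respondents favour a proposal.
low, high = proportion_ci(520, 1000)
print(f"{low:.3f} to {high:.3f}")   # an interval straddling 0.52
```

Because the interval here includes 0.50, these hypothetical data would not rule out an evenly split population, which illustrates why the interval, not just the sample proportion, must be examined.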
The aim of hypothesis testing is to formulate such research questions in a way that they can be tested efficiently. Chapter twenty-one discusses and explains the process of hypothesis testing. The decision in any hypothesis test is based on a summary of the data called the test statistic; to judge how unusual the test statistic is, the experimenter calculates the p-value (Utts 397). The p-value helps the researcher determine whether the result of the hypothesis test is statistically significant: the result is considered statistically significant if the p-value is less than or equal to the level of significance (Utts 397).

Chapter twenty-two introduces the standardized score and the p-value. The standardized score is simply the test statistic for a hypothesis test involving a mean, a proportion, or the difference between two means. The p-value is read from a table of percentiles, and it tells the researcher the percentile into which the sample under study would fall if the null hypothesis were true (Utts 413-25). Hypothesis testing uses distributions such as the normal distribution and the chi-square distribution to estimate the probability of getting a certain value by chance, and the author details the different test statistics a researcher could use depending on the size of the sample collected (Utts 413-25).

In chapter twenty-three the author emphasizes the importance of interpreting results carefully. Whether or not results are statistically significant, a researcher needs to examine the confidence interval in order to judge the size of the effect of the relationship or difference in the study: a large sample may produce a statistically significant result for a trivially small effect, while a small sample may falsely suggest that no difference exists.
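The standardized score and its p-value can be illustrated with a short sketch for a one-sample test on a proportion. The counts are hypothetical, and a one-sided alternative (population proportion greater than 0.5) is assumed:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical test: is the population proportion greater than 0.5,
# given 540 successes in 1,000 trials?
p0, successes, n = 0.5, 540, 1000
p_hat = successes / n

# Standardized score: how many standard errors the sample proportion
# lies above the null value p0.
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# One-sided p-value: the chance of a result at least this extreme
# if the null hypothesis (p = 0.5) were true.
p_value = 1 - NormalDist().cdf(z)
print(round(z, 2), round(p_value, 4))

# Statistically significant at the usual 0.05 level?
print(p_value <= 0.05)
```

With these invented counts the standardized score is about 2.5 standard errors, so the p-value falls below 0.05 and the result would be declared statistically significant, though, as the chapter stresses, a confidence interval is still needed to judge whether the effect is large enough to matter.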
The researcher needs to gauge the certainty in a sample from the width of the confidence interval before drawing any conclusions (Utts 519).

Work Cited

Utts, Jessica. Seeing Through Statistics, Volume 1. London: Cengage Learning, 2004. Print.
