The Use of ARCH and GARCH Models in Finance Studies
Submitted by………………………………………………………………
Introduction
Under section one, this paper discusses ARCH and GARCH models and their use in finance studies. We use two stock market indexes (stock series y and stock series x) to obtain our empirical results. In section two, we estimate models for the given dependent variables (Y1, Y2 and Y3), drawing on ideas from panel data models.
SECTION 1
A. Diagnostic tests
The volatility of a series tends to vary from time to time, and periods of high volatility tend to cluster together, as do periods of low volatility, so it is important to treat these periods as distinct regimes. This is a common feature of economic time series and is often seen in the financial series that are typically sampled. The ARCH model focuses on approximating volatility that depends on time, treating it as a function of previously observed volatility. Occasionally, the volatility model receives more attention than the conditional mean model. As in multiplicative heteroskedasticity models, regressors can be included in the ARCH specification to form the structural part of the volatility equation.
Heteroskedasticity is a condition of the data in which, first, the error variances are not all equal and, second, the error terms can be expected to be larger for some observations than for others. The standard caution is that, under heteroskedasticity, the regression coefficients from ordinary least squares remain unbiased, but the standard errors and confidence intervals computed by conventional techniques will be too narrow, giving a false sense of precision. ARCH and GARCH models treat heteroskedasticity as a variance to be modeled rather than as a problem to be corrected. As a result, not only are the shortcomings of least squares addressed, but a prediction is computed for the variance of each error term. This often turns out to be of particular importance in finance.
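As an illustration of the volatility clustering that motivates this modeling choice (not part of the paper's analysis), the following Stata sketch simulates an ARCH(1) error series; the parameter values 0.1 and 0.6 and the seed are assumptions chosen only to make the clustering visible in the plot.
clear
set obs 500
set seed 12345
gen t = _n
tsset t
gen sig2 = 0.1 in 1                  // assumed starting variance
gen e = sqrt(sig2)*rnormal() in 1
forvalues i = 2/500 {
    quietly replace sig2 = 0.1 + 0.6*e[`i'-1]^2 in `i'
    quietly replace e = sqrt(sig2)*rnormal() in `i'
}
tsline e                             // periods of high and low volatility cluster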
B. Estimation of data using ARCH and GARCH model
We considered two stock market index series (stock y series and stock x series), collecting yearly data from 1980 to 2014. Our interest was to model the variance of the two stock series over time. We wanted to model the stock y series as a function of the rate of change of the stock x series, that is, of log x_t − log x_{t−1}. We had two equations. The first was the mean equation, which described the mean of our stock series; it contained just one intercept, as shown below.
y_t = β + e_t ……………………………………………………………………………………(i)
As such, we expect the stock y series to vary randomly about its mean, β. The error term was assumed to follow a normal distribution but to be heteroskedastic. The other equation was the variance equation, which defined how our error variance behaved: the variance of the current year's error term depended on data from the preceding year. This equation is given below.
σ²_t = α e²_{t−1} + α1 σ²_{t−1} ……………………………………………………………………(ii)
We can see that σ²_t is a function of the squared error term and of the variance in the previous year. The parameters in equation (ii) must be positive to ensure that the variance, σ²_t, remains positive.
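To make the recursion in equation (ii) concrete, here is a small sketch with made-up numbers (the coefficient and input values are illustrative assumptions, not the paper's estimates): a large shock last period raises this period's variance, which is the clustering mechanism.
scalar a  = 0.6                      // ARCH coefficient (assumed)
scalar a1 = 0.3                      // GARCH coefficient (assumed)
scalar e_last    = 0.05              // last period's shock (assumed)
scalar sig2_last = 0.001             // last period's variance (assumed)
scalar sig2_next = a*e_last^2 + a1*sig2_last
display sig2_next                    // .0018, nearly double sig2_last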
First, we fitted a constant-only model by OLS using Stata and then tested for ARCH effects with a Lagrange multiplier (LM) test. The results, including those for the generalized ARCH (GARCH) model, are presented in part E of this paper.
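A sketch of this workflow in Stata (the variable name logyt is taken from Appendix I; the constant-only case simply omits any regressor):
regress logyt                        // constant-only mean equation fitted by OLS
estat archlm, lags(1)                // Lagrange multiplier test for ARCH effects
arch logyt, arch(1/1) garch(1/1)     // GARCH(1,1) if ARCH effects are detected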
C. Advantages and Disadvantages of the models
ARCH model: Advantages
The benefit of ARCH models lies in their ability to describe time-varying conditional volatility, which can subsequently be used to improve the reliability of interval forecasts. In addition, such a property can be important for understanding the technique. McWhirter (2008) asserts that "a key contribution of the ARCH literature is in the discovery of the obvious variations in the volatility of economic time series." The author continues to explain that "the volatility proves to be anticipated and is as a result of a definite type of dependence which is not linear and not as a result of the exogenous structural alterations in the variables". ARCH models are common in this sector. Also, Rosenberg (2003) gives the following reasons for the success of the ARCH model: in summary, ARCH models are not complex and are easy to handle; the model takes account of clustered errors and nonlinearities; and it also allows for changes in the econometrician's ability to forecast.
ARCH model: Disadvantages
The observations e_t of an ARCH(q) model are not Gaussian, because the model is not linear. The distribution of e_t tends to have heavier tails, so outliers appear more regularly. This is an important feature of the model, because it reproduces the leptokurtosis usually seen in practice. Additionally, once an outlier occurs, it escalates the conditional volatility for the following period.
Though the observations e_t are uncorrelated, they are still not independent. This is easy to see: if the e_t were independent, the process would be linear, but we showed earlier that the ARCH(q) process is not linear. Thus the best predictor of e_t, whether linear or nonlinear, based on the available data is simply the trivial predictor, that is, the series mean, 0. For point prediction of the series itself, then, ARCH models provide no benefit beyond linear ARMA models.
GARCH model: Advantages
Despite the increasing focus on nonlinear analysis of hydrologic time series, limited work has been done in this field. The generalized autoregressive conditional heteroskedasticity (GARCH) method, which is usually employed to model time variation in the second-order moments of financial time series, can be an applicable technique for nonlinear modeling of hydrologic time series. The GARCH model has various applications in the capital markets. The method rests on the assumption that forecasts of the variance at different times depend on past variations in the capital assets: an unanticipated rise or drop in the returns of an asset at time t will produce a rise in the variability projected for the coming period.
Despite the time variation in the variance of hydrologic variables discussed in this work, little research has in practice used the GARCH technique to model this behavior in hydrologic variables (Butgereit, 2010, p. 218). An important generalization of the ARCH model is the GARCH parameterization. This approach also uses a weighted average of past squared residuals, but with declining weights that never reach exactly zero. It provides parsimonious models that are simple to estimate and, even in their simplest form, have proven surprisingly successful in forecasting conditional variances. The most widely used GARCH specification states that the best predictor of the variance in the next period is a weighted average of the long-run average variance, the variance predicted for the current period, and the new information in the current period captured by the most recent squared residual. Such an updating rule is a simple description of adaptive or learning behavior and can be regarded as Bayesian updating. GARCH models are mean reverting and conditionally heteroskedastic, but they have a constant unconditional variance.
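This "weighted average" reading can be sketched numerically. With an intercept ω, the GARCH(1,1) specification is σ²_t = ω + α e²_{t−1} + β σ²_{t−1}, and the long-run (unconditional) variance is ω/(1 − α − β). The parameter values below are illustrative assumptions only:
scalar w = 0.00001                   // intercept (assumed)
scalar a = 0.10                      // weight on the latest squared residual (assumed)
scalar b = 0.85                      // weight on the current variance forecast (assumed)
display w/(1 - a - b)                // long-run variance, .0002; forecasts revert to it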
GARCH model: Disadvantages
The key disadvantage of the GARCH model is its inability to capture the frequently observed asymmetric effects, where systematically different volatility is recorded in response to good and bad news. In a martingale setting, drops and rises in returns can be taken as bad and good news respectively. If a decline in returns is accompanied by a rise in volatility greater than the volatility induced by a rise in returns, we may speak of a 'leverage effect'. The standard GARCH model rests on the hypothesis that unanticipated variations in index returns, expressed in terms of e_t, have the same impact on the conditional variance of stock market returns regardless of their sign, which rules out such asymmetries.
D. The most appropriate model
At times, an ARCH model requires many parameters to fit the data; the GARCH approach offers a parsimonious substitute. The major difficulty with an ARCH model is that it demands a large number of lags to capture the persistence of the volatility. This is a concern because it is hard to decide how many lags to include, and it yields a non-parsimonious model in which the non-negativity constraints on the coefficients may fail. The GARCH model is frequently more parsimonious, and usually a GARCH(1,1) specification is enough. This is because the GARCH model incorporates much of the information that could otherwise only be captured by a bigger ARCH model with a large number of lags. The reason is volatility clustering: because several high-volatility observations tend to occur together, the autoregressive pattern of an ARCH model captures this by showing a statistically significant number of lags. GARCH resolves the situation because a shock decays gradually over time; when volatility clusters, the GARCH term takes on the same autoregressive role as a large number of ARCH coefficients.
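One way to check this parsimony claim on the data is to compare a long-lag ARCH fit with a GARCH(1,1) fit by information criteria; a sketch using the series names from Appendix I (assumed to be in memory), with the lag length 8 chosen only for illustration:
arch logyt logxtlogxt1, arch(1/8)             // ARCH with many lags
estat ic                                      // record AIC/BIC
arch logyt logxtlogxt1, arch(1/1) garch(1/1)  // parsimonious GARCH(1,1)
estat ic                                      // compare: usually as good or better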
E. Summary of results
To run our ARCH and GARCH tests in Stata, we first had to regress our model. Such a regression enabled us to verify the two conditions for conducting the ARCH and GARCH tests: whether our stock series exhibited volatility clustering, and whether an ARCH effect was present. Table 1 displays summary statistics of the regression results.
Table 1. Regression results (standard errors in parentheses)

Variable                 Coefficient             t       p > |t|
logxt − logxt−1          0.007101 (0.0029132)    2.44    0.020
Constant                 1.52169 (0.0047401)
R²                       15.26%
To determine whether the stock series exhibited volatility clustering, we conducted a residual test, as shown in Figure 1.
Figure 1. Residual analysis
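A residual plot like Figure 1 can be produced from the commands in Appendix I (a sketch; it assumes the data have been tsset):
predict R, residuals                 // residuals from the mean regression
tsline R                             // clustered swings indicate volatility clustering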
From Figure 1, we could see that the stock series experienced volatility clustering. Having satisfied the first condition, we proceeded to the LM test, as shown in Table 2.
Table 2. LM test for autoregressive conditional heteroskedasticity (ARCH)

Lags(p)    chi2      df    Prob > chi2
1          4.847     1     0.0277

H0: no ARCH effects
H1: ARCH(p) disturbance
Having satisfied the two conditions, we then conducted the ARCH family regression assuming a Gaussian distribution, as shown in Table 3.
Table 3. ARCH family regression (Gaussian distribution)

logyt                Coefficient    Std. Error    z         P > |z|
logxt − logxt−1      0.003432       0.0017212     1.99      0.046
Constant             1.529821       0.0027978     544.79    0.000
ARCH L1.             0.58233        0.41404       1.41      0.160
GARCH L1.            0.31164        0.20984       1.49      0.138
F. Presentation of results
From the output, the mean return is approximately 1.06% (_cons). The t-ratio in the ARCH model is statistically significant, and the conclusion to be drawn is that the series is subject to autoregressive conditional heteroskedasticity. For this measure to be accurate, the estimation should be repeated three times.
The null hypothesis of no ARCH(1) effects is rejected, since the LM test gives a p-value of 0.0277 (below 0.05). Therefore, we proceed to estimate the ARCH(1) parameter by specifying arch(1). The generalized first-order ARCH model, written GARCH(1,1), is the most commonly used specification for the conditional variance in empirical work, and it can be estimated for the differenced log series. From the results in Table 3, the GARCH term is not statistically significant, since its p-value exceeds 5% (p-value = 13.8%). As such, we can conclude that previous volatility in stock x_t does not necessarily influence volatility in stock y_t under the Gaussian distribution.
Moreover, we estimated our ARCH(1) coefficient as 0.58233 and our GARCH(1) coefficient as 0.31164. Therefore, the fitted GARCH(1,1) model, from our two equations, is:
y_t = 1.529821 + e_t
σ²_t = 0.58233 e²_{t−1} + 0.31164 σ²_{t−1}
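As a hedged illustration of how the fitted variance equation is used, the one-step-ahead variance follows by plugging in the latest residual and conditional variance; the inputs below are assumed values, not taken from the paper:
scalar e_last    = 0.01              // assumed latest residual
scalar sig2_last = 0.0002            // assumed latest conditional variance
display 0.58233*e_last^2 + 0.31164*sig2_last   // one-step-ahead variance
* after -arch-, predict ht, variance recovers the fitted conditional variance series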
G. Bibliography
Butgereit, F. 2010. Exchange rate determination puzzle: Long run behavior and short run dynamics. Hamburg: Diplomica-Verl.
Evans, M. D. D. 2011. Exchange-rate dynamics. Princeton: Princeton University Press.
Gillen, J. 2009. The key to speculation on the New York Stock Exchange. Tempe, AZ: American Federation of Astrologers, Inc.
McWhirter, L. 2008. McWhirter theory of stock market forecasting. Tempe, AZ: American Federation of Astrologers.
Rosenberg, M. R. 2003. Exchange-rate determination: Models and strategies for exchange rate forecasting. New York: McGraw-Hill.
H. Appendix I
List of commands
regress logyt logxtlogxt1                                            // OLS fit of the mean equation (Table 1)
predict R, residuals                                                 // residuals used for Figure 1
estat archlm, lags(1)                                                // LM test for ARCH effects (Table 2)
arch logyt logxtlogxt1, arch(1/1) garch(1/1)                         // GARCH(1,1), Gaussian errors (Table 3)
arch logyt logxtlogxt1, arch(1/1) garch(1/1) distribution(ged 1.5)   // generalized error distribution
arch logyt logxtlogxt1, arch(1/1) garch(1/1) distribution(t 10)      // Student's t distribution
SECTION 2
A. Diagnostic tests
Our data was panel data. In panel data there is a cross section, but the cross section is observed repeatedly over time. If the same cross-sectional variables are sampled at several different points in time, the result is referred to as a longitudinal data set, which is a very valuable form of panel data. Longitudinal data sets are a common feature of medical and biostatistical studies. Panel data sets are becoming very common because the extensive use of computers makes it simpler to assemble and manage such information. As such, the candidate models for analyzing our panel data are pooled OLS, the random effects model, and the fixed effects model.
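Before any of these estimators can be run in Stata, the panel structure must be declared; a sketch with assumed identifier names (id for the cross-sectional unit and year for time, neither named in the paper):
xtset id year                        // declare the panel structure
xtdescribe                           // inspect the pattern of observations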
B. Advantages and Disadvantages of the models
Pooled OLS
Advantages
Pooled OLS is very easy to compute and interpret.
Disadvantage
Pooled OLS is the most common estimator for panel data sets. Boslaugh (2012) notes that pooled OLS estimators disregard the panel structure of the data, treating the observations as serially uncorrelated for a given individual, with homoskedastic errors across all individuals and time periods.
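That critique can be partially addressed while staying with pooled OLS by using cluster-robust standard errors, a technique not used in the paper itself; a sketch with the Appendix II variable names (the id variable is an assumption):
regress y1 x1 x2 x3 x4 x5 x6                   // plain pooled OLS
regress y1 x1 x2 x3 x4 x5 x6, vce(cluster id)  // SEs robust to within-unit correlation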
Fixed effects approach
Advantages
The fixed effects approach controls for all time-constant differences between individuals, so the estimated coefficients of fixed effects models cannot be biased because of omitted time-invariant characteristics.
We use fixed effects (FE) when we are interested in the effects of variables that vary over time. FE analyses the relationship between predictor and outcome variables within an entity, be it a country, individual, company, or other unit. Every entity has its own characteristics that may or may not influence the predictor variables. For example, being male or female can influence the choices made regarding certain matters; likewise, the political system of a given country can have some impact on trade or GDP, and the business practices of a firm can influence its stock prices. FE assumes that something within the entity may affect or bias the predictor or outcome variables, and that we need to control for it; this is the basis of the assumed correlation between an entity's error term and its predictor variables. FE removes the effect of those time-invariant characteristics so that the net effect of the predictors on the outcome variable(s) can be assessed.
Another crucial aspect is that the FE approach assumes those time-invariant characteristics are unique to the individual and should not be correlated with other individual characteristics. Each entity is different, so the entity's error term and the constant (which captures individual characteristics) should not be correlated with those of the others. If the error terms are correlated, then FE is not appropriate: inferences may be incorrect, and the relationship needs to be modeled differently (possibly using random effects). What is being dealt with is thus a trade-off between bias and sampling variability. For non-experimental data, the fixed effects approach tends to reduce bias at the cost of greater sampling variability. Given the many reasons for anticipating biases in observational analysis, I think this is ordinarily a suitable bargain.
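As a concrete anchor for the discussion above, the within (fixed effects) estimator corresponds to the fe option of xtreg used in Appendix II; a minimal sketch for Y1:
xtreg y1 x1 x2 x3 x4 x5 x6, fe       // fixed (within) effects
estimates store Fixed                // saved for the Hausman test below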
Disadvantage
One side effect of the fixed effects approach is that the models cannot be used to investigate time-invariant causes of the dependent variables. Technically, time-invariant characteristics of individuals are perfectly collinear with the individual (or entity) dummies. Substantively, fixed effects methods are designed to study the causes of changes within an entity, such as a person. A time-invariant characteristic cannot cause such a change, because it is constant for each person.
Additionally, the other major disadvantage of fixed effects approaches emerges when the ratio of within- to between-person variation drops to zero: a fixed effects approach cannot estimate coefficients for variables that have no within-subject variation. Thus, a fixed effects approach will not yield coefficients for characteristics such as race, sex, or place of birth. Among adults, it is not very useful for estimating the effects of height or years of schooling (though there may be some within-person variation in height). Remember, however, that all the constant variables are controlled for in a fixed effects regression, even though no coefficients are estimated for them; indeed, this control is probably more effective than in conventional regression. And, as will be seen later, interactions between constant variables, such as sex, and variables that vary over time can be included. In short, for many observational analyses, fixed effects approaches are mainly useful for estimating the effects of variables that vary within subject.
Random effects model
Advantages
The rationale behind the random effects model is that, unlike in the fixed effects model, the variation across entities is assumed to be random and uncorrelated with the predictor or independent variables included in the model. If there is reason to believe that differences across entities influence the dependent variable, then random effects should be used. An advantage of random effects is that time-invariant variables (such as gender) can be included; in the fixed effects model, these variables are absorbed by the intercept. Random effects assume that the entity's error term is uncorrelated with the predictors, which allows time-invariant variables to serve as explanatory variables. In random effects, one needs to specify the individual characteristics that may or may not influence the predictor variables. The problem is that some variables may not be available, leading to omitted variable bias in the model. RE allows generalization of the inferences beyond the sample used in the model.
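The corresponding random effects fit, again mirroring Appendix II (a minimal sketch for Y1):
xtreg y1 x1 x2 x3 x4 x5 x6, re       // random effects (GLS)
estimates store Random               // saved for the Hausman test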
Disadvantage
One limitation of the typical application of the random effects model (REM) is that researchers do very limited analysis of how well their data fit the model. When a statistical assessment of the model is applied, it is typically the Hausman test, which compares the standard FEM and REM. The Hausman test can lend support to one of the techniques even if the selected model is an inadequate representation of the data. Additional tests exist that can give evidence of a model's adequacy. As will be explained, the standard FEM and REM are overidentified models, which implies overidentifying restrictions on the variables; these restrictions can be tested and offer evidence for the validity of the FEM, the REM, or alternative model specifications.
C. Most Appropriate model
In order to determine the most appropriate model for estimating the results, we adopted the Hausman test alongside the Breusch and Pagan LM test. In the Hausman test, the null hypothesis was such that:
Ho: Random effects model is the most appropriate
Ha: Fixed effects model is the most appropriate
For the Breusch and Pagan LM test, our null hypothesis was:
Ho: Pooled OLS is the most appropriate model
Ha: Random effects model is the most appropriate
We regressed the data using the random effects model and the fixed effects model and then stored the estimates in memory using the following commands: estimates store Random and estimates store Fixed. We were then able to perform the Hausman test; the results are displayed in Table 4 (see Appendix II).
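A sketch of the full test sequence, consistent with Appendix II (note that xttest0 must follow a random effects fit):
hausman Fixed Random                 // H0: the random effects model is appropriate (Table 4)
xttest0                              // Breusch and Pagan LM test, pooled OLS vs RE (Table 5)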
From Table 4, the p-value is 0.000, so we rejected Ho and concluded that the fixed effects model is the most appropriate model.
Similarly, the results of the Breusch and Pagan LM test are shown in Table 5 in Appendix II.
Again, from Table 5, the p-value is 0.0000, so we reject the null hypothesis and deduce that the random effects model is preferred to pooled OLS in this case. However, the test between the random effects model and the fixed effects model revealed that fixed effects was the most appropriate model.
D. Summary and presentation of results
With the fixed effects model now established as the most appropriate, we estimated the results for Y1, Y2 and Y3, as illustrated in Tables 6a, 6b and 6c respectively (see Appendix II). With regard to Y1 (Table 6a), some independent variables (X1, X4 and X5) were statistically insignificant, as their corresponding p-values were greater than 0.05. Only X3 and X6 relate negatively with Y1; that is, when they increase, Y1 decreases.
From Table 6b, only X1 and X5 were statistically insignificant at the 5% level of significance; that is, unlike the other independent variables, these two did not really explain changes in Y2. Moreover, only X1 and X3 relate negatively with the dependent variable, Y2.
Lastly, with regard to the dependent variable Y3, only X1 and X5 were statistically insignificant, and only X1 relates positively with Y3.
E. Bibliography
Ashgar, G. & Saleh, Z. 2012. Normality tests for statistical analysis: A guide for non-statisticians. Int J Endocrinol Metab, 10(2): 486–489.
Boslaugh, S. 2012. Statistics in a nutshell. Sebastopol, CA: O'Reilly Media.
Gillian, W. 2015. Statistical analysis and reporting: Common errors found during peer review and how to avoid them. Swiss Med Wkly, 145: w14076.
Neil, R. C. & Ma'ayan, A. 2011. Introduction to statistical methods to analyze large data sets: Principal components analysis. Int J Endocrinol Metab, 9(3): 287–301.
Valen, E. J. 2008. Statistical analysis of the National Institutes of Health peer review system. PNAS, 105(32).
F. Appendix II
Table 4. Hausman test
Table 5. Breusch and Pagan LM test
Table 6a. Summary of y1 regression results
Table 6b. Summary of y2 regression results
Table 6c. Summary of y3 regression results
List of commands
xtreg y1 x1 x2 x3 x4 x5 x6, fe       // fixed effects, Y1
estimates store Fixed
xtreg y1 x1 x2 x3 x4 x5 x6, re       // random effects, Y1
estimates store Random
hausman Fixed                        // Hausman test against the current (RE) estimates (Table 4)
xttest0                              // Breusch and Pagan LM test (Table 5)
xtreg y2 x1 x2 x3 x4 x5 x6, fe       // fixed effects, Y2
xtreg y2 x1 x2 x3 x4 x5 x6, re       // random effects, Y2
xtreg y3 x1 x2 x3 x4 x5 x6, fe       // fixed effects, Y3
xtreg y3 x1 x2 x3 x4 x5 x6, re       // random effects, Y3
regress y1 x1 x2 x3 x4 x5 x6         // pooled OLS, Y1
regress y2 x1 x2 x3 x4 x5 x6         // pooled OLS, Y2
regress y3 x1 x2 x3 x4 x5 x6         // pooled OLS, Y3