In this session we see how to build volatility models (specifically from the widely used GARCH family) using R. We will be using several packages designed for volatility analysis of financial data.
Also see this blog for some ideas on applications of volatility modelling in R.
These models have many applications, and the study of volatility is particularly important in financial modelling. As a key input to derivative pricing (think Black-Scholes formula where we have to input our estimate of implied volatility), financial engineers need a simple and robust means of using the past as effectively as possible to provide an estimate of this elusive measure.
The GARCH family is incredibly broad (Bollerslev 2008 identified over 180 types of GARCH models) and there are many packages in R that allow its estimation. Another particularly powerful platform is G@RCH in OxMetrics, designed by Laurent. We will be using rugarch and rmgarch, designed by Alexios Ghalanos, while also looking briefly at other packages (such as MTS and fGarch).
My recommended literature for this session includes the very good GARCH modeling summary paper by Hentschel (1995). Also see general summary papers by Bauwens, Laurent, and Rombouts (2006) & Silvennoinen and Teräsvirta (2009). Of course also look at the Tsay (2014) and Tsay (2012) textbooks, as well as Ruppert (2011) - these are great sources with available code.
In terms of using rugarch, I suggest viewing the author of the package’s blog here. It includes some great examples and research ideas that you can apply.
# Always ensure fmxdat is updated to the latest version
# please: pacman::p_install_gh('Nicktz/fmxdat')
::p_load("tidyverse", "devtools", "rugarch", "forecast",
pacman"tbl2xts", "lubridate", "PerformanceAnalytics", "ggthemes",
"robustbase")
fmxdat::findata dailydata <-
To start things off, let's consider an i.i.d. Gaussian time series for log returns Xt:
$$x_t \sim N(\mu, \sigma^2)$$
$$\hat{\mu} = \frac{1}{T}\sum_{t=1}^{T} x_t, \qquad \hat{\Sigma} = \frac{1}{T-1}\sum_{t=1}^{T}(x_t - \hat{\mu})'(x_t - \hat{\mu})$$
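As a quick sanity check on the notation above, here is a tiny sketch that simulates such an i.i.d. series and recovers mu and sigma (the seed and parameter values are arbitrary choices of mine, matching the "Placid fund" used below):

set.seed(42) # arbitrary seed, my assumption
x <- rnorm(1000, mean = 0.005, sd = 0.035)
c(mu_hat = mean(x), sigma_hat = sd(x))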
Let's start by creating some synthetic data using the rugarch package:
pacman::p_load(rugarch)
# Start by specifying an AR(1) model with given coefficients and parameters:
# Placid fund...
arma_fixed_spec <- arfimaspec(mean.model = list(armaOrder = c(1, 0),
                                                include.mean = TRUE),
                              fixed.pars = list(mu = 0.005, ar1 = -0.35, sigma = 0.035))

# Wild fund...
arma_fixed_spec_Wild <- arfimaspec(mean.model = list(armaOrder = c(1, 0),
                                                     include.mean = TRUE),
                                   fixed.pars = list(mu = 0.005, ar1 = -0.35, sigma = 0.07))
# The output shows the details:
# *----------------------------------*
# * ARFIMA Model Spec *
# *----------------------------------*
# Conditional Mean Dynamics
# ------------------------------------
# Mean Model: ARFIMA(1,0,0)
# Include Mean: TRUE
#
# Conditional Distribution
# ------------------------------------
# Distribution : norm
# Includes Skew : FALSE
# Includes Shape : FALSE
# Includes Lambda : FALSE
# Also check on your side:
# arma_fixed_spec@model$pars # Notice the large possible parameter space (depending on model chosen)
# and:
# arma_fixed_spec@model$fixed.pars # Notice the parameters set by user above.
# Let's create some data with this:
pacman::p_load(tidyverse)
pacman::p_load(tbl2xts)

# simulate one path
Total <- 1000
set.seed(1234)

rand_dat <- arfimapath(arma_fixed_spec, n.sim = Total)
rand_dat2 <- arfimapath(arma_fixed_spec_Wild, n.sim = Total)

bootstrap <- tibble::tibble(
  date = rmsfuns::dateconverter(lubridate::ymd(20100101), lubridate::today(), Transform = "weekdays")[1:1000],
  Mild_Cherry = c(rand_dat@path$seriesSim),
  Wild_Fig = c(rand_dat2@path$seriesSim)
) %>% gather(Portfolio, Return, -date)
bootstrap %>%
  ggplot() +
  geom_line(aes(date, Return, color = Portfolio)) +
  facet_wrap(~Portfolio) +
  fmxdat::theme_fmx() + fmxdat::fmx_cols()

bootstrap %>%
  group_by(Portfolio) %>%
  mutate(Idx = cumprod(1 + Return)) %>%
  ggplot() +
  geom_line(aes(date, Idx, color = Portfolio), size = 1.2, alpha = 0.8) +
  fmxdat::theme_fmx() + fmxdat::fmx_cols()
Now that we know the DGP for bootstrap, let’s see how we can estimate the parameters driving the process:
arma_spec = rugarch::arfimaspec(mean.model = list(armaOrder = c(1, 0), include.mean = TRUE))

# estimate model
# Notice rugarch requires data be xts...
xtsboot <- tbl2xts::tbl_xts(bootstrap, cols_to_xts = Return, spread_by = Portfolio)

arma_fit_MC <- rugarch::arfimafit(spec = arma_spec, data = xtsboot[, "Mild_Cherry"])

arma_fit_WF <- rugarch::arfimafit(spec = arma_spec, data = xtsboot[, "Wild_Fig"])

# show(arma_fit_MC) # to get a huge amount of statistical details

rugarch::coef(arma_fit_MC)
## mu ar1 sigma
## 0.004317409 -0.316040006 0.034867322
rugarch::coef(arma_fit_WF)
## mu ar1 sigma
## 0.005768281 -0.339569500 0.068644589
Compare the above to the actual parameters (this data has a known DGP), and repeat for different N - notice how the estimate of mu remains noisy even for larger N. A sketch of one way to do this follows.
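A small sketch of this exercise (my own illustration, not from the notes): refit the same AR(1) specification on growing sample sizes and watch how slowly the mu estimate settles.

spec_chk <- rugarch::arfimaspec(mean.model = list(armaOrder = c(1, 0), include.mean = TRUE))
set.seed(1234)
sapply(c(250, 1000, 5000), function(N) {
  sim <- arfimapath(arma_fixed_spec, n.sim = N)
  fit <- rugarch::arfimafit(spec = spec_chk, data = c(sim@path$seriesSim))
  rugarch::coef(fit)["mu"]
})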
Second, note that broom::tidy() does not work here as it does with lm regressions. You will have to construct the output the old-fashioned way, by extracting what you need from the output object - type str(arma_fit_MC) to see what is available.
From this, we see e.g. that the following can be used to construct the output:
arma_fit_MC@fit$matcoef
## Estimate Std. Error t value Pr(>|t|)
## mu 0.004317409 0.0008379951 5.15207 2.576266e-07
## ar1 -0.316040006 0.0300083642 -10.53173 0.000000e+00
## sigma 0.034867322 0.0007796568 44.72137 0.000000e+00
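For example, a hedged sketch of the "old-fashioned way": pull the coefficient matrix out of the fitted S4 object and tidy it up yourself (rownames_to_column and as_tibble come with the tidyverse loaded above).

coefs_tidy <- arma_fit_MC@fit$matcoef %>%
  as.data.frame() %>%
  rownames_to_column(var = "term") %>%
  as_tibble()
coefs_tidy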
Next we consider some of SA's largest financial institutions and banks (we consider the daily adjusted closing prices). The higher the frequency, the higher typically the volatility (noise is often smoothed out as the frequency decreases). Let's load the data and go through the usual calculation of returns:
# dlog returns:
rtn <- fmxdat::findata %>% gather(Tickers, TRI, -Date) %>%
  group_by(Tickers) %>%
  mutate(dlogret = log(TRI) - log(lag(TRI))) %>%
  mutate(scaledret = (dlogret - mean(dlogret, na.rm = T))) %>%
  filter(Date > first(Date)) %>% ungroup()

# Which is (out of interest) equivalent to:
# rtn <- log(dailydata %>% arrange(Date) %>% tbl_xts()) %>% diff(., lag = 1) %>%
#   scale(., center = T, scale = F) %>% xts_tbl %>% gather(Type, Ret, -date)
First, let’s clean up the names. I know they are all listed on the JSE, and that they are closing prices… (as before, we use gsub. But note what we need to do to remove ‘.’ or, e.g., ‘(’ or other special characters…)
rtn <- rtn %>% mutate(Tickers = gsub("JSE\\.", "", Tickers)) %>%
  mutate(Tickers = gsub("\\.Close", "", Tickers))
Next, let’s plot the TRI series to check for obvious outliers in the returns series:
Tidyrtn <- rtn

ggplot(Tidyrtn) + geom_line(aes(x = Date, y = TRI, colour = Tickers,
                                alpha = 0.5)) +
  ggtitle("Price Changes: SA Financials") +
  guides(alpha = "none") +
  fmxdat::theme_fmx()
# Or cleaning the plot up a bit....
ggplot(Tidyrtn) + geom_line(aes(x = Date, y = TRI, colour = Tickers,
                                alpha = 0.5)) +
  ggtitle("Price Changes: SA Financials") +
  facet_wrap(~Tickers, scales = "free_y") +
  guides(alpha = "none") +
  fmxdat::theme_fmx() +
  scale_color_hue(l = 20) +
  scale_x_date(labels = scales::date_format("'%y"), date_breaks = "2 years") +
  theme(axis.text = element_text(size = 7))
Histogram plots are also useful here:
ggplot(Tidyrtn) +
geom_histogram(aes(x = dlogret, fill = Tickers, alpha = 0.5)) +
ggtitle("Log Returns: SA Financials") +
facet_wrap(~Tickers, scales = "free") +
guides(fill = "none", alpha = "none") +
  fmxdat::theme_fmx()
It seems that RMH SA in particular has a large clumping together of returns in the negative tail. This might bias our model estimates (especially if it is driven by only a few outliers).
We could now turn to PerformanceAnalytics’ return cleaning function to solve this:
Note, below I left_join the cleaned returns to our original returns. This could allow some nice checking…
pacman::p_load(PerformanceAnalytics)

Tidyrtn <-
  left_join(
    Tidyrtn,
    Tidyrtn %>% tbl_xts(., cols_to_xts = "dlogret", spread_by = "Tickers") %>%
      PerformanceAnalytics::Return.clean(., method = c("none", "boudt", "geltner")[2], alpha = 0.01) %>%
      tbl2xts::xts_tbl() %>% gather(Tickers, CleanedRet, -date) %>% rename(Date = date),
    by = c("Date", "Tickers")
  )

# Now let's see what changed:
Tidyrtn %>% filter(CleanedRet != dlogret)
## # A tibble: 265 x 6
## Date Tickers TRI dlogret scaledret CleanedRet
## <dttm> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 2006-05-23 00:00:00 ABG 11499 0.0689 0.0688 0.0598
## 2 2006-06-08 00:00:00 ABG 9601 -0.0732 -0.0733 -0.0596
## 3 2006-06-09 00:00:00 ABG 10600 0.0990 0.0989 0.0598
## 4 2006-06-26 00:00:00 ABG 9100 -0.0637 -0.0639 -0.0596
## 5 2007-08-10 00:00:00 ABG 13306 -0.0615 -0.0616 -0.0596
## 6 2008-02-26 00:00:00 ABG 11999 0.0756 0.0755 0.0598
## 7 2008-07-16 00:00:00 ABG 9025 0.0934 0.0933 0.0598
## 8 2008-09-19 00:00:00 ABG 10920 0.0634 0.0633 0.0598
## 9 2008-10-06 00:00:00 ABG 9995 -0.0682 -0.0683 -0.0596
## 10 2008-10-13 00:00:00 ABG 9930 0.0762 0.0761 0.0598
## # ... with 255 more rows
As discussed in class, ARCH effects are seen in the squared residuals of the mean equation, or, as below, when graphing the squared returns. Let's graph the returns, squared returns and absolute returns for an equally weighted portfolio of these shares (note how I construct the equally weighted portfolio; also note (NB) that I use simple returns when doing portfolio analysis - not the dlog returns):
# Create the Simple returns:
Rtn <-
  Tidyrtn %>% group_by(Tickers) %>% mutate(SimpleRet = TRI / lag(TRI) - 1) %>% ungroup() %>%
  mutate(Date = lubridate::ymd(Date)) %>% # Easier to work with ymd here
  tbl_xts(., cols_to_xts = SimpleRet, spread_by = Tickers)

Rtn[is.na(Rtn)] <- 0

Wt <-
  Tidyrtn %>% mutate(YM = format(Date, "%Y%b")) %>% group_by(YM) %>%
  filter(Date == last(Date)) %>% mutate(weight = 1/n()) %>%
  ungroup() %>% select(Date, Tickers, weight) %>% unique %>%
  mutate(Date = lubridate::ymd(Date)) %>% # Easier to work with ymd here
  tbl_xts(., spread_by = Tickers)

Wt[is.na(Wt)] <- 0

# Remember - if you don't divide rtn by 100 you might have log of negative values - producing NAs... Try it!
porteqw <- rmsfuns::Safe_Return.portfolio(Rtn, weight = Wt, geometric = FALSE)
Now, let's plot the returns, squared returns and |returns|:
Plotdata = cbind(porteqw, porteqw^2, abs(porteqw))
colnames(Plotdata) = c("Returns", "Returns_Sqd", "Returns_Abs")

Plotdata <- Plotdata %>% xts_tbl() %>%
  gather(ReturnType, Returns, -date)

ggplot(Plotdata) +
  geom_line(aes(x = date, y = Returns, colour = ReturnType, alpha = 0.5)) +
  ggtitle("Return Type Persistence: SA Financials") +
  facet_wrap(~ReturnType, nrow = 3, ncol = 1, scales = "free") +
  guides(alpha = "none", colour = "none") +
  fmxdat::theme_fmx()
From the above figure it seems that, while the returns themselves show little persistence, the squared and absolute returns are highly persistent - suggesting strong conditional heteroskedasticity.
To verify the above, let’s plot the ACFs:
forecast::Acf(porteqw, main = "ACF: Equally Weighted Return")

forecast::Acf(porteqw^2, main = "ACF: Squared Equally Weighted Return")

forecast::Acf(abs(porteqw), main = "ACF: Absolute Equally Weighted Return")
# Note:I use the packagename and :: here - as other packages
# also (stupidly) have the same name function (e.g. FinTS
# which we use later).
The above confirms what we expected - in particular very strong conditional heteroskedasticity, as well as long memory.
A formal test for ARCH effects: LBQ stats on squared returns:
Box.test(coredata(porteqw^2), type = "Ljung-Box", lag = 12)
##
## Box-Ljung test
##
## data: coredata(porteqw^2)
## X-squared = 761.27, df = 12, p-value < 2.2e-16
The test rejects the null of no ARCH effects - hence we need to control for the remaining conditional heteroskedasticity in the returns series.
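An alternative formal check is the ARCH LM test from the FinTS package (referred to in the comment above); a small sketch, assuming FinTS is installed:

pacman::p_load(FinTS)
FinTS::ArchTest(coredata(porteqw), lags = 12)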
Let’s now fit the ARCH models using the package rugarch.
Remember, the standard garch(1,1) model is written as follows:
$$r_t = \mu + \varepsilon_t$$
$$\varepsilon_t = \sigma_t z_t$$
$$\sigma_t^2 = \alpha + \beta_1 \varepsilon_{t-1}^2 + \beta_2 \sigma_{t-1}^2$$
$$z_t \sim N(0, 1)$$
Fitting it, we first specify our GARCH setup, and then we fit it. We have to choose a mean model and a variance model. Let's choose the most appropriate mean model first, using the autoarfima function to test all combinations. Note that this estimation takes a long time if done directly (you can check this), so we make use of the parallel library, which lets your machine run several of these computations simultaneously (don't worry about the details, just use it). A sketch follows below.
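A hedged sketch of what that search might look like (argument names as per rugarch's autoarfima; the cluster size of 8 is an arbitrary choice of mine):

pacman::p_load(parallel)
cl <- makePSOCKcluster(8)
best_mean <- autoarfima(data = porteqw, ar.max = 2, ma.max = 2,
                        criterion = "AIC", method = "full", cluster = cl)
stopCluster(cl)
# best_mean$rank.matrix # ranked candidate ARMA specifications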
So the function suggests we should use an ARMA(2,2) model without a mean. For simplicity, though, let's suppose the function had suggested an AR(1) model, and fit a GARCH(1,1) on an AR(1) mean model:
# Note we specify the mean (mu) and variance (sigma) models separately:
garch11 <-
ugarchspec(
variance.model = list(model = c("sGARCH","gjrGARCH","eGARCH","fGARCH","apARCH")[1],
garchOrder = c(1, 1)),
mean.model = list(armaOrder = c(1, 0), include.mean = TRUE),
distribution.model = c("norm", "snorm", "std", "sstd", "ged", "sged", "nig", "ghyp", "jsu")[1])
# Now to fit, I use as.matrix and the data - this way the plot functions we will use later will work.
garchfit1 = ugarchfit(spec = garch11, data = porteqw)
# Note it saved a S4 class object - having its own plots and functionalities:
class(garchfit1)
## [1] "uGARCHfit"
## attr(,"package")
## [1] "rugarch"
# Also see the output for the garchfit1:
garchfit1
##
## *---------------------------------*
## * GARCH Model Fit *
## *---------------------------------*
##
## Conditional Variance Dynamics
## -----------------------------------
## GARCH Model : sGARCH(1,1)
## Mean Model : ARFIMA(1,0,0)
## Distribution : norm
##
## Optimal Parameters
## ------------------------------------
## Estimate Std. Error t value Pr(>|t|)
## mu 0.000814 0.000199 4.08512 0.000044
## ar1 -0.007679 0.017789 -0.43169 0.665969
## omega 0.000003 0.000002 2.28268 0.022449
## alpha1 0.079200 0.009625 8.22890 0.000000
## beta1 0.905468 0.011457 79.02891 0.000000
##
## Robust Standard Errors:
## Estimate Std. Error t value Pr(>|t|)
## mu 0.000814 0.000189 4.30208 0.000017
## ar1 -0.007679 0.016381 -0.46881 0.639203
## omega 0.000003 0.000005 0.65840 0.510279
## alpha1 0.079200 0.023271 3.40330 0.000666
## beta1 0.905468 0.033070 27.38053 0.000000
##
## LogLikelihood : 10303.17
##
## Information Criteria
## ------------------------------------
##
## Akaike -5.8051
## Bayes -5.7964
## Shibata -5.8051
## Hannan-Quinn -5.8020
##
## Weighted Ljung-Box Test on Standardized Residuals
## ------------------------------------
## statistic p-value
## Lag[1] 0.2766 0.598937
## Lag[2*(p+q)+(p+q)-1][2] 3.9758 0.004537
## Lag[4*(p+q)+(p+q)-1][5] 8.5315 0.004620
## d.o.f=1
## H0 : No serial correlation
##
## Weighted Ljung-Box Test on Standardized Squared Residuals
## ------------------------------------
## statistic p-value
## Lag[1] 0.6286 0.4279
## Lag[2*(p+q)+(p+q)-1][5] 1.7194 0.6860
## Lag[4*(p+q)+(p+q)-1][9] 2.2883 0.8679
## d.o.f=2
##
## Weighted ARCH LM Tests
## ------------------------------------
## Statistic Shape Scale P-Value
## ARCH Lag[3] 0.5224 0.500 2.000 0.4698
## ARCH Lag[5] 0.9621 1.440 1.667 0.7442
## ARCH Lag[7] 1.2817 2.315 1.543 0.8639
##
## Nyblom stability test
## ------------------------------------
## Joint Statistic: 20.7971
## Individual Statistics:
## mu 0.07108
## ar1 0.34257
## omega 3.35767
## alpha1 0.14351
## beta1 0.25504
##
## Asymptotic Critical Values (10% 5% 1%)
## Joint Statistic: 1.28 1.47 1.88
## Individual Statistic: 0.35 0.47 0.75
##
## Sign Bias Test
## ------------------------------------
## t-value prob sig
## Sign Bias 0.6911 0.4896
## Negative Sign Bias 0.1826 0.8551
## Positive Sign Bias 1.0489 0.2943
## Joint Effect 4.0888 0.2520
##
##
## Adjusted Pearson Goodness-of-Fit Test:
## ------------------------------------
## group statistic p-value(g-1)
## 1 20 144.2 2.894e-21
## 2 30 205.1 2.036e-28
## 3 40 170.3 2.446e-18
## 4 50 284.6 6.038e-35
##
##
## Elapsed time : 0.7639551
Let’s look at what we can store from the fit:
slotNames(garchfit1)
names(garchfit1@fit)
names(garchfit1@model)
# Use it now as follows:
garchfit1@fit$matcoef # Model coefficients.
# Include it in your paper as follows:
pacman::p_load(xtable)

Table <- xtable(garchfit1@fit$matcoef)

print(Table, type = "latex", comment = FALSE)
# VERY NB: When adding this to your paper, write at the top
# of this code chunk: results = 'asis' Do so always when
# using kable, or xtable, or whichever you are using. If you
# see raw latex code appear in your pdf, this will be because
# you did not add results = 'asis' at the top of the chunk.
# Look where I placed it in your Texevier template...
|        | Estimate   | Std. Error | t value    | Pr(>\|t\|) |
|--------|------------|------------|------------|------------|
| mu     | 0.0008140  | 0.0001993  | 4.0851230  | 0.0000441  |
| ar1    | -0.0076794 | 0.0177893  | -0.4316865 | 0.6659692  |
| omega  | 0.0000035  | 0.0000015  | 2.2826811  | 0.0224492  |
| alpha1 | 0.0791998  | 0.0096246  | 8.2289013  | 0.0000000  |
| beta1  | 0.9054684  | 0.0114574  | 79.0289108 | 0.0000000  |
So that looks pretty neat. But, as you can clearly see, there are many other parameters and measures that we can call from our GARCH model. For a list of available commands, consult the vignette.
persistence(garchfit1)
## [1] 0.9846682
# Persistence is alpha + beta, and it is typically very high and close to 1
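A common way to read this number is as a volatility half-life - the number of days it takes a shock to the conditional variance to decay by half:

half_life <- log(0.5) / log(persistence(garchfit1))
half_life # roughly 45 days for a persistence of about 0.985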
# To view the conditional variance plot, use:
sigma <- sigma(garchfit1) %>% xts_tbl()
colnames(sigma) <- c("date", "sigma")
sigma <- sigma %>% mutate(date = as.Date(date))

gg <-
  ggplot() +
  geom_line(data = Plotdata %>% filter(ReturnType == "Returns_Sqd") %>% select(date, Returns) %>%
              unique %>% mutate(Returns = sqrt(Returns)), aes(x = date, y = Returns)) +
  geom_line(data = sigma, aes(x = date, y = sigma), color = "red", size = 2, alpha = 0.8) +
  # scale_y_continuous(limits = c(0, 0.35)) +
  labs(title = "Comparison: Returns Sigma vs Sigma from Garch",
       subtitle = "Note the smoothing effect of garch, as noise is controlled for.",
       x = "", y = "Comparison of estimated volatility",
       caption = "Source: Fin metrics class | Calculations: Own") +
  fmxdat::theme_fmx(CustomCaption = TRUE)

fmxdat::finplot(gg, y.pct = T, y.pct_acc = 1)
Notice from the above figure - The red line is the noise reduced version of the actual volatility (close to close vol).
As stressed in class, this is the vital insight: the red line + noise = black line. We are not interested in the noise component (η) of course, so should only consider the fundamental volatility component - i.e. the red line (σ).
So…. GARCH (or any other comparable method) is useful for separating volatility into its signal and noise components:
$$\varepsilon_t \approx \sigma_t \cdot \eta_t$$
If we divide out η - we get the volatility signal.
The news impact curve (NIC) traces out the impact on the volatility signal (σ) of unanticipated shocks (ε). For standard GARCH this curve is symmetric (less realistic):
ni <- newsimpact(z = NULL, garchfit1)

plot(ni$zx, ni$zy, ylab = ni$yexpr, xlab = ni$xexpr, type = "l",
     main = "News Impact Curve")
# Comparing this to other GARCH fits allow us insight into
# the model's dynamics... Test vs other models yourself.
To check the fit of the model, we can look at the information criteria:
infocriteria(garchfit1)
##
## Akaike -5.805057
## Bayes -5.796356
## Shibata -5.805060
## Hannan-Quinn -5.801953
# This can now be compared to other garch models to assess
# which provides the best fit to the underlying data.
# To get the model fitted values:
fitval <- fitted(garchfit1)

sigma <- sigma(garchfit1) # Conditional sigma (volatility)

epsilon <- residuals(garchfit1) # Ordinary resids

zt <- residuals(garchfit1, standardize = TRUE) # Standardized resids

# Check the class notes on the definitions of epsilon and zt:
ztbyhand = epsilon/sigma
# And they are exactly the same:
all.equal(zt, ztbyhand)
## [1] TRUE
# Check the class notes for the definition of these residuals
# and why they equate....
For a full list of commands, see the package creator's blog summary.
Rugarch offers a wide variety of plots that we can use to visually get a deeper look at the underlying structure of the second order model behaviour:
# To see all the options, run: plot(garchfit1)
# To see all the plots at once, use:
plot(garchfit1, which = "all")
##
## please wait...calculating quantiles...
Let’s consider a few of these more closely:
plot(garchfit1, which = 1)
plot(garchfit1, which = 3)
# Note for the above plot - the blue line is sigma, gray line
# epsilon (i.e. sigma + noise.): notice the massive
# distorting impact of noise in ordinary epsilon, or squared
# error / S.D. measures: which are typically used for
# proxying vol.
plot(garchfit1, which = 8)
plot(garchfit1, which = 9)
plot(garchfit1, which = 12)
For us to fit other models, or use other distributions, simply amend the code to reflect your choice.
Let’s, e.g., fit a GJR-GARCH model using a student t distribution:
gjrgarch11 = ugarchspec(variance.model = list(model = c("sGARCH","gjrGARCH","eGARCH","fGARCH","apARCH")[2],
                                              garchOrder = c(1, 1)),
                        mean.model = list(armaOrder = c(1, 0), include.mean = TRUE),
                        distribution.model = c("norm", "snorm", "std", "sstd", "ged", "sged", "nig", "ghyp", "jsu")[3])

# Now to fit, I use as.matrix and the data - this way the plot functions we will use later will work.
garchfit2 = ugarchfit(spec = gjrgarch11, data = as.matrix(porteqw))

garchfit2@fit$matcoef %>% kable()
|        | Estimate   | Std. Error | t value    | Pr(>\|t\|) |
|--------|------------|------------|------------|------------|
| mu     | 0.0005582  | 0.0001946  | 2.8688101  | 0.0041202  |
| ar1    | -0.0134461 | 0.0171933  | -0.7820517 | 0.4341842  |
| omega  | 0.0000030  | 0.0000022  | 1.3865777  | 0.1655706  |
| alpha1 | 0.0263767  | 0.0115535  | 2.2830071  | 0.0224299  |
| beta1  | 0.9183325  | 0.0189055  | 48.5749520 | 0.0000000  |
| gamma1 | 0.0830434  | 0.0193102  | 4.3005008  | 0.0000170  |
| shape  | 8.8988235  | 1.2596969  | 7.0642576  | 0.0000000  |
Let’s now compare the two models (garchfit1 and garchfit2) in terms of infocriteria, likelihood levels and also their NI curves.
First, we see that the sign bias test indicates that there is not a significant leverage effect - implying that negative shocks do not tend to cause significantly more persistence than positive shocks. This is quite surprising: typically the test suggests the need for a leverage model, e.g. the GJR or eGARCH. Let's assume it does, and compare the fits of the GARCH(1,1) and the GJR-GARCH(1,1):
signbias(garchfit1)
## t-value prob sig
## Sign Bias 0.6910819 0.4895594
## Negative Sign Bias 0.1825755 0.8551416
## Positive Sign Bias 1.0488836 0.2943032
## Joint Effect 4.0888465 0.2520288
infocriteria(garchfit1)
##
## Akaike -5.805057
## Bayes -5.796356
## Shibata -5.805060
## Hannan-Quinn -5.801953
infocriteria(garchfit2)
##
## Akaike -5.840688
## Bayes -5.828507
## Shibata -5.840696
## Hannan-Quinn -5.836344
# AIC is smaller for GJR, indicating it is better at
# explaining the sample values.
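Since we also mentioned comparing likelihood levels, the log-likelihoods can be pulled out directly (a small sketch using rugarch's likelihood() method):

likelihood(garchfit1)
likelihood(garchfit2)
# The GJR fit has the higher log-likelihood, consistent with its lower information criteria.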
Note that the gamma1 (asymmetry) parameter in the GJR fit summary is positive - implying negative shocks tend to cause higher volatility persistence than positive shocks - and its p-value suggests this effect is significant.
In order to get a visual indication of how residuals impact the persistence of the second order GARCH models, we can use NI curves. Let's fit another widely used model, eGARCH, and compare the three models' NI curves:
egarch11 = ugarchspec(variance.model = list(model = c("sGARCH","gjrGARCH","eGARCH","fGARCH","apARCH")[3],
                                            garchOrder = c(1, 1)),
                      mean.model = list(armaOrder = c(1, 0), include.mean = TRUE),
                      distribution.model = c("norm", "snorm", "std", "sstd", "ged", "sged", "nig", "ghyp", "jsu")[3])

# Now to fit, I use as.matrix and the data - this way the plot functions we will use later will work.
garchfit3 = ugarchfit(spec = egarch11, data = as.matrix(porteqw))
Now let's define and then plot the NICs:
# Just to be fancy (and to give you a nice illustration of
# how to use functions for doing repetitive things...):
NICurveMaker <- function(Fit, Name) {

  NI <- newsimpact(z = NULL, Fit)

  NI <- cbind(NI$zx, NI$zy)

  colnames(NI) <- c(paste0("Epsilon_", Name), paste0("Sigma_", Name))

  NI %>% data.frame() %>% tbl_df()
}

NI1 <- NICurveMaker(garchfit1, "GARCH11")
NI2 <- NICurveMaker(garchfit2, "GJR")
NI3 <- NICurveMaker(garchfit3, "EGARCH")

# Note the Epsilon columns are similar, so let's remove them
# and gather the Sigmas...
NI <- cbind(NI1, NI2, NI3) %>%
  gather(Model, Sigma, starts_with("Sigma")) %>%
  rename(Epsilon = Epsilon_GARCH11) %>%
  select(-Epsilon_GJR, -Epsilon_EGARCH)

ggplot(NI) +
  geom_line(aes(x = Epsilon, y = Sigma, colour = Model)) +
  ggtitle("News Impact Curves") + theme_hc()
See if you can wrap all of the above into a function that compares two or more model inputs. Create a contingency (an if statement) so that the function does not break if only one model is provided - in that case, it should simply plot a single NIC. One possible sketch follows.
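A possible sketch (my own, not the notes' solution): a wrapper that accepts any number of fitted models and plots their NICs, and still works if only a single fit is supplied.

NIC_Plotter <- function(fits, names) {

  # Build one NIC per fit using the NICurveMaker helper defined above:
  NI_list <- purrr::map2(fits, names, NICurveMaker)

  # Keep a single Epsilon column and gather all the Sigma columns:
  NI <- bind_cols(NI_list) %>%
    rename(Epsilon = 1) %>%
    select(Epsilon, starts_with("Sigma")) %>%
    gather(Model, Sigma, -Epsilon)

  ggplot(NI) +
    geom_line(aes(x = Epsilon, y = Sigma, colour = Model)) +
    ggtitle("News Impact Curves")
}

# NIC_Plotter(list(garchfit1), c("GARCH11")) # single model: plots one curve
# NIC_Plotter(list(garchfit1, garchfit2, garchfit3), c("GARCH11", "GJR", "EGARCH"))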
As another illustration of simple automation, let’s create a loop for finding the best fitting GARCH model given our possible selections:
models = 1:4
model.list = list()

for (p in models) {

  garchfit = ugarchspec(
    variance.model = list(model = c("sGARCH","gjrGARCH","eGARCH","apARCH")[p], garchOrder = c(1, 1)),
    mean.model = list(armaOrder = c(1, 0), include.mean = TRUE),
    distribution.model = c("norm", "snorm", "std", "sstd", "ged", "sged", "nig", "ghyp", "jsu")[1])

  garchfit1 = ugarchfit(spec = garchfit, data = as.numeric(porteqw))

  model.list[[p]] = garchfit1
}

names(model.list) <- c("sGARCH","gjrGARCH","eGARCH","apARCH")

fit.mat = sapply(model.list, infocriteria)
# Note: sapply can apply a function (infocriteria here) to a list...

rownames(fit.mat) = rownames(infocriteria(model.list[[1]]))

kable(fit.mat)
|              | sGARCH    | gjrGARCH  | eGARCH    | apARCH    |
|--------------|-----------|-----------|-----------|-----------|
| Akaike       | -5.805057 | -5.814640 | -5.816073 | -5.811956 |
| Bayes        | -5.796356 | -5.804199 | -5.805632 | -5.799775 |
| Shibata      | -5.805060 | -5.814646 | -5.816079 | -5.811964 |
| Hannan-Quinn | -5.801953 | -5.810917 | -5.812349 | -5.807612 |
The table suggests that the eGARCH model performs the best (lowest AIC).
For the apARCH model, we can interpret the parameter gamma as the leverage effect and d as the power parameter. Let's again consider the APARCH mathematical construct to understand these parameters. Consider the APARCH(1,1):
$$\sigma_t^d = \alpha_0 + \sum_{i=1}^{p}\alpha_i\left(|\epsilon_{t-i}| - \gamma_i \epsilon_{t-i}\right)^d + \sum_{j=1}^{q}\beta_j \sigma_{t-j}^d$$
From the equation we note that γ is the leverage parameter, so that if γi > 0 we have leverage and negative returns increase σ more than positive returns.
Also, if d = 2 we have an ordinary GARCH model with leverage (GJR), while if d = 1 we are effectively modelling the conditional standard deviation σ itself. From the output below we note leverage, and the hypothesis that d = 2 cannot be rejected (the estimate plus or minus its robust standard error spans 2).
# Below follows the summary of the apARCH model's output:
model.list[4]
## $apARCH
##
## *---------------------------------*
## * GARCH Model Fit *
## *---------------------------------*
##
## Conditional Variance Dynamics
## -----------------------------------
## GARCH Model : apARCH(1,1)
## Mean Model : ARFIMA(1,0,0)
## Distribution : norm
##
## Optimal Parameters
## ------------------------------------
## Estimate Std. Error t value Pr(>|t|)
## mu 0.000570 0.000202 2.82411 0.004741
## ar1 -0.007810 0.017632 -0.44296 0.657797
## omega 0.000000 0.000000 0.56460 0.572345
## alpha1 0.057397 0.005199 11.03950 0.000000
## beta1 0.905131 0.011123 81.37525 0.000000
## gamma1 0.209380 0.043038 4.86498 0.000001
## delta 2.650023 0.030302 87.45470 0.000000
##
## Robust Standard Errors:
## Estimate Std. Error t value Pr(>|t|)
## mu 0.000570 0.000420 1.357061 0.174762
## ar1 -0.007810 0.022058 -0.354077 0.723281
## omega 0.000000 0.000005 0.033522 0.973258
## alpha1 0.057397 0.086241 0.665541 0.505704
## beta1 0.905131 0.019851 45.597007 0.000000
## gamma1 0.209380 0.067005 3.124848 0.001779
## delta 2.650023 1.084894 2.442657 0.014580
##
## LogLikelihood : 10317.41
##
## Information Criteria
## ------------------------------------
##
## Akaike -5.8120
## Bayes -5.7998
## Shibata -5.8120
## Hannan-Quinn -5.8076
##
## Weighted Ljung-Box Test on Standardized Residuals
## ------------------------------------
## statistic p-value
## Lag[1] 0.4761 0.490177
## Lag[2*(p+q)+(p+q)-1][2] 3.9131 0.005207
## Lag[4*(p+q)+(p+q)-1][5] 8.6784 0.004011
## d.o.f=1
## H0 : No serial correlation
##
## Weighted Ljung-Box Test on Standardized Squared Residuals
## ------------------------------------
## statistic p-value
## Lag[1] 0.2717 0.6022
## Lag[2*(p+q)+(p+q)-1][5] 1.0615 0.8456
## Lag[4*(p+q)+(p+q)-1][9] 1.7799 0.9296
## d.o.f=2
##
## Weighted ARCH LM Tests
## ------------------------------------
## Statistic Shape Scale P-Value
## ARCH Lag[3] 0.8303 0.500 2.000 0.3622
## ARCH Lag[5] 1.2723 1.440 1.667 0.6542
## ARCH Lag[7] 1.7162 2.315 1.543 0.7771
##
## Nyblom stability test
## ------------------------------------
## Joint Statistic: 1048.452
## Individual Statistics:
## mu 0.02631
## ar1 0.31313
## omega 238.55753
## alpha1 0.13321
## beta1 0.26171
## gamma1 0.02734
## delta 0.35242
##
## Asymptotic Critical Values (10% 5% 1%)
## Joint Statistic: 1.69 1.9 2.35
## Individual Statistic: 0.35 0.47 0.75
##
## Sign Bias Test
## ------------------------------------
## t-value prob sig
## Sign Bias 1.0612 0.2887
## Negative Sign Bias 1.1294 0.2588
## Positive Sign Bias 0.3909 0.6959
## Joint Effect 2.5601 0.4645
##
##
## Adjusted Pearson Goodness-of-Fit Test:
## ------------------------------------
## group statistic p-value(g-1)
## 1 20 144.6 2.430e-21
## 2 30 254.7 6.346e-38
## 3 40 265.7 1.623e-35
## 4 50 253.4 2.379e-29
##
##
## Elapsed time : 2.796522
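To put a rough number on the eyeball test of d = 2 above, here is a small sketch (assuming the robust coefficient matrix is stored in the fit's @fit$robust.matcoef slot, as it is for the single-model fits above):

ap_coefs <- model.list[[4]]@fit$robust.matcoef
z_delta <- (ap_coefs["delta", 1] - 2) / ap_coefs["delta", 2] # (estimate - 2) / robust S.E.
z_delta # about 0.6, so H0: d = 2 is not rejected at conventional levels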
Let's compare different measures of volatility for the SP500 to illustrate the different ways in which we can model sigma.
SP <- fmxdat::SP500 %>% mutate(Returns = SP500/lag(SP500) - 1) %>%
  slice(-1) %>% select(-SP500) %>%
  mutate(SampleSD = sqrt(Returns^2))

SP %>%
  mutate(Absret = abs(Returns)) %>%
  gather(Type, Valu, -date) %>%
  ggplot() +
  geom_line(aes(date, Valu, color = Type)) +
  facet_wrap(~Type) +
  fmxdat::theme_fmx() + labs(color = F)
SP %>%
  mutate(Constant_var = sd(Returns)) %>%
  ggplot() +
  geom_line(aes(date, SampleSD), color = "steelblue") +
  geom_hline(aes(date, yintercept = mean(Constant_var)), color = "red",
             alpha = 0.8, size = 2) +
  fmxdat::theme_fmx() +
  labs(title = "Sample Var vs Constant Var")
Using a simple rolling standard deviation calculation already improves our estimate (it is less noisy than the daily sample SD, yet more adaptive than the flat unconditional SD):
pacman::p_load(RcppRoll) # Great for fast estimation

back = 100

SP %>% mutate(Constant_var = sd(Returns)) %>%
  mutate(Roller = roll_sd(Returns, n = back, fill = NA)) %>%
  ggplot() +
  geom_line(aes(date, SampleSD), color = "steelblue") +
  geom_hline(aes(date, yintercept = mean(Constant_var)), color = "red",
             alpha = 0.8, size = 2) +
  geom_line(aes(date, y = Roller), color = "darkgreen", alpha = 0.8,
            size = 2) +
  fmxdat::theme_fmx() +
  labs(title = "Sample Var vs Constant Var")
A more adaptive version is the exponentially weighted moving average (EWMA):
$$\hat{y}_t = \alpha y_{t-1} + (1-\alpha)\hat{y}_{t-1}$$
But this requires fitting an ETS(A,N,N) innovation state space model (Error, Trend, Seasonal - i.e. an exponentially smoothed model) of the form discussed here, in which the innovations are modelled explicitly.
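To make the recursion concrete, here is a minimal hand-rolled EWMA of squared returns (the smoothing weight alpha = 0.06, i.e. a 0.94 decay factor, is the usual RiskMetrics choice and an assumption on my part, not from the notes):

ewma_sd <- function(x, alpha = 0.06) {
  v <- numeric(length(x))
  v[1] <- var(x) # initialise at the unconditional variance
  for (t in 2:length(x)) v[t] <- alpha * x[t - 1]^2 + (1 - alpha) * v[t - 1]
  sqrt(v)
}
# head(ewma_sd(SP$Returns))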
Below we fit both an additive and a multiplicative form.
pacman::p_load(forecast)

ets_sd <- ets(SP %>% tbl_xts(cols_to_xts = SampleSD), model = "ANN")

ets_sd_mult <- ets(1e-8 + SP %>% tbl_xts(cols_to_xts = SampleSD), model = "MNN") # Adding a small increment for stability

left_join(
  SP,
  tibble(date = unique(SP$date), ETS = as.numeric(ets_sd$fitted)), # Make it left-join-able
  by = "date"
) %>%
  left_join(.,
            tibble(date = unique(SP$date), ETS_Mult = as.numeric(ets_sd_mult$fitted)), # Make it left-join-able
            by = "date"
  ) %>%
  mutate(Constant_var = sd(Returns)) %>%
  mutate(Roller = roll_sd(Returns, n = back, fill = NA)) %>%
  ggplot() +
  geom_line(aes(date, SampleSD), color = "steelblue") +
  geom_hline(aes(date, yintercept = mean(Constant_var)), color = "red", alpha = 0.5, size = 1.5) +
  geom_line(aes(date, y = Roller), color = "darkgreen", alpha = 0.5, size = 1.5) +
  geom_line(aes(date, y = ETS), color = "darkblue", alpha = 0.5, size = 1.5) +
  geom_line(aes(date, y = ETS_Mult), color = "darkorange", alpha = 0.5, size = 1.5) +
  fmxdat::theme_fmx() +
  labs(title = "Sample Var vs Constant Var vs ETS (A,N,N)")
Notice the ETS forms fit the data more closely than the rolling SD (which we could of course make arbitrarily close to the data by shrinking the rolling window).
Right, with our knowledge of fitting GARCH models, let's add a GARCH-based estimate to the figure above - this time using fGarch for good measure.
xtsret <- SP %>% tbl_xts(., cols_to_xts = Returns)

garch_fit <-
  fGarch::garchFit(formula = ~ arma(1, 1) + aparch(1, 1), data = xtsret, trace = FALSE)

std_t <- garch_fit@sigma.t

left_join(
  SP,
  tibble(date = unique(SP$date), GARCH = std_t), # Make it left-join-able
  by = "date"
) %>%
  left_join(
    .,
    tibble(date = unique(SP$date), ETS = as.numeric(ets_sd$fitted)), # Make it left-join-able
    by = "date"
  ) %>%
  left_join(.,
            tibble(date = unique(SP$date), ETS_Mult = as.numeric(ets_sd_mult$fitted)), # Make it left-join-able
            by = "date"
  ) %>%
  mutate(Constant_var = sd(Returns)) %>%
  mutate(Roller = roll_sd(Returns, n = back, fill = NA)) %>%
  ggplot() +
  geom_line(aes(date, SampleSD), color = "steelblue") +
  geom_hline(aes(date, yintercept = mean(Constant_var)), color = "red", alpha = 0.5, size = 1.5) +
  geom_line(aes(date, y = Roller), color = "darkgreen", alpha = 0.5, size = 1.5) +
  geom_line(aes(date, y = ETS), color = "darkblue", alpha = 0.5, size = 1.5) +
  geom_line(aes(date, y = ETS_Mult), color = "darkorange", alpha = 0.5, size = 1.5) +
  geom_line(aes(date, y = GARCH), color = "red", alpha = 0.9, size = 1.5) +
  fmxdat::theme_fmx() +
  labs(title = "Sample Var vs Constant Var vs ETS (A,N,N) vs GARCH")
A final type of volatility estimator that we can use is stochastic volatility, where vol is treated as a latent (unobservable) variable.
$$w_t = \exp\left(\tfrac{h_t}{2}\right) z_t$$
$$h_t - \bar{h} = \phi(h_{t-1} - \bar{h}) + \mu_t$$
so that…
$$w_t = \sigma_t z_t$$
$$\log(\sigma_t^2) = \bar{h} + \phi\left(\log(\sigma_{t-1}^2) - \bar{h}\right) + \mu_t$$
To estimate the above, let’s use the stochvol package:
pacman::p_load(stochvol)

xtsret <- SP %>% tbl_xts(., cols_to_xts = Returns)

sv <- svsample(xtsret - mean(xtsret), priormu = c(0, 100), priornu = c(4, 100))

stochvol_t <- sv$summary$sd[, 1]
# And for the last time:
left_join(
  SP,
  tibble(date = unique(SP$date), SV = sv$summary$sd[, 1]), # Make it left-join-able
  by = "date"
) %>%
  left_join(
    .,
    tibble(date = unique(SP$date), GARCH = std_t), # Make it left-join-able
    by = "date"
  ) %>%
  left_join(
    .,
    tibble(date = unique(SP$date), ETS = as.numeric(ets_sd$fitted)), # Make it left-join-able
    by = "date"
  ) %>%
  left_join(.,
            tibble(date = unique(SP$date), ETS_Mult = as.numeric(ets_sd_mult$fitted)), # Make it left-join-able
            by = "date"
  ) %>%
  mutate(Constant_var = sd(Returns)) %>%
  mutate(Roller = roll_sd(Returns, n = back, fill = NA)) %>%
  ggplot() +
  geom_line(aes(date, SampleSD), color = "steelblue") +
  geom_hline(aes(date, yintercept = mean(Constant_var)), color = "red", alpha = 0.5, size = 1.5) +
  geom_line(aes(date, y = Roller), color = "darkgreen", alpha = 0.5, size = 1.5) +
  geom_line(aes(date, y = ETS), color = "darkblue", alpha = 0.5, size = 1.5) +
  geom_line(aes(date, y = ETS_Mult), color = "darkorange", alpha = 0.5, size = 1.5) +
  geom_line(aes(date, y = GARCH), color = "red", alpha = 0.9, size = 1.5) +
  geom_line(aes(date, y = SV), color = "black", alpha = 0.9, size = 1.5) +
  fmxdat::theme_fmx() +
  labs(title = "Sample Var vs Constant Var vs ETS (A,N,N) vs GARCH vs SV")
Let’s now look at ways that we can use the second order persistence model to forecast volatility into the future, and also compare in-sample forecasting performance of GARCH models.
Let’s use our GARCH models to forecast volatility. This is particularly important for forward estimating expected volatility for options pricing, or for predicting forward VaR estimates. rugarch offers a robust method of doing so. Let’s compare the 10 period ahead volatility forecast of our three models:
::p_load("xtable")
pacman
.1 <- ugarchforecast(garchfit1, n.ahead = 10)
garchf
.2 <- ugarchforecast(garchfit2, n.ahead = 10)
garchf
.3 <- ugarchforecast(garchfit3, n.ahead = 10)
garchf
slotNames(garchf.1)
## [1] "forecast" "model"
names(garchf.1@forecast)
## [1] "n.ahead" "N" "n.start" "n.roll" "sigmaFor" "seriesFor"
names(garchf.1@model)
## [1] "modelinc" "modeldesc" "modeldata" "pars" "start.pars"
## [6] "fixed.pars" "maxOrder" "pos.matrix" "fmodel" "pidx"
## [11] "n.start"
# As these models are again S4 classes with their own given
# structures, let's use the built-in plotting
# functionalities:
plot(garchf.1, which = 1)
plot(garchf.1, which = 3)
# Let's create a table of forecast values:
# First: Save the forecast sigmas:
f1 <- as.data.frame(sigma(garchf.1))
f2 <- as.data.frame(sigma(garchf.2))
f3 <- as.data.frame(sigma(garchf.3))

# Save the fitted forecasts:
series1 <- fitted(garchf.1)
series2 <- fitted(garchf.2)
series3 <- fitted(garchf.3)

sigmaf <- cbind(f1, f2, f3)
fitf <- cbind(series1, series2, series3)
vol <- sigmaf^2
colnames(vol) = c("garch", "gjrgarch", "egarch")

tv = xtable(vol)
# For your Latex paper format: print(tv, type = 'latex', file = 'forc.tex')

# in HTML:
kable(vol)
|      | garch     | gjrgarch  | egarch    |
|------|-----------|-----------|-----------|
| T+1  | 0.0001769 | 0.0001671 | 0.0001866 |
| T+2  | 0.0001776 | 0.0001679 | 0.0001864 |
| T+3  | 0.0001784 | 0.0001686 | 0.0001863 |
| T+4  | 0.0001791 | 0.0001693 | 0.0001861 |
| T+5  | 0.0001799 | 0.0001700 | 0.0001860 |
| T+6  | 0.0001806 | 0.0001707 | 0.0001859 |
| T+7  | 0.0001813 | 0.0001714 | 0.0001858 |
| T+8  | 0.0001820 | 0.0001721 | 0.0001856 |
| T+9  | 0.0001828 | 0.0001728 | 0.0001855 |
| T+10 | 0.0001834 | 0.0001734 | 0.0001854 |
Now we know that we can use the values in the above table of vol forecasts to forward estimate the VaR’s we fitted in the previous practical:
# as series1 is the estimate of forward mean. and f1 sigma
# fcast from garch11, we can use:
var95 <- series1 + f1 * qnorm(0.05)

# Hence we expect, with 95% surety, our EW portfolio's daily return not
# to dip below roughly -2% for each day t to t+10.
To now compute the 10-day VaR forecast (and estimate the 5% most extreme downside for the next 10 days for the equal-weighted portfolio), we aggregate the forecast variances - i.e. take the square root of the sum of the squared sigma forecasts:
sigma10d <- sqrt(sum(f1^2))
sigma10d

## [1] 0.04245113

var9510d <- 10 * series1[1] + sigma10d * qnorm(0.05)
var9510d

## [1] -0.0662118

# Thus we expect, with 95% surety, the 10-day return of the portfolio
# not to dip below roughly -6.6%
In order to backtest the accuracy of a model in terms of its ability to provide accurate forecasts, we need to re-estimate the model each period (or every few periods, defined by the modeller) to capture the impact of data changes on parameter fits. rugarch offers a procedure, ugarchroll which does just that - it allows for the generation of 1-step ahead forecasts and the periodic re-estimation of model parameters. We can either choose a moving window (with fixed window size) or an expanding window (where the first period remains fixed and the sample used for parameter estimation grows each period).
It also provides an incredibly useful means of assessing the ability of the chosen GARCH model to produce accurate VaR estimates. It reports the expected and actual exceedances of the VaR estimates, and calculates the Kupiec unconditional coverage ratio and the Christoffersen conditional coverage ratio (Christoffersen, Hahn, and Inoue 2001), which can be used to test the H0 of correct exceedance levels. We did so for SA economic sectors and provide a detailed account of these measures in Katzke and Garbers (2016).
pacman::p_load(parallel)

cl = makePSOCKcluster(10)

spec = ugarchspec(variance.model = list(model = 'gjrGARCH', garchOrder = c(1, 1)),
                  mean.model = list(armaOrder = c(1, 0), include.mean = TRUE),
                  distribution.model = 'norm')
# Thus the model spec is an AR(1)-gjrGARCH(1,1), with a normal distribution

roll = ugarchroll(spec,
                  porteqw,
                  forecast.length = 1000,
                  refit.every = 50,
                  refit.window = 'moving',
                  window.size = 1200,
                  calculate.VaR = TRUE,
                  VaR.alpha = c(0.01, 0.05),
                  keep.coef = TRUE,
                  cluster = cl)

# For this, only 1-step ahead can be done automatically.
show(roll)
##
## *-------------------------------------*
## * GARCH Roll *
## *-------------------------------------*
## No.Refits : 20
## Refit Horizon : 50
## No.Forecasts : 1000
## GARCH Model : gjrGARCH(1,1)
## Distribution : norm
##
## Forecast Density:
## Mu Sigma Skew Shape Shape(GIG) Realized
## 2015-11-06 0.0010 0.0104 0 0 0 -0.0174
## 2015-11-09 0.0017 0.0116 0 0 0 -0.0033
## 2015-11-10 0.0008 0.0114 0 0 0 -0.0161
## 2015-11-11 0.0016 0.0123 0 0 0 -0.0073
## 2015-11-12 0.0011 0.0122 0 0 0 -0.0189
## 2015-11-13 0.0018 0.0134 0 0 0 -0.0215
##
## ..........................
## Mu Sigma Skew Shape Shape(GIG) Realized
## 2019-08-29 3e-04 0.0135 0 0 0 -0.0101
## 2019-08-30 5e-04 0.0135 0 0 0 0.0318
## 2019-09-02 2e-04 0.0143 0 0 0 -0.0051
## 2019-09-03 4e-04 0.0139 0 0 0 0.0085
## 2019-09-04 3e-04 0.0136 0 0 0 0.0048
## 2019-09-05 3e-04 0.0132 0 0 0 0.0273
##
## Elapsed: 27.67296 secs
report(roll, type="VaR", VaR.alpha = 0.01, conf.level = 0.95)
## VaR Backtest Report
## ===========================================
## Model: gjrGARCH-norm
## Backtest Length: 1000
## Data:
##
## ==========================================
## alpha: 1%
## Expected Exceed: 10
## Actual VaR Exceed: 19
## Actual %: 1.9%
##
## Unconditional Coverage (Kupiec)
## Null-Hypothesis: Correct Exceedances
## LR.uc Statistic: 6.473
## LR.uc Critical: 3.841
## LR.uc p-value: 0.011
## Reject Null: YES
##
## Conditional Coverage (Christoffersen)
## Null-Hypothesis: Correct Exceedances and
## Independence of Failures
## LR.cc Statistic: 7.276
## LR.cc Critical: 5.991
## LR.cc p-value: 0.026
## Reject Null: YES
report(roll, type="fpm")
##
## GARCH Roll Mean Forecast Performance Measures
## ---------------------------------------------
## Model : gjrGARCH
## No.Refits : 20
## No.Forecasts: 1000
##
## Stats
## MSE 0.0002511
## MAE 0.0117000
## DAC 0.4870000
To add regressors to the mean and variance equations, use the following in your model specifications:
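A hedged sketch of such a specification (external.regressors is the relevant ugarchspec argument; the regressor matrix xreg used here is hypothetical and must be a numeric matrix with the same number of rows as the return series):

spec_with_x <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1),
                        external.regressors = xreg),
  mean.model = list(armaOrder = c(1, 0), include.mean = TRUE,
                    external.regressors = xreg),
  distribution.model = "norm")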
Bauwens, Luc, Sébastien Laurent, and Jeroen VK Rombouts. 2006. “Multivariate Garch Models: A Survey.” Journal of Applied Econometrics 21 (1): 79–109.
Christoffersen, Peter, Jinyong Hahn, and Atsushi Inoue. 2001. “Testing and Comparing Value-at-Risk Measures.” Journal of Empirical Finance 8 (3): 325–42.
Hentschel, Ludger. 1995. “All in the Family Nesting Symmetric and Asymmetric Garch Models.” Journal of Financial Economics 39 (1): 71–104.
Katzke, Nico, and Chris Garbers. 2016. “Do Long Memory and Asymmetries Matter When Assessing Downside Return Risk?” Investment Analyst Journal.
Ruppert, David. 2011. Statistics and Data Analysis for Financial Engineering. Springer.
Silvennoinen, Annastiina, and Timo Teräsvirta. 2009. “Multivariate Garch Models.” In Handbook of Financial Time Series, 201–29. Springer.
Tsay, Ruey S. 2012. An Introduction to Analysis of Financial Data with R. Wiley Publishing.
———. 2014. An Introduction to Analysis of Financial Data with R. John Wiley & Sons.
In the ugarchspec, include archm=TRUE as: mean.model = list(armaOrder = c(1, 0), include.mean = TRUE, archm=TRUE)↩︎