In this practical we will calculate Value-at-Risk (VaR) and Expected Shortfall (ES), as well as several other downside risk measures, in R. We will do this for a portfolio of assets, so as to provide useful portfolio analyses to investors regarding:
Their possible risk and return profiles
Their downside potential (the capacity to lose capital)
We will be making use of several packages to do calculations and make forward predictions, including PerformanceAnalytics which we used earlier.
We will construct equally weighted and randomly weighted portfolios, and look at constructing portfolio return and risk measures.
We will end by looking at VaR and ES measures.
Let’s again load the daily TRI data set from an earlier practical.
library(rmsfuns)
pacman::p_load("tidyr", "tbl2xts", "devtools", "lubridate", "readr", "PerformanceAnalytics", "ggplot2", "dplyr")

dailydata <- fmxdat::findata
Let’s remove the NAs using the na.locf function from practical 2, and calculate daily returns using the TTR package’s ROC call (ROC can compute either continuous (log) or discrete (simple) returns; we pick discrete below). I introduce this package here as it does not require the data to be xts, and it also provides several other technical wrapper functions that might prove useful and that you can explore.
pacman::p_load("TTR")

dailydata <-
  dailydata %>% arrange(Date) %>%
  mutate(across(.cols = -Date, .fns = ~TTR::ROC(., type = c("continuous", "discrete")[2]))) %>%
  # Equivalent to: mutate_at(.vars = vars(-Date), ~./lag(.)-1)
  # Continuous (log) equivalent: mutate_at(.vars = vars(-Date), ~(log(.)-log(lag(.))))
  # Pad NA's back max 5 days:
  mutate_at(.vars = vars(-Date), ~na.locf(., na.rm = F, maxgap = 5)) %>%
  filter(Date > first(Date))

# Let's not waste our time - remove spaces in column names!
colnames(dailydata) <- gsub("JSE\\.", "", colnames(dailydata))
colnames(dailydata) <- gsub("\\.Close", "", colnames(dailydata))
To get a table of statistics of this data, we could use PA’s statistics table:
tablestats <-
  dailydata %>% tbl_xts() %>%
  table.Stats(., ci = 0.95, digits = 3)

print(tablestats[, 1:5])
## ABG BVT CPI FSR NED
## Observations 3568.000 3568.000 3568.000 3568.000 3568.000
## NAs 0.000 0.000 0.000 0.000 0.000
## Minimum -0.145 -0.096 -0.126 -0.148 -0.105
## Quartile 1 -0.010 -0.009 -0.007 -0.010 -0.010
## Median 0.000 0.000 0.000 0.000 0.000
## Arithmetic Mean 0.000 0.001 0.001 0.001 0.000
## Geometric Mean 0.000 0.001 0.001 0.000 0.000
## Quartile 3 0.010 0.010 0.010 0.012 0.010
## Maximum 0.119 0.105 0.119 0.130 0.126
## SE Mean 0.000 0.000 0.000 0.000 0.000
## LCL Mean (0.95) 0.000 0.000 0.001 0.000 0.000
## UCL Mean (0.95) 0.001 0.001 0.002 0.001 0.001
## Variance 0.000 0.000 0.000 0.000 0.000
## Stdev 0.019 0.017 0.018 0.020 0.018
## Skewness 0.146 0.094 0.009 0.009 0.108
## Kurtosis 3.399 2.288 4.997 3.184 2.815
As a side note - look at the incredibly high levels of kurtosis (fat tails) of CPI and ABG in particular (a normal distribution has a raw kurtosis of three, i.e. an excess kurtosis of zero).
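To check such a figure yourself, you can compute the kurtosis of a single series directly. A quick sketch (note my assumption that PerformanceAnalytics' kurtosis function reports excess kurtosis by default, with method = "moment" giving the raw figure):

# Quick check on a single series (column names follow the renaming above):
PerformanceAnalytics::kurtosis(dailydata$FSR, na.rm = TRUE, method = "moment") # raw kurtosis (normal = 3)
PerformanceAnalytics::kurtosis(dailydata$FSR, na.rm = TRUE)                    # excess kurtosis (normal = 0)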
PerformanceAnalytics also offers a means of cleaning the data using Boudt’s method, which is specifically designed to avoid some of the cleaning pitfalls pertaining to portfolio construction and risk analysis. The technique reduces the magnitude, but preserves the direction, of observations that exceed a $1-\alpha$ risk threshold (see Boudt, Peterson & Croux (2008) for more details).
pacman::p_load("tbl2xts")
# install:
pacman::p_load("DEoptimR")
pacman::p_load("robustbase")

data(managers)
rtnc <-
  Return.clean(managers[, 1:4],
               method = c("none", "boudt", "geltner")[2], alpha = 0.01)
Below we will do a proper example of calculating portfolio returns. We will have a weight vector according to which the portfolio needs to be rebalanced at periodic dates. We will compare the performance of a randomly weighted portfolio and that of an equally weighted portfolio.
We will then calculate the portfolio values and positions at any given point in time.
Be very mindful when doing this calculation: portfolio returns change the positions in a portfolio over time, implying that before each rebalancing you need to calculate the position in each stock - and then decide whether to buy or sell some of the stock to meet the new weight.
Mathematically, this implies calculating, each day, what the new portfolio position is given each stock’s daily return. The sketch below illustrates this weight drift.
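A minimal sketch of the drift, using a hypothetical two-stock portfolio with made-up returns (the names and numbers are illustrative only):

# Hypothetical two-stock example of weight drift between rebalances:
w0  <- c(StockA = 0.5, StockB = 0.5)    # target weights set at the last rebalance
r_d <- c(StockA = 0.02, StockB = -0.01) # one day's returns (made-up)

# Each position grows by its own return; renormalise to get the drifted weights:
w1 <- w0 * (1 + r_d) / sum(w0 * (1 + r_d))
w1
# StockA now exceeds 50% - at the next rebalance we sell it down to target.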
# install.packages("rportfolios")
library(rportfolios)

dailydata <- fmxdat::findata

dailydata.subset <-
  dailydata %>%
  gather(Stocks, Px, -Date) %>%
  arrange(Date) %>%
  group_by(Stocks) %>%
  mutate(Returns = Px/lag(Px) - 1) %>% ungroup() %>% filter(Date > first(Date)) %>%
  select(-Px)
# Let's assume the portfolio rebalances each January and July.

# First, let's save the exact rebalance dates, and store the random weight and date information to be used later.
# Below is a very nice way to save months and years: let's rebalance at months 1 and 7...

RebMonths <- c(1, 7) # Make a parameter that can easily be changed later.

RandomWeights <-
  dailydata.subset %>%
  mutate(Months = as.numeric(format(Date, format = "%m")),
         YearMonths = as.numeric(format(Date, format = "%Y%m"))) %>%
  filter(Months %in% RebMonths) %>%
  group_by(YearMonths, Months, Stocks) %>% filter(Date == last(Date)) %>% ungroup()

# Now let's create a column with random weights assigned to each stock, conforming to the following parameters:
# They have to sum to 1...
# Let's add constraints too - maximum exposure to a single stock of 20% above its equal weight:

N_Stocks <- length(unique(RandomWeights$Stocks))

Max_Exposure <- (1/N_Stocks) * 1.20

# Minimum exposure is, say, 2%:
Min_Exposure <- 0.02

# Now, to append the weight vector, let's use the random.bounded function from rportfolios:

RandomWeights_adj <-
  bind_cols(RandomWeights %>% arrange(Date),
            RandomWeights %>% group_by(Date) %>%
              do( Randweights = random.bounded(n = nrow(.),
                                               x.t = 1, # Full investment...
                                               x.l = rep(Min_Exposure, nrow(.)), # Lower bound
                                               x.u = rep(Max_Exposure, nrow(.)), # Upper bound
                                               max.iter = 1000) ) %>%
              ungroup() %>% unnest(Randweights) %>% select(-Date)
  )

# Sanity check: stop if the weights do not sum to 1 at each date...
if( RandomWeights_adj %>% group_by(Date) %>%
    summarise(Fully_Invested = sum(Randweights)) %>%
    filter(Fully_Invested > 1.000001 | Fully_Invested < 0.9999999) %>% nrow() > 0 ) stop("\n=============\n Ooops! \nWeights do not sum to 1... Please check!\n===========\n")

# Create equal weight portfolios as well:
RandomWeights_adj <-
  RandomWeights_adj %>%
  group_by(Date) %>%
  mutate(EqualWeights = 1/n()) %>%
  ungroup() %>% select(-Months, -YearMonths)

# Right, so now we have equal and random weights that we can use at the rebalancing dates: January and July.
The code below shows how we can create random portfolios using our returns above, as well as our random and equal weights.
When doing portfolio return calculations, please use rmsfuns’ Safe_Return.portfolio function rather than PerformanceAnalytics’ Return.Portfolio function, which has a few nuances that might (without throwing an error) give you very wrong results.
I wrote a gist to show why we need to use the safe version for portfolio return calculations - it also confirms the calculation with a by-hand example.
pacman::p_load("PerformanceAnalytics")

# Now we use the Safe_Return.portfolio function from rmsfuns.
# Note, as with most PA functions, the inputs are xts and wide...
# Also, let's assume you are investing R1000 at the start:
Fund_Size_at_Start <- 1000

Rand_weights <-
  RandomWeights_adj %>% select(Date, Stocks, Randweights) %>% spread(Stocks, Randweights) %>% tbl_xts()

EW_weights <-
  RandomWeights_adj %>% select(Date, Stocks, EqualWeights) %>% spread(Stocks, EqualWeights) %>% tbl_xts()

df_Returns <-
  dailydata.subset %>% spread(Stocks, Returns)

df_Returns[is.na(df_Returns)] <- 0
xts_df_Returns <- df_Returns %>% tbl_xts()

Rand_RetPort <-
  rmsfuns::Safe_Return.portfolio(xts_df_Returns,
                                 weights = Rand_weights, lag_weights = TRUE,
                                 verbose = TRUE, contribution = TRUE,
                                 value = Fund_Size_at_Start, geometric = TRUE)

EW_RetPort <-
  rmsfuns::Safe_Return.portfolio(xts_df_Returns,
                                 weights = EW_weights, lag_weights = TRUE,
                                 verbose = TRUE, contribution = TRUE,
                                 value = Fund_Size_at_Start, geometric = TRUE)
# Clean and save portfolio returns and weights:
Rand_Contribution <-
  Rand_RetPort$"contribution" %>% xts_tbl()

Rand_BPWeight <-
  Rand_RetPort$"BOP.Weight" %>% xts_tbl()

Rand_BPValue <-
  Rand_RetPort$"BOP.Value" %>% xts_tbl()

# Clean and save portfolio returns and weights:
EW_Contribution <-
  EW_RetPort$"contribution" %>% xts_tbl()

EW_BPWeight <-
  EW_RetPort$"BOP.Weight" %>% xts_tbl()

EW_BPValue <-
  EW_RetPort$"BOP.Value" %>% xts_tbl()

names(Rand_Contribution) <- c("date", names(Rand_RetPort$"contribution"))
names(Rand_BPWeight) <- c("date", names(Rand_RetPort$"BOP.Weight"))
names(Rand_BPValue) <- c("date", names(Rand_RetPort$"BOP.Value"))
names(EW_Contribution) <- c("date", names(Rand_RetPort$"contribution"))
names(EW_BPWeight) <- c("date", names(Rand_RetPort$"BOP.Weight"))
names(EW_BPValue) <- c("date", names(Rand_RetPort$"BOP.Value"))
# Look at what these data.frames each convey - incredible right?
# Let's bind all of these together now:

df_port_return_Random <-
  left_join(dailydata.subset %>% rename("date" = Date),
            Rand_BPWeight %>% gather(Stocks, weight, -date),
            by = c("date", "Stocks")) %>%
  left_join(.,
            Rand_BPValue %>% gather(Stocks, value_held, -date),
            by = c("date", "Stocks")) %>%
  left_join(.,
            Rand_Contribution %>% gather(Stocks, Contribution, -date),
            by = c("date", "Stocks"))

df_port_return_EW <-
  left_join(dailydata.subset %>% rename("date" = Date),
            EW_BPWeight %>% gather(Stocks, weight, -date),
            by = c("date", "Stocks")) %>%
  left_join(.,
            EW_BPValue %>% gather(Stocks, value_held, -date),
            by = c("date", "Stocks")) %>%
  left_join(.,
            EW_Contribution %>% gather(Stocks, Contribution, -date),
            by = c("date", "Stocks"))

# Calculate Portfolio Returns:
df_Portf_Random <-
  df_port_return_Random %>% group_by(date) %>%
  summarise(PortfolioReturn = sum(Returns*weight, na.rm = TRUE)) %>%
  filter(PortfolioReturn != 0)

# Calculate Portfolio Returns:
df_Portf_EW <-
  df_port_return_EW %>% group_by(date) %>%
  summarise(PortfolioReturn = sum(Returns*weight, na.rm = TRUE)) %>%
  filter(PortfolioReturn != 0)
Note the last calculation of the portfolio returns above.
Remember: we cannot simply sum the weighted log returns within a time period. Formally:

$$r_{t,P} = \ln(1 + R_{t,P}) = \ln\Big(1 + \sum_{i=1}^{N} \omega_i R_{i,t}\Big) \neq \sum_{i=1}^{N} \omega_i r_{i,t}$$

Thus, the following should be remembered:
Simple returns provide convenience when summing across assets for a given date.
Log returns provide convenience when summing across time.
Luckily, it is easy to move between simple and compounded returns mathematically.
By definition, simple and log returns can be written interchangeably as:
From log → simple returns: $R = e^{r} - 1$
From simple → log returns: $r = \ln(1 + R)$
Using the above definitions, and the convenience of summing simple returns across assets, we need to:
rewrite the log returns as simple returns (if you started with log returns);
sum the weighted simple returns to get the weighted portfolio return (see the equation above);
write this back into log return form again (if needed) - see the sketch after this list. Note also that if geometric = FALSE, the portfolio return function uses a simple arithmetic chain (summing returns), which is what we want to use now. If geometric = TRUE, it uses a product approach useful for calculating wealth indexes.
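A minimal sketch of these conversion steps, with made-up numbers for three assets on a single day:

# Hypothetical log returns and weights for one day:
log_rets <- c(0.010, -0.020, 0.015)
w        <- c(0.4, 0.4, 0.2)

simple_rets <- exp(log_rets) - 1    # Step 1: log -> simple
port_simple <- sum(w * simple_rets) # Step 2: sum weighted simple returns
port_log    <- log(1 + port_simple) # Step 3: back to log form (if needed)

# The naive weighted sum of log returns differs (slightly, but it compounds):
sum(w * log_rets) - port_log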
Calculating the value of your investment over time requires geometrically chaining our simple returns. Luckily, this is pretty easy:
Cum_Rand <-
  df_Portf_Random %>%
  mutate(cumreturn_Rand = (cumprod(1 + PortfolioReturn))) %>%
  # Start at 1:
  mutate(cumreturn_Rand = cumreturn_Rand / first(cumreturn_Rand)) %>% select(-PortfolioReturn)

Cum_EW <-
  df_Portf_EW %>%
  mutate(cumreturn_EW = (cumprod(1 + PortfolioReturn))) %>%
  mutate(cumreturn_EW = cumreturn_EW / first(cumreturn_EW)) %>% select(-PortfolioReturn)

Cum_Comp <-
  left_join(Cum_Rand, Cum_EW, by = "date") %>% gather(Type, Value, -date)

# Now let's plot the wealth indexes (starting at 1) of the two portfolios:
Cum_Comp %>%
  group_by(Type) %>%
  ggplot() + geom_line(aes(date, Value, color = Type)) +
  theme_bw()

# StackBar of monthly weights (note the stand-out rebalance weights...):
Rand_BPWeight %>% tbl_xts() %>% .[endpoints(., 'months')] %>% chart.StackedBar()
Now let’s compare the riskiness of these two portfolios that we just created.
pacman::p_load("knitr", "gt")

# Random Portfolio
t <-
  table.CalendarReturns(df_Portf_Random %>% tbl_xts(), digits = 1, geometric = TRUE)

Cols_length <- ncol(t)

gt(t/100) %>%
  tab_header(title = glue::glue("Portfolio Calendar Returns: Random")) %>%
  fmt_percent(
    columns = 1:Cols_length,
    decimals = 1
  )
Portfolio Calendar Returns: Random
Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | PortfolioReturn
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2006 | 1.4% | −3.5% | −1.9% | 0.5% | −0.6% | 1.9% | 0.8% | −0.6% | −0.3% | 4.2% | −0.8% | −0.2% | 0.4%
2007 | −0.5% | −0.4% | 0.8% | 0.4% | 0.1% | 0.0% | 0.9% | 2.1% | 0.1% | −1.0% | −0.6% | −2.7% | −0.9%
2008 | −1.4% | −0.5% | −1.7% | 0.3% | 1.0% | −1.4% | 0.5% | −1.5% | 4.1% | 1.7% | 0.4% | 0.5% | 1.9%
2009 | −1.4% | −5.7% | 3.0% | 0.3% | 0.0% | 0.2% | 0.2% | −0.1% | 0.3% | −1.0% | 0.8% | 1.6% | −2.1%
2010 | 0.4% | 0.6% | 0.8% | 0.4% | −0.7% | −2.0% | −0.8% | 0.7% | 1.2% | 0.1% | −1.5% | 0.2% | −0.7%
2011 | −1.0% | −0.1% | 0.4% | 0.7% | 1.4% | 0.4% | −0.6% | 2.6% | 1.2% | −1.1% | 4.6% | −0.5% | 8.1%
2012 | 0.3% | −0.1% | −0.4% | 0.8% | 0.7% | 1.2% | −2.0% | 0.2% | 1.2% | −0.9% | −0.4% | −1.0% | −0.4%
2013 | −0.8% | 1.5% | 0.0% | 1.0% | 0.2% | 1.4% | 1.8% | 0.4% | −1.0% | 0.2% | 0.1% | 0.0% | 4.7%
2014 | 1.3% | 0.8% | −0.3% | −0.1% | −0.4% | 0.8% | 0.3% | −0.9% | 0.1% | 2.7% | −0.3% | 0.9% | 5.0%
2015 | 0.9% | −1.6% | 0.9% | 0.6% | −1.7% | 1.0% | 1.0% | 0.2% | 1.5% | 0.4% | −1.6% | −0.8% | 0.8%
2016 | 4.2% | −1.6% | 1.2% | −1.8% | −0.1% | 1.8% | 0.4% | −1.7% | −1.7% | 2.2% | −1.2% | −0.6% | 0.9%
2017 | 0.5% | −1.1% | −5.6% | −0.3% | −2.5% | 0.4% | −0.5% | 0.3% | 0.7% | −0.3% | −0.5% | 2.0% | −7.0%
2018 | 0.1% | −2.3% | 2.0% | 2.5% | 0.8% | 4.0% | 1.1% | 1.3% | −0.9% | −0.1% | −2.7% | 0.7% | 6.5%
2019 | 1.0% | −0.4% | 1.7% | 0.2% | 0.6% | −0.6% | −1.3% | 3.1% | 2.8% | NA | NA | NA | 7.1%
# EW Portfolio
t2 <-
  table.CalendarReturns(df_Portf_EW %>% tbl_xts(), digits = 1, geometric = TRUE)

gt(t2/100) %>%
  tab_header(title = glue::glue("Portfolio Calendar Returns: EW")) %>%
  fmt_percent(
    columns = 1:Cols_length,
    decimals = 1
  )
Portfolio Calendar Returns: EW
Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | PortfolioReturn
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2006 | 1.3% | −3.5% | −1.8% | 0.3% | −0.5% | 2.0% | 0.7% | −0.6% | −0.3% | 3.9% | −0.8% | −0.1% | 0.5%
2007 | −0.6% | −0.2% | 0.9% | 0.5% | 0.1% | −0.1% | 0.9% | 2.2% | 0.0% | −1.0% | −0.5% | −2.6% | −0.5%
2008 | −1.6% | −0.1% | −2.0% | 0.4% | 0.6% | −1.3% | 0.5% | −1.3% | 4.4% | 1.2% | 0.5% | 0.4% | 1.5%
2009 | −1.9% | −5.6% | 3.1% | 0.2% | 0.1% | 0.2% | 0.2% | 0.0% | 0.5% | −1.2% | 1.0% | 1.5% | −2.3%
2010 | 0.5% | 0.6% | 1.0% | 0.4% | −0.8% | −2.2% | −0.9% | 1.0% | 1.2% | 0.2% | −1.4% | 0.1% | −0.5%
2011 | −1.0% | −0.2% | 0.4% | 0.7% | 1.3% | 0.4% | −0.6% | 2.7% | 1.4% | −1.1% | 4.6% | −0.5% | 8.2%
2012 | 0.4% | 0.0% | −0.3% | 1.0% | 0.7% | 1.2% | −1.7% | 0.3% | 1.0% | −0.8% | −0.2% | −1.0% | 0.5%
2013 | −0.9% | 1.4% | 0.1% | 1.0% | 0.2% | 1.7% | 1.5% | 0.4% | −1.0% | 0.2% | 0.1% | 0.0% | 4.9%
2014 | 1.2% | 0.8% | −0.3% | −0.1% | −0.6% | 0.8% | 0.1% | −1.1% | 0.1% | 2.7% | −0.6% | 0.8% | 3.9%
2015 | 1.0% | −1.6% | 0.7% | 0.5% | −1.7% | 1.2% | 1.0% | 0.3% | 1.4% | 0.5% | −1.6% | −0.7% | 0.9%
2016 | 4.5% | −1.7% | 1.4% | −2.0% | −0.4% | 1.7% | 0.3% | −1.7% | −1.5% | 2.5% | −1.3% | −0.7% | 1.0%
2017 | 0.3% | −1.3% | −5.2% | −0.3% | −2.2% | 0.3% | −0.5% | 0.2% | 0.6% | −0.4% | −0.5% | 2.0% | −6.9%
2018 | 0.8% | −2.2% | 2.0% | 2.5% | 1.0% | 4.0% | 1.3% | 1.3% | −1.0% | 0.3% | −2.7% | 0.7% | 7.9%
2019 | 1.0% | −0.4% | 1.6% | 0.2% | 0.6% | −0.5% | −1.3% | 3.2% | 2.7% | NA | NA | NA | 7.3%
# >>>>>>>>>> NOTE (NBNB):
# Try kable(t, caption = "Random Portfolio Calendar Returns", format = "latex")
# in your pdf document...

tabdownside <-
  table.DownsideRisk(left_join(df_Portf_Random %>% rename("Rand" = PortfolioReturn),
                               df_Portf_EW %>% rename("EW" = PortfolioReturn),
                               by = "date") %>% tbl_xts(.),
                     ci = 0.95, Rf = 0, MAR = 0)

# Suppose I am only interested in specific elements of the table, then use:
tabdownside <- tabdownside[c(1, 5, 7, 8:11), ]

tabdownside %>% data.frame() %>% tibble::rownames_to_column() %>%
  gt() %>%
  tab_header(title = glue::glue("Downside Risk estimates")) %>%
  fmt_percent(
    columns = 2:3,
    decimals = 2
  )
Downside Risk estimates
 | Rand | EW
---|---|---
Semi Deviation | 1.04% | 1.04% |
Downside Deviation (Rf=0%) | 1.01% | 1.01% |
Maximum Drawdown | 46.82% | 47.52% |
Historical VaR (95%) | −2.31% | −2.30% |
Historical ES (95%) | −3.21% | −3.21% |
Modified VaR (95%) | −2.30% | −2.29% |
Modified ES (95%) | −3.50% | −3.45% |
# kable(tabdownside, caption = "Downside Risk estimates")
We now look at which of the stocks have been the highest contributors to our portfolio’s risk.
A naive approach to measuring contribution to risk in a portfolio is to view it from a stand-alone perspective (estimating each asset’s unique risk). This is too simplistic, though, as it ignores the diversification effects that arise when combining imperfectly correlated assets. The toy example below makes this concrete.
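A minimal sketch of the diversification point, with made-up numbers (two assets, 20% volatility each, correlation 0.3, equal weights):

# Toy example: portfolio vol is below the naive weighted sum of stand-alone vols.
w    <- c(0.5, 0.5)
vols <- c(0.20, 0.20)
rho  <- 0.3
Sigma <- diag(vols) %*% matrix(c(1, rho, rho, 1), 2) %*% diag(vols)

sqrt(t(w) %*% Sigma %*% w) # portfolio vol: ~16.1%...
sum(w * vols)              # ...vs 20% from stand-alone aggregation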
We rather look at the Expected Shortfall contribution of each stock in our portfolio to get a more accurate sense of its overall contribution.
This is done by considering the weighted decomposition of the contribution each portfolio element makes to the standard deviation of the whole portfolio. This is, effectively, the partial derivative of the portfolio standard deviation with respect to each stock’s weight.
The contribution can be negative - which would imply the stock has a diversifying effect in the portfolio.
Mathematically, using the SD approach, we are doing the following (with $\rho_i$ the Marginal Contribution to Risk, MCR):

$$R_P = \sum_j \omega_j r_j$$
$$\sigma = \sqrt{\omega' \Sigma \omega}$$
$$\frac{\partial \sigma}{\partial \omega} = \frac{1}{\sigma} \Sigma \omega = \rho$$
$$\rho_i = \frac{1}{\sigma} \sum_j \sigma_{i,j}\, \omega_j \quad \forall i$$

To do this calculation, we need the weighted marginal contributions ($\rho_i$ above) of each stock we had earlier, and we use PerformanceAnalytics’ ETL (Expected Tail Loss) or StdDev functions, both with portfolio_method = "component":
pacman::p_load(PortRisk)

# First get the contributions to overall portfolio return made by each stock:
Rand_Contribution <- Rand_RetPort$"contribution" %>% xts_tbl()

# Notice that this adds up to the portfolio return at each date:
left_join(Rand_Contribution %>% gather(Stocks, RetCont, -date) %>% group_by(date) %>% summarise(Verify_Ret = sum(RetCont)),
          Rand_RetPort$returns %>% xts_tbl, by = "date") %>%
  mutate(Diff = round(Verify_Ret - portfolio.returns, 5))
## # A tibble: 3,548 × 4
## date Verify_Ret portfolio.returns Diff
## <date> <dbl> <dbl> <dbl>
## 1 2006-01-31 0.0137 0.0137 0
## 2 2006-02-01 0.0165 0.0165 0
## 3 2006-02-02 -0.00812 -0.00812 0
## 4 2006-02-03 -0.0201 -0.0201 0
## 5 2006-02-06 0.0188 0.0188 0
## 6 2006-02-07 -0.00805 -0.00805 0
## 7 2006-02-08 -0.0245 -0.0245 0
## 8 2006-02-09 0.0225 0.0225 0
## 9 2006-02-10 0.00220 0.00220 0
## 10 2006-02-13 -0.0216 -0.0216 0
## # … with 3,538 more rows
## # ℹ Use `print(n = ...)` to see more rows
# Now, let's consider the highest to lowest contributors to portfolio risk using ES (Expected Tail Loss, or ETL):
ES_Risk_Contributors <-
  ETL(Rand_Contribution %>% tbl_xts(), portfolio_method = "component") %>%
  .$pct_contrib_MES %>% data.frame(Risk_Contrib = .) %>%
  tibble::rownames_to_column("Stock") %>% arrange(desc(Risk_Contrib))
print(ES_Risk_Contributors)
## Stock Risk_Contrib
## 1 JSE.FSR.Close 0.16661014
## 2 JSE.SBK.Close 0.13749266
## 3 JSE.CPI.Close 0.13636132
## 4 JSE.RMH.Close 0.13279309
## 5 JSE.SLM.Close 0.12430295
## 6 JSE.ABG.Close 0.12088233
## 7 JSE.BVT.Close 0.10353238
## 8 JSE.NED.Close 0.07802513
# This should be your default calculation, as the SD approach is more simplistic in terms of its risk treatment.

# To calculate the risk contribution for SD, note we are effectively taking the average weights over time:
wts <-
  Rand_RetPort$BOP.Weight %>% xts_tbl %>%
  summarise(across(.cols = -date, .fns = ~mean(., na.rm = T))) %>% gather(Type, wt)

# And using the actual returns:
SD_Risk_Contributors_direct <-
  StdDev(R = dailydata.subset %>% filter(Date >= first(Rand_Contribution$date)) %>%
             tbl_xts(spread_by = Stocks, cols_to_xts = Returns),
         portfolio_method = "component", weights = wts$wt) %>%
  .$pct_contrib_StdDev %>% data.frame(Risk_Contrib = .) %>%
  tibble::rownames_to_column("Stock") %>% arrange(desc(Risk_Contrib))
print(SD_Risk_Contributors_direct)
## Stock Risk_Contrib
## 1 JSE.FSR.Close 0.15503118
## 2 JSE.SBK.Close 0.14757284
## 3 JSE.RMH.Close 0.14579221
## 4 JSE.ABG.Close 0.12850363
## 5 JSE.NED.Close 0.12171724
## 6 JSE.SLM.Close 0.11481075
## 7 JSE.BVT.Close 0.09454841
## 8 JSE.CPI.Close 0.09202374
# We can prove this by doing the above by hand (using the math from before):
Rets  <- dailydata.subset %>% tbl_xts(spread_by = Stocks, cols_to_xts = Returns)
Sigma <- cov(dailydata.subset %>% spread(Stocks, Returns) %>% select(-Date), use = "pairwise.complete.obs")
Wts   <- Rand_RetPort$BOP.Weight %>% xts_tbl %>%
  summarise(across(.cols = -date, .fns = ~mean(., na.rm = T))) %>%
  gather(Type, wt) %>% arrange(Type, colnames(Sigma)) %>% pull(wt)

sigma_p = sqrt((t(Wts) %*% Sigma %*% Wts)[1, 1])

# Marginal contribution to risk (MCTR):
mctr = ((Sigma %*% Wts)/sigma_p)[, 1]
# Conditional contribution to risk (CCTR):
cctr <- Wts * mctr

tibble(stocks = mctr %>% names, mc = cctr) %>% mutate(CCTR = mc / sum(mc)) %>% arrange(desc(CCTR)) %>% pull(CCTR)
## JSE.FSR.Close JSE.SBK.Close JSE.RMH.Close JSE.ABG.Close JSE.NED.Close
## 0.15502401 0.14782755 0.14593043 0.12854079 0.12168223
## JSE.SLM.Close JSE.BVT.Close JSE.CPI.Close
## 0.11463899 0.09443297 0.09192303
# You can also use the PortRisk package's Conditional Contribution to Total Risk (CCTR) approach:
# Again, get the aggregate weights vector:
wts <- Rand_RetPort$BOP.Weight %>% xts_tbl %>%
  summarise(across(.cols = -date, .fns = ~mean(., na.rm = T))) %>% gather(Type, wt)

# Specify stocks in your portfolio corresponding to the weights:
Stx    <- wts$Type
wtssel <- wts$wt

mc <- PortRisk::cctr(Stx,
                     weights = wtssel,
                     start = first(Rand_BPWeight$date),
                     end = last(Rand_BPWeight$date),
                     data = dailydata.subset %>% tbl_xts(spread_by = Stocks, cols_to_xts = Returns))

Risk_Cont <- tibble(stocks = mc %>% names, mc = mc) %>% mutate(MCC = mc / sum(mc)) %>% arrange(desc(MCC))
print(Risk_Cont %>% pull(MCC))
## JSE.FSR.Close JSE.SBK.Close JSE.RMH.Close JSE.ABG.Close JSE.NED.Close
## 0.15503118 0.14757284 0.14579221 0.12850363 0.12171724
## JSE.SLM.Close JSE.BVT.Close JSE.CPI.Close
## 0.11481075 0.09454841 0.09202374
# DIY: Read some more on this...
# Of course, you could also try out the above using a Bayesian approach as:
# mc <- cctr.Bayes(tickers = Stx,
# weights = wtssel,
# start = first(Rand_BPWeight$date),
# end = last(Rand_BPWeight$date),
# data = dailydata.subset %>% tbl_xts(spread_by = Stocks, cols_to_xts = Returns),
# sim.size = 1000)
See the math behind the above breakdown of portfolio SD contribution on p.4 here.
You can also view the ES contribution discussion on p.108 here.
See another concise discussion of risk breakdown here.
In this section, let’s briefly revisit the calculation of the widely used portfolio risk measures VaR and ES. We will do each by hand and then using the PerformanceAnalytics package.
var = apply(df_Portf_Random %>% select(-date),
            2, quantile, probs = c(0.05, 0.01))

print(var)
## PortfolioReturn
## 5% -0.02313198
## 1% -0.03723858
To calculate the Historical ES by hand, we need to first define a function to carry out, by column, the ES method:
ES.f = function(x, alpha = 0.05) {
  q = quantile(x, probs = alpha)
  # See that this corresponds to the class notes...
  mean(x[x <= q])
}

ES = apply(df_Portf_Random %>% select(-date),
           2, ES.f, alpha = 0.05)

print(ES)
## PortfolioReturn
## -0.03209776
# The following will give exactly the same results as above:
VaR(df_Portf_Random %>% tbl_xts(), p= 0.95, method = "historical")
## PortfolioReturn
## VaR -0.02313198
ES(df_Portf_Random %>% tbl_xts(), p=0.95,method = "historical")
## PortfolioReturn
## ES -0.03209776
And look: They are the same as the ones done by hand!
mod.var = VaR(df_Portf_Random %>% tbl_xts(), p = 0.95, method = "modified")

mod.es = ES(df_Portf_Random %>% tbl_xts(), p = 0.95, method = "modified")
print(mod.var)
## PortfolioReturn
## VaR -0.02297002
print(mod.es)
## PortfolioReturn
## ES -0.03503858
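The Gaussian (parametric) VaR rounds out the three common approaches, and it is equally easy to do by hand. A minimal sketch, assuming normally distributed returns (qnorm gives the 5% normal quantile):

# Gaussian (parametric) 95% VaR by hand, assuming normality of returns:
mu    <- mean(df_Portf_Random$PortfolioReturn)
sigma <- sd(df_Portf_Random$PortfolioReturn)
mu + sigma * qnorm(0.05)

# This should be close to: VaR(df_Portf_Random %>% tbl_xts(), p = 0.95, method = "gaussian")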
PA also allows a simple means of comparing the three different types of VaR estimates used above (historical, modified and Gaussian). It does so using the VaRSensitivity command:
chart.VaRSensitivity(R = df_Portf_Random %>% tbl_xts(),
main = "VaR Sensitivity Plot",
methods=c("HistoricalVaR", "ModifiedVaR","GaussianVaR"),
colorset=bluefocus, lwd=2)
It also offers a great chart that recursively calculates the VaR and plots it against the actual returns, to give you an idea of where the returns actually fell below the VaR threshold (this takes a few minutes… be warned):
chart.BarVaR(df_Portf_Random %>% tbl_xts(),
main = "Data vs Empirical VaR using rolling approach",
methods="HistoricalVaR")
For homework, complete the following task:
For the ALSI Top 40, calculate the rolling contribution to risk (use fmxdat::J200).
TIP: filter out stocks with fewer than 10 valid return observations, and reweight the remaining stocks so the weights again sum to 1 (see the sketch below).
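A minimal sketch of the reweighting idea, using a hypothetical data.frame df with date and weight columns (names are illustrative only):

# Hypothetical: after filtering out invalid stocks, rescale the remaining
# weights within each date so that they again sum to 1:
# df %>% group_by(date) %>%
#   mutate(weight = weight / sum(weight, na.rm = TRUE)) %>% ungroup()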
That’s it.
Please take the time to go through the code and understand, specifically, the code used to construct the random portfolio.