
Introduction

In this practical we will calculate Value-at-Risk (VaR) and Expected Shortfall (ES), as well as several other downside risk measures, in R. We will do this for a portfolio of assets, so as to provide useful portfolio analyses to investors regarding:

  • Their possible risk and return profiles

  • Downside potential (the risk of losing capital)

We will be making use of several packages to do calculations and make forward predictions, including PerformanceAnalytics which we used earlier.

We will construct equally weighted and randomly weighted portfolios, and look at constructing portfolio return and risk measures.

We will end off by looking at VaR and ES measures.

Data and return calculations

Let’s again load the daily TRI data set from an earlier practical.

library(rmsfuns)
pacman::p_load("tidyr", "tbl2xts","devtools","lubridate", "readr", "PerformanceAnalytics", "ggplot2", "dplyr")
dailydata <- fmxdat::findata

Let’s remove the NAs using the na.locf function from practical 2, and calculate the daily returns using the TTR package’s ROC call (note the index [2] below selects discrete, i.e. simple, returns; [1] would give continuous log returns). I introduce this package here as it does not require the data to be xts, and it also provides several other technical wrapper functions that might prove useful for you to explore.

pacman::p_load("TTR")
dailydata <- 
dailydata %>% arrange(Date) %>% 
mutate(across(.cols = -Date, .fns = ~TTR::ROC(., type = c("continuous", "discrete")[2]))) %>% 
    # Equivalent to:   # mutate_at(.vars = vars(-Date), ~./lag(.)-1) %>% 
    # continuous equivalent to:   # mutate_at(.vars = vars(-Date), ~(log(.)-log(lag(.)))) 
  mutate_at(.vars = vars(-Date), ~na.locf(., na.rm = F, maxgap = 5)) %>% filter(Date > first(Date))
    # Pad NA's back max 5 days:
# Let's not waste our time - shorten the column names (drop the JSE. prefix and .Close suffix)!
colnames(dailydata) <- 
  gsub("JSE\\.","",colnames(dailydata))
colnames(dailydata) <- 
  gsub("\\.Close","",colnames(dailydata))

To get a table of statistics of this data, we could use PA’s statistics table:

tablestats <-
  dailydata %>% tbl_xts() %>% 
  table.Stats(., ci = 0.95, digits = 3)
print(tablestats[,1:5])
##                      ABG      BVT      CPI      FSR      NED
## Observations    3568.000 3568.000 3568.000 3568.000 3568.000
## NAs                0.000    0.000    0.000    0.000    0.000
## Minimum           -0.145   -0.096   -0.126   -0.148   -0.105
## Quartile 1        -0.010   -0.009   -0.007   -0.010   -0.010
## Median             0.000    0.000    0.000    0.000    0.000
## Arithmetic Mean    0.000    0.001    0.001    0.001    0.000
## Geometric Mean     0.000    0.001    0.001    0.000    0.000
## Quartile 3         0.010    0.010    0.010    0.012    0.010
## Maximum            0.119    0.105    0.119    0.130    0.126
## SE Mean            0.000    0.000    0.000    0.000    0.000
## LCL Mean (0.95)    0.000    0.000    0.001    0.000    0.000
## UCL Mean (0.95)    0.001    0.001    0.002    0.001    0.001
## Variance           0.000    0.000    0.000    0.000    0.000
## Stdev              0.019    0.017    0.018    0.020    0.018
## Skewness           0.146    0.094    0.009    0.009    0.108
## Kurtosis           3.399    2.288    4.997    3.184    2.815

As a side note - look at the incredibly high level of kurtosis (fat tails) of CPI and ABG in the columns shown (three is parity).

PerformanceAnalytics also offers a means of cleaning the data using Boudt’s method, which is specifically designed to avoid some of the cleaning pitfalls pertaining to portfolio construction and risk analysis. The technique reduces the magnitude, but not the direction, of observations that exceed a 1 − α risk threshold (see Boudt, Peterson & Croux (2008) for more details).

pacman::p_load("tbl2xts")
#install:
pacman::p_load("DEoptimR")
pacman::p_load("robustbase")
rtnc <-
data(managers)
Return.clean(managers[,1:4], 
             method = c("none", "boudt", "geltner")[2], alpha = 0.01)

Portfolio Returns

Below we will do a proper example of calculating portfolio returns. We will have a weight vector according to which the portfolio needs to be rebalanced at periodic dates. We will compare the performance of a randomly weighted portfolio with that of an equally weighted portfolio.

We will then calculate the portfolio values and positions at any given point in time.

Be very mindful when doing this calculation - note that portfolio returns change the positions in a portfolio over time, implying that before each rebalancing, you need to calculate the position in a stock - and then whether to buy or sell some of the stock to meet the new weight.

Mathematically, this implies calculating each day what the new portfolio position is given the stock’s daily return.
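To make this concrete: between rebalances, each position's value compounds at its own return, so the effective weights drift away from the last rebalance weights. In standard notation (a sketch, not from the original text):

$$V_{i,t} = V_{i,t-1}\,(1 + R_{i,t}), \qquad \omega_{i,t} = \frac{V_{i,t}}{\sum_j V_{j,t}}$$

where $V_{i,t}$ is the value held in stock $i$ at the end of day $t$ and $R_{i,t}$ its simple return. At a rebalance date, $\omega_{i,t}$ is reset to the target weight by buying or selling stock.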

# install.packages("rportfolios")
library(rportfolios)
dailydata <- fmxdat::findata

dailydata.subset <- 
  
  dailydata %>% 
  
  gather(Stocks, Px, -Date) %>% 
  
  arrange(Date) %>% 
  
  group_by(Stocks) %>% 
  
  mutate(Returns = Px/lag(Px)-1) %>% ungroup() %>% filter(Date > first(Date)) %>% 
  
  select(-Px)

# Let's assume the portfolio rebalances each January and July.

# First, let's save the exact rebalance dates and save the random weight and date information to be used later:
# Below is a very nice way to save months and years: let's rebalance at month 1 and 7... 

RebMonths <- c(1,7) # Make a parameter that can easily be changed later.

RandomWeights <- 
  
dailydata.subset %>% 
  
    mutate(Months = as.numeric(format(Date, format = "%m")), 
           
           YearMonths = as.numeric(format(Date, format = "%Y%m"))) %>% 
  
  filter(Months %in% RebMonths) %>% 
  
  group_by(YearMonths, Months, Stocks) %>% filter(Date == last(Date)) %>% ungroup()

# Now let's create a column with random weights assigned to each stock at each rebalancing date, conforming to the following parameters:
# They have to sum to 1...
# Let's add constraints too - a maximum exposure to a single stock of 20% above its equal weight.
N_Stocks <- length(unique(RandomWeights$Stocks))

Max_Exposure <- (1/N_Stocks)*1.20

# Minimum exposure is, say, 2%:
Min_Exposure <- 0.02
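
# (An added sanity check, not in the original: the bounds must admit a fully
#  invested portfolio, i.e. N_Stocks*Min_Exposure <= 1 <= N_Stocks*Max_Exposure)
stopifnot(N_Stocks * Min_Exposure <= 1, N_Stocks * Max_Exposure >= 1)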

# Now to append the weight vector, let's use the random.bounded function from rportfolios.

RandomWeights_adj <-  
  bind_cols(RandomWeights %>% arrange(Date),
            RandomWeights %>% group_by(Date) %>% 
              
  do( Randweights = random.bounded(n = nrow(.), 
                 x.t = 1, # Full investment... 
                 x.l = rep( Min_Exposure, nrow(.)), # Lower Bound 
                 x.u = rep( Max_Exposure, nrow(.)), 
                 max.iter = 1000) ) %>% ungroup() %>% unnest(Randweights) %>% select(-Date)
  )

# Sanity check: Create a stop function if it doesn't hold...
if( RandomWeights_adj %>% group_by(Date) %>% 
    summarise(Fully_Invested = sum(Randweights)) %>% 
    filter(Fully_Invested > 1.000001 | Fully_Invested < 0.999999 ) %>% nrow() > 0 ) stop("\n=============\n Ooops! \nWeights do not sum to 1... Please check!\n===========\n")

# Create equal weight portfolios as well:
RandomWeights_adj <- 
  
RandomWeights_adj %>% 
  
  group_by(Date) %>% 
  
  mutate(EqualWeights = 1/n()) %>% 
  
  ungroup() %>% select(-Months, -YearMonths)

# Right, so now we have equal and random-weights that we can use at rebalancing dates: January and July.

Creating portfolios

The code below shows how we can create random portfolios using our returns above, as well as our random and equal weights.

When doing portfolio return calculations, please rather use rmsfuns’ Safe_Return.portfolio function.

PerformanceAnalytics’ Return.portfolio function has a few nuances which might, without throwing an error, give you very wrong results.

I wrote a gist to show why we need to use the safe version for portfolio return calcs - it also confirms the calculation with a by-hand example.
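To see the mechanics involved, here is a minimal two-asset, two-day sketch (hypothetical numbers) of what any portfolio return function must do between rebalances:

library(tibble)
R  <- tibble(A = c(0.10, -0.05), B = c(0.02, 0.04)) # two days of simple returns
w0 <- c(A = 0.6, B = 0.4)                           # weights set at the last rebalance

# Invest R1 at the rebalance, split by w0; each position compounds at its own return:
V <- w0 * apply(1 + R, 2, prod)

sum(V) - 1   # the portfolio return over the two days
V / sum(V)   # note: the effective weights have drifted away from w0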

  • Back to our random portfolio calculation:
pacman::p_load("PerformanceAnalytics")

# Now we use the Safe_Return.portfolio function from rmsfuns (a safe wrapper around PA's Return.portfolio)
# Note, as with most PA functions, the inputs are xts and wide...
# Also, let's assume you are investing R1000 at the start:
Fund_Size_at_Start <- 1000

Rand_weights <- 
RandomWeights_adj %>% select(Date, Stocks, Randweights) %>% spread(Stocks, Randweights) %>% tbl_xts()

EW_weights <- 
RandomWeights_adj %>% select(Date, Stocks, EqualWeights) %>% spread(Stocks, EqualWeights) %>% tbl_xts()

df_Returns <- 
dailydata.subset %>% spread(Stocks, Returns)

df_Returns[is.na(df_Returns)] <- 0
xts_df_Returns <- df_Returns %>% tbl_xts()

    Rand_RetPort <- 
      rmsfuns::Safe_Return.portfolio(xts_df_Returns, 
                                     
                       weights = Rand_weights, lag_weights = TRUE,
                       
                       verbose = TRUE, contribution = TRUE, 
                       
                       value = Fund_Size_at_Start, geometric = TRUE) 

    EW_RetPort <- 
      rmsfuns::Safe_Return.portfolio(xts_df_Returns, 
                                     
                       weights = EW_weights, lag_weights = TRUE,
                       
                       verbose = TRUE, contribution = TRUE, 
                       
                       value = Fund_Size_at_Start, geometric = TRUE) 

# Clean and save portfolio returns and weights:
Rand_Contribution <- 
      Rand_RetPort$"contribution" %>% xts_tbl() 

Rand_BPWeight <- 
  
      Rand_RetPort$"BOP.Weight" %>% xts_tbl() 

Rand_BPValue <- 
  
      Rand_RetPort$"BOP.Value" %>% xts_tbl()  
    
# Clean and save portfolio returns and weights:
EW_Contribution <- 
      EW_RetPort$"contribution" %>% xts_tbl() 

EW_BPWeight <- 
      EW_RetPort$"BOP.Weight" %>% xts_tbl()  

EW_BPValue <- 
      EW_RetPort$"BOP.Value" %>% xts_tbl()
    

    names(Rand_Contribution) <- c("date", names(Rand_RetPort$"contribution"))
    names(Rand_BPWeight) <- c("date", names(Rand_RetPort$"BOP.Weight"))
    names(Rand_BPValue) <- c("date", names(Rand_RetPort$"BOP.Value"))
  
    names(EW_Contribution) <- c("date", names(EW_RetPort$"contribution"))
    names(EW_BPWeight) <- c("date", names(EW_RetPort$"BOP.Weight"))
    names(EW_BPValue) <- c("date", names(EW_RetPort$"BOP.Value"))
  
    # Look at what these data.frames each convey - incredible right?
    
    # Let's bind all of these together now:
    
    df_port_return_Random <- 
      left_join(dailydata.subset %>% rename("date" = Date),
                Rand_BPWeight %>% gather(Stocks, weight, -date),
                by = c("date", "Stocks") ) %>% 
      
      left_join(.,
                Rand_BPValue %>% gather(Stocks, value_held, -date),
                by = c("date", "Stocks") ) %>% 
      
      left_join(.,
                Rand_Contribution %>% gather(Stocks, Contribution, -date),
                by = c("date", "Stocks"))

    df_port_return_EW <- 
      left_join(dailydata.subset %>% rename("date" = Date),
                EW_BPWeight %>% gather(Stocks, weight, -date),
                by = c("date", "Stocks") ) %>% 
      
      left_join(.,
                EW_BPValue %>% gather(Stocks, value_held, -date),
                by = c("date", "Stocks") ) %>% 
      
      left_join(.,
                EW_Contribution %>% gather(Stocks, Contribution, -date),
                by = c("date", "Stocks"))

# Calculate Portfolio Returns:
df_Portf_Random <- 
    df_port_return_Random %>% group_by(date) %>% summarise(PortfolioReturn = sum(Returns*weight, na.rm =TRUE)) %>% 
      filter(PortfolioReturn != 0)
      
# Calculate Portfolio Returns:
df_Portf_EW <- 
    df_port_return_EW %>% group_by(date) %>% summarise(PortfolioReturn = sum(Returns*weight, na.rm =TRUE)) %>% 
      filter(PortfolioReturn != 0)

Note the last calculation for portfolio returns:

Remember: we cannot simply sum the weighted log returns within a time period. Formally:

$$r_{t,P} = \ln(1 + R_{t,P}) = \ln\left(1 + \sum_{i}^{n} \omega_i R_{i,t}\right) \neq \sum_{i}^{n} \omega_i r_{i,t}$$

Thus, the following should be remembered:

  • Simple returns provide convenience when summing across assets for a given date.

  • Log returns provide convenience when summing across time for a given asset.

Luckily, it is easy to move between simple and compounded returns mathematically.

By definition, simple and log returns can be written interchangeably as:

  • From log to simple returns: $R = e^{r} - 1$

  • From simple to log returns: $r = \ln(R + 1)$

Using the above definitions, and the convenience of summing simple returns across assets, we need to (a short numeric sketch follows this list):

  1. rewrite the log returns as simple returns (if you started with log returns);

  2. sum the weighted simple returns to get the weighted portfolio return (as in the equation above);

  3. write it back into log return form again (if needed). Also note: if geometric = FALSE, Safe_Return.portfolio uses a simple arithmetic chain (summing returns), which is what we want to use now; if geometric = TRUE, it uses a product approach, useful for calculating wealth indexes.
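Here is a minimal numeric sketch of these three steps (illustrative numbers only, not from our data):

r_log <- c(0.010, -0.020, 0.015)  # log returns of three assets on one day
w     <- c(0.5, 0.3, 0.2)         # portfolio weights, summing to 1

R_simple <- exp(r_log) - 1        # 1. log -> simple
R_port   <- sum(w * R_simple)     # 2. weighted sum of simple returns across assets
r_port   <- log(1 + R_port)       # 3. back to log form, if needed
r_port
# Compare with the (incorrect) direct sum of weighted log returns:
sum(w * r_log)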

Portfolio Cumulative Return

Calculating the value of your investment over time requires geometrically chaining our simple returns. Luckily, this is pretty easy:

Cum_Rand <- 
df_Portf_Random %>%
    mutate(cumreturn_Rand = (cumprod(1 + PortfolioReturn))) %>% 
  # Start at 1
  mutate(cumreturn_Rand = cumreturn_Rand / first(cumreturn_Rand)) %>% select(-PortfolioReturn)

Cum_EW <- 
df_Portf_EW %>% 
    mutate(cumreturn_EW = (cumprod(1 + PortfolioReturn))) %>% 
    mutate(cumreturn_EW = cumreturn_EW / first(cumreturn_EW)) %>% select(-PortfolioReturn)

Cum_Comp <- 
  left_join(Cum_Rand, Cum_EW, by = "date") %>% gather(Type, Value, -date)

# Now let's plot the wealth indexes of the two portfolios (both rebased to 1 at the start):

Cum_Comp %>% 
  group_by(Type) %>% 
  ggplot() + geom_line( aes(date, Value, color = Type) ) + 
  theme_bw()

Weights plot

# StackBar of monthly weights (Note the stand-out rebalance weights...):
Rand_BPWeight %>% tbl_xts() %>% .[endpoints(.,'months')] %>% chart.StackedBar()

Portfolio Risk Analysis

Now let’s compare the riskiness of these two portfolios that we just created.

Calendar performances:

pacman::p_load("knitr", "gt")
# Random Portfolio
t <-
  table.CalendarReturns(df_Portf_Random %>% tbl_xts(), digits = 1, geometric = TRUE)
Cols_length <- ncol(t)

gt(t/100) %>% 
      tab_header(title = glue::glue("Portfolio Calendar Returns: Random")) %>% 
      fmt_percent(
      columns = 1:Cols_length,
      decimals = 1
    )
Portfolio Calendar Returns: Random
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec PortfolioReturn
1.4% −3.5% −1.9% 0.5% −0.6% 1.9% 0.8% −0.6% −0.3% 4.2% −0.8% −0.2% 0.4%
−0.5% −0.4% 0.8% 0.4% 0.1% 0.0% 0.9% 2.1% 0.1% −1.0% −0.6% −2.7% −0.9%
−1.4% −0.5% −1.7% 0.3% 1.0% −1.4% 0.5% −1.5% 4.1% 1.7% 0.4% 0.5% 1.9%
−1.4% −5.7% 3.0% 0.3% 0.0% 0.2% 0.2% −0.1% 0.3% −1.0% 0.8% 1.6% −2.1%
0.4% 0.6% 0.8% 0.4% −0.7% −2.0% −0.8% 0.7% 1.2% 0.1% −1.5% 0.2% −0.7%
−1.0% −0.1% 0.4% 0.7% 1.4% 0.4% −0.6% 2.6% 1.2% −1.1% 4.6% −0.5% 8.1%
0.3% −0.1% −0.4% 0.8% 0.7% 1.2% −2.0% 0.2% 1.2% −0.9% −0.4% −1.0% −0.4%
−0.8% 1.5% 0.0% 1.0% 0.2% 1.4% 1.8% 0.4% −1.0% 0.2% 0.1% 0.0% 4.7%
1.3% 0.8% −0.3% −0.1% −0.4% 0.8% 0.3% −0.9% 0.1% 2.7% −0.3% 0.9% 5.0%
0.9% −1.6% 0.9% 0.6% −1.7% 1.0% 1.0% 0.2% 1.5% 0.4% −1.6% −0.8% 0.8%
4.2% −1.6% 1.2% −1.8% −0.1% 1.8% 0.4% −1.7% −1.7% 2.2% −1.2% −0.6% 0.9%
0.5% −1.1% −5.6% −0.3% −2.5% 0.4% −0.5% 0.3% 0.7% −0.3% −0.5% 2.0% −7.0%
0.1% −2.3% 2.0% 2.5% 0.8% 4.0% 1.1% 1.3% −0.9% −0.1% −2.7% 0.7% 6.5%
1.0% −0.4% 1.7% 0.2% 0.6% −0.6% −1.3% 3.1% 2.8% NA NA NA 7.1%
# EW Portfolio
t2 <- 
  table.CalendarReturns(df_Portf_EW %>% tbl_xts(), digits = 1, geometric = TRUE)
gt(t2/100) %>% 
      tab_header(title = glue::glue("Portfolio Calendar Returns: EW")) %>% 
      fmt_percent(
      columns = 1:Cols_length,
      decimals = 1
    )
Portfolio Calendar Returns: EW
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec PortfolioReturn
1.3% −3.5% −1.8% 0.3% −0.5% 2.0% 0.7% −0.6% −0.3% 3.9% −0.8% −0.1% 0.5%
−0.6% −0.2% 0.9% 0.5% 0.1% −0.1% 0.9% 2.2% 0.0% −1.0% −0.5% −2.6% −0.5%
−1.6% −0.1% −2.0% 0.4% 0.6% −1.3% 0.5% −1.3% 4.4% 1.2% 0.5% 0.4% 1.5%
−1.9% −5.6% 3.1% 0.2% 0.1% 0.2% 0.2% 0.0% 0.5% −1.2% 1.0% 1.5% −2.3%
0.5% 0.6% 1.0% 0.4% −0.8% −2.2% −0.9% 1.0% 1.2% 0.2% −1.4% 0.1% −0.5%
−1.0% −0.2% 0.4% 0.7% 1.3% 0.4% −0.6% 2.7% 1.4% −1.1% 4.6% −0.5% 8.2%
0.4% 0.0% −0.3% 1.0% 0.7% 1.2% −1.7% 0.3% 1.0% −0.8% −0.2% −1.0% 0.5%
−0.9% 1.4% 0.1% 1.0% 0.2% 1.7% 1.5% 0.4% −1.0% 0.2% 0.1% 0.0% 4.9%
1.2% 0.8% −0.3% −0.1% −0.6% 0.8% 0.1% −1.1% 0.1% 2.7% −0.6% 0.8% 3.9%
1.0% −1.6% 0.7% 0.5% −1.7% 1.2% 1.0% 0.3% 1.4% 0.5% −1.6% −0.7% 0.9%
4.5% −1.7% 1.4% −2.0% −0.4% 1.7% 0.3% −1.7% −1.5% 2.5% −1.3% −0.7% 1.0%
0.3% −1.3% −5.2% −0.3% −2.2% 0.3% −0.5% 0.2% 0.6% −0.4% −0.5% 2.0% −6.9%
0.8% −2.2% 2.0% 2.5% 1.0% 4.0% 1.3% 1.3% −1.0% 0.3% −2.7% 0.7% 7.9%
1.0% −0.4% 1.6% 0.2% 0.6% −0.5% −1.3% 3.2% 2.7% NA NA NA 7.3%
# >>>>>>>>>> NOTE (NBNB): 
# Try kable(t, caption = "Random Portfolio Calendar Returns", format = "latex")
# in your pdf document...

Downside Risks

tabdownside <-
  table.DownsideRisk(left_join(df_Portf_Random %>% rename("Rand" = PortfolioReturn), 
                               df_Portf_EW %>% rename("EW" = PortfolioReturn), 
                               by = "date") %>% tbl_xts(.), 
                     ci = 0.95, Rf=0, MAR=0)
# Suppose I am only interested in specific elements of the table, then use:
tabdownside <- tabdownside[c(1,5,7,8:11),]
 

tabdownside %>% data.frame() %>% tibble::rownames_to_column() %>% 
gt() %>% 
        tab_header(title = glue::glue("Downside Risk estimates")) %>% 
      fmt_percent(
      columns = 2:3,
      decimals = 2
    )
Downside Risk estimates
Rand EW
Semi Deviation 1.04% 1.04%
Downside Deviation (Rf=0%) 1.01% 1.01%
Maximum Drawdown 46.82% 47.52%
Historical VaR (95%) −2.31% −2.30%
Historical ES (95%) −3.21% −3.21%
Modified VaR (95%) −2.30% −2.29%
Modified ES (95%) −3.50% −3.45%
# kable(tabdownside, caption = "Downside Risk estimates")

Risk Contribution Estimation:

We now look at which of the stocks have been the highest contributors to our portfolio’s risk.

A naive approach to measuring contribution to risk in a portfolio is to view it from a stand-alone perspective (estimating each asset’s unique risk). This is too simplistic, though, as it ignores diversification effects that might be in play when combining imperfectly correlated assets.

We rather look at the Expected Shortfall contribution from each stock in our portfolio to get a more accurate sense of this overall contribution.

This is done by considering the weighted decomposition of the contribution each portfolio element makes to the standard deviation of the whole portfolio. This is, effectively, the partial derivative of the portfolio's standard deviation with respect to each stock's weight.

The contribution can be negative - which would imply the stock having a diversifying element in the portfolio.

Mathematically, using the SD approach, we are doing the following (with $\rho_i$ the Marginal Contribution to Risk, MCR):

$$R_P = \sum_j \omega_j r_j$$

$$\sigma_P = \sqrt{\omega' \Sigma \omega}$$

$$\frac{\partial \sigma_P}{\partial \omega} = \frac{1}{\sigma_P}\,\Sigma\,\omega = \rho$$

$$\rho_i = \frac{1}{\sigma_P} \sum_j \sigma_{i,j}\,\omega_j \quad \forall \, i$$

To do this calculation, we need the weighted marginal contributions ($\rho_i$ above) of each stock we held earlier, using PerformanceAnalytics’ ETL (Expected Tail Loss) or StdDev functions, both with portfolio_method = “component”:

pacman::p_load(PortRisk)

# First get the contributions to overall portfolio return made by each stock:
Rand_Contribution <- Rand_RetPort$"contribution" %>% xts_tbl()

# Notice that this adds up to the portfolio return at each date:
left_join(Rand_Contribution %>% gather(Stocks, RetCont, -date) %>% group_by(date) %>% summarise(Verify_Ret = sum(RetCont)), 
          Rand_RetPort$returns %>% xts_tbl, by = "date") %>% mutate(Diff = round(Verify_Ret - portfolio.returns, 5))
## # A tibble: 3,548 × 4
##    date       Verify_Ret portfolio.returns  Diff
##    <date>          <dbl>             <dbl> <dbl>
##  1 2006-01-31    0.0137            0.0137      0
##  2 2006-02-01    0.0165            0.0165      0
##  3 2006-02-02   -0.00812          -0.00812     0
##  4 2006-02-03   -0.0201           -0.0201      0
##  5 2006-02-06    0.0188            0.0188      0
##  6 2006-02-07   -0.00805          -0.00805     0
##  7 2006-02-08   -0.0245           -0.0245      0
##  8 2006-02-09    0.0225            0.0225      0
##  9 2006-02-10    0.00220           0.00220     0
## 10 2006-02-13   -0.0216           -0.0216      0
## # … with 3,538 more rows
## # ℹ Use `print(n = ...)` to see more rows
# Now, let's consider the highest to lowest contributors to portfolio risk using ES (Expected Tail Loss or ETL):
ES_Risk_Contributors <- 
  
ETL(Rand_Contribution %>% tbl_xts(), portfolio_method="component") %>% 
  
  .$pct_contrib_MES %>% data.frame(Risk_Contrib = .) %>% 
  
  tibble::rownames_to_column("Stock") %>% arrange(desc(Risk_Contrib))

    print(ES_Risk_Contributors)
##           Stock Risk_Contrib
## 1 JSE.FSR.Close   0.16661014
## 2 JSE.SBK.Close   0.13749266
## 3 JSE.CPI.Close   0.13636132
## 4 JSE.RMH.Close   0.13279309
## 5 JSE.SLM.Close   0.12430295
## 6 JSE.ABG.Close   0.12088233
## 7 JSE.BVT.Close   0.10353238
## 8 JSE.NED.Close   0.07802513
# This should be your default calculation, as the SD approach is more simplistic in terms of its risk treatment.
        
# To calculate the risk contribution for SD, note we are effectively taking the average weights over time:
wts <- 
  
  Rand_RetPort$BOP.Weight %>% xts_tbl %>% 
  
  summarise( across(.cols = -date, .fns = ~mean(., na.rm=T)) ) %>% gather(Type, wt)

# And using the actual returns:
SD_Risk_Contributors_direct <- 
  
StdDev( R = dailydata.subset %>% filter(Date >= first(Rand_Contribution$date)) %>% 
          
        tbl_xts(spread_by = Stocks, cols_to_xts = Returns), 
        
        portfolio_method = "component", weights = wts$wt) %>% 
  
  .$pct_contrib_StdDev %>% data.frame(Risk_Contrib = .) %>% 
  
  tibble::rownames_to_column("Stock") %>% arrange(desc(Risk_Contrib))

print(SD_Risk_Contributors_direct)
##           Stock Risk_Contrib
## 1 JSE.FSR.Close   0.15503118
## 2 JSE.SBK.Close   0.14757284
## 3 JSE.RMH.Close   0.14579221
## 4 JSE.ABG.Close   0.12850363
## 5 JSE.NED.Close   0.12171724
## 6 JSE.SLM.Close   0.11481075
## 7 JSE.BVT.Close   0.09454841
## 8 JSE.CPI.Close   0.09202374
# We can prove this by doing the above by hand (using the math we had before here):
Rets <- dailydata.subset %>% tbl_xts(spread_by = Stocks, cols_to_xts = Returns)
Sigma <- cov(dailydata.subset %>% spread(Stocks, Returns) %>% select(-Date), use = "pairwise.complete.obs")
# Align the weight vector with Sigma's column order:
Wts <- Rand_RetPort$BOP.Weight %>% xts_tbl %>% summarise( across(.cols = -date, .fns = ~mean(., na.rm=T)) ) %>% gather(Type, wt) %>% arrange(match(Type, colnames(Sigma))) %>% pull(wt)

sigma_p = sqrt((t(Wts) %*% Sigma %*% Wts)[1, 1])

# Marginal Contribution:
mctr = ((Sigma %*% Wts)/sigma_p)[, 1]
# Conditional Contribution:
cctr <- Wts * mctr

tibble(stocks =  mctr %>% names, mc = cctr) %>% mutate(CCTR = mc / sum(mc)) %>% arrange(desc(CCTR)) %>% pull(CCTR)
## JSE.FSR.Close JSE.SBK.Close JSE.RMH.Close JSE.ABG.Close JSE.NED.Close 
##    0.15502401    0.14782755    0.14593043    0.12854079    0.12168223 
## JSE.SLM.Close JSE.BVT.Close JSE.CPI.Close 
##    0.11463899    0.09443297    0.09192303
# You can also use the PortRisk package's Conditional Contribution to Total Risk (CCTR) approach:
# Again, get the aggregate weights vector:    
wts <- Rand_RetPort$BOP.Weight %>% xts_tbl %>% summarise( across(.cols = -date, .fns = ~mean(., na.rm=T)) ) %>% gather(Type, wt)
# Specify stocks in your portfolio corresponding to the weights:
Stx <- wts$Type
wtssel <- wts$wt

mc <- PortRisk::cctr(Stx, 
           weights = wtssel, 
           start = first(Rand_BPWeight$date), 
           end = last(Rand_BPWeight$date),
           data = dailydata.subset %>% tbl_xts(spread_by = Stocks, cols_to_xts = Returns))
    
Risk_Cont <- tibble(stocks =  mc %>% names, mc = mc) %>% mutate(MCC = mc / sum(mc)) %>% arrange(desc(MCC)) 
print(Risk_Cont%>% pull(MCC)) 
## JSE.FSR.Close JSE.SBK.Close JSE.RMH.Close JSE.ABG.Close JSE.NED.Close 
##    0.15503118    0.14757284    0.14579221    0.12850363    0.12171724 
## JSE.SLM.Close JSE.BVT.Close JSE.CPI.Close 
##    0.11481075    0.09454841    0.09202374
# DIY: Read some more on this...

# Of course, you could also try out the above using a Bayesian approach as:
# mc <- cctr.Bayes(tickers = Stx, 
#            weights = wtssel, 
#            start = first(Rand_BPWeight$date), 
#            end = last(Rand_BPWeight$date),
#            data = dailydata.subset %>% tbl_xts(spread_by = Stocks, cols_to_xts = Returns),
#            sim.size = 1000)

See the math behind this breakdown of portfolio SD contribution on p.4 here.

You can also view the ES contribution discussion on p.108 here.

See another concise discussion of risk breakdown here.

Value at Risk and Expected Shortfall Measures

In this section, let’s briefly revisit the calculation of the widely used portfolio risk measures VaR and ES. We will do each by hand, and then using the PerformanceAnalytics package.
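As a reminder, using the sign convention of the output below (VaR reported as a negative return), the historical measures at the 95% level are simply:

$$\text{VaR}_{0.95} = Q_{0.05}(R), \qquad \text{ES}_{0.95} = E\left[\,R \mid R \le Q_{0.05}(R)\,\right]$$

with $Q_{0.05}(R)$ the empirical 5% quantile of the portfolio return series.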

Historical VaR

By Hand
var = apply(df_Portf_Random %>% select(-date), 
            2,quantile, probs=c(0.05, 0.01))
print(var)
##    PortfolioReturn
## 5%     -0.02313198
## 1%     -0.03723858

To calculate the Historical ES by hand, we need to first define a function to carry out, by column, the ES method:

ES.f = function(x, alpha=0.05) {
q = quantile(x, probs=alpha)
mean(x[x <= q])
} # See that this corresponds to the class notes...

ES = apply(df_Portf_Random %>% select(-date), 
           2, ES.f, alpha=0.05)

print(ES)
## PortfolioReturn 
##     -0.03209776
Using PerformanceAnalytics (Much better…)
# The following will give exactly the same results as above:
VaR(df_Portf_Random %>% tbl_xts(), p= 0.95, method = "historical")
##     PortfolioReturn
## VaR     -0.02313198
ES(df_Portf_Random  %>% tbl_xts(), p=0.95,method = "historical")
##    PortfolioReturn
## ES     -0.03209776

And look: They are the same as the ones done by hand!

Cornish-Fisher VaR & ES

mod.var = VaR(df_Portf_Random %>% tbl_xts(), p=0.95,method="modified")

mod.es = ES(df_Portf_Random %>% tbl_xts(), p=0.95,method="modified")

print(mod.var)
##     PortfolioReturn
## VaR     -0.02297002
print(mod.es)
##    PortfolioReturn
## ES     -0.03503858
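
For reference, modified VaR adjusts the Gaussian quantile $z_\alpha$ for sample skewness $S$ and excess kurtosis $K$ via the Cornish-Fisher expansion (the standard formulation; see Boudt, Peterson & Croux (2008)):

$$z_{cf} = z_\alpha + \frac{(z_\alpha^2 - 1)S}{6} + \frac{(z_\alpha^3 - 3z_\alpha)K}{24} - \frac{(2z_\alpha^3 - 5z_\alpha)S^2}{36}, \qquad \text{mVaR} = \mu + z_{cf}\,\sigma$$

so negative skewness and fat tails push the estimate deeper into the loss tail.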

VaR comparison

PA also allows a simple means of comparing different types of VaR estimates (historical, modified and Gaussian). It does so using the VaRSensitivity function:

chart.VaRSensitivity(R = df_Portf_Random %>% tbl_xts(),
                     main = "VaR Sensitivity Plot",
                     methods=c("HistoricalVaR", "ModifiedVaR","GaussianVaR"),
                     colorset=bluefocus, lwd=2)

It also offers a great plot that recursively calculates the VaR and plots it against the actual data, to give you an idea of where the returns actually fell below the VaR threshold (this takes a few minutes… be warned):

chart.BarVaR(df_Portf_Random %>% tbl_xts(),
             main = "Data vs Empirical VaR using rolling approach",
             methods="HistoricalVaR")

DIY

For homework, complete the following task.

For the ALSI top 40, calculate the rolling contribution to risk (use fmxdat::J200).

TIP: filter out stocks with fewer than 10 valid return observations (and reweight the remaining weights to sum to one).
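A possible starting skeleton for that filtering step (a sketch only - the column names date, Tickers, Return and weight are assumptions; check them against fmxdat::J200 yourself):

pacman::p_load("dplyr")
df <- fmxdat::J200

df_clean <- df %>% 
  group_by(Tickers) %>% 
  filter(sum(!is.na(Return)) >= 10) %>%  # drop stocks with fewer than 10 valid returns
  ungroup() %>% 
  group_by(date) %>% 
  mutate(weight = weight / sum(weight, na.rm = TRUE)) %>% # reweight to sum to 1
  ungroup()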

End

That’s it.

Please take the time to go through the code and understand, specifically, the code used to construct the random portfolio.