• Introduction
    • rmsfuns package
    • dateconverter
    • ViewXL
    • load_pkg
    • PromptAsTime
    • build_path
    • tbl2xts package
    • Load packages relevant to this tutorial
  • Setting up your Practical folder
    • Project
  • ggplot Auxiliary functions
  • Returns Series
    • Let’s first discuss a primer on returns:
    • Tale of Two Returns…
      • Which should we use?
    • Back to our example…
      • Gathering
      • Now for the above in tidy format using simple returns
  • xts package
    • period.apply
    • Combining dplyr and tbl2xts
  • Performance Analytics Package
    • Plotting functionalities
    • DIY
    • Financial Ratios
    • Your turn….
    • Annualizing Returns
    • Calculating Rolling Returns
      • Annualized Standard Deviation
    • References

Introduction

The aim of this tutorial is to introduce you to some basic concepts in portfolio and financial analysis in R.

In particular, we will be looking at the xts and PerformanceAnalytics packages to do risk management and return calculations.

Important: I assume you have gone through and thoroughly recapped what we did in the data science course.

rmsfuns package

rmsfuns is a package that offers several convenient wrappers and commands that we will often use in this course. See the details of this package here and the source code here.

Install the rmsfuns package as follows:

if (!require("rmsfuns")) install.packages("rmsfuns")
library(rmsfuns)
library(tidyverse)
pacman::p_load(tbl2xts)
  • Below is a quick outline of the wrapper functions that might be useful to you:

dateconverter

  • This function allows you to fill in dates between a start and end date. This will save you pain in R:
library(rmsfuns)
dates1 <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"),
    "alldays")
dates2 <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"),
    "weekdays")
dates3 <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"),
    "calendarEOM")  # Calendar end of month
dates4 <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"),
    "weekdayEOW")  # weekday end of week
dates5 <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"),
    "weekdayEOM")  # weekday end of month
dates6 <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"),
    "weekdayEOQ")  # weekday end of quarter
dates7 <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"),
    "weekdayEOY")  # weekday end of year

ViewXL

  • If you want to view any dataframe from R straight in Excel, use ViewXL. This will open an Excel sheet in your temporary folder, which you can then save to a different location. This is great for quickly viewing your calculations, double checking what you’ve done in R, or simply exploring your file in Excel.

Let’s create a random data frame, with a date column and random returns:

datescol <- dateconverter(as.Date("2015-01-01"), as.Date("2017-05-01"),
    "weekdays")

RandomReturns <- rnorm(n = length(datescol), mean = 0.01, sd = 0.03)

# This creates a vector of random data with a mean of 0.01
# and sd of 0.03
df <- data.frame(Date = datescol, Return = RandomReturns)

# Let's now quickly view this in excel..
ViewXL(df)

load_pkg

  • Another useful wrapper that you can use in your code is load_pkg, which loads a package by name.

  • This will first check if you have the package installed, and load it if true. If not, it will attempt to install it from CRAN.

  • If it is not on CRAN, you should probably first double check the package’s validity…

I, however, have since replaced this call with pacman’s p_load. Up to you…

Use it as e.g.:

load_pkg("PerformanceAnalytics")

PromptAsTime

  • Sometimes, when running long calculations, you may want to time each step.

  • While there are many ways to skin this cat, it might be easiest to make your prompt (what you see at the bottom of your RStudio console) display the current time. You can then track how long each step of your estimation took (explicitly saving the time could be done using Sys.time(), as sketched below the example):

PromptAsTime(TRUE)
x <- 100
Sys.sleep(3)
x * x
print(x)
PromptAsTime(FALSE)

Now you can see after the fact that x * x took 3 seconds (not really - I purposefully made it lag with Sys.sleep - but you get the point!).
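
If you’d rather record timings explicitly in your script, a minimal sketch using base R’s Sys.time() could look as follows (my own illustration, not part of rmsfuns):

# Record the start time, run the (slow) step, then take the difference:
start_time <- Sys.time()
Sys.sleep(3)                           # stand-in for a long calculation
elapsed <- Sys.time() - start_time
print(elapsed)                         # roughly 3 seconds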

build_path

Building path structures in R can be challenging, but build_path makes this easy. It could, e.g., be used in a function that creates folders and populates them with figures or data points. Suppose, e.g., you want to create a folder structure as follows:

  • /Results/Figures/Financials/Figures/Return_plot.png

  • /Results/Figures/Industrials/Figures/Return_plot.png

  • /Results/Figures/HealthCare/Figures/Return_plot.png

To create the folders to house the plots (and save their locations in a vector to use it later), run:

# Specify a root on your computer, e.g.:
Root <- "C:/Finmetrics/Practical2/Folder_Example"

# Specify the sectors:
Sectors <- c("Financials", "Industrials", "HealthCare")

# Construct the structure and bind them together:

# base R's paste0:
Locs <- build_path(paste0(Root, "/", Sectors, "/Figures/"))

# glue's glue... (preferred)
Locs <- build_path(glue::glue("{Root}/{Sectors}/Figures/"))

Another package that we will be using in this session is the tbl2xts package. I created this to facilitate the transition from coding in dplyr (using tbl_dfs or dataframes) to xts (which can be quite a painful exercise normally).

tbl2xts package

We will get to using this later; for now, install it as follows (you can view the documentation for it here and the source code here):

pacman::p_load("tbl2xts")

Run the examples in the documentation page to see how easy it is to transition to and from xts, the power of which you’ll see a bit later.
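
As a minimal sketch of that round trip (reusing, purely for illustration, the df object with a Date and Return column created in the ViewXL example above):

library(tbl2xts)
df_xts <- tbl_xts(df)        # the Date column becomes the xts index
df_back <- xts_tbl(df_xts)   # and back to a tibble (the index becomes a 'date' column)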

Load packages relevant to this tutorial

We can now load the packages that we will primarily be using in this tutorial. Use pacman::p_load to load a list of packages, installing any from CRAN that are not already on your computer.

pacman::p_load("xts", "tidyverse", "tbl2xts", "PerformanceAnalytics",
    "lubridate", "glue")

Setting up your Practical folder

Let’s create a folder to save all your code and work in. First ensure you have the latest version of fmxdat installed (close all instances of R beforehand):

pacman::p_install_gh("Nicktz/fmxdat")

Now, the updated make_project function requires only a copied directory location (use backslashes, not forward slashes, as per the example). It will check if the directory is empty, and then build a folder outline.

The easiest approach is to create a folder location by hand, copy the address and run fmxdat::make_project.

To see this in action, copy the first part (Ctrl + C) and run the second in R:

# Copy this part: C:\Notes\Finmetrics\Practicals\Practical1
# Run this:
fmxdat::make_project(Open = T)

Project

  • Now proceed to set up a project in this folder. Also, edit the README in the root folder to explain that this folder is used for working through this tutorial.

    • Make notes in here as you work through the tutorial.
# Load the data:
data <- fmxdat::BRICSTRI

ggplot Auxiliary functions

fmxdat now also comes with a suite of updated ggplot aux functions.

This should make coding easier and stop you from needing to google the same commands frequently.

Let me illustrate a few additions:

library(tidyverse);library(fmxdat)

p <- 
  fmxdat::Jalshtr %>% mutate(Index = "ALSI") %>% 
  ggplot() + 
  geom_line(aes(date, TRI, color = Index), size = 2, alpha = 0.7) + 
  # Nice clean theme, with many additions that are now simplified (explore this yourself):
  # E.g. using fmxdat::ggpts, we can change the sizes more easily in millimeters. 
  # theme_fmx also offers simplified size settings, as e.g. below:
  fmxdat::theme_fmx(title.size = ggpts(30), 
                    subtitle.size = ggpts(28),
                    caption.size = ggpts(25),
                    # Makes a nicer caption. If no caption is given, this will break the function, so be careful:
                    CustomCaption = T) + 
  # crisp colours:
  fmxdat::fmx_cols() + 
  labs(x = "", y = "Cumulative Returns", caption = "Note:\nCalculation own",
       title = "Illustrating fmxdat Auxilliary functions for ggplot",
       subtitle = "If not subtitle, make blank and Subtitle size small to make a gap\nbetween plot and Title. Test this yourself") + 
  guides(color = F)
  
# Finplot now adds finishing touches easily:

  fmxdat::finplot(p, x.vert = T, x.date.type = "%Y", x.date.dist = "2 years")

Returns Series

The data loaded above is currently in Total Return Index (TRI) format. This is a price series adjusted for dividends, stock splits and other corporate actions that would otherwise create pricing distortions.

When analyzing returns, always use TRI.

We would now like to transform it into a usable returns series for further data analysis. This could be done using several techniques.

Let’s first discuss a primer on returns:

From the class notes remember we discussed:

  • Simple Returns:

    $R_t = \frac{P_t}{P_{t-1}} - 1$

  • Simple Returns with dividends $D_t$:

    $R_t = \frac{P_t + D_t - P_{t-1}}{P_{t-1}} = \frac{P_t}{P_{t-1}} - 1 + \frac{D_t}{P_{t-1}}$

  • Simple Returns (Real):

    $1 + R_t^{Real} = \frac{P_t}{P_{t-1}} \times \frac{CPI_{t-1}}{CPI_t}$

  • Excess Returns:

    $R_t^{excess} = R_t - R^{RF}$

  • Continuously compounded (log) returns:

    $r_t = \ln\left(\frac{P_t}{P_{t-1}}\right) = \ln(P_t) - \ln(P_{t-1}) = \ln(1 + R_t)$

  • Log returns, dividend adjusted:

    $r_t = \ln(1 + R_t) = \ln\left(\frac{P_t + D_t}{P_{t-1}}\right) = \ln(P_t + D_t) - \ln(P_{t-1})$

  • Log returns (Real):

    $r_t^{Real} = \ln(1 + R_t^{Real}) = \ln\left(\frac{P_t}{P_{t-1}} \times \frac{CPI_{t-1}}{CPI_t}\right)$

Tale of Two Returns…

Aggregation over time is easiest to do using log returns (can sum across time dimensions), while aggregation over assets is more easily done with simple returns (can sum across assets within time t).

Let’s explore this a bit.

Calculating the cumulative (compound) simple return from the weekly index returns:

$1 + R_t[k] = \frac{P_t}{P_{t-k}} = \frac{P_t}{P_{t-1}} \times \dots \times \frac{P_{t-k+1}}{P_{t-k}} = \prod_{j=0}^{k-1}(1 + R_{t-j})$

Thus the $k$-period simple gross return is just the product of the $k$ one-period simple gross returns; this is the compound return. NOTE: this differs (by exactly one) from the $k$-period simple net return $R_t[k] = \frac{P_t - P_{t-k}}{P_{t-k}}$.

For continuously compounded series it’s even easier: here we just sum the weekly returns to get the monthly return (check the math and class notes!):

$r_t[k] = r_t + r_{t-1} + \dots + r_{t-k+1}$
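
A quick numerical check (my own sketch, with made-up weekly returns) that these two aggregation routes agree:

R <- c(0.02, -0.01, 0.03)      # three weekly simple returns
prod(1 + R) - 1                # compound k-period simple return
exp(sum(log(1 + R))) - 1       # identical: exponentiate the summed log returns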

Which should we use?

Bacon (2011) goes into great detail on the various types of returns that can be calculated over and above the simple and continuously compounded returns mentioned here. Such returns could, e.g., be money-based or time-based, depending on the needs and wants of the investor. I omit a deeper discussion of this, but quote Bacon (2011) in the following:

Continuously compounded returns should be used in statistical analysis because, unlike simple returns, they are not positively biased… Bacon (2011, p. 29).

Back to our example…

Calculating simple returns of our five TRIs can then simply be done as follows:

# Although the rows should already be arranged by date,
# make sure of it first:
data <- fmxdat::BRICSTRI

d1 <- data %>%
    arrange(Date) %>%
    mutate(across(.cols = -Date, .fns = ~./lag(.) - 1))

# Important to arrange by date, else the calculation will
# clearly be wrong

# Also note, the above is equivalent to using:
# mutate(across( brz:zar, ~ ./lag(.) - 1) ). But this is
# terrible coding, as it requires the column order and
# names to stay constant (imagine you added another series
# or changed the names to capital letters).

Gathering

Gathering and placing data into tidy format makes calculations much easier and unleashes the real power of dplyr…

We will get to this later, but consider how the above can be done simply as follows:

data %>%
    gather(Country, TRI, -Date)
## # A tibble: 3,950 × 3
##    Date       Country   TRI
##    <date>     <chr>   <dbl>
##  1 2000-01-14 brz     1420.
##  2 2000-01-21 brz     1382.
##  3 2000-01-28 brz     1344.
##  4 2000-02-04 brz     1426.
##  5 2000-02-11 brz     1416.
##  6 2000-02-18 brz     1372.
##  7 2000-02-25 brz     1385.
##  8 2000-03-03 brz     1439.
##  9 2000-03-10 brz     1424.
## 10 2000-03-17 brz     1383.
## # ℹ 3,940 more rows
# Notice that it is gathering data in a long format. Now we
# can compute returns as:
data %>%
    gather(Country, TRI, -Date) %>%
    group_by(Country) %>%
    mutate(Return = TRI/lag(TRI) - 1) %>%
    ungroup()
## # A tibble: 3,950 × 4
##    Date       Country   TRI   Return
##    <date>     <chr>   <dbl>    <dbl>
##  1 2000-01-14 brz     1420. NA      
##  2 2000-01-21 brz     1382. -0.0267 
##  3 2000-01-28 brz     1344. -0.0279 
##  4 2000-02-04 brz     1426.  0.0615 
##  5 2000-02-11 brz     1416. -0.00709
##  6 2000-02-18 brz     1372. -0.0315 
##  7 2000-02-25 brz     1385.  0.00960
##  8 2000-03-03 brz     1439.  0.0392 
##  9 2000-03-10 brz     1424. -0.0107 
## 10 2000-03-17 brz     1383. -0.0289 
## # ℹ 3,940 more rows

The log returns can also easily be calculated using dplyr notation:

# Check that there are no missing values:
colSums(is.na(data))

# Calculate the log difference of each column:
d1 <- data %>%
    arrange(Date) %>%
    mutate(across(.cols = -Date, .fns = ~log(.) - log(lag(.))))

Study ?across examples, and note the following:

If you want to apply multiple functions inside across, as well as name it, note the following notation:

data %>%
    arrange(Date) %>%
    gather(Tickers, Returns, -Date) %>%
    # Remember - before a function, just add ~
mutate(across(.cols = Returns, .fns = ~./lag(.) - 1, .names = "Simple_{col}")) %>%
    mutate(across(.cols = Returns, .fns = ~log(.) - lag(log(.)),
        .names = "Dlog_{col}")) %>%
    filter(Date > first(Date)) %>%
    group_by(Tickers) %>%
    mutate(across(.cols = c(starts_with("Simple"), starts_with("Dlog")),
        .fns = list(Max = ~max(.), Min = ~min(.), Med = ~median(.)),
        .names = "{fn}_{col}")) %>%
    ungroup()
## # A tibble: 3,945 × 11
##    Date       Tickers Returns Simple_Returns Dlog_Returns Max_Simple_Returns
##    <date>     <chr>     <dbl>          <dbl>        <dbl>              <dbl>
##  1 2000-01-21 brz       1382.       -0.0267      -0.0270               0.292
##  2 2000-01-28 brz       1344.       -0.0279      -0.0283               0.292
##  3 2000-02-04 brz       1426.        0.0615       0.0597               0.292
##  4 2000-02-11 brz       1416.       -0.00709     -0.00711              0.292
##  5 2000-02-18 brz       1372.       -0.0315      -0.0320               0.292
##  6 2000-02-25 brz       1385.        0.00960      0.00956              0.292
##  7 2000-03-03 brz       1439.        0.0392       0.0385               0.292
##  8 2000-03-10 brz       1424.       -0.0107      -0.0107               0.292
##  9 2000-03-17 brz       1383.       -0.0289      -0.0293               0.292
## 10 2000-03-24 brz       1459.        0.0555       0.0540               0.292
## # ℹ 3,935 more rows
## # ℹ 5 more variables: Min_Simple_Returns <dbl>, Med_Simple_Returns <dbl>,
## #   Max_Dlog_Returns <dbl>, Min_Dlog_Returns <dbl>, Med_Dlog_Returns <dbl>

But if you’re a bit unsure about your calculations, you could always use the PerformanceAnalytics package. This is a very powerful package for all kinds of risk, return and portfolio calculations.

The problem previously was the effort of moving from dplyr to xts - so use the tbl2xts package (with tbl_xts and xts_tbl) to facilitate this transition. We will cover it a bit further down…

library(PerformanceAnalytics)
library(tbl2xts)

d2 <- data %>%
    arrange(Date) %>%
    gather(Tickers, Returns, -Date) %>%
    # You can skip gather and spread here - more for
    # illustration:
tbl2xts::tbl_xts(cols_to_xts = Returns, spread_by = Tickers) %>%
    PerformanceAnalytics::Return.calculate(., method = "log") %>%
    tbl2xts::xts_tbl()

# Now you can compare d1 and d2 - they're exactly the same

To get monthly returns from the weekly series, we can simply trim the dates to the last date of each month (note our earlier equations - tread carefully here…).

First, let’s use a very nice feature of as.Date(): define the year and month as a column, and then use group_by() to calculate the monthly returns for each:

Year_Month <- function(x) format(as.Date(x), "%Y_%B")
# The following creates a Year and Month column:
cumdata <- data %>%
    mutate(YM = Year_Month(Date))

# Now we can use group_by() to calculate the monthly dlog
# returns for each numeric column (notice the use of
# across(where(is.numeric), ...)):
cumdata_Logret_Monthly <- cumdata %>%
    arrange(Date) %>%
    filter(Date >= lubridate::ymd(20000128)) %>%
    group_by(YM) %>%
    filter(Date == last(Date)) %>%
    ungroup() %>%
    mutate(across(.cols = where(is.numeric), .fns = ~log(.) -
        lag(log(.)))) %>%
    filter(Date > first(Date)) %>%
    mutate(Year = format(Date, "%Y")) %>%
    group_by(Year) %>%
    # Now we can sum to get to annual returns:
summarise(across(.cols = where(is.numeric), .fns = ~sum(.)))
  • Notice that if we had simple returns, it is not as simple as just summing… We would need to chain our weekly returns geometrically (using cumprod) and then calculate the monthly returns from that.

Of course, since we only go from weekly to monthly for illustration purposes, you could’ve just taken the TRI values for the last day of each month and computed returns from those…

Now for the above in tidy format using simple returns

Year_Month <- function(x) format(as.Date(x), "%Y_%B")
Cols_to_Gather <- data %>%
    select_if(is.numeric) %>%
    names()

# First get weekly, then use that to get monthly (for
# illustration):
data %>%
    arrange(Date) %>%
    mutate(YM = Year_Month(Date)) %>%
    gather(Country, TRI, all_of(Cols_to_Gather)) %>%
    group_by(Country) %>%
    mutate(WeeklyReturn = TRI/lag(TRI) - 1) %>%
    filter(Date >= lubridate::ymd(20000128)) %>%
    mutate(WeeklyReturn = coalesce(WeeklyReturn, 0)) %>%
    mutate(MonthlyIndex = cumprod(1 + WeeklyReturn)) %>%
    group_by(YM) %>%
    filter(Date == last(Date)) %>%
    group_by(Country) %>%
    mutate(MonthlyReturn = MonthlyIndex/lag(MonthlyIndex) - 1)
## # A tibble: 910 × 7
## # Groups:   Country [5]
##    Date       YM           Country   TRI WeeklyReturn MonthlyIndex MonthlyReturn
##    <date>     <chr>        <chr>   <dbl>        <dbl>        <dbl>         <dbl>
##  1 2000-01-28 2000_January brz     1344.     -0.0279         0.972       NA     
##  2 2000-02-25 2000_Februa… brz     1385.      0.00960        1.00         0.0306
##  3 2000-03-31 2000_March   brz     1426.     -0.0230         1.03         0.0296
##  4 2000-04-28 2000_April   brz     1240.      0.0274         0.897       -0.131 
##  5 2000-05-26 2000_May     brz     1173.      0.0181         0.849       -0.0534
##  6 2000-06-30 2000_June    brz     1369.     -0.00220        0.991        0.167 
##  7 2000-07-28 2000_July    brz     1346.     -0.0524         0.974       -0.0167
##  8 2000-08-25 2000_August  brz     1401.      0.0143         1.01         0.0408
##  9 2000-09-29 2000_Septem… brz     1301.      0.00263        0.941       -0.0716
## 10 2000-10-27 2000_October brz     1169.     -0.0153         0.846       -0.101 
## # ℹ 900 more rows
# Notice the above was redundant, as the data was already
# in indexed (price) form - so you might as well just have
# calculated monthly returns directly without needing to
# use cumprod(1+R).

Notice how simple returns have to be geometrically chained when aggregating across the time dimension. Within a given time period, however, simple returns can simply be summed across assets (weighted by portfolio weights), as the quick check below and the equally weighted portfolio example after it illustrate.
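
A quick sanity check of the within-period aggregation (made-up numbers, my own sketch):

R_assets <- c(0.02, -0.01, 0.04)   # simple returns of three assets in period t
w <- rep(1/3, 3)                   # equal weights
sum(w * R_assets)                  # the portfolio's simple return for period t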

  • Below I assume an equally weighted portfolio, rebalanced every week back to equal weights.
Cols_to_Gather <- data %>%
    select_if(is.numeric) %>%
    names

data %>%
    arrange(Date) %>%
    mutate(YM = Year_Month(Date)) %>%
    gather(Country, TRI, all_of(Cols_to_Gather)) %>%
    mutate(Weight = 1/length(Cols_to_Gather)) %>%
    # By country to get return:
group_by(Country) %>%
    mutate(Ret = TRI/lag(TRI) - 1) %>%
    # By date to calculate EW portfolio:
group_by(Date) %>%
    summarise(EW_Port = sum(Ret * Weight, na.rm = T))
  • We will cover in a later practical how to calculate returns when we do not rebalance every period back to target weights…

xts package

Using tbl2xts you can now move very easily between data.frames and the xts environment, and make use of packages like PerformanceAnalytics.

E.g., let’s take our earlier data and do some nice things with it:

While the above is the by-hand way of calculating such returns, we could also subset dates using the powerful xts package. See the following useful commands (and keep these subsetting tools in mind for your write-ups):

# ======== useful xts subsetting commands: First do the xts
# transformation:
xts.data <- tbl_xts(data)

# Subsetting the xts.data
xts.data["2013"]  # Dates in 2013

xts.data["2013/"]  # Dates since start of 2013

xts.data["2015-1/2015-2"]  # Dates from start of Jan through to end of Feb 2015

xts.data["/2008-9"]  # Dates prior to 2008 October (before Crisis period)

first(xts.data, "1 month")  # The first one month of xts.data

# Of course, you can easily truncate and move back to a
# tbl_df() by using: xts_tbl
truncated_df <- xts_tbl(xts.data["2013/"])

To see the potential of xts for subsetting daily returns, let’s use a daily return series and subset it:

dailydata <- fmxdat::DailyTRIs
# Let's focus our analysis on two companies only: SLM SJ
# and SOL SJ
xts.data <- dailydata %>%
    select(Date, "SLM SJ", "SOL SJ") %>%
    tbl_xts()

# If you want to remove the SJ from the column names, use:
colnames(dailydata) <- gsub(" SJ", "", colnames(dailydata))

# Now we can pipe:
xts.data <- dailydata %>%
    select(Date, SLM, SOL) %>%
    tbl_xts()

### What can xts do?

# Use the generic .index* functions to extract certain
# months or days of the month. See ?.index for more...
xts.data[.indexwday(xts.data) == 5]  # Fridays only

xts.data[.indexwday(xts.data) == 1]  # Mondays only

xts.data[.indexmon(xts.data) == 0]  # Dates in Jan across all years

xts.data[.indexmday(xts.data) == 1]  # First day of every month

# Using the endpoints
xts.data[endpoints(xts.data, "months", k = 3)]  # Last day of every quarter (hence k = 3)

# Plotting with default labels is simple
plot(xts.data[, "SLM"], major.ticks = "years", minor.ticks = FALSE,
    main = "Sanlam", col = 3, ylab = "SLM SJ TRI")

# Let's suppose SLM had some dates with no values. Let's
# create some random missing Values:
xts.data[150, "SLM"] <- NA

xts.data[1990, "SLM"] <- NA

# If we now wanted to see whether there are any missing
# values and identify the dates:
index(xts.data)[is.na(xts.data)[, "SLM"]]

# Let's proceed to 'pad' these NA values with the previous
# day's TRI values. We do so using the last observation
# carried forward (na.locf) command - which looks for NAs
# and replaces them with the last valid observation. It
# fills gaps of at most 5 days in our example:
xts.data.padded <- na.locf(xts.data, na.rm = F, maxgap = 5)

# See what happened, compare non-padded and padded:
xts.data.padded["2003-08-05/2003-08-10"]$SLM

xts.data["2003-08-05/2003-08-10"]$SLM

# To get the last date of each month / Week to create
# monthly returns:
xts.data.padded.monthly <- xts.data.padded[endpoints(xts.data.padded,
    "months")]

xts.data.padded.weekly <- xts.data.padded[endpoints(xts.data.padded,
    "weeks")]

# Create a monthly return series for each column:
xts.data.padded.monthly.returns <- diff.xts(xts.data.padded.monthly,
    lag = 1, log = F, arithmetic = F) - 1
# Notice that diff.xts() offers new params (lag, log,
# arithmetic arguments..)

# You can also quickly view periodicity as:
periodicity(xts.data.padded.monthly.returns)

period.apply

xts has a very powerful utility called period.apply.
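
As a minimal, self-contained sketch of what period.apply does (simulated daily data, purely for illustration): it applies a function over the stretches of observations between the endpoints you supply.

library(xts)
set.seed(42)
dts <- seq(as.Date("2020-01-01"), as.Date("2020-12-31"), by = "day")
x <- xts(rnorm(length(dts), mean = 0, sd = 0.01), order.by = dts)
ep <- endpoints(x, on = "quarters")       # index of the last observation in each quarter
period.apply(x, INDEX = ep, FUN = mean)   # same idea as apply.quarterly(x, mean)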

See e.g. below how we can do quite advanced calcs pretty simply using tbl_xts, xts and PerformanceAnalytics.

  • We first calculate daily log returns.

  • We then calculate monthly annualised SD numbers.

library(tbl2xts)

library(PerformanceAnalytics)

dailydata <- fmxdat::DailyTRIs

dailydata <- dailydata %>%
    gather(Stocks, Px, -Date) %>%
    mutate(Stocks = gsub(" SJ", "", Stocks))

Monthly_Annualised_SD <- dailydata %>%
    # Make xts:
    tbl_xts(., cols_to_xts = Px, spread_by = Stocks) %>%
    PerformanceAnalytics::CalculateReturns(., method = "log") %>%
    na.locf(., na.rm = F, maxgap = 5) %>%
    xts::apply.monthly(., FUN = PerformanceAnalytics::sd.annualized) %>%
    # Make tibble again:
    xts_tbl() %>%
    gather(Stock, SD, -date) %>%
    mutate(date = as.Date(date))

Please see all the examples here.

E.g., piping straight into a ggplot also follows trivially:

tbl2xts::TRI %>%
    tbl_xts(., cols_to_xts = TRI, spread_by = Country) %>%
    PerformanceAnalytics::Return.calculate(.) %>%
    xts::apply.yearly(., FUN = PerformanceAnalytics::CVaR) %>%
    xts_tbl %>%
    tidyr::gather(Country, CVaR, -date) %>%
    # Now plot it:
ggplot() + geom_line(aes(date, CVaR), color = "steelblue", size = 1.1,
    alpha = 0.8) + fmxdat::theme_fmx() + facet_wrap(~Country) +
    labs(x = "", title = "CVaR Plot example", subtitle = "Using tbl2xts package")

Combining dplyr and tbl2xts

While the above is nice, and there are really awesome wrangling techniques to use in xts - I would prefer you combine dplyr and xts to do the above.

See below for an example using the dplyr - xts - PerformanceAnalytics combination to do, e.g., portfolio return calculations.

  • Let’s use our dailydata series to do some quick calcs:

    • Calculate Returns (simple) for all stocks.

    • Calculate the cumulative returns for our stocks (if you invested R1 in a stock, what would you have now?):

      $\prod_i (1 + r_i) = 1 + r_{Full}$

      • Bonus: plot Naspers' cumulative returns.

    • Calculate the portfolio returns for an equally weighted portfolio bought on 12 April 2003 and held to the end of January 2004.

Important notice:

Below we will be using rmsfuns::Safe_Return.portfolio for the portfolio return calculation.

To see the motivation for this, please work through the following public gist:

https://gist.github.com/Nicktz/dddbe80ac427b7a96a58c39ef6e6f0d8

# First: calculate ordinary returns
library(lubridate)

#------------------ 
# Step one: gather to make tidy:
dailydata <- fmxdat::DailyTRIs

colnames(dailydata) <- gsub(" SJ", "", colnames(dailydata))

Data_Adj <- dailydata %>%
    gather(Stocks, Price, -Date) %>%
    # Then arrange by date (as we'll be using lags)...
arrange(Date) %>%
    # Now calculate ordinary returns per stock:
group_by(Stocks) %>%
    mutate(Returns = Price/lag(Price) - 1) %>%
    ungroup()

#------------------ 
# Second: Calculate Cumulative Returns
CumRets <- Data_Adj %>%
    # for cumulative returns - we have to change NA to
    # zero, else an NA would break the chain....
mutate(Returns = coalesce(Returns, 0)) %>%
    # Any NA in Returns is changed to zero
group_by(Stocks) %>%
    mutate(Cum_Ret = cumprod(1 + Returns))

# Bonus: Figure for Naspers:
ggplot(CumRets %>%
    filter(Stocks == "NPN")) + geom_line(aes(Date, Cum_Ret),
    color = "steelblue") + theme_bw() + labs(title = "Naspers cumulative Return",
    y = "Growth of R1 invested in 2003.")

#------------------ 
# Third: Calculate the equally weighted portfolio return.
# First some date wrangling...
Trimmed_Ret_Data <- Data_Adj %>%
    filter(Date >= lubridate::ymd(20030412) & Date < lubridate::ymd(20040101))

# Now, we will be using PerformanceAnalytics'
# Return.portfolio() which, if you type ?Return.portfolio,
# shows that it wants the weight vector as xts (with
# stock names and weights), as well as the returns in xts.
# With tbl2xts, this is trivial.

# First the weights:
W_xts <- Trimmed_Ret_Data %>%
    # Check this cool trick:
filter(Date == first(Date)) %>%
    mutate(weight = 1/n()) %>%
    tbl_xts(., cols_to_xts = weight, spread_by = Stocks)

# Now Returns:
R_xts <- Trimmed_Ret_Data %>%
    tbl_xts(., cols_to_xts = Returns, spread_by = Stocks)

# Now... first ensure that column names between R_xts and
# W_xts match:
R_xts <- R_xts[, names(W_xts)]

# Set all NA returns to zero:
R_xts[is.na(R_xts)] <- 0
# Set all NA weights to zero:
W_xts[is.na(W_xts)] <- 0

# Now calculate the portfolio returns:
Portfolio <- rmsfuns::Safe_Return.portfolio(R = R_xts, weights = W_xts,
    geometric = TRUE)

Portf_Rets <- Portfolio$portfolio.returns %>%
    xts_tbl()  # because who has time for xts now...

# And that's a wrap. Simple!

Performance Analytics Package

Below I show you how to calculate some of the metrics defined in Performance Analytics.

For a technical discussion into the definitions and uses of the return metrics calculated in PerformanceAnalytics package, see Bacon (2011).

Plotting functionalities

PA offers several useful plots for return time series. E.g., using the xts.data.dailyreturns series created below, let’s look at the distributional properties of Sanlam:

pacman::p_load(tbl2xts, PerformanceAnalytics)

dailydata <- fmxdat::DailyTRIs

colnames(dailydata) <- gsub(" SJ", "", colnames(dailydata) )

xts.data.dailyreturns <- 
  dailydata %>% 
  gather(Stock, TRI, -Date) %>% 
  group_by(Stock) %>% 
  mutate(Return = TRI / lag(TRI)-1) %>%  
  ungroup() %>% 
  tbl_xts(tblData = ., cols_to_xts = Return, spread_by = Stock)

chart.Histogram(xts.data.dailyreturns$SLM, methods = "add.normal", 
                main = "Sanlam Histogram")

# Add normal line and VaR estimates
chart.Histogram(xts.data.dailyreturns$SLM, 
  methods = c("add.density", "add.normal", "add.risk"),
  main = "Adding Value at Risk (95%)")

chart.Boxplot(xts.data.dailyreturns$SLM, main = "Sanlam Boxplot")

chart.QQPlot(xts.data.dailyreturns$SLM, distribution = "norm", 
             main = "QQ-Plot of Sanlam")

chart.Drawdown(xts.data.dailyreturns$SLM,main = "Drawdowns: Sanlam", 
               col = "steelblue")

chart.CumReturns(xts.data.dailyreturns$SLM, main = "Cumulative Returns: Sanlam", 
               col = "steelblue")

chart.Scatter(x = xts.data.dailyreturns$SLM, 
              y = xts.data.dailyreturns$SOL, 
              main = "Scatter: Sanlam & Sasol", col = "steelblue", 
              xlab = "Sanlam", ylab = "Sasol")

## Or check out this risk-return scatter for several stocks since 2003...
chart.RiskReturnScatter(R=xts.data.dailyreturns['2003-01-01/'][,c("AGL", "AMS", "ANG", "AOD")])

chart.RollingPerformance(R=xts.data.dailyreturns['2003-01-01/'][,c("AGL", "AMS", "ANG", "AOD")],
                         
                         FUN="sd",
                         
                         width=120, 
                         
                         main="Rolling 120 day Standard Deviation", 
                         
                         legend.loc="bottomleft")

DIY

Attempt to replicate all these graphs in ggplot.

Financial Ratios

PA also helps with the calculation of some crucial financial ratios and metrics. You can browse through these in the vignette; here follow a couple of highlights:

table.Stats(xts.data.dailyreturns["2003-01-01/"][, c("AGL", "AMS",
    "ANG", "AOD")])
##                      AGL      AMS      ANG      AOD
## Observations    500.0000 500.0000 500.0000 175.0000
## NAs               1.0000   1.0000   1.0000 326.0000
## Minimum          -0.0629  -0.0632  -0.0536  -0.0882
## Quartile 1       -0.0141  -0.0054  -0.0050  -0.0149
## Median           -0.0019   0.0000   0.0001  -0.0003
## Arithmetic Mean  -0.0005   0.0019   0.0012   0.0005
## Geometric Mean   -0.0007   0.0017   0.0011   0.0001
## Quartile 3        0.0127   0.0083   0.0076   0.0121
## Maximum           0.0804   0.0839   0.0500   0.0801
## SE Mean           0.0010   0.0007   0.0006   0.0020
## LCL Mean (0.95)  -0.0024   0.0005   0.0001  -0.0035
## UCL Mean (0.95)   0.0014   0.0032   0.0024   0.0044
## Variance          0.0005   0.0002   0.0002   0.0007
## Stdev             0.0218   0.0154   0.0134   0.0265
## Skewness          0.2228   0.2732   0.0509   0.4171
## Kurtosis          0.6652   3.4540   2.6745   1.1238
table.TrailingPeriods(R = xts.data.dailyreturns["2003-01-01/"][,
    c("AGL", "AMS", "ANG", "AOD")], periods = c(3, 6, 12, 36,
    60))
##                         AGL    AMS     ANG     AOD
## Last 3 day Average  -0.0146 0.0037  0.0074 -0.0073
## Last 6 day Average  -0.0088 0.0028  0.0074  0.0177
## Last 12 day Average -0.0040 0.0007 -0.0006  0.0100
## Last 36 day Average -0.0050 0.0042  0.0039  0.0080
## Last 60 day Average -0.0035 0.0032  0.0053  0.0009
## Last 3 day Std Dev   0.0097 0.0032  0.0108  0.0256
## Last 6 day Std Dev   0.0109 0.0058  0.0092  0.0385
## Last 12 day Std Dev  0.0108 0.0059  0.0160  0.0295
## Last 36 day Std Dev  0.0136 0.0199  0.0163  0.0236
## Last 60 day Std Dev  0.0177 0.0187  0.0162  0.0271
table.DownsideRisk(xts.data.dailyreturns["2003-01-01/"][, c("AGL",
    "AMS", "ANG", "AOD")])
##                                   AGL     AMS     ANG     AOD
## Semi Deviation                 0.0150  0.0105  0.0093  0.0174
## Gain Deviation                 0.0146  0.0119  0.0100  0.0201
## Loss Deviation                 0.0132  0.0107  0.0094  0.0152
## Downside Deviation (MAR=210%)  0.0204  0.0143  0.0136  0.0222
## Downside Deviation (Rf=0%)     0.0153  0.0096  0.0087  0.0171
## Downside Deviation (0%)        0.0153  0.0096  0.0087  0.0171
## Maximum Drawdown               0.4275  0.1688  0.2181  0.3968
## Historical VaR (95%)          -0.0364 -0.0210 -0.0182 -0.0385
## Historical ES (95%)           -0.0460 -0.0326 -0.0297 -0.0520
## Modified VaR (95%)            -0.0347 -0.0212 -0.0198 -0.0391
## Modified ES (95%)             -0.0438 -0.0290 -0.0292 -0.0486

Using PerformanceAnalytics’ own data, see the following summary metrics of fund performance relative to benchmarks:

data(managers)

Rf <- managers$"US 3m TR"  # Risk-free rate

Rb <- managers$"SP500 TR"  # Benchmark

# CAPM Beta:
CAPM.beta(Ra = managers$HAM1, Rb = Rb, Rf = Rf)
## [1] 0.3900712
# As it is monthly data, the annualized alpha would then be calculated as:
(1 + CAPM.alpha(Ra = managers$HAM1, Rb = Rb, Rf = Rf))^12 - 1
## [1] 0.0715406
# Rolling 24 month Beta:
chart.RollingRegression(Ra = managers$HAM1, Rb = Rb, width = 24,
    attribute = c("Beta"))

# Summary:
table.SFM(Ra = managers$HAM1, Rb = Rb)
##                     HAM1 to SP500 TR
## Alpha                         0.0077
## Beta                          0.3906
## Beta+                         0.3010
## Beta-                         0.4257
## R-squared                     0.4357
## Annualized Alpha              0.0969
## Correlation                   0.6601
## Correlation p-value           0.0000
## Tracking Error                0.1132
## Active Premium                0.0408
## Information Ratio             0.3604
## Treynor Ratio                 0.3521

Your turn….

  • Load the J200 index prices since 2017 using the code below.

    • Compare the Financials and Industrials returns for 2017.

    • Find the 5 most volatile stocks for the index.

      • TIP: use: mutate(SD = sd(Returns, na.rm = TRUE)) %>% top_n(5, SD)
    • Use PerformanceAnalytics and compare the maximum drawdowns of the Industrials versus the Consumer Goods sectors.

    • Calculate all the Betas for the J200 stocks for 2018 (I’ll help you with this one below…).

J200 <- fmxdat::J200

# Rolling Betas:
df_J200 <- 
  J200 %>% 
  group_by(Tickers) %>% 
  mutate(Ret = Prices / lag(Prices) - 1) %>% 
  group_by(date) %>%
  mutate(J200 = sum(Ret * weight_Adj, na.rm=T)) %>% 
  ungroup()

TickChoice <- paste0( c("BIL", "SBK", "SOL"), " SJ Equity")

PerformanceAnalytics::chart.RollingRegression(Ra = df_J200 %>% filter(Tickers %in% TickChoice) %>%  
                                                
                                                mutate(Ret = coalesce(Ret, 0) ) %>% 
                                                tbl_xts(., cols_to_xts = Ret, spread_by = Tickers),
                                              Rb = df_J200 %>% group_by(date) %>% summarise(J200 = max(J200)) %>%
                                                tbl_xts(., cols_to_xts = J200),width=120,attribute = c("Beta"), legend.loc = "top")

# Static Betas:

CAPM.beta(Ra=df_J200 %>% filter(Tickers %in% TickChoice) %>%  
            mutate(Ret = coalesce(Ret, 0) ) %>%
            tbl_xts(., cols_to_xts = "Ret", spread_by = "Tickers"),
          Rb=df_J200 %>% group_by(date) %>% summarise(J200 = max(J200)) %>% tbl_xts(., cols_to_xts = "J200"))

Annualizing Returns

Something that can easily confuse young analysts is when and how to annualize returns.

The idea behind annualizing returns is to be able to make direct comparisons of returns across different time periods.
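
The mechanics are simply geometric: compound the returns over the window, then raise the result to 12 divided by the number of months in the window. A tiny worked example (made-up numbers):

R_cum <- 0.30                      # a 30% cumulative return earned over 36 months
n_months <- 36
(1 + R_cum)^(12 / n_months) - 1    # roughly 9.1% per year, annualized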

Let’s take the following example - you have been tasked to create a barplot of the annualized returns of the different ALSI sector indices using their monthly returns, on a 6 month, 12 month, 3 year, 5 year and 10 year basis, annualized. Do so for the following sectors: FINI, INDI, RESI, ALSI, MIDCAPS, and ALSI TOP40.

The following code chunk will achieve this manually. I will also show you an easy way to verify your numbers using PerformanceAnalytics afterwards.

options(dplyr.summarise.inform = F)
library(lubridate)

# make returns monthly for illustration:

idx <- fmxdat::SA_Indexes %>%
    filter(Tickers %in% c("FINI15TR Index", "INDI25TR Index",
        "JALSHTR Index", "MIDCAPTR Index", "RESI20TR Index",
        "TOP40TR Index")) %>%
    mutate(YM = format(date, "%Y%B")) %>%
    arrange(date) %>%
    group_by(Tickers, YM) %>%
    filter(date == last(date)) %>%
    group_by(Tickers) %>%
    mutate(ret = Price/lag(Price) - 1) %>%
    select(date, Tickers, ret) %>%
    ungroup()

Now - be careful with the trick below when calculating past returns using monthly data.

You will be tempted to calculate the (annualized) past 6-month return using:

idx %>%
    filter(date > last(date) %m-% months(6)) %>%
    summarise(mu = prod(1 + ret, na.rm = T)^(12/(6)) - 1)
## # A tibble: 1 × 1
##       mu
##    <dbl>
## 1 -0.443

While this is correct for the above, notice that lagging by six months needs to be done carefully, depending on when the last month ended. If it ended on Feb 28, then last(date) %m-% months(6) would give you seven months of data. Check:

idx %>%
    filter(date > last(date) %m-% months(6)) %>%
    pull(date) %>%
    unique
## [1] "2020-02-28" "2020-03-31" "2020-04-30" "2020-05-29" "2020-06-30"
## [6] "2020-07-31"
idx %>%
    filter(date <= ymd(20200228)) %>%
    filter(date > last(date) %m-% months(6)) %>%
    pull(date) %>%
    unique
## [1] "2019-08-30" "2019-09-30" "2019-10-31" "2019-11-29" "2019-12-31"
## [6] "2020-01-31" "2020-02-28"

Convince yourself why this is the case, given how %m-% handles month-end dates (see the small illustration below):
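
A small illustration of the mechanics, using the dates from the output above:

library(lubridate)
ymd("2020-07-31") %m-% months(6)   # "2020-01-31" - a clean six-month boundary
ymd("2020-02-28") %m-% months(6)   # "2019-08-28" - filtering date > this boundary also
                                   # keeps the 2019-08-30 month-end, i.e. seven months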

In such instances, you should instead use the following function for safer lagging of months (to get what we want to achieve):

idx %>%
    filter(date >= fmxdat::safe_month_min(last(date), N = 6)) %>%
    pull(date) %>%
    unique
## [1] "2020-02-28" "2020-03-31" "2020-04-30" "2020-05-29" "2020-06-30"
## [6] "2020-07-31"
idx %>%
    filter(date <= ymd(20200228)) %>%
    filter(date >= fmxdat::safe_month_min(last(date), N = 6)) %>%
    pull(date) %>%
    unique
## [1] "2019-09-30" "2019-10-31" "2019-11-29" "2019-12-31" "2020-01-31"
## [6] "2020-02-28"
# And the equivalent fmxdat::safe_year_min for years

My answer here resurfaced recently, with someone again noting it as the most sensible solution. The above function is tailored from that logic (for the reverse direction).

# Now, see my trick below to order and rename facet_wraps for plotting using Freq = A, B, ... (you'll see in the plot function why this is done)

#======================
# Manual Calculation:
#======================

dfplot <- 
    bind_rows(
      # Don't annualize for less than a year, e.g.:
      idx %>% filter(date >= fmxdat::safe_month_min(last(date), N = 6)) %>% group_by(Tickers) %>% 
        summarise(mu = prod(1+ret, na.rm=T) -1 ) %>% mutate(Freq = "A"),
      idx %>% filter(date >= fmxdat::safe_month_min(last(date), N = 12))  %>% group_by(Tickers) %>% 
        summarise(mu = prod(1+ret, na.rm=T) ^ (12/(12)) -1 ) %>% mutate(Freq = "B"),
            idx %>% filter(date >= fmxdat::safe_month_min(last(date), N = 36)) %>% group_by(Tickers) %>% 
        summarise(mu = prod(1+ret, na.rm=T) ^ (12/(36)) -1 ) %>% mutate(Freq = "C"),
      idx %>% filter(date >= fmxdat::safe_month_min(last(date), N = 60)) %>% group_by(Tickers) %>% 
        summarise(mu = prod(1+ret, na.rm=T) ^ (12/(60)) -1 ) %>% mutate(Freq = "D")
      
    )

#======================
# PerformanceAnalytics
#======================
library(tbl2xts);library(PerformanceAnalytics);library(fmxdat)
idxxts <- 
  tbl_xts(idx, cols_to_xts = ret, spread_by = Tickers)
dfplotxts <- 
    bind_rows(
      # Don't annualize for less than a year, e.g.:
      idxxts %>% tail(6) %>% PerformanceAnalytics::Return.annualized(., scale = 12) %>% data.frame() %>% mutate(Freq = "A"),
      idxxts %>% tail(12) %>% PerformanceAnalytics::Return.annualized(., scale = 12) %>% data.frame() %>% mutate(Freq = "B"),
      idxxts %>% tail(36) %>% PerformanceAnalytics::Return.annualized(., scale = 12) %>% data.frame() %>% mutate(Freq = "C"),
      idxxts %>% tail(60) %>% PerformanceAnalytics::Return.annualized(., scale = 12) %>% data.frame() %>% mutate(Freq = "D")
    ) %>% data.frame() %>% gather(Tickers, mu, -Freq) %>% 
  mutate(Tickers = gsub("\\.", " ", Tickers))



# Barplot_foo:
to_string <- as_labeller(c(`A` = "6 Months", `B` = "1 Year", `C` = "3 Years", `D` = "5 Years"))

  g <- 
  dfplot %>%
  # Compare to (they are the exact same):
  # dfplotxts %>%
  ggplot() + 
  geom_bar( aes(Tickers, mu, fill = Tickers), stat="identity") + 
  facet_wrap(~Freq, labeller = to_string, nrow = 1) + 
  labs(x = "", y = "Returns (Ann.)" , caption = "Note:\nReturns in excess of a year are in annualized terms.") + 
  fmx_fills() + 
  geom_label(aes(Tickers, mu, label = paste0( round(mu, 4)*100, "%" )), size = ggpts(8), alpha = 0.35, fontface = "bold", nudge_y = 0.002) + 
  theme_fmx(CustomCaption = T, title.size = ggpts(43), subtitle.size = ggpts(38), 
                caption.size = ggpts(30), 
                axis.size = ggpts(37), 
                legend.size = ggpts(35),legend.pos = "top") +

  theme(axis.text.x = element_blank(), axis.title.y = element_text(vjust=2)) + 
    
  theme(strip.text.x = element_text(face = "bold", size = ggpts(35), margin = margin(.1, 0, .1, 0, "cm"))) 
  
g

Notice the use of facet labels as per here.

Calculating Rolling Returns

Another very important calculation for evaluating and comparing the performance of different indices is to calculate rolling returns.

This follows as cumulative returns, while insightful in themselves, can be misleading: early outperformance can greatly skew the later picture. Also, if funds / indices have different start dates, the figure will be further distorted.

See the example below, where we first compare the cumulative returns of several selected indices (using idx as defined above), noting the distortion effects this creates even after considering log cumulative returns (which control for the level effects):

gg <- idx %>%
    arrange(date) %>%
    group_by(Tickers) %>%
    # Set NA Rets to zero to make cumprod work:
mutate(Rets = coalesce(ret, 0)) %>%
    mutate(CP = cumprod(1 + Rets)) %>%
    ungroup() %>%
    ggplot() + geom_line(aes(date, CP, color = Tickers)) + labs(title = "Illustration of Cumulative Returns of various Indices with differing start dates",
    subtitle = "", caption = "Note:\nDistortions emerge as starting dates differ.") +
    theme_fmx(title.size = ggpts(30), subtitle.size = ggpts(5),
        caption.size = ggpts(25), CustomCaption = T)

# Level plot
gg

# Log cumulative plot:
gg + coord_trans(y = "log10") + labs(title = paste0(gg$labels$title,
    "\nLog Scaled"), y = "Log Scaled Cumulative Returns")

Notice that the above cannot be sensibly interpreted - and it also doesn’t show the extent of the RESI20’s recent outperformance.

For this reason, we often look at rolling returns instead, in order to give a better indication of how the performance of different funds actually compares.

Let’s compare the returns above now on a rolling 3 year annualized basis:

plotdf <-
idx %>%
    group_by(Tickers) %>%
    # Epic sorcery:
mutate(RollRets = RcppRoll::roll_prod(1 + ret, 36, fill = NA,
    align = "right")^(12/36) - 1) %>%
    # Note this cool trick: it removes dates that have no
    # RollRets at all.
group_by(date) %>%
    filter(any(!is.na(RollRets))) %>%
    ungroup()

g <-
plotdf %>%
    ggplot() + geom_line(aes(date, RollRets, color = Tickers),
    alpha = 0.7, size = 1.25) + labs(title = "Illustration of Rolling 3 Year Annualized Returns of various Indices with differing start dates",
    subtitle = "", x = "", y = "Rolling 3 year Returns (Ann.)",
    caption = "Note:\nDistortions are not evident now.") + theme_fmx(title.size = ggpts(30),
    subtitle.size = ggpts(5), caption.size = ggpts(25), CustomCaption = T) +
    fmx_cols()

finplot(g, x.date.dist = "1 year", x.date.type = "%Y", x.vert = T,
    y.pct = T, y.pct_acc = 1)

Now that plot is a thing of beauty.

Annualized Standard Deviation

Standard deviation is mostly used as a means of estimating risk.

Similar to returns, this should be annualized when comparing different time-periods.

There is, however, a slight nuance to this calculation. You will often see code where analysts annualize monthly SD numbers using:

SD * sqrt(12)

Please note that this is only approximately correct when using simple returns (as we require geometric chaining to go from monthly to annual), while the correct annualization applies to the SD of logarithmic returns. This follows because the annual logarithmic return is the sum of its monthly constituents, so scaling the monthly SD by the square root of 12 is exact (assuming uncorrelated months).
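
A minimal sketch of the scaling, using simulated monthly log returns (my own numbers):

set.seed(1)
r_monthly <- rnorm(120, mean = 0.005, sd = 0.04)   # ten years of monthly log returns
sd(r_monthly) * sqrt(12)                           # annualized SD: exact scaling for log returns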

So, to our example above - let’s calculate the rolling annualized SD of our series:

plot_dlog <- fmxdat::SA_Indexes %>%
    filter(Tickers %in% c("FINI15TR Index", "INDI25TR Index",
        "JALSHTR Index", "MIDCAPTR Index", "RESI20TR Index",
        "TOP40TR Index")) %>%
    mutate(YM = format(date, "%Y%B")) %>%
    arrange(date) %>%
    group_by(Tickers, YM) %>%
    filter(date == last(date)) %>%
    group_by(Tickers) %>%
    mutate(ret = log(Price) - lag(log(Price))) %>%
    select(date, Tickers, ret) %>%
    # Rolling SD annualized calc now:
mutate(RollSD = RcppRoll::roll_sd(1 + ret, 36, fill = NA, align = "right") *
    sqrt(12)) %>%
    filter(!is.na(RollSD))


g <- plot_dlog %>%
    ggplot() + geom_line(aes(date, RollSD, color = Tickers),
    alpha = 0.7, size = 1.25) + labs(title = "Illustration of Rolling 3 Year Annualized SD of various Indices with differing start dates",
    subtitle = "", x = "", y = "Rolling 3 year Returns (Ann.)",
    caption = "Note:\nDistortions are not evident now.") + theme_fmx(title.size = ggpts(30),
    subtitle.size = ggpts(5), caption.size = ggpts(25), CustomCaption = T) +
    fmx_cols()

finplot(g, x.date.dist = "1 year", x.date.type = "%Y", x.vert = T,
    y.pct = T, y.pct_acc = 1)

Take note of these calculations - you are now ahead of a large group of analysts in practice who cannot do these types of calculations as easily as you can.

HOPE YOU ENJOYED this practical!

END


References

Bacon, Carl R. 2011. Practical Portfolio Performance Measurement and Attribution. Vol. 568. John Wiley & Sons.