Value at risk - watching the tail

The risk management concept of Value-at-Risk (VaR) is a well-known and widely accepted framework that is important in all aspects of financial analysis and trading desk operations. Its purpose is to examine near worst cases and to establish the level of loss that could occur.

The simplest everyday way to measure risk is volatility (standard deviation). This applies to mainstream instruments such as indices, stocks and bonds through to funds or portfolios created from them. Its popularity lies in its easy calculation and its universal usage, dating back to the origins of Modern Portfolio Theory and before.

Volatility and the notion of central risk

However, volatility is a central and symmetric measurement, examining the variation around the mean. This highlights its limitations: it tells institutions and investors alike little about the likelihood and magnitude of extreme scenarios, and whether they could be survived.

For an asset or portfolio that truly exhibits “Normal” or “Gaussian” behaviour, the volatility number captures everything. If a portfolio has a volatility of 10%, then this is the typical up or down move that could be expected in a year. By examining the tail probabilities of the distribution, we can calculate that 99% of the time we would not expect a move of more than 26% in that time period, either up or down. There are many situations where preparing for extreme moves is important, and any firm or investor that has not adequately provisioned for the magnitude or frequency of such extreme events may suffer disastrous consequences.
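
The 26% figure above can be checked directly under the Normal assumption. A minimal sketch using only Python's standard library (the 10% volatility is the example figure from the text):

```python
from statistics import NormalDist

vol = 0.10                             # annual volatility of 10%
z = NormalDist().inv_cdf(0.995)        # two-sided 99%: 0.5% in each tail
worst_move = z * vol

print(f"z-score:  {z:.3f}")            # ~2.576
print(f"99% move: {worst_move:.1%}")   # ~25.8%, i.e. roughly the 26% quoted
```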

Preparing for tail events

In order to be prepared for this we need to be aware that there are two distinct ways in which the Normal assumption can break down.

The first is that even regular assets or instruments exhibit significantly different behaviour in distressed conditions. This can include increased market volatility, sudden price jumps, liquidity issues, credit events and increased correlations. All of these factors have been shown historically to be very important, and they all tend to increase loss in extreme circumstances. Therefore the asset that was expected to move 26% in a scenario seen only once in a hundred may well fall twice as far.

The second case that needs to be considered is that not all assets have symmetric or linear properties. Structured products and derivatives are at the top of this list, since when markets fall the effect of optionality, barriers and the behaviour of multi-asset payoffs can amplify movements.

Introducing VaR

Market participants and regulators have long looked to a formal framework for measuring and coping with extreme events. In the 1980s, various American investment banks led by JP Morgan started to develop and unify methodologies which eventually became known as Value at Risk (VaR).

VaR measures the change that could occur in an instrument, portfolio or business operation in the worst cases. The measurement can be configured in different ways, for example the time horizon (1 day or 1 month), the confidence level (the 99% worst case is the most common) and whether to use historical data to generate future scenarios or to employ a more theoretical set-up.

Historical based approach

The most common approach is to use a historical based method that calculates daily changes in all underlying assets and parameters over a given historical window of at least one year, and then successively applies those changes on top of today’s market position.

For example, if we wished to calculate the VaR of an option linked to the S&P-500, we might identify the S&P-500 spot, its volatility and USD interest rates as the most important factors.

If a two-year daily window were used, that would give rise to approximately 500 daily changes in each of these three factors that we would need to measure. The changes occurring each day for all the factors are kept together and successively applied to today’s position (the base case).

             1. Actual market changes              2. Values to apply for repricing        3. Result
             Change in   Change in   Change in     S&P-500                                 Option
             S&P-500     volatility  rates         level     Volatility   Rate            price     Change
Base case       -           -           -          4400      19%          1%              224.3       -
Day 1           1%          0.1%       -0.3%       4444      19.1%        0.7%            209.2     -15.1
Day 2          -2%          0%          1%         4312      19%          1.2%            264.7      40.4
Day 3          -3%          0%          1%         4268      19%          2.2%            275.2      50.9
...
Day 500         3%         -0.2%        0.2%       4532      18.8%        0.2%            174.6     -49.7

Table: Sample of daily changes of factors and option prices
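
The first two parts of this process can be sketched in Python. The history below is a hypothetical three-day excerpt chosen so that the first change reproduces the Day 1 row of the table; a real implementation would hold roughly 501 market closes for each factor:

```python
# Hypothetical market history: (spot, volatility, rate) daily closes.
# A real two-year window would contain ~501 such observations.
history = [
    (4400, 0.190, 0.010),
    (4444, 0.191, 0.007),
    (4355, 0.192, 0.009),
]

base_spot, base_vol, base_rate = 4400, 0.19, 0.01  # today's market (base case)

scenarios = []
for (s0, v0, r0), (s1, v1, r1) in zip(history, history[1:]):
    # Step 1: day-on-day changes, kept together across all factors
    spot_ret = s1 / s0 - 1   # relative change in spot
    vol_chg = v1 - v0        # absolute change in volatility
    rate_chg = r1 - r0       # absolute change in rates
    # Step 2: apply each day's joint change on top of the base case
    scenarios.append((
        base_spot * (1 + spot_ret),
        base_vol + vol_chg,
        base_rate + rate_chg,
    ))

spot, vol, rate = scenarios[0]
print(f"Day 1 scenario: spot {spot:.0f}, vol {vol:.3f}, rate {rate:.3f}")
# Day 1 scenario: spot 4444, vol 0.191, rate 0.007
```

Each scenario would then be fed into the pricing model to produce the “Option price” and “Change” columns of the table.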

This process is performed for each day in the window and has three parts. Firstly, the calculation of changes in all three parameters day on day (that is, day 1 to day 2, day 2 to day 3, all the way through to day 499 to day 500).

Secondly, these values are used to reprice instruments, which is done by applying that day’s change on top of the base case. For example, if the S&P-500 has a base case (today’s value) of 4400 and we wish to apply the day 1 change observed two years ago of +1%, that gives a value of 4444. All instruments are repriced under these assumptions and the changes versus the base case are calculated.

Finally, the portfolio is aggregated and the results sorted to obtain the worst outcomes. Taking the 1% worst case out of a sample of 500 roughly equates to the fifth worst day.
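
The final aggregation step can be sketched as follows, using randomly generated P&L figures purely as a stand-in for the 500 repriced scenario results:

```python
import random

random.seed(42)
# Stand-in for the 500 scenario P&L figures produced by repricing
pnl = [random.gauss(0, 30) for _ in range(500)]

# Sort ascending so the largest losses come first; the 1% worst case
# out of 500 scenarios is roughly the fifth worst day.
ranked = sorted(pnl)
var_99 = -ranked[4]   # index 4 = fifth worst; VaR quoted as a positive loss

print(f"1-day 99% VaR: {var_99:.1f}")
```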

Complexity of calculations

This simple example helps to demonstrate why the accurate calculation of VaR can be difficult for a large portfolio of derivatives or structured products. For a portfolio containing 100 positions where daily VaR is to be calculated over a two-year window, each position needs to be repriced 500 times, giving a total of 50,000 calculations. This will be manageable for simple options where Black-Scholes models can be used, but it becomes increasingly difficult for complex multi-asset equity products or interest rate derivatives requiring modelling of yield curves. It is possible to approximate moves by using sensitivities (i.e. Greeks such as Delta, Gamma and Vega), but this is inaccurate in cases of large moves. Calculating VaR in a timely and accurate fashion remains a complex task.
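
The sensitivity-based shortcut mentioned above amounts to a second-order Taylor expansion of the option price. A minimal sketch with illustrative Greeks for a hypothetical position (the numbers are invented, not taken from the table):

```python
# Illustrative Greeks for a hypothetical option position
delta = 0.5     # price change per 1-point move in the underlying
gamma = 0.002   # change in delta per 1-point move in the underlying
vega = 150.0    # price change per 1.00 (100 vol points) change in volatility

def approx_pnl(d_spot, d_vol):
    """Second-order Taylor approximation of the option P&L.

    Far cheaper than full repricing, but the truncation error grows
    rapidly for the large moves that dominate the VaR tail.
    """
    return delta * d_spot + 0.5 * gamma * d_spot ** 2 + vega * d_vol

print(f"{approx_pnl(d_spot=10.0, d_vol=0.001):.2f}")   # 5.25  (small move)
print(f"{approx_pnl(d_spot=-400.0, d_vol=0.05):.1f}")  # -32.5 (crash scenario:
                                                       # full repricing needed)
```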

Alternatives and conclusion

While VaR is a very useful concept, it has its critics and some shortcomings. By focussing explicitly on the loss at the 1% level, it ignores outcomes at both more and less extreme points. One solution is the “expected shortfall” measure, which is effectively the average loss beyond the 1% case and measures losses right through the zone of the worst 1% of cases. The shortfall method has been adopted by the flagship Fundamental Review of the Trading Book (FRTB) regime that is now working its way through global regulatory adoption.

Risk management of a portfolio or trading operation is a complex multi-factor task with no shortcuts or approaches that are guaranteed to work in all scenarios. However, the theory and experience that has been built up around Value-at-Risk is an important part of any monitoring and reporting framework in a variety of applications.
