ANALYSIS OF THE DIFFERENT SCALING RULES FOR VALUE AT RISK
ANALIZA RÓŻNYCH ZASAD SKALOWANIA VAR

An analysis of recent research on the performance of the SRTR (square-root-of-time) rule for VaR scaling, as well as of other methods such as bootstrap, dependent resampling, non-overlapping periods, independent resampling and various empirical scaling factors, was conducted. The article stresses the importance of choosing the appropriate method of VaR scaling for solving various financial tasks, for risk analysis and for derivatives pricing.


Introduction
Value-at-Risk (VaR) has become the dominant approach to measuring risk, despite serious criticism from researchers. In fact, the main reason for its popularity within financial institutions is the recommendation of the Basel Committee that the capital reserves of portfolios should be proportional to the 10-day Value-at-Risk at the 99% confidence level. As a result, banks approved by their supervisors to compute regulatory capital via Internal Model Methods (IMM) usually use 99% Value-at-Risk (VaR) models for the calculation of capital requirements. One of the problems in estimating VaR is the lack of sufficient 10-day return data. Besides regulatory capital, banks very often have to provide, according to Basel II, an estimation of the capital (e.g. economic capital) needed to cover losses over a longer time horizon and at higher percentiles. The approach recommended by Basel II is to scale the 1-day Value-at-Risk with the square-root-of-time rule. The use of the 10-day horizon has been discussed by several authors. However, the effect of using the square-root-of-time rule for longer time horizons, for example when calculating Economic Capital with a one-year horizon, is less investigated.
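The square-root-of-time rule itself is a one-line computation; a minimal sketch (the VaR figure and horizons are illustrative, not taken from the article):

```python
import math

def srtr_scale(var_1d: float, horizon_days: int) -> float:
    """Scale a 1-day VaR to an h-day VaR with the square-root-of-time rule."""
    return var_1d * math.sqrt(horizon_days)

# Illustrative numbers only: a 1-day 99% VaR of 1.2% of portfolio value
var_10d = srtr_scale(0.012, 10)    # Basel 10-day regulatory horizon
var_1y = srtr_scale(0.012, 250)    # ~1 trading year, as used for Economic Capital
```

The same function covers both the regulatory 10-day horizon and the 1-year Economic Capital horizon, which is exactly why the rule's behaviour at long horizons matters.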

Problem of economic capital aggregation
Economic Capital (EC) for market risk is usually aggregated with the other types of bank risk (credit, operational) in order to estimate internal capital requirements. Calculation of economic capital for banks is mainly based on the bank's own risk profile and the IRB approach. In other words, economic capital can be interpreted as a bank's 1-year VaR [3].
The problem of VaR scaling concerns the extension of some VaR measure to a VaR with a different confidence level or time horizon. Value at Risk at level p can be defined as the p-quantile of the Profit and Loss (P&L) probability distribution. The main task of VaR scaling is the estimation of a factor that converts the short-horizon measure into the h-day one. In practice, two approaches can be used to compute losses, and correspondingly Value at Risk, at a 1-year time horizon:
– generation of scenarios and subsequent construction of a 1-year Profit and Loss (P&L) distribution;
– scaling of short-term market risk measures to a longer time horizon and, possibly, to higher percentiles.
The first approach is widespread despite some well-known problems and drawbacks. First of all, when a simulation or Monte-Carlo approach is used, the correct evaluation of a 1-year P&L distribution is quite a complicated task because of the limited length of the available time series of risk-factor shocks and because of the doubtful assumptions that must be made about the risk-factor dynamics in MC simulations. In addition, the direct calculation of such a long-term, high-percentile Value at Risk is a very sophisticated task for standard VaR models. Besides, the need to use high percentiles of the P&L distribution greatly increases the number of required scenarios. Finally, such an approach implies the assumption that the bank's portfolio structure stays constant over the whole time horizon, while in reality the portfolio structure evolves in time.
On the contrary, using the second approach, based on scaling the P&L distribution, the above-mentioned problems can be avoided. Besides, the second way has the advantage of using the IMM models, which deal with day-to-day statistics and are already approved by regulators for estimating regulatory requirements.
The Basel Committee on Banking Supervision requires the daily estimation of VaR and allows the calculation of the 10-day VaR according to the formula

VaR(10-day) = √10 · VaR(1-day).
Let F_X denote the cumulative distribution function (cdf) of the negative daily log-returns X. Taking negative log-returns means that the losses correspond to the right tail of the distribution rather than the left tail. Let F_X^(h) then represent the cdf of the consecutive h-day returns. The h-day VaR at the level q ∈ (0, 1) can be defined as

VaR_h(q) = inf{x : F_X^(h)(x) ≥ q},

and similarly the 1-day VaR at the level q as

VaR_1(q) = inf{x : F_X(x) ≥ q}.

An empirical scaling factor Λ_q > 0 in that case can be calculated as

Λ_q = VaR_h(q) / VaR_1(q).

Luca Spadafora et al. in [2] pointed out that the choice of the VaR-scaling rule substantially influences the estimation of the Economic Capital over a long time horizon. In particular, the resulting risk measure can be larger than the estimate obtained under normality assumptions on the P&L distribution by up to a factor of four [5]. Their empirical results on the properties of P&L distributions, as well as the analytical results they obtained on time scaling, show that the widely used VaR-scaling rules relying on the normality of returns and on the SRTR can lead to a sharp underestimation of the bank's long-term risk measure.
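Under these definitions, Λ_q can be estimated directly from the two empirical quantiles. A minimal sketch, in which the simulated normal sample and the non-overlapping block aggregation are illustrative assumptions rather than the procedure of [2]:

```python
import math
import random

def empirical_var(losses, q):
    """Empirical q-quantile (order statistic) of a sample of losses."""
    s = sorted(losses)
    return s[min(len(s) - 1, math.ceil(q * len(s)) - 1)]

def empirical_scaling_factor(daily_losses, h, q=0.99):
    """Lambda_q = VaR_h(q) / VaR_1(q); here the h-day losses are formed by
    summing h consecutive daily losses over non-overlapping blocks."""
    h_day = [sum(daily_losses[i:i + h])
             for i in range(0, len(daily_losses) - h + 1, h)]
    return empirical_var(h_day, q) / empirical_var(daily_losses, q)

random.seed(0)
daily = [random.gauss(0.0, 0.01) for _ in range(5000)]
lam = empirical_scaling_factor(daily, 10)  # near sqrt(10) for iid normal data
```

For iid normal data the estimated factor stays close to √10 ≈ 3.16; deviations from that value are precisely what the empirical-scaling literature exploits.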

SRTR scaling rule
The most widespread methodology for evaluating the Economic Capital (EC) using VaR scaling rules is based on the assumption of normality of the P&L distribution and consists of the following steps:
1. Calculation of the chosen percentile x_α = Φ^(-1)(1-α) of the 1-day P&L distribution, where x_α is the percentile corresponding to the chosen confidence level 1-α and Φ^(-1) is the inverse Normal Cumulative Distribution Function (CDF);
2. Scaling of the 1-day measure by means of the SRTR rule;
3. Assuming that the mean of the P&L equals zero and that 1 year equals 250 days (in some cases 260), the Economic Capital at the confidence level 1-α can be calculated as

EC(1-α) = σ · √250 · Φ^(-1)(1-α).
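The three steps reduce to a one-line formula; a sketch using illustrative parameter values:

```python
from statistics import NormalDist

def economic_capital(sigma_1d: float, alpha: float, days: int = 250) -> float:
    """EC under the normality/zero-mean assumption:
    EC_{1-alpha} = sigma_1d * sqrt(days) * Phi^{-1}(1 - alpha)."""
    return sigma_1d * (days ** 0.5) * NormalDist().inv_cdf(1 - alpha)

# Illustrative: 0.8% daily P&L volatility, 99.9% confidence, 250-day year
ec = economic_capital(0.008, alpha=0.001)
```

Note that the whole construction hinges on the normality assumption criticized in [2]; the formula itself is exact only for a zero-mean Gaussian P&L.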
In [2] the authors generalize this simplistic approach by deriving a VaR-scaling methodology based on the following steps [5]:
– fitting the short-term (1-day) P&L distribution, in order to choose the PDF with the highest explanatory power;
– calculation (either analytical or numerical) of the long-term (1-year) P&L distribution, based on the chosen PDF class;
– computation of the Economic Capital as the desired extreme-percentile VaR measure of the long-term P&L distribution.
A different methodology, which allows direct evaluation of the scaling factor for obtaining the 10-day VaR from the 1-day VaR, was presented in [5]. The authors argue that their methodology is superior to the standard scaling rule generally used in practice. The computationally simple method offered in that article improves upon the SRTR scaling rule, which is known to underestimate the VaR over long horizons.
Their method is based on the estimation of the ratio of the high quantiles of the aggregated data to those of the daily data. Using real data sets, the new scaling factors were compared with others, such as square-root-of-time scaling and a more complex simulation-based estimation method. The authors assert that the developed empirical scaling factors surpass the square-root-of-time approach and are competitive with the simulation-based method.
As mentioned before, when scaling the 1-day VaR with the multiplier √T, the implicit assumption is that the returns or the log-returns are independent and identically normally distributed with mean zero. However, in the literature there is a lot of solid evidence that financial returns are neither normally distributed nor iid. Therefore, many consider such scaling arbitrary.
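One way to see why the iid-normal assumption matters is to test the rule on a chosen return model. The sketch below (Student-t daily returns with illustrative parameters) compares the √T-scaled 1-day VaR against a directly simulated 10-day VaR:

```python
import math
import random

def q99(losses):
    """Empirical 99% quantile (order statistic)."""
    s = sorted(losses)
    return s[math.ceil(0.99 * len(s)) - 1]

def t_loss(rng, df=4, scale=0.01):
    """Daily loss with Student-t tails, built from normals (heavier than Gaussian)."""
    num = rng.gauss(0.0, 1.0)
    den = math.sqrt(sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df)) / df)
    return -scale * num / den

rng = random.Random(1)
n = 20000
var_1d = q99([t_loss(rng) for _ in range(n)])
var_10d = q99([sum(t_loss(rng) for _ in range(10)) for _ in range(n)])
bias = math.sqrt(10) * var_1d / var_10d  # deviates from 1 for non-normal returns
```

For heavy-tailed iid returns the ratio drifts away from 1 because the 10-day sum is closer to Gaussian than a single day's return, so the scaled quantile and the true quantile no longer agree.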

In fact, the VaR corresponds to a high quantile of the long-term P&L. The authors in [4] assume the existence of an empirical scaling factor such that its evaluation can be obtained as the ratio of the order statistics corresponding to the 1-day VaR and the 10-day VaR. The developed empirical scaling factor is not based on any assumptions about the underlying distribution of the P&L, which makes the proposed approach convenient not only in the financial sphere. This is in contrast to square-root-of-time scaling. In doing so, as underlined in [4], the authors have followed the principal idea of statistics of letting the data speak for themselves. Although the order statistics themselves might not provide good estimates of the Value at Risk, it was shown in [4] that the ratio of the order statistics provides an appropriate estimate of the empirical scaling factor (4).
The existence of an incompatibility between the two horizons used for Value at Risk assessment under the Basel standard was pointed out in [6], where the author analyzed the suitability of the widely used Square Root of Time rule for Value at Risk scaling. The problem is that a 10-day horizon is taken for regulatory capital calculation, while a 1-day period is considered for backtesting. In that research the performance of the SRTR rule was compared with a method utilizing the Hurst exponent, on the basis of the normal and stable distributions. The conclusion was that the normality assumption and the Square Root of Time rule prevail under the regulatory parameters. As it turned out, the performance of the Hurst exponent method under normality was not favourable, but the Hurst exponent complements the stable distribution very well under non-Basel parameters. That is why the authors concluded that the use of the stable distribution and the Hurst exponent method is reasonable when dealing with complex non-linear instruments, during turbulent periods, or in a general non-Basel setting [6].
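The Hurst-exponent alternative replaces the fixed exponent 1/2 with an estimated H, i.e. VaR_h = h^H · VaR_1. A minimal sketch (the VaR value and exponents are illustrative):

```python
def hurst_scale(var_1d: float, horizon_days: int, hurst: float) -> float:
    """Scale a 1-day VaR as VaR_h = h**H * VaR_1.
    H = 0.5 recovers the square-root-of-time rule; H > 0.5 (a persistent,
    positively autocorrelated series) scales risk up faster with the horizon."""
    return (horizon_days ** hurst) * var_1d

var_srtr = hurst_scale(0.02, 10, 0.5)   # identical to sqrt(10) * VaR_1
var_pers = hurst_scale(0.02, 10, 0.6)   # a persistent series needs more capital
```

The whole difference between the two methods is thus a single exponent, which explains why their rankings flip depending on whether the underlying distribution is normal or stable.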
Many scientists consider the performance of the SRTR scaling rule acceptable when the characteristics of the risk-factor series are known. For example, according to [6], application of the simple √10 scaling rule leads to a shift of a fat-tailed risk-factor distribution. Besides, negative autocorrelation results in an overestimated 10-day 99% VaR estimate, while volatility clustering and positive autocorrelation cause the 10-day 99% VaR to be underestimated. On top of these biases, a positive (negative) mean shift induces the 10-day 99% left-tail VaR to be overestimated (underestimated) and the right-tail VaR to be underestimated (overestimated) [6]. M. Janssen noticed that autocorrelation and non-zero means in the risk-factor shifts are the most important bias factors. When different shifts act in different directions, the performance of the SRTR scaling rule depends deeply on the risk-factor series under consideration and, very likely, on the current market situation.
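The autocorrelation bias is easy to reproduce in a simulation. The sketch below (AR(1) losses with an illustrative positive coefficient) shows the √10-scaled estimate falling short of the directly simulated 10-day quantile:

```python
import math
import random

def q99(xs):
    """Empirical 99% quantile (order statistic)."""
    s = sorted(xs)
    return s[math.ceil(0.99 * len(s)) - 1]

def ar1_losses(rng, n, phi=0.3, sigma=0.01):
    """Positively autocorrelated daily losses: x_t = phi * x_{t-1} + eps_t."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out

rng = random.Random(11)
path = ar1_losses(rng, 200000)
ten_day = [sum(path[i:i + 10]) for i in range(0, len(path) - 9, 10)]
srtr_var = math.sqrt(10) * q99(path)  # underestimates under positive autocorrelation
true_var = q99(ten_day)
```

With phi = 0.3 the variance of a 10-day sum grows markedly faster than linearly in the horizon, so the √10-scaled figure understates the true 10-day quantile; setting phi < 0 reverses the sign of the bias, matching the pattern reported in [6].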
Actually, this may be the main reason why the scientific literature seems to present contradicting views regarding the effectiveness of the SRTR scaling rule for VaR calculation. As the shifts become quite serious for some risk-factor series, the √10 scaling rule, for example, can be considered an unreliable 10-day VaR estimator when applied 'blindly' to all these risk factors. The results in [3] show that applying the square-root-of-time rule to Value-at-Risk scaling when the underlying data follow a jump-diffusion process is bound to provide downward-biased risk estimates. Moreover, the bias increases at an increasing rate with a longer horizon, a larger jump intensity or a lower confidence level.
Swedish researchers in [1] compared the performance of the square-root-of-time rule with other popular empirical methods for scaling the 1-day VaR. The scaled VaR was compared with a "real" 10-day VaR obtained analytically or by means of simulation. The distributions used to generate the simulations were random walks with standard normal and Student-t innovations, and AR, GARCH and AR-GARCH processes. The scaling methods used in [1] include:
1) Bootstrap: 10 values are chosen randomly and with replacement from the sample of daily log-returns and summed up; this is repeated s times, where s is the number of simulations, and the empirical 99% quantile of the resulting 10-day returns is evaluated.
2) Independent resampling: 10 weakly dependent daily log-returns are drawn independently from the sample; such an approach ignores the dependence of the overlapping periods.
6) Square root of time: on the basis of the known sample of daily log-returns, the empirical quantile is evaluated and then scaled by the factor √10 to get the 10-day VaR.
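The bootstrap method can be sketched as follows (sample size, seed and number of simulations are illustrative choices, not those of [1]):

```python
import math
import random

def bootstrap_10day_var(daily_losses, q=0.99, h=10, n_sim=5000, seed=0):
    """Draw h daily losses with replacement, sum them into one h-day loss,
    repeat n_sim times and take the empirical q-quantile of the sums."""
    rng = random.Random(seed)
    sums = sorted(sum(rng.choice(daily_losses) for _ in range(h))
                  for _ in range(n_sim))
    return sums[math.ceil(q * n_sim) - 1]

rng = random.Random(42)
daily = [rng.gauss(0.0, 0.01) for _ in range(500)]  # a 500-value daily sample
var_10d = bootstrap_10day_var(daily)
```

Like independent resampling, the bootstrap breaks any serial dependence in the data, which is why it behaves well for near-iid assets and worse for strongly clustered ones.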

Their findings do not contradict most other research. Using simulated data and real return data, they confirmed that the square-root-of-time rule performs comparatively well in a majority of cases. In combination with methods such as the bootstrap and the two-step method, scaling Value-at-Risk up to one year works well for low-volatility assets and less so for assets with bigger jumps in price. The same holds for scaling the VaR to one year: the square-root-of-time rule performs well compared with the other methods mentioned before. In the case of quarterly and yearly VaR, the SRTR has acceptable accuracy for the less volatile assets: Swedish government bonds, European investment-grade bonds and emerging-market bonds. The sudden worldwide fall in asset prices during the 2007-2008 crisis, together with the short price history of American and European corporate bonds, made scaling VaR for such assets during that period problematic. Thus, taking into account the simplicity of its practical implementation and its conservative nature, the square-root-of-time rule can be considered appropriate for practical usage.
In paper [7] two methods for empirically and locally determined time-varying scaling exponents were developed, using a recursive-window framework on overlapping return data based on order statistics. The first method is based on the Hurst empirical scaling rule, which admits an invariant measure of the scale exponent, derived from the gradient of the linear regression of the q-quantile. The second method is based on the numerical local determination of scale-variant exponents (H_num) for the 1-day VaR returns and the VaR of n > 1 day returns. The properties of these data-determined scaling exponents were studied in [7] and, by means of backtesting, compared with those of the square-root-of-time rule. The backtesting results indicated that the empirically determined scaling rules surpass the well-known square-root-of-time rule and allow banks to save considerable sums.
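The regression-of-quantiles idea can be illustrated with a simplified sketch; this is not the exact recursive-window procedure of [7], and the horizons, quantile level and normal test data are illustrative assumptions:

```python
import math
import random

def emp_q(xs, q=0.99):
    """Empirical q-quantile (order statistic)."""
    s = sorted(xs)
    return s[math.ceil(q * len(s)) - 1]

def scale_exponent(daily, horizons=(1, 2, 5, 10), q=0.99):
    """Least-squares slope of log VaR_h(q) against log h over overlapping
    h-day sums: an empirically determined scale exponent."""
    xs = [math.log(h) for h in horizons]
    ys = [math.log(emp_q([sum(daily[i:i + h])
                          for i in range(len(daily) - h + 1)], q))
          for h in horizons]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(7)
daily = [random.gauss(0.0, 0.01) for _ in range(20000)]
H = scale_exponent(daily)  # close to 0.5 for iid normal data
```

For iid normal data the fitted exponent recovers 1/2, so any systematic deviation of H from 1/2 on real data is direct evidence against the square-root-of-time rule for that series.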
The importance of choosing the method of time scaling of risk for solving various financial tasks, risk analysis and derivatives pricing was underlined in [8]. In this paper the time scaling rules for returns following a jump-diffusion process were examined. It was shown that the square-root-of-time rule leads to a systematic underestimation of risk, and that the degree of underestimation worsens with the time horizon, the jump intensity and the confidence level. As a result, even though the square-root-of-time rule has widespread application in the Basel Accords, it fails to address the objective of the Accords [8].
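This underestimation can be reproduced with a toy jump-diffusion model; all parameters below are illustrative, and the jump arrivals use a Bernoulli approximation of a Poisson process:

```python
import math
import random

def q99(losses):
    """Empirical 99% quantile (order statistic)."""
    s = sorted(losses)
    return s[math.ceil(0.99 * len(s)) - 1]

def jump_diffusion_loss(rng, sigma=0.01, jump_prob=0.008,
                        jump_mean=-0.08, jump_sd=0.01):
    """One day's loss from a diffusion component plus a rare negative jump."""
    r = rng.gauss(0.0, sigma)
    if rng.random() < jump_prob:  # Bernoulli approximation of Poisson arrivals
        r += rng.gauss(jump_mean, jump_sd)
    return -r

rng = random.Random(3)
n = 20000
srtr_var = math.sqrt(10) * q99([jump_diffusion_loss(rng) for _ in range(n)])
true_var = q99([sum(jump_diffusion_loss(rng) for _ in range(10)) for _ in range(n)])
# srtr_var falls short of true_var: the 1-day 99% quantile barely sees the jumps,
# while a ten-day window accumulates roughly ten times the jump probability
```

The mechanism matches the analytical finding in [8]: at the 1-day horizon the jump probability sits below the 1% tail, so the scaled quantile inherits almost none of the jump risk that dominates the 10-day tail.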

Conclusions
The analysis of recent research on the performance of the SRTR rule, as well as of other methods such as bootstrap, dependent resampling, non-overlapping periods, independent resampling and various empirical scaling factors, has shown that the square-root-of-time rule performs comparatively well in a majority of cases when normality of the P&L distribution can be assumed. The choice of the appropriate method of time scaling for VaR plays a crucial role in risk analysis, derivatives pricing and the solution of various financial tasks.

The remaining scaling methods used in [1] were: dependent resampling, in which 10 strongly dependent values are chosen out of the sample; overlapping periods, in which, for a sample of 500 values, the 10-day log-returns are formed for n = 1,…,481; and non-overlapping periods, in which the sample yields 50 non-overlapping 10-day log-returns whose empirical 99% quantile is evaluated. The overlapping-periods method is used as a complement to non-overlapping periods to get more data points.