Solvency II, a fundamental and wide-ranging review of the current insurance supervisory regime in the EU, is slowly but surely rising on the horizon. The new three-pillar approach will address all key parts of insurance operations: 1) quantitative aspects, 2) qualitative and supervisory issues, and 3) market discipline and disclosure requirements. Our focus in this paper is on the internal models that insurance companies may use instead of the standard formula when calculating the solvency capital requirement (SCR). One may wonder why such a complex feature as internal models should be introduced into the Solvency II framework. CEIOPS, the European committee of insurance supervisors, has stated that the
main objectives and potential benefits of using internal models for regulatory purposes include better, more risk-sensitive and innovative risk management, efficiencies in terms of capital and costs, and more effective discussion between insurers and their supervisors as well as with shareholders, analysts and rating agencies.
However, not just any model will be accepted for regulatory capital calculation. Building an internal model that is likely to satisfy the approval criteria, which are still in the design pipeline of CEIOPS, is a major project. Fortunately, model builders and implementers still have a few years for their work. According to the European Commission, the directive is expected to be implemented into national legislation in 2012. This schedule also implies that the results of the next quantitative impact study (QIS 4) are very important and should give a good approximation of the final impacts of Solvency II. Internal modelling is one key area in QIS 4, where information will be gathered e.g. on the comparability of results produced by the SCR standard formula with those derived from full and partial internal models, and on the current state of preparedness of those insurers that would like to use an internal model following the introduction of Solvency II.
For more details on the Solvency II directive proposal (COM(2008) 119 final) and QIS 4, see the EC website (ec.europa.eu/internal_market/insurance/docs/solvency/) and the website of CEIOPS (www.ceiops.org). Background is given e.g. in Sandström (2006) and in Ronkainen et al. (2007).
2. Modelling and Approval Steps
Statistical modelling is a key part of any internal model that attempts to forecast the probability distribution of the profit and loss account and available own funds of an insurance company. For the SCR a 1-year-ahead
forecast with the 99.5th percentile (Value-at-Risk, VaR) as the calibration target. These modelling areas are addressed by the statistical quality test and the calibration test in the Solvency II framework. For an overview, we list below the key articles of the directive proposal that concern internal models:
- Art. 110: General provision for the approval of internal models
- Art. 111: Specific provisions for the approval of partial internal models
- Art. 113: Policy for changing the model
- Art. 114: Governance and management
- Art. 118: Use test (“Is the model relevant for and used within risk management?”)
- Art. 119: Statistical quality standards
- Art. 120: Calibration standards
- Art. 121: Profit and loss attribution
- Art. 122: Validation standards
- Art. 123: Documentation standards
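The calibration target mentioned above, a one-year forecast evaluated at the 99.5th percentile, can be illustrated with a minimal simulation sketch. The normal profit-and-loss distribution and its parameters below are purely hypothetical, not calibrated to any real portfolio:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical one-year profit-and-loss distribution: 100,000 simulated
# outcomes from an assumed normal model (mean and volatility are
# illustrative only).
pnl = rng.normal(loc=2.0, scale=10.0, size=100_000)

# The SCR calibration target is the 99.5% one-year Value-at-Risk,
# i.e. the loss exceeded in only 0.5% of the simulated scenarios.
var_995 = -np.quantile(pnl, 0.005)
print(f"99.5% one-year VaR: {var_995:.1f}")
```

In a full internal model the simulated distribution would of course come from the aggregated risk-driver model rather than from a single normal assumption.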
It is useful to compare these approval steps with the steps traditionally used in statistical modelling. The following list, based on Chatfield (1995) and Box and Jenkins (1975), could form an example of the actual modelling steps:
1. Setting the model objectives (full or partial calculation of the SCR, etc.)
2. Model data collection, scrutiny, processing and initial analysis
3. Model formulation (specification)
4. Model fitting (estimation)
5. Model checking (validation)
6. Model documentation (including literature references) and communication
7. Approvals
8. Model application
Note that steps 3-5 are repeated iteratively until a satisfactory model has been found, and that the whole process has to be rerun regularly. From this example we make several observations: the actual model-building steps do not necessarily correspond to the approval steps, the borderlines between the various steps are not clear-cut, and the steps are linked and iterated. For instance, calibration (art. 120) is closely related to the estimation step, and validation is likewise part of the model-building steps 3-5. Thus it seems to us that a mapping between the modelling and approval steps would be useful for model builders and supervisors alike, to clarify the processes and facilitate communication and co-operation.
In the estimation and calibration steps the role of data is crucial. The estimation results can be very sensitive to the chosen data set. Clearly there should not be too much freedom in this respect if we want to guarantee a level playing field between companies using the standard formula and those using a partial or full internal model. Because the calibration target is a very rare event (once in 200 years), the available data set is usually too small, which means that the confidence intervals for the estimated parameters are wide. The data problem is more severe in Solvency II than it is in Basel II. In many modelling areas the data have to be supplemented with additional assumptions. Therefore prudently chosen, estimated and aggregated probability distributions seem necessary, in particular for the heavy-tailed risks, supplemented with adequate stress-testing.
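The data problem can be made concrete with a small bootstrap sketch. The lognormal loss sample and its size below are hypothetical, chosen only to show how wide the uncertainty becomes when a tail quantile is estimated from few observations:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Hypothetical small loss sample (e.g. 30 annual observations) drawn
# from a heavy-tailed lognormal model -- purely illustrative numbers.
sample = rng.lognormal(mean=0.0, sigma=1.0, size=30)

# Bootstrap the 99th percentile to show how uncertain a far-tail
# quantile is when estimated from a small data set.
boot = np.array([
    np.quantile(rng.choice(sample, size=sample.size, replace=True), 0.99)
    for _ in range(5_000)
])
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"95% bootstrap interval for the 99th percentile: [{lo:.2f}, {hi:.2f}]")
```

The resulting interval is wide relative to the point estimate, and the true SCR target (the 99.5th percentile of annual results) lies even further into the tail than the quantile bootstrapped here.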
The examples mentioned above are only some of the challenges CEIOPS faces when developing standards for internal models. Another important and challenging area is the consistency of the model, both in general and in relation to technical provisions.
The general consistency requirement should be kept in mind when modelling the various risk drivers and their dependencies. A familiar example is the correlations (and other forms of dependence) between various asset classes. Another example is the link between the risk-free discounting yield curve and the profit-sharing related cash flows in life and pension insurance; a third is the link between claim inflation and asset return forecasts in non-life insurance.
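The effect of a dependence assumption on the aggregate capital need can be sketched with the familiar square-root (variance-covariance) aggregation formula; the two stand-alone charges and the correlations below are hypothetical:

```python
# Stand-alone capital charges for two hypothetical risk drivers,
# aggregated with the variance-covariance formula
# sqrt(c1^2 + c2^2 + 2*rho*c1*c2) under different correlations rho.
c1, c2 = 100.0, 80.0

def aggregate(rho):
    return (c1 ** 2 + c2 ** 2 + 2 * rho * c1 * c2) ** 0.5

for rho in (0.0, 0.5, 1.0):
    print(f"rho = {rho:.1f}: aggregate capital = {aggregate(rho):.1f}")
```

A misjudged correlation thus feeds directly into the capital figure, which is why dependence assumptions deserve separate validation.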
Solvency II will fundamentally change the way technical provisions are calculated, the name of the game now being market-consistent valuation. This will influence both non-life and life insurance, and at the moment some of the new methods have not yet been fully developed and tested. The testing of these new methods and the interpretation of their stochastic outcomes will keep actuaries busy for years to come.
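The core of the valuation idea is the discounted best estimate of the liability cash flows. A minimal sketch, assuming hypothetical expected claim payments and an assumed risk-free spot curve:

```python
# Sketch of a market-consistent best estimate: expected claim cash
# flows discounted with the risk-free yield curve. All numbers are
# hypothetical; a risk margin would be added on top in practice.
cash_flows = [120.0, 100.0, 80.0, 60.0]    # expected payments, years 1..4
spot_rates = [0.030, 0.032, 0.034, 0.035]  # assumed risk-free spot curve

best_estimate = sum(
    cf / (1 + r) ** (t + 1)
    for t, (cf, r) in enumerate(zip(cash_flows, spot_rates))
)
print(f"Best estimate of technical provisions: {best_estimate:.2f}")
```

The stochastic difficulty lies not in this discounting step but in producing the expected cash flows themselves, especially where options and guarantees make them depend on market variables.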
We address in this paper mainly quantitative issues, but the use test (art. 118) should by no means be forgotten. Sufficient understanding of the internal model at all levels of the organisation, and its implementation in management culture and actions, will be a managerial challenge that is likely to experience diseconomies of scale, i.e. it will require more effort the bigger and more international the firm or group is.
3. Model Risk
With the dissemination of quantitative methods in risk management and the advent of internal models, mathematical methods and models have come to play an increasingly important role in financial decision-making. Since the modelling process is ambiguous, reliance on models to handle risks carries its own risks. The most fundamental risk is that the model is incorrect and therefore not applicable. A critical consequence of an incorrect model is that the probability of a significantly adverse event is often substantially greater than the model predicts (see e.g. Derman, 1996).
Model risk stems from inadequate internal processes and hence usually falls under the operational risk category. Model risk arises where the results and decisions emerging from an analysis are sensitive to the choice of model and there is uncertainty about which model is suitable. In financial modelling it is a particularly severe problem under incomplete market models, where the idealised conditions of arbitrage theory do not hold and not all market participants are fully rational. Hence model uncertainty plays an important role in the incomplete insurance market (see e.g. Kaliva et al., 2007). As pointed out above, we should be especially concerned with the risk that the model underestimates the tail of the portfolio loss distribution.
Under model risk, the true data generating process for the state variables in a financial valuation, risk quantification or hedging problem is unknown. However, in most approaches to risk management, risk measurement starts from the assumption of a given model, which is basically equivalent to ignoring model risk. The problem posed by model risk is similar, but certainly not identical to the problem of market incompleteness. Under market incompleteness, the true model (or probability measure) is assumed to be known, but the equivalent martingale measure is not unique. Under model risk, even the physical measure is unknown, and the market can be either incomplete or complete.
Sources of model risk in pricing models include (see e.g. Kato and Yoshiba, 2000): (1) use of wrong assumptions, (2) errors in the estimation of parameters, (3) errors resulting from discretization, and (4) errors in market data. Sources of model risk in risk measurement models, on the other hand, include (1) the difference between the assumed and actual distribution, and (2) errors in the logical framework of the model.
Modelling dependencies and their effects is fundamental to internal models. With a flawed model for dependencies the individual elements of the model may be realistic but when they are combined the result may be critically misleading.
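A small simulation illustrates the point: the two risk factors below have identical standard-normal marginals in both cases, and only the (hypothetical) correlation between them changes, yet the probability of a joint extreme outcome changes dramatically:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 200_000

# Two risk factors with identical standard-normal marginals; only the
# assumed correlation between them is varied below.
z1 = rng.normal(size=n)
eps = rng.normal(size=n)

q = 2.326  # approximate 99th percentile of the standard normal
results = {}
for rho in (0.0, 0.9):
    z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * eps  # correlation rho with z1
    # Probability that BOTH risks exceed their own 99th percentile:
    results[rho] = float(np.mean((z1 > q) & (z2 > q)))
    print(f"rho = {rho:.1f}: P(both exceed 99th pct) ~ {results[rho]:.4f}")
```

Each marginal model is equally "realistic" in both runs; only the dependence assumption separates a negligible joint-tail probability from a material one.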
A broad typology of a risk model's model risk is given e.g. in Dowd (2002), who classifies:
- Misspecified model: the stochastic process might be misspecified; missing risk factors; misspecified relationships; omitted transaction costs and liquidity factors;
- Incorrect model application;
- Implementation risk;
- Other sources: incorrect calibration, programming problems and data problems.

Further, Dowd (2002) argues that there is no single strategy for avoiding model risk, but that to combat it we can:
- Be aware of model risk;
- Identify, evaluate and check the key assumptions;
- Test models against known problems;
- Choose the simplest reasonable model;
- Backtest and stress-test the model;
- Estimate model risk quantitatively;
- Do not ignore small problems;
- Plot results and use non-parametric statistics;
- Re-evaluate models periodically.

Hence, in order to assess model risk, an intimate knowledge of the modelling process is required. Bayesian statistics may also provide a helpful approach (see e.g. Cairns, 2000). Cairns states that most actuarial problems related to model uncertainty fall into a situation where there is a range of models which may provide a proxy for a more complex reality about which the modeller has little prior knowledge. A possible remedy for model risk is a Bayesian approach, which provides a coherent framework for making inferences in the presence of model uncertainty. More specifically, Bayesian model averaging (see e.g. Hoeting et al., 1999) provides plausible and statistically well-founded techniques for accounting for this model uncertainty. See Kaliva et al. (2007) for a discussion in the internal model context.

FIGURE 1. Main external underlying causes and early internal causes from the internal model's perspective (Korhonen and Koskinen, 2008). The figure distinguishes external underlying or trigger causes from component risks: systemic risk, competition distortion risk and model control risk.
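As a rough illustration of Bayesian model averaging, the sketch below weights two candidate loss models by approximate posterior probabilities derived from their maximised likelihoods; the data, the candidate models and the simplification that both models have equal parameter counts are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# Hypothetical annual loss data; the "true" generating model is
# treated as unknown by the analyst.
data = rng.lognormal(mean=1.0, sigma=1.0, size=40)

def loglik_lognormal(x):
    # Maximum-likelihood fit of a lognormal model.
    mu, sigma = np.log(x).mean(), np.log(x).std()
    return float(np.sum(-np.log(x * sigma * np.sqrt(2 * np.pi))
                        - (np.log(x) - mu) ** 2 / (2 * sigma ** 2)))

def loglik_normal(x):
    # Maximum-likelihood fit of a normal model.
    mu, sigma = x.mean(), x.std()
    return float(np.sum(-np.log(sigma * np.sqrt(2 * np.pi))
                        - (x - mu) ** 2 / (2 * sigma ** 2)))

# Both models have two parameters and the same sample size, so BIC
# differences reduce to log-likelihood differences here; approximate
# posterior model weights follow by exponentiating and normalising.
ll = np.array([loglik_lognormal(data), loglik_normal(data)])
weights = np.exp(ll - ll.max())
weights /= weights.sum()
print(f"Posterior weights (lognormal, normal): {weights.round(3)}")
```

A capital figure could then be computed under each candidate model and averaged with these weights, rather than conditioning on a single "best" model.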
Korhonen and Koskinen (2008) study critical aspects of the use of an internal model for an insurance company's risk and capital management, e.g. management criteria and risk factors. They carried out the evaluation with a panel consisting of senior managers of insurance companies. The main external underlying causes and early internal causes from the internal model's perspective are given in Figure 1. The overall picture from the evaluation is that management should pay attention to practical issues such as modelling expertise, suitable software and data sources.
Finally, we would like to emphasize that internal model development is a theoretically very demanding task. Strong theoretical expertise is needed (at least) in actuarial and financial mathematics, computation, financial engineering and statistics.
References

Box, G. and Jenkins, G. (1975): Time Series Analysis: Forecasting and Control, Wiley.
Cairns, A. (2000): "A Discussion of Parameter and Model Uncertainty in Insurance," Insurance: Mathematics and Economics 27, 313-330.
Chatfield, C. (1995): Problem Solving: A Statistician's Guide, Chapman & Hall.
Derman, E. (1996): "Model Risk," Quantitative Strategies Research Notes, Goldman Sachs.

Dowd, K. (2002): Measuring Market Risk, Wiley.
Hoeting, J. A., Madigan, D., Raftery, A. E. and Volinsky, C. T. (1999): "Bayesian Model Averaging: A Tutorial," Statistical Science 14, 382-417.
Kaliva, K., Koskinen, L. and Ronkainen, V. (2007): "Internal models and arbitrage-free calibration," AFIR Colloquium 2007.
Kato, T. and Yoshiba, T. (2000): "Model Risk and Its Control," Monetary and Economic Studies, Bank of Japan.
Korhonen, P. and Koskinen, L. (2008): "Challenges of the Use of Internal Model for Insurance Company's Risk and Capital Management," Helsinki School of Economics.
Ronkainen, V., Koskinen, L. and Berglund, R. (2007): "Topical modelling issues in Solvency II," Scandinavian Actuarial Journal 2, 135-146.
Sandström, A. (2006): Solvency - Models, Assessment and Regulation, Chapman & Hall, London.