  • 5. Validation of Internal Estimates

    • 5.1 General Requirements

      5.1.1 Validation is an integral part of a bank’s rating system architecture to provide reasonable assurances about its rating system. Banks adopting the IRB Approach should have a robust system in place to validate the accuracy and consistency of their rating systems, processes and the estimation of all relevant risk components. They should demonstrate to SAMA that their internal validation process enables them to assess the performance of internal rating and risk estimation systems consistently and meaningfully.
       
      5.1.2 The validation process should include review of rating system developments (see subsection 5.2), ongoing analysis (see subsection 5.3), and comparison of predicted estimates to actual outcomes (i.e. back-testing, as described in paragraphs 5.1.3 and 5.1.4 and subsection 5.4).
       
      5.1.3 Banks should regularly compare realized default rates with estimated PDs for each grade and be able to demonstrate that the realized default rates are within the expected range for that grade. The actual long-run average default rate for each rating grade should not be significantly greater than the PD assigned to that grade. The methods and data used in such comparisons by banks should be clearly documented. This analysis and documentation should be updated at least annually.
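       The grade-level comparison in 5.1.3 can be sketched as a simple statistical back-test. The sketch below uses a one-sided test based on the normal approximation to the binomial distribution; the grade names, PDs, obligor counts, default counts and the 5% significance level are all illustrative assumptions, not figures or thresholds from this document.

```python
import math

def pd_backtest(pd_est, n_obligors, n_defaults, alpha=0.05):
    """One-sided normal-approximation test: is the realized default
    rate significantly greater than the PD assigned to the grade?"""
    expected = n_obligors * pd_est
    sd = math.sqrt(n_obligors * pd_est * (1.0 - pd_est))
    z = (n_defaults - expected) / sd
    # Upper-tail p-value of the standard normal via the complementary error function
    p_value = 0.5 * math.erfc(z / math.sqrt(2.0))
    return {"realized": n_defaults / n_obligors,
            "p_value": p_value,
            "breach": p_value < alpha}

# Hypothetical per-grade observations: assigned PD, number of obligors, defaults
grades = {"A": (0.0010, 5000, 7), "B": (0.0050, 3000, 19), "C": (0.0200, 1200, 40)}

for g, (pd_est, n, d) in grades.items():
    r = pd_backtest(pd_est, n, d)
    status = "REVIEW" if r["breach"] else "within range"
    print(f"Grade {g}: realized {r['realized']:.4%} vs PD {pd_est:.4%} -> {status}")
```

       A flagged grade does not by itself invalidate the assigned PD; under the bank's internal standards it should trigger investigation of the estimate.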
       
      5.1.4 Similarly, banks using the Advanced IRB Approach should complete such analysis for their estimates of LGD and EAD. Such comparisons should make use of historical data covering as long a period as possible. The actual loss rates experienced on defaulted facilities should not be significantly greater than the LGD estimates assigned to those facilities.
       
      5.1.5 Banks should also use other quantitative validation tools and comparisons with relevant external data sources. The analysis should be based on data that are appropriate to the portfolio, are updated regularly, and cover a relevant observation period. Banks’ internal assessments of the performance of their own rating systems should be based on long data histories, covering a range of economic conditions, and ideally one or more complete business cycles.
       
      5.1.6 Banks should have in place a process for vetting data inputs, including the assessment of accuracy, completeness and appropriateness of the data specific to the assignment of an approved rating. Detailed documentation of exceptions to data input parameters should be maintained and reviewed as part of the process cycle of validation.
       
      5.1.7 The process cycle of validation should also include: ongoing periodic monitoring of rating system performance, including evaluation and rigorous statistical testing of the dynamic stability of the models used and their key coefficients; identifying and documenting individual fixed relationships in the rating system or model that are no longer appropriate; and a rigorous change control process, which stipulates the procedures that should be followed prior to making changes in the rating system or model in response to validation outcomes.
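       One common quantitative tool for the ongoing performance monitoring described in 5.1.7 is the population stability index (PSI), which compares the current distribution of exposures across score bands with the distribution in the model development sample. The band counts and the conventional decision thresholds below are illustrative assumptions, not requirements from this document.

```python
import math

def psi(expected_counts, actual_counts):
    """Population Stability Index between a development-sample distribution
    and a current distribution over the same (non-empty) score bands."""
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct, a_pct = e / e_total, a / a_total
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

# Hypothetical score-band counts: development sample vs. current portfolio
dev = [100, 250, 400, 250, 100]
cur = [80, 220, 380, 300, 120]
value = psi(dev, cur)
# Common rule of thumb: < 0.10 stable, 0.10-0.25 monitor, > 0.25 investigate
print(f"PSI = {value:.4f}")
```

       A rising PSI signals a shift in the rated population relative to the development sample, which may warrant re-estimation under the change control process.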
       
      5.1.8 Banks should demonstrate that quantitative testing and other validation methods do not vary systematically with the economic cycle1. Changes in methods and data (both data sources and periods covered) should be clearly documented.
       
      5.1.9 Some differences between observed outcomes and estimates across individual grades can be expected. However, if systematic differences suggest a bias toward lowering regulatory capital requirements, the integrity of the rating system (of either the PD or LGD dimension, or of both) is called into question.
       
      5.1.10 Banks should have well-articulated internal standards for situations where deviations in realised PDs, LGDs and EADs from expectations become significant enough to call the validity of the estimates into question. These standards should take account of business cycles and similar systematic variability in default experiences. Where realised values continue to be higher than expected values, banks should revise estimates upward to reflect their default and loss experience.
       

      1 Economic cycle here refers to ensuring that the validation of internal estimates incorporates the general impact of economic downturns and upswings of the subject economy.

    • 5.2 Review of Rating System Developments

      5.2.1 The first analytical support for the validity of a bank’s rating system is review of rating system developments, in particular analyzing its design and construction. The aim of the review is to assess whether the rating system could be expected to work reasonably if it is implemented as designed. Such review should be revisited whenever the bank makes a change to its rating system. As the rating system is likely to change over time as the bank learns about the effectiveness of the system, the review is likely to be an ongoing part of the process. The particular steps taken in the review depend on the type of rating system.
       
      5.2.2 Regarding a model-based rating system, the review of rating system developments should include information on the logic that supports the model and an analysis of the statistical model-building techniques. The review should also include empirical evidence on how well the ratings might have worked in the past, as such models are chosen to maximize the fit to outcomes in the development sample. In addition, statistical models should be supported by evidence that they work well outside the development sample. Use of out-of-time and out-of-sample performance tests is a good model-building practice to ensure that the model is not merely a statistical quirk of the particular data set used to build the model. Where a bank uses scoring systems for assigning credit ratings, it should demonstrate that those systems have adequate discriminating power.
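       Discriminating power is commonly summarised by the area under the ROC curve (AUC) or the equivalent Gini coefficient (accuracy ratio), where Gini = 2 * AUC - 1. The minimal pairwise-comparison sketch below uses hypothetical scores (higher = riskier) and default flags; it is one way to quantify discrimination, not a prescribed method from this document.

```python
def auc(scores, defaults):
    """Probability that a randomly chosen defaulter has a higher (worse)
    score than a randomly chosen non-defaulter; ties count half."""
    bad = [s for s, d in zip(scores, defaults) if d == 1]
    good = [s for s, d in zip(scores, defaults) if d == 0]
    wins = sum((b > g) + 0.5 * (b == g) for b in bad for g in good)
    return wins / (len(bad) * len(good))

# Hypothetical model scores and observed default flags
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2, 0.1]
defaults = [1, 1, 0, 1, 0, 0, 0, 0]
a = auc(scores, defaults)
gini = 2 * a - 1  # accuracy ratio
print(f"AUC = {a:.3f}, Gini = {gini:.3f}")
```

       Computing these statistics on an out-of-sample or out-of-time holdout, rather than the development sample, is what distinguishes evidence that the model generalizes from in-sample fit.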
       
      5.2.3 Regarding an expert judgment-based rating system, the review of rating system developments requires asking two groups of raters how they would rate credits based on the rating definitions, processes and criteria for assigning exposures to grades within the rating system (see sections 4 and 5 of the “Minimum Requirements for Internal Rating Systems under IRB Approach” on requirements for rating criteria and processes). These two sets of rating results could then be compared to determine whether the ratings were consistent. Conducting such tests would help identify any factors which may lead to different or inconsistent ratings. While some differences and inconsistencies may arise from the exercise of judgment, those findings should be considered for the development of the rating system.
       
      5.2.4 Where an expert judgment-based rating system employs quantitative guidelines or model results as inputs, the review of a rating system that features guidance values of financial ratios, or scores from a scoring model, might include a description of the logic and evidence relating the values of the ratios or scores to past default and loss outcomes.
       
    • 5.3 Ongoing Analysis

      5.3.1 The second analytical support for the validity of a bank’s rating system is the ongoing analysis intended to confirm that the rating system is implemented and continues to perform as intended. Such analysis involves process verification and benchmarking.
       
       Process verification
       
      5.3.2 Specific verification activities depend on the rating approach. If a model is used for rating, verification requires reviewers who are independent of the model development to evaluate the soundness of the model, including the theory, assumptions and mathematical/empirical basis. In addition, the evaluation should include an assessment of compliance with the requirements set out in subsection 4.6 of the “Minimum Requirements for Internal Rating Systems under IRB Approach” on use of models.
       
      5.3.3 If expert judgment is used for rating, verification requires other individual reviewers to evaluate whether the rater has followed rating policy. The minimum requirements for verification of ratings assigned by individuals are:
       
       a transparent rating process;
       
       a database with information used by the rater; and
       
       documentation of how the decisions were made.
       
      5.3.4 Rating process verification also includes override monitoring. The requirements for overrides are set out in subsection 5.3 of the “Minimum Requirements for Internal Rating Systems under IRB Approach”. A reporting system capturing data on reasons for overrides could facilitate learning about whether overrides improve accuracy.
       
       Benchmarking
       
      5.3.5 Benchmarking is a set of activities that uses alternative tools to draw inferences about the correctness of ratings before outcomes are actually known. Benchmarking of a rating system demonstrates whether another rater or rating method attaches the same rating to a particular obligor or facility. At a minimum, banks should establish a process in which a representative sample of their internal ratings is compared to third-party ratings (e.g. independent internal raters, external rating agencies, models, or other market data sources) of the same credits. Regardless of the rating approach, the benchmark can be either a judgment-based or a model-based rating. Examples of such benchmarking include: rating reviewers completely re-rate a sample of credits rated by individuals in a judgment-based system; an internally developed model is used to rate credits rated earlier in a judgment-based system; individuals rate a sample of credits rated by a model; internal ratings are compared against results from external agencies or external models.
       
       Banks can also consider benchmarking that includes activities designed to draw broader inferences about whether the rating system – as opposed to individual ratings – is working as expected. Banks can look for consistency in ranking, or consistency in the values of rating characteristics, for similarly rated credits. Examples of such benchmarking activities include:
       
       analyzing the characteristics of obligors that have received common ratings;
       
       monitoring changes in the distribution of ratings over time; and
       
       calculating a transition matrix from changes in ratings in a bank portfolio and comparing it to historical transition matrices from publicly available ratings or external data pools.
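       Estimating such a transition matrix from paired start- and end-of-period ratings can be sketched as follows; the grade scale and the obligor ratings are hypothetical:

```python
from collections import Counter

GRADES = ["A", "B", "C", "D"]

def transition_matrix(ratings_start, ratings_end):
    """Row-normalised one-period migration frequencies between grades,
    estimated from paired start/end ratings of the same obligors."""
    counts = Counter(zip(ratings_start, ratings_end))
    matrix = {}
    for g_from in GRADES:
        row_total = sum(counts[(g_from, g_to)] for g_to in GRADES)
        matrix[g_from] = {
            g_to: (counts[(g_from, g_to)] / row_total if row_total else 0.0)
            for g_to in GRADES
        }
    return matrix

# Hypothetical obligor ratings at the start and end of one year
start = ["A", "A", "A", "B", "B", "B", "C", "C", "D"]
end   = ["A", "A", "B", "B", "B", "C", "C", "D", "D"]
m = transition_matrix(start, end)
print(f"P(A->A) = {m['A']['A']:.2f}, P(B->C) = {m['B']['C']:.2f}")
```

       The resulting rows can then be compared against historical matrices from publicly available ratings or external data pools, as the paragraph above describes.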
       
      5.3.6 If benchmarking evidence suggests a pattern of rating differences, it should lead the bank to investigate the source of the differences. Thus, the benchmarking process illustrates the possibility of feedback from ongoing validation to model development.
       
    • 5.4 Back-Testing

      5.4.1 Back-testing is the comparison of predictions with actual outcomes. It is the empirical test of the accuracy and calibration of the estimates, i.e. PDs, LGDs and EADs, associated with borrower and facility ratings, respectively.
       
      5.4.2 At a minimum, banks should:
       
       develop their own statistical tests to back-test their rating systems;
       
       establish internal tolerance limits for differences between expected and actual outcomes; and
       
       have a policy that requires remedial actions be taken when policy tolerances are exceeded.
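       As an illustration of internal tolerance limits, the sketch below classifies the relative deviation of a realized outcome from its estimate into green, amber and red zones; the thresholds and the parameter values are hypothetical assumptions, not supervisory limits, and each bank would set its own limits by policy.

```python
def tolerance_check(expected, realized, amber=0.25, red=0.50):
    """Classify the relative upward deviation of a realized outcome
    from its estimate against illustrative internal tolerance limits."""
    deviation = (realized - expected) / expected
    if deviation > red:
        return "red: remedial action required"
    if deviation > amber:
        return "amber: heightened monitoring"
    return "green: within tolerance"

# Hypothetical parameter back-tests: (parameter, estimate, realized value)
tests = [("PD grade B", 0.0050, 0.0054),
         ("LGD senior unsecured", 0.35, 0.47),
         ("EAD revolving", 0.75, 1.20)]
for name, est, real in tests:
    print(f"{name}: {tolerance_check(est, real)}")
```

       In practice the limits would also account for sample size and business-cycle effects, so that ordinary cyclical variation does not trigger remedial action.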
       
      5.4.3 However, the data needed to perform comprehensive back-testing will not be available in the early stages of implementing an IRB rating system. Therefore, banks should rely more heavily on review of rating system developments, process verification, and benchmarking to assure themselves and other interested parties that their rating systems are likely to be accurate. Validation in its early stages should also depend on a bank’s management exercising informed judgment about the likelihood of the rating system working, not simply on empirical tests.
       
      5.4.4 Where banks rely on supervisory, rather than internal, estimates of risk parameters, they are encouraged to compare realised LGDs and EADs to those set by SAMA. The information on realised LGDs and EADs should form part of a bank’s assessment of economic capital.