Chapter 15 Exercises: Credit Risk Modelling and Model Risk Management


Exercise 15.1 — Expected Loss Portfolio Calculation

Type: Quantitative calculation | Estimated time: 20 minutes | Difficulty: Foundation

Background

Verdant Bank's Chief Compliance Officer Maya Osei has been asked to present a summary of the SME loan book's Expected Loss profile to the Risk Committee as part of the bank's IRB readiness programme. The credit risk team has provided the following data for five representative SME loans currently on the book.

Data

| Loan ID | Borrower | Sector | PD | LGD | EAD | Secured? |
|---|---|---|---|---|---|---|
| SME-001 | Hargreaves Bakeries Ltd | Food manufacturing | 2.5% | 40% | £425,000 | Yes (property) |
| SME-002 | Greenfield Logistics plc | Transport | 6.2% | 55% | £1,200,000 | Partial |
| SME-003 | Ashworth Digital Ltd | Technology consulting | 1.1% | 65% | £180,000 | No |
| SME-004 | Caldwell Construction Ltd | Construction | 9.8% | 70% | £640,000 | Yes (equipment) |
| SME-005 | Novella Media Group | Media/publishing | 4.4% | 50% | £320,000 | No |

Tasks

Part A: Calculate the Expected Loss (EL = PD × LGD × EAD) for each individual loan. Present your answer in both pounds sterling and as a percentage of EAD.

Part B: Calculate the total Expected Loss for the portfolio. What is the portfolio-level EL as a percentage of total EAD?

Part C: Caldwell Construction (SME-004) has a revolving credit facility with a committed limit of £800,000. The current drawn balance is £640,000 (which is the EAD figure used above). If the bank were to use a Credit Conversion Factor (CCF) of 75% instead of assuming the full drawn balance represents total EAD, what would the revised EAD and EL be for SME-004?

Part D: The risk team proposes setting an individual loan provision equal to EL and a portfolio-level capital allocation equal to 8% of total EAD multiplied by an average risk weight of 75% (the SME retail risk weight under the Standardised Approach). Calculate:

  • Total provisions required (sum of EL across the portfolio)
  • Total capital required under SA
  • Comment on whether EL is larger or smaller than the capital requirement, and what this implies for the bank's pricing and provisioning strategy

Part E (Stretch): If Verdant Bank could achieve F-IRB approval and its internal PDs are accurately calibrated, sketch the argument for why F-IRB capital requirements might be lower than the SA capital requirement for SME-001 (Hargreaves Bakeries, PD = 2.5%, LGD = 45% supervisory). You do not need to compute the IRB formula — the qualitative argument is sufficient.


Worked Solution: Parts A and B

| Loan ID | PD | LGD | EAD | EL (£) | EL as % EAD |
|---|---|---|---|---|---|
| SME-001 | 2.5% | 40% | £425,000 | £4,250.00 | 1.00% |
| SME-002 | 6.2% | 55% | £1,200,000 | £40,920.00 | 3.41% |
| SME-003 | 1.1% | 65% | £180,000 | £1,287.00 | 0.715% |
| SME-004 | 9.8% | 70% | £640,000 | £43,904.00 | 6.86% |
| SME-005 | 4.4% | 50% | £320,000 | £7,040.00 | 2.20% |
| Total | | | £2,765,000 | £97,401.00 | 3.52% |

Part C (SME-004 revised): Under F-IRB supervisory CCF, EAD of a revolving facility = Drawn + (CCF × Undrawn). Undrawn = £800,000 − £640,000 = £160,000. Revised EAD = £640,000 + (75% × £160,000) = £640,000 + £120,000 = £760,000. Revised EL = 9.8% × 70% × £760,000 = £52,136.

Part D: Total EL = £97,401. SA Capital = 8% × 75% × £2,765,000 = £165,900. EL (£97,401) is substantially less than capital (£165,900). This is expected: EL is an average expected cost covered by provisions and pricing; capital covers unexpected losses at the 99.9th percentile.
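The worked figures above can be checked with a short script. The loan inputs are copied directly from the data table; the script itself is illustrative:

```python
# Check of the Exercise 15.1 worked figures.
# Each entry: (PD, LGD as decimals, EAD in pounds), from the data table.
loans = {
    'SME-001': (0.025, 0.40, 425_000),
    'SME-002': (0.062, 0.55, 1_200_000),
    'SME-003': (0.011, 0.65, 180_000),
    'SME-004': (0.098, 0.70, 640_000),
    'SME-005': (0.044, 0.50, 320_000),
}

total_ead = sum(ead for _, _, ead in loans.values())
total_el = 0.0
for loan_id, (pd_, lgd, ead) in loans.items():
    el = pd_ * lgd * ead                      # EL = PD x LGD x EAD
    total_el += el
    print(f"{loan_id}: EL = £{el:,.2f} ({el / ead:.2%} of EAD)")
print(f"Portfolio: EL = £{total_el:,.2f} ({total_el / total_ead:.2%} of EAD)")

# Part C: CCF-adjusted EAD for the SME-004 revolver (drawn + CCF x undrawn)
revised_ead = 640_000 + 0.75 * (800_000 - 640_000)
print(f"Revised SME-004 EL = £{0.098 * 0.70 * revised_ead:,.2f}")  # £52,136.00

# Part D: Standardised Approach capital = 8% x 75% risk weight x total EAD
sa_capital = 0.08 * 0.75 * total_ead
print(f"SA capital = £{sa_capital:,.0f}")  # £165,900
```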


Exercise 15.2 — Model Validation Report Interpretation

Type: Analytical interpretation | Estimated time: 25 minutes | Difficulty: Intermediate

Background

Priya Nair's Big 4 team has completed an independent validation of Verdant Bank's newly developed SME credit scorecard. The validation report presents the following performance metrics, computed on a hold-out test dataset of 1,500 SME loan decisions from Q1–Q3 2024 (with actual defaults observed 12 months later).

Validation Metrics Summary

| Metric | Development Sample | Hold-Out / Live Value | Benchmark Threshold | Pass/Fail? |
|---|---|---|---|---|
| Gini Coefficient | 0.53 | 0.47 | ≥ 0.30 | ? |
| AUC-ROC | 0.77 | 0.73 | ≥ 0.65 | ? |
| KS Statistic | 0.41 | 0.34 | ≥ 0.20 | ? |
| PSI (score distribution, dev vs test) | — | 0.07 | < 0.10 | ? |
| PSI (score distribution, dev vs Q1 2025 live) | — | 0.19 | < 0.10 | ? |
| Hosmer-Lemeshow p-value | — | 0.12 | > 0.05 | ? |
| Override rate (Q1–Q3 2024 production) | — | 22% | < 15% | ? |

Tasks

Part A: Complete the Pass/Fail column for each metric. For any metric that fails, identify whether the failure is concerning enough to recommend rejection of the model, or whether it is a conditional concern.

Part B: The Gini has dropped from 0.53 (development) to 0.47 (hold-out test). Is this a problem? What is the typical acceptable "hold-out degradation" for retail/SME credit scorecards, and how should Verdant interpret this?

Part C: The PSI on the hold-out test sample (development vs test period) is 0.07 — acceptable. But the PSI for the live scoring population in Q1 2025 compared to the development sample is 0.19. What does this indicate, and what action should the model validator recommend?

Part D: The override rate is 22%, against a policy benchmark of < 15%. The validator also notes that no formal override log has been maintained — overrides were simply recorded as a field in the loan management system with no commentary on rationale. Write a draft "Finding" and "Recommendation" for the validation report addressing the override issue. Follow SR 11-7 conventions.

Part E: Based on all metrics considered together, what overall outcome would you recommend for this validation? Options: Approved, Conditional Approval, or Rejected. Justify your recommendation in 3–4 sentences, referencing specific metrics.


Notes for Solution Guidance

  • Gini of 0.47 on hold-out: passes (above 0.30). A drop of 0.06 from dev to test is within typical degradation (<10 Gini points). Not a failure, but should be monitored.
  • PSI of 0.19 on live population: in the "monitor" zone (0.10–0.25) — not a failure yet, but warrants a 6-month monitoring trigger and mandatory re-validation if PSI reaches 0.25.
  • Override rate at 22%: exceeds policy. Absence of override logs is a governance failure (SR 11-7 section on model use controls).
  • Recommended outcome: Conditional Approval — model is statistically acceptable but override governance must be remediated within 90 days.
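The pass/fail assessment in Part A lends itself to a small script. The values and benchmarks below are copied from the validation table; the checking loop itself is a sketch, not part of the chapter's codebase:

```python
# Illustrative pass/fail check for the Exercise 15.2 validation metrics.
# Each entry: (observed value, threshold, whether the threshold is a
# minimum the metric must reach or a maximum it must stay under).
checks = {
    'Gini (hold-out)':       (0.47, 0.30, 'min'),
    'AUC-ROC (hold-out)':    (0.73, 0.65, 'min'),
    'KS (hold-out)':         (0.34, 0.20, 'min'),
    'PSI (dev vs test)':     (0.07, 0.10, 'max'),
    'PSI (dev vs Q1 2025)':  (0.19, 0.10, 'max'),
    'Hosmer-Lemeshow p':     (0.12, 0.05, 'min'),
    'Override rate':         (0.22, 0.15, 'max'),
}

for name, (value, threshold, direction) in checks.items():
    passed = value >= threshold if direction == 'min' else value < threshold
    print(f"{name}: {value} -> {'PASS' if passed else 'FAIL'}")
```

Only the live-population PSI and the override rate fail, which is consistent with a Conditional Approval outcome.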

Exercise 15.3 — Population Stability Index Calculation

Type: Quantitative calculation | Estimated time: 20 minutes | Difficulty: Intermediate

Background

Rafael Torres, now consulting after leaving Meridian Capital, has been engaged to assess whether Verdant Bank's retail mortgage scorecard remains valid six months after implementation. The development team provides him with the following score distributions: the development sample (used to build the model in 2023) and the current live scoring population (Q1 2025).

Data: Score Distribution by Decile

| Score Decile | Development Sample (Expected %) | Current Live Population (Actual %) |
|---|---|---|
| 1 (300–399) | 10.0% | 6.5% |
| 2 (400–449) | 10.0% | 7.0% |
| 3 (450–499) | 10.0% | 9.0% |
| 4 (500–524) | 10.0% | 11.0% |
| 5 (525–549) | 10.0% | 12.5% |
| 6 (550–574) | 10.0% | 13.0% |
| 7 (575–599) | 10.0% | 12.0% |
| 8 (600–624) | 10.0% | 11.0% |
| 9 (625–674) | 10.0% | 10.0% |
| 10 (675+) | 10.0% | 8.0% |
| Total | 100.0% | 100.0% |

Tasks

Part A: Calculate PSI for each decile using the formula:

$$PSI_i = (A_i - E_i) \times \ln\left(\frac{A_i}{E_i}\right)$$

Where $A_i$ = Actual proportion in decile $i$, $E_i$ = Expected proportion in decile $i$.

Part B: Sum the decile PSI values to produce the total PSI. Interpret the result using the standard PSI thresholds.

Part C: Looking at the direction of the shift — the current population has fewer borrowers in the bottom two deciles (lowest scores) and more borrowers in deciles 4–7 (mid-range scores) — what does this suggest about the composition of the bank's current mortgage applicant pool compared to the development sample? Is this shift concerning from a credit risk perspective?

Part D: Rafael notes that mortgage applications have increased by 35% since the model was developed, driven by a marketing campaign targeting first-time buyers. How might this explain the population shift, and what additional monitoring should be recommended?


Worked Solution: Parts A and B

| Decile | Expected (E) | Actual (A) | A − E | ln(A/E) | PSI Component |
|---|---|---|---|---|---|
| 1 | 0.100 | 0.065 | −0.035 | ln(0.65) = −0.431 | (−0.035)(−0.431) = 0.01509 |
| 2 | 0.100 | 0.070 | −0.030 | ln(0.70) = −0.357 | (−0.030)(−0.357) = 0.01071 |
| 3 | 0.100 | 0.090 | −0.010 | ln(0.90) = −0.105 | (−0.010)(−0.105) = 0.00105 |
| 4 | 0.100 | 0.110 | +0.010 | ln(1.10) = +0.095 | (+0.010)(+0.095) = 0.00095 |
| 5 | 0.100 | 0.125 | +0.025 | ln(1.25) = +0.223 | (+0.025)(+0.223) = 0.00558 |
| 6 | 0.100 | 0.130 | +0.030 | ln(1.30) = +0.262 | (+0.030)(+0.262) = 0.00786 |
| 7 | 0.100 | 0.120 | +0.020 | ln(1.20) = +0.182 | (+0.020)(+0.182) = 0.00364 |
| 8 | 0.100 | 0.110 | +0.010 | ln(1.10) = +0.095 | (+0.010)(+0.095) = 0.00095 |
| 9 | 0.100 | 0.100 | 0.000 | ln(1.00) = 0.000 | 0.00000 |
| 10 | 0.100 | 0.080 | −0.020 | ln(0.80) = −0.223 | (−0.020)(−0.223) = 0.00446 |
| Total PSI | | | | | 0.0503 |

Interpretation: PSI = 0.0503 is below the 0.10 threshold, indicating no significant population shift; the model can remain in use without immediate intervention. The consistent directional drift out of the extreme deciles into the mid-range is nevertheless worth investigating, since it suggests a changing applicant mix rather than random noise.
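The decile-level PSI arithmetic can be reproduced in a few lines (decile proportions taken from the exercise data; the snippet is illustrative):

```python
import math

# Expected (development) and actual (Q1 2025 live) decile proportions.
expected = [0.10] * 10
actual = [0.065, 0.070, 0.090, 0.110, 0.125, 0.130,
          0.120, 0.110, 0.100, 0.080]

# PSI_i = (A_i - E_i) * ln(A_i / E_i), summed over deciles.
psi = sum((a - e) * math.log(a / e) for a, e in zip(actual, expected))
print(f"Total PSI = {psi:.4f}")  # Total PSI = 0.0503
```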

Part C: Fewer borrowers in the lowest deciles (highest risk) suggests the current applicant pool may be of higher average credit quality than the development sample — or that the scorecard is scoring the new applicant type differently. Both explanations require investigation.


Exercise 15.4 — Coding Exercise: Logistic Regression Scorecard with WoE Transformation

Type: Coding / technical | Estimated time: 60–90 minutes | Difficulty: Advanced | Prerequisites: Python 3.10+, pandas, scikit-learn, numpy

Background

Using the CreditScorecard class and WoETransformer introduced in this chapter's Python code section, complete the following implementation tasks. A synthetic SME dataset is provided.

Task A: Data Generation and EDA

Generate a synthetic SME dataset with the following variables:

  • years_in_business (integer, 1–20)
  • annual_revenue_gbp (float, log-normal)
  • credit_bureau_score (integer, 300–900)
  • debt_service_ratio (float, 0–1)
  • num_bank_accounts (integer, 1–5)
  • default (binary outcome, 1 = defaulted)

Using the WoETransformer.fit() method, compute WoE and IV for all input variables. Produce a ranked table of Information Values. Which variables are most predictive?

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Your data generation and IV calculation code here
np.random.seed(2024)
n = 8000

# Generate synthetic data
data = pd.DataFrame({
    'years_in_business': np.random.choice(range(1, 21), size=n),
    'annual_revenue_gbp': np.random.lognormal(11.5, 1.2, n).clip(5_000, 10_000_000),
    'credit_bureau_score': np.random.normal(640, 90, n).clip(300, 900).astype(int),
    'debt_service_ratio': np.random.beta(2, 5, n),
    'num_bank_accounts': np.random.choice(range(1, 6), size=n)
})

# TODO: Generate a correlated default flag using a logistic model
# of the above variables, then:
# 1. Fit a WoETransformer on the training set
# 2. Print the ranked Information Values table
# 3. Identify variables to include/exclude based on IV thresholds
```
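One way to approach the first TODO is to draw the default flag from a logistic model of the synthetic drivers. The sketch below is self-contained (it regenerates the same synthetic frame with the same seed), and its coefficients are illustrative choices, not values from the chapter:

```python
import numpy as np
import pandas as pd

# Regenerate the synthetic drivers from Task A.
np.random.seed(2024)
n = 8000
data = pd.DataFrame({
    'years_in_business': np.random.choice(range(1, 21), size=n),
    'annual_revenue_gbp': np.random.lognormal(11.5, 1.2, n).clip(5_000, 10_000_000),
    'credit_bureau_score': np.random.normal(640, 90, n).clip(300, 900).astype(int),
    'debt_service_ratio': np.random.beta(2, 5, n),
    'num_bank_accounts': np.random.choice(range(1, 6), size=n),
})

# Logistic link: higher bureau score and longer trading history reduce
# default risk; higher debt service ratio increases it. Coefficients
# below are illustrative, chosen to give a plausible SME default rate.
log_odds = (
    -2.0
    - 0.006 * (data['credit_bureau_score'] - 640)
    - 0.04 * data['years_in_business']
    + 2.5 * data['debt_service_ratio']
)
p_default = 1 / (1 + np.exp(-log_odds))
data['default'] = (np.random.uniform(size=n) < p_default).astype(int)
print(f"Synthetic default rate: {data['default'].mean():.2%}")
```

Because the outcome is generated from the drivers, the WoE/IV ranking in the next step should recover `credit_bureau_score` and `debt_service_ratio` as the most predictive variables, while `num_bank_accounts` (which does not enter the link) should show negligible IV.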

Task B: Scorecard Fitting and Score Generation

Using the CreditScorecard class:

  1. Fit the scorecard on the training set.
  2. Generate scores for the test set.
  3. Print the scorecard table showing each variable's bins, WoE values, and point contributions.

```python
# Your code here:
from chapter15_code import CreditScorecard  # Assume the class is importable

scorecard = CreditScorecard(base_score=600, pdo=20, target_odds=50.0, min_iv=0.02)
# TODO: fit, score, and produce scorecard table
```

Task C: Validation Metrics

Compute Gini coefficient, AUC-ROC, KS statistic, and PSI on the test set. Then produce a ModelValidationReport object with your results and print the summary.

```python
from chapter15_code import (
    gini_coefficient, ks_statistic, population_stability_index,
    ModelValidationReport
)
from datetime import date

# TODO: Compute metrics and build report
report = ModelValidationReport(
    model_id='EXERCISE-001',
    model_name='SME Exercise Scorecard',
    validation_date=date.today(),
    validator='Your Name',
    gini=...,
    auc=...,
    ks_statistic=...,
    psi=...,
    validation_outcome=...,  # 'approved', 'conditional', or 'rejected'
    conditions=[...],
    findings=[...]
)
print(report.summary())
```

Task D: Score to PD Mapping

Using the pd_from_score() method, compute PD estimates for scores between 400 and 800 in steps of 20. Plot (or tabulate) the score vs PD relationship. At what score does PD fall below 1%? Below 5%?
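If pd_from_score() implements the standard points-to-double-the-odds mapping implied by the Task B parameters (base score 600, PDO 20, and 50:1 good:bad odds at the base score), the score-to-PD curve can be tabulated as below. The local function here is an assumed reimplementation for illustration, not necessarily the chapter's exact method:

```python
def pd_from_score(score, base_score=600, pdo=20, target_odds=50.0):
    """Assumed points-to-double-the-odds mapping: good:bad odds
    double for every `pdo` points above the base score."""
    odds = target_odds * 2 ** ((score - base_score) / pdo)
    return 1.0 / (1.0 + odds)

for score in range(400, 801, 20):
    print(f"score {score}: PD = {pd_from_score(score):.4%}")
```

Under this parameterisation the PD at the base score of 600 is 1/51 ≈ 1.96%, and each 20-point increase halves the bad odds.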

Task E: Override Analysis (Extension)

Simulate an override mechanism: take the bottom 10% of scorers (highest predicted PD) and randomly approve 15% of them (simulating relationship manager overrides). Then:

  • Calculate EL for the overridden population using the model PD, LGD = 55%, and EAD = £250,000 average.
  • Calculate EL for the non-overridden approved population.
  • Compare the EL rates and write a two-paragraph commentary on what this analysis implies for override governance.


Guidance Notes

The complete working code for the CreditScorecard class is available in this chapter's index.md. Students should run verdant_bank_example() first to verify their environment, then adapt the code for this exercise.

Key learning outcomes:

  • Understand how WoE transformation works in practice on synthetic data
  • Build intuition for how scorecard points translate to PD estimates
  • Practise interpreting validation metrics holistically (not just checking if they pass thresholds)
  • Experience the interaction between model outputs and credit decisions


Exercise 15.5 — Model Risk Governance Gap Analysis

Type: Case analysis / regulatory mapping | Estimated time: 30 minutes | Difficulty: Intermediate

Background

Maya Osei is reviewing Verdant Bank's current model governance practices against SR 11-7 and EBA/GL/2017/16 requirements. She has documented six scenarios from her internal review. For each scenario, identify:

(a) Which specific SR 11-7 requirement is violated (or at risk of being violated)
(b) The severity of the issue (Low / Medium / High)
(c) A recommended remediation action

Scenarios

Scenario 1: The bank's Head of Risk Analytics developed the retail mortgage scorecard in 2021 and has been the signatory on all subsequent validation reports for the same model. No third-party review has been conducted.

Scenario 2: Verdant uses a vendor-provided credit bureau scoring model for initial applicant screening. The vendor provides a performance summary sheet annually but has never granted the bank access to the model's development data, assumptions, or methodology documentation. The bank has not conducted any independent validation of the vendor model.

Scenario 3: The model inventory spreadsheet lists 14 models. However, a data quality check reveals that three models have been decommissioned and replaced, but their successors are not listed. Two models used by the treasury function for FX hedging limits are not included at all.

Scenario 4: The SME credit scorecard is being used to approve loans up to £2 million. The model's development documentation states clearly that it was validated for loan decisions up to £500,000, with a note: "Application to higher-value exposures requires separate validation." No separate validation has been conducted.

Scenario 5: Following a PSI alert (PSI = 0.22) generated by the automated monitoring system in October 2024, the model owner's response was logged as: "Reviewed — no action required." No analysis supporting this conclusion is on file. The next scheduled validation is in March 2026.

Scenario 6: The bank's IFRS 9 provision model has not been updated since its initial development in 2022. The model documentation notes that the macro scenarios used are "sourced from the Bank of England's November 2021 Monetary Policy Report." No mechanism exists to update the macro scenarios as economic conditions change.


Worked Solution

| Scenario | SR 11-7 Requirement Violated | Severity | Remediation |
|---|---|---|---|
| 1 | Validation independence — developer cannot validate their own model | High | Engage independent internal validator or external party immediately; retrospectively review prior validation sign-offs |
| 2 | Vendor model validation — SR 11-7 explicitly states vendor models are not exempt | High | Issue formal data access request to vendor; if access refused, commission independent conceptual soundness review from model documentation; escalate to risk committee if unresolved |
| 3 | Model inventory completeness — all models in use must be registered, including treasury models | Medium | Conduct model inventory refresh; establish process for mandatory registration within 30 days of any new model deployment or decommission |
| 4 | Model use controls — models must not be applied outside documented scope | High | Immediately restrict scorecard use to ≤£500,000 until a supplementary validation for higher-value exposures is completed; document the restriction in the model's terms of use |
| 5 | Ongoing monitoring and outcomes analysis — alerts require documented investigation, not conclusory sign-off | Medium | Require written analysis for all PSI alerts above threshold; consider quarterly monitoring report requiring second-line sign-off; update escalation triggers |
| 6 | Ongoing monitoring and model relevance — outdated macro scenarios render the model inappropriate for current conditions | High | Implement a quarterly macro scenario refresh process aligned with published forecasts (Bank of England, OBR); re-validate the provision model's sensitivity to updated scenario inputs |

Exercise 15.6 — Research Exercise: IFRS 9 Expected Credit Loss Modelling

Type: Research and analysis | Estimated time: 45–60 minutes (including reading) | Difficulty: Foundation to Intermediate

Background

IFRS 9 Financial Instruments (IASB, 2014; effective for annual periods beginning on or after 1 January 2018) replaced the IAS 39 incurred loss model with a forward-looking Expected Credit Loss (ECL) framework. This exercise develops your understanding of how IFRS 9 interacts with credit risk modelling.

Part A: Research Questions

Using the primary source (IFRS 9, Section 5.5 "Impairment") and the IASB's Basis for Conclusions, answer the following questions in your own words:

  1. What was the primary criticism of the IAS 39 incurred loss model that motivated the development of IFRS 9's ECL approach?

  2. The IFRS 9 standard requires that ECL calculations incorporate "forward-looking information" including macroeconomic variables. Identify three examples of forward-looking information that might be incorporated into a retail mortgage ECL model.

  3. What does "significant increase in credit risk" (SICR) mean under IFRS 9? Identify two quantitative and two qualitative indicators that might be used to identify SICR.

  4. Why does IFRS 9 require probability-weighted multiple scenarios rather than a single "most likely" scenario?

Part B: Three-Stage Analysis

Cornerstone Financial Group's consumer lending division has a credit card portfolio of 200,000 accounts. Using the IFRS 9 three-stage framework, analyse the appropriate provision treatment for each of the following accounts:

| Account | Details | Stage? | Provision basis? |
|---|---|---|---|
| A | Current at reporting date; opened 18 months ago; PD unchanged from origination | ? | ? |
| B | 15 days past due; PD has increased from 0.8% at origination to 3.2% (300bp increase — above SICR threshold) | ? | ? |
| C | 75 days past due; account under internal collections management | ? | ? |
| D | 2 days past due; PD has increased from 2.1% to 2.9% (80bp increase — below SICR threshold) | ? | ? |
| E | 0 days past due; borrower's employer announced restructuring, borrower has contacted bank; PD model estimate up 250bp | ? | ? |

Part C: Model Connectivity

IFRS 9 ECL modelling requires connectivity between several quantitative models:

  • A staging model (to determine Stage 1/2/3 allocation)
  • A PIT PD term structure model (12-month and lifetime PD paths)
  • An LGD model (for each Stage)
  • A macro scenario model (generating probability-weighted scenarios)
  • A prepayment/behavioural model (for lifetime EAD projections)

Draw (sketch or describe) the high-level architecture of how these model components interact to produce a provision estimate for a single credit card account currently classified as Stage 2.
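As a minimal numerical sketch of that architecture, lifetime ECL for a Stage 2 account is a scenario-weighted, discounted sum of marginal PD × LGD × EAD over the remaining life. Every figure below (PD term structures, LGDs, EAD paths, scenario weights, discount rate) is a hypothetical placeholder for the outputs of the models listed above:

```python
# Hypothetical 3-year lifetime ECL for a Stage 2 credit card account.
# Each scenario: (probability weight, marginal PDs by year, LGD,
# projected EADs by year). All values are illustrative placeholders.
scenarios = {
    'base':     (0.50, [0.03, 0.025, 0.02], 0.60, [5_000, 4_500, 4_000]),
    'upside':   (0.25, [0.02, 0.015, 0.01], 0.55, [5_000, 4_200, 3_500]),
    'downside': (0.25, [0.06, 0.050, 0.04], 0.70, [5_000, 4_800, 4_600]),
}
eir = 0.18  # effective interest rate used as the discount rate

# ECL = sum over scenarios and years of
#   weight * marginal PD_t * LGD * EAD_t, discounted at the EIR.
ecl = 0.0
for weight, pds, lgd, eads in scenarios.values():
    for t, (pd_t, ead_t) in enumerate(zip(pds, eads), start=1):
        ecl += weight * pd_t * lgd * ead_t / (1 + eir) ** t

print(f"Probability-weighted lifetime ECL = £{ecl:,.2f}")
```

The same structure answers Part D's procyclicality point: in a downturn the downside scenario's weight and PD path both rise, so the provision jumps even before any account misses a payment.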

Part D: Procyclicality Debate

IFRS 9 has been criticised for potentially amplifying financial instability — because provisions rise sharply in recessions (when SICR triggers fire and more accounts migrate from Stage 1 to Stage 2), and this may force banks to reduce lending precisely when the economy needs credit.

In approximately 200 words, present arguments on both sides of this debate: (i) the case that IFRS 9 introduces harmful procyclicality, and (ii) the case that IFRS 9 provides a more honest and useful picture of bank solvency.


Guidance on Research Sources

Primary sources:

  • IFRS 9 Financial Instruments, paragraphs 5.5.1–5.5.20 (available free at ifrs.org)
  • IASB Basis for Conclusions on IFRS 9, paragraphs BC5.1–BC5.100
  • ESRB Report "Financial Stability Implications of IFRS 9" (March 2017)

Secondary sources:

  • EBA Guidelines on Credit Institutions' Credit Risk Management Practices and Accounting for Expected Credit Losses (EBA/GL/2017/06)
  • FSB paper "Implications of IFRS 9 for Financial Stability" (2017)
  • KPMG IFRS 9 Implementation Guide (publicly available)

Part B Worked Solution

| Account | Stage | Reason | Provision Basis |
|---|---|---|---|
| A | Stage 1 | Performing; no SICR detected | 12-month ECL |
| B | Stage 2 | SICR triggered: 300bp PD increase exceeds typical 200bp threshold | Lifetime ECL |
| C | Stage 3 | 75 days past due and under active collections management; although the 90-day rebuttable presumption of default has not yet been reached, the collection action indicates unlikeliness to pay, supporting credit-impaired status | Lifetime ECL on net carrying amount |
| D | Stage 1 | Only 80bp PD increase, below SICR threshold; 2 days past due does not trigger the 30-day rebuttable presumption | 12-month ECL |
| E | Stage 2 | 250bp PD increase exceeds SICR threshold even though zero days past due; qualitative indicator (employer restructuring, customer contact) also supports SICR | Lifetime ECL |

Chapter 15 of Regulatory Technology (RegTech): A Practitioner's Guide