Model Risk 101: A Checklist for Risk Managers


In my previous Model Risk 101 blog post, I discussed how the first line of defense – model developers and users – is tasked primarily with implementing a consistent, formalized model development approach. That’s a challenge in and of itself – but model risk management and governance doesn’t end there.

Once in production, predictive models are subject to performance erosion driven by changes in portfolio risk characteristics or availability of economic data. Across the model lifespan, it is therefore critical to answer questions such as: “Are these models still fit for purpose?” or “What corrective actions are needed in case of model performance breaches?”

That’s where your second line of defense comes in. For an integrated model risk management system to work, the second line – primarily risk managers – must go beyond periodic validation to ensure that model use and performance remain in alignment with the portfolio risk profile, so that business and regulatory objectives are met.

Annual independent validation is crucial, of course. Banks should drive consistency in validation testing methodologies and in selecting appropriate data sets, so that results aren’t skewed. In addition to back-testing for performance, organizations should leverage champion/challenger testing of competing models and/or decision strategies on random samples from production populations. Validation documentation is critical, and should be made readily accessible to internal stakeholders and regulators.
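
As a concrete illustration of champion/challenger testing, here is a minimal back-test sketch. It assumes two fitted scikit-learn-style binary classifiers and a labeled random sample drawn from the production population; the function name, sample size and the choice of AUC as the comparison metric are illustrative assumptions rather than a prescribed standard.

```python
# Minimal champion/challenger back-test sketch (illustrative, not a standard).
# Assumes numpy feature/label arrays and two fitted classifiers exposing predict_proba.
import numpy as np
from sklearn.metrics import roc_auc_score

def backtest_champion_challenger(champion, challenger, X_prod, y_prod,
                                 sample_size=10_000, seed=42):
    """Score both models on the same random production sample and compare AUC."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_prod), size=min(sample_size, len(X_prod)), replace=False)
    X, y = X_prod[idx], y_prod[idx]

    champion_auc = roc_auc_score(y, champion.predict_proba(X)[:, 1])
    challenger_auc = roc_auc_score(y, challenger.predict_proba(X)[:, 1])
    return {
        "champion_auc": champion_auc,
        "challenger_auc": challenger_auc,
        "challenger_outperforms": challenger_auc > champion_auc,
    }
```

In practice the same comparison would be repeated across population segments and metrics, with the results logged as part of the validation documentation.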

Beyond validation, an ongoing cycle of model performance monitoring should occur. This monitoring confirms that each model has been correctly implemented, and that it is being used and is performing as intended. Those who do it best take advantage of actionable tools, such as dashboards and model inventories, to track model use and performance, and to review and challenge models against standards and triggers that mandate corrective actions (such as model redevelopment or applying conservatism).
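
As one example of the kind of trigger such dashboards encode, the sketch below computes a Population Stability Index (PSI) between development-time and current production score distributions and maps the result to a corrective action. The bin count, thresholds and action labels are common rules of thumb used here as assumptions, not a mandated policy.

```python
# Illustrative monitoring trigger: Population Stability Index (PSI) with
# threshold bands mapped to corrective actions. Thresholds are rules of thumb.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between development-time (expected) and current production (actual) scores."""
    edges = np.unique(np.quantile(expected, np.linspace(0, 1, bins + 1)))
    edges[0], edges[-1] = -np.inf, np.inf             # catch scores outside the dev range
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    expected_pct = np.clip(expected_pct, 1e-6, None)  # avoid log(0) on empty bins
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

def corrective_action(psi_value):
    """Map a PSI value to an illustrative corrective action."""
    if psi_value < 0.10:
        return "no action"
    if psi_value < 0.25:
        return "investigate; consider applying conservatism"
    return "escalate for redevelopment"
```

The same pattern extends to other performance metrics, such as rank-ordering and calibration, and to segment-level checks.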

Model validation and ongoing monitoring ultimately help organizations mitigate model risk, including through temporary and permanent remediation. If remediation cycles are long, however, ongoing model performance deterioration may go undetected. That’s why it’s important to implement a mix of short- and long-term remediation planning. While this may not address every issue related to model use and performance, it can help provide the processes, controls and technologies needed to determine and address the root causes of model degradation.

What do risk managers need?

To successfully build the second line of defense, risk management personnel need clear insight into the model risk universe, including:

√  The complete inventory of all models, ranked by model materiality (see the record sketch after this list)

√  Mapping to model business owners, developers and validators

√  Clear view of model performance metrics

√  Frequency of model reviews and corrective actions
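
As a data-structure sketch of the four items above, a model inventory record might look something like the following; the field names and example values are hypothetical rather than a prescribed schema.

```python
# Hypothetical model inventory record covering materiality ranking, ownership
# mapping, performance metrics and review cadence. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelInventoryRecord:
    model_id: str
    materiality_rank: int                 # 1 = most material to the organization
    business_owner: str
    developer: str
    validator: str
    performance_metrics: dict = field(default_factory=dict)   # e.g. {"auc": 0.74, "psi": 0.08}
    review_frequency_months: int = 12
    corrective_actions: list = field(default_factory=list)    # history of actions taken

# Example entry (values are illustrative only)
record = ModelInventoryRecord(
    model_id="PD-retail-001",
    materiality_rank=1,
    business_owner="Retail Credit",
    developer="Model Development Team",
    validator="Independent Validation Unit",
    performance_metrics={"auc": 0.74, "psi": 0.08},
)
```

Ranking such records by materiality gives the second line an at-a-glance, defensible view of where model risk is concentrated.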

They also need to:

√  Respond to senior management challenge

√  Demonstrate how various risk levels are appropriate for the organization

√  Determine how best to develop validation and monitoring cycles that (a) help ensure optimal model performance, and (b) are defensible during an internal audit or regulatory review

With the use of analytic models growing at pace, and with their misuse (or underperformance) able to materially affect the business, risk managers need advanced tools to capture and help drive policies and processes, automate validation and monitoring, and report on performance and usage. Such a solution would allow banks to:

√  Manage, track and report on all models at a glance

    • Automatically detect when models fall out of effective performance ranges
    • Track development and deployment status
    • Ensure timely review of all models
    • Quickly respond to regulator audits and inquiries

√  Automate validation and analysis to ensure models are fit for purpose

    • Define model segments, alert conditions and validation plans (see the configuration sketch below)
    • Check multiple population segments and performance criteria
    • Quickly focus on problem situations
    • Log definition and changes to validation criteria
    • Defend model appropriateness over its lifetime
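
To make "define model segments, alert conditions and validation plans" concrete, here is a hypothetical configuration sketch; the model identifier, segment names, metrics and thresholds are invented for illustration and are not drawn from any particular product.

```python
# Hypothetical validation-plan configuration with per-segment alert conditions.
# All names and thresholds are illustrative assumptions.
VALIDATION_PLAN = {
    "model_id": "PD-retail-001",          # hypothetical model identifier
    "review_frequency_months": 12,
    "segments": {
        "new_customers": {"min_auc": 0.70, "max_psi": 0.25},
        "existing_customers": {"min_auc": 0.72, "max_psi": 0.25},
    },
}

def check_alerts(plan, observed_metrics):
    """Compare observed per-segment metrics against the plan and return any breaches."""
    breaches = []
    for segment, limits in plan["segments"].items():
        metrics = observed_metrics.get(segment, {})
        if metrics.get("auc", 1.0) < limits["min_auc"]:
            breaches.append((segment, "AUC below floor"))
        if metrics.get("psi", 0.0) > limits["max_psi"]:
            breaches.append((segment, "PSI above ceiling"))
    return breaches
```

Logging each change to such a plan, alongside the alert history, is what makes model appropriateness defensible over the model's lifetime.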

Figure: Second line of defense example

With the first line of defense using advanced analytic tools and automation to bring consistency, speed and transparency to model development, the second line needs greater agility to monitor models in use and to provide a continuous feedback loop that helps the first line maintain model performance. This may never be a perfect process: long lag times between redevelopment cycles can hinder performance, and shifting business and economic conditions frequently change risk objectives. But as shown in the example above, technology can streamline review and monitoring processes, reduce the time it takes to find root-cause issues, and help speed up remediation cycles.

For more on the topic of model management and governance, check out our Insights white paper Reducing Regulatory Drag on Analytics Teams, or visit http://www.fico.com/en/analytics/model-management-and-compliance. And check back here in a couple of weeks, when I’ll blog about strengthening your third line of defense.

Note: This article originally appeared on the FICO Blog.
