The number and complexity of models in use in UK banks have grown rapidly in recent years, and that growth continues to accelerate, driven by the widespread adoption of artificial intelligence and machine learning techniques. Banks are eager to embrace these technologies, but are wary of the resulting risk and of the cost of managing it.

Now, after a decade of relative inactivity in model risk management (MRM) regulation, the Prudential Regulation Authority (PRA) has published a new supervisory statement, effective in 2024. SS1/23 crystallises many banks’ fears about the potentially costly changes they will have to make to stay compliant.

Data size and diversity continue to rise

Where traditional banking models tended to rely on specific sets of mainly numerical data, the new generation of models requires a far more varied diet of information — often unstructured and complex — from many sources.

That information almost inevitably incorporates bias and unfairness, and modern models often amplify that bias, basing decisions on undesirable features of the data that a traditional model wouldn’t be sensitive to. For example, a modern AI-based creditworthiness model might be able to exploit information such as the borrower’s history of communication with the lender, picking up on linguistic or social cues that shouldn’t ethically be used as the basis for lending decisions. And that risk exists even when data is of good quality (which it often isn’t).

As regulators have pointed out, even when data is of good quality and is used appropriately, the volume of data now being used in banking models poses a risk in itself. For example, data sets that are innocuous alone can be combined by modern analytics to provide a surprisingly complete picture of an individual.
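As a purely hypothetical illustration of that combination risk, the sketch below joins two individually innocuous tables on shared quasi-identifiers; the column names and values are invented for the example.

```python
# Hypothetical illustration: two datasets that are harmless on their own
# can, once joined on quasi-identifiers, profile a named individual.
import pandas as pd

spending = pd.DataFrame({
    "postcode": ["SW1A 1AA", "M1 1AE"],
    "birth_year": [1985, 1990],
    "avg_monthly_spend": [3200, 950],        # anonymous behavioural data
})
marketing = pd.DataFrame({
    "postcode": ["SW1A 1AA", "M1 1AE"],
    "birth_year": [1985, 1990],
    "name": ["A. Example", "B. Example"],    # identified contact data
    "employer": ["Acme Ltd", "Widget Co"],
})

# Neither table alone links names to spending habits; the join does.
profile = spending.merge(marketing, on=["postcode", "birth_year"])
print(profile)
```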

Numerical models, on the other hand, can be highly sensitive to mathematical regularities or outliers in their inputs, as the sketch below illustrates. These and other data risks are amplified as model counts and model data requirements rise.
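A minimal sketch on synthetic data (not drawn from any real bank model) shows how a single mis-recorded observation can shift the parameters of an ordinary least-squares fit:

```python
# Minimal sketch, synthetic data: one outlier noticeably moves a least-squares fit.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)   # clean, roughly linear data

slope_clean, intercept_clean = np.polyfit(x, y, deg=1)

# Append a single badly mis-recorded observation and refit.
x_dirty = np.append(x, 10.0)
y_dirty = np.append(y, 100.0)
slope_dirty, intercept_dirty = np.polyfit(x_dirty, y_dirty, deg=1)

print(f"clean fit:    slope={slope_clean:.2f}, intercept={intercept_clean:.2f}")
print(f"with outlier: slope={slope_dirty:.2f}, intercept={intercept_dirty:.2f}")
```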

Banks will need to invest in a stronger, more independent end-to-end view of their data supply, and monitor not only the accuracy, but also the bias, applicability, provenance and cross-compatibility of data. This uplift in governance will certainly challenge most banks’ data management processes.

Model risk management that can defy interpretation

If an internal auditor or a regulator asks for the reason for a decision — whether it’s a single consumer loan or a major strategic choice — a bank using traditional models can eventually answer the question. Traditional models have been narrow in purpose and ultimately composed of parameters with specific (if specialised) business meanings.

By contrast, regulators including the PRA have pointed out that modern models are often broad in scope and composed of parameters that may have no intelligible meaning: millions of ‘weights’ which influence the model’s output but can’t be easily related either to input information or to specific outputs.

This lack of ‘explainability and interpretability’ is a risk in its own right. Modern models can be vulnerable to a wide range of quality issues; lack of interpretability can make it difficult and expensive to isolate and fix such problems.

The PRA’s proposed ‘tiering’ system would grade models against criteria including complexity and interpretability. All this represents a cost to banks, which must manage not only more models and data, but also more information to validate, classify and justify the behaviour of their models.
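To give a flavour of the record-keeping such a regime implies, here is a minimal, hypothetical sketch of a tiered model inventory; the criteria, thresholds and tier labels below are illustrative assumptions, not the PRA’s actual rules.

```python
# Hypothetical sketch of a tiered model inventory. The criteria, thresholds
# and tier labels are illustrative only and are not taken from SS1/23.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    parameter_count: int      # crude proxy for complexity
    interpretable: bool       # can outputs be traced back to inputs?
    materiality_gbp: float    # exposure influenced by the model

def assign_tier(model: ModelRecord) -> str:
    """Illustrative tiering: complex, opaque, high-impact models rank highest."""
    if model.materiality_gbp > 1e9 and (model.parameter_count > 1e6 or not model.interpretable):
        return "Tier 1"
    if model.materiality_gbp > 1e8:
        return "Tier 2"
    return "Tier 3"

inventory = [
    ModelRecord("credit_scoring_nn", parameter_count=4_000_000,
                interpretable=False, materiality_gbp=2e9),
    ModelRecord("ifrs9_pd_logit", parameter_count=40,
                interpretable=True, materiality_gbp=5e8),
]
for model in inventory:
    print(f"{model.name}: {assign_tier(model)}")
```

Even this toy inventory makes the point: every model now needs metadata gathered, maintained and justified alongside it.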

In the end, it may be the sheer proliferation and ubiquity of models that imposes the greatest burden on banks. Traditionally, MRM has been about specialists managing a small family of interpretable models via fairly well-understood validation processes. The challenge is now much bigger, and the PRA’s commitment to making senior managers accountable means it can’t be ignored.

SS1/23 and the future of model risk management

The UK is the first jurisdiction to respond to the rise of AI/ML with MRM regulation of this kind. But while the UK currently leads the field, there is little that’s unique in the PRA’s requirements. Rather, a consensus seems to be developing across jurisdictions.

The broadening of the definition of a ‘model’ beyond quantitative algorithms; the focus on data and data quality; the concern for how accountability will be preserved in an era of pervasive, continually updating models: these points are all echoed in recent regulatory statements from the US, Japan and beyond.

The PRA’s latest move, then, is likely a preview of the next generation of global MRM regulation, one that will be dominated by the emerging risks of AI/ML models. This will require significant investment from banks that wish to leverage these powerful technologies while remaining compliant.


As seen in Banking Risk and Regulation, an FT publication (November 2023)

Author

Ben Peterson

Ben Peterson, Treliant’s Data Lead for EMEA, is a technology leader with more than 20 years’ experience in Financial Services and fintech.  He understands the role that strong data management plays in increasing revenue and reducing risk, and believes that data management can have a compelling RoI at both program…