Over the last decade, we have watched technology in the financial services industry advance at an unprecedented pace. And with the recent introduction of artificial intelligence (AI), the pace of technology evolution, and the industry’s dependency on it, have increased exponentially.

Given this rapidly evolving and complex landscape, how do risk and compliance leaders keep up? What should they anticipate? And what should they be doing to prepare for what’s to come, while continuing to ensure the effectiveness and relevance of their risk and compliance programs?

Risk disciplines, particularly in areas of non-financial risk, have generally been slow to adopt technology capabilities. Regtech remains highly focused on the operational and volume/data-based elements of risk activities (AML, complaints, testing, etc.) and on the adoption and advancement of governance, risk, and compliance (GRC) capabilities. The interpretive, qualitative, advisory, and non-operational nature of non-financial risk management (compliance, third-party, change management, etc.) makes it more challenging to find tech-enabled opportunities and to automate. So, fundamentally, the compliance program of today still looks and operates much as it did 15 to 20 years ago.

Could AI be the innovation with the potential to redefine risk and compliance as we know it? At the very least, risk and compliance programs will need to adapt quickly to keep pace with the business and operational changes that AI is likely to bring, if they are to remain effective.

We’ve seen over the years that legislators and regulators take a considerable amount of time to observe any industry innovation before taking a position and enforcing it. This lag creates a window of great opportunity for those financial institutions with the risk appetite to step into emerging areas as early adopters, to test and learn, and to influence the direction the industry takes and how the regulators view the risks. They could take a cue from Acting Comptroller of the Currency Michael J. Hsu. “It helps to bear in mind three principles: (1) innovate in stages, (2) build the brakes while building the engine, and (3) engage regulators early and often,” he said in a June 15 speech at the ABA Risk and Compliance Conference.

The challenge for risk and compliance professionals during this period is that they must adapt their practices to properly anticipate, identify, and mitigate the risks of emerging and evolving technology, without clear guardrails for doing so. This exercise requires a keen understanding of their organization’s risk appetite, early engagement in the exploration of new initiatives, and risk and compliance professionals with strong skills in critical thinking and risk management.

What Are Regulators Saying About AI?

Given the usual regulatory lag, it is not surprising that regulators have not yet established any specific guidance on whether and how AI can be used in financial institutions, or how the related risks are to be managed. And we shouldn’t expect this level of guidance for some time. They have, however, been offering plenty of opinions on related topics (automated models, machine learning, digital apps, algorithms, chatbots) and early indicators of rules to come. We can already see the areas of initial concern and get a sense of what to anticipate. An overarching focus on bias, fairness, disparate impact, and customer protection has emerged from the following guidance, opinions, and actions:

  • Following the boom in mortgage originations into 2022, the Consumer Financial Protection Bureau (CFPB) turned its attention to algorithmic bias in home valuations.
  • Various regulatory agencies have focused for the past couple years on credit denials that are made by third-party verification providers and based on algorithms.
  • The Office of the Comptroller of the Currency (OCC) established a new Office of Financial Technology earlier this year, and initiated research into the implications of financial technology for banking in mid-2022.
  • The CFPB produced an “Issue Spotlight” on June 6 of this year about increasingly AI-enabled chatbots, their risks to consumers, and guidance to financial institutions.
  • The Biden administration directly addressed AI last October, when it introduced the White House’s Blueprint for an AI Bill of Rights, focused on ensuring this technology is designed and used in a manner that doesn’t discriminate or violate consumer privacy laws.
  • Regulators have found other occasions to reiterate that in the early days of AI, banks have a duty to proceed in a manner that protects customers from discrimination and harm. This position was affirmed by the CFPB through its policy statement on abusive acts or practices on April 3 of this year, and through its joint statement with the Department of Justice (DOJ), Federal Trade Commission (FTC), and Equal Employment Opportunity Commission (EEOC) focused on enforcement efforts against discrimination and bias related to automated systems.
  • Acting Comptroller Hsu’s recent speech highlighted four areas of AI-related risk that the OCC is prioritizing and that banks must be equipped to manage effectively: bias and discrimination, fraud, the creation and dissemination of harmful misinformation, and the concept of “alignment.” Specifically, misalignment occurs when AI outcomes drift farther and farther from the original source programming and/or from the financial institution’s direct management, often through reliance on third parties, calling into question where accountability lies for any resulting disparate impact. This unintended consequence highlights the importance not only of strong risk and compliance practices, but also of effective governance and oversight by institutions.

What Should We Be Anticipating?

U.S. regulators are making their near-term priorities clear as banks wade carefully into the world of AI, and we’re getting insight into the directions they are taking, and will take, as they too learn more about AI and related technology innovations. Another important harbinger is what is taking place in the European Union (EU).

The EU is typically faster than the U.S. to establish regulations and protections. And as we experienced with data privacy rules under the EU’s General Data Protection Regulation (GDPR), the U.S. government and individual American states are likely to adopt some variation of the international standards, albeit with significant lag. So it’s important that we pay close attention to the precedents being set. On June 14 of this year, the European Parliament approved amendments to the draft EU AI Act. While the act has yet to be enacted, and more political negotiation and industry reaction are still to come, it establishes a number of important standards, including:

  • The AI Act establishes several prohibitions, including on systems that exploit the vulnerabilities of protected groups.
  • The act also establishes a tiered risk approach to AI systems, with high-risk classification assigned to AI systems used in critical infrastructure that could put people’s health and livelihood at risk, as well as in access to education and employment, law enforcement, the administration of justice, matters of asylum and border control, and access to essential services such as financial services.
  • For high-risk uses of AI, the act sets forth several risk management requirements, including documentation standards, required human oversight, and transparency sufficient to enable interpretation of output, as well as more typical governance, testing, and lifecycle risk management practices.
  • As with the EU’s GDPR, there are significant penalties established for non-compliance with prohibited uses and obligations.

What Should Risk and Compliance Leaders Focus on Now?

Risk and compliance leaders should stay on top of the messages we are receiving from legislators and regulators about emerging technology, particularly as they might apply to the strategic direction their own financial institutions are taking relative to AI and other advanced technology. This vigilance requires not only monitoring the landscape of applicable regulatory change, but also surveying the broader scope of activity, including speeches, blog posts, legislation, guidance, statements, actions, initiatives, and international legislation and regulation. Considering this broader context will help identify the direction regulators and legislators are taking and provide hints on areas of focus and concern along the way.

Meanwhile, within their own institutions, risk and compliance leaders also need to have a seat at the table for strategic discussions with top management and the board, to enable a proactive approach to assessing risks and determining appropriate controls and oversight. Given what we know from regulators thus far, the priority areas of focus for risk and compliance leaders should include:

  • Consumer Protection: Laser-focused attention on any potential bias and discrimination, disparate impact, violations of consumer privacy laws, and consumer harm through Unfair, Deceptive, or Abusive Acts or Practices (UDAAP) is fundamental. These risks must be thoroughly vetted and protected against, and AI by its nature will make this a challenging endeavor; a simple illustration of one quantitative check appears after this list.
  • Change Management: An effective process to identify risks proactively and establish appropriate controls and oversight, in advance of implementing any significant change or new technology capability, is critical to ensuring that the appropriate risk management practices are in place and effective.
  • Third-Party Risk Management: The banking industry has become far more dependent on the services of third parties in recent years and will become even more so with these emerging technologies. Regulators have been focused on this risk for many years, and they issued joint final guidance on third-party risk management on June 6 of this year. Effective third-party risk management is imperative for organizations as they introduce more complex technology capabilities. The alignment risk described earlier in this article will become increasingly important to clarify and understand, since financial institutions ultimately bear the primary responsibility for any model or technology output and impact.
  • Governance: As organizations evolve their technology capabilities, it will be increasingly important that there is strength and independence in the management and board oversight of technology, business strategy, and related risk management.
  • Strong Regulatory Engagement: In periods of rapid change such as this, it’s important to keep your regulators close and well informed. Keeping them apprised of the steps you are taking as an organization, and the manner in which you are evaluating and managing risk, will not only help reassure them, but can also influence how the regulators themselves perceive the risks and the practices they ultimately direct the industry to adopt.
  • Talent: Recruiting, developing, and retaining the appropriate talent to support evolving technology and the parallel evolution of risk and compliance programs should be a top priority for all risk and compliance leaders today. The innovation we are experiencing, and the likely impacts it will have on how risk and compliance programs operate, necessitate that risk and compliance professionals have broader risk capabilities, excellent critical thinking and reasoning skills, nimbleness, creativity, and strong technical, data management, and analytical skills. Depending on the risk discipline, particularly in non-financial risk areas, these are not the typical skills pursued over the last couple of decades, but they are fundamental now.
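
To make the consumer protection point above more concrete, one of the simplest quantitative screens a fair lending or model risk team might run on automated decisions is an adverse impact ratio comparison across demographic groups (the “four-fifths rule” familiar from fair lending and employment analysis). The Python sketch below is purely illustrative: the record format, group labels, and 0.8 threshold are assumptions for the example, and a genuine fair lending review would rely on far more rigorous statistical testing and controls for legitimate credit factors.

```python
from collections import defaultdict

# Illustrative decision records: (demographic_group, approved) pairs.
# In practice these would come from the institution's decisioning system.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def adverse_impact_ratios(records, threshold=0.8):
    """Compute each group's approval rate relative to the highest-approving
    group and flag ratios below the (assumed) four-fifths threshold."""
    approvals = defaultdict(int)
    totals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)

    rates = {g: approvals[g] / totals[g] for g in totals}
    benchmark = max(rates.values())  # highest group approval rate

    results = {}
    for group, rate in rates.items():
        ratio = rate / benchmark if benchmark else 0.0
        results[group] = {
            "approval_rate": round(rate, 3),
            "impact_ratio": round(ratio, 3),
            "flagged": ratio < threshold,
        }
    return results

if __name__ == "__main__":
    for group, stats in adverse_impact_ratios(decisions).items():
        print(group, stats)
```

A screen like this does not prove or disprove discrimination; it simply flags disparities that warrant deeper qualitative and statistical review.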

In Summary

AI and related technology innovation will likely redefine the banking industry and risk and compliance practices in ways we can’t yet envision. This uncertainty could be one of the greatest challenges that risk and compliance professionals have faced, but also the source of tremendous excitement and opportunity. The most important thing that risk and compliance leaders can do is to stay informed, be proactive with their regulators, set aside fear of the unknown, and seek instead to learn and prepare. Those who do should have the relevant talent and insight, when the time comes, to know when and how to adapt their own practices and programs in a manner that ensures the continuity of appropriate and effective risk management.

 

Author

Karin Lockovitch

Karin Lockovitch, a Treliant Senior Managing Director, Regulatory Compliance and Mortgage, is a 25-year banking and financial services executive. At Treliant, she leads the Regulatory Compliance and Mortgage Services division, to provide clients with valuable, applicable, and innovative solutions and support for their regulatory, compliance, and non-financial risk-related needs.