Lynn Woosley is a Senior Director with Treliant. She is a seasoned executive with extensive risk management experience in regulatory compliance, consumer and commercial credit risk, credit and compliance risk modeling, model governance, regulatory change management, acquisition due diligence, and operational risk in both financial services and regulatory environments.
Treliant knows fair lending and risk modeling. If you need assistance developing compliant adverse action notices based on artificial intelligence or machine learning credit models, Treliant can help.
On July 7, 2020, the Consumer Financial Protection Bureau (CFPB) released a blog post titled, “Innovation spotlight: Providing adverse action notices when using AI/ML models.” The blog noted that uncertainty regarding how AI models address the adverse action notice requirements of the Equal Credit Opportunity Act (ECOA) and Fair Credit Reporting Act (FCRA) may be slowing adoption of AI models for credit underwriting.
The blog made two key points with respect to generating compliant adverse action notices. First, the authors pointed out that the existing requirements of Regulation B are flexible enough to permit compliant adverse action notices based on AI or ML decisions. Although the lender must provide accurate and specific reasons for adverse action, it is not required to explain how or why a given factor affected the credit decision, or how that factor relates to creditworthiness in a credit scoring system. In addition, neither ECOA nor Regulation B specifies a definitive list of decline reasons, giving creditors the flexibility to cite decline factors that do not appear on the current model forms.
Second, the blog reminded lenders using innovative tools of CFPB policies designed to foster compliant innovation, with a focus on the Policy to Encourage Trial Disclosure Programs. The CFPB is particularly interested in:
- Whether the example methodologies for determining principal reasons for adverse action in the current Official Interpretation of Regulation B are applicable to current AI and ML models and explainability methods;
- Whether explainability methods are accurate, especially when applied to deep learning or other complex ensemble models; and
- How to convey the principal decline reasons accurately and understandably to consumers, especially with respect to alternative data and complicated data interrelationships.
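To make the first question concrete: one example methodology in the Official Interpretation of Regulation B ranks factors by how far each one falls short of its best attainable contribution to the applicant's score, and reports the largest shortfalls as the principal reasons. The sketch below illustrates that ranking logic for a simple additive scoring model; the feature names, point values, and `top_n` cutoff are illustrative assumptions, not a CFPB-prescribed implementation, and complex AI/ML models would require an explainability method to produce the per-feature contributions in the first place.

```python
def principal_reasons(contributions, best_contributions, top_n=4):
    """Rank features by score points lost versus the best attainable
    contribution (a 'points below maximum' style comparison).

    contributions       -- points the applicant earned per feature
    best_contributions  -- maximum points attainable per feature
    top_n               -- how many principal reasons to report
    """
    # Shortfall: how many points this feature cost the applicant.
    shortfalls = {
        feature: best_contributions[feature] - contributions[feature]
        for feature in contributions
    }
    # The largest shortfalls are the principal reasons for the lower score.
    ranked = sorted(shortfalls.items(), key=lambda kv: kv[1], reverse=True)
    return [feature for feature, loss in ranked[:top_n] if loss > 0]


# Illustrative applicant: hypothetical features and point values.
applicant = {"payment_history": 40, "utilization": 10, "age_of_file": 12}
best = {"payment_history": 55, "utilization": 30, "age_of_file": 15}

# Utilization cost 20 points, payment history 15, age of file 3.
print(principal_reasons(applicant, best, top_n=2))
# → ['utilization', 'payment_history']
```

The open question the CFPB raises is whether this kind of ranking remains meaningful when per-feature contributions come from post-hoc explainability techniques applied to deep learning or ensemble models, rather than directly from an additive scorecard.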