Frank Morisano, Senior Managing Director, is an internationally recognized executive with over 30 years of proven accomplishments: globally growing businesses, steering profitable growth in new markets, guiding product and service development, leading the acquisition, divestment and restructuring of companies, developing companies’ risk and financial crime capabilities, and implementing environmental, social,…
Recently, social media has been buzzing with talk of gender discrimination in a new co-branded credit card offering. This still-unfolding episode offers valuable lessons to lenders and their origination partners across the financial services industry, as technological innovation continues to redefine their operational and compliance risks.
The episode began when a married couple—with joint accounts and living in a community property state—applied separately for the same credit card. The husband received a credit limit that he described as 20 times the limit offered to his wife. A second couple in similar circumstances reported that the husband received a credit limit that was 10 times his wife’s. When the couples began sharing their experience on social media, many others joined in the complaints.
The lender denied discriminatory decision-making, saying, “Our credit decisions are based on a customer’s creditworthiness and not on factors like gender, race, age, sexual orientation, or any other basis prohibited by law.”
However, following inquiries, the card issuer raised the wives’ credit limits in at least some instances. The affected consumers complained that the issuer would not say why the limits were initially different, and observers speculated that a computer algorithm had made the credit decisions. Some cardholders also noted other areas where they felt the card issuer failed consumers, including:
- Failure to report repayment habits to credit bureaus, which hampers building a credit history;
- Poor customer service, especially for lost or stolen card claims;
- Lack of integration with consumer financial management and budgeting applications;
- Inability to add authorized users;
- Limited payment options; and
- Sub-par cash back offerings.
Because of the discrimination complaints, the New York Department of Financial Services (NYDFS) has opened an investigation into whether the card issuer violated state law. A spokesperson for NYDFS said, “Any algorithm that, intentionally or not, results in discriminatory treatment of women or any other protected class of people violates New York law.” Meanwhile, both companies that partnered to launch the co-branded card have been the subject of unfavorable press as well as adverse social media discussion.
The takeaway from this episode is that lenders and their origination partners should redouble their focus on managing several types of risk in light of technological innovation. Specific lessons below are relevant to a wide range of financial services companies and partnerships, from bank-FinTech lending collaborations to alliances among banks to co-branded credit card partners. The lessons touch on model risk, compliance risk, complaint response, customer satisfaction, and third-party risk.
Model Risk Management
Lenders modeling their credit risk, whether using machine learning or traditional risk modeling methods, should maintain strong model risk management functions. Managing model risk (that is, the risk that the model itself is flawed or used improperly) begins with the underlying data. Lenders should ensure that the data sets used to develop the model are complete, unbiased, accurate, and representative of the population the resulting model will evaluate. Modelers should also assess potential data points, including nontraditional data points, for conceptual soundness. Are model features logically related to creditworthiness?
After assessing the data, lenders should ensure that their overall model risk management function meets the regulatory expectations laid out in the interagency Supervisory Guidance on Model Risk Management (Model Risk Guidance). Among other controls, these include properly documenting models and their development processes, independently validating their algorithms, and periodically reviewing model performance using tools such as population stability analysis, benchmarking, backtesting, and override analysis. In addition, any significant changes in algorithms should be validated before the revised model enters production. This is especially true in a machine learning or artificial intelligence environment, where there can be nearly constant changes to the model.
Compliance Risk Management
In addition to conforming to the Model Risk Guidance, it is a best practice to evaluate the data and resulting model for disparate impact and correlation with membership in a protected demographic group. This is especially important if using nontraditional data points. The Equal Credit Opportunity Act’s (ECOA) Regulation B requires credit scoring algorithms or systems to be “empirically derived, demonstrably and statistically sound,” and the Official Staff Commentary to Regulation B notes, “The credit scoring system must be revalidated frequently enough to ensure that it continues to meet recognized professional statistical standards for statistical soundness. To ensure that predictive ability is being maintained, the creditor must periodically review the performance of the system.” Similarly, lenders should monitor any model overrides for disparate impact on a prohibited basis.
Lenders should also test whether underwriting or credit marketing algorithms cause unequal access to credit based on an applicant’s digital footprint in a manner that amounts to digital redlining and discriminates on a prohibited basis. Digital redlining occurs when products or pricing offered to consumers or presented in display ads or social media marketing differ by digital channel, so that channels with greater minority usage are offered adverse conditions. It can also occur when lenders curate online loan advertising or set loan terms and conditions based on internet tracking, geolocation data, or other nontraditional data points.
In addition, lenders must ensure that rejections and other adverse actions based on algorithms meet the requirements of the ECOA and Regulation B. For model- or score-based decisions, lenders should verify that the reasons given for adverse actions are accurate, consistent with the model, and meaningful in explaining why the request was declined and what would improve the consumer’s likelihood of obtaining the requested credit in the future.
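One widely used approach for selecting adverse action reasons from a points-based scorecard ranks each characteristic by how far the applicant fell below the maximum points attainable for it. The following is a minimal sketch of that "distance from maximum" method; the function and field names are hypothetical, and the returned characteristics would still need to map to accurate, consumer-meaningful reason statements.

```python
def adverse_action_reasons(applicant_points, max_points, top_n=4):
    """Rank candidate reason codes by the points the applicant lost
    on each scorecard characteristic relative to the best attainable
    score for that characteristic; return the largest shortfalls."""
    shortfalls = {feature: max_points[feature] - points
                  for feature, points in applicant_points.items()}
    ranked = sorted(shortfalls.items(), key=lambda item: item[1], reverse=True)
    # Only characteristics where points were actually lost qualify
    return [feature for feature, gap in ranked[:top_n] if gap > 0]
```

Because the ranking is derived directly from the scorecard, the resulting reasons stay consistent with the model that produced the decision, which is one of the Regulation B expectations noted above.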
The Fair Credit Reporting Act and its Regulation V also include required consumer disclosures and notifications that would apply to algorithm-based pricing, affiliate marketing, and information sharing. Are the collection of data, its use, and any data sharing clearly disclosed to the consumer? Are there appropriate resolution processes for both direct disputes and disputes through a consumer reporting agency or credit bureau? Regulation V also incorporates limits on the use of medical information. Model developers should ensure that any alternative data used in algorithms does not inappropriately incorporate such data.
If a credit product is marketed as helping consumers build credit histories, lenders should include credit reporting in their business plan. For lenders that choose to report to consumer reporting agencies, accurate reporting is a must.
Beyond Regulations B and V, lenders and third-party partners should evaluate other consumer protection risks associated with any technology they use in delivering or servicing loans, such as privacy and information security. Are cybersecurity practices robust? Are descriptions of cybersecurity practices provided to consumers accurate?
Lenders should evaluate their advertising, disclosure, transaction posting, and billing practices to determine if those practices are clear and transparent to the consumer. Would a reasonable consumer understand the cost of credit, posted charges, and available balance? Are multiple payments in a billing cycle posted appropriately?
Complaint Management
Lenders have historically built their complaint management programs around the standard channels of consumer complaints (e.g., phone, mail, and email). In today’s social media environment, however, lenders and their partners should also have plans for monitoring and rapidly responding to negative social media postings.
Excluding social media posts from complaint monitoring eliminates a critical path to identifying potential issues and to understanding how consumers view a product or service offering. While not every post will meet a lender’s standard definition of a complaint, and some may not even be factually accurate, they do speak to consumer perception, which can quickly become a lender’s reality.
Monitoring, aggregating, and analyzing the content of such posts must be included in a lender’s complaint management processes. Regardless of channel, insured depositories must properly identify complaints to be included in the company’s Community Reinvestment Act (CRA) public file.
Customer Satisfaction
When introducing a new product, customer satisfaction is critical to gaining market share. Lenders should evaluate the product offering for marketplace competitiveness and consumer needs. Are product features, such as rewards, consistent with other offerings already in the market?
Although not a regulatory requirement, some commentators have noted that offering multiple payment and account management options (such as online, phone, mobile application, and mail) improves customer satisfaction. Similarly, integration with consumer budgeting and financial management applications is a useful feature, especially in credit products offered as opportunities to build credit histories.
Third-Party Risk Management
Finally, in lending arrangements where the request for credit is marketed, received, underwritten, priced, or serviced by a third party, vendor and partner management practices should include investigating each party’s capabilities with respect to consumer protection and relationship management. Do your third-party risk management practices adequately consider your business partners’ capabilities and responsibilities with respect to fair lending, model risk management, and consumer protection? Have you considered the potential reputational risk from a third party’s misstep? Have you asked your business partner what steps they are taking to protect your company from risk?
An effective third-party risk management program must: (1) adequately investigate these potential risks during initial (and ongoing) due diligence, (2) document protections during contract negotiations, and (3) regularly monitor the partner’s performance throughout the life of the relationship.
In conclusion, third-party partnerships offer potential to expand access to credit, broaden your customer base, and increase lending efficiency. However, these opportunities come with the potential for significant risk. Accordingly, lenders and their partners must balance the benefits with the risks associated with partner errors, including model, compliance, and reputational risks. Enhancing compliance and risk management practices is the best way for a lender to protect itself, its partners and, most importantly, its customers.