Everyone in the financial services industry is aware of redlining risk, which is a key regulatory priority. Physical redlining is a form of illegal disparate treatment that occurs when residents of majority-minority neighborhoods receive unequal access to credit, or unequal terms of credit, because of the demographics of the neighborhood in which the credit seeker resides or in which the residential property to be mortgaged is located. Redlining may violate both the Fair Housing Act and the Equal Credit Opportunity Act, and it can cause related issues under the Community Reinvestment Act (CRA). Key factors in physical redlining are the locations of branches and loan production offices; marketing practices; product availability, terms, and conditions; the geographic distribution of applications and approvals; and neighborhood demographics.

With the rise of digital banking, some regulators and community advocates are becoming concerned about a new form of discrimination: digital redlining. Digital redlining is a form of discrimination in which lenders restrict access to credit, or offer credit on unequal terms, because of applicants’ digital footprints. This type of redlining occurs when the products or pricing offered to a consumer, or presented in display ads or social media marketing, differ by digital channel, so that channels with greater minority usage receive less favorable terms. It can also occur when lenders curate online loan advertising or set loan terms and conditions based on internet tracking or big data.

How does digital redlining occur? The most direct method is the explicit use of “identity affiliation” in determining whether to display credit offers. In October 2016, the investigative news organization ProPublica published a report based on testing Facebook’s advertising practices. Its testing revealed that Facebook permitted advertisers both to target users and to exclude them from seeing advertising for housing and employment based on Facebook’s “Ethnic Affinities,” which Facebook assigned based on pages and posts that users had “liked” or engaged with on Facebook. These ethnic affinities included African-American, Asian-American, and Hispanic affiliations. In response to the ProPublica report, Facebook launched an automated system in February 2017 to prevent advertisers from using racial categories in advertisements for housing, credit, and employment.

Digital redlining could occur in other ways that do not involve explicit consideration of demographic characteristics. For example, advertising and offers related to credit could be curated based on internet tracking or big data.

Internet Tracking Risks

Many websites, search engines, social media sites, applications, and internet service providers track consumers as they move through the internet. There are various ways this can be done. Most consumers are familiar with cookies, small pieces of data that websites store on consumers’ devices through their web browsers. First-party cookies are placed by the sites consumers visit as they surf the web. Third-party cookies are placed by someone other than the sites that users visit, such as advertising networks or analytics companies.
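To illustrate the mechanics, here is a minimal Python sketch of third-party cookie tracking; the ad network domain, the in-memory store, and the site names are all hypothetical. Because the cookie is scoped to the ad network’s domain rather than the publisher’s, the same visitor ID accompanies requests from every site that embeds the tracker, linking the visits into a single browsing profile.

```python
# Minimal sketch of a third-party tracking cookie. "adnetwork.example" and
# the in-memory store are illustrative, not any real network's design.
from http import cookies
import uuid

visits: dict[str, list[str]] = {}  # visitor_id -> sites where the tracker fired

def handle_tracker_request(request_headers: dict, referring_site: str) -> dict:
    """Assign or reuse a cross-site visitor ID via a third-party cookie."""
    jar = cookies.SimpleCookie(request_headers.get("Cookie", ""))
    if "visitor_id" in jar:
        visitor_id = jar["visitor_id"].value   # returning visitor: same ID everywhere
    else:
        visitor_id = str(uuid.uuid4())         # first sighting: mint a new ID
    visits.setdefault(visitor_id, []).append(referring_site)
    return {"Set-Cookie": f"visitor_id={visitor_id}; Domain=adnetwork.example; Path=/"}

# The tracker fires on two unrelated publisher sites; the cookie links them.
hdrs = handle_tracker_request({}, "news-site.example")
handle_tracker_request({"Cookie": hdrs["Set-Cookie"].split(";")[0]}, "shopping-site.example")
print(visits)  # one visitor ID mapped to both sites
```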

As users have become aware of traditional cookies and taken steps to block or remove them, web companies have developed new types of cookies and other mechanisms for tracking, such as device fingerprinting and device identifiers. These techniques track device type, internet activity, and settings, including keyboard language, application usage, items read from news feeds, and application store searches and purchases. They may also track location under some circumstances.
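A rough sketch of the fingerprinting idea follows; the attribute list is illustrative and far simpler than what real fingerprinting scripts collect. The point is that hashing enough stable device traits yields an identifier that survives cookie deletion.

```python
# Rough sketch of device fingerprinting: hashing stable device attributes
# into an identifier that persists even when cookies are cleared.
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Combine device traits into a stable pseudo-identifier."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

fp = device_fingerprint({
    "device_type": "phone",
    "os_version": "17.4",
    "screen": "1179x2556",
    "keyboard_language": "en-US",   # one of the settings the article mentions
    "timezone": "America/Chicago",
})
print(fp)  # same inputs -> same ID, no cookie required
```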

With internet tracking, advertisers can curate offers, including offers for credit, based on past internet activity. It would be possible, therefore, to show different credit advertisements to users who visited websites or locations, or used apps, associated with minority communities than to users whose activity suggested an interest in things associated with nonminority communities. This would be one method of digital redlining.
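The sketch below makes that mechanism concrete: a facially neutral ad-curation rule, keyed only to browsing history, that nonetheless produces systematically different credit offers. The site list, offer terms, and their correlation with neighborhood demographics are entirely hypothetical; the example exists to show why such rules need fair lending review.

```python
# Illustrative only: how tracking-based ad curation can become digital
# redlining. No demographic field appears anywhere in the rule, yet outcomes
# can still differ systematically by neighborhood demographics.

PRIME_OFFER = {"product": "prime card", "apr": 14.99}
SUBPRIME_OFFER = {"product": "subprime card", "apr": 24.99}

# Hypothetical sites that, in this example, happen to correlate with
# majority-minority neighborhoods -- that correlation is the risk.
FLAGGED_SITES = {"local-news.example/southside", "radio-station.example"}

def select_credit_ad(browsing_history: list[str]) -> dict:
    """A naive curation rule keyed only to browsing history."""
    if any(site in FLAGGED_SITES for site in browsing_history):
        return SUBPRIME_OFFER
    return PRIME_OFFER

print(select_credit_ad(["radio-station.example", "weather.example"]))  # subprime
print(select_credit_ad(["weather.example"]))                           # prime
```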

Big Data Risks

Lenders and statisticians using big data in credit-related algorithms could create underwriting and pricing rules that result in less favorable treatment of members of minority communities. “Big data” refers to the large volume of high-velocity, high-variety, unstructured and multi-structured data that businesses collect. Big data and powerful analytic tools can be combined to create information and find patterns, anomalies, or previously unknown structures within complex data sets. This offers potential for both good, such as targeting appropriate, fair, and responsible credit products to previously underserved populations, and bad, such as using demographic data embedded within big data to exclude minority communities from offers of credit.

As the growth of financial technology (FinTech) companies has highlighted the use of big data and alternative data sources in credit marketing, underwriting, and pricing algorithms, some advocacy groups have expressed concerns that such data could be correlated with both individual and neighborhood characteristics, and effectively serve as a proxy for protected class membership in credit modeling. This concern is heightened when data points that do not have a clear connection to creditworthiness are used in marketing credit offers or setting credit terms and conditions. Such uses of big data could lead to digital redlining or other forms of discrimination.
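One practical response is to screen candidate data points for correlation with neighborhood demographics before they enter a model. Below is a minimal sketch of such a proxy screen, assuming applicant records have been joined to census-tract minority population shares; the feature name, data, and 0.3 review threshold are all illustrative.

```python
# Minimal sketch of a proxy screen for an alternative data point, assuming a
# pandas DataFrame with one row per applicant and the tract's minority
# population share joined from public census data.
import pandas as pd

def proxy_screen(df: pd.DataFrame, feature: str,
                 demographic: str = "tract_minority_pct",
                 threshold: float = 0.3) -> bool:
    """Flag a feature whose correlation with neighborhood demographics
    exceeds a review threshold (the 0.3 cutoff is illustrative)."""
    corr = df[feature].corr(df[demographic])
    print(f"{feature}: correlation with {demographic} = {corr:+.2f}")
    return abs(corr) >= threshold

# Example: screen a non-traditional variable before it enters a model.
applicants = pd.DataFrame({
    "mobile_os_is_budget_brand": [1, 0, 1, 1, 0, 0, 1, 0],
    "tract_minority_pct":        [72, 18, 65, 80, 12, 25, 70, 15],
})
if proxy_screen(applicants, "mobile_os_is_budget_brand"):
    print("High correlation: require a documented link to creditworthiness.")
```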

Four Precautionary Steps

To prevent digital redlining, lenders should give careful consideration to several aspects of their marketing, underwriting, and pricing strategies. First, review your marketing plans to ensure that prohibited basis factors, or close proxies for them, are not included in your marketing funnels (also known as offer waterfalls), inclusion or exclusion criteria, or credit models. Lenders should also ensure that their marketing plans cover all neighborhoods in their reasonably expected market area(s). If a lender places ads through third parties, a thorough understanding of the third party’s placement criteria is a must.
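A simple way to start that review is a coverage gap check: compare the census tracts in the reasonably expected market area against the tracts actually reached by marketing. The sketch below assumes both lists can be exported as tract identifiers; the IDs shown are illustrative.

```python
# Sketch of a marketing coverage check, assuming the lender can export the
# census tracts in its market area and the tracts reached by campaigns.
def coverage_gaps(market_area_tracts: set[str],
                  marketed_tracts: set[str]) -> set[str]:
    """Return tracts in the reasonably expected market area that received
    no marketing -- candidates for a redlining-style review."""
    return market_area_tracts - marketed_tracts

gaps = coverage_gaps(
    market_area_tracts={"17031.8101", "17031.8102", "17031.8103"},
    marketed_tracts={"17031.8101", "17031.8103"},
)
print(sorted(gaps))  # ['17031.8102'] -> investigate why this tract was skipped
```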

Next, lenders should review their product offerings by channel, including digital channels. Do products, terms, or conditions differ by channel? If so, the lender should consider the rationale for any differences and the potential for disparate impact.
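A first-pass disparity check might compare average offered terms across channels and flag outliers for deeper review. The sketch below does this for APR; the channel names, figures, and one-point flag threshold are illustrative, and a real analysis would also control for credit risk factors.

```python
# Sketch of a channel-disparity check, assuming per-channel records of
# offered APRs. Channel names and figures are illustrative.
from statistics import mean

offers = {
    "branch":     [13.9, 14.5, 14.2, 13.7],
    "mobile_app": [14.1, 13.8, 14.4, 14.0],
    "social_ads": [18.9, 19.5, 18.2, 19.1],   # outlier channel worth a closer look
}

baseline = mean(apr for aprs in offers.values() for apr in aprs)
for channel, aprs in offers.items():
    gap = mean(aprs) - baseline
    flag = "  <-- review rationale and demographics of channel users" if gap > 1.0 else ""
    print(f"{channel:11s} avg APR {mean(aprs):5.2f} (gap {gap:+.2f}){flag}")
```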

Third, review your websites and mobile applications. Do they have accessibility accommodations in compliance with the Americans with Disabilities Act? If not, you could be excluding disabled users from accessing your banking services. How data-intensive are your mobile applications? Some regulators have expressed concerns that low- and moderate-income consumers may be unable to purchase adequate cellular data to use key applications, including banking applications, through the entire billing cycle.
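Automated tooling can catch some accessibility failures early. The sketch below spot-checks one common WCAG issue, images with no alt attribute; a genuine ADA review requires much more, including full WCAG audits and assistive technology testing.

```python
# Spot-check for one common WCAG failure: img tags with no alt attribute.
# (Decorative images should carry an explicit empty alt, so only a wholly
# missing attribute is flagged here.)
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<unknown>"))

checker = MissingAltChecker()
checker.feed('<img src="rates.png"><img src="logo.png" alt="Bank logo">')
print(checker.missing)  # ['rates.png'] fails the spot-check
```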

Fourth, evaluate your underwriting and pricing algorithms. Strong model validation programs are a must for lenders. Fair lending best practices in model development and validation include thoroughly understanding the model factors and their correlation to protected class membership or neighborhood demographics. Especially for non-traditional or alternative data points, the lender should be able to explain the relationship between the data points in question and creditworthiness. The less common the usage of the data in credit evaluation and pricing, the more likely it is that a regulator or advocate will question its use. Remember that model validation requirements also apply to third-party algorithms.
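At the outcome level, a common screening statistic is the adverse impact ratio, which compares approval rates between groups or between neighborhood types. The sketch below uses the 0.8 benchmark that echoes the EEOC’s “four-fifths” rule of thumb; it is an illustrative screen for escalation, not a legal standard, and the counts shown are hypothetical.

```python
# Sketch of an outcome-level screen on a model: the adverse impact ratio.
# The 0.8 benchmark is a rule of thumb for escalation, not a legal standard.
def adverse_impact_ratio(approved_a: int, total_a: int,
                         approved_b: int, total_b: int) -> float:
    """Ratio of group A's approval rate to group B's."""
    return (approved_a / total_a) / (approved_b / total_b)

air = adverse_impact_ratio(approved_a=130, total_a=240,   # e.g., majority-minority tracts
                           approved_b=430, total_b=600)   # e.g., other tracts
print(f"Adverse impact ratio: {air:.2f}")
if air < 0.8:
    print("Ratio below 0.8: escalate for model factor review.")
```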

The Final Analysis

FinTech offers innovation in delivery channels, credit evaluation, and products, but lenders must be sure that innovation is delivered in a fair and responsible manner. One benefit of FinTech is speed to market, but that speed can create compliance risk if it means that products, applications, or algorithms are placed into production without adequate compliance and model validation reviews. Cutting-edge technologies, such as machine learning and artificial intelligence, may make it difficult for model validation to keep pace with algorithmic changes; for lenders to know which model variant was used for any given client; and for lenders to ensure that the rationale for declining credit maps appropriately to the decline reason that is included on the adverse action notice provided to the applicant.
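One mitigant is rigorous decision logging: record the exact model variant that scored each applicant and derive the adverse action reason directly from the model’s top negative factor, so the notice always maps back to the actual decision logic. The sketch below shows the idea; the model ID, cutoff, and reason map are illustrative.

```python
# Sketch of decision logging that pins the model variant used for each
# decision and maps the top decline driver to adverse action language.
from dataclasses import dataclass
from datetime import datetime, timezone

REASON_MAP = {  # model factor -> adverse action notice language (illustrative)
    "debt_to_income":   "Income insufficient for amount of credit requested",
    "delinquency_hist": "Delinquent past or present credit obligations",
    "credit_depth":     "Limited credit experience",
}

@dataclass
class Decision:
    applicant_id: str
    model_id: str        # exact variant, e.g. "uw-model-v3.2" -- pinned for audit
    score: float
    declined: bool
    top_factor: str      # highest-weight negative contributor
    reason_text: str
    timestamp: str

def log_decision(applicant_id: str, model_id: str, score: float,
                 cutoff: float, top_factor: str) -> Decision:
    declined = score < cutoff
    return Decision(
        applicant_id=applicant_id,
        model_id=model_id,
        score=score,
        declined=declined,
        top_factor=top_factor,
        reason_text=REASON_MAP[top_factor] if declined else "",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

d = log_decision("app-1042", "uw-model-v3.2", score=612, cutoff=640,
                 top_factor="delinquency_hist")
print(d.model_id, "->", d.reason_text)
```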

Finally, regulators treat bank partnerships with FinTech companies just like any other third-party lending relationship for CRA and fair lending purposes. This means failures by a business partner can result in adverse exam findings and CRA downgrades for the banks purchasing the loans.

Such unfortunate outcomes – as well as collateral reputational damage, higher cost of compliance, potential damage to business performance, and other risks – should give banks strong incentive to avoid digital redlining.