
An AI fair lending policy agenda for the federal financial regulators


Algorithms, including artificial intelligence and machine learning models (AI/ML), increasingly dictate many core aspects of everyday life. Whether applying for a job or a loan, renting an apartment, or seeking insurance coverage, AI-powered statistical models decide who will have access to the foundational drivers of opportunity and equality.1

Michael Akinwumi, Chief Tech Equity Officer – National Fair Housing Alliance (Twitter: @datawumi)

John Merrill, Chief Technology Officer – FairPlay AI (Twitter: @jwlmerrill)

Lisa Rice, President and CEO – National Fair Housing Alliance (Twitter: @ItsLisaRice)

Kareem Saleh, Founder & CEO – FairPlay AI (Twitter: @kareemsaleh)

Maureen Yap, Senior Counsel – National Fair Housing Alliance

These models present both great promise and great risk. They can minimize human subjectivity and bias, facilitate more consistent outcomes, increase efficiencies, and generate more accurate decisions. Properly conceived and managed, algorithmic and AI-based systems can be opportunity-expanding. At the same time, a variety of factors—including data limitations, lack of diversity in the technology field, and a long history of systemic inequality in America—mean that algorithmic decisions can perpetuate discrimination against historically underserved groups, such as people of color and women.
In light of the growing adoption of AI/ML, federal regulators—including the Consumer Financial Protection Bureau (CFPB), Federal Trade Commission (FTC), the Department of Housing and Urban Development (HUD), Office of the Comptroller of the Currency (OCC), Board of Governors of the Federal Reserve (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), and National Credit Union Administration (NCUA)—have been evaluating how existing laws, regulations, and guidance should be updated to account for the advent of AI in consumer finance. Earlier this year, some of these regulators issued a request for information on financial institutions’ use of AI and machine learning in areas including fair lending, cybersecurity, risk management, and credit decisions.2
The adoption of responsible AI/ML policies will continue to receive serious attention from regulators. This paper proposes policy and enforcement steps regulators can take to ensure AI/ML is harnessed to advance financial inclusion and fairness. As many other papers have already focused on methods for embracing the benefits of AI, we focus here on providing recommendations to regulators on how to identify and control for the risks in order to build an equitable market.
I. Background
A. AI/ML and consumer finance
For decades, lenders have used models and algorithms to make credit-related decisions, the most obvious examples being credit underwriting and pricing. Today, models are ubiquitous in consumer markets and are constantly being applied in new ways, such as marketing, customer relations, servicing, and default management. Lenders also commonly rely on models and modeled variables provided by third-party vendors.
Recent increases in computing power and exponential growth in available data have spurred the advancement of even more sophisticated statistical techniques. In particular, entities are increasingly using AI/ML, which involves exposing sophisticated algorithms to historical “training” data to discover complex correlations or relationships between variables in a dataset.  The set of discovered relationships—typically referred to as a “model”—is then run against real-world information to predict future outcomes.
In the consumer finance context, AI/ML is similar to traditional forms of statistical analysis in that both are used to identify patterns in historical data and draw inferences about future behavior. What makes AI/ML unique is the ability to analyze much larger amounts of data and discover complex relationships between numerous data points that would normally go undetected by traditional statistical analysis. AI/ML tools are also capable of adapting to new information—or “learning”—without human intervention. These tools are becoming increasingly popular in both the private and public sectors. As two United States senators recently put it, “algorithms are increasingly embedded into every aspect of modern society.”3
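To make the training-then-scoring pattern concrete, the sketch below fits a toy underwriting model on invented historical data and then scores a new application. The library choice, column names, and values are illustrative assumptions, not a description of any particular lender's system.

```python
# Illustrative sketch only: a toy "training then scoring" workflow of the kind
# described above. All column names and values are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical historical "training" data: applicant attributes and repayment outcomes.
history = pd.DataFrame({
    "income":         [42_000, 95_000, 31_000, 67_000, 54_000, 120_000, 28_000, 73_000],
    "debt_to_income": [0.42,   0.18,   0.55,   0.30,   0.36,   0.12,    0.60,   0.25],
    "months_on_file": [14,     120,    6,      60,     36,     200,     3,      84],
    "repaid":         [0,      1,      0,      1,      1,      1,       0,      1],
})

# "Training" discovers relationships between the variables and the outcome;
# the resulting set of relationships is the "model."
model = GradientBoostingClassifier(random_state=0).fit(
    history.drop(columns="repaid"), history["repaid"]
)

# The fitted model is then run against new, real-world applications to predict outcomes.
new_applicant = pd.DataFrame({"income": [50_000], "debt_to_income": [0.40], "months_on_file": [24]})
print(model.predict_proba(new_applicant)[:, 1])  # estimated probability of repayment
```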
B. The risks posed by AI/ML in consumer finance
While AI/ML models offer benefits, they also have the potential to perpetuate, amplify, and accelerate historical patterns of discrimination. For centuries, laws and policies enacted to create land, housing, and credit opportunities were race-based, denying critical opportunities to Black, Latino, Asian, and Native American individuals. Despite our founding principles of liberty and justice for all, these policies were developed and implemented in a racially discriminatory manner. Federal laws and policies created residential segregation, the dual credit market, institutionalized redlining, and other structural barriers. Families that received opportunities through prior federal investments in housing are some of America’s most economically secure citizens. For them, the nation’s housing policies served as a foundation of their financial stability and the pathway to future progress. Those who did not benefit from equitable federal investments in housing continue to be excluded.
Algorithmic systems often have disproportionately negative effects on people and communities of color, particularly with respect to credit, because they reflect the dual credit market that resulted from our country’s long history of discrimination.4 This risk is heightened by the aspects of AI/ML models that make them unique: the ability to use vast amounts of data, the ability to discover complex relationships between seemingly unrelated variables, and the fact that it can be difficult or impossible to understand how these models reach conclusions. Because models are trained on historical data that reflect existing discriminatory patterns or biases, their outputs will reflect and perpetuate those same problems.5


Examples of discriminatory models abound, particularly in the finance and housing space. In the housing context, tenant screening algorithms offered by consumer reporting agencies have had serious discriminatory effects.6 Credit scoring systems have been found to discriminate against people of color.7 Recent research has raised concerns about the connection between the disproportionate denial of home loans to Black and Latino borrowers and Fannie Mae and Freddie Mac’s use of automated underwriting systems and the Classic FICO credit score model.8
These examples are not surprising because the financial industry has for centuries excluded people and communities from mainstream, affordable credit based on race and national origin.9 There has never been a time when people of color have had full and fair access to mainstream financial services. This is in part due to the separate and unequal financial services landscape, in which mainstream creditors are concentrated in predominantly white communities and non-traditional, higher-cost lenders, such as payday lenders, check cashers, and title lenders, are hyper-concentrated in predominantly Black and Latino communities.10
Communities of color have been presented with unnecessarily limited choices in lending products, and many of the products that have been made available to these communities have been designed to fail those borrowers, resulting in devastating defaults.11 For example, borrowers of color with high credit scores have been steered into subprime mortgages, even when they qualified for prime credit.12 Models trained on this historic data will reflect and perpetuate the discriminatory steering that led to disproportionate defaults by borrowers of color.13
Biased feedback loops can also drive unfair outcomes by amplifying discriminatory information within the AI/ML system. For example, a consumer who lives in a segregated community that is also a credit desert might access credit from a payday lender because that is the only creditor in her community. However, even when the consumer pays off the debt on time, her positive payments will not be reported to a credit repository, and she loses out on any boost she might have received from having a history of timely payments. With a lower credit score, she will become the target of finance lenders who peddle credit offers to her.14 When she accepts an offer from the finance lender, her credit score is further dinged because of the type of credit she accessed. Thus, living in a credit desert prompts accessing credit from one fringe lender that creates biased feedback that attracts more fringe lenders, resulting in a lowered credit score and further barriers to accessing credit in the financial mainstream.
In all these ways and more, models can have a serious discriminatory impact. As the use and sophistication of models increases, so does the risk of discrimination.
C. The applicable legal framework
In the consumer finance context, the potential for algorithms and AI to discriminate implicates two main statutes: the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act. ECOA prohibits creditors from discriminating in any aspect of a credit transaction on the basis of race, color, religion, national origin, sex, marital status, age, receipt of income from any public assistance program, or because a person has exercised legal rights under the ECOA.15  The Fair Housing Act prohibits discrimination in the sale or rental of housing, as well as mortgage discrimination, on the basis of race, color, religion, sex, handicap, familial status, or national origin.16
ECOA and the Fair Housing Act both ban two types of discrimination: “disparate treatment” and “disparate impact.”  Disparate treatment is the act of intentionally treating someone differently on a prohibited basis (e.g., because of their race, sex, religion, etc.). With models, disparate treatment can occur at the input or design stage, for example by incorporating a prohibited basis (such as race or sex) or a close proxy for a prohibited basis as a factor in a model. Unlike disparate treatment, disparate impact does not require intent to discriminate.  Disparate impact occurs when a facially neutral policy has a disproportionately adverse effect on a prohibited basis, and the policy either is not necessary to advance a legitimate business interest or that interest could be achieved in a less discriminatory way.17
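As a concrete illustration of how a disparate impact screen differs from a disparate treatment review, the sketch below compares approval rates across two groups and computes an adverse impact ratio. The figures are invented, and the 80% cutoff shown is a rule-of-thumb borrowed from employment guidance, not a threshold set by ECOA or the Fair Housing Act.

```python
# Minimal sketch of one common disparate-impact screen: comparing approval rates
# across groups. The 80% ("four-fifths") flag is a heuristic, not a legal standard,
# and all counts below are hypothetical.
approvals = {"group_a": {"applied": 1000, "approved": 620},
             "group_b": {"applied": 800,  "approved": 392}}

rate_a = approvals["group_a"]["approved"] / approvals["group_a"]["applied"]  # 0.62
rate_b = approvals["group_b"]["approved"] / approvals["group_b"]["applied"]  # 0.49

adverse_impact_ratio = rate_b / rate_a  # ≈ 0.79
print(f"approval rates: {rate_a:.2f} vs {rate_b:.2f}, AIR = {adverse_impact_ratio:.2f}")
if adverse_impact_ratio < 0.80:
    print("flag for further review: the disparity may warrant a search for less discriminatory alternatives")
```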
II. Recommendations for mitigating AI/ML Risks
In some respects, the U.S. federal financial regulators are behind in advancing non-discriminatory and equitable technology for financial services.18 Moreover, the propensity of AI decision-making to automate and exacerbate historical prejudice and disadvantage, together with its imprimatur of truth and its ever-expanding use for life-altering decisions, makes discriminatory AI one of the defining civil rights issues of our time. Acting now to minimize harm from existing technologies and taking the necessary steps to ensure all AI systems generate non-discriminatory and equitable outcomes will create a stronger and more just economy.
The transition from incumbent models to AI-based systems presents an important opportunity to address what is wrong in the status quo—baked-in disparate impact and a limited view of the recourse for consumers who are harmed by current practices—and to rethink appropriate guardrails to promote a safe, fair, and inclusive financial sector. The federal financial regulators have an opportunity to rethink comprehensively how they regulate key decisions that determine who has access to financial services and on what terms. It is critically important for regulators to use all the tools at their disposal to ensure that institutions do not use AI-based systems in ways that reproduce historical discrimination and injustice.
A. Set clear expectations for best practices in fair lending testing, including a rigorous search for less discriminatory alternatives
Existing civil rights laws and policies provide a framework for financial institutions to analyze fair lending risk in AI/ML and for regulators to engage in supervisory or enforcement actions, where appropriate. However, because of the ever-expanding role of AI/ML in consumer finance and because using AI/ML and other advanced algorithms to make credit decisions is high-risk, additional guidance is needed. Regulatory guidance that is tailored to model development and testing would be an important step towards mitigating the fair lending risks posed by AI/ML.
Below we propose several measures that would mitigate those risks.
1. Set clear and robust regulatory expectations regarding fair lending testing to ensure AI models are non-discriminatory and equitable 
Federal financial regulators can be more effective in ensuring compliance with fair lending laws by setting clear and robust regulatory expectations regarding fair lending testing to ensure AI models are non-discriminatory and equitable. At this time, for many lenders, the model development process simply attempts to ensure fairness by (1) removing protected class characteristics and (2) removing variables that could serve as proxies for protected class membership. This type of review is only a minimum baseline for ensuring fair lending compliance, but even this review is not uniform across market players. Consumer finance now encompasses a variety of non-bank market players—such as data providers, third-party modelers, and financial technology firms (fintechs)—that lack a history of supervision and compliance management. They may be less familiar with the full scope of their fair lending obligations and may lack the controls to manage the risk. At a minimum, the federal financial regulators should ensure that all entities are excluding protected class characteristics and proxies as model inputs.19
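One way such a proxy review might be operationalized is sketched below: testing how well each candidate model input, on its own, predicts protected class membership. The data, variable names, and the AUC flag threshold are hypothetical assumptions for illustration.

```python
# Hypothetical proxy screen: how well does each candidate input, by itself, predict
# protected class membership? High separability suggests a potential proxy.
# Data, variable names, and the 0.70 AUC flag threshold are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
protected = rng.integers(0, 2, n)  # 1 = estimated or self-reported protected class member
candidates = {
    "zip_median_income": 40_000 + 25_000 * (1 - protected) + rng.normal(0, 8_000, n),
    "months_on_file":    rng.poisson(48, n).astype(float),
}

for name, feature in candidates.items():
    clf = LogisticRegression(max_iter=1000).fit(feature.reshape(-1, 1), protected)
    auc = roc_auc_score(protected, clf.predict_proba(feature.reshape(-1, 1))[:, 1])
    flag = "potential proxy - investigate" if auc > 0.70 else "low proxy risk"
    print(f"{name}: AUC vs. protected class = {auc:.2f} ({flag})")
```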
Removing these variables, however, is not sufficient to eliminate discrimination and comply with fair lending laws. As explained, algorithmic decisioning systems can also drive disparate impact, which can (and does) occur even absent using protected class or proxy variables. Guidance should set the expectation that high-risk models—i.e., models that can have a significant impact on the consumer, such as models associated with credit decisions—will be evaluated and tested for disparate impact on a prohibited basis at each stage of the model development cycle.
Despite the need for greater certainty, regulators have not clarified or updated fair lending examination procedures and testing methodologies in several years. As a result, many financial institutions using AI/ML models are uncertain about what methodologies they should use to assess their models and what metrics their models are expected to meet. Regulators can ensure more consistent compliance by explaining the metrics and methodologies they will use to evaluate an AI/ML model’s compliance with fair lending laws.
2. Clarify that the federal financial regulators will conduct a rigorous search for less discriminatory alternatives as part of fair lending examinations, and set expectations that lenders should do the same 
The touchstone of disparate impact law has always been that an entity must adopt an available, less discriminatory alternative (LDA) to a practice that has discriminatory effect, so long as the alternative can satisfy the entity’s legitimate needs. Consistent with this central requirement, responsible financial institutions routinely search for and adopt LDAs when fair lending testing reveals a disparate impact on a prohibited basis. But not all do. In the absence of a robust fair lending compliance framework, the institutions that fail to search for and adopt LDAs will unnecessarily perpetuate discrimination and structural inequality. Private enforcement against these institutions is difficult because outside parties lack the resources and/or transparency to police all models across all lenders.
Given private enforcement challenges, consistent and widespread adoption of LDAs can only happen if the federal financial regulators conduct a rigorous search for LDAs and expect the lenders to do the same as part of a robust compliance management system. Accordingly, regulators should take the following steps to ensure that all financial institutions are complying with this central tenet of disparate impact law:
a. Inform financial institutions that regulators will conduct a rigorous search for LDAs during fair lending examinations so that lenders also feel compelled to search for LDAs to mitigate their legal risk. Also inform financial institutions how regulators will search for LDAs, so that lenders can mirror this process in their own self-assessments.
b. Inform financial institutions that they are expected to conduct a rigorous LDA search as part of a robust compliance management system, and to advance the policy goals of furthering financial inclusion and racial equity.
c. Remind lenders that self-identification and prompt corrective action will receive favorable consideration under the Uniform Interagency Consumer Compliance Rating System20 and the CFPB’s Bulletin on Responsible Business Conduct.21 This would send a signal that self-identifying and correcting likely fair lending violations will be viewed favorably during supervisory and enforcement matters.
The utility of disparate impact and the LDA requirement as a tool for ensuring equal access to credit lies not only in enforcement against existing or past violations but in shaping the ongoing processes by which lenders create and maintain the policies and models they use for credit underwriting and pricing. Taking the foregoing steps would help ensure that innovation increases access to credit without unlawful discrimination.
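To illustrate the kind of search described here in its simplest form, the sketch below refits a toy model with each candidate variable dropped in turn and compares predictive performance against an approval-rate disparity metric. Real LDA searches are considerably broader (alternative features, thresholds, weights, and model forms), and everything in the example is hypothetical.

```python
# Simplified sketch of a less-discriminatory-alternative (LDA) search: drop one
# candidate variable at a time, refit, and compare model performance (AUC) against
# the approval-rate disparity (adverse impact ratio). Data, variable names, and the
# decision threshold are hypothetical; real searches explore far more alternatives.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_and_evaluate(X, y, protected, threshold=0.5):
    model = LogisticRegression(max_iter=1000).fit(X, y)
    scores = model.predict_proba(X)[:, 1]
    approved = scores >= threshold
    air = approved[protected == 1].mean() / approved[protected == 0].mean()
    return roc_auc_score(y, scores), air

rng = np.random.default_rng(1)
n = 4_000
protected = rng.integers(0, 2, n)
X = pd.DataFrame({
    "income_thousands": rng.normal(60 - 8 * protected, 15, n),
    "debt_to_income":   rng.uniform(0.05, 0.60, n),
    "months_on_file":   rng.poisson(48, n),
})
repaid = (rng.random(n) < 1 / (1 + np.exp(-(X["income_thousands"] / 30 - 3 * X["debt_to_income"])))).astype(int)

auc, air = fit_and_evaluate(X, repaid, protected)
print(f"baseline                AUC={auc:.3f}  adverse impact ratio={air:.2f}")
for col in X.columns:
    auc, air = fit_and_evaluate(X.drop(columns=col), repaid, protected)
    print(f"drop {col:18s} AUC={auc:.3f}  adverse impact ratio={air:.2f}")
# A variant with materially less disparity and comparable performance is a candidate LDA.
```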
3. Broaden Model Risk Management Guidance to incorporate fair lending risk
For years, financial regulators like the OCC and Federal Reserve have articulated Model Risk Management (“MRM”) Guidance, which is principally concerned with mitigating the financial safety and soundness risks that arise from issues of model design, construction, and quality.22 The MRM Guidance does not account for, or articulate principles for guarding against, the risk that models cause or perpetuate discrimination. Broadening the scope of the MRM Guidance would ensure institutions are guarding against discrimination risks throughout the model development and use process. In particular, regulators should clearly define “model risk” to include the risk of discriminatory or inequitable outcomes for consumers rather than just the risk of financial loss to a financial institution.
Effective model risk management practices would aid compliance with fair lending laws in several ways. First, model risk management practices can facilitate variable reviews by ensuring institutions understand the quality of data used and can identify potential issues, such as datasets that are over- or under-representative for certain populations. Second, model risk management practices are essential to ensuring that models, and variables used within models, meet a legitimate business purpose by establishing that models meet performance standards to achieve the goals for which they were developed. Third, model risk management practices establish a routine cadence for reviewing model performance. Fair lending reviews should, at a minimum, occur at the same periodic intervals to ensure that models remain effective and are not causing new disparities because of, for example, demographic changes in applicant and borrower populations.
To provide one example of how revising the MRM Guidance would further fair lending objectives, the MRM Guidance instructs that data and information used in a model should be representative of a bank’s portfolio and market conditions.23 As conceived of in the MRM Guidance, the risk associated with unrepresentative data is narrowly limited to issues of financial loss. It does not include the very real risk that unrepresentative data could produce discriminatory outcomes. Regulators should clarify that data should be evaluated to ensure that it is representative of protected classes. Enhancing data representativeness would mitigate the risk of demographic skews in training data being reproduced in model outcomes and causing financial exclusion of certain groups.
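A minimal sketch of what such a representativeness check might look like appears below: comparing each group's share of the training data against a benchmark population. The counts, benchmark shares, and the 20 percent relative-gap flag are placeholders for illustration.

```python
# Minimal sketch of a data representativeness check: compare each group's share of
# the training data with a benchmark population (e.g., the lender's market area).
# Counts, benchmark shares, and the 20% relative-gap flag are hypothetical.
training_counts = {"group_a": 7_400, "group_b": 1_100, "group_c": 1_500}
benchmark_share = {"group_a": 0.62, "group_b": 0.19, "group_c": 0.19}

total = sum(training_counts.values())
for group, count in training_counts.items():
    share = count / total
    gap = (share - benchmark_share[group]) / benchmark_share[group]
    status = "UNDER-REPRESENTED" if gap < -0.20 else ("over-represented" if gap > 0.20 else "ok")
    print(f"{group}: training share {share:.2%} vs benchmark {benchmark_share[group]:.2%} ({status})")
```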
One way to enhance data representativeness for protected classes would be to encourage lenders to build models using data from Minority Depository Institutions (MDIs) and Community Development Financial Institutions (CDFIs), which have a history of successfully serving minority and other underserved communities; adding their data to a training dataset would make the dataset more representative. Unfortunately, many MDIs and CDFIs have struggled to report data to consumer reporting agencies in part due to minimum reporting requirements that are difficult for them to satisfy. Regulators should work with both consumer reporting agencies and institutions like MDIs and CDFIs to identify and overcome obstacles to the incorporation of this type of data in mainstream models.
4. Provide guidance on evaluating third-party scores and models
Financial institutions routinely rely on third-party credit scores and models to make major financial decisions. These scores and models often incorporate AI/ML methods. Third-party credit scores and other third-party models can drive discrimination, and there is no basis for immunizing them from fair lending laws. Accordingly, regulators should make clear that fair lending expectations and mitigation measures apply as much to third-party credit scores and models as they do to institutions’ own models.
More specifically, regulators should clarify that, in connection with supervisory examinations, they may conduct rigorous searches for disparate impact and less discriminatory alternatives related to third-party scores and models and expect the lenders to do the same as part of a robust compliance management system. The Federal Reserve Board, FDIC, and OCC recently released the “Proposed Interagency Guidance on Third-Party Relationships: Risk Management,” which states: “When circumstances warrant, the agencies may use their authorities to examine the functions or operations performed by a third party on the banking organization’s behalf. Such examinations may evaluate…the third party’s ability to…comply with applicable laws and regulations, including those related to consumer protection (including with respect to fair lending and unfair or deceptive acts or practices) ….”24  While this guidance is helpful, the regulators can be more effective in ensuring compliance by setting clear, specific, and robust regulatory expectations regarding fair lending testing for third-party scores and models. For example, regulators should clarify that protected class and proxy information should be removed, that credit scores and third-party models should be tested for disparate impact, and that entities are expected to conduct rigorous searches for less discriminatory alternative models as part of a robust compliance management program.25
5. Provide guidance clarifying the appropriate use of AI/ML during purported pre-application screens
Concerns have been raised about the failure to conduct fair lending testing on AI/ML models that are used in purported pre-application screens such as models designed to predict whether a potential customer is attempting to commit fraud. As with underwriting and pricing models, these models raise the risk of discrimination and unnecessary exclusion of applicants on a prohibited basis. Unfortunately, some lenders are using these pre-application screens to artificially limit the applicant pool that is subject to fair lending scrutiny. They do so by excluding from the testing pool those prospective borrowers who were purportedly rejected for so-called “fraud”-based or other reasons rather than credit-related reasons. In some cases, “fraud”26 is even defined as a likelihood that the applicant will not repay the loan—for example, that an applicant may max out a credit line and be unwilling to pay back the debt. This practice can artificially distort the lender’s applicant pool that is subject to fair lending testing and understate denial rates for protected class applicants.
Regulators should clarify that lenders cannot evade civil rights and consumer protection laws by classifying AI/ML models as fraud detection rather than credit models and that any model used to screen out applicants must be subject to the same fair lending monitoring as other models used in the credit process.
B. Provide clear guidance on the use of protected class data to improve credit outcomes
Any disparate impact analysis of credit outcomes requires awareness or estimation of protected class status. It is lawful—and often necessary—for institutions to make protected-class neutral changes to practices (including models) to decrease any outcome disparities observed during fair lending testing. For example, institutions may change decision thresholds or remove or substitute model variables to reduce observed outcome disparities.
Institutions should also actively mitigate bias and discrimination risks during model development. AI/ML researchers are exploring fairness enhancement techniques to be used during model pre-processing and in-processing, and evidence exists that these techniques could significantly improve model fairness. Some of these techniques use protected class data during model training but do not use that information while scoring real-world applications once the model is in production. This raises the question of the ways in which the awareness or use of protected class data during training is permissible under the fair lending laws. If protected class data is being used for a salutary purpose during model training—such as to improve credit outcomes for historically disadvantaged groups—there would seem to be a strong policy rationale for permitting it, but there is no regulatory guidance on this subject. Regulators should provide clear guidance to clarify the permissible use of protected class data at each stage of the model development process in order to encourage developers to seek optimal outcomes whenever possible.
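As one concrete example of the kind of technique described above, the sketch below applies "reweighing," a pre-processing method from the fairness literature: protected class labels are used only to compute training-sample weights, and neither the labels nor the weights are used at scoring time. It is a sketch on invented data, not a statement of what regulators currently permit.

```python
# Sketch of one pre-processing fairness technique, "reweighing": protected class
# labels are used to compute training-sample weights that make group membership and
# the outcome statistically independent in the weighted data. The protected attribute
# and the weights are used only during training; scoring uses neither.
# Illustrative only, on hypothetical data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweighing_weights(group: pd.Series, label: pd.Series) -> pd.Series:
    weights = pd.Series(1.0, index=label.index)
    for g in group.unique():
        for y in label.unique():
            mask = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()  # share if independent
            observed = mask.mean()
            weights[mask] = expected / observed
    return weights

rng = np.random.default_rng(2)
n = 3_000
df = pd.DataFrame({"group": rng.integers(0, 2, n)})
df["income"] = rng.normal(60 - 6 * df["group"], 12, n)
df["repaid"] = (rng.random(n) < 1 / (1 + np.exp(-(df["income"] / 25 - 1.8)))).astype(int)

w = reweighing_weights(df["group"], df["repaid"])
model = LogisticRegression().fit(df[["income"]], df["repaid"], sample_weight=w)

# At scoring time, neither the group label nor the weights are used.
print(model.predict_proba(pd.DataFrame({"income": [55.0]}))[:, 1])
```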
C. Consider improving race and gender imputation methodologies
Fair lending analyses of AI/ML models—as with any fair lending analysis—require some awareness of applicants’ protected class status. In the mortgage context, lenders are permitted to solicit this information, but ECOA and Regulation B prohibit creditors from collecting it from non-mortgage credit applicants. As a result, regulators and industry participants rely on methodologies to estimate the protected class status of non-mortgage credit applicants to test whether their policies and procedures have a disparate impact or result in disparate treatment. The CFPB, for example, uses Bayesian Improved Surname Geocoding (BISG), which is also used by some lenders and other entities.27 BISG can be useful as part of a robust fair lending compliance management system. Using publicly available data on names and geographies, BISG allows agencies and lenders to estimate protected class status, identify disparities in non-mortgage credit on a prohibited basis, and improve the models and other policies that cause them.28
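The Bayesian step at the core of BISG can be illustrated with a stylized calculation: a surname-based prior is combined with geography-based demographics and normalized. The probability values below are invented placeholders; the CFPB's published methodology draws the real inputs from Census surname lists and block-group population data.

```python
# Stylized sketch of the Bayesian step behind BISG: combine P(race | surname) from a
# surname table with P(geography | race) implied by local demographics, then normalize.
# All probability values below are invented placeholders.
def bisg_posterior(p_race_given_surname: dict, p_geo_given_race: dict) -> dict:
    unnormalized = {race: p_race_given_surname[race] * p_geo_given_race[race]
                    for race in p_race_given_surname}
    total = sum(unnormalized.values())
    return {race: value / total for race, value in unnormalized.items()}

# Hypothetical applicant: surname-based prior, and the share of each group's
# national population living in the applicant's census block group.
p_race_given_surname = {"white": 0.05, "black": 0.05, "hispanic": 0.85, "asian": 0.05}
p_geo_given_race = {"white": 0.0002, "black": 0.0001, "hispanic": 0.0009, "asian": 0.0003}

print(bisg_posterior(p_race_given_surname, p_geo_given_race))
```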
Regulators should continue to research ways to further improve protected class status imputation methodologies using additional data sources and more advanced mathematical techniques. Estimating protected class status of non-mortgage credit applicants is only necessary because Regulation B prohibits creditors from collecting such information directly from those applicants.29 The CFPB should consider amending Regulation B to require lenders to collect protected class data as a part of all credit applications, just as they do for mortgage applications.
D. Ensure lenders provide useful adverse action notices
AI/ML explainability for individual decisions is important for generating adverse action reasons in accordance with ECOA and Regulation B.30 Regulation B requires that creditors provide adverse action notices to credit applicants that disclose the principal reasons for denial or adverse action.31 The disclosed reasons must relate to and accurately describe the factors the creditor considered. This requirement is motivated by consumer protection concerns regarding transparency in credit decision making and preventing unlawful discrimination. AI/ML models sometimes have a “black box” quality that makes it difficult to know why a model reached a particular conclusion. Adverse action notices that result from inexplicable AI/ML models are generally not helpful or actionable for the consumer.
Unfortunately, a CFPB blog post regarding the use of AI/ML models when providing adverse action notices seemed to emphasize the “flexibility” of the regulation rather than ensuring that AI providers and users adhere to the letter and spirit of ECOA, which was meant to ensure that consumers could understand the credit denials that impact their lives.32 The complications raised by AI/ML models do not relieve creditors of their obligations to provide reasons that “relate to and accurately describe the factors actually considered or scored by a creditor.”33 Accordingly, the CFPB should make clear that creditors using AI/ML models must be able to generate adverse action notices that reliably produce consistent, specific reasons that consumers can understand and respond to, as appropriate. As the OCC has emphasized, addressing fair lending risks requires an effective explanation or explainability method: regardless of the model type used, “bank management should be able to explain and defend underwriting and modeling decisions.”34
There is little current emphasis in Regulation B on ensuring these notices are consumer-friendly or useful. Creditors treat them as formalities and rarely design them to actually assist consumers.  As a result, adverse action notices often fail to achieve their purpose of informing consumers why they were denied credit and how they can improve the likelihood of being approved for a similar loan in the future. This concern is exacerbated as models and data become more complicated and interactions between variables less intuitive.
The model adverse action notice contained in Regulation B illustrates how adverse action notices often fail to meaningfully assist consumers. For instance, the model notice includes vague reasons, such as “Limited Credit Experience.” Although this could be an accurate statement of a denial reason, it does not guide consumer behavior. An adverse action notice that instead states, for example, “You have limited credit experience; consider using a credit-building product, such as a secured loan, or getting a co-signer,” would provide better guidance to the consumer about how to overcome the denial reason. Similarly, the model notice in Regulation B includes “number of recent inquiries on credit bureau report” as a sample denial reason. This denial reason may not be useful because it does not provide information about directionality. To ensure that adverse action notices are fulfilling their statutory purpose, the CFPB should require lenders to provide the directionality associated with principal reasons and explore requiring lenders to provide notices containing counterfactuals—the changes the consumer could make that would most significantly improve their chances of receiving credit in the future.
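One way a lender might generate a directional, counterfactual-style reason is sketched below: searching for the smallest single-variable change that would move a denied applicant above the approval threshold. The scoring function, feature names, step sizes, and threshold are all hypothetical, and production explainability methods are considerably more sophisticated.

```python
# Minimal sketch of a counterfactual-style adverse action reason: find the smallest
# single-variable change that would move a denied applicant over the approval
# threshold. The stand-in scoring function, feature names, step sizes, and the 0.7
# threshold are all hypothetical.
import numpy as np

def score(applicant: dict) -> float:
    # Stand-in for a fitted model's probability of repayment.
    z = 0.04 * applicant["credit_history_months"] - 3.0 * applicant["debt_to_income"] + 0.5
    return 1 / (1 + np.exp(-z))

def counterfactual_reason(applicant: dict, threshold: float = 0.7) -> str:
    steps = {"credit_history_months": +6, "debt_to_income": -0.05}
    best = None
    for feature, step in steps.items():
        candidate = dict(applicant)
        n_steps = 0
        while score(candidate) < threshold and n_steps < 20:
            candidate[feature] += step
            n_steps += 1
        if score(candidate) >= threshold and (best is None or n_steps < best[1]):
            best = (feature, n_steps, candidate[feature])
    if best is None:
        return "No single-variable change found within the search range."
    feature, _, value = best
    return f"'{feature}' is a principal reason; moving it to roughly {value:.2f} would likely change the outcome."

applicant = {"credit_history_months": 10, "debt_to_income": 0.45}
print(f"score = {score(applicant):.2f}")
print(counterfactual_reason(applicant))
```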
E. Engage in robust supervision and enforcement activities
Regulators should ensure that financial institutions have appropriate compliance management systems that effectively identify and control risks related to AI/ML systems, including the risk of discriminatory or inequitable outcomes for consumers. This approach is consistent with the Uniform Interagency Consumer Compliance Rating System35 and the Model Risk Management Guidance. The compliance management system should comprehensively cover the roles of board and senior management, policies and procedures, training, monitoring, and consumer complaint resolution. The extent and sophistication of the financial institution’s compliance management system should align with the extent, sophistication, and risk associated with the financial institution’s usage of the AI system, including the risk that the AI system could amplify historical patterns of discrimination in financial services.
Where a financial institution’s use of AI indicates weaknesses in its compliance management system or violations of law, the regulators should use all the tools at their disposal to quickly address and prevent consumer harm, including issuing Matters Requiring Attention; entering into a non-public enforcement action, such as a Memorandum of Understanding; referring a pattern or practice of discrimination to the U.S. Department of Justice; or entering into a public enforcement action. The Agencies have already provided clear guidance (e.g., the Uniform Consumer Compliance Rating System) that financial institutions must appropriately identify, monitor, and address compliance risks, and the regulators should not hesitate to act within the scope of their authority. When possible, the regulators should explain to the public the risks that they have observed and the actions taken in order to bolster the public’s trust in robust oversight and provide clear examples to guide the industry.
F. Release additional data and encourage public research
Researchers and advocacy groups have made immense strides in recent years studying discrimination and models, but these efforts are stymied by a lack of publicly available data. At present, the CFPB and the Federal Housing Finance Agency (FHFA) release some loan-level data through the National Survey of Mortgage Originations (NSMO) and Home Mortgage Disclosure Act (HMDA) databases. However, the data released into these databases is either too limited or too narrow for AI/ML techniques truly to discern how current underwriting and pricing practices could be fairer and more inclusive. For example, there are only about 30,000 records in NSMO, and HMDA does not include performance data or credit scores.
Adding more records to the NSMO database and releasing additional fields in the HMDA database (including credit score) would help researchers and advocacy groups better understand the effectiveness of various AI fairness techniques for underwriting and pricing. Regulators also should consider how to expand these databases to include more detailed data about inquiries, applications, and loan performance after origination. To address any privacy concerns, regulators could implement various measures such as only making detailed inquiry and loan-level information (including non-public HMDA data) available to trusted researchers and advocacy groups under special restrictions designed to protect consumers’ privacy rights.
In addition, NSMO and HMDA both are limited to data on mortgage lending. There are no publicly available application-level datasets for other common credit products such as credit cards or auto loans. The absence of datasets for these products precludes researchers and advocacy groups from developing techniques to increase their inclusiveness, including through the use of AI. Lawmakers and regulators should therefore explore the creation of databases that contain key information on non-mortgage credit products. As with mortgages, regulators should evaluate whether inquiry, application, and loan performance data could be made publicly available for these credit products.
Finally, the regulators should encourage and support public research. This support could include funding or issuing research papers, convening conferences involving researchers, advocates, and industry stakeholders, and undertaking other efforts that would advance the state of knowledge on the intersection of AI/ML and discrimination. The regulators should prioritize research that analyzes the efficacy of specific uses of AI in financial services and the impact of AI in financial services for consumers of color and other protected groups.
G. Hire staff with AI and fair lending expertise, ensure diverse teams, and require fair lending training
AI systems are extremely complex, ever-evolving, and increasingly at the center of high-stakes decisions that can impact people and communities of color and other protected groups. The regulators should hire staff with specialized skills and backgrounds in algorithmic systems and fair lending to support rulemaking, supervision, and enforcement efforts that involve lenders who use AI/ML. The use of AI/ML will only continue to increase. Hiring staff with the right skills and experience is necessary now and for the future.
In addition, the regulators should ensure that both regulatory and industry staff working on AI issues reflect the diversity of the nation, including diversity based on race, national origin, and gender. Increasing the diversity of the regulatory and industry staff engaged in AI issues will lead to better outcomes for consumers. Research has shown that diverse teams are more innovative and productive36 and that companies with more diversity are more profitable.37 Moreover, people with diverse backgrounds and experiences bring unique and important perspectives to understanding how data impacts different segments of the market.38 In several instances, it has been people of color who were able to identify potentially discriminatory AI systems.39
Finally, the regulators should ensure that all stakeholders involved in AI/ML—including regulators, financial institutions, and tech companies—receive regular training on fair lending and racial equity principles. Trained professionals are better able to identify and recognize issues that may raise red flags. They are also better able to design AI systems that generate non-discriminatory and equitable outcomes. The more stakeholders in the field who are educated about fair lending and equity issues, the more likely that AI tools will expand opportunities for all consumers. Given the ever-evolving nature of AI, the training should be updated and provided on a periodic basis.
III. Conclusion
Although the use of AI in consumer financial services holds great promise, there are also significant risks, including the risk that AI will perpetuate, amplify, and accelerate historical patterns of discrimination. However, this risk is surmountable. We hope that the policy recommendations described above can provide a roadmap that the federal financial regulators can use to ensure that innovations in AI/ML serve to promote equitable outcomes and uplift the whole of the national financial services market.

Kareem Saleh and John Merrill are CEO and CTO, respectively, of FairPlay, a company that provides tools to assess fair lending compliance and paid advisory services to the National Fair Housing Alliance. Other than the aforementioned, the authors did not receive financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. Other than the aforementioned, none of the authors is currently an officer, director, or board member of any organization with an interest in this article.

Credit constrained firms and government subsidies: evidence from a European Union program


by Eszter Balogh, Adám Banai, Tirupam Goel, Péter Lang, Martin Stancsics, Előd Takáts and Álmos Telegdy

Using rejected subsidy applicants as a control group and bank queries to the credit registry to identify firms that applied for but did not receive a loan, we show that subsidies generate a sizeable incremental impact on the asset growth of constrained firms relative to unconstrained businesses.

Stress testing – Executive Summary


Stress tests are forward-looking exercises that aim to evaluate the impact of severe but plausible adverse scenarios on the resilience of financial firms.

China’s payments u-turn: Government over technology


China has been at the forefront of a technological revolution in payments in both its private and public sectors. China’s tech firms succeeded in replacing the bank-based magnetic stripe card world with a tech-based QR code system. Then the People’s Bank of China (PBOC) launched its central bank digital currency, followed by a series of government actions that appear designed to steer the Chinese system away from these tech firms. What is going on in Chinese payments is a fascinating battle of private sector innovation versus government control and big tech versus big banks, one that puts the usually staid and boring world of payment systems into the spotlight and opens a window onto broader narratives about the future of China and how it is playing the global economic game. It also offers insight into how the Federal Reserve plans to approach digital payments in America.

Aaron Klein, Senior Fellow – Economic Studies (Twitter: @AaronDKlein)

The Chinese payment wars stand in sharp contrast to the standard analysis of the global economic game. In the standard model, the United States is the advanced incumbent economy while China is playing economic catch-up. China is simultaneously modernizing its own domestic system to resemble western economies while at various levels integrating into the broader global financial system. The story in payments begins along this common narrative. The U.S. created and essentially dominates global retail payments through a magnetic stripe card-based interface running through the global banking system. This system has its roots in a series of inventions from roughly 50 years ago in New York, which began as a set of solutions for restaurants and frequent customers who were unable to access cash over the weekend and sought an alternative to the paper-based check payment system. These ‘Diners Cards’ eventually transformed into a series of plastic cards, building a set of payment rails that process more than 130 billion transactions a year in the United States, which is more than 350 million transactions per day. To put that in perspective, the peak number of daily transactions in Bitcoin is estimated at around 400,000.
Magnetic stripe cards came to dominate the world of retail payments in developed economies. At an earlier point in its economic development, China attempted to emulate and graft onto this system, with multiple banks introducing their own magnetic stripe cards; UnionPay is the most prominent example. Founded in 2002, UnionPay rose sharply to over 3.5 billion cards in circulation in just a decade, with volume roughly half of what Visa was processing in the mid-2010s.
The story diverges with the Chinese technology companies WeChat and Alibaba, which appreciated the inherent inefficiencies of the card-based system: the interchange fees, the apparatus of cards and card readers, and the costs borne by merchants. Chinese merchants, particularly small ones, lacked interest in such a costly system. Exploiting these opportunities, the two tech firms created a QR code digital wallet scan-based system that bypassed magnetic stripe debit cards. The new system was faster and more efficient, producing a host of direct and indirect benefits for those two companies as well as for broader society. This innovation allowed China to leapfrog the magnetic stripe card system that dominates much of the western world’s retail payments.
China’s new payment system exploded from inception to dominance in under a decade. With over a billion users on each platform, the power of network incentives has been unleashed. The new payment system has replaced cards and cash at registers, changed how families give gifts, and even changed how beggars ask for money, with QR codes replacing tin cups.
This is a powerful example of Chinese innovation, competition, and adoption. It appears, at least to outside observers, to be highly organic and internally driven, not a product of central planning or committees. For example, the two companies diverged in the origin of their payment systems. WeChat Pay is based on a social media platform (for Americans, think Facebook) and is heavily engaged in person-to-person payments. WeChat Pay first rolled out around the Lunar New Year in 2014 as a service for sending ‘Red Envelopes’ (traditional gifts of cash). Digitizing this exchange was clearly synergistic with WeChat’s person-to-person social network, and the popularity of Red Envelope exchanges seeded many customers’ WeChat Pay accounts with initial funds. In 2014, 16 million digital red packets were sent. The next year, 1 billion packets were sent. By 2016, it was over 8 billion, and in 2017, 46 billion.


Alipay’s origin differs. Alipay is a payment platform developed by the Chinese tech conglomerate Alibaba, with roots in digital commerce (think Amazon), and hence it is more likely to be used for business purposes. Internet commerce requires electronic payment systems, which were integrated with credit and debit cards. The lack of such a system in China incentivized Alibaba to develop Alipay to support its Taobao online shopping platform. With Alipay’s main competitor, UnionPay, having only recently launched and not yet having gained many customers, the payment market was wide open. Alibaba offered incentives for merchants to use Alipay for purchases throughout its platform: feeless purchases for both parties, preferential placement on its digital platforms, and easy integration of payments into business processing. Those differences provide the economic benefits of lower costs and potentially greater transaction volumes that are not widely available in the bifurcated credit/debit card system.
There are potential drawbacks to this integrated model, including the lack of fees to fund services customers want with payments – such as interest-free grace periods on credit – and anti-competitive concerns about integrating business platforms and social networks with payment platforms.
With this technological advance, China had many of the ingredients necessary to challenge the existing retail payments system and seemed poised to leap into the global payments contest, which is in desperate need of an advance from the 50-year-old plastic cards that seem woefully out of place in the digital environment.
However, it appears that China has not chosen to do this, instead making a u-turn and now heading in the other direction. Rather than aggressively expanding the system and opening it to a broader network in the way that the American card-based system did, China has taken a series of measures to slow the tech companies, enhance the government’s role, and possibly bring payments back into a bank-centric system.
China’s government intervened with the creation of a central bank digital currency. This digital yuan uses much of the same infrastructure as the Alipay and WeChat Pay systems: digital wallets, QR codes, scanners, etc. Just this month, PBOC Governor Yi Gang stated a goal of “interoperability with existing payment tools” for the digital yuan.
The digital yuan is currently running in more than 10 regions of China with more than 150 million users. It was first launched in Shenzhen, the home city of Tencent (the company that runs WeChat Pay). It does not take a skilled U.S.-China international diplomat with a keen understanding of history to understand that deciding to roll out the digital yuan in the hometown of the payment giant sends a clear message. If the U.S. government started its own online bookstore/retailer and happened to choose the city of Seattle, the message would be globally clear.
Couple this with the aborted initial public offering of Alibaba’s financial arm Ant and the sweeping set of problems cited by government officials and regulators, and the message is that China is pausing any potential global expansion of the Alipay and WeChat payment systems. To the contrary, what seems to be happening is that rather than exporting Chinese-based digital wallets in hopes of becoming as ubiquitous as the Visa, MasterCard, and American Express networks are currently, there is instead a desire to reorient the internal Chinese system around a central bank digital currency run through digital wallets more directly tied to the Chinese banking system.
Now, it is plausible that this change ultimately sets up a digital yuan using the very similar technological rails of QR codes first piloted by Alipay and WeChat Pay, which would in fact be analogous to history repeating itself. The original American charge card, Diners Club, coordinated between restaurants (merchants) and consumers, not banks. This model ultimately lost the race. MasterCard is itself a consortium of financial institutions with a very different history from Visa, which was born from Bank of America, and from American Express, which began as a closed-loop payment system and today is part of a bank holding company.
Previously, it seemed plausible that a digital wallet from Alipay or linked to the WeChat network could become a global phenomenon, spreading far beyond China in the phones and pockets of billions of people worldwide. That now feels very unlikely. Instead, digital Chinese wallets run through Chinese banks appear to be where China is headed. That model seems an unlikely way to facilitate international commerce throughout Europe, or even Africa, let alone to challenge the United States for domestic market share. Though Alipay and WeChat are accepted in retail stores in the United States, they are almost exclusively used by Chinese individuals, not by Americans.
This raises the question: when China does make technological advances in globally competitive industries such as payments, is China’s ultimate goal to export this technology and create a network for global commerce? Or is it ultimately an internal process where the benefits and costs will be felt by Chinese nationals and control will be maintained by the Chinese government? Gunpowder was invented in China centuries before the formula came to Europe, where it was used very differently.
From an American perspective, there is a bit of a sigh of relief, because China had built a better mousetrap in many respects. It is also a shot in the arm for the Federal Reserve, which has devoted significant resources to considering launching its own central bank digital currency. China was not the only entity pushing the Federal Reserve. Facebook’s original announcement of a digital currency (then called Libra, now known as Diem) was another key moment energizing the Fed to consider alternatives. The Fed’s consideration of a central bank digital currency has been heavily influenced by the payments actions proposed by both China and Facebook. This helps explain how the same Federal Reserve that failed to adopt a real-time payment system in the U.S., despite the European Union, United Kingdom, Japan, Mexico, and many more countries adopting such systems years and even decades earlier, is now devoting significant attention to creating a new central bank digital currency. Whether or not the Fed launches a new digital currency, any launch is years away. In the meantime, low-income consumers still pay billions as a result of the Fed’s failure to modernize its payment system. By my estimate, more than $100 billion has already been taken from consumers as a result of the Fed’s failure to act since the United Kingdom transitioned more than a decade ago. It marks one of the largest policy failures contributing to income inequality and needless inequity in America in my lifetime.
In conclusion, whereas it is currently unclear whether the Federal Reserve will launch a central bank digital currency, it appears that China is committed to the path of a digital yuan. It seems likely that such a move will also favor moving payments more broadly back into its banking system and away from its two technology companies. However, the technological system of QR codes and digital wallets appears likely to remain in China regardless of who operates the system.

The Brookings Institution is financed through the support of a diverse array of foundations, corporations, governments, individuals, as well as an endowment. A list of donors can be found in our annual reports published online here. The findings, interpretations, and conclusions in this report are solely those of its author(s) and are not influenced by any donation.

Losing traction? The real effects of monetary policy when interest rates are low


by Rashad Ahmed, Claudio Borio, Piti Disyatat and Boris Hofmann

Are there limits to how far reductions in interest rates can boost aggregate demand? In particular, as interest rates fall to very low levels, does the effectiveness of monetary policy in boosting the economy wane? We provide evidence consistent with this hypothesis. Based on a panel of 18 advanced countries starting in 1985, we find that monetary transmission to economic activity is substantially weaker when interest rates are low.

Navigating by r*: safe or hazardous?


by Claudio Borio

The concept of the natural rate of interest, or r-star (r*), has risen to prominence in monetary policy following the Great Financial Crisis. No doubt a key reason for the concept’s newfound prominence has been the further decline of real and nominal interest rates to new lows, which has further constrained monetary policy’s room for manoeuvre. This lecture explores the extent to which the concept can be a useful guide to policy. It concludes that, depending on how it is employed, the concept has the potential of leading policy astray and of complicating the task of regaining the needed policy headroom.

Back to the future: intellectual challenges for monetary policy


by Claudio Borio

The central banking community is facing major challenges – economic, intellectual and institutional. A key economic challenge is the need to rebuild room for policy manoeuvre, which has fallen drastically over time. This lecture focuses on the intellectual challenge, ie facts on the ground are increasingly testing the longstanding analytical paradigms on which central banks can rely to inform their policies. It argues that certain deeply held beliefs underpinning those paradigms can complicate the task of regaining policy headroom.