Financial institutions are moving quickly toward artificial intelligence, and many feel they have no choice.

Margins are tighter. Competition is faster. Customers expect instant answers. Whether the lender is a traditional bank, a digital fintech platform, or an embedded finance provider, the pressure is the same: approve applications faster, assess risk more precisely, personalize customer treatment, and reduce manual intervention wherever possible. Artificial intelligence promises all of that. Machine learning models can process far more data than a traditional scorecard, identify patterns a manual reviewer may miss, and automate decisions in seconds that once took hours or days.

From a business standpoint, the appeal is obvious.

From a compliance standpoint, so is the responsibility.

One of the biggest mistakes institutions can make is treating automation as a pass on fair lending scrutiny. Federal regulators have consistently made clear that existing consumer protection laws fully apply to emerging technologies, including artificial intelligence. Lenders remain responsible for ensuring that automated decisions are fair, explainable, and legally defensible.

In plain terms, the computer does not get the institution off the hook.

Automated Does Not Mean Neutral

This is where many leadership teams become overly comfortable.

A lender may say, "We are not using race, gender, or age in the model, so we should be fine."

Unfortunately, fair lending risk does not work that neatly.

Artificial intelligence learns from historical data and from the variables it is programmed to prioritize. If historical lending populations reflect prior disparities, or if the model relies heavily on variables that closely correlate with protected class characteristics, the output can still produce discriminatory results even though no one intentionally designed the model that way.

Think of it this way: if you teach a machine by feeding it ten years of prior lending behavior, you may also be feeding it ten years of embedded human bias, geographic disparities, or inconsistent approval patterns. The machine may simply become a faster, more efficient version of old problems.

For example, a lender may deploy a model that heavily weights ZIP code stability, employment tenure, digital spending patterns, or relationship depth. None of those variables explicitly ask whether an applicant belongs to a protected class. But collectively, they may disproportionately disadvantage minority communities or lower-income populations in ways management never intended.

The model appears neutral on paper. The outcomes may tell a very different story.
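
To make the proxy problem concrete, here is a deliberately simplified sketch of the outcomes testing that tells that story. It assumes a hypothetical applicant dataset containing the model's approval decision and a demographic field retained solely for fair lending analysis, never as a model input; the 80 percent screening threshold is a common analytical heuristic, not a legal standard.

```python
# Minimal sketch: adverse impact ratio ("four-fifths rule") screening.
# The data, column names, and threshold are hypothetical illustrations.
import pandas as pd

# Model decisions joined with demographics held out for testing only
apps = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "group":    ["A", "A", "B", "A", "B", "B", "A", "B", "A", "A"],
})

rates = apps.groupby("group")["approved"].mean()
air = rates.min() / rates.max()  # adverse impact ratio

print(rates)
print(f"Adverse impact ratio: {air:.2f}")
if air < 0.80:  # common screening heuristic, not a legal standard
    print("Flag: approval disparity warrants further review")
```

A model that never sees a protected attribute can still fail this screen, which is precisely why outcomes testing, not input inspection alone, is the control that matters.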

AI Is Not Just a Risk. It Can Be the Institution's Strongest Early Warning Tool.

This is where many institutions are missing the larger opportunity.

Years before artificial intelligence became the dominant industry conversation, I was involved in fair lending oversight during a period of heightened mortgage regulatory scrutiny. Through peer comparative analysis, our institution identified materially weaker lending penetration among African American borrowers within a California market when measured against peer lenders. Nothing in written underwriting guidelines suggested intentional exclusion, and no one believed the institution was making overtly discriminatory decisions. Yet the lending outcomes showed a disparity significant enough to draw examiner attention and require immediate corrective action.

The lesson was not simply that disparities can occur.

The lesson was how long institutions can operate without fully recognizing them when they rely primarily on retrospective human review.

At that time, fair lending monitoring depended heavily on periodic analytics, manual comparative review, and downstream examination preparation. We identified the issue, documented it, and launched remediation, which positioned us far better during regulatory review than if we had remained unaware. But earlier pattern recognition would have materially changed how quickly the institution could respond.

This is exactly where artificial intelligence can become part of the solution rather than merely part of the risk.

Used responsibly, advanced analytics can identify geographic penetration gaps, approval disparities, pricing inconsistencies, demographic anomalies, and treatment variances far earlier than traditional monitoring methods often allow. The same technology that can create hidden bias if left unchecked can also provide the foresight institutions historically lacked. It can also help compliance and risk teams operate more efficiently by reducing reliance on heavily manual retrospective review processes.
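
As a simplified illustration of that early warning capability, the sketch below tracks approval-rate gaps by segment across monthly windows and raises a flag when the gap crosses a threshold. The sample data, column names, and 20-point alert threshold are assumptions for illustration; a production monitor would run against actual decision data with statistically validated thresholds.

```python
# Illustrative early-warning monitor: approval-rate gaps by segment over time.
import pandas as pd

decisions = pd.DataFrame({
    "month":    ["2024-01"] * 4 + ["2024-02"] * 4,
    "segment":  ["majority", "majority", "minority", "minority"] * 2,
    "approved": [1, 1, 1, 0, 1, 1, 0, 0],
})

# Approval rate per segment per month, then the gap between segments
monthly = (decisions
           .groupby(["month", "segment"])["approved"]
           .mean()
           .unstack("segment"))
monthly["gap"] = monthly["majority"] - monthly["minority"]

alerts = monthly[monthly["gap"] > 0.20]  # assumed alert threshold
print(monthly)
if not alerts.empty:
    print("Early-warning flag:", list(alerts.index))
```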

Banks should not fear AI. Banks should fear using AI blindly, or failing to use analytics deeply enough to see developing disparities before regulators do.

This Matters Even More for Fintechs

For fintech lenders and digital-first credit platforms, this conversation carries even greater urgency.

Unlike traditional institutions that may still rely on periodic manual reviews and slower policy adjustments, fintechs are often making thousands of rapid automated decisions through embedded underwriting engines, alternative data models, instant approvals, and behavioral segmentation tools. The same speed that gives fintech its competitive advantage can also scale a problematic credit pattern across an entire portfolio far faster than many organizations realize.

At the same time, fintechs possess something legacy institutions historically lacked: the ability to monitor portfolio behavior almost in real time. A well-governed AI environment can allow a fintech to identify approval ratio anomalies, pricing inconsistencies, geographic penetration gaps, or demographic outliers early enough to recalibrate before those patterns become systemic. That is a major strategic advantage.

The fintechs that will distinguish themselves in the next regulatory cycle will not be those that simply automate the fastest. They will be those that can demonstrate they built fairness, explainability, and monitoring into the automation from the beginning.

The U.S. Treasury and federal regulators have each emphasized that AI in financial services presents not only operational opportunity but also significant consumer risk when institutions fail to establish adequate governance, testing, and oversight around these technologies. However the regulatory environment evolves, the underlying legal obligations around explainability and fair treatment remain.

Sponsor Banks Cannot Afford to Ignore the Black Box

For sponsor banks and partner institutions supporting fintech programs, this issue is magnified further.

Regulatory accountability does not disappear simply because the decision engine sits with a third-party platform or embedded lending partner. If the bank cannot understand and defend how a fintech model is making customer-impacting credit decisions, the bank still owns the exposure.

A sponsor bank cannot simply accept, "the fintech vendor built the model," any more than it can accept, "the vendor generated the denial code." If the institution cannot explain the logic, test the outputs, and challenge the variables, it is effectively outsourcing decision velocity while retaining regulatory liability.

That is not a sustainable governance model.

The Explainability Problem Is Where Many Institutions Will Stumble

Under the Equal Credit Opportunity Act and Regulation B, lenders must provide applicants with specific principal reasons for adverse action. The CFPB has been direct on this point: a creditor cannot hide behind the complexity of an algorithm and issue vague denial notices because "the model said so."
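
As a simplified illustration of what defensible reason mapping can look like, the sketch below assumes a hypothetical additive scoring model and ranks each variable's contribution to a denial against an approved-population baseline. The weights, features, and reason descriptions are invented for illustration; actual adverse action reasons must be derived from, and validated against, the institution's real decision logic.

```python
# Hedged sketch: mapping a denial to specific principal reasons under an
# assumed additive score (logistic-regression style). All values hypothetical.
weights = {"credit_utilization": -2.0, "months_on_job": 0.05,
           "recent_inquiries": -0.8, "income_to_debt": 1.5}
reasons = {"credit_utilization": "Proportion of balances to credit limits",
           "months_on_job": "Length of employment",
           "recent_inquiries": "Number of recent credit inquiries",
           "income_to_debt": "Income insufficient relative to obligations"}

applicant = {"credit_utilization": 0.92, "months_on_job": 8,
             "recent_inquiries": 5, "income_to_debt": 0.9}
baseline  = {"credit_utilization": 0.30, "months_on_job": 60,
             "recent_inquiries": 1, "income_to_debt": 2.0}

# Each variable's contribution relative to the approved-population baseline
contrib = {f: weights[f] * (applicant[f] - baseline[f]) for f in weights}
principal = sorted(contrib, key=contrib.get)[:3]  # most adverse first

print("Principal reasons for adverse action:")
for f in principal:
    print(f" - {reasons[f]} (contribution {contrib[f]:+.2f})")
```

The point is not the arithmetic. It is that every reason on the notice traces back to an identifiable, testable piece of the decision logic.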

This matters more than many realize.

Institutions often become enamored with vendor technology because approval rates improve or the process becomes faster. But when asked straightforward questions, management struggles:

Why exactly was this applicant denied?

Why was this borrower priced differently?

Why was this customer routed into a higher-friction review path?

If the honest answer is, "We are relying on the vendor's proprietary engine," leadership should be concerned. Because if management cannot explain a lending decision in clear business terms, management does not fully control the compliance risk attached to that decision.

And examiners know to ask that question.

The Institutions That Will Win Will Govern Smarter, Not Slower

Avoiding artificial intelligence is not the answer. That would be commercially short-sighted and increasingly impractical. AI is here to stay. The more productive question is whether institutions are deploying it with the same sophistication they are using to market it.

Do they know where AI is making customer-impacting decisions? Can they test outputs across protected classes? Can they map adverse action reasons back to actual decision logic? Can they challenge vendors and require transparency? Can they identify developing disparities before those disparities become regulatory findings?

Those are no longer optional governance questions. They are becoming competitive differentiators.

Artificial intelligence does not remove fair lending risk. It can magnify it, accelerate it, and bury it deep enough inside a model that management may not see the problem until someone else does.

At the same time, if used thoughtfully, it can also become one of the strongest predictive monitoring tools institutions have ever had.

The institutions that will navigate this next phase successfully are not the ones adopting AI the fastest. They are the ones governing it the smartest.

Alison Stokes, CRCM

Alison Stokes, CRCM is a senior compliance executive with 15+ years leading regulatory governance, supervisory examination readiness, fair lending oversight, and enterprise compliance modernization across banking, fintech, and data-driven financial services.

alisonstokes.com

References

  1. CFPB Comment on AI in Financial Services
  2. CFPB Guidance on AI Credit Denials
  3. U.S. Treasury AI in Financial Services Request for Information
  4. Treasury Warning on Significant AI Risks in Finance (Reuters)
  5. CFPB Supervisory Highlights on Advanced Credit Scoring Models