It’s time to consider a framework where firms start small, explain results, and scale gradually.
At our recent FTF News webinar, “Transforming KYC & AML: A Smarter Era for Asset Managers,” much of the conversation revolved around the promise of artificial intelligence (A.I.) to enhance speed, accuracy, and insight within compliance functions. As the discussion evolved, a new theme surfaced — one rooted not in technological spectacle, but in human trust, simplicity, and incremental change.
During the event, we had the chance to explore this theme further with Chandrakant Maheshwari, a senior model validator at Flagstar Bank, who served as a panelist for the webinar.
Simply put, Maheshwari says that financial services firms looking to apply A.I.-based technologies to complex areas of regulatory compliance must start small. Firms should begin the journey with low-risk, high-volume tasks that favor structure over extensive analysis. This is how firms build trust in the move to A.I. and make discoveries that can be scaled up later.
Our conversation focused on how institutions can responsibly adopt A.I. in the fight against financial crime and for regulatory compliance by pursuing modest, explainable wins, rather than leaping straight into high-risk automation.
A Compliance Conundrum: Speed vs. Trust
The excitement surrounding generative A.I. — particularly large language models (LLMs) — has been hard to ignore. In many industries, these tools are already drafting reports, generating content, and assisting with decision-making. But in compliance, adoption has been far more cautious.
That caution is not unwarranted. As explained during the discussion, the stakes in compliance are uniquely high. Decisions must be explainable, outputs traceable, and systems defensible under regulatory scrutiny. “Compliance doesn’t need more magic,” Maheshwari says. “It needs more mechanisms.”
In other words, success in this space isn’t about how advanced a tool is — it’s about how reliably it supports existing processes without compromising control.
Trust Before Scale: A Pragmatic Approach
Maheshwari describes a guiding principle that many financial institutions would do well to embrace: trust before scale. Rather than jumping into complex areas like Suspicious Activity Report (SAR) generation or alert scoring, teams should begin with low-risk, high-volume tasks that require no judgment — only structure. These “checklist-verifiable” use cases are ideal for early A.I. deployment because they’re easy to audit and validate. They include:
- Extracting structured data (like addresses) from messy fields;
- Tagging and categorizing memos, emails, or reports;
- Segmenting dense regulatory documents for easier navigation and reference.
Each of these tasks may seem minor in isolation, but collectively they represent a major efficiency boost. More importantly, they offer safe, auditable opportunities for teams to build familiarity with A.I. tools.
Keep It Simple, Keep It Smart
In Maheshwari’s discussion, the “Keep It Simple, Keep It Smart” mantra underscored that simplicity doesn’t mean settling for shallow impact. In fact, some of the most powerful use cases are those that relieve the burden of manual, repetitive work — freeing analysts to focus on deeper insights and strategy.
He likens LLMs to junior analysts: helpful, efficient, and reliable when trained on defined tasks, but not ready to make high-stakes decisions.
In that spirit, the following use cases stood out:
- Address Parsing: Transforming unstructured address data into standardized formats;
- SAR Narrative Structuring: Extracting key elements (who, what, when, where, why, how) from SAR narratives to improve audit readiness; and
- Regulatory Document Tagging: Making dense handbooks and advisories searchable and segmented for easier access.
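For the SAR narrative case, one way to keep the model checklist-verifiable is to demand structured JSON and validate it before an analyst ever sees it. In this hedged sketch, `llm_call` is a hypothetical callable wrapping whatever offline model a firm runs; nothing here files a SAR on its own:

```python
import json

# The six elements examiners expect a SAR narrative to cover.
SAR_ELEMENTS = ("who", "what", "when", "where", "why", "how")

PROMPT_TEMPLATE = (
    "Extract the following elements from the SAR narrative below and "
    "return them as a JSON object with keys {keys}. Use null for any "
    "element the narrative does not state.\n\nNarrative:\n{narrative}"
)

def structure_narrative(narrative: str, llm_call) -> dict:
    """Ask a locally hosted LLM to tag SAR narrative elements.

    The response is checklist-verified here: every expected key must
    be present, and the result is still reviewed by a human analyst.
    """
    prompt = PROMPT_TEMPLATE.format(keys=", ".join(SAR_ELEMENTS),
                                    narrative=narrative)
    parsed = json.loads(llm_call(prompt))
    missing = [k for k in SAR_ELEMENTS if k not in parsed]
    if missing:
        raise ValueError(f"model response missing elements: {missing}")
    return {k: parsed[k] for k in SAR_ELEMENTS}
```

Rejecting incomplete responses outright, rather than patching them, is what makes the output easy to audit: the tool either produced the full checklist or it produced nothing.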
In all cases, the model supports — not replaces — human decision-making.
Focus on the Oxygen, Not the Jugular
One of Maheshwari’s most memorable insights is his metaphor about A.I. deployment: “The jugular can wait. Start where it breathes.”
Too many organizations rush to automate the most sensitive aspects of compliance. But these “jugular” use cases often involve high risk, high scrutiny, and low trust. Instead, teams should focus first on operational “oxygen” — the routine tasks that keep teams functioning day to day.
This philosophy is best applied by using offline, local LLMs to structure compliance memos, parse regulatory PDFs, and clean up field-level data, all without exposing sensitive information or removing human oversight. Every output is logged, reviewed, and version-controlled, ensuring transparency and traceability. The result is greater efficiency without compromising governance.
Governance as the Enabler
Strong governance is not a bottleneck to innovation — it’s the foundation that makes responsible innovation possible.
Following is a clear structure for what effective governance looks like in the age of A.I.:
- Every model deployment should come with a “Model Card” outlining its purpose, inputs, and limits;
- Prompts and outputs must be version-controlled, just like code;
- An explainability log should be maintained to trace decisions and outputs;
- The Three Lines of Defense — risk, audit, and frontline users — must all play an active role.
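As an illustration only (the field names are assumptions, not a regulatory standard), the first three governance items can be sketched as small data structures: a model card, a content-hashed prompt version, and an append-only explainability log with a named human reviewer:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass(frozen=True)
class ModelCard:
    """Minimal 'Model Card' record; fields mirror purpose/inputs/limits."""
    name: str
    purpose: str
    inputs: str
    limits: str
    prompt_version: str  # pins the exact prompt text in use

def prompt_version(prompt_text: str) -> str:
    """Version prompts like code: a content hash pins the exact text."""
    return hashlib.sha256(prompt_text.encode()).hexdigest()[:12]

@dataclass
class ExplainabilityLog:
    """Append-only trace of every prompt/output pair for audit."""
    entries: list = field(default_factory=list)

    def record(self, card: ModelCard, prompt: str,
               output: str, reviewer: str) -> None:
        self.entries.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "model": card.name,
            "prompt_version": card.prompt_version,
            "prompt": prompt,
            "output": output,
            "reviewer": reviewer,  # the human in the loop
        })
```

In practice the log would live in durable storage, but even this toy version shows the shape of the answer when a regulator asks "why did the tool say that?": a timestamped prompt, a pinned prompt version, and a named reviewer.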
This disciplined approach ensures that when regulators come knocking, institutions can point to process, not guesswork.
From Resistance to Readiness
What’s standing in the way of A.I. adoption in compliance is not technology — it’s trust. Many teams are not rejecting A.I. outright, but they are wary of losing visibility and control. The proposed framework provides a path forward: start small, explain results, and scale gradually.
By treating A.I. as an enabler of better work — not a threat to human roles — compliance teams can transition from resistance to readiness.
The key is to ask the right questions:
- Did this tool help you work better?
- Did it save time?
- Was the output reliable and verifiable?
That’s how trust is built — not through promises, but through repeatable, reviewable wins.
Final Thought: One Safe Step at a Time
In a domain as risk-sensitive as compliance, change must be both deliberate and defensible. But that doesn’t mean A.I. has to wait forever. With the right guardrails and a focus on low-risk use cases, compliance teams can begin benefiting from A.I. today.
“Leadership isn’t about adopting LLMs quickly — it’s about adopting them credibly,” Maheshwari says. “The jugular can wait. Let’s focus on what helps teams breathe — so they can move forward, one explainable step at a time.”