AI pilots are everywhere — but only those built on strong data and human oversight will deliver lasting change.
Several reports this year highlighted a growing AI adoption divide: GenAI use cases are proliferating, but their actual impact on business transformation lags behind. The reports attributed this gap to the inability of LLM tools to learn from feedback, the misallocation of AI budgets (with nearly 50 percent directed toward initiatives yielding lower ROI) and inefficient organizational structures around AI implementation.
Financial institutions are well-positioned to benefit from AI. They operate in data-rich environments where workflows are often highly procedural and governed by well-defined processes. They still rely on heavily customized in-house systems that are costly to maintain. Does the sector see the same AI adoption divide?
At the Murex Capital Markets Technology Forum (CMTF) held in Paris this summer, business and IT decision-makers, along with C-level representatives from over 35 EMEA institutions, weighed in.
A flexible, exploratory approach is the preferred strategy.
Rather than committing to fixed AI roadmaps, the financial institutions at the roundtable are adopting a more flexible, exploratory approach.
“Having an AI roadmap is setting up for failure, because things are moving very fast. Instead, we’re providing our workforce with a secure environment where they can explore and upskill on AI,” a commodity trading firm CIO said.
Similarly, other executives said they were moving away from rigid roadmaps, opting instead for user-driven experimentation within an AI-focused innovation lab.
Prototyping freedom is gaining traction as well. Participants from a European financial entity said they were allowing users to build and test their own AI tools, provided they follow testing and maintenance protocols.
“We created the EUCA (End User Computing Application) system to unblock users who want to move faster to market. It gives them flexibility and removes the excuse that things are too slow,” explained a COO.
Institutions aim to lay data foundations for AI.
While approaches to AI adoption varied, there was strong consensus that data strategy must be structured and deliberate.
“The roadmap has to be for data—building data lakes and normalizing data,” a leading Wall Street bank participant emphasized.
Participating institutions are investing heavily in making high-quality data available for AI and implementing centralized data lakes and golden sources of truth.
Looking ahead, LLM tools will need to generate new data points from their exchanges with humans and self-train on them, improving over time much as humans learn on the job. LLMs aren't there yet.
Pilots are ongoing.
Below are some use cases shared by the roundtable participants.
- KYC (Know Your Customer). All participating institutions use LLMs to process KYC documents.
- Enterprise chatbots. These were among the earliest use cases. At an African bank, business users rely on chatbots to request custom reports in natural language, which has significantly reduced the workload of the teams in charge.
- Quote generation and trade booking automation. LLM agents process client emails requesting quotes and automate quote generation, or generate a trade ticket from chat or email information and book it in the trading platform through an API call.
- Hedging and predictive modeling. This remains at an early stage. “We’re exploring how AI can analyze market liquidity, trade decay, and support hedging strategies by combining historical data analysis with predictive modeling,” said a European head of trading.
- Infrastructure monitoring. A British banking group representative shared that their organization is using Dynatrace’s deterministic AI engine to monitor the MX.3 environment. “In this case, it learned the normal behavior of the MX.3 system and automatically flagged deviations, helping the bank ensure stability and performance in production,” said the bank executive.
- Software development. AI is improving efficiency, though full code generation remains limited. A major German financial institution reported a 20–30 percent improvement in development productivity. “Right now, it’s more about efficiency—like generating test code. We’re not yet at the point of letting AI write full code due to too many errors,” said an IT manager at a Danish bank. A participant added they were using AI dev tools to replatform some internal tools.
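The quote-and-booking use case above can be sketched in a few lines. This is purely illustrative: the message format, field names and `/trades` endpoint are hypothetical, and in practice an LLM would perform the extraction that a regex stands in for here to keep the sketch self-contained.

```python
import re

# Illustrative only: the request format, ticket fields and booking
# endpoint are hypothetical, not any specific platform's API.
TRADE_PATTERN = re.compile(
    r"(?P<side>buy|sell)\s+(?P<qty>[\d,]+)\s+(?P<instrument>[A-Z]{3}/[A-Z]{3})"
    r"\s+at\s+(?P<price>\d+(?:\.\d+)?)",
    re.IGNORECASE,
)

def extract_trade_ticket(email_body):
    """Turn a free-text client request into a structured ticket.

    In a real agent an LLM would do this extraction; a regex stands
    in here so the sketch stays deterministic.
    """
    match = TRADE_PATTERN.search(email_body)
    if match is None:
        return None  # ambiguous request: route to a human instead
    return {
        "side": match.group("side").lower(),
        "quantity": int(match.group("qty").replace(",", "")),
        "instrument": match.group("instrument").upper(),
        "price": float(match.group("price")),
    }

def book_trade(ticket, api_post):
    """Book the ticket through an injected API client.

    `api_post` is passed in so the sketch runs without a live
    trading platform; real code would call the platform's REST
    client here.
    """
    response = api_post("/trades", ticket)
    return response["trade_id"]
```

Keeping the extraction and booking steps separate also mirrors the governance concerns raised later in the discussion: the structured ticket can be logged and reviewed before anything reaches the trading platform.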
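The infrastructure-monitoring idea (learn a system's normal behavior, then flag deviations) can be illustrated with a toy baseline check. This is a minimal sketch of the general principle, not a description of how Dynatrace's engine actually works:

```python
from statistics import mean, stdev

def flag_deviation(history, current, threshold=3.0):
    """Return True when a metric reading strays from its baseline.

    Toy stand-in for an AIOps engine: learn the metric's normal
    range from historical samples, then alert on statistical
    outliers. `threshold` is expressed in standard deviations;
    `history` needs at least two samples.
    """
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu  # flat baseline: any change is a deviation
    return abs(current - mu) / sigma > threshold
```

Production engines learn seasonality and correlations across thousands of metrics, but the core contract is the same: no hand-written thresholds, only deviations from observed behavior.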
 
Are we at a point where AI can make trading decisions on its own? The answer was unanimously no. “There are rules a trader must follow, and those can be codified. But AI can’t replace a trader’s instinct or appetite. The trader will always have the final say,” one banking executive said.
AI pilots shared by the roundtable participants confirmed a trend also highlighted in the MIT report findings—what could be described as a “front-office bias.” Traditionally the source of differentiation for financial institutions, the front office received the highest concentration of AI pilot initiatives.
Organization, security, trust and integration are challenges.
Despite the large number of pilots, several challenges continue to slow down the integration of these initiatives in the business processes of financial institutions.
- Insufficient business-IT integration: Selecting the right AI use cases requires deeper collaboration between IT and business units. Proximity and cross-functional setups are seen as key enablers of AI success. One British banking group shared its approach: the MX.3 team sits side by side with the AI team, with 400 to 500 people, including tech specialists, traders, quants and risk experts, working in the same space. This setup fosters rapid experimentation and successful implementation.
- Regulation, compliance, and the sheer scale and impact of change can slow the pace of integration at large financial institutions: “We’ll get the full value when we can fully integrate AI into our processes,” said a Swiss private bank executive. “But in banking, we’re constantly confronted with data security concerns. Once we can integrate AI into decision-making, that’s when we’ll see real value.”
- Security issues are complex. AI can originate from various sources: personal user tools, internally developed models, or embedded features in third-party vendor software. Two executives emphasized the importance of centralized AI governance to establish clear policies and enforce transparency in AI usage across business workflows.
- Documentation and traceability of AI decisions are critical for meeting regulatory requirements. AI-driven decisions must be justifiable to auditors.
- A rising number of AI use cases remain unintegrated, posing a risk to operational stability. As more end users develop AI tools, many of these solutions remain standalone, lacking integration into core workflows and often going unmaintained over time. “Everyone is a developer now,” said an executive at a Dutch banking group. “New traders often use Python to build their own tools, but these solutions are sometimes incomplete, poorly integrated, or not well maintained, posing operational and governance challenges.” The bank is actively working with both GenAI and traditional AI to bridge these gaps and improve integration.
- The current generation of AI tools lacks features like long-term memory and customization, and the pace of innovation is very fast. Recent developments such as MCP (Model Context Protocol) and new memory management frameworks aim to improve how LLMs interact with enterprise tools, enabling them to retain context, reason over extended interactions and perform more complex tasks.
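The memory gap noted in the last bullet can be made concrete with a toy example. The sketch below is far simpler than MCP or any real memory framework: it stores past exchanges and retrieves those sharing the most words with a new query, so relevant context can be re-injected into the prompt of a stateless LLM.

```python
class ConversationMemory:
    """Minimal long-term memory sketch (illustrative only).

    Production frameworks use embeddings and vector search; plain
    word overlap stands in here to keep the idea self-contained.
    """

    def __init__(self):
        self._entries = []

    def remember(self, text):
        """Store one past exchange verbatim."""
        self._entries.append(text)

    def recall(self, query, top_k=2):
        """Return up to top_k stored entries sharing words with the query."""
        query_words = set(query.lower().split())
        scored = [
            (len(query_words & set(entry.lower().split())), entry)
            for entry in self._entries
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [entry for score, entry in scored[:top_k] if score > 0]
```

The recalled entries would be prepended to the next prompt, giving the model the appearance of memory without changing the model itself.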
 
AI transformation is equal parts people and tech.
Many of the institutions participating in the roundtable are investing in internal training, certifications and apprenticeships to build AI literacy across teams. People at all levels must be confident that AI won’t breach regulations or compromise compliance.
A trading firm executive highlighted another risk of falling behind on the AI transformation.
“The new generation of employees is AI-native—they expect these tools to be available in the workplace,” they said. “Organizations that fail to meet this expectation risk falling behind in attracting and retaining top talent.”
A subsequent AI wave might deliver more lasting value.
This year has been a learning curve for AI across our client organizations. While the first wave of AI brought some productivity gains at the individual and team levels—along with many promising pilot initiatives—it often fell short of delivering measurable impact. The second wave, focused on deep integration, holds the potential for more meaningful, long-term value.
This phase involves integrating AI with company-specific data, enterprise data platforms and broader IT infrastructures. It is also about focus: organizations must carefully select AI use cases that deliver meaningful value across the front, middle and back office, prioritizing applications that can be measured and that offer the highest return and operational impact. This wave comes with its own technological, security and organizational barriers to overcome, and it may take several years to become transformative.
The roundtable made one point very clear to me: Participating financial institutions are keeping their people at the heart of their AI journey. They’re investing in internal AI education and creating safe spaces for employees to explore and innovate. For them, it’s not about replacing people with AI, but about bringing together technology and talent to build faster, better, and more resilient systems.
