Diederik Geeraerts, the CEO of Taskize, shares his thoughts on the impacts of A.I. on many aspects of operations.

Grygo is the chief content officer for FTF & FTF News.
Technologies based on artificial intelligence (A.I.) are having a significant impact on the software, systems, IT infrastructures, and operations of securities firms. A.I. is also raising the stakes of uncertainty for those working in securities operations.
I’m referring to the uncertainty that Bill Keenan mentioned in my Q&A with him, which ran on February 10. Keenan helped write the movie “Bull Run,” which is based on his bestselling memoir “Discussion Materials: Tales of a Rookie Wall Street Investment Banker.” He once worked as an investment banking associate at Deutsche Bank after a career in ice hockey, a trajectory that is “amazing,” as some younger people might say. (You will have to see the movie or read his book to see how it all turned out.)
When I asked Keenan about the advice he would give to young people working in securities trading, part of his answer was: “You quickly learn how to live with a certain level of professional discomfort and uncertainty, a vital skill that will serve you well regardless of what you end up choosing as a career in the long run.”

Diederik Geeraerts
Those words resonated with me when I later spoke with Diederik Geeraerts, the CEO of Taskize. The company describes itself as a provider of a web-native, software-as-a-service (SaaS) “collaboration platform … used by investment operations teams worldwide to modernize counterparty communication and workflows.” The company offers a suite of automated, scalable multi-party solutions for business, communication, interoperability, data, and A.I.
I recorded our discussion and asked him if I could share the section of our conversation that deals with A.I., and he agreed. So, what follows is an unfiltered transcription of our discussion.
Q: What about your clients, your customers, are they clamoring for A.I., or are they still coming to grips with it?
A: There are several that are leading. There are several that are following. And there are several that are ‘wait and see.’
I think the legal barrier is a big one. … When do you start to trust large language models [LLMs] and who takes the liability if someone gets it wrong? And then again, I’ve had fantastic discussions where they [customers] said, ‘Look, our operational error rate is 1 percent, whatever it is, or even less. But our LLM model is at 99.4 percent. So, why not switch it on? Because you’re going to do better than the operational human errors, which we make anyway?’
So, it’s going to be a very interesting legal discussion, and we will always give the option to our clients to turn it on or turn it off. It’s not a liability that we will take.
We will have solutions, but it’ll be up to the client to decide if they want to switch ’em on or switch ’em off, based on the framework that we built.
And, yeah, of course, I think the industry is investing massively. I think every organization has an A.I. roadmap, and that’s where we say, ‘Look, yes, we will allow and can help you connect to Taskize via MCP [Model Context Protocol] types of technology to bring that to your ecosystem via us.’
And I think what’s happening over the next couple of years — I always call it the 30 to 40 percent low complexity queries — they will be out via one type of technology or another.
But what’s interesting is that we’re also looking at how we can then actually equip the end user to be much more efficient in solving high complexity issues, making correlations by bringing data decision points into the Taskize platform.
The platform revolves around what we call Taskize Bubbles: where people talk to each other within specific, subject-focused Taskize Bubbles, bringing insights, bringing intel to help each other solve the complex queries.
In my previous mandate — I always mentioned it to everyone I spoke to in New York — I saw a huge hidden cost in turnover. We are continuously re-educating each other on processes, operational processes, on whatever type of SOPs [standard operating procedures] that you use, or knowledge base. It’s a massive cost.
So, I see as well a huge opportunity to bring that element of knowledge, an industry knowledge base, into these A.I. type of capabilities. … While people today are still focusing purely on the operational efficiency, I believe in a shift to value-added, high complexity, A.I. support.
Q: Could you give me a quick example of that?
A: So imagine that you’re dealing with a very complex asset-backed security that is not settling and whatnot.
So instead of a user today having to go into endless manuals, which are stored somewhere online, and starting to look up what this asset-backed security is actually doing, how it’s being rated, what historically has been happening with this — actually, there you should be using A.I. prompts to bring that within a very safe environment. … To help the user not have to go through all that pain of consulting six, seven, eight various applications to find the answer … [you] actually bring the various points of decision into our bubble to allow them just to analyze the facts and then take a decision on the back end of it.
Because I truly believe in the human element within the industry. On certain things, A.I. will get you somewhere, but not everywhere.
Q: And A.I. doesn’t have the full consciousness of the human mind, which is really going to be put to the test.
A: Also, the sentiment of the client who is asking or who is escalating.
For one client, you might decide A, but for another, you might decide B, just because of the history of the client and the like. So, in highly complex issues with high exposures, there’s a very complex decision-making process … And today it usually requires multiple people to give input into one of these complex issues.
So, I see A.I. really bringing the next layer into that domain, helping the industry solve complex issues quicker and better, to reduce risk.
Q: Ops people express two fears — first, that they’ll be replaced, and second, that they’ll be put in a position where their job changes and they’re not sure they can survive their new job. They’ll be required to work at a much higher level, and they’ll have to think more and develop analytical skills.
A: Upskilling will need to be done.
Q: I think you could also say to some of the older, more established folks, you have to go back to the uncertainty because the technology is changing the landscape. There’s no question about it.
A: Oh, no doubt, no doubt. … I wonder if you were to do a change curve on the operational folks on A.I. readiness, where on that curve they would be?
I think we’d be surprised that we’re not investing enough on the change management side … We’re investing far too much on the build, right? … Guys, it’s the Ops users who are going to be confronted with all of this. Who’s taking care of all of them?