James Ferrarelli, COO of SSGA, says there is an urgent need for effective controls to manage the risks tied to A.I. models

James Ferrarelli
In April, the U.S. Commodity Futures Trading Commission (CFTC) outlined its stance on artificial intelligence (A.I.) in financial operations. Speaking at the Global Alternative Investment Management Ops A.I. Summit, CFTC Commissioner Kristin N. Johnson noted that while A.I. is improving efficiency, it is also introducing new risks. She urged firms to strengthen oversight of A.I. integration, improve third-party risk management, and enhance inter-agency coordination to address these growing vulnerabilities.

As chief operating officer (COO) of State Street Global Advisors, the investment management arm of State Street Corp., James Ferrarelli is well aware of the litany of factors involved in adopting A.I. across large-scale securities operations. As head of operations, with oversight of technology, risk, and global business processes, he has overseen a wave of new initiatives to streamline the integration of these technologies. When asked what A.I. issues keep him up at night, he cites a long list: regulatory compliance and legal risks, data quality and governance, model risk management, operational risks, ethical and trust issues, data privacy and security, integration with legacy systems, and skill gaps and talent shortages.
Q: What are the biggest risks you see emerging as asset managers integrate A.I. into their operations systems and processes?
A: Integrating A.I. into asset management operations systems and processes brings numerous benefits, but it can also introduce significant risks if not implemented correctly. Ensuring the accuracy, consistency, and reliability of data used by A.I. systems is critical. Poor data quality can lead to incorrect insights and decisions, in addition to undermining the trustworthiness of A.I.-driven processes themselves. Protecting sensitive financial data from breaches or unauthorized access is paramount, as always.
A.I. integration must be done with full regard for regulatory obligations and expectations, but the lack of clear regulatory frameworks can pose challenges in ensuring compliance. Firms must navigate the complexities of aligning internal A.I. policies with regulatory standards.
When it comes to model risk management, A.I. models require robust governance and oversight mechanisms to minimize risks. This includes validating and back-testing models, as well as monitoring their performance over time. There is a need for effective controls to manage the risks tied to A.I. models, such as ensuring transparency and explainability.
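To make the monitoring step concrete, the sketch below shows one common form such a control can take: a Population Stability Index (PSI) check that compares the distribution of live model scores against the distribution seen at validation time, and escalates when drift exceeds conventional thresholds. It is a minimal Python illustration, not a description of SSGA's actual tooling; the data and thresholds are hypothetical.

```python
# Minimal sketch of ongoing model monitoring via the Population Stability
# Index (PSI), a common drift check in model risk management.
# All data and thresholds here are hypothetical.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (validation-time) sample and a live sample."""
    # Bin edges come from the baseline distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # Small floor avoids log(0) when a bin is empty.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Scores captured when the model was validated vs. scores seen in production.
baseline_scores = np.random.default_rng(0).normal(0.0, 1.0, 5_000)
live_scores = np.random.default_rng(1).normal(0.3, 1.1, 5_000)  # drifted

drift = psi(baseline_scores, live_scores)
if drift > 0.25:    # a conventional "significant shift" threshold
    print(f"ALERT: PSI={drift:.3f} exceeds 0.25 -- escalate for revalidation")
elif drift > 0.10:
    print(f"WARN: PSI={drift:.3f} -- monitor closely")
else:
    print(f"OK: PSI={drift:.3f}")
```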
A.I. can automate various stages of the trading process, but this also introduces risks related to system stability and reliability. Any malfunction or error in A.I. systems can lead to significant financial losses. The integration of A.I. into risk management processes must be carefully managed to avoid over-reliance on automated systems, which brings its own challenges and liabilities.
As such, building trust in A.I. models is crucial. This involves addressing ethical concerns, such as bias in A.I. algorithms, and ensuring that A.I. decisions are fair and transparent. Finally, there is a shortage of talent with the skills needed to develop, implement, and manage A.I. systems, which can hinder the effective integration of A.I. into asset management operations.
Q: What is your “Three Lines of Defense Framework” for governance and model risk management for A.I. tools used in operations or compliance?
A: The First Line of Defense (FLoD) includes business units and corporate functions that perform day-to-day operational activities. They are responsible for self-identifying and mitigating issues, implementing development, monitoring, and testing guidance for A.I., and ensuring compliance with internal controls. Specific roles within the FLoD, such as Generative A.I. Application Owners and Developers, are responsible for identifying risks, implementing risk response measures, and ensuring compliance with applicable regulatory requirements.
The Second Line of Defense (SLoD) involves oversight functions such as risk management and compliance, which establish risk tolerance limits and monitor adherence to these limits.
The Third Line of Defense includes internal audit functions that provide independent assurance on the effectiveness of governance, risk management, and internal controls.
On top of that, we have an A.I. Risk Oversight Working Group, a forum that includes members with A.I. development and risk management expertise. It maintains oversight responsibility for A.I. governance metrics, review of assurance results, and compliance with the A.I. Policy.
We also have an A.I. Governance and Controls Team, responsible for executing centrally managed A.I. risk management, governance, regulatory monitoring, standards, and reporting. They ensure that A.I.-enabled solutions comply with control objectives and mitigation requirements.
We adhere to a “responsible” A.I. standard, which outlines the ethical use of A.I., ensuring transparency, fairness, and accountability. It includes guidelines for developing A.I. use cases that align with regulatory requirements and internal policies.
When it comes to model risk management, our A.I. models are subject to rigorous governance and oversight mechanisms. This includes validating and back-testing models, monitoring their performance, and ensuring transparency and explainability.
Lastly, we align with the ISO 42001 standard, which provides guidelines for A.I. governance and risk management. It includes governance structures to establish oversight and accountability, risk management protocols to identify, assess, and mitigate potential risks, and compliance mechanisms to maintain adherence to evolving legal and regulatory standards.
Q: Are there particular controls or oversight mechanisms you’ve found effective for minimizing the risks tied to A.I. models?
A: We believe the Lifecycle Approach to A.I. Systems, referenced in my previous answer, is effective in minimizing the risks associated with A.I. models. This approach ensures compliance at every stage of the A.I. system lifecycle, including conceptualization, purchase or development, deployment, and operational monitoring. Each stage undergoes rigorous evaluation to ensure alignment with regulatory requirements, ethical principles, and internal standards.
The approach includes a structured governance framework, dedicated oversight groups, adherence to responsible A.I. standards, and rigorous model risk management. These measures ensure A.I. tools are developed and deployed in a transparent, fair, and compliant manner, mitigating potential risks and enhancing operational efficiency.
Q: Commissioner Johnson spoke about using A.I. to combat fraud and cyber threats — how is State Street Global Advisors applying A.I. in those areas?
A: State Street Global Advisors is utilizing A.I. to enhance its cybersecurity measures and fraud detection capabilities. By establishing dedicated centers, implementing advanced data protection systems, and adhering to comprehensive A.I. risk management policies, the firm is effectively mitigating risks associated with fraud and cyber threats.
State Street has established a Cyber Fusion Center to respond effectively to cyber threats. This center employs a threat-focused cybersecurity approach that provides real-time knowledge of cyber threats, enabling informed risk decisions, optimized defense strategies, and rapid action in response to those threats.
In addition, we have a comprehensive A.I. Risk Policy that includes governance over A.I. applications, especially those related to fraud detection and cybersecurity. This policy ensures that A.I. models are validated, back-tested, and monitored for performance, transparency, and explainability.
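As a rough illustration of what A.I.-based fraud screening can look like in practice, the sketch below trains an unsupervised anomaly detector (scikit-learn's IsolationForest) on routine transaction features and flags outliers for review. The features, data, and parameters are invented for the example and do not reflect State Street's actual systems, which would combine many more signals with human review.

```python
# Illustrative sketch of anomaly-based fraud screening.
# All features, data, and parameters are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per transaction: log-amount, hour of day,
# and deviation from the account's typical counterparty set.
normal = np.column_stack([
    rng.normal(4.0, 0.8, 10_000),   # log-amount of routine activity
    rng.normal(13.0, 3.0, 10_000),  # business-hours timestamps
    rng.normal(0.0, 1.0, 10_000),   # familiar counterparties
])
suspicious = np.array([[9.5, 3.0, 4.0]])  # large, off-hours, unusual payee

model = IsolationForest(contamination=0.001, random_state=0).fit(normal)

# score_samples: lower (more negative) means more anomalous.
for row in (normal[:1], suspicious):
    label = model.predict(row)[0]       # -1 = flagged, 1 = normal
    score = model.score_samples(row)[0]
    print(f"features={row[0]}, score={score:.3f}, flagged={label == -1}")
```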
Q: How aligned are your internal A.I. policies with the kinds of regulatory expectations outlined by the CFTC?
A: State Street Global Advisors has taken significant steps to align its internal A.I. policies with the regulatory expectations outlined by the Commodity Futures Trading Commission (CFTC).
The firm’s comprehensive governance framework, dedicated oversight groups, adherence to responsible A.I. standards, and rigorous model risk management practices ensure that A.I. applications are developed and deployed in a transparent, fair, and compliant manner. These measures address key regulatory concerns related to risk assessment, transparency, data quality, and ethical considerations.
Q: What capabilities do you think A.I. will bring to operations over the next five years?
A: A.I. will support asset managers with advanced data processing capabilities, enabling more effective analysis of macroeconomic trends, sector performance, and in-depth financial metrics. This will lead to more efficient portfolio optimization and better-informed decision-making.
A.I. will also automate routine tasks, reducing manual intervention and increasing operational efficiency. It will enhance client and product intelligence by analyzing large volumes of data and generating insights that can be used to improve research and decision-making.
A.I.-powered systems will also provide continuous monitoring and real-time alerts, enabling proactive risk management and immediate responses to potential threats.
Lastly, they will transform operations management by simulating human intelligence and problem-solving capabilities.