The Complete 2026 Guide to AI in Global Custody
How the world's largest custodians are deploying artificial intelligence to transform securities services

The five largest global custodians now safeguard nearly $198 trillion in assets. This figure grows by roughly 10% annually. Yet while straight-through processing (STP) rates are high, exception handling remains labour-intensive. Reconciliation teams work through breaks one by one. Corporate actions specialists decipher announcements in dozens of languages. Settlement operations race against ever-tightening deadlines.
AI is being positioned by custodians and vendors alike as the next operational step-change, with early production deployments and selectively disclosed results. BNY reports 117 AI solutions in production. State Street has operationalised its Alpha AI Data Quality platform. Northern Trust states it has reduced certain custody tax operations from eight hours to thirty minutes.
Unless otherwise noted, the performance figures cited below are self-reported in public materials and should be treated as indicative rather than independently verified. Firms define "use case", "solution", and "production" differently, so figures are directionally informative rather than strictly comparable. Most disclosed metrics appear to reflect specific workflows rather than end-to-end custody operations.
The Scale of Global Custody
Global custody involves the safekeeping of assets and processing of cross-border securities trades for institutional clients. Core functions include settlement, corporate actions processing, income collection, and reporting. The industry is dominated by a handful of global players, each managing tens of trillions in assets under custody and administration (AuC/A).
As at September 2025, the top five US custodians by AuC/A are: BNY ($57.8 trillion, up 11% year over year), State Street ($51.7 trillion, up 10%), JPMorgan ($40.1 trillion, up 12%), Citi (approximately $30 trillion, up 13%), and Northern Trust ($18.2 trillion, up 9%). Combined, these five institutions custody nearly $198 trillion in assets, making them critical infrastructure for global capital markets.
Why AI Matters Now
Three structural forces have made AI adoption imperative for custody operations. The first is accelerated settlement cycles. The United States transitioned to T+1 settlement (where trades settle one business day after execution) on 28 May 2024. According to the SIFMA/ICI/DTCC After Action Report, affirmation rates improved from 73% to 95%, but the compressed timeline leaves less room for manual exception processing. The European Union is expected to follow with its own T+1 transition in 2027, while CSDR (Central Securities Depositories Regulation) settlement discipline regimes already impose cash penalties for fails.
The second force is persistent fee pressure. Custody fees have compressed for decades. Institutional investors have become sophisticated buyers, expecting efficiency gains to translate into lower costs or enhanced service levels. The third is regulatory complexity. ISO 20022 messaging migration requires translation and validation capabilities. The EU AI Act mandates governance frameworks from August 2026. In the United States, Federal Reserve SR Letter 11-7 extends model risk management requirements to AI systems. Custodians must deploy AI to manage complexity while ensuring their AI systems meet emerging regulatory standards.
How Leading Custodians Are Approaching AI
The major custodians are taking distinct approaches to AI deployment, shaped by their existing technology stacks, operational philosophies, and disclosure strategies. Two firms stand out for the depth of their public commitments.
BNY: The Eliza Platform
BNY has emerged as perhaps the most aggressive AI adopter among major custodians. The firm's Eliza platform, a proprietary AI environment, now supports over 125 live use cases with 117 AI solutions in production as at Q3 2025, though the firm has not disclosed scope or scale of individual deployments. The approach emphasises democratisation: 98% of BNY's employees have received generative AI training, and BNY states that up to 20,000 employees are building or experimenting with agents. BNY has been notably forthcoming about its AI metrics; peers have disclosed less, making direct comparison difficult.
Crucially, BNY has begun disclosing quantified outcomes. According to Leigh-Ann Russell, Chief Information Officer and Global Head of Engineering, the firm reports a 40% reduction in false positives during NAV (net asset value) calculation. Legal contract review time has reportedly decreased by 25%. A digital employee handling payment repairs is reported to process approximately 2,000 repairs per week, though scope and baseline were not disclosed. In code security, roughly 90 digital engineers contribute tens of thousands of lines of code, autonomously fixing simple issues while escalating complex ones to human supervisors.
"While productivity is a metric, 'joy' and employee satisfaction are equally important. AI removes the drudgery from work."
The Eliza platform is designed to be model-agnostic, allowing BNY to swap in different underlying models as AI capabilities evolve. In December 2025, BNY announced the integration of Google Cloud's Gemini Enterprise to advance agentic research capabilities for market analysis. The firm has also established a collaboration with Carnegie Mellon University to create the BNY AI Lab for research and development.
State Street: Alpha and Operational Control
State Street's AI strategy centres on its Alpha platform, which combines custody, accounting, and middle-office services with data analytics capabilities. As at Q2 2025, the firm reported $380 billion in Alpha platform AuC/A wins, with 28 clients live on the platform. The broader Charles River ecosystem, which provides front-office investment management technology, manages over $58 trillion in assets.
A distinctive element of State Street's approach is the Alpha AI Data Quality (AADQ) capability. According to company disclosures, traditional rules-based exception processing flagged over 31,000 exceptions over a six-month period, of which only 250 were true positives. The AI-enabled system identified just 4,000 exceptions while reportedly capturing all genuine issues identified by their evaluation process, dramatically reducing the false positive burden on operations teams.
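The arithmetic behind that disclosure is worth making explicit. The short calculation below is illustrative only, and assumes (as the disclosure suggests) that the AI-enabled system captured all 250 true positives:

```python
# Illustrative precision arithmetic from State Street's disclosed AADQ figures.
# Assumes all 250 true positives were captured by the AI system, per the disclosure.
rules_flagged, true_positives = 31_000, 250
ai_flagged = 4_000

rules_precision = true_positives / rules_flagged   # roughly 0.8% of rules-based flags were real
ai_precision = true_positives / ai_flagged         # roughly 6.3% under the AI system
noise_reduction = 1 - ai_flagged / rules_flagged   # roughly 87% fewer exceptions for humans to review
```

On these figures, the gain is less about catching more issues than about drastically shrinking the haystack operations teams must search.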
Perhaps more significantly, State Street has articulated a clear operational prerequisite for AI deployment. In 2023, the firm made the counterintuitive decision to insource operations previously outsourced to vendors in India. Mostapha Tahiri, Executive Vice President and Chief Operating Officer, explained the rationale at the Fortune COO Summit in June 2025:
"Why would you insource more people in an era of AI?"
Tahiri's answer: operational control is a prerequisite for change. Third-party arrangements create friction due to compliance complexity and cultural misalignment. Custodians cannot effectively change operations they do not directly control.
The Other Major Custodians
The remaining major custodians have disclosed less about their AI initiatives. JPMorgan Securities Services ($40.1 trillion AuC/A) won Best Data Analytics Provider in the 2025 Waters Rankings, suggesting meaningful data integration. Citi (approximately $30 trillion) has announced a digital asset custody platform targeting 2026. Northern Trust has deployed AI within its Digital Partner Platform. All major players are investing; what remains opaque is the depth and outcomes of those investments.
Core AI Use Cases in Custody
These firm-level strategies are deployed across core operational functions, where AI is achieving its most measurable impact.
Reconciliation
Reconciliation (matching internal records against external sources) generates enormous exception volumes in custody operations. This is where AI has achieved the most mature deployments. Solutions use supervised machine learning for pattern recognition and observational learning algorithms that adapt to historical matching decisions. SmartStream claims auto-match rates exceeding 97% and exception reductions of 67%, with processing that previously took weeks now completing in seconds. BNY has integrated reconciliation automation into Eliza as a core use case. The technology is proven; the challenge now is scaling across asset classes and counterparties. Vendor case study results may not generalise across asset classes or data quality regimes.
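To make the mechanics concrete, here is a minimal, purely illustrative sketch of score-based match triage. The record fields, feature weights, and thresholds are invented for illustration; production systems learn weights from historical matching decisions rather than hard-coding them:

```python
from dataclasses import dataclass

@dataclass
class Record:
    ref: str          # transaction reference
    amount: float
    value_date: str   # ISO date

def match_score(internal: Record, external: Record) -> float:
    """Toy similarity score for a candidate record pair.
    Weights are illustrative; ML-based systems fit them to
    historical human matching decisions."""
    score = 0.0
    if internal.ref == external.ref:
        score += 0.5
    if abs(internal.amount - external.amount) < 0.01:
        score += 0.3
    if internal.value_date == external.value_date:
        score += 0.2
    return score

AUTO_MATCH = 0.9   # above this, close the break automatically
REVIEW = 0.5       # between thresholds, propose a match for human review

def triage(internal: Record, external: Record) -> str:
    """Route a candidate pair to auto-match, suggested-match, or exception."""
    s = match_score(internal, external)
    if s >= AUTO_MATCH:
        return "auto-match"
    if s >= REVIEW:
        return "suggested-match"
    return "exception"
```

The middle "suggested-match" band is where the operational leverage lies: the system pre-assembles the likely pairing, and a human only confirms or rejects it.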
Corporate Actions Processing
Corporate actions remain one of the most challenging areas of custody operations. Announcements arrive in dozens of languages, via multiple channels, with varying levels of structure. Natural language processing and optical character recognition enable automated extraction and classification. According to industry analysis presented at SIBOS 2024, corporate actions data costs firms $3 to $5 million annually, with 75% of firms manually revalidating data. BNP Paribas Securities Services processes approximately 30,000 messages monthly through its ML translation tool, while a Chainlink, Swift, and Euroclear initiative is applying AI to structured corporate actions data delivery.
Settlement Prediction
Predictive analytics for settlement failure represents a natural AI application in a T+1 environment. Clearstream claims 80% accuracy in predicting settlement failures with up to four days advance warning through its Settlement Prediction Tool, launched in July 2025. Accenture research suggests machine learning techniques can achieve 83% to 97% prediction accuracy depending on methodology. These tools enable operations teams to prioritise interventions on at-risk instructions before deadlines lapse.
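The underlying idea can be sketched in a few lines. The toy logistic model below is illustrative only: the features and coefficients are assumptions, not Clearstream's or Accenture's methodology, and real models are trained on historical settlement data:

```python
import math

def fail_probability(cpty_fail_rate: float,
                     hours_to_deadline: float,
                     is_illiquid: int) -> float:
    """Toy logistic score for settlement-fail risk.
    Coefficients are invented for illustration; production models
    fit them to historical fails across many more features."""
    z = -2.0 + 3.0 * cpty_fail_rate - 0.05 * hours_to_deadline + 1.0 * is_illiquid
    return 1.0 / (1.0 + math.exp(-z))

def prioritise(instructions: list) -> list:
    """Rank open instructions most-at-risk first, so operations
    teams intervene on likely fails before the cutoff."""
    return sorted(instructions,
                  key=lambda feats: fail_probability(*feats),
                  reverse=True)
```

The operational point is the ranking, not the absolute probability: with T+1 compressing the intervention window, teams work a risk-ordered queue rather than a chronological one.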
Client Service
BNY, State Street, and BNP Paribas claim that generative AI interfaces now handle some routine client inquiries, escalating more complex cases to humans. The scope appears narrow for now, given the complexity involved when things go wrong in global custody, but as agents improve, the way clients experience service from global custodians may change materially.
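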
Agentic AI: Current State

The industry is moving beyond reactive chatbots toward agentic AI: systems that can execute multi-step tasks with minimal human oversight. According to the World Economic Forum and Cambridge Centre for Alternative Finance, agentic AI remains nascent in financial services. Vendor projections are optimistic, but such forecasts should be treated with caution given the high failure rates discussed below.
The industry's progress can be mapped against an autonomy spectrum: from Level 0 (no AI) through Level 1 (AI-assisted, human executes) and Level 2 (AI executes, human approves) to Level 3 (AI executes within guardrails) and Level 4 (full autonomy). Most disclosed custody deployments sit at Level 1 or Level 2. The "digital employee" model represents early Level 3 ambitions, with agents operating within defined boundaries and escalating exceptions to human supervisors.
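The autonomy spectrum maps naturally onto a simple control check. The sketch below is an illustrative encoding of the levels described above, not any custodian's actual framework:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    NONE = 0          # Level 0: no AI
    ASSISTED = 1      # Level 1: AI assists, human executes
    APPROVED = 2      # Level 2: AI executes, human approves first
    GUARDRAILED = 3   # Level 3: AI executes within guardrails, escalates exceptions
    FULL = 4          # Level 4: full autonomy (not observed in production custody)

def needs_human(level: Autonomy, within_guardrails: bool = True) -> bool:
    """True if a human must act before the AI's output takes effect."""
    if level <= Autonomy.APPROVED:
        return True
    if level == Autonomy.GUARDRAILED:
        # Level 3 only escalates when the action falls outside its guardrails.
        return not within_guardrails
    return False
```

Framed this way, the industry's current position is clear: most disclosed deployments always return True, and the "digital employee" model is the first in which the answer depends on whether the action stays inside its guardrails.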
BNY's approach to agentic AI centres on the concept of "digital employees": AI agents treated like staff members with identities, managers, specific personas, and defined roles. These agents operate within guardrails with escalation protocols. Sarthak Pattanaik, Chief Data and AI Officer, described the shift: "Now, instead of handling certain tasks in the first instance, the role of the human operator is to be the trainer or the nurturer of the digital employee."
No major custodian has publicly disclosed fully autonomous agentic systems in production custody operations. The current state is best characterised as supervised autonomy: agents that handle routine tasks independently while escalating complexity to humans.
Implementation Realities
For every successful AI deployment, there are many that fail to reach production. Understanding why is essential for practitioners.
The Failure Rate
Industry statistics are sobering. According to aggregated research, 42% of companies are now abandoning a majority of their AI initiatives, up from 17% in 2024. An average of 46% of generative AI proofs-of-concept are scrapped before reaching production. MIT's NANDA Initiative found that 95% of generative AI pilots fail to achieve revenue acceleration.
Banking-specific data is equally challenging. Among banks, 52% are piloting agentic AI but only 16% have fully deployed use cases. Gartner predicts more than 40% of agentic AI projects will be cancelled by 2027. The gap between experimentation and production remains vast.
Key Barriers
Infrastructure challenges dominate. According to NVIDIA, approximately 40% of financial services experts cite data quality as the main AI challenge. Legacy system integration compounds the problem: 68% of CTOs in an EY survey cited legacy systems as the most significant adoption obstacle, with initiatives experiencing 12 to 18 month delays due to compatibility challenges.
Change management may be the most underestimated barrier. McKinsey's State of AI 2025 report found that nearly two-thirds of organisations have not begun scaling AI across the enterprise. Only 38% of AI projects meet or exceed ROI expectations, according to Deloitte. BCG research indicates 74% of companies struggle to achieve and scale value from AI. Governance remains immature: a Smarsh survey found only 32% of financial services firms have formal AI governance programmes.
Beyond data and legacy systems, the expertise gap is acute. Custody operations require domain specialists who understand settlement, corporate actions, and regulatory requirements. AI implementation requires data engineers, ML specialists, and model risk professionals. Finding individuals or teams who bridge both worlds is exceptionally difficult. Most firms must either build this capability over years or source it externally.
The Investment Reality
The efficiency gains are real, but they follow (not precede) substantial investment. Meaningful AI deployment requires: talent (data engineers, ML operations, model validation, AI governance); data remediation (often the majority of project time); vendor and infrastructure costs (platforms, integration, monitoring); governance infrastructure (model inventories, risk tiering, audit trails); and change management (training, role redesign, cultural adaptation). Firms expecting AI to reduce costs in year one are misreading the investment profile.
The Timeline Question
Realistic timelines for meaningful AI deployment in custody operations run 18 to 36 months from initiation to scaled production. The first 6 to 12 months typically focus on data remediation, infrastructure, and governance framework design. Pilot results emerge in months 12 to 18. Scaled deployment, if it happens, follows in years two and three. Firms expecting quick wins are disproportionately represented among the 42% abandoning initiatives.
The Governance Reality
Effective AI governance in custody operations means: documented model inventories with defined risk tiers for each use case; baseline performance metrics established before deployment; ongoing drift monitoring to detect degradation; clear escalation protocols when AI outputs fall outside confidence thresholds; and audit trails sufficient for regulatory examination under SR 11-7 or the EU AI Act. Few firms have this infrastructure in place. Building it is often more work than building the AI itself.
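As an illustration of the minimum viable governance record, the sketch below encodes a model inventory entry with a risk tier, a pre-deployment baseline, a drift check, and an audit trail. The structure and the 5% tolerance are assumptions for illustration, not a compliance implementation of SR 11-7 or the EU AI Act:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a model inventory: identity, risk tier,
    pre-deployment baseline, and an audit trail of checks."""
    name: str
    risk_tier: str            # e.g. "high" for NAV-adjacent models
    baseline_metric: float    # performance established before deployment
    audit_log: list = field(default_factory=list)

    def check_drift(self, current_metric: float, tolerance: float = 0.05) -> bool:
        """Flag drift when performance degrades beyond tolerance,
        recording every check for later regulatory examination."""
        drifted = (self.baseline_metric - current_metric) > tolerance
        self.audit_log.append({"observed": current_metric, "drifted": drifted})
        return drifted
```

Note that the audit log records every check, not just failures: an examiner asking "how do you know the model has not degraded?" needs evidence of monitoring, not only evidence of incidents.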
Perhaps most concerning is the operational risk dimension. Research from the Federal Reserve Bank of Richmond found that "banks with higher AI intensity incur greater operational losses than their less AI-intensive counterparts." The mitigating factor: strong risk management. Global custody is a high-stakes environment where errors are not tolerable. A hallucination in a NAV calculation or a failed settlement instruction can trigger regulatory breaches and reputational damage that no efficiency gain can offset. According to an ACA Group survey, only 28% of financial services firms test or validate AI outputs.
Success Factors
MIT research identified patterns distinguishing successful AI implementations. Generic tools struggle because they do not adapt to specific workflows. More than half of generative AI budgets go to sales and marketing tools, yet the biggest ROI appears in back-office automation. Purchasing AI tools from specialised vendors succeeds about 67% of the time; internal builds succeed only one-third as often. Empowering line managers rather than central AI labs to drive adoption correlates with success.
Adoption friction typically falls into five categories: Clarity, Capability, Credibility, Control, and Consequences. Successful implementations address all five; failed ones typically stall on one or two undiagnosed friction points.
BNY's approach aligns with these findings. Russell emphasised "Learning Quotient": the ability to learn and adapt. The firm's focus on training 98% of staff on generative AI reflects an understanding that cultural adoption matters as much as technical deployment.
What We Still Don't Know
What remains unclear from public disclosures:
How "production" is defined and what percentage of operational volume is covered
Baselines, test design, and methodology behind reported efficiency metrics
Independent validation or assurance of claimed outcomes
Operational risk and error rates post-automation over time
When a custodian cites an efficiency gain, the operational question is: what is the exception rate after automation, and what is the human escalation load?
Reflections for Global Custody Leaders
Before committing budget to AI initiatives, custody leaders should be able to answer these questions clearly. Uncertainty on several suggests the initiative is not yet ready for production investment.
Operational Control: Do you directly control the operations you intend to transform, or are they managed by third-party vendors? If outsourced, what contractual and practical barriers exist?
Data Readiness: Is your data clean, standardised, and accessible across the workflows you intend to automate? What percentage of project time should you allocate to data remediation?
Expertise and Talent: Do you have people who understand both custody operations and AI implementation? Where will model validation and governance expertise come from: internal build, external hire, or advisory partnership?
Governance Infrastructure: Do you have a model inventory, risk tiering framework, and escalation protocols sufficient for regulatory examination? How will you document AI-assisted decisions for audit?
Investment and Timeline: Have you budgeted for 18 to 36 months before expecting scaled efficiency gains? Does your business case account for talent, data, infrastructure, governance, and change management?
Change Readiness: Is your operations leadership prepared to shift from first-line processing to supervising AI agents? Have you identified which friction points (clarity, capability, credibility, control, consequences) are most likely to stall adoption?
Use Case Selection: Are you starting with high-volume, rules-based processes where AI performance is easiest to validate? Have you identified which use cases require human-in-the-loop review versus those suitable for higher autonomy?
What's Next: The Fee Expectations Question
A fundamental tension is emerging. If AI enables efficiency gains, what do institutional investors expect: fee reduction, service enhancement, or value migration to new premium services? The industry is positioning for the latter two. There is also the edge case problem: AI excels at automating the predictable 80% of volume, but custody operations are defined by the unpredictable 20%. If AI handles volume but humans still handle complexity, does the operating model actually change?
Conclusion
The implementation path remains challenging. High failure rates, data quality barriers, legacy system friction, and change management complexity separate successful deployments from abandoned initiatives. The Richmond Fed's finding that AI intensity correlates with operational losses absent strong risk management underscores the governance imperative.
For custody practitioners, the implications are clear. AI capability is becoming a competitive requirement. The firms that navigate these challenges successfully will define the next era of global custody. Those that do not will find themselves outpaced by competitors who invested earlier and planned more realistically.
Your AI initiative is not stuck because of technology.
It is stuck because of the humans around it. The AI Change Leadership Intensive uses the 5C Adoption Friction Model to identify which friction point is actually in the way, and the specific moves that would shift your first resistant group to active adoption within 90 days.
I write about AI adoption in financial services at brennanmcdonald.com. If you found this useful, subscribe to my newsletter for weekly insights on leading change in post-trade operations.
Sources
Note: Sources range from regulatory filings and earnings releases (highest credibility) to vendor case studies and analyst surveys (indicative but methodology-dependent).
Primary Sources
BNY Q3 2025 Earnings Release (16 October 2025)
State Street Q3 2025 10-Q Filing and Earnings Call (17 October 2025)
JPMorgan Q3 2025 Earnings Presentation (14 October 2025)
Citi Q3 2025 Earnings Press Release (14 October 2025)
Northern Trust Press Release (9 December 2025)
SIFMA/ICI/DTCC T+1 After Action Report (12 September 2024)
ESMA Final Report on CSDR Penalty Mechanism (November 2024)
Executive Interviews
Leigh-Ann Russell, CIO & Global Head of Engineering, BNY (AI in Finance with BNY: Advanced Insights S2E8, 2025)
Mostapha Tahiri, EVP & COO, State Street (Fortune COO Summit, June 2025)
Research and Industry Reports
McKinsey State of AI 2025 (November 2025)
MIT NANDA Initiative (August 2025)
Federal Reserve Bank of Richmond: AI and Operational Losses (October 2025)
World Economic Forum / Cambridge Centre for Alternative Finance (December 2024)
EY/MIT Technology Review: Agentic AI in Banking (2025)
Deloitte Financial AI Adoption Report (2024)
BCG AI Value Realisation Study (October 2024)
NVIDIA State of AI in Financial Services (January 2024)
EY Financial Services CTO Survey (2024)
Smarsh AI Governance Survey (August 2024)
ACA Group AI Validation Survey (2025)
Vendor and Technology Sources
OpenAI Case Study: BNY Eliza Platform (2025)
BNY/Google Cloud Joint Press Release (8 December 2025)
Clearstream Settlement Prediction Tool Press Release (July 2025)
SmartStream / WatersTechnology (2025)
Chainlink / SIBOS 2024 Corporate Actions Initiative
Glossary
AuC/A (Assets under Custody and Administration): The total market value of assets held by a custodian on behalf of clients, including securities, cash, and other financial instruments.
STP (Straight-Through Processing): Automated processing of transactions from initiation to settlement without manual intervention. High STP rates indicate operational efficiency.
T+1 Settlement: A settlement cycle where securities transactions settle one business day after the trade date. The US adopted T+1 in May 2024; the EU is targeting 2027.
CSDR (Central Securities Depositories Regulation): EU regulation governing central securities depositories, including mandatory buy-ins and cash penalties for settlement failures.
NAV (Net Asset Value): The per-share value of a fund, calculated by dividing total assets minus liabilities by the number of shares outstanding. NAV calculations are a core custody function.
AADQ (Alpha AI Data Quality): State Street's AI-enabled exception processing capability within its Alpha platform.
Agentic AI: AI systems that can execute multi-step tasks with minimal human oversight, making decisions and taking actions autonomously within defined guardrails.
Human-in-the-loop: A deployment model where AI systems require human approval before executing actions, ensuring oversight of critical decisions.
Supervised autonomy: AI agents that handle routine tasks independently while escalating exceptions or complex scenarios to human operators.
Digital employee: BNY's framework for AI agents treated as staff members with identities, managers, and defined roles.
Guardrails: Constraints and boundaries programmed into AI systems to limit their actions to approved parameters.
Escalation protocol: Defined rules for when and how AI systems hand off decisions to human supervisors.
Article prepared January 2026. All figures as at Q3 2025 unless otherwise noted.