AI in Securities Settlement Prediction
Every sub-category of global custody can benefit from well-tested use of AI

Share this email with a colleague now so they’re on the same page. We only grow through word of mouth and your referrals - Brennan
In the T+2 era, a settlement exception was a morning annoyance. In the T+1 era, it is a midnight crisis. The compression of the US exception handling window from 29.5 hours to just 6 hours represents an 80% reduction in time available to identify, investigate, and resolve potential settlement fails. Nearly two years after the May 2024 transition, fail rates have remained stable at 2-3%. But stable fail rates combined with dramatically less time to act mean prediction is no longer optional. The firms that can identify likely fails before market close have 6 hours to fix them. The firms that cannot are left managing exceptions reactively in a window that no longer tolerates it.
This article examines how AI-powered settlement prediction works, where it is being deployed, and why most firms are not yet using it. For broader context on AI across custody functions, see The Complete Guide to AI in Global Custody. For the broader shift from automation to autonomy, see Agentic AI in Post-Trade: From Automation to Autonomy.
The Challenge
Settlement fails cost money. CSDR penalty data from ESMA shows European markets have reduced fail rates by 42-47% since the settlement discipline regime took effect in February 2022. The mechanism is simple: liquid shares incur 1 basis point per day in penalties, other assets 0.5 basis points. A $10 million failed trade in liquid shares costs $1,000 per day in penalties alone, before accounting for funding costs, operational overhead, and counterparty relationship damage.
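As a rough illustration of that arithmetic, the sketch below computes the daily and cumulative penalty from the two rates cited above. The rates match the article; the function names and trade values are illustrative, not a full CSDR penalty engine.

```python
# Minimal sketch of CSDR-style daily penalty arithmetic using the rates
# cited above (1 bp/day for liquid shares, 0.5 bp/day for other assets).
# Illustrative only, not a complete CSDR calculation.

PENALTY_RATES_BP = {
    "liquid_shares": 1.0,   # basis points per day
    "other_assets": 0.5,
}

def daily_penalty(notional_usd: float, asset_class: str) -> float:
    """Daily cash penalty for a failed trade, before funding and ops costs."""
    rate_bp = PENALTY_RATES_BP[asset_class]
    return notional_usd * rate_bp / 10_000

def total_penalty(notional_usd: float, asset_class: str, days_failed: int) -> float:
    """Cumulative penalty over the number of days the trade remains failed."""
    return daily_penalty(notional_usd, asset_class) * days_failed

if __name__ == "__main__":
    # The $10 million liquid-shares example from the text: $1,000 per day.
    print(daily_penalty(10_000_000, "liquid_shares"))    # 1000.0
    print(total_penalty(10_000_000, "liquid_shares", 3)) # 3000.0
```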
The T+1 transition amplified the operational pressure. Under T+2, exception teams had from 4:30 PM on trade date through 10:00 AM the following morning to resolve breaks. Under T+1, that window closes at 10:00 PM on trade date itself. Cross-border trades face even tighter constraints. When New York closes at 4:00 PM Eastern, it is midnight in London. European operations teams wake up to problems they have no time to fix.
The SIFMA/ICI/DTCC After Action Report noted that same-day affirmation rates jumped from 69% in December 2023 to approximately 95% post-transition. The industry achieved this through automation, not prediction. Straight-through processing got trades affirmed faster. But affirmation is not the same as settlement. Trades can be affirmed and still fail due to inventory shortfalls, funding gaps, or counterparty issues that emerge after affirmation.
How AI Addresses This
Settlement prediction uses machine learning to identify trades likely to fail before they reach settlement date. The models analyse historical patterns across multiple dimensions: counterparty behaviour, trade characteristics, market conditions, and inventory positions.
Feature engineering matters more than algorithm selection. The most predictive features typically include counterparty historical fail rates, trade size relative to normal volume, time of day the trade was executed, asset liquidity characteristics, and whether the counterparty has outstanding fails in the same security. Models trained on these features can flag high-risk trades hours before the exception window opens.
The output is not a binary pass/fail prediction but a probability score. Models train on years of trade and fail data to identify patterns that precede settlement failures, learning which combinations of trade characteristics, counterparty behaviour, and market conditions correlate with fails. Operations teams use these scores to prioritise which trades to investigate first. In a 6-hour window, investigating the 50 highest-risk trades is feasible. Investigating 500 is not.
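As a sketch of what this looks like in practice, the example below trains a probability-of-fail model on the features described above and returns a prioritised investigation queue. It assumes a scikit-learn stack and a labelled historical trade table; the column names, model choice, and 50-trade cut-off are illustrative, not any custodian's actual implementation.

```python
# Illustrative settlement-fail prediction sketch (assumed scikit-learn stack).
# Column names and model choice are placeholders for the features described above.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

FEATURES = [
    "cpty_fail_rate_90d",          # counterparty historical fail rate
    "size_vs_adv",                 # trade size relative to normal volume
    "execution_hour",              # time of day the trade was executed
    "asset_liquidity_score",       # liquidity characteristics of the security
    "cpty_open_fails_same_isin",   # outstanding fails in the same security
]

def train_fail_model(history: pd.DataFrame) -> GradientBoostingClassifier:
    """Fit a probability-of-fail model on labelled historical trades."""
    X, y = history[FEATURES], history["failed"]   # failed: 1 = did not settle on time
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")
    return model

def triage_queue(model, todays_trades: pd.DataFrame, top_n: int = 50) -> pd.DataFrame:
    """Score today's trades and return the highest-risk ones for investigation."""
    scored = todays_trades.copy()
    scored["fail_probability"] = model.predict_proba(scored[FEATURES])[:, 1]
    return scored.sort_values("fail_probability", ascending=False).head(top_n)
```

The top_n cut-off mirrors the point above: in a 6-hour window, a ranked queue of the 50 highest-risk trades is actionable in a way that an unranked exception report is not.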
Real-World Applications
Custodian disclosures on settlement prediction remain thin, but some data points exist.
In 2023, BNY Mellon disclosed a predictive model for US Treasury settlement, built in partnership with Google Cloud and trained on 15 months of historical Fed settlement data. Based on that disclosure, the model identifies 40% of potential fails with 90% accuracy. The 40% figure is notable: it means the model catches fewer than half of eventual fails, but the trades it does flag are highly likely to fail. This precision-over-recall trade-off makes sense operationally. False positives waste investigation time; false negatives are failures you did not predict but would have investigated anyway through normal exception processes.
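For readers less familiar with the distinction, the short sketch below shows how precision and recall are computed; the counts are invented to land near 90% precision and 40% recall and are not BNY's actual figures.

```python
# Precision/recall arithmetic behind the trade-off described above.
# Counts are illustrative, not BNY's disclosed data.

def precision_recall(true_positives: int, false_positives: int,
                     false_negatives: int) -> tuple[float, float]:
    """Precision: of the trades flagged, how many actually failed.
    Recall: of the trades that failed, how many were flagged."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Example: 90 eventual fails; the model flags 40 trades, 36 of which fail.
precision, recall = precision_recall(true_positives=36, false_positives=4,
                                      false_negatives=54)
print(f"precision={precision:.0%}, recall={recall:.0%}")  # precision=90%, recall=40%
```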
State Street disclosed in 2024 that its Alpha platform uses neural network models to reduce false positive exception alerts. The comparison cited was 4,000 genuine exceptions versus 31,000 false positives under the previous rules-based system. This is exception detection rather than prediction, but it addresses the same operational problem: helping teams focus attention on trades that actually need intervention.
JPMorgan, Northern Trust, and Citi have not published specific settlement prediction metrics. Their AI disclosures focus on reconciliation, client service, and fraud detection rather than settlement forecasting.
DTCC's Exception Manager calculates predicted CSDR penalties and prioritises exceptions by penalty size, but it does not predict settlement probability. The platform automates penalty calculation and counterparty outreach, not fail prediction. DTCC's October 2025 "Path Forward" initiative signals continued investment in post-T+1 infrastructure, though predictive AI remains a gap.
Broadridge announced at SIBOS 2025 that its OpsGPT platform now includes fail prediction using historical patterns to assess failure probability and prioritise accordingly. The platform also deploys "asset alignment AI" to ensure securities are in the correct settlement location before trades are booked, addressing a major cause of fails. Broadridge claims resolution cycles have been cut from days to hours.
BNY Mellon's Q4 2025 earnings disclosed a 75% quarter-over-quarter increase in AI solutions in production, though specific settlement prediction metrics were not broken out. State Street indicated AI-related savings would accelerate in H2 2026 and 2027. For how similar patterns play out in other custody functions, see AI Exception Management in Post-Trade.
The Agentic Evolution
The autonomy spectrum helps explain where settlement prediction AI stands and where it might go.
| Level | Name | Current Capability | Human Role |
|---|---|---|---|
| L1 | Assisted | Model scores trades; analyst reviews all flags | Investigates every alert |
| L2 | Augmented | Model prioritises queue; analyst investigates top-scored | Reviews model recommendations |
| L3 | Supervised | Agent investigates high-risk trades, sources inventory, initiates counterparty contact | Monitors outcomes; handles escalations |
| L4 | Autonomous | Agent predicts, investigates, resolves within thresholds | Intervenes only on agent failure |
Based on available disclosures, most firms operate at Level 1. The BNY Treasury model appears to be Level 1-2: it scores trades and prioritises the queue, but humans investigate and resolve. The critical shift happens at Level 3, where the human moves from "in-the-loop" (approving each action) to "on-the-loop" (monitoring outcomes).
The jump to Level 3 requires more than better models. It requires systems integration that lets the agent act on predictions. An agent that predicts a securities shortfall needs access to inventory systems to source stock. An agent that predicts a funding gap needs access to liquidity systems to arrange coverage. Most custody technology stacks were not built for this kind of cross-system agency.
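To make the gap concrete, the sketch below shows what a Level 3 decision loop might look like, with guardrails on notional size and hypothetical interfaces to inventory, liquidity, and escalation systems. Every name and threshold here is a placeholder, not a vendor API; the point is the cross-system access the agent needs before it can act on a prediction.

```python
# Illustrative Level 3 "supervised" resolution loop. Interfaces, causes,
# and thresholds are hypothetical placeholders, not a real custody API.
from dataclasses import dataclass

AUTO_RESOLVE_LIMIT_USD = 5_000_000   # guardrail: above this, escalate to a human
FAIL_PROBABILITY_TRIGGER = 0.70      # only act on high-risk predictions

@dataclass
class PredictedException:
    trade_id: str
    notional_usd: float
    fail_probability: float
    cause: str   # e.g. "securities_shortfall" or "funding_gap"

def handle_prediction(exc: PredictedException, inventory, liquidity, escalation) -> str:
    """Decide whether the agent acts directly or hands off to a human."""
    if exc.fail_probability < FAIL_PROBABILITY_TRIGGER:
        return "monitor"                                   # low risk: leave alone
    if exc.notional_usd > AUTO_RESOLVE_LIMIT_USD:
        escalation.raise_case(exc.trade_id, reason="above auto-resolve limit")
        return "escalated"                                 # guardrail breach
    if exc.cause == "securities_shortfall":
        inventory.request_borrow(exc.trade_id)             # source stock
        return "borrow_requested"
    if exc.cause == "funding_gap":
        liquidity.arrange_cover(exc.trade_id)              # arrange funding
        return "funding_arranged"
    escalation.raise_case(exc.trade_id, reason="unrecognised cause")
    return "escalated"
```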
The technology described above exists. So why is it not working at scale?
Implementation Considerations
Adoption stalls for reasons that have little to do with model accuracy.
Data fragmentation provides the technical excuse. Settlement prediction requires joining data across trade capture, position keeping, counterparty master, and market data systems. Many firms lack a unified data layer. Building one is a multi-year programme that competes for resources with regulatory mandates and client-facing initiatives. But data integration is a solvable engineering problem. The deeper blockers are human.
The 5C Adoption Friction Model identifies where resistance emerges:
Clarity (understanding what the model predicts): Teams may not understand probability scores or why certain trades are flagged. A 73% fail probability means what, exactly? Without clear mental models, teams default to investigating everything or ignoring the scores entirely.
Control (accountability for model-driven decisions): If the model says a trade is low-risk and it fails, who is accountable? Settlement teams are reluctant to cede judgment to systems they did not build and cannot fully explain.
Capability (skills to work with predictions): Using prediction models effectively requires understanding precision/recall trade-offs, score thresholds, and model limitations. Most settlement teams were not hired for these skills.
Credibility (trust in model outputs): Early false positives or missed fails erode trust quickly. Teams revert to manual processes when models disappoint, even if the models outperform human judgment on average.
Consequences (acceptable failure modes): Settlement fails have regulatory, financial, and reputational consequences. The asymmetry between upside (time saved) and downside (fail penalty plus relationship damage) makes teams risk-averse about trusting predictions.
Measuring Success
Settlement prediction ROI is measurable but rarely published.
Fail rate reduction is the headline metric. If prediction enables earlier intervention, fail rates should decline. The challenge is attribution: did fails drop because of prediction, or because of broader STP improvements?
Exception volume per analyst measures productivity. If prediction correctly prioritises the queue, analysts should resolve more exceptions per hour because they are working on genuine problems rather than false positives.
Funding cost reduction captures the liquidity benefit. Predicting fails earlier allows treasury to arrange funding or securities borrowing at better rates than last-minute scrambles.
Time to resolution measures operational efficiency. With a 6-hour window, resolving exceptions in 2 hours versus 5 hours creates material buffer for escalations.
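For teams that want to track these internally, the sketch below shows how the four metrics might be computed from a per-exception log; the column names and log structure are illustrative assumptions.

```python
# Illustrative metrics computation from a per-exception log.
# Assumed columns: failed (bool), raised_at / resolved_at (timestamps),
# funding_cost_usd (float). Not a standard industry schema.
import pandas as pd

def settlement_metrics(exceptions: pd.DataFrame, analysts: int) -> dict:
    """Headline metrics for a settlement prediction programme."""
    fail_rate = exceptions["failed"].mean()
    per_analyst = len(exceptions) / analysts
    avg_hours_to_resolution = (
        exceptions["resolved_at"] - exceptions["raised_at"]
    ).dt.total_seconds().mean() / 3600
    funding_cost = exceptions["funding_cost_usd"].sum()
    return {
        "fail_rate": fail_rate,                              # share ending in a fail
        "exceptions_per_analyst": per_analyst,               # productivity
        "avg_hours_to_resolution": avg_hours_to_resolution,  # buffer in the 6-hour window
        "total_funding_cost_usd": funding_cost,              # liquidity drag from late fixes
    }
```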
If the technology is ready and the metrics are clear, the remaining bottleneck is cultural.
The Leadership Gap
Your AI initiative isn't stuck because of technology. It's stuck because of the humans around it. The AI Change Leadership Intensive uses the 5C Adoption Friction Model to identify which friction point is actually in the way, and the specific moves that would shift your first resistant group to active adoption.
The compressed T+1 window has made settlement prediction operationally valuable in ways it was not under T+2. With the EU targeting T+1 by October 2027, European firms face the same transition US markets completed in 2024. The question is no longer whether prediction models work. The question is whether firms can integrate them into workflows fast enough to capture the benefit. Is your team still at Level 1, reviewing every alert manually? What would it take to reach Level 3, where agents investigate and resolve predicted fails within defined thresholds?
I write about AI transformation and the playbooks to overcome AI adoption friction at brennanmcdonald.com. Subscribe to my newsletter for weekly insights.
Key Terms
Agentic AI: AI systems that can reason, plan, use tools, and take actions with minimal human oversight. Unlike traditional AI that makes predictions or GenAI that generates content, agentic AI can complete multi-step tasks autonomously within defined guardrails.
AI Agent: A software system that uses AI to perceive its environment, make decisions, and take actions to achieve goals. In settlement, agents might predict fails, source inventory, contact counterparties, and resolve exceptions without human intervention for routine cases.
Human-in-the-Loop: An AI design pattern where humans review and approve AI decisions before execution. Common in early settlement prediction implementations where analysts review model flags before taking action.
Guardrails: Constraints and boundaries that limit what an AI agent can do. In settlement, guardrails might include transaction value thresholds for auto-resolution, approved counterparty lists, or mandatory escalation triggers for cross-border trades.
Tool Use: The ability of an AI agent to interact with external systems, APIs, and data sources. Enables settlement agents to query inventory systems, initiate securities lending, contact counterparties, or update settlement instructions as part of resolving predicted fails.
Autonomy Level: The degree of independence an AI system has to make decisions and take actions. Ranges from fully human-controlled (Level 0) to fully autonomous (Level 4). Most settlement prediction is currently Level 1-2.
Settlement Fail: A trade that does not settle on the intended settlement date due to securities shortfall, funding gap, instruction mismatch, or counterparty issue. Fails incur penalties under CSDR in Europe and funding charges in the US.
T+1 Settlement: The requirement for trades to settle one business day after trade date, effective May 28, 2024 in the US. Compressed the exception handling window from approximately 29.5 hours to 6 hours.
CSDR: Central Securities Depositories Regulation, the European framework that introduced mandatory cash penalties for settlement fails in February 2022. Penalty rates range from 0.5 to 1.0 basis points per day depending on asset type.
Same-Day Affirmation (SDA): The process of confirming trade details between counterparties on trade date rather than waiting until settlement date. Critical for T+1 readiness; industry rates improved from 69% to 95% during the T+1 transition.
Sources
ESMA Final Report on CSDR RTS Settlement Discipline (October 2025) - EU T+1 target date of October 2027, updated settlement discipline rules, fail rate improvements since 2022.
DTCC "The Path Forward" Initiative (October 2025) - Post-T+1 clearing and settlement transformation roadmap.
Broadridge SIBOS 2025 Announcement (October 2025) - OpsGPT fail prediction capabilities, asset alignment AI, autonomous agent deployment.
BNY Mellon Q4 2025 Earnings (January 2026) - 75% QoQ increase in AI solutions in production, Eliza platform expansion.
State Street Q4 2025 Earnings (January 2026) - AI savings acceleration timeline (H2 2026-2027), record revenue of $14B.
AFME High-Level Roadmap to T+1 (June 2025) - EU industry committee established, technical and operational change requirements.
SIFMA/ICI/DTCC T+1 After Action Report (September 2024) - Post-T+1 fail rates (2-3% stable), affirmation improvements (69% to 95%), clearing fund reduction ($3B).