AI in the reconciliation space
Why your reconciliation team is still the bottleneck

When a vendor reports a 97% auto-match rate, the natural assumption is that the problem is 97% solved. It is not. That figure measures the easy part: deterministic matching on standard fields where records align cleanly. What remains is the 3% where complexity lives, where T+1 cut-offs get missed, where counterparty disputes escalate, and where operations teams work late.
Custody operations are defined by the exceptions, not the volume that flows through smoothly. The firms that confuse high auto-match rates with transformation are the ones whose reconciliation teams remain the bottleneck. For context on how reconciliation fits within the broader AI transformation of custody, see The Complete Guide to AI in Global Custody.
What 97% Actually Means
Most vendor metrics measure what happens when records are clean, identifiers are standard, and data arrives on time. An exact match on trade identifier, instrument code, quantity, price, and settlement date is not an AI achievement. It is table stakes.
The critical questions are rarely answered in marketing materials. What is the denominator: all records processed, or only those that passed initial validation? Which asset classes are included: equities with standard identifiers, or OTC derivatives where matching is genuinely difficult? What counts as a match: exact, fuzzy within tolerance thresholds, or AI-suggested requiring human confirmation?
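To make those tiers concrete, here is a minimal sketch of how a matching engine might classify a record pair. The field names and the one-basis-point price tolerance are illustrative assumptions, not any vendor's actual rules.

```python
from dataclasses import dataclass

@dataclass
class TradeRecord:
    trade_id: str
    instrument: str
    quantity: float
    price: float
    settle_date: str

def match_tier(internal: TradeRecord, external: TradeRecord,
               price_tol: float = 0.0001) -> str:
    """Classify a record pair as exact, tolerance, or exception.

    Illustrative only: real engines apply per-asset-class rules.
    """
    keys_align = (internal.trade_id == external.trade_id
                  and internal.instrument == external.instrument
                  and internal.quantity == external.quantity
                  and internal.settle_date == external.settle_date)
    if not keys_align:
        return "exception"          # route to the human queue
    if internal.price == external.price:
        return "exact"              # the deterministic 97%
    if abs(internal.price - external.price) <= price_tol * internal.price:
        return "tolerance"          # fuzzy match within threshold
    return "exception"
```

The point of the three-way return value is that "auto-match rate" can legitimately count the first tier only, the first two, or all AI-suggested pairs, which is exactly why the denominator question matters.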
One major custodian disclosed that its rules-based exception processing flagged over 31,000 exceptions over six months, of which only 250 were true positives. The AI-enabled replacement identified approximately 4,000 exceptions while catching 100% of genuine issues. That disclosure is notable because it reports both sides: false positive reduction and true positive capture. Most vendor claims report only the first.
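The arithmetic behind that disclosure is worth making explicit. A few lines, using the figures as reported, show why both sides of the claim matter:

```python
# Precision of the two exception pipelines, as reported:
# both caught the same 250 genuine issues.
rules_flagged, ai_flagged, true_positives = 31_000, 4_000, 250

rules_precision = true_positives / rules_flagged   # roughly 0.8%
ai_precision = true_positives / ai_flagged         # roughly 6.3%
noise_reduction = 1 - ai_flagged / rules_flagged   # roughly 87%
```

Under the legacy rules, fewer than one in a hundred flagged exceptions was real; the replacement cut the queue by roughly 87% without dropping a single genuine issue. A vendor quoting only the false-positive reduction leaves the second half of that sentence unverifiable.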
Where the 3% Lives

OTC derivatives, loans, and commodities lack standard security identifiers. When there is no ISIN or CUSIP, the system must interpret counterparty-specific references that vary across jurisdictions and trading relationships. Match rates for these asset classes drop steeply from the headline figures vendors report for equities and government bonds.
Counterparty statements arrive in formats that resist automation. PDF nostro statements, faxed confirmations, and emails with embedded tables all require extraction and interpretation before matching can begin. The investment leading vendors are making in unstructured document handling is telling: this is where the operational burden sits.
Cross-currency transactions create breaks even when the underlying trade is correct. Roughly one quarter of firms cite cross-currency matching as their greatest reconciliation challenge. Exchange rate differences, timing variances, and inconsistent rate sources generate exceptions requiring judgment, not computation.
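One common approach, sketched here with invented numbers, is to compare the FX rates implied by each side of the trade and flag a break only when they diverge beyond a basis-point tolerance. The function and its five-basis-point default are illustrative assumptions; real desks also reconcile value dates and rate sources, which this ignores.

```python
def fx_break(amount_base: float, our_quote: float,
             counterparty_quote: float, rate_tol_bps: float = 5.0) -> bool:
    """Flag a cross-currency break when the FX rates implied by the
    two quote-currency amounts diverge beyond a tolerance in bps."""
    our_rate = our_quote / amount_base
    their_rate = counterparty_quote / amount_base
    divergence_bps = abs(our_rate - their_rate) / our_rate * 10_000
    return divergence_bps > rate_tol_bps
```

A half-basis-point difference from a different rate source passes quietly; a fourteen-basis-point gap lands in the queue. Setting that threshold per currency pair is a judgment call, which is the point: the tolerance encodes policy, not computation.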
Late-arriving data creates noise in the exception queue. When the counterparty file does not arrive until after the cut-off, the break is real but not meaningful. Operations teams spend time re-checking once data arrives rather than investigating genuine discrepancies.
This is where the 15-year veteran of your reconciliation team earns their salary. They know which counterparties consistently send late files. They recognise patterns in how specific custodians format their statements. They understand which breaks resolve themselves and which require escalation. That institutional knowledge does not transfer easily to an AI system.
The Real Bottleneck
Recent research found that most firms struggle to capture value from AI "not because the technology fails, but because their people, processes, and politics do." Fear of replacement, rigid workflows, and entrenched power structures derail initiatives even in companies with advanced tools.
The reconciliation team that has been doing this work for 15 years did not sign up to become "exception managers" or "digital system trainers." When technology change is positioned as threat rather than augmentation, adoption stalls regardless of system performance. For a deeper examination of why AI projects fail despite working technology, see Agentic AI in Post-Trade: Why Most Projects Fail and What to Do About It.
What Good Actually Looks Like
At Sibos 2025, one vendor disclosed that its AI reconciliation product reduced manual effort from 40-45 hours to approximately three hours for a specific deployment. That is more than a tenfold productivity gain on the work that actually matters: the exceptions, not the volume.
The measure of success is not the auto-match rate. It is what happens to the 3%.
An AI-prioritised exception queue presents the highest-impact breaks first. When a break affects settlement timing, client reporting, or regulatory compliance, it surfaces immediately. Contextual enrichment pulls relevant data before the analyst sees the break: historical patterns with this counterparty, similar breaks and how they were resolved, related transactions that might explain the discrepancy. Investigation starts with context, not a blank screen.
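A prioritised queue of this kind can be sketched as a simple scoring function. The weights below are invented for illustration; a production system would learn them from resolution outcomes rather than hard-code them.

```python
def priority_score(exception: dict) -> float:
    """Rank an exception by business impact, not arrival order.

    Weights are illustrative assumptions, not a vendor's model.
    """
    score = 0.0
    if exception.get("affects_settlement"):
        score += 50                 # T+1 cut-off risk dominates
    if exception.get("regulatory_reporting"):
        score += 30
    if exception.get("client_facing"):
        score += 15
    # Cap the notional contribution so size never outweighs impact.
    score += min(exception.get("notional_usd", 0) / 1_000_000, 20)
    return score

def prioritised_queue(exceptions: list[dict]) -> list[dict]:
    return sorted(exceptions, key=priority_score, reverse=True)
```

Note the design choice: a small break that threatens settlement outranks a large break that does not, which is the opposite of the notional-sorted queues most legacy tools produce.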
Resolution pattern learning allows the system to incorporate analyst decisions into future matching. When a senior analyst resolves a break in a particular way, that pattern becomes available to the model. The goal is not replacing the 15-year veteran. It is giving them 90% of their time back for the cases that genuinely need their expertise.
Questions for Your Operations Team
Does your team share a common understanding of what you are actually building? When leadership expects autonomous agents but operations expects assisted tools, the disconnect registers as failure regardless of technical performance.
Do you control this workflow, or is it outsourced? You cannot effectively transform operations you do not directly manage.
Has your team been trained to supervise AI, not just use it? The role shift from "first line of defence" to "trainer of digital systems" requires skills most operations professionals have not developed.
Do your analysts trust the AI's exception prioritisation? Trust cannot be mandated. It must be earned.
Can your team see personal benefit in the transformation? If becoming an "exception manager" feels like demotion rather than elevation, adoption will stall.
Reconciliation proves the technology works. AI can match records, reduce false positives, and prioritise exceptions. The firms that treat high auto-match rates as mission accomplished are the ones whose teams remain the bottleneck on T+1 cut-off nights. The measure of success is not the 97%. It is what happens in the 3%.
If your reconciliation transformation is stalled
If you want to explore what is actually in the way, you can book a conversation here.
I write about AI transformation at brennanmcdonald.com. Subscribe to the newsletter for weekly analysis.
Key Terms
Reconciliation: Comparing internal records against external sources to identify discrepancies requiring investigation.
Break: A discrepancy identified during reconciliation. May represent a genuine error or a timing difference.
Exception: A record flagged as requiring human review. Not all exceptions represent genuine breaks.
Auto-Match Rate: Percentage of records matched without human intervention. Varies significantly by asset class and methodology.
False Positive: An exception that, upon investigation, does not represent a genuine break.
Observational Learning: Machine learning approach where the system learns by watching how analysts resolve exceptions.
Sources
State Street Q3 2025 10-Q Filing (17 October 2025)
Sibos 2025 Conference Coverage (September-October 2025)
Opimas, "Reconciliation for Securities: Vendor Solutions and the Impact of AI" (2025)
Harvard Business Review, "Overcoming the Organizational Barriers to AI Adoption" (November 2025)
Kani Payments, "When Reconciliation Breaks" (August 2025)
Article prepared February 2026. Company-reported figures have not been independently verified.

