AI in Corporate Actions Processing

What you need to know

Share this email with a colleague now so they’re on the same page. We only grow through word of mouth and your referrals - Brennan


Corporate actions cost the industry $58 billion annually. That figure, from industry analysis presented at SIBOS 2025, increases by roughly 10% each year. Yet automation rates remain below 40%, and industry surveys suggest 75% of market participants still manually revalidate data before processing. The technology to extract and classify announcements exists. It has existed for years. The revalidation persists anyway.

This article looks at why corporate actions remains one of the most resistant areas to AI adoption in custody operations, and what it would take to change that. For broader context on AI across custody functions, see The Complete Guide to AI in Global Custody.

The Challenge

Corporate actions processing involves receiving announcements from issuers, interpreting their terms, calculating entitlements, collecting client elections for voluntary events, and ensuring securities and cash move correctly. The complexity begins at the source.

According to SIFMA, there is no US regulatory mandate for standardised corporate action announcements. Issuers, exchanges, and regulators each use different formats. Most announcements arrive as unstructured PDFs or HTML pages. A 2021 SIX survey found that 46% of global corporate actions data is still published and received manually.

The problem compounds through the intermediary chain. Each custodian and sub-custodian sets earlier internal deadlines to allow processing time. Cross-border holdings multiply the layers. By the time an announcement reaches the end investor, the original deadline may have compressed from weeks to days.

Voluntary corporate actions create additional friction. Elections require client instruction, often across multiple accounts and jurisdictions. Tight deadlines leave little margin for error. A 2015 SWIFT survey found that handling voluntary corporate actions received automation scores of only 1.83 out of 4, the lowest of any custody function measured. More recent industry data on this specific metric is sparse.

The consequences of failure are severe. Oxera research documented that processing failures during the France Telecom rights issue in 2003 could have resulted in losses in the tens of millions of euros for a single custodian. These are not hypothetical risks. They are historical near-misses that shaped the defensive processing culture that persists today.

How AI Addresses This

AI applications in corporate actions focus on three areas: announcement extraction, classification, and exception prediction.

Announcement extraction uses natural language processing (NLP) and optical character recognition (OCR) to turn unstructured PDF and HTML announcements into structured data. Machine learning models identify entity names, event types, key dates, and terms. According to company disclosures, BNP Paribas Securities Services processes approximately 30,000 messages monthly through its ML translation tool, handling announcements in dozens of languages.
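
To make the extraction step concrete, here is a minimal sketch using regular expressions rather than a trained model. The field names and patterns are my own illustrative assumptions, not any vendor's implementation; production systems rely on trained NLP models precisely because issuers do not share a format.

```python
import re

# Illustrative patterns only; real deployments use NLP models tuned per
# market, since no two issuers format announcements the same way.
FIELD_PATTERNS = {
    "record_date": r"[Rr]ecord [Dd]ate[:\s]+(\d{1,2} \w+ \d{4})",
    "election_deadline": r"[Ee]lection [Dd]eadline[:\s]+(\d{1,2} \w+ \d{4})",
    "exchange_ratio": r"[Ee]xchange [Rr]atio[:\s]+(\d+(?:\.\d+)?\s*:\s*\d+(?:\.\d+)?)",
}

def extract_fields(announcement_text: str) -> dict:
    """Pull structured fields out of unstructured announcement text."""
    extracted = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = re.search(pattern, announcement_text)
        extracted[field] = match.group(1) if match else None
    return extracted

sample = (
    "XYZ Corp announces a rights issue. Record date: 14 March 2026. "
    "Election deadline: 28 March 2026. Exchange ratio: 1:4."
)
print(extract_fields(sample))
# {'record_date': '14 March 2026', 'election_deadline': '28 March 2026',
#  'exchange_ratio': '1:4'}
```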

Classification uses supervised learning to sort events by type (dividend, merger, rights issue, stock split) and flag anomalies. These models train on historical data to spot patterns and identify announcements that differ from expected structures.
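
A toy version of that supervised step, sketched with scikit-learn; the training texts and labels are invented for illustration, and a real model would train on years of labelled announcement history rather than four examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set; production models learn from large labelled histories.
texts = [
    "cash dividend of USD 0.50 per share, record date 1 May",
    "merger with ABC Corp, shareholders receive 0.75 shares per share held",
    "rights issue, subscribe to 1 new share per 4 held at EUR 10",
    "two-for-one stock split effective 15 June",
]
labels = ["dividend", "merger", "rights_issue", "stock_split"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["special cash dividend of EUR 1.20 per share"])[0])
# Likely 'dividend'; model.predict_proba can flag low-confidence events
# that differ from expected structures and route them for review.
```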

Exception prediction uses pattern recognition to identify announcements likely to cause processing issues, letting teams focus review effort on high-risk events rather than processing everything in order.
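
A crude sketch of that triage logic appears below. The features and hand-set weights are assumptions for illustration; a production system would learn weights from historical exception data rather than hard-coding them.

```python
from dataclasses import dataclass

# Hypothetical features; real models learn these weights from past exceptions.
@dataclass
class EventFeatures:
    is_voluntary: bool
    days_to_deadline: int
    cross_border: bool
    amended_announcement: bool

def exception_risk(e: EventFeatures) -> float:
    """Crude risk score in [0, 1]; higher means review first."""
    score = 0.0
    score += 0.35 if e.is_voluntary else 0.0
    score += 0.30 if e.days_to_deadline < 5 else 0.0
    score += 0.15 if e.cross_border else 0.0
    score += 0.20 if e.amended_announcement else 0.0
    return score

queue = [
    ("tender offer, 3 days to deadline", EventFeatures(True, 3, True, False)),
    ("routine domestic dividend", EventFeatures(False, 20, False, False)),
]
# Review the highest-risk events first instead of processing in arrival order.
for name, feats in sorted(queue, key=lambda x: exception_risk(x[1]), reverse=True):
    print(f"{exception_risk(feats):.2f}  {name}")
```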

What is Natural Language Processing?

NLP refers to AI techniques that let computers interpret and extract meaning from human language. In corporate actions, NLP parses announcement text to identify relevant terms (record date, election deadline, exchange ratio) even when formatting varies across issuers and jurisdictions.

The Implementation Gap

The technology described above exists. So why is it not working at scale?

The gap between available technology and operational deployment is wider in corporate actions than in other custody functions. What follows is not a catalogue of vendor failures; the thin results below are a symptom of deeper structural problems.

Vendor solutions exist but outcomes are unverified. SmartStream offers AI-powered lifecycle automation. IHS Markit provides an AI chatbot for data queries. SIX operates a bot for corporate actions data access. None of these vendors have published independently verified metrics on accuracy, STP rates, or cost reduction in production settings.

Custodian disclosures are thin. No major custodian has publicly disclosed specific AI outcomes for corporate actions processing. BNY's Eliza platform covers corporate actions in general terms but without operational metrics. State Street's Alpha platform references corporate actions in training data descriptions but has not disclosed deployment outcomes. Custodians have been more forthcoming about AI in reconciliation and client service; few have matched that transparency in corporate actions.

The Chainlink initiative has potential but is not yet in production. In September 2025, Chainlink announced a consortium with Swift, Euroclear, and 24 financial institutions to address corporate actions data problems using blockchain-based attestation. Phase 2 pilot results claimed high accuracy for attestor-validated records. As of February 2026, no verified production deployment has been announced. The initiative addresses data integrity after capture, creating what some call a "golden record" of validated corporate actions data. But even a golden record requires internal systems ready to consume it without manual intervention. The bottleneck shifts from data quality to workflow readiness.

ISO 20022 will not solve this alone. Adoption of ISO 20022 for corporate actions messaging has been slow, with DTCC indicating it will not mandate format changes. Even with full adoption, ISO 20022 standardises message format, not data quality at source. Announcements will still arrive as unstructured PDFs requiring AI interpretation.

The Agentic Evolution

The autonomy spectrum helps explain where corporate actions AI stands and where it might go.

Level | Name        | Role of AI                                  | Human Involvement
------|-------------|---------------------------------------------|-----------------------------------------------
L1    | Assisted    | Extracts and classifies data                | Reviews every output; validates all data
L2    | Supervised  | Processes routine mandatory events          | Reviews exceptions and voluntary actions
L3    | Conditional | Drafts notifications; identifies positions  | Escalates only genuinely complex events
L4    | Full        | End-to-end processing and elections         | None (currently unrealistic due to liability)

Based on available disclosures, most corporate actions deployments appear to operate at Level 1. Industry commentary suggests some firms are piloting Level 2 for dividend processing. No major custodian has publicly disclosed a Level 3 deployment for corporate actions. Level 4 is not a realistic near-term target given the liability implications of missed elections and the complexity of multi-jurisdictional tax treatment.
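
To illustrate what a Level 2 deployment looks like in practice, here is a minimal routing sketch: routine mandatory events auto-process within guardrails, everything else escalates to a human. The approved event types, value threshold, and confidence cutoff are assumptions; each firm would set its own.

```python
AUTO_APPROVED_TYPES = {"dividend", "stock_split"}  # assumed firm-defined guardrail
VALUE_THRESHOLD = 250_000                          # assumed per-event value cap

def route_event(event_type: str, value: float, is_voluntary: bool,
                extraction_confidence: float) -> str:
    """Level 2 routing: automate the routine, escalate the rest."""
    if is_voluntary:
        return "escalate: voluntary action, client election required"
    if event_type not in AUTO_APPROVED_TYPES:
        return "escalate: event type outside approved guardrails"
    if value > VALUE_THRESHOLD:
        return "escalate: value above automatic-processing threshold"
    if extraction_confidence < 0.95:
        return "escalate: low extraction confidence, manual validation"
    return "auto-process"

print(route_event("dividend", 80_000, False, 0.98))      # auto-process
print(route_event("rights_issue", 80_000, True, 0.99))   # escalate: voluntary
```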

The challenge is that Level 3 requires more than better technology. It requires solving the trust deficit. When the majority of firms manually revalidate data they receive, the problem is not extraction accuracy. The problem is that nobody trusts the data enough to act on it automatically.

Why Adoption Stalls

The barriers to AI adoption in corporate actions are primarily organisational, not technical.

Data quality at source remains broken. AI can extract data from PDFs, but it cannot fix errors in the original announcement. Until issuers adopt standardised formats (which no regulator mandates), extraction will always involve some guesswork.

Integration complexity is high. Corporate actions processing touches position systems, client instruction platforms, tax engines, and settlement infrastructure. AI that extracts data but cannot connect to downstream systems creates a new manual step rather than removing one.

Defensive processing culture is rational. Teams that have experienced or heard about multi-million dollar losses from missed elections are not being irrational when they insist on manual validation. The consequence asymmetry (small time savings vs catastrophic loss) makes caution the individually rational choice even when it is collectively inefficient.

Expertise hoarding is real, and it is often a form of misguided protection. Research indicates that approximately one-third of employees stockpile expertise when they perceive AI as a threat to their role. But in corporate actions, this behaviour often stems from the same instinct that drives defensive processing: specialists believe their vigilance protects the firm from catastrophic loss.

They are not being selfish. They are acting as the firm's informal risk-management layer because they believe the formal systems are insufficient. They are holding onto the controls because they do not trust anyone else to catch what they would catch.

Knowledge management research suggests 90% of organisational knowledge is unwritten and hard to capture in systems. Specialists who have spent decades learning market-specific rules and edge cases have reason to resist transferring that knowledge to systems that might replace them.

The 5C Adoption Friction Framework

Adoption friction typically falls into five categories. Applying them to corporate actions:

  • Clarity (do teams understand what AI will do?): Teams may expect AI to handle everything, then lose trust when it escalates complex events. It matters to define upfront what "automation" means for voluntary actions that still require client instruction.

  • Control (do they feel agency over the change?): Specialists who built expertise over decades resist handing decision-making to systems they did not build and do not fully understand.

  • Capability (do they have the skills to use it?): Cross-training is difficult when knowledge is unwritten and market-specific. New staff cannot easily validate AI outputs.

  • Credibility (do they trust it?): One missed election destroys trust for years. AI systems have no margin for error in building credibility.

  • Consequences (do they see personal benefit?): If AI handles routine announcements, what remains for the specialists? The answer ("exception handling and client advisory") may not feel like enough.

For a deeper treatment of this framework, see Agentic AI in Post-Trade.

Measuring Success

Metrics should reflect both efficiency and risk (a minimal computation sketch follows the list):

  • Extraction accuracy: Percentage of fields correctly captured without manual correction

  • STP rate by event type: Distinguish mandatory from voluntary; dividends from mergers

  • Time to client notification: For voluntary events, elapsed time from announcement receipt to client communication

  • Election capture rate: Percentage of elections received before deadline, by channel

  • Exception rate: Percentage of events requiring manual intervention, with root cause analysis

  • Error rate and severity: Incidents per 1,000 events, weighted by financial impact
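
As promised above, a sketch of computing two of these from an event log. The field names are illustrative, not a standard schema; the point is that STP rate should always be broken down by event type and exceptions always tagged with a root cause.

```python
from collections import Counter

# Minimal event log; field names are illustrative assumptions.
events = [
    {"type": "dividend",     "stp": True,  "exception": None},
    {"type": "dividend",     "stp": True,  "exception": None},
    {"type": "merger",       "stp": False, "exception": "ambiguous terms"},
    {"type": "rights_issue", "stp": False, "exception": "late announcement"},
]

def stp_rate_by_type(log):
    """STP rate per event type: automated completions / total events."""
    totals, automated = Counter(), Counter()
    for e in log:
        totals[e["type"]] += 1
        automated[e["type"]] += e["stp"]
    return {t: automated[t] / totals[t] for t in totals}

def exception_root_causes(log):
    """Count manual-intervention events by root cause."""
    return Counter(e["exception"] for e in log if e["exception"])

print(stp_rate_by_type(events))
# {'dividend': 1.0, 'merger': 0.0, 'rights_issue': 0.0}
print(exception_root_causes(events))
# Counter({'ambiguous terms': 1, 'late announcement': 1})
```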

The industry consensus, reflected in A-Team Insight research, is that complete end-to-end automation is not realistic. The practical target is 80:20: 80% of events processed with supervised automation, 20% handled manually for complexity, novelty, or risk.

Corporate actions may be the last custody function to achieve meaningful AI autonomy. The technology exists. The data remains broken at source. The intermediary chain amplifies friction. And the people who process these events have spent decades developing expertise that AI threatens to commoditise.

Solving this requires more than better extraction models. It requires addressing why teams still do not trust the data enough to stop checking it manually. That trust problem has a structure. The path from Level 1 to Level 3 is shorter than most custody leaders expect, once you identify which friction point is actually in the way.

I write about AI transformation and the playbooks to overcome AI adoption friction at brennanmcdonald.com. Subscribe to my newsletter for weekly insights. Subscribe here

Key Terms

Agentic AI: AI systems that can reason, plan, use tools, and take actions with minimal human oversight. Unlike traditional AI that makes predictions or generative AI that creates content, agentic AI can complete multi-step tasks on its own within defined guardrails.

AI Agent: A software system that uses AI to perceive its environment, make decisions, and take actions to achieve goals. In corporate actions, an agent might interpret announcements, identify affected positions, and process elections without human intervention for routine cases.

Human-in-the-Loop: An AI design pattern where humans review and approve AI decisions before execution. Common in current corporate actions AI, where extraction is automated but validation remains manual.

Guardrails: Constraints and boundaries that limit what an AI agent can do. In corporate actions, guardrails might include value thresholds for automatic processing, approved event types, or mandatory escalation triggers for voluntary actions.

Tool Use: The ability of an AI agent to interact with external systems, APIs, and data sources. Lets agents query position systems, access announcement feeds, send client notifications, or update entitlement records as part of completing tasks.

Autonomy Level: The degree of independence an AI system has to make decisions and take actions. Ranges from fully human-controlled (Level 0) to fully autonomous (Level 4). Most corporate actions AI currently operates at Level 1, with some firms piloting Level 2 for routine mandatory events.

Corporate Action: Any event initiated by a public company that affects its securities. Includes dividends, stock splits, mergers, rights issues, and reorganisations. Requires processing by custodians to ensure shareholders receive correct entitlements.

Voluntary Action: A corporate action requiring shareholder election, such as a tender offer or rights issue. Contrasts with mandatory actions (dividends, stock splits) that apply automatically. Voluntary actions involve tighter deadlines and client instruction collection.

Entitlement: The benefit (cash, shares, or rights) a shareholder receives from a corporate action. Calculating entitlements correctly requires accurate position data, event terms, and applicable tax treatment.

Straight-Through Processing (STP): Automated processing of transactions from start to finish without manual intervention. STP rates for corporate actions remain below 40% industry-wide despite decades of automation investment.

Sources

Industry Research

  • Chainlink/industry consortium corporate actions cost analysis, SIBOS 2025

  • SIFMA, "Standardize Corporate Action Announcements" (2024)

  • SIX, Corporate Actions Data Survey (2021)

  • SWIFT fund manager automation survey (2015)

  • Oxera, "Corporate Action Processing: What Are the Risks?" (2018)

  • A-Team Insight, "Automation Adds Efficiency to Corporate Actions Processing, But is Not a Sole Solution" (2024)

  • ValueExchange, "Corporate Actions: An Investor's Perspective" (2024)

Standards and Implementation

  • FinOps, "ISO 20022 for Corporate Actions: A for Effort, C for Completion in US" (2025)

  • Clearstream Market Standards for Corporate Actions Data

Custodian and Vendor Disclosures

  • BNP Paribas Securities Services ML translation tool (company-disclosed, not independently verified)

  • Chainlink Phase 2 pilot results (company-disclosed, not independently verified)

Change Management Research

  • McKinsey State of AI 2025

  • Prosci change management failure rates

  • Canadian HR Reporter, knowledge hoarding research

Article prepared February 2026. Company-reported figures have not been independently verified.