The numbers tell two stories at once.
According to NVIDIA's 2026 State of AI in Financial Services survey of 800+ industry professionals, active AI usage in financial services jumped to 65 percent, up from 45 percent a year earlier. That is a 20-percentage-point swing. The pilot phase, for most major players, is over.
But according to KPMG research, 99 percent of companies plan to put autonomous agents into production. Only 11 percent have done so. The gap between intent and deployment is enormous.
The question is not whether AI agents will work in financial services. It is whether the institutions deploying them can govern what they do.
Five Deployments in One Month
What makes March 2026 different from the previous 12 months of agentic AI announcements is the specificity. These are not pilots. They are not proofs of concept. They are production systems handling real financial workflows.
Daylit launched AI agents for accounts receivable collections and real-time cash intelligence. The platform enables finance teams to accelerate collections and transform receivables into a strategic driver of working capital. AR collection is one of the most labour-intensive functions in corporate finance: tracking overdue invoices, sending reminders, escalating disputes, reconciling payments. Daylit's agents automate the workflow end to end.
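The workflow described above can be sketched as an escalation policy over overdue invoices. This is a hypothetical illustration of that kind of logic, not Daylit's implementation; the thresholds, field names, and action labels are invented for the example.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Invoice:
    customer: str
    amount: float
    due: date
    disputed: bool = False

def next_action(inv: Invoice, today: date) -> str:
    """Decide the next collections step for one invoice (illustrative policy)."""
    if inv.disputed:
        return "escalate_dispute"            # route to a human for resolution
    days_overdue = (today - inv.due).days
    if days_overdue <= 0:
        return "none"                        # not yet due
    if days_overdue <= 14:
        return "send_reminder"               # gentle automated nudge
    if days_overdue <= 45:
        return "escalate_to_account_manager"
    return "refer_to_collections"            # long-overdue: external collections

inv = Invoice("Acme Ltd", 12_500.0, due=date(2026, 2, 20))
print(next_action(inv, today=date(2026, 3, 10)))  # invoice is 18 days overdue
```

The value of automating this loop is less in any single decision than in running it consistently across thousands of invoices every day.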
CGI deployed AI agents within CGI Credit Studio, its cloud-native platform for credit default management. The agents handle the operational burden of managing defaulted loans: data collection, validation, communications, and case management. For banks managing thousands of default cases simultaneously, the automation replaces manual processes that have historically required large back-office teams.
Marqeta unveiled an AI-powered risk score embedded in its real-time decisioning engine. The score analyses transaction risk at the point of authorisation, using machine learning to detect emerging fraud patterns that rule-based systems miss. "By embedding AI-powered controls and advanced machine learning into the authorization process, we enable customers to expand confidently while also strengthening their fraud defense as they scale," said Anthony Peculic, interim chief product officer at Marqeta. The context is urgent: global payment fraud is projected to increase 153 percent by the end of the decade.
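A risk score embedded in an authorization flow typically maps a model output to a small set of decisions. The sketch below is a generic illustration of that pattern, not Marqeta's engine; the thresholds and decision labels are assumptions made for the example.

```python
def authorize(amount: float, risk_score: float,
              block_threshold: float = 0.9,
              review_threshold: float = 0.6) -> str:
    """Three-way authorization decision from an ML risk score in [0, 1].

    Illustrative only: production engines combine many signals and must
    decide within the card network's latency budget at authorization time.
    """
    if risk_score >= block_threshold:
        return "decline"
    if risk_score >= review_threshold:
        return "step_up"   # e.g. challenge the cardholder before approving
    return "approve"

print(authorize(120.0, 0.72))  # score falls in the step-up band
```

The point of the middle band is that an ML score is rarely used as a binary switch: borderline transactions get friction rather than a hard decline, which protects approval rates while the model catches the clear fraud.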
Regnology added an agentic AI layer to its Ascend platform for regulatory reporting. Workflow agents now automate data collection, validation, and examinations. Real-time analytics agents surface Key Risk Indicators and translate granular data into narrative reports. The company calls its vision "Straight-Through-Reporting": a fully automated regulatory reporting workflow from data ingestion to submission. For compliance teams drowning in reporting requirements across multiple jurisdictions, that vision has obvious appeal.
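Straight-through reporting implies that records clear automated validation before anything reaches a submission step. A minimal sketch of such a validation stage, with invented rule names and fields that stand in for whatever a real regulatory schema requires:

```python
from typing import Callable, Optional

Record = dict
Rule = Callable[[Record], Optional[str]]  # returns an error message, or None if the rule passes

def require(field: str) -> Rule:
    """Rule: the field must be present and non-empty."""
    return lambda r: None if r.get(field) not in (None, "") else f"missing {field}"

def non_negative(field: str) -> Rule:
    """Rule: the field, if absent treated as 0, must not be negative."""
    return lambda r: None if r.get(field, 0) >= 0 else f"{field} must be >= 0"

# Hypothetical ruleset for one report type
RULES: list[Rule] = [require("lei"), require("reporting_date"), non_negative("exposure")]

def validate(record: Record) -> list[str]:
    """Run every rule; an empty list means the record may pass straight through."""
    return [err for rule in RULES if (err := rule(record)) is not None]

bad = {"lei": "", "exposure": -5}
print(validate(bad))  # three failures: empty LEI, missing date, negative exposure
```

An agentic layer sits on top of checks like these: the agent decides what to do with the failures (chase the source system, correct, escalate) rather than replacing the deterministic validation itself.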
TRM Labs launched its Co-Case Agent for crypto crime investigations, embedded directly in its Forensics platform and available free to all customers. The agent translates natural language prompts into complex investigative actions: tracing funds across chains, auditing transaction graphs, suggesting next steps. Chainalysis is building similar capabilities, with natural language AI agents for its blockchain investigation platform rolling out over the summer. Both companies cite the same pressure: caseload is growing faster than the investigative workforce. TRM reports a 500 percent increase in AI-enabled fraud and scams.
Five deployments. Five different financial services verticals. All production, all this month.
The Cautious End of the Spectrum
Not every institution is moving at the same speed, and the slower movers are not wrong to be careful.
LHV Bank is undertaking a proof-of-concept with Gradient Labs to explore agentic AI for retail customer support. The scope is deliberately narrow: email-based communications only. The emphasis is on explainability and auditability, ensuring the bank can trace every decision the agent makes and explain it to regulators if asked.
That caution reflects a real constraint. Financial services is one of the most heavily regulated industries in the world. An AI agent that makes a fraud decision, processes a loan default, or handles a customer complaint is operating in territory where errors have regulatory consequences. The agent does not just need to be accurate. It needs to be auditable, explainable, and compliant with rules that were written for human decision-makers.
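In practice, auditability means recording every agent decision with its inputs and rationale so it can be replayed for a regulator. A minimal sketch of what such a record might look like; the structure and field names are assumptions for illustration, not any vendor's schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    agent: str
    action: str
    inputs: dict
    rationale: str
    timestamp: str

AUDIT_LOG: list[DecisionRecord] = []  # stand-in for an append-only audit store

def record_decision(agent: str, action: str, inputs: dict, rationale: str) -> DecisionRecord:
    """Append a timestamped record of what the agent decided, from what, and why."""
    rec = DecisionRecord(agent, action, inputs, rationale,
                         datetime.now(timezone.utc).isoformat())
    AUDIT_LOG.append(rec)
    return rec

rec = record_decision(
    agent="support-agent-v1",
    action="refund_issued",
    inputs={"ticket_id": "T-1042", "amount": 30.0},
    rationale="amount below auto-refund limit; customer identity verified",
)
print(json.dumps(asdict(rec), indent=2))
```

The hard part is not the logging; it is ensuring the rationale field is faithful to how the agent actually decided, which is exactly the explainability problem LHV is scoping narrowly.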
Gartner predicts that 40 percent of enterprise applications will feature task-specific AI agents by the end of 2026, up from less than 5 percent in 2025. But it also warns that more than 40 percent of agentic projects may be cancelled due to costs, unclear value, or governance gaps. The enthusiasm is real. So is the attrition rate.
Who Secures the Agents?
If AI agents are going into production in financial services, something needs to govern their access. That governance layer is still being built.
Oasis Security raised $120 million in a Series B led by Craft Ventures, with participation from Sequoia, Accel, and Cyberstarts. Total funding now stands at $195 million. The company focuses on identity governance for non-human identities: AI agents, service accounts, and machine-to-machine connections.
The scale of the problem is striking. Machine identities now outnumber human identities 82 to 1, according to Palo Alto Networks data. Legacy identity systems were designed for human users logging in with credentials. They were not designed for AI agents that need dynamic, intent-based access to financial systems.
"The machines of the world are taking over, and our infrastructure was built for humans," said Danny Brickman, CEO of Oasis Security. The company offers intent-based access control, where an agent's permissions dynamically align with its stated objectives rather than static role-based assignments. That is a different model from traditional identity and access management, and it maps directly to how agents actually operate: requesting tools and data as they pursue a goal, not logging in once and staying within a fixed permission set.
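The difference between intent-based access and static roles can be shown in a few lines. This is a toy sketch of the concept, with an invented policy table and tool names; it is not Oasis Security's product.

```python
# Hypothetical policy table: for each declared objective, the tools an agent may call.
INTENT_POLICIES: dict[str, set[str]] = {
    "collect_overdue_invoice": {"read_invoice", "send_email"},
    "investigate_fraud_case": {"read_transactions", "trace_funds"},
}

def allowed(declared_intent: str, tool: str) -> bool:
    """Grant a tool call only if it serves the agent's stated objective.

    Contrast with static RBAC, where an agent's role would carry one fixed
    permission set regardless of what it is currently trying to do.
    """
    return tool in INTENT_POLICIES.get(declared_intent, set())

print(allowed("collect_overdue_invoice", "send_email"))   # in scope for this intent
print(allowed("collect_overdue_invoice", "trace_funds"))  # out of scope: denied
```

The practical effect is that a compromised or misbehaving agent cannot reach tools outside its declared objective, even if the same agent would legitimately use those tools while pursuing a different goal.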
As we explored in our analysis of agentic AI security, the identity layer is one of the most underinvested areas in the agentic stack. Oasis's funding round suggests the market is starting to close that gap.
The 40 Percent Problem
The deployments are real. The risks are also real.
PYMNTS research shows that 56.3 percent of companies face bot- or agent-related threats, and 58.6 percent struggle with bot-driven fraud. Financial services firms are the most affected, with 60.6 percent reporting increased bot traffic. The same technology that automates fraud detection also automates fraud execution. Banks are arming themselves with the same weapon criminals use.
The governance gap is the deeper concern. Deloitte's Tech Trends 2026 found that 85 percent of companies expect to customise agents to fit unique business needs. Only 21 percent report having a mature model for agent governance. That is a 64-percentage-point gap between ambition and readiness.
The institutions that deployed agents this month have made a bet: that the value of automation in fraud scoring, debt collection, regulatory reporting, and investigations outweighs the risk of putting autonomous systems into production before governance frameworks are fully mature. For the specific, bounded use cases they chose, that bet is likely sound. An agent scoring fraud risk at the point of authorisation is operating in a well-defined decision space with clear feedback loops.
The harder question comes next. What happens when agents move from bounded tasks into open-ended decision-making? When the AR collection agent decides which customers to escalate? When the regulatory reporting agent interprets ambiguous data? When the fraud agent blocks a legitimate transaction?
The pilot phase is over. The governance phase has barely begun.
The agents are in production. The governance frameworks are not. Who bears the cost when the gap between the two catches up?