How AI Is Rebuilding Financial Services
Image credit: Precious Madubuike on Unsplash
Part 2/2: Finance, controls, and the AI P&L
The path forward runs through major players funding, governing and scaling AI. This shift to AI-native finance is not just a product story; it is a business architecture problem. Boards want to know how much to invest, where the return lands, and how to keep both regulators and customers confident as autonomy rises. The answers are emerging in the budgets and disclosures of large institutions and in the frameworks that turn model ambition into bankable controls.
On funding, leaders are moving AI from innovation sandboxes to programme line items with multi-year horizons. JPMorgan is a bellwether: a technology budget in the tens of billions and reported 10–20% engineering productivity gains from coding assistants, alongside hundreds of AI use cases in flight and value estimates in the $1–1.5B range. These figures are not directly comparable to the card networks' disclosures, but they establish a P&L lens: AI value is tracked as approval lift, loss reduction, opex savings and agent-led upsell, not as model counts. Cost discipline follows naturally: optimise model size and routing to cut inference spend; automate evaluation to accelerate safe releases; retire overlapping tools when agents absorb tasks.
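To make that lens concrete, here is a minimal sketch of how such a value statement might be assembled. Every number and the function itself are invented for illustration; none of the figures come from any bank's disclosures.

```python
# Hedged sketch: a P&L-style view of AI value, using invented numbers.
# It shows the accounting lens described above (lift, losses, opex,
# upsell), not model counts, and is not any institution's methodology.

def annual_ai_value(
    payment_volume: float,     # annual attempted volume, in dollars
    approval_lift_bps: float,  # incremental approvals from better models, in basis points
    take_rate: float,          # revenue captured per dollar of approved volume
    loss_reduction: float,     # dollars of fraud/credit losses avoided
    opex_savings: float,       # dollars saved from automated knowledge work
    agent_upsell: float,       # incremental revenue from agent-led baskets
) -> float:
    """Return the annual P&L contribution attributed to AI initiatives."""
    revenue_from_lift = payment_volume * (approval_lift_bps / 10_000) * take_rate
    return revenue_from_lift + loss_reduction + opex_savings + agent_upsell

# Illustrative only: a 20 bps approval lift on $500B of volume at a 2% take
# rate, plus $300M avoided losses, $150M opex savings, $50M agent upsell.
print(f"${annual_ai_value(500e9, 20, 0.02, 300e6, 150e6, 50e6) / 1e6:,.0f}M")
```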
Controls are the second leg. The NIST AI RMF gives firms a template that meshes with existing model-risk governance: govern the programme, map contexts and risks, measure them, and manage harms, with monitoring continuing after deployment. For Europe-facing operations, the EU AI Act turns guidance into law: obligations for high-risk systems and general-purpose AI providers, including technical documentation, transparency, post-market monitoring and incident reporting. The strategic point is commercial, not bureaucratic. Providers that can show conformity to NIST-style processes and EU-style duties convert compliance into contractual trust, a deciding factor in B2B sales where risk officers hold the pen. Central-bank analysis from the BIS adds a sector-wide caution: productivity upside is real, but correlated models can become a systemic vulnerability if everyone optimises to the same signals. Diversification and stress testing become part of the go-to-market story.
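As a sketch of what meshing with model-risk governance can look like in practice, here is a hypothetical risk-register entry shaped around the RMF's four functions (Govern, Map, Measure, Manage). The field names and the use case are illustrative, not any firm's actual schema.

```python
# Hedged sketch: one model-risk register entry organised by the NIST AI
# RMF's four functions. All names and entries below are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ModelRiskEntry:
    model_id: str
    use_case: str
    govern: list[str] = field(default_factory=list)   # ownership, policy, escalation
    map: list[str] = field(default_factory=list)      # context, affected parties, harms
    measure: list[str] = field(default_factory=list)  # metrics and evaluation cadence
    manage: list[str] = field(default_factory=list)   # mitigations and monitoring

entry = ModelRiskEntry(
    model_id="fraud-screen-v7",
    use_case="real-time transaction fraud scoring",
    govern=["model owner: payments risk", "board-reported risk appetite"],
    map=["false declines harm legitimate customers", "EU AI Act high-risk review"],
    measure=["weekly approval-rate and false-positive dashboards", "drift checks"],
    manage=["kill-switch on drift alarm", "post-market incident reporting"],
)
```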
Operating models are being rebuilt around AI Ops. Mastercard’s materials describe decision platforms and AI-mediated risk management at transaction speed; PayPal’s governance reporting shows risk-appetite frameworks, inventories of high-impact models and escalation paths that connect engineers to compliance and the board. The pattern resembles SRE in software: shadow deploys, kill-switches, drift detectors, and post-incident reviews. As agentic features move into checkout and dispute management, those controls become rails for safe autonomy—greenlighting low-risk actions and flagging anything that crosses policy thresholds.
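The policy-gate pattern at the heart of those rails is simple enough to sketch. The fragment below is illustrative, not any network's implementation; the action types, limits and thresholds are invented.

```python
# Hedged sketch of "rails for safe autonomy": an agent action is
# auto-approved only when it clears explicit policy thresholds; anything
# else is escalated to a human queue. All values here are hypothetical.

from dataclasses import dataclass

@dataclass
class AgentAction:
    kind: str          # e.g. "refund", "payment"
    amount: float      # transaction value in dollars
    risk_score: float  # model-estimated risk, 0.0 (safe) to 1.0 (risky)

AUTO_LIMITS = {"refund": 100.0, "payment": 500.0}  # hypothetical policy table
RISK_THRESHOLD = 0.2
KILL_SWITCH = False  # flipped by drift detectors or incident response

def gate(action: AgentAction) -> str:
    """Greenlight low-risk actions; flag anything that crosses policy."""
    if KILL_SWITCH:
        return "blocked: autonomy suspended"
    limit = AUTO_LIMITS.get(action.kind)
    if limit is None or action.amount > limit or action.risk_score > RISK_THRESHOLD:
        return "escalated: human review"
    return "approved: within policy"

print(gate(AgentAction("refund", 40.0, 0.05)))    # approved
print(gate(AgentAction("payment", 900.0, 0.05)))  # escalated (over limit)
```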
All of this resolves into a practical P&L. On the revenue line, AI raises authorisation rates, reduces false declines, and creates new originations when agents assemble baskets and trigger payments with consent. On cost, it compresses underwriting, KYC and dispute handling cycles and automates internal knowledge work—from code to legal review—freeing scarce talent for higher-margin tasks. On capital, it justifies shared platforms: feature stores, policy engines, evaluation suites, and observability pipelines that every product can use. The technology model evolves from single-use fraud screens to multimodal, tool-using agents operating over a governed decision fabric. The operating model moves from human-heavy rules teams to cross-functional AI Ops. The financial model shifts to outcome pricing—fees tied to approval lift or loss outcomes—plus subscriptions for agentic services and support.
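Outcome pricing, in particular, reduces to a small formula. A hedged sketch follows, assuming hypothetical contract terms (baseline approval rate, take rate, revenue share, fee cap); real agreements would define measurement and attribution far more carefully.

```python
# Hedged sketch of outcome pricing: the vendor's fee is tied to measured
# approval lift against a pre-agreed baseline, not to seats or API calls.

def outcome_fee(
    baseline_approval: float,     # approval rate agreed pre-deployment, e.g. 0.90
    observed_approval: float,     # approval rate measured in production
    attempted_volume: float,      # dollars of attempted transactions this period
    take_rate: float = 0.02,      # client revenue per approved dollar (hypothetical)
    revenue_share: float = 0.10,  # vendor's share of incremental revenue
    fee_cap: float = 5e6,         # per-period cap protecting the client
) -> float:
    """Fee tied to measured approval lift rather than usage."""
    lift = max(observed_approval - baseline_approval, 0.0)  # no fee below baseline
    incremental_revenue = attempted_volume * lift * take_rate
    return min(incremental_revenue * revenue_share, fee_cap)

# Illustrative: 50 bps of lift on $10B of attempted volume at a 2% take rate
# yields $1M of incremental client revenue and a $100,000 vendor fee.
print(f"${outcome_fee(0.900, 0.905, 10e9):,.0f}")
```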
Startups can win inside this architecture by being precise about where value shows up and by borrowing governance early. A clean wedge—say, agentic checkout or real-time account-to-account orchestration—priced on measurable lift will land faster than broad promises. Publishing a NIST-aligned risk approach, shipping with monitoring and incident playbooks, and offering explainability artefacts that slot into a bank’s audit process shorten diligence. Distribution partnerships with networks exploring agentic payments create leverage where trust is concentrated. Incumbents, in turn, can absorb startup tempo: iterate behind policy gates, push low-risk autonomy first, and expand scope as evaluation and monitoring prove stable.
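One monitoring artefact a startup can ship from day one is a drift check. Below is a sketch of the population stability index (PSI), a measure long used in credit-risk model monitoring; the decile bucketing and the 0.25 alert threshold are common conventions, not standards.

```python
# Hedged sketch of "shipping with monitoring": a PSI check comparing the
# score distribution seen at validation time with live production scores.

import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """Population stability index between reference and live score samples."""
    edges = np.quantile(expected, np.linspace(0, 1, buckets + 1))
    actual = np.clip(actual, edges[0], edges[-1])  # fold outliers into end bins
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # guard against log(0) on empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
reference = rng.beta(2, 5, 50_000)  # score distribution at validation time
live = rng.beta(2.5, 5, 50_000)     # a shifted production distribution
value = psi(reference, live)
# Classic rule of thumb: below 0.1 stable, above ~0.25 investigate.
print(f"PSI={value:.3f}:", "investigate drift" if value > 0.25 else "stable")
```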
Where does this lead? The plausible end-state is agentic commerce standards: a shared way for buyer and seller agents to discover, negotiate, authenticate and settle, with consent, transparency and dispute hooks built in. Visa and Mastercard are now writing publicly about this layer, tying it to transparent model behaviour and trusted execution. The nearer-term horizon is less dramatic but more valuable: a steady rise in approval rates, fewer false declines, faster onboarding and cleaner disputes, under governance that regulators recognise and customers can see. That is how AI shows up on the income statement, and why the firms that treat it as business architecture, not just model training, will set the pace.
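As a closing sketch, and purely speculative since no such standard yet exists, here is one way a consent-carrying request between buyer and seller agents could be shaped, with the hooks that end-state would need. Every field name is invented and implies nothing about what the networks will actually publish.

```python
# Speculative sketch only: an imagined message a buyer agent might present
# to a seller agent, with consent, transparency and dispute hooks built in.

checkout_request = {
    "intent": "purchase",
    "buyer_agent": {"id": "agent://buyer/alice", "attestation": "signed-model-card"},
    "consent": {
        "granted_by": "alice",            # the human principal
        "scope": "groceries under $150",  # a bounded mandate, not open-ended
        "expires": "2025-07-01T00:00:00Z",
    },
    "payment": {"method": "network-token", "max_amount": 150.00, "currency": "USD"},
    "transparency": {"log_decisions": True, "share_reason_codes": True},
    "dispute_hooks": {"contact": "agent://buyer/alice/disputes", "window_days": 60},
}

print(checkout_request["consent"]["scope"])  # the mandate a seller agent would verify
```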