How AI Is Rebuilding Financial Services

Image credit: Fintech News Network / Visa / Cloudflare

Part 1/2: From fraud engines to agentic commerce

Financial services has been here before: new computation, same promises. The difference now is that modern AI is moving from passive scoring to agentic behaviour in the payment flow—reading context, invoking tools and acting under policy. The shift is visible in how networks frame their strategies, how fintechs price value, and how regulators write rules that turn experimentation into operations.

Visa’s public posture captures the mood. In its “AI: The next frontier” perspective, the company argues for flexible, adaptable regulation and transparency in model operation—language that signals both confidence and a governance mindset suited to a heavily regulated industry. The companion essay on fraud underscores the scale of investment and impact, citing $10B across five years and $40B in blocked fraud in a single year, with the subtext that better models now aim not only to stop bad transactions but to approve more good ones. That outcome—approval lift with trust—defines the commercial prize: merchants and issuers will share upside with whoever can raise genuine throughput without raising loss.

Mastercard has turned the same principle into product lines. Decision Intelligence scores transactions in real time using network features and machine learning to both reduce fraud and drive approvals, and the firm has begun to describe the next layer—agentic commerce—as assistants that can search, compare and complete purchases for consumers with minimal inputs. The framing matters: if agents originate demand, the network’s revenue shifts from pure interchange to risk-adjusted approval lift, agentic checkout fees, and data products that explain decisions to counterparties. Recent Mastercard materials go further, sketching standards for agentic payments—identity, consent and traceable execution for AI-to-AI transactions—which implies a technology roadmap that bakes governance into the protocol, not just the app.
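Mastercard has not published a wire format for these standards, but the ingredients it names, identity, consent and traceable execution, suggest something like a signed mandate travelling with each agent-initiated payment. The sketch below is purely illustrative; every field name and the `permits` check are assumptions, not any network's actual specification.

```python
from dataclasses import dataclass, field
from datetime import datetime
import uuid

@dataclass(frozen=True)
class AgentPaymentMandate:
    """Hypothetical record an agent presents with an AI-initiated payment."""
    agent_id: str            # verifiable identity of the acting agent
    principal_id: str        # the consumer or business the agent acts for
    consent_scope: str       # e.g. "weekly groceries", granted by the principal
    max_amount_minor: int    # spend ceiling in minor units (pence, cents)
    currency: str
    expires_at: datetime
    trace_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # audit handle

    def permits(self, amount_minor: int, at: datetime) -> bool:
        """A transaction is allowed only inside the consented ceiling and window."""
        return amount_minor <= self.max_amount_minor and at < self.expires_at
```

The point of a structure like this is that governance lives in the protocol: any counterparty can verify consent scope and trace the execution without trusting the agent's application code.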

Fintechs point to the same economics from the merchant edge. Stripe Radar packages network-scale risk learning with configurable rules for merchants and platforms; Adyen Uplift markets conversion gains attributed to end-to-end AI optimisation across the payment funnel. The narrative is consistent: value accrues where more genuine transactions clear at a given loss target, where false declines fall, and where sellers get actionable explanations they can use with issuers and customers. In that world, the technology stack is no longer a single fraud model. It is a decisioning platform—feature stores, policy engines, continuous learning, and agents that can reason over identity, funds, device, fulfilment state and dispute history.
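None of these vendors publish their internals, but the pattern they describe, a learned score blended with explicit policy rules that emit reason codes, can be sketched in a few lines. Everything below (feature names, thresholds, reason codes) is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TxnFeatures:
    model_score: float        # fraud probability from the learned model, 0..1
    device_seen_before: bool  # device/identity signal
    disputes_90d: int         # chargebacks on this account in the last 90 days
    amount_minor: int         # transaction amount in minor units

def decide(f: TxnFeatures) -> tuple[str, list[str]]:
    """Blend a model score with policy rules; return a decision plus reason codes."""
    reasons = []
    if f.disputes_90d >= 3:
        reasons.append("EXCESSIVE_DISPUTES")
    if not f.device_seen_before and f.amount_minor > 50_000:
        reasons.append("NEW_DEVICE_HIGH_VALUE")
    if f.model_score > 0.90:
        reasons.append("HIGH_MODEL_SCORE")

    if reasons:
        # Policy can escalate to review rather than decline outright.
        return ("decline" if f.model_score > 0.90 else "review"), reasons
    return "approve", ["LOW_RISK"]

print(decide(TxnFeatures(model_score=0.12, device_seen_before=True,
                         disputes_90d=0, amount_minor=4_200)))
# ('approve', ['LOW_RISK'])
```

The reason codes are what make the decision sellable: they are the explanation a merchant can hand to an issuer or a customer, which is exactly the add-on value the vendors market.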

The move from models to agents also changes operations. Instead of static risk teams and quarterly model pushes, leading players are adopting AI Ops: red-team pre-deployment, shadow traffic, drift monitors, and rollbacks when patterns change. PayPal’s governance disclosures show the inside mechanics—risk appetite frameworks, inventories of “high-impact” models, and board-level oversight—while central-bank commentary from the BIS puts the sector on notice that AI can be both a productivity lever and a systemic risk if everyone converges on the same algorithms. These signals align with a broader compliance runway: NIST’s AI Risk Management Framework (AI RMF 1.0) gives firms a common language for identification, measurement and monitoring; the EU AI Act adds binding obligations for high-risk applications and general-purpose AI (GPAI), including documentation, post-market monitoring and incident reporting.
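One concrete form a drift monitor takes is the population stability index (PSI) computed between the score distribution at deployment time and live traffic. The sketch below is a minimal version; the 0.1/0.25 thresholds are common industry rules of thumb, not a standard, and the data is simulated.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between baseline scores and live traffic."""
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])   # fold live tails into end bins
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)              # avoid dividing by or logging zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 8, 50_000)   # score distribution when the model shipped
live = rng.beta(2, 6, 50_000)       # shifted distribution on current traffic
print(f"PSI = {psi(baseline, live):.3f}")
# Rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 trigger rollback/retrain
```

Wiring a check like this to an automated rollback is what turns "quarterly model pushes" into the continuous AI Ops posture described above.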

Why the interest in “agentic” rather than just “generative”? Because the use cases now extend beyond text. Networks and processors are stitching tool-use into flows so that an AI can, for example, request additional verification, check a delivery signal, consult a chargeback history, or re-route a payment rail—all under policy and with a reason code trail. Mastercard’s blogs describe hundreds of thousands of decisions per second mediated by AI; Visa’s trust language emphasises transparency so counterparties understand why a decision was made. The competitive moat becomes less about model weights and more about governed execution: agents that act, but only within auditable and reversible boundaries.
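A toy illustration of that "governed execution" pattern: every tool call passes through a policy gate and leaves an auditable record with a reason code, whether or not it runs. The tool names, risk tiers and policy table below are invented for the example; the stubs stand in for real integrations.

```python
from datetime import datetime, timezone

# Stub tools standing in for real verification and logistics integrations.
def request_step_up_auth(txn): return {"verified": True}
def check_delivery_signal(txn): return {"delivered": False}

TOOLS = {"request_step_up_auth": request_step_up_auth,
         "check_delivery_signal": check_delivery_signal}

# Policy: which tools an agent may invoke, per transaction risk tier (illustrative).
POLICY = {"low": {"check_delivery_signal"},
          "high": {"check_delivery_signal", "request_step_up_auth"}}

AUDIT_LOG = []

def invoke(tool: str, txn: dict, risk_tier: str):
    """Run a tool only if policy allows it, and record the outcome either way."""
    allowed = tool in POLICY.get(risk_tier, set())
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "txn_id": txn["id"],
        "risk_tier": risk_tier,
        "reason_code": "POLICY_ALLOWED" if allowed else "POLICY_BLOCKED",
    })
    if not allowed:
        return None                     # the agent must fall back or escalate
    return TOOLS[tool](txn)

invoke("request_step_up_auth", {"id": "txn_123"}, risk_tier="low")   # blocked, logged
invoke("request_step_up_auth", {"id": "txn_123"}, risk_tier="high")  # allowed, logged
```

The audit log, not the agent's cleverness, is the moat: it is what makes actions reversible and explicable to counterparties and regulators.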

This technology model maps directly to revenue. Approval-rate improvement and fraud-loss reduction generate quantifiable lift; agent-initiated commerce creates new originations; explainability and dispute-prevention analytics become sellable add-ons. Pricing follows outcomes: performance-based fees, tiered SLAs, and enterprise subscriptions tied to conversion and loss ratios. For merchants, the test is simple—authorisation up, fraud flat-to-down—and providers that can publish credible, regulator-recognised metrics will win procurement cycles.
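To make the merchant test concrete, here is the back-of-envelope arithmetic with entirely hypothetical figures; the structure, not the numbers, is the point.

```python
# Hypothetical merchant economics of approval lift (all figures invented).
monthly_volume = 10_000_000   # attempted payment volume, GBP
lift = 0.015                  # 1.5 pp approval-rate improvement from better decisioning
margin = 0.20                 # gross margin on the recovered sales
extra_fraud_bps = 2           # tolerated fraud increase: 2 basis points of volume

recovered_sales = monthly_volume * lift                    # £150,000
gross_gain = recovered_sales * margin                      # £30,000
fraud_cost = monthly_volume * extra_fraud_bps / 10_000     # £2,000
print(f"Net monthly value: £{gross_gain - fraud_cost:,.0f}")   # £28,000
```

A provider charging a performance-based fee is pricing against exactly that net figure, which is why credible, regulator-recognised measurement is itself a competitive weapon.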

Startups do not need to be spectators. They lead in wedges where incumbents move slowly: agentic checkout, account-to-account orchestration, real-time KYC, and vertical-specific flows (for instance, marketplaces or subscription retries). The lesson from networks is to operationalise trust early: adopt NIST AI RMF controls, document model risk, and make monitoring and incident response part of the product. That posture shortens diligence with acquiring banks and schemes and turns governance into a sales asset rather than a drag. Meanwhile, incumbents can borrow from startups’ tempo—shipping agentic features behind policy gates, measuring lift weekly, and routing only the low-risk, high-confidence actions to autonomous execution while keeping humans on the loop for the rest.
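That risk-tiered routing can be stated as a small policy function. The thresholds below are placeholders; in practice they would be derived from measured loss curves, not picked by hand.

```python
def route(action_risk: str, model_confidence: float) -> str:
    """Route an agentic action by risk tier and model confidence (illustrative)."""
    if action_risk == "low" and model_confidence >= 0.95:
        return "autonomous"      # execute immediately, log, keep rollback available
    if model_confidence >= 0.80:
        return "human_on_loop"   # execute with asynchronous human review
    return "human_in_loop"       # queue for human approval before execution

print(route("low", 0.97))   # autonomous
print(route("high", 0.97))  # human_on_loop
print(route("low", 0.60))   # human_in_loop
```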

The likely endpoint is standardised agentic payments: buyer and seller agents negotiating terms, authenticating identity, and settling under rules that preserve consent, privacy and auditability. Both Mastercard and Visa are now explicitly talking about this horizon. Between here and there sits the pragmatic work: decisioning platforms that scale, governance that satisfies regulators, and commercial models that reward approval lift with trust. That is the bridge from fraud engines to agentic commerce—and it is being built in public.
