ACI Blog

AI in action: The reality of AI adoption in fraud and financial crime

Fraud and financial crime are no longer isolated operational risks. They have become systemic challenges accelerated by real-time payments, industrialized scams, and the rapid adoption of artificial intelligence. Today, AI is transforming the landscape on both sides of the equation—empowering fraud teams to respond at machine speed, while enabling criminals to scale and adapt faster than ever before. 

To understand how organizations are navigating this reality, ACI Worldwide, in partnership with Finextra, surveyed 154 global financial crime leaders across banks, payment providers, merchants, and financial infrastructure firms. The result is a clear snapshot of an industry at an inflection point: It has embraced AI in principle but is still grappling with how to deploy it safely, confidently, and at scale.

The headline message is unmistakable: AI is no longer optional in fraud and financial crime prevention, but scaling it safely and effectively remains a work in progress.

AI adoption has reached critical mass

One of the strongest signals from this report is how quickly AI has moved from experimentation to expectation in fraud and financial crime prevention. An overwhelming 98% of organizations are pursuing at least one advanced AI initiative, underscoring that AI is no longer a future ambition but a present-day requirement. 

Adoption is also translating into real-world deployment:

  • 51% of organizations are already live with AI in fraud and financial crime prevention
  • A further 47% plan to go live within the next 24 months

In practical terms, AI is now embedded across fraud strategies worldwide, with organizations prioritizing technologies such as generative AI, real-time anomaly detection, and agentic AI to keep pace with increasingly sophisticated threats.

Yet widespread adoption does not automatically translate into maturity.

The autonomy gap: AI is deployed, but not fully trusted

Despite rapid uptake, most institutions remain cautious about handing over decision-making to machines. While AI models are increasingly embedded in fraud and financial crime workflows, confidence in fully autonomous operation remains limited. Only 19% of organizations operate with full AI autonomy, while the majority still rely on some form of human oversight, either reviewing flagged decisions or manually checking all outcomes.

This “human-in-the-loop” approach offers reassurance in the short term, but it also introduces material friction into fraud operations:

  • Higher operational costs, as manual review scales linearly with volumes
  • Slower response times, reducing effectiveness against real-time fraud
  • Limited scalability as transaction volumes and attack sophistication continue to grow
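The human-in-the-loop pattern described above can be illustrated with a simple confidence-threshold router. This is a hypothetical sketch, not a method from the report: the class names, function names, and threshold values are all illustrative assumptions. It shows why manual review workload grows in step with transaction volume — every transaction scoring in the uncertain middle band lands in a human queue.

```python
# Illustrative sketch (not from the report): confidence-threshold routing
# for fraud decisions. All names and threshold values are hypothetical.
from dataclasses import dataclass

@dataclass
class Transaction:
    txn_id: str
    fraud_score: float  # model output in [0, 1]

def route(txn: Transaction,
          auto_block: float = 0.95,
          auto_approve: float = 0.10) -> str:
    """Route a scored transaction to an automated or manual-review queue."""
    if txn.fraud_score >= auto_block:
        return "blocked"          # fully automated decision
    if txn.fraud_score <= auto_approve:
        return "approved"         # fully automated decision
    return "manual_review"        # human-in-the-loop: cost scales with volume

txns = [Transaction("t1", 0.97), Transaction("t2", 0.05), Transaction("t3", 0.60)]
decisions = [route(t) for t in txns]
# → ["blocked", "approved", "manual_review"]
```

Widening the automated bands (raising `auto_approve`, lowering `auto_block`) shrinks the manual queue but transfers decision-making authority to the model — which is precisely the trust trade-off the survey describes.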

The research findings suggest that trust—not technology—is now one of the main barriers to unlocking AI’s full potential in fraud operations. Building confidence in explainability, governance, and accountability is emerging as the next critical phase of AI maturity.

Data privacy has overtaken fraud as the top AI concern

Perhaps the most surprising finding is what organizations see as the greatest emerging AI-driven risk. Rather than fraud techniques themselves, data privacy risks in AI training now rank as the number one concern globally, overtaking traditional fraud threats.

This shift reflects a growing recognition that as AI becomes more deeply embedded in fraud prevention, risk is increasingly defined by data. Organizations recognize that:

  • AI systems are only as safe as the data used to train them
  • Mishandling sensitive data can create regulatory, ethical, and reputational exposure
  • Strong data governance is as critical as detection accuracy

At the same time, organizations overwhelmingly recognize that data and intelligence exchange is vital to modern fraud prevention. This creates a growing tension between collaboration and compliance, one that many institutions are still actively working to resolve.

Talent shortages are shaping AI strategies

While AI investment is accelerating, access to skilled talent remains uneven across the industry. Only one in five organizations report having sufficient in-house data science expertise, leaving many firms reliant on outsourcing, cloud platforms, or third-party support to bridge capability gaps.

This reliance has important strategic implications. As organizations scale AI-driven fraud prevention, gaps in internal expertise can make programs harder to sustain and harder to evolve over time:

  • AI programs may struggle to scale sustainably without strong expertise
  • Institutions risk long-term dependency on external expertise, reducing flexibility and control
  • Pressure is growing for AI tools that reduce reliance on scarce specialist skills

The findings suggest that AI strategy is increasingly an organizational challenge, not just a technical one, requiring new operating models, skills development, and closer alignment between fraud, risk and compliance, data, and technology teams.

Tactical security dominates—strategic growth takes a back seat

When asked about priorities for the next 12 months, organizations overwhelmingly focus on securing payment environments using advanced technologies. Faced with escalating fraud volumes and intensifying regulatory scrutiny, most institutions are doubling down on short-term resilience and risk containment.

By contrast, objectives such as minimizing customer friction, improving acceptance rates, or enhancing experience rank far lower. This imbalance highlights a sector heavily focused on short-term resilience and defensive execution, often at the expense of longer-term optimization and growth.

This tactical emphasis is understandable given the pace of fraud and regulatory pressure. But the research raises an important question for banks and payment providers alike: Can organizations afford to prioritize security alone, when fraud decisions increasingly shape customer trust, revenue, and competitive differentiation?

A turning point for fraud and financial crime prevention

Taken together, the findings in this report paint a picture of an industry in transition:

  • AI adoption is nearly universal, moving firmly into the operational mainstream
  • Advanced capabilities are being pursued at scale, across fraud detection, prevention, and decisioning
  • Trust, talent, data governance, and organizational alignment are emerging as the primary constraints on progress

Fraud prevention is no longer a back-office function. It has become a strategic capability that influences risk, revenue, customer experience, and competitive advantage.

The next phase will be defined not by whether organizations adopt AI but by how responsibly, collaboratively, and confidently they scale it across the enterprise.

Explore the full findings

The AI in Action: Global Survey on Fraud and Financial Crime report explores these themes in depth, including regional differences, maturity patterns, and what leading organizations are doing to move from tactical defense to strategic resilience. Explore the full report and see how your organization compares.

AI in Action: Global Survey on Fraud and Financial Crime

Real-world insights on how AI is transforming fraud strategies worldwide.

Payments Expert

ACI Worldwide powers electronic payments for financial institutions, retailers and processors around the world with its broad and integrated suite of electronic payment software.