AI in ASIA

Deepfakes and Generative AI: The New Face of Financial Fraud in Asia

APAC deepfake fraud surges more than 2,000% as criminals weaponise AI to steal millions through sophisticated video-call impersonations.

Intelligence Desk · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

Deepfake fraud increased 2,100% in Maldives and 408% in Malaysia year-over-year

APAC becomes epicenter for industrial-scale AI-powered financial deception campaigns

Traditional fraud detection methods fail against real-time deepfake video call attacks


Asia Pacific Becomes Ground Zero for Industrial-Scale Financial Deception

The Asia-Pacific region has become the epicentre of a deepfake fraud explosion that's redefining financial crime. While traditional scams relied on basic social engineering, sophisticated AI-powered attacks are now targeting everything from wire transfers to investment schemes across the region.

Sumsub's latest research reveals that deepfake fraud surged by over 2,000% across APAC markets, with the Maldives experiencing a staggering 2,100% year-over-year increase. Malaysia followed with a 408% spike, whilst Hong Kong recorded a 147% rise despite regulatory measures aimed at curbing general fraud.

The scale of this threat extends far beyond individual cases. One in four people across APAC has been targeted for money mule recruitment, and 69% of businesses report being affected by fraud. This represents a fundamental shift from opportunistic scamming to what experts describe as "industrial-scale deception."

The Hong Kong Wake-Up Call

The $25.6 million Hong Kong deepfake incident that made global headlines wasn't an isolated case. It exemplifies how criminals are weaponising generative AI to create convincing video calls featuring synthetic versions of company executives, complete with realistic facial movements and voice patterns.

These attacks combine multiple AI technologies: deepfake video generation, voice synthesis, and even manipulated telemetry data to bypass security systems. The sophistication means traditional verification methods, such as asking personal questions or checking caller ID, are increasingly ineffective.

"APAC is now the primary region for advanced identity manipulation using synthetic data and AI-driven techniques," according to Sumsub's research team, based on analysis of millions of verification checks across the region.

The financial services sector is particularly vulnerable because criminals can now sustain real-time deepfake interactions during video calls. This allows them to respond to questions and adapt their approach on the fly, making detection far harder than with static deepfake content.

By The Numbers

  • Deepfake fraud increased 2,100% year-over-year in the Maldives and 408% in Malaysia
  • 34% of APAC fraud victims reported funds stolen, whilst 24% were tricked into sending money
  • Deepfake attacks now account for 6.5% of all fraud attempts globally, representing one in every 15 cases
  • Global financial fraud losses reached $442 billion in 2025, with AI-powered scams driving significant increases
  • 32% of APAC users have encountered deepfakes online, with 24% unsure if content they viewed was synthetic

Beyond Deepfakes: The AI-Powered Fraud Ecosystem

Whilst deepfakes capture headlines, they represent just one component of a broader AI-powered fraud ecosystem. Criminals are leveraging generative AI tools to craft hyper-personalised spear-phishing campaigns that combine stolen personal data with AI-generated content to produce seemingly authentic communications.

The automation capabilities of these systems mean fraudsters can simultaneously target thousands of potential victims with individually tailored messages. Each email or message appears to come from a legitimate source and includes specific details that traditional security awareness training hasn't prepared users to recognise.

Application Programming Interfaces (APIs) in financial services, while enabling innovation, have created new attack vectors. Criminals exploit these interfaces to automate account creation, transaction processing, and identity verification processes at scale.

| Traditional Fraud Methods | AI-Enhanced Fraud Methods                | Detection Difficulty |
|---------------------------|------------------------------------------|----------------------|
| Generic phishing emails   | Personalised AI-generated communications | High                 |
| Voice impersonation       | Real-time voice synthesis                | Extreme              |
| Static fake documents     | Dynamic deepfake video calls             | Extreme              |
| Manual targeting          | Automated mass personalisation           | Very High            |

Southeast Asia's cyber-slavery compounds have become production centres for these sophisticated scams. Criminal organisations operating from these facilities pair trafficked workers with AI technology experts to create what Interpol describes as "coordinated criminal ecosystems using synthetic identities."

The Industry Fights Back

Financial institutions aren't passive victims in this escalating arms race. Leading banks across Asia are deploying their own AI systems to detect anomalous patterns and flag suspicious transactions before funds can be transferred.

Asian banks are increasingly turning to generative AI not just for operational efficiency, but as a critical defence mechanism. These systems analyse transaction patterns, communication metadata, and behavioural indicators to identify potential fraud in real-time.
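As a much-simplified illustration of the pattern analysis described above, a per-customer outlier check can flag a transaction amount that falls far outside that customer's history. Production systems use far richer features (device, location, counterparty, timing); the function below, with its hypothetical threshold, sketches only the statistical idea.

```python
import statistics

def flag_anomalous_amount(history: list[float], amount: float,
                          threshold: float = 3.5) -> bool:
    """Flag a transaction whose amount is a robust outlier vs. this customer's history.

    Uses the modified z-score (median and median absolute deviation), which is
    less distorted by past outliers than a mean/standard-deviation z-score.
    """
    if len(history) < 5:
        return False  # too little history to judge; defer to other controls
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history)
    if mad == 0:
        return amount != med  # perfectly uniform history: any deviation stands out
    modified_z = 0.6745 * (amount - med) / mad
    return abs(modified_z) > threshold

history = [120.0, 95.0, 150.0, 110.0, 130.0, 105.0]
print(flag_anomalous_amount(history, 140.0))    # typical amount → False
print(flag_anomalous_amount(history, 25000.0))  # wire far outside pattern → True
```

A flag like this would typically trigger a hold and an out-of-band verification step rather than an outright block, since legitimate one-off large payments are common.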

Enhanced authentication methods are evolving beyond traditional two-factor systems. Biometric verification now incorporates "liveness detection" that can identify whether a person is physically present or represented through synthetic media. Voice pattern analysis examines speech rhythms and vocal characteristics that current deepfake technology struggles to replicate perfectly.

"Fraud is evolving from manual impersonation to industrial-scale deception powered by deepfake video and voice synthesis," notes Sumsub's research team, highlighting the need for equally sophisticated defensive measures.

Several APAC governments have introduced regulatory frameworks specifically targeting AI-powered fraud. Vietnam recently enacted Southeast Asia's first comprehensive AI law, including provisions for financial crime prevention. Indonesia has implemented transaction monitoring requirements following $35.4 billion in fraud losses during 2024.

Protection Strategies for Businesses and Individuals

Defending against AI-powered financial fraud requires a multi-layered approach that combines technology solutions with human awareness. Traditional security training needs updating to address the sophistication of modern synthetic media attacks.

  • Implement verification protocols that require multiple forms of authentication for high-value transactions
  • Establish "cooling-off" periods for large financial transfers, allowing additional verification time
  • Train employees to recognise the subtle inconsistencies in deepfake content, such as unnatural eye movements or lip-sync issues
  • Deploy AI-powered fraud detection systems that can analyse transaction patterns and communication metadata
  • Create clear escalation procedures for suspicious requests, regardless of apparent sender authority
  • Regularly update security awareness training to include emerging AI-powered attack vectors
  • Implement biometric authentication systems with liveness detection capabilities
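The cooling-off and multi-channel verification recommendations above can be sketched as a simple gating rule for high-value transfers. Everything here (the threshold, the 24-hour hold, the channel names) is a hypothetical placeholder for illustration, not a recommended policy:

```python
from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 10_000    # illustrative threshold, in local currency
COOLING_OFF_SECONDS = 24 * 3600  # illustrative 24-hour hold

@dataclass
class TransferRequest:
    amount: float
    requested_at: float  # epoch seconds
    verified_channels: set[str] = field(default_factory=set)

def transfer_allowed(req: TransferRequest, now: float) -> bool:
    """Gate a transfer: small amounts pass; high-value ones need both the
    cooling-off period to elapse and confirmation over at least two
    independent channels (e.g. a callback to a known number plus an
    in-app approval)."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        return True
    cooled_off = now - req.requested_at >= COOLING_OFF_SECONDS
    independently_verified = len(req.verified_channels) >= 2
    return cooled_off and independently_verified

req = TransferRequest(amount=250_000, requested_at=0.0)
print(transfer_allowed(req, now=3600))  # → False (still in cooling-off, unverified)
req.verified_channels.update({"callback_known_number", "in_app_approval"})
print(transfer_allowed(req, now=COOLING_OFF_SECONDS + 1))  # → True
```

The key property is that no single channel, including a convincing video call, can authorise a large transfer on its own.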

The challenge extends beyond individual protection to systemic resilience. Southeast Asia faces particular trust challenges as consumers struggle to differentiate between legitimate AI applications and malicious uses.

Companies must balance security with user experience, as overly complex verification processes can drive customers to competitors. This creates a delicate equilibrium where security measures must be robust enough to prevent fraud whilst remaining accessible to legitimate users.

How can I identify a deepfake video call during a business transaction?

Look for subtle inconsistencies like unnatural eye movements, slight audio delays, or responses that seem too generic. Always verify through an independent communication channel, such as calling the person directly on their known number before proceeding with any financial requests.

What should businesses do if they suspect an AI-powered fraud attempt?

Immediately halt the transaction and verify the request through multiple independent channels. Document all communications and report the incident to relevant authorities. Implement additional verification steps for similar future requests, regardless of apparent sender legitimacy.

Are traditional security measures completely ineffective against deepfake fraud?

Traditional measures remain valuable as part of a layered defence but need enhancement. Multi-factor authentication, transaction limits, and verification calls still provide protection, but must be combined with AI-powered detection systems and updated awareness training.

Which APAC regions face the highest deepfake fraud risks?

The Maldives, Malaysia, and Hong Kong currently show the highest growth rates in deepfake fraud incidents. However, criminals are rapidly expanding operations across Indonesia, the Philippines, Thailand, and Singapore, making regional awareness crucial for all APAC markets.

How are regulators responding to AI-powered financial fraud?

APAC regulators are implementing biometric verification requirements, transaction monitoring mandates, and AI-specific legislation. Vietnam leads with comprehensive AI laws, whilst other countries are developing frameworks to address synthetic media fraud and cross-border criminal coordination.

The AIinASIA View: The deepfake fraud explosion across APAC represents more than a technological challenge; it's a fundamental shift in criminal capability that demands equally sophisticated responses. We believe the region's rapid adoption of generative AI across industries creates both the vulnerability and the solution. Success will require unprecedented collaboration between financial institutions, technology companies, and regulators to create defences that evolve as quickly as the threats. The organisations that invest now in comprehensive AI-powered fraud prevention will not only protect themselves but gain significant competitive advantages in customer trust and regulatory compliance.

The arms race between AI-powered fraud and AI-powered detection is accelerating across Asia Pacific. As criminals become more sophisticated in their use of synthetic media and automated targeting, the financial services industry must match this evolution with equally advanced defensive measures.

The question isn't whether AI will become the ultimate weapon against financial crime or merely equip criminals with better tools. Both are happening simultaneously, creating a complex landscape where success depends on who can innovate faster and more effectively.

What concerns you most about the rise of AI-powered financial fraud in your region? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.



Latest Comments (2)

Crystal (@crystalwrites) · 13 February 2026

That Hong Kong deepfake case with the CFO call is wild, truly shows how advanced these fakes are getting! But I wonder if the article gives enough credit to just how good those social engineering tactics were too. Even without perfect deepfakes, scammers are so good at exploiting trust. We need to focus on both the tech and the human element to fight this!

Tony Leung (@tonyleung) · 15 January 2026

that hong kong deepfake with the CFO, the 25.6 million USD one. our firm, we saw similar attempts even before that hit the news publicly. the speed at which these deepfake tools are improving, it's outpacing our current regulatory frameworks here for digital identity verification. that's the real issue.
