Global financial losses linked to deepfake fraud have surged to an unprecedented $1.56 billion, as increasingly affordable AI tools fuel a new era of scalable digital crime.
According to new figures compiled by Surfshark from the AI Incident Database and Resemble.AI, more than $1 billion of these losses occurred in 2025 alone, underscoring how rapidly synthetic media has become a central weapon in the global fraud ecosystem.
Just a few years ago, producing a convincing deepfake video required professional skills, specialised software, and costs that could range from $300 to $20,000 per minute, depending on quality. But the landscape has radically shifted. With the arrival of accessible AI generators, including newly released tools such as OpenAI’s Sora 2, the cost of generating fake videos has fallen to just a few dollars per minute, placing powerful manipulation technology in the hands of anyone with an internet connection.
The consequences have been swift and severe. Between 2019 and 2023, deepfake-related financial losses totalled $130 million. In 2024, losses approached $400 million. But 2025 has marked a turning point, with more than $1 billion in recorded losses, illustrating a crime wave supercharged by falling creation costs and widely available generative AI.
Deepfake audio has become even cheaper to produce, often costing one to twenty cents per minute, with only three seconds of a person’s voice needed to create a convincing clone. This has enabled a wave of hyper-personalised scams that exploit urgency, fear and emotional manipulation.
In one such case, a woman lost $15,000 after receiving what sounded like a desperate voicemail from her daughter claiming she had been in a car crash, followed by a deepfaked call from a supposed public defender demanding bail money.
The collapse in production costs has also opened the door to entirely new categories of scams. A fast-growing “lost pet” scam trend now sees fraudsters generate AI images of missing animals, contact unsuspecting owners, and demand small payments, sometimes as little as $50, to return pets that never existed. The scammers support their claims with convincingly manipulated photos generated within minutes.
In the corporate world, deepfakes are infiltrating hiring systems. The FBI is investigating a scheme in which more than 100 companies unknowingly hired remote IT workers operating under synthetic identities. These individuals allegedly faked video interviews, audio responses, CVs and communications, funnelling their salaries to foreign governments through the falsified employment.
By far the most damaging category is investment fraud, which accounted for $900 million, or 57 percent of all deepfake-related losses recorded in the dataset. These scams often begin on social platforms including Facebook, WhatsApp, Telegram and YouTube, where deepfaked versions of celebrities, executives or politicians promote fraudulent investment schemes.
Analysts warn that the deepfake crisis is escalating faster than regulatory systems can respond. With generative AI tools becoming more sophisticated, cheaper and easier to use, fraudsters are now able to operate at industrial scale, often across borders and with minimal risk of detection.
The report signals a defining moment for global cybersecurity, as synthetic media becomes one of the most disruptive tools in the modern criminal arsenal. As deepfake technology continues to advance, experts say governments, regulators and technology companies must urgently strengthen detection systems, tighten platform controls, and improve public awareness to slow the accelerating wave of AI-enabled fraud.