Deepfakes break through as business threat

Deepfakes targeting enterprise financial data used to be a hypothetical concern, but that’s no longer the case, as criminal deepfakers now target more than a quarter of all companies, according to a recent survey. About 15% of executives say cybercriminals have targeted their companies’ financial or accounting data using deepfakes at least once in the past year, while another 11% say they’ve seen multiple deepfake scams. Deloitte conducted the survey of more than 1,100 company executives during a May webinar about trusting generative AI.


About half of the remaining executives in the survey either don’t know whether their organizations have been targeted by deepfake scams or say the question isn’t applicable. The number of targeted organizations may even be under-reported, as deepfake scams aimed at financial data are still relatively new, says Michael Bondar, global enterprise trust leader and principal at Deloitte Transactions and Business Analytics. “We’re talking about an entirely new realm of possibilities,” he says.



“When these incidents occur, organizations are not likely to be very loud and verbose about this.”

More deepfakes expected

More than half of those surveyed expect the number of deepfake financial scams to increase in the coming year. To fight them, executives say their companies are communicating with employees, offering training, creating new policies, or deploying new technologies.

But about 10% say their companies are doing nothing, and nearly a third of executives surveyed say they don’t know what their companies are doing or believe the question isn’t applicable. Deepfaked voice calls are becoming more common, but deepfaked video is happening as well, says Mike Weil, digital forensics leader and managing director at Deloitte Financial Advisory Services. When an employee hears a CFO’s voice, or sees a CEO on a video call, most are wired to follow instructions without questioning the request, he notes.

“This takes social engineering to the next level,” he says. “You’re talking to that individual, and they also have a lot of knowledge that you would think is unique to that person. They’re able to interact with you in a way that sounds legitimate.”

At the same time, criminals will increasingly do extensive research on an organization so they sound legitimate when they deepfake voice or video calls, Weil adds. “We’re talking about highly coordinated and sophisticated attacks, where there’s a whole intelligence operation to understand your organization or a client,” he says. “These aren’t random attacks. These are looking for weaknesses within the organization, and it’s a recipe for a lot of money to leave an organization.”

Defense in depth

The defense against deepfake attacks is multilayered, say Bondar and Weil. Employee education and training are important, as is ensuring that executive leadership precisely follows internal processes for activities such as transferring large amounts of money.

In addition, organizations should run internal fire drills to test whether employees fall for deepfake scams, they suggest. Finally, some vendors are using AI to spot AI-generated deepfakes. “This is really a bit of an arms race,” Bondar says.
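The process controls Bondar and Weil describe can be made concrete in code. The sketch below is a hypothetical illustration, not any vendor's product: it encodes a policy in which a large transfer request is only approved after out-of-band verification and sign-off from a second person who is not the claimed requester, no matter how convincing the voice or video on the call sounds. The threshold, field names, and function are all assumptions for illustration.

```python
# Hypothetical policy check: a deepfaked CFO on a call cannot bypass
# out-of-band verification or the second-approver requirement.
from dataclasses import dataclass
from typing import Optional

APPROVAL_THRESHOLD = 10_000  # assumed policy threshold, in dollars


@dataclass
class TransferRequest:
    amount: float
    requested_by: str            # identity claimed on the call or email
    verified_out_of_band: bool   # confirmed via a known-good channel (e.g., callback)?
    second_approver: Optional[str]  # independent human sign-off, if any


def is_transfer_allowed(req: TransferRequest) -> bool:
    """Allow small transfers; large ones require callback verification
    AND a second approver who is not the person making the request."""
    if req.amount < APPROVAL_THRESHOLD:
        return True
    return (
        req.verified_out_of_band
        and req.second_approver is not None
        and req.second_approver != req.requested_by
    )
```

The point of the design is that approval depends on process state, not on recognizing a voice or face: even a perfect impersonation of the CFO fails the check until someone independently calls back on a known number and a second person signs off.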

“It’s an emerging space for companies that are trying to provide the technology needed to protect organizations, but of course, on the other side, malicious actors are also working feverishly to make themselves even more effective in their nefarious agendas.” The survey results don’t surprise Kevin Surace, chairman and CTO of Appvance, a provider of AI-powered software testing tools. Deepfake scams are on the rise, but few executives want to talk about them, he says.

Deepfake voice messages are becoming common, he adds. “Anyone today can create this with no skills,” he says. Interactive deepfake voice calls and fake participants on video calls require more technical knowledge, but they are also happening, he says.

“All three [methods] are on the rise and will reach epic proportions by end of 2025 as the tools to generate them get far easier to access,” Surace says. The problem may be bigger than the Deloitte survey shows, says Nicos Vekiarides, CEO of Attestiv, a provider of deepfake detection technology. A survey by Medius, a provider of AI tools for finance professionals, found that over half of its audience in the US and UK had been targeted by deepfake scams.

“While deepfakes have become ubiquitous in the political and social media scene over the past few months, they have started to take a far costlier toll in the financial fraud arena,” he says. “Through deepfakes, identity theft and wire fraud have taken a new turn and can victimize any company or any individual.”