As deepfake scams become ever more common, what detection tools can help separate reality from the scammers?
When the French president Emmanuel Macron posted a hair tutorial video and a clip of himself as action hero MacGyver to his social media feeds, the world’s media questioned whether he had been hacked.
In reality, Macron’s montage of content, posted across Instagram, X and TikTok ahead of the AI Action Summit, was being used to highlight a growing cyber threat for politics and businesses alike: deepfakes.
While the videos are simple to detect (in one, he was inserted into a 1980s-Euro disco hit), deepfakes may not be as easy to spot in day-to-day life.
In a high-pressure work environment, this threat can become even more insidious. Imagine an employee receiving a call from someone claiming to be their boss, urgently requesting a favour. In the rush of a busy day, that’s when an employee may unknowingly step into a trap – especially when the call has spoofed the boss’s voice.
According to a study by technology firm Sharp UK, 85% of workers at UK SMBs are increasingly concerned about cyber breaches, yet only a third say they’re confident in spotting such threats.
On the other side, a Signicat report found that only 22% of businesses have implemented measures to prevent AI-driven identity fraud.
The same report found that three-quarters of fraud decision-makers admit they do not have the time to address the problem with the urgency it requires.
Thankfully, cyber security firms are working to tackle the issue in a battle of AI against AI, with tools that can detect when an incoming call uses deepfaked audio.
“We’ve seen deepfakes involved in security breaches, financial frauds, and even political manipulation,” says Nick Knupffer, VP of marketing at Surf Security.
As a result, the cyber security firm has launched its own deepfake detection tool to help defend enterprises, media organisations, police forces and militaries against AI deepfake threats.
The detector, which the firm claims is 98% accurate, works within its own cyber-secure browser, on any audio source playing within it.
For example, if an employee uses a browser version of Slack, Zoom, Google Chat, Microsoft Teams or WhatsApp, they can click a button in the browser to start detecting whether the audio is real or fake. Detection works on both recorded and live audio.
The firm’s CTO, Ziv Yankowitz, explains that the tool is trained using deepfakes created by an AI voice cloning platform.
Similarly, scam-call tracking firm Hiya has also joined the fight for deepfake detection with its own Chrome extension and mobile app.
The firm, which has worked with companies such as Samsung and AT&T to detect scam calls, has launched an AI that actively listens to conversations, analysing the voice to determine whether it’s real, AI-generated, or a recording.
To protect user privacy, the technology deletes the data once the call is over.
“We also have incognito mode, so users can easily turn off features for personal calls while keeping them active for business calls,” explains Hiya’s VP of marketing, Patchen Noelke.
The firm has also launched ‘Hiya AI Phone’, an AI call assistant mobile app that screens unwanted phone calls, safeguards against phone scams, and takes notes during phone calls, too.
“We’re seeing more corporate scams,” says Noelke. “With cheaper access to AI tools, scammers can easily impersonate voices.
This trend is expected to increase, especially with remote and hybrid work environments, where people might not know their colleagues as well in person.”
According to Hiya’s own study, 27% of people in the US have been targeted by deepfakes, with audio the most common form.
“Most people aren’t confident they can detect them. The challenge is that deepfakes are evolving,” he says.
Noelke explains that even as deepfake audio improves, Hiya is using AI and machine learning to keep refining its own technology to detect the unique patterns in sound waves that indicate a deepfake.
“While humans might struggle to tell the difference, AI can spot the subtle machine-like patterns in the audio that signal a deepfake.
This is where AI becomes essential, as detecting deepfakes now requires another layer of AI to catch the machine-made patterns.”
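To give a flavour of what “machine-made patterns” in audio might mean, here is a deliberately simplified sketch – not Surf’s or Hiya’s actual method, and real detectors use far richer features and trained models. It uses spectral flatness, a crude measure of how noise-like a signal’s spectrum is: natural audio tends to be spectrally messy, while an artificially clean signal concentrates its energy in a few frequencies.

```python
# Illustrative toy only: compares the spectral flatness of a noisy signal
# (a stand-in for natural audio) against a pure tone (a stand-in for
# suspiciously "clean" synthetic audio). Real deepfake detectors are far
# more sophisticated; this just shows the idea of a spectral feature.
import cmath
import math
import random

def power_spectrum(samples):
    """Naive DFT power spectrum (slow, but fine for a tiny demo)."""
    n = len(samples)
    spectrum = []
    for k in range(n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        spectrum.append(abs(s) ** 2 + 1e-12)  # epsilon avoids log(0)
    return spectrum

def spectral_flatness(samples):
    """Geometric mean / arithmetic mean of the power spectrum, in (0, 1]."""
    p = power_spectrum(samples)
    geometric = math.exp(sum(math.log(x) for x in p) / len(p))
    return geometric / (sum(p) / len(p))

random.seed(0)
noisy = [random.gauss(0, 1) for _ in range(256)]                   # "natural"
tone = [math.sin(2 * math.pi * 8 * t / 256) for t in range(256)]   # "synthetic"

print(spectral_flatness(noisy) > spectral_flatness(tone))  # noise is flatter
```

In practice, production systems feed many such features (and learned representations) into trained classifiers rather than relying on any single hand-picked measure.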
In the future, Surf Security, for example, is looking to detect video deepfakes as they become ever more realistic.
“As deepfakes improve, it’s getting harder to distinguish between real and fake content.
A year ago, deepfake videos looked terrible, but now they’re much more convincing,” Knupffer concludes.