Deepfakes are a new addition to the cyber attacker's arsenal, and the technology is making it easy for malicious groups or individuals to exploit people’s trust.
The creation and malicious use of deepfake images and videos has increased significantly in recent years. Recent statistics indicate that instances of deepfake fraud material, such as fabricated videos of public figures, politicians, or celebrities, surged by 3,000% in 2023. Despite this, many business owners are still unaware of how this technology works, the risks it poses to their organisations, and how to mitigate that threat.
Deepfakes explained
Deepfakes are digitally manipulated media in which an individual's likeness, including their voice, face, or body, is altered to make them appear to be another person. Recent high-profile cases have brought this technology into sharp focus. These include a deepfake audio of Keir Starmer which purported to capture the then opposition leader abusing party staffers, videos of Nigel Farage, Boris Johnson and Rishi Sunak playing Minecraft together, and a computer-generated video of Martin Lewis created to solicit money for a fake investment scheme.
While some examples are a more serious threat to businesses and the public than others, the readily available and easy-to-use nature of the technology means that it has become an emerging tool for cyber attackers. These criminals are using it in a variety of ways to scam, groom and mislead people, from creating fake news and hoaxes to committing financial fraud.
While instances of deepfakes are increasing, a separate study from McAfee also revealed that 70% of people aren’t ‘confident’ they are able to tell the difference between real and cloned voices. These are concerning figures which highlight the scale and dangers of deepfakes. If people can easily gain access to software which creates this material, and individuals on the other side of the fence aren’t able to spot these scams, we have a big problem.
Why are deepfakes so hard to spot?
In recent years, artificial intelligence has advanced at an impressive pace. Just a few years ago the technology felt like a far-off, futuristic concept, but now it is being used by businesses worldwide. It can speed up processes, improve business efficiency and even forecast future events. However, it can also aid in the creation of deepfakes, making the content more sophisticated and more difficult to detect.
By using AI, deepfakes can now be made with as little as three seconds of audio to mimic a person's voice. And what makes these types of attacks even more convincing and dangerous is that, with the appropriate equipment, these alterations can be broadcast live, allowing for real-time interactions with cyber criminals who are posing and appearing as another person entirely.
The danger of this to businesses is vast, as are the ways in which deepfakes can be used by malicious attackers. For example, cyber criminals could FaceTime the finance team of a business, posing as the CEO, and ask them to transfer money to a fraudulent bank account, or even ask the HR department to share sensitive employee information which is then used to extort the company.
Even on a more basic level, attackers could send voice notes impersonating a senior leader's voice or share videos confirming their identity. All they need is a recording of the leader's voice and a photo of their face, which for many prominent business leaders are freely available online.
These avenues of attack are so difficult to spot because people aren’t always looking for the threat. If a C-suite leader or board member gets in touch with an employee in a more junior position, there are a number of factors in play which would make it hard to uncover the deception.
Firstly, this employee might not have much contact with senior leadership, so they are unfamiliar with the way they work, speak and operate, and therefore fail to spot unusual behaviour. Secondly, because it is their boss, they are unlikely to make a fuss, worried that asking a CEO to authenticate their identity will be frowned upon.
How can I mitigate the risk of deepfakes to my business?
Due to the nature of deepfake attacks, cyber controls and software are not always the most beneficial protection. Deepfakes rely on human error to succeed, as do many cyber-attack vectors. However, unlike a ransomware attack, for example, the attacker doesn't necessarily need to access your network. Instead, they could ask someone to share sensitive information with them via a 'new' or 'personal' email address, something network and web monitoring tools are unable to detect.
Even with basic cyber hygiene training, employees can learn the signs of a deepfake attack, which is why staff training and upholding strict cyber and data processes are your key line of defence against this rising threat.
Multi-factor authentication (MFA) can also provide another layer of protection. This involves verifying a request through more than one channel before fulfilling it. For example, if a CEO asks the finance team to transfer funds, there should be other authenticators, such as a code sent to an 'authorised' mobile phone or email, which needs inputting before the request can be carried out.
It means that any odd requests from attackers posing as senior leadership will need to be verified a number of times, which should give your team the opportunity to spot the scam. It also provides peace of mind for staff, who know that even if they are targeted, there are preventative measures in place to ensure any malicious activity is rooted out.
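The out-of-band check described above can be sketched in a few lines. This is a minimal illustration, not a production design: the workflow, the six-digit code format, and the function names are assumptions for the example, and a real deployment would use an established MFA product rather than hand-rolled code.

```python
import hmac
import secrets


def issue_code() -> str:
    """Generate a 6-digit one-time code to send to the pre-registered
    ('authorised') device, e.g. by SMS to the CEO's known phone number."""
    return f"{secrets.randbelow(1_000_000):06d}"


def verify_code(expected: str, supplied: str) -> bool:
    """Compare the code the requester enters against the one sent
    out-of-band, using a constant-time comparison to avoid timing leaks."""
    return hmac.compare_digest(expected, supplied)


# Hypothetical flow: a transfer request is held until the code sent to
# the authorised device is entered correctly by the requester.
code = issue_code()
print(verify_code(code, code))      # genuine requester has the code
print(verify_code(code, "999999"))  # an impersonator guessing fails
```

The point of the design is that the code travels over a channel the attacker does not control: even a convincing deepfake voice on a call cannot produce the number that was sent to the real leader's registered phone.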
Deepfakes are just the newest addition to a cyber attacker's arsenal, and the technology is making it easy for malicious groups or individuals to exploit people’s trust and extort organisations within every sector. But, with the right training and procedures in place, the risk of deepfakes can be significantly reduced.
Tom Kidwell is an ex-Army and UK Government Intelligence specialist, cybersecurity expert, and the Co-founder & Director of Ecliptic Dynamics