
Deepfakes have changed the game

Brendan Kotze at Performanta warns: Don’t believe everything you hear

 

As humans, we have always been able to rely on our own hearing and sight as truth, but artificial intelligence and criminal intent have stripped us of that luxury.

 

The arrival of deepfake technology has completely changed the game. The technology has already been used on several occasions to impersonate and falsely incriminate celebrities such as Emma Watson, but it poses a serious security risk to us less famous “normies” as well.

 

Think about what you use voice authentication for every day – bank services, general log-in details, calls to colleagues, or even that weekend call from your kids asking for money. Suddenly, a simple voice recording could spell disaster for individuals and companies if the wrong people get their hands on it.

 

Security has successfully evolved to match the increasing threats to our safety – simple passwords gave way to two-factor PIN codes, then to fingerprint scanners, and then to facial recognition. But now we need to protect our voices. The question on everyone’s mind is: how?

 

An introduction to deepfake audio

Deepfakes are high-quality, AI-generated impersonations of real people. Software systems can generate convincing forgeries of images and video, but audio deepfakes may represent the biggest threat to business security.

 

A research study published in the journal PLOS ONE found that deepfake voices fooled human listeners 25% of the time. Remember that we’ve only scratched the surface of what this technology is capable of: success rates will only increase as deepfake voices mature and become more natural and conversational. Not to mention that a one-in-four hit rate is already more than enough for cyber-criminals.

 

Another factor that makes this new threat so dangerous is that deepfake technology is very accessible. The Emma Watson fakes were created using ElevenLabs’ publicly available tool that allows users to type words and hear them repeated in a human voice.

 

President Biden was targeted with the same technology. Impersonating a celebrity is one thing, but campaigns impersonating one of the most powerful people in the world, the US President, are something else entirely.

 

How will this affect me?

The risk to the general public remains relatively low for the time being, despite some reports of rapid uptake. Deepfake voices sit in the category of highly targeted attacks: even though deepfake tools are widely available, developing sufficiently convincing fake voices still takes time and resources.

 

However, it won’t stay this way for long, given the falling cost and complexity of these attacks combined with their success rate.

 

Simply put, a sufficiently motivated criminal can use these tools for all manner of malicious identity impersonation.

 

Back in 2021, the FBI issued a warning that online criminals "almost certainly will leverage synthetic content for cyber-crime and foreign influence operations in the next 12-18 months." The year before that, thieves stole $35 million from a Hong Kong bank using deepfake voice calls.

 

So, while the threat to the general public is minimal now, there’s no telling how quickly that will change. 

 

Understanding the risks

To stay ahead of the imminent threat, individuals and businesses should familiarise themselves with how they could be targeted using deepfake tech.

 

First there’s access fraud, where deepfake voices can fool (and have fooled) voice authentication systems. Then there’s general impersonation, where criminals use deepfake voice notes and phone calls to instruct unsuspecting employees.

 

Individuals must be aware of how their voices could be used to gain access to their employer’s systems.

 

Business identity compromise (BIC) is a genuine concern: deepfake voices can be used in phishing campaigns to steal credentials, multi-factor tokens, and session tokens. From there, victims – especially business executives – are open to blackmail if a criminal uses fake voices to pressure them, often compelling their involvement in a larger cyber-attack.

 

And finally, we have misinformation. Well-timed voice notes can impact company reputations, stock prices, and negotiations. To put it simply, once a criminally motivated individual has control of your voice, the threats could escalate exponentially.  

 

Get ready to fight back

First things first, companies need to determine where they are most vulnerable to deepfake voice attacks. Answering the following questions should help:

  1. How frequently do personalised attacks, such as spear-phishing, target your people (or people in your sector)?
  2. Do any of your authentication protocols rely on voice?
  3. Do your executives use voice notes?
  4. Does your organisation operate in a multi-region structure where many decisions are made over phone calls?
  5. Does any part of your security environment rely on voice authentication?
  6. What tools are you currently using that store voice recordings – for example, are you recording Teams meetings for company use? How well protected are those recordings and transcriptions?

The answers to those questions will give an organisation a good idea of where it should start; a rough way to work through them is sketched below. From there, businesses can close any gaps with training, security process improvements, and incident response planning.
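As a purely illustrative aid, the checklist above could be encoded as a short self-assessment script. The sketch below is in Python; the question wording, weights and threshold are assumptions made for this example, not a formal risk methodology from Performanta.

```python
# Hypothetical deepfake-voice exposure self-check.
# Questions mirror the checklist above; weights and thresholds are
# illustrative assumptions, not an established scoring methodology.

CHECKLIST = [
    ("Personalised attacks (e.g. spear-phishing) target your people or sector", 2),
    ("Authentication protocols rely on voice", 3),
    ("Executives routinely use voice notes", 2),
    ("Multi-region structure with decisions made over phone calls", 2),
    ("Security environment relies on voice authentication", 3),
    ("Voice recordings (e.g. Teams meetings) stored without strong protection", 2),
]


def assess(answers: list[bool]) -> str:
    """Turn yes/no answers to the checklist into a rough exposure indication."""
    score = sum(weight for (_, weight), yes in zip(CHECKLIST, answers) if yes)
    total = sum(weight for _, weight in CHECKLIST)
    if score >= 0.6 * total:
        verdict = "High exposure: prioritise training, process changes and incident response planning."
    elif score > 0:
        verdict = "Moderate exposure: close the flagged gaps before attackers mature."
    else:
        verdict = "Low exposure today, but revisit as deepfake tooling improves."
    return f"Score {score}/{total}. {verdict}"


if __name__ == "__main__":
    # Example: voice notes and recorded Teams meetings in use, no voice authentication.
    print(assess([True, False, True, False, False, True]))
```

In practice the weights would be tuned to an organisation’s own risk appetite; the point of a script like this is simply to make the answers comparable across teams and regions.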

 

Deepfake technology is still relatively new, but it evolves fast. 

 

However, the same goes for deepfake detection tools. Managed detection and response partners are ready to help deploy detection capabilities as soon as possible, without unnecessary expense. They are motivated to invest in such developments and can use multi-tenant scale to lower costs. 

 

All in all, there’s no need to jump to panic stations just yet. Deepfake voices might be a potent tool for cyber-crime, but attacks that use them are still sporadic and require considerable effort and resources.

 

Most cyber-attacks are opportunistic and scattershot. Organisations are still far more likely to be attacked through email phishing, a lack of basic security precautions, or business email compromise. Maintaining a sufficient level of security now will only serve them better in the future.

 


 

Brendan Kotze is CDO at Performanta

 

Main image courtesy of iStockPhoto.com

 

This article was originally published on teiss.co.uk
