How to tell if a voice call is AI or not

by CybrGPT

There was a time when we could believe everything we saw and heard. Unfortunately, those days are probably long gone. Generative AI (GenAI) has democratized the creation of deepfake audio and video, to the point where generating a fabricated clip is as easy as pushing a button or two. This is bad news for everyone, including businesses.

Deepfakes are helping scammers bypass Know Your Customer and account authentication checks. They can even enable malicious state actors to masquerade as job candidates. But arguably the biggest threat they pose is financial/wire transfer fraud and the hijacking of executive accounts.

Organizations underestimate the deepfake threat at their peril. The British government claims that as many as eight million synthetic clips were shared last year, up from just 500,000 in 2023. The real figure may be far higher.

How attacks work

As an experiment by ESET Global Security Advisor Jake Moore has shown, it has never been easier to launch a deepfake audio attack on your business. All it requires is a short clip of the victim to be impersonated. GenAI will do the rest. Here’s how an attack might proceed:

  1. An attacker selects the person they’re going to impersonate. It might be a CEO, a CFO or even a supplier.
  2. They find an audio sample online – which is quite easy for high-profile executives who regularly speak in public. It might come from a social media account, an earnings call, a video/TV interview or any number of other sources. A few seconds of footage should be enough.
  3. They select the person to call. This might require some desk research – usually scouring LinkedIn for IT helpdesk staff, or finance team members.
  4. They might call the individual directly, or send an email in advance – for example, a CEO requesting an urgent money transfer, a password/multi-factor authentication (MFA) reset request, or a supplier demanding payment for an overdue invoice.
  5. They call the pre-selected target, using GenAI-generated deepfake audio to impersonate the CEO/supplier. Depending on the tool, they may stick to pre-scripted speech, or use a more sophisticated “speech-to-speech” method where the attacker’s voice is translated in near real time to that of their victim.

Hearing is believing

This type of attack is getting cheaper, easier and more convincing. Some tools are even able to insert background noise, pauses and stammers to make the impersonated voice sound more believable. They’re getting much better at mimicking the rhythms, inflection and verbal tics unique to every speaker. And when an attack is launched over the phone, AI-related glitches may be harder for the listener to pick up.

Attackers may also use social engineering tactics, such as creating pressure on the listener to respond urgently to their request, in order to achieve their goals. Another classic is to urge the listener to keep the request confidential. Add to that the fact that they’re often impersonating a senior executive, and it’s easy to see why some victims are duped. Who would want to get into the CEO’s bad books?

That said, there are ways for you to spot a faker. Depending on how sophisticated the GenAI they’re using is, it may be possible to discern:

  • An unnatural rhythm to the speaker’s speech
  • An unnaturally flat emotional tone to the voice
  • Unnatural breathing, or even breath-free sentences
  • An unusually robotic sound (when they use less advanced tooling)
  • Background noise which is either strangely absent or too uniform
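Real detection tools rely on trained models, but a toy illustration of one acoustic parameter such a tool might examine is spectral flatness, which separates noise-like content from overly tonal, "too clean" audio. The sketch below (pure standard library, illustration only, not an actual deepfake detector) computes it for a pure tone versus white noise:

```python
import math
import random

def spectral_flatness(samples):
    """Geometric mean / arithmetic mean of the power spectrum.
    Near 1.0 for noise-like signals, near 0.0 for pure tones."""
    n = len(samples)
    power = []
    # Naive DFT power spectrum (fine for a short demo window)
    for k in range(1, n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append(re * re + im * im)
    geo = math.exp(sum(math.log(p + 1e-12) for p in power) / len(power))
    arith = sum(power) / len(power)
    return geo / arith

n = 256
tone = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]  # pure tone
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(n)]             # white noise

print(f"tone flatness:  {spectral_flatness(tone):.4f}")   # close to 0
print(f"noise flatness: {spectral_flatness(noise):.4f}")  # much higher
```

A commercial detector would combine many such features (and far more sophisticated ones) inside a trained classifier; no single number is reliable on its own.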

Time to fight back

The reason threat actors are putting more of their time into scams like these is simple: the potential rewards on offer. Cautionary tales are steadily accumulating. One of the biggest blunders came way back in 2020, when an employee at a firm in the UAE was tricked into believing that their director had phoned to request a $35m fund transfer for an M&A deal.

Given that deepfake technology has improved significantly in the six years since, it’s worth revisiting some key steps you can take to minimize the chances of a worst-case scenario.

It should start with employee training and awareness. These programs should be updated to include deepfake audio simulations to ensure staff know what to expect, what’s at stake and how to act. They should be taught to spot the tell-tale signs of social engineering and typical deepfake scenarios such as the ones described above. Red teaming exercises should be run to test how well employees are absorbing this information.

Next comes process. Consider the following:

  • Out-of-band verification of any phone-based requests – i.e., using corporate messaging accounts to check with the sender independently
  • Two individuals to sign off any large financial transfers or changes to supplier bank details
  • Pre-agreed passphrases or questions which executives must answer to prove they are who they say they are over the phone
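One way to implement the pre-agreed passphrase check above is to store only a salted hash of the phrase, never the plain text, and compare in constant time. A minimal sketch using Python’s standard library (the `enroll`/`verify` function names are illustrative, not from any product):

```python
import hashlib
import hmac
import os

def enroll(passphrase: str) -> tuple[bytes, bytes]:
    """Store only a salted hash of the pre-agreed passphrase, never the text."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def verify(passphrase: str, salt: bytes, digest: bytes) -> bool:
    """Check a phrase given over the phone against the stored salted hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("guessed phrase", salt, digest))                # False
```

Hashing matters here because a plain-text list of executive passphrases would itself be a prize for an attacker who breaches the helpdesk’s systems.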

Technology can also help. Detection tools exist to check various parameters for the presence of a synthetic voice. Another option, harder to implement, is to limit the opportunities for threat actors to get hold of audio in the first place by limiting executives’ public appearances.

People, process and technology

However, the bottom line is that deepfakes are simple and cost little to produce. Given the potentially huge sums up for grabs for the fraudsters, it’s unlikely that we’ll see the end of voice cloning scams any time soon. A three-pronged approach based around people, process and technology is therefore the best option your organization has to mitigate the risk.

Once a plan has been approved, remember to regularly review it so that it stays fit for purpose, even as AI innovation advances. The new cyber-fraud landscape demands constant attention.

