Deepfake and Cybersecurity: A New Threat?

In August 2022, Patrick Hillmann, then Chief Communications Officer at Binance (he left the company in July 2023), reported in a blog post that attackers had created a deepfake "AI hologram" of him and used it to hold virtual meetings with unsuspecting participants who believed they were speaking with Hillmann himself.

This is the kind of scary stuff that deepfake technology is capable of, and cybersecurity companies are definitely on alert. VMware's 2022 Global Incident Response Threat Report sounded the alarm on a surge in deepfake attacks because cybercriminals have realized it's an easy way to get around existing security measures.

In this post, we explore how deepfake technology is becoming increasingly concerning as a new cyber threat.

What is a deepfake?

Deepfake technology uses artificial intelligence algorithms, particularly deep learning techniques, to generate realistic-looking videos and images of people. It can be used to create videos that depict a person in a scene doing something they never did or saying something they never said.

Call it a master of hoaxes: an AI that can imitate and create an image of anything and anyone without much trouble. That's where the technology's name comes from, after all; it is a combination of "deep learning" and "fake", because you use the former to produce the latter.

What makes deepfake so frightening?

Beyond still images, deepfakes can also include generated audio, and combining the two is what often creates the most trouble. And no, we are not talking about collages. Think about it: what is a video? It's thousands upon thousands of images stitched together, creating the illusion of movement. Add a sound recording on top of that and you have a perfectly functioning video.

Deepfake tools can generate all of those images, put them in order, synthesize whatever sounds you want, and suddenly you have a video of anyone doing anything and saying whatever you want them to say.

Starting to see how this might be terrifying? Do you want to see Bugs Bunny run for president and give a speech about the importance of champagne? You can do that. Do you want to see the current president dive into a rabbit hole and run away from a hunter trying to catch him? Do you even want to hear the president say "What's up, Doc"? All of this is possible with deepfakes, and therein lies the danger.

Deepfake and cybersecurity

While some deepfake use cases are more lighthearted than others, and not all of them require access to private information to achieve their goal, many can be serious cyber threats. In other words, your digital assets could become a dangerous attack vector.

Here is how deepfake is quickly turning into a whole new cyber threat.

1. Fraud

With the ability to change one's voice through deepfake tools, malicious actors can join virtual meetings and phone calls while pretending to be colleagues.

They can end up accessing private information or even convincing employees to carry out fraudulent activities on the scammers' behalf.

And with the ability to create realistic videos, attackers can pressure employees by sending them a video of someone higher up, such as the CEO, giving instructions for a task that turns out to be part of the fraud.

A few years ago, the CEO of an energy company fell victim to one of the best-known deepfake scams to date. The attackers used an AI-generated voice over the phone to make him believe he was speaking with an executive of the company's parent firm, but that was far from the truth.

This resulted in the CEO authorizing the transfer of over €220,000 to a bank account that supposedly belonged to a "Hungarian supplier".

Also read: What is CEO Fraud? Essential Guide to Prevention and Protection Strategies

2. Brand damage

Unethical competitors or other people with malicious intentions can create fake video and/or audio that makes a target organization look bad, usually by fabricating claims about things such as horrible working conditions, bribes, badmouthing of customers, and other similar scenarios.

After they have created their fake recording, they either upload it to a popular platform like YouTube or sell it to a news outlet. This way, they benefit both from bringing a brand down and from getting paid at the same time.

For example, a deepfake video could be created to make it appear as if a company's CEO is making a controversial statement, or that a company's product is causing harm. This can lead to decreased public trust in the brand and potentially harm the company's financial performance. Deepfakes can also be used to impersonate a brand's representative or to create fake promotions and discounts, damaging the brand's reputation by creating confusion among customers.

3. Propaganda

Remember how we talked about making the president say "What's up, Doc"? Now imagine actually having them say something serious.

Imagine we had them say that the money they raised for charity X was used for a brand-new yacht instead. Or what if there was a recording where the president explained his future plans and they were nothing like what he promised?

Deepfakes are excellent at making you believe false statements like that. Of course, the more outrageous the lie, the less believable it is, but the people creating the deepfakes know that. That's why they create videos and phone recordings that are just believable enough to make you stop and wonder.

And then there is the other way around. What if a candidate created a fake recording in which they are praised and thanked by a big organization or charity for something they never actually did? Suddenly you can appeal to the masses with lies created out of thin air.

4. Biometrics

Eye and face recognition in general is easier than ever to bypass. With our faces constantly exposed and appearing in so many pictures, they have become one of the biometric identifiers that deepfakes can most readily exploit.

All the attackers need to do is create realistic deepfake images and videos of an individual's face. To counter this, security systems that rely on biometrics need to be improved, and fortunately many companies at the forefront of cybersecurity are already working on it.

Also Read: Key cybersecurity trends

How can businesses stay safe from deepfake cyberattacks?

With deepfakes improving at such a rapid rate, and with many people using them with ill intent, we all need to be aware of both the threat and the ways to prevent it in the first place. For businesses, these techniques can help:

1. Training

Cybersecurity awareness training is perhaps the most effective way to keep deepfake-inspired threats out of your organization. Make a point of educating your employees about deepfake technology and how it can be used to spread false information, impersonate individuals, and manipulate public opinion. The goal is to equip them to identify and report suspicious deepfake content. Include the use of deepfake detection tools as well as general media literacy.

2. AI assistance

Since deepfakes are a creation of AI, it is only fitting to use AI to check for anomalies in what we see and hear, in case we cannot catch them ourselves.

A lot of cybersecurity software now implements liveness tests alongside biometric identification: checks that verify whether a media sample comes from a live person or is a generated image or video.
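As a rough illustration of what that plumbing can look like, here is a minimal Python sketch that samples frames from a recording and aggregates per-frame "fake likelihood" scores into a single verdict. The scoring function is only a toy stand-in (a crude texture check) for whichever commercial or open-source detector you actually deploy, and the file name and threshold are placeholders chosen for the example.

```python
# Minimal sketch: sample frames from a video and aggregate per-frame
# "fake likelihood" scores into a single verdict. The scorer below is a
# placeholder for a real deepfake/liveness detection model; only the
# surrounding plumbing is illustrated.
import cv2          # pip install opencv-python
import numpy as np

def score_frame(frame_bgr: np.ndarray) -> float:
    """Toy stand-in for a real detector, kept only so the example runs.

    Over-smooth, low-texture frames score closer to 1.0 (suspicious).
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a crude sharpness/texture measure.
    texture = cv2.Laplacian(gray, cv2.CV_64F).var()
    return float(np.clip(1.0 - texture / 500.0, 0.0, 1.0))

def assess_video(path: str, every_nth: int = 15, threshold: float = 0.7) -> dict:
    """Score every Nth frame and flag the clip if the mean score is high."""
    cap = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_nth == 0:
            scores.append(score_frame(frame))
        index += 1
    cap.release()
    mean_score = float(np.mean(scores)) if scores else 0.0
    return {"frames_scored": len(scores),
            "mean_score": mean_score,
            "flag_for_review": mean_score >= threshold}

if __name__ == "__main__":
    # Hypothetical file name used for illustration only.
    print(assess_video("incoming_meeting_recording.mp4"))
```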

Read more: Machine Learning and Artificial Intelligence in Cybersecurity

3. Visual analysis

You can use visual analysis of the content you are viewing to detect if anything is out of the ordinary.

For example, a deepfake might blink far less than a real person, blink oddly, or sometimes not blink at all. Other times the giveaway is an unusual eye or hand movement, and image and video quality can also be analyzed for indicators of a potential deepfake. You can see an example of an AI-generated speaker in the video for our article "Common Network Vulnerabilities and Threats".
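To make the blink check concrete, the sketch below estimates a speaker's blink rate from a clip using the eye aspect ratio (EAR) over MediaPipe FaceMesh landmarks. The landmark indices are the ones commonly cited for the right eye, and the 0.20 "eye closed" threshold and clip name are assumptions for illustration; an unusual blink rate is only one weak signal, never proof on its own.

```python
# Minimal sketch: count blinks via the eye aspect ratio (EAR) computed
# from MediaPipe FaceMesh landmarks, then report blinks per minute.
import cv2                     # pip install opencv-python
import mediapipe as mp         # pip install mediapipe
import numpy as np

# Commonly used FaceMesh indices for the right eye:
# outer corner, inner corner, two upper-lid points, two lower-lid points.
RIGHT_EYE = [33, 133, 160, 158, 144, 153]
EAR_CLOSED = 0.20              # rough threshold; tune for your footage

def eye_aspect_ratio(pts: np.ndarray) -> float:
    corner_l, corner_r, up1, up2, low1, low2 = pts
    vertical = np.linalg.norm(up1 - low1) + np.linalg.norm(up2 - low2)
    horizontal = 2.0 * np.linalg.norm(corner_l - corner_r)
    return vertical / horizontal

def blink_rate(path: str) -> float:
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, eye_closed, frames = 0, False, 0
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            lm = result.multi_face_landmarks[0].landmark
            pts = np.array([[lm[i].x, lm[i].y] for i in RIGHT_EYE])
            ear = eye_aspect_ratio(pts)
            if ear < EAR_CLOSED and not eye_closed:
                blinks, eye_closed = blinks + 1, True   # eyelid just closed
            elif ear >= EAR_CLOSED:
                eye_closed = False                       # eyelid reopened
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

if __name__ == "__main__":
    # Hypothetical file name used for illustration only.
    rate = blink_rate("speaker_clip.mp4")
    print(f"~{rate:.1f} blinks/min (humans typically blink around 15-20/min)")
```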

4. Protected conversations

Conversation protection is one of the easiest ways to save your business from the threat of deepfake attacks during online meetings and phone calls.

This could include having a secret password shared among everyone who is supposed to join a virtual meeting and requiring them to enter that password before gaining access to the meeting room.

Ask those joining the meeting to safeguard their passwords and to ensure that their email accounts, or whatever medium the passwords are normally sent through, are properly protected.

In the case of phone calls, you can have a question, password, catchphrase, or even a small scripted dialogue between participants to prove that everyone is real.
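One way to formalize the shared-password idea is a simple challenge-response check in which the secret itself never travels over the possibly spoofed channel. The Python sketch below assumes the shared secret was distributed out of band beforehand; the secret value and function names are placeholders for illustration, and this complements rather than replaces your meeting platform's own authentication.

```python
# Minimal sketch: prove a participant holds a pre-shared secret without
# sending the secret over the (possibly spoofed) channel. The challenger
# sends a random nonce; the responder replies with an HMAC of that nonce
# keyed by the shared secret.
import hashlib
import hmac
import secrets

# Assumption: this value was exchanged securely out of band beforehand.
SHARED_SECRET = b"exchange-this-out-of-band-beforehand"

def make_challenge() -> str:
    """Challenger side: generate a one-time random challenge."""
    return secrets.token_hex(16)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """Responder side: prove possession of the secret."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """Challenger side: constant-time comparison against the expected answer."""
    expected = hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    challenge = make_challenge()   # read aloud or pasted into the meeting chat
    answer = respond(challenge)    # computed by the other participant
    print("Participant verified:", verify(challenge, answer))
```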

The US Department of Homeland Security advises people to watch for various indicators that a piece of media may actually be a deepfake. For images and videos, look for changes in skin tone at the edges of the face and suspicious effects such as cropping around the mouth, eyes, and neck. For audio, listen for clues such as discontinuous sentences and a lack of contextual relevance. Be sure to review the full list of deepfake signs in their document.

Check out the Department of Homeland Security's deepfake resource.

Conclusion

Computer scientist Ian Goodfellow is credited with designing the machine learning framework that is the foundation of the deepfake concept: generative adversarial networks (GANs).

His groundbreaking technology has now attracted both positive and malicious users, just like any other technology. Take the internet, for example: most of us rely on it, yet it also provides the very platform that cybercriminals operate on. Nonetheless, the technology itself can't be blamed; its benefits far outweigh its detriments. As such, our job in business is to be prepared to counter any new threats that arise, such as deepfakes. Now is the time to take action and ensure your organization isn't vulnerable to deepfake-related cyber attacks.
