Deepfake AI: What are deepfakes & how to spot them?




Hello, my name is Becky Sanders. I studied at Central University in downtown USA; my interests include being human, and I really want to be your friend on LinkedIn. This is my profile picture. Can’t you tell I’m real?


Unfortunately, I’m not Becky. In fact, Becky Sanders doesn’t exist. Her profile photo is a deepfake: an artificially created piece of media, such as a photo or video. Sometimes a deepfake shows a person doing something they have never done; other times it is a photo of someone who has never existed, created by blending the faces of many real people. Newer deepfakes can even replicate a person’s voice to make them say something they might never say in real life.

What is a deepfake?

A deepfake is a fake but realistic piece of media – video, audio, or image – created by manipulating existing material into a convincing hoax. Deepfake tools use artificial intelligence (AI) to replicate human features, expressions, and voices, and they can swap one person’s face or voice with another’s to make it appear that someone said or did something they never actually did.

The end products often look very realistic, because many deepfake tools are powered by a deep learning technique called a generative adversarial network (GAN). In fact, the term “deepfake” is a portmanteau of deep learning and fake.
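For readers who want to peek under the hood, the sketch below illustrates the GAN idea in miniature. It is a simplified, hypothetical example written for this article – not the code behind any particular deepfake tool, and the network sizes are arbitrary assumptions. A generator network learns to produce images that a discriminator network cannot reliably tell apart from real ones, and each improves by trying to beat the other.

import torch
import torch.nn as nn

# Dimensions below are illustrative assumptions, not values from any real tool.
NOISE_DIM = 100        # size of the random "seed" the generator starts from
IMG_DIM = 64 * 64      # a small flattened grayscale image

# The generator turns random noise into a candidate image.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# The discriminator scores how likely an image is to be real (1) vs. generated (0).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round; real_images is a batch of flattened real faces."""
    n = real_images.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Teach the discriminator to separate real images from generated ones.
    fakes = generator(torch.randn(n, NOISE_DIM)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Teach the generator to fool the just-updated discriminator.
    g_loss = bce(discriminator(generator(torch.randn(n, NOISE_DIM))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

After many such rounds on a large collection of real faces, the generator’s output becomes increasingly hard to tell apart from a genuine photo – which is exactly why deepfakes can look so convincing.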

The technology continues to improve, especially with the development of generative AI tools like ChatGPT, which makes identifying deepfakes simultaneously more difficult and more important. Along with their more benign uses in the form of silly videos and jokes, deepfakes have also been used to support political campaigns, launch online attacks, and perpetrate other scams.  

How to Spot Deepfakes

If you’re worried that a photo or video may be a deepfake, there are some simple tell-tale signs. Specifically, you should look for digital incongruities: areas that look like they don’t belong together. This is similar to children’s “what’s wrong with this picture?” puzzles – for example, a swing with only one rope or a reflection in a mirror facing the wrong way. In deepfakes, the incongruities can be glasses that don’t fully reach the ear, or a beard that doesn’t move together with the face when the person talks. Lips are another common source of errors: deepfakes often have lips that don’t look natural or don’t match the person’s other facial features.

You can also look for digital artifacts – details that are still difficult for computers to generate correctly. These can include facial textures (the face, especially the forehead, might be too smooth), shadows (which might fall in the wrong place), or unrealistic-looking eyebrows.
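For the technically inclined, “too smooth” can also be estimated rather than judged by eye. The snippet below is a rough sketch under assumptions made for this article – it is not a Norton tool, the face box is expected to come from whatever face detector you already use, and the cutoff value is arbitrary. The variance of the Laplacian over a face crop is a common proxy for how much fine texture a region contains, and heavily smoothed, generated skin tends to score very low.

import cv2

TEXTURE_FLOOR = 50.0   # assumed cutoff: below this, the region has suspiciously little detail

def texture_score(image_path: str, face_box: tuple[int, int, int, int]) -> float:
    """Laplacian variance of a face region; higher means more fine texture."""
    x, y, w, h = face_box                                   # (x, y, width, height) from a face detector
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    face = gray[y:y + h, x:x + w]
    return cv2.Laplacian(face, cv2.CV_64F).var()

def looks_too_smooth(image_path: str, face_box: tuple[int, int, int, int]) -> bool:
    return texture_score(image_path, face_box) < TEXTURE_FLOOR

A low score is not proof of a fake – heavy beauty filters can produce the same effect – but it is a good reason to look more closely.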

Photo Credit: AP Photo

Finally, in animated videos, pay attention to blinking. Does the person appear to blink too little or too much? 
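Blink rate is another cue that can be roughly estimated in code. The sketch below is an illustration only – the landmark input, threshold, and “normal” range are assumptions, and this is not a Norton detection tool. The eye aspect ratio (EAR), computed from six eye landmarks, drops sharply when the eye closes, so counting those dips gives an approximate blink rate to compare against a typical human range.

import numpy as np

EAR_CLOSED = 0.21                 # assumed threshold: below this, treat the eye as closed
NORMAL_BLINKS_PER_MIN = (8, 30)   # assumed rough range for a real person on camera

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, from any face-landmark detector."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame: list[float]) -> int:
    """Count closed-then-open transitions across a video's per-frame EAR values."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < EAR_CLOSED:
            closed = True
        elif closed:              # the eye has reopened: that's one blink
            blinks += 1
            closed = False
    return blinks

def blink_rate_is_odd(ear_per_frame: list[float], fps: float) -> bool:
    """Flag the clip if the person blinks far more or less often than usual."""
    minutes = len(ear_per_frame) / fps / 60.0
    rate = count_blinks(ear_per_frame) / max(minutes, 1e-9)
    low, high = NORMAL_BLINKS_PER_MIN
    return rate < low or rate > high

An odd blink rate doesn’t prove anything on its own, but it’s a useful prompt to examine the rest of the video more carefully.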

What Kind of Deepfakes Exist?

Early deepfakes focused on swapping faces in videos – for example, replacing one actor’s face with another person’s in a movie. Though not very convincing, these deepfakes had severe negative consequences when they were deployed without people’s consent, especially because many people are still unaware that deepfakes can be created at all.

AI researchers quickly realized that this technology could be used to impersonate world leaders and other influential figures. In 2018, they demonstrated a proof of concept: a deepfake video of former President Obama warning about the dangers of deepfakes!

 

However, in this video the comedian Jordan Peele voiced Obama, because convincingly cloning a voice remains one of the hardest parts of a deepfake. Even with voice-cloning tools like Lyrebird, the generated audio can sound metallic if you listen closely.

Using the tips above, you can see that this video is fake. For example, former President Obama’s forehead is abnormally smooth. Next, the wrinkles around his mouth do not move naturally as he speaks. Finally, the shadows on his cheekbones appear unnatural compared to the direction of the light in the rest of the video. 

Deepfakes have also been used to generate fake profile pictures for sockpuppet accounts on social media. For example, a LinkedIn profile picture was generated for “Katie Jones,” a persona who does not exist. The same technique has been used to create armies of sockpuppet accounts on Twitter and YouTube.

Photo Credit: PC Magazine

More recently, deepfakes have been deployed in Russia’s invasion of Ukraine, including a forged video of President Volodymyr Zelensky surrendering. This deepfake was planted on a hacked Ukrainian news website and widely disseminated from there. Again, if you know what to look for, it is obvious the video is a fake: the beard does not move consistently with the face, and the skin tone of the neck does not match that of the face. You can also see that the face is cropped awkwardly to keep the forehead and hairline out of view – the areas where fakes are easiest to detect. Facebook has since identified and taken down the video.

Other deepfakes have been deployed in this conflict as well, for example one of President Putin announcing peace with Ukraine. Once again, the forehead is unnaturally smooth, the movement of the cheeks and mouth looks wrong, and the hairline does not look realistic.

Future of Deepfakes

Going forward, we can expect deepfakes to be integrated into broader information warfare and the wider disinformation landscape. The Zelensky video in particular combined hacking, deepfake creation, and dissemination through state-sponsored propaganda channels and networks of social media bots.

We can also anticipate the use of deepfakes in catfishing, phishing, and other attacks that pair traditional cyberthreats with the manipulation of human emotion.

To counter these threats, we must take an equally holistic approach: secure our endpoints and servers with cutting-edge cybersecurity, detect deepfakes using both computer-assisted and manual methods, and stop the spread of disinformation by bots and sockpuppets on social media.

Oliver Buxton
Cybersecurity writer
Oliver Buxton is a Prague-based cybersecurity writer focused on the social impacts of advanced persistent threats. Oliver’s work on cyberterrorism and cyberwarfare has been published in The Times, and he previously worked on digital safeguarding policy for higher education institutions.

Editorial note: Our articles provide educational information for you. Our offerings may not cover or protect against every type of crime, fraud, or threat we write about. Our goal is to increase awareness about Cyber Safety. Please review complete Terms during enrollment or setup. Remember that no one can prevent all identity theft or cybercrime, and that LifeLock does not monitor all transactions at all businesses. The Norton and LifeLock brands are part of Gen Digital Inc. 
