
How to Spot Deepfakes and Fake News: Tips and Detection Tools

What is a deepfake? In essence, it’s a video altered using the latest AI showing someone saying or doing something they never said or did. Quality runs the gamut, but the best deepfakes are virtually indistinguishable these days, posing potential threats to our democracy and the trust we have in media institutions around the world. Here are tips on how to spot deepfakes and fake news.

Let’s start with a quick scenario that describes the habits of millions of Americans — if not millions across the world. You’re on lunch break, mindlessly scrolling through your Facebook or Twitter feed, lapping up the news of the day. Suddenly, you come across a video shared by a friend or acquaintance. Possibly even someone you don’t know.

The headline is unbelievable. You watch the video twice. It feels 100 percent real and passes every sensory test imaginable. The content itself? It shows someone saying something outrageous, politically damaging or caught in a hot-mic moment. It’s too good to pass up and just dropped, so you can be one of the first to share it. Social media gold.

You’d normally wait to verify the content’s original source, but you don’t have time. And because it reinforces notions you already believe in, you skip any honest vetting; after all, your eyes and ears don’t lie, and hey, you have a team meeting in 10 minutes. You share it, because why not, to hear the disbelief pour in, one comment at a time.

Thousands of others have done this at the same time — spreading the video to hundreds, thousands, if not millions of people at once. In minutes. On a whim.

Only thing is: The video is fake. Bogus. A fraud. But nobody knows it yet because the quality is superb — it’s passed through everyone’s B.S. detectors and collective common sense. But now, instead of being a friend in-the-know who scooped everyone else with this unbelievable story, you’ve essentially become the ‘Outbreak’ monkey. Digital edition.

Of course, you pull it down eventually after finding out it’s fake (if that fact is even verifiable), but it’s already generated tons of engagement and left an indelible mark on everyone who consumed, commented on and shared the video before the discrediting ever happened. And who knows, they might never find out it was discredited at all… Damage done.

To this hypothetical scenario — which unfortunately happens all the time — we all let out a collective gasp.

Welcome to the era of deepfakes. Because this is where we are. As a culture, a society, a human race.

Deepfakes are no longer just a game for tech tricksters skilled in the use of Adobe Photoshop or After Effects. The technology has gotten easier to use, more accessible and automated to the extent that algorithms do much of the work for you. Lately, fears have grown over their ability to do everything from weakening democracy to sparking doubt ahead of elections.

Curious how real they’ve gotten? Check out these people who don’t actually exist. Each face is a total fake, generated using the kind of AI employed to render deepfakes: generative adversarial networks (GANs).
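
For the technically curious, the adversarial idea behind GANs can be sketched in a few lines. What follows is a deliberately tiny toy, not a real image model: the numbers and names are invented for illustration, and real GANs pit two deep neural networks against each other over millions of images. Still, the core feedback loop is the same: a “generator” keeps adjusting its output until a “discriminator” can no longer tell it apart from real data.

```python
import random

random.seed(0)
REAL_MEAN = 4.0  # the "real data" the generator must learn to mimic

def discriminator(sample, estimate):
    """Scores how 'real' a sample looks: the closer it sits to the
    discriminator's current estimate of real data, the higher the score."""
    return -abs(sample - estimate)

def train(steps=2000, lr=0.05):
    gen = 0.0       # the generator's single parameter (its output)
    estimate = 0.0  # the discriminator's running estimate of real data
    for _ in range(steps):
        real = random.gauss(REAL_MEAN, 0.1)
        # The discriminator improves its model of what real data looks like.
        estimate += lr * (real - estimate)
        # The generator nudges its output whichever way fools the
        # discriminator more (i.e. scores higher).
        if discriminator(gen + lr, estimate) > discriminator(gen, estimate):
            gen += lr
        elif discriminator(gen - lr, estimate) > discriminator(gen, estimate):
            gen -= lr
    return gen

result = train()  # ends up close to REAL_MEAN
```

Scale that adversarial tug-of-war up to deep networks and photographs, and you get faces like the ones linked above.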

So, how do we avoid deepfakes, cheap fakes (the less sophisticated counterparts of deepfakes) and the spread of fake news in an era where seeing isn’t necessarily believing? Especially when social media behemoths Twitter, Facebook and Facebook-owned Instagram are only now starting to address these not-so-fake problems?

It’s true that a great way to prevent the spread of misinformation is through education, restraint and broad awareness of what this threat entails. But that alone might not be enough in the face of bad actors.

Facebook recently announced it will ban deepfakes, but not cheap fakes like the doctored video of House Speaker Nancy Pelosi that circulated last year and garnered millions of views in 48 hours.

How to deter deepfakes from spreading

Much as with not driving drunk or not pointing a loaded handgun, there are things we can do to minimize damage and avoid unintended consequences: actionable tips that can help prevent the spread of doctored content, misinformation and fake news in the digital AI era.

So, let’s get into it.

    1. Do a quick gut check to assess the likelihood that a video is real.
    2. Don’t share unless you’re 100 percent certain of its authenticity.
    3. If you’re still unsure, check online resources designed to prevent the spread of misinformation (which we’ll get into below).
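
The three steps above amount to a simple decision procedure. As a toy illustration (the function name and verdict strings are invented for this sketch; the verdict stands in for a ruling from a fact-checking site):

```python
from typing import Optional

def share_decision(gut_check_passed: bool,
                   certain_authentic: bool,
                   fact_checker_verdict: Optional[str] = None) -> str:
    """Toy triage mirroring the three steps above. The verdict string
    stands in for a fact-checker's ruling ("True", "False", etc.);
    None means you haven't checked yet."""
    if not gut_check_passed:
        return "don't share"
    if certain_authentic:
        return "ok to share"
    if fact_checker_verdict is None:
        return "check a fact-checking site first"
    return "ok to share" if fact_checker_verdict == "True" else "don't share"
```

Note the default path: when in doubt and unchecked, the answer is never “share,” it’s “verify first.”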

How to spot deepfakes

When deepfakes are used purely as entertainment, they can be impressive and amusing, showing how the technology’s powers can be used for good. The harmless ones produce a chuckle for their creativity, even if they possess logical disconnects, poor dubbing and wide-eyed people who can barely blink.

Unfortunately, we’re way beyond that.

The best ones now have edits that are virtually imperceptible… with the potential to fool everyone. If you want to know how far the technology has come, look no further than this eye-opening deepfake video from YouTuber Ctrl Shift Face as an example.

In it, we see actor/comedian Bill Hader on ‘The Late Show with David Letterman’ doing impressions of Tom Cruise and Seth Rogen. As he does their voices, you see Hader’s face seamlessly transform into that of each actor he’s impersonating. The subtlety is rather scary.

If the most common use of deepfakes was of the harmless variety, that would be one thing. But it’s not. At this moment, 96 percent of deepfakes on the web are being used for “non-consensual pornography,” according to a recent study by Deeptrace released in July 2019, which shows where the lion’s share of energy has been directed up to this point.

And newsflash: The phenomenon is only beginning.

In its study, Deeptrace found 14,678 deepfake videos online — up 75 percent from the previous year. This indicates a tipping point with regards to what could be headed our way — a point highlighted in this New York Times piece, which elaborates on the recent breakthroughs that explain why more and more are appearing on YouTube and social media.

Now, artificial intelligence technologies are streamlining the process, reducing the cost, time and skill needed to doctor digital images. These A.I. systems learn on their own how to build fake images by analyzing thousands of real images. That means they can handle a portion of the workload that once fell to trained technicians. And that means people can create far more fake stuff than they used to. — New York Times, “Internet Companies Prepare to Fight the ‘Deepfake’ Future”

Given this newfound prevalence, there’s a heightened need for detection technology to make determinations on these videos — and fast — before people accidentally spread them without knowing any better.

The big question is: How do you spot a deepfake when many of the foremost experts in the field admittedly have a hard time doing it?

The solution: deepfake detection tools

With these videos becoming so real, there’s an onus on people to tell the difference should a questionable piece of content drop. And it will. That’s why companies such as Deeptrace have designed deepfake detection tools to identify when a video has been altered using “proprietary detection technology leveraging the latest advances in deep learning and video forensics.”

Essentially, it’s a safeguard that could enable everyone from trusted brands to news organizations to flag a video as fake before disseminating it. Similarly, there’s Reality Defender 2020 from the AI Foundation, which rolled out FaceForensics as “the first large-scale deepfake detection data set.” Both detection tools identify deepfakes using large data sets.

In a piece written for The Hill, AI Foundation Chief AI Officer and Computer Science Professor Subbarao Kambhampati offered his insights on how the tools work:

For detecting fake videos of people, current techniques focus on the correlations between lip movements, speech patterns and gestures of the original speaker. Once detected, fake media can be added to some global databases of known fakes, helping with their faster identification in the future.
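
To make that correlation idea concrete, here’s a deliberately simplified sketch. Everything in it is invented for illustration (the frame values, the threshold, the function names); real detectors learn these correlations with deep networks rather than a hand-set cutoff. It flags a clip when per-frame speech volume and mouth movement stop tracking each other:

```python
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) *
               sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def looks_dubbed(audio_energy, mouth_openness, threshold=0.5):
    """Flag a clip whose per-frame speech volume and mouth movement
    don't track each other. The threshold is a stand-in for what a
    trained model would learn."""
    return pearson(audio_energy, mouth_openness) < threshold

# In a genuine clip, louder frames tend to show a wider-open mouth:
real_clip = looks_dubbed([0.1, 0.9, 0.8, 0.2, 0.7],
                         [0.2, 0.8, 0.9, 0.1, 0.6])   # not flagged
# In a crude fake, the two signals drift apart:
fake_clip = looks_dubbed([0.1, 0.9, 0.8, 0.2, 0.7],
                         [0.9, 0.1, 0.3, 0.8, 0.2])   # flagged
```

The principle generalizes: the more independent signals (lips, voice, gestures) a fake has to keep consistent at once, the more chances it has to slip up.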

Resources to help companies discern whether a video is real or fake:

  • Deeptrace Labs: Offers a comprehensive deepfake detection tool that “leverages the latest advances in deep learning and video forensics, and elaborates intelligence around the authenticity of visual content.”
  • Reality Defender 2020 from the AI Foundation: Offers “an invite-only submission page” where videos can be entered to be scanned and verified so as to determine “whether content is fake, manipulated, or original” before rendering a report. (Available to journalists).

But as the musician Pink once sang, what about us? Is there a mainstream detection tool for people to use at the local level? The short answer is: Not yet. But companies such as Canada-based Dessa are working on it using machine learning and real-world AI.

Per the New York Times piece:

Dessa recently tested a deepfake detector that was built using Google’s synthetic videos. It could identify the Google videos with almost perfect accuracy. But when they tested their detector on deepfake videos plucked from across the internet, it failed more than 40 percent of the time.

Still, there are things we can do if a deepfake video does happen to squeak through — which they will.

For example:

  • There’s the human pause button, guided by restraint and common sense while a video is being dissected and debated.
  • There’s also fact-checking websites such as the Pulitzer Prize-winning site, Politifact.com, to see if the video content you’re faced with has been ruled “True” or “False” by their Truth-o-Meter. They will even cover the gray areas such as “Half True” and “Mostly False” to show that not everything in the world of truth and fiction is black and white.

Only time will tell how impactful deepfakes will be — and how powerful the tools to detect them will become. “As of now, we lack automated ways to detect deepfakes in a reliable and scalable fashion,” says UC Berkeley Computer Science professor Dawn Song in Chenxi Wang’s piece for Forbes. “It will be an arms race between those that create deepfakes and those [who] seek to detect them.”

How to spot fake news

If you’ve been awake and in America since 2016, the phrase “fake news” might mean different things to you depending on who you are, and what you believe. But despite all our differences, one thing we can likely agree on is that fictitious, fake news stories shouldn’t have a platform in the post-truth era.

Unfortunately, in the last four years, great attention has been paid to the existence of fake news (especially in the months leading up to the 2016 election), but there’s been somewhat less attention paid to how we prevent this from happening again.

The solution: vetting sources and facts

From vetting sources to using fact-checking websites such as FactCheck.org and the aforementioned PolitiFact to authenticate whether a story’s claims are true, there are many ways to confront fake news and avoid its spread.

Another great resource for checking whether a story is legit is Snopes, the internet’s greatest lie detector since 1994. There, the merit of a particular story can be confirmed or debunked (if it hasn’t been already).

Still not sure how to snuff out the fluff? Consider the following a quick litmus test before you click that “share” button.

3 tips for identifying fake news stories (*before sharing):

1. Consider the source.

I can’t count how many times I’ve seen stories in my Facebook feed from news outlets that sound anything but legit: URLs that don’t pass the smell test with their dot-whatever extensions and hastily concocted mastheads. Both are red flags.

If you’re at all in doubt, explore the site to investigate the author and the About Us section, for instance, to read about journalistic credentials, mission, etc. If any of those smell or look fishy, it could be fake. Capital F.
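
As a toy illustration, part of that smell test can even be run programmatically. The suffixes flagged below are illustrative examples, not a definitive blocklist — lookalike domains (a real outlet’s name with an extra suffix bolted on) were a hallmark of 2016-era fake news sites:

```python
from urllib.parse import urlparse

# Illustrative suspect suffixes only -- not a definitive blocklist.
SUSPECT_SUFFIXES = (".com.co", ".news.co", ".xyz")

def url_red_flags(url):
    """Return a list of smell-test failures for a story URL."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    flags = []
    if any(host.endswith(s) for s in SUSPECT_SUFFIXES):
        flags.append("lookalike or unusual domain suffix")
    if parsed.scheme != "https":
        flags.append("no HTTPS")
    if host.count("-") >= 2:
        flags.append("hyphen-stuffed hostname")
    return flags
```

A clean result doesn’t prove a story true, of course; it only means the URL itself raises no obvious flags. The About Us page and author credentials still deserve a look.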

2. Get ahead of headlines.

Headlines are usually your first clue something’s off. The more outrageous or unfathomable a headline is… the greater chance it’s just not true. Before giving it credence, see if you can confirm the story’s presence in other credible places online.

Clickbait-y headlines tend to be somewhat unbelievable. In fact, the headlines are often the full extent of the content, since the stories are fake and meant to make an impression in a news feed alone. If you’re suspicious, click through to see if a well-researched story backs up the headline with substantiated reporting, legitimate quote attribution, or even visual (non-deepfake) proof that what it claims happened, in fact… did. It probably did not.

3. Let mistakes be your guide.

Fabricated stories don’t go through the arduous vetting that long-established media outlets go through because… they’re fake. They also tend to have tells littered throughout in the form of uncharacteristic typos, grammatical errors, and amateurish moves like ALL CAPS.

Journalistic institutions have rigorous standards for what passes muster before a story is published, so the more mistakes you see, the greater the chance the whole story is just one big mistake.

A quick word on fake photos and misleading news…

The best fake photos have fooled millions at a time. Some are relatively amusing, like a pilot taking a selfie at 30,000 feet; others have political ramifications. When it comes to photos, ask yourself who sent the photo, and look for anomalies that could indicate manipulation, such as lighting inconsistencies, uncharacteristic blurriness or logic problems.

If a photo seems particularly incredible, there are things you can do to spot forgeries. That includes doing a reverse image search on Google or using apps such as TinEye to assess legitimacy. For journalists and fact-checkers, there’s also Assembler, a new free experimental tool from Jigsaw and Google Research designed to spot sophisticated trickery.
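
The idea behind that kind of image matching can be illustrated with a “difference hash,” one of the perceptual-hashing techniques near-duplicate image search relies on. This is a bare-bones sketch with made-up pixel grids; a real implementation decodes the image and shrinks it to a small grayscale grid first:

```python
def dhash(pixels):
    """Difference hash over a grayscale pixel grid (rows of equal length):
    each bit records whether a pixel is brighter than its right-hand
    neighbour. Brightness shifts don't change the bit pattern, which is
    why near-duplicates hash alike."""
    return [1 if a > b else 0
            for row in pixels
            for a, b in zip(row, row[1:])]

def hamming(h1, h2):
    """Number of differing bits: a small distance means likely the same image."""
    return sum(x != y for x, y in zip(h1, h2))

original   = [[10, 20, 30], [90, 80, 70]]
brightened = [[12, 22, 32], [92, 82, 72]]  # same scene, uniformly lighter
unrelated  = [[5, 1, 9], [3, 7, 2]]

dist_bright = hamming(dhash(original), dhash(brightened))  # 0: a match
dist_other  = hamming(dhash(original), dhash(unrelated))   # > 0: different
```

That robustness to brightness and compression changes is what lets a reverse image search find the original photo even after it’s been re-saved and lightly edited.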

Here’s another reality of what we’re facing: The misleading use of real footage to tell fake stories.

For example, with the tragic passing of NBA superstar Kobe Bryant and eight others, a video circulated that claimed to show the helicopter crash. In reality, the footage was not of Bryant’s helicopter, but of a tail-spinning aircraft that went down in the United Arab Emirates in 2018.

But the video spread like wildfire, with one Twitter post alone garnering over 3.3M views in just two days. Snopes debunked the story, rating it “Miscaptioned,” meaning the video was wrongly paired with the event. AFP was also able to disprove it using a reverse image search.

All told, it never hurts to maintain a healthy skepticism about the origin of a story. Fake news peddlers like to capitalize on the day’s headlines to plant seeds of doubt while spreading propaganda, panic and fear. Don’t fall prey. Whatever your litmus test is, using your brain before choosing what to share is never a bad thing.

The repercussions of creating deepfakes

Until the world is well-versed and prepared to spot deepfakes of a malicious variety, bad actors will persist in trying. As a result, there are many senators and members of Congress trying to outlaw deepfakes to help stave off the threat.

That said, no federal legislation has passed (yet) to deter people from creating politically driven deepfakes. But there has been movement of late at the state level, and laws have been introduced at the federal level.

The solution: new consequences

In California last October, Governor Gavin Newsom signed AB-730, which prohibits the creation of deepfake videos designed to influence state elections. The law makes it illegal — within 60 days of an election — for anyone to distribute “with actual malice materially deceptive audio or visual media of the candidate with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.”

On the federal level, last summer a bipartisan group of senators introduced the Deepfake Report Act of 2019 to target and cut down on the deepfake video threat. The year before saw the introduction of the Malicious Deep Fake Prohibition Act.

But whether either will become law prior to the 2020 election is anybody’s guess. Deepfake alarm-sounders, such as UC Berkeley professor and digital forensics expert Hany Farid, have been vocal advocates desperately trying to raise awareness of the threat posed by deepfakes.

One of Farid’s most pressing concerns, according to this Washington Post article, is the democratization of tools needed to create deepfakes. “We are outgunned,” warns Farid. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”

That’s probably why the Pentagon’s Defense Advanced Research Projects Agency (DARPA) has dedicated resources to the nonprofit SRI International to develop detection tools, according to TechCrunch. SRI’s tools, like Deeptrace’s, would rapidly identify and curb the release of malicious deepfakes. One approach involves automated assessment that detects logic errors in a video, flagging and disqualifying it quickly to prevent the spread of disinformation. The end game is an attempt to level the playing field, which currently favors the manipulator.

At this point, it’s safe to say that if you create a politically driven deepfake with malicious intent in California at least, you’re breaking the law.

It’s a start, but potentially not enough to prevent damage from being inflicted nationwide prior to an election.

Something perceived as real or even likely to be, if even for a short period, can cause harm. This is why we should all care what’s out there and support the search for a solution, be it through non-profits or crowdsourced measures such as the Deepfake Detection Challenge. Once we have the tools, we’ll all be empowered to stop the growth of deepfakes and fake news.

We all knew robots could one day come for our jobs. But our senses? That’s a new one.

Author note: The art of creating deepfakes is a complicated topic, which in and of itself demands a more in-depth visual explanation. For a more thorough deep dive into the phenomenon, check out this compelling breakdown from CNN Business.

Gregg Rosenzweig

About Gregg

Over the past two decades, Gregg Rosenzweig has spent his career writing, producing and publishing engaging content for American mass consumption in the digital, TV and branded content spaces. From serving as a Creative Director on commercial spots to pitching/winning/executing branded content campaigns for Fortune 100 companies, Gregg's been fortunate to work for (and with) top advertising and digital media agencies... as well as some of the most highly respected publishers across the media landscape.

