CPS 2019: Hollywood Better Prepare Now for Deepfakes, Keynoter Says

UNIVERSAL CITY, Calif. — You can go all the way back to Joseph Stalin for some of the earliest examples of image manipulation: the Soviet leader had people he didn't like airbrushed out of photos.

As digital tools progressed, manipulations like splicing, copy-move and in-painting hit the scene. And for Hollywood, first CGI and now computer-assisted de-aging have become regular tools of the business.

But all these tools for manipulating images and videos are nothing compared to deepfakes, and now is the time for the media and entertainment industry to prepare for their impact, according to Anthony Sahakian, CEO of Swiss-based tech company Quantum Integrity.

Deepfakes are an AI-based technology — created using generative adversarial networks (GANs), a class of machine learning systems — used to produce or alter video in order to present something that did not in fact occur. And thus far, the majority of their use has been in porn, adding the faces of celebrities to adult film performers.
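To make the GAN idea concrete, here is a minimal sketch of the generator-versus-discriminator training loop that underpins the technique. It is an illustrative toy, with made-up model sizes and random stand-in data, not any production deepfake system:

```python
# Minimal GAN sketch: a generator learns to produce images that a
# discriminator cannot tell apart from real ones. All shapes, sizes
# and data here are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 64    # size of the random noise vector fed to the generator
IMG_DIM = 28 * 28  # a tiny flattened "image" for illustration

# Generator: maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, IMG_DIM) * 2 - 1  # stand-in "real" batch in [-1, 1]
    fake = generator(torch.randn(32, LATENT_DIM))

    # Train the discriminator to tell real images from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The key dynamic is the arms race between the two networks: each improves by exploiting the other's weaknesses, which is what makes the resulting fakes so convincing.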

But that is changing quickly: "It's now become a threat to political, personal, even societal integrity," Sahakian said, speaking Dec. 4 at the Content Protection Summit. In Pakistan, a deepfake was used to fake someone saying something negative about the local religion, and people were killed as a result, Sahakian said. A deepfake video of President Obama making a speech made the rounds online a couple of years back. More recently, a convincing deepfake video popped up showing actor Jim Carrey's face swapped into "The Shining."

Every day, 20 billion to 30 billion images are uploaded to the internet. On YouTube alone, an estimated 300 hours of video are uploaded every minute. By Sahakian's count, only 20,000 deepfake videos have been discovered to date, making it a relatively minor issue for now. That's because, for the moment, deepfakes are expensive to make, Sahakian said. But he predicts that within a year or two, the technology will advance to the point where they're easy and affordable to create.

That means broadcasters and media and entertainment companies with an online presence will need to address a question they really haven't had to address with the public before: "How much integrity does the image or video you're looking at have?" Sahakian said. Not whether it's been edited, but whether it reflects anything based in reality in the first place.

Experts estimate that we now have less than 10 seconds to determine whether an image or video is a deepfake before it can cause damage on a social network or in the news. And detecting a deepfake at the outset is full of challenges: limited data to work with, building a universal detector that can adapt to new use cases without significant changes to its architecture, keeping pace with deepfake development, and countering the confirmation biases of viewers, which feed their preconceived notions.

Sahakian said his company has discovered that the way to fight an AI algorithm is with another AI algorithm. "We now have machines that on a case by case basis can be trained for specific kinds of manipulation and detect deepfakes," he said.
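As an illustration of that "fight AI with AI" approach, here is a minimal sketch of a binary classifier trained to flag one specific kind of manipulation. The architecture, sizes and data are assumptions for the sake of example, not Quantum Integrity's actual detector:

```python
# Minimal detector sketch: a small CNN that maps a video frame to a
# manipulated/authentic score. Random tensors stand in for labeled
# training frames; everything here is an illustrative assumption.
import torch
import torch.nn as nn

# A small CNN that maps a 64x64 RGB frame to a single real/fake logit.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),           # single logit: manipulated vs. not
)

loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)

# Stand-in training data (label 1 = manipulated, 0 = authentic).
frames = torch.rand(32, 3, 64, 64)
labels = torch.randint(0, 2, (32, 1)).float()

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(detector(frames), labels)
    loss.backward()
    opt.step()
```

In practice, a detector like this would be trained separately per manipulation type on labeled authentic and manipulated frames, echoing Sahakian's point that detection works "on a case by case basis."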

Hollywood needs to begin preparing for a wave of deepfakes, he said, with billions of dollars potentially at stake. That starts with looking at how the tools and technology used to create them are becoming commonplace (and even free) … and using that same technology to detect what they create.

“The technology is dangerous. And it’s there,” Sahakian said.

The Content Protection Summit was produced by MESA and CDSA, and was presented by SHIFT, with sponsorship by IBM Security, NAGRA, Convergent Risks, LiveTiles, Richey May Technology Solutions, EIDR, the Trusted Partner Network (TPN) and Darktrace.