Viral Videos & a Democratic Flu

Looking at the threats deepfake videos and botnets pose to democracy, we step back to ask: does “fake news” even have to be fake?

It was the expression on her face that made the video such an outrage.

The politician had struck a nerve across parties — an outspoken defender of the dignity of the working class, with innovative solutions that broke out of party orthodoxy.

The video, posted to YouTube just a week before her re-election, threatened all that. The hero of the working class was cursing at a 19-year-old delivery boy. The boy was scrawny and small, dressed in cheap clothes, and his 10-year-old car, still carrying the unrepaired scars of some past fender-bender, had shown up 20 minutes late with cold dim sum.

The video starts with the altercation already underway. The politician herself, in a bathrobe, is calling him names, insulting his car, using brutal curses aimed at making him, and anyone who watched, ashamed of their poverty.

Can you tell which of these faces is “fake,” generated by an AI? Hint: they all were. Source: NVIDIA research paper.

The cell phone video, taken by a passerby, was shared on Twitter by an elderly woman in Iowa, @RaginGranny26, tweeting: “So much for the dignity of the working class. This hypocrisy makes me sick.” The woman had only about 58 followers, but all of them shared it, and their followers shared it, and on and on until, at the end of the night, the video had been seen 258,000 times. That’s when it caught the attention of the news networks, which “reported on the controversy,” skirting the need to verify the video or its source. By the end of the weekend the video had been seen 1.2 million times.

The election was three days away. At first, the candidate’s campaign staff told her to ignore it, confused about what was happening and dismissing it as a dirty trick that would blow over soon. But then daily internal polling started showing a sharp decline in her support. How to respond? She would never have said those things to the boy — not only because she didn’t believe any of those words, but because saying them would be political suicide.

By the time she’d realized that the video was fake — a scene set up by paid actors, most likely hired by the opposition campaign, with her face digitally superimposed over a recording of an actual incident — the damage was done. It distracted her from pushing the issues central to her campaign; it put her in the position of explaining how video could be faked in the first place. When she posted proof of what she’d eaten on the night in question — with real receipts for tacos — the tweet was immediately shared by 200 users, all seeming to spontaneously coin the hashtag #TacoTuesday (the day of the election) with sneering commentary about how nobody should believe her. #TacoTuesday went viral, becoming a place to ridicule hypocritical politicians who denied their mistakes, even when those mistakes were clearly visible to the entire world.

She lost her district by 1.3% of the vote.

We’re looking at fakes and the limits of credibility of what you see and read online, and so I’ll tell you now: this story isn’t about a real election. But it is about real technology, and the scenario we laid out is already possible today.  

DeepFake News

Let’s start with a look at a DeepFake video: in this one, the actor Steve Buscemi’s face has been superimposed onto the body of actress Jennifer Lawrence, who was answering press questions at the Golden Globe Awards.

A DeepFake is a combination of “deep” learning (a type of machine learning) applied to digital “fakes.” It’s a tool that can superimpose people into situations in which they might otherwise never appear: if you were so inclined, you could insert actor Nicolas Cage into pretty much anything.

A DeepFake makes use of an artificial intelligence strategy called a “Generative Adversarial Network” (GAN). Essentially, a GAN can be fed data — let’s say photos of cats — and independently “figure out” what makes a cat a cat by seeing what all of those photographs have in common. Once it “learns,” it can actually generate (the G in GAN) all kinds of cats out of nowhere. The more pictures of cats it has, the better it gets at cat-making. While the GAN is figuring things out, it can end up with cats that look a lot like owls or caterpillars. The “adversarial” bit (the A in GAN) refers to a parallel process that judges the weirdos — cat-owls and cat-erpillars — giving the generator more data about what works and what doesn’t, thereby improving the images. In the end, the program learns from itself to become really good at cat-making.
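To make that less abstract, here is a minimal sketch of that generator-versus-judge loop, written in PyTorch. It’s a toy, not the actual DeepFake software; every size and name in it is illustrative.

```python
# A toy GAN (PyTorch): a generator learns to turn random noise into
# images while a discriminator learns to tell its output from the real
# thing. Sizes and names here are illustrative, not DeepFake internals.
import torch
import torch.nn as nn

LATENT = 64        # size of the random noise vector fed to the generator
IMG = 32 * 32      # images flattened to vectors (say, 32x32 grayscale cats)

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),      # pixel values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                   # one score: real or fake?
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the judge learns to spot fakes, then the
    generator learns to fool the newly sharpened judge."""
    batch = real_images.size(0)
    fakes = generator(torch.randn(batch, LATENT))

    # Discriminator: score real images toward 1, generated ones toward 0.
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fakes.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: make the discriminator score its fakes as real (1).
    g_loss = loss_fn(discriminator(fakes), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Demo with random stand-in "real" images; actual training would loop
# over thousands of real cat photos.
train_step(torch.rand(8, IMG) * 2 - 1)
```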

DeepFakes do this with human faces, and the technique can be applied to video. It’s clumsy, but since 2017, when the software became widely available, it has quickly been put to dark purposes: an underground trading market where celebrities’ faces are swapped into scenes from pornographic films. A GAN can generate convincing, moving replicas of their faces, which can then be placed into a video and “mapped” to the motions and expressions in it. Darkest of all, the tech can do the same with images of random strangers from the internet: a sociopath only needs access to a stranger’s Facebook or Instagram account to create pornographic videos of anybody who has shared enough selfies.
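For a rough sense of that “mapping” step, here is a crude sketch using OpenCV: find the face in each frame and blend a replacement face over it. Real DeepFake tools use learned encoders rather than this cut-and-paste, and the file names here are hypothetical.

```python
# A toy illustration of the "mapping" step: find the face in each video
# frame and blend a pre-generated replacement face over it. This crude
# cut-and-paste stands in for what DeepFake software does with learned
# encoders; the input file names are hypothetical.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
new_face = cv2.imread("generated_face.png")   # e.g., output of a GAN

video = cv2.VideoCapture("source_clip.mp4")
frames = []
while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        patch = cv2.resize(new_face, (w, h))  # fit the fake to the target
        mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)
        center = (x + w // 2, y + h // 2)
        # Poisson blending hides the seams between the patch and the frame.
        frame = cv2.seamlessClone(patch, frame, mask, center,
                                  cv2.NORMAL_CLONE)
    frames.append(frame)
video.release()
```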

While this raises many important questions about the law, ethics, and consent, it also has political repercussions. Case in point (warning — contains strong language):

Viral Videos & a Democratic Flu

We’ve seen how the video in our election scenario could be made: hire an actor, or use footage of a pre-existing incident, and apply a GAN to map someone else’s face onto the scene. The next step is getting these images and videos seen.

When a DeepFake video is created, it can be distributed widely and quickly through the use of a botnet. Botnets are networks of automated accounts on social media sites that can be operated by a single actor but act as if they were independent. Some have long histories and personalities, undetectable as robots. There is a misconception that bots work off of scripts, like the chatbots you might use online to pay a bill. On social media, they are often more sophisticated than that, with accounts coordinated by human beings, oftentimes paid to maintain a network of accounts that they can “parcel out” for various fees.
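To see why that coordination matters, here is a toy cascade model in Python. Every number in it is invented for illustration; it only shows how a guaranteed initial burst of shares changes a post’s reach compared with a single organic share.

```python
# A toy cascade model of coordinated amplification. Each account that
# shares a post exposes its followers, some of whom reshare; a botnet
# guarantees a large initial burst of shares, seeding many cascades.
# All parameters below are made up for illustration.
import random

AVG_FOLLOWERS = 58        # like @RaginGranny26 in the story
P_ORGANIC_SHARE = 0.02    # chance an ordinary viewer reshares

def cascade(initial_sharers: int, rounds: int = 6) -> int:
    """Return total views after `rounds` of organic resharing."""
    views, sharers = 0, initial_sharers
    for _ in range(rounds):
        exposed = sharers * AVG_FOLLOWERS
        views += exposed
        # Only a small fraction of exposed users reshare on their own.
        sharers = sum(1 for _ in range(exposed)
                      if random.random() < P_ORGANIC_SHARE)
        if sharers == 0:
            break
    return views

random.seed(26)
print("one real account:  ", cascade(initial_sharers=1))
print("plus a 500-bot burst:", cascade(initial_sharers=501))
```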

An image from a viral video showing a confrontation between a Catholic high schooler in a Make America Great Again hat and a Native American activist. Source: Reuters.

Early in 2019, a viral video circulated on Twitter showing a standoff between a Catholic high school student, in Washington, D.C., for an anti-abortion protest, and a Native American activist who was there for an Indigenous rights rally. The video showed the older man, Nathan Phillips, banging a drum and chanting a song as the younger man in a red “Make America Great Again” hat stared at him, both surrounded by crowds of people.

Let’s be clear: nobody has claimed that this video was a fake. But it was notable that different sides of the US political spectrum saw completely different things within the same video. Some condemned the teenager as a symbol of white privilege, his expression reading as clear contempt for the elderly man; others saw a high schooler doing his best, confused by the situation, trying to smile his way through it. Each of these readings was seen as laughably wrong by the other — evidence that the “other side” had completely lost its grasp on reality.

The video swept through Twitter and inspired a massive, cut-throat debate, exploiting pre-existing tensions between the right and left. It was viewed 2.5 million times, and made it to bloggers, newspapers, then network and cable news. A few days later, CNN reported that the video, which was real, had been amplified by a “fake account” on Twitter. According to the CNN report, the video was taken from Instagram, where it likely would have gone unnoticed. But once the fake account posted it to Twitter, the message was amplified, in a coordinated manner, to ensure that it would be seen by as many people as possible, earning nearly 14,500 shares.

That’s what a botnet can do. Oxford University’s Computational Propaganda Project published a report on the actions of state actors in the 2016 US election, concluding that “they sought to manipulate and radicalize” social media users by creating inciting, polarizing content that encouraged extreme responses and attitudes, and then amplifying that content through networks of fake accounts (read the full report).

The focus on “misinformation campaigns” or “fake news” can miss the point. A recent study from Stanford University suggests that Facebook users are, on the whole, better informed, but more politically polarized. It is real emotion, not fake news, that is most disruptive. The stories are often true. The division comes from amplifying stories that inflame tensions, seeding social networks with coordinated, human-directed bot networks that push extreme interpretations and rhetoric around that truth. It’s a way of weaponizing the flares of outrage that move us to “share” from a place of contempt for our enemies, rather than dialogue.

It’s similar to a flu within our democracies — symptoms include irritability, withdrawal from other people, a perpetual queasiness — a flu that spreads virally, through social media. That’s the goal, and we do it to ourselves: the bots just help spread the germs.

What’s Next?

You can use social media to achieve your goals, whatever those goals might be. And for many groups, it is literally to create a new reality, because the target audience derives its understanding of the world — not just the news, but the way it frames the world — from what it views through social media.
— P.W. Singer, co-author, LikeWar: The Weaponization of Social Media

It’s not difficult to imagine the noxious fumes that would result from the combination of a fake, polarizing video promoted by networks of bots, half of them arguing against the other half in a bid to make any human observers angry enough to jump in, driving the conversation into the real world. This form of online information warfare is a real threat, and the various policy solutions bring their own share of problems.

So, what’s being done?  

Australia and Ireland have criminal penalties for distributing political disinformation backed by bots, but human rights organizations have flagged similar laws in Iran, Malaysia, Russia, Saudi Arabia, and Tanzania as being used to stifle real political speech. Uganda has a unique law that charges a kind of “gossip tax” on social media access, which has been sharply criticized for limiting access to those who can afford to pay.

Other policy efforts, such as those we see in France, focus on the “fake” part of fake news, which leaves bots spreading “true” news paired with divisive rhetoric relatively untouched. Indian law has established “morphed photographs” as an explicitly banned form of political speech, but the costs have been high — the Indian government has shut down internet access more than 100 times in efforts to prevent the spread of disinformation.

A proposed law in New York was criticized as being too broad, while California — where many social media companies are headquartered — has passed a state-wide education initiative to teach public school students how to evaluate information they find online, a response to a Stanford University study showing that many kids couldn’t distinguish good information from bad.

Meanwhile, California representative Adam Schiff and others have requested a briefing on the security implications of deepfakes from the various US intelligence agencies — but policy proposals in the US remain an open question. In late 2018, the US Congress began weighing the “Malicious Deep Fake Prohibition Act,” which would introduce penalties for creating DeepFakes with illegal content, but it hasn’t been taken up as of this writing.

In the absence of policy, the US has been focusing on technological fixes. DARPA has created a Media Forensics program intended to identify fake video, awarding funds to researchers at universities and NGOs to solve the problem. So far, some solutions have emerged, focused on a few “tells,” such as unnatural blinking. But let’s face it — any technical fix presents only trivial problems for dedicated programmers to circumvent.
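The blinking “tell” works because early DeepFakes were trained mostly on open-eyed photos, so the fakes blink rarely or oddly. Here is a rough sketch of one way a detector might count blinks, using the eye-aspect-ratio heuristic; the per-eye landmarks are assumed to come from an upstream detector such as dlib’s 68-point predictor, which is not shown.

```python
# Sketch of a blink-based "tell": track the eye aspect ratio (EAR) across
# frames. Real faces blink every few seconds; early DeepFakes often don't.
# Landmark extraction is assumed to happen upstream. Python 3.9+.
from math import dist

def eye_aspect_ratio(eye: list[tuple[float, float]]) -> float:
    """EAR from six landmarks around one eye (Soukupova & Cech, 2016).
    It drops sharply toward zero when the eyelid closes."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_per_frame: list[float], threshold: float = 0.2) -> int:
    """Count closed-then-open transitions in a sequence of per-frame EARs."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            closed = True                        # eye just closed
        elif ear >= threshold and closed:
            closed, blinks = False, blinks + 1   # eye reopened: one blink
    return blinks

# Demo: a synthetic EAR trace containing one blink in the middle.
trace = [0.31, 0.30, 0.12, 0.09, 0.28, 0.32]
print(count_blinks(trace))  # -> 1; a long clip with zero is suspicious
```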

Facebook has announced it would take steps to scan uploaded videos for signs of manipulation, though it’s unclear what it will do with what it finds. The site has had a taste of this already, when a politician in a local race in Mexico was photoshopped onto a US Green Card with the suggestion that he was secretly a US citizen. Many decisions about how the algorithms present news could minimize the spread of fake video. The networks’ algorithms all tend to help controversial, divisive content rise to prominence by privileging heated, fast-paced, multi-party arguments as evidence of “engagement.”

Maybe the simplest route to limiting the reach of misinformation is not to spread it in the first place: to resist sharing an inflammatory post. After all, the most controversial posts are the most engaging — when you’re angry or disgusted, you comment, share, reply. Others push back and do the same. A fight creates far better metrics for social media than agreement: it keeps us refreshing the page, waiting for the next insult, the next opening. These posts show up at the top of your feed, like a siren declaring that a building is on fire. It’s all so unpleasant, but the paradox is this: if social media users step away from this kind of engagement, that withdrawal chills free speech. A deliberate refusal to participate in the system is still disenfranchisement.

We might end with a fictional postscript to the fake news that started this story. We can easily imagine the winning candidate of that race, who had encouraged the distribution of a fake video to help his campaign, later using the “fake video defense” when a 20-year-old member of his staff leaked an explicit video he’d sent her over email. He would hold a press conference and argue that his opponents had embraced the same techniques they’d accused him of using. The press and the public would largely shrug.

After all, it’s best not to believe anything you see online.


Eryk Salvaggio
Once called "the Harry Potter of the Digital Vanguard," Eryk Salvaggio is a writer, artist, and researcher at swissnex San Francisco. He previously studied new media art and journalism at the University of Maine and Global Media at the London School of Economics.