Technology can make it look as if anyone has said or done anything. Is it the next wave of (mis)information warfare?
In May, a video appeared on the internet of Donald Trump offering advice to the people of Belgium on the issue of climate change. “As you know, I had the balls to withdraw from the Paris climate agreement,” he said, looking directly into the camera, “and so should you.”
The video was created by a Belgian political party, Socialistische Partij Anders, or sp.a, and posted on sp.a’s Twitter and Facebook. It provoked hundreds of comments, many expressing outrage that the American president would dare weigh in on Belgium’s climate policy.
One woman wrote: “Humpy Trump needs to look at his own country with his deranged child killers who just end up with the heaviest weapons in schools.”
Another added: “Trump shouldn’t talk so big, because the Americans are just as dumb themselves.”
But this anger was misdirected. The speech, it was later revealed, was nothing more than a hi-tech forgery.
Sp.a claimed that they had commissioned a production studio to use machine learning to produce what is known as a “deep fake” – a computer-generated replication of a person, in this case Trump, saying or doing things they have never said or done.
Sp.a’s intention was to use the fake video to grab people’s attention, then redirect them to an online petition calling on the Belgian government to take more urgent climate action. The video’s creators later said they assumed that the poor quality of the fake would be enough to alert their followers to its inauthenticity. “It is clear from the lip movements that this is not a genuine speech by Trump,” a spokesperson for sp.a told Politico.
As it became clear that their practical joke had gone awry, sp.a’s social media team went into damage control. “Hi Theo, this is a playful video. Trump didn’t really make these statements.” “Hey, Dirk, this video is supposed to be a joke. Trump didn’t really say this.”
The party’s communications team had clearly underestimated the power of their forgery, or perhaps overestimated the judiciousness of their audience. Either way, this small, left-leaning political party had, perhaps unwittingly, provided the first example of the use of deep fakes in an explicitly political context.
It was a small-scale demonstration of how this technology might be used to threaten our already vulnerable information ecosystem – and perhaps undermine the possibility of a reliable, shared reality.
Fake videos can now be created using a machine learning technique called a “generative adversarial network”, or GAN, in which two neural networks are trained against each other: a generator that produces fakes, and a discriminator that tries to tell them apart from real data. As the discriminator gets better at spotting forgeries, the generator gets better at producing them.
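The adversarial idea can be illustrated with a toy sketch. This is an assumption-laden simplification for intuition only: real deepfake systems use deep convolutional networks trained on large image and video datasets, whereas here the “generator” is a one-dimensional affine map and the “discriminator” is logistic regression, trained to match a Gaussian target chosen for the example.

```python
import numpy as np

# Toy GAN sketch (illustrative assumptions: 1-D data, affine generator,
# logistic-regression discriminator; NOT how video deepfakes are built).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: samples from a Gaussian centred at 4.0 (arbitrary target).
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator parameters: g(z) = a*z + b
w, c = 0.0, 0.0   # discriminator parameters: d(x) = sigmoid(w*x + c)
lr, n = 0.05, 64

for step in range(2000):
    z = rng.normal(0.0, 1.0, n)   # noise input to the generator
    fake = a * z + b              # generator output ("forgeries")
    real = real_batch(n)

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    # Gradients of -log d(real) - log(1 - d(fake)) w.r.t. w and c:
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((dr - 1) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1) + np.mean(df))

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator.
    # Gradient of -log d(fake) chained through g(z) = a*z + b:
    df = sigmoid(w * fake + c)
    a -= lr * np.mean((df - 1) * w * z)
    b -= lr * np.mean((df - 1) * w)

# After training, the generator's output distribution has drifted
# toward the "real" data's mean.
gen_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(f"generated mean: {gen_mean:.2f} (target 4.0)")
```

The key design point survives the simplification: neither network is told what “real” looks like directly; the generator improves only through the discriminator's feedback, which is what lets GANs produce forgeries convincing enough to fool the very classifier trained to catch them.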
(Read more at TheGuardian.com)