Internet is for Porn – and Deepfakes for Manipulation

Facebook: “We will remove misleading manipulated media if it meets the following criteria:

It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:

It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”

This move has, no doubt, been made in careful anticipation of what is coming for the digital platform. High-profile politicians are increasingly shown saying racist, funny, or outright illegal or embarrassing things. Maybe the entire Trump presidency is a sophisticated deepfake? A recent report shows that the number of nation states that see value in running disinformation campaigns – campaigns that will more and more rely on deepfakes – is steadily growing, with around 70 of them doing it today. Another trend is just as steadfast: the cost of making deepfakes is heading towards zero, as Moore's law drives down the unit cost of the essential building block of really convincing deepfakes, namely computing capacity for neural networks.

The first reactions by commentators have, predictably, called out Facebook for not doing enough to protect its user base of 2.4 billion people – one third of humanity – from deepfakes. Criticism has also been hurled at the vague and leaky definition of which deepfakes are even worth removing. The genocide against the Rohingya; the countless, regularly recurring leaks about the misuse of our data; the devastating effectiveness of the platform in manipulating elections and referendums; and the idea that global power comes with no responsibility have all made us dull to what we let the charismatic humanoid and his company get away with. In fact, the entire idea that societies should allow a private company to regulate content and free speech is absurd – especially a company that has so blatantly shown it will do anything to destroy public trust if it makes some money.

Yet another reason to doubt this latest announcement is the question of enforcement. Even if Facebook were serious about removing deepfakes, how would it do so? Its 2.4 billion users upload a lot of video every second, and reliably and effectively checking all of it for advanced deepfakes is possible only via upload filters. DARPA, the US Defense Advanced Research Projects Agency, has an answer in MediFor, a project for the 'automated assessment of the integrity of an image or video'. But don't panic yet. Such filtering would also get rid of all the funny and harmless deepfakes meant to amuse, and it would instantly brand Facebook as a censor – which would lead to decreased profits, not to mention the cost of a somewhat functional upload filter. Based on Facebook's purely profit-seeking behavior, we can rest assured that this is among the last steps it would take.

What needs to happen instead is that democratically elected regulators around the world must decide how much we value "truth" or "authenticity" online. We must decide to what extent we are willing to sacrifice freedom of reach – and the profits of Facebook and other digital monopolies – in return for the ability to believe an image or video we see online. Self-regulation by tech monopolies is about as believable today as Mark Zuckerberg's 14-year apology tour.

AI-driven deepfakes are today being used largely to create fake porn, but it doesn't take a genius or a well-resourced state actor to create an entire parallel event tree. Start, for example, with a fake declaration of war against a country; add a faked response from the targeted country that it will stop at nothing to retaliate, plus some carefully staged footage of an attack – and you have created the temporary illusion that an all-out war has just broken out. Imagine if that fake war were set between China and the US, or Saudi Arabia and Iran. The resulting temporary swings in global commodity prices alone would allow the fakesters to cash in. One can imagine thousands of scenarios constructed so realistically that a considerable share of those 2.4 billion people would initially believe them and react accordingly. Maybe it will take exactly such a staged global crisis for regulators around the world to finally set clear transparency, reporting and legal liability laws for the digital realm. Yet I don't wish that upon us. We need to realize, even without a global crisis, how much unregulated power Facebook and others currently hold – and react now.
