11.08.2020
Disinformation and mass reporting on Facebook: how to respond
The latest report by the University of Oxford on social media disinformation and manipulation reveals that Facebook remains the social network hosting the largest number of disinformation campaigns.
The growing popularity of photo- and video-sharing apps such as Instagram and TikTok will probably increase the share of disinformation tactics employed on these platforms (provided that the Chinese app survives scrutiny from Western authorities). Many political leaders are turning their attention to these platforms to engage younger audiences, while tools like deepfakes are spreading like never before.
Despite the threat, however, deepfakes have not (yet?) reached a point at which they represent a real danger; editing multimedia content to successfully deceive large swaths of the online population would require sophisticated technology that does not yet exist. Most deepfakes also leave easily detectable traces in the altered videos.
What is especially present on Facebook, as well as on Twitter, is a subtler, more invisible array of tools at the disposal of government agencies, individual actors, groups and corporations, used to sway public opinion by disseminating false information or by directly attacking rival accounts to silence them. These tools consist of human accounts, bots and “cyborgs”, which mostly deal in image- and text-based information.
1) Human accounts are managed by real users, often employed by so-called troll farms: companies whose professional or semi-professional workers sow dissent by disseminating fake news and propaganda across social networks. One example is the Internet Research Agency (IRA), a Russian company involved in many such campaigns.
2) Bots are algorithm-driven accounts designed to imitate the behaviour of human users. While individually less effective, they can be deployed in large numbers to hijack and divert trending topics, mass-report accounts, pages and groups, and spread specific messages.
3) Cyborgs are human-managed accounts that also make use of automated processes to carry out disinformation campaigns.
The reason why Facebook remains the dominant platform for disinformation campaigns could lie primarily in demographics. A recent study, limited to the US population, found that older users (65+) share nearly seven times more disinformation and hoaxes than the youngest age group. If we cross-reference this finding with the demographics of Facebook and Instagram, a sharp picture emerges: while 46% of 65+ citizens use Facebook, only 8% have an Instagram account. It therefore makes sense for any disinformation campaigner to currently focus their efforts on platforms populated by older users.
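As a rough, back-of-the-envelope illustration of that calculus, the sketch below combines the two cited figures (the 7x sharing multiplier and the 46% vs. 8% adoption rates) with a purely hypothetical population size and baseline share rate; it illustrates the reasoning, not the actual numbers of either study.

```python
# Back-of-the-envelope comparison of a campaign's expected reach among
# 65+ users on Facebook vs. Instagram. Only the adoption rates and the
# sharing multiplier come from the studies cited above; the population
# size and baseline share rate are made up for illustration.

SENIORS = 1_000_000          # hypothetical pool of 65+ citizens
FACEBOOK_ADOPTION = 0.46     # 46% of 65+ citizens use Facebook
INSTAGRAM_ADOPTION = 0.08    # 8% have an Instagram account
SHARE_MULTIPLIER = 7         # seniors share ~7x more disinformation
BASELINE_RATE = 0.01         # assumed share rate for younger users

def expected_shares(adoption: float) -> float:
    """Expected disinformation shares by seniors on one platform."""
    return SENIORS * adoption * BASELINE_RATE * SHARE_MULTIPLIER

fb = expected_shares(FACEBOOK_ADOPTION)
ig = expected_shares(INSTAGRAM_ADOPTION)
print(f"Facebook: {fb:,.0f}  Instagram: {ig:,.0f}  "
      f"({fb / ig:.1f}x more on Facebook)")
```

Whatever the exact baseline, the ratio between the two platforms is driven by the adoption gap alone, which is the campaigner's incentive in a nutshell.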
These campaigns are often carried out by professionals with experience and adequate assets at their disposal. When strange profiles lurk in our Facebook groups or interact with our pages, a watchful eye can help determine whether we are dealing with bots and fake accounts. The platform itself is not always helpful: numerous pages keep being demoted or deleted without warning or subsequent explanation from Facebook. The following tips may therefore help identify such accounts and avoid unpleasant outcomes:
1) URL. A mismatch between the profile URL and the profile name might signal an account that has been repurposed by its owner, or stolen and then repurposed by a third party. Old accounts belonging to ordinary users are valuable assets for disinformation campaigners, as they are not under scrutiny by the network’s staff or algorithms and are generally more trusted by other users (a screening sketch combining these signals follows this list).
2) Creation date. Ordinary users have no way to determine the precise creation date of an account; group admins are the exception. When users request to join a Facebook group, admins and moderators can see the exact date (day/month/year) on which their accounts were created. If an account is extremely new (say, two days old), it may have requested to join the group for specific (malign) purposes.
3) Profile picture. Many accounts, especially bots, do not display any profile picture. As noted above, bots are deployed in large numbers, and their programmers do not dedicate much time to fabricating elaborate personalities: sheer numbers are what count in a successful disinformation campaign. Some human or cyborg accounts might instead use profile pictures that nonetheless appear suspicious. Free image search engines like TinEye and Yandex can be very helpful, as they make it possible to reverse-search a picture and find out whether it was stolen from other sources or users (see the link-building sketch after this list). Yandex, which is (ironically) a Russian search engine, is among the most precise in this regard.
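For admins who keep notes on incoming join requests, the three signals above can be folded into a simple screening heuristic. Facebook exposes no public API for this data, so the sketch below assumes the fields have been recorded manually; the Profile structure and the thresholds are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class Profile:
    """Fields an admin might note about a join request (all hypothetical)."""
    url_slug: str      # username part of the profile URL
    display_name: str  # name shown on the profile
    created: date      # creation date visible to admins on join requests
    has_photo: bool    # whether a profile picture is set

def normalize(name: str) -> str:
    """Lowercase and strip separators so 'Jane Doe' matches 'jane.doe'."""
    return "".join(c for c in name.lower() if c.isalnum())

def suspicion_signals(p: Profile, today: date, min_age_days: int = 30) -> List[str]:
    """Return whichever of the three warning signs apply to this profile."""
    signals = []
    if normalize(p.url_slug) != normalize(p.display_name):
        signals.append("URL and profile name do not match (possibly repurposed)")
    if (today - p.created).days < min_age_days:
        signals.append("account is extremely new")
    if not p.has_photo:
        signals.append("no profile picture (common for bots)")
    return signals

# Example: a two-day-old account whose URL does not match its displayed name
account = Profile("maria.santini.1987", "John Smith", date(2020, 8, 9), False)
for signal in suspicion_signals(account, today=date(2020, 8, 11)):
    print("-", signal)
```

None of these signals is conclusive on its own (a legitimate user may simply have changed names), so a heuristic like this should flag accounts for human review rather than automate rejection.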
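Reverse searches can also be launched straight from a picture's address. The sketch below builds query links for TinEye and Yandex; the query-string formats are assumptions based on how the two engines accepted remote-image lookups at the time of writing, and may change.

```python
from urllib.parse import urlencode

def reverse_search_links(image_url: str) -> dict:
    """Build reverse-image-search links for a publicly reachable image URL.
    The query formats below are assumptions and may change without notice."""
    return {
        "TinEye": "https://tineye.com/search?" + urlencode({"url": image_url}),
        "Yandex": "https://yandex.com/images/search?"
                  + urlencode({"rpt": "imageview", "url": image_url}),
    }

# Example with a placeholder image URL
for engine, link in reverse_search_links("https://example.com/profile.jpg").items():
    print(f"{engine}: {link}")
```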
If unusual activity by such odd accounts is noticed, especially in large numbers, it may help to unpublish the affected pages or personal profiles and to set groups' privacy to private. That way, swarms of trolls or bots will not be able to interact with their designated targets.
Such actions by political actors, government bodies and independent groups pose a real danger to our society. Excessive polarization drives people to the extremes of the political spectrum, feeding hatred and sowing discontent out of nonexistent events and overblown fears. This must not, however, lead online platforms to dictate what citizens should or should not believe. Unilateral actions by unaccountable corporations could lead to censorship and violations of the right to online anonymity.
More systemic, definitive and horizontal answers are needed, so that all parts of society become involved in detoxifying the polluted online environment.