Does EU Law Effectively Mitigate Online Disinformation’s Influence on Elections?

“The mind of the enemy and the will of his leaders is a target of far more importance than the bodies of his troops.” — Mao Zedong

The art of war has always held psychological manipulation in high regard, and modern conflicts are no different, taking full advantage of the informational capacities of the 21st century. Yet Russian propaganda surrounding the war in Ukraine and China's cyber- and lawfare against Taiwan are only the tip of a much larger iceberg. Political actors seek to influence decision-making worldwide and at every level of society: the European External Action Service identified no fewer than 750 recorded cases of foreign disinformation in 2023. Beyond shaping public opinion on individual issues and thereby shifting political support, such interference is all the more threatening in an era of participatory, even deliberative, decision-making. In democratic countries, disinformation poses a serious threat to free elections, targeting individual candidates or, more often, entire parties to sway election outcomes. With roughly a third of the world's population heading to the polls, 2024 has been hailed as the 'year of elections', making this threat more relevant than ever. Notably, the growing prevalence of political content on online platforms and search engines offers ample opportunity for users to encounter factually incorrect information. With this in mind, how do you inform your vote?


Psychologically speaking, many factors shape an individual's political behaviour and decision-making, ranging from biologically anchored personality traits through personal values, identities, and emotions to the influence of one's social context. Research suggests that the search for, processing of, and use of information in decision-making are guided by ready-made mental templates, such as rigid beliefs or even ideologies, that help us make sense of a nearly infinitely complex world. This form of judgement is especially useful when forming opinions about groups whose beliefs differ sharply from one's own and with whom prolonged engagement is not an option. One well-founded theory suggests that in such cases, one's liking of an outgroup rests on three central, simultaneous comparisons with one's own group: Do they share your goals? Are they perceived as more or less powerful than you? Are they perceived as culturally inferior or superior to you?


While many other assessments are possible, these three are considered the most influential in determining one's stance towards an outgroup. Positive opinions are expected when the outgroup shares your goals and you judge its power and cultural sophistication to be roughly equal to your own. Should any of these assessments diverge, the likely consequence ranges from dislike to outright derogation. Disinformation works effectively here when it supplies false accounts that tip the balance of these judgements. To that end, it can also exploit common cognitive biases, such as the fundamental attribution error (explaining others' behaviour by their character rather than their circumstances), and the many mental shortcuts we need to cope with the immense information load of everyday life.
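To make the logic of these three comparisons concrete, here is a toy formalisation in Python. The function name, thresholds, and attitude labels are illustrative assumptions of ours, not part of the underlying theory or of any EU instrument; the sketch merely shows how false inputs, of the kind disinformation supplies, can flip the predicted attitude.

# Toy formalisation of the three-comparison model described above.
# Labels, thresholds, and wording are illustrative assumptions,
# not part of the original theory or of EU law.

def predicted_attitude(shared_goal: bool, relative_power: float,
                       relative_status: float) -> str:
    """Predict attitude towards an outgroup from three comparisons.

    relative_power / relative_status: the outgroup's perceived power and
    cultural sophistication relative to one's own group (1.0 = equal).
    """
    def roughly_equal(x: float) -> bool:
        return 0.8 <= x <= 1.2  # arbitrary tolerance, chosen for illustration

    if shared_goal and roughly_equal(relative_power) and roughly_equal(relative_status):
        return "positive opinion"
    return "dislike to outright derogation"

# An accurate picture of a goal-compatible, roughly equal outgroup...
print(predicted_attitude(True, 1.0, 1.0))   # -> positive opinion
# ...versus the same outgroup after disinformation inflates its power:
print(predicted_attitude(True, 1.6, 1.0))   # -> dislike to outright derogation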


On a societal level, political scientists have identified three ways in which disinformation threatens the integrity of democratic elections. Although the EU recognised disinformation on 'very large online platforms and search engines' as a 'systemic risk' in its 2022 Digital Services Act (DSA), the Act's protection against the following threats to democracy still shows significant gaps.

First, disinformation distorts decision-making and may even encourage disengagement from the news altogether. Platforms and search engines are obliged to assess and mitigate disinformation risks, but the absence of a concrete legal definition of disinformation and the largely self-policed due-diligence obligations placed on providers stand in the way of adequate protection. Moreover, despite the support of fact-checking organisations, specific procedures for detecting and handling factually false content have yet to be established.

Second, disinformation can fuel polarisation and segregation by spreading false claims about political groups. This threat is comparatively well addressed by empowering users vis-à-vis algorithms to make choices about how recommender systems work; especially worth mentioning is the right to at least one recommendation option that is not based on profiling as defined in Art. 4(4) GDPR.

Third, disinformation spread through illegitimate accounts and information sources can erode trust in fundamental democratic processes and institutions, increasing people's readiness to turn to alternative, and likely compromised, media sources. Platform providers must identify and catalogue the origins and details of political advertisements and their sponsors, but this obligation leaves the more informal, interpersonal interactions on online platforms and search engines untouched. Mandatory regulation therefore does not reach disinformation that circulates chiefly as news items or misleading social media posts.


A first remedy for these shortcomings would be a legally binding definition of disinformation grounded in insights from political science and psychology. Moreover, since direct content moderation runs counter to the freedom of expression enshrined in Art. 10 of the European Convention on Human Rights, algorithms could instead draw on academic theories to designate content categories or modes of expression that warrant closer scrutiny, potential labelling, and automated fact-checking. Finally, ahead of the 2024 European Parliament elections, the EU Commission published additional guidelines for providers of online platforms and search engines that contain valuable, yet legally non-binding, clarifications of the DSA's provisions. Codifying them in binding legislation would go a long way towards closing the current gaps in the legal protection against online disinformation.
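As a rough illustration of what such theory-guided labelling, as opposed to removal, might look like, consider the following Python sketch. The content categories, cue phrases, and data structures are entirely hypothetical stand-ins of ours; a real system would rest on a binding definition of disinformation and on far more sophisticated detection than simple phrase matching.

# Rough sketch of theory-guided labelling instead of removal. Categories,
# cue phrases, and the Post class are hypothetical placeholders; nothing
# here reflects an actual DSA-mandated mechanism.

from dataclasses import dataclass, field

# Assumed content categories derived from the outgroup comparisons above:
SUSPECT_CUES = {
    "outgroup power claim": ["secretly control", "rule the courts"],
    "cultural derogation": ["uncivilised", "backward people"],
}

@dataclass
class Post:
    text: str
    labels: list = field(default_factory=list)

def triage(post: Post) -> Post:
    """Label suspect content for fact-checking; never remove it."""
    lowered = post.text.lower()
    for category, cues in SUSPECT_CUES.items():
        if any(cue in lowered for cue in cues):
            post.labels.append(f"{category}: pending fact-check")
    return post

flagged = triage(Post("They secretly control the media."))
print(flagged.labels)  # the post stays online but carries a scrutiny label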


For the European Union, and for voters and businesses alike, the implication of this combined psychological and legal vulnerability is that online information sources still require better scrutiny to combat the advance of political and other disinformation. Online search engines and social media are not yet obliged to run content-detection and filtering systems that ensure specific standards of informational quality, a gap that poses critical threats given fake accounts, deepfakes, and growing filter bubbles. Hence, organisations of high epistemic quality, such as research institutions, professional journalism, and government agencies, remain the only sources offering high informational legitimacy. While wisdom dictates caution even with such sources, their publications stand as dikes and bulwarks amid the flood of inscrutable, unaccountable information spreading across the internet at record speed.
