As the upcoming European Parliamentary elections loom closer, fears and misconceptions about the role of Artificial Intelligence in politics are circulating with increased fervour. From concerns about AI disrupting political processes to worries over its potential for generating fake news or other misuse, it is now more crucial than ever to separate fact from fiction, so that voters can make well-informed decisions and form solid political opinions.
AI as a political disruptor
During a recent panel discussion hosted by Harvard, former White House official Miriam Vogel likened ChatGPT to a team of interns capable of producing a solid first draft of a document. And this is what Generative AI remains, for now – a valuable brainstorming partner.
However, the disruptive potential of AI in elections lies not in the technology itself but in how it is wielded and to what ends. The infamous Cambridge Analytica scandal, which predates the widespread adoption of AI, serves as a stark reminder of how political entities leverage technology to sway public opinion. While concerns have been raised about the technology replacing humans in democratic processes, The New York Times assessed that the real risk lies in influential lobbying efforts rather than in direct consequences for voting.
In fact, despite these concerns, AI holds promise in actually democratising political structures by empowering smaller groups that traditionally lack resources to compete in the political arena. By levelling the playing field, AI has the potential to diversify political discourse and amplify the voices of marginalised communities.
AI as an accurate source of information and influence in politics
It’s a common misconception that AI possesses an unparalleled ability to provide accurate information and to influence election outcomes. While AI algorithms excel at certain tasks, such as natural language processing, data analysis and image recognition, their actual intelligence remains severely limited, particularly in politics.
Under pressure from international regulation, major tech companies have adjusted their content policies in an attempt to mitigate the risk of partisan manipulation stemming from their software. For instance, OpenAI issued a public statement promising to prevent abuse and increase transparency around AI-generated content. Additionally, tech giants convened in Munich this year to sign an accord outlining how large platforms will combat deceptive content related to elections.
A practical case study by Democracy Reporting International tested AI’s capabilities in providing information related to voting in Europe. It concluded that while AI can be inaccurate, it tends to refrain from offering any political opinion at all. While this may raise concerns for those who rely solely on online information, the study is reassuring on one point: the chatbots have been trained to maintain political neutrality, even at the risk of providing incorrect information to the electorate. In any case, they lack the capability to influence political opinions.
AI-generated fake news
The spectre of AI-generated fake news haunting elections is indeed a legitimate concern, yet it’s crucial to acknowledge that misinformation predates AI. The widespread use of bots in previous election cycles serves as a prime example of how online trolling can distort voters’ intentions. While AI undoubtedly exacerbates the spread of fake news by automating content generation, the root of the problem lies in societal issues such as media literacy and blind trust in online information sources.
However, key institutions have recognised this issue and are actively striving to combat it through online information campaigns. For instance, the EU Commission has recently begun engaging with online influencers to garner their support in this fight.
AI threatens human decision making
It’s worth remembering that the “A” in AI stands for “Artificial.” This subtle distinction serves as a powerful reminder that, at least for the foreseeable future, AI remains incapable of fully supplanting human decision-making. While the technology excels at key campaign tasks like message personalisation, campaign management or speechwriting, its role fundamentally revolves around assisting rather than replacing human agency.
In a field where the intangible qualities of empathy, authenticity and interpersonal connection hold immense sway, politics remains on safe ground. The nuanced intricacies of political engagement — the ability to convey a compelling narrative, genuinely empathise with constituents’ needs, or simply extend a reassuring handshake — remain uniquely human traits that AI cannot replicate.
The bias of AI
Some fear that AI may perpetuate political bias by amplifying existing human prejudices. However, it’s essential to recognise that humans are inherently biased, and AI simply reflects these qualities.
Partisanship in AI stems from the fact that AI applications are only as biased as the data they are trained on. While it’s true that non-representative datasets can lead to discriminatory outcomes, this issue stems from human prejudice encoded in the data rather than from inherent flaws in the technology itself. To mitigate bias in AI, it’s crucial to ensure diverse and representative datasets and to employ techniques such as algorithmic auditing and fairness testing. Additionally, promoting diversity and inclusivity in the teams that develop and deploy AI systems can help address the issue at its source.
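To make the idea of fairness testing concrete, here is a minimal sketch of one common audit metric, the demographic parity gap: the difference in how often a model produces a positive outcome for two groups. The function names and the illustrative data below are our own assumptions, not part of any specific auditing framework; real audits use richer metrics and real model outputs.

```python
# Minimal fairness check: demographic parity gap.
# Compares the rate of positive predictions across two groups;
# a large gap suggests the model treats the groups differently.

def positive_rate(predictions):
    """Share of positive (1) predictions in a list of 0/1 outputs."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Illustrative (made-up) binary predictions for two demographic groups.
group_a = [1, 1, 1, 1, 1, 1, 0, 0]  # 6/8 = 0.75 positive
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2/8 = 0.25 positive

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.50
```

A gap near zero does not prove a model is fair — it is one signal among many — but a large gap on representative data is a clear prompt to re-examine the training set and the model’s behaviour.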
Fact: it’s not AI that’s evil, it’s humans
Ultimately, the ethical implications of AI in elections stem from human intentions and actions. Whether it’s deploying bots or creating deep fakes, the misuse of AI reflects human motives and agendas. Addressing these concerns requires scrutinising the individuals behind AI-driven campaigns and holding them accountable for their actions.
Addressing deeper social problems
The fears surrounding AI’s impact on elections often overlook the underlying social issues driving technological disruptions. By tackling media literacy gaps, promoting transparency in AI development and fostering responsible usage of the technology, we can confront these challenges head-on. Rather than demonising AI, let’s focus on leveraging its potential to enhance democratic processes while remaining vigilant against its misuse. Ultimately, the true threat lies not in AI itself, but in how we choose to wield it.