“Hey ChatGPT!” This is one of the most used sentences in almost every office nowadays. It’s crazy that we got used to this new technology less than a year after it was first released. Daily, I meet speechwriters who ask ChatGPT for speaking points. I meet parliamentary assistants feeding the tool with information and asking it for policy briefs. I meet colleagues using the technology to get campaign ideas, map previously used messages, write mass emails and identify what competitors did right or wrong in previous elections. Whether this use of AI in politics is right or wrong is still arguable.

Last December, the European Parliament struck a deal with the Council and the Commission, and a new provisional agreement is in place in Europe to ensure that AI is used safely and with respect for fundamental rights. This, of course, includes citizens’ civil and political rights, and it therefore affects how parties campaign.

During the last few months, we have seen a lot of coverage about the need to regulate AI. Prominent voices, like Steve Wozniak (co-founder of Apple), have been speaking up about the need to slow down the development of AI technologies. Many academics, political representatives and philosophers argue that AI poses a threat to democracy, claiming it will contribute to voter manipulation and the spreading of misinformation. Many people, however, do not yet understand the developing technologies and are keen to have developers explain how they function.

However, political voices are also raising concerns about this new regulation, warning that it might slow down innovation. As President Macron put it:

“We can decide to regulate much faster and much stronger than our major competitors (…) regulating things that we will no longer produce or invent is never a good idea.”

A similar logic applies to campaigning: some parties already use AI to run their campaigns, and a new regulation might simply hold back the parties that had not caught up before their competitors did.

AI regulation in Europe: navigating trust and technological influence

At the European level, MEP Brando Benifei has stated that the process of regulation “could become a context where the businesses will act to influence the legislative work.”

Fortune 500 CEOs have also been quick to react to potential regulation. In an open letter first published in the Financial Times, 150 CEOs united to highlight that regulating AI could endanger Europe’s competitive technology market. I have also highlighted in past articles for PartyParty that technology regulation in the framework of EU law is often ineffective, arguing that legal implementation needs to keep up with the fast pace of developing technologies.

The regulation aims to limit AI on three grounds: to prevent recognition systems that can reinforce prejudices, to avoid social scoring (ranking people), and to minimise cognitive behavioural manipulation. These grounds are reasonable. To safeguard democracy, we need to focus on the last point: cognitive behavioural manipulation. One important question is whether AI technologies can further the manipulation of voters. Some rules are necessary to prevent bad actors from using AI tools to manipulate the public and deliberately spread misinformation. But we also need to understand that AI can enrich democracy and facilitate participation.

AI’s potential in political participation: tools and implications

We must understand that democratic European parties can use AI to foster political participation. Here are some examples of how parties can use AI to improve their messaging: 

  • Data Analysis: AI algorithms can analyse large datasets to uncover patterns, identify voter segments, and pinpoint areas with lower voter turnout in order to activate those communities.
  • Sentiment Analysis: AI-powered sentiment analysis tools can gauge public opinion on social media platforms and other online sources to shape campaign messaging. While emotional messaging is often seen as a manipulative tool, it is arguable that voters respond better to it, and parties therefore need to understand the hopes and fears of voters, in an ethical manner, of course (a minimal sketch of such analysis follows this list).
  • Speech and Image Recognition: AI can automatically transcribe speeches, interviews, and public debates, making it easier for parties to analyse and respond to critical issues.
  • Predictive Modeling: AI models can forecast likely voter behaviour, allowing parties to target their efforts and allocate resources effectively.
  • Natural Language Processing (NLP): NLP algorithms can help parties understand and respond to constituents’ concerns by analysing text data from sources like emails, letters, and social media.
  • Social Media Monitoring: AI can track and analyse social media conversations, identifying trends and sentiment towards particular topics or candidates.
  • Chatbots and Virtual Assistants: Parties can use AI-powered virtual assistants to provide automated responses, answer voter queries, and engage with supporters.
  • Robotic Process Automation (RPA): Parties can use RPA to automate repetitive tasks like data entry, freeing up campaign staff to focus on more strategic activities.
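As a concrete illustration of the sentiment analysis item above, here is a minimal sketch of how a campaign team might gauge public opinion on a topic. It uses the open-source VADER model shipped with the NLTK library; the sample posts are invented for this illustration, and a real team would plug in its own lawfully collected public data.

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

# Hypothetical public posts about a policy topic, invented for this sketch.
posts = [
    "The new housing plan finally gives young families a chance.",
    "Another empty promise. Nothing will change in my neighbourhood.",
    "Curious to see the details, but the direction seems right.",
]

analyzer = SentimentIntensityAnalyzer()

# Each 'compound' score ranges from -1 (very negative) to +1 (very positive).
scores = [analyzer.polarity_scores(post)["compound"] for post in posts]

for post, score in zip(posts, scores):
    print(f"{score:+.2f}  {post}")
print(f"Average sentiment on this topic: {sum(scores) / len(scores):+.2f}")

Scores hovering around zero across many posts would suggest mixed or lukewarm opinion, while consistently negative scores on a topic could tell a party where its message is not landing.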

This shows that political parties can leverage AI technologies to enhance participation. However, can these tools be understood as cognitive behavioural manipulation? They can certainly be misused to exploit voters’ fears. But many of these technologies simply allow parties to use their resources better and engage with voters personally. This is especially relevant in countries where parties have limited resources.

The current agreement is set to be the second regulatory framework, after the GDPR, that aims to ban certain practices outright. The difference is that the GDPR is specific about what it bans and regulates, while the new AI framework is broad and will be hard to implement. The consequence for political parties is that some technologies may be framed as potentially manipulative, which would limit parties’ ability to do voter outreach, especially their ability to reach those not already engaged.

AI and outreach: the balance of engagement and manipulation

Here are, in our view, the three main consequences that AI tools and their regulation can have on political parties:

  • Targeted Advertising and Messaging: As mentioned in other pieces, political parties often leverage AI algorithms to analyse vast amounts of data, identify specific voter segments and tailor their campaign messages accordingly. AI can help optimise targeted advertising, allowing parties to reach potential supporters effectively. If regulations restrict the use of AI in this context, parties may face limitations on their ability to reach specific voter groups, potentially impacting their campaign outreach strategies. Moreover, if regulations restrict the use of AI in sentiment analysis, for example, parties may lose access to valuable insights into public sentiment, making it more challenging to fine-tune their campaign strategies. This might also limit parties’ understanding of their communities. In a world where citizens do not participate in public forums and often distrust politicians at their doors, AI tools can be a way to stay close to the feelings of the community. Broad regulation can limit this reach.
  • Social Media Monitoring: AI can monitor social media platforms and analyse user behaviour and competitor techniques. Political parties must understand online conversations, identify influential individuals, and detect emerging trends if they want to expand their message. This information can shape campaign messaging and help parties engage with key influencers. If regulations limit the use of AI in social media monitoring, parties may struggle to gather real-time insights from online platforms and adapt their campaign tactics accordingly. This also applies to content creation. AI technologies can generate personalised content, such as articles, videos, and social media posts, based on user preferences and behaviour. Political parties can use these tools to automate content creation and dissemination, allowing for rapid and tailored communication with voters. If regulations restrict or limit AI-generated content, parties may need to rely more on manual content creation processes, which tend to be time-consuming and less efficient.
  • Algorithmic Fairness and Transparency: Regulations aimed at preventing cognitive behavioural manipulation and social scoring may also require political parties to ensure the fairness and transparency of their AI algorithms. Parties might be required to disclose their data sources, explain the decision-making processes behind their algorithms, and address potential biases or discriminatory practices. This could add a layer of scrutiny and accountability to how political parties use AI in their campaigns.
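To make the transparency point concrete, here is a minimal sketch of what explaining a model’s decision-making could look like in practice: training a simple turnout predictor and disclosing which factors drive its predictions. The feature names and data are hypothetical, invented for this sketch, and actual disclosure duties under the final AI Act text may look quite different.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features a party model might use, invented for this sketch.
feature_names = ["age", "past_turnout", "urban_area", "contacted_before"]

# Tiny invented dataset: each row describes one voter with the features above;
# y records whether that voter turned out at the last election (1) or not (0).
X = np.array([
    [23, 0, 1, 0],
    [67, 1, 0, 1],
    [45, 1, 1, 1],
    [31, 0, 0, 0],
    [52, 1, 1, 0],
    [29, 0, 1, 1],
])
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# A transparency report could disclose which factors push predictions up or
# down, rather than treating the model as a black box.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:>17}: {weight:+.3f}")

Positive weights push a prediction towards “will vote”; negative weights push it the other way. Even this crude form of disclosure makes it possible to spot, for instance, a model that systematically deprioritises certain neighbourhoods.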

Today, social media is the political forum where most citizens express concerns and get information. Because of behavioural techniques used in the past to spread disinformation, there is a growing perception that technology only diminishes voter independence. Yet we also live in a world where many voters, including the youth, are not politically engaged. Can we really assume that these groups are being manipulated whenever democratic parties use AI to engage them politically?

Consequences of broad regulation: how parties might adapt

Because of this regulation, your party may face the challenge of adapting to broad frameworks in the future. These frameworks may limit tools that analyse public sentiment, monitor social media, and automate content creation. Your staff likely use these tools already and understand their value in optimising scarce resources.

We can, of course, assume that several parties will try to abuse the technologies. However, regulation of AI alone will not change the fact that those willing to manipulate and spread lies will find a way. I agree that some regulation is needed. However, the current agreement seems, to me, too broad and not sufficiently enforceable.

If we focus on democracy and the role of parties in an election year, perhaps the European Parliament and Commission should focus on creating frameworks together with national election institutions. This could help ensure that standards of voter independence are secured. Specific regulation can be the way to ensure that AI tools are not abused, while also ensuring that this technology remains at the service of parties trying to engage with voters.
