“Hey ChatGPT!” This is one of the most common sentences nowadays in almost every office. It is remarkable how quickly we got used to this new technology, less than a year after it was first released. Daily, I meet speechwriters who ask ChatGPT for speaking points. I meet parliamentary assistants feeding the tool with information and asking it for policy briefs. I meet colleagues using the technology to get campaign ideas, map previously used messages, write mass emails and identify what competitors did right or wrong in previous elections. Whether the use of AI in politics is right or wrong remains debatable.
Last December, the European Parliament struck a deal with the Council and the Commission, and a provisional agreement is now in place in Europe to ensure that AI is used safely and with respect for fundamental rights. This of course includes citizens’ civil and political rights, and it therefore affects how parties campaign.
Over the last few months, we have seen a lot of coverage about the need to regulate AI. Prominent voices, like Steve Wozniak (Co-Founder of Apple), have been speaking up about the need to slow down the development of AI technologies. Many academics, political representatives and philosophers argue that AI poses a threat to democracy, contributing to voter manipulation and the spread of misinformation. Many people, however, still need to understand these developing technologies and are keen to have developers explain how they function.
However, political voices are also raising concerns that this new regulation might slow down innovation. President Macron said:
“We can decide to regulate much faster and much stronger than our major competitors (…) regulating things that we will no longer produce or invent is never a good idea.”
A similar argument applies to campaigning: parties are already using AI to run their campaigns, and new regulation might simply hold back those parties that did not catch up before their competitors.