“Hey ChatGPT!” has become one of the most common phrases in almost every office. It is remarkable how quickly we got used to the technology, less than a year after its release. Every day I meet speechwriters who ask ChatGPT for speaking points. I meet parliamentary assistants feeding the tool with information and asking for policy briefs. I meet colleagues using the technology to get campaign ideas, map previously used messages, write mass emails and identify what competitors did right or wrong in previous elections. Luckily, I have yet to meet political communication experts thinking up exploitative uses of AI for their campaigns. One could debate whether the use of AI in politics is right or wrong.
Over the last few months, we have seen extensive coverage of the need to regulate AI. Prominent voices like Steve Wozniak (co-founder of Apple) have been speaking up about the need to slow down the development of AI technologies. Many academics, political representatives and philosophers argue that AI can be a threat to democracy and will only contribute to voter manipulation and the spread of misinformation. Many people are still trying to understand these developing technologies and are keen to have developers explain them.