(1) the integration of AI into campaign strategies and its impact on their efficiency and effectiveness;
(2) the shaping of voter behaviour; and
(3) ethical considerations and regulatory challenges.

Overall, the paper notes that the 2024 elections revealed that AI carries both transformative potential and serious risks for political campaigns and for elections more broadly.

AI powers democracy… 

On enhancing the efficiency and effectiveness of political campaigns, the paper points to examples where AI has sharpened voter targeting and personalised messaging, and expanded the outreach capacity of campaigns through inclusive strategies such as multilingual messaging. Illustratively, an AI-powered tool was used in the United States to translate campaign messages into multiple languages, while in India and Indonesia AI was used to generate real-time, data-driven and personalised campaign messages. There were even examples of campaigns using AI-generated speeches, press releases and social media posts.

… but at what cost?

The paper notes that in Africa, deepfake videos surfaced during elections in South Africa and Zambia, while in Mauritius, interestingly, the ruling government responded to the scandal of leaked recordings implicating it in illegal surveillance and corruption by branding the recordings as deepfakes. In another vulnerability for our democracies, the paper found that in regions with low technology literacy, AI-generated personalised videos and deepfakes proved especially persuasive. This connects with wider concerns about the digital platforms used to disseminate such content, and the risk that some political actors could gain lopsided control over content algorithms and manipulate them for their own ends. It is in this context that the question of regulation and ethics arises.

Innovation meets regulation

The paper reviews frameworks from the European Union (EU), the United States and Singapore, and concludes from this analysis that the ideal ecosystem for the use of AI in elections consists of data privacy safeguards, standard electoral laws and commitments to the ethical use of AI. Given the dynamism of AI, constructive interpretation of these norms is identified as a key principle for addressing emerging threats while guarding against over-regulation that could stifle innovation.

Elements of such constructive interpretation include strategically reinterpreting existing laws to confront emerging challenges, imposing transparency obligations and general disclaimers for the use of AI, and ensuring human oversight. The cooperation of digital platforms, though it has so far registered limited success, is also considered an essential pillar of regulation efforts, since it is these platforms that amplify and widely disseminate AI-generated content. There have also been examples of political actors and other stakeholders subscribing to voluntary codes of conduct as a good practice. However, the voluntary nature of these codes means that adherence is inconsistent.

Conclusion

In conclusion, the paper rightly calls on States to adopt a holistic approach to the reality of AI in elections. Legal regulations need to apply to all key stakeholders, including state actors, online platforms, and AI companies and developers. Laws should be supplemented with other measures such as ethical guidelines, technological solutions that minimise adverse effects such as disinformation, cybersecurity measures, and the enhancement of technology literacy to foster greater public vigilance against emerging threats.

AHEAD Africa fills the gap