AI Extremism: Technology, Tactics, Actors

ispole
15 May 2024, modified on 13 December 2024

Stéphane J. Baele, Lewys Brace. AI Extremism: Technology, Tactics, Actors. VOXPol Reports. 2024. https://voxpol.eu/wp-content/uploads/2024/04/DCUPN0254-Vox-Pol-AI-Extremism-WEB-240424.pdf


Introduction

Over the past few years, developments across various Artificial Intelligence (AI) technologies have dramatically accelerated, initiating important transformations in a range of human activities, from medical diagnosis to sports training and from artistic creation to transportation. Large language models (like GPT-3) represent a “paradigm shift” in text analysis and generation (Bommasani et al. 2021), audio and video deepfakes’ unprecedented levels of credibility have turbocharged the porn industry’s pre-existing ills, and military equipment now displays various levels of decision-making autonomy – to name only a few examples of AI-driven evolutions.

There is no doubt that such a powerful, multifaceted and versatile technology will sooner or later percolate into the realm of extremism and terrorism – in fact, in the context of a steady growth of online extremist ecosystems, it already has. As Chesney and Citron (2019: 1762) warned a few years ago, the capacity to harness AI “will not stay in the hands of either technologically sophisticated or responsible actors”, no matter what we wish. Europol’s (2023) recent report on the issue recognizes that while the technology “offers great opportunities to legitimate businesses and members of the public”, it also carries severe “risks for the respect of fundamental rights, as criminals and bad actors may wish to exploit [the technology] for their own nefarious purposes”.

To be sure, just like all previous technological breakthroughs (from the printing press and gunpowder to the Internet), AI is destined to be used very creatively by entrepreneurs of hate and violence, who will embed it in multiple ways into their strategies and tactics. These uses, and their corollary side-effects, ought to be carefully mapped and evaluated if societies are to design an appropriate set of responses and avoid scattered reactive measures. Such an evaluation is particularly important at the onset of the problem, to provide