Nicole Matejic, School of Terrorism and Security Studies, Charles Sturt University (New South Wales, Australia); email: [email protected]
Chris Wilson, Faculty of Arts, Politics and International Relations, University of Auckland (Auckland, New Zealand)
Supported by evolving Crime as a Service (CaaS) models engineered to exploit human cognition, generative AI will challenge legislators, regulators and policymakers in ways for which they are currently underprepared. With generative AI able to surpass its initial deployment configuration through adaptive learning, and to produce unintended consequences, ‘who’ is responsible for the crimes it commits when the only human touchpoints occur at the design, deployment and proceeds-of-crime delivery stages?
This paper draws upon emergent scholarship to examine how cybercriminals and terrorists are currently experimenting with generative AI tools, before turning to a hypothetical future and to initiatives that have begun to address this challenge.