As superpower-driven artificial intelligence (AI) models evolve at pace, Chatham House research is turning the lens to how postcolonial Commonwealth countries are meaningfully shaping the global AI governance space. At a recent roundtable on the research, questions around inclusivity, language, localisation and job security were candidly addressed.

Dr Nazam Laila is an Open Society University Network (OSUN) Fellow at the Queen Elizabeth II Academy of International Affairs at Chatham House, where she works with the Digital Society Programme. For the better part of a year, she has been researching Rwanda and Bangladesh as case studies of Commonwealth countries meaningfully contributing to the AI landscape. Both countries are part of the Commonwealth Artificial Intelligence Consortium (CAIC).
Looking to Commonwealth countries for AI leadership
The roundtable was conducted under the Chatham House Rule, meaning that remarks should not be attributed to any speaker or organisation. Following the roundtable, Dr Laila said:
“I’ve been exploring how marginalised, postcolonial countries — often overlooked in mainstream narratives — are quietly shaping the global AI discourse. Yet, digital colonialism, as an offspring of neo-colonialism, continues to disregard these voices on the global stage.”
She continued:
“While numerous AI governance frameworks exist today, very few are designed with the specific needs of vulnerable, postcolonial countries in mind. In this context, CAIC stands out. As a ‘minilateral’ platform, it offers a promising platform for collective bargaining power and digital cooperation among Commonwealth countries navigating the complex geopolitics of AI.”
Regional cooperation will strengthen an AI response
Alia Zafar, the Commonwealth Secretariat's Director of Human Resources and Facilities, who has oversight of AI, stressed the importance of regional cooperation in conversation with Dr Laila. She said:
“Looking across the 56 Commonwealth member countries, we can see how regions share similar embedded challenges, and Asia and Africa present two good examples of how shared experiences can lead to advances that benefit both.”
She also raised practical concerns about AI, adding:
“Decision makers within government require support in capacity and clarity in AI governance and therefore some ethical questions are important to ask at country level, such as why do we need AI? In which industry? What does the policy and regulatory framework for that industry look like? Countries are resource-focused, out of necessity, and this information will help influence how we navigate AI.”

AI, and the need for “think local, code global”
The roundtable brought together participants from Meta, Microsoft, academia, think tanks, the UK’s Foreign, Commonwealth and Development Office (FCDO), UN agencies, Commonwealth countries, Commonwealth accredited organisations and the Commonwealth Secretariat.
Participants shared that countries often look to AI to address something specific, such as increasing economic activity or troubleshooting problems in agriculture or health systems. A recurring issue was that large language models (LLMs) are trained predominantly on English-language data, even though many populations, including across the Commonwealth, do not speak English at home.
Dr Laila spoke about the concept of “think local, code global” to highlight the need for balancing global innovation and governance with local context. She will publish a paper on her findings in the coming weeks.
Related content
Commonwealth AI Consortium
AI: Driving innovation across the Commonwealth
Blog: Why the real AI gold rush isn’t where everyone thinks
Blog: Digital colonialism and AI - A call to action to policy makers
Media contact
- Suné Kitshoff, Senior Communications Officer, Communications Division, Commonwealth Secretariat
- M: +44 7740 450 901 | E-mail