What can decolonial methodologies and Ubuntu offer us in the governance of emerging technologies?

  1. Introduction

In this essay, I make the case that decolonial methodologies and Ubuntu stand to offer new perspectives and insights for the governance of intelligent emerging technologies like Artificial Intelligence (Ai). Decoloniality challenges the dominance of “universalised” Eurocentric knowledge systems shaped by colonialism, and calls instead for a multiplicity of knowledge systems, while Ubuntu, an African philosophy, emphasises the interconnectedness of humanity, dignity and collective well-being. In combination, these frameworks broaden and deepen our benchmarks for identifying and responding to harm, well beyond the confines of human rights and administrative violations. I begin by briefly outlining coloniality and decoloniality, then survey Ubuntu, before considering different ways these frameworks can benefit current efforts towards developing risk policies and governance frameworks for Ai.

  2. Understanding the decolonial methodology

Western or European colonialism refers to the historical period when European powers, primarily from Western Europe, established and maintained colonies in various parts of the world, including in South Africa, my home country. Colonisation was driven by imperialistic ambitions for conquest, economic interests characterised by extraction, and the desire for political and cultural dominance and superiority. It involved the subjugation, devaluation, suppression, and even criminalisation of non-Western epistemologies, ontologies, and methodologies.[1]

Colonialism continues to have a profound impact on knowledge systems and traditions across the globe, with Western frameworks often presented as the only legitimate way to know or to govern. Today, many jurisdictions outside of Europe still rely predominantly on Western legal theories, concepts, and methodologies, even in “post” colonial legal systems, with non-Western ways of understanding well-being and justice still systematically marginalised.[2]

Decoloniality, on the other hand, is a response to the ongoing impacts of colonisation; it refers to a theoretical and practical framework that seeks to deconstruct and challenge the Eurocentric worldview that underpinned colonialism and its associated systems of domination, oppression, and exploitation. Decoloniality recognises the intersection between coloniality, capitalism, racism, and other systems of oppression, and is aimed at alternative futures based on principles of justice, equality, and self-determination. It acknowledges that colonisation not only involved the physical occupation and control of territories and resources, but also exerted, and continues to exert, significant influence on systems of thinking, knowing, and policy-making.[3]

In the context of Ai discourse, decoloniality recognises that traditional scholarship and methodologies continue to be shaped by colonial-capitalist ideologies, power dynamics, and epistemologies that privilege Western knowledge systems and actors, while insisting on their universal nature. As a methodology in its own right, decoloniality aims to disrupt this, including by centring marginalised knowledge systems and perspectives and by challenging the dominance of Eurocentric conceptions of harm, repair and justice. One way to do this work, I propose, is through the recognition of multiple knowledge systems that exist alongside the dominant European frameworks. Here, the principle of Ubuntu comes to mind.

  3. Ubuntu – I am because you are

Ubuntu, originating in various African cultures, particularly in southern Africa, is a philosophy and ethical principle that, as a policy directive, emphasises the inherent collectivity of humanity, and the interconnectedness and interdependence of our dignity and welfare. Accordingly, individuals exist within a web of relations; their well-being is intertwined with the well-being of others and of our ecological system. In Sesotho, my mother tongue, it translates to motho ke motho ka batho.[4] I am because you are.[5] Ubuntu recognises that personhood is fundamentally relational and communal, implying that knowledge, wisdom, and even data are not solely individual possessions but shared resources that contribute to the collective well-being.[6] Although often eclipsed by Western epistemic and methodological domination, which emphasises individuality, Ubuntu continues to be widely embraced as an underlying, informal guiding principle for social relations, ethics, and governance in some African contexts.[7]

  4. Analysis of decolonial methodology and Ubuntu

Recently, I have been curious about what the combination of decolonial methodologies and Ubuntu can offer us, specifically in the context of developing legal governance systems for intelligent emerging technologies such as Ai. For instance, in Western epistemology, knowledge is often understood as objective, detached, and rooted in individual rationality. Ubuntu, on the other hand, points instead to the more communal nature of knowledge (and data?) and the importance of collective wisdom. What can this perspective offer us in the context of data sovereignty, data governance and the corresponding data protection frameworks, whose conceptions of data and harm are limited to individual parameters (data protection revolves around “personal data”)?

My first thought – on the “individual consent” model

Is the “individual consent” model still effective in achieving data protection in the context of Ai? Big Tech companies often have near-monopolies in their respective domains (social media, search, cloud services). For an individual to genuinely “opt out” of their data collection often means opting out of essential communication, economic, or social participation. This isn’t a free choice; it’s a form of coercion by necessity. The individual has virtually no leverage to negotiate terms, and the terms of service (ToS) are presented as a non-negotiable contract. Even if an individual wanted to understand the full implications of their consent, doing so is often beyond their capacity or time. ToS are long, complex, and written in legal jargon – most users just click accept. Thus the “informed” aspect of consent is rarely met. Moreover, even where consent is given, the processing of data for Ai often pulls against the principle of purpose limitation: much of the data used to train Ai systems today was initially collected for something else entirely, and found its relevance to Ai only much later. And while regulations like the GDPR offer rights to data portability and erasure, exercising these rights against a giant, distributed data infrastructure is often cumbersome, ineffective, or simply not fully implemented by companies. The individual consent model is a relic of an era before the pervasive, collective, and often insidious impacts of data-driven Ai were fully understood. The rise of intelligent emerging technologies like Ai exposes the limitations of this individualistic lens, as many harms have a profound collective dimension.

My second thought – on slow-burn harms

The human rights framework, while crucial for protecting individual freedoms and preventing egregious violations, has limitations in capturing certain types of harms that emerge in the context of emerging technologies, harms that simply do not meet the definitional requirements for a violation. This is often harm that is systemic or structural in nature, occurring incrementally over time. Human rights frameworks often focus on immediate, episodic, event-based and tangible harms to individual rights. An Ubuntu-informed approach to recognising harm can help us appreciate and address slow-burn harms that occur over time, which systems like human rights may not adequately capture. Such an approach is alive to slow-burn harms, and to the long-term implications, intergenerational justice, and the collective well-being of human beings beyond the temporal scope of rights violations. It would call for comprehensive governance that accounts for the systemic and cumulative effects of Ai technologies on different stakeholders, including on the natural environment.[10] The immense energy and water required to train and run large Ai models contribute to environmental degradation, which in turn disproportionately impacts marginalised communities already vulnerable to climate change. When collective harm occurs (e.g., through climate degradation, algorithmic bias, data breaches or unlawful processing), legal and governance frameworks would need to move beyond individual compensation to address the collective impact and aim for restorative justice, thereby suggesting a shift from a purely punitive model to one focused on repairing the social fabric and ensuring future collective well-being.

Final thought – Power

We know that the discourse on Ai and other emerging technologies lacks epistemic diversity. The overwhelming majority of the research and jurisprudence on Ai is produced in the Western world (epistemically and geographically).[8] This power imbalance remains largely unacknowledged. Further, we also know that Ai technology is already exposing some of its underpinning colonial logics, most visibly in reported incidents of continued discrimination against social groups originally oppressed under colonisation, such as black and brown people.[9]

Both decolonial and Ubuntu methodologies draw our attention to power dynamics and imbalances in the discourse, development and use of Ai. In the context of governance, this calls for critically examining the distribution of power, resources, and benefits associated with Ai technologies. The term “data colonialism” powerfully captures the essence of this power imbalance. Just as historical colonialism involved the extraction of resources (land, labour, raw materials) from colonised lands for the benefit of colonial powers, data colonialism involves the extraction of data (a new form of resource) from individuals and communities, often in the Global South, for the economic and technological benefit of large tech companies, predominantly in the Global North.

Ultimately, the challenges posed by intelligent emerging technologies require us to critically examine and expand our conceptions of and responses to harm. Decolonial methodologies and Ubuntu compel us to move towards a more holistic, collective, and justice-oriented framework for data governance, where power imbalances are acknowledged and actively addressed, and where technology serves, rather than exploits, communities. By incorporating decolonial and Ubuntu methodologies into the development of legal governance systems, we may broaden and deepen our perspectives on the objectives and targets of governance measures, towards more inclusive and holistic governance.

References

[1] Walter Mignolo and Catherine E Walsh, On Decoloniality: Concepts, Analytics, Praxis (Duke University Press 2018); Sabelo J Ndlovu-Gatsheni, ‘Decoloniality in Africa: A Continuing Search for a New World Order’ (2015) 36 The Australasian Review of African Studies 22; ‘Why Decoloniality in the 21st Century? – University of Johannesburg’ <https://ujcontent.uj.ac.za/esploro/outputs/9910848907691?institution=27UOJ_INST&skipUsageReporting=true&recordUsage=false> accessed 9 June 2023.

[2] Walter D Mignolo, ‘Coloniality Is Far from Over, and So Must Be Decoloniality’ (2017) 43 Afterall: A Journal of Art, Context and Enquiry 38; ‘Why Decoloniality in the 21st Century? – University of Johannesburg’ (n 1).

[3] Mignolo and Walsh (n 1).

[4] Literal translation: ‘a person is a person through other people’; commonly rendered as ‘I am because you are’.

[5] Danford Tafadzwa Chibvongodze, ‘Ubuntu is Not Only about the Human! An Analysis of the Role of African Philosophy and Ethics in Environment Management’ (2016) 53 Journal of Human Ecology 157.

[6] C Ewuoso and S Hall, ‘Core Aspects of Ubuntu: A Systematic Review’ (2019) 12 South African Journal of Bioethics and Law 93; Christian BN Gade, ‘The Historical Development of the Written Discourses on Ubuntu’ (2011) 30 South African Journal of Philosophy 303; JY Mokgoro, ‘Ubuntu and the Law in South Africa’ (1998) 1 Potchefstroom Electronic Law Journal <https://www.ajol.info/index.php/pelj/article/view/43567> accessed 9 June 2023.

[7] Moeketsi Letseka, ‘In Defence of Ubuntu’ (2012) 31 Studies in Philosophy and Education 47.

[8] J Klinger, J Mateos-Garcia and K Stathoulopoulos, ‘A Narrowing of AI Research?’ [2020] ArXiv <https://consensus.app/details/also-find-sector-researchers-tend-specialise-data-klinger/03c5cb7d0260575aba6d770b6e1a5925/> accessed 29 May 2023.

[9] Gabbrielle M Johnson, ‘Algorithmic Bias: On the Implicit Biases of Social Technology’ (New York University 2021) (“Algorithmic Bias 2021”); Gillian Tett, ‘Mapping Crime – or Stirring Hate? When Chicago Police Ran Their Predictive Statistics, There Was a Strong Racial Imbalance’ Financial Times <https://www.ft.com/content/200bebee-28b9-11e4-8bda-00144feabdc0>.

[10] Article 24, African Charter on Human and Peoples’ Rights.