It is often assumed that modern technologies will lift all boats. But like many other aspects of society, the benefits and opportunities of technologies such as AI are rarely shared equally. The myth of the rising tide can also conceal a troublesome undercurrent: while the benefits of new technology accelerate economic gains and deliver everyday conveniences for the established, the harms of that same technology fall overwhelmingly on the most vulnerable and already marginalized. We agree with Daron Acemoglu that AI must be redirected to promote shared prosperity, but any genuine effort toward that goal will have to reckon with injustice—and work to ensure that both the risks and the benefits of new technology are shared equally.

Central to this work must be what we call Decolonial AI. The harms that have been documented as a consequence of AI deployments across the world—whether in facial recognition, predictive policing, resource distribution, shifts in labor practice, or health care diagnostics—did not emerge by chance. They result from long-term, systematic mistreatment and inadequate legal and economic protections rooted in the colonial project. Formal colonialism may have ended, but its logics, institutions, and practices endure—including within AI development and deployment. Any pathway to shared prosperity will have to contend with this legacy, and in particular with at least three distinct forms of algorithmic harm: algorithmic oppression, algorithmic exploitation, and algorithmic dispossession.

Algorithmic oppression describes the unjust privileging of one social group at the expense of others, maintained through automated, data-driven, and predictive systems. From facial recognition to predictive policing, such systems are often based on unrepresentative datasets and reflect historical social injustices in the data used to develop them. Amid the COVID-19 pandemic, for example, unrepresentative datasets have meant biased resource allocation, and prediction models have further exacerbated health inequalities already disproportionately borne by underserved populations. Much of the current discussion about “algorithmic bias” centers on this first category of harm, but we must broaden our sights to other forms of algorithmic coloniality as well.

Whereas the harms of algorithmic oppression manifest during the deployment or production phase of AI, algorithmic exploitation and dispossession emerge during the research and design phase. Exploitation is perhaps clearest within the realm of workers’ rights. The large volumes of data required to train AI systems are annotated by human workers—so-called “ghost workers,” as Mary L. Gray and Siddharth Suri put it in their 2019 book Ghost Work. These jobs are increasingly outsourced to jurisdictions with limited labor laws and workers’ rights, rendering the workers invisible to the researchers and fields of study that rely on their labor. Algorithmic exploitation construes people as automated machines, obscuring their rights, protections, and access to recourse—erasing the respect due to all people.

But even these two categories—oppression and exploitation—do not exhaust the range of algorithmic harms. There is also dispossession: at its core, the centralization of power, assets, and rights in the hands of a minority. In the algorithmic context, this can manifest in technologies that curtail or prevent certain forms of expression, communication, and identity (such as content moderation systems that flag queer slang as toxic) or through institutions that shape regulatory policy. Similar dispossession dynamics exist in climate policy, which has been largely shaped by the environmental agendas of the Global North—the primary beneficiary of centuries of climate-altering economic policies. The same pattern holds for AI ethics guidance, despite the technology’s global reach. We must ask who regulatory norms and standards are designed to protect—and who is empowered to enforce them.

While these forms of algorithmic coloniality pose a significant obstacle to shared prosperity, we believe steps can be taken to achieve a more just future for AI. We should think of these efforts as realigning the course of modern technology in a way that recognizes us all as stakeholders.

To start, we must think critically about what form of prosperity we seek to realize. Acemoglu’s central argument relies on the collective distribution of increased economic productivity. But given the inequalities in contemporary modes of production, should economic productivity be the measure of collective prosperity? While this form of prosperity, centered on material and economic well-being, is an important element, a more comprehensive definition should also encompass the fundamental role of dignity, the expansion of rights and open society, and new forms of vibrant social and political community.

Second, like Acemoglu, we believe it is critical for AI researchers and practitioners to become more aware of the social implications of their work. Our approach to Decolonial AI describes this effort as creating a new critical technical practice of AI, achieved by developing a reflexive or critical approach to research. We are already seeing this type of shift at institutional levels. Several large AI research conferences, for example, now require researchers to include statements considering the potential impacts of their work.

But consciousness raising alone is insufficient. It is also crucial to advance participatory methods in research, development, and deployment—practices that incorporate affected communities into the process. This form of reverse tutelage, which facilitates reciprocal learning between practitioners and communities, aims to supply context that researchers may not be positioned to appreciate and to refine the ultimate design of a given technology.

Recently there has been a surge of interest in the new field of “participatory machine learning,” which merges the practice of participatory action research with machine learning methods. While the work is promising, we must heed concerns that the concept of “participation” may do little work if it is not made precise and taken seriously. In particular, we should distinguish approaches that genuinely empower communities from those that merely repackage existing forms of data work—such as annotation or labeling—under the guise of participation. Industry has a crucial role to play in reforming and reimagining AI development and deployment, and it will need to actively rebalance these risks and benefits to ensure that social and economic gains accrue to all stakeholders involved.

Last, we must pursue more systemic change beyond the AI industry itself. Further accountability can be established by building infrastructure for greater public awareness and scrutiny of AI applications. Better documentation and impact assessments for AI systems, describing their intended uses and the limits on reuse and portability, could provide greater transparency and give impacted stakeholders the resources to understand potential harms and benefits. On the development side, there should be greater efforts to ensure higher standards for data collection and annotation, as well as more robust platforms for the safe auditing and evaluation of AI systems and datasets by external stakeholders. As systems increase in scale and capability, we will also need greater investment in research on the sociotechnical dimensions of AI. And we must develop new frameworks to deepen AI ethics principles and clarify their relation to human and civil rights law.

Algorithmic coloniality presents a daunting challenge, but it is not insurmountable. If we take these steps to realize the true promise of this technology, we can realign AI to serve the interests of us all.