Daron Acemoglu considers the social impact of AI through two pathways: the consequences for labor and wages, and the effect of social media and surveillance technologies on democracy and individual freedoms. He argues that these twin challenges lead to a worrisome and mutually reinforcing dynamic, but also that the future is not doomed. The direction of technical change is not a given—it is a consequence of policy choices and social norms that remain within our power to change.

We could not agree more. In fact, many of the arguments Acemoglu makes regarding the impact of AI on the labor market could be made about any impactful new technology, from steam engines to electrification. Such technologies substitute for some forms of labor, complement others, and have generally led to a redistribution of employment and incomes. And as history has revealed, their development, use, and impact are indeed not preordained; they have changed over time in response to political and social choices.

All this is true for AI as well. In this spirit, we want to spell out further how we might redirect the development of AI, focusing on issues that arise specifically in its applications. In particular, we want to call attention to the limitations of two approaches to achieving algorithmic justice.

A salient—and often distressing—use of AI involves the targeted treatment of individuals in a wide range of domains, from hiring and credit scoring to pricing, housing, advertising, and our social media feeds. These applications have drawn criticism for unfairness and discrimination in algorithmic decision-making.

But these criticisms stop far too short. Many leading notions of algorithmic fairness—often defined as the absence of discrimination between individuals with the same “merit” within a given modeling context—may in fact have the effect of justifying and preserving the status quo of economic and social inequalities. They take the decision-maker’s objective as a given normative goal and do little to challenge the profit-maximizing aims of technology companies. Discrimination is then effectively defined as a deviation from profit maximization. In our work we have argued that such notions of fairness suffer from three crucial limitations.

First, they fail to grapple with questions about how we define merit (such as trustworthiness, recidivism, or future educational success) and evade questions about whether it is acceptable to generate and perpetuate inequality justified by this notion. Because of this, improvements in the predictive ability of algorithms can increase inequality while reducing “unfairness.”
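To make this concrete, here is a minimal toy simulation, our own sketch rather than a result from any particular study, in which a decision-maker treats the top 20 percent of applicants ranked by a predicted “merit” score. The distributions, thresholds, and size of the benefit are all illustrative assumptions. As the predictor’s noise shrinks, treatment tracks true merit more closely and the within-model “unfairness” measure falls, while the benefit concentrates on those who were already advantaged and the rich-poor gap widens.

```python
# Toy illustration (illustrative assumptions only): a more accurate predictor of
# "merit" reduces within-model "unfairness" while widening outcome gaps.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

merit = rng.normal(0.0, 1.0, n)            # latent "merit" the algorithm tries to predict
wealth = merit + rng.normal(0.0, 0.5, n)   # preexisting advantage correlates with merit

rich = wealth >= np.quantile(wealth, 0.8)  # top initial-wealth quintile
poor = wealth <= np.quantile(wealth, 0.2)  # bottom initial-wealth quintile

for noise in (2.0, 1.0, 0.25):             # predictor noise, from crude to accurate
    score = merit + rng.normal(0.0, noise, n)
    approved = score >= np.quantile(score, 0.8)  # treat the top 20% by predicted merit

    # "Unfairness" within the model: how often treatment fails to track true merit,
    # i.e., equally meritorious people end up treated differently.
    truly_top = merit >= np.quantile(merit, 0.8)
    unfairness = np.mean(approved != truly_top)

    # Outcome inequality: approval confers a benefit; measure the rich-poor gap afterward.
    final_wealth = wealth + 2.0 * approved
    gap = final_wealth[rich].mean() - final_wealth[poor].mean()
    print(f"noise={noise:4.2f}  unfairness={unfairness:.3f}  rich-poor gap={gap:.2f}")
```

The point is not that accuracy is bad, but that a fairness criterion defined entirely inside the model is silent about what the resulting allocation does to inequality in the wider population.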

Second, these fairness definitions are narrowly bracketed: they only consider differential treatment within the algorithm. They do not aim to compensate for preexisting inequalities in the wider population, nor do they consider the inequalities they may generate within that population. Unequal treatment that compensates for preexisting inequalities, such as affirmative action in college admissions, might reduce overall inequality in the population. But such unequal treatment would be considered “unfair” according to standard definitions.

Third, leading notions of fairness consider differences between protected groups (e.g., people of different genders) and not within these groups (e.g., differences between women of different races, socioeconomic backgrounds, immigration and disability status, among other axes of oppression). But as intersectional feminist scholars have long argued, equal treatment across groups can be consistent with significant inequality within groups.
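As a small numerical illustration of this point, using numbers we have invented purely for exposition rather than data from any study, consider an allocation that gives two protected groups exactly the same average benefit. Every between-group parity check passes, yet within one group the benefit goes almost entirely to its already advantaged subgroup.

```python
# Hypothetical benefit rates (e.g., approval rates) for four equally sized
# subgroups; the numbers are invented purely for illustration.
benefit = {
    ("group_A", "advantaged"): 0.90, ("group_A", "marginalized"): 0.10,
    ("group_B", "advantaged"): 0.55, ("group_B", "marginalized"): 0.45,
}

for group in ("group_A", "group_B"):
    rates = [rate for (g, _), rate in benefit.items() if g == group]
    mean = sum(rates) / len(rates)
    spread = max(rates) - min(rates)
    print(f"{group}: mean benefit = {mean:.2f}, within-group gap = {spread:.2f}")

# Both groups average 0.50, so a between-group parity metric is satisfied, yet
# group_A's marginalized members receive almost nothing (within-group gap of 0.80).
```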

Instead of this fairness-focused framework, we think the study of the impact of algorithmic decision-making on society calls for an inequality- and power-based framework. Our work has shown that decisions that increase fairness, as it is commonly construed, can in fact lead to greater inequality and decrease welfare. This tension brings a crucial question into sharp relief: Do we want fairness, as defined by narrow notions of what constitutes fair allocation or treatment, or do we want equality?

Given these limitations of a fairness-based approach to AI redirection, we need to think more deeply about who controls data and algorithms. How do we reason about the impact of algorithmic decision-making on the overall population? At their core, AI systems simply maximize some objective. But who gets to define that objective? Whose goals count? That is very much a function of the property rights assigned by society. As Marx might have put it, we need to ask who controls the means of prediction.

The flip side of this question of property rights is who we consider to be a possible agent of change when it comes to redirecting the course of AI development and deployment. Just as there is a question about who gets to pick the objectives that AI systems aim to maximize, there is a question about who might potentially remedy the adverse social impact of these technologies.

A booming field in academia and beyond considers the ethics of AI, focusing on questions such as fairness, accountability, and privacy. Whether explicitly or implicitly, much of this field takes as its audience the corporations implementing these technologies, or the engineers working for these corporations. We believe that this focus is misguided, or at the very least incomplete.

While social norms certainly matter for behavior, economic and financial forces ultimately determine organizational objectives in our capitalist economy. We must not lose sight of the fact that corporations will first and foremost maximize profits. There is a reason that “corporate social responsibility” goes “hand in hand” with marketing, as one Forbes contributor puts it, and that arguments for diversity are often advanced in terms of the “business case” for a more diverse workforce. Left to industry, ethical considerations will either remain purely cosmetic—a subgoal to the ultimate objective of profit maximization—or play only an instrumental role, whether because of the elusive business case for diversity, or as a way to avert antidiscrimination lawsuits, union organizing, bad press, consumer boycotts, or government regulation.

In order for these pressures to play a meaningful role in corporate calculations, we need external regulation, advocacy, and oversight: actors outside these corporations who are aware of the problems new technologies might be causing, who understand how they impact all members of society, and who can influence norms, change incentives, and take direct action to alter the course of development. There are many forms this action might take. There are organizations and unions of workers who have the potential leverage of strikes. There are civil society actors, nongovernmental organizations, and journalists who have the potential leverage of public attention and consumer boycotts. And there are government policymakers, the judiciary, regulatory agencies, and politicians who have the leverage of legislation and litigation. All these actors have an essential role to play in Acemoglu’s vision for a more just future for AI. We cannot leave the decisions to the companies themselves.

We thus want to conclude with a call to arms. Those of us who work on the ethics and social impact of AI and related technologies should think hard about who our audience is and whose interests we want to serve. We agree with Acemoglu that the future is not preordained. But his program for redirection will succeed only if it includes a wider range of agents of change—especially those who have been left at the margins of society and bear a disproportionate share of algorithmic harms.