I am grateful to the respondents for these thoughtful replies. I am particularly heartened by the broad agreement that the U.S. labor market, like that of other industrialized nations, is fundamentally not working—and that AI isn’t helping. Beyond this consensus, however, there are many important nuances in these comments. It is impossible to do justice to all of them here, but they can be usefully separated into four groups.
The first is the most optimistic. Erik Brynjolfsson and Lama Nachman agree with the broad outlines of my critique of the current course of AI development, but they are more sanguine than others about the future—and about what is already going on in the industry. Particularly on point is Brynjolfsson’s contention that “the real challenge is wages, not jobs.” Nachman is right, too, when she emphasizes “the complementary nature of human and AI capabilities.” One helpful example is Brynjolfsson’s discussion of Cresta’s work on chatbots. And as both point out, much more can be done when it comes to AI’s opportunities to empower rather than replace humans.
Yet I do not see these technologies becoming central in the current environment without significant efforts at redirection. Automation and monitoring remain the main focus of AI development, and a few large companies with a strong focus on algorithmic automation are having an outsized impact on the direction of this technology. Consistent with this, my recent research with David Autor, Joe Hazell, and Pascual Restrepo suggests that a lot of current AI is in automation mode, rather than the more collaborative mode Brynjolfsson and Nachman envision. I hope time—and our efforts at redirection—will prove them right and me wrong.
The next two groups think that things are worse than I have described. Daniel Susskind argues that a certain degree of job loss to automation is inevitable—and thus that technological change is less susceptible to redirection than I contend. In particular, in a version of John Maynard Keynes for the age of AI, Susskind foresees the labor market, at least for some workers, almost completely breaking down with “technological unemployment.” Based on this assessment, he advocates “decoupling work and income” using schemes such as state-guaranteed jobs and more redistribution (perhaps as a universal basic income or a guaranteed basic income). Like Keynes he believes that such schemes are feasible in part because it is possible to help people “find purpose through other socially valuable activities.”
Just as I hope Brynjolfsson and Nachman turn out to be right, I would be happy if Susskind’s predictions about the decoupling of work and purpose were borne out. But even if such a separation were feasible in the long run, I think it would be imprudent to presume that a large fraction of the current (or perhaps even the next) generation will be able to adapt seamlessly to a workless future without losing social meaning—and in the process further degrading democracy and cohesion in our society.
A third group also believes things are worse than I have described, but for different reasons than Susskind. These authors agree that digital technologies and AI have played a major role in our predicament, but they also see other factors at work that are equally important.
Andrea Dehlendorf and Ryan Gerety emphasize the dwindling power of and protections for workers, even as they recognize that this is partly a technological story. They also rightly emphasize that new technologies disproportionately disempower minorities and women, especially in the service sector. Molly Kinder argues that COVID-19 has done much more damage to workers in one year than decades of excessive automation, and she too recognizes the unequal nature of the suffering. These are important observations, but I expect that some of the damage from COVID-19 will be reversed, while there is no turning back from jobs lost to automation.
Rob Reich points out two aspects that I did not emphasize enough. First, he joins Kate Crawford, as well as Dehlendorf and Gerety, in stressing the technological transformation of workplaces—constant monitoring, insecure staffing arrangements, the increasing disempowerment of workers in the gig economy. Second, he notes the troubling dominance of AI companies over academia, which I agree must be addressed.
All these respondents suggest policy remedies in line with my assessment: regulation of the technology sector, more diverse voices, and more power for workers have to be part of the solution. I wholeheartedly endorse these prescriptions, but I would reiterate that such reforms, by themselves, won’t be enough and may even backfire. Available evidence suggests that in current circumstances, greater wage pressure and collective bargaining power for workers may encourage even more automation by firms. Efforts to empower workers—in the workplace and in politics more broadly—must go hand-in-hand with efforts to redirect technological change.
Some of these objectives need robust regulation—a point underscored by Crawford and Reich, as well as by Rediet Abebe and Maximilian Kasy in their discussion of actors outside the AI industry. I should have been more emphatic on this point. But there was a reason for my focus on attitudes and norms: the policy prescriptions highlighted by these authors will not be effective without a change in social norms. Short of such changes, there will be myriad ways for tech companies to avoid or circumvent regulations, and they can do so without suffering sufficient blowback from their employees or customers to force them to change course (in the same way that, without pressure from their customers or employees, banks circumvented regulations before the financial crisis). I thus stand by my conclusion that the first step has to be securing a broad recognition of what current digital and AI technologies are doing to the labor market and democracy, and building general agreement about the responsibilities of the technologists and leading firms in this area.
Two responses—by Abebe and Kasy, and by Shakir Mohamed, Marie-Therese Png, and William S. Isaac—offer complementary analyses to mine, but like the others in this group, they again rightly emphasize the adverse effects on marginalized groups. Abebe and Kasy are undoubtedly correct that prevailing standards for algorithmic fairness fall short of ensuring an equitable distribution of costs and benefits of many AI applications founded on “targeted treatment.” Mohamed, Png, and Isaac draw out similar concerns through the frame of “algorithmic coloniality”—algorithmic harms that grow out of the “colonial project.”
Of course tensions related to inequality of social power and the role of businesses in steering the direction of technology are nothing new. Nonetheless the regulated market economy of the 1950s and ’60s generated plenty of technologies that increased workers’ productivity and earnings (and even some, like mass media, that at times helped amplify organized labor’s voice). However pernicious the practices and legacies of colonialism, to understand and to correct the problems that AI is creating specifically for labor and democracy, we should focus on changes since the 1970s—including the decimation of regulation, greater focus on shareholder value and cost-cutting, the dominance of the business and technology models of a few tech companies, the disappearance of government leadership in research, and increasing tax subsidies for automation.
The comment by Aaron Benanav is the most critical of my argument. Benanav shares my assessment of the current state of the labor market, but he takes a more skeptical view of “the degree to which technology is responsible.” In his telling what we need is not a change in the direction of technological development but a more fundamental transformation of the economic system. He calls for “public investment for and by the people,” based on “democratically designed, public protocols for the allocation of productive resources.”
Yet it is not clear how to implement these prescriptions, and there are no clear models of past success in this realm. The swift and shared prosperity of postwar American and European economies wasn’t based on large-scale public investments that sidelined market incentives (not even in the Nordic countries); it was driven by a regulated market economy generating rapid technological advances. My discussion of renewable energy was meant to provide a case in point. Massive public subsidies to clean energy could have been tried in the 1980s and early 1990s, but it would have been very, perhaps even prohibitively, expensive. The chief factors that redirected technological change in the energy sector—delivering impressive cost reductions for clean energy along the way—were some basic regulations, R&D inducements from the government, and nonpecuniary incentives provided by changing social norms. If we had followed Benanav’s vision, we would be far behind where we are in terms of having a fighting chance against climate change.
The same lessons apply today for the future of AI. It is not too late to put technology to work to create jobs and opportunities and to support individual freedom and democracy. But doing so requires a massive redirection of technological change, especially in the field of AI. We cannot expect this redirection to be led by today’s corporate giants, whose profit incentives and business models have centered on automation and monitoring. Nor can we expect anything better from China’s state-led model, which has, if anything, been even more fixated on using technology to disempower workers and citizens. The only path out of our current predicament requires both robust regulation and a fundamental transformation in societal norms and priorities, so that we put pressure on corporations and governments as customers, employees, and voters—while we still can.