My research agenda as an economist is based on the premise that guiding AI toward beneficial outcomes is the most important issue facing our generation. Since Daron Acemoglu and I are kindred thinkers on this subject, I will take this opportunity to underscore and extend his essential message.
Acemoglu is right on three key points. First, core AI technologies are indeed advancing rapidly and becoming increasingly powerful. Improvements in machine learning, in particular—such as the deep learning techniques that made such rapid progress on the ImageNet dataset—are affecting more and more of the economy. In the right applications, the payoff can be large. Adoption is still in its early days—only 1.3 percent of firms in the United States have adopted robotics, for instance—but the numbers are growing rapidly. Second, the societal implications of AI are profound, especially regarding the future of work and our individual freedoms and democracy. Third, and most important, outcomes are not preordained.
This last point is not to be taken for granted. The most common question I get from audiences when I speak about AI is some version of: What will AI do to society? But this is not the right question to ask; it erases our agency. Acemoglu deserves special credit not simply for diagnosing the challenges created by these technologies, but also for suggesting a set of specific solutions. (In fact Acemoglu has long been one of the most powerful and rigorous advocates of the idea that we can and should direct the course of technical change. He published an influential article making this argument nearly twenty years ago.)
Regarding Acemoglu’s diagnosis, I would emphasize the following details. When it comes to AI’s effect on the workforce, the real challenge is wages, not jobs. While employment has grown over the past forty years, real wages for Americans with a high school education or less have fallen. Tyler Cowen and others have argued that this is evidence of a lack of technological progress, but overall GDP and GDP per capita have also grown, and 2019 saw a record number of billionaires. Drawing on the work of Acemoglu as well as David Autor, Lawrence Katz, Melissa Kearney, Frank Levy, and Richard Murnane, Andrew McAfee and I have made the case that advances in technology were not inconsistent with falling wages for some, or even a large part, of the workforce.
The scale of these changes, and the bigger ones yet to come, is massive. The value of all the human capital in the United States—the sum of American workers’ skills, experience, education, and knowhow—is likely around $240 trillion. That implies that if our decisions change the trajectory of technology’s effects on the U.S. economy enough to cause even a 10 percent change in that value, it would be worth more than an entire year’s GDP (currently $21 trillion).
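The arithmetic behind this claim is simple enough to check directly. A minimal sketch, using only the two figures quoted in the text ($240 trillion in human capital, $21 trillion in annual GDP):

```python
# Back-of-the-envelope check of the human-capital claim in the text.
human_capital_tn = 240  # estimated U.S. human capital, trillions of dollars
gdp_tn = 21             # approximate annual U.S. GDP, trillions of dollars

# A 10 percent change in the value of human capital:
change_tn = 0.10 * human_capital_tn

print(change_tn)           # 24.0 (trillions of dollars)
print(change_tn > gdp_tn)  # True: larger than a full year's GDP
```

So even a modest proportional shift in the trajectory of workers' skills and earning power swamps a year of total output, which is why the stakes of directing technology are so high.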
As for AI’s effect on democracy, we ought to be concerned about increasing polarization and enabling Orwellian levels of surveillance. As Marshall Van Alstyne and I have written, it is precisely because digital technologies better enable us to find content and people we like that they can also separate and polarize us. These technologies can also massively amplify the power of the state to monitor the words and actions of its citizens, giving it the power not only to silence critics but even to shape their thoughts. The implications are increasingly recognized by world leaders; as even Vladimir Putin has put it, “Whoever becomes the leader in this sphere will become the ruler of the world.” In the wrong hands, the result may be what Jean Tirole calls a digital dystopia.
But no outcome is inevitable. We have the ability to direct AI, just as we can direct other types of technical change. Let me focus on three groups that can and should play a role in shaping AI for good: technologists, managers, and policymakers.
“It is remarkable,” Acemoglu writes, “how much of AI research still focuses on applications that automate jobs.” This is an underrated problem. While it can be profitable to automate jobs, thereby substituting technology for human labor, in the long term the bigger gains come from complementing humans and making it possible to create value in new ways. Moreover, when technology substitutes for labor, pitting humans against machines, it tends to drive down wages and lead to a greater concentration of wealth.
By contrast, when technology complements labor, wages tend to rise, creating more broadly shared prosperity. (In addition to substituting for or complementing labor, Tom Mitchell and I describe four additional considerations for how technology will affect wages: price elasticity, income elasticity, labor supply elasticity, and business process redesign. In many cases, the net result of these six factors will be higher wages.) That is why McAfee and I have argued that “in medicine, law, finance, retailing, manufacturing and even scientific discovery, the key to winning the race is not to compete against machines but to compete with machines.” Indeed, at a major AI conference three years ago, I directly called on the gathered technologists to redirect their work from replicating and automating human labor to augmenting it.
Fortunately, a growing number of researchers are working to use AI to augment humans rather than replace them. Take Cresta, an AI start-up I advise. While many competitors work to develop fully automated chatbots that directly interact with potential customers, Cresta keeps a person in the loop. The system works alongside human operators, looking for opportunities to suggest ways of improving the dialogue—suggesting a product upgrade or service, offering a reminder about pricing, or coaching on tone and tactics. Via a series of A/B tests, Cresta found that this approach created demonstrable benefits for customers and also seems to benefit newer and less skilled workers especially, helping to close the wage gap and reduce inequality.
Managers, entrepreneurs, worker representatives, and other business leaders also have a critical role to play. Like technologists, they too often look at existing processes and ask the easy question: How can machines do what humans are now doing? The harder but ultimately more valuable question is different: How can technology and people work together to create novel sources of value? The more powerful and general the technology, the more important it is to rethink work. As Paul David, Warren Devine, Jr., and others have documented, significant productivity gains from electricity in manufacturing did not arise until managers fundamentally reinvented the organization of factories, a process that took thirty or more years—long enough for a generation of managers to retire and be replaced by fresher thinking. Modern enterprises are subject to similar dynamics: productivity lulls while firms make the intangible investments in organizational and human capital that complement new technologies like AI.
Policymakers can help each of the first two groups make better decisions by changing incentives. Take taxation. A key lesson from public finance is that we tend to get less of whatever we tax more. The current U.S. tax regime treats capital more favorably than labor. If two entrepreneurs each have a billion-dollar idea for using AI, the one who employs more labor will likely be taxed more than the one who is more capital-intensive. To the extent that labor income is more widely distributed, this element of our tax system discourages shared prosperity. This is a powerful argument for leveling the playing field. In fact, there is a good argument for going further, to the extent we think employment has positive externalities. (Robert Putnam’s 2015 book Our Kids: The American Dream in Crisis describes the negative effects of joblessness, while Anne Case and Angus Deaton’s 2020 study Deaths of Despair and the Future of Capitalism documents rising deaths from suicide, drug abuse, and alcoholism in demographic groups most negatively affected by falling labor demand.) Depending on how strong these externalities are, they could reverse the classic results suggesting that taxes on capital should be lower than taxes on labor.
This list of change makers is not exhaustive, of course. Economists also have a role to play in guiding the debate, as does the public at large. As the power of AI grows, our values become increasingly important. It’s incumbent on each of us to think deeply about what kind of society we want. Bringing these issues to the forefront of popular discussion is crucial.
In the face of all these possibilities for change, I remain a mindful optimist. Acemoglu notes that we are far from consensus about how to make progress. While this is a challenge, it is also an opportunity to forge a shared vision. But our window is short. If wealth and power become increasingly concentrated, and if democracy is further weakened, we will reach a point of no return. We can and must act now to prevent that from happening—and to redirect AI for the good of the many, not just the few.
Erik Brynjolfsson is Jerry Yang and Akiko Yamazaki Professor and Senior Fellow at the Stanford Institute for Human-Centered AI, Director of the Stanford Digital Economy Lab, and coauthor, with Andrew McAfee, of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. Find him on Twitter @erikbryn.