Though AI has seen powerful advances over the last decade, the state of the art falls far short of artificial general intelligence—the Holy Grail for some technologists, and a voguish fear for many observers. But the systems we do have are already powerful enough to send tremors through our individual, professional, and political lives, and astounding advances in language and vision models in just the past few years portend only more seismic activity to come. Daron Acemoglu reminds us that the effects of technological innovation are not preordained, and he provides an essential playbook for citizens in democratic societies to steer the development of AI in a direction that supports rather than subverts human and societal flourishing.

I agree with Acemoglu that such redirection is essential. But I want to point to two neglected dimensions of his analysis: the transformation of the workplace, and the essential role of government funding and policy in making academia rather than industry the center of research activity.

AI threatens more than the elimination of human labor by machines or autonomous systems; it is also transforming the experience of the workplace, often for the worse. Most of the discussion about the social and economic effects of AI and autonomous systems has focused on the technological displacement of human labor—the rising number of job-destroying robots. This discussion is essential, but such a focus obscures an equally important debate about how autonomous systems change the experience of work. That transformation comes in at least three forms.

First, the service industry—one of the largest sectors of the economy—once provided stable and predictable work schedules, with employers absorbing the risk of oversupplying labor in the face of weak demand. Now automated systems drive staffing schedules optimized for the employer, generating uncertainty in work hours and total compensation and shifting those risks to employees.

Second, where once a supervisor on the shop floor watched to see that employees were on task and productive, the rise of AI-driven “bossware” creates a surveillance panopticon of eyeball tracking, keystroke logging, location tracing, and other forms of automated monitoring of labor. Less invasive but increasingly common workplace tools, such as Slack and video conferencing, expose even the most ordinary forms of collaboration to surveillance.

And third, AI-driven systems also sit behind the platform or gig economy, connecting customers to providers of services—think TaskRabbit, Uber, Lyft, and DoorDash—who are contractors rather than full employees. These platforms give gig workers the benefit of flexible schedules but deny them the ordinary protections of full employment, from health care to retirement benefits. Once again, a risk that owners would otherwise bear by staffing with full-time employees is shifted to the worker, creating new forms of precarity.

The upshot is that policy responses to AI, as Acemoglu rightly says, must rely on more than redistribution. They must also go beyond redressing the distortionary incentive of asymmetrical tax rates on labor versus equipment and software. The ongoing shift of risk from employers to workers and the privacy-abusing practices of bossware also require policy intervention.

This conclusion leads to the more general question posed by Acemoglu: How can government policy steer the development of AI away from automation that has negative consequences for individuals and societies? His three-pronged approach of removing policy distortions, changing research norms, and rejuvenating democratic governance is spot-on. But I would point to a number of different policy areas that could yield clear progress toward the aim of an AI future that generates shared prosperity, enhances the livelihood of workers, and increases our freedoms.

One worrisome trend at the frontier of AI science is the brain drain of talent from academia to industry. The most recent AI Index reports that in North America, 65 percent of graduating PhDs in AI went into industry in 2019, up from 44.4 percent in 2010. The result is a steady rise in research coming from industry—research that quite possibly answers more to corporate and profit-making interests than to the goal of individual and societal flourishing. The reason for the brain drain is not (just) the familiar explanation that AI talent is scarce and industry compensation far exceeds what academia can pay. Another factor is industry's far greater access to computing power and enormous pools of data, especially at big tech companies. In many cases, research at the frontier of AI science simply cannot be carried out within academia—at least not without massive funding and corporate partners to provide data and compute.

This problem can be addressed through policy by the obvious mechanism of greatly increasing federal AI science budgets that flow to universities. The federal government can also fund the creation of a national research cloud that provides compute and data access to a wide array of academic researchers. The newly formed Institute for Human-Centered AI at Stanford, where I serve as associate director, is championing this idea. The institute itself might be a model for other innovations that can shape the development of AI for the better—gathering together scholars from across the entirety of the university, training the next generation of AI scientists as well as policymakers, and reaching out to industry, civil society, and government to convene and educate. To take stock of the social transformations wrought by AI, we need research far beyond the precincts of computer science departments.

Still better, we need to move beyond studying AI’s effects in the world only after new technology has already been created and deployed. We can change the norms of AI science by bringing talent from across the university into AI labs, as it were, turning frontier research into an interdisciplinary collaboration. In this spirit, though universities will never match the compensation packages industry offers AI talent, they do offer settings for research and collaboration with scholars across fields and professional schools that few companies, if any, can match. The emerging study of AI ethics provides just one example. The experience of leading scholars such as Timnit Gebru at Google provides a cautionary tale for the fate of interdisciplinary research that runs afoul of corporate control. Universities provide academic freedom; companies do not.

Learning to govern AI before it governs us is one of the most important tasks of the twenty-first century. We cannot leave that task to AI scientists alone, least of all to those who answer to corporate leaders rather than to academic norms that stand fully apart from the marketplace. That is the trend we must reverse.