The Robots Are Coming
Ethics, Politics, and Society in the Age of Artificial Intelligence
March 9, 2020
We cannot leave decisions about AI in the hands of those who stand to profit from its use.
Editors’ Note: The philosopher Kenneth A. Taylor passed away suddenly in December 2019. Boston Review is proud to publish this essay, which grows out of talks Ken gave throughout 2019, in collaboration with his estate. Preceding it is an introductory note by Ken’s colleague, John Perry.
In memoriam Ken Taylor
On December 2, 2019, a few weeks after his sixty-fifth birthday, Ken Taylor announced to all of his Facebook friends that the book he had been working on for years, Referring to the World, “finally existed in an almost complete draft.” That same day, while at home in the evening, Ken died suddenly and unexpectedly. He is survived by his wife, Claire Yoshida; son, Kiyoshi Taylor; parents, Sam and Seretha Taylor; brother, Daniel; and sister, Diane.
Ken was an extraordinary individual. He truly was larger than life. Whatever the task at hand—whether it was explaining some point in the philosophy of language, coaching Kiyoshi’s little league team, chairing the Stanford Philosophy department and its Symbolic Systems Program, debating at Stanford’s Academic Senate, or serving as president of the Pacific Division of the American Philosophical Association (APA)—Ken went at it with ferocious energy. He put incredible effort into teaching. He was one of the last Stanford professors to always wear a tie when he taught, to show his respect for the students who make it possible for philosophers to earn a living doing what we like to do. His death leaves a huge gap in the lives of his family, his friends, his colleagues, and the Stanford community.
Ken went to college at Notre Dame. He entered the School of Engineering, but it didn’t quite satisfy his interests so he shifted to the Program of Liberal Studies and became its first African American graduate. Ken came from a religious family, and never lost interest in the questions with which religion deals. But by the time he graduated he had become a naturalistic philosopher; his senior essay was on Kant and Darwin.
Ken was clearly very much the same person at Notre Dame that we knew much later. Here is a memory from Katherine Tillman, a professor in the Program of Liberal Studies:
This is how I remember our beloved and brilliant Ken Taylor: always with his hand up in class, always with that curious, questioning look on his face. He would shift a little in his chair and make a stab at what was on his mind to say. Then he would formulate it several more times in questions, one after the other, until he felt he got it just right. And he would listen hard, to his classmates, to his teachers, to whomever could shed some light on what it was he wanted to know. He wouldn’t give up, though he might lean back in his chair, fold his arms, and continue with that perplexed look on his face. He would ask questions about everything. Requiescat in pace.
From Notre Dame Taylor went to the University of Chicago; there his interests solidified in the philosophy of language. His dissertation was on reference, the theory of how words refer to things in the world; his advisor was the philosopher of language Leonard Linsky. We managed to lure Taylor to Stanford in 1995, after stops at Middlebury, the University of North Carolina, Wesleyan, the University of Maryland, and Rutgers.
In 2004 Taylor and I launched the public radio program Philosophy Talk, billed as “the program that questions everything—except your intelligence.” The theme song is “Nice Work if You Can Get It,” which expresses the way Ken and I both felt about philosophy. The program dealt with all sorts of topics. We found ourselves reading up on every philosopher we discussed—from Plato to Sartre to Rawls—and on every topic with a philosophical dimension, from terrorism and misogyny to democracy and genetic engineering. I grew pretty tired of this after a few years. I had learned all I wanted to know about important philosophers and topics. I couldn’t wait after each Sunday’s show to get back to my world: the philosophy of language and mind. But Ken seemed to love it more and more with each passing year. He loved to think; he loved forming opinions, theories, hypotheses and criticisms on every possible topic; and he loved talking about them with the parade of distinguished guests that joined us.
Until the turn of the century Ken’s publications lay pretty solidly in the philosophy of language and mind and closely related areas. But later we begin to find things like “How to Vanquish the Still Lingering Shadow of God” and “How to Hume a Hegel-Kant: A Program for the Naturalization of Normative Consciousness.” Normativity—the connection between reason, duty, and life—is a somewhat more basic issue in philosophy than proper names. By the time of his 2017 APA presidential address, “Charting the Landscape of Reason,” it seemed to me that Ken had clearly gone far beyond issues of reference, and not only on Sunday morning for Philosophy Talk. He had found a broader and more natural home for his active, searching, and creative mind. He had become a philosopher who had interesting things to say not only about the most basic issues in our field but all sorts of wider concerns. His Facebook page included a steady stream of thoughtful short essays on social, political, and economic issues. As the essay below shows, he could bring philosophy, cognitive science, and common sense to bear on such issues, and wasn’t afraid to make radical suggestions.
Some of us are now finishing the references and preparing an index for Referring to the World, to be published by Oxford University Press. His next book was to be The Natural History of Normativity. He died as he was consolidating the results of thirty-five years of exciting productive thinking on reference, and beginning what should have been many, many more productive and exciting years spent illuminating reason and normativity, interpreting the great philosophers of the past, and using his wisdom to shed light on social issues—from robots to all sorts of other things.
His loss was not just the loss of a family member, friend, mentor and colleague to those who knew him, but the loss, for the whole world, of what would have been an illuminating and important body of philosophical and practical thinking. His powerful and humane intellect will be sorely missed.
Among the works of man, which human life is rightly employed in perfecting and beautifying, the first in importance surely is man himself. Supposing it were possible to get houses built, corn grown, battles fought, causes tried, and even churches erected and prayers said, by machinery—by automatons in human form—it would be a considerable loss to exchange for these automatons even the men and women who at present inhabit the more civilized parts of the world, and who assuredly are but starved specimens of what nature can and will produce. Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.
—John Stuart Mill, On Liberty (1859)
Some believe that we are on the cusp of a new age. The day is coming when practically anything that a human can do—at least anything that the labor market is willing to pay a human being a decent wage to do—will soon be doable more efficiently and cost effectively by some AI-driven automated device. If and when that day does arrive, those who own the means of production will feel ever increasing pressure to discard human workers in favor of an artificially intelligent work force. They are likely to do so as unhesitatingly as they have always set aside outmoded technology in the past.
We are very unlikely to be inundated anytime soon with a race of thinking robots—at least not if we mean by “thinking” that peculiar thing that we humans do, done in precisely the way that we humans do it.
To be sure, technology has disrupted labor markets before. But until now, even the most far reaching of those disruptions have been relatively easy to adjust to and manage. That is because new technologies have heretofore tended to displace workers from old jobs that either no longer needed to be done—or at least no longer needed to be done by humans—into either entirely new jobs that were created by the new technology, or into old jobs for which the new technology, directly or indirectly, caused increased demand.
This time things may be radically different. Thanks primarily to AI’s presumed potential to equal or surpass every human cognitive achievement or capacity, it may be that many humans will be driven out of the labor market altogether.
Yet it is not necessarily time to panic. Skepticism about the impact of AI is surely warranted on inductive grounds alone. Way back in 1956, at the Dartmouth Summer Research Project on Artificial Intelligence, an event that launched the first AI revolution, the assembled gaggle of AI pioneers—all ten of them—breathlessly anticipated that the mystery of fully general artificial intelligence could be solved within a couple of decades at most. In 1961, Marvin Minsky, for example, was confidently proclaiming, “We are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines.” Well over a half century later, we are still waiting for the revolution to be fully achieved.
AI has come a long way since those early days: it is now a very big deal. It is a major focus of academic research, and not just among computer scientists. Linguists, psychologists, the legal establishment, the medical establishment, and a whole host of others have gotten into the act in a very big way. AI may soon be talking to us in flawless and idiomatic English, counseling us on fundamental life choices, deciding who gets imprisoned for how long, and diagnosing our most debilitating diseases. AI is also big business. The worldwide investment in AI technology, which stood at something like $12 billion in 2018, will top $200 billion by 2025. Governments are hopping on the AI bandwagon. The Chinese envision the development of a trillion-dollar domestic AI industry in the relatively near term. They clearly believe that the nation that dominates AI will dominate the world. And yet, a sober look at the current state of AI suggests that its promise and potential may still be a tad oversold.
Excessive hype is not confined to the distant past. One reason for my own skepticism is the fact that in recent years the AI landscape has come to be progressively more dominated by AI of the newfangled “deep learning” variety, rather than by AI of the more or less passé “logic-based symbolic processing” variety—affectionately known in some quarters, and derisively known in others, as GOFAI (Good Old-Fashioned Artificial Intelligence).
It was mostly logic-based, symbolic processing GOFAI that so fired the imaginations of the founders of AI back in 1956. Admittedly, to the extent that you measure success by where time, money, and intellectual energy are currently being invested, GOFAI looks to be something of a dead letter. I don’t want to rehash the once hot theoretical and philosophical debates over which approach to AI—logic-based symbolic processing, or neural nets and deep learning—is the more intellectually satisfying approach. Especially back in the ’80s and ’90s, those debates raged with what passes in the academic domain as white-hot intensity. They no longer do, but not because they were decisively settled in favor of deep learning and neural nets more generally. It’s more that machine learning approaches, mostly in the form of deep learning, have recently achieved many impressive results. Of course, these successes may not be due entirely to the anti-GOFAI character of these approaches. Even GOFAI has gotten into the machine learning act with, for example, Bayesian networks. The more relevant divide may be between probabilistic approaches of various sorts and logic-based approaches.
It is important to distinguish AI-as-engineering from AI-as-cognitive-science. The former is where the real money turns out to be.
However exactly you divide up the AI landscape, it is important to distinguish what I call AI-as-engineering from what I call AI-as-cognitive-science. AI-as-engineering isn’t particularly concerned with mimicking the precise way in which the human mind-brain does distinctively human things. The strategy of engineering machines that do things that are in some sense intelligent, even if they do what they do in their own way, is a perfectly fine way to pursue artificial intelligence. AI-as-cognitive science, on the other hand, takes as its primary goal that of understanding and perhaps reverse engineering the human mind. AI pretty much began its life by being in this business, perhaps because human intelligence was the only robust model of intelligence it had to work with. But these days, AI-as-engineering is where the real money turns out to be.
Though there is certainly value in AI-as-engineering, I confess to still having a hankering for AI-as-cognitive science. And that explains why I myself still feel the pull of the old logic-based symbolic processing approach. Whatever its failings, GOFAI had as one among its primary goals that of reverse engineering the human mind. Many decades later, though we have definitely made some progress, we still haven’t gotten all that far with that particular endeavor. When it comes to that daunting task, just about all the newfangled probability and statistics-based approaches to AI—most especially deep learning, but even approaches that have more in common with GOFAI like Bayesian Nets—strike me as if not exactly nonstarters, then at best only a very small part of the truth. Probably the complete answer will involve some synthesis of older approaches and newer approaches and perhaps even approaches we haven’t even thought of yet. Unfortunately, however, although there are a few voices starting to sing such an ecumenical tune, neither ecumenicalism nor intellectual modesty is exactly the rage these days.
• • •
Back when the competition over competing AI paradigms was still a matter of intense theoretical and philosophical dispute, one of the advantages often claimed on behalf of artificial neural nets over logic-based symbolic approaches was that the former but not the latter were directly neuronally inspired. By directly modeling its computational atoms and computational networks on neurons and their interconnections, the thought went, artificial neural nets were bound to be truer to how the actual human brain does its computing than its logic-based symbolic processing competitor could ever hope to be.
Long before the singularity looms even on some distant horizon, the sort of AI technology that AI-as-engineering is likely to give us already has the potential to wreak considerable havoc on the human world.
This is not the occasion to debate such claims at length. My own hunch is that there is little reason to believe that deep learning actually holds the key to finally unlocking the mystery of general purpose, humanlike intelligence. Despite being neuronally inspired, many of the most notable successes of the deep learning paradigm depend crucially on the ability of deep learning architectures to do something that the human brain isn’t all that good at: extracting highly predictive, though not necessarily deeply explanatory, patterns on the basis of being trained up, via either supervised or unsupervised learning, on huge data sets consisting, from the machine’s-eye point of view, of a plethora of weakly correlated feature bundles, without the aid of any top-down direction or built-in worldly knowledge. That is an extraordinarily valuable and computationally powerful technique for AI-as-engineering. And it is perfectly suited to the age of massive data, since the successes of deep learning wouldn’t be possible without big data.
It’s not that we humans are pikers at pattern extraction. As a species, we do remarkably well at it, in fact. But I doubt that the capacity for statistical analysis of huge data sets is the core competence on which all other aspects of human cognition are ultimately built. But here’s the thing. Once you’ve invented a really cool new hammer—which deep learning very much is—it’s a very natural human tendency to start looking for nails to hammer everywhere. Once you are on the lookout for nails everywhere, you can expect to find a lot more of them than you might have at first thought, and you are apt to find some of them in some pretty surprising places.
But if it’s really AI-as-cognitive science that you are interested in, it’s important not to lose sight of the fact that it may take a bit more than our cool new deep learning hammer to build a humanlike mind. You can’t let your obsession with your cool new hammer make you lose sight of the fact that in some domains, the human mind seems to deploy quite a different trick from the main sorts of tricks that are at the core not only of deep learning but also other statistical paradigms (some of which, again, are card-carrying members of the GOFAI family). In particular, the human mind is often able to learn quite a lot from relatively little and comparatively impoverished data. This remarkable fact has led some to conjecture that the human mind must come antecedently equipped with a great deal of endogenous, special purpose, task specific cognitive structure and content. If true, that alone would suffice to make the human mind rather unlike your typical deep learning architecture.
Indeed, deep learning takes quite the opposite approach. A deep learning network may be trained up to represent words, say, as points in a micro-featural vector space of, say, three hundred dimensions, and on the basis of such representations, it might learn, after many epochs of training on a really huge data set, to make the sort of pragmatic inferences—from say, “John ate some of the cake” to “John did not eat all of the cake”—that humans make quickly, easily and naturally, without a lot of focused training of the sort required by deep learning and similar such approaches. The point is that deep learning can learn to do various cool things—things that one might once have thought only human beings can do—and although they can do some of those things quite well, it still seems highly unlikely that they do those cool things in precisely the way that we humans do.
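To make the kind of representation at issue a little more concrete, here is a minimal sketch of words treated as points in a vector space, with semantic relatedness cashed out as geometric proximity. The vectors, the tiny dimensionality, and the word list are all invented for illustration; a real deep learning model would learn hundreds of dimensions from enormous corpora.

```python
import numpy as np

# Toy "word embeddings": invented three-dimensional vectors standing in
# for the learned, roughly 300-dimensional representations described above.
embeddings = {
    "some": np.array([0.8, 0.1, 0.3]),
    "all":  np.array([0.7, 0.2, 0.4]),
    "cake": np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(u, v):
    """Relatedness as the cosine of the angle between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Geometric proximity stands in for semantic relatedness: in this made-up
# space, "some" sits much closer to "all" than to "cake".
print(cosine_similarity(embeddings["some"], embeddings["all"]))
print(cosine_similarity(embeddings["some"], embeddings["cake"]))
```

Everything such a network “knows” about a word is encoded in coordinates of this kind, and an inference like the one from “some” to “not all” has to be coaxed out of that geometry by massive training rather than supplied by built-in worldly knowledge.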
• • •
I stress again, though, that if you are not primarily interested in AI-as-cognitive science, but solely in AI-as-engineering, you are free to care not one whit whether deep learning architectures and their cousins hold the ultimate key to understanding human cognition in all its manifestations. You are free to embrace and exploit the fact that such architectures are not just good, but extraordinarily good, at what they do, at least when they are given large enough data sets to work with. Still, in thinking about the future of AI, especially in light of both our darkest dystopian nightmares and our brightest utopian dreams, it really does matter whether we are envisioning a future shaped by AI-as-engineering or AI-as-cognitive-science. If I am right that there are many mysteries about the human mind that currently dominant approaches to AI are ill-equipped to help us solve, then to the extent that such approaches continue to dominate AI into the future, we are very unlikely to be inundated anytime soon with a race of thinking robots—at least not if we mean by “thinking” that peculiar thing that we humans do, done in precisely the way that we humans do it.
Once you’ve invented a new hammer—which deep learning very much is—it’s a very natural human tendency to start looking for nails to hammer everywhere.
Deep learning and its cousins may do what they do better than we could possibly do what they do. But that doesn’t imply that they do what we do better than we do what we do. If so, then, at the very least, we needn’t fear, at least not yet, that AI will radically outpace humans in our most characteristically human modes of cognition. Nor should we expect the imminent arrival of the so-called singularity in which human intelligence and machine intelligence somehow merge to create a super intelligence that surpasses the limits of each. Given that we still haven’t managed to understand the full bag of tricks our amazing minds deploy, we haven’t the slightest clue as to what such a merger would even plausibly consist in.
Nonetheless, it would still be a major mistake to lapse into a false sense of security about the potential impact of AI on the human world. Even if current AI is far from being the holy grail of a science of mind that finally allows us to reverse engineer it, it will still allow us to engineer extraordinarily powerful cognitive networks, as I will call them, in which human intelligence and artificial intelligence of some kind or other play quite distinctive roles. Even if we never achieve a single further breakthrough in AI-as-cognitive-science, from this day forward, for as long as our species endures, the task of managing what I will call the division of cognitive labor between human and artificial intelligence within engineered cognitive networks will be with us to stay. And it will almost certainly be a rather fraught and urgent matter. And this will be thanks in large measure to the power of AI-as-engineering rather than to the power of AI-as-cognitive-science.
Indeed, there is a distinct possibility that AI-as-engineering may eventually reduce the role of human cognitive labor within future cognitive networks to the bare minimum. It is that possibility—not the possibility of the so-called singularity or the possibility that we will soon be surrounded by a race of free, autonomous, creative, or conscious robots, chafing at our undeserved dominance over them—that should now and for the foreseeable future worry us most. Long before the singularity looms even on some distant horizon, the sort of AI technology that AI-as-engineering is likely to give us already has the potential to wreak considerable havoc on the human world. It will not necessarily do so by superseding human intelligence, but simply by displacing a great deal of it within various engineered cognitive networks. And if that’s right, it simply won’t take the arrival of anything close to full-scale super AI, as we might call it, to radically disrupt, for good or for ill, the built cognitive world.
Start with the fact that much of the cognitive work that humans are currently tasked to do within extant cognitive networks doesn’t come close to requiring the full range of human cognitive capacities to begin with. A human mind is an awesome cognitive instrument, one of the most powerful instruments that nature has seen fit to evolve. (At least on our own lovely little planet! Who knows what sorts of minds evolution has managed to design on the millions upon millions of mind-infested planets that must be out there somewhere?) But stop and ask yourself, how much of the cognitive power of her amazing human mind does a coffee house barista, say, really use in her daily work?
Not much, I would wager. And precisely for that reason, it’s not hard to imagine coffee houses of the future in which more and more of the cognitive labor that needs doing within them is done by AI systems finely tuned to the cognitive loads they will need to carry within such cognitive networks. More generally, it is abundantly clear that much of the cognitive labor that needs doing within our total cognitive economy that now happens to be performed by humans is cognitive labor for which we humans are often vastly overqualified. It would be hard to lament the off-loading of such cognitive labor onto AI technology.
Even if we never achieve a single further breakthrough in AI-as-cognitive-science, from this day forward, for as long as our species endures, the task of managing the division of cognitive labor between human and artificial intelligence will be with us to stay.
But there is also a flip side. The twenty-first century economy is already a highly data-driven economy. It is likely to become a great deal more so, thanks—among other things—to the emergence of the internet of things. The built environment will soon be even more replete with so-called “smart” devices. And these smart devices will constantly be collecting, analyzing and sharing reams and reams of data on every human being who interacts with them. It will not be just the usual suspects, like our computers, smart phones or smart watches, that are so engaged. It will be our cars, our refrigerators, indeed every system or appliance in every building in the world. There will be data-collecting monitors of every sort—heart monitors, sleep monitors, baby monitors. There will be smart roads, smart train tracks. There will be smart bridges that constantly monitor their own state and automatically alert the transportation department when they need repair. Perhaps they will shut themselves down and spontaneously reroute traffic while they are waiting for the repair crews to arrive. It will require an extraordinary amount of cognitive labor to keep such a built environment running smoothly. And for much of that cognitive labor, we humans are vastly underqualified. Try, for example, running a data mining operation using nothing but human brain power. You’ll see pretty quickly that human brains are not at all the right tool for the job, I would wager.
• • •
Perhaps what should really worry us, I am suggesting, is the possibility that the combination of our overqualification for certain cognitive labor and underqualification for other cognitive labor will leave us open to something of an AI pincer attack. AI-as-engineering may give us the power to design cognitive networks in which each node is exquisitely fine-tuned to the cognitive load it is tasked to carry. Since distinctively human intelligence will often be either too much or too little for the task at hand, future cognitive networks may assign very little cognitive labor to humans. And that is precisely how it might come about that the demand for human cognitive labor within the overall economy may be substantially diminished. How should we think about the advance of AI in light of its capacity to allow us to re-imagine and re-engineer our cognitive networks in this way? That is the question I address in the remainder of this essay.
There may be lessons to be learned from the ways that we have coped with disruptive technological innovations of the past. So perhaps we should begin by looking backward rather than forward. The first thing to say is that many innovations of the past are now widely seen as good things, at least on balance. They often spared humans work that paid dead-end wages, or work that was dirty and dangerous, or work that was the source of mind-numbing drudgery.
What should really worry us is the possibility that the combination of our overqualification for certain cognitive labor and underqualification for other will leave us open to something of an AI pincer attack.
But we should be careful not to overstate the case for the liberating power of new technology, lest that lure us into a misguided complacency about what is to come. Even looking backward, we can see that new and disruptive technologies have sometimes been the culprit in increasing rather than decreasing the drudgery and oppressiveness of work. They have also served to rob work of a sense of meaning and purpose. The assembly line is perhaps the prime example. The rise of the assembly line doubtlessly played a vital role in making the mass production and distribution of all manner of goods possible. It made the factory worker vastly more productive than, say, the craftsman of old. In so doing, it increased the market for mass produced goods, while simultaneously diminishing the market for the craftsman’s handcrafted goods. As such, it played a major role in increasing living standards for many. But it also had the downside effect of turning many human agents into mere appendages within a vast, impersonal and relentless mechanism of production.
All things considered, it would be hard to deny that trading in skilled craftsmanship for unskilled or semiskilled factory labor was a good thing. I do not intend to relitigate that choice here. But it is worth asking whether all things really were considered—and considered not just by those who owned the means of production but collectively by all the relevant stakeholders. I am no historian of political economy. But I venture the conjecture that the answer to that question is a resounding no. More likely than not, disruptive technological change was simply foisted on society as a whole, primarily by those who owned and controlled the means of production, and primarily to serve their own profit, with little, if any intentionality or democratic deliberation and participation on the part of a broader range of stakeholders.
Given the disruptive potential even of AI-as-engineering, we cannot afford to leave decisions about the future development and deployment of even this sort of AI solely in the hands of those who stand to make vast profits from its use. This time around, we have to find a way to ensure that all relevant stakeholders are involved and that we are more intentional and deliberative in our decision making than we were about the disruptive technologies of the past.
I am not necessarily advocating the sort of socialism that would require the means of production to be collectively owned or regulated. But even if we aren’t willing to go so far as collectively seizing the machines, as it were, we must get past the point of treating not just AI but all technology as a thing unto itself, with a life of its own, whose development and deployment is entirely independent of our collective will. Technology is never self-developing or self-deploying. Technology is always and only developed and deployed by humans, in various political, social, and economic contexts. Ultimately, it is and must be entirely up to us, and up to us collectively, whether, how, and to what end it is developed and deployed. As soon as we lose sight of the fact that it is up to us collectively to determine whether AI is to be developed and deployed in a way that enhances the human world rather than diminishes it, it is all too easy to give in to either utopian cheerleading or dystopian fear mongering. We need to discipline ourselves not to give in to either prematurely. Only such discipline will afford us the space to consider various tradeoffs deliberatively, reflectively and intentionally.
We should be careful not to overstate the case for the liberating power of new technology, lest that lure us into a misguided complacency about what is to come.
Utopian cheerleaders for AI often blithely insist that it is more likely to decrease rather than increase the amount of dirt, danger, or drudgery to which human workers are subject. As long as AI is not turned against us—and why should we think that it would be?—it will not eliminate the work for which we humans are best suited, but only the work that would be better left to machines in the first place.
I do not mean to dismiss this as an entirely unreasonable thought. Think of coal mining. Time was when coal mining was extraordinarily dangerous and dirty work. Over 100,000 coal miners died in mining accidents in the U.S. alone during the twentieth century—not to mention the amount of black lung disease they suffered. Thanks largely to automation and computer technology, including robotics and AI technology, your average twenty-first-century coal industry worker relies a lot more on his or her brains than on mere brawn and is subject to a lot less danger and dirt than earlier generations of coal miners were. Moreover, it takes a lot fewer coal miners to extract more coal than the coal miners of old could possibly hope to extract.
To be sure, thanks to certain other forces having nothing to do with the AI revolution, the number of people dedicated to extracting coal from the earth will likely diminish even further in the relatively near term. But that just goes to show that even if we could manage to tame AI’s effect on the future of human work, we’ve still got plenty of other disruptive challenges to face as we begin to re-imagine and re-engineer the made human world. But that just gives us even more reason to be intentional, reflective, and deliberative in thinking about the development and deployment of new technologies. Whatever one technology can do on its own to disrupt the human world, the interactive effects of multiple apparently independent technologies can greatly amplify the total level of disruption to which we may be subject.
I suppose that, if we had to choose, utopian cheerleading would at least feel more satisfying and uplifting than dystopian fear mongering. But we shouldn’t let whatever utopian buzz we fall into while contemplating the future blind us to the fact that AI is very likely to transform—perhaps radically—our collective intuitive sense of where the boundary between work better consigned to machines and work best left to us humans should fall in the first place. The point is that that boundary is likely to be drawn, erased, and redrawn by the progress of AI. And as our conception of the proper boundary evolves, our conception of what we humans are here for is likely to evolve right along with it.
The upshot is clear. If it is only relative to our sense of where the boundary is properly drawn that we could possibly know whether to embrace or recoil from the future, then we are now currently in no position to judge on behalf of our future selves which outcomes are to be embraced and which are to be feared. Nor, perhaps, are we entitled to insist that our current sense of where the boundary should be drawn should remain fixed for all time and circumstances.
• • •
To drive this last point home, it will help to consider three different cognitive networks in which AI already plays, or soon can be expected to play, a significant role: the air traffic control system, the medical diagnostic and treatment system, and what I’ll call the ground traffic control system. My goal in so doing is to examine some subtle ways in which our sense of proper boundaries may shift.
We cannot afford to leave decisions about the future development and deployment even of AI-as-engineering solely in the hands of those who stand to make vast profits from its use.
Begin with the air traffic control system, one of the more developed systems in which brain power and computer power have been jointly engineered to cooperate in systematically discharging a variety of complex cognitive burdens. The system has steadily evolved over many decades into a system in which a surprising amount of cognitive work is done by software rather than humans. To be sure, there are still many humans involved. Human pilots sit in every cockpit and human brains monitor every air traffic control panel. But it is fair to say that humans, especially human pilots, no longer really fly airplanes on their own within this vast cognitive network. It’s really the system as a whole that does the flying. Indeed, it’s only on certain occasions, and on an as needed basis, that the human beings within the system are called upon to do anything at all. Otherwise, they are mostly along for the ride.
This particular human-computer cognitive network works extremely well for the most part. It is extraordinarily safe in comparison with travel by automobile. And it is getting safer all the time. Its ever-increasing safety would seem to be in large measure due to the fact that more and more of the cognitive labor done within the system is being offloaded onto machine intelligence and taken away from human intelligence. Indeed, I would hazard the guess that almost no increases in safety have resulted from taking burdens away from algorithms and machines and giving them to humans instead.
To be sure, this trend started long before AI had reached anything like its current level of sophistication. But with the coming of age of AI-as-engineering you can expect that the trend will only accelerate. For example, starting in the 1970s, decades of effort went into building human-designed rules meant to provide guidance to pilots as to which maneuvers executed in which order would enable them to avoid any possible or pending mid-air collision. In more recent years, engineers have been using AI techniques to help design a new collision avoidance system that will make possible a significant increase in air safety. The secret to the new system is that instead of leaving the discovery of optimal rules of the airways to human ingenuity, the problem has been turned over to the machines. The new system uses computational techniques to derive an optimized decision logic that better deals with various sources of uncertainty and better balances competing system objectives than anything that we humans would be likely to think up on our own. The new system, called Airborne Collision Avoidance System (ACAS) X, promises to pay considerable dividends by reducing both the risks of mid-air collision and the need for alerts that call for corrective maneuvers in the first place.
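To give a rough, purely illustrative sense of what turning such a problem over to the machines can mean, the sketch below derives a decision policy by optimization (value iteration over a tiny made-up model) rather than by hand-writing rules. Every state, action, cost, and transition probability in it is invented for illustration and bears no relation to the actual ACAS X logic, which is vastly more sophisticated.

```python
# A toy illustration of deriving "decision logic" by optimization rather
# than by hand-written rules. All states, actions, costs, and transition
# probabilities are invented for illustration only.
states = ["safe", "converging", "close"]
actions = ["do_nothing", "alert_climb"]

# transitions[state][action] = list of (next_state, probability)
transitions = {
    "safe":       {"do_nothing":  [("safe", 0.95), ("converging", 0.05)],
                   "alert_climb": [("safe", 1.0)]},
    "converging": {"do_nothing":  [("safe", 0.3), ("converging", 0.4), ("close", 0.3)],
                   "alert_climb": [("safe", 0.8), ("converging", 0.2)]},
    "close":      {"do_nothing":  [("close", 0.6), ("collision", 0.4)],
                   "alert_climb": [("safe", 0.5), ("close", 0.5)]},
}

# Costs encode competing objectives: avoid collisions above all, but also
# avoid issuing alerts that are not needed.
action_cost = {"do_nothing": 0.0, "alert_climb": 1.0}
COLLISION_COST = 1000.0

def optimize_policy(gamma=0.95, iters=500):
    """Value iteration: estimate expected costs, then read off the best action per state."""
    V = {s: 0.0 for s in states}
    V["collision"] = COLLISION_COST  # absorbing bad outcome

    def q(s, a):  # expected discounted cost of taking action a in state s
        return action_cost[a] + gamma * sum(p * V[s2] for s2, p in transitions[s][a])

    for _ in range(iters):
        for s in states:
            V[s] = min(q(s, a) for a in actions)
    return {s: min(actions, key=lambda a: q(s, a)) for s in states}

# Which encounters warrant an alert under the optimized logic?
print(optimize_policy())
```

Nothing here is meant to stand in for the real system; the point is only that the "rules of the airways" fall out of an optimization over uncertainty and competing costs rather than being written down maneuver by maneuver.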
In all likelihood, the system will not be foolproof—probably no system will ever be. But in comparison with automobile travel, air travel is already extraordinarily safe. It’s not because the physics makes flying inherently safer than driving. Indeed, there was a time when flying was much riskier than it currently is. What makes air travel so much safer is primarily the differences between the cognitive networks within which each operates. In the ground traffic control system, almost none of the cognitive labor has been off loaded onto intelligent machines. Within the air traffic control system, a great deal of it has.
To be sure, every now and then, the flight system will call on a human pilot to execute a certain maneuver. When it does, the system typically isn’t asking for anything like expert opinion from the human. Though it may sometimes need to do that, in the course of its routine, day-to-day operations, the system relies hardly at all on the ingenuity or intuition of human beings, including human pilots. When the system does need a human pilot to do something, it usually just needs the human to expertly execute a particular sequence of maneuvers. Mostly things go right. Mostly the humans do what they are asked to do, when they are asked to do it. But it should come as no surprise that when things do go wrong, it is quite often the humans and not the machines that are at fault. Humans too often fail to respond, or they respond with the wrong maneuver, or they execute the needed maneuver but in an untimely fashion.
Utopian buzz may serve to blind us to the fact that AI is very likely to transform—perhaps radically—our collective intuitive sense of where the boundary between work better consigned to machines and work best left to us humans should fall.
I have focused on the air traffic control system because it is a relatively mature and stable cognitive network in which a robust balance between human and machine cognitive labor has been achieved over time. Given its robustness and stability and the degree of safety it provides, it’s pretty hard to imagine anyone having any degree of nostalgia for the days when that task of navigating the airways fell more squarely on the shoulders of human beings and less squarely on machines. On the other hand, it is not at all hard to imagine a future in which the cognitive role of humans is reduced even further, if not entirely eliminated. No one would now dream of traveling on an airplane that wasn’t furnished with the latest radar system or the latest collision avoidance software. Perhaps the day will soon come when no one would dream of traveling on an airplane piloted by, of all things, a human being rather than by a robotic AI pilot.
I suspect that what is true of the air traffic control system may eventually be true of many of the cognitive networks in which human and machine intelligence systematically interact. We may find that the cognitive labor that was once assigned to the human nodes has been given over to intelligent machines for narrow economic reasons alone—especially if we fail to engage in collective decision making that is intentional, deliberative, and reflective and thereby leave ourselves to the mercy of the short-term economic interests of those who currently own and control the means of production.
We may comfort ourselves that even in such an eventuality, that which is left to us humans will be cognitive work of very high value, finely suited to the distinctive capacities of human beings. But I do not know what would now assure us of the inevitability of such an outcome. Indeed, it may turn out that there isn’t really all that much that needs doing within such networks that is best done by human brains at all. It may be, for example, that within most engineered cognitive networks, the human brains that still have a place within them will mostly be along for the ride. Both possibilities are, I think, genuinely live options. And if I had to place a bet, I would bet that for the foreseeable future the total landscape of engineered cognitive networks will increasingly contain engineered networks of both kinds.
In fact, the two systems I mentioned earlier—the medical diagnostic and treatment system and the ground transportation system—already provide evidence of my conjecture. Start with the medical diagnostic and treatment system. Note that a great deal of medical diagnosis involves expertise at interpreting the results of various forms of medical imaging. As things currently stand, it is mostly human beings that do the interpreting. But an impressive variety of machine learning algorithms that can do at least as well as humans are being developed at a rapid pace. For example, CheXNet, developed at Stanford, promises to equal or exceed the performance of human radiologists in the diagnosis of a wide variety of different diseases from X-ray scans. Partly because of the success of CheXNet and other machine learning algorithms, Geoffrey Hinton, the founding father of deep learning, has come to regard radiologists as an endangered species. On his view, medical schools ought to stop training radiologists beginning right now.
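For a sense of the general recipe behind systems of this kind, here is a minimal sketch, not the published CheXNet model: a standard convolutional network (a DenseNet-121, the same backbone CheXNet used) with its final layer swapped out so that it predicts multiple findings per scan. The label count, the dummy data, and the training settings are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Placeholder label count: one output per candidate finding (an assumption,
# not the published CheXNet configuration).
NUM_FINDINGS = 14

# Standard convolutional backbone with its final layer replaced so that it
# outputs one score per finding rather than ImageNet classes.
model = models.densenet121()
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)

criterion = nn.BCEWithLogitsLoss()  # multi-label: findings can co-occur on one scan
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch standing in for X-ray images.
images = torch.randn(8, 3, 224, 224)                     # 8 fake 224x224 scans
labels = torch.randint(0, 2, (8, NUM_FINDINGS)).float()  # fake per-finding labels

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"toy training-step loss: {loss.item():.3f}")
```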
Even if Hinton is right, that doesn’t mean that all the cognitive work done by the medical diagnostic and treatment system will soon be done by intelligent machines. Though human-centered radiology may soon come to seem quaint and outmoded, there is, I think, no plausible short- to medium-term future in which human doctors are completely written out of the medical treatment and diagnostic system. For one thing, though the machines beat humans at diagnosis, we still outperform the machines when it comes to the treatment—perhaps because humans are much better at things like empathy than any AI system is now or is likely to be anytime soon. Still, even if the human doctors are never fully eliminated from the diagnostic and treatment cognitive network, it is likely that their enduring roles within such networks will evolve so much that human doctors of tomorrow will bear little resemblance to human doctors of today.
We must confront hard questions about what will and should become of both them and us as we welcome ever more of them into our midst.
By contrast, there is a quite plausible near- to medium-term future in which human beings within the ground traffic control system are gradually reduced to the status of passengers. Someday in the not terribly distant future, our automobiles, buses, trucks, and trains will likely be part of a highly interconnected ground transportation system in which much of the cognitive labor is done by intelligent machines rather than human brains. The system will involve smart vehicles in many different configurations, each loaded with advanced sensors that allow them to collect, analyze, and act on huge stores of data, in coordination with each other, the smart roadways on which they travel, and perhaps some centralized information hub that is constantly monitoring the whole. Within this system, our vehicles will navigate the roadways and railways safely and smoothly with very little guidance from humans. Humans will be able to direct the system to get this or that cargo or passenger from here to there. But the details will be left to the system to work out without much, if any, human intervention.
Such a development, if and when it comes to full fruition, will no doubt be accompanied by quantum leaps in safety and efficiency. But no doubt it would be a major source of a possibly permanent and steep decrease in the net demand for human labor of the sort that we referred to at the outset. All around the world, many millions of human beings make their living by driving things from one place to another. Labor of this sort has traditionally been rather secure. It cannot possibly be outsourced to foreign competitors. That is, you cannot transport beer, for example, from Colorado to Ohio by hiring a low-wage driver operating a truck in Beijing. But it may soon be the case that we can outsource such work after all. Not to foreign laborers but to intelligent machines, right here in our midst!
• • •
I end where I began. The robots are coming. Eventually, they may come for every one of us. Walls will not contain them. We cannot outrun them. Nor will running faster than the next human being suffice to save us from them. Not in the long run. They are relentless, never breaking pace, never stopping to savor their latest prey before moving on to the next.
If we cannot stop or reverse the robot invasion of the built human world, we must turn and face them. We must confront hard questions about what will and should become of both them and us as we welcome ever more of them into our midst. Should we seek to regulate their development and deployment? Should we accept the inevitability that we will lose much work to them? If so, perhaps we should rethink the very basis of our economy. Nor is it merely questions of money that we must face. There are also questions of meaning. What exactly will we do with ourselves if there is no longer any economic demand for human cognitive labor? How shall we find meaning and purpose in a world without work?
These are the sorts of questions that the robot invasion will force us to confront. It should be striking that these are also the questions presaged in my prescient epigraph from Mill. Over a century before the rise of AI, Mill realized that the most urgent question raised by the rise of automation would not be the question of whether automata could perform certain tasks faster or cheaper or more reliably than human beings might. Instead, the most urgent question is what we humans would become in the process of substituting machine labor for human labor. Would such a substitution enhance us or diminish us? That, in fact, has always been the most urgent question raised by disruptive technologies, though we have seldom recognized it.
This time around, may we face the urgent question head on. And may we do so collectively, deliberatively, reflectively, and intentionally.