and errors can happen to you and computers
’cause you are . . . a computer!

go and do it!
 program yourself!
just do it!

explore your toes, explore your nose,
explore everything you have goes
and if you don’t want to do that

you can’t even live

not even houses, ’cause houses are us

—Eleanor Auerbach (age four), “The Blah Blah Blah Song”

A few years after leaving Google, I had a daughter, and thus began another long-term engineering project—one that is still ongoing. Parents program their children, after all—and vice versa—and it was in those early months of parenting that my child—unable to make a facial expression, unable to express anything but varying levels of comfort or discomfort—seemed most like a machine. Her responses were, if not predictable, closely circumscribed. I imagined coding up a stochastic algorithm, one that relies partly on chance, to cause her to move her arms and legs jerkily, cry when hungry or uncomfortable, sleep nonstop, and nurse—not completely predictable, but rarely doing the wholly unexpected.

The stimulus-response cycle is out in the open with a child, at least initially, and the feedback loop created between parent and child is tight, controlled, and frequently comprehensible. I trained my child to know that certain behaviors would get her fed, put to sleep, hugged, rocked, burped, and entertained. And my child trained me, in turn, to respond to her cries with what she wanted. You come to an accommodation; both your systems have synchronized, at least roughly, for mutual benefit (though mostly, for hers).

So much of that behavior in infancy appears hard-coded, from crying to nursing to crawling to grabbing everything in sight, that I often felt like we were playing out a scripted pageant of upbringing that had been drawn up over many millennia and delivered to me through the telegrams of my DNA.

Yet programming is an iterative process. When I wrote software, I would code, test, and debug. After fixing a bug, I would recompile the program and start it again in its uncorrupted state, before the next bug emerged. The idea of initial conditions—the ability to restart as many times as you like—is integral to software development and to algorithms. An algorithmic recipe presumes a set of initial conditions and inputs. When an algorithm terminates, only the outputs remain; the algorithmic process itself comes to an end. Every time an algorithm runs, it starts afresh with new inputs. Colloquially, we can call this the reset button.
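
A minimal sketch in Python (my own illustration, not any particular program I wrote) of what the reset button means: the recipe starts fresh from its inputs every time it runs, and nothing survives between runs.

```python
# A minimal sketch of the "reset button": an algorithm takes its inputs,
# produces its outputs, and then is gone. Nothing carries over between runs.
def sort_names(names):
    """An algorithmic recipe: given inputs, return outputs, then terminate."""
    return sorted(names)

# Each run starts afresh from its initial conditions.
print(sort_names(["Eleanor", "Ada", "Grace"]))   # ['Ada', 'Eleanor', 'Grace']
print(sort_names(["worms", "noodles"]))          # ['noodles', 'worms'] -- no memory of the first run
```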

The scientific process depends on the reset button: the ability to conduct an experiment multiple times from identical starting conditions. In the absence of precisely identical starting conditions—whether in the study of distant stars or extremely rare circumstances or many varied human beings—the goal is that initial conditions are as close as possible in all relevant aspects.

But you cannot reset a human being. A child is not an algorithm. It is a persistent, evolving system. Software too is becoming a persistent system. Algorithms themselves may remain static, but they are increasingly acting on large, persistent systems that are now as important to computing as the algorithms themselves. The names of these systems include Google, Amazon, Facebook, and Twitter. These companies write software, but the products they create are systems or networks. While Microsoft had to carry over a fair amount of code from one version of Windows to the next to ensure backward compatibility, each version of Windows was a discrete program. Every time a user started up Windows, the memory of the computer was cleared and reassembled from scratch, based on the state that had been saved to disk. If Windows got into a strange state and stopped behaving well, I could reboot and, more often than not, the problem fixed itself. In the worst cases, I could reinstall Windows and have a completely fresh start.

That’s not possible with systems. Constituent pieces of Google’s search engine are replaced, rebooted, and subject to constant failures, but the overall system must be up all the time. There is no restarting from scratch. Google, Amazon, and Facebook are less valuable for their algorithms than for their state: the sum total of all the data the system contains and manipulates. None of these companies can clear out its systems and “start over,” algorithmically.

As with children, we don’t debug these networks; we educate them.

•••

In the first months of her life, I kept a spreadsheet of my daughter’s milestones. Hardware upgrades to her height and weight were ongoing, but I declared a new “version” whenever my wife and I deemed her sufficiently different to appear as though a software upgrade had been installed.

It was tempting to see these changes as upgrades because I wasn’t doing anything to trigger them. My daughter was just figuring it out on her own. Having spent two decades of our lives in front of computers, my wife and I weren’t used to seeing our “projects” alter their behavior without long and hard intervention. “Maintenance” was required (nutrition came in, waste went out), but there was no clear connection between these efforts and the changes taking place in our daughter.

The “upgrades,” however, became more difficult to track as my daughter’s skills expanded and her comprehension of the world around her developed. As she learned more sounds and began to experiment with using words to mean more than just “I want that!” I let go of the fantasy that “upgrades” of any sort were taking place at all, and I came to see her as a mysterious, ever-evolving network.

The leap from observational data to thought is one of the most amazing and incomprehensible processes in nature. Any parent will know how baffling it is to see this happening in stages. There are limits past which a child cannot go in understanding, until one day those limits mysteriously vanish, replaced by new and deeper ones. When, at two and a half, my daughter said, “Worms and noodles are related by long skinny things,” she lumped together two entities based on superficial appearance, but she hadn’t yet learned what a relation was.

Before long, she had learned to use logic to argue her position when she needed to. Sometimes it took the form of threats, particularly at bedtime: “If you don’t give me any milk, I’ll stay awake all night. Then you’ll never get any sleep and you’ll die sooner.”

And then, by three and a half, Eleanor was modeling our motives, and not always flatteringly, as when she said to her blanket, “Now I will raspberry you. You will not like it but I enjoy it and that is why we will do it.” At this point, she was able to determine that everyone around her had goals and that sometimes those goals conflicted with hers. She couldn’t necessarily determine others’ motivations, but she knew they were there.

Eventually, most children come to the same shared understanding that we all possess. But what remains a puzzle to me, and to researchers in general, is how children leap from superficial imitation and free association to reasoning. The brain grows and develops, forming new connections among its billions of neurons year after year—but no matter how much memory or processing power I add to my desktop server, it never gains any new reasoning capabilities.

A child is an ever-evolving network, and there are algorithms that guide her development, chief among them the workings of DNA. But those algorithms are the builders, not the building itself, and they are hidden from us. Some small clues to what is happening, however, may lie in thinking about what happens to software programs when we don’t shut them down and restart them, but let them linger on and evolve.

•••

Algorithmic systems such as Google, Facebook, Amazon, and Twitter are persistent networks that modify their behavior over time in response to how they are used. In essence, these systems rely on feedback: their outputs affect the environment in which they exist, and that environment—their users, as well as other algorithmic systems like them—provides new inputs that change the systems further.

Algorithms establish and maintain these systems, but they can’t predict how a system will behave at a given point in time. For that, one must know the ongoing state of the system. The result is an evolving ecosystem. Once a network is in play, evolving over time and never reset to its initial state, it gains a complex existence independent of the algorithms that produced it, just as our bodies and minds gain a complex existence independent of the DNA that spawned them. These independent systems are not coded. Rather, they are trained, and they learn. This means that these networks are not fundamentally algorithmic and they cannot be wholly reset, for to do so would be to return the system to its starting point of ignorance and inexperience.
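
A toy sketch of that feedback, assuming nothing about how any of these companies actually build their systems: a fixed update rule acting on state that is never thrown away, so the same question draws a different answer as the state accumulates.

```python
# Toy sketch, not any real system's code: the algorithm below never changes,
# but the accumulated state does, so the system's behavior evolves with use.
class PersistentSystem:
    def __init__(self):
        self.counts = {}                       # persistent state: everything seen so far

    def respond(self, query):
        seen = self.counts.get(query, 0)
        self.counts[query] = seen + 1          # the interaction feeds back into the state
        return f"'{query}' seen {seen} time(s) before"

system = PersistentSystem()
print(system.respond("worms"))   # seen 0 time(s) before
print(system.respond("worms"))   # seen 1 time(s) before: same algorithm, new behavior
# To "reset" the system would be to discard self.counts, a return to ignorance.
```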

There are many different types of networks coming into existence besides giant informational systems such as Google and Facebook. There are neural networks, deep learning networks, and belief networks, among others. All these fall under the broad rubric of machine learning.

In 1951 neural network pioneer Warren McCulloch wrote that the distinction between machines and humans was that human minds reacted and adapted to their environment with the purpose of thriving in it in a multiplicity of ways:

Why is the mind in the head? Because there, and only there, are hosts of possible connections to be formed as time and circumstance demand. Each new connection serves to set the stage for others yet to come and better fitted to adapt us to the world, for through the cortex pass the greatest inverse feedbacks whose function is the purposive life of the human intellect. The joy of creating ideals, new and eternal, in and of a world, old and temporal, robots have it not.

Not yet, anyway. McCulloch was speaking of the calculating machines of the mid-twentieth century. But now that large systems such as Google and Facebook are persisting and growing for years and decades, we can contemplate the possibility of an evolving, maturing network whose intelligence is not intrinsic to its algorithms but lies in its evolved complexity, developed over great periods of time and through repeated, varied, and error-prone interactions with the world—just like a child.

Today’s most powerful machine learning techniques, such as those employed by Google’s DeepMind, excel at recognizing similarities between explicit patterns, whether those patterns are made of words or pixels or sound waves. They can judge whether two passages of text have similar lexical structures and word choices, but so far they can say little about the texts’ meaning. They can determine whether a creature in a photograph looks more like a dog or a cat, but so far, they know nothing of what a cat or a dog is. They can beat humans at Go, but they cannot yet discern whether a particular Go board is beautiful or not—unless we train an algorithm on a set of “beautiful” and “non-beautiful” boards and have it try to learn that classification.
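
To be concrete about what that last clause would involve, here is a sketch in Python using scikit-learn; the boards and labels are random placeholders standing in for human judgments, and the choice of model is mine, not DeepMind’s.

```python
# Hypothetical sketch: learning a human-supplied notion of "beauty" for Go boards.
# The data here is random filler; a real attempt would need many boards labeled
# by people, and the judgment learned would be inherited entirely from them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
boards = rng.integers(-1, 2, size=(200, 19 * 19))  # 200 flattened 19x19 boards: -1, 0, 1 per point
labels = rng.integers(0, 2, size=200)               # 1 = "beautiful", 0 = "non-beautiful" (human-assigned)

model = LogisticRegression(max_iter=1000).fit(boards, labels)

new_board = rng.integers(-1, 2, size=(1, 19 * 19))
print(model.predict(new_board))  # a verdict on beauty, borrowed wholesale from the labelers
```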

While these machine learning networks can perform feats that leave humans in the dust, they inherit contexts, standards, and judgments from humans, and they are unable to generalize from a given task to similar yet distinct tasks without human guidance. They cannot yet reason about the application of labels, as my daughter did at age four:

HER: These ballet shoes are so soft. I bet they are made out of polyester.

ME: Maybe they are made out of marshmallows and you could eat them.

HER: You can’t eat ballet shoes.

ME: Then they aren’t made out of marshmallows.

HER: Nothing’s made out of marshmallows.

ME AND HER (simultaneously): Except marshmallows.

HER: Only marshmallows are made out of marshmallows. That’s why they are called marshmallows. All the other names are used up by people and other stuff.

In contrast to the image classification performed by machine learning networks, children quickly learn to categorize by far more than visual similarity, and in fact learn to reject visual similarity in favor of other categorizations. As psychologist Susan Gelman has found, once told that a pterodactyl is a dinosaur and not a bird, even a young child will tend to infer based on that category membership rather than any visual similarity. They will guess, for example, that a pterodactyl does not live in a nest.

A machine learning network cannot yet switch from visual to nonvisual categories of its own accord. The problem facing AI at this time is how to move from the specific to the general in a humanly rational way: how to take the knowledge from one clearly defined task, such as labeling images or playing Go, and put it to new and different use in a general-purpose thinking network. I suspect that accomplishing this will require the creation of networks that engage with the physical world in a variety of different ways, processing visual and verbal information in a variety of contexts and learning—slowly—what approaches do and don’t work in various situations. If this were possible, the network would also need to be sufficiently powerful to apply the same broad set of techniques to varied and novel problems. We still have a long way to go.

But as computer networks grow in complexity and endure over greater lengths of time, our degree of direct control over them diminishes. Programmers can code, debug, and fix individual algorithms within the system, but the overall system has an ongoing, linear continuity. Components of Google’s and Facebook’s networks are constantly shut down, modified, and restarted, for instance, but the entire system persists and evolves.

So it is with a child. Her algorithmic components include intrinsic biological mechanisms, the physical effects of her surrounding environment, and other living creatures—for example, parents who may wish they could reset their child’s emotional valences on hearing a four-year-old sing this song, as I did when I asked my daughter what she was sad about:

So many sad things, I can’t even tell you.
They are all squished up into a ball.
Squished into a ball.

And sometimes things fall off the ball
and they go into the trash.
And I really really really love TV
and I hope I can watch it tomorrow morning.

I can’t pull sadness out of my daughter’s brain, so I let her watch TV and hope it ameliorates the ball of sadness.

•••

The desire to “reset” aspects of the human mind is an abiding one. As a society, we dream of reset buttons for the soul and self, of ridding ourselves of phobias, addictions, bad habits, and the miscellaneous accumulated burdens of our lives. The recent rhetoric around Eye Movement Desensitization and Reprocessing therapy, an unorthodox method that has shown promise in treating trauma and phobias, speaks not only of desensitization but of returning the mind to equilibrium, processing and resolving a bug in the system. The system doesn’t stop, nor does it return to a virgin state, but we hope that the network that makes up the human mind can be repaired on the fly.

“On the fly” is also the term used for modifying and fixing a computer program as it runs, without stopping and restarting it. And the process of creating artificial intelligence is coming to seem less a matter of coding up algorithms and more one of applying algorithms to a growing system, like pouring water on a plant or educating a child. Systems such as Google and Facebook are the first genuine digital children.

The transition from free association to rational explanation that children unknowingly make is a mystery that artificial intelligence has yet to conquer. But if we do indeed create such a general network, it is not clear that any great secret of the nature of intelligence will be revealed. We’ll have created something as complicated and irreducible as a human infant itself. We will be able to watch these networks grow, learn, and mature, but we will not be able to debug them any more than we can debug a child. Nor will we understand how or why they function in the way that we understand how an algorithm functions. To say, “Oh, well, it said ‘Goo’ instead of ‘Ga’ because this set of network weights was not triggered and this one was” is not an explanation. Rather, we will see, as I did with my own daughter, that a complex set of predispositions and behaviors, when encoded into a single creature, results in even more complexity when that creature starts to engage with the world in myriad ways.

As my daughter grows up, I witness her thinking, more and more, in ways she never has before, just as AIs are starting to impress us with their “thinking.” If something acts like it is thinking, that will be good enough for most people. It is no wonder we are desperate to program AIs to love us—but we had better be prepared to love them as well. As my older daughter once asked me, “If we break through a screen, are we in the computer’s life, and do we get to feel what it feels like?”