At the height of the COVID-19 pandemic, Awakening Health Ltd. (AHL), a joint venture between two robotics companies, SingularityNET (SNET) and Hanson Robotics, introduced Grace, the first medical robot to have a lifelike human appearance. Grace provides acute medical and elder care by engaging patients in therapeutic interactions and cognitive stimulation and by gathering and managing patient data. By the end of 2021, Hanson Robotics hopes to mass-produce Grace, one of its newest units based on the robot Sophia, for the global market.

What does it mean to take care of another human being?

Though Grace is the first to look so much like a person, she is hardly the first medical robot: like Tommy, Yumi, Stevie, Ava, and Moxi, she is part of a growing cohort of robot caregivers working in hospitals and elder care facilities around the world. They do everything from bedside care and monitoring to stocking medical supplies, welcoming guests, and even cohosting karaoke nights for isolated residents. Together, they have been heralded as a solution to our pandemic woes.

In the last couple of years, sales of professional service robots have increased by 32 percent (to $11.2 billion) worldwide; sales of assistance robots for the elderly increased by 17 percent (to $91 million) between 2018 and 2019 alone. The unique challenges of safely delivering care and services during the COVID-19 pandemic have only increased their appeal. Around the world, robotic systems are now relied on to disinfect surfaces, enforce mask-wearing and social distancing protocols, monitor patients’ vital signs, deliver supplies and groceries, conduct virtual tours, and even facilitate commencement ceremonies.

But what does this ramping up of interest and investment in robotics mean for human workers?

In the short term, there is no question that while robots may provide some support to human workers and help minimize their exposure to unsafe conditions, they cannot replace their human counterparts. At the current level of robotics, total replacement would require an impossible degree of predictability in work environments. As Lucy Suchman, one of the pioneers in the field of social robotics, has noted, “Robots work best when the world has been arranged in the way that they need it to be arranged.” Robots can function very well in factories and warehouses because assembly-line work provides a uniform environment; in homes and health care facilities, such uniformity is more difficult to achieve.

In the long run, though, robots may not always be so limited. Thus, it is critical that we consider not only whether robots can replace human workers—since someday the answer will surely be “yes”—but also whether they should. Indeed, the very attempt at automation represented by Grace and her cohort raises questions not only about the nature of work in general but specifically about what it means to do care work. What does it mean to take care of another human being? And, in turn, what does it mean for an algorithm to care?


These questions of whether to employ phenomenally expensive robotic systems are especially poignant given that the field of care work relies heavily on the labor of poor women of color, often immigrants, who have long been told both that civilization depends upon their work and that it is of little monetary value. In this sense, any discussion of the transformative potential of care robots must be tempered by the reality that, as Ai-jen Poo and Palak Shah point out, the foreseeable “future of work” is not automation. The future of work continues to be an “essential,” low-wage workforce disproportionately composed of women of color who often lack a living wage, workplace safety, paid sick and family leave, and adequate health care. In fact, health care workers have been among the hardest hit by the pandemic. The most recent data from the Centers for Disease Control and Prevention report that, to date, 450,517 health care personnel have been infected by COVID-19 and at least 1,497 have died, mainly people of color. Given significant underreporting and incomplete data, the true figures are likely higher.

Outsourcing care may have grave consequences if racist and gendered biases in the code go unaddressed.

Against the expectations of many futurists, automation will not automatically generate a more just labor market. That will only happen if labor justice becomes a condition of automation’s adoption. Otherwise, it will merely compound the problem, adding another layer to the inequities experienced by those most socially and economically vulnerable.

This is because algorithms tend to replicate biases that already exist in our world. In recent years this has been documented by critics of artificial intelligence such as Joy Buolamwini, Safiya Umoja Noble, and Ruha Benjamin, who have noted how algorithmic biases are reflected in everything from facial recognition systems’ failure to identify people of color to the technological redlining of search engine results related to Black subjects. Taken together, these amount to what Benjamin calls the New Jim Code: systemic racial bias encoded in new technologies. While our knowledge of the inner workings of these systems is often occluded because their algorithms are proprietary, the output is clear: the work of Buolamwini, Noble, Benjamin, and others leaves little doubt that racialized regimes undergird computational systems and machine learning. That these new technologies do all this while being perceived to be neutral—because machines are thought incapable of bias—only exacerbates the problem.

To return to the example of Grace—a newer version of the robot Sophia, with functionality adjusted to serve the health care sector—her makers claim that these robots promise to provide not only safety but also “human warmth, human connection,” and will serve as an “autonomous extension of human expertise.” By choosing to have Grace look like a white woman, however, the designers broadcast a particular understanding of human expertise that is both racialized and gendered.

A similar bias can be seen in the case of a telepresence robot called EngKey, designed to help teachers in Philippine call centers remotely offer English language instruction to elementary school children in South Korea. The EngKey robot wheels around the South Korean classroom with a white avatar face, even though it is a Filipina teacher who is delivering the lessons. EngKey’s developers offer a twofold rationale for the white avatar: first, to minimize users’ confusion about whom they are interacting with, the human or the robot; and second, to reinforce the perceived global “authority” of English instruction. In so doing, however, roboticists intervene in the geopolitics of labor, reinforcing a global fantasy of what a qualified worker looks like while effacing the actual laborer, who does not fit this ideal. And the “robot teachers,” as they call themselves, are forced by this arrangement to operate via a vocabulary of innovation that reinforces whiteness as powerful even as this script exploits their own “Third World” labor. Robot teachers I spoke with articulated a profound sense of dissociation that arose from embodying a blond, white robot face while performing a kind of affective labor that was simultaneously disembodied and embodied, immobile and mobile. Notably, this was done not to guarantee successful language instruction—which certainly did not depend on such a cleaving—but rather in the name of creating a seamless integration between human and machine, as demanded by the roboticists.

Designing a care robot that looks like a white woman broadcasts a racialized and gendered understanding of human expertise.

The android Erica, created by Japanese roboticist Hiroshi Ishiguro, is another example of how, in the pursuit of humanness, roboticists can reproduce gendered norms. Ishiguro hopes that Erica, still an early prototype, will help lead the way toward a world in which machines and humans coexist, with robots enhancing the lives of humans by doing the most tedious and undesirable kinds of work, including caring for the sick and elderly. But by programming Erica to simulate a “warm, gentle, and caring” affect, Ishiguro’s team has doubled down on the ways in which this kind of care work is gendered. Erica’s design is thus premised on the perception that the labor of caring for the world rests primarily on the backs of women, who will provide warmth and gentleness while easing the burden and relieving the suffering of humanity. Erica is also designed to adhere to conventional standards of gendered beauty. As Ishiguro notes, he designed Erica to be the “most beautiful” android, drawing on the amalgamated images of (his perception of) “thirty beautiful women.” Apparently it isn’t enough to be a female caretaking robot; you have to be a “beautiful” one as well.

Concerns about race in robotics extend not only to how robots look but also to how they will interact with others. How will robots such as Grace and Erica recognize and interpret a diversity of faces? Will the racist assumptions that are likely baked into Grace’s algorithms define the type of therapeutic interventions that patients receive? AI facial recognition systems are notoriously bad, for example, at interpreting the emotional responses of people with darker skin tones. Some can’t perceive the faces of people with darker skin at all, let alone understand their expressions. It gets worse from there. Sociologist Ruha Benjamin found that one of the most popular tools used by health insurers to assess health risks defined a theoretical Black patient as having lower risk than a white patient with the same health markers because of racial bias in the tool’s algorithms. In other words, what we are seeing is the emergence of yet another oppressive system that structures social relations by advancing a mediated understanding and delivery of care work. Borrowing from Joy Buolamwini’s idea of a “coded gaze” in algorithms, I refer to this racist imbalance in health care AI as coded care.
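To see how such a bias can arise even when a model never sees race, consider a minimal sketch in Python. The data, variable names, and the choice of past health care cost as the training target are all hypothetical illustrations, not a reconstruction of any insurer’s actual system: if past spending stands in for health need, then patients who historically received less care will be scored as lower risk even when they are just as sick.

```python
# A minimal sketch of proxy-label bias (hypothetical data and parameters,
# not any insurer's actual tool). The "risk model" never sees group
# membership, yet it scores equally sick patients in the historically
# underserved group as lower risk, because its training label
# (past health care cost) reflects unequal access to care.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True illness burden, identically distributed across both groups.
sickness = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)  # group 1: historically received less care

# Observed cost: equally sick patients in group 1 generate lower costs
# (the assumed access gap). Cost is the label the risk model learns.
cost = sickness * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 3, n)

# A well-calibrated cost predictor reproduces its label, so the (noisy)
# cost itself serves here as the model's risk score.
risk_score = cost

# Patients above the 80th percentile of risk are flagged for extra care.
threshold = np.percentile(risk_score, 80)
for g in (0, 1):
    sickest = (sickness > 60) & (group == g)
    share = np.mean(risk_score[sickest] > threshold)
    print(f"group {g}: {share:.0%} of the sickest patients flagged")
```

In this toy example, nearly all of the sickest patients in the historically well-served group are flagged for extra care, while almost none of the equally sick patients in the underserved group are. The bias enters through the proxy label, not through any explicit reference to group membership, which is precisely why such systems can pass as neutral.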


The idea of coded care gives us a vocabulary for thinking about the potential harm of automating care work. Increasingly, using robots to automate care work is pitched as necessary to assist an aging population, minimize occupational hazards, relieve caregivers’ burden, and address the high rates of turnover and burnout among them. One study suggests that by 2030, there will be a shortage of 151,000 paid direct care workers and 3.8 million unpaid family caregivers. But given these concerns about coded care, whether robotic automation is the best way to address this shortage is debatable. Even assuming that empathy, emotional labor, and creativity can be mechanized anytime in the near future—which many roboticists doubt—outsourcing these kinds of care may have grave consequences for those receiving that care if racist and gendered biases in the code are not addressed.

The Algorithmic Justice League reminds us that “we can code a better future.”

Moreover, we must take seriously the question with which we began—what this will mean for human workers—and the insistence that labor justice must be a precondition for the adoption of automation. Because these decisions will disproportionately impact women and communities of color, their interests are likely to take a back seat to moneyed interests and the well-being of affluent white people. But caring is a unique kind of labor, and when care workers are mistreated, we all lose out. Recall EngKey’s robot teachers, whose sense of disconnectedness from their students translated into affective labor that is mechanized, racialized, and gendered in ways that harm both teachers and students. The work that EngKey performs teaches its Korean students as much about the enviable power of white femininity as it does about English. Likewise, will Grace, whose creators promise a technology that will “engage patients naturally and emotionally,” actually deliver comfort, empathy, and kindness to those isolated during the pandemic, or only a simulacrum of how her creators imagine femininity?

More than ever, as the pandemic provides justification for expanding the use of care robots such as Grace, we should be mindful of how these interventions are coded. We need to push for more equitable and accountable artificial intelligence, working with collectives such as the Algorithmic Justice League (AJL) to achieve this goal. The work of AJL and others reminds us that “who codes matters, how we code matters, and that we can code a better future.” If we are serious about pursuing robot labor, then labor justice must be a precondition for automation. Otherwise the robots will only provide another excuse to ignore the inequities faced by human workers by simply replacing them and automating their labor.

We need to continue to explore the ethics of developing care robots, informed by critiques of current models of automation from researchers such as Pramod P. Khargonekar and Meera Sampath, who propose what they call “socially responsible automation.” On this model, businesses can pursue automation while simultaneously investing in training and building the skills of human workers to adapt to a technology-driven workplace. The idea, then, is not simply to replace human workers with a more efficient technology but to develop a workplace where robots and human workers can truly coexist.

But more importantly, I propose that the ethics of developing care robots must be based on a framework of labor justice that continues to develop remedies to the structural inequities that govern the lives and labor of essential workers. This can be done by supporting and adopting Senator Elizabeth Warren and Representative Ro Khanna’s proposal for an Essential Workers Bill of Rights. The provisions of this bill would ensure not only that care workers receive a living wage and health care security but also that they have access to child care and paid sick and medical leave.

It is hard to imagine a future society without both human workers and robots. So, as roboticists work on developing care technologies, we need to attend to how racialized and gendered perceptions get coded into their design. The guiding principle cannot simply be how best to simulate humanity; it must be how to center justice and equity in designs for coding care. Only then will it be possible to produce algorithms that truly care.