Silicon Valley to Liberal Arts Majors: We Want You
June 14, 2017
Tech billionaires love to declare the death of liberal arts, but could they instead be the future of Silicon Valley?
Scott Hartley, The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World
Houghton Mifflin Harcourt, $28 (cloth)
Ed Finn, What Algorithms Want: Imagination in the Age of Computing
MIT Press, $29.95 (cloth)
If you are a student of the liberal arts, is there a place for you in our increasingly digital world? Not really, according to many. Bill Gates thinks your programs should be cut in favor of STEM subjects, his fellow tech-billionaire Vinod Khosla says “little of the material taught in liberal arts programs today is relevant to the future,” and Marc Andreessen says you will end up working in a shoe store. Maybe you should just learn to code.
Tech billionaires claim that fuzzies—students of the liberal arts and social sciences—are doomed to working in shoe stores, but two new books pin the future of tech on them.
Or maybe not. Two new books make a case that the technology industry can no longer be driven purely by software engineer hackers, and that you have a critical role to play in guiding it in more ethical and humane directions. That said, their authors differ dramatically about what that role is. Scott Hartley wants you to bring your skills and insights to the world of technology startups, to unlock the full potential of technological innovation. Ed Finn, on the other hand, seeks to hold the technology industry to account: he believes we need “more readers, more critics,” posing questions about who technology serves, and to what ends.
• • •
Hartley’s The Fuzzy and the Techie (“fuzzy” being a Stanford nickname for humanities and social science students) is a clarion call for you to join the world of digital disruption, innovation, and entrepreneurship. The author contends that Silicon Valley needs you if it is to fulfill the next stage of its disruptive vision: your creativity and your skills of “critical thinking, logical argumentation, and complex problem solving” will make for better technology; your insights into our public institutions and what makes us human will guide technology to build a better world.
The book’s early chapters explore how, underappreciated and largely unnoticed, “fuzzy” skills have already proven themselves to be complements to coding talent. Numerous examples, from Palantir’s national security insights to Stitch Fix’s wardrobe recommendations, make the point that combining human and algorithmic intelligences often leads to better results than a purely automated approach. Fuzzy insights provide the driving force for new businesses, whether it is social messaging application Slack or school/parent communication tool Remind. In later chapters, Hartley uses example after example to make the case that the liberal arts’ concerns for ethics, understanding of human motivations, and insights into social dynamics can find a productive home in the technology world.
But there is a critical failure at the heart of The Fuzzy and the Techie: in his eagerness to portray fuzzies doing well by doing good in the technology industry, Hartley too readily accepts Silicon Valley’s flattering self-descriptions of its values and vision for the world. The positivity of entrepreneurship does not sit comfortably with the skeptical outlook that the liberal arts nurture, and Hartley fully embraces entrepreneurship.
His choice is surely a product of his own life story and incentives. Hartley tells us he went to Palo Alto High School, in the heart of Silicon Valley, where he was taught by Esther Wojcicki. She is a “pioneer in the field of blended learning” and the mother of three high-profile Silicon Valley women, one of whom rented out the garage where Google was founded. Hartley went on to study political science at Stanford and is now a venture capitalist and advisor to tech startups. Even though he lives on the East Coast, he is Silicon Valley through and through: it is hardly surprising that he persists in believing in its values.
I reluctantly admit that my own beliefs may also have been shaped by my upbringing. I am, after all, an atheist son of atheist parents, a social democrat from a Labour Party family, and before entering the software industry I was a student of chemistry at a time when the Future was to be shaped by men in lab coats. I would love to think that my disagreements with Hartley are purely the product of my independent search for rational and objective truth—but let’s face it, that seems unlikely.
Still, if you are a budding fuzzy entrepreneur, I have a request: please do not put aside your critical thinking in the way that Hartley himself seems to have done. It is not enough to say technology can help build a better world: we must ask what a better world actually looks like. If you think, as Scott Hartley does, that making the U.S. military more efficient is making the world a better place, and that the only problem with the U.S./Israeli Stuxnet cyberwarfare virus is that “the code could be repurposed by our enemies and turned against our own infrastructure,” then we have different views on what “making our technology more ethical” involves.
For many of us, Silicon Valley’s “Don’t Be Evil” proclamations have lost credibility as the technology industry has grown from scrappy underdog to seemingly unaccountable colossus. Hartley interprets the industry’s stumbles only as well-meaning mistakes rather than any sign of deeper problems. He refers to Cathy O’Neil’s excellent 2016 book Weapons of Math Destruction, which exposed the many ways that algorithms can perpetuate discrimination, and concludes only that “errors in the collection and interpretation of data must be corrected by human analysis, and this is work for which those trained in the humanities and social sciences are well equipped.” Hartley believes that liberal arts insights can right the ship: “We can pair fuzzies and techies to train our algorithms to better sift for, and mitigate, our shared human foibles.” He is optimistic about Facebook’s attempts to solve the “filter bubble” problem by hiring “a vast complement of fuzzies . . . to work alongside its techies.” But Upton Sinclair explained long ago why the right set of perspectives alone will not solve the problem: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Take Chapter 5, on “making our technology more ethical.” Hartley tells us about Talkspace, a company whose founders “are on a mission to inspire all those who are suffering from mental pain.” Talkspace is a sort of Uber for therapy: clients gain text-message access to therapists who provide their services through the Talkspace platform. It is an affordable, lightweight, and discreet form of therapy for those who may not otherwise reach out for it, and Hartley enthuses, “Therapists have been as avid about Talkspace as the clients.”
But skepticism is in order. Cat Ferguson painted a different portrait of Talkspace in a lengthy and detailed December 2016 investigation in The Verge. Talkspace therapists do not have access to clients’ real names or contact information, a practice which risks putting them in conflict with their profession’s legal and ethical standards. Ferguson describes how the policy of patient anonymity left one therapist unable to alert authorities in case of emergencies:
“I get to live knowing a [young] baby is being driven around by a drunk woman, I have no way to file on them, and Talkspace has put me in this position,” the therapist said in an interview in October, her voice breaking. . . . Multiple former workers told The Verge they had reported a safety concern, and were denied access to client contact information.
Ferguson reports other problems. Talkspace therapists are independent contractors (like Uber drivers), but the company provided therapists with scripts advertising additional Talkspace services, and denied their pay if they did not deliver the script to their clients. Talkspace has punished therapists if too many clients leave the platform (and leaving therapy may, of course, be a good outcome), and pay has apparently been erratic.
Ferguson’s article is careful, well sourced, and serious. For my money, it is a lot more convincing than the Talkspace CEO’s rebuttal. I find it disturbing that Hartley uses Talkspace as an example of the ethical use of technology without even acknowledging that there might be real dangers in providing a for-profit mental health app with uncertain regulation.
Alternative credit is a particular theme of Hartley’s book. It praises Tala’s use of phone data to establish credit ratings and PayJoy’s financing of smartphone purchases. Hartley also praises Lenddo, a credit scoring company that “has created an algorithm that assesses credit risk and verifies identity based on non-traditional sources, such as social networking data. . . . [P]eople who may not have the traditional qualifications to be approved [but Lenddo can find] signals in the data that broaden access to life’s bounty.”
If we learned anything in 2008 it was that easy credit is not an unambiguous social good. We are also learning that using social media data to assess character and reputation is a terrible idea. Lenddo is exactly the kind of algorithm Cathy O’Neil warned about, prone to bias and redlining (by friend network rather than by zip code). The involvement of “fuzzies” in the company does nothing to change that.
Hartley also leaves Lenddo’s unorthodox “social enforcement” techniques unmentioned, but surely he read in The Economist that “the credit scores of those who have vouched for a borrower” from Lenddo “are damaged if he or she fails to repay.” PayJoy also has a history of coercive enforcement. You do not have to be a cynic to at least wonder about the terms under which these companies are broadening access to life’s bounty.
In the end, Hartley’s view of the world is consistently post-political: powerful technologies can fix the world’s biggest problems if only well-meaning people design the right kind of algorithm. He falls into the “technical-fix” line of thinking that Evgeny Morozov has called “solutionism,” a technological successor to the idea of “incentive-compatible mechanisms” that has intrigued public-choice economists. The idea is that if we could only be clever enough to design systems such that participants have no incentive to deceive, then we could make good collective decisions without the need for central government. A post-politics world could then become a reality.
Twenty years ago, political scientists Gary Miller and Thomas Hammond noted a fly in the solutionist ointment. Their 1994 paper “Why Politics Is More Fundamental Than Economics” (admittedly not an unexpected belief for a pair of political scientists) argues that “incentive-compatible mechanisms are not credible” because “the central decision-maker who implements the incentive-compatible mechanism will have a stake (and presumably an opportunity) for self-serving perversion of the system.”
Miller and Hammond were not writing about Internet platforms, but their argument helps to answer Ed Finn’s question: “What do algorithms want?” Rephrased as “What kind of behavior must commercial algorithms exhibit to survive?” the answer is simple: algorithms must make money. If Google’s revenue falls, the company’s AdWords and AdSense algorithms will go quiet, along with the other ambitious projects with higher motivations. If Lenddo’s alternative credit scoring does not make money, it too will vanish.
In What Algorithms Want Finn shows how algorithms make money by placing themselves in the middle of our interactions, taking a small slice out of each. In doing so, algorithms have moved the place where serious money is made “from end result to process.” Monetary value is no longer attached to the content we are searching for, but to the search itself: Uber has moved value from the taxi drive to the hailing process, Apple from music making to music distribution. In a detailed analysis, Finn shows how Bitcoin’s digital currency algorithms make computation a form of financial value, storing transactions on its distributed ledger and creating new coins by digital “mining.”
There’s a lot in this book, including detailed analyses of the blurring boundary between work and play, and of how computers are increasingly becoming our intimate companions. Finn takes imaginative creations such as the Star Trek computer or Netflix’s House of Cards as starting points to explore the increasingly tangled worlds of computing and culture. He leads us along some fascinating pathways, but whereas Hartley is optimistic about the emerging sophistication of user interface design (to take one topic of overlap), Finn does not trust it: “Behind the façade of the facile, friendly computational interface, there is a world of labor translation, remediation, and exploitation at work.” The ideas are often provocative, but Finn’s dense academic prose and reliance on abstract metaphor sometimes get in the way not just of clear explication but, I fear, of clear thinking.
Finn’s concern about platforms and their algorithms comes down to politics and conflicting interests: the algorithm designers and platform owners have an incentive to distort their mechanisms in their own favor. One form this distortion takes is “corrupt personalization”: Netflix inserts its own shows prominently into its recommendations, Facebook promotes products “liked” by our friends. More dramatically, Uber uses the data it has about its drivers to nudge their behavior, distorting the incentives around riding and driving in the interest of its own bottom line. A lot of criticism has been directed at Uber’s values as a company, but Uber’s algorithmic incentives are the root of the problem.
Corrupt personalization has no place in Hartley’s book save as a mistake to be corrected by liberal arts majors, who are particularly “driven to explore how our families and public institutions could . . . operate better.” His lack of skepticism when startups claim to build a better world takes me back to one of my favorite quotations from Bertolt Brecht: “Amongst the highly-placed / It is considered low to talk about food. / The fact is: they have / Already eaten.”
It is never about the money in Silicon Valley. The fact is: they have plenty of it.
• • •
Both The Fuzzy and the Techie and What Algorithms Want explore the broader question of work in a digital world. It is an old argument, whether technology will replace human labor or complement it, but it has been given new life by the recent dramatic developments in artificial intelligence.
In one of the better chapters in The Fuzzy and the Techie, Hartley argues that the liberal arts are in general a complement to technology, not a substitute. The more tech we have, the more work there is for those trained in the humanities. Hartley shows that the demand for many STEM careers is actually falling in the United States: the big growth is in areas such as health care and personal care, which are heavily social and full of non-routine, difficult-to-specify tasks—in other words, fields that are not well suited for algorithmic solutions.
Finn is more worried that much work falls into what he calls an “implementation gap”: doing what machines cannot yet do but will do sooner or later. Sometimes the boundary between the two perspectives seems to vanish: Facebook and YouTube moderators and Uber drivers owe their jobs to technology platforms, even as they train their digital replacements by their own efforts.
There is no doubt that humans will become increasingly dependent on machines. Even when it comes to creative work, Finn describes how Netflix’s algorithms and detailed metrics frame the creative efforts of auteurs. But as they look to the future, Finn and Hartley part company on the nature of that dependence. Hartley sees technological innovation as an opportunity for the sciences and humanities to join together in harmony. Finn hopes for a more tension-filled relationship, in which “the role of the curator, the editor, and the critic is more important than ever.” I am with Finn, but then, I would be.