The movement for free college has gained considerable momentum in the past year, in no small part thanks to the sad state in which many college graduates currently find themselves. For decades, we have told young cohorts entering the labor market that if they only get the right skills, they will find steady, rewarding, and remunerative work. But we have not been able to keep that promise, in large part because our understanding of the labor market’s dysfunction is built on a theory of human capital that has very little to do with reality. We are now living with the consequences: today’s students pay more for degrees in the hope of landing jobs for which they are overqualified, because the alternative, no job at all, is worse.

Meanwhile, as tuition continues to rise, accumulated student debt increasingly constitutes its own economic burden, above and beyond a labor market offering stagnant wages and insufficient, precarious work. Student debt is especially onerous for racial minorities, as the current system relegates those with the least family and community resources to the worst higher education institutions, exacerbating inequality. Couple this with continuing cuts to public higher education and, as David Leonhardt recently put it in the New York Times, “It’s as if our society were deliberately trying to restrict opportunities and worsen income inequality.”

The United States has never had free, high-quality college education. But that does not mean we can’t have it. In the past, we have included world-class public education in our understanding of public goods, and we have successfully expanded public education on the premise that society as a whole benefits from a well-educated population. Previous generations and social movements fought hard to create good educational institutions at public expense. The current generation is discovering why that matters.


Between 1910 and 1940, the share of eighteen-year-olds with a secondary education increased from 10 to over 50 percent. In their book The Race Between Education and Technology (2008), economists Claudia Goldin and Lawrence Katz credit America’s economic advances in the twentieth century to this uniquely American “high school movement.” As they describe it: “The public high school was recreated in the early 1900s to be a quintessentially American institution: open, forgiving, gender neutral, practical but academic, universal, and often egalitarian. It was reinvented in a manner that moved it away from its nineteenth-century elitist European origins.”

Goldin and Katz also emphasize that the “high school movement” was, on the whole, locally funded and directed. There was no national movement toward universal secondary education, or even systematic federal funding available to states to create their own programs. The only impetus for the high school movement at the national level came from federal land grants to states, which set up agricultural colleges for the higher education of farmers and the professional class. Many states guaranteed undergraduate admission to these institutions to anyone with a high school diploma, which spurred many local school districts to expand their provision of public education from primary to secondary school.

For the most part, though, the high school movement happened because local school districts—hundreds of thousands of them—taxed themselves to build and staff free public high schools.

By 1960 California was attempting a similar educational revolution in the realm of higher education. As the rising baby boom generation seemed poised to demand more higher education than any of its predecessors, the state wanted to make high-quality college more accessible. The resulting Master Plan for Higher Education, devised by the University of California’s president, Clark Kerr, vastly expanded the University of California system and created the California State system from the state’s teachers colleges. It also expanded community colleges, both to provide remedial education and to ease the transition into traditional four-year institutions. The Master Plan set out to make educational advancement solely a matter of individual proficiency, not family background or ability to pay. The result paralleled what Goldin and Katz observe about the high school movement: it was open, forgiving, practical but academic, and, above all, egalitarian.

But while California’s model was widely lauded and enacted in other states and cities, albeit with a less unified and ambitious vision, the federal government chose a different route with the Higher Education Act of 1965—a decision with reasons and repercussions that form a major part of the background for today’s student debt crisis. Instead of funding institutions, the federal government funded students. Why?

The main reason was race. At the time, the federal government already had its hands full enforcing the Supreme Court’s mandate for integrated elementary and secondary education. As the 1960s turned into the 1970s, the political difficulties enforcing that mandate with court-ordered integration plans and busing became ever more severe, making racial integration seem structurally impossible. For many, trying to add higher education to the mix was a bridge too far.

Moreover, while one of the ultimate goals of the civil rights movement was to integrate the grand public edifices created by the Progressive and New Deal eras, the potent backlash to that goal ended up eroding those same public goods for everyone. Once it became politically and rhetorically impossible to note the existence of racial exclusion in the public sphere, a new ideology of economic individualism came to dominate federal and state policymaking. This included geographic relocation—suburbanization—as a method of avoiding integrated schools and neighborhoods, evading the reach of the federal judiciary and a cautious Congress. Indeed, subsidized mortgage lending in all-white neighborhoods ensured that even as one political movement integrated the economy and society, another resegregated it.

In this respect, Title IV of the Higher Education Act of 1965 led the way; it was individualistic from the start. Reflective of the “human capital” ideology and economic theory of the day, the Higher Education Act facilitated individual choice in selecting (and gaining admission to) institutions that operated within an already-stratified system. Rather than funding institutions and telling them to provide education of a certain standard for all comers (subject to entrance requirements)—what economists would call a “pooling equilibrium”—it funded students, who could then be sorted into a “separating equilibrium,” effectively stratifying the sector by race and class.

The aforementioned theory of “human capital” behind these policies holds that students choose their level of educational attainment by weighing expected future earnings against tuition and the opportunity cost of forgone wages. In this story, the policy failure in higher education arises when students cannot secure financing for their education before their careers start: human capital cannot be collateralized, so borrowing against it is unsecured and expensive. Students therefore cannot undertake a profitable investment in their future earnings unless their families can support them. The solution to this “market failure” was to supply government-guaranteed student loans, thus ensuring access to higher education that would pay off ex post, both for borrowers and for lenders.
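
In stylized form, the model amounts to a simple investment calculation (a sketch of the textbook setup, not a formula drawn from the policy debates themselves; the symbols $s$, $c_t$, and $r$ are notation introduced here). A student enrolls when the discounted earnings premium exceeds the direct and opportunity costs of schooling:

$$
\underbrace{\sum_{t=s+1}^{T} \frac{w_t^{\text{college}} - w_t^{\text{HS}}}{(1+r)^t}}_{\text{lifetime earnings premium}}
\;>\;
\underbrace{\sum_{t=1}^{s} \frac{c_t + w_t^{\text{HS}}}{(1+r)^t}}_{\text{tuition plus forgone wages}}
$$

where $s$ is years of college, $c_t$ is tuition, $w_t^{\text{HS}}$ is the wage forgone while enrolled, and $r$ is the interest (or discount) rate. On this view, the only thing that can go wrong is that $r$ is prohibitively high for students without collateral or family wealth, which is exactly the gap a loan guarantee is meant to close.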

The thinking at the time maintained that if students had access to loans, they had access to education, thus obviating the need to create the grand edifices in the public sector that characterized earlier eras and led to the civil rights conflicts over universal access. The “quintessentially American” model of education had changed from free and equal high-quality public education to private or privatized institutions and student debt. While government-guaranteed student loans solved a narrow policy problem—an incomplete capital market for financing higher education—they carried the implication that there was no other problem to be solved.


This conception of higher education thrived for the next several decades, as student populations became larger, more diverse by race and gender, and, simultaneously, more indebted. However, things started to go seriously wrong in the mid-2000s, as state budget crises following the 2001 recession led to the decline of state funding for higher education and the concurrent rise of the for-profit higher education sector.

Previously serving only a few technical niches with narrow credentialing mechanisms, for-profit chains such as the University of Phoenix found a massively expanded market by offering flexible, non-traditional degree options suited to older students, as well as a wider variety of degree offerings for those seeking service sector employment. The deregulation of accreditation standards in the mid-2000s also helped fuel the boom, since for-profit schools were suddenly eligible for federally guaranteed student loans.

This vast expansion of the federal student loan program can be interpreted as the most ambitious federal labor market policy of the past several decades. Although in terms of sheer numbers it did not take off until the 2000s, its roots can be found in part in economic scholarship and popular discussion of the economy from the 1990s. The story told at the time was as follows: sectoral transformation in the economy increased the need for workers with high human capital, which corresponded with high educational attainment in the form of a college degree. According to this interpretation, wage inequality rose in the 1980s and ’90s because rising demand for skilled workers confronted a relatively slow increase in the supply of skilled workers—hence higher wages for the skilled and rising inequality overall. This theory was even tweaked and extended to explain overall macroeconomic growth dynamics through the lens of aggregate human capital.
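
That reasoning can be made concrete with the canonical supply-and-demand framework economists used at the time (a sketch under standard assumptions; $\sigma$, the elasticity of substitution between skilled and unskilled labor, is notation introduced here, not a figure from the debate itself). The skill premium rises when relative demand outruns relative supply:

$$
\ln\!\left(\frac{w_S}{w_U}\right) \;=\; \frac{1}{\sigma}\left[\ln\!\left(\frac{D_S}{D_U}\right) \;-\; \ln\!\left(\frac{S_S}{S_U}\right)\right]
$$

where $w_S/w_U$ is the wage of skilled relative to unskilled workers and $D$ and $S$ index relative demand and supply. If technology shifts $D_S/D_U$ upward faster than the education system shifts $S_S/S_U$, the premium, and with it measured wage inequality, grows.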

This human capital–oriented approach to the labor market gradually morphed into a normative claim: to increase wages and economic growth, we should increase human capital by expanding higher education. The federal student loan program, in conjunction with increased enrollment, became the policy mechanism for accomplishing this. The normative implication was even extended to individual workers: if you want higher wages, increase your educational attainment and take on debt to do so. The debt would “pay for itself” with the increased earnings available to those with more education. But this theory was premised on the idea that the value of higher education credentials remains constant or increases, even as more people obtain them, because wages are set by worker productivity and productivity is increased by more education. That assumption proved false.

Formal and informal credentialization played a key role in driving would-be workers to acquire more debt-funded education, at all levels. For example, the reforms enacted by the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 required welfare recipients to either have a job or be in “re-training.” This drove recipients to seek out certificates from overpriced, predatory institutions, as Tressie McMillan Cottom notes in her book Lower Ed (2017).

States also enacted laws requiring teachers to obtain master’s degrees in their field of instruction and early education professionals to have bachelor’s degrees, both as part of the “standards” movement in education reform—in many cases, without salary increases commensurate with the debt required. The rising prevalence of state-level “occupational licensing” often meant enacting similar attainment requirements to practice as professionals in an increasing number of fields. These formal examples of credentialization through overt policy do not remotely encompass its full impact, which is often achieved informally: when jobs are scarce, employment tends to go to those with the highest educational attainment, leading educational credentials to filter down to lower-paying jobs.

The theory of human capital also provided a convenient pretext for cuts to state higher education budgets. Because college was seen as a good investment in future earnings, state legislatures averse to tax increases saw no problem in shifting education expenditures from their budgets to individual students as demand for higher education rose. And the federal government, its apparatus of subsidized and guaranteed loans now fully developed, was ready to pick up the slack. Since an expansion of human capital was thought to foster economic growth, the long-term, aggregate gain from expanding the stock of outstanding debt and filtering it down the wage distribution to people who would previously have gotten their start in the labor market without higher education (or with less of it) apparently outweighed the risks.

In this sense, federal student debt policy looks a lot like federal home mortgage policy during the inflation of the housing bubble. And as the financial crisis of 2007–8 revealed, there are indeed risks associated with debt-financed assets—they do not continue to increase in value indefinitely. Ironically, though, the end of a dramatic expansion of secured loans in the form of home mortgages was the beginning of the heyday of unsecured loans in the form of student debt. The huge increase in demand for higher education reflected the widespread sense, once again, that security in the labor market was to be found in credentials that ensured access to the jobs of the future. Since 2000 student debt has clearly followed cycles in the labor market: large increases when enrollment expands during recessions, and a leveling off when the economy partly recovers.

The problem, though, is that each of the last two labor market recoveries has been slow and inadequate compared to those that came before. Consequently, once student debt has accumulated, it is increasingly difficult to pay off. The repayment trajectories of successive cohorts of borrowers have worsened, to the point that borrowers who nominally entered repayment in 2013 now owe more than they did at the start, thanks to deferred interest, forbearance, re-enrollment, income-based repayment, and outright delinquency.
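
A stylized example shows how a balance can grow during nominal repayment (the figures are hypothetical, chosen only for arithmetic clarity). Suppose a borrower owes \$30,000 at 6 percent interest, so that \$1,800 in interest accrues each year, while an income-based plan caps payments at \$1,200 per year:

$$
B_{t+1} = B_t(1 + 0.06) - 1{,}200, \qquad B_0 = 30{,}000 \;\Rightarrow\; B_1 = 30{,}600.
$$

The balance rises by \$600 in the first year despite every payment being made on time, and the shortfall compounds from there.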

These problems are particularly acute for minority borrowers, who are more likely to end up in for-profit, high-tuition institutions that offer poor job prospects; who face discrimination in the labor and credit markets; and who have less family wealth to draw on either in financing higher education upfront or in cushioning the burden of student debt. Holding other demographic variables constant, minority students take on more debt and use it to buy more education than their white counterparts, suggesting that “extra” education—and its accompanying debt—is a prerequisite for minorities to beat the competition for scarce jobs in a discriminatory labor market.


Free college offers a solution to this sad state of affairs. So long as it is regulated in a way that ensures options for non-traditional students, free higher education would all but end the predatory for-profit sector. In addition, by acting as a “public option,” free higher education would serve as a check on the market as a whole. As with electric utilities (and banking, health care, and now Internet access), a public option offers a compelling vision for disciplining the market to serve, rather than exploit, its participants. Finally, free higher education would also level the racial playing field, mitigating the disparities that arise from inequality in parental and household wealth.

It may seem counterintuitive to suggest that free college would address the problem of runaway credentialization within the labor market. Wouldn’t making higher education free also make it more abundant—and hence even less valuable than it already is? This interpretation, however, fails to understand the actual role higher education is currently playing in the labor market: as a tollbooth to decent jobs. That tollbooth is currently expensive and discriminatory, whereas free college would be much cheaper and reduce racial inequalities in access to high-quality institutions.

But by itself, free college will not solve racial inequality in higher education. The public higher education sector is already highly segregated, with minority-serving institutions having borne a disproportionate share of recent state austerity. In too many cases, flagship universities offer de facto preferential admission to white and out-of-state students, especially after recent Supreme Court rulings curtailed their ability to mitigate these inequalities through explicit race-based admissions policies.

What we need, then, is a Brown v. Board of Education for higher education: a federal policy of desegregation that would ensure not just that some option in the public system exists regardless of race, but that access to the entire system is available regardless of race, and that the system as a whole is less stratified. This would necessarily reduce inequality within American higher education. In this era of credentialization, when higher education is an absolute prerequisite to getting a job that pays better than minimum wage, we cannot stop until the sector is recast not as a way of preserving and amplifying cross-generational inequality, but of mitigating it.

The heartening news is that we have done this before. While the high school movement really was a magnificent achievement, many southern states lagged behind the rest of the country in providing public secondary education because of racism. The whole concept of public goods was threatening to the South, a region of the country that used discrimination to uphold racial hierarchy at all levels of government and throughout its economy. And yet, the high school movement did eventually expand in the South—most significantly due to the federally led desegregation of southern public education following Brown v. Board of Education (1954) and the long battle waged throughout the 1950s and ’60s to have the decision enforced in deeply hostile territory.

That battle was won through the logic of public goods. Once that logic was abandoned for an individualistic interpretation of education, those grand edifices were hollowed out, as those able to secure what they wanted with private means decamped for the suburbs and for private schools and universities. From this vantage point, they were happy to see the old system crumble.

As we look back to the first half of the twentieth century to rediscover the logic of public goods, it is crucial to remember two things: public goods do not survive when we let the privileged opt out, and if we make them racially inclusive, the pressure for opt-outs intensifies. Given these antagonistic truths, we cannot pretend public options automatically sustain themselves politically. Success will require an unfailing commitment to universal access, even to the point of prohibiting the privileged from taking their business elsewhere.