Specialty care is the pride of the American health-care system. In fact, it dominates the system: only about a third of all physicians in the United States are primary-care physicians, compared with roughly half in most other industrialized countries.

Yet the overall health of Americans is relatively poor. In 2000 the United States ranked 23rd in the world for male and female life expectancy. Many people—even public-health experts—attribute this to disparities in health care among racial, ethnic, and income groups, and to the self-destructive behaviors of individuals. But none of these explanations really adds up. Even the white American population has a lower life expectancy than the population of any country in Western Europe. As for behaviors, Americans smoke less than people in most other countries and are less likely, on average, to drink alcohol. And while it is true that residents of the U.S. states with the most social inequality have the poorest health, international comparisons suggest that differences in social inequality alone are not to blame.

Why, then, is the American health-care system so bad for our health? There is no single or simple answer, but a large part of the story—and a part that is commonly overlooked—is precisely the predominance of specialist care over primary care.

Primary care deals with most health problems for most people most of the time. Its priorities are to be accessible as health needs arise; to focus on individuals over the long term; to offer comprehensive care for all common problems; and to coordinate services when care from elsewhere is needed.

There is ample evidence that a good relationship with a freely chosen primary-care doctor, preferably sustained over several years, is associated with better and more appropriate care, better health, and much lower health costs. In contrast, little is known about most of the benefits of specialty care, although we do know that the greater the supply of specialists, the higher the rates of visits to specialists. We also know that when specialists care for problems outside their main area of expertise, the results are not as good as with primary care. Since most people with health problems have more than one ailment, it makes sense to have a primary-care practitioner who can help decide when specialist care is appropriate.

It is not surprising, then, that areas with more primary-care physicians have better health, even after demographic differences (such as age distribution and income levels) are taken into account. A nationally representative survey showed that adults who reported having a primary-care physician rather than a specialist as their regular source of care had lower subsequent five-year mortality rates, regardless of their initial health or various demographic characteristics.

Furthermore, areas with higher ratios of primary-care physicians to population had much lower total health-care costs than other areas, possibly because of the better preventive care and lower hospitalization rates that accompany good primary care. Care for illnesses common in the population—for example, community-acquired pneumonia—was more expensive if provided by specialists rather than generalists, with no difference in outcomes.

In short, primary-care physicians do at least as well as specialists in caring for specific common diseases, and they do better overall when the measures of quality are generic.


Even if America’s focus on specialty care were recognized as a problem, other problems in our health policy stand in the way of redressing the imbalance. First, our system is distinguished by an overwhelming market orientation that places demand for services before need for services. For example, health professionals are permitted to practice wherever they choose. The effect is that wealthy and upper-middle-class communities have too many doctors, while poorer communities have too few. The federal government also does little to regulate health services apart from administering the Medicare program, which covers the elderly (65 and over) and the seriously disabled. This means that policies for health services devolve to the state level, and most states limit their policy-making to decisions about the Medicaid program, which covers the poor. Although states may regulate private insurance, most do very little. Only Minnesota restricts the ability of profit-making insurance companies to market insurance in the state. Few if any states now require community rating, which prohibits insurance companies from charging higher premiums to sicker people.


The United States is now one of only two industrialized countries to permit direct advertising of medications (the other, New Zealand, is in the process of doing away with it). No wonder the American public is so enamored of medical technology and drugs, and so likely to regard the regulatory agencies as interfering with the release of innovative treatments. Direct-to-consumer advertising greatly increases the unnecessary use of medications. For example, more than half of patients who ask their doctor to prescribe the antidepressant Paxil for mild and temporary feelings of depression are given an antidepressant (two thirds of them the requested drug), compared with only ten percent of patients with the same symptoms who make no such request. Such drugs carry a risk of major side effects—a serious problem when the medication is not appropriate in the first place. And although most advertising now mentions side effects, it says nothing about how common they are. The power of these advertisements is all the more notable given the evidence that the vast majority of people mistrust pharmaceutical companies.

Second, our market-oriented system drives up the cost of insurance by inflating administrative costs—in particular, the costs of paperwork and marketing. Administrative costs now account for almost one of every three dollars spent in the health-services system—well over one thousand dollars per person per year, for a total of almost 300 billion dollars per year (or $7,000 per uninsured person).
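These figures hang together, as a quick back-of-the-envelope check shows (the population figure of roughly 295 million is an assumption for this sketch; the 45 million uninsured appears below):

\[
\frac{\$300\ \text{billion}}{295\ \text{million people}} \approx \$1{,}000\ \text{per person},
\qquad
\frac{\$300\ \text{billion}}{45\ \text{million uninsured}} \approx \$6{,}700 \approx \$7{,}000\ \text{per uninsured person}.
\]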

A final problem—one inseparable from the high costs of health care—is access to care. The United States is the only industrialized nation that lacks universal coverage for health services. At any point, 45 million people (about 16 percent of the population) are uninsured; twice as many are uninsured at some point during a one-year period. A conservative estimate is that 35 million people cannot afford to purchase insurance and are otherwise unable to obtain it, either from the government or from employers. Meanwhile, many studies have demonstrated that a lack of insurance (either private or governmental) is associated with much poorer health.

In addition, about 36 million people—one in eight Americans—do not have a regular health-care provider because there are no primary-care doctors in the communities in which they live. People in some states are far worse off than those in others; in two states (Louisiana and Mississippi), one of every three people has no access to a regular source of care; in ten other states, at least one in every five residents has no regular provider of care. Overall, the percentage of people with a regular source of care is falling.


These failures of organization, combined with excessive specialization, have created a health-care system in which even the insured are denied necessary services or are given unnecessary or inappropriate services.

Errors of omission occur when clinical treatments that should be provided, according to current evidence of their effectiveness, are not provided. Because of problems with organization and record-keeping, at least 45 percent of the procedures recommended for preventing and controlling disease are not provided, even to people enrolled in health plans. Errors of commission result from superfluous or misguided treatments. Medical tests are imperfect: sometimes a test suggests that a disease is present when it is not. These “false positives” are particularly likely to lead to unnecessary care in the absence of strong primary care: a primary-care doctor is in the best position to notice when a patient is unlikely to have the disease for which she tested positive, when symptoms that have only recently appeared are likely to disappear on their own, and when tests are unnecessary in the first place.

The desire of Americans for more and more health services seems to be based on the belief that specialized care means better care. But inappropriate care can be harmful. Specialists are more likely to order tests because they have been trained in settings where patients have a high likelihood of disease. The probability that a positive test result is accurate differs according to where the test is done: about one in 1,000 in the community, one in 50 in a primary-care practice, and one in three in a specialist practice. If people are referred too early to specialist care or seek it on their own, they are more likely to be harmed than helped by the cascade of tests and procedures set in motion by a single false-positive result. For example, prospective joggers who (either by choice or on a doctor’s recommendation) have a cardiac workup before embarking on a jogging program have higher rates of death than those who simply jog. False-positive tests lead to more tests and more treatments, and each of these carries a risk of harm, even death.
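The reasoning behind these numbers is Bayes’ rule: the chance that a positive result is a true positive depends on how common the disease is among the people being tested. As a rough sketch (the sensitivity, specificity, and prevalence figures here are illustrative assumptions, not the article’s data), the positive predictive value of a test is

\[
\mathrm{PPV} \;=\; \frac{\mathrm{sensitivity} \times \mathrm{prevalence}}{\mathrm{sensitivity} \times \mathrm{prevalence} \;+\; (1 - \mathrm{specificity}) \times (1 - \mathrm{prevalence})}.
\]

For a test with 95 percent sensitivity and 95 percent specificity, a prevalence of 0.1 percent (community screening) gives a PPV of about 2 percent; a prevalence of 5 percent (a primary-care practice) gives about 50 percent; and a prevalence of 30 percent (a specialist practice) gives about 89 percent. The same test, in other words, yields mostly false positives in low-prevalence settings, which is the mechanism behind the cascades described above.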

An estimated one third of interventions (surgical and medical) are unnecessary. Although the medical literature does not dwell on the damage caused by inappropriate care, several studies have shown that the third leading cause of death in the United States, after heart disease and cancer, is medical intervention, including both tests and therapies. Over the past few years, the annual number of reports of adverse effects from prescribed medications (including deaths) has been increasing. A conservative estimate of the percentage of deaths in the United States that result from adverse effects of medical treatment is ten percent. In other words, an estimated 250,000 of the 2.5 million deaths attributed each year to specific diseases are really a result of harm from interventions.

Harm from medical care is as likely to affect white Americans as members of minority racial or ethnic groups, and as likely to affect the rich as the poor.

Many risky interventions are much more available in the United States than elsewhere, far surpassing the health needs of the population. The United States has, for example, almost twice as many MRI machines as the average industrialized country. Coronary-artery bypass procedures are performed at four times the overall rate of other countries, and at six to seven times the rate of most Western European countries. Coronary-artery angioplasty is performed at a much higher rate than in any other country—six times the overall average. The same degree of excess is found for other procedures, such as renal dialysis and bone-marrow transplants.

Americans might take these statistics as signs of the superiority of their health-care system, but that would be a misinterpretation. The American system has not produced a healthier population, and the oversupply of medical technology has encouraged its inappropriate use. A strengthened primary-care system would help to ensure that technology is used only when it is needed.


How can our system be fixed? Attempts to make major changes to our health-care system are routinely stymied by the political process. The most practical approach is to organize services around strengthened primary care and to ensure access to it for everyone.

As a first step, the government needs to offer incentives to encourage primary-care practice. Without them, an increasing proportion of aspiring physicians will specialize in non-primary-care fields. In the 1960s, the federal government began to fund community health centers (primary-care centers in areas lacking primary-care physicians) and offered loan-repayment programs to primary-care physicians willing to work in them. In the 1970s, federal law funded the training of medical students, residents, and fellows with an interest in primary care. While the community-health-center program still exists and may even be expanded, federal support for primary-care training is periodically threatened, even as much more generous support for specialist training has been maintained. Recent funding for community health centers was sufficient to approve only one in three qualified applications for new centers. Incentives for specialist training have increasingly taken priority in American health policy.


Increasing financial support for training primary-care practitioners is an easy step in the right direction, since it amounts to continuing a successful 30-year-old program—though one that has never been large enough to supply even a majority of communities with the primary-care practitioners they need.

Improving the compensation for primary-care practice is another practical option. Primary-care practitioners generally earn much less than specialists; reimbursement rates for their services should be increased. Savings from insurance reform could be used to do this.

Support should also be increased for research in primary care; this would strengthen the incentive to enter primary-care practice and encourage medical schools to contribute to its development. While most research funding is directed at understanding specific diseases, four out of five visits to primary-care doctors are made by people with combinations of health problems, many of them not associated with any particular disease. Research into better management of common problems, such as arthritis, is very much needed. Another strategy would be to fund hospital and medical-school training programs in primary care that place medical students and residents in primary-care practices outside the hospital. Currently, funding for graduate medical education is limited to the care of hospitalized patients.

But if primary care is to improve significantly, the health-insurance system must also be reformed. The principal benefit of health insurance in the United States is facilitating access to primary care. Socially deprived populations that do not have health insurance are less likely to have a source of primary care and thus have less access to the entire health-care system. Every other industrialized country in the world has solved this problem. Not all have adopted government health insurance, but all have recognized that the health-insurance program must be either run or regulated by a publicly accountable body. Health insurance should be designed to include all non-discretionary health services for everyone, at the same price for everyone. Universal insurance is more efficient and effective when overseen by the government, at least partly because high standards for quality of care are more easily set and enforced. It is possible that a reasonable compromise, involving more accountability of private insurance, might be a transitional strategy.

In addition to these major changes—strengthening the quality of primary care and easing access to it—there are other incremental steps we can take to buttress the delivery of high-quality primary care. They won’t solve all the problems, but they are achievable and can be tackled one at a time.

  • Restricting advertising. Direct-to-consumer marketing of drugs and devices should be replaced by accurate information on both the risks and the benefits of medical interventions, delivered through sources that are readily available when people need them rather than through demand-generating public media such as TV and magazines. For example, direct-to-consumer advertising of prescription medications should be banned, and an ongoing public-information campaign should address the dangers of medications and other treatments.
  • Holding doctors accountable. Quality assessments now document only that a procedure was done. Focusing instead on people’s improvement after treatment would capture both the good and the ill effects of care and would enable physicians and patients to make informed decisions about the benefits and risks of specific courses of action in managing health problems. Tools are available to do this, and there is no practical justification for not using them. Doctors and practice organizations should monitor improvement by sending questionnaires to patients after they are treated.
  • Reforming standards for drugs and devices. At present, the majority of consultants and advisers to agencies such as the Food and Drug Administration have ties to the industries whose products are being considered for approval. These conflicts of interest should be recognized, and such consultants should be removed from influence over agency decisions: their role should be limited to expert testimony, and advisory committees should be composed of individuals without vested interests in the outcome of deliberations.
  • Unifying advocacy. All consumer groups face problems of inadequate insurance, inadequate information about the dangers and benefits of treatments, and insufficient access to needed services. Unifying the action of the many different consumer and advocacy groups would go a long way toward enabling people to choose their medical services according to evidence of their dangers and benefits, and would provide a much better vehicle for the distribution of effective services.


National polls show that most Americans are more worried about health services than about losing their jobs, paying their rent or mortgage, losing money in the stock market, or becoming the victim of a terrorist attack. Health services rank fourth in people’s priorities, after war, the economy, and Social Security. Half of the population is very worried about the costs of care, and one third is very worried about the quality of care—many more than are worried about care for specific medical problems.

We can change health-care policy; we have done it before. The Social Security program was responsible for a great increase in the longevity of retirees. Programs enacted in the mid-1960s led to great gains in health. Countries such as the UK, Canada, and Australia, having debated an expanded government role in health services, have embraced it with broad political support. Even in this country, there is popular support for the government-administered programs Medicare and Medicaid. If the government’s abdication of responsibility for the health-care system has not caused a public outcry, it can only be because the public is not fully aware of the extent of the system’s failure. We allow that failure to continue at our peril as a nation.