Promoting and Using Comparative Research
What Are the Promises and Pitfalls of a New Federal Effort?


No. 2, February 2009

Federal efforts to underwrite and promote research comparing different drugs and medical devices continue apace, even in the stimulus plan passed by the House of Representatives.[1] These policies are aimed at furnishing government programs like Medicare with better data on the clinical and cost considerations that inform the agencies' coverage decisions. At face value, it makes perfect sense that the government should want more information on the "comparative effectiveness" of the medical products it purchases. But like many other seductively simple ideas, enthusiasm for comparative effectiveness research (CER) outpaces its practical promise and obscures the downside of having government take on these sorts of studies and the clinical considerations that go into them. Additionally, the government is no ordinary payer: its size and authority mean it often sets the terms for the entire market, and its decisions on coverage of new medical products reverberate throughout the health care system. That kind of influence demands that government agencies take great care in how they approach decisions to pay for medical products and afford access to them. Thus far, the discussion around creating a federally directed CER effort is not being handled with appropriate prudence or precision.

As we embark on what is likely to become a multibillion-dollar federal effort to sponsor CER, we need an honest discussion of what we can and cannot learn from this sort of science. We also need to consider carefully the problems that arise when these comparative studies are done cheaply (as seems likely under the new federal effort), the difficulty of doing these studies well, and the adverse effects of poorly constructed studies on timely access to important new treatments. Although there is widespread agreement that medical practice benefits from good CER, the proper design of and focus for these studies are getting short shrift in political discourse.

That discussion should start--and perhaps end--with a close examination of the federal government's previous efforts to underwrite CER, with all of its promise and shortcomings, and the lessons we should have learned about doing this science the right way. We should also consider alternative models of promoting more of this research, especially models that make better use of private scientific efforts already underway.

In the 1990s, the federal government launched arguably the two largest medical studies ever undertaken to compare different approaches to treating two common medical problems. The first of these comparative effectiveness studies, the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT), compared newer and older drugs for treating high blood pressure.[2] The second study, the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE), compared different medicines for treating psychosis.[3] In each study, the authors concluded--among other things--that newer and more expensive medicines were no better than the older and cheaper alternatives.[4] In each case, however, these sweeping conclusions were revised--and in selected cases deemed wrong--as the CATIE and ALLHAT data underwent closer scrutiny and additional studies were released examining similar questions.[5]

Learning Lessons from ALLHAT and CATIE

For proponents of CER who believe that the results of an isolated, single study can become a basis for changing clinical practice, ALLHAT and CATIE should temper enthusiasm.

For one thing, ALLHAT and CATIE probably did not ask the questions that were most relevant to practicing doctors. ALLHAT, for example, showed how hard it is to accommodate all of the clinical diversity that informs medical choices. But it covered only a narrow slice of the ways doctors approach the initial treatment of hypertension--and only some of the treatment choices. Even though the trial was well done, there were real controversies about the study's design and the subgroups of patients that were examined. In the case of CATIE, researchers went to great lengths to incorporate practical medical issues into the study's design and to measure the need to tailor treatments to particular subgroups of patients. But CATIE probably did not follow patients long enough to unmask the safety differences between newer and older drugs that drive clinical choices to use one drug over another.[7]

Both of these studies reaffirm an enduring fact about medicine: a single study, even when it is executed with rigor and care, is rarely sufficient to change medical practice. It is the rare exception that a single study is so definitive as to have that sort of impact. It usually takes multiple studies, each one confirming earlier results while building incremental knowledge, to shape medical practice. This is how the medical profession has traditionally resolved important comparative questions, and with good reason. In the case of ALLHAT, for example, subsequent studies reached different conclusions. A single study like ALLHAT could not possibly take the measure of all the clinical circumstances steering doctors toward a particular option. Clinicians using different treatments often had reasons for doing so that were well grounded in considerations of their patients and the totality of the evidence. Often a single study is too narrow to capture all of the reasons a doctor may settle on a particular treatment. Sometimes an isolated study is simply wrong. That is why the Food and Drug Administration (FDA) requires multiple studies--each confirming earlier findings--as the basis for its approval decisions.

ALLHAT and CATIE were also challenged because of the narrow scope of the questions they asked. This is a practical limitation that accompanies many comparative studies--it means that results are rarely broadly applicable given the variability in biology, disease, and patient preferences. Results of any single study seldom generate the kinds of binary answers that can become the basis for discrete policy decisions. But policymakers who promote CER as a means of cost containment cling to the false assumption that we are--in many settings of clinical practice--just a single study away from a discrete answer that can settle protracted medical questions. That is unrealistic, and there is a vast history of medical research to debunk that simplistic construction.

Moreover, even when a comparative study generates compelling findings that become the basis for an immediate change in medical practice, other advances in both evidence and treatment options are constantly emerging. The results of studies are always being overtaken by new science. The problem is compounded by the fact that the political and regulatory process moves much more slowly than science does. By the time we fashion a binary policy decision to cover a particular product on the basis of government-directed study results, the underlying research may already be out of date. The treatment options, how doctors use them, and even their costs change over time, often significantly.

ALLHAT and CATIE illustrate another lesson of CER. For the results of a single study to influence medical practitioners, the underlying study needs to be done rigorously, and its results need to be convincing. Doctors are typically slow to change their practice, especially on the basis of preliminary research. And definitive comparative studies are particularly hard to execute. They often require very large samples and long periods of time to discover small differences between two "active" treatments--that is, two treatments that both work, but one of which may work better than the other.
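To get a feel for the arithmetic behind that claim, consider a minimal back-of-the-envelope sketch, written here in Python. The response rates are purely illustrative assumptions, not figures from ALLHAT or CATIE; the point is only to show how quickly the required sample size grows when two active treatments differ by a small margin.

# A rough sample-size sketch for a head-to-head trial of two "active" treatments.
# Illustrative numbers only -- these are not figures from ALLHAT or CATIE.
from math import ceil

def patients_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate patients needed in each arm to detect a true response-rate
    difference of p1 vs. p2 (two-sided alpha = 0.05, 80 percent power,
    normal approximation for comparing two proportions)."""
    variability = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variability / (p1 - p2) ** 2)

# Two drugs that both work: 70 percent vs. 73 percent response rates.
print(patients_per_arm(0.70, 0.73))  # roughly 3,500 patients per arm, about 7,000 overall
# Contrast with a placebo-controlled trial of a clearly effective drug.
print(patients_per_arm(0.50, 0.70))  # roughly 90 patients per arm

The exact numbers are beside the point; the order of magnitude is what matters. Small differences between two effective treatments translate into trials many times larger and longer than the placebo-controlled studies used for initial approval.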

ALLHAT and CATIE were probably the largest studies that will ever be done to compare different drugs for hypertension and psychosis.

Finally, the full measure of a new medical product's benefit is rarely known at the time of its introduction. This was certainly true for the new classes of antipsychotics that were evaluated in CATIE, as well as for the newer generations of pills for treating high blood pressure evaluated in ALLHAT. Yet proposals for "rating" the comparative effectiveness of new--and presumably more expensive--products against older and cheaper alternatives as a way to control or restrict market access wrongly assume that these comparisons can be done accurately at all, especially early in the life cycle of a new product. If we were to shift to a system that demands incontrovertible proof of superior efficacy through comparative studies before covering a drug, device, or procedure, the impact on access and subsequent innovation would be large. That impact has not been measured; it is simply being assumed away. Many products, from cholesterol-lowering medicines to drugs for the treatment of HIV,[10] would not have cleared such a hurdle at the precise point at which they were introduced.

This is especially true for cancer drugs, many of whose most important benefits are discovered after FDA approval--often as drugs approved for end-stage cancers are found to have significant benefits when moved earlier in the course of care, into the frontline treatment of less advanced tumors. That sort of learning cannot happen at the time of approval, since the FDA--as well as the large cooperative cancer trial networks--in many cases simply does not allow sponsors to test new cancer drugs first in earlier-stage cancers. Sponsors typically have to develop new drugs for late-stage cancers first; only subsequent research from real-world use of these products reveals their full benefits. Yet, under the political schemes currently being considered--including the House provisions contained in the fiscal stimulus plan--CER is envisioned to become a gating factor for coverage and therefore for market entry. It sets up a Catch-22: the government will not allow companies to test new cancer drugs first in the early-stage cancers against which the drugs can potentially show more benefit, but agencies are increasingly reluctant to pay for drugs for later-stage cancers, for which the magnitude of the benefit is less pronounced owing to the advanced nature of the underlying disease. Far from creating incentives for sponsors to undertake additional studies of new products, the effect will be exactly the opposite: with CER as the gating factor to market entry, subsequent clinical science will wither because the practical learning that comes from routine use will never take place.

Improving on Current Proposals

These real-world challenges should inform Congress and the Obama administration as they embark on the creation of a new federal body dedicated to CER. Unfortunately, many ideas for promoting CER give short shrift to the practical lessons from ALLHAT and CATIE. Heeding those lessons means starting with studies that are designed and implemented with rigor and then considering them within a political framework that does not make rote coverage choices based on the results. It means a process that respects clinical diversity, preserves the prerogative of doctors to use emerging science to tailor care to individual patients, and recognizes the importance of maintaining incentives for subsequent research that can refine coverage choices and improve clinical decisions. In the end, rigorous comparative studies can provide public health benefits--but probably not of the magnitude or through the process currently envisioned.[11]

Under current plans, the Department of Health and Human Services has wide latitude in how the CER effort is shaped.[12] But it starts with a plan to spend several hundred million dollars or more to establish a federal program for conducting the studies. The key decisions about how this takes shape and whether it promotes or impedes public health will be made by the current administration.[13] Whatever decisions are made, it is hard to envision how the efforts of a new CER entity will lead to meaningful cost savings without some explicit requirement that Medicare use the research findings to make cost-based decisions about coverage. The Congressional Budget Office (CBO) has said as much.[9] And this seems to be an inevitable risk of any CER enterprise. The original House stimulus bill--which included $1.1 billion for studies comparing different drugs and devices to "save money and lives"--betrayed these intentions, saying that "more expensive" medical products "will no longer be prescribed." The language was ultimately revised in the Senate, but the House's wording should stand as an honest statement of congressional intent on this broader issue. That the blunter language fell victim to political pragmatism does not change the underlying calculus at work here.[14]

Access to important new medical treatments could be denied to patients on the basis of preliminary, isolated, or poorly designed studies.

The current proposals do not allow for generation of the kind of rigorous science that will have independent stature and influence within the profession. The biggest shortcoming is that many of the clinical trials anticipated by this new entity simply will not be designed with the capability of providing reliable and rigorous answers to difficult comparative medical questions. This raises the real risk that sweeping decisions will be predicated on unreliable studies. Instead of large, prospective clinical trials, the new CER entity would rely mostly on less rigorous, shorter, and less expensive research studies. These include "systematic reviews," in which scientists consolidate the existing information about a question and then reach a firm conclusion by looking systematically at the data.[15] They also include registries (in which large groups of patients are followed for a period of time but are not randomized to receive the different interventions being compared) and retrospective analyses of epidemiological data sets.

This creates some peculiar situations. First, many of these research methods do not create new knowledge per se. They are designed to generate a binary answer from the existing science, and they assume that a plethora of reliable comparative data is already available. Politicians deny that very premise; indeed, they posit the lack of comparative data as the impetus for creating a new CER entity in the first place. Second, because these research methods do not approach the rigor of prospective, randomized trials, none of these kinds of studies would come close to providing the FDA with the scientific basis to change a medical product's label. This creates an odd paradox: most, if not all, of the efficacy data generated by a new CER program will be ignored by the FDA and surely would not lead to labeling changes.[16] That means medical product makers would be legally barred from talking about the CER study results, even though the CER would be sponsored and conducted by the federal government. Only the federal government would be able to discuss and publicize the results of its own studies.

What can be done to promote CER that improves public health, that is implemented and used with care, and that incorporates lessons from ALLHAT and CATIE? There have been plenty of proposals on how to edit the legislative language defining the current efforts. But even apart from these legislative efforts, how can the medical profession itself pursue this kind of research and make sure the results of these studies are used with prudence when it comes to decisions on access?

Proponents of a government effort to promote CER say there is not enough CER because drug companies have little incentive to take on these studies.[17] There is always a risk that a sponsor's study will reveal that its newer products are no better than older and cheaper alternatives. But the far more prominent influence is a regulatory failure that eliminates much of the incentive for companies to undertake these comparative studies in the first place. Under current drug regulations, when manufacturers conduct CER, they are rarely allowed to talk about the results--even to payers. This is a consequence of the FDA's very stringent requirements for allowing comparative claims on drug labels. Because the competing technologies for many drugs are so similar in their general effect, doing a comparative study that will pass muster with the FDA and lead to new label claims is simply impossible. The agency requires two well-powered, randomized, prospective trials that show a clinically important effect. Since the studies are often comparing two "active" treatments that each provide a clinical benefit in their own right, trying to discern small differences and then measure whether those differences lead to different patient outcomes can require very long and large trials.[18]

Why is this important? Since companies cannot get comparative claims into drug labels, they cannot talk about the results of the studies, even with the experts charged with purchasing medical products for Medicare or other big health plans. The FDA has been stepping up its policing of discussions between drug firms and payers, even issuing a warning letter to one company that was presenting data to the state of Maryland.[19] (One irony is that many of the same government officials who argue for CER as a way to drive more efficient purchasing decisions also support regulatory prohibitions on companies sharing the results of their studies--and support the FDA's involvement in discussions between payers and product developers.)

Taking Steps to Leverage Private Research Efforts

There are opportunities to leverage CER being done by private groups, as well as to create more incentives for companies to undertake this kind of scientific work. But that is going to require some new policy steps. First, we need to reform the regulatory barriers that prevent companies from sponsoring more CER. The FDA should be directed to develop a guidance document that creates a "safe harbor" for representations that drug and device companies make to expert purchasing authorities based on the results of company-sponsored CER. The agency could do this as a matter of its enforcement discretion.[20] It could also articulate in guidance the kinds of studies that would qualify for the safe harbor--establishing parameters to make sure the research is sufficiently rigorous. There is no compelling public health reason why the FDA should insinuate itself into discussions that manufacturers have with the expert groups making purchasing decisions for large networks. On the contrary, public health prerogatives should demand policies that encourage the exchange of as much information as possible in these settings. These expert groups are savvy purchasers, and they are well equipped to provide a full evaluation of the information being delivered to them. The standard guiding drug and device manufacturers in these discussions should be the Federal Trade Commission's: that information be truthful and nonmisleading.

Second, the biggest risk from the current proposals for CER is that access will be constrained on the basis of studies that are not definitive--that is, studies that fall short of the sort of science that has traditionally instigated changes in medical practice. For these reasons, it is important that Medicare have a process for careful consideration of the results. Medicare coverage decisions must not be fashioned directly from CER without consideration of the limitations of the research. To these ends, Medicare should be instructed to bring its coverage decisions before standing committees of clinical experts, similar to the FDA advisory committee process. This could apply to any decision in which Medicare is narrowing coverage for a medical product or service.

The advisory committees should be focused on clinical areas that reflect the type of product and condition in question. As in the FDA advisory committee process, Medicare need not be bound by the committees' recommendations, but it should take their views into consideration. Expert advisory committees to inform Medicare decisions are especially important because Medicare does not have a large clinical staff; it has few physicians focused on the therapeutic areas for which it renders the majority of its coverage decisions. At any time, Medicare has about twenty doctors and forty total clinicians (including nurses) in its coverage office and fewer than a dozen in the office that sets payment rates.[21] Medicare does not have a single oncologist on staff, yet since 2000 the program has issued 165 restrictions and directives on the use of cancer drugs and diagnostic tools.[22] Expert advisory committees, focused on relevant clinical areas, could inform Medicare's decisions with practical clinical perspectives and help ensure that the agency's decisions conform to sound clinical judgment.

Third, there are opportunities to employ evaluations already being done by independent, expert clinical groups that are charged with examining comparative medical questions and reaching consensus recommendations. Here, I am referring to the guideline-writing committees maintained by the leading medical professional societies (the American College of Cardiology, the American Psychiatric Association, the American Society of Clinical Oncology, and the like). Medicare could be directed to rely on expert guidelines promulgated by these medical professional societies when the agency is weighing competing medical products. These guidelines routinely address comparative medical questions, are based on systematic reviews of the most up-to-date science, and are written by leading clinicians. To address concerns that these expert bodies are "influenced" by drug and device companies, the groups could be asked to adhere to voluntary guidelines that address transparency and conflicts of interest.

CER has generated so much political interest as a means of curbing health spending that its complexities--and limitations--are receiving too little attention. ALLHAT and CATIE demonstrated many of these challenges. CER, as currently envisioned, will not easily discriminate between worthwhile and less helpful advances. It would also lead to slower adoption of effective technologies, hinder the discovery of new benefits from existing products, and halt investment in novel research. Limits on access to new medical products that are based on assumptions about costs across a population will also deny access to individuals who could nonetheless benefit from those products.

Comparisons between efforts to promote CER in the United States and the British system--where a central authority, the National Institute for Health and Clinical Excellence (NICE), rates new medical products on the basis of their cost-effectiveness before patients can gain access to them--are apt. Some proponents of CER in the United States advocate a NICE-like path. In that respect, efforts to create a central authority here that sponsors and weighs studies on the comparative effectiveness of new medical products are part and parcel of a broader trend in which Medicare takes on a more prominent role in considering and rating the "value" of new treatments before the agency provides coverage for them. The problem: Medicare is not good at evaluating the clinical promise of new products, and CER is unlikely to make it better.

Where is this heading? There is increasing concern in Britain that many effective drugs are inaccessible because of cost-effectiveness determinations. This is especially true for cancer drugs, for which mounting evidence shows that these restrictions are harming public health.[23] Short of crude (and politically unpopular) measures that would tie CER results in the United States directly to Medicare's coverage choices, proponents of CER stand to be disappointed by the cost savings their current proposals achieve. That is not to say the research itself will not provide value--incremental clinical information can always better inform the mosaic on which clinicians base their decisions. But the comparative research will not be definitive, no matter how much its supporters on Capitol Hill might wish it so. As for the clinical impact, an explicit tie between the results of research and coverage decisions would put us squarely on a path that more closely resembles the process used in Britain--with all its shortcomings on access, innovation, and health outcomes. These are the downsides of creating a centralized decision-maker to evaluate the value of new medical products.

Scott Gottlieb, M.D., is a resident fellow at AEI.


Notes

1. American Recovery and Reinvestment Act of 2009, HR 1, 111th Cong., 1st sess., passed by the House of Representatives on January 28, 2009.
2. ALLHAT Officers and Coordinators for the ALLHAT Collaborative Research Group, "Major Outcomes in High-Risk Hypertensive Patients Randomized to Angiotensin-Converting Enzyme Inhibitor or Calcium Channel Blocker vs. Diuretic, the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT)," Journal of the American Medical Association 288 (2002): 2981-97; and ALLHAT Officers and Coordinators for the ALLHAT Collaborative Research Group, "Major Outcomes in Moderately Hypercholesterolemic, Hypertensive Patients Randomized to Pravastatin vs. Usual Care, the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT-LLT)," Journal of the American Medical Association 288 (2002): 2998-3007.
3. T. Scott Stroup et al., "The National Institute of Mental Health Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) Project: Schizophrenia Trial Design and Protocol Development," Schizophrenia Bulletin 29, no. 1 (2003): 15-31; and Jeffrey A. Lieberman et al., "Effectiveness of Antipsychotic Drugs in Patients with Chronic Schizophrenia," New England Journal of Medicine 353 (2005): 1209-1223.
4. Lawrence J. Appel, "The Verdict from ALLHAT: Thiazide Diuretics Are the Preferred Initial Therapy for Hypertension," Journal of the American Medical Association 288 (2002): 3039-42; Richard C. Pasternak, "The ALLHAT Lipid Lowering Trial: Less Is Less," Journal of the American Medical Association 288 (2002): 3042-44; Joseph P. McEvoy et al. for the CATIE Investigators, "Effectiveness of Clozapine versus Olanzapine, Quetiapine, and Risperidone in Patients with Chronic Schizophrenia Who Did Not Respond to Prior Atypical Antipsychotic Treatment," American Journal of Psychiatry 163 (2006): 600-610; and T. Scott Stroup et al. for the CATIE Investigators, "Effectiveness of Olanzapine, Quetiapine, Risperidone and Ziprasidone in Patients with Chronic Schizophrenia Following Discontinuation of a Previous Atypical Antipsychotic," American Journal of Psychiatry 163 (2006): 611-22.
5. Michael A. Weber, "The ALLHAT Report: A Case of Information and Misinformation," Journal of Clinical Hypertension 5, no. 1 (2003): 9-13; and Frank H. Messerli and Michael A. Weber, "ALLHAT: All Hit or All Miss? Key Questions Still Remain," American Journal of Cardiology 92, no. 3 (August 2003): 280-81.
6. Andrew Pollack, "The Minimal Impact of a Big Hypertension Study," New York Times, November 27, 2008.
7. T. Scott Stroup et al., "The National Institute of Mental Health Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) Project: Schizophrenia Trial Design and Protocol Development"; and Scott Gottlieb, "The War on (Expensive) Drugs," Wall Street Journal, August 30, 2007, available at www.aei.org/publication26718.
8. The exceptions were the inclusion of safety data generated from CATIE added to the labels of antipsychotic drugs more than a year after CATIE's completion and class labeling on hypertension drugs that grew out of the results of ALLHAT that said all blood pressure medicines help prevent heart disease by lowering blood pressure.
9. Peter R. Orszag and Philip Ellis, "Addressing Rising Health Care Costs: A View from the Congressional Budget Office," New England Journal of Medicine 357 (2007): 1885-87.
10. Tomas J. Philipson and Anupam B. Jena, "Who Benefits from New Medical Technologies? Estimates of Consumer and Producer Surpluses from HIV/AIDS Drugs," Forum for Health Economics and Policy 9, no. 2 (2006), available at www.bepress.com/fhep/biomedical_research/3/ (accessed January 29, 2009).
11. Scott Gottlieb, "Congress Wants to Restrict Drug Access," Wall Street Journal, January 20, 2009, available at www.aei.org/publication29219.
12. Congressional Budget Office, Budget Options, vol. 1, Health Care (Washington, DC: Congressional Budget Office, December 2008), available at www.cbo.gov/doc.cfm?index=9925 (accessed January 29, 2009).
13. Gail R. Wilensky, "Developing a Center for Comparative Effectiveness Information," Health Affairs 25 (November 7, 2006): w572-85.
14. The Senate changed the draft report language, with Jennifer Mullin, a spokesperson for Senator Tom Harkin (D-Iowa), telling Congressional Quarterly that it "did not accurately reflect the intent of the appropriation language." See "Obey's Healthcare Language in Stimulus Targeted in Senate," CongressDaily, January 23, 2009.
15. Lisa A. Bero et al., "Closing the Gap between Research and Practice: An Overview of Systematic Reviews of Interventions to Promote the Implementation of Research Findings," British Medical Journal 317 (1998): 465-68; Steven H. Woolf, "The Need for Perspective in Evidence-Based Medicine," Journal of the American Medical Association 282 (1999): 2358-65; Cynthia D. Mulrow, Deborah J. Cook, and Frank Davidoff, "Systematic Reviews: Critical Links in the Great Chain of Evidence," Annals of Internal Medicine 126, no. 5 (1997): 389-91; Amit X. Garg, Dan Hackam, and Marcello Tonelli, "Systematic Review and Meta-Analysis: When One Study Is Just Not Enough," Clinical Journal of the American Society of Nephrology 3, no. 1 (2008): 253-60; and Mark A. Crowther and Deborah J. Cook, "Trials and Tribulations of Systematic Reviews and Meta-Analyses," Hematology (2007): 493-97.
16. It is also unlikely that practice guidelines gleaned from these studies will similarly affect clinical practice if the underlying science is from less rigorous approaches to evidence. The Agency for Healthcare Research and Quality (AHRQ) regularly conducts its own systematic reviews of competing medical products and routinely publishes its findings. The work is high-quality, and it is generally based on existing clinical trial data--precisely the model many envision for the new CER entity. Yet, there is no meaningful empirical evidence to suggest that AHRQ's efforts have directly affected clinical practice. If AHRQ provides perhaps the best model for the work product that would be generated by a new CER effort, what is to suggest that the new effort will have any greater independent impact?
17. Gail R. Wilensky, "Developing a Center for Comparative Effectiveness Information."
18. The kinds of research proposed by President Obama and included in the House bill--registries, systematic reviews, and the like--certainly cannot leap this hurdle.
19. Food and Drug Administration (FDA), warning letter to Cephalon, Inc., re. "NDA #10-717, Provigil (modafinil) Tablets [C-IV], MACMIS #14707," February 27, 2007, available at www.pharmcast.com/WarningLetters/Yr2007/Feb2007/Cephalon0207.htm (accessed January 29, 2009).
20. A similar approach to using enforcement discretion to create a safe harbor for distribution of certain information is used in at least one recent guidance document. See FDA, "Guidance for Industry: Good Reprint Practices for the Distribution of Medical Journal Articles and Medical or Scientific Reference Publications on Unapproved New Uses of Approved Drugs and Approved or Cleared Medical Devices," January 2009, available at www.fda.gov/oc/op/goodreprint.html (accessed January 29, 2009).
21. By comparison, Aetna has more than 140 physicians and 3,300 nurses, pharmacists, and other clinicians across its health plans. WellPoint has 4,000 clinicians across its different businesses, including 125 doctors and 3,180 nurses. That works out to one clinician for every 9,000 people covered. UnitedHealthcare employs about 600 doctors and 12,000 clinicians across all of its health plans and various health care businesses.
22. Scott Gottlieb, "What's at Stake in the Medicare Showdown," Wall Street Journal, June 24, 2008, available at www.aei.org/publication28178.
23. See, for example, Mary Babaloba, "Charities Lobby Nice over Cancer Drug," Guardian (London), October 29, 2008; "Woman's Cancer Drug Appeal Fails," BBC News, April 7, 2008; "Cancer Woman in Drugs Fight Wins," BBC News, February 2, 2008; Nick Triggle, "How the NHS Places a Value on Life," BBC News, August 21, 2006; Mike Richards, Improving Access to Medicines for NHS Patients (London: Secretary of State for Health, November 2008), available at www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_089927 (accessed January 29, 2009); and "The Drugs the NHS Won't Give You," Daily Telegraph (London), May 11, 2007.
