Feature

A Tool to Make FDA Drug Approval Practices Transparent

Researchers aim to help the agency, drug companies and patients better understand the complex authorization process

The popular perception of the U.S. Food and Drug Administration’s drug approval process — an image largely fostered by agency critics and companies that want faster, more favorable reviews — is that it is rigid. Only one clinical trial to test a new therapy? Deny. Findings statistically under target? Deny. Hit every cited milestone? Approve.

The real process makes a lot more sense and is a whole lot messier. The FDA often makes exceptions to its own approval guidelines, sometimes deeply frustrating patients and drug sponsors that do not understand why one drug passed while another failed. The agency takes pains to defend its decisions, but it’s not easy when the rules differ on what seems to be a case-by-case basis.

Yet this discretion — this broad, potentially unfair power to ease or tighten approval standards for each drug candidate — most certainly extends lives and reduces suffering. With flexibility, the agency can adjust the acceptable risk of failure (ineffectiveness) in a new drug according to the needs of the patients it might help. As a result, seriously ill people who have exhausted all other treatment options can try drugs that newly diagnosed patients might consider too risky. Patients suffering from conditions with few existing treatments can get drugs that may help, even if the drugs are likely to fail at a greater rate than the FDA usually deems acceptable. But explaining this complexity is, well, complex.


Three UCLA Anderson researchers have devised a model that could guide the FDA in its decision making in a scientific way, without changing the discretionary criteria used to make those decisions. Described in a working paper by UCLA Anderson’s Fernanda Bravo, Taylor Corcoran and Elisa Long, the model applies a quantitative framework to issues the FDA considers qualitatively, such as the severity of the disease to be treated, the availability of alternative treatments and the likelihood that a better option might come up for review soon. It illuminates precisely how each of 14 disease-specific factors affects approval.

The model sets different efficacy targets for drug candidates based on the conditions they would treat. Currently, FDA guidelines call for trials of any drug candidate to demonstrate efficacy at a significance level of 2.5 percent or below. The significance level is a statistical threshold for tolerating what is known as a type I error: approving a drug that is, in reality, ineffective. In their paper, the researchers determine the optimal significance level for each condition, accounting for the subjective factors the FDA considers. Drugs intended to treat life-threatening conditions for which there are few existing treatments, for example, would be given a much more lenient (higher) significance level than a candidate drug for a common headache.
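
In conventional hypothesis-testing notation (the notation below is ours; the working paper may frame things differently), the approval rule and the quantity the researchers tune look like this:

```latex
% H_0: the drug is ineffective;  H_1: the drug is effective.
% p is the p-value from the pivotal trial(s); \alpha is the significance level.
\text{approve the drug} \iff p \le \alpha,
\qquad \text{where } \alpha = \Pr(\text{approve} \mid H_0 \text{ true}).
% Current FDA guidance fixes \alpha = 0.025 for every drug; the researchers
% instead solve for an optimal indication-specific \alpha.
```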

To illustrate, the researchers ran the model for drugs that treat HIV, hypertension and breast cancer. The model recommends a significance level of 6.3 percent for drugs to treat HIV, a life-threatening condition for which there are relatively few new drug applications in the pipeline. Drugs to treat breast cancer, another life-threatening disease, but one with more potential treatment options in development, received a significance level of 4.6 percent. Hypertension drugs drew a stricter significance level of 2.3 percent: Clinical trials for hypertension drugs require less time than most, enabling relatively quick advancement of replacement therapies if needed.
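
A minimal sketch of how indication-specific thresholds would change the approval decision, using the three levels reported above (the function and the example p-values are hypothetical illustrations, not taken from the researchers’ model):

```python
# Illustrative sketch: an indication-specific approval rule built from the
# significance levels reported in the article. The function and the example
# p-values are hypothetical, not taken from the researchers' model.

OPTIMAL_ALPHA = {
    "hiv": 0.063,            # life-threatening, few new applications in the pipeline
    "breast_cancer": 0.046,  # life-threatening, but more candidates in development
    "hypertension": 0.023,   # short trials allow quick replacement of failed drugs
}

STANDARD_ALPHA = 0.025  # the FDA's uniform guideline threshold


def approve(indication: str, p_value: float) -> bool:
    """Approve if the trial's p-value clears the indication-specific threshold."""
    return p_value <= OPTIMAL_ALPHA.get(indication, STANDARD_ALPHA)


# A trial with p = 0.05 fails the uniform 2.5 percent rule everywhere, but
# would pass under the tailored threshold for HIV:
print(approve("hiv", 0.05))           # True  (0.05 <= 0.063)
print(approve("hypertension", 0.05))  # False (0.05 >  0.023)
```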

The model includes variables that cover a drug’s life from development through FDA acceptance or rejection to its eventual decline in the market. With customized significance targets, the study proposes, the FDA can transparently maximize both the health and economic benefits of new drugs. The system might also help the agency better explain how it allocates scarce drug approval resources.

Unacceptable Risk, or Somebody’s Cure?

Since 1962, the FDA has been tasked with ensuring the efficacy of approved drugs and medical devices as well as their safety. Like safety, efficacy carries risk. Drugs that fail to treat conditions as intended prolong suffering, and sometimes leave their patients ineligible for treatments that might have worked earlier. Although few drugs are 100 percent effective, the agency aims to approve only drugs with acceptable risks of failure.

But “acceptable risk” is a subjective concept. A newly diagnosed cancer patient may want the FDA to protect her from wasting precious time on a treatment with a low success rate. A patient with Alzheimer’s disease, a condition with few treatment options, may be more willing to take a drug with limited proof of success. For many seriously ill people, or those with as yet untreatable conditions, the bigger risk is that stringent FDA policies reject drugs that might help them.

The FDA handles some of these differences in risk tolerance through special programs for targeted therapies. Designations such as Fast Track, for serious and undertreated diseases, or Breakthrough Therapy, for candidates that may substantially improve patient health, aim to shorten the typical 10-to-15-year timeline from drug development to FDA approval. They offer shortcuts in the approval process, such as intensive guidance on designing clinical trials that can pass muster, or acceptance of surrogate endpoints, such as blood biomarkers, as evidence of efficacy.

Most drug candidates do not meet the strict criteria for expedited programs. For these applicants, the FDA doles out uniform guidance for acceptable clinical trials, regardless of the conditions they will treat. But a study published in JAMA found that the FDA often approves drugs whose testing did not live up to its own guidance.

For example, the FDA recommends that drugs be tested against control groups that take either a placebo or a similar drug already on the market. Yet more than 12 percent of new drugs the FDA approved between 2005 and 2012 were not compared to either type of control group, according to the JAMA study. FDA guidelines also recommend results from at least two trials that were randomized (participants were randomly assigned to the treatment or control group) and double-blind (neither participants nor investigators knew who was getting the drug). The study’s authors found that nearly 37 percent of new drugs were approved on the basis of a single trial, 10 percent of trials were not randomized and 20 percent were not double-blind.

Abandoning the Standard Target

The FDA’s universal threshold of 2.5 percent statistical significance is neatly measurable and impartial. It is also a completely arbitrary goal. “Significance” is a purely statistical measure of how often the effects seen in testing could arise by chance when the drug itself has no effect.

A 2.5 percent significance level does not mean that the drug is ineffective 2.5 percent of the time. Rather, 2.5 percent represents the chance of observing — purely by chance — the results seen in testing when the drug is, in reality, ineffective.
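
A quick simulation makes this concrete: test many truly ineffective “drugs” and count how often chance alone clears the 2.5 percent bar. The trial design below (a one-sided two-sample t-test with 100 patients per arm) is an invented illustration, not the paper’s setup:

```python
# Sketch: simulate trials of a drug with zero true effect and count how
# often chance alone produces a "significant" result at the 2.5% level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
ALPHA = 0.025
N_TRIALS, N_PER_ARM = 20_000, 100

false_approvals = 0
for _ in range(N_TRIALS):
    treated = rng.normal(0.0, 1.0, N_PER_ARM)  # the drug does nothing...
    control = rng.normal(0.0, 1.0, N_PER_ARM)  # ...both arms are identical
    result = stats.ttest_ind(treated, control, alternative="greater")
    if result.pvalue <= ALPHA:  # looks effective purely by luck
        false_approvals += 1

# Prints roughly 0.025: the type I error rate the threshold tolerates.
print(false_approvals / N_TRIALS)
```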

Every field of scientific study sets a conventional significance level for testing its hypotheses. And in every field, the “fixed” nature of this value is controversial. It’s not uncommon for scholars to bemoan the arbitrariness of their field’s significance level, or to propose ostensibly fairer formulas in which the significance level adapts to circumstances.

The UCLA Anderson team found that by allowing the significance level required for drug trial results to vary by indication, they could formally incorporate the FDA’s disease-specific considerations into the approval process. A model that weights the FDA’s seemingly incalculable concerns and makes efficacy requirements more or less lenient accordingly would create a quantifiable, transparent process for practices already in place.

Each component of the model represents a factor in approval that relates to the specifics of the disease to be treated. Any one factor that makes the number higher than the 2.5 percent standard calls for more leniency; a factor that makes the number lower calls for a stricter standard. Taken together, these ups and downs lead to a significance level recommendation tailored to the underlying condition the drug would treat.
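
The article does not spell out the paper’s actual 14-factor model, but the up-and-down logic can be sketched with a toy adjustment function. Everything below, factor names and multipliers included, is hypothetical:

```python
# Toy sketch of the "ups and downs" logic described above. The factor names,
# multipliers and functional form are hypothetical illustrations; the
# researchers' actual 14-factor model is not reproduced here.

STANDARD_ALPHA = 0.025

# Multipliers above 1 push toward leniency; below 1, toward strictness.
HYPOTHETICAL_FACTORS = {
    "high_disease_severity": 1.8,    # severe disease -> more lenient
    "few_existing_treatments": 1.5,  # little else available -> more lenient
    "crowded_pipeline": 0.7,         # many candidates coming -> stricter
    "fast_failure_detection": 1.3,   # ineffectiveness spotted quickly -> more lenient
}


def tailored_alpha(active_factors: list[str]) -> float:
    """Scale the standard threshold by each applicable factor's multiplier."""
    alpha = STANDARD_ALPHA
    for factor in active_factors:
        alpha *= HYPOTHETICAL_FACTORS[factor]
    return alpha


# A severe disease with few treatments gets a higher (more lenient) threshold:
print(tailored_alpha(["high_disease_severity", "few_existing_treatments"]))  # 0.0675
# A crowded pipeline pushes the threshold below the 2.5 percent standard:
print(tailored_alpha(["crowded_pipeline"]))  # 0.0175
```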

The severity of the disease, in terms of both quality and length of life, is a key model component. In general, the researchers found that drugs aiming to treat diseases with high severity or high prevalence in the population should face more lenient approval standards. (For rare diseases, the FDA incentivizes drug development through a separate program under the Orphan Drug Act.)

The status of drugs already on the market also plays a role in the model’s recommendations. Do patients have many options for treatment, or only a few? Will the drug candidate substantially reduce suffering or extend life, or will it provide the same benefits as something already approved?

A short window between an approved drug failing and its being identified as ineffective and removed from the market leads to more leniency in the approval process, because the risk is short-lived. The effectiveness of hypertension drugs, for example, may be measured with a drugstore blood pressure cuff, making it easy for patients to identify drugs that do not control their blood pressure. The repercussions of an ineffective cancer drug, meanwhile, may be more threatening: Effectiveness may not be adequately assessed until after weeks or months of continued use, and a failed treatment often means that the patient is much sicker, and therefore harder to save, when the failure is eventually discovered. When the researchers model breast cancer drugs, this factor contributes to a more lenient — and compassionate — approval threshold, as many patients do not have precious months to wait for a new therapy.

Some factors are more nuanced. A high number of drug applications in the pipeline for a single condition generally leads to a recommendation for stricter proof of efficacy; the agency can afford to be pickier with a bigger pool. Conversely, the model relaxes the efficacy requirement for drugs that require lengthy clinical trials.

Likewise, a high rate of attrition during the R&D phase (many drug trials for the condition never reach an approval application) leads to more leniency in the ultimate significance level recommendation for drugs that advance to the approval application stage. A high rate of clinical trial abandonment means the sponsoring pharmaceutical company might shift resources away from experimental drugs to candidates with higher chances of success.

In addition to accounting for health effects, the UCLA Anderson researchers sought to incorporate the economic effects of approving effective and ineffective drugs. Drug company share prices usually rise when a new drug gains FDA approval and fall when a drug is rejected or withdrawn from the market. The researchers used these changes in drug companies’ market capitalization to approximate the net monetary costs and benefits of approving a drug, rejecting it or having it withdrawn from the market, as in the FDA’s 2011 decision to withdraw approval of Genentech’s Avastin for the treatment of breast cancer.

About Those Expedited Approval Programs…

With a model that can adjust the rules to any specific condition, would the FDA still need special programs to expedite approval for much-needed treatments? Using published studies of actual Fast Track drugs, the researchers compared the health and economic effects of a standard versus expedited approval process.

The researchers found that expedited programs such as Fast Track can have large economic and health benefits that cannot be achieved solely by changing the significance level for drug approval. They suggest running similar comparisons between their model and other special approval programs in future research.

The researchers sought other alternatives for expediting drug approvals but found little scholarly analysis of the process. However, they note a study published in 2017 in JAMA Oncology that proposed a fluctuating significance level allowing more or less leniency for each of 23 types of cancer. That study finds the 2.5 percent significance level too stringent for rare cancers with few treatment options and short survival times, and too lenient for common cancers with more drugs available. It recommends a major relaxation of efficacy requirements for breast cancer drug candidates, setting the significance level at 17.6 percent.

The UCLA Anderson model recommends a 4.6 percent level for breast cancer, a much more modest relaxation of the standard. Bravo, Corcoran and Long suggest that the large difference comes from their inclusion of the systemwide effects of the adjusted approval process over time. Their model, for example, attempts to account for the time it would take an effective drug to replace an ineffective one that went to market under looser requirements.

A lack of transparency in the FDA’s approval process is often cited as a key source of slowdown in drug development, and the agency makes ongoing efforts toward more openness. Hard-and-fast rules for acceptable drug trials would fix the transparency problem but would likely result in even fewer new drug approvals. Modeling the current process offers a way for the agency to defend the kinds of approvals that many people hope it will continue to make.

Featured Faculty

  • Fernanda Bravo

    Assistant Professor of Decisions, Operations and Technology Management

  • Elisa Long

Associate Professor of Decisions, Operations and Technology Management

About the Research

Bravo, F., Corcoran, T.C., & Long, E.F. (2018). Flexible FDA approval policies. Working paper.

Montazerhodjat, V., Chaudhuri, S.E., Sargent, D.J., & Lo, A.W. (2017). Use of Bayesian decision analysis to minimize harm in patient-centered randomized clinical trials in oncology. JAMA Oncology, 3(9). doi: 10.1001/jamaoncol.2017.0123

Downing, N.S., Aminawung, J.A., Shah, N.D., Krumholz, H.M., & Ross, J.S. (2014). Clinical trial evidence supporting FDA approval of novel therapeutic agents, 2005–2012. JAMA, 311(4), 368–377. doi: 10.1001/jama.2013.282034

Perez, M.-E., & Pericchi, L.R. (2014). Changing statistical significance with the amount of information: The adaptive α significance level. Statistics and Probability Letters, 85, 20–24. doi: 10.1016/j.spl.2013.10.018
