Why, despite steady scientific advancements and enormous technological progress in lab instruments and equipment, has the productivity of drug discovery in the pharmaceutical industry declined?
That is the question explored by Jack W. Scannell and Jim Bosley in an article published February 10, 2016, in PLoS ONE titled “When Quality Beats Quantity.” The authors, pharmaceutical industry consultants who also hold academic positions at Oxford University and the University of Edinburgh, respectively, note the spectacular progress in the tools of biopharmaceutical discovery and research over the past several decades, such as combinatorial chemistry, DNA sequencing, X-ray crystallography, and high-throughput screening. In all of these areas, efficiency has increased and costs have come down. Yet R&D costs per approved drug roughly doubled every nine years between 1950 and 2010, and drug candidates are more likely to fail in clinical development today than they were in the 1970s. Indeed, the authors note, it is the high cost of these clinical failures that has driven the decline in R&D productivity. What accounts for this contrast?
Decisions Based on Flawed Models
The authors hypothesize that the cause is a decline in the predictive validity of the screening and disease models used in research and development. In other words, when companies decide, on the basis of a variety of preclinical studies, to advance a compound into clinical development, so many of those compounds subsequently fail because the screens and models used were not sufficiently predictive of clinical safety and efficacy. Using hypothetical statistics, the authors present a highly technical model based on decision theory. They simplify the R&D process into just four stages: the discovery model, the preclinical model, clinical trials, and the FDA’s review. At each stage, the results of the models or trials are reviewed, and a decision is made on whether to advance the compound. Obviously, if the models used at each stage were perfectly predictive of clinical safety and efficacy, any compound meeting early-stage thresholds for safety and efficacy would be approved.
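The flavor of such a staged go/no-go cascade can be illustrated with a toy simulation. This is our own sketch, not the authors' actual decision-theory model: the stage names follow the article, but the noise levels and thresholds are invented for illustration. Each stage observes a compound's true clinical quality through a noisy model and advances the compound only if the reading clears a threshold; setting a stage's noise to zero mimics a perfectly predictive model.

```python
import random

def run_pipeline(stages, n_compounds=20_000, seed=1):
    """Simulate a staged go/no-go pipeline; return survivor counts per stage.

    Each compound has a true clinical quality q ~ N(0, 1). At each stage a
    model observes q with Gaussian noise (noise = 0.0 is a perfectly
    predictive model), and the compound advances only if the observation
    clears that stage's threshold.
    """
    rng = random.Random(seed)
    compounds = [rng.gauss(0, 1) for _ in range(n_compounds)]  # true quality
    counts = []
    for name, noise, cut in stages:
        compounds = [q for q in compounds if rng.gauss(q, noise) > cut]
        counts.append((name, len(compounds)))
    return counts

# Hypothetical noise levels and thresholds, for illustration only.
noisy   = [("discovery model", 1.0, 1.0), ("preclinical model", 0.7, 1.0),
           ("clinical trials", 0.3, 1.0), ("FDA review", 0.2, 1.0)]
perfect = [(name, 0.0, cut) for name, _, cut in noisy]

for label, stages in [("imperfect models", noisy), ("perfect models", perfect)]:
    print(label, run_pipeline(stages))
```

With perfectly predictive models, every compound that clears the first stage also clears all later stages, so there are no late-stage failures; with noisy models, many compounds fail only after reaching the expensive clinical stages, which is exactly where the money is lost.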
We all know, however, that the models used are far from perfectly predictive. For example, we may have reason to think that the target we have chosen to inhibit is a valid one, and that a candidate compound reaching a sufficient binding affinity for that target in a high-throughput drug screen should be advanced for further testing, yet most such candidates will subsequently be abandoned even before they reach clinical testing. Likewise, everyone is aware of the limitations in the predictive validity of animal models, though some animal models of disease are more predictive of clinical safety and efficacy than others.
What the authors demonstrate with their decision-theory model is that even very small declines in the predictive validity of a model used to make go/no-go decisions can overwhelm the efficiency gains achieved from being able to screen an unprecedented number of candidate compounds and synthesize analogs of the hits at much lower cost than before.
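That trade-off can be sketched with a toy Monte Carlo experiment; again, this is our own illustration rather than the paper's model, and the correlations, screen sizes, and success threshold are all hypothetical. Treat predictive validity as the correlation ρ between a screen's score and a compound's true clinical quality, advance the ten top-scoring compounds, and ask how often an advanced compound turns out to be a true success.

```python
import numpy as np

def ppv(rho, n_compounds, k=10, success_cut=2.0, n_trials=200, seed=0):
    """Fraction of advanced compounds that are true successes.

    Each trial: draw n_compounds candidates with true clinical quality
    y ~ N(0, 1), observe a screening score x = rho*y + sqrt(1-rho^2)*noise
    (so corr(x, y) = rho), advance the top-k scorers, and count how many
    advanced candidates are true successes (y > success_cut).
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trials):
        y = rng.standard_normal(n_compounds)            # true clinical quality
        noise = rng.standard_normal(n_compounds)
        x = rho * y + np.sqrt(1 - rho**2) * noise       # screen's noisy score
        top = np.argpartition(x, -k)[-k:]               # advance top-k by score
        hits += np.count_nonzero(y[top] > success_cut)
    return hits / (k * n_trials)

# Hypothetical scenarios: 10x more screening throughput vs. a modest
# loss of predictive validity.
base     = ppv(rho=0.60, n_compounds=10_000)    # small screen, better model
brute    = ppv(rho=0.60, n_compounds=100_000)   # 10x throughput, same model
degraded = ppv(rho=0.45, n_compounds=100_000)   # 10x throughput, worse model
print(f"baseline: {base:.2f}  10x brute force: {brute:.2f}  "
      f"10x with lower validity: {degraded:.2f}")
```

In this toy setup, a tenfold increase in throughput does help, but a modest drop in predictive validity (ρ falling from 0.6 to 0.45) more than cancels the gain, echoing the authors' conclusion that quality of prediction beats quantity of screening.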