It takes, on average, around 12 years and at least $1 billion to develop a therapy and get it approved. Getting a therapy to stand out from the competition, prove efficacy, reach its endpoints, progress through the clinical stages and get approved is an extremely challenging reality for developers great and small. But where do we stand in terms of success rates?
The average failure rate of therapeutic candidates is 90%. The picture is even bleaker in oncology, where only 1 in 10,000 preclinical candidates sees the light at the end of the tunnel. Over the years, this failure rate has not improved significantly, which raises the question – why?
There is certainly no one right answer, and distilling such a multifactorial issue into a handful of causal factors risks being overly reductive. Here we discuss a few ideas, which we leave as open questions: should developers and the wider industry be considering these factors more? What are our collective blind spots? In this blog, we investigate the limited success of clinical development and what we should be doing differently to address it.
Better biomarkers and wider nets
“Biomarker” became a household term in clinical drug development only relatively recently, within the past 20-25 years. Predictive biomarkers can help developers understand a patient’s response to a therapy, determine drug efficacy, address safety and, in some cases, decide the success or failure of a candidate. Many biomarkers are straightforward and easily derived: if a drug acts upon a certain biological pathway, the associated or downstream biomarkers may be predictive of a treatment response. But what about adjacent or even previously undefined biological links?
The widespread use of multi-omics technology today allows us to cast wider nets and discover associative biomarkers that paint a bigger, more detailed picture of disease progression or treatment response; proteomics, next-generation sequencing, RNAseq and spatial transcriptomics are a few examples. Critically though, throwing a mountain of data at a bioinformatics package won’t always deliver the silver bullet you may be looking for. As we look to apply multi-omics technology more broadly in biomarker discovery, one simple consideration is key to ensuring that big data can actually be useful: how patients are characterised and stratified. Good phenotypic data accompanying large -omic data sets allows us to contextualise associations and apply them more appropriately. A random association with an otherwise irrelevant protein will remain irrelevant unless it can be biologically linked back to the patient; a minimal sketch of such a phenotype-anchored screen follows below. Therefore, in this section we ask:
Do we need to apply multi-omics more broadly to get better insights?
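To make the phenotype-anchored approach concrete, here is a minimal sketch in Python (pandas, SciPy and statsmodels) of a wide-net association screen tied back to a clinical phenotype. The file names, the responder column and the thresholds are illustrative assumptions, not a prescribed pipeline.

```python
# Minimal sketch: screen omics features for association with a clinical
# phenotype, with multiple-testing correction. File and column names
# are hypothetical placeholders.
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

# Hypothetical inputs, one row per patient:
#   proteomics.csv -> patient_id plus one column per protein abundance
#   phenotypes.csv -> patient_id plus a binary responder label
omics = pd.read_csv("proteomics.csv", index_col="patient_id")
pheno = pd.read_csv("phenotypes.csv", index_col="patient_id")
data = omics.join(pheno, how="inner")  # keep only patients with both

responders = data[data["responder"] == 1]
non_responders = data[data["responder"] == 0]

# Test every omics feature for a responder vs non-responder difference.
results = []
for feature in omics.columns:
    _, p = mannwhitneyu(responders[feature], non_responders[feature])
    results.append({"feature": feature, "p_value": p})
results = pd.DataFrame(results)

# Correct for the thousands of tests a wide net implies (Benjamini-Hochberg).
results["q_value"] = multipletests(results["p_value"], method="fdr_bh")[1]
hits = results[results["q_value"] < 0.05].sort_values("q_value")
print(hits.head(20))
```

The correction step is the point: with tens of thousands of features, uncorrected p-values would flood the list with exactly the kind of random, biologically irrelevant associations described above, and even the surviving hits are only candidates until they can be linked back to patient biology.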
Of mice and men
The rhetoric around the inherent differences between a 1-ounce mouse and a 180-pound human has been well rehearsed before and is not something we will re-tread here. To address the use of animals, the FDA Modernization Act 2.0 was signed into law to usher in the age of sophisticated preclinical models, including spheroids, organoids and organ/body-on-a-chip systems. These technologies hold significant promise, and tailoring them to more accurately reflect patient biology should grant developers key insights before a candidate begins that costly clinical journey. Somewhat adjacent to this is how we interpret preclinical data.
In the previous section, we spoke briefly about the biological relevance of certain biomarkers. Applying new approaches to preclinical data interpretation may help further contextualise which pathways and proteins tell the more accurate story of a drug’s effect and which warrant further study. Machine learning and AI algorithms have shown some utility here, and a small sketch at the end of this section illustrates the idea. Training them on rich, well-defined data sets is crucial to attaining biologically relevant insights from validated, scalable preclinical models.
This latter aspect is the main roadblock on the path forward for 3D cultured preclinical models. Their complexity doesn’t always lend itself to easy scaling, validation and harmonisation. Much of the effort in the coming years should therefore focus not only on finding the best model to mimic patient biology, but also on one that recapitulates it consistently. For this section we now ask:
Will preclinical development continue to require better mouse/small animal models or should we be working more on non-animal models like organoids?
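As a small illustration of the machine-learning point raised above, the sketch below fits a simple cross-validated classifier relating hypothetical organoid assay readouts to an observed drug response. The data set, column names and model choice are assumptions for illustration; the emphasis is on the guard-rails (a well-defined data set and cross-validation), not the specific algorithm.

```python
# Minimal sketch: relate preclinical model readouts to drug response with a
# simple, cross-validated model. Inputs are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# organoid_readouts.csv -> one row per organoid line: assay features plus an
# observed binary drug response. A rich, well-defined data set matters more
# than the choice of algorithm.
data = pd.read_csv("organoid_readouts.csv")
X = data.drop(columns=["organoid_id", "drug_response"])
y = data["drug_response"]

model = RandomForestClassifier(n_estimators=200, random_state=0)

# 5-fold cross-validation: a basic check that the signal generalises across
# organoid lines rather than memorising one batch.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean ROC AUC: {scores.mean():.2f} (+/- {scores.std():.2f})")

# Feature importances hint at which readouts carry the signal; they still
# need biological validation before being treated as mechanistic insight.
model.fit(X, y)
ranked = sorted(zip(X.columns, model.feature_importances_),
                key=lambda pair: -pair[1])
for name, importance in ranked[:10]:
    print(name, round(importance, 3))
```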
Falling with style
The story told to investors almost always has a positive spin, and some companies expend great effort in positioning clinical candidates as successful and worthy of investment, for obvious reasons. Failure is not often regarded as positive in any industry. Perhaps in the face of so much failure, the pharma industry could be focusing effort on “effective”, or better yet, “timely” failure. Killing a candidate early saves enormous funds and effort, and prioritising preclinical, translational or early clinical phase studies to quickly weed out candidates that aren’t destined for approval is a crucial exercise.
In this instance, applying the principles already discussed, such as biologically relevant biomarkers, precise preclinical models and sound data interpretation, can help guide decision making early and derisk the development process as soon as possible; a back-of-envelope illustration follows below. In this section we ask:
Do you place importance on failing an ineffective candidate early?
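To put “timely” failure in rough numbers, the sketch below compares the expected spend on an ineffective candidate when a decisive read-out kills it early versus when its failure only surfaces late. All costs and pass rates here are purely hypothetical placeholders, not industry benchmarks.

```python
# Back-of-envelope sketch: expected spend on a candidate destined to fail,
# under purely illustrative (hypothetical) phase costs and pass rates.
phases = [
    # (name, cost in $M, chance an ineffective candidate "passes" anyway)
    ("Preclinical", 5, 0.70),
    ("Phase 1", 25, 0.60),
    ("Phase 2", 60, 0.30),
    ("Phase 3", 250, 0.10),
]

def expected_spend(kill_after: str) -> float:
    """Expected cumulative cost if the candidate is stopped when it fails
    a phase, or after `kill_after` at the latest."""
    total, p_alive = 0.0, 1.0
    for name, cost, p_pass in phases:
        total += p_alive * cost   # a phase is only paid for if reached
        p_alive *= p_pass         # chance the flawed drug slips through
        if name == kill_after:
            break
    return total

early = expected_spend("Phase 1")
late = expected_spend("Phase 3")
print(f"Expected spend with a decisive early read-out: ${early:.0f}M")
print(f"Expected spend when failure surfaces late:     ${late:.0f}M")
print(f"Illustrative saving per doomed candidate:      ${late - early:.0f}M")
```

Even with these made-up figures, the asymmetry is clear: most of the expected spend on a doomed candidate sits in the later phases, which is exactly why early, decisive read-outs derisk development.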
Including the right patients
Defining the therapeutic area and representative patient groups is a critical process in clinical development. Developers often rely on extensive exclusion criteria to give a candidate the best chance of showing a response, which raises an obvious question: how representative are the patients being enrolled in early phase clinical studies? Diverse participation in clinical trials has been a priority for regulatory bodies in the last few years, and the days of exhaustive exclusion criteria and narrow patient pools may be coming to an end. Diverse populations not only provide better biological relevance but also give developers much clearer insights into how effective a candidate will be in the “real world”.
Whilst not without its challenges, diverse patient enrolment could be another way of addressing drug failure, and when embraced early it may allow ineffective candidates to be de-prioritised efficiently. As a result, we ask:
How easy is it to include diverse participants in your clinical programs?
Maximising sample use
Many would agree that the most precious resources in a clinical trial are the samples collected; handled correctly, they are the source of almost all data in a clinical study. Samples are specifically allocated to various endpoints, and a variety of matrices are collected where appropriate. Seeing samples as valuable scientific resources is crucial for development and research: after a study concludes, or when a candidate needs to be killed or pivoted, the best resource for guiding decision making is often the set of samples collected in the previous trial. Using the tools we have already discussed (such as proteomics and sequencing), new targets and insights into treatment response and disease progression can be gained by re-analysing some of those samples. Minimally invasive collection devices, like dried blood spots, may further improve sampling from patients and maximise sample use; a short sketch of the kind of inventory check involved follows below. In this final section, we ask:
Do your clinical protocols and consent provide for retrospective analysis?
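As a practical note, retrospective use usually starts with an inventory check: which banked samples carry consent for future research use, and which have enough material left to re-assay? The sketch below assumes a hypothetical sample manifest with illustrative column names; real LIMS exports and consent taxonomies will differ.

```python
# Minimal sketch: filter a banked-sample inventory for retrospective
# re-analysis. The manifest layout and column names are hypothetical.
import pandas as pd

manifest = pd.read_csv("sample_manifest.csv")

eligible = manifest[
    manifest["consent_future_research"]            # consent covers re-use
    & (manifest["volume_remaining_ul"] >= 50)      # enough left to assay
    & (manifest["storage_temp_c"] <= -70)          # integrity preserved
]

# Summarise what a retrospective proteomics or sequencing run could draw
# on, broken down by study arm and sample matrix.
print(eligible.groupby(["study_arm", "matrix"]).size())
```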
Concluding remarks
By no means exhaustive, this blog highlights a few areas worth considering in the face of continued candidate failure. AI development, new analytical technologies and advances in preclinical modelling bring hope of improving drug approval rates. There is no one-size-fits-all solution, and it is worth acknowledging that any potential solution is highly dependent on many factors, including the disease, the specifics and novelty of the treatment, and company strategy. At Synexa, we are excited to continue to support our clients in this ever-evolving and complex space. Centering on sound science has always been our approach and will continue to be how we work together to profile better treatments for the betterment of patient health.