Clinical trials for accelerating pandemic vaccines

…is a factually correct, if rather uninspired, title of my new paper out in the Oxford Review of Economic Policy. This is a short cross-post of a Twitter thread I wrote here, made a bit more readable. The premise is simple: Covid-19 vaccine trials were a big success. But how close to the optimal speed were they? And what about human challenge trials?

The paper is a small contribution to the new issue of the Oxford Review of Economic Policy on the economics of pandemic vaccines, with loads of really amazing authors including Susan Athey, Gita Gopinath, Alex Tabarrok, Chris Snyder, and dozens more. I focussed on vaccine trials, their timelines, and what was achieved during the pandemic. So it's not about R&D, manufacturing, or regulatory approvals, though these things are obviously contingent on each other. I also ignore therapeutics. Other special-issue authors cover those topics, so check it out.

So, the paper. In spring 2020 experts thought that a 12-18 month vaccine timeline was optimistic. So we did very well relative to expectations. On the other hand, we know that a lot of public health decision-making was bad (so bad) and non-vaccine trials were often badly designed messes. So were the vaccine trials really that great?

The pandemic definitely accelerated some positive trends, like the acceptability of adaptive platform trials. Roughly, what they are: a single trial into which you can incorporate many sub-trials, dynamically adjust who gets recruited, and then stop the trial once evidence of effect (or no effect) is strong enough. These trials were a big deal for Covid therapeutics. For vaccines a platform trial was a bridge too far (there was a very delayed effort by the WHO that did not make any difference), but the trials did make use of adaptive elements. For example, the Pfizer trial was built with interim analyses starting at 32 cases. That's good! (we'll get grumpy in a sec)
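To make the interim-analysis idea concrete, here is a toy sketch of a Bayesian stopping rule of the kind such designs use. This is not Pfizer's actual protocol: the prior, the efficacy floor, and the case splits below are my assumptions for illustration. The key trick is that with 1:1 randomization, vaccine efficacy (VE) maps to the share of cases falling in the vaccine arm, so each interim look just updates a Beta posterior on that share.

```python
import random

def prob_ve_above(v_cases, total_cases, ve_floor=0.30,
                  prior_a=0.7, prior_b=1.0, draws=200_000):
    """Posterior probability that vaccine efficacy exceeds ve_floor.

    With 1:1 randomization, the chance that a given case occurred in
    the vaccine arm is theta = (1 - VE) / (2 - VE). A Beta prior on
    theta is conjugate to the binomial case split, so the posterior is
    Beta(prior_a + vaccine cases, prior_b + placebo cases). All the
    specific numbers here are illustrative assumptions.
    """
    # VE > ve_floor is equivalent to theta below this threshold.
    theta_floor = (1 - ve_floor) / (2 - ve_floor)
    a = prior_a + v_cases
    b = prior_b + (total_cases - v_cases)
    # Monte Carlo estimate of P(theta < theta_floor) under the posterior.
    hits = sum(random.betavariate(a, b) < theta_floor for _ in range(draws))
    return hits / draws

random.seed(0)
# Lopsided split (few cases among the vaccinated): strong evidence of efficacy.
print(prob_ve_above(4, 32))   # posterior probability VE > 30% is near 1
# Even split: no evidence; an interim look here could stop for futility.
print(prob_ve_above(16, 32))  # posterior probability VE > 30% is low
```

The value of the interim looks is exactly this: if the split at 32 cases is lopsided enough, the trial can declare success months before the full case count accrues.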

Also in the unexciting-but-useful department: managing and analysing clinical trial data is a Kafka-meets-Library-of-Babel kind of experience. And here we are talking about recruiting and monitoring 50,000 people in several countries, in the middle of a pandemic. A myriad of small innovations by research organizations made this small miracle possible.

Back to vaccines. A very rough timeline for Pfizer/Moderna/AZ was 2 months for design, 2 months to start the phase II trial, 2 months to start phase III, 4 months to generate results, and 3-4 weeks to get approval. So what seems to matter most for speed are those 8 months in the middle. Some of the delays were in manufacturing the vaccine doses given to trial participants. Moderna took 42 days to do this for phase I: understandable. The good news is that over 2020 manufacturers got much faster at this.

An alternative way to accelerate efficacy testing was human challenge trials (HCTs). Organisations like 1Day Sooner and many others advocated for doing this. There are many technological barriers, but in theory an HCT could generate efficacy data in a matter of weeks and require only 100-200 volunteers. Regardless of whether they were a feasible target, the debate over HCTs was not really grounded in a risk-benefit analysis. I summarise the debate in the paper.

But here’s the kicker: in the end, the speed with which efficacy data were generated did not determine the speed of approval. That’s because in October 2020 the FDA told vaccine makers they must collect two months of safety data for half of trial participants (why half? why not a quarter? or all of them? and why only in October?).

But generating this type of safety data does not necessarily have to happen at the last stage. For example, a proposal by Marc Lipsitch, Nir Eyal, and Peter Smith paired an HCT for efficacy with a separate safety trial. This type of idea generalises beyond HCTs. A separate safety trial is expensive, but in 2020 the social value of a dose of vaccine was between 100x and 1000x its price. The issue, however, is not HCT vs “traditional” RCT, or any other binary choice like that. Pandemics cost trillions, and the RoI on any vaccine trial is likely to be very high. So it’s probably cost-efficient to just pay for them all. The information they provide may be complementary and/or a form of insurance. What if a world-saving trial fails due to, hmm, accidentally giving people the wrong dose?
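The cost-benefit logic here fits in a few lines of arithmetic. Every number below is a made-up placeholder (the paper does not give these figures); the point is only that with a social value of 100x-1000x the dose price, even an expensive extra trial breaks even after accelerating a tiny number of doses.

```python
# Back-of-envelope sketch of the RoI argument; all figures are
# hypothetical assumptions for illustration, not from the paper.
dose_price = 20         # assumed price per dose, USD
social_multiple = 100   # low end of the 100x-1000x range
trial_cost = 500e6      # hypothetical cost of a separate large safety trial

value_per_dose = dose_price * social_multiple  # social value of one dose
# Doses of earlier vaccination needed for the trial to pay for itself:
breakeven_doses = trial_cost / value_per_dose
print(f"{breakeven_doses:,.0f} doses")  # 250,000 doses
```

A quarter of a million dose-equivalents of acceleration, against billions of doses administered worldwide, is why "just pay for all the trials" is plausibly cost-efficient.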

OK, this thread is getting too long, but one more thing: the trials were pretty far from real-world decision-making. Pretty quickly we learned that the gap between two doses could be modified and that lower doses could work as well as larger ones. But the trials did not generate this data. We discuss why manufacturers are poorly incentivised to do any of this in another paper (Testing fractional doses of COVID-19 vaccines).

The obvious idea would be to recruit another 10,000 subjects into the phase III trial and give them smaller doses or test longer gaps between the two doses. If you wait to conduct a separate trial, you’re too late (though in our PNAS paper we argued it was still worth doing in early 2022, but that’s peanuts compared to the value of this information in autumn 2020).

By the way, my paper started out as a mini literature review. I was trying to find papers that estimate the relative benefits (e.g. lives saved) of different trial designs and speedier timelines. There is a paper by Berry et al., but it’s pretty dated. I wrote this ~12 months ago, so maybe there are new papers on this. Overall it’s surprising that more people are not calculating this post hoc. I am working with a few colleagues on an estimate like that. A new paper by Alex Tabarrok has some calculations specifically for US care homes and finds 14,000 deaths averted from a 5-week acceleration in care homes alone.

OK, that’s a wrap, for a paper summary this is already too much info.

This post is licensed under CC BY 4.0 by the author.