A search was undertaken to identify articles that compared rapid reviews with systematic reviews; further articles were identified following feedback on a list promoted via the evidence-based health mailing list and various forms of social media. The list of identified articles can be found here.
Without a clearly better way to summarise the documents, I've gone with a number of lessons drawn from the literature, combined with some personal observations. Your feedback and suggestions for improvement would be appreciated.
Lesson 1: The notion of a rapid review is ill-defined. However, imposing a single methodology isn't necessarily appropriate; what matters is transparency about the process.
Observation 1: The methodology behind systematic reviews varies a great deal as well. And what constitutes 'rapid'? In the literature it was typically less than five weeks, while a lot of my work is undertaken in less than five hours. So I'm very supportive of the emphasis on transparency.
Lesson 2: The tension between speed and accuracy is a common theme.
Observation 2: While it may appear obvious, it's important that it's made explicit.
Lesson 3: Rapid reviews tend to address a focused question, while systematic reviews typically cover broader topics. Rapid reviews also tend to focus on efficacy or effectiveness and are rarely used to examine safety, economics or ethics.
Observation 3: I'm not sure how accurate this statement is. However, I do know that the broader the question, the less likely it is to be answerable quickly.
Lesson 4: Meta-analyses are often not undertaken in rapid reviews, so no effect sizes are given – typically just the direction of an intervention's effect. The results are therefore less generalisable and less certain.
Observation 4: A rapid review might be able to say whether one treatment is likely to be better than another; it's less able to say how much better it is. This may or may not be important.
Lesson 5: Trial quality assessment is important; poor-quality studies are likely to overestimate the benefits of a therapy or the value of a test.
Observation 5: Again, this is linked to the time factor. If you only have two days to return a response, what should you do? For our ultra-rapid reviews it seems sensible to be transparent and make explicit the shortcuts taken and their possible effects. In our ultra-rapid reviews we aim to use secondary studies, but we will use abstracts of primary research as well. One paper suggested that a moderately robust summary of the evidence is better than no evidence.
Lesson 6: The conclusions of a rapid review and a systematic review do not typically differ. The extra effort involved in carrying out a systematic review may not greatly affect the final conclusions.
Observation 6: Unsurprising, but this needs to be taken in the context of the points raised above. Also, when the two do disagree, an understanding of why is needed.
Lesson 7: Rapid reviews, when compared with systematic reviews, occasionally differ in their conclusions. In the papers that made direct comparisons, the rates of difference were 4/39, 1/14 and 1/6.
The study that reported four differences in conclusion out of 39 reviews compared NICE and BUPA judgements on funding. The differences may well have reflected semantic differences (i.e. BUPA used a different classification system from NICE), differences in the year the review was undertaken (BUPA typically published their reviews earlier than NICE) and genuine judgement differences: for example, BUPA said percutaneous vertebroplasty for osteoporosis should be used in a 'trial only' setting, while NICE rated the 'evidence adequate' (but added caveats).
The same paper reported another study showing a 1/14 rate of difference, but I was unable to ascertain the reason for the difference due to poor referencing.
In the 1/6 case, the rapid review reported that the intervention was experimental, while the large cost-effectiveness study indicated that it was safe and efficacious. No reason was supplied for the discrepancy.
Observation 7: Clearly more research is needed to understand these differences, and I'd be very keen to see how ultra-rapid reviews (less than one day) compare with rapid and systematic reviews.
Conclusion: This is a fascinating topic that needs more research before robust conclusions can be drawn. I looked into it because of my work on ultra-rapid reviews and a desire to know how they might stack up against more robust methods. There appears to be no evidence on the matter. I have two forms of comfort:
- In my time my various teams and I have published answers to over 10,000 questions, and many of our answers have been viewed over five thousand times. In all that time I am aware of only one serious problem with an answer.
- I have always said that what we do is not a systematic review, but we invariably do a better job than most rushed clinicians searching the evidence for an answer. If our service is 'wrong', that suggests providing evidence resources to clinicians (knowing they'll do a worse job) is also wrong.
Transparency is the key message for me: being clear in communicating the methods used and the likely effects of the methodological shortcuts.