Long-term readers of this blog will know I struggle with many aspects of the systematic review process. At the time of writing, my ‘A critique of the Cochrane Collaboration’ has been viewed over 18,300 times and ‘Ultra-rapid reviews, first test results’ nearly 10,000 times.

I believe the main justification given for conducting systematic reviews is that they provide a highly accurate assessment of the effectiveness (or ‘worth’) of an intervention. The thinking goes that spending 12-24 months is worth the cost (financial, opportunity, etc.) because of the accuracy of the estimate it then gives.

My immediate response is that this is demonstrably false. In my article ‘Some additional thoughts on systematic reviews’ (just under 5,000 views) the evidence is clear: if you rely on published journal articles to ‘inform’ your systematic review (as the vast majority of systematic reviews do), there is roughly a 50% chance that the effect size is out by more than 10%.

But even if we suspend being evidence-based and believe that systematic reviews can be relied upon to give us an accurate estimate of an effect size, is everything fine? I don’t think so, and the image below illustrates my thinking.

It’s an hourglass! At the top are all the unsynthesised trials, floating around, and the uncertainty is moderate. Someone then spends 12-24 months pulling these together in a systematic review (likely of published trials and therefore ‘a bit dodgy’), and the uncertainty narrows at the aperture of the hourglass. But then, when you apply the result to the real world of patient care, the uncertainty flares out again. In the above example the intervention has a number needed to treat (NNT) of 6, so it needs to be given to 6 people to obtain the desired outcome in 1 person. Which is the 1 person? Where’s the certainty?
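For readers unfamiliar with the arithmetic: the NNT is simply the reciprocal of the absolute risk reduction (ARR). The event rates below are hypothetical, chosen only to reproduce the NNT of 6 in the figure:

NNT = 1 / ARR

So, with a (made-up) control event rate of 30% and a treatment event rate of about 13.3%, ARR = 0.300 − 0.133 ≈ 0.167, giving NNT = 1 / 0.167 ≈ 6.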

If we were to spend significantly less time doing a review, the hourglass aperture would be wider: we might only be able to say the NNT lies somewhere between 5 and 7, rather than pinning it at 6 (see the sketch below). In what situations does that matter? I don’t think we’ve even started to explore these issues. In other words, when is it appropriate to spend 12-24 months on a systematic review, and when is a significantly less resource-intensive approach ‘ok’?
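To make that 5-7 range concrete (again with made-up numbers): a quicker, less precise review might only pin the ARR down to an interval rather than a point, and the NNT range is then the reciprocal of that interval’s endpoints:

ARR ∈ [0.143, 0.200]  →  NNT ∈ [1 / 0.200, 1 / 0.143] = [5, 7]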

Is it ironic that the type of review (systematic versus ‘rapid’) doesn’t alter the actual effectiveness of the intervention? After all, the compound remains the same, untroubled by the efforts of trialists. Sorry, getting sociological there; must be time to sign off for now.