Wow, a systematic review in five minutes, surely not.
Well, behind the hyperbole, we’re not actually claiming that. What we’re saying is that we’ve created a system to rapidly ‘examine’ multiple controlled trials and to attempt to approximate the effectiveness of an intervention.
So, how does it work? It’s surprisingly straightforward:
- You press the ‘Trip Rapid Review’ button and two search boxes appear: one for the population (e.g. diabetes, acne) and one for the intervention (e.g. metformin, diet). You enter the appropriate terms and press search. NOTE: The current system is optimised for placebo-controlled trials or trials versus ‘normal care’.
- You are then presented with a list of controlled trials matching the search terms; you select those you feel are appropriate and press the ‘Analyse’ button.
- Our system then machine-reads the abstracts and tries to work out whether each article favours the intervention or not. It also tries to ascertain the sample size. Each article is read by two separate classifiers (a decision tree and naive Bayes); if they agree on the result we accept it, but if they disagree we display the conclusion and ask you to tell us the result manually. A positive trial scores +1, a negative trial scores -1.
- Once we have all the information we apply a simple adjustment based on sample size: a small trial is reduced by 75% (so scoring +0.25 or -0.25), a medium trial by 50% (+0.5 or -0.5), and large trials keep their full +1 or -1.
- All these adjusted scores are then averaged to give a single figure: the overall score for the intervention.
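The scoring steps above can be sketched in a few lines of Python. This is a minimal illustration, not Trip's actual code; the function names are mine, and the size thresholds come from the author's reply in the comments below (small &lt;100, medium 100–999, large 1000+):

```python
def trial_score(favours_intervention, sample_size):
    """Score one trial: +1 if it favours the intervention, -1 if not,
    then down-weight by trial size."""
    base = 1.0 if favours_intervention else -1.0
    if sample_size < 100:      # small trial: reduced by 75%
        return base * 0.25
    elif sample_size < 1000:   # medium trial: reduced by 50%
        return base * 0.5
    return base                # large trial (1000+): kept at full weight

def overall_score(trials):
    """Average the adjusted scores across all selected trials."""
    scores = [trial_score(fav, n) for fav, n in trials]
    return sum(scores) / len(scores)

# Example: a large positive trial, a small negative one, a medium positive one
# gives (1.0 - 0.25 + 0.5) / 3, i.e. roughly 0.42.
print(round(overall_score([(True, 2500), (False, 60), (True, 400)]), 2))
```

So a handful of large, consistently positive trials pushes the score towards +1, while small or conflicting trials pull it towards 0.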
In most situations this will take about five minutes, though if you locate lots of trials it will take a bit longer.
A really important message to get across is that our system has not been formally validated (although you may be interested in our in-house results)! We have done lots of in-house testing, comparing our results with published systematic reviews, and we are committed to formally evaluating this approach; until then, please use suitable caution.
One reason for releasing it – unvalidated – is to let Trip users try it and see whether the results are plausible. In most cases we think they will be, but we’re particularly interested in the cases where our system fails, as these are the learning points for us. So, if you try it and find it doesn’t work, please let us know.
Another issue, relating to users trying it, is understanding what the numerical result means. While an overall score of +1 indicates a really positive intervention and -1 a really negative one, what does a score of 0.25, 0.35 or 0.45 indicate? These are the things that require feedback to get right – so keep it coming.
We’re also hoping our approach will inspire other people to improve on our system. We already have plans for a version 2 with numerous improvements, but we’d be delighted if other groups do even better – as long as they keep it free for others to use!
But the best way to understand the system is to try it for yourself – it’s free and really easy to use. So try it now and let me know how you get on!
July 19, 2015 at 5:35 pm
Hi Jon
What are the limits of the sample sizes used by TRIP Rapid Review to classify them as small, medium or large?
July 19, 2015 at 5:49 pm
Small = <100
Medium = 100-999
Large = 1000+
We're about to start overhauling the system so feel free to comment and/or suggest improvements.
July 26, 2015 at 10:43 am
Thank you.
November 2, 2015 at 1:39 pm
Jon – might be neat if everything is sweet. One of the reasons we are still working on methods of SR and MA after donkey's years is that we keep finding more things that are wrong with trials or their reporting. In cancer pain, for example, we are looking for “no worse than mild pain within 14 days” – difficult to extract from data in studies designed to show that drug A is not worse than drug B. And in this example, two have EERW design. For an insight as to how awkward that can make it, see Pain. 2015 Aug;156(8):1382-95.
Andrew Moore
November 2, 2015 at 1:56 pm
Hi Andrew,
Great to hear from you! I'm not trying to do much other than agitate in what is a highly problematic area. I get no sense that we (the EBM community) are 'winning'; in fact it's more the other way. So we need to try other approaches. The one-size-fits-all model of doing an SR on published journal articles is not sustainable. I think we need to be more nuanced about when we apply high-cost methods such as 'full' SRs – by full I mean more the Tamiflu approach than the standard Cochrane one.
If my work and posts can open the debate, then I've achieved something (other than my own enjoyment of the work).
Best wishes
jon