Wow, a systematic review in five minutes? Surely not.

Well, behind the hyperbole, we’re not actually claiming that.  What we are saying is that we’ve created a system that rapidly ‘examines’ multiple controlled trials and attempts to approximate the effectiveness of an intervention.

So, how does it work?  It’s surprisingly straightforward:

  1. You press the ‘Trip Rapid Review’ button and two search boxes appear: one for the population (e.g. diabetes, acne) and one for the intervention (e.g. metformin, diet).  Enter the appropriate terms and press search. NOTE: the current system is optimised for placebo-controlled trials or trials versus ‘normal care’.
  2. You are then presented with a list of controlled trials matching your search terms; select those that look appropriate and press the ‘Analyse’ button.
  3. Our system then machine reads the abstracts to work out whether each article favours the intervention or not, and to ascertain the sample size.  Each abstract is read by two separate machine-reading systems (a decision tree and a naive Bayes classifier); if they agree we accept the result, and if they disagree we display the conclusion and ask you to tell us the result manually.  A positive trial scores +1, a negative trial scores -1.
  4. Once we have all the information we apply a simple adjustment based on sample size: a small trial’s score is reduced by 75% (so it scores +0.25 or -0.25), a medium trial’s by 50% (+0.5 or -0.5), and large trials keep their full +1 or -1.
  5. These weighted scores are averaged to give a single figure: the overall score for the intervention (there’s a rough sketch of this scoring logic just after this list).
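
To make the scoring concrete, here’s a rough Python sketch of steps 3–5. It is not the production code: the `Trial` structure, the `ask_user` callback and the assumption that sample size arrives pre-binned as small/medium/large are invented for illustration; only the agreement rule, the 25%/50%/100% weighting and the averaging come from the description above.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    decision_tree_vote: int   # +1 = favours the intervention, -1 = does not
    naive_bayes_vote: int     # verdict from the second machine reader
    sample_size: str          # "small", "medium" or "large" (binning assumed)

# Weights from step 4: small trials reduced by 75%, medium by 50%, large kept whole.
SIZE_WEIGHT = {"small": 0.25, "medium": 0.5, "large": 1.0}

def trial_score(trial: Trial, ask_user) -> float:
    # Step 3: accept the machine-read direction only if both readers agree;
    # otherwise fall back to asking the user for +1 or -1.
    if trial.decision_tree_vote == trial.naive_bayes_vote:
        direction = trial.decision_tree_vote
    else:
        direction = ask_user(trial)
    # Step 4: down-weight the score according to trial size.
    return direction * SIZE_WEIGHT[trial.sample_size]

def overall_score(trials, ask_user) -> float:
    # Step 5: average the weighted scores to get the intervention's overall score.
    return sum(trial_score(t, ask_user) for t in trials) / len(trials)

# Example: two large positive trials and one small negative one.
trials = [Trial(+1, +1, "large"), Trial(+1, +1, "large"), Trial(-1, -1, "small")]
print(overall_score(trials, ask_user=lambda t: +1))   # (1 + 1 - 0.25) / 3 ≈ 0.58
```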

In most situations this will take about five minutes, though if your search returns lots of trials it’ll take a bit longer.

A really important message to get across is that our system has not been formally validated (although you may be interested in these in-house results).  We have done lots of in-house testing, comparing our results with published systematic reviews, and we’re committed to evaluating this approach formally; until then, please use suitable caution.

One reason for releasing it unvalidated is to allow Trip users to try it, see how they get on and judge whether the results are plausible.  In most cases we think they will be, but we’re particularly interested in the cases where our system fails, as these are the learning points for us.  So, if you try it and find it doesn’t work, please let us know.

Another issue for users trying it is understanding what the numerical output means.  An overall score of +1 indicates a really positive intervention and -1 a really negative one, but what does a score of 0.25, 0.35 or 0.45 indicate?  These are the things that require feedback to get right, so keep it coming.

We’re also hoping our approach will inspire other people to improve on our system.  We already have plans for a version 2 with numerous improvements, but we’d be delighted if other groups do even better, as long as they keep it free for others to use!

But the best way to understand the system is to try it for yourself; it’s free and really easy to use.  So, try it now and let me know how you get on!