Trip Database Blog

Liberating the literature



Automated reviews, something to show…

It’s all hands on deck here as we rush to get a robust prototype model of our automated review system ready for viewing by the EU (next week in Luxembourg).  Much of this work has been funded as part of our participation in the Horizon 2020 funded KConnect project (led by TUW, Vienna) and the EU like to see what they’re getting for their money.

Now we’ve had a chance to play with the data generated by our systems, two things are apparent:

  • This should, broadly, be a viable approach
  • Full automation is still a way off (note, we’re not attempting to reproduce manual systematic reviews).  We think the automation stage will get you ‘so far’ but it’ll still require a second level of ‘polish’ to increase the robustness.  We’re hoping this ‘polish’ stage will take no more than 5–10 minutes.

I’m going to share one image below:

In the image above we are showing a number of things, all generated automatically:

  • Each trial or systematic review is shown as a single ‘blob’.
  • Classification (x-axis) of trials based on perceived efficacy (does the drug work or not).
  • Sample size – the bigger the ‘blob’, the bigger the trial.
  • Intervention name (y-axis).

What you’re not seeing is the fact that each trial is being automatically assessed for bias via RobotReviewer.

For next week we need to keep improving the data quality and – for each intervention – create a single estimate of effectiveness (our version of meta-analysis) and make it look nice!
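To make the idea concrete, here is a minimal sketch of what a single per-intervention estimate of effectiveness (the post’s informal ‘version of meta-analysis’) might look like: a sample-size-weighted average of per-trial sentiment. The field names and scoring scale are my own illustration, not Trip’s actual data model or method.

```python
# Hypothetical sketch: pooling per-trial "sentiment" scores into one
# per-intervention effectiveness estimate, weighted by sample size.
# Field names and the +1/-1 scale are illustrative assumptions.

def pooled_effectiveness(trials):
    """Sample-size-weighted mean of per-trial sentiment scores.

    Each trial is a dict with 'sentiment' (+1 favours the intervention,
    -1 favours the comparator) and 'sample_size'.
    """
    total_n = sum(t["sample_size"] for t in trials)
    if total_n == 0:
        return 0.0
    return sum(t["sentiment"] * t["sample_size"] for t in trials) / total_n

trials = [
    {"sentiment": 1, "sample_size": 400},   # large trial favouring the drug
    {"sentiment": -1, "sample_size": 100},  # small trial against it
]
print(pooled_effectiveness(trials))  # 0.6
```

Weighting by sample size is only one plausible choice; a real pooling step would also need to account for risk of bias and outcome measures.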


King’s Fund will shortly be searchable on Trip

The King’s Fund collection of publications has been the most requested addition to our index over the years.  I’m very pleased to report that, as of around the 20th of September, their content will be searchable via Trip.

The King’s Fund is a UK think tank and is involved in work relating to the health system in England (mainly). It’s an important source of grey literature on policy and health systems here in the UK.

So, that’s great news.

But any other sources we should include?  If you know of any great sources of high-quality content that you think would improve Trip then PLEASE let us know.  Your suggestion could make Trip even better and result in more clinical questions being answered robustly!

Don’t be shy 🙂

Automated rapid review, we’re getting there

I recently gave an update on the progress of the system (see Automated rapid reviews).  In this I highlight the variables we’ll be able to automatically assess for each paper (RCT or systematic review):

  • P – population/disease
  • I – intervention
  • C – comparison (if there is one)
  • Sentiment – does the trial favour the intervention or not
  • Sample size – is this a large or small trial
  • Risk of Bias – via RobotReviewer, which is already on the site

As our system processes all the articles, we have to figure out how to create an output.  The outline brief I’ve suggested to our designers is:

Clearly I’m no designer! But I hope you get the picture!  The design works on two levels:

  • Top level – for a given condition each ‘blob’ will consist of a single intervention.  Size of blob will indicate the size of the evidence (based on sample size of trials), the horizontal axis will represent the date of the first trial for that particular intervention, while the vertical axis will indicate likely effectiveness.
  • Second level – if a user clicks on a blob in the top level, it will be unpacked to break down each intervention into the component trials.  Again, using similar plotting methods (sample size = size of blob, date of individual trial on the horizontal axis and effectiveness on the vertical).
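The top-level view described above can be sketched as a simple aggregation: for each intervention, collapse its component trials into one blob. This is a minimal illustration under assumed field names (‘year’, ‘sample_size’, ‘effect’), not the real data model.

```python
# Hypothetical sketch of the "top level" view: derive one blob's plot
# attributes from an intervention's component trials.
# Field names are illustrative assumptions, not Trip's actual schema.

def blob_for_intervention(trials):
    """Collapse a list of trials into one blob: size from total sample
    size, x from the earliest trial date, y from mean effectiveness."""
    return {
        "size": sum(t["sample_size"] for t in trials),
        "x_first_trial_year": min(t["year"] for t in trials),
        "y_effectiveness": sum(t["effect"] for t in trials) / len(trials),
    }

example_trials = [
    {"year": 1998, "sample_size": 250, "effect": 0.8},
    {"year": 2005, "sample_size": 1200, "effect": 0.4},
]
print(blob_for_intervention(example_trials))
# size 1450, first trial 1998, effectiveness ≈ 0.6
```

The second-level view would then simply plot each trial in the list individually, reusing the same axes.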

It will look nicer and we’re exploring other visualisation techniques such as this one.

This needs to be ready by the end of September, so just over three weeks!

Automated rapid reviews

As part of the KConnect work (EU funded Horizon 2020 project) we have been doing a fair bit of work exploring the automatic extraction of various elements from RCTs and systematic reviews.  If we can automatically understand what a paper is about it can open up all sorts of avenues with regard to search and evidence synthesis.

The KConnect output is virtually ready for Trip to use and it will allow us to extract (with decent, but not perfect, accuracy) the following elements from an RCT or systematic review:

  • P – population/disease
  • I – intervention
  • C – comparison (if there is one)
  • Sentiment – does the trial favour the intervention or not
  • Sample size – is this a large or small trial
  • Risk of Bias – via RobotReviewer, which is already on the site (see this post)
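The extracted elements listed above could be held as a simple per-article record. The sketch below is my own illustration of such a structure; the field names and values are assumptions, not KConnect’s actual output format.

```python
# Minimal sketch of a per-article record for the automatically
# extracted elements. Field names are illustrative, not KConnect's
# real output schema.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExtractedTrial:
    population: str            # P - population/disease
    intervention: str          # I - intervention
    comparison: Optional[str]  # C - comparison, if there is one
    sentiment: int             # +1 favours the intervention, -1 does not
    sample_size: int           # large vs small trial
    risk_of_bias: str          # e.g. "low" / "high", via RobotReviewer


record = ExtractedTrial(
    population="type 2 diabetes",
    intervention="metformin",
    comparison="placebo",
    sentiment=1,
    sample_size=350,
    risk_of_bias="low",
)
print(record.intervention, record.sample_size)  # metformin 350
```

Records like this are what make the examples below possible: grouping by population gives all trials for a condition, and grouping by intervention supports ranking and pooling.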

So, what can we do with this?  A few examples:

  • For a given condition we can identify all the trials in this area and what the interventions are.
  • We can rank the interventions on likely effectiveness.
  • For a given intervention we can look at what conditions it’s been used in.
  • We could present a graphic like Information is Beautiful’s Snake Oil for a given condition and/or intervention.
  • We can massively increase the coverage of our Answer Engine.

Also, all this will be fully automatic: as new trials are added to Trip they will get processed and added to the system.

We’ve got a few technical issues to go (integrating the various systems) but we are so close.  You have no idea how long I’ve fantasised about this system.  And, even though it won’t be perfect, it should stand as a very good proof of concept.

Cochrane records on Trip

Last month we reported repeated problems keeping our Cochrane records up to date and asked users to decide what we should do.  Overwhelmingly people said we should stop linking to Wiley’s Cochrane Library domain and link to PubMed.

We have now moved all the links over:

So, Trip now has all the Cochrane systematic reviews and, given our experience of PubMed, we’re confident we’ll continue to properly reflect Cochrane’s records from now on.

For those of you who miss the direct links to the Cochrane Library, you can still easily get there via PubMed:

Evidence Live: Community Rapid Review

The Community Rapid Review idea has been discussed for a while now and the final stage, before we move to production, is coming very soon.  Next week I will be running a workshop at Evidence Live on the idea.  It’ll be an interactive exploration of the thinking behind the idea and will hopefully generate some last constructive criticism to guide the final product.

If you’re going to Evidence Live you can reserve a place via this link.

Medicines information coming to the Answer Engine

The Answer Engine started less than 6 months ago and has firmly established itself as a well-loved feature on Trip.  Currently, the answers are mainly linked to intervention efficacy style questions.  But, in around 4-6 weeks, we’ll be rolling out medicines information.  So, for a given drug we’ll allow users to easily see answers to questions about contraindications, warnings, interactions etc.  For instance:


Cochrane records on Trip – opinions please

Cochrane is one of the most popular resources in Trip.

Historically we have received automated updates from Wiley with the intention of ensuring we were always up to date with the records. It has come to our attention that there have been some problems with the process and that we were missing a number of key Cochrane reviews.

We have reached out to Wiley to help us resolve these issues.  Unfortunately, this is taking far too long for my liking and so we’re faced with a dilemma:

  • Continue waiting – indefinitely – for Wiley to support Cochrane’s inclusion in Trip = not up-to-date records of Cochrane.
  • Move to Cochrane’s records on PubMed = up-to-date records, but less seamless access to the full text than via Wiley.

I’d welcome people’s views as to which is the better option; please let us know in the vote below:



The evidence pyramid

A great explanation of the evidence pyramid (taken from this Walden University site):

We use this concept in Trip to help users navigate the results (helping indicate the likely reliability of the evidence):

The main place to see it is slightly to the right of the result.  Here you see the pyramid, a colour-banded representation of where the evidence lies in the pyramid and a phrase describing the content (e.g. systematic review, guideline etc). You’ll note the colour coding which can be found to the left of each result and it is carried through to the ‘Refine by’ area.

We’ve just had a large usability study of the site and this is what the report states:

  • Users from research or information management backgrounds understood the pyramid icon, but users who were new to it wondered what it was.
  • When they figured out what it was, or I talked them through it, they appreciated it and it appeared to add to their experience.
  • Recommend adding a mouse-over explanation or some kind of ‘Introducing the pyramid of evidence’ box somewhere, as part of onboarding or in help.

Bottom line: it’s useful but only when you know what it means!  With no understanding of the concept it’s just confusing.

I need to go and sit on the ‘naughty step’ and contemplate why I fell into the trap of assuming users know all this sort of stuff (oh yes, and to fix it).


