Trip Database Blog

Liberating the literature


April 2012

TILT – survey time

Following on from my recent post about TILT (click here), I’ve decided to tap the wisdom of the crowd to try to improve things – I really don’t want to give up on the idea.

So, if you can spare five minutes then please take the survey – click here.


Donation update

At the time of writing our PayPal account has £1,388.98 – which is great (if you’ve not given, you still can via this link – please do!).  Donations ranged from £1 to £250 and seeing them come in was very humbling.  A massive thanks to Ben Goldacre (yes, that Ben Goldacre – of Bad Science fame) who tweeted the following to his 189,000 followers on Twitter:

can u think of a way that @JRBtrip can fund the excellent TRIP database? Vastly cheaper than NHS Evidence, better imho.

In addition, I asked people who didn’t donate why they didn’t, and here are the responses:

  • I hardly use TRIP – 52.6%
  • I like TRIP but not enough to pay for it – 26.3%
  • I can’t afford it – 24.6%
  • I want TRIP to continue and grow but I’m hoping other people will pay for it! – 19.3%

I’m not sure what I gained from asking this – just curious, I suppose!

As mentioned above, we’re still interested in generating more income; for more of an idea of our plans, click here.

Rapid versus systematic reviews – part 2

A search was undertaken to identify articles that compared rapid reviews to systematic reviews; further articles were identified following feedback on a list promoted via the evidence-based health mail list and various forms of social media. The list of identified articles can be found here.

Without a clear appreciation of the best way to summarise the documents, I’ve gone with a number of lessons I’ve observed from the literature combined with some personal observations.  Your feedback and suggestions for improvements would be appreciated.

Lesson 1: The notion of a rapid review is ill-defined. However, introducing a single methodology isn’t necessarily appropriate. What is important is transparency about the process.
Observation 1: The methodology behind systematic reviews varies a great deal as well. Also, what constitutes rapid? In the literature it was typically less than 5 weeks. A lot of my work is undertaken in less than 5 hours. So, I’m very supportive of the notion of transparency.

Lesson 2: The tension between speed and accuracy is a common theme.
Observation 2: While it may appear obvious it’s important that it’s made explicit.

Lesson 3: Rapid reviews tend to look at a focused question while systematic reviews will typically look at broader topics. Also, rapid reviews tend to focus on efficacy or effectiveness and are not typically used to examine safety, economics or ethics.
Observation 3: I’m not sure how accurate this statement is. However, I do know that the broader the question the less likely it is to be answerable quickly.

Lesson 4: Meta-analyses are often not undertaken in rapid reviews, so no effect sizes are given – typically just the direction of an intervention’s effect. Any results are less generalisable and less certain.
Observation 4: A rapid review might be able to say if a treatment is likely to be better than another; it’s less able to say how much better it is. This may or may not be important.

Lesson 5: Trial quality assessment is important: poor-quality studies are likely to overestimate the benefits of a therapy or the value of a test.
Observation 5: Again, this is linked to the time factor. If you only have two days to return a response what should you do? For our ultra-rapid reviews it seems sensible to be transparent and make explicit the short-cuts and possible effects. In our ultra-rapid reviews we aim to use secondary studies but we will use abstracts of primary research as well. One paper suggested that a moderately robust summary of the evidence is better than no evidence.

Lesson 6: The conclusions of a rapid review and a systematic review do not – typically – differ. The extra effort of carrying out a systematic review may not greatly impact the final conclusions.
Observation 6: Unsurprising, but needs to be taken in the context of the points raised above. Also, an understanding of why they don’t agree is needed.

Lesson 7: Rapid reviews, when compared with systematic reviews, occasionally differ. In the papers that made the comparison, the rates of difference between rapid reviews and systematic reviews were 4/39, 1/14 and 1/6.

The study that reported 4 differences in conclusion out of 39 reviews compared NICE and BUPA judgements around funding. The differences may well have reflected semantic differences (i.e. BUPA used a different classification system from NICE), differences in the year the review was undertaken (BUPA typically published their reviews earlier than NICE) and genuine judgement differences, e.g. BUPA said percutaneous vertebroplasty for osteoporosis should be used in ‘trial only’ while NICE said ‘evidence adequate’ (but added caveats).
The same paper reported another study showing 1/14 differences but I was unable to ascertain the reason for the difference due to poor referencing.
In the 1/6 case the rapid review reported that the intervention was experimental while the large cost-effectiveness study indicated that the intervention was safe and efficacious. No reason was supplied for the discrepancy.
Observation 7: Clearly more research is needed to understand differences and I’d be very keen to see how ultra-rapid (less than 1 day) reviews compare with rapid and systematic reviews.

Conclusion: This is a fascinating topic that needs more research to make robust conclusions.  I looked into this topic due to my work in ultra-rapid reviews and wanting to know how they might stack up against more robust methods.  There appears to be no evidence on the matter.  I have two forms of comfort:

  • In my time my various teams and I have answered over 10,000 questions and many of our answers have been viewed over five thousand times.  In that time I am only aware of one serious problem with an answer.
  • I have always said that what we do is not a systematic review but we invariably do better than most rushed clinicians when searching the evidence for an answer.  If our service is ‘wrong’ then it suggests providing evidence resources to clinicians (knowing they’ll do a worse job) is also wrong. 

Transparency is the key message for me.  Being clear in communicating the methods used and also in communicating the likely effect of the methodological short-cuts.

Do you value TRIP?

TRIP is free to access – there are no charges to use it and we don’t want to restrict access.  We also want to develop TRIP, to make it more useful so more clinicians and patients can benefit from high quality evidence to support their practice/care.  We’re low cost, with myself (Jon Brassey) being the only (part-time) paid employee.  Aside from that we have server costs, insurance and technical support – it all works out at about £3,000 per month. We generate this money from a variety of sources and occasionally have ‘spare’ funds that we can use for improvements to the TRIP site or various experiments (e.g. TILT, Blitter and our developing world initiative).

TRIP has had a massive global impact, helping in millions of episodes of care (estimated at over 20 million times). We want to have an even bigger impact, we have the ideas but we need help to achieve our aims. 

We have drawn up a list of improvements we’d like to see – based on our massive user survey last year (click here for the main results).  But the main improvements are:

  • Improve the transparency of TRIP, for instance what each of the categories means, how the results are worked out etc.
  • Search refinement.  We’ve got lots of ideas to make it easier for users to refine their search including an auto-refine feature.
  • Increasing the number of 3rd party databases we link to while reducing the clutter on the results page. A challenge, but we’re confident we have a solution.
  • Introducing an experimental feature, launching the answer engine – a really exciting feature.
  • A design overhaul.  Making everything clearer and easier to use.
  • HTML5.  We’d love to redesign our site using HTML5 to make it work better on mobile devices and tablets.  A separate mobile optimised version would be wonderful.
  • Numerous minor things that just need fixing or tidying up.

These are ambitious changes and the answer engine has massive potential.  I estimate that these changes will cost somewhere between £15,000 and £25,000 ($24,000-40,000).

We know that TRIP is well used and we know people have a lot of affection/love for TRIP.  I’m simply asking users to give something back.  Please consider making a donation to support TRIP.  If you value TRIP and want to see it continue and to grow please don’t leave it to someone else to donate. If you decide not to donate can you please answer these two questions to better understand the reasons – click here.

Donate via this link.


TILT (Today I Learnt That) is a concept I still love.  I thought we’d done it right: we used a panel of clinicians to pilot the idea and, when that went well, we built it.  I think it’s fair to say that it has failed, but after looking over it again after a few months of inaction I’m still enthusiastic about TILT.

The basic premise of TILT is that you record anything you’ve recently learnt (clinically).  As well as being a record of learning (useful for revalidation in the UK) it was also shared with the wider community.  The community could learn from your endeavours.  If you read someone else’s learning, a simple click of a button added it to your own portfolio.  In many ways it’s like Twitter meets shared learning.

But why has it failed? I think the reasons are numerous:

  • It was over-complicated!  KISS (keep it simple, stupid) – I should have known.  As well as the core concept (which we piloted) we added a few extra layers, e.g. the ability to follow other users and the ability to create groups.  I also think the input fields could have been perceived as daunting.  Although optional, we had/have fields for tags, time spent learning, reflections etc.
  • Availability of TILT.  Basically it was on the website and we also allowed people to automatically add content from twitter (by the addition of the #TILT hashtag).  But it needed to be easier for people to add TILTs – perhaps toolbars, bookmarklets, partnering with other sites to add TILT functionality. 
  • Design.  While I don’t think it’s bad, I feel it could be significantly improved upon.
  • Marketing.  Something TRIP isn’t great at and hence getting the word out didn’t really help.  In my naive mind I thought it’d be so good that word of mouth would see it diffuse.  However, this required it to be perfect which – in hindsight – it wasn’t.

To reiterate, the concept is great and I still love it.  It’s a bottom-up form of learning – clinicians read an academic paper (for instance) and distill what they’ve learnt into 2-3 sentences.  They only TILT when they’ve learnt something.  Brilliant.

To make it work I’m thinking the following would help:

  • Simplify.  Remove lots of the non-core bits (tags, groups, following etc.) while the site grows and worry about these when the site gets to a size that makes it an issue!  But build up the critical mass required first!
  • Availability. Improve the integration with social media, create bookmarklets etc.  I also think re-writing the site in HTML5 would be great, making it work well on smartphones, tablets etc.
  • As part of the HTML5 work I’d be really tempted to improve how the site works and how people can use it.
  • Marketing – need I say anything about this!?

If anyone has any further thoughts feel free to share them!

UPDATE: I posted links to the blog on Twitter and the TRIP Facebook page and have received the following comments so far:

  • “Is TILT still there? Is it still a feature? I never understood what it was or how to use it?” – a good one this, reflecting our inability to communicate clearly what TILT is and what we’re trying to achieve.
  • “Make login mandatory” – less clear this one, as you need to log in to add a TILT.  But I guess if you were always logged in it’d be easier to use.  The comment also links to being logged into TRIP.
  • “It is not clear what the value is for the user to use it.”  To my mind the value is one of being part of an altruistic community which makes learning easier.  Perhaps I’m too idealistic!

Rapid versus systematic reviews

While systematic reviews remain the gold standard for synthesising evidence, they are typically costly and take many months.  In a healthcare setting, both time and money are heavily constrained – so what are the options?

I have been undertaking rapid reviews (in the form of clinical Q&As) for nearly 15 years e.g. ATTRACT.  My various teams and I have answered well over 10,000 questions – the majority taking less than 4 hours.  So, there are clear differences between what we do and what a systematic review does. I have typically justified our outputs by not claiming to do a systematic review, by being transparent about what we do and by hoping that we would do better than an individual clinician.  In addition, we have published all our answers on the web and many have been viewed over 5,000 times – most pass without comment.  I feel moderately reassured by this post-publication ‘peer review’.  In fact, we have only had one major alert where we clearly made a serious error.  It was around the time of the Cox-2 issues and we relied on pre-crisis documentation!

But, it’d be complacent to think our methods are perfect.  So, I recently asked the EBHC mail-list for any literature on the subject (rapid versus systematic reviews) and the results are below, if you know of any others then please let me know.

Search refinement

The ability to search TRIP and then filter by publication type (e.g. systematic reviews, guidelines) is often cited as something that’s really positive about TRIP.  Aside from the addition of a few new categories there have been no real changes to it since it was introduced (probably ten years ago).

While it works really well, I’m wondering if it can be improved.  In the image below (a mock-up, it’s not real) you’ll see a potential feature that is displayed if a user clicks on the ‘Systematic Review’ filter – it allows you to select separate publications within the systematic review category.

Do people like this?

Gutting news

When you hear bad news sometimes you’ve got to get it out of your system.  We were in the early stages of being acquired and things were looking very positive, as both sides appeared to want it to happen.  However, we heard today that it isn’t happening – so I’m flat.  Not for any personal reason based on me getting a big fat cheque (I never expected that).  My thinking was that with this new partner giving us some money to invest, we could make a much bigger difference.

I’ve got a massive backlog of ideas I want to implement, all making TRIP so much better.  To introduce these needs investment, not much, but enough to do it properly. 

If you look at what TRIP has achieved (see this post & this one) with our amazingly low budget (around £25,000 per year) just think what we could achieve with a decent investment.

However, it’s not only the money, it’s help with other things – such as publicity – to help get the message of TRIP across.

So, my disappointment is that I really did feel I could have helped change the world.  For the time being that’s been taken from me – so flat as the proverbial pancake.

Rapid reviews and TRIP

The TRIP Database was designed to help me answer questions for the ATTRACT service, a service I’m still involved in.  The ATTRACT methodology involves receiving a clinical question from a clinician and answering it, using the best available evidence, within 4-6 hours.  It’s clear to see that this is not a systematic review!  However, it’s unclear what the effects are so I’ve been doing some searching for evaluations of rapid reviews versus systematic reviews.  It’s early days in the review but a couple of interesting papers are:

If you know of others please let me know.

Frequently, when reading around a subject, I get side-tracked; this time I started thinking about carrying out rapid reviews from within TRIP.  It might work something like this:

  1. The user starts by using the TRIP PICO search (click here to see it).
  2. A user selects those articles that s/he wants to view.
  3. These are then all opened up in new windows framed by a special TRIP frame (so we can do some clever stuff highlighted below).
  4. The user reads the articles, highlights passages they want to include in the review and ‘sends’ them to a review builder on the TRIP site (by simply pressing a button).
  5. The user then goes to the review builder, adds a narrative to link the passages and TRIP publishes the review (after also adding references, etc.).

We could even monitor new content added to TRIP and alert the original creator when new articles on the topic are published (arguably we could add them to the actual review as well, in a ‘new evidence’ section!).
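The review-builder steps above could be sketched, very roughly, as a small data structure. This is purely illustrative – every class and method name here is hypothetical and nothing reflects an actual TRIP implementation:

```python
# Hypothetical sketch of the review-builder idea: collect highlighted
# passages sent from the TRIP frame, then combine them with a narrative
# and a numbered reference list.

class ReviewBuilder:
    """Collects highlighted passages and assembles them into a draft review."""

    def __init__(self, question):
        self.question = question
        self.passages = []  # (source_title, excerpt) pairs

    def add_passage(self, source_title, excerpt):
        # Called when the user 'sends' a highlighted passage to the builder.
        self.passages.append((source_title, excerpt))

    def publish(self, narrative):
        # Combine the narrative with the quoted passages and a reference list.
        body = [f"Q: {self.question}", "", narrative, ""]
        refs = []
        for i, (title, excerpt) in enumerate(self.passages, start=1):
            body.append(f'"{excerpt}" [{i}]')
            refs.append(f"[{i}] {title}")
        return "\n".join(body + [""] + ["References:"] + refs)


builder = ReviewBuilder("Is drug X effective for condition Y?")
builder.add_passage("Systematic review A", "Drug X reduced symptoms vs placebo.")
builder.add_passage("Guideline B", "Drug X is recommended as second-line therapy.")
print(builder.publish("Two sources suggest a benefit for drug X."))
```

The ‘new evidence’ alerting mentioned above would then just be a matter of re-running the original search against newly indexed content and appending any hits as further passages.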

It’d work for ATTRACT and I’m guessing it might work elsewhere…!
