Trip Database Blog

Liberating the literature

Author: jrbtrip

Networks and TRIP

For a number of years I’ve been pondering the numerous relationships contained within TRIP. A few examples include:

  • Relationships between articles e.g. via citations, by being in the same publication
  • Relationships between users e.g. linked by clinical interests, by geographic location
  • Relationships between articles and users e.g. looking at the same articles

I can’t help feeling there is value in these links and I do not mean in the financial sense.

Take a really simple example (click on image to enlarge it)

This is an imaginary search undertaken by 5 users and a line signifies which papers they looked at.  We can deduce some things:

  • Papers six and seven weren’t liked.
  • Paper two looks the most popular
  • Users 2 and 3 appear similar/close (both looking at two of the same papers)

Now, if we add an extra level of data:

In this image the rounder reddish boxes signify doctors while the green boxes signify nurses.  Do these inferences seem reasonable?

  • Paper five is really suited to nurses while papers one (to a point), two and three are more ‘doctor’ focused
  • Paper four has mixed interest.

Imagine adding extra detail (different types of doctor, different geographic locations) and lots of data (something we have plenty of in TRIP): you might be able to generate a really powerful system. Could it inform search results?
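The deductions above can be sketched in a few lines of code. This is purely illustrative – the users, papers, and profession labels below are invented toy data mirroring the images, not TRIP’s actual data or algorithms; a real system would be fed from usage logs.

```python
from itertools import combinations

# Toy data mirroring the example above: 5 users, 7 papers, and which
# papers each user looked at (papers six and seven attract no views).
views = {
    "user1": {"paper1", "paper2"},
    "user2": {"paper2", "paper3", "paper4"},
    "user3": {"paper2", "paper3"},
    "user4": {"paper4", "paper5"},
    "user5": {"paper5"},
}

# The "extra level of data": a hypothetical profession label per user.
profession = {"user1": "doctor", "user2": "doctor", "user3": "doctor",
              "user4": "nurse", "user5": "nurse"}

def popularity(views):
    """How many users looked at each paper."""
    counts = {}
    for papers in views.values():
        for p in papers:
            counts[p] = counts.get(p, 0) + 1
    return counts

def jaccard(a, b):
    """Similarity of two users: shared papers / all papers either viewed."""
    return len(a & b) / len(a | b)

def closest_pair(views):
    """The two users with the most similar viewing behaviour."""
    return max(combinations(views, 2),
               key=lambda pair: jaccard(views[pair[0]], views[pair[1]]))

def audience(views, profession):
    """Which professions looked at each paper."""
    result = {}
    for user, papers in views.items():
        for p in papers:
            result.setdefault(p, set()).add(profession[user])
    return result
```

Running these functions reproduces the deductions made by eye: `popularity` puts paper two on top, `closest_pair` picks out users 2 and 3, and `audience` flags paper five as nurse-only and paper four as mixed. At TRIP’s scale the same idea would need proper collaborative-filtering machinery, but the principle is identical.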

What do people think? I’ve really simplified things to make a point and I doubt the data will ever be as clear cut. 

The next upgrade to TRIP

Are you in for a treat?  I’d like to think so.

I had a long but rewarding meeting with Phil (our genius techie) and Reuben (our new and wonderful design guy) to thrash out the final details of the next upgrade to TRIP.  I am so excited by the proposed developments which are:

  • Three major innovations – for me, really important developments
  • A handful of significant improvements
  • A load of minor improvements
  • A design overhaul

I’m being deliberately vague with the details for now. But as things develop – designs get drawn, features become available – I may well share them here and on Facebook (if you didn’t know we had a Facebook page, click here).

No firm timetable but I’d like to think it’ll be ready in 3 months.

Also, I’ve started planning the next upgrade, the one after this one, but much depends on how the innovations from this one take off.  If you have any suggestions then feel free to let me know.

An explosion of ideas

It’s less than a month since I heard the gutting news that a potential purchaser of TRIP had pulled out, with no reason given.

But every cloud has a silver lining. While the acquisition was pending I’d not given a huge amount of thought to the next updates to TRIP – not since last year’s survey (click here for details). But not now – WOW – it’s been a great two weeks of reflection.

I’ve met with Phil (our genius techie) to discuss the updates from last year’s survey and they all seem straightforward(ish) to implement. We’ve also had our request for donations (click here – it’s not too late), which has generated a good amount.

I really like being open about my thinking, but my latest idea is so special (at least I think so) that I need to keep it under wraps for now. It’s built on our social learning tool, TILT, but goes way beyond it. One possible offshoot of the idea is organisational accounts on TRIP. These would allow organisations to upload their own documents to TRIP, which would then be searchable via TRIP. So, the University Hospital of Bristol might create an account and upload documents (local guidelines, protocols, antibiotic resistance data, clinic opening times – whatever they wish). A local doctor or nurse could link their profile to the University Hospital of Bristol’s profile, and when they search they’d see local documents. In addition to local documents, the organisational account might add their link-resolver details – making linking to full-text documents so much easier.

The big issue for me is making it as painless as possible for organisations to upload their documents. It also needs to be easy for individuals to find their institution. Neither should be too problematic.

So, feel free to comment or add any feature you think would make it even more powerful.

As mentioned, this is a relatively small offshoot of a bigger idea which I hope to reveal gradually over the next month or so.

TILT – survey time

Following on from my recent post about TILT (click here) I’ve decided to get the wisdom of the crowd to try and improve things – I really don’t want to give up on the idea.

So, if you can spare five minutes then please take the survey – click here.

Thanks!

Donation update

At the time of writing, our PayPal account has £1,388.98 – which is great (if you’ve not given, you still can via this link – please do!). Donations ranged from £1 to £250, and seeing them come in was very humbling. A massive thanks to Ben Goldacre (yes, that Ben Goldacre, of Bad Science fame), who tweeted the following to his 189,000 Twitter followers:

“can u think of a way that @JRBtrip can fund the excellent TRIP database? Vastly cheaper than NHS Evidence, better imho.”

In addition, I asked people who didn’t donate why not, and here are the responses:

  • I hardly use TRIP – 52.6%
  • I like TRIP but not enough to pay for it – 26.3%
  • I can’t afford it – 24.6%
  • I want TRIP to continue and grow but I’m hoping other people will pay for it! – 19.3%

I’m not sure what I gained from asking this – just curious, I suppose!

As mentioned above we’re still interested in generating more income, for more of an idea of our plans, click here.

Rapid versus systematic reviews – part 2

A search was undertaken to identify articles comparing rapid reviews to systematic reviews; further articles were identified following feedback on a list promoted via the evidence-based health mailing list and various forms of social media. The list of identified articles can be found here.

Without a clear appreciation of the best way to summarise the documents, I’ve gone with a number of lessons I’ve observed from the literature combined with some personal observations.  Your feedback and suggestions for improvements would be appreciated.

Lesson 1: The notion of a rapid review is ill-defined. However, introducing a single methodology isn’t necessarily appropriate. What is important is transparency about the process.
Observation 1: The methodology behind systematic reviews varies a great deal as well. Also, what constitutes rapid? In the literature it was typically less than 5 weeks. A lot of my work is undertaken in less than 5 hours. So, I’m very supportive of the notion of transparency.

Lesson 2: The tension between speed and accuracy is a common theme.
Observation 2: While it may appear obvious, it’s important that it’s made explicit.

Lesson 3: Rapid reviews tend to look at a focused question, while systematic reviews will typically look at broader topics. Also, rapid reviews tend to focus on efficacy or effectiveness and are not usually used to examine safety, economics or ethics.
Observation 3: I’m not sure how accurate this statement is. However, I do know that the broader the question, the less likely it is to be answerable quickly.

Lesson 4: Meta-analyses are often not undertaken in rapid reviews, so no effect sizes are given – typically just the direction of an intervention’s effect. Any results are less generalisable and less certain.
Observation 4: A rapid review might be able to say whether one treatment is likely to be better than another; it’s less able to say how much better it is. This may or may not be important.

Lesson 5: Trial quality assessment is important: poor quality studies are likely to overestimate the benefits of a therapy or the value of a test.
Observation 5: Again, this is linked to the time factor. If you only have two days to return a response, what should you do? For our ultra-rapid reviews it seems sensible to be transparent and make explicit the short-cuts and their possible effects. In our ultra-rapid reviews we aim to use secondary studies, but we will use abstracts of primary research as well. One paper suggested that a moderately robust summary of the evidence is better than no evidence.

Lesson 6: The conclusions of a rapid review and a systematic review do not typically differ. The extra effort of carrying out a systematic review may not greatly impact the final conclusions.
Observation 6: Unsurprising, but this needs to be taken in the context of the points raised above. Also, when conclusions do differ, an understanding of why is needed.

Lesson 7: Rapid reviews, when compared with systematic reviews, occasionally differ. In the papers that made the comparison, the rates of difference between rapid reviews and systematic reviews were 4/39, 1/14, and 1/6.

The study that reported 4 differences in conclusion out of 39 reviews compared NICE and BUPA judgements around funding. The differences may well have reflected semantic differences (i.e. BUPA used a different classification system from NICE), differences in the year the review was undertaken (BUPA typically published their reviews earlier than NICE), and genuine judgement differences – e.g. for percutaneous vertebroplasty for osteoporosis, BUPA said it should be used in ‘trial only’ while NICE said ‘evidence adequate’ (but added caveats).
The same paper reported another study showing 1/14 differences but I was unable to ascertain the reason for the difference due to poor referencing.
In the 1/6 case the rapid review reported that the intervention was experimental while the large cost-effectiveness study indicated that the intervention was safe and efficacious. No reason was supplied for the discrepancy.
Observation 7: Clearly more research is needed to understand differences and I’d be very keen to see how ultra-rapid (less than 1 day) reviews compare with rapid and systematic reviews.

Conclusion: This is a fascinating topic that needs more research before robust conclusions can be drawn. I looked into it because of my work on ultra-rapid reviews and wanting to know how they might stack up against more robust methods. There appears to be no evidence on the matter. I have two forms of comfort:

  • In my time, my various teams and I have answered over 10,000 questions, and many of our answers have been viewed over five thousand times. In that time I am only aware of one serious problem with an answer.
  • I have always said that what we do is not a systematic review but we invariably do better than most rushed clinicians when searching the evidence for an answer.  If our service is ‘wrong’ then it suggests providing evidence resources to clinicians (knowing they’ll do a worse job) is also wrong. 

Transparency is the key message for me: being clear in communicating the methods used, and also the likely effect of the methodological short-cuts.

Do you value TRIP?

TRIP is free access: there are no charges to use it and we don’t want to restrict access. We also want to develop TRIP, to make it more useful, so more clinicians and patients can benefit from high quality evidence to support their practice and care. We’re low cost, with me (Jon Brassey) as the only (part-time) paid employee. Aside from that we have server costs, insurance and technical support – it all works out at about £3,000 per month. We generate this money from a variety of sources and occasionally have ‘spare’ that we can use for improvements to the TRIP site or various experiments (e.g. TILT, Blitter and our developing world initiative).

TRIP has had a massive global impact, helping in millions of episodes of care (estimated at over 20 million times). We want to have an even bigger impact, we have the ideas but we need help to achieve our aims. 

We have drawn up a list of improvements we’d like to see – based on our massive user survey last year (click here for the main results).  But the main improvements are:

  • Improve the transparency of TRIP – for instance, what each of the categories means, how the results are worked out, etc.
  • Search refinement.  We’ve got lots of ideas to make it easier for users to refine their search including an auto-refine feature.
  • Increasing the number of 3rd party databases we link to while reducing the clutter on the results page. A challenge, but we’re confident we have a solution.
  • Introducing an experimental feature, launching the answer engine – a really exciting feature.
  • A design overhaul.  Making everything clearer and easier to use.
  • HTML5.  We’d love to redesign our site using HTML5 to make it work better on mobile devices and tablets.  A separate mobile optimised version would be wonderful.
  • Numerous minor things that just need fixing or tidying up.

These are ambitious changes, and the answer engine has massive potential. I estimate they will cost somewhere between £15,000–25,000 ($24,000–40,000).

We know that TRIP is well used and we know people have a lot of affection/love for TRIP.  I’m simply asking users to give something back.  Please consider making a donation to support TRIP.  If you value TRIP and want to see it continue and to grow please don’t leave it to someone else to donate. If you decide not to donate can you please answer these two questions to better understand the reasons – click here.

Donate via this link.

TILT

TILT (Today I Learnt That) is a concept I still love. I thought we’d done it right: we used a panel of clinicians to pilot the idea and, when that went well, we built it. I think it’s fair to say that it has failed, but looking it over again after a few months of inaction I’m still enthusiastic about TILT.

The basic premise of TILT is that you record anything you’ve recently learnt (clinically). As well as being a record of learning (useful for revalidation in the UK), it is shared with the wider community, which can then learn from your endeavours. If you read someone else’s learning, a simple click of a button adds it to your own portfolio. In many ways it’s Twitter meets shared learning.
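The record–share–adopt mechanic described above can be sketched in a few lines. This is purely illustrative: the class and field names are invented for the sketch, not TILT’s actual implementation.

```python
from dataclasses import dataclass, field
from datetime import date

# Community-wide stream that every user can read.
shared_stream = []

@dataclass
class Tilt:
    author: str
    text: str                     # 2-3 sentences of distilled learning
    learnt_on: date = field(default_factory=date.today)

@dataclass
class Portfolio:
    owner: str
    entries: list = field(default_factory=list)

    def record(self, text):
        """Record something you've learnt; it also goes to the shared stream."""
        tilt = Tilt(self.owner, text)
        self.entries.append(tilt)
        shared_stream.append(tilt)
        return tilt

    def adopt(self, tilt):
        """The one-click button: add someone else's learning to your portfolio."""
        self.entries.append(tilt)
```

One TILT thus lives in the author’s portfolio, in the shared stream, and in the portfolio of anyone who adopts it – which is the whole appeal: learning once, shared many times.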

But why has it failed? I think the reasons are numerous:

  • It was over-complicated! KISS (keep it simple, stupid) – I should have known. As well as the core concept (which we piloted) we added a few extra layers, e.g. the ability to follow other users and the ability to create groups. I also think the input fields could have been perceived as daunting: although optional, we had/have fields for tags, time spent learning, reflections etc.
  • Availability of TILT.  Basically it was on the website and we also allowed people to automatically add content from twitter (by the addition of the #TILT hashtag).  But it needed to be easier for people to add TILTs – perhaps toolbars, bookmarklets, partnering with other sites to add TILT functionality. 
  • Design.  While I don’t think it’s bad, I feel it could be significantly improved upon.
  • Marketing. Something TRIP isn’t great at, so we struggled to get the word out. In my naive mind I thought it’d be so good that word of mouth would see it diffuse. However, that required it to be perfect, which – in hindsight – it wasn’t.

To reiterate: the concept is great and I still love it. It’s a bottom-up form of learning – clinicians read an academic paper (for instance) and distill what they’ve learnt into 2–3 sentences. They only TILT when they’ve learnt something. Brilliant.

To make it work I’m thinking the following would help:

  • Simplify.  Remove lots of the non-core bits (tags, groups, following etc.) while the site grows and worry about these when the site gets to a size that makes it an issue!  But build up the critical mass required first!
  • Availability. Improve the integration with social media, create bookmarklets etc. I also think re-writing the site in HTML5 would be great, making it work well on smartphones, tablets etc.
  • As part of the HTML5 work I’d be really tempted to improve how the site works and how people can use it.
  • Marketing – need I say anything about this!?

If anyone has any further thoughts feel free to share them!

UPDATE: I posted links to the blog on Twitter and the TRIP Facebook page and have received two comments so far:

  • “Is TILT still there? Is it still a feature? I never understood what it was or how to use it?” – a good one, this; it reflects our inability to communicate clearly what TILT is and what we’re trying to achieve.
  • “Make login mandatory” – less clear, this one, as you need to log in to add a TILT. But I guess if you were always logged in it’d be easier to use. The comment also links to being logged into TRIP.
  • “It is not clear what the value is for the user to use it” – to my mind the value is being part of an altruistic community which makes learning easier. Perhaps I’m too idealistic!

Rapid versus systematic reviews

While systematic reviews remain the gold standard for synthesising evidence, they are typically costly and can take many months. In a healthcare setting, both time and money are heavily constrained – so what are the options?

I have been undertaking rapid reviews (in the form of clinical Q&As) for nearly 15 years, e.g. ATTRACT. My various teams and I have answered well over 10,000 questions – the majority taking less than 4 hours. So, there are clear differences between what we do and what a systematic review does. I have typically justified our outputs by not claiming to do a systematic review, by being transparent about what we do, and by hoping that we do better than an individual clinician would. In addition, we have published all our answers on the web and many have been viewed over 5,000 times – most pass without comment. I feel moderately reassured by this post-publication ‘peer review’. In fact, we have only had one major alert where we clearly made a serious error: it was around the time of the Cox-2 issues and we relied on pre-crisis documentation!

But it’d be complacent to think our methods are perfect. So, I recently asked the EBHC mailing list for any literature on the subject (rapid versus systematic reviews); the results are below. If you know of any others, please let me know.
