Trip Database Blog

Liberating the literature

Grading guidelines

At the end of last year we posted Quality and guidelines, which set out our thinking on grading guidelines with a view to improving the experience for our users. Since then we’ve done a great deal of work exploring this issue and have arrived at a modified version of the scoring system from the Institute of Medicine’s Clinical Practice Guidelines We Can Trust.

Firstly, an important distinction to highlight is that we are not able to grade individual guidelines. Trip has over 10,000 clinical guidelines and grading each one is simply impractical from a resource perspective. So, the plan is to grade each guideline publisher. The idea is that each publisher will be independently visited by two people (Trip staff and volunteers), who will score them based on these questions:

  • Do they publish their methodology? No = 0, Yes = 1, Yes and mention AGREE (or similar) = 2
  • Do they use any evidence grading e.g. GRADE? No = 0, Yes = 2
  • Do they undertake a systematic evidence search? Unsure/No = 0, Yes = 2
  • Are they clear about funding? No = 0, Yes = 1
  • Do they mention how they handle conflict of interest? No = 0, Yes = 1

The best possible score is 8!  Our work has shown that the above questions give very good approximations to the more formal methods, hence we’re using this simpler approach. The idea is to start displaying these scores alongside each result (we’ll work on a graphic to display them and allow users to easily see how we’ve scored each publisher).
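For concreteness, the arithmetic of the scheme can be sketched in a few lines. The function and argument names here are my own, purely illustrative, and not anything from Trip’s actual systems:

```python
# Sketch of the publisher scoring scheme described above.
# All names are illustrative, not part of Trip's codebase.

def publisher_score(publishes_methodology, mentions_agree,
                    uses_evidence_grading, systematic_search,
                    clear_funding, handles_coi):
    """Combine the five questions into a score out of 8."""
    score = 0
    if publishes_methodology:
        # Mentioning AGREE (or similar) earns the extra point
        score += 2 if mentions_agree else 1
    if uses_evidence_grading:  # e.g. GRADE
        score += 2
    if systematic_search:      # "Unsure" counts the same as No
        score += 2
    if clear_funding:
        score += 1
    if handles_coi:
        score += 1
    return score

# A publisher meeting every criterion scores the maximum of 8:
print(publisher_score(True, True, True, True, True, True))  # 8
```

Two independent scorers per publisher would each produce a number like this, which could then be compared or averaged.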

I mentioned volunteers above and we’ve recruited a number via emails from Trip. But if you’ve missed them and are interested in helping out then please send an email to

Search tip: Phrase searching, ironing out an anomaly

I had an email relating to phrase searching: it highlighted that a search for “e-learning” was generating huge numbers of irrelevant results.  It appears that a hyphen, within a phrase search, causes confusion!

After a bit of trial and error, the appropriate workaround appears to be to ditch the hyphen and simply search for “e learning”.  The result:

I expect most will agree this is a better, more manageable result!
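Until the anomaly is ironed out properly, a pre-processing step along these lines could strip hyphens inside quoted phrases before a query is submitted. This is just a sketch, not part of Trip’s actual code:

```python
import re

def normalise_phrases(query):
    """Replace hyphens inside double-quoted phrases with spaces,
    so that "e-learning" is searched as "e learning"."""
    return re.sub(r'"([^"]*)"',
                  lambda m: '"' + m.group(1).replace('-', ' ') + '"',
                  query)

print(normalise_phrases('"e-learning" AND outcomes'))  # "e learning" AND outcomes
```

Note that hyphens outside quoted phrases are left alone, so ordinary keyword searches are unaffected.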

Thanks Feargus for highlighting the issue!

Quality and guidelines

In 2011 the Institute of Medicine published Clinical Practice Guidelines We Can Trust, which set out 8 standards:

  1. Establishing transparency
  2. Management of conflict of interest (COI)
  3. Guideline development group composition
  4. Clinical practice guideline–systematic review intersection
  5. Establishing evidence foundations for and rating strength of recommendations
  6. Articulation of recommendations
  7. External review
  8. Updating

There are other checklists available (e.g. see this recent comparison, A Comparison of AGREE and RIGHT: Which Clinical Practice Guideline Reporting Checklist Should Be Followed by Guideline Developers?).

I raise all this because I wonder if we, at Trip, could automatically approximate the quality of guidelines based on the IoM’s 8-point checklist. Given it needs to be automatic, it would rely on a number of rules that could help estimate the likely quality.  Taking the 8 standards, I could see us approximating the following:

  1. Transparency – does it mention funding? This is doable via text-mining.
  2. Conflict of interest – does it mention conflict of interest within the guideline? This is doable via text-mining.
  3. Guideline development group composition – does it mention a multidisciplinary team and/or patient involvement? Potentially doable, but not convinced.
  4. Clinical practice guideline–systematic review intersection – does it mention systematic reviews (a bit more nuanced in reality)? This is doable via text-mining.
  5. Establishing evidence foundations for and rating strength of recommendations – does it rate the strength of evidence? This is probably doable via text-mining.
  6. Articulation of recommendations – does it clearly list recommendations? Potentially doable, but not convinced.
  7. External review – does it mention the review process? Potentially doable, but not convinced.
  8. Updating – does it mention the date and/or updating date? This is doable via text-mining.

So, what I could see us doing is checking each guideline for the following:

  1. Does it mention funding? Y/N
  2. Does it discuss conflict of interest? Y/N
  3. Does it mention systematic reviews? Y/N
  4. Does it discuss the strength of evidence? Y/N
  5. Does it mention recommendations? Y/N
  6. Does it have a date within the guideline? Y/N
  7. Does it mention updating? Y/N

So, we could scan each guideline for all 7 items (although it may end up being just 5, as items 4 and 5 are potentially problematic).  If we go for the ‘simple’ 5, we would be able to rate each guideline on a 5-point scale.
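As a sketch, the ‘simple’ 5-point check could look something like this. The keyword lists are my guesses at what text-mining rules might match, not Trip’s actual rules, and real text-mining would need far more nuance:

```python
# One point per signal found in the guideline's text, giving a 0-5 score.
# Keyword lists are illustrative assumptions only.
SIGNALS = {
    "funding": ["funding", "funded by"],
    "conflict of interest": ["conflict of interest", "competing interest"],
    "systematic review": ["systematic review", "systematic search"],
    "recommendations": ["recommendation"],
    "dated/updated": ["updated", "review date", "date of publication"],
}

def simple_quality_score(text):
    """Return a 0-5 score: one point per signal present in the text."""
    text = text.lower()
    return sum(
        any(keyword in text for keyword in keywords)
        for keywords in SIGNALS.values()
    )

sample = ("Funded by the NHS. Conflict of interest: none declared. "
          "Based on a systematic review. Recommendation 1: ... Last updated 2020.")
print(simple_quality_score(sample))  # 5
```

A guideline mentioning all five signals scores 5; one mentioning none scores 0, which is exactly the rough-and-ready approximation being proposed.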

The question becomes if a guideline mentions funding, conflict of interest etc is that a good indicator (or approximation) for the quality of a guideline? I think it seems fairly reasonable (as long as recommendations are clear) but what do others think?  How might it be improved?


Risk of Bias scores for controlled trials

We’ve been working with RobotReviewer for a number of years. They do two things for us:

  • Highlight all the controlled trials in PubMed with a high degree of accuracy
  • Assess these trials for bias using their amazing automated systems (see this earlier blog when we first started working with them).

RobotReviewer have improved their systems, making bias and trial identification even better, and to ‘celebrate’ this we’ve made some changes to Trip.  We’ve altered the way bias scores are displayed in Trip and we’ve now created a filter so you can choose to show only those trials with a low estimated risk of bias (labelled “Controlled trial quality: predicted high”):


This is a big improvement in helping people easily locate high-quality evidence, so we’re delighted.

Oh yes, for the data nerds, as of a few days ago there were 552,463 controlled trials in PubMed!

Changes on the results page

We’ve had a bit of a re-jig of the results page.  This is the old format (note the Q&A reference at the top and the search suggestions ‘Trip users also search for’ towards the bottom):


The new format has reversed the position of these:

The rationale is simple: the search refinement section has been moved to the top because users might well see a large number of results and want to refine straight away. The Q&A, where it was, was confusing and appeared too early in the search ‘journey’. It makes more sense once a user has gone through some results and is starting to think there may be no answer.

Oh yes, the search refinement area (at the top) is a rollover – if you rollover it expands:

As ever, comments welcome!

When old news makes the headlines

I do some work for BMJ Evidence-Based Medicine and one aspect of this is looking for practice changing research. Part of this is exploring the Altmetrics for each article I find.

One article, with an exceptionally high Altmetric score (1629 as I type this) is Bedtime hypertension treatment improves cardiovascular risk reduction: the Hygia Chronotherapy Trial. The aim of the trial was “to test whether bedtime in comparison to usual upon awakening hypertension therapy exerts better cardiovascular disease (CVD) risk reduction.”  The trial concluded:

Routine ingestion by hypertensive patients of ≥1 prescribed BP-lowering medications at bedtime, as opposed to upon waking, results in improved ABP control (significantly enhanced decrease in asleep BP and increased sleep-time relative BP decline, i.e. BP dipping) and, most importantly, markedly diminished occurrence of major CVD events.

This reminded me of a question we’d been asked in the past. A bit of rummaging around and I found “am or pm what is the optimal time to take once a day antihypertensives?”, which we answered in 2010.  In short, the evidence found – back then – that bedtime was typically the better time to take these anti-hypertensives: giving better blood pressure control and a reduction in events.

Why am I highlighting this?  One angle could be to question the necessity of the trial. I’m acutely aware of the work by the likes of Iain Chalmers and Paul Glasziou on research waste (eg Research waste is still a scandal). However, this is not something I’m in a position to comment on, I feel I don’t have the expertise.

But my fascination is with why it got such a high Altmetric score. I see Altmetrics as a sign of interest and/or newsworthiness. But to me this is not new knowledge. It might now have more ‘power’ but I had thought that bedtime consumption of anti-hypertensives was, typically, already known to be the best thing to do.

So, to me, this demonstrates the difficulty of spreading knowledge. The knowledge has been known for years. The Altmetric score demonstrates there was a real hunger for the knowledge. Yet, somehow, until this new paper came out the knowledge was either inaccessible or clinicians didn’t articulate the question and search for the evidence! Slightly depressing either way….

Interesting search problem!

We’ve had a really interesting problem, one I’m at a loss to answer…

The user was looking for articles on ‘lead’ (the metal) and was clearly getting lots of noise from articles that use non-metal senses of the term, such as:

  • Lead-I ECG devices for detecting sym….
  • …Bone Loss and Leads to Faster…

So, how might we get around this?

I tried lead* NOT leading and it got better results (18,295 results, down from 372,066) but I can’t construct a working search that adds additional terms (e.g. lead* NOT (leading OR leadership)).
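As a client-side workaround, the intent of lead* NOT (leading OR leadership) can at least be emulated by filtering a result set after the fact. The exclusion list and sample titles here are illustrative and would need tuning:

```python
import re

# Exclusion terms approximating: lead* NOT (leading OR leadership OR "leads to" OR Lead-I)
EXCLUDE = re.compile(r'\b(leading|leadership|leads to|lead-i)\b', re.IGNORECASE)

titles = [
    "Blood lead levels in children",
    "Lead-I ECG devices",
    "Bone loss leads to faster functional decline",
    "Leadership in clinical teams",
]

# Keep titles that mention 'lead' but match none of the exclusion terms.
kept = [t for t in titles if "lead" in t.lower() and not EXCLUDE.search(t)]
print(kept)  # ['Blood lead levels in children']
```

The obvious downside is that each new non-metal sense needs its own exclusion term, which is exactly the problem with the NOT-clause approach in the first place.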

Any solutions?

Unblocking Trip’s indexing ‘fatberg’

To get content into Trip it needs to be indexed – another term for processing.  This starts when we add basic records to the site; these can be as simple as a document title, URL and year of publication.  Our systems then process the records before they arrive in the searchable index of records.

The basic records get into our system in two main ways:

  • A monthly manual upload of new content.
  • Automatic grabbing from sites such as PubMed.

Recently we noticed that records from PubMed were not getting into Trip in a timely manner, caused by an indexing ‘fatberg’. To cut a long story short, we have unblocked our indexing pipes and we’re now all back, shipshape and Bristol fashion!

Questions in need of answers… social media addiction and risk of sedative medication

Our community Q&A has two questions needing an answer:

  • What is the best scale for social media addiction?
  • Is there any cross-sectional study on the risks of using sedative medication?

If you know of any relevant literature then please let us know.

As a reminder the Trip Community Q&A tries to link unanswered clinical questions with users likely to know the answer – the hope being that they are best placed to answer it. The idea has been inspired by sites such as Quora.
