Trip Database Blog

Liberating the literature

Quality and guidelines

In 2011 the Institute of Medicine published Clinical Practice Guidelines We Can Trust, which set out 8 standards:

  1. Establishing transparency
  2. Management of conflict of interest (COI)
  3. Guideline development group composition
  4. Clinical practice guideline–systematic review intersection
  5. Establishing evidence foundations for and rating strength of recommendations
  6. Articulation of recommendations
  7. External review
  8. Updating

There are other checklists available (e.g. see this recent comparison, A Comparison of AGREE and RIGHT: Which Clinical Practice Guideline Reporting Checklist Should Be Followed by Guideline Developers?).

I raise all this because I wonder if we, at Trip, could automatically approximate the quality of guidelines based on the IoM’s 8-point checklist. Given it needs to be automatic, it would require a number of rules to help estimate the likely quality. Taking the 8 standards, I could see us approximating the following:

  1. Transparency – does it mention funding? This is doable via text-mining.
  2. Conflict of interest – does it mention conflict of interest within the guideline? This is doable via text-mining.
  3. Guideline development group composition – does it mention a multidisciplinary team and/or patient involvement? Potentially doable, but not convinced.
  4. Clinical practice guideline–systematic review intersection – does it mention systematic reviews (a bit more nuanced in reality)? This is doable via text-mining.
  5. Establishing evidence foundations for and rating strength of recommendations – does it rate the strength of evidence? This is probably doable via text-mining.
  6. Articulation of recommendations – does it clearly list recommendations? Potentially doable, but not convinced.
  7. External review – does it mention the review process? Potentially doable, but not convinced.
  8. Updating – does it mention the date and/or updating date? This is doable via text-mining.

So, what I could see us doing is checking each guideline for the following:

  1. Does it mention funding? Y/N
  2. Does it discuss conflict of interest? Y/N
  3. Does it mention systematic reviews? Y/N
  4. Does it discuss the strength of evidence? Y/N
  5. Does it mention recommendations? Y/N
  6. Does it have a date within the guideline? Y/N
  7. Does it mention updating? Y/N

So, we could scan each guideline for all 7 items (although it may end up being just 5, as items 4 and 5 are potentially problematic). If we go for the ‘simple’ 5, we would be able to rate each guideline on a 5-point scale.
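As a rough illustration of the text-mining involved, below is a minimal sketch of the five ‘simple’ checks. The keyword patterns and the 0–5 score are my own assumptions for discussion, not a validated instrument:

```python
import re

# Hypothetical keyword rules for the five 'simple' checks above. The
# patterns and the 0-5 score are illustrative assumptions for discussion,
# not a validated quality instrument.
CHECKS = {
    "funding": r"\bfund(ing|ed|er)?\b",
    "conflict_of_interest": r"\bconflicts? of interest\b|\bCOI\b",
    "systematic_review": r"\bsystematic reviews?\b",
    "date": r"\b(19|20)\d{2}\b",
    "updating": r"\bupdat(e|ed|ing)\b",
}

def score_guideline(text: str) -> dict:
    """Return a Y/N result for each check plus a 0-5 total score."""
    results = {name: bool(re.search(pattern, text, re.IGNORECASE))
               for name, pattern in CHECKS.items()}
    results["score"] = sum(results.values())
    return results

sample = ("This guideline was funded by ... Conflicts of interest were "
          "declared ... informed by a systematic review ... published in "
          "2019 and due to be updated.")
print(score_guideline(sample))
# -> all five checks True, score 5
```

In practice each rule would need tuning against real guidelines – a bare year, for instance, is a weak proxy for a publication or updating date.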

The question becomes: if a guideline mentions funding, conflict of interest, etc., is that a good indicator (or approximation) of the quality of a guideline? I think it seems fairly reasonable (as long as recommendations are clear), but what do others think? How might it be improved?


Risk of Bias scores for controlled trials

We’ve been working with RobotReviewer for a number of years. They do two things for us:

  • Highlight all the controlled trials in PubMed with a high degree of accuracy
  • Assess these trials for bias using their amazing automated systems (see this earlier blog post from when we first started working with them).

RobotReviewer have improved their systems, making bias and trial identification even better, and to ‘celebrate’ this we’ve made some changes to Trip. We’ve altered the way bias scores are displayed in Trip, and we’ve created a filter so you can choose to show only those trials with a low estimated risk of bias (labelled “Controlled trial quality: predicted high”):

[Screenshot: the new risk of bias filter]
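In spirit, the filter is just a threshold on a predicted probability. A minimal sketch, assuming a hypothetical field name and an illustrative cut-off (neither is RobotReviewer’s or Trip’s actual value):

```python
# Minimal sketch of the filter idea. The field name and the 0.7
# threshold are illustrative assumptions, not actual Trip/RobotReviewer values.
trials = [
    {"pmid": "30000001", "p_low_risk_of_bias": 0.91},
    {"pmid": "30000002", "p_low_risk_of_bias": 0.42},
]

THRESHOLD = 0.7  # assumed cut-off for "predicted high" quality

predicted_high = [t for t in trials if t["p_low_risk_of_bias"] >= THRESHOLD]
print(predicted_high)  # -> only the trial with probability 0.91
```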

This is a big improvement in helping people easily locate high-quality evidence, so we’re delighted.

Oh yes, for the data nerds, as of a few days ago there were 552,463 controlled trials in PubMed!

Changes on the results page

We’ve had a bit of a re-jig of the results page.  This is the old format (note the Q&A reference at the top and the search suggestions ‘Trip users also search for’ towards the bottom):

[Screenshot: the old results page]

The new format has reversed the position of these:

The rationale is simple: the search refinement section has been moved to the top because users might see a large number of results and want to refine straight away. The Q&A, where it was, was confusing and appeared too early in the search ‘journey’. It makes more sense once a user has gone through some results and is starting to think there may be no answer.

Oh yes, the search refinement area (at the top) is a rollover – if you roll over it, it expands:

As ever, comments welcome!

When old news makes the headlines

I do some work for BMJ Evidence-Based Medicine and one aspect of this is looking for practice changing research. Part of this is exploring the Altmetrics for each article I find.

One article, with an exceptionally high Altmetric score (1629 as I type this) is Bedtime hypertension treatment improves cardiovascular risk reduction: the Hygia Chronotherapy Trial. The aim of the trial was “to test whether bedtime in comparison to usual upon awakening hypertension therapy exerts better cardiovascular disease (CVD) risk reduction.”  The trial concluded:

Routine ingestion by hypertensive patients of ≥1 prescribed BP-lowering medications at bedtime, as opposed to upon waking, results in improved ABP control (significantly enhanced decrease in asleep BP and increased sleep-time relative BP decline, i.e. BP dipping) and, most importantly, markedly diminished occurrence of major CVD events.

This reminded me of a question we’d been asked in the past. A bit of rummaging around and I found “am or pm what is the optimal time to take once a day antihypertensives?”, which we answered in 2010. In short, the evidence found – back then – that bedtime was typically the better time to take these antihypertensives, giving better blood pressure control and a reduction in events.

Why am I highlighting this? One angle could be to question the necessity of the trial. I’m acutely aware of the work by the likes of Iain Chalmers and Paul Glasziou on research waste (e.g. Research waste is still a scandal). However, this is not something I’m in a position to comment on; I feel I don’t have the expertise.

But my fascination is why it got such a high Altmetric score. I see Altmetrics as a sign of interest and/or newsworthiness. But to me this is not new knowledge. It might now have more ‘power’, but I had already thought that bedtime consumption of antihypertensives was, typically, the best thing to do.

So, to me, this demonstrates the difficulty of spreading knowledge. The knowledge has been available for years, and the Altmetric score demonstrates there was a real hunger for it. Yet, somehow, until this new paper came out the knowledge was either inaccessible or clinicians didn’t articulate the question and search for the evidence. Slightly depressing either way…

Interesting search problem!

We’ve had a really interesting problem, one I’m at a loss to answer…

The user was looking for articles on ‘lead’ (the metal) and was clearly getting lots of noise from articles that use the term in its non-metal sense, such as:

  • Lead-I ECG devices for detecting sym….
  • …Bone Loss and Leads to Faster…

So, how might we get around this?

I tried lead* NOT leading and it gave better results (18,295 results, down from 372,066), but I can’t construct a working search that adds additional terms (e.g. lead* NOT (leading OR leadership)).
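One possible workaround is to post-filter results with simple contextual rules rather than pure Boolean syntax. A minimal sketch, where the noise patterns are illustrative guesses and not Trip’s actual query language:

```python
import re

# One possible post-filter: drop records where 'lead' looks like the
# verb/ECG sense. The patterns are illustrative guesses, not Trip's
# actual query syntax or ranking rules.
NOISE_PATTERNS = [
    r"\bleads?\s+to\b",   # "...Bone Loss and Leads to Faster..."
    r"\bleading\b",
    r"\bleadership\b",
    r"\blead-i\b",        # "Lead-I ECG devices..."
]

def looks_like_metal(title: str) -> bool:
    """Keep a record only if it mentions 'lead' and no noise pattern fires."""
    text = title.lower()
    if "lead" not in text:
        return False
    return not any(re.search(pattern, text) for pattern in NOISE_PATTERNS)

titles = [
    "Lead-I ECG devices for detecting sym...",
    "Blood lead levels in children",
    "Bone loss and leads to faster decline",
]
print([t for t in titles if looks_like_metal(t)])
# -> ['Blood lead levels in children']
```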

Any solutions?

Unblocking Trip’s indexing ‘fatberg’

To get content into Trip it needs to be indexed – another term for processing. This starts when we add basic records to the site; these can be as simple as a document title, URL and year of publication. Our systems then process the records before they arrive in the searchable index of records.

The basic records get into our system in two main ways:

  • A monthly manual upload of new content.
  • Automatic grabbing from sites such as PubMed (see the sketch below).
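For the automatic grabbing, here is a minimal sketch of the idea using NCBI’s public E-utilities API. The query and the 30-day window are illustrative assumptions; Trip’s actual harvesting pipeline is not public:

```python
import requests

# A minimal sketch of "automatic grabbing" via NCBI's public E-utilities
# API. The query and the 30-day window are illustrative assumptions;
# Trip's actual harvesting pipeline is not public.
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def fetch_recent_pubmed_ids(query: str, days: int = 30) -> list:
    """Return PubMed IDs for records added in the last `days` days."""
    params = {
        "db": "pubmed",
        "term": query,
        "datetype": "edat",  # Entrez date, i.e. when the record was added
        "reldate": days,
        "retmax": 100,
        "retmode": "json",
    }
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

# The returned PMIDs become basic records, which then go through indexing.
print(fetch_recent_pubmed_ids("randomized controlled trial[pt]"))
```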

Recently we noticed that records from PubMed were not getting into Trip in a timely manner, caused by an indexing ‘fatberg’. To cut a long story short, we have unblocked our indexing pipes and we’re now all back, shipshape and Bristol fashion!

Questions in need of answers… social media addiction and risk of sedative medication

Our community Q&A has two questions needing an answer:

  • What is the best scale for social media addiction?
  • Is there any cross-sectional study about the risks of using sedative medication?

If you know of any relevant literature then please let us know.

As a reminder the Trip Community Q&A tries to link unanswered clinical questions with users likely to know the answer – the hope being that they are best placed to answer it. The idea has been inspired by sites such as Quora.

Anatomy of a clinical question

When starting the community Q&A I was worried that people would be lazy, not search themselves and simply ask the question. This has not been the case. The overwhelming majority of questions have been complex and legitimately not fully answerable via Trip! Here are a few examples, with my interpretation underneath:

In children with pharyngitis, how useful are prediction tools to select whom to perform microbiological tests?

This was difficult to answer for a couple of reasons:

  • For the person asking the question: he was Spanish, so I wonder about his knowledge of English synonyms. He asked about ‘prediction tools’, whereas the (English) literature tends to use the phrase ‘decision rules’. Searching like that starts from a point of great disadvantage!
  • From Trip’s perspective, we simply did not index the main document, from CADTH. We tend to grab all their content, so I’ll need to understand why it’s not in our index!

An adult patient developed insomnia due to metoprolol; is there a beta-blocker less associated with causing insomnia?

The main issue is the lack of clarity in the evidence – as mentioned in the answer, it is often contradictory. So, was the motivation of the questioner to try to get a more definitive answer than the literature suggests?

What is the possible effect of giving an ACE inhibitor + NSAID + diuretics in elderly patients?

We’ve actually been asked this question twice! This is a problematic search due to the issue of synonyms. But, for me, the disparate nature of the literature is the key problem. There are a number of references, none of which is definitive, and that induces an uncertainty all of its own. Clearly a review is needed, and this is what the community Q&A provided; without it, it’s tough for a health professional to start making sense of the literature.

I feel there should be more fundamental lessons but I clearly need more thinking time!

Community Q&A, an update

This has been such an interesting experiment with loads of learning.

For those unfamiliar with the community Q&A, the idea is this: if you have a clinical question and you can’t find the answer on Trip, then leave the question (link found here). We then encourage other Trip users to answer it. The idea is simple: a question might be difficult for the person asking, but for an expert in the field it could well be easy. So, Trip’s role is to match the question to the expert.

Some reflections after a couple of months:

Errors

  • Probably the biggest error is how we ‘present’ the system: it appears to be confusing, so people don’t really get it. When users first encounter it (at the top of the results) they aren’t really sure what to make of it. So, we need to improve how we explain the system (any help out there?).

Learning points

  • People often don’t leave structured clinical questions, which can lead to ambiguity – hardly ideal.
  • So far, very few people – apart from in-house Trip – have answered the questions. We’ve had input via Twitter and email, but we need to do more to make it scalable.
  • People have tended to ask difficult questions, which is what the system is for. It’s great learning for us, as it helps remind us of the complexity of answering clinical questions.

It’s not gone as planned or anticipated, but this is something we can take our time with. We’ll hopefully be pushing out some changes in the near future.

If you have any thoughts on how we might improve it, please let us know.
