
Trip Database Blog

Liberating the literature

Understanding (bad) search results

Given the recent interest in guidelines and Trip we’ve had a number of new conversations with users from the USA. I’ll reproduce one (slightly edited) below, as I think it’s broadly useful for others to understand a fairly common search problem. In this case the person highlighted how frustrating searching for guidelines can be, so here’s my response:

 

Firstly, as previously mentioned, guidelines are particularly problematic as they are typically long and cover a broad range of topics. This creates two different problems – missing them and noise!

Missing them

So, you might have a guideline ‘The diagnosis and management of hypertension‘ which covers loads of areas. It might mention gestational hypertension in a chapter, but nothing more. If you searched for gestational hypertension this guideline would not feature highly, as ‘gestational’ is not mentioned in the title (our system favours matches in the title). So, it appears low down in the search results (even though that chapter might be the best result there is for the user). This is frustrating and we have a few things we’re going to try in the near future to improve this! But your problem is more the noise…
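As an aside for the technically minded, here’s a minimal sketch of what ‘favouring matches in the title’ might look like. The weights and document fields below are invented for illustration – this is not our actual ranking code:

```python
# Illustrative sketch only - not Trip's actual ranking code.
# The weights and the title/body fields are invented for the example.
TITLE_WEIGHT = 3.0
BODY_WEIGHT = 1.0

def relevancy(doc, terms):
    """Score a document; matches in the title count more than body matches."""
    score = 0.0
    for term in terms:
        if term in doc["title"].lower():
            score += TITLE_WEIGHT
        if term in doc["body"].lower():
            score += BODY_WEIGHT
    return score

# A guideline that only mentions 'gestational' in a chapter scores low
# for the search 'gestational hypertension' because the title boost
# only applies to 'hypertension':
guideline = {
    "title": "The diagnosis and management of hypertension",
    "body": "... a chapter on gestational hypertension ...",
}
print(relevancy(guideline, ["gestational", "hypertension"]))  # 5.0 (vs 8.0 for a full title match)
```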

Noise

Take one of your example searches, safety mechanical ventilation: when you search on the main Trip (so no refinement) you get the following results:

I’ve highlighted the scores – which relate to how relevant the document is to the search terms (BTW only I get to see those scores). But when you refine to USA Guidelines it gets noisy:

You’ll see that, from result 3, the relevancy goes right down. In fact, only 2 (of the 85 USA guidelines we return) look reasonable.

This comes back to the size of guidelines. They’re long and invariably the search terms appear within them – but they might mention safety, mechanical and ventilation all in different contexts. But as they include the terms they are counted as ‘hits’ and returned. Search can be stupid!!

The thing we think we’ll introduce is a relevancy cut-off, so we’d only return documents with a relevancy score above a certain threshold – 0.1 or 0.2, say (we’d need to test) – but allow users to ‘see ALL documents’ if they want the noise!
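For the technically curious, a minimal sketch of what that cut-off might look like – the threshold value and result structure here are assumptions for illustration, not our implementation:

```python
# Minimal sketch of the proposed relevancy cut-off; the threshold value
# and the result structure are assumptions, not Trip's implementation.
RELEVANCY_CUTOFF = 0.1  # we'd need to test 0.1 vs 0.2

def filter_results(results, show_all=False):
    """Hide low-relevancy 'noise' unless the user asks to see ALL documents."""
    if show_all:
        return results
    return [r for r in results if r["score"] > RELEVANCY_CUTOFF]

results = [
    {"title": "Ventilator safety guideline", "score": 0.62},
    {"title": "Broad critical-care guideline", "score": 0.04},  # noise
]
print(len(filter_results(results)))                 # 1 - noise hidden
print(len(filter_results(results, show_all=True)))  # 2 - user opted into the noise
```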


Latest and greatest

This is an often overlooked feature of Trip, so it’s about time I highlighted it.

Latest and greatest takes a topic and looks at the latest evidence for the topic and also the ‘greatest’ – by that, the articles that have been clicked on most for the last 12 months.  A list of topics can be found here but you can access the latest and greatest for any topic via the link at the top of any particular search, for example:

We particularly like the ‘greatest’ side of the feature as it allows users to easily see the articles deemed most useful/interesting for a given topic. A bit like a topic-based clinical zeitgeist!
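For those wondering how such a list might be computed, here’s a hypothetical sketch – the click-log format and field names are invented for illustration, not how Trip actually stores clicks:

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical sketch of deriving a 'greatest' list from click logs.
# The log format (dicts with topic/article_id/clicked_at) is invented
# for illustration - this is not how Trip actually stores clicks.
def greatest(click_log, topic, top_n=10):
    """Return the most-clicked article IDs for a topic over the last 12 months."""
    cutoff = datetime.now() - timedelta(days=365)
    clicks = Counter(
        entry["article_id"]
        for entry in click_log
        if entry["topic"] == topic and entry["clicked_at"] >= cutoff
    )
    return [article_id for article_id, _ in clicks.most_common(top_n)]
```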

Free and easy to use.

Some examples below:

Autism

For the full list, click here.

Multiple sclerosis

For the full list, click here.

Schizophrenia

For the full list, click here.

The post-NGC landscape, a sample of US guidelines added to Trip this month

I’ve written about our attempts to mitigate the loss of the National Guideline Clearinghouse (even producing a ‘conversion’ guide on how to use Trip). But below is a sample of the new guidelines added to Trip this month:

As mentioned, the above is a sample, a modest sample 🙂

Searching for guidelines post-NGC

This post is to help National Guideline Clearinghouse (NGC) users navigate Trip to find the guidelines they need.

Firstly, Trip links to over 3,500 guidelines from the USA (and over 10,000 guidelines in total). The NGC used to provide summaries for fewer than half this number (for a variety of reasons). But Trip is much more than guidelines: we include a broad range of resources arranged around the evidence hierarchy – as you use Trip you’ll come to appreciate this.

Another thing to consider is that Trip is a very small organisation with a budget a fraction of the NGC’s, so we are not able to mimic all of the NGC’s sophisticated search refinements. We are funded via a freemium business model (to understand the differences see the chart here). Note that the guidelines are provided for free, but please consider subscribing to help support our efforts (individual and organisational subscriptions are available).

Anyway, enough preamble. To search for guidelines, navigate to Trip and you’ll see this screen:

I suspect it’s superfluous but I’ve added a big arrow showing where you add the search terms.  Once you’ve searched you go to the results page:

These are the results for all our content, so you may want to refine the search to guidelines or USA guidelines. This is easy:

The refine feature is on the right-hand side of the results page. This allows you to refine results by any evidence type, but the two highlighted are for all guidelines and USA guidelines.  If you click on the USA guidelines the results are restricted to just those:

 

Simple!

You can further refine by year (see towards the bottom of the refine area on the results page).  Trip Pro also allows advanced search and refine by clinical area.

For further information on using Trip we have produced a selection of ‘how to’ videos and you may find the ‘Tour’ interesting:

Any further questions, just send them my way: jon.brassey@tripdatabase.com

 

PubMed: Cited by feature

I’d not noticed the ‘cited by N systematic reviews’ feature in PubMed before:

I’m thinking that might be useful!

 

Trip Overviews of Evidence, more examples

A further list of automated reviews for you to browse:

Question answering – the next step(s) for Trip

As well as improvements to our automated review system we’re planning more improvements to the site, and they’ll focus on getting back to our roots – clinical Q&A. Trip was born out of a need to help support a formal, manual clinical question answering service (called ATTRACT), and answering clinical questions is still the main reason people use Trip.

We’re looking to build on lots of separate features we already have in Trip:

But we hope to bring in a number of other techniques, ranging from machine learning to community support.

We can’t wait…

Dipping your TOE in the ocean of evidence

A long-standing issue with our automated review system (see these examples for acne and migraine) is trying to understand where it fits in the evidence ‘arena’. In other words, how do we position it so people understand what it is and how they might benefit from it?

To help us we’ve asked a number of colleagues about the system and how they might use it. Three bits of feedback, all from doctors, encapsulate the thinking:

Doctor One
A super fast (but not exhaustive nor systematic) screening tool to search for useful (or not useful) therapies.

e.g. If I have a patient with X disorder and I am familiar with 1 or 2 therapies yet the patient is not responding and is willing to try other alternatives. This seems like a much quicker way of getting potentially useful alternatives (and afterwards begin a more detailed search based on suggested trials) than reading pages and pages of pubmed results.

Doctor Two
For me, as a GP, I wouldn’t trust the results of this to decide on what to do. But that’s probably not the point (and I’d go to a guideline or systematic review anyway). The system is great for exploring evidence, being able to visualise the evidence-space. I think the title ‘auto-synthesis’ probably doesn’t do the tool any favours, since you’ll just get a load of people saying ‘no it’s not…’ (not that being controversial is necessarily a bad thing!) If you do a PubMed search for something you’ll get 100s of results, and it’s totally unmanageable. Here you have a system which presents a single visualisation, which prioritises RCTs and SRs (so up the pyramid), makes some assessment of quality (to help prioritise), and auto-does the PICO bit. All very cool, very useful, and impactful, but just maybe a tweak to the marketing/usage message.

Doctor Three

Personally I’ve found it clinically useful lately in a couple of ways…
1) A good short-cut to see what treatments have been studied for a condition
2) Related to #1, I suppose, I’ve also found it a quick way to find out if a PARTICULAR intervention has been studied – e.g., for a patient with delirium, I was wondering whether melatonin had been studied for hospitalized elderly patients, so after searching on delirium and melatonin (https://www.tripdatabase.com/autosynthesis?criteria=delirium&lang=en), I was able to search further by expanding the Melatonin bubble. I find it particularly useful to be able to expand the bubbles, then link directly to PubMed article entries.

So, all say roughly the same thing – it’s an evidence exploration tool. Imagine if you searched for ‘acne’ on Trip, Medline, Google etc. Each gives you search results but no sense of the evidence base in that area.

So, to us, it seems like an evidence exploration tool – but is it actually an evidence map? We did play with the idea of Trip Overviews of Evidence (TOE) but we’re not sure! We’ve had various suggestions – please help us pick:

One other suggestion is very good, but the acronym is less so: Automated Review and Synthesis of Evidence.

If you’ve anything else to add then either email me (jon.brassey@tripdatabase.com) directly or leave a comment below.

Autosynthesis – an example of the significant challenges ahead?

This was a sobering exercise.

As part of the update of Trip I came across this article: Efficacy of 8 Different Drug Treatments for Patients With Trigeminal Neuralgia: A Network Meta-analysis. So, I excitedly went to see how well our automated review system did for trigeminal neuralgia. On an initial examination, of the 8 interventions we did well in just one – so, 1 out of 8 – that’s a fail in anyone’s book. However, it’s not as it first seems…

Lidocaine – we gave it an overall score of 0.01 (a pretty neutral score). This was based on three very small studies, and we discount really small studies due to their inherent unreliability. The network meta-analysis (NMA) also referenced three studies (but not the same three!):

Of which our system only incorporated the top one.  We included two others:

What confuses me is that the two references we didn’t find – from the network meta-analysis (NMA) – are not specifically about trigeminal neuralgia. So, I’m thinking our result is potentially better than theirs!!  I’ve emailed the author for clarification!

Botulinum toxin type A – we scored it as 0.45 (maximum score is 1) so it fits with their analysis.

Carbamazepine – A big failure on our part, we scored it -0.03. We included two studies of carbamazepine, neither of which belonged there. So, we should have reported no trials. It should not have even featured in our results.

Tizanidine – We scored it -0.03; our system found a single trial, A clinical and experimental investigation of the effects of tizanidine in trigeminal neuralgia, which was very small and reported “The limited efficacy of TZD“. It scores near zero as, due to its size, we consider it unreliable and therefore discount the score.

The actual NMA referenced one other study, Tizanidine in the management of trigeminal neuralgia. This is not in the Trip index (a failure of our RCT system, as it is included in PubMed). And that paper reported “The results indicate that tizanidine was well tolerated, but the effects, if any, were inferior to those of carbamazepine.” – hardly a glowing endorsement of the efficacy of tizanidine!

I actually think our assessment is reasonable, and it seems a stretch for the paper to report tizanidine as superior to placebo (even if they don’t claim statistical significance).

Lamotrigine – we found no trials. Trip includes one of the trials the NMA included, Lamotrigine (lamictal) in refractory trigeminal neuralgia: results from a double-blind placebo controlled crossover trial, but for some reason it wasn’t tagged properly. Something to investigate.

Oxcarbazepine – we found no trials and Trip includes no trials, so our system didn’t fail; rather, Trip doesn’t contain all published clinical trials.

Pimozide – we found no trials. Trip includes one of the trials, Pimozide therapy for trigeminal neuralgia, but for some reason it wasn’t tagged properly. Something to investigate.

Proparacaine – We scored it -0.07 and the NMA reported it as no better than placebo. In hindsight I think this is what our system found. The system compares interventions with placebo: towards 1 = better than placebo, -1 = worse than placebo and 0 = similar to placebo.
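To make the scale (and the small-study discount mentioned under lidocaine and tizanidine) concrete, here’s a toy sketch – the linear discount and its reliable_n parameter are invented for illustration, not our actual method:

```python
# Toy sketch of the scoring scale described above: towards +1 = better
# than placebo, -1 = worse, 0 = similar. The linear discount and the
# reliable_n parameter are invented for illustration, not Trip's method.
def discounted_score(raw_score, n_participants, reliable_n=200):
    """Shrink a trial's effect score towards 0 when the trial is small."""
    weight = min(n_participants / reliable_n, 1.0)
    return raw_score * weight

# A strongly positive but tiny trial ends up near zero, much like the
# tizanidine example above:
print(discounted_score(0.8, 10))    # ~0.04 - heavily discounted
print(discounted_score(0.45, 300))  # 0.45 - large enough, kept as-is
```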

So, having gone through each entry, I actually think our system did better than it first appeared.

Correct

  • Botulinum toxin type A
  • Proparacaine

Uncertain, I think our system did better than the paper (on the evidence I’ve seen)

  • Lidocaine
  • Tizanidine

Wrong, due to finding no trials when there were no trials in Trip, and therefore not reporting the intervention (not too bad, as we didn’t make any claim on efficacy)

  • Oxcarbazepine

Wrong, due to finding no trials because we missed trials that are in Trip, and therefore not reporting the intervention (not too bad, as we didn’t make any claim on efficacy)

  • Lamotrigine
  • Pimozide

Failure, due to us falsely including two trials and making a ‘claim’ for its efficacy. It should not have featured at all!

  • Carbamazepine

Conclusion: When I first looked I was fairly depressed by the results. However, now I’ve understood them I’m actually quite pleased. Of the eight interventions in the NMA we only clearly got one wrong (Carbamazepine), where we wrongly assigned a score. We omitted giving a score for three (though we should have scored two of those, Lamotrigine and Pimozide); however, as that does not create any prediction by our system I’m fairly relaxed about it – but I will still investigate why. There are two unclear results (Lidocaine and Tizanidine) where I actually think our results are better – but I will wait to see what the authors report back.

Interestingly the CKS guidance on trigeminal neuralgia (sorry only available in the UK) suggests using carbamazepine as the first line, before stating:

If carbamazepine is contraindicated, ineffective, or not tolerated, seek specialist advice. Do not offer any other drug treatment unless advised to do so by a specialist.

This indicates a lack of faith in any other intervention! CKS reference the NICE guidance on Neuropathic pain in adults which has a section “2.3 Carbamazepine for treating trigeminal neuralgia” which reports:

Carbamazepine has been the standard treatment for trigeminal neuralgia since the 1960s. Despite the lack of trial evidence, it is perceived by clinicians to be efficacious. Further research should be conducted as described in the table below.

So, it’s not surprising there are no trials but the recommendation itself seems to lack an evidence base.

Bottom line: Initially a ‘fail’ but actually a ‘reasonable pass’.

 
