Our PICO search is really popular but it hasn’t changed since it was introduced years ago. For those unfamiliar with it, this is what it looks like:
Our search works on a contingency basis. Assuming the user enters four search terms (a P, I, C and O), we do an initial search looking for all the terms in the title only. If there aren’t enough results we repeat with P, I and C as a title search and O as title and text. If there are still too few, we do P and I as title and C and O as title and text, and so on.
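As a rough sketch, the cascade above might look like this (the `search` callable, the 20-result threshold and all the names here are illustrative assumptions, not Trip’s actual code):

```python
MIN_RESULTS = 20  # assumed threshold for "enough" results


def contingency_search(search, p, i, c, o):
    """Run the contingency cascade: start with all four terms as
    title-only matches, then move terms into title-or-text one by one
    until enough results come back.

    `search(title_terms, text_terms)` stands in for the real index query.
    """
    terms = [p, i, c, o]
    results = []
    for split in range(len(terms), 0, -1):
        title_terms = terms[:split]  # must match in the title
        text_terms = terms[split:]   # may match in title or text
        results = search(title_terms, text_terms)
        if len(results) >= MIN_RESULTS:
            break
    return results
```

Each pass relaxes the constraint on exactly one term, which is why the results tend to stay focused: the loosest search only runs when everything stricter has failed.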
It seems to work well, but I’m sure it could be improved, especially given our work with the KConnect project – where we specifically annotate articles with the PICO elements (albeit only the RCTs and systematic reviews).
But as we look to overhaul it we need your help…
Do you use the PICO search? Is it perfect? If not, how might it be improved?
Given the recent interest in guidelines and Trip we’ve had a number of new conversations with users from the USA. I’ll reproduce (well, slightly edited) one below as I think it’s broadly useful for others to understand a fairly common search problem. In this case the person highlighted how frustrating searching for guidelines can be, so here’s my response:
Firstly, as previously mentioned, guidelines are particularly problematic as they are typically long and cover a broad range of topics. This creates two different problems – missing them and noise!
So, you might have a guideline ‘The diagnosis and management of hypertension‘ which covers loads of areas. It might cover gestational hypertension in a chapter, but nothing more. If you searched for gestational hypertension this guideline would not feature highly, as ‘gestational’ is not mentioned in the title (our system favours matches in the title). So it appears low down in the search results, even though that chapter might be the best result there is for the user. This is frustrating, and we have a few things we’re going to try in the near future to improve it! But your problem is more the noise…
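A toy example of why title weighting buries the guideline (the weights and scoring function are made up for illustration; Trip’s actual ranking is more sophisticated):

```python
TITLE_WEIGHT, TEXT_WEIGHT = 3.0, 1.0  # illustrative weights only


def score(doc, query_terms):
    """Score a document, favouring matches in the title."""
    s = 0.0
    for term in query_terms:
        if term in doc["title"].lower():
            s += TITLE_WEIGHT
        elif term in doc["text"].lower():
            s += TEXT_WEIGHT
    return s


guideline = {
    "title": "The diagnosis and management of hypertension",
    "text": "Chapter 12: gestational hypertension ...",
}
focused_article = {
    "title": "Gestational hypertension: a review",
    "text": "",
}
query = ["gestational", "hypertension"]
# The broad guideline scores 1 + 3 = 4; the focused article scores 3 + 3 = 6,
# so the guideline sinks even though its chapter may be the best answer.
```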
Take one of your example searches, safety mechanical ventilation. When you search on the main Trip (so no refinement) you get the following results:
I’ve highlighted the scores – which relate to how relevant the document is to the search terms (BTW only I get to see those scores). But when you refine to USA Guidelines it gets noisy:
You’ll see that, from result 3 onwards, the relevancy goes right down. In fact, only 2 (of the 85 USA guidelines we return) look reasonable.
This comes back to the size of guidelines. They’re long, and invariably the search terms appear within them – but they might mention safety, mechanical and ventilation all in different contexts. Because they include the terms they are counted as ‘hits’ and returned. Search can be stupid!!
The thing we think we’ll introduce is a relevancy cut-off – say, only return documents with a relevancy score above 0.1 or 0.2 (we’d need to test) – but allow users to ‘see ALL documents’ if they want the noise!
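As a minimal sketch, assuming results carry relevancy scores like the ones shown above (the cut-off value and field names are illustrative, not settled):

```python
def filter_by_relevancy(results, cutoff=0.1, show_all=False):
    """Drop documents below the relevancy cut-off, unless the user
    explicitly asks to 'see ALL documents'."""
    if show_all:
        return results
    return [doc for doc in results if doc["score"] > cutoff]


hits = [
    {"title": "Safety of mechanical ventilation", "score": 0.62},
    {"title": "Broad guideline mentioning all three terms", "score": 0.04},
]
# By default only the relevant hit survives; show_all=True returns the noise too.
```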
This is an often overlooked feature of Trip, so it’s about time I highlighted it.
Latest and greatest takes a topic and shows the latest evidence for it and also the ‘greatest’ – that is, the articles that have been clicked on most over the last 12 months. A list of topics can be found here, but you can access the latest and greatest for any topic via the link at the top of any particular search, for example:
We particularly like the ‘greatest’ side of the feature as it allows users to easily see the articles deemed most useful/interesting for a given topic. A bit like a topic-based clinical zeitgeist!
I’ve written about our attempts to mitigate the loss of the National Guideline Clearinghouse (even producing a ‘conversion’ guide on how to use Trip). But below is a sample of the new guidelines added to Trip this month:
This post is to help National Guideline Clearinghouse (NGC) users navigate Trip to find the guidelines they need.
Firstly, Trip links to over 3,500 guidelines from the USA (and over 10,000 guidelines in total). The NGC used to provide summaries of fewer than half this number (for a variety of reasons). But Trip is much more than guidelines; we include a broad range of resources arranged around the evidence hierarchy – as you use Trip you’ll come to appreciate this.
Another thing to consider is that Trip is a very small organisation with a budget a fraction of the NGC’s, so we are not able to mimic all of the NGC’s sophisticated search refinements. We are funded via a freemium business model (to understand the differences see the chart here). Note that the guidelines are provided for free, but please consider subscribing to help support our efforts (individual and organisational subscriptions are available).
Anyway enough preamble, to search for guidelines navigate to Trip and you’ll see this screen:
I suspect it’s superfluous, but I’ve added a big arrow showing where you add the search terms. Once you’ve searched you go to the results page:
These are the results for all our content, so you may want to refine the search to guidelines or USA guidelines. This is easy:
The refine feature is on the right-hand side of the results page. This allows you to refine results by any evidence type, but the two highlighted are for all guidelines and USA guidelines. If you click on the USA guidelines the results are restricted to just those:
You can further refine by year (see towards the bottom of the refine area on the results page). Trip Pro also allows advanced search and refine by clinical area.
For further information on using Trip we have produced a selection of ‘how to’ videos and you may find the ‘Tour’ interesting:
As well as improvements to our automated review system, we’re planning more improvements to the site, and they’ll focus on getting back to our roots – clinical Q&A. Trip was born out of a need to support a formal, manual clinical question-answering service (called ATTRACT), and answering clinical questions is still the main reason people use Trip.
We’re looking to build on lots of separate features we already have in Trip:
A long-standing issue with our automated review system (see these examples for acne and migraine) is trying to understand where it fits in the evidence ‘arena’. In other words, how do we position it so people understand what it is and how they might benefit from it?
To help us we’ve asked a number of colleagues about the system and how they might use it. Three bits of feedback, all from doctors, encapsulate the thinking:
Doctor One A super fast (but not exhaustive nor systematic) screening tool to search for useful (or not useful) therapies.
e.g. if I have a patient with X disorder and I am familiar with one or two therapies, yet the patient is not responding and is willing to try other alternatives. This seems like a much quicker way of getting potentially useful alternatives (and afterwards beginning a more detailed search based on the suggested trials) than reading pages and pages of PubMed results.
Doctor Two For me, as a GP, I wouldn’t trust the results of this to decide on what to do. But that’s probably not the point (and I’d go to a guideline or systematic review anyway). The system is great for exploring evidence, being able to visualise the evidence-space. I think the title ‘auto-synthesis’ probably doesn’t do the tool any favours, since you’ll just get a load of people saying ‘no it’s not…’ (not that being controversial is necessarily a bad thing!) If you do a PubMed search for something you’ll get 100s of results, and it’s totally unmanageable. Here you have a system which presents a single visualisation, which prioritises RCTs and SRs (so up the pyramid), makes some assessment of quality (to help prioritise), and auto-does the PICO bit. All very cool, very useful, and impactful, but just maybe a tweak to the marketing/usage message.
Personally, I’ve found it clinically useful lately in a couple of ways… 1) a good short-cut to see what treatments have been studied for a condition; 2) related to #1, I suppose, I’ve also found it a quick way to find out if a PARTICULAR intervention has been studied – e.g., for a patient with delirium, I was wondering whether melatonin had been studied in hospitalized elderly patients, so after searching on delirium and melatonin (https://www.tripdatabase.com/autosynthesis?criteria=delirium&lang=en), I was able to search further by expanding the Melatonin bubble. I find it particularly useful to be able to expand the bubbles, then link directly to PubMed article entries.
So, they all say roughly the same thing – it’s an evidence exploration tool. Imagine if you searched for ‘acne’ on Trip, Medline, Google etc. You’d get search results but no sense of the evidence base in that area.
So, to us, it seems like an evidence exploration tool – but is it actually an evidence map? We did play with the idea of Trip Overviews of Evidence (TOE) but we’re not sure! We’ve had various suggestions – please help us pick:
One other suggestion is really good, but the acronym is less so: Automated Review and Synthesis of Evidence.
If you’ve anything else to add then either email me (firstname.lastname@example.org) directly or leave a comment below.