Trip Database Blog

Liberating the literature



Automating PICO and searching

I have the great honour of being part of the KConnect consortium, recipient of an EU Horizon 2020 innovation grant. Trip is involved in a number of great projects within KConnect and I plan to blog about them all over the course of the year. The first to feature is enhancing our PICO interface.

Asking questions may seem straightforward, but it can be difficult; helping users understand the key elements of their question typically gives it better structure. PICO stands for:

  • P = Population (eg what condition the user has)
  • I = Intervention (eg a drug, diagnostic test)
  • C = Comparison (eg an alternate drug or test)
  • O = Outcome (eg mortality, QoL)

Take these two real questions:

  • How can you safely treat constipation in pregnancy?
  • In diabetes would an AIIRA benefit over an ACE? 

In the top question, P = pregnancy and O = constipation. Alternatively, the population could be pregnancy and constipation together.

The second question is more complicated, but P = diabetes, I = AIIRA and C = ACE inhibitors.

You’ll note that questions don’t need all four elements; it’s a flexible concept! Irrespective of the number of PICO elements, it can be really useful in helping users think through the key elements of their question.

From user feedback I hear time and time again that the PICO interface is great and really helps health professionals think through their questions.

KConnect is helping us improve it still further! We will simply allow users to type out their question in full and press search. We will automatically attempt to identify the PICO elements and then pass those elements to our search. By highlighting the suggested PICO elements it will teach users by experience what the PICO elements are, as well as speeding up the question-answering process.
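As a rough illustration of the idea – not Trip’s actual implementation, which will use machine learning and semantic annotation – a naive keyword-based tagger might look like this. The term lists are entirely hypothetical:

```python
# Naive, rule-based sketch of PICO element tagging.
# A real system would use machine learning and semantic
# annotation; the term lists here are illustrative only.

PICO_TERMS = {
    "P": ["pregnancy", "diabetes", "constipation"],  # population/condition
    "I": ["aiira", "exercise"],                      # intervention
    "C": ["ace"],                                    # comparison
    "O": ["mortality", "qol"],                       # outcome
}

def tag_pico(question):
    """Return a dict mapping PICO elements to terms found in the question."""
    words = question.lower().replace("?", "").split()
    found = {}
    for element, terms in PICO_TERMS.items():
        hits = [t for t in terms if t in words]
        if hits:
            found[element] = hits
    return found

print(tag_pico("In diabetes would an AIIRA benefit over an ACE?"))
```

Note that for the constipation-in-pregnancy question this crude approach would place both terms under P, matching the observation above that the population could be pregnancy and constipation together.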

A further minor step – which might be really interesting – is to record the full question and the articles the user subsequently clicked on.  It’s not quite the same as a full answer, but a ‘half way house’.

We’ve a good few months of work ahead on this, using various techniques: machine learning, semantic annotation and hard work.

I’ll keep you posted.

Peer-review and journals

Richard Smith (who used to edit the BMJ) has just posted on Facebook:

Publishing in journals is a slow, balls aching process that adds no value. Only academics who need the “points” bother. Far better to blog.

Richard has long argued against the peer-review process and has written two blogs, worth reading, on the subject.

A connected article, by John Ioannidis, How to Make More Published Research True, makes a number of assertions, two of which are selectively shown below:

  • Currently, many published research findings are false or exaggerated, and an estimated 85% of research resources are wasted.
  • Modifications need to be made in the reward system for science, affecting the exchange rates for currencies (e.g., publications and grants) and purchased academic goods (e.g., promotion and other academic or administrative power) and introducing currencies that are better aligned with translatable and reproducible research.

I’ve often marveled at the connected worlds of academia and publishing – two worlds with a symbiotic relationship; one without the other wouldn’t work. I am on a few online academic paper repositories and I’m always getting emails when people I follow have published a new article. I’m staggered by how often they can churn them out.

I then look at the wonderful EvidenceUpdates (a service funded by the BMJ and supplied by HIRU at McMaster). They scan the ‘top’ 120 journals and do an in-house quality assessment (a form of critical appraisal), and those that pass are sent to a network of clinicians who assess the papers for newsworthiness and relevance to clinical practice. Amazingly, around 95-96% are rejected. While I’m not suggesting all of these are junk, I suspect the majority are little more than vanity publishing: academics wanting another article for their CV and publishers desperate for content to justify the purchase price.

My own world of Q&A frequently shows how poorly aligned academia is with coal-face clinical information need.  It was one of the reasons I got involved in the setting up of DUETs (DUETs publishes treatment uncertainties from patients, carers, clinicians, and from research recommendations, covering a wide variety of health problems.) The idea is to grab ‘real’ questions with a view to improving research procurement.  There is a suspicion that academics pursue their own interests which may not be aligned with clinical need.  So, DUETs allows for the ‘real’ questions to be raised.  Working with James Lind Alliance it can be a powerful combination.

Creating a Q&A environment in Trip

Those of you who’ve followed this blog for a while will have seen that I’m always revisiting the answer engine concept, most recently two months ago. A month before that I mentioned it in the context of a Journal of Clinical Q&A.

This all stems from my belief that Trip is a wonderful tool to answer clinical questions, but also a belief that it could be even better! After all, it was the reason I started it in the first place – to help me answer clinical questions via the ATTRACT Q&A service. Surveys have shown that many clinicians agree, with over 70% of questions supporting clinical care helped by using Trip.

Recapping briefly on the answer engine and the Journal of Clinical Q&A:

  • The answer engine will try to predict questions from the search terms and insert an answer above the search results.  Users will get an answer in one click.
  • Journal of Clinical Q&A is a journal idea – radically different from any other journal.  It will be a structured answer to a clinical question, posted on the site (and helping populate the answer engine) which will be peer-reviewed and given a citation.
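At its simplest, the answer engine idea in the first bullet could work as a lookup from normalised search terms to a previously written answer. This is a minimal sketch only – the Q&A store and matching rule are hypothetical, not how Trip will actually implement it:

```python
# Minimal sketch of the answer-engine idea: map normalised search
# terms to a previously written answer, to be shown above the
# search results. The store and matching rule are hypothetical.

QA_STORE = {
    frozenset(["exercise", "depression"]):
        "Exercise appears to offer a modest benefit in depression.",
}

def answer_for(search_terms):
    """Return a stored answer matching the search terms, if any."""
    key = frozenset(t.lower() for t in search_terms.split())
    return QA_STORE.get(key)

# Word order doesn't matter, as the terms are treated as a set.
print(answer_for("depression exercise"))
```

A real system would need fuzzier matching (synonyms, spelling variants), but the one-click principle is the same: if a stored answer matches, show it first.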

So far, fairly radical and fairly good.

Now, another variable to consider – the PICO search system.  In the forthcoming upgrade we’ll be enhancing this feature in the premium version.  It will be more guided than the existing version and it could work like this:

  1. Users type in their full-text question.
  2. Users then select the PICO elements from the question.
  3. Users view relevant results.
  4. Users are given the option to write up an answer. If they write up the answer we will show them the articles they’ve looked at and they can indicate which were useful (and thereby form the reference list).
  5. They can choose to keep it private or share it – feeding the answer engine.

Another powerful component for a Q&A environment – what could go wrong (I ask tentatively)?

The Trip Answer Engine (again)

With the move to the next upgrade – and a freemium Trip – the notion of the Answer Engine appears again. I’ve talked about the Answer Engine for at least four years, but previously I’ve never had the conviction that it would work. The idea is great: infer a question from the search terms and show ‘the’ answer.

It’s because I like the idea so much that I keep coming back to it. I’ve done a mock-up of how it might look.

I’m waiting to hear from one publisher about using their content. If they agree that I can re-use their content it’ll be thousands of Q&As ready to go and I’ll be ready to commit to getting it off the ground.

I’m also talking to other publishers about their willingness to participate. We get Q&As and they get their content in a prime position on Trip, a win:win in my book! Other than that I’ll be undertaking another user survey and will ask then if people want to volunteer to add a few Q&As. If everything falls into place we’ll have a reasonable chance of making it work!


Yahoo Pipes and auto-updating answers

Yahoo Pipes have been around for a while now. Unfortunately, I’ve not had the time or inclination to dig around and see what they can do. I’ve now started using them and think they are potentially very powerful. I’m sure, at present, I’m using them crudely. In fact, so far I’ve simply merged two separate RSS feeds – but it’s a start.

Take this old (2004) ATTRACT answer on exercise and depression. If you view that you’ll see that it has a warning “NOTE: The following question is over two years old. We do not routinely update our answers. Therefore, significant new research may now be available.”

I’ve always wondered about auto-updating of answers. Therefore, I created two separate searches in PubMed (with exercise and depression both as [majr] MeSH headings), one for RCTs and one for systematic reviews (via clinical queries), and restricted the date to articles published after the ATTRACT answer. I then exported the results as an RSS feed and joined them together in Yahoo Pipes. You can see the results here.
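The merge step doesn’t depend on Yahoo Pipes; it can be sketched in a few lines of Python. Here the two feeds are toy inline strings standing in for the PubMed RCT and systematic-review exports (real feeds would be fetched by URL):

```python
import xml.etree.ElementTree as ET

# Two toy RSS feeds standing in for the PubMed RCT and
# systematic-review exports (real feeds would be fetched by URL).
RCT_FEED = """<rss version="2.0"><channel><title>RCTs</title>
<item><title>Exercise for depression: an RCT</title></item>
</channel></rss>"""

REVIEW_FEED = """<rss version="2.0"><channel><title>Reviews</title>
<item><title>Exercise and depression: a systematic review</title></item>
</channel></rss>"""

def merge_feeds(*feeds):
    """Combine the <item> elements of several RSS feeds into one channel."""
    merged = ET.fromstring('<rss version="2.0"><channel>'
                           '<title>Merged</title></channel></rss>')
    channel = merged.find("channel")
    for feed in feeds:
        for item in ET.fromstring(feed).iter("item"):
            channel.append(item)
    return merged

merged = merge_feeds(RCT_FEED, REVIEW_FEED)
titles = [i.findtext("title") for i in merged.iter("item")]
print(titles)
```

The combined channel could then be rendered at the bottom of the relevant Q&A answer, which is all the auto-updating described below really requires.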

If this output was then tagged to the bottom of all appropriate Q&A answers they would, in effect, auto-update.
