
Trip Database Blog

Liberating the literature

Covid-19

Given the current situation, Trip has been adding as many high-quality documents on Covid-19 as we can find. This means at least daily uploads of new material. The best search is not the phrase “covid-19” (which currently returns 35 results); instead we recommend searching for covid-19 or “novel coronavirus” (which returns 143)!

Trip is also involved in the Oxford COVID-19 Evidence Service, helping support a significant number of rapid reviews.

If you see any useful resources then please let us know.

Do you use Zwift?

I expect 90% of Trip users have not heard of Zwift and I suspect 99% have never used it. But if you’re part of the 1% who have, please let me know.

No, I’m not suggesting a ‘Team Trip’ (although that’s quite a nice idea); rather, I want to bounce around a few ideas with you that relate to Trip developments! Email me at jon.brassey@tripdatabase.com


Moving forward with Trip

As mentioned previously, we are rewriting the vast majority of Trip’s code. This is a great opportunity to revisit Trip’s purpose. I guess it’s something like: Trip is a search engine which allows users to easily locate the highest-quality evidence for their search.

But, in developing Trip, we have often looked at why people search, and frequently it is to support clinical question answering by health professionals. This has led us to introduce new features to support this wider aim (such as instant answers and drug information) – but these have tended to be little used. Are we wasting our time, and should our main focus be on search?

So, do we focus on making it easier for users to find evidence (and related issues such as exporting records and linking to full text), or do we balance development to include broader decision-support tools such as instant answers, drug information and community support?

As well as the main poll we’re also asking users – if they have time/motivation – to leave their email (in the ‘Other’ box) if they are prepared to help further. This will allow us to ask more open questions relating to Trip’s future.

Rewriting the Trip code

Trip has been running since 1997. It started off as a really basic ‘garden shed’ site and, soon after, when we got some funding, we had the site written by a commercial web company. That must be nearly twenty years ago. Since then the site has changed massively, with new bits of code added left, right and centre. Even though the site works really well, the underlying code is messy, has lots of redundancy, and some of it is written in languages that are now very old (~20 years).

As we get bigger we’re adding new developer capacity, and it is increasingly obvious that we need to rewrite the code – almost from scratch – as it’ll allow the new developers to hit the ground running, rather than face the very steep learning curve of figuring out the existing code and then learning some ancient programming languages.

This is a massive undertaking and we estimate it’ll take 6 months to get a prototype up and running. Apart from the cost, the advantages will be numerous:

  • We’ll be using all the latest web-technologies
  • It’ll help future-proof Trip (well, as much as possible)
  • I’ll be able to squeeze in some new features
  • We should be able to save some money on things like our email system, server costs etc.
  • I’m hoping we can get a redesign of the site as well.

As mentioned, it’ll be six months to a prototype, and the new site may not go live until 2021 – but it should be worth it.

In the interim we will continue to roll out changes to Trip; for instance, we’re working on integrating LibKey and, separately, on incorporating MeSH into our search system.

Short term pain (for us, users will not notice) but long term gain for all.


Grading guidelines

At the end of last year we posted Quality and guidelines, which set out our thinking around grading guidelines with a view to improving the experience for our users. Since then we’ve done a great deal of work exploring this issue and have arrived at a modified version of the Institute of Medicine’s Clinical Practice Guidelines We Can Trust scoring system.

First, an important point to highlight: we are not able to grade individual guidelines. Trip has over 10,000 clinical guidelines, and grading each one is simply impractical from a resource perspective. So the plan is to grade each guideline publisher. Each publisher will be independently visited by two people (Trip staff and volunteers), who will score them based on these questions:

  • Do they publish their methodology? No = 0, Yes = 1, Yes and mention AGREE (or similar) = 2
  • Do they use any evidence grading e.g. GRADE? No = 0, Yes = 2
  • Do they undertake a systematic evidence search? Unsure/No = 0, Yes = 2
  • Are they clear about funding? No = 0, Yes = 1
  • Do they mention how they handle conflict of interest? No = 0, Yes = 1

The best possible score is 8! Our work has shown that these scores give very good approximations to the more formal methods, hence this simpler approach. The idea is to start displaying the scores alongside each result (we’ll work on a graphic to display them and allow users to easily see how we’ve scored each publisher).
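For those who like to see things concretely, here is a minimal sketch of how the rubric adds up. It is a hypothetical illustration in Python; the field names are invented for the example and are not Trip’s actual data model.

```python
# A minimal sketch of the publisher scoring rubric described above.
# The field names are hypothetical, purely for illustration.

def score_publisher(p: dict) -> int:
    """Score a guideline publisher on the modified IoM rubric (maximum 8)."""
    score = 0
    if p.get("publishes_methodology"):
        # Yes = 1; Yes and mentions AGREE (or similar) = 2
        score += 2 if p.get("mentions_agree") else 1
    if p.get("uses_evidence_grading"):   # e.g. GRADE: No = 0, Yes = 2
        score += 2
    if p.get("systematic_search"):       # Unsure/No = 0, Yes = 2
        score += 2
    if p.get("clear_about_funding"):     # No = 0, Yes = 1
        score += 1
    if p.get("handles_conflict_of_interest"):  # No = 0, Yes = 1
        score += 1
    return score

# A publisher meeting every criterion scores the maximum of 8.
exemplary = {
    "publishes_methodology": True, "mentions_agree": True,
    "uses_evidence_grading": True, "systematic_search": True,
    "clear_about_funding": True, "handles_conflict_of_interest": True,
}
assert score_publisher(exemplary) == 8
```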

I mentioned volunteers above and we’ve recruited a number via emails from Trip. But if you’ve missed them and are interested in helping out then please send an email to jon.brassey@tripdatabase.com.

Search tip: Phrase searching, ironing out an anomaly

I had an email relating to phrase searching; it highlighted that a search for “e-learning” was generating a huge number of irrelevant results. It appears that the hyphen, within a phrase search, causes confusion!

After a bit of trial and error, the fix appears to be to ditch the hyphen and simply search for “e learning”.

I expect most will agree this gives a better, more manageable, set of results!
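If you build queries programmatically, the workaround is easy to automate. Here is a tiny, hypothetical sketch (the function name is ours, not part of Trip) that swaps hyphens for spaces inside quoted phrases before a query is submitted:

```python
# Hypothetical helper: replace hyphens inside quoted phrases with spaces,
# so "e-learning" becomes "e learning" before the query is submitted.
import re

def normalise_phrases(query: str) -> str:
    return re.sub(r'"[^"]*"',
                  lambda m: m.group(0).replace("-", " "),
                  query)

print(normalise_phrases('"e-learning" AND outcomes'))
# -> "e learning" AND outcomes
```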

Thanks Feargus for highlighting the issue!

Quality and guidelines

In 2011 the Institute of Medicine published Clinical Practice Guidelines We Can Trust, which set out 8 standards:

  1. Establishing transparency
  2. Management of conflict of interest (COI)
  3. Guideline development group composition
  4. Clinical practice guideline–systematic review intersection
  5. Establishing evidence foundations for and rating strength of recommendations
  6. Articulation of recommendations
  7. External review
  8. Updating

There are other checklists available (e.g. see this recent comparison, A Comparison of AGREE and RIGHT: Which Clinical Practice Guideline Reporting Checklist Should Be Followed by Guideline Developers?).

I raise all this because I wonder if we, at Trip, could automatically approximate the quality of guidelines based on the IoM’s 8-point checklist. Given it needs to be automatic, it would require a set of rules that help estimate likely quality. Taking the 8 standards, I could see us approximating the following:

  1. Transparency – does it mention funding? This is doable via text-mining.
  2. Conflict of interest – does it mention conflict of interest within the guideline? This is doable via text-mining.
  3. Guideline development group composition – does it mention a multidisciplinary team and/or patient involvement? Potentially doable, but not convinced.
  4. Clinical practice guideline–systematic review intersection – does it mention systematic reviews (a bit more nuanced in reality)? This is doable via text-mining.
  5. Establishing evidence foundations for and rating strength of recommendations – does it rate the strength of evidence? This is probably doable via text-mining.
  6. Articulation of recommendations – does it clearly list recommendations? Potentially doable, but not convinced.
  7. External review – does it mention the review process? Potentially doable, but not convinced.
  8. Updating – does it mention the date and/or updating date? This is doable via text-mining.

So, what I could see us doing is checking each guideline for the following:

  1. Does it mention funding? Y/N
  2. Does it discuss conflict of interest? Y/N
  3. Does it mention systematic reviews? Y/N
  4. Does it discuss the strength of evidence? Y/N
  5. Does it mention recommendations? Y/N
  6. Does it have a date within the guideline? Y/N
  7. Does it mention updating? Y/N

So, we could scan each guideline for all 7 items (although it may be just 5, as items 4 and 5 are potentially problematic). If we go for the ‘simple’ 5, we would be able to rate each guideline on a 5-point scale.
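To make that concrete, the scan could be little more than keyword matching. Here is a rough, hypothetical sketch; the regular expressions are illustrative guesses, not rules we have settled on:

```python
# A rough sketch of the Y/N text-mining checks described above.
# The regular expressions are illustrative guesses, not settled rules.
import re

CHECKS = {
    "funding":              r"\bfund(ed|ing)\b",
    "conflict_of_interest": r"\b(conflicts? of interest|competing interests?)\b",
    "systematic_review":    r"\bsystematic (review|search)\b",
    "strength_of_evidence": r"\b(strength of (the )?evidence|GRADE)\b",
    "recommendations":      r"\brecommendations?\b",
    "date":                 r"\b(19|20)\d{2}\b",
    "updating":             r"\bupdat(e|ed|ing)\b",
}

# Items 4 and 5 are the potentially problematic ones, so the 'simple'
# 5-point scale drops strength of evidence and recommendations.
SIMPLE_FIVE = [k for k in CHECKS
               if k not in ("strength_of_evidence", "recommendations")]

def rate_guideline(text: str):
    """Return a Y/N flag per check and a score on the 'simple' 5-point scale."""
    flags = {name: bool(re.search(pattern, text, re.IGNORECASE))
             for name, pattern in CHECKS.items()}
    return flags, sum(flags[k] for k in SIMPLE_FIVE)
```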

The question becomes: if a guideline mentions funding, conflict of interest and so on, is that a good indicator (or approximation) of its quality? I think it seems fairly reasonable (as long as recommendations are clear), but what do others think? How might it be improved?


Risk of Bias scores for controlled trials

We’ve been working with RobotReviewer for a number of years. They do two things for us:

  • Highlight all the controlled trials in PubMed with a high degree of accuracy
  • Assess these trials for bias using their amazing automated systems (see this earlier blog post from when we first started working with them).

RobotReviewer have improved their systems, making bias and trial identification even better, and to ‘celebrate’ this we’ve made some changes to Trip. We’ve altered the way bias scores are displayed and have created a filter so you can choose to show only those trials with a low estimated risk of bias (labelled “Controlled trial quality: predicted high”).


This is a big improvement in helping people easily locate high-quality evidence, so we’re delighted.

Oh yes, for the data nerds, as of a few days ago there were 552,463 controlled trials in PubMed!

Changes on the results page

We’ve had a bit of a re-jig of the results page. In the old format the Q&A reference sat at the top and the search suggestions (‘Trip users also search for’) sat towards the bottom.


The new format reverses their positions.

The rationale is simple: the search refinement section has been moved to the top, as users may well see a large number of results and want to refine straight away. The Q&A, where it was, appeared too early in the search ‘journey’ and was confusing; it makes more sense once a user has gone through some results and is starting to think there may be no answer.

Oh yes, the search refinement area (at the top) is a rollover – if you roll over it, it expands.

As ever, comments welcome!
