Trip Database Blog

Liberating the literature

What are health professionals searching for in relation to COVID-19?

We’ve analysed Trip’s search logs and found just under 10,000 searches for COVID-19 and related terms. To date, the vast majority (11 of the 12 most popular searches) were simply searches for the disease itself:

  • covid
  • Coronavirus
  • covid-19
  • covid 19 OR “novel coronavirus”
  • covid 19
  • Coronavirus
  • covid19
  • Coronavirus disease 2019
  • Novel coronavirus (2019-nCoV)
  • coronavirus or covid-19
  • covid-19

A significant number of variants for the same concept!

The non-specificity of these terms suggests users were familiarising themselves with the topic or evidence base, rather than answering a specific question.

In tenth position was the first multi-term/complex search, ‘coronavirus diagnosis’. Interestingly, given the name variants, a better search might have been (covid-19 OR “novel coronavirus”) AND diagnosis, as just searching for ‘coronavirus’ also brings in results for other coronaviruses. Adding ‘novel coronavirus’ makes the search more specific and reduces the results from 292 to 39.

We then got a large number of repeated searches coupling coronavirus with country names, all from the Middle East or North Africa; I have no idea why!

It’s not until the 42nd most frequent search term that we get to ‘coronavirus treatment’, followed shortly after by ‘chloroquine coronavirus’! After that we get into a real ‘long tail’ of searches:

Interventions

  • Ibuprofen
  • Interferon
  • vaccine
  • azithromycin

Other

  • critical care
  • blood test nasal swab diagnosis
  • pediatrics
  • patient transport
  • pregnancy
  • diarrhea
  • mortality
  • mask
  • covid
  • covid
  • PPE or personal protective equipment
  • pneumonia

Over time, I suspect users will start running more specific searches, and we plan to update this list periodically.

The main lesson for Trip is that it would be great if we could provide better support to our users. We can easily treat coronavirus and covid-19 as synonyms. But can we transpose coronavirus to ‘novel coronavirus’?
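As a sketch of what that synonym support might look like (the mapping and function below are purely illustrative, not Trip’s actual implementation):

```python
import re

# Hypothetical synonym table; the terms come from the search log above,
# but the expansion logic is a sketch, not Trip's real query rewriter.
SYNONYMS = {
    "covid": ['covid-19', '"novel coronavirus"'],
    "covid-19": ['"novel coronavirus"'],
    "coronavirus": ['"novel coronavirus"', 'covid-19'],
}

def expand_query(query: str) -> str:
    """Replace each known term with an OR-group of its synonyms."""
    def repl(match):
        term = match.group(0).lower()
        variants = [term] + SYNONYMS[term]
        return "(" + " OR ".join(variants) + ")"
    # Longest terms first, so "covid-19" wins over the prefix "covid"
    keys = sorted(SYNONYMS, key=len, reverse=True)
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, keys)) + r")\b", re.IGNORECASE
    )
    return pattern.sub(repl, query)

print(expand_query("coronavirus diagnosis"))
# (coronavirus OR "novel coronavirus" OR covid-19) diagnosis
```

This handles the easy synonym case; automatically transposing to the quoted phrase “novel coronavirus” is exactly what the last line of the mapping does, which suggests it is feasible.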

Popular COVID-19 articles

In the last few days we’ve significantly increased our coverage of COVID-19 articles, and the search numbers are large. Below is a list of the most frequently viewed articles. It’s biased to the extent that some articles have been in Trip for weeks while others have only been added in the last day or so. With that in mind, here’s the top ten (most popular first):

  1. Interim guidance: public health management of cases and contacts associated with novel Coronavirus (COVID-19) in the community. British Columbia Centre for Disease Control
  2. Coronavirus (COVID-19): latest information and advice. Public Health England
  3. COVID-19 (Novel Coronavirus). DynaMed Plus
  4. Infection prevention and control for novel coronavirus (COVID-19): interim guidance for acute healthcare settings. Government of Canada
  5. Novel coronavirus (COVID-19) guidance for primary care providers in community setting. Ontario Ministry of Health
  6. COVID-19: infection prevention and control. Public Health England
  7. Finding the evidence: Coronavirus. Public Health England
  8. Novel coronavirus (COVID-19) guidance for acute care. Ontario Ministry of Health
  9. COVID-19: background information. Public Health England
  10. COVID-19: guidance for sampling and for diagnostic laboratories. Public Health England

Special mention to two articles from the Oxford COVID-19 Evidence Service (CoI – we’re helping them out with the evidence searching side of things). They have been on the site for around 24 hours and have had significant numbers of clicks, with the top one only just outside the top ten!

Covid-19

Given the current situation, Trip has been adding as many high-quality documents on Covid-19 as we can find. This means at least daily uploads of new material. The best search is not simply “covid-19” (which currently returns 35 results); instead we recommend searching for covid-19 OR “novel coronavirus” (which returns 143)!

Trip is also involved in the Oxford COVID-19 Evidence Service, helping support a significant number of rapid reviews.

If you see any useful resources then please let us know.

Do you use Zwift?

I expect 90% of Trip users have not heard of Zwift and I suspect 99% have never used it. But if you’re part of that 1% then please let me know.

No, I’m not suggesting a ‘Team Trip’ (although that’s quite a nice idea) more I want to bounce a few ideas around with you that relate to Trip developments! Email me jon.brassey@tripdatabase.com

 

Moving forward with Trip

As mentioned previously, we are rewriting the vast majority of Trip’s code. This is a great opportunity to revisit Trip’s purpose. I guess it’s something like: Trip is a search engine which allows users to easily locate the highest-quality evidence for their search.

But, in developing Trip, we have often looked at why people search, and frequently it is to support clinical question answering by health professionals. This has led us to introduce new features to support this wider aim (such as instant answers and drug information) – but these have tended to be little used. Are we wasting our time, and should our main focus be on the search?

So, do we focus on making it easier for users to find evidence (and related issues such as exporting records and linking to full text), or do we balance development to include broader decision-support tools such as instant answers, drug information and community support?

As well as the main poll we’re also asking users – if they have time/motivation – to leave their email (in the ‘Other’ box) if they are prepared to help further. This will allow us to ask more open questions relating to Trip’s future.

Rewriting the Trip code

Trip has been running since 1997. It started off as a really basic ‘garden shed’ site and, soon after, when we got some funding, we had the site written by a commercial web company. That must be nearly twenty years ago. Since then the site has changed massively, with new bits of code added left, right and centre. Even though the site works really well, the underlying code is messy, has lots of redundancy, and some of it is written in very old languages (~20 years old).

As we’re getting bigger we’re adding new developer capacity, and it is increasingly obvious that we need to rewrite the code – almost from scratch – as it’ll allow the new developers to hit the ground running, as opposed to facing a very steep learning curve of figuring out the existing code and then learning some ancient coding languages.

This is a massive undertaking, and we estimate it’ll take 6 months to get a prototype up and running. Apart from the cost, the advantages will be numerous:

  • We’ll be using all the latest web-technologies
  • It’ll help future-proof Trip (well, as much as possible)
  • I’ll be able to squeeze in some new features
  • We should be able to save some money on things like our email system, server costs etc
  • I’m hoping we can get a redesign of the site as well.

As mentioned, it’ll be six months to a prototype, and the new site may not go live until 2021 – but it should be worth it.

In the interim we will continue to roll out changes to Trip; for instance, we’re working on integrating LibKey and, separately, on incorporating MeSH into our search system.

Short-term pain (for us; users will not notice) but long-term gain for all.

 

Grading guidelines

At the end of last year we posted Quality and guidelines, which set out our thinking around grading guidelines with a view to improving the experience for our users. Since then we’ve done a great deal of work exploring this issue and have arrived at a modified version of the scoring system from the Institute of Medicine’s Clinical Practice Guidelines We Can Trust.

Firstly, an important distinction to highlight: we are not able to grade individual guidelines. Trip has over 10,000 clinical guidelines, and grading each one is simply impractical from a resource perspective. So, the plan is to grade each guideline publisher. Each publisher will be independently assessed by two people (Trip staff and volunteers), who will score them based on these questions:

  • Do they publish their methodology? No = 0, Yes = 1, Yes and mention AGREE (or similar) = 2
  • Do they use any evidence grading e.g. GRADE? No = 0, Yes = 2
  • Do they undertake a systematic evidence search? Unsure/No = 0, Yes = 2
  • Are they clear about funding? No = 0, Yes = 1
  • Do they mention how they handle conflict of interest? No = 0, Yes = 1

The maximum score is 8! Our work has shown that scores from the above questions give very good approximations to the more formal methods, hence we’re using this simpler approach. The idea is to start displaying these scores alongside each result (we’ll work on a graphic to display them and allow users to easily see how we’ve scored each publisher).
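For clarity, here’s how the scoring above adds up, as a minimal sketch (the function and parameter names are ours, purely illustrative, not Trip’s actual code):

```python
# A minimal sketch of the publisher scoring scheme described above.
def score_publisher(publishes_methodology: bool,
                    mentions_agree: bool,
                    uses_evidence_grading: bool,
                    systematic_search: bool,
                    clear_about_funding: bool,
                    handles_conflict_of_interest: bool) -> int:
    score = 0
    # Methodology: No = 0, Yes = 1, Yes and mentions AGREE (or similar) = 2
    if publishes_methodology:
        score += 2 if mentions_agree else 1
    # Evidence grading (e.g. GRADE): No = 0, Yes = 2
    if uses_evidence_grading:
        score += 2
    # Systematic evidence search: Unsure/No = 0, Yes = 2
    if systematic_search:
        score += 2
    # Funding transparency: No = 0, Yes = 1
    if clear_about_funding:
        score += 1
    # Conflict-of-interest handling: No = 0, Yes = 1
    if handles_conflict_of_interest:
        score += 1
    return score

# A publisher ticking every box gets the maximum of 8
print(score_publisher(True, True, True, True, True, True))  # 8
```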

I mentioned volunteers above and we’ve recruited a number via emails from Trip. But if you’ve missed them and are interested in helping out then please send an email to jon.brassey@tripdatabase.com.

Search tip: Phrase searching, ironing out an anomaly

I had an email about phrase searching, highlighting that a search for “e-learning” was generating a huge number of irrelevant results. It appears that a hyphen within a phrase search causes confusion!

After a bit of trial and error, the fix appears to be to ditch the hyphen and simply search for “e learning”. The result:

I expect most will agree this is a better, more manageable, result!
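One plausible explanation for the anomaly, assuming documents are tokenised with punctuation as separators while the phrase parser splits only on whitespace (a guess at the mechanism, not Trip’s actual code):

```python
import re

def index_tokens(text: str) -> list[str]:
    # Documents are tokenised with punctuation treated as a separator
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

def query_tokens(phrase: str) -> list[str]:
    # Hypothetical buggy query side: splits only on whitespace,
    # so the hyphen survives inside a single token
    return phrase.lower().split()

def phrase_match(doc: str, phrase: str) -> bool:
    d, p = index_tokens(doc), query_tokens(phrase)
    return any(d[i:i + len(p)] == p for i in range(len(d) - len(p) + 1))

doc = "An e-learning module for clinicians"
print(phrase_match(doc, "e-learning"))  # False: 'e-learning' is never one indexed token
print(phrase_match(doc, "e learning"))  # True: tokens ['e', 'learning'] match the index
```

If the engine falls back to loose matching when a phrase fails, that would produce exactly the flood of irrelevant results described; dropping the hyphen lines the query tokens up with the indexed tokens.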

Thanks Feargus for highlighting the issue!

Quality and guidelines

In 2011 the Institute of Medicine published Clinical Practice Guidelines We Can Trust, which set out 8 standards:

  1. Establishing transparency
  2. Management of conflict of interest (COI)
  3. Guideline development group composition
  4. Clinical practice guideline–systematic review intersection
  5. Establishing evidence foundations for and rating strength of recommendations
  6. Articulation of recommendations
  7. External review
  8. Updating

There are other checklists available (e.g. see this recent comparison, A Comparison of AGREE and RIGHT: Which Clinical Practice Guideline Reporting Checklist Should Be Followed by Guideline Developers?).

I raise all this because I wonder if we, at Trip, could automatically approximate the quality of guidelines based on the IoM’s 8-point checklist. Given it needs to be automatic, it would require a set of rules that could help estimate the likely quality. Taking the 8 standards, I could see us approximating the following:

  1. Transparency – does it mention funding? This is doable via text-mining.
  2. Conflict of interest – does it mention conflict of interest within the guideline? This is doable via text-mining.
  3. Guideline development group composition – does it mention a multidisciplinary team and/or patient involvement? Potentially doable, but not convinced.
  4. Clinical practice guideline–systematic review intersection – does it mention systematic reviews (a bit more nuanced in reality)? This is doable via text-mining.
  5. Establishing evidence foundations for and rating strength of recommendations – does it rate the strength of evidence? This is probably doable via text-mining.
  6. Articulation of recommendations – does it clearly list recommendations? Potentially doable, but not convinced.
  7. External review – does it mention the review process? Potentially doable, but not convinced.
  8. Updating – does it mention the date and/or updating date? This is doable via text-mining.

So, what I could see us doing is checking each guideline for the following:

  1. Does it mention funding? Y/N
  2. Does it discuss conflict of interest? Y/N
  3. Does it mention systematic reviews? Y/N
  4. Does it discuss the strength of evidence? Y/N
  5. Does it mention recommendations? Y/N
  6. Does it have a date within the guideline? Y/N
  7. Does it mention updating? Y/N

So, we could scan each guideline for all 7 items (although it may be just 5, as items 4 and 5 are potentially problematic). If we go for the ‘simple’ 5, we would be able to rate each guideline on a 5-point scale.
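As a sketch of how such a scan might work (the keyword patterns below are illustrative guesses, not a validated ruleset):

```python
import re

# Illustrative keyword patterns for the 'simple 5' checks; a real system
# would need validated rules, but this shows the shape of the approach.
CHECKS = {
    "funding": r"\bfund(?:ed|ing)\b",
    "conflict_of_interest": r"\bconflicts? of interest\b|\bCOI\b",
    "systematic_review": r"\bsystematic(?:ally)? review",
    "date": r"\b(19|20)\d{2}\b",
    "updating": r"\bupdat(?:e|ed|ing)\b",
}

def scan_guideline(text: str) -> dict[str, bool]:
    """Return Y/N for each check, by simple keyword matching."""
    return {name: bool(re.search(pat, text, re.IGNORECASE))
            for name, pat in CHECKS.items()}

def simple_score(text: str) -> int:
    # One point per check passed, giving a score out of 5
    return sum(scan_guideline(text).values())
```

A guideline stating, say, its funding source, conflict-of-interest policy, use of a systematic review, publication year and update plan would score the full 5.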

The question becomes if a guideline mentions funding, conflict of interest etc is that a good indicator (or approximation) for the quality of a guideline? I think it seems fairly reasonable (as long as recommendations are clear) but what do others think?  How might it be improved?

 
