Trip Database Blog

Liberating the literature

Guideline grading is getting there

We’ve been working on this for a while (e.g. Grading guidelines, an update). Well, the grading has continued and we’ve now graded over 250 guideline producers, covering the vast majority of the guidelines on the site.

The grading is a score from 0-8, with 8 being the highest score (click here for an explanation of the scoring system) and the distribution is as follows:

The Y-axis is the score and the X-axis is the number of producers achieving that particular score. So, around 50 producers score the maximum of 8!

Nearly 50% score 7 or 8, which is encouraging. Plenty also score 0, and this most likely reflects methods we simply couldn’t access in order to score them. Their guidelines might be great (it seems unlikely), but if a user can’t see the methods, how can they assess the ‘worth’ of the guideline?
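
For the curious, turning per-producer scores into a distribution like the one above is straightforward. Here is a minimal sketch; the producer names and scores are made up.

```python
# Minimal sketch: tally per-producer guideline scores (0-8) into a distribution.
# The producer scores below are made up for illustration.
from collections import Counter

producer_scores = {"Producer A": 8, "Producer B": 7, "Producer C": 0, "Producer D": 8}

distribution = Counter(producer_scores.values())
for score in range(9):  # scores run from 0 to 8
    print(f"score {score}: {distribution[score]} producers")
```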

Once we get the new site properly launched we can move on to introducing this 🙂

The new site has arrived

After an enormous amount of work and testing we have released the new site in beta mode! Given the huge rewrite of the code, and despite all the testing, we felt it was too big a risk to simply replace the current/old site with the new one. So, for a short period they’ll run in parallel, with a link from the top of the current/old site:

Alternatively you can simply go directly to the new site here: https://labs2020.tripdatabase.com/Home

This takes you to the homepage (this one is for Pro subscribers; free users get a green colour scheme):

And here is the Pro results page:

Where to start with the changes?

Possibly the biggest change has been the shift of the filter by evidence type from the right to the left of the results page. I was remarkably attached to the old way, but our designer convinced me of the shift.

Other than that the design has been cleaned up and there are some new icons:

Lots of other things to explore and click on (for instance the coloured ‘lozenges’ highlighting the evidence type – go on, click on one of them).

So, please go and try it now (via https://labs2020.tripdatabase.com/Home) and let us know what you think via our feedback form. We’ve already received some feedback and the issues raised are being worked on; thankfully, most of them appear to be minor. Alternatively, contact me directly at jon.brassey@tripdatabase.com – it’s always great to hear from you!

Where have we been?

I can’t believe it: we’re heading towards the end of February and this is our first post of the year. The main reason is that there has been no obvious news to report. However, we’ve been really busy behind the scenes with the new site. It is imminent – we’re just not sure exactly when, as we’re ironing out one outstanding issue!

When it’s ready we will not replace the old site; we will run the two concurrently. Users going to the site will see the usual site with a link to the new one and an invitation to ‘try the beta’. Given the huge rewrite, it just seems sensible to ease gently into the new version.

Grading guidelines, an update

The original Grading guidelines post is nearly a year old and things have not moved smoothly – not only did Covid happen, but the rewrite of the website also took up most of 2020! However, we’ve not abandoned our wish to ‘grade’ guidelines to give our users an indication of how ‘evidence-based’ the guidelines are. Our system scores each guideline producer (based on the system mentioned in the earlier post), with a maximum score of 8.

We have now graded around two-thirds of the guidelines and that figure is rapidly increasing. Our hope is to introduce the guideline grading system early in 2021 (but after the relaunch of the newly coded site)!

For interest we share a distribution of the guideline scores below:

28% get the maximum score of 8, with scores of 0 (15%) and 7 (14%) the next most ‘popular’. So, many producers are serious about producing evidence-based guidelines, but many also seem considerably less inclined! One clarification: a number of the zero scores reflect our inability to find any detail of the methodology employed. They may be great guidelines; they’re just not making it easy for us (or others) to find out!

Article networks, again

I love generating these network maps and I keep returning to them over the years.

The above is a map based on a sample of articles on Covid-19. Each article is represented by a node (grey circle) and the edges (lines of various colours) represent connections between them. The project was to explore ways of grouping documents with a view to speeding up evidence reviews. In this work the connections were (a) semantic and (b) citation.

What is clear is that the articles group around topics. Given the experimental nature of the work, the small sample and the imperfect data, I’m loath to draw any firm conclusions, but I am taking it as another endorsement of this approach – one I want to explore next year.
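
For anyone curious how such a map might be put together, here is a minimal sketch using networkx and scikit-learn: articles become nodes, semantic-similarity and citation links become edges, and a community-detection step groups the articles into clusters. The sample articles, similarity threshold and field names are all illustrative – this is not our production pipeline.

```python
# Sketch: build an article network from semantic and citation connections,
# then group the articles into clusters. Illustrative only.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical sample of articles: id, title/abstract text, cited article ids
articles = [
    {"id": "a1", "text": "PPE and masks for Covid-19 in health workers", "cites": []},
    {"id": "a2", "text": "Respiratory protection and masks during Covid-19", "cites": ["a1"]},
    {"id": "a3", "text": "Hydroxychloroquine for Covid-19: a randomised trial", "cites": []},
]

G = nx.Graph()
G.add_nodes_from(a["id"] for a in articles)

# (a) semantic edges: connect articles whose text is sufficiently similar
tfidf = TfidfVectorizer(stop_words="english").fit_transform(a["text"] for a in articles)
sim = cosine_similarity(tfidf)
for i in range(len(articles)):
    for j in range(i + 1, len(articles)):
        if sim[i, j] > 0.2:  # illustrative threshold
            G.add_edge(articles[i]["id"], articles[j]["id"], kind="semantic")

# (b) citation edges
for a in articles:
    for cited in a["cites"]:
        G.add_edge(a["id"], cited, kind="citation")

# Group the articles into clusters (communities) by modularity
clusters = greedy_modularity_communities(G)
for n, cluster in enumerate(clusters, start=1):
    print(f"cluster {n}: {sorted(cluster)}")
```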

But how might such knowledge be useful? Here are a few ideas, and I’d be delighted to hear any other suggestions:

  • Improve search 1 – if a user clicks on an article in a distinct cluster you can immediately highlight the closest other articles.
  • Improve search 2 – when someone searches you could highlight the distinct clusters and use it as a form of search refinement. So, using the above diagram, a user might have searched for Covid and we could highlight the three clusters.
  • Improve search 3 – a user might select 10 of the articles in a cluster but miss an article – we could flag this up (see the sketch after this list).
  • Better intelligence – we could monitor the clusters and see when new articles become joined. We could then alert users who had interacted, previously, with the cluster.
  • Rapid reviews – we could highlight all the RCTs and/or systematic reviews in a cluster and start to extract value from each trial (e.g. risk of bias, sample size).
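
To make one of these concrete, here is a tiny sketch of the ‘Improve search 3’ idea: given clusters like those produced in the sketch above, flag the cluster members a user’s selection has missed. The cluster data and function name are hypothetical.

```python
# Sketch of "Improve search 3": flag articles that share a cluster with the
# user's selection but were not selected. Names and data are illustrative.
clusters = [{"a1", "a2", "a3"}, {"b1", "b2"}]  # e.g. output of the clustering sketch above

def missed_in_cluster(selected_ids, clusters):
    """Return cluster members the user did not select."""
    selected = set(selected_ids)
    missed = set()
    for cluster in clusters:
        if selected & cluster:       # the user picked something from this cluster
            missed |= cluster - selected
    return sorted(missed)

# The user picked a1 and a2 but overlooked a3 in the same cluster
print(missed_in_cluster(["a1", "a2"], clusters))  # -> ['a3']
```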

When we roll it out we will be able to include a third type of connection – clickstream data – which we’ve previously demonstrated to be incredibly powerful. It’s at times like this that I wish we had a sizeable R&D budget!

What are health professionals searching for in relation to COVID-19? Update 4

Our fourth update on this topic, the last one being in April! I have yet to see any other attempt at highlighting the information/search needs of health professionals, so if you know of any please let me know.

Below are the top 15 search terms used alongside searches for Covid-19 (or synonyms). This is a crude list with no attempt to reconcile synonymous entries (a small sketch of what that reconciliation might look like follows the list).

  1. diagnosis
  2. PPE or masks
  3. chloroquine
  4. pregnancy
  5. screening
  6. interferon
  7. chloroquine OR hydrochloroquine
  8. surgery
  9. dentistry
  10. vaccine
  11. antibiotics
  12. azithromycin
  13. treatment
  14. vitamin d
  15. ivermectin
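
For illustration, reconciling synonyms could be as simple as mapping variant terms onto a canonical form before counting. The synonym map and sample terms below are made up – this is not the approach we actually use.

```python
# Sketch: collapse synonymous search terms before counting them.
# The synonym map and sample terms are illustrative, not Trip's actual data.
from collections import Counter

SYNONYMS = {
    "masks": "ppe or masks",
    "ppe": "ppe or masks",
    "hydrochloroquine": "chloroquine",
    "hydroxychloroquine": "chloroquine",
    "chloroquine or hydrochloroquine": "chloroquine",
    "vitamin d3": "vitamin d",
}

def normalise(term: str) -> str:
    term = term.strip().lower()
    return SYNONYMS.get(term, term)

sample_terms = ["Chloroquine", "hydroxychloroquine", "PPE", "masks", "vitamin d"]
counts = Counter(normalise(t) for t in sample_terms)
print(counts.most_common())
```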

So, what’s changed?

Firstly, diagnosis is now in top spot…

Secondly, there are increases in searches on the topics of vaccines, vitamin D and ivermectin.

Lastly, there seems to be a tailing off of searches on the topic. The graph below shows the cumulative total of searches for Covid-19 (and synonyms) – that’s the thick blue line. The dashed yellow line is a trend line based on the first months of the pandemic.
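
As a rough illustration of how such a graph is put together, the sketch below computes a cumulative total from monthly search counts and projects a straight-line trend fitted to the first few months. The monthly figures are entirely made up.

```python
# Sketch: cumulative monthly Covid-19 searches plus a trend line fitted to the
# first few months of the pandemic. The monthly figures are made up.
import numpy as np

monthly = np.array([1200, 5400, 4800, 3900, 3100, 2600, 2200, 1900])  # hypothetical counts
cumulative = np.cumsum(monthly)

months = np.arange(len(monthly))
slope, intercept = np.polyfit(months[:4], cumulative[:4], 1)  # fit to the first 4 months
trend = slope * months + intercept

for m, (c, t) in enumerate(zip(cumulative, trend), start=1):
    print(f"month {m}: cumulative={c}, early-pandemic trend={t:.0f}")
```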

The new site is getting closer

The new site is getting ready for testing…

No new functionality, just modernising the code to reflect modern web standards. It’s been a mammoth effort and once released we’ll be able to move much more quickly with new features – and we have a big list of these 🙂

As well as the reprogramming, we’ve taken the opportunity to refresh the design. Again, this is subject to testers’ feedback and alteration. But here’s a very enigmatic sneak preview:

New content…

Trip’s aim is to support health professionals in easily obtaining answers to their clinical questions. This requires (1) clever systems to interpret search queries and (2) finding the most appropriate evidence. New evidence is published all the time and an important task is getting it added to Trip in a timely manner! This is done in two main ways:

  • Automatically – the majority of our content is added automatically
  • Manually – where automation isn’t possible we have to undertake a manual trawl of the sites and look for new content. This is typically done on a monthly basis

We’ve just uploaded our latest manually collected content, which typically takes 24 hours from being added to the site to becoming searchable!

As well as existing sources, we constantly scan the web for new, quality content (often helped by ‘tip-offs’ from our users). This month we’ve added two new publications:

Trip, getting better every day…

Do you use LibKey? Better access to full-text

We have now integrated LibKey into Trip. LibKey is an amazing tool and it makes Trip even better at helping our users obtain full-text articles.

One of the traditional methods of obtaining full text has been to use something called a link resolver. This aims to link a user to their institution’s full-text holdings. For instance, if a university subscribes to a journal, the idea is that we send the user to the full-text journal article instead of the PubMed abstract. This is great, to a point. The downside is that it’s dumb! By that I mean it doesn’t know whether the institution actually subscribes to the journal – we simply insert the link-out (using the link resolver) for every journal article! So, if the university doesn’t subscribe, you get a disappointed user.

LibKey, on the other hand, is smart! It knows which institutions have which subscriptions and over which time period. So, no more misleading link-outs.
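
To illustrate the difference in behaviour, here is a conceptual sketch: the ‘dumb’ resolver builds an OpenURL-style link for every article, while the ‘smart’ route checks holdings first. The holdings lookup simply stands in for what LibKey does – it is not LibKey’s actual API, and the resolver URL, ISSNs and DOIs are made up.

```python
# Conceptual sketch of "dumb" link resolving vs a "smart", holdings-aware route.
# The holdings check stands in for LibKey; it is NOT LibKey's actual API.
from typing import Optional
from urllib.parse import urlencode

HOLDINGS = {("1234-5678", 2020)}  # (ISSN, year) pairs the institution can access (made up)

def link_resolver_url(issn: str, year: int, doi: str) -> str:
    # "Dumb" link resolver: build an OpenURL-style link for every article,
    # whether or not the institution actually has access.
    params = {"issn": issn, "date": year, "id": f"doi:{doi}"}
    return "https://resolver.example.edu/openurl?" + urlencode(params)

def smart_full_text_url(issn: str, year: int, doi: str) -> Optional[str]:
    # "Smart" route: only offer a link-out when the holdings say access exists.
    if (issn, year) in HOLDINGS:
        return link_resolver_url(issn, year, doi)
    return None  # no misleading link-out

print(smart_full_text_url("1234-5678", 2020, "10.1000/example.1"))  # a link
print(smart_full_text_url("0000-0000", 2020, "10.1000/example.2"))  # None
```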

For this to work we need to link your institution’s LibKey details with Trip, and away we go. Fingers crossed your institution uses LibKey!
