Trip Database Blog

Liberating the literature


May 2023

Introducing our RCT score

Hot on the heels of releasing our guideline score, we’re releasing our RCT score. We’ve been working with the wonderful RobotReviewer team for years now, and one of their products is a Risk of Bias (RoB) score for RCTs. We first introduced it in 2016, when we classified all trials into two categories: ‘low risk of bias’ and ‘high/unknown risk of bias’. When we recently re-wrote the site we did not immediately include the RoB score, in part because the thinking and technology have developed considerably since 2016. So, we’re very pleased to reintroduce it to the site.

The new score does not categorise the RoB into ‘low’ or ‘high/unknown’; instead, it gives a score for the likely RoB on a linear scale. We take that score and transform it into a graphic similar to the one used for the guideline score:

RCTs are important in the world of EBM and, as with guidelines, they are not all equally good! This score reflects the likelihood of bias and should help our users make better sense of the evidence base.

New Filter: European Guidelines

One relatively minor addition to our recent guideline enhancements has been the introduction of a new geographic guideline filter: ‘Europe’.

Over the last few years we’ve been diligently identifying and adding guidelines from Europe, so it made sense to add a new filter.

One final tweak was to make the ability to filter by geographic area a ‘Pro’ only feature.

Introducing our guideline score

The production of guidelines is a complex task and there are a multitude of methods, some more rigorous than others. While Trip places guidelines at the top of the evidence pyramid, we need to recognise that this is an oversimplification. Our guideline score is designed to help our users understand how robust a guideline might be.

The guideline score is a concept we’ve explored for a number of years (e.g. Quality and guidelines from 2019 and Grading guidelines from 2020). It involves us scoring each publisher (not each individual guideline – see limitations below) against 5 criteria:

  • Do they publish their methodology? No = 0, Yes = 1, Yes and mention AGREE (or similar) = 2
  • Do they use any evidence grading e.g. GRADE? No = 0, Yes = 2
  • Do they undertake a systematic evidence search? Unsure/No = 0, Yes = 2
  • Are they clear about funding? No = 0, Yes = 1
  • Do they mention how they handle conflict of interest? No = 0, Yes = 1

The highest possible score is 8. Our work has shown that this approach gives a very good approximation to more formal appraisal methods, hence we’re using this simpler approach. And this is what it looks like:
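The five criteria above can be sketched as a simple additive rubric. The function below is illustrative only (the names and input format are ours, not Trip’s actual implementation), but it shows how the points combine to a maximum of 8:

```python
# Sketch of the publisher scoring rubric described above.
# Function name and parameters are illustrative, not Trip's actual code.

def guideline_publisher_score(
    publishes_methodology: str,   # "no" = 0, "yes" = 1, "agree" (mentions AGREE or similar) = 2
    uses_evidence_grading: bool,  # e.g. GRADE; yes = 2
    systematic_search: bool,      # unsure/no = 0, yes = 2
    clear_about_funding: bool,    # yes = 1
    handles_conflicts: bool,      # mentions conflict-of-interest handling; yes = 1
) -> int:
    """Return a publisher score out of a maximum of 8."""
    score = {"no": 0, "yes": 1, "agree": 2}[publishes_methodology]
    score += 2 if uses_evidence_grading else 0
    score += 2 if systematic_search else 0
    score += 1 if clear_about_funding else 0
    score += 1 if handles_conflicts else 0
    return score

# A publisher meeting every criterion scores the maximum:
print(guideline_publisher_score("agree", True, True, True, True))  # 8
```

A publisher that publishes no methodology at all would score 0 on the first three criteria, which is why (as noted below) opaque publishers may be under-scored.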


This approach has a number of issues, for instance:

  • It is carried out at the publisher level and at a single point in time. So, if we scored a publisher in 2021, that score covers guidelines it produced between, say, 2014 and 2023. The methodology might well have changed between those dates, and this is not reflected in our scoring.
  • Linked with the above point, it assumes the guideline publisher uses the same methodology for all guidelines.
  • Many of the lowest-scoring producers score poorly because they do not publish their methodologies, making it impossible to properly score them, so our approach may underestimate the rigour of their methods. If our approach encourages publishers to be more transparent, that will be a great result in itself!
  • The scoring system uses only 5 elements; it might benefit from more, but we have to balance rigour against resources pragmatically.
