In 2011 the Institute of Medicine published Clinical Practice Guidelines We Can Trust, which set out 8 standards:

  1. Establishing transparency
  2. Management of conflict of interest (COI)
  3. Guideline development group composition
  4. Clinical practice guideline–systematic review intersection
  5. Establishing evidence foundations for and rating strength of recommendations
  6. Articulation of recommendations
  7. External review
  8. Updating

There are other checklists available (e.g. see the recent comparison A Comparison of AGREE and RIGHT: which Clinical Practice Guideline Reporting Checklist Should Be Followed by Guideline Developers?).

I raise all this as I wonder if we, at Trip, could automatically approximate the quality of guidelines based on the IoM’s 8-point checklist. Given it needs to be automatic, it would rely on a set of rules that act as proxies for likely quality. Taking the 8 standards, I could see us approximating the following (a rough sketch of the text-mining rules follows the list):

  1. Transparency – does it mention funding? This is doable via text-mining.
  2. Conflict of interest – does it mention conflict of interest within the guideline? This is doable via text-mining.
  3. Guideline development group composition – does it mention a multidisciplinary team and/or patient involvement? Potentially doable, but not convinced.
  4. Clinical practice guideline–systematic review intersection – does it mention systematic reviews (a bit more nuanced in reality)? This is doable via text-mining.
  5. Establishing evidence foundations for and rating strength of recommendations – does it rate the strength of evidence? This is probably doable via text-mining.
  6. Articulation of recommendations – does it clearly list recommendations? Potentially doable, but not convinced.
  7. External review – does it mention the review process? Potentially doable, but not convinced.
  8. Updating – does it mention the date and/or updating date? This is doable via text-mining.
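
To make the ‘doable via text-mining’ items concrete, here is a minimal sketch of how such rules might work, using simple keyword/regex matching over the full text of a guideline. The pattern names and terms are illustrative assumptions, not a validated vocabulary or an actual Trip implementation:

```python
import re

# Illustrative keyword patterns for the standards that look text-mineable.
# The terms are assumptions for the sake of the sketch, not a validated vocabulary.
PATTERNS = {
    "funding": r"\bfund(?:ing|ed)\b|\bsponsor(?:ship|ed)?\b",
    "conflict_of_interest": r"\bconflicts? of interest\b|\bcompeting interests?\b",
    "systematic_review": r"\bsystematic reviews?\b",
    "strength_of_evidence": r"\bstrength of (?:the )?evidence\b|\blevel of evidence\b|\bGRADE\b",
    "date": r"\b(?:19|20)\d{2}\b",  # crude: any four-digit year counts as a date
    "updating": r"\bupdat(?:e|ed|ing)\b|\breview date\b",
}

def check_guideline(text: str) -> dict:
    """Return a yes/no flag for each pattern found in the guideline text."""
    return {name: bool(re.search(pattern, text, re.IGNORECASE))
            for name, pattern in PATTERNS.items()}
```

Each flag then maps directly onto the Y/N questions below.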

So, what I could see us doing is checking each guideline for the following:

  1. Does it mention funding? Y/N
  2. Does it discuss conflict of interest? Y/N
  3. Does it mention systematic reviews? Y/N
  4. Does it discuss the strength of evidence? Y/N
  5. Does it mention recommendations? Y/N
  6. Does it have a date within the guideline? Y/N
  7. Does it mention updating? Y/N

So, we could scan each guideline for all 7 items (although it may be just 5, as items 4 and 5 are potentially problematic). If we go for the ‘simple’ 5 we would be able to rate each guideline on a 5-point scale; a rough scoring sketch follows.
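
If we did go for the ‘simple’ 5, the scoring itself would just be a count of the flags, e.g. building on the hypothetical check_guideline above and dropping the two problematic items:

```python
# Assumes the hypothetical check_guideline() sketched earlier.
SIMPLE_FIVE = ["funding", "conflict_of_interest", "systematic_review", "date", "updating"]

def score_guideline(text: str) -> int:
    """Crude quality approximation: one point (0-5) per item mentioned."""
    flags = check_guideline(text)
    return sum(flags[name] for name in SIMPLE_FIVE)
```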

The question becomes: if a guideline mentions funding, conflict of interest etc., is that a good indicator (or approximation) of the quality of the guideline? I think it seems fairly reasonable (as long as the recommendations are clear), but what do others think? How might it be improved?