At the end of last year we posted Quality and guidelines, which set out our thinking around grading guidelines with a view to improving the experience for our users. Since then we’ve done a great deal of work exploring this issue and have arrived at a modified version of the Institute of Medicine’s Clinical Practice Guidelines We Can Trust scoring system.
Firstly, an important distinction to highlight is that we are not able to grade individual guidelines. Trip has over 10,000 clinical guidelines, so grading each one is simply impractical from a resource perspective. Instead, the plan is to grade each guideline publisher. Each publisher will be independently assessed by two people (Trip staff and volunteers), who will score them based on these questions:
- Do they publish their methodology? No = 0, Yes = 1, Yes and mention AGREE (or similar) = 2
- Do they use any evidence grading e.g. GRADE? No = 0, Yes = 2
- Do they undertake a systematic evidence search? Unsure/No = 0, Yes = 2
- Are they clear about funding? No = 0, Yes = 1
- Do they mention how they handle conflict of interest? No = 0, Yes = 1
The best possible score is 8. Our work has shown that the above questions give very good approximations to the more formal methods, hence we’re using this simpler approach. The idea is to start displaying these scores alongside each result (we’ll work on a graphic to display them and allow users to easily see how we’ve scored each publisher).
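For clarity, the rubric above can be sketched as a simple scoring function. This is purely illustrative (the function name and boolean parameters are ours, not part of any Trip system); it just totals the points exactly as the five questions describe:

```python
def score_publisher(publishes_methodology, mentions_agree,
                    uses_evidence_grading, systematic_search,
                    clear_about_funding, handles_conflicts):
    """Return a 0-8 score for a guideline publisher (illustrative sketch)."""
    score = 0
    # Methodology: No = 0, Yes = 1, Yes and mention AGREE (or similar) = 2
    if publishes_methodology:
        score += 2 if mentions_agree else 1
    # Evidence grading (e.g. GRADE): No = 0, Yes = 2
    if uses_evidence_grading:
        score += 2
    # Systematic evidence search: Unsure/No = 0, Yes = 2
    if systematic_search:
        score += 2
    # Clear about funding: No = 0, Yes = 1
    if clear_about_funding:
        score += 1
    # Conflict-of-interest handling: No = 0, Yes = 1
    if handles_conflicts:
        score += 1
    return score

# A publisher meeting every criterion gets the maximum score of 8:
print(score_publisher(True, True, True, True, True, True))  # → 8
```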
I mentioned volunteers above; we’ve recruited a number via emails from Trip. If you’ve missed those and are interested in helping out, please send an email to jon.brassey@tripdatabase.com.