It's been a busy day. We're going to be 'staff light' next week, so I've put in a fair few hours today. Of the 7 questions attempted, few could be answered with robust evidence; a few examples:
What is the difference between soluble and insoluble fibre? How does the soluble form reduce cholesterol?
The first part was fairly straightforward, while the second part was conjecture that doesn't appear to have moved on in a number of years. I suppose that it doesn't matter greatly as long as there is an effect, but even so...
What current evidence for best practice is there in the conservative podiatric management of paediatric pes cavus?
Cochrane reviewed this topic in 2007 (not restricted to children) and found no real evidence, apart from one area. We did find a 2008 retrospective review of cases in children, but that's hardly the most robust evidence!
In minor surgery following abscess incision and drainage, is packing necessary and, if so, for how long and how frequently should the abscess be packed?
This was a good one! Why? Because the published evidence appeared to clash with our medical director's experience! His view was that the research was of low quality. But then we get down to the 'nitty gritty' of evidence-based practice: what is the role of experience?
In the absence of robust evidence, when there is only poor-quality research, where does a doctor's (or nurse's) experience fit in? I often feel that 'evidence-based' practice judges experience too harshly. Perhaps 'judges' isn't quite right; 'sneers at' might be a better way of putting it.
I still feel that DUETs has some promise in highlighting gaps in the research evidence. DUETs is attempting to record gaps in the evidence (relating to therapeutics) highlighted by research recommendations and clinical Q&As. The idea is to inform and enhance the research procurement process. In other words, if you know that lots of people are interested in a particular clinical question and the evidence base is poor, procure some research!
As part of our TRIPanswers (to be launched July/August 2008), we're taking a different angle on highlighting the research gaps, with our 'tag cloud of clinical uncertainty'. Ours is a more pragmatic approach. Perhaps the biggest differences will be what constitutes uncertainty (we'll take a less academic approach) and what can be covered (we'll cover everything, not just therapeutics). Our pragmatic approach will allow us to 'cover' more ground, and we should launch with between 500 and 1,500 uncertainties.
If either (or both) of these approaches makes an impact, it'll have been worthwhile.