Conflict of interest declaration: Trip’s main aim is to help clinicians answer their questions using the best available evidence. As such we have developed, and continue to develop, techniques to hugely reduce the costs of doing systematic reviews. See Trip Rapid Reviews – systematic reviews in five minutes, Ultra-rapid reviews, first test results and Trip Rapid Review worked example – SSRIs and the management of hot flashes.
In my presentations to Evidence Live I was (constructively) critical of Cochrane. This was distilled into two blog posts A critique of the Cochrane Collaboration and Some additional thoughts on systematic reviews. In the first article I quoted Trish Greenhalgh:
“Researchers in dominant paradigms tend to be very keen on procedure. They set up committees to define and police the rules of their paradigm, awarding grants and accolades to those who follow those rules. This entirely circular exercise works very well just after the establishment of a new paradigm, since building systematically on what has gone before is an efficient and effective route to scientific progress. But once new discoveries have stretched the paradigm to its limits, these same rules and procedures become counterproductive and constraining. That’s what I mean by conceptual cul-de-sacs.”
I quoted Trish as I felt that Cochrane had come to dominate and lead the systematic review paradigm. But one thing I didn’t write up at the time, and which links with Trish’s quote, was my feeling that the methodological rigour and standards set by Cochrane were actually an economic barrier to entry for competitors. The Wikipedia article on barriers to entry reports:
“In theories of competition in economics, barriers to entry, also known as barrier to entry, are obstacles that make it difficult to enter a given market. The term can refer to hindrances a firm faces in trying to enter a market or industry—such as government regulation and patents, or a large, established firm taking advantage of economies of scale—or those an individual faces in trying to gain entrance to a profession—such as education or licensing requirements.
Because barriers to entry protect incumbent firms and restrict competition in a market, they can contribute to distortionary prices. The existence of monopolies or market power is often aided by barriers to entry.”
Cochrane, due to their dominance, effectively set the standards of what’s deemed acceptable (irrespective of the significant evidence to the contrary – see the previous two blog posts for further information). This effectively stifles competition. If systematic reviews could be done quickly and easily by anyone the business model of Cochrane would be severely compromised – I can see no other losers (except perhaps pharma).
Perhaps it is a coincidence that most changes to systematic review methods over the years appear to have more to do with increasing the methodological burden (by squeezing increasingly small amounts of bias out of the results) than with reducing the costs?
What has prompted the above post has been the announcement of the winner of the Nobel Prize for Economics. Jean Tirole has won for his work on market power and regulation. The BBC reports:
“Many industries are dominated by a small number of large firms or a single monopoly,” the jury said of Mr Tirole’s work. “Left unregulated, such markets often produce socially undesirable results – prices higher than those motivated by costs, or unproductive firms that survive by blocking the entry of new and more productive ones.”
Now, that’s got to be a good link – EBM, Cochrane and the Nobel Prize for Economics!
But the point of the post is not to moan at Cochrane, but to suggest that the systematic review ‘market’ is problematic and there appears to be little appetite to radically change things. If we want to improve care we need more systematic reviews, which means we need to innovate. And by innovate I don’t mean small iterative improvements; more substantial changes are needed.
Perhaps we could start at first principles and ask why we do systematic reviews in the first place. I used to think it was to get an accurate assessment of effect size. However, if you look at the evidence it’s fairly clear that systematic reviews – based on published trials – are pretty poor in this regard. But if it’s not that, then why do we do them? Once we can clearly articulate why, we can perhaps better understand how to produce them more efficiently.
October 19, 2014 at 11:40 am
I would be wary of importing economic theory based on everyday markets for goods and services into the field of evidence based medicine, John. The perfect market as idealised by the Austrian and Chicago economic schools is a wonderful theoretical construct, but it is known that there are many circumstances in the real world which can cause arguments based on its precepts to fail, leading to market failure. Barriers to entry are one of these.

However, another major requirement of the perfect market is perfect knowledge – both buyer and seller are assumed to have perfect understanding of both the qualities of the goods being transferred and their individual value to each participant. It is this precondition which is breached in most transactions between a member of the laity and any professional – the professional, by definition, has greater knowledge than the client in the field under discussion. That is, after all, why the transaction is taking place. The traditional defence for the client in such markets is certification of competence for the professional, achieved through academic qualification, peer review or occasionally performance monitoring, or some combination of these. Such mechanisms are easily portrayed as 'barriers to entry' but one has to remember that they are there for a reason. It is why we do not allow anyone to set up as a domestic gas fitter without undergoing any kind of training.

In the context of systematic review it is remarkably easy, in this era of electronic access, for anyone to gather together half a dozen papers on a topic and publish a 'systematic review', but without, at the very least, free and open disclosure of the methods used, it may be very difficult for the ordinary reader to discern the true value of the conclusions. Cochrane is not so much acting as a barrier to entry here as demonstrating what can be done, given sufficient resources, to make a review worthy of trust.
It does not prevent others from entering the 'market' but provides a benchmark against which the validity of other reviews can be evaluated.
October 19, 2014 at 12:03 pm
Thank you for the comment.
Using the gas fitter example the body responsible for ensuring quality (CORGI in the UK) is separate and independent of the gas fitters. So, the potential for CoI is much reduced.
On the other hand, Cochrane are effectively both producers and standard setters, so the potential for CoI is much higher.
I fully appreciate the need for transparency and trust in a product. I find it problematic that we seem to have got to a situation where the Cochrane method is seen as the 'gold standard' given the significant problems with it. However, the lack of overt discussion of these problems seems contrary to transparency. But that comes back to CoI.
I think there should be a variety of systematic review methods, with the costs and benefits of each clearly labelled. Consumers of the information can go from there.
October 19, 2014 at 12:29 pm
Actually, CORGI has been replaced by the Gas Safe Register and, although that is hosted by the Health and Safety Executive, its means of determining whether an individual fitter is competent is essentially inspection of work carried out by the applicant by… other qualified gas fitters! Conflict of interest is actually a somewhat separate issue – the underlying question here is: how do you know that an expert is an expert?
I don't think Cochrane, as an entity, would portray itself as the arbiter of what is acceptable in systematic review. No-one that I have met in the organisation views it as policing the conduct of systematic reviews, and the Cochrane methodology itself is open to further modification and improvement when such changes can be shown to improve reliability.
It's also worth remembering that there is not really a 'market' for systematic reviews in the way that there is for fruit and veg. The only currency which the customer is using in choosing to read a Cochrane review or a competitor review is their time. Provided that different reviews DO display their methodology clearly they can in this respect compete on an even playing field with Cochrane – the reader can see that one review is based on the personal collection of papers of one author while the other has been done by the current Cochrane method, and can weigh this against the fact that one of them is 5 pages to read and the other 120!
There is a place in the world for all kinds of systematic review, just as there is a place for all kinds of primary papers, from case reports to randomised, placebo-controlled, blind trials. What really matters is transparency, so that things cannot portray themselves as, or be easily misinterpreted as, what they are not.
October 19, 2014 at 12:38 pm
I'm not wanting to make this too anti-Cochrane; the reason I single them out is that they are relatively transparent, so the failings are easier to see.
I think we can both agree that transparency of process is the way forward. But transparency about failings is also an important part of that. Cochrane, alongside others, tend to gloss over the shortcomings. A clear example is the reliance Cochrane place on published trials (see http://www.ncbi.nlm.nih.gov/pubmed/23613540). This introduces large problems in estimating effect sizes, as it misses the 30-50% of trials that go unpublished.
But let's have a table with resources required (financial, opportunity costs etc) versus 'accuracy' and see what we get. Unfortunately we need more research first (CoI: I'm just about to submit a research bid based on just this notion).
October 19, 2014 at 12:58 pm
I'm not sure that Schroll et al tells you much about the 'reliance placed by Cochrane on published trials' – you could also argue that it shows that Cochrane authors do make attempts to obtain unpublished data, that this is often incorporated into reviews, and also that it can be pretty hard to obtain. From my own experience I can say that what is retrieved is often clarification/extension of data that has been published rather than wholly unpublished data.
We all know that there is masses of unpublished data out there – I have a considerable amount of it related to CTS myself. I'm not sure that this needs to be explained at length in the preamble to every Cochrane review – they are long enough already.
Perhaps a more important 'barrier to entry' in the process of getting evidence into the public domain is the sheer difficulty of presenting such data for public consumption when there are so many other pressures on our time.
Interesting discussion though – thank you
October 19, 2014 at 1:04 pm
My interpretation of the Schroll paper is that – broadly – attempts to get unpublished papers are, at the very least, haphazard/unsystematic.
There are lots of studies comparing SRs using published data and those using unpublished (my favourite being http://www.bmj.com/content/344/bmj.d7202).
As for Cochrane reviews being long enough already, we can agree on that!
October 19, 2014 at 1:19 pm
We will have to disagree about Schroll. Efforts to find unpublished data can be systematic in some circumstances, as in the study using FDA data which you quoted, but much of the unpublished medical data is like 'dark matter' in physics – we think it must be there but we currently have no successful tools for finding it.
In one sense I'm slightly relieved by Bero et al: although it shows that inclusion of unpublished data altered the estimates of effect size in >90% of reviews, the changes were pretty much half upwards and half downwards, which at least helps to refute the charge that the drug companies systematically suppress bad news for financial reasons. But that's a different topic.
October 19, 2014 at 1:29 pm
The Bero paper was interesting for showing that the effect of including unpublished data (in this case regulatory trials registered with the FDA) is large. In around 50% of the cases the effect size estimate was out by at least 10%.
I was surprised by the equal split between underestimates and overestimates, but that makes it more troubling: when you look at a summary estimate there is a 50% chance that it's out by at least 10%, BUT you have no idea whether that's an under-estimate or an over-estimate. The confidence intervals don't capture this.
I'm still unsure how to handle it, other than saying any suggested effect size, in an SR based on published trials, is likely to be a ball-park estimate!
October 19, 2014 at 2:31 pm
Well at least a better ball-park estimate than any single trial 🙂 It is nice to have a paper quantifying the possible scale of such errors.