Conflict of interest declaration: Trip’s main aim is to help clinicians answer their questions using the best available evidence. As such we have developed, and continue to develop, techniques to hugely reduce the costs of doing systematic reviews. See Trip Rapid Reviews – systematic reviews in five minutes, Ultra-rapid reviews, first test results and Trip Rapid Review worked example – SSRIs and the management of hot flashes.
In my presentations to Evidence Live I was (constructively) critical of Cochrane. This was distilled into two blog posts A critique of the Cochrane Collaboration and Some additional thoughts on systematic reviews. In the first article I quoted Trish Greenhalgh:
“Researchers in dominant paradigms tend to be very keen on procedure. They set up committees to define and police the rules of their paradigm, awarding grants and accolades to those who follow those rules. This entirely circular exercise works very well just after the establishment of a new paradigm, since building systematically on what has gone before is an efficient and effective route to scientific progress. But once new discoveries have stretched the paradigm to its limits, these same rules and procedures become counterproductive and constraining. That’s what I mean by conceptual cul-de-sacs.”
I quoted Trish as I felt that Cochrane had come to dominate and lead the systematic review paradigm. But one thing I didn’t write up at the time, and which links with Trish’s quote, was my feeling that the methodological rigour and standards set by Cochrane were actually an economic barrier to entry for competitors. The Wikipedia article on barriers to entry reports:
“In theories of competition in economics, barriers to entry, also known as barrier to entry, are obstacles that make it difficult to enter a given market. The term can refer to hindrances a firm faces in trying to enter a market or industry—such as government regulation and patents, or a large, established firm taking advantage of economies of scale—or those an individual faces in trying to gain entrance to a profession—such as education or licensing requirements.
Because barriers to entry protect incumbent firms and restrict competition in a market, they can contribute to distortionary prices. The existence of monopolies or market power is often aided by barriers to entry.”
Cochrane, due to their dominance, effectively set the standards of what’s deemed acceptable (irrespective of the significant evidence to the contrary – see the previous two blog posts for further information). This effectively stifles competition. If systematic reviews could be done quickly and easily by anyone the business model of Cochrane would be severely compromised – I can see no other losers (except perhaps pharma).
Is it just a coincidence that most changes to systematic review methods over the years appear to have more to do with increasing the methodological burden (by squeezing ever smaller amounts of bias out of the results) than with reducing the costs?
What prompted the above post was the announcement of the winner of the Nobel Prize for Economics. Jean Tirole has won for his work on market power and regulation. The BBC reports:
“Many industries are dominated by a small number of large firms or a single monopoly,” the jury said of Mr Tirole’s work. “Left unregulated, such markets often produce socially undesirable results – prices higher than those motivated by costs, or unproductive firms that survive by blocking the entry of new and more productive ones.”
Now, that’s got to be a good link – EBM, Cochrane and the Nobel Prize for Economics!
But the point of the post is not to moan at Cochrane, but to suggest that the systematic review ‘market’ is problematic and there appears to be little appetite to radically change things. If we want to improve care we need more systematic reviews, which means we need to innovate. And by innovate I don’t mean small iterative improvements – more substantial changes are needed.
Perhaps we could start at first principles and ask why we do systematic reviews in the first place. I used to think it was to get an accurate assessment of effect size. However, if you look at the evidence it’s fairly clear that systematic reviews – based on published trials – are pretty poor in this regard. But if it’s not that, then why do we do them? Once we can clearly articulate why, we can perhaps better understand how to produce them more efficiently.