Friday, October 17, 2014

Survey 2014 - initial results

Trip users are amazing - within 48 hours of releasing the survey we had 1,000 responses, at which point SurveyMonkey closed the survey saying we'd reached the limit!  Apologies if you feel your voice hasn't been heard; if that is the case, email me (jon.brassey@tripdatabase.com), I'd love to hear from you. Given your generosity of time, I thought I'd share the highlights of the initial results...

The top 5 professions represented in the survey were:
  • Doctor - secondary or tertiary care
  • Doctor - primary care
  • Librarian
  • Other
  • Researcher/scientist
75% of respondents have been using Trip for more than a year with 35% using it for longer than 3 years.

I asked about the most important features relating to our content and these are the top 6 responses (those that were highlighted by more than 30% of the respondents):
  • Largest single searchable collection of ‘evidence-based’ content
  • Largest global collection of clinical guidelines
  • Many more systematic reviews than Cochrane
  • Content is from around the globe, for example USA, UK, Canada, Australia, New Zealand, France, Germany, Japan, Singapore, South Africa
  • Selected collection of PubMed’s leading clinical journals
  • Database of over 500,000 clinical trials
I also asked if there were any surprises - and there were lots of responses.  The main one was a lack of awareness of our image and video collections.  We clearly need to work hard on getting that message out.

I asked about the most important features of Trip; the following are all those that polled over 30%:

  • Easy filtering of results to restrict to evidence types e.g. systematic reviews, guidelines
  • Monthly alert of new evidence linked to your interests
  • PICO search interface
  • Order the results by quality, relevance or date
  • Easy/Friendly interface with no steep learning curve
  • Advanced search interface
  • Colour coding scheme to make it easier to highlight high quality evidence
Our users seem keen to be alerted to clinical trials, jobs, conferences and books (most polled over 50% approval).

We asked about a Trip Evidence Service and most thought it was a good idea.  However, only 11% thought they would be able to find the money within their organisation.  But I'm encouraged as 11% is still high, given our large user base.

Most people appeared to be broadly supportive/understanding of our need to move to a freemium business model.

I listed a number of possible new premium features; these are the ones that polled over 20% (only the top 3 polled higher than 30%):
  • Add in additional full-text articles
  • Creation of an 'Answer Engine' giving you instant answers to your clinical questions
  • PICO+ - based on the popular PICO search, but more user-friendly and powerful
  • A 'Help' feature so if you can't find what you need you can ask the wider Trip community
  • Providing education points based on your time using Trip
  • Improved emails highlighting evidence that is more likely to be useful to you
  • Introduce a 'People who looked at this article also looked at these articles' feature to highlight related articles
  • Improved export of records
Because we use colour, we asked about colour blindness and 3.2% said they were colour blind.  I've no idea how that compares to the wider population.  Nearly 30% of users reported "I am not colour blind and I was not aware that you used colour to help highlight the quality of the results".  So, another communication challenge for us.

Finally, in looking through the 'Any other comments' section I was completely overwhelmed by the messages of love and support.  Knowing that makes my work so much easier.

Tuesday, October 14, 2014

Survey time

We are planning to make significant changes to Trip in early 2015.

An important aspect of this is better understanding our users; how they use Trip and what features they value.  In addition we're keen to explore attitudes to various proposed changes.

This is a really important survey, so please take 5-10 minutes to go through the 14 questions.

Click here to take part in the survey

Thank you.

Monday, October 13, 2014

Economics and EBM

Conflict of interest declaration: Trip's main aim is to help clinicians answer their questions using the best available evidence.  As such we have developed, and continue to develop, techniques to hugely reduce the cost of doing systematic reviews.  See Trip Rapid Reviews - systematic reviews in five minutes; Ultra-rapid reviews, first test results; and Trip Rapid Review worked example - SSRIs and the management of hot flashes.

In my presentations at Evidence Live I was (constructively) critical of Cochrane.  This was distilled into two blog posts: A critique of the Cochrane Collaboration and Some additional thoughts on systematic reviews.  In the first article I quoted Trish Greenhalgh:

"Researchers in dominant paradigms tend to be very keen on procedure. They set up committees to define and police the rules of their paradigm, awarding grants and accolades to those who follow those rules. This entirely circular exercise works very well just after the establishment of a new paradigm, since building systematically on what has gone before is an efficient and effective route to scientific progress. But once new discoveries have stretched the paradigm to its limits, these same rules and procedures become counterproductive and constraining. That’s what I mean by conceptual cul-de-sacs."

I quoted Trish as I felt that Cochrane had come to dominate and lead the systematic review paradigm.  But one thing I didn't write up at the time, and which links to Trish's quote, was my feeling that the methodological rigour and standards set by Cochrane were actually an economic barrier to entry for competitors.  The Wikipedia article on barriers to entry reports:

"In theories of competition in economics, barriers to entry, also known as barrier to entry, are obstacles that make it difficult to enter a given market. The term can refer to hindrances a firm faces in trying to enter a market or industry—such as government regulation and patents, or a large, established firm taking advantage of economies of scale—or those an individual faces in trying to gain entrance to a profession—such as education or licensing requirements.

Because barriers to entry protect incumbent firms and restrict competition in a market, they can contribute to distortionary prices. The existence of monopolies or market power is often aided by barriers to entry."


Cochrane, due to their dominance, effectively set the standards of what's deemed acceptable (irrespective of the significant evidence to the contrary - see the previous two blog posts for further information).  This stifles competition. If systematic reviews could be done quickly and easily by anyone, the business model of Cochrane would be severely compromised - I can see no other losers (except perhaps pharma).

Is it just a coincidence that most changes to systematic review methods over the years appear to have more to do with increasing the methodological burden (by squeezing increasingly small amounts of bias out of the results) than with reducing the costs?

What prompted the above post was the announcement of the winner of the Nobel Prize for Economics. Jean Tirole won for his work on market power and regulation.  The BBC reports:

"Many industries are dominated by a small number of large firms or a single monopoly," the jury said of Mr Tirole's work. "Left unregulated, such markets often produce socially undesirable results - prices higher than those motivated by costs, or unproductive firms that survive by blocking the entry of new and more productive ones."

Now, that's got to be a good link - EBM, Cochrane and the Nobel Prize for Economics!

But the point of this post is not to moan at Cochrane, but to suggest that the systematic review 'market' is problematic and there appears to be little appetite to radically change things.  If we want to improve care we need more systematic reviews, which means we need to innovate.  And by innovate I don't mean small iterative improvements; more substantial changes are needed.

Perhaps we could start from first principles and ask why we do systematic reviews in the first place.  I used to think it was to get an accurate assessment of effect size.  However, if you look at the evidence it's fairly clear that systematic reviews - based on published trials - are pretty poor in this regard.  But if it's not that, then why do we do them?  Once we can clearly articulate why, we can perhaps better understand how to produce them more efficiently.

Friday, September 19, 2014

Highlighting clinical uncertainties

I've been involved in clinical uncertainties for many years.  I had the pleasure of helping create the DUETs database (UK Database of Uncertainties about the Effects of Treatments).  Around the same time Trip released the Tag Cloud of Clinical Uncertainty, which was a great experiment. In all aspects of my professional life I like to highlight the importance of clinical uncertainty.  The term often unnerves people; I think they feel threatened by the notion.  But it could be worse: the likes of Iain Chalmers and Muir Gray have often used the phrase 'clinical ignorance', which is far harsher.

My interest in uncertainties stems from my desire to improve the research procurement process.  The main drive with Trip is to help clinicians answer their clinical questions.  Without knowing the gaps in their knowledge and the evidence base, how can you procure suitable primary research, or even secondary research (evidence synthesis)?  My teams and I have answered over 10,000 clinical questions, so we know how frequently the research base is lacking or not focused on clinical care.  So things need to improve, and I think recording uncertainties is a great way to help.

I raise all this because, in my business planning, I've spoken to one of the UK's largest research 'agencies' and they are keen to work with Trip to better understand users' questions and gaps in the evidence base.  So, for me, the answer is to try and capture the clinical questions our users have and when Trip has let them down.  I have a few ideas around this, including creating a PICO+ tool: a step-by-step tool to allow users to easily answer their clinical questions.  The user would start by adding the full question and then we would guide them through the PICO steps (e.g. what is the population, what is the intervention).  At the end they can tell us if their question has been answered.  If not, it's got a good chance of being a clinical uncertainty.
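To make the idea concrete, here is a minimal sketch of how such a step-by-step capture might work. The class, field names and the example question are purely illustrative assumptions, not Trip's actual design.

```python
# Hypothetical sketch of the PICO+ capture flow described above;
# names and fields are illustrative, not Trip's implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PicoQuestion:
    full_question: str                  # the user's question in their own words
    population: Optional[str] = None    # P - who are the patients?
    intervention: Optional[str] = None  # I - what is being done?
    comparison: Optional[str] = None    # C - compared with what?
    outcome: Optional[str] = None       # O - which outcome matters?
    answered: Optional[bool] = None     # set once the user tells us

    def is_potential_uncertainty(self) -> bool:
        """An unanswered question is a candidate clinical uncertainty."""
        return self.answered is False

# Example walk-through of the guided steps (made-up question)
q = PicoQuestion("Do SSRIs reduce hot flashes in menopausal women?")
q.population = "menopausal women with hot flashes"
q.intervention = "SSRIs"
q.comparison = "placebo"
q.outcome = "frequency/severity of hot flashes"
q.answered = False  # the user reports Trip did not answer the question

if q.is_potential_uncertainty():
    print("Record as a clinical uncertainty to share with the research agency")
```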

Seems like a plan!

Tuesday, September 02, 2014

The Answer Engine and The Journal of Clinical Q&A

Trip prides itself on being a great tool for answering clinical questions. Over 80% of users find the information they need all or most of the time.  But that's still not perfect, and one idea I keep coming back to is the 'answer engine'.  The wonderful Muir Gray said, in relation to finding evidence, that three clicks was two clicks too many.  So the challenge is: is there a way of getting answers with a single mouse click?

The answer (or perhaps 'My answer') is the answer engine.

This would involve a system that tries to understand the search and display a suitable answer.  So, if a user searches for minocycline and acne we can be fairly confident that they want to know whether minocycline is effective in treating acne.  Therefore, we could drop in the following answer:

Minocycline is an effective treatment for moderate to moderately-severe inflammatory acne vulgaris, but there is still no evidence that it is superior to other commonly-used therapies. This review found no reliable evidence to justify the reinstatement of its first-line use, even though the price-differential is less than it was 10 years ago. Concerns remain about its safety compared to other tetracyclines.

This has been taken from a recent Cochrane Systematic Review.  The normal search results would appear beneath the 'answer'.  The user gets a great answer in one click.
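As a rough illustration of the idea (not Trip's actual implementation), a curated query-to-answer lookup could sit in front of the normal results. The data structure and the matching rule below are assumptions made for the sketch.

```python
# Illustrative sketch only: a hand-curated mapping from query terms to a short
# 'answer' displayed above the normal search results.

# Curated answers keyed by the set of terms that must appear in the query
CURATED_ANSWERS = {
    frozenset({"minocycline", "acne"}):
        "Minocycline is an effective treatment for moderate to moderately-severe "
        "inflammatory acne vulgaris, but there is still no evidence that it is "
        "superior to other commonly-used therapies. (From a Cochrane systematic review.)",
}

def answer_for(query: str):
    """Return a curated answer if all of its keywords appear in the query."""
    terms = set(query.lower().split())
    for keywords, answer in CURATED_ANSWERS.items():
        if keywords <= terms:
            return answer
    return None  # no curated answer; fall back to the normal search results

print(answer_for("minocycline and acne"))       # matches the curated entry
print(answer_for("aspirin stroke prevention"))  # None -> ordinary results only
```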

There are a few issues with the above, and one is scalability.  Parts of this can be automated but much of it will be manual.  Also, relying on sources - such as Cochrane - means it's led by the evidence producers, not the user.  So the challenge is to have users supply answers.  Which leads to the Journal of Clinical Q&A.

The idea is to set up a brand new journal dedicated to answering real clinical questions.  Based roughly on the BestBETs site, it will follow a similar structure (e.g. Steroids in lateral epicondylitis).  The clinical bottom line will be pulled through into Trip to act as the 'answer', and users can then click to see the full article.

Peer review is problematic on many levels and Richard Smith (former editor of the BMJ) has frequently criticised the current peer-review process (e.g. A woeful tale of the uselessness of peer review and Scrap peer review and beware of “top journals”).  But how can we improve on it?  I'm not sure we can, but I'm open to help!  My current proposal is as follows:
  • Each answer will be reviewed by an in-house team as a sanity check.  Those that seem reasonable will be released into the answer engine as an 'answer pending approval'
  • We would then ask the wider Trip community to read and rate the answer.  This would borrow from the F1000 approach, which uses three classes: Approved, Approved with Reservations and Not Approved.  An article will be considered published when it reaches a certain approval threshold (a rough sketch of this follows below). Note, the F1000 approach is not without criticism (e.g. PubMed and F1000 Research — Unclear Standards Applied Unevenly), hence writing this article in the hope of obtaining help.
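As a rough sketch of the gating step, something like the following could decide when a community-rated answer counts as published. The threshold, the minimum number of raters and the scoring rule are all assumptions for illustration, not a decided policy.

```python
# Sketch of the proposed community-review step; numbers are placeholders.
from collections import Counter

APPROVED = "Approved"
RESERVATIONS = "Approved with Reservations"
NOT_APPROVED = "Not Approved"

def is_published(ratings, min_ratings=5, approval_threshold=0.6):
    """Treat an answer as published once enough raters approve it outright."""
    if len(ratings) < min_ratings:
        return False  # still an 'answer pending approval'
    counts = Counter(ratings)
    return counts[APPROVED] / len(ratings) >= approval_threshold

# Example: 4 of 6 raters approve outright, so the answer is published
ratings = [APPROVED, APPROVED, RESERVATIONS, APPROVED, APPROVED, NOT_APPROVED]
print(is_published(ratings))  # True (4/6 ≈ 0.67 >= 0.6)
```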
Trip would have a good answer and the person who uploaded it would obtain a citation.  The plan is to start slowly and see how it develops, but the longer-term view would be to see the answers appear in Medline/PubMed - as currently happens with a large number of the articles in BestBETs.

The above is an idea, a work in progress.  I think there is every chance this can become a reality but a little help in refining the concept would be really good.

Saturday, August 23, 2014

A Trip evidence service?

Trip is a wonderfully useful search engine: widely used, with a great reputation and brand. We're thinking we could build on this to create a formalised evidence service.

An evidence service could undertake a number of roles to support users (probably organisations) for instance:
  • Literature searches
  • Critical appraisals
  • Evidence reviews/synthesis
  • Clinical Q&A
  • Horizon scanning
  • Etc
We have a network of highly skilled information experts who would undertake the work.  Due to our low overheads we could provide a very cost-effective service.

I have experience in the UK, where a large number of organisations (e.g. CCGs) do not have timely access to robust evidence to support their decisions.  This is really problematic when introducing changes to the system; how can they be evidence-based with no evidence input?  I doubt the UK is atypical in this respect.  Therefore, there is a real opportunity to improve care and improve our business!

If you're interested in the service and want to help us develop it, then let me know.

Thursday, August 21, 2014

Beauty is in the eye of the beholder

Clickstream data is not widely known about.  In short, it's the analysis of users' clicks on websites.  We've started exploring this, and the clickstream we're using is based on users clicking on particular search results. For example, if you do a search on Trip and click on documents 2, 4 and 9, you're effectively telling us that, for your purposes, they're connected.  In isolation that's arguably meaningless, but over thousands of searches you start to see structure.  I've blogged about this previously (here, here and here) but now we've got more results.
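To illustrate the mechanics (with made-up document IDs and sessions; this is not Trip's actual pipeline), the clicks from each search can be aggregated into a weighted 'co-click' graph, where edges connect documents clicked within the same search:

```python
# Illustrative sketch: building a weighted co-click graph from click sessions.
from collections import defaultdict
from itertools import combinations

# Each session is the set of result documents one user clicked for one search
sessions = [
    {"doc2", "doc4", "doc9"},
    {"doc2", "doc9"},
    {"doc4", "doc7"},
]

# Edge weight = number of sessions in which two documents were clicked together
edge_weights = defaultdict(int)
for session in sessions:
    for a, b in combinations(sorted(session), 2):
        edge_weights[(a, b)] += 1

for (a, b), weight in sorted(edge_weights.items()):
    print(a, "--", b, "weight:", weight)

# Over thousands of real sessions, strongly weighted edges link related
# documents, and the connected components form the kind of map shown below.
```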

Below is the largest continuous graph/map of connected documents - over 10,000 documents long (click on the image to expand).