Trip Database Blog

Liberating the literature

Email problems

We are in the process of switching to a new email system and this is causing problems! If you’re requesting things such as a password renewal, your email is in a queue of 28,776 and the rate of sending is 1,006 per hour! So it will be another 24+ hours until we’ve got through those.

Prior to the new system we used an in-house email tool we built from scratch. It worked really well for 7+ years but has recently started to creak at the seams. So, we’ve upgraded to a paid system called Mandrill.

The problem is that when you’re new it doesn’t allow rapid sending of emails, as it ‘senses’ your reputation by looking at things such as the number of rejected emails. One thing we do have is a load of dormant accounts, and currently we’ve got a bounce rate of 20% – so 20% of emails are bouncing back as undelivered. This doesn’t help our reputation, which is rated ‘poor’ – hence being restricted to 1,006 per hour.

The good news – and Mandrill is great for this – is that it allows us to easily auto-delete these dormant accounts, so next time our reputation will be much higher and we should therefore have a much higher send rate.

That aside, Mandrill does all sorts of things which should allow us to create a much better email experience, and it also gives us analytics showing how many emails were opened, how many links were clicked, etc. Fascinating reading.

So, apologies if you’re caught in the email queue!

Trip tips: refining your search

One of the many powerful features of Trip is the ability to refine your results based on the type of evidence you’re looking for. It’s really simple to use and below are some screenshots to walk you through the process.

If you have any questions just ask: jon.brassey@tripdatabase.com.

If you’re interested in upgrading, see the main differences in this infographic and upgrade here.

Automating PICO and searching

I have the great honour of being part of the KConnect consortium, recipient of an EU Horizon 2020 innovation grant. Trip is involved in a number of great projects within KConnect and I plan to blog about them all over the course of the year. The first to feature is enhancing our PICO interface.

Asking questions may seem straightforward, but it can be difficult; helping users understand the key elements of their questions typically gives those questions better structure. PICO stands for:

  • P = Population (eg what condition the user has)
  • I = Intervention (eg a drug, diagnostic test)
  • C = Comparison (eg an alternate drug or test)
  • O = Outcome (eg mortality, QoL)

Take these two real questions:

  • How can you safely treat constipation in pregnancy?
  • In diabetes would an AIIRA benefit over an ACE? 

In the top Q, the P = pregnancy and the O = constipation.  Alternatively the population could be pregnancy and constipation.

The second Q is more complicated, but the P = diabetes, I = AIIRA and C = ACE inhibitors.

You’ll note that questions don’t need all four elements; it’s a flexible concept! Irrespective of the number of PICO elements identified, the framework can be really useful in helping users think through the key elements of their question.
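
To make that concrete, here’s a tiny sketch (illustrative Python, nothing to do with Trip’s actual code – the names are my own) showing the two example questions represented as PICO structures, with any unused elements simply left empty:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PICO:
        """A clinical question broken into its PICO elements.
        Any element may be missing - the concept is flexible."""
        population: Optional[str] = None
        intervention: Optional[str] = None
        comparison: Optional[str] = None
        outcome: Optional[str] = None

    # "How can you safely treat constipation in pregnancy?"
    q1 = PICO(population="pregnancy", outcome="constipation")

    # "In diabetes would an AIIRA benefit over an ACE inhibitor?"
    q2 = PICO(population="diabetes", intervention="AIIRA", comparison="ACE inhibitor")

    for q in (q1, q2):
        # Only the elements that are present feed into a search
        terms = [t for t in (q.population, q.intervention, q.comparison, q.outcome) if t]
        print(" AND ".join(terms))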

From user feedback I hear time and time again that the PICO interface is great and really helps health professionals think through their questions.

KConnect is helping us improve it still further! We will simply allow users to type out their question in full and press search. We will automatically attempt to identify the PICO elements and then pass those elements to our search. By highlighting the suggested PICO elements, the system will teach users by experience what the elements are, as well as speeding up the question-answering process.
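
To give a flavour of the idea – and this is emphatically not the KConnect implementation, which will use far more sophisticated techniques – here’s a toy sketch that spots PICO elements in a free-text question using small hand-made word lists (the vocabularies and names below are invented for illustration) and turns whatever it finds into a query:

    import re

    # Toy vocabularies - a real system would use machine learning and
    # semantic annotation against full medical terminologies.
    VOCAB = {
        "population": {"pregnancy", "diabetes", "children", "elderly"},
        "intervention": {"aiira", "metformin", "laxatives"},
        "comparison": {"ace inhibitor", "placebo"},
        "outcome": {"constipation", "mortality", "quality of life"},
    }

    def extract_pico(question: str) -> dict:
        """Return the PICO elements spotted in a free-text question."""
        text = question.lower()
        found = {}
        for slot, terms in VOCAB.items():
            hits = [t for t in terms if re.search(r"\b" + re.escape(t) + r"\b", text)]
            if hits:
                found[slot] = hits
        return found

    question = "In diabetes would an AIIRA benefit over an ACE inhibitor?"
    pico = extract_pico(question)
    print(pico)  # {'population': ['diabetes'], 'intervention': ['aiira'], 'comparison': ['ace inhibitor']}

    # The spotted elements would be highlighted back to the user and
    # combined into the query passed to the search engine.
    print(" AND ".join(t for hits in pico.values() for t in hits))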

A further minor step – which might be really interesting – is to record the full question and the articles the user subsequently clicked on.  It’s not quite the same as a full answer, but a ‘half way house’.

We’ve a good few months of work ahead on this, using various techniques: machine learning, semantic annotation and hard work.

I’ll keep you posted.

Peer-review and journals

Richard Smith (who used to edit the BMJ) has just posted on Facebook:

Publishing in journals is a slow, balls aching process that adds no value. Only academics who need the “points” bother. Far better to blog.

Richard has long argued against the peer-review process, and here are two blog posts, by Richard, for further reading.

A connected article, by John Ioannidis, How to Make More Published Research True, makes a number of assertions, two of which are selectively shown below:

  • Currently, many published research findings are false or exaggerated, and an estimated 85% of research resources are wasted.
  • Modifications need to be made in the reward system for science, affecting the exchange rates for currencies (e.g., publications and grants) and purchased academic goods (e.g., promotion and other academic or administrative power) and introducing currencies that are better aligned with translatable and reproducible research.

I’ve often marvelled at the connected worlds of academia and publishers – two worlds with a symbiotic relationship; one wouldn’t work without the other. I am on a few online academic paper repositories and I’m always getting emails from people I follow who have published a new article. I’m staggered by how often they can churn them out.

I then look at the wonderful EvidenceUpdates (a service funded by the BMJ and supplied by HIRU at McMaster). They scan the ‘top’ 120 journals and do an in-house quality assessment (a form of critical appraisal), and those that pass get sent to a network of clinicians who assess the papers for newsworthiness and relevance to clinical practice. Amazingly, around 95-96% are rejected. While I’m not suggesting all 95-96% are junk, I suspect the majority are little more than vanity publishing: academics wanting another article for their CV and publishers desperate for content to justify the purchase price.

My own world of Q&A frequently shows how poorly aligned academia is with coal-face clinical information needs. It was one of the reasons I got involved in setting up DUETs, which publishes treatment uncertainties from patients, carers and clinicians, and from research recommendations, covering a wide variety of health problems. The idea is to capture ‘real’ questions with a view to improving research procurement. There is a suspicion that academics pursue their own interests, which may not be aligned with clinical need, so DUETs allows the ‘real’ questions to be raised. Working with the James Lind Alliance, it can be a powerful combination.

Flibanserin and blogs

‘Female viagra’: FDA panel backs Flibanserin with safety restriction

Whenever I see stories about new drugs published in the newspapers I turn to Trip to see what we’ve got on the topic. It turns out we’ve got relatively little.  The top trial (SNOWDROP trial) concludes:

In naturally postmenopausal women with HSDD, flibanserin, compared with placebo, has been associated with improvement in sexual desire, improvement in the number of SSEs, and reduced distress associated with low sexual desire, and is well tolerated.

But what I like is that the first result in Trip is Astroturfers rule the day: FDA’s flibanserin reviewers were “emotionally blackmailed” by a slick lobbying campaign, a critical examination of the process. While many do not like the inclusion of blogs (they can easily be removed from the search results), they can offer critical insight and context that can be missing from journal articles.

As an aside, another thing I enjoyed seeing was that our clinical trial search found 12 trials, all of which are now classed as closed. I’d not appreciated the power of having registered trials pulled through into Trip.

Trip tiles

We’ve been live, as a Freemium service, for a little over two weeks. In my more pessimistic (pre-launch) moments I thought that at this stage I might be having to abandon the whole idea because no-one was purchasing Trip. However, I’m delighted that this is not the case! We’re massively ahead of schedule and as such we’re accelerating various upgrades that we’d hoped to do towards the end of 2015.

Even more exciting, we’re thinking of new ideas!

One idea springs from my desire to do something interesting with the Timeline. The Timeline records your searches and the articles you’ve viewed on Trip, and not much else. So, one idea is to create something called Trip Tiles! A fresh tile would be created with every new search; at the top of the tile would be the search terms, and underneath would be the articles viewed. In many ways this is what the Timeline currently does. But I think there’s the potential to link in other people’s searches. So, you might search and find three articles, and as part of that process we highlight that one or more of the articles has been viewed in someone else’s timeline and offer you the chance to see their tile.

Best to illustrate that with one of my legendary attempts at a picture (if we roll out this feature we’ll get them properly designed):

You could go from tile to tile, both browsing and looking to see if you’ve missed any useful articles that someone else has already found. Not only that, you can see what search terms they’ve used – again, possibly useful.

How we’d implement this would be a challenge, but I’d see that as an interesting challenge, not a particularly tough one! Any feedback on the idea would be appreciated – comments on the blog or via email: jon.brassey@tripdatabase.com
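
To show why I don’t think it’s particularly tough, here’s a rough sketch (illustrative Python; the names and structures are invented, not a design for the real feature) of the core step – finding other people’s tiles that share at least one viewed article with yours:

    from dataclasses import dataclass, field

    @dataclass
    class Tile:
        """One search by one user: the search terms plus the articles viewed."""
        user: str
        search_terms: str
        articles_viewed: set = field(default_factory=set)

    def related_tiles(my_tile: Tile, all_tiles: list) -> list:
        """Other people's tiles sharing at least one viewed article with mine,
        biggest overlap first."""
        candidates = [
            t for t in all_tiles
            if t.user != my_tile.user and t.articles_viewed & my_tile.articles_viewed
        ]
        return sorted(candidates,
                      key=lambda t: len(t.articles_viewed & my_tile.articles_viewed),
                      reverse=True)

    mine = Tile("me", "statins primary prevention", {"art1", "art2", "art3"})
    others = [
        Tile("user_a", "statins elderly", {"art2", "art9"}),
        Tile("user_b", "flu vaccine", {"art7"}),
    ]
    for t in related_tiles(mine, others):
        print(t.user, "|", t.search_terms, "|", t.articles_viewed & mine.articles_viewed)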

Evidence Live, systematic reviews and the US Air Force

I’m just back from the wonderful Evidence Live.  While I was away I saw this news story Is the West losing its edge on defence? and I was particularly drawn to the following passage:

The military have also contributed to their own misfortunes by conspiring with defence contractors to build ever more expensive weapons that can only be afforded in much smaller numbers than those they are supposed to replace.

Pierre Sprey, chief designer on the F-16 fighter noted the ruinous consequences of buying stealth aircraft at hundreds of millions of dollars a copy.

“It’s a triumph of the black arts of selling an airplane that doesn’t work,” he said.

This fits in very nicely with my perspective on systematic review methods, and was one of the main threads in my presentation on the future of evidence synthesis. The current methods of systematic review production cost way too much for what they deliver. And if you consider that the majority of systematic reviews rely on published trials, they are inherently unreliable.

In the EBM world we’re buying F-16s…!

More to follow on this theme.

UPDATE: The wonderful Anne Marie Cunningham has pointed out (see comments) that the ruinous consequences were of buying the really expensive stealth fighters (not F-16s). That’s what comes of rushing a blog post so soon after a vigorous conference! The point remains – purchasing overly expensive planes has caused massive problems.

The light at the end of the tunnel…

…is, I hope, not the light of an oncoming train. I’ve nabbed that line from my favourite band – Half Man Half Biscuit (HMHB) – who wrote The Light At The End Of The Tunnel (Is The Light Of An Oncoming Train) a good few years ago! My love for HMHB aside, I keep reflecting on how things seem to be going really well for Trip and I’m desperately hoping we’ve turned a corner. So, why the optimism:

  • 2014 was pretty good.
  • We’re working on the new Freemium version of Trip. What’s coming out is going to be impressively good, and some of the premium upgrades will be great.
  • We’re involved in the really interesting EU-funded project which will be doing some really innovative things. I’ll blog about that more when the final specifications are agreed, but we’ll be looking at making Trip more multi-lingual, improving the Trip Rapid Review system and doing loads of work around similarity, which is useful for the next point.
  • Relatedness/similarity is looking very useful for what we want to do with regard to developing our financial viability. The measures we’re developing will allow us to do all sorts of interesting things; for instance, we can highlight a new book that’s useful to a particular clinician, or a new trial that’s pertinent to an existing systematic review. There are many more uses on top of that, but I’ve got to keep some secrets.
  • I’m starting to realise the value in our clickstream data (helped by two separate teams, soon to be joined by a PhD student as part of the EU project). You only have to look at most of this year’s blog posts to see I’m working hard on this. It can help with the relatedness work, but it can also do other useful things, such as improving the search results and better predicting new articles that are of use to a Trip user. If our mission is to ensure health professionals get the right evidence to support their care, using clickstream data will make it so much more effective. The advantage of the clickstream data is that it’s Trip’s data to utilise – it’s our IP. It’s at the heart of our future. I actually think it’s this point that’s making me so happy/optimistic.
  • Lots of other nice bits and bobs, e.g. I’ve just been invited to lecture in the USA in Autumn/Fall; I’m part of a large consortium bidding to be a support team for complex reviews; I’m presenting at the wonderful Evidence Live; I’m making headway in my new NHS job (I am lead for Knowledge Mobilisation for Public Health Wales); and I’m waiting to hear about a large MRC grant (not optimistic, but something to look forward to).

Long may this continue!

Another use for clickstream data

In the previous post (Clickstream data and results reordering) I highlighted how clickstream data could be used to easily surface articles that are not picked up by usual keyword searches, and therefore to improve search results. In my mind I was thinking this could help a clinician trying to answer a clinical question by surfacing documents they would otherwise miss.

But what about in systematic reviews (or similar comprehensive searches)?  A couple of scenarios spring to mind:

  1. A user conducts a search and finds, say, 15 controlled trials. We could create a system that highlights the most connected clinical trials that have not already been selected – effectively an in-built safety check to ensure no trials are missed (see the sketch after this list).
  2. Related concepts. You see some spectacularly complex search terms, no doubt human-generated. There may be other systems, but we could surface related concepts. A simple example was shown in the earlier post (Clickstream data and results reordering), where it highlighted that obesity is related to diet. OK, we all know that – but the computer didn’t; it spontaneously highlighted it. Doing this on a large scale using Trip’s ‘big data’ will generate more obscure relationships – potentially very useful in generating a comprehensive search strategy!
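
To show the sort of thing I mean for scenario 1, here’s a crude sketch (illustrative Python with made-up data, not how Trip actually works): it counts how often other documents were clicked in the same sessions as the trials a reviewer has already selected, and surfaces the most connected documents they haven’t picked yet:

    from collections import Counter

    # Clickstream: each session is the set of documents one user clicked (made-up IDs).
    sessions = [
        {"trial_A", "trial_B", "review_X"},
        {"trial_A", "trial_C"},
        {"trial_B", "trial_C", "trial_D"},
        {"trial_C", "trial_D"},
    ]

    def most_connected(selected: set, sessions: list, top_n: int = 5) -> list:
        """Documents co-clicked with the already-selected ones, most connected first."""
        counts = Counter()
        for session in sessions:
            if session & selected:              # the session touches our selection...
                for doc in session - selected:  # ...so count its other documents
                    counts[doc] += 1
        return counts.most_common(top_n)

    # A reviewer has already found trial_A and trial_B - what else is connected?
    print(most_connected({"trial_A", "trial_B"}, sessions))
    # [('trial_C', 2), ('review_X', 1), ('trial_D', 1)]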

If there are any systematic reviewers/searchers reading this, I’d love to hear what you think!
