Trip Database Blog

Liberating the literature


October 2015

Oral tapentadol for cancer pain

I was looking at Twitter when I saw this tweet:

So, why not see what the highly experimental Trip Rapid Review system makes of oral tapentadol for cancer pain?  I spent five minutes and came up with this result:

We found three trials while the actual Cochrane Systematic Review found four; the extra trial was an unpublished one (well done to Cochrane for finding that).  Frustratingly, the actual review had no forest plot – so our pseudo-plot (above) will have to do.

Our system gave a score of 0.42 which suggests reasonable, but unspectacular, results for oral tapentadol.  The actual Cochrane conclusion is:

Information from RCTs on the effectiveness and tolerability of tapentadol was limited. The available studies were of moderate or small size and used different designs, which prevented pooling of data. Pain relief and adverse events were comparable between the tapentadol and morphine and oxycodone groups

It’s difficult to compare end-points – Cochrane says it’s as good as morphine and oxycodone (which may be good, bad or indifferent – I don’t know) while our system suggests it’s ok/not bad.

Given the highly experimental nature of our system I think we give consistently good results.  The important next steps are:

  • Improve the system – which we’re about to start on, via our Horizon 2020 funded work.
  • Validate the approach so we can understand when it works well and when it doesn’t.

We’ve used this approach before (see the example on SSRIs for the management of hot flashes) with good results and our earliest internal tests found around 85% agreement.

I’m not suggesting this as a replacement system, and I’m under no illusions about the potential for harm if it’s misused, but it’s a novel approach which should see further development.  While much of the work on machine learning, text mining etc. in systematic reviews is about replacing humans in the standard systematic approach, I see this approach as altogether more revolutionary.

Content, content, content

Broken links are never great but unfortunately they are unavoidable.  They typically happen when a website either removes an article or has a redesign and moves all the old links to new ones.  Users, naturally, get frustrated and Trip has to do better.

We currently have a broken link detector – which sends an alert for each broken link.  In reality it’s crude: it ‘decides’ a link is broken if it takes longer than five seconds to load.  So if a user is on a slow internet connection, or a site is running slowly, an alert is sent anyway.  We get lots of false positives, making the system poor.

But recently we spent a great deal of time analysing these alerts, found a large number of sites with problems, and completely revamped those links.  So the number of true positives should fall dramatically.

However, moving forward, we’re planning a new broken link system.  If we detect a broken link (or one we suspect is broken) we will retry it a number of times over the next 24 hours, and only after it fails several attempts will it be considered a true positive and an alert created.  The alert will contain additional tools to make it much easier to amend the index.  While no system will be perfect, we hope it will significantly reduce the problem of broken links.
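The retry idea above can be sketched in a few lines.  This is a minimal illustration, not Trip’s actual implementation: the function name and retry count are my own, and in production `check` would be an HTTP request with a sensible timeout, spread over 24 hours rather than run back-to-back.

```python
from typing import Callable

def confirm_broken(check: Callable[[str], bool], url: str, retries: int = 4) -> bool:
    """Only flag a link as broken if it fails on every retry.

    `check` returns True when the link loads.  A single slow response
    no longer triggers an alert: the link must fail every attempt.
    """
    for _ in range(retries):
        if check(url):
            return False  # loaded at least once, so not a true positive
    return True  # failed every attempt: raise an alert
```

The point of the design is that a transient slow load (the main source of false positives in the current system) passes at least one retry and is never flagged.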

Search safety net

The search safety net is a novel feature to help improve searching; helping users not miss important papers.  I wanted to explain it – simply – but have failed on that score.  It’s important so I hope you can make sense of what I’ve written.  If you have any questions, my email is

After a search you will see a new ‘Search Safety Net’ button:

If you click that it’ll bring up a list of related search terms.  It does this by looking at the top 250 search results and analysing the search terms people have previously used when clicking on these results.  This works on the notion that a single document can be clicked on after numerous searches.  For instance, in the example above search terms might have been ‘prostate cancer screening’, ‘MRI screening’ etc.
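The mechanism described above can be sketched as a simple aggregation over a click log.  This is an illustrative sketch only – the function name, data shapes, and cut-off are my assumptions, not Trip’s code: count the past queries that led users to click on any of the current top results, and surface the most common ones.

```python
from collections import Counter
from typing import Iterable, List, Set, Tuple

def related_terms(click_log: Iterable[Tuple[str, int]],
                  top_results: Set[int],
                  current_query: str,
                  n: int = 5) -> List[str]:
    """Suggest related search terms from past click behaviour.

    click_log: (query, clicked_doc_id) pairs from previous sessions.
    top_results: ids of the top results for the current search.
    Returns the most common past queries whose clicks landed on
    those same documents, excluding the query just run.
    """
    counts = Counter(
        query for query, doc_id in click_log
        if doc_id in top_results and query != current_query
    )
    return [query for query, _ in counts.most_common(n)]
```

This captures the notion in the text that a single document can be clicked on after numerous different searches – each of those searches becomes a candidate related term.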

The next section of the search safety net happens AFTER you’ve conducted your search and found a number of documents you like AND looked at (or simply clicked the ‘check box’ to the left of the result).  If you click on the Search Safety Net button again you see three columns of results:

The first column is closely related articles, the second is other related articles and the third is related search terms.  The latter column is similar to the related search terms described above, but is based purely on the documents clicked (as opposed to the top 250 results).  However, to understand the process behind the other two columns you need to understand clickstream data.

Paper 1 ———- Paper 2 ———- Paper 3

In the above there are three papers (1-3).  A user, in the same session, clicks on Paper 1 and Paper 2, therefore we can make a link between the two.  Another user might click on Paper 2 and Paper 3, again making a link.  So, Paper 1 is connected to Paper 2 (a single step, using network language) while Paper 3 is two-steps away from Paper 1.  We have this data for all articles in Trip.

Slightly simplifying things (!) the first column is the most popular related articles based on documents that are one step away from the documents clicked.  So, we look at all the articles clicked by the user and pull back all the documents that are one step away, displaying the most ‘popular’ at the top.  The second column contains the documents that are two steps away.  This is likely to find less focused results, but may surface the occasional really interesting study that might otherwise have been missed.
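The co-click graph and the one-step/two-step columns can be sketched as follows.  This is my own toy reconstruction of the idea, not Trip’s implementation – session data shapes and ranking are illustrative assumptions.

```python
from collections import Counter, defaultdict
from itertools import combinations
from typing import Dict, Iterable, List, Set

def build_graph(sessions: Iterable[List[int]]) -> Dict[int, Set[int]]:
    """Each session lists the papers one user clicked; any two papers
    co-clicked in a session become neighbours (one 'step' apart)."""
    graph: Dict[int, Set[int]] = defaultdict(set)
    for session in sessions:
        for a, b in combinations(session, 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def related(graph: Dict[int, Set[int]], clicked: List[int],
            steps: int = 1, n: int = 5) -> List[int]:
    """Papers exactly `steps` hops from the user's clicked set,
    ranked by how often they are reached (most 'popular' first)."""
    clicked_set = set(clicked)
    one_step = Counter(nbr for p in clicked_set for nbr in graph[p])
    if steps == 1:
        reached = Counter({p: c for p, c in one_step.items()
                           if p not in clicked_set})
    else:  # two steps: neighbours of the one-step papers
        two_step = Counter(nbr for p in one_step for nbr in graph[p])
        reached = Counter({p: c for p, c in two_step.items()
                           if p not in one_step and p not in clicked_set})
    return [p for p, _ in reached.most_common(n)]
```

With the Paper 1–Paper 2–Paper 3 example from the text: one user co-clicks Papers 1 and 2, another co-clicks Papers 2 and 3.  Starting from Paper 1, the one-step column holds Paper 2 and the two-step column holds Paper 3.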

Two important issues:

  • This only works if there is click data – if the documents you’ve looked at have no clicks, you’ll get no results.
  • This is being released as a ‘beta’ bit of software, meaning we’re still developing it.  At present it is available to both free and Premium users of Trip.  However, this is likely to change in the near future.
