Trip Database Blog

Liberating the literature

Confusing dual categories

In the image below you will see that the user has restricted the category to ‘Primary research’:

So, you can understand their confusion when the top result is classed as a ‘Systematic review’. This is not the first time it has been raised and, as it’s a recurring source of confusion, we need to act!

The issue arises because we treat ‘Primary research’ as equivalent to ‘journal articles’. In the example above the systematic review was published in a journal, so it gets both categories and we display the higher one.

So, our plan is to make sure that any dual category article (primary research and systematic review) is treated only as a systematic review. This means a stricter interpretation of ‘Primary research’ (excluding systematic reviews, which are secondary research).
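To make the planned rule concrete, here is a minimal sketch in Python (the category names and function are purely illustrative, not our actual code):

```python
# Illustrative sketch of the planned dual-category rule: an article
# tagged as both 'Primary research' and 'Systematic review' should be
# displayed only as a systematic review.

def resolve_category(categories):
    """Collapse a dual-category article to a single display category."""
    if "Systematic review" in categories:
        # Systematic reviews are secondary research, so this tag wins
        # over the broader 'Primary research' (journal article) tag.
        return "Systematic review"
    if "Primary research" in categories:
        return "Primary research"
    return categories[0] if categories else None

print(resolve_category(["Primary research", "Systematic review"]))
# prints: Systematic review
```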

No timeline on this but it’s in planning. And thank you to Adam T for raising it with us.

Full text articles in Trip

Trip tries to link to complete articles where these are available. For many evidence types (eg clinical guidelines and evidence-based synopses) this is fairly straightforward. However, when you look at articles obtained from journals it gets complicated as many are behind a paywall, with only the abstract freely available. But Trip has a few bits of functionality that can help.

Here are the main ways of linking to full-text articles in Trip:

Pro subscription: Working with third-party organisations we identify freely available full text (from places like PubMed Central, other full-text repositories and authors’ archived copies). If a Pro user searches Trip we always place the full text, if we have it, as the main link-out. Users can still find the PubMed abstract via the link to the right of the article:

Approximately 70% of all the journal articles we include in Trip link out to full text versions. In the top example, if the user wishes to see the abstract, they simply click on the abstract icon (the one on the right hand side).

Free users have a different experience as they would see this:

This shows, to the user, that we have the full text in our system and if they were to subscribe they could link out to it directly.

Link resolvers: These are tools that link articles in Trip to an organisation’s subscription journals. So, if an institution has a subscription to, say, NEJM it makes sense that a user from that organisation can easily access the full text. Unfortunately, this technology is ‘dumb’ in that it doesn’t know whether the institution actually subscribes to the journal or not. So, it’s a bit hit and miss. As such we use this icon to signify that you may be able to access a full text copy:

The title of the document still links to the PubMed abstract, while the icon is displayed to the right of the title and the user needs to click on that to attempt to access the full text. Often, if the full-text isn’t available, there is a library holding page which offers further support regarding accessing the full text.

LibKey: LibKey is certainly not ‘dumb’ as it knows – thanks to the work of an organisation’s librarians – which journals an organisation subscribes to. LibKey is a great tool and the team behind it (Third Iron) are great to work with! When an institution has LibKey we work with them to ensure we seamlessly link out to known subscriptions. But, two things to point out:

  • There is no special icon to signify it’s a LibKey link out – you’ll just see the usual ‘full text’ icon (as mentioned above).
  • If we have the full-text in our repository we link to that, in preference to any LibKey links. So, often (as we have full text links to around 70% of all articles) you’ll not find many link outs via LibKey! Notwithstanding that, LibKey is still a great resource to use and one we know our users appreciate.

If you have any further questions on full-text options then send us an email.


We’re back

We’re still trying to unpick what went on but the important thing is that Trip is back!

That was the longest outage, by a huge margin, in our history. This has been unpleasant for everyone and I’m sorry about that!

The site is down and has been for over 12 hours…

We are really sorry about our site being down.

The issue is with our server, hosted by Rackspace. Yesterday afternoon we had to restart it (a process that normally takes a couple of minutes) and the restart triggered an automatic update of the Windows software. This took ages, reached 100% and then the server simply stopped working. Rackspace engineers then tried to get it working but they have clearly been unsuccessful! The last we heard they were trying to roll back the changes. We are trying everything we can to get Rackspace to speed things up!

Once again, we’re sorry for this loss of service.

PICO search – it’s a special type of search

Introduced over ten years ago, the PICO search is very popular with our users. However, it is frequently misunderstood. Yesterday we received an email from a user:

We are finding different results between advanced search and PICO search. We are using the same terms. Is this possible? Is there any way to solve it?

So, what’s going on? PICO search is not designed to be an exhaustive search; it is designed to find a small number of highly relevant results. Or, to put it another way, it’s a very specific search, not a sensitive one.

At the heart of the PICO search is something called contingency searching, which helps us deliver a specific search. After the user enters their search terms, our first search runs all the PICO elements as title-only searches. If that returns too few results, we change the final term entered to a ‘title and text’ search and repeat. If that still returns too few results, we do the same with the penultimate term, and so on until we get a manageable number of results. All these repeated searches happen in the background; from the user’s perspective it’s a single search.
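For the curious, the contingency logic could be sketched roughly like this (the `search` callable, the field names and the threshold are illustrative assumptions, not our production code):

```python
# Hypothetical sketch of contingency searching: start with every PICO term
# restricted to the title, then relax terms (last-entered first) to
# 'title and text' until enough results come back.

MIN_RESULTS = 20  # illustrative threshold for a 'manageable' result set

def pico_search(terms, search):
    """terms: PICO terms in the order entered; search: a callable taking
    a list of (term, field) pairs and returning a list of results."""
    # Start fully specific: all terms restricted to the title.
    fields = ["title"] * len(terms)
    # Relax from the last-entered term backwards, re-running each time.
    for i in range(len(terms), -1, -1):
        query = list(zip(terms, fields))
        results = search(query)
        if len(results) >= MIN_RESULTS or i == 0:
            return results
        fields[i - 1] = "title and text"
```

All the repeats happen inside one function call, which mirrors why the user only ever sees a single search.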

So, if you want lots of results use the default or advanced search. However, if you want a more focussed set of results use PICO (although the default search is pretty good as well)!

The number of systematic reviews in Trip

As part of the improvements to Trip we have had a special focus on systematic reviews (SRs), as these are a key element of EBM (and many recent posts on this blog relate to SRs). As it stands we are just shy of 500,000 (this figure includes a number of Health Technology Assessments). In addition to published SRs we also link to over 150,000 registered SRs – those planned and ongoing.

However, an important consideration of ours is that users may see a systematic review (at the top of the EBM pyramid) and suspend scepticism. So, we’re still working hard on a quality score for SRs. This is going well and I’d like to think it’ll make an appearance before the end of the year.

So, size is important but size with quality is even better.

Improving the quality of Trip

One of Trip’s virtues is being easy to use. Behind this ease is a hugely complex website, one that took over two years to re-code. As well as re-coding the website there were other problems that affected the quality of the site and we’re addressing these in a systematic manner. As we tick these off the ‘to do’ list, the site gets stronger. Some recent work includes:

Systematic reviews (SRs)

We automatically grab new SRs from PubMed (and we recently improved the filter for identifying new SRs). However, we also obtain SRs from other sources and we have finished automating this process. So, now, every week we grab a whole batch of new SRs. Previously, gathering from these other sources was a manual process that was not regularly undertaken.

Ongoing clinical trials

Another system that was previously semi-manual and not undertaken regularly was the import of ongoing clinical trials. This is now automated, with new trials added weekly.

Broken links

The user interface – to alert us to a broken link – works well but the Trip process for fixing these links was sub-optimal. We have now reworked that and it works really well. Fewer broken links = increased quality.

Our ‘to do’ list has been full of things like this – not major individual pieces of work but important aspects that might affect the performance of Trip. Each one ticked off makes it less likely that users will have a poor experience – surely a good marker of quality.

Systematic reviews in Trip and some unintended consequences

As mentioned previously we are trying to identify as many systematic reviews as possible to include in Trip. One method is to use third-party services that capture academic publications from a variety of sources. We, in turn, try to identify systematic reviews from these services and add them to Trip. This works well – generally – but two issues have arisen that we had not anticipated.

Predatory journals: I received an email yesterday which started “I noticed on my most recent search that a predatory journal made it into your search result”. It transpired that the article in question was a systematic review. In other words the 3rd party scraped the article from the web and we grabbed it ‘blindly’. This is clearly problematic and we’ll take steps to stop this happening in the future.

Pre-prints: This is less clear cut – so we would welcome input. In the new system I’m seeing systematic reviews from places such as medRxiv. On one hand these are potentially problematic as they haven’t been through peer review. But on the other hand they are clearly labelled as not having gone through peer review and, also, they may well be good quality and contain valuable information that might not be seen for months (due to the slow peer review process).

It’d be interesting and useful to hear your thoughts on the above! So, please leave a comment or email me directly.

Connected/related articles

This post is an attempt to ‘think aloud’ about connected/related articles. By that I mean: if you find an article you like, how can you quickly find others that are similar? We know that searching is imprecise and a user might find articles that match their intention at, say, results #2, #7 and #12, and then lose interest and miss ones at #54 or #97.

At Trip we have had something called SmartSearch for years. This mines the Trip weblogs to highlight articles that have been co-clicked in the same search session. So, if a user clicks on articles #2, #7 and #12 we infer a connection. We have successfully mapped these connections and it reveals a structure in the data. In the example below it’s a small sample of connections taken from searches for urinary tract infections:

Each blue square represents a document and the lines/edges are connections made by co-clicking the documents within the same search session. You can see from the annotation that these form clusters around topics. However, co-clicking is not perfect!
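For those interested in the mechanics, the co-click mining could be sketched like this (the session data and structures are illustrative, not our actual weblog format):

```python
# Illustrative sketch of mining co-click connections from search sessions:
# every pair of documents clicked within the same session gains an edge,
# and repeated co-clicks strengthen that edge.
from collections import Counter
from itertools import combinations

sessions = [  # each inner list: document ids clicked in one search session
    ["doc2", "doc7", "doc12"],
    ["doc2", "doc7"],
    ["doc54", "doc97"],
]

edge_weights = Counter()
for clicked in sessions:
    for a, b in combinations(sorted(set(clicked)), 2):
        edge_weights[(a, b)] += 1

print(edge_weights[("doc2", "doc7")])  # prints: 2
```

Plotting those weighted edges is what produces the clustered graph shown above.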

Fortunately, there are other types of connections that I think we can use – semantic similarity and citations.

Semantic similarity: I’m thinking principally of PubMed’s related articles. This uses statistical methods to find articles with similar textual information e.g.

At the top is the document of interest and below are the articles deemed semantically similar. So, these articles are all related and one could make connections between them.

Citations: Articles typically list a bunch of references – so the article is citing these. And any article can itself be cited. So, you have forward and backward citations. Again, these have been shown as connections and mapped, e.g (source):

So, three types of connections: co-click, semantic similarity and citations. In isolation all have their issues but combined it could be something incredibly powerful. Well, that’s the theory….

While I believe SmartSearch is brilliant, I don’t think we’ve implemented it particularly well. The main issue I have is that a user needs to ‘call’ the results. On one hand that’s not a big deal but it looks like this:

I’ve highlighted it in red so you don’t miss it (an important issue in itself) but also it’s not really telling the user why they should click. In other words, it has a weak ‘call to action’. In part this is because it’s not ‘real time’ – a user clicks a button and the system calculates the related articles. I’m thinking if we told users that there were, say, 25 closely connected articles and 7 very closely connected articles, possibly teasing what these were, it would be much more compelling.

Another consideration, the notion of connected articles can work on two levels: the individual article and a collection of articles.

Individual articles: Each article within Trip could feature other connected articles be it co-clicks, semantic similarity or citations. It could be that we create a badge (thinking of the Altmetric Donut) that helps indicate to users how many connections there might be.

Collection of articles: If a user clicks on more than one article we, in effect, add up the information from the individual article data. This allows for some clever weightings to be brought in to highlight particularly important/closely connected articles.
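As a rough illustration of how that adding-up might work (the weights, connection types and data are invented for the example, not Trip’s actual values):

```python
# Sketch of combining connection evidence across a chosen collection of
# articles: each connection type carries a weight, and scores are summed
# over all chosen articles to rank the most closely connected papers.
from collections import defaultdict

WEIGHTS = {"co_click": 1.0, "semantic": 0.8, "citation": 1.2}

def rank_connected(chosen_articles, connections):
    """connections[article_id] -> list of (related_id, connection_type)."""
    scores = defaultdict(float)
    for article in chosen_articles:
        for related, kind in connections.get(article, []):
            scores[related] += WEIGHTS[kind]
    # highest combined score = most closely connected
    return sorted(scores.items(), key=lambda kv: -kv[1])

connections = {
    "A": [("X", "co_click"), ("Y", "citation")],
    "B": [("X", "semantic"), ("Y", "citation")],
}
print(rank_connected(["A", "B"], connections))
```

A paper cited by, and semantically similar to, several chosen articles would float to the top — which is exactly the ‘particularly closely connected’ highlighting described above.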

But what information is important/useful to the user? I’m seeing two types of display:

List: A list of articles, arranged by some weighting to reflect ‘closeness’ – so those at the top are deemed closer to the article(s) chosen. We could enhance that by indicating which are systematic reviews, guidelines etc.

Chronological list: As above but arranged by date. The article(s) chosen would be shown and then a user could easily see more recent connected papers and also more historical papers. The former being particularly useful for updating reviews!

Right, those are my thoughts, for now. They seem doable and coherent but am I missing something? Could this approach be made more useful? If you have any thoughts please let me know either in the comments or via email.


One excellent bit of feedback is to add connections between clinical trial registries and subsequent studies. This should be feasible. Similarly, link PROSPERO records (register of ongoing systematic reviews).


Another excellent idea – add retraction data.
