Trip Database Blog

Liberating the literature

Unblocking Trip’s indexing ‘fatberg’

To get content into Trip it needs to be indexed – another term for processing. This starts when we add basic records to the site; these can be as simple as a document title, URL and year of publication. Our systems then process the records before they arrive in the searchable index of records.

The basic records get into our system in two main ways:

  • A monthly manual upload of new content.
  • Automatic grabbing from sites such as PubMed.

Recently we noticed that records from PubMed were not getting into Trip in a timely manner, caused by an indexing ‘fatberg’. To cut a long story short, we have unblocked our indexing pipes and we’re now all back, shipshape and Bristol fashion!

Questions in need of answers… social media addiction and risk of sedative medication

Our community Q&A has two questions needing an answer:

  • What is the best scale for social media addiction?
  • Is there any cross-sectional study about the risks of using sedative medication?

If you know of any relevant literature then please let us know.

As a reminder the Trip Community Q&A tries to link unanswered clinical questions with users likely to know the answer – the hope being that they are best placed to answer it. The idea has been inspired by sites such as Quora.

Anatomy of a clinical question

When starting the community Q&A I was worried that people would be lazy – not search themselves and simply ask the question. This has not been the case. The overwhelming majority of questions have been complex and legitimately not fully answerable via Trip! Here are a few examples, with my interpretation underneath each:

In children with pharyngitis, how useful are prediction tools for selecting in whom to perform microbiological tests?

This was difficult to answer for a couple of reasons:

  • For the person asking the question: he was Spanish, so I wonder about his knowledge of English synonyms. He asked about ‘prediction tools’, whereas the (English-language) literature tends to use the phrase ‘decision rules’. A search starting from that point is at a great disadvantage!
  • From Trip’s perspective we simply did not index the main document, from CADTH.  We tend to grab all the content from them so I’ll need to understand why it’s not in our index!

An adult patient developed insomnia due to metoprolol; is there a beta-blocker less associated with causing insomnia?

The main issue is the lack of clarity of evidence – as mentioned in the answer it is often contradictory. So, was the motivation of the questioner to try to get a more definitive answer than suggested by the literature?

What are the possible effects of giving an ACE inhibitor + NSAID + diuretic to elderly patients?

We’ve actually been asked this question twice! It’s a problematic search due to the issue of synonyms, but for me the disparate nature of the literature is the key problem. There are a number of references, none definitive, and that induces an uncertainty all of its own. Clearly a review is needed, and this is what the community Q&A provided; without it, it’s tough for a health professional to start making sense of the literature.

I feel there should be more fundamental lessons but I clearly need more thinking time!

Community Q&A, an update

This has been such an interesting experiment with loads of learning.

For those unfamiliar with the community Q&A, the idea is: if you have a clinical question and you can’t find the answer on Trip, leave the question (link found here). We then encourage other Trip users to answer it. The premise is simple: a question might be difficult for the user asking it, but for an expert in the field it could well be easy. So, Trip’s role is to match the question to the expert.

Some reflections after a couple of months:

  • Probably the biggest issue is how we ‘present’ the system: it appears to be confusing, so people don’t really get it. When users first encounter it (at the top of the results) they aren’t sure what to make of it. We need to improve how we explain the system (any help out there?)

Learning points

  • People often don’t leave structured clinical questions, which can make the questions ambiguous – hardly ideal.
  • So far, very few people – apart from in-house Trip – have answered the questions. We’ve had input via Twitter and email, but we need to do more to make it scalable.
  • People have tended to ask difficult questions, which is what the system is for. It’s great learning for us, as it helps remind us of the complexity of answering clinical questions.


It’s not gone as planned or anticipated, but this is something we can take our time with. We’ll hopefully be pushing out some changes in the near future.

If you have any thoughts on how we might improve it, please let us know.

Feedback on community Q&A

This post is a request to better understand Trip users’ attitudes to the community Q&A service, specifically in relation to the Q&A ‘The incidence of emergency surgery due to intestinal obstruction caused by postoperative adhesions is declining. Is this due to laparoscopy?’

Can you please complete the poll, ticking as many answers as seem appropriate. We’d also welcome comments:


If you have any other feedback – positive or negative – please leave it as a comment or email me.

Community Q&A – progress so far

The community Q&A feature on Trip has been operating for a few weeks now. It’s still early days of the beta version (we call it a beta as it’s still being tested).

For those who’ve missed it, this feature allows you to leave a question (that you’ve been unable to find an answer to) and Trip then sends it to users best placed to answer it. It relies on the collective wisdom of the wonderful Trip community!

So far, there have been a number of positives and some areas that need strengthening, for instance:


  • People are asking questions, we’ve had over ten!
  • People are asking in languages other than English – this has surprised and delighted me.
  • I’ve really enjoyed trying to answer them – it reminds me of the challenges users face in answering complex questions.
  • Only one of the questions was ‘easy’ in that the answer could be found within the first few results on Trip.

Less positive

  • Not enough people answering the questions. While I enjoy it, it’s not scalable to rely on me.
  • There are a lot of interface issues that need sorting – but that’s the beauty of a beta: it’s there to test things.
  • We’re still not getting the emailing right – alerting people to questions and answers.

We’ve now got lots of feedback and experience to improve the system and will be working on that over the next few weeks. This is something that doesn’t need rushing but so far I’m really pleased.

If you’ve used it, seen it (or even not seen it) we’d welcome any thoughts/comments. The more feedback we have the better we can make it.

The mesh scandal

EBM Live was, as ever, brilliant (BTW next year it’s in Toronto, July 8-6). One thing I was previously aware of, but which hadn’t really registered, was the scandal around vaginal mesh. For an overview of the issues, see this video.

I spoke with Kath Sansom (of Sling The Mesh) on a number of occasions, including an excellent session at Oxford’s Natural History Museum (alongside speakers representing Primodos and Sodium Valproate).

Lots has been written about these issues, a small sample being:

I’m raising this as it highlights a number of issues that are relevant to what Trip does:

  • It was very useful for me to understand the way lives are affected by these things. It can be ‘easy’ to treat these as abstract issues. They’re not.
  • There seem to be huge issues relating to the regulation of devices, from both a safety and an efficacy perspective. In the video I was struck by how self-satisfied John Wilkinson (of the UK regulator, the MHRA) appeared to be. I’d prefer a regulator that was a lot more dynamic on these issues. His response was, in essence, to say that everything’s fine with how we regulate devices.
  • Outcomes – the outcomes used in the mesh studies seem laughable and certainly not patient-focused. One main outcome was the number of Tena pads used over a period of time. Sure, that measures one aspect of the intervention’s efficacy, but given the severe adverse events that can occur it seems wholly inadequate.
  • Declaration of interests. In the UK there is no requirement for doctors to declare payments from industry. The USA has the Sunshine Act which “…requires manufacturers of drugs, medical devices, biological and medical supplies covered by the three federal health care programs Medicare, Medicaid, and State Children’s Health Insurance Program (SCHIP) to collect and track all financial relationships with physicians and teaching hospitals and to report these data to the Centers for Medicare and Medicaid Services (CMS). The goal of the law is to increase the transparency of financial relationships between health care providers and pharmaceutical manufacturers and to uncover potential conflicts of interest.“ In the UK we have no such arrangement. Doctors like Margaret McCartney have been trying for years to improve the situation (see here and here as a couple of examples). Until declaration becomes compulsory, we have no way of knowing whether the doctors advocating an intervention (mesh or anything else) are receiving ‘support’ from the device (or drug) manufacturers. A key element of EBM is transparency, so that shared decisions (between patients and health professionals) can be made. If a health professional is receiving ‘support’ from a manufacturer, that support exists to guide decisions in the manufacturer’s favour – which might not be the same as the patient’s! It really is quite simple.
  • Kath was saying that, as a non-health professional, it took her a long time to get her head around scientific language. The medical literature should not be the preserve of health professionals. I’d like to think Trip could help here, and I’m working on an idea to give ‘non-experts’ more support in understanding the medical literature.

I can’t help feeling Trip should use its reach to campaign more on things like this…


Document similarity

I really enjoy it when we can fund a bit of R&D and one such project is starting to bear fruit. The overall aim of the project is to develop a ‘brain’ underlying Trip that can better deliver evidence to users. This can help in a large number of our areas of interest.

This sub-project is based on the notion of document similarity: given one document, which other documents are most like it? A clear use case is when you find a document that is really interesting and want the ones most similar to it. Typically you’d keep scrolling through the results; a similarity measure finds them for you automatically!

But there are many other uses. For instance, we can take clinical questions, treat them as documents (I’m using the term ‘document’ quite broadly – essentially any distinct piece of text) and see which documents are most similar. Why search when the system can do it automatically?

We can also use this intelligence to keep people up to date with the latest evidence. If we know a user likes document A, we can scan new evidence to see those that are similar enough and alert the user to these.

It’s still early days and we’re still developing things (we’re using a variety of techniques, including machine learning), but initial results are promising. Below is a list of text documents we uploaded and ‘asked’ the system to arrange by similarity:

Most dissimilar
– Prostate-Specific Antigen-Based Screening for Prostate Cancer: Evidence Report and Systematic Review for the US Preventive Services Task Force
– The Society for Vascular Surgery Wound, Ischemia, and foot Infection (WIfI) classification independently predicts wound healing in diabetic foot ulcers

– Docetaxel Versus Surveillance After Radical Prostatectomy for High-risk Prostate Cancer: Results from the Prospective Randomised, Open-label Phase 3 Scandinavian Prostate Cancer Group 12 Trial
– Disordered Eating Behaviors Are Not Increased by an Intervention to Improve Diet Quality but Are Associated With Poorer Glycemic Control Among Youth With Type 1 Diabetes

– Decreasing Seroprevalence of Measles Antibodies after Vaccination – Possible Gap in Measles Protection in Adults in the Czech Republic
– The Society for Vascular Surgery Wound, Ischemia, and foot Infection (WIfI) classification independently predicts wound healing in diabetic foot ulcers

Most similar
– Results of Prostate Cancer Screening in a Unique Cohort at 19yr of Follow-up
– Prostate cancer screening with prostate-specific antigen (PSA) test: a systematic review and meta-analysis

– Prostate cancer screening with prostate-specific antigen (PSA) test: a systematic review and meta-analysis
– Prostate-Specific Antigen-Based Screening for Prostate Cancer: Evidence Report and Systematic Review for the US Preventive Services Task Force

– Results of Prostate Cancer Screening in a Unique Cohort at 19yr of Follow-up
– Prostate-Specific Antigen-Based Screening for Prostate Cancer: Evidence Report and Systematic Review for the US Preventive Services Task Force

So far, so good!
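The post doesn’t say which techniques underlie Trip’s system, so as a purely illustrative sketch, here is one common, minimal approach to this kind of ranking: TF-IDF weighting plus cosine similarity. The code below (standard library only; the shortened titles are loosely based on the list above) scores every pair of documents and sorts them from most to least similar – a toy example, not Trip’s actual implementation.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def tfidf_vectors(docs):
    """Build a sparse TF-IDF vector (term -> weight) for each document."""
    tokenized = [tokenize(d) for d in docs]
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for tokens in tokenized:
        df.update(set(tokens))
    vectors = []
    for tokens in tokenized:
        total = len(tokens) or 1
        tf = Counter(tokens)
        vectors.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

titles = [
    "Prostate cancer screening with prostate-specific antigen PSA test",
    "Prostate-specific antigen based screening for prostate cancer",
    "Measles antibody seroprevalence after vaccination in adults",
]
vecs = tfidf_vectors(titles)

# Score every pair and sort from most to least similar.
pairs = sorted(
    ((cosine(vecs[i], vecs[j]), i, j)
     for i in range(len(titles)) for j in range(i + 1, len(titles))),
    reverse=True,
)
for score, i, j in pairs:
    print(f"{score:.2f}  doc{i} vs doc{j}")
# The two prostate-screening titles share many weighted terms and form the
# top pair; the measles title shares none, so its pairs score 0.0.
```

TF-IDF is only one option: the ‘machine learning’ mentioned above could equally mean learned embeddings, where documents are mapped to dense vectors and compared the same way. The alerting use case also falls out of this: score each newly indexed document against the documents a user liked and notify them when the similarity exceeds a threshold.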


Screen time and children – supporting decision making

We’re currently doing our monthly update highlighting new content and when doing these, some topics stand out; no idea why.  This month there have been a few on screen time in children:

The top entry, from PROSPERO, is a systematic review protocol exploring the association of screen time and sleep. The second (EPPI Centre) is an ‘evidence map’, while the final entry (CPS) is a clinical guideline.

I was interested to see that the guideline has a distinct section on sleep, and if you dig deeper through the documents you see overlap in a number of places. Is this good or bad? I suspect that for the producers of the reviews there are ‘logical’ reasons for the content of their product, but from a decision maker’s position I can only think it’s unhelpful. I’m not sure what the answer is – well, not entirely. Throw into the mix all the other publications on the topic (see this Trip search as an example) and it could be seen as a mess!

At Trip we’re focused on supporting decision makers to take evidence-based/informed decisions. So, is the answer to, somehow, extract the actionable messages from these very long documents? Sounds reasonable to me, but it opens up additional issues – such as who are the stakeholders, what are their decisions and what do they need to make them!?

As seems so often the response – it’s something to ponder (is this a bit like saying ‘more research needed’)!


