
Trip Database Blog

Liberating the literature

EMDA MMC – an example search

We’re undertaking a rapid review to answer the client’s question: “Is there high-quality evidence to support the use of EMDA MMC in combination with BCG therapy for non-muscle invasive high-risk bladder cancer (NMIHRBC)?” It’s an interesting question and I thought it might be nice to show how we use Trip to help answer it.

Understanding the Q: EMDA = ElectroMotive drug administration, MMC = Mitomycin C and BCG = BCG vaccine.

Initial search: I keep things simple and adjust if the initial search is problematic. In this case I searched (EMDA OR electromotive) AND (MMC OR Mitomycin) AND BCG AND (“bladder cancer” OR NMIHRBC). In this there are 4 elements:

  • EMDA OR electromotive – covers the first part of the search
  • MMC OR Mitomycin – using the abbreviation and the term ‘Mitomycin’. I didn’t add the ‘C’ as it doesn’t seem necessary in this case. If we got lots of results I could always replace ‘Mitomycin’ with “Mitomycin C” (note the quotation marks, which make it a phrase search)
  • BCG – As above there seems no real need to add the term ‘vaccine’
  • “bladder cancer” OR NMIHRBC – bladder cancer might seem too vague (why not search for the fuller term ‘non-muscle invasive high-risk bladder cancer’?). Well, again, if it’s problematic – if we got too many results – I could always add the fuller term to make the search more specific
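The four elements above are just groups of synonyms AND’d together, so the query is easy to assemble (and to tighten) programmatically. A minimal sketch, assuming nothing about Trip’s own syntax beyond the Boolean operators and quoted phrases shown above:

```python
# Sketch: build a Boolean query from groups of synonyms.
# Within a group terms are OR'd; groups are AND'd together.
# Illustrative only - not a Trip API.
def build_query(groups):
    def fmt(term):
        # Quote multi-word terms so they run as phrase searches
        return f'"{term}"' if " " in term else term

    def fmt_group(group):
        if len(group) == 1:
            return fmt(group[0])
        return "(" + " OR ".join(fmt(t) for t in group) + ")"

    return " AND ".join(fmt_group(g) for g in groups)

groups = [
    ["EMDA", "electromotive"],
    ["MMC", "Mitomycin"],
    ["BCG"],
    ["bladder cancer", "NMIHRBC"],
]
print(build_query(groups))
# (EMDA OR electromotive) AND (MMC OR Mitomycin) AND BCG AND ("bladder cancer" OR NMIHRBC)

# Too many results? Tighten a group, e.g. force the phrase search:
groups[1] = ["MMC", "Mitomycin C"]
```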

This is what it looks like via the Advanced Search:

So, the search generates 24 results. I clicked on 14 articles that looked particularly relevant and then used the Connected Articles feature (BTW see our video explainer of Connected Articles) to reveal closely connected/linked articles. Here are the top results:

In total Connected Articles returns 100 results and even at the bottom many seem relevant:

Connected Articles goes beyond the Trip content and is great for helping minimise the chance of overlooking important articles. I need to work through these 100 to look for supplementary documents to enhance the review.

So, the above is a nice example of using Trip to highlight some highly relevant documents including guidelines from the European Association of Urology, some documents from NICE and Cochrane, as well as some other systematic reviews.

Disappointing guideline scores

Our guideline scoring system is nearly a year old and it’s been well received by all types of Trip users. Over time we add new guideline producers and these scores are added to the system.

We are in the process of adding ten new guideline producers and it’s really disappointing to see most of them scoring really low! As a reminder, this is how we score guidelines (at the publisher level, not the individual guideline level):

  • Do they publish their methodology? No = 0, Yes = 1, Yes and mention AGREE (or similar) = 2
  • Do they use any evidence grading e.g. GRADE? No = 0, Yes = 2
  • Do they undertake a systematic evidence search? Unsure/No = 0, Yes = 2
  • Are they clear about funding? No = 0, Yes = 1
  • Do they mention how they handle conflict of interest? No = 0, Yes = 1

Across the ten new publishers we could find no information on methodology, most didn’t use any evidence grading such as GRADE, and we have no idea whether they undertook systematic searches. Funding is the most frequently reported element, but there is often no way of knowing how they deal with conflicts of interest. A number highlight potential conflicts but not how these are managed – in which case the publisher gets a zero score for that element.

It does make me wonder how users of these guidelines feel. I think the answer, and this is a worry, is that many won’t care or consider the implications. How can you be confident that the guideline you’re using has been produced in an unbiased manner? In short, the answer has to be faith: faith in the publishing organisation. This authority bias is problematic for many reasons.

In short, guideline publishers: please be more transparent about your processes. It shouldn’t be difficult…!

A new LLM project idea: Trip Clinical Evidence Review

EDIT: Mock-up (mentioned below) is available to view here.

Trip adds loads of great evidence every month and this idea relates to better presenting this new evidence for users. We have a current way of doing this, which is to show it as a set of search results:

Above is the latest evidence for Primary Care!

We’ve been wanting to improve this for a long time but, with the advent of LLMs, the time might be right! So, instead of simply listing them as a set of results we could, for clinical areas (such as primary care, oncology, gastroenterology) create specialist ‘journals’ for each topic e.g. Trip Oncology Insights or Trip Clinical Evidence Review: Cardiology.

For each clinical area we would select 20-30 of the best/most important articles that month and summarise them to allow for easy reading (with link-outs to the full text). Users could still see all the remaining articles via the search results method, or possibly via a visualisation like this, to allow easy identification of articles in your particular area of interest:

I’ve asked our designer to mock up what this might look like and I’ll share it when we have it.

Latest evidence from Trip

At Trip, we continuously update our database with thousands of new articles each month, prioritizing the inclusion of high-quality evidence such as guidelines and systematic reviews. Our ‘Latest’ feature is designed to present these recent additions in a user-centric manner, tailored according to individual profiles. This customization occurs in two primary ways:

  • Clinical area: For users with specific clinical interests noted in their profiles, such as Gastroenterology, we curate and list the most recent, relevant evidence in that field. (Click here to explore the latest articles in Gastroenterology).
  • Personalised Topics: Users have the option to highlight particular topics of interest within their profiles. Based on these selections, we actively search for and recommend new articles that align with these specified interests, such as research on bipolar disorder.
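Under the hood this amounts to matching newly added articles against each user’s saved topics. How Trip actually does the matching isn’t documented here, so the naive keyword match below is purely an assumption for illustration:

```python
# Sketch: surface this month's new articles that match a user's saved
# topics. A naive case-insensitive substring match on titles - Trip's
# real matching logic is an assumption, not documented.
def latest_for_user(articles, topics):
    matches = {topic: [] for topic in topics}
    for article in articles:
        title = article["title"].lower()
        for topic in topics:
            if topic.lower() in title:
                matches[topic].append(article["title"])
    return matches

new_this_month = [
    {"title": "Lithium versus quetiapine in bipolar disorder: an RCT"},
    {"title": "Proton pump inhibitors in gastro-oesophageal reflux disease"},
]
print(latest_for_user(new_this_month, ["bipolar"]))
# {'bipolar': ['Lithium versus quetiapine in bipolar disorder: an RCT']}
```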

To ensure our users remain informed of the latest evidence, we distribute a monthly email containing a direct link to these updated resources. This link is accessible at any time, allowing users the flexibility to explore the latest findings at their convenience. While this feature is frequently utilized and appreciated by our community, we believe there are opportunities for enhancement to further enrich user engagement and satisfaction.

At the top is a list of 5 ‘key’ documents and below that a list of recorded search terms and/or clinical areas of interest. If you click on a link it takes you to a page of search results – the results being a search for the topic for content added that month:

Recognising the current approach as ‘sub-optimal’ – a diplomatic term for inadequate – we are re-evaluating this feature. It’s evident that users want to see the most recent evidence in their specific areas of interest, and we acknowledge we need to improve how we meet that demand. We’d really value your feedback, so please share your insights and suggestions in the comments or by emailing me directly at jon.brassey@tripdatabase.com.

Update on one LLM project: Q&A

We have just started testing our automated Q&A system mentioned here. In short, you ask a free-text question and it looks over Trip’s content to generate the best answer. Our medical director’s first tests were very encouraging so we’ve thrown it open to our AI Innovation Circle for additional testing. The initial questions from the ‘Circle’ were:

  • Which decreases Migraines more: Magnesium, Zinc, Vitamin D or Omega-3?
  • Which decreases Cluster Headaches more: Magnesium, Zinc, Vitamin D or Omega-3?
  • Which combination of Magnesium, Zinc, Vitamin D or Omega-3 has been found to decrease headaches?
  • How can cluster headaches be quickly stopped?
  • What is the best treatment for a migraine?
  • What is the QUICKEST treatment for a migraine?
  • How does the combination of acetaminophen + caffeine + aspirin for migraine treatment compare to prescription treatments for migraine?
  • How do corticosteroids compare to other treatments for preventing migraine recurrence?

We have sent these Qs to our system and generated two answers per question, one using ChatGPT-4 and the other version 3.5 (ChatGPT describes the difference between the two as “ChatGPT-4 is an advanced iteration of OpenAI’s language model that offers improved understanding and generation of text, enhanced contextual awareness, and greater nuance in responses compared to its predecessor, ChatGPT-3.5.”). Each response takes around 10-15 seconds to generate and, really importantly, the answers are referenced.

We’ve sent the answers back to the people who asked them and we’re awaiting feedback. We know that some of the answers are suboptimal but we also know how we can improve things. That, ultimately, is the purpose of this proof of concept project: can a Q&A system work based on Trip’s content? Based on our work to date we are absolutely confident that we can make a robust system. Once testing of this project is finished we will take the learning, combine it with the learning from the second project (fully automated evidence reviews) and decide if/how to operationalise one or both of them…

A review of Trip’s 2023

2023 was a great year for Trip.

We continue to have an impressive impact on care around the globe, helping support millions of decisions with evidence-based content. This is Trip’s core function and, as such, it’s ‘business as usual’. However, we have continued to improve the site as much as we can and 2023 was full of significant improvements.

First quarter of 2023: A real focus on quality improvement, including the way we handle updates to content such as systematic reviews and ongoing clinical trials. We also started work on a major de-duplication effort and ended the quarter by refining our aspirations on quality.

Second quarter of 2023: This is where we really started to explore the potential of large language models (LLMs) such as ChatGPT. We also introduced scoring systems for guidelines and RCTs and a new guideline category for Europe. Finally, our de-duplication efforts saw 143,218 duplicate articles removed from Trip.

Third quarter of 2023: Our user survey results were published which is always interesting for us. We started exploring automated clinical Q&A using LLMs (see here and here). We also started our overhaul of the advanced search and started exploring some improvements to search relevancy.

Fourth quarter of 2023: We finished the year off strongly with the release of the new advanced search and the wonderful Connected Articles. We continued to push ahead with our LLM work, with two main projects occupying our time (as well as a bunch of ideas bubbling away in the background): a fully automated evidence review system and a clinical Q&A system. We also announced the creation of the AI Innovation Circle to help us manage the AI potential/hype. Finally, we announced the main focus for the start of 2024: moving the Trip infrastructure to ‘the cloud’ and, unsurprisingly, AI!

All this remarkable progress is supported by a stable business model that has allowed us to remain independent and grow (as the site improves our income increases; a virtuous cycle)! While subscriptions to Trip Pro are the main source of income, our bespoke rapid reviews boomed in 2023. These are produced for a small group of selected organisations who appreciate our speed, evidence-based approach and cost-effectiveness. If you want to know more please email us.

Oh yes, don’t forget, we are still a ridiculously small organisation.

Another great boost for us is our users, they are wonderful and frequently crucial in helping us with surveys, feedback on new features etc. So, a big thank you from Trip to you.

2023 has proved immense, 2024 will hopefully be even bigger.

Quality: removing incorrectly tagged systematic reviews

We import extra systematic reviews from a number of sources and use special filters to identify them. It has come to our attention that one of the filters was too sensitive and therefore brought in a number of false positives, i.e. articles tagged as systematic reviews that clearly are not!

So, we have started to clean these up and are currently removing 42,956 of them. Due to the method employed we’re also removing some true positives, so once everything is removed we will reimport those true positives! Convoluted, but the best way of getting this right.
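The ‘convoluted’ process boils down to set arithmetic: remove everything the over-sensitive filter flagged, then reimport the subset verified as genuine systematic reviews. A sketch with made-up IDs:

```python
# Sketch of the clean-up as set arithmetic. IDs are made up.
flagged = {"a1", "a2", "a3", "a4"}   # everything the over-sensitive filter tagged
verified = {"a2", "a4"}              # confirmed genuine systematic reviews

to_remove = flagged                  # step 1: remove the whole flagged set
to_reimport = flagged & verified     # step 2: reimport the true positives

print(sorted(to_reimport))           # ['a2', 'a4']
print(sorted(flagged - verified))    # ['a1', 'a3'] - false positives, gone for good
```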

Two LLM projects….

I recently wrote about our plans for the next few months at Trip, this is already out of date! In the post I mentioned our work on automated reviews – this is still ongoing. However, we’ve also started an additional project – on Q&A.

Both projects are about generating fully automated POCs (proofs of concept). We’ve written about the automated reviews in the previous post. The Q&A work, however, is new and will be based on retrieval augmented generation (RAG). One criticism of LLMs (such as ChatGPT) is that they hallucinate. By using RAG you can, to a great extent, overcome this problem: RAG forces the LLM to obtain the answer from a predefined corpus of knowledge (as opposed to the knowledge within the LLM). In our case, for the POC, it will use Trip’s clinical guidelines, systematic reviews and RCTs. If all goes to plan we should have a wonderful, evidence-based Q&A system – one that can grade the answer based on the evidence used (taking into account risk of bias in RCTs, guideline scores etc).
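As a rough sketch of the RAG flow – toy keyword-overlap retrieval over a tiny made-up corpus, with the actual LLM call left out – none of which reflects Trip’s real implementation:

```python
# Sketch of retrieval augmented generation (RAG): retrieve relevant
# documents from a predefined corpus, then constrain the LLM to answer
# from them. Corpus, scoring and prompt wording are all illustrative.
corpus = [
    {"id": 1, "type": "guideline", "text": "Triptans are first line for acute migraine."},
    {"id": 2, "type": "rct", "text": "Magnesium supplementation reduced migraine frequency."},
    {"id": 3, "type": "rct", "text": "Statins lower LDL cholesterol in adults."},
]

def tokens(text):
    # Crude tokeniser: lowercase and strip simple punctuation
    return set(text.lower().replace("?", " ").replace(".", " ").split())

def retrieve(question, docs, k=2):
    # Rank documents by word overlap with the question (a stand-in for
    # the embedding search a real RAG system would use)
    q = tokens(question)
    return sorted(docs, key=lambda d: len(q & tokens(d["text"])), reverse=True)[:k]

def build_prompt(question, docs):
    # Force the model to answer only from the supplied evidence, citing
    # ids - this is what makes the final answers referenced
    evidence = "\n".join(f"[{d['id']}] ({d['type']}) {d['text']}" for d in docs)
    return (f"Answer using ONLY the evidence below, citing [id] for each claim.\n"
            f"Evidence:\n{evidence}\n\nQuestion: {question}")

question = "Does magnesium reduce migraine frequency?"
prompt = build_prompt(question, retrieve(question, corpus))
# 'prompt' would then go to the LLM; the reply comes back referenced
```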

Two final details:

  • Both projects are concentrating on migraine – a topic which isn’t too big or too small!
  • We hope to be testing both of these in January

We’re quietly confident of both approaches but I suspect things will not be perfect!! As I said in my previous post on the topic – happy days!

Related articles

Using the same mechanism as Connected Articles, users can see related content for individual results on Trip.

Under each result on Trip you will see a related articles link:

If you hover over this link you’ll see the following:

If you click on this it’ll return the related content:

Nice and simple and it can be very useful 🙂
