As part of the Q&A system we are mindful that including patient-identifiable information (PII) is a big no-no! So, as part of the initial question processing, we have a ‘guardrail’ system to handle this eventuality.
Test Q (completely made up) : My patient (Mrs Jane Smith DOB 17/12/1856) has persistent vulvovaginal candidiasis not improving with prolonged fluconazole treatment. What investigations should I undertake and can you suggest some treatments?
Our system reported:
The question as written is problematic due to a violation of criterion (2): it includes patient identifiable information.
Issue (2): Inclusion of Patient Identifiable Information
Name: “Mrs Jane Smith”
Date of Birth: “DOB 17/12/1856”
These are direct identifiers and must not be included in a clinical query, even in professional contexts where confidentiality is expected.
Corrected Version (with Identifiable Information Removed): Q: A postmenopausal woman has persistent vulvovaginal candidiasis that has not improved with prolonged fluconazole treatment. What investigations should I undertake and can you suggest alternative treatments?
The question processing stage is really interesting. As well as the guardrails (stripping out PII, profanities, etc.), it corrects spelling, grammar and formatting, and assigns a question type and clinical area(s). There’s a lot of activity before the search has even started.
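As a rough illustration (and emphatically not our production guardrail), a first pass at PII detection can be as simple as a couple of pattern checks before anything reaches the LLM. The patterns, function and field names below are assumptions for demonstration only:

```python
# Illustrative sketch only -- not Trip's actual guardrail code.
import re
from dataclasses import dataclass, field

DOB_PATTERN = re.compile(r"\bDOB\b[:\s]*\d{1,2}/\d{1,2}/\d{2,4}", re.IGNORECASE)
NAME_PATTERN = re.compile(r"\b(?:Mr|Mrs|Ms|Miss|Dr)\.?\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)?")

@dataclass
class GuardrailResult:
    cleaned_question: str
    flags: list = field(default_factory=list)

def strip_pii(question: str) -> GuardrailResult:
    """Flag and redact obvious direct identifiers before the question
    moves on to spelling/grammar correction and classification."""
    flags = []
    cleaned = question
    if DOB_PATTERN.search(cleaned):
        flags.append("date_of_birth")
        cleaned = DOB_PATTERN.sub("[DOB removed]", cleaned)
    if NAME_PATTERN.search(cleaned):
        flags.append("patient_name")
        cleaned = NAME_PATTERN.sub("[name removed]", cleaned)
    return GuardrailResult(cleaned_question=cleaned, flags=flags)

# Example:
# strip_pii("My patient (Mrs Jane Smith DOB 17/12/1856) has persistent ...")
# -> flags ['date_of_birth', 'patient_name'] and a redacted question.
```

In practice a layer like this would sit in front of an LLM-based check, which handles the less regular cases (addresses, unusual identifiers) that simple patterns miss.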
We’re getting close… The system was great before and it’s getting much, much better!
We last tinkered with the journals list in 2022, so a refresh was long overdue.
At the moment Trip takes content from PubMed in three main ways:
A filter to ID all the RCTs in PubMed, whatever the source.
A filter to ID all the systematic reviews in PubMed, whatever the source.
All the articles from a core set of journals.
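For anyone curious how feeds like these can be expressed, here is a rough sketch against the NCBI E-utilities API. The filter strings shown are common PubMed filters used as stand-ins; they are not necessarily the exact queries Trip runs:

```python
# Sketch of the three PubMed feeds via NCBI E-utilities (illustrative filters).
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_ids(term: str, retmax: int = 100) -> list[str]:
    """Return PubMed IDs matching a search term."""
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax}
    data = requests.get(ESEARCH, params=params, timeout=30).json()
    return data["esearchresult"]["idlist"]

# 1. All RCTs, whatever the source
rct_ids = pubmed_ids('randomized controlled trial[Publication Type]')

# 2. All systematic reviews, whatever the source
sr_ids = pubmed_ids('systematic[sb]')

# 3. Everything from a core journal (one title shown as an example)
core_ids = pubmed_ids('"The New England journal of medicine"[Journal]')
```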
Core journals
When we first added journals to Trip around 1998–99, we started with 25 titles. This number grew to 100, then 450, and as of today, we include just over 600 journals. With the upcoming launch of our clinical Q&A system, we felt it was a good time to review our journal coverage with the aim of expanding it further.
We took a multi-step approach:
The Q&A system uses a categorisation framework based on 38 clinical areas. We used these categories to identify relevant journals in each area.
We excluded journals that do not support clinical practice—such as those focused on laboratory-based research.
We removed journals already included in Trip.
From the remaining titles, we selected those with the strongest impact factors for inclusion.
Additionally, since impact factors can undervalue newer journals, we manually identified promising new titles likely to be influential – such as NEJM AI – and added them as well.
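As a rough sketch of the exclusion, deduplication and impact-factor steps above (the manual judgement about promising new titles doesn’t lend itself to code), the shortlisting logic boils down to something like this. The data structure and the per-area cut-off are assumptions, not the actual workflow:

```python
# Illustrative sketch of the journal-review steps described above.
def shortlist_journals(candidates, existing_titles, per_area=10):
    """candidates: list of dicts like
    {"title": ..., "clinical_area": ..., "clinically_focused": bool, "impact_factor": float}
    Returns the top journals per clinical area not already in Trip."""
    by_area = {}
    for j in candidates:
        if not j["clinically_focused"]:        # drop lab-based / non-clinical titles
            continue
        if j["title"] in existing_titles:      # drop journals already in Trip
            continue
        by_area.setdefault(j["clinical_area"], []).append(j)
    picked = []
    for area, journals in by_area.items():     # keep the strongest impact factors
        journals.sort(key=lambda j: j["impact_factor"], reverse=True)
        picked.extend(journals[:per_area])
    return picked
```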
The outcome of our review: we identified 281 new journals, which we’ll be adding over the next few days. This will bring our total to just under 900 journals. That feels about right—representing roughly 20% of all actively indexed journals in PubMed.
While we may continue to add the occasional journal in the future, it’s unlikely we’ll see an expansion of this scale again. There’s always a balance to strike between broad coverage and introducing noise – and we believe we’ve judged it well.
Rocio has been a wonderful supporter of Trip for years, and when she offered to test our Q&A system, she brought her usual diligence to the task. After trying it out, she emailed to ask why a key paper – a recent systematic review from a Lancet journal – wasn’t included in the answer. That simple question kicked off a deep dive, a lot of analysis, and a lot of work… and ultimately led to the realisation that we’ve now built a much better product.
At first, we thought it was a synonyms issue. The question used the term ablation, but the paper only mentioned ablative in the abstract. Simple enough – we added a synonym pair. But the issue persisted. So… what was going on? Honestly, we had no idea.
What it did make us realise, though, was that we’d made a whole bunch of assumptions – about the process, the steps, and what was actually happening under the hood. So, the big question: how do we fix that?
The underlying issue was our lack of visibility into what was happening under the hood. To truly understand the problem, we needed to build a test bed – something that would reveal what was going on at every stage of the process. This included:
The transformation of the question into search terms
The actual search results returned
The scoring of each of the results
The final selection of articles to be included
The test bed looks like this and, while not pretty, it is very functional:
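In spirit (and emphatically not the actual tool), a test bed like this boils down to running the pipeline while recording what every stage produced. The stage functions in the sketch below are placeholders; only the tracing pattern is the point:

```python
# Minimal sketch of a pipeline test bed that records every intermediate step.
import json, time

def run_with_trace(question, transform, search, score, select):
    trace = {"question": question}
    t0 = time.time()
    trace["search_terms"] = transform(question)          # question -> search terms
    trace["results"] = search(trace["search_terms"])     # raw results returned
    trace["scores"] = score(question, trace["results"])  # a score for every result
    trace["selected"] = select(trace["scores"])          # final articles chosen
    trace["elapsed_seconds"] = round(time.time() - t0, 1)
    # Dump the whole trace so a missing paper can be traced to the exact stage
    print(json.dumps(trace, indent=2, default=str))
    return trace["selected"]
```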
We were able to tweak and test a lot of variables, which gave us confidence in understanding what was really happening. So, what did we discover (and fix)?
Partial scoring by the LLM: While up to 125 results might be returned, the AI wasn’t scoring all of them – only about two-thirds. That’s why the Lancet paper was missing. Fix: We improved the prompt to ensure the LLM evaluated all documents.
Over-reliance on titles: When we only used titles (without snippets), we often missed key papers – especially when the title was ambiguous. Fix: We added short snippets, which solved the issue and improved relevance detection.
Arbitrary final selection: If more than 10 relevant articles were found, the AI randomly selected which ones to include in the answer. Fix: We built a heuristic to prioritise the most recent and evidence-based content. This single change has significantly improved the robustness of our answers – and testers already thought the answers were great!
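For that third fix, a selection heuristic along these lines is one plausible shape. The evidence ranking, weights and field names below are assumptions for illustration, not the exact rules we use:

```python
# Sketch of a selection heuristic for when more than 10 relevant articles remain.
EVIDENCE_RANK = {
    "systematic review": 3,
    "rct": 2,
    "other": 1,
}

def pick_top_articles(articles, limit=10):
    """articles: list of dicts with 'relevance_score', 'year' and 'study_type'.
    Sort by relevance first, then evidence level, then recency."""
    return sorted(
        articles,
        key=lambda a: (
            a["relevance_score"],
            EVIDENCE_RANK.get(a["study_type"], 1),
            a["year"],
        ),
        reverse=True,
    )[:limit]
```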
So, we’ve gone from a great product – built on a lot of assumptions – to an even greater one, now grounded in solid foundations that we can confidently stand behind and promote when it launches in early June.
Yesterday, I returned to my former workplace – Public Health Wales (PHW) – to meet with the evidence team and discuss Trip’s use of large language models (LLMs). It was a great meeting, but unexpectedly challenging – in a constructive way. The discussion highlighted our differing approaches:
Automated Q&A – focused on delivering quick, accessible answers to support health professionals.
PHW evidence reviews – aimed at producing more measured, rigorous outputs, typically developed over several months.
The conversation reminded me of when I first began manually answering clinical questions for health professionals. Back then, I worried about not conducting full systematic reviews – was that a problem? Over time, I came to realise that while our responses weren’t systematic reviews, they were often more useful and timely than what most health professionals could access or create on their own. Further down the line, after many questions, I theorised that evidence accumulation and ‘correctness’ probably looked like this:
In other words, you can – in most cases – get the right answer quite quickly, and after that it becomes a case of diminishing returns… In the graph above I would place Q&A in the ‘rapid review’ space.
Back at PHW, their strong reputation – and professionalism – means they’re understandably cautious about producing anything that could be seen as unreliable. Two key themes emerged in our discussion: transparency and reproducibility. Both are tied to concerns about the ‘black box’ nature of large language models: while you can see the input and the output, what happens in between isn’t always clear.
With their insights and suggestions, I’ve started sketching out a plan to address these concerns:
Transparency ‘button’ – While this may not be included in the initial open beta, the idea is to let users see what steps the system has taken. This could include the search terms used and which documents were excluded (from the top 100+ retrieved).
Peer review – Our medical director will regularly review a sample of questions and responses for quality assurance.
Encourage feedback – The system will allow users to flag responses they believe are problematic.
Reference check – We’ll take a sample of questions, ask them three separate times, and compare the clinical bottom lines and the references used.
This last point ties directly to the reproducibility challenge. We already know that LLMs can generate different answers to the same question depending on how and when they’re asked. The key questions are: How much do the references and answers vary? And more importantly, does that variation meaningfully affect the final clinical recommendation? That might make a nice research study!
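To make the reference check concrete, here is a minimal sketch of how the overlap between runs could be measured. The metric (mean pairwise Jaccard similarity over reference sets) is an illustrative choice rather than a settled method:

```python
# Sketch of the planned reference check: ask the same question three times and
# measure how much the cited references overlap. Purely illustrative.
from itertools import combinations

def reference_overlap(runs: list[set[str]]) -> float:
    """runs: one set of reference IDs (e.g. PMIDs) per run of the same question.
    Returns the mean pairwise Jaccard similarity (1.0 = identical references)."""
    similarities = []
    for a, b in combinations(runs, 2):
        union = a | b
        similarities.append(len(a & b) / len(union) if union else 1.0)
    return sum(similarities) / len(similarities)

# Example with three runs of one question (placeholder IDs):
# reference_overlap([{"pmidA", "pmidB"}, {"pmidA"}, {"pmidA", "pmidC"}])
```

Variation in the answer text itself is harder to quantify, but the reference overlap gives a simple, repeatable starting point.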
If you have any additional suggestions for strengthening the Q&A system’s quality, I’d love to hear them.
Two final reflections:
First, it was incredibly valuable to gain an external perspective on our Q&A system and to better understand their scepticism and viewpoint (thank you PHW).
Second, AI is advancing rapidly, and evidence producers – regardless of their focus – need to engage with it now and start planning for meaningful integration.
We expect to receive a large number of clinical questions and need an effective way to organise them for easy access. While users will be able to search the questions, browsing will also be supported through a classification scheme.
We plan to classify the questions in three ways:
Clinical area (e.g. cardiology, oncology) – we have 38, from Allergy & Immunology to Urology
Question type (e.g. diagnosis, treatment)
Quality of evidence – a simple system to indicate how robust the evidence is in answering the question; this will be high, medium or low
The question type classification is an interesting one and the full list is:
Causes & Risk Factors
Screening, Detection & Diagnosis
Initial Management
Long-term Management
Complications & Adverse Effects
Special Considerations
Outlook & Future Care
We developed this approach to reflect the natural timeline of a condition – from risk factors and diagnosis through to treatment and prognosis. The idea was inspired by clinical guidelines, which provide comprehensive overviews of condition management but can’t address every possible clinical scenario. By linking relevant Q&As to each stage of the guideline, we can fill in those gaps – and potentially even allow users to submit specific questions directly from within the guideline itself.
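For the technically minded, a hypothetical record for a classified question might look like the sketch below. The enum values mirror the lists above, but the class and field names are illustrative rather than the actual schema:

```python
# Sketch of how each answered question could carry its three classifications.
from dataclasses import dataclass
from enum import Enum

class QuestionType(Enum):
    CAUSES_RISK_FACTORS = "Causes & Risk Factors"
    SCREENING_DETECTION_DIAGNOSIS = "Screening, Detection & Diagnosis"
    INITIAL_MANAGEMENT = "Initial Management"
    LONG_TERM_MANAGEMENT = "Long-term Management"
    COMPLICATIONS_ADVERSE_EFFECTS = "Complications & Adverse Effects"
    SPECIAL_CONSIDERATIONS = "Special Considerations"
    OUTLOOK_FUTURE_CARE = "Outlook & Future Care"

class EvidenceQuality(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class ClassifiedQuestion:
    text: str
    clinical_areas: list[str]          # one or more of the 38 clinical areas
    question_type: QuestionType
    evidence_quality: EvidenceQuality
```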
ATTRACT was a clinical Q&A system that began in Gwent, Wales, in 1997. Members of the primary care team could submit questions by post, email, phone – or even fax – and we would provide an evidence-based answer. It was the inspiration behind the creation of Trip, designed to speed up the question-answering process. ATTRACT expanded from Gwent to cover all of Wales, and a few years later, I led the national Q&A service for England through the NeLH/NLH, alongside a number of other initiatives. Altogether, these services have answered over 10,000 questions – but ATTRACT remains closest to my heart. My first professional love, perhaps?
Now that we’ve successfully tested our automated Q&A system, we’re working hard to launch it as an open beta in the near future. One outstanding task is the site design – and while revisiting old ideas, I came across an image of the homepage from 2005:
Given my experience in answering clinical questions, the ability to answer them automatically is still staggering to me – and I’ve been working on this project for over a year. Back in the ATTRACT days, a question would take 3–6 hours to answer; now it takes 20–30 seconds. In fact, I took the above question – What is the efficacy of ginger in osteoarthritis of the knee or hip? – and tried it in our system. It took 24 seconds to generate the answer (which can be viewed below). As I say, staggering.
One final thought: what do we call the Q&A service? Do we call it Trip Answers, Trip Q&A or Ask Trip? These all indicate what the service is about – so that’s a positive for them. However, a more evocative (possibly even romantic) name would be ATTRACT. And – many will not know this – ATTRACT was an acronym for Ask Trip To Rapidly Alleviate Confused Thoughts, so it’s still perfectly relevant!
Answer to the Q What is the efficacy of ginger in osteoarthritis of the knee or hip? [Note: this is before the improvements from the beta testing have been instigated, so future answers will be even better]
Clinical Bottom Line
Ginger supplements may provide symptomatic relief in knee osteoarthritis due to their anti-inflammatory properties, but the evidence is mixed. It should be considered as an adjunct to primary treatments, with attention to individual responses and study quality variability.
Answer
Evidence from Systematic Reviews and Meta-analyses
Knee Osteoarthritis
A systematic review and meta-analysis (32214292) focused on evaluating ginger’s efficacy in knee osteoarthritis. While oral ginger did not show significant evidence to improve pain relief and function compared to placebo, topical ginger also did not provide statistically significant improvements in pain or knee function compared to standard treatments.
Mixed Findings on Efficacy
Another systematic review reviewing ginger in osteoarthritis, including studies on both knee and hip joints, reported mixed findings with some studies showing ginger may not be superior in improving pain when compared to ibuprofen and noted that evidence supporting ginger for changes in disability and functional capacity is weak (27819836).
Randomized Controlled Trials (RCTs)
Phase I Clinical Trial
A study evaluating a ginger-honey mixture, “G-Rup® syrup,” showed significant improvements in joint stiffness, pain, and functional scores in knee osteoarthritis patients (13070751). This suggests potential symptomatic benefits of a ginger-honey mixture in knee osteoarthritis.
Add-on Effect with Diclofenac
In a double-blind placebo-controlled trial involving ginger and diclofenac for knee osteoarthritis, ginger in conjunction with diclofenac demonstrated superior improvements compared to either treatment alone (24617168).
Steamed Ginger Extract
An RCT examining steamed ginger extract found significant improvements in pain and functional indices in patients with mild knee osteoarthritis, indicating its potential utility as a functional food in OA management (39212491).
Mechanism of Action and Safety Profile
Anti-inflammatory Properties
Ginger’s active compounds, such as gingerol and shogaol, possess anti-inflammatory properties that can reduce levels of pro-inflammatory cytokines (TNF-α, IL-1β) and inflammatory markers like CRP in knee osteoarthritis patients ([BI]).
Safety and Tolerability
Across various studies, ginger was generally well-tolerated with occasional mild gastrointestinal side effects compared to NSAIDs like ibuprofen, making it a safer alternative for long-term use (27819836).
Conclusion: Evidence on the efficacy of ginger in knee or hip osteoarthritis is mixed, with some RCTs demonstrating significant symptomatic benefits, particularly in knee osteoarthritis. However, the quality and consistency of evidence vary. While not a first-line treatment, ginger’s anti-inflammatory properties suggest it may be a valuable adjunctive therapy, especially given its favorable safety profile compared to NSAIDs.
The questions are still coming in and we’re approaching 300, so I thought we could have a quick look at the types of questions!
Conditions: below is a list of the conditions the questions related to; the fact that diabetes, with 4, is the highest shows the wide range of conditions asked about.
Question categories (approximate counts):
Drug Information/Mechanism of Action/Reviews: 15–20
Etiology/Causes/Mechanisms: 10–15
Prognosis/Outcomes: 10–15
Public Health/Prevention: 10–15
Basic Science/Pathophysiology: 5–10
Patient Experience/Qualitative Aspects: 5–10
Ethical/Societal Considerations: < 5
Other/Unclear: < 5
Broad versus narrow questions:
Broad questions: These typically cover a wide range of aspects related to a condition, treatment, or topic. They might ask for general overviews, multiple options, or the fundamental principles.
Narrow questions: These focus on a very specific aspect, such as a particular drug, a precise diagnostic criterion, a specific patient population, or a detailed mechanism.
Approximate count of broad vs. narrow questions:
Broad questions: Approximately 60–75 questions seem to have a broader scope. These often start with phrases like “What are,” “Explain,” “Discuss,” “What matters to patients in their patient experience,” or ask for lists or overviews of a topic. Example of a broad question: “What are the core concepts in the primary prevention of ASCVD?”
Narrow questions: Approximately 150–165 questions appear to be more narrowly focused. These often inquire about specific treatments (“What is the best treatment for…”), particular diagnostic methods (“How to diagnose…”), the role of a specific drug (“How do SGLT2 inhibitors affect…”), or very defined scenarios. Example of a narrow question: “What is the correct dose for Meropenem in patients with hemodiafiltration?”
In some ways, I didn’t even need to read the beta tester feedback. Why? Because the most compelling evidence was in the behaviour itself: users kept coming back. That repeated engagement spoke volumes – it showed the system was delivering real value and gaining meaningful traction.
Positive Feedback Highlights
But we did ask for feedback, and it was broadly very positive. The headlines:
70% were health professionals
Most asked 3+ questions
Accuracy was deemed high
The answers were deemed relevant and trustworthy
Speed – 70% said ‘very fast’ and 30% said ‘reassuringly paced’
100% of health professionals would recommend the system to their colleagues
Here are a few standout quotes:
Thanks for the opportunity – I feel a product from the Trip family has particular value given your history in information architecture and providing credible, evidence-tracked, healthcare information support
It is very impressive to see the speed and capacity to extract and summarise data from evidence resources
Amazing system – would use very frequently in clinical practice!
Please continue this excellent initiative
Honestly, overall the database is intriguing. It has a resiliency and foundation that lends itself to be far more trustworthy and clinically focused than most other databases. I see it also as a great tool to teach med students about building blocks of clinical reasoning and research.
What’s Next: Immediate and Future Enhancements
As well as the good, there was a lot of constructive feedback, which falls across a number of stages of the Q&A process, with some examples of the issues:
Initial question processing – when a user submits a question we need to do some processing to better disambiguate it; for instance, one Q we received was simply ‘liver elastography’.
Answer creation – we need to better handle the search process, e.g. send additional metadata, make the search more sensitive if too few results are returned, etc.
Answer design – the way we include references was problematic for many, and there was also a wish for an overall strength/weakness of evidence statement to be included.
Answer placement – we need to add the Q&As to the Trip search index and to have systems in place to deal with duplication.
All the above are seen as being ‘immediate’ action points, by that I mean these will be done before we roll this out as an open beta on Trip. There are some medium-long term improvements we need to do:
Add extra content types, e.g. drug information resources.
Use location information – if the user is from the USA then favour American guidelines.
For each Q&A, give additional prompts for follow-up questions. In other words, if a user asks What are the pros and cons of prostate cancer screening?, we might suggest follow-up questions such as What is the best screening tool for prostate cancer? or What are the different mortality rates at various cancer stages in prostate cancer? (A rough sketch of how this could work follows below.)
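As a sketch of how those follow-up suggestions might be generated, a simple prompt template like the one below would do. The wording and function name are assumptions, and any LLM client could slot in behind it:

```python
# Sketch of generating follow-up question prompts for each Q&A (illustrative).
FOLLOW_UP_PROMPT = """You are helping a clinician explore a topic.
Original question: {question}
Clinical bottom line: {bottom_line}
Suggest three short, answerable follow-up questions a clinician might ask next.
Return one question per line."""

def build_follow_up_prompt(question: str, bottom_line: str) -> str:
    return FOLLOW_UP_PROMPT.format(question=question, bottom_line=bottom_line)

# build_follow_up_prompt(
#     "What are the pros and cons of prostate cancer screening?",
#     "...",  # the answer's clinical bottom line would go here
# )
```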
The beta test has been energising and insightful. With such a strong foundation and clear areas to build on, we’re more confident than ever that we’re creating something genuinely valuable for clinical decision-making. The next phase? Opening up the beta and continuing to learn, refine, and improve – together with our users.
As we begin rolling out AI features in Trip (the first being primary care new evidence summary and the second Mind Maps, with Q&A to follow) we need to consider how to do so sustainably. This includes encouraging more users to upgrade to Pro subscriptions and reviewing the pricing structure of those subscriptions.
To encourage free users to upgrade to Pro, we plan to limit the filtering options available to them. This will help further highlight the distinction between Free and Pro tiers. Here’s what that could look like:
We’re likely to increase Pro pricing and are considering a grandfather clause – allowing existing institutional customers to retain current pricing, while the new rates apply only to new customers.
We’re taking a considered approach to these changes, so please share your thoughts – we’d love to hear your feedback.