When answering clinical questions, it’s not enough to simply provide an answer – it’s essential to communicate how strong the supporting evidence is. Because our Q&A system is automated, we’ve developed a pragmatic yet transparent way of scoring the strength of evidence behind each answer.
How We Classify Evidence
At the core of our approach is how we classify the references used to generate an answer. For simplicity, we’ve grouped the sources into four categories:
Essential – The highest-quality sources, such as guidelines from NICE and AHRQ, especially when they are up to date.
Desirable – Other high-quality secondary evidence (e.g. systematic reviews) and key primary research studies.
Other – The rest of the content in Trip e.g. peer-reviewed journal articles, eTextbooks.
AI – Content that is generated primarily through the large language model (LLM), used when evidence is sparse or missing.
The Scoring System
Each answer is scored based on the proportion of higher-quality evidence (Essential and Desirable) it includes:
High – 75% or more of the references are Essential or Desirable
Good – 55–74% are Essential or Desirable
Moderate – Below 55% Essential/Desirable
Limited – 50% or more of the answer is generated by the AI (i.e. minimal reference support)
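The thresholds above can be sketched as a small classifier. This is an illustrative reconstruction from the rules in the post, not Trip's actual code; the category names and the `ai_fraction` parameter are assumptions about how the inputs might be represented.

```python
# Sketch of the evidence-strength classifier described above.
# Each reference is assumed to carry one of the categories from the post.

def score_answer(references, ai_fraction):
    """Return an evidence-strength label for an answer.

    references  -- list of category strings: "essential", "desirable", "other"
    ai_fraction -- fraction of the answer generated by the LLM
    """
    # An answer leaning mostly on the LLM is "Limited" regardless of references.
    if ai_fraction >= 0.5 or not references:
        return "Limited"
    high_quality = sum(r in ("essential", "desirable") for r in references)
    proportion = high_quality / len(references)
    if proportion >= 0.75:
        return "High"
    if proportion >= 0.55:
        return "Good"
    return "Moderate"
```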
A Nuanced Interpretation
This system produces some interesting situations. For example, an answer may score High if it’s based entirely on high-quality sources – even if those sources all agree that the evidence is limited or conflicting. In other words, a High score reflects the confidence in the evidence base used to construct the answer, not necessarily that the answer is definitive or conclusive.
We believe this approach strikes a useful balance between automation and transparency. It allows users to quickly gauge how much trust they can place in the evidence behind each answer, while also recognising the complexity and occasional uncertainty inherent in clinical decision-making.
98% of the core Q&A work is complete and now we’re mainly testing and correcting minor issues…
The main AskTrip page will look like this:
While an answer page looks like this:
Lots to see here:
References are now looking lovely – in the beta this was the biggest bugbear of testers!
Clinical areas – to help users browse Q&As of interest
Clinical type – it should really be ‘Question type’. These include causes, treatment, complications etc – a way of classifying Q&As to help browsing, but one that could also act as a timeline of a condition. One for a future project.
Quality of evidence – how strong the evidence was in answering the question. Useful for users, but also one for a future project.
Show original question – we noticed from the beta that users didn’t always form perfect questions, e.g. no initial capital letter, no question mark, odd spacing. Our system corrects that for display, but the original question remains available to view.
Related questions – not the best examples, due to the lack of Q&As in this version of the site (and ignore the numbers after each title – that’s for our testing). But this shows users questions that are closely related to this particular one.
Report an issue – if a particular answer concerns someone, they can easily report it to us to be checked by our medical team
Since the beta we’ve added a whole bunch of new features, many behind the scenes, so we’re delighted that the answers still come back in less than 30 seconds.
Release date – definitely by the end of the month, maybe as early as the end of next week!
The testing of the automated Q&A system is ongoing and yesterday I thought we’d hit a major problem – a poor answer!
A quick historical detour: around 15–20 years ago, while running the NLH Q&A Service for the NHS in England, the wonderful Muir Gray (who funded the service) was keen to identify frequently asked questions. One that consistently came up was: “What is the optimal frequency of vitamin B12 injections in pernicious anaemia?”
Fast forward to today. I tested that same question on our current Q&A system – and the results were underwhelming. Fortunately, thanks to Rocio, we had a full testing trace, letting me follow every step in the process. This led me to the NICE guideline Vitamin B12 deficiency in over 16s: diagnosis and management, which states: “In this guideline, we do not use the term ‘pernicious anaemia’ to describe autoimmune gastritis.”
Curious, I emailed Chris (our medical director) to ask whether “pernicious anaemia” is now considered outdated. His reply? “Pernicious anaemia is old hat – only old fogies like us still use that term!”
So, back to the Q&A system, I did two things:
Re-asked the question using autoimmune gastritis instead of pernicious anaemia – great answer.
Updated our synonym mappings to treat both terms as equivalent in search.
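The synonym fix can be sketched as a simple term-expansion step: both terms map to the same group, so either wording retrieves the same documents. The mapping structure here is an assumption for illustration; Trip's actual implementation may look quite different.

```python
# Minimal sketch of the synonym mapping: each term expands to its full
# synonym group before the search runs.

SYNONYMS = {
    "pernicious anaemia": {"pernicious anaemia", "autoimmune gastritis"},
    "autoimmune gastritis": {"pernicious anaemia", "autoimmune gastritis"},
}

def expand_terms(query_terms):
    """Expand each query term to its synonym group (lower-cased)."""
    expanded = set()
    for term in query_terms:
        expanded |= SYNONYMS.get(term.lower(), {term.lower()})
    return expanded
```

With this in place, a question phrased either way searches for both terms, which is exactly what resolved the B12 question above.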
What started as a concern about system performance turned into a valuable reminder of how medical language evolves – and that I’m getting old 🙂
As part of the Q&A system we are mindful that including patient identifiable information (PII) is a big no-no! So, as part of the initial question processing, we have a ‘guardrail’ system to handle this eventuality.
Test Q (completely made up): My patient (Mrs Jane Smith DOB 17/12/1856) has persistent vulvovaginal candidiasis not improving with prolonged fluconazole treatment. What investigations should I undertake and can you suggest some treatments?
Our system reported:
The question as written is problematic due to a violation of criterion (2): it includes patient identifiable information.
Issue (2): Inclusion of Patient Identifiable Information Name: “Mrs Jane Smith”
Date of Birth: “DOB 17/12/1856”
These are direct identifiers and must not be included in a clinical query, even in professional contexts where confidentiality is expected.
Corrected Version (with Identifiable Information Removed): Q: A postmenopausal woman has persistent vulvovaginal candidiasis that has not improved with prolonged fluconazole treatment. What investigations should I undertake and can you suggest alternative treatments?
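A guardrail like the one above can be sketched with simple pattern matching over the incoming question. This is purely illustrative, not Trip's implementation – a production system would use far more robust detection – but it shows how the two identifiers in the test question might be caught.

```python
import re

# Illustrative PII check: flag obvious direct identifiers
# (title + name, date of birth) before the question is processed further.

PII_PATTERNS = [
    (r"\b(?:Mr|Mrs|Ms|Miss|Dr)\.?\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)?", "name"),
    (r"\bDOB\b[:\s]*\d{1,2}/\d{1,2}/\d{2,4}", "date of birth"),
]

def find_pii(question):
    """Return (matched_text, kind) pairs for any identifiers found."""
    hits = []
    for pattern, kind in PII_PATTERNS:
        for match in re.finditer(pattern, question):
            hits.append((match.group(), kind))
    return hits
```

Run against the made-up test question, this flags both “Mrs Jane Smith” and “DOB 17/12/1856”, mirroring the system’s report.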
The question processing stage is really interesting. As well as guardrail (stripping out PII, profanities etc) it’s correcting spelling, grammar, formatting. It’s assigning question type and clinical area(s). There’s a lot of activity before the search has even started.
We’re getting close… The system was great before and it’s getting much, much better!
We last tinkered with the journals list in 2022, so a refresh was long overdue.
At the moment Trip takes content from PubMed in three main ways:
A filter to identify all the RCTs in PubMed, whatever the source.
A filter to identify all the systematic reviews in PubMed, whatever the source.
All the articles from a core set of journals.
Core journals
When we first added journals to Trip around 1998–99, we started with 25 titles. This number grew to 100, then 450, and as of today, we include just over 600 journals. With the upcoming launch of our clinical Q&A system, we felt it was a good time to review our journal coverage with the aim of expanding it further.
We took a multi-step approach:
The Q&A system uses a categorisation framework based on 38 clinical areas. We used these categories to identify relevant journals in each area.
We excluded journals that do not support clinical practice—such as those focused on laboratory-based research.
We removed journals already included in Trip.
From the remaining titles, we selected those with the strongest impact factors for inclusion.
Additionally, since impact factors can undervalue newer journals, we manually identified promising new titles likely to be influential – such as NEJM AI – and added them as well.
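The review steps above amount to a filter-and-rank pipeline. The sketch below is a hypothetical reconstruction – the field names (`supports_clinical_practice`, `clinical_area`, `impact_factor`) and the per-area cap are assumptions, not how the review was actually coded (much of it was manual).

```python
# Hypothetical sketch of the journal-review steps:
# exclude non-clinical titles, drop titles already in Trip,
# then keep the strongest impact factors per clinical area.

def shortlist_journals(candidates, existing_titles, per_area=10):
    """Filter candidate journals and rank the survivors by impact factor."""
    by_area = {}
    for j in candidates:
        if not j["supports_clinical_practice"]:
            continue                      # e.g. laboratory-focused journals
        if j["title"] in existing_titles:
            continue                      # already in Trip
        by_area.setdefault(j["clinical_area"], []).append(j)
    selected = []
    for journals in by_area.values():
        journals.sort(key=lambda j: j["impact_factor"], reverse=True)
        selected.extend(journals[:per_area])
    return selected
```

A final manual pass (for promising new titles such as NEJM AI, which impact factors undervalue) would sit outside a pipeline like this.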
The outcome of our review: we identified 281 new journals, which we’ll be adding over the next few days. This will bring our total to just under 900 journals. That feels about right—representing roughly 20% of all actively indexed journals in PubMed.
While we may continue to add the occasional journal in the future, it’s unlikely we’ll see an expansion of this scale again. There’s always a balance to strike between broad coverage and introducing noise – and we believe we’ve judged it well.
Rocio has been a wonderful supporter of Trip for years, and when she offered to test our Q&A system, she brought her usual diligence to the task. After trying it out, she emailed to ask why a key paper – a recent systematic review from a Lancet journal – wasn’t included in the answer. That simple question kicked off a deep dive, a lot of analysis, and a lot of work… and ultimately led to the realisation that we’ve now built a much better product.
At first, we thought it was a synonyms issue. The question used the term ablation, but the paper only mentioned ablative in the abstract. Simple enough – we added a synonym pair. But the issue persisted. So… what was going on? Honestly, we had no idea.
What it did make us realise, though, was that we’d made a whole bunch of assumptions – about the process, the steps, and what was actually happening under the hood. So, the big question: how do we fix that?
The underlying issue was our lack of visibility into what was happening under the hood. To truly understand the problem, we needed to build a test bed – something that would reveal what was going on at every stage of the process. This included:
The transformation of the question into search terms
The actual search results returned
The scoring of each of the results
The final selection of articles to be included
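The essence of a test bed like this is capturing each stage's input and output so nothing is hidden. A minimal sketch, assuming each pipeline stage can be treated as a function (the stage names follow the list above; this is not Trip's actual code):

```python
# Sketch of a per-stage trace: run the pipeline and record what went
# in and what came out of every stage, so failures can be localised.

def run_with_trace(question, stages):
    """stages is a list of (name, function) pairs applied in order."""
    trace = []
    value = question
    for name, stage in stages:
        result = stage(value)
        trace.append({"stage": name, "input": value, "output": result})
        value = result
    return value, trace
```

With stages named “search terms”, “search”, “scoring” and “selection”, a trace like this is exactly what let us see that the Lancet paper survived the search but was lost later in the pipeline.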
The test bed looks like this and, while not pretty, it is very functional:
We were able to tweak and test a lot of variables, which gave us confidence in understanding what was really happening. So, what did we discover (and fix)?
Partial scoring by the LLM: While up to 125 results might be returned, the AI wasn’t scoring all of them – only about two-thirds. That’s why the Lancet paper was missing. Fix: We improved the prompt to ensure the LLM evaluated all documents.
Over-reliance on titles: When we only used titles (without snippets), we often missed key papers – especially when the title was ambiguous. Fix: We added short snippets, which solved the issue and improved relevance detection.
Arbitrary final selection: If more than 10 relevant articles were found, the AI randomly selected which ones to include in the answer. Fix: We built a heuristic to prioritise the most recent and evidence-based content. This single change has significantly improved the robustness of our answers – and testers already thought the answers were great!
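The third fix – replacing the LLM's arbitrary pick with a deterministic heuristic – can be sketched as a sort over evidence level and recency. The evidence ranking and field names below are assumptions for illustration, not Trip's actual weights.

```python
# Illustrative version of the final-selection heuristic: rank relevant
# articles by evidence type, then recency, instead of letting the LLM
# choose at random when more than `limit` are relevant.

EVIDENCE_RANK = {"guideline": 3, "systematic review": 2, "rct": 1, "other": 0}

def select_top(articles, limit=10):
    """Return up to `limit` articles, highest evidence and most recent first."""
    return sorted(
        articles,
        key=lambda a: (EVIDENCE_RANK.get(a["type"], 0), a["year"]),
        reverse=True,
    )[:limit]
```

Because the sort is deterministic, the same relevant set always yields the same selection – which is what makes the answers more robust.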
So, we’ve gone from a great product – built on a lot of assumptions – to an even greater one, now grounded in solid foundations that we can confidently stand behind and promote when it launches in early June.
Yesterday, I returned to my former workplace – Public Health Wales (PHW) – to meet with the evidence team and discuss Trip’s use of large language models (LLMs). It was a great meeting, but unexpectedly challenging – in a constructive way. The discussion highlighted our differing approaches:
Automated Q&A – focused on delivering quick, accessible answers to support health professionals.
PHW evidence reviews – aimed at producing more measured, rigorous outputs, typically developed over several months.
The conversation reminded me of when I first began manually answering clinical questions for health professionals. Back then, I worried about not conducting full systematic reviews – was that a problem? Over time, I came to realise that while our responses weren’t systematic reviews, they were often more useful and timely than what most health professionals could access or create on their own. Further down the line, after many questions, I theorised that evidence accumulation and ‘correctness’ probably looked like this:
In other words, you can – in most cases – get the right answer quite quickly, and after that it becomes a case of diminishing returns… In the graph above I would place Q&A in the ‘rapid review’ space.
Back at PHW, their strong reputation – and professionalism – means they’re understandably cautious about producing anything that could be seen as unreliable. Two key themes emerged in our discussion: transparency and reproducibility. Both are tied to concerns about the ‘black box’ nature of large language models: while you can see the input and the output, what happens in between isn’t always clear.
With their insights and suggestions, I’ve started sketching out a plan to address these concerns:
Transparency ‘button’ – While this may not be included in the initial open beta, the idea is to let users see what steps the system has taken. This could include the search terms used and which documents were excluded (from the top 100+ retrieved).
Peer review – Our medical director will regularly review a sample of questions and responses for quality assurance.
Encourage feedback – The system will allow users to flag responses they believe are problematic.
Reference check – We’ll take a sample of questions, ask them three separate times, and compare the clinical bottom lines and the references used.
This last point ties directly to the reproducibility challenge. We already know that LLMs can generate different answers to the same question depending on how and when they’re asked. The key questions are: How much do the references and answers vary? And more importantly, does that variation meaningfully affect the final clinical recommendation? That might make a nice research study!
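The reference check lends itself to a simple metric: ask the same question several times and measure the pairwise overlap of the reference sets. A sketch using the Jaccard index (the metric choice is mine, not something Trip has specified):

```python
from itertools import combinations

# Sketch of the reference-consistency check: repeated runs of one
# question are compared pairwise; 1.0 means identical reference sets.

def jaccard(a, b):
    """Overlap between two sets of reference IDs."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def reference_consistency(runs):
    """Mean pairwise Jaccard overlap across repeated runs of one question."""
    pairs = list(combinations([set(r) for r in runs], 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

A consistently high score across a sample of questions would be evidence that the variation between runs is cosmetic; a low score would flag questions where the clinical bottom line itself needs checking.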
If you have any additional suggestions for strengthening the Q&A system’s quality, I’d love to hear them.
Two final reflections:
First, it was incredibly valuable to gain an external perspective on our Q&A system and to better understand their scepticism and viewpoint (thank you PHW).
Second, AI is advancing rapidly, and evidence producers – regardless of their focus – need to engage with it now and start planning for meaningful integration.
We expect to receive a large number of clinical questions and need an effective way to organise them for easy access. While users will be able to search the questions, browsing will also be supported through a classification scheme.
We plan to classify the questions in three ways:
Clinical area (e.g. cardiology, oncology) – we have 38, from Allergy & Immunology to Urology
Question type (e.g. diagnosis, treatment)
Quality of evidence – a simple system to indicate how robust the evidence is in answering the question; this will be high, medium or low
The question type classification is an interesting one and the full list is:
Causes & Risk Factors
Screening, Detection & Diagnosis
Initial Management
Long-term Management
Complications & Adverse Effects
Special Considerations
Outlook & Future Care
We developed this approach to reflect the natural timeline of a condition – from risk factors and diagnosis through to treatment and prognosis. The idea was inspired by clinical guidelines, which provide comprehensive overviews of condition management but can’t address every possible clinical scenario. By linking relevant Q&As to each stage of the guideline, we can fill in those gaps – and potentially even allow users to submit specific questions directly from within the guideline itself.
ATTRACT was a clinical Q&A system that began in Gwent, Wales, in 1997. Members of the primary care team could submit questions by post, email, phone – or even fax – and we would provide an evidence-based answer. It was the inspiration behind the creation of Trip, designed to speed up the question-answering process. ATTRACT expanded from Gwent to cover all of Wales, and a few years later, I led the national Q&A service for England through the NeLH/NLH, alongside a number of other initiatives. Altogether, these services have answered over 10,000 questions – but ATTRACT remains closest to my heart. My first professional love, perhaps?
Now that we’ve successfully tested our automated Q&A system, we’re working hard to launch it as an open beta in the near future. One outstanding task is the site design – and while revisiting old ideas, I came across an image of the homepage from 2005:
Given my experience in answering clinical questions, the ability to answer them automatically is still staggering to me – and I’ve been working on this project for over a year. Before, in the ATTRACT days, a question would take 3–6 hours to answer; now they take 20–30 seconds. In fact, I took the above question – What is the efficacy of ginger in osteoarthritis of the knee or hip? – and tried it in our system. It took 24 seconds to generate the answer (which can be viewed below). As I say, staggering.
One final thought: what do we call the Q&A service? Do we call it Trip Answers, Trip Q&A, Ask Trip? These all indicate what the service is about – so that’s a positive for them. However, a more evocative (possibly even romantic) name would be ATTRACT. And, as many will not know, ATTRACT was an acronym for Ask Trip To Rapidly Alleviate Confused Thoughts – so still perfectly relevant!
Answer to the Q What is the efficacy of ginger in osteoarthritis of the knee or hip? [Note: this is before the improvements from the beta testing have been instigated, so future answers will be even better]
Clinical Bottom Line
Ginger supplements may provide symptomatic relief in knee osteoarthritis due to their anti-inflammatory properties, but the evidence is mixed. It should be considered as an adjunct to primary treatments, with attention to individual responses and study quality variability.
Answer
Evidence from Systematic Reviews and Meta-analyses
Knee Osteoarthritis
A systematic review and meta-analysis (32214292) focused on evaluating ginger’s efficacy in knee osteoarthritis. While oral ginger did not show significant evidence to improve pain relief and function compared to placebo, topical ginger also did not provide statistically significant improvements in pain or knee function compared to standard treatments.
Mixed Findings on Efficacy
Another systematic review reviewing ginger in osteoarthritis, including studies on both knee and hip joints, reported mixed findings with some studies showing ginger may not be superior in improving pain when compared to ibuprofen and noted that evidence supporting ginger for changes in disability and functional capacity is weak (27819836).
Randomized Controlled Trials (RCTs)
Phase I Clinical Trial
A study evaluating a ginger-honey mixture, “G-Rup® syrup,” showed significant improvements in joint stiffness, pain, and functional scores in knee osteoarthritis patients (13070751). This suggests potential symptomatic benefits of a ginger-honey mixture in knee osteoarthritis.
Add-on Effect with Diclofenac
In a double-blind placebo-controlled trial involving ginger and diclofenac for knee osteoarthritis, ginger in conjunction with diclofenac demonstrated superior improvements compared to either treatment alone (24617168).
Steamed Ginger Extract
An RCT examining steamed ginger extract found significant improvements in pain and functional indices in patients with mild knee osteoarthritis, indicating its potential utility as a functional food in OA management (39212491).
Mechanism of Action and Safety Profile
Anti-inflammatory Properties
Ginger’s active compounds, such as gingerol and shogaol, possess anti-inflammatory properties that can reduce levels of pro-inflammatory cytokines (TNF-α, IL-1β) and inflammatory markers like CRP in knee osteoarthritis patients ([BI]).
Safety and Tolerability
Across various studies, ginger was generally well-tolerated with occasional mild gastrointestinal side effects compared to NSAIDs like ibuprofen, making it a safer alternative for long-term use (27819836).
Conclusion: Evidence on the efficacy of ginger in knee or hip osteoarthritis is mixed, with some RCTs demonstrating significant symptomatic benefits, particularly in knee osteoarthritis. However, the quality and consistency of evidence vary. While not a first-line treatment, ginger’s anti-inflammatory properties suggest it may be a valuable adjunctive therapy, especially given its favorable safety profile compared to NSAIDs.