Trip Database Blog

Liberating the literature


A great example of the power of vector search

I posted an outline of vector search the other week and shortly afterwards became aware of some work Google has been involved in: Accelerate medical research with PubMed data now available in BigQuery. In essence, Google has created a vector-based search system over PubMed Central.

We set up a small test-bed to explore whether this approach could be incorporated into Beyond Trip. We’ve been testing it using clinical questions, traditional keyword searches, and queries that sit somewhere in between – and the results have been genuinely impressive.

One search in particular, “Creatine use and cognitive ability”, clearly demonstrated the value of vector search. When we ran this query through the test-bed, the top result was The effects of creatine supplementation on cognitive function in adults: a systematic review and meta-analysis.

I then repeated the same search in Trip using the identical wording (Creatine use and cognitive ability), and that paper was not retrieved. However, when I reran the Trip search using “cognitive function” instead of “cognitive ability”, the paper appeared immediately. The content of the paper is the same in both cases – only the phrasing of the query changed. In other words, the vector search seamlessly ‘understood’ that cognitive function and cognitive ability are virtual synonyms.

What this neatly illustrates is how vector search moves beyond literal term matching and begins to reflect clinical meaning. By recognising that “cognitive ability” and “cognitive function” are effectively synonymous, vector search bridges the gap between how clinicians think and how evidence is described in the literature. For tools like Beyond Trip, this has the potential to reduce missed evidence, lower the cognitive burden of searching, and make high-quality research easier to find – even when the wording doesn’t line up perfectly.
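
For the technically curious, here’s a minimal sketch of the underlying idea using an off-the-shelf, open-source embedding model – to be clear, this is not the model Google or Trip uses; it’s purely illustrative. Phrases with similar meanings map to vectors that sit close together, which is exactly why a vector search can match “cognitive ability” against documents that talk about “cognitive function”:

```python
# Toy demonstration of semantic matching via embeddings.
# Illustrative only - not the model used by Google or Trip.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

phrases = ["cognitive ability", "cognitive function", "kidney function"]
embeddings = model.encode(phrases)

# Cosine similarity: close to 1.0 means "near-identical meaning".
print(util.cos_sim(embeddings[0], embeddings[1]))  # high: virtual synonyms
print(util.cos_sim(embeddings[0], embeddings[2]))  # lower: different concept
```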

A fresh new look for Bookmarks & Search History in Trip

We’ve just released a visual overhaul of Bookmarks and Search History in Trip. The functionality is exactly the same — but it now looks a million times better.

Bookmarks

Bookmarks let logged-in users save articles they’ve read or want to come back to later.

  • On the results page, you’ll see a bookmark icon on each article.
  • Click the icon to bookmark the article.
  • Once saved, the icon appears darker, confirming it’s been bookmarked.

Viewing your bookmarks

To see all your saved articles:

  1. Go to My Account
  2. Select Bookmarked

Search history

In the same area of My Account, you’ll find Search History.

Search History keeps a record of:

  • The searches you’ve run in Trip
  • The articles you’ve viewed

This makes it easy to revisit previous searches, retrace your steps, and quickly find evidence you’ve already explored.

Where Evidence Is Thinnest: What 100 Real Clinical Questions Reveal

Looking across clinical questions over time can be revealing, but looking specifically at those where the evidence is limited is particularly instructive. The analysis below draws on the most recent 100 questions submitted to AskTrip that were judged to have limited supporting evidence. Together, they offer a clear view of where uncertainty persists in everyday clinical practice.

Rather than cataloguing all 100 questions, it is more useful to look for patterns. When grouped thematically, these questions highlight recurring areas where research evidence struggles to keep pace with the complexity of real patients, real decisions, and real healthcare systems.

1. Defining “Normal” in Diverse Populations

Illustrative questions

  • What are the standard reference ranges for hemoglobin levels in neonates, and how do these vary by gestational age?
  • What is the prevalence of supraspinatus tendinosis in individuals over the age of 30?

These questions expose a recurring challenge: concepts such as “normal” or “prevalence” are often less stable than assumed. Reference ranges may vary by population, measurement method, or clinical context, while prevalence estimates are frequently drawn from heterogeneous or dated studies. The result is uncertainty at precisely the point where clinicians are expected to be definitive.


2. High-Risk Decisions Without Definitive Trials

Illustrative questions

  • Is there a recommended waiting period to start anticoagulant treatment with rivaroxaban or apixaban after intracranial hemorrhage in a patient with atrial fibrillation?
  • What are the recommended therapeutic strategies for patients with both COPD and CCF, and are there any contraindications for treatments?

These are situations where the consequences of error are substantial, yet randomised trials are difficult or impossible. Evidence tends to be indirect, observational, or consensus-based, requiring clinicians to weigh competing harms rather than follow clear algorithms.


3. Pharmacology at the Edges of the Evidence Base

Illustrative questions

  • How does flucloxacillin’s penetration of the blood–brain barrier compare to other beta-lactam antibiotics for CNS infections?
  • Can patients who develop a rash from prednisone take prednisolone as an alternative?

Questions like these reveal how much clinical pharmacology relies on extrapolation. Differences in molecular structure, metabolism, or formulation are assumed to translate into clinical effects, but direct comparative evidence is often lacking – particularly in uncommon or high-risk scenarios.


4. Mental Health, Culture, and Neurodiversity

Illustrative questions

  • How do cultural expressions of mental health differ among Yoruba, Igbo, Swahili, Arabic, and Twi-speaking communities, and how can understanding these differences improve AI-based mental health assessments?
  • How does neurodivergence in older adults influence the manifestation and management of hoarding behaviors compared to neurotypical populations?

These questions highlight areas where biomedical models alone are insufficient. Evidence is frequently qualitative, context-specific, or culturally bound, yet these considerations are increasingly important – particularly as digital and AI-driven tools are used to assess and support mental health.


5. How Care Is Organised and Delivered

Illustrative questions

  • What is the quality of and adherence to moving and handling safety precautions for patients post-CABG via median sternotomy in ICU, and does this correlate with patient outcomes?
  • How does collaboration between nurses and health assistants affect patient outcomes in a clinical setting?

Here, uncertainty arises not from disease mechanisms but from systems of care. These questions affect safety, efficiency, and patient experience, yet they are often under-studied because they sit outside traditional disease-focused research frameworks.


6. Highly Context-Specific Clinical Judgements

Illustrative questions

  • What dose of cefuroxime is appropriate for a frail patient weighing 40 kg with urosepsis?
  • What is the appropriate action for a pediatric patient taking methylphenidate with a heart rate of 115 bpm?

These questions reflect the everyday reality of clinical work: patients rarely match trial populations. Evidence may exist in general terms, but applying it to a specific individual often requires judgement, adaptation, and tolerance of uncertainty.


Concluding Reflection

Taken together, these 100 questions show that limited evidence is not evenly distributed across medicine. It clusters where patients are complex, risks are high, contexts vary, or outcomes resist simple measurement. In many cases, the absence of strong evidence reflects the limits of current research methods rather than a lack of clinical importance.

Recognising where evidence is thinnest is not an admission of failure. It is a necessary step toward more honest clinical decision-making, better conversations with patients, and a clearer sense of where future research effort might be most usefully directed.

Moving Fast: How User Feedback Is Powering AskTrip’s Evolution

Earlier this week I published AskTrip Hits 8,000 Q&As: A Perfect Moment to Deepen User Engagement, noting the importance of building closer connections with our users to elevate the AskTrip experience, and we’ve wasted no time.

AskTrip User Group

We’ve now built a strong AskTrip user group and expect to start gathering their insights from next week. It’s not too late to join. If you’re interested – no heavy lifting required, we’re simply looking for your perspectives – just email me at jon.brassey@tripdatabase.com.

Question feedback mechanism

We’ve introduced a new feedback form beneath every question.

Within 24 hours we received a steady stream of comments – mostly positive – but two responses were especially valuable. They not only validated an issue we’d suspected but also pointed us toward a strong opportunity for a major upgrade.

We’ll now take this feedback and quietly get to work. There’s a lot to do behind the scenes, but the direction is clear, and we’ll be ready to share something new in the near future. As ever, user insight remains one of the most valuable drivers of AskTrip’s evolution – every comment helps us refine, rethink, and improve.

We’re looking forward to more question-level feedback and to engaging even more closely with our user community. Our users really are remarkable: they tell us what they need, they spot what matters, and they ultimately shape what AskTrip becomes. In the end, they always know what they want – and we’re here to build it.

AskTrip Hits 8,000 Q&As: A Perfect Moment to Deepen User Engagement

The question “What is the first-line treatment for heart failure?” became the 8,000th clinical question AskTrip has answered. It’s a milestone that highlights both the scale and the value of what we’re building. And it comes at the perfect moment, as we step back to reflect on how AskTrip can evolve – and how we can make it even better for the clinicians who rely on it.

We realised we were flying blind

Shortly after hitting that milestone, Phil, our main AskTrip techie, and I were talking about what we should improve next. Should we focus on better recall? Provide more detailed answers? Make the system even faster?

We tossed around ideas, but it slowly dawned on us: we were making assumptions. For all the analytics and backend tinkering, we were still flying blind. The obvious question was the one we hadn’t asked: what do our users actually want us to improve?

It was a slightly humbling moment, but a helpful one. If AskTrip exists to serve our users, then users should help shape where it goes next.

Launching the AskTrip User Group

So we’re changing how we work.

We’re going to start an AskTrip User Group — a simple, open way for us to connect with the people who use AskTrip every day. This might mean occasional emails from us sharing ideas or updates, but just as importantly, we hope it creates an easy avenue for users to talk to us.

We’d love you to be part of it. If you use AskTrip and are willing to share your thoughts — big or small — please get in touch.
Email me at jon.brassey@tripdatabase.com if you’d like to participate.

Ask us questions. Tell us what works. Tell us what doesn’t. Point out what’s confusing, what’s missing, or what’s brilliant. If AskTrip is going to get better, it should get better with its users, not just for them.

A built-in feedback mechanism

Alongside the user group, we’re also adding a lightweight, easy-to-use feedback mechanism to every answer AskTrip generates. A quick tap or comment will help us understand what landed well and what needs attention.

This kind of direct, contextual feedback – at the point where a user is actually reading the answer – will be invaluable. It means we can spot patterns, fix blind spots, and continuously refine how AskTrip responds.

Looking ahead

Crossing 8,000 clinical questions is a milestone, but it’s just one marker on a much longer journey. What matters now is making AskTrip sharper, faster, clearer, and more aligned with the needs of the people who trust it.

By opening up new ways for users to talk to us, and by building feedback directly into the product, we’re hoping to create a virtuous cycle: better communication leading to better answers, leading to a better AskTrip.

We’re excited for the next 8,000 questions – and this time, we won’t be flying blind.

Do You Really Need Vitamin C With Iron? What Four AskTrip Answers Reveal

At AskTrip, we often see clusters of very similar or overlapping clinical questions submitted by users. When this happens, we like to review the answers side-by-side to check for consistency and to understand how our system handles closely related prompts. Recently, we noticed several questions about whether vitamin C should be taken with oral iron, so we ran them together to compare the outputs. While the core clinical message across the answers is the same, the emphasis and level of detail vary in interesting ways.

Q1. How much vitamin C is required to enhance oral iron absorption?

Clinical Bottom Line:
Vitamin C can enhance iron absorption, with suggested doses ranging from 100 mg to 500 mg. However, evidence on its additional benefit with iron supplements varies, indicating further individual assessment may be needed for supplementation decisions.


Q2. What are the recommended best practices for oral iron supplementation in patients with anemia, considering the addition of vitamin C?

Clinical Bottom Line:
While vitamin C is often suggested to enhance the absorption of oral iron, current evidence does not conclusively support its clinical benefit in treating iron deficiency anemia. More research is needed to establish definitive guidelines.


Q3. Does vitamin C help with iron absorption?

Clinical Bottom Line:
Vitamin C enhances the absorption of non-heme iron, but its impact on clinical outcomes in anemia treatment alongside iron supplements is inconclusive based on current evidence.


Q4. What is the effect of ascorbic acid on iron absorption in patients with iron deficiency anemia?

Clinical Bottom Line:
The evidence suggests that ascorbic acid does not significantly enhance the effectiveness of iron supplements in improving hemoglobin or ferritin levels in patients with iron deficiency anemia. Further research is needed to clarify its role.


So, Do These Answers Say the Same Thing?

In practical terms, yes. All four answers point toward the same actionable advice for clinicians:

  • Vitamin C does enhance iron absorption biochemically.
  • But clinical trials have not shown meaningful improvements in hemoglobin or ferritin when vitamin C is added to oral iron therapy.
  • Therefore, adding vitamin C is optional, not essential.
  • It may still be useful in people with low dietary vitamin C intake or when iron is taken with food, but it is not required for most patients.

Where the answers differ is in emphasis. Some focus more on the theoretical mechanism, others on dosage, and others on clinical outcomes.


Why Do the Answers Differ?

Although all four answers communicate the same core clinical message, the differences in tone, emphasis, and detail reflect the non-deterministic nature of large language models (LLMs). These models don’t retrieve fixed responses; instead, they generate answers dynamically by predicting plausible language patterns based on their training data and the prompt. As a result, even when the underlying evidence is the same, each answer may frame the issue slightly differently, highlighting certain studies, focusing more on mechanisms or clinical outcomes, or varying in how strongly the conclusions are stated. This variability is normal for LLMs and explains why answers can align on substance while differing in style or emphasis.
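
To make that concrete, here’s a hedged sketch of the effect using the OpenAI Python client – a generic illustration, not AskTrip’s actual pipeline. Running the identical prompt twice with a non-zero temperature will typically produce two differently worded, but substantively similar, answers:

```python
# Demonstration of LLM non-determinism: same prompt, two different wordings.
# Generic example using the OpenAI client - not AskTrip's production setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "Does vitamin C help with iron absorption? Answer in two sentences."

for run in range(2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # >0 enables sampling, so wording varies run to run
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}\n")
```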

Final Thoughts

This small experiment highlights both the strengths and quirks of AI-generated clinical answers. Even when content is broadly aligned, the framing can shift subtly, which matters if you’re a clinician looking for crisp guidance. The good news is that, in this case, the core message across all four answers is consistent and clear: iron works on its own, and vitamin C is optional.

UPDATE

Not long after posting we were contacted by a user asking about reference overlap, an excellent question! I asked ChatGPT to analyse the overlap (it feels less biased than me doing it): “Across the four Qs, there is a strong shared core of evidence—mainly one major RCT and one or two meta-analyses—plus a recurring guideline. Around this core are unique additions tailored to each question’s angle (mechanistic vs. clinical).”

I’m fairly reassured by this extra analysis!

From Rejection to Guidance: How AskTrip Now Helps Users Fix Their Questions

At AskTrip, our goal is to help users get reliable answers to their clinical questions. We identified a recurring issue: occasionally, the system must reject a question, often because it contains patient-identifiable information, is too vague, or isn’t really a clinical query. Until now, users simply received an unhelpful message saying the question had been rejected, offering little guidance on what to do next.

We’ve now rolled out an improvement that changes this. Instead of a dead end, users receive clear, constructive feedback along with tailored alternative questions they can select with a single click, immediately starting the question-answering process. This turns a rejection into a smooth path toward getting the clinical information they need.

For example, a question containing patient-identifiable details now comes back with an explanation and anonymised, clickable alternatives.

It’s a small change with a big impact: fewer dead ends, clearer questions, and faster access to trusted clinical answers.
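
For readers who like to see the shape of such a flow, here’s a purely hypothetical sketch – the checks and suggested rewordings are simplistic stand-ins, not AskTrip’s real screening logic:

```python
# Hypothetical sketch of a rejection-to-guidance flow.
# The PII and vagueness checks below are crude stand-ins for illustration.
import re
from dataclasses import dataclass, field

NHS_NUMBER = re.compile(r"\b\d{3}\s?\d{3}\s?\d{4}\b")  # stand-in PII pattern

@dataclass
class ScreeningResult:
    accepted: bool
    feedback: str = ""
    alternatives: list = field(default_factory=list)

def screen_question(question: str) -> ScreeningResult:
    # Reject anything resembling patient-identifiable information.
    if NHS_NUMBER.search(question):
        return ScreeningResult(
            accepted=False,
            feedback="This appears to contain patient-identifiable information. "
                     "Here is an anonymised version you can use:",
            alternatives=[NHS_NUMBER.sub("[patient]", question)],
        )
    # Reject very short questions, offering clickable rewordings instead.
    if len(question.split()) < 4:
        topic = question.rstrip("?").strip()
        return ScreeningResult(
            accepted=False,
            feedback="This question is too vague to answer reliably. "
                     "Did you mean one of these?",
            alternatives=[
                f"What is the first-line treatment for {topic}?",
                f"What is the recommended diagnostic workup for {topic}?",
            ],
        )
    return ScreeningResult(accepted=True)

print(screen_question("Asthma?"))  # rejected, with one-click alternatives
```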

HTML Scissors

When I first started in clinical Q&A nearly 30 years ago with ATTRACT, we often received questions from general practitioners that I knew could be answered by the excellent clinical guidelines available at the time (I think they were called Prodigy then). The challenge wasn’t the lack of guidance – it was that the guidelines were long, and pinpointing the relevant section was difficult. For many questions, our real task was simply to extract the key information buried within a mass of content, most of which wasn’t directly relevant.

Even then, I felt that if the guidelines were broken into bite-sized pieces, they would be far easier to use. I used to talk about taking a pair of “HTML scissors” to cut them up, so GPs could more easily find the specific information they needed for themselves.

Fast forward to today, and at AskTrip we face a related challenge – one that has reminded me of those early “HTML scissors” conversations. Our system searches documents and sends the entire text (guidelines, systematic reviews, and so on) to the AI model, asking it to identify and extract the relevant passage. If a document happens to be 5,000 words long, this process takes time – and incurs unnecessary computational cost – just to locate the key section.

By coincidence, the idea behind those old “HTML scissors” has become a recognised approach in modern information retrieval. It’s now a standard technique, widely used in AI pipelines, and it even has a name: chunking.

Chunking divides large documents into smaller, coherent sections to make them easier and faster to process. Instead of treating a guideline as a single 5,000-word block, chunking breaks it into major thematic units – such as causes, diagnosis, initial management, monitoring, or special populations. Within each of these larger chunks, the content can be divided even further into sub-chunks, which capture more granular pieces of information. For example, a diagnosis chunk might be split into sub-chunks for individual diagnostic tests, criteria, red flags, and decision pathways. These sub-chunks retain enough local context to stand alone, allowing the AI system to pinpoint very specific information without processing the entire guideline or even the full section.
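
To make the idea concrete, here’s a minimal sketch of hierarchical chunking, assuming a guideline whose sections are marked by headings. Real pipelines split on richer document structure (HTML or XML sections) and often add overlap between chunks, so treat this as illustrative only:

```python
# Minimal sketch of hierarchical chunking: split a guideline into sections,
# then into word-limited sub-chunks that keep their section label as context.
def chunk_guideline(text: str, max_words: int = 200) -> list:
    chunks = []
    section = "Introduction"
    for block in text.split("\n\n"):               # paragraphs
        if block.isupper():                        # crude heading detector
            section = block.title()
            continue
        words = block.split()
        for i in range(0, len(words), max_words):  # sub-chunks within a section
            chunks.append({
                "section": section,                # local context travels with the chunk
                "text": " ".join(words[i:i + max_words]),
            })
    return chunks

guideline = """DIAGNOSIS

Measure HbA1c. A result of 48 mmol/mol or above indicates diabetes.

INITIAL MANAGEMENT

Offer standard-release metformin as first-line therapy."""

for chunk in chunk_guideline(guideline):
    print(chunk["section"], "->", chunk["text"])
```

Because every sub-chunk carries its section label, a fragment about metformin still “knows” it belongs to initial management, which is what lets it stand alone.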

The result is faster retrieval, lower computational cost, and more accurate matching between a clinician’s question and the part of the guideline that truly answers it. Because the AI is working with smaller, well-defined blocks of text, it can zero in on precise details – such as a dosing adjustment, a diagnostic threshold, or a management step – without being distracted by the surrounding material. This not only reduces latency and improves user experience but also increases reliability: the system is less likely to miss key details or return irrelevant passages, making the overall process both more efficient and more clinically useful.

So, our next major improvement to AskTrip is the introduction of chunking for large documents. This will allow us to deliver clearer, more precise answers, generated more quickly and at a much lower computational cost. And we’re not stopping there. To push performance even further, we’re developing vector search to improve how we target the most relevant chunks in the first place. I’ve written a brief explanation of vector search already, and I’ll share more updates as this work progresses—but together, these advances mark a significant step forward in making AskTrip faster, smarter, and more efficient for everyone who relies on it.
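
And here’s how the two pieces might fit together, building on the chunking sketch above – again, the embedding model and wiring are assumptions for illustration, not our production stack:

```python
# Sketch of vector search over chunks: embed the question and every chunk,
# then send only the best-matching chunk to the LLM, not the whole document.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "Measure HbA1c. A result of 48 mmol/mol or above indicates diabetes.",
    "Offer standard-release metformin as first-line therapy.",
]
question = "What is the first-line drug treatment for type 2 diabetes?"

chunk_vectors = model.encode(chunks)
question_vector = model.encode(question)

scores = util.cos_sim(question_vector, chunk_vectors)[0]
best = int(scores.argmax())
print(chunks[best])  # only this chunk goes to the LLM, not the full guideline
```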

New on Trip: Linking RCTs to Trial Registrations and Systematic Reviews

Released today: We’ve added a new feature to Trip that helps you understand clinical trials in their full context. When you view an RCT, Trip now automatically attempts to link to:

  • its ClinicalTrials.gov registration, and
  • any systematic reviews that include the study.

This makes it easier to verify protocols, spot outcome discrepancies, and see how a trial fits into the wider evidence base – all without extra searching. This is how it looks:

In the top RCT we can see it links to 3 trial registrations. The second RCT links to 1 trial registration and is linked to 4 systematic reviews. And, finally, for the third RCT we have not been able to find a trial registration or an inclusion in a systematic review. NOTE: just because we can’t find a trial registration doesn’t mean the trial hasn’t been registered – it simply means we haven’t been able to identify one using the scraping technology we’ve employed.

If you click on the ‘Details’ link, a drop-down appears.

This is really cool and it’s part of our ongoing effort to make high-quality evidence quicker and easier to use.
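
For the technically minded, here’s a hypothetical sketch of the simplest form this kind of linking can take: scan an RCT’s text for a ClinicalTrials.gov identifier (NCT followed by eight digits) and build the registration URL from it. The real Trip pipeline is more involved than this:

```python
# Hypothetical sketch: link an RCT to its ClinicalTrials.gov registration
# by finding NCT identifiers in the article text. Illustrative only.
import re

NCT_PATTERN = re.compile(r"\bNCT\d{8}\b")

def find_registrations(article_text: str) -> list:
    """Return a ClinicalTrials.gov URL for every NCT number found."""
    return [
        f"https://clinicaltrials.gov/study/{nct_id}"
        for nct_id in sorted(set(NCT_PATTERN.findall(article_text)))
    ]

abstract = "Trial registration: ClinicalTrials.gov NCT01234567, registered 2019."
print(find_registrations(abstract))
# ['https://clinicaltrials.gov/study/NCT01234567']
```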
