
Trip Database Blog

Liberating the literature

Look what the Easter Bunny brought: our 15,000th Q&A

The milestone is nice, but the more interesting story is growth.

  • AskTrip launched on 25 June 2025
  • AskTrip hit 10,000 Q&As on 15 January 2026 (~343 per week)
  • AskTrip hit 15,000 Q&As on 2 April 2026 (~455 per week)

That’s a ~33% increase in weekly usage.

The first 10,000 questions arrived at an average of ~343 per week; the next 5,000 came in at ~455 per week. So it’s not just growing – it’s accelerating.

The 15,000th question was: “Do both IA-2 and GAD antibodies need to be tested to diagnose type 1 diabetes?”

And more to come.

We’ve just completed Phase 1 testing of a major upgrade. Phase 2 is about to start, followed by Phase 3. If all goes well, rollout will be in early May – and we expect that to drive usage even further.

Two answers, one question: why we’re testing “standard” and “detailed” responses in AskTrip

One of the most consistent pieces of feedback we’ve had from users is simple: can we see more of the evidence behind the answer?

That’s led us to experiment with something new in AskTrip—two versions of the same response:

  • A standard answer: quick, focused, decision-ready
  • A detailed answer: longer, with more evidence, context, and transparency

At first glance, this looks like a question of length. The detailed version can be 50% to 3× longer, adding sections on safety, mechanisms, and research gaps, while the standard version sticks to the essentials.

But the more interesting finding is this:

The conclusion usually doesn’t change.

Across multiple examples—from migraine treatments to rare conditions like Dravet syndrome—both versions tend to land in the same place. The standard answer tells you what to do. The detailed answer shows you why that answer holds—and where it might not.

That distinction matters.

Because one of the known failure modes of AI-generated clinical answers is that they can sound confident even when the underlying evidence is thin, indirect, or inconsistent. The answer looks clean. The evidence behind it often isn’t.

The standard answer inevitably compresses that complexity. It has to—that’s what makes it useful. You get the headline: what works, how strong the evidence is, and what clinicians typically do.

The detailed answer reintroduces the complexity—but in a structured way. You start to see the scaffolding: the trials, the meta-analyses, the lack of head-to-head comparisons, the reliance on indirect evidence, the safety trade-offs. Not more opinion—more visibility.

Take a condition like Dravet syndrome. In practice, there are recognisable treatment patterns. But there isn’t a clean, evidence-based “algorithm” underpinning them—much of the approach is based on indirect comparisons and evolving consensus. A standard answer reflects the pattern. A detailed answer makes the gap explicit: this is what we do, but this isn’t backed by strong comparative evidence.

That’s the difference.

  • Standard = decision-ready summary
  • Detailed = evidence justification + context

And importantly:

The detailed answer doesn’t usually change what you do –
it changes how well you understand, and how much you trust, why you’re doing it.

If and when the conclusion does change between layers, that’s not a problem—it’s a signal. It tells us the evidence is more fragile than the headline suggests, and that’s exactly the kind of thing we want to surface.

This isn’t just about giving users “more.” It’s about addressing a real problem: how to avoid confident-sounding answers that mask uncertainty.

The two-layer approach is an attempt to separate two functions that are often forced together:

  • fast, usable decision support
  • transparent, honest representation of evidence

We’re still testing and refining this. But early signs suggest this split might be a better way for AI tools to handle clinical uncertainty—without forcing users to choose between speed and trust.

A record day for AskTrip

A couple of weeks ago we recorded the highest number of questions answered in a single week – 542

Yesterday we answered the most questions in a single day – 136

I feel we’re doing something right – and it also demonstrates the need for such a service.

What clinicians really want to know: lessons from the most-viewed clinical questions

Clinical uncertainty is often discussed in abstract terms — gaps in evidence, unmet research needs, or variation in practice. But a more revealing perspective comes from looking at what clinicians actually choose to read.

When we examined a recent group of the most-viewed clinical questions on our site, a clear picture emerged. These were not obscure academic debates. They were practical, sometimes uncomfortable uncertainties that many clinicians appear to share.

Popular questions are rarely random

The most striking feature was that high-interest topics tended to appear in clusters rather than as isolated curiosities.

Several of the most-viewed questions focused on digital tools to improve medication adherence in adolescents. These did not simply ask whether such interventions are effective. They explored which approaches work best and what barriers prevent successful implementation. This suggests clinicians are moving beyond curiosity about digital health towards the harder question of how to make it work in real life.

Another group of widely read questions centred on complex diagnostic scenarios — patients with neurological symptoms, fever or unusual exposures. These are the moments when medicine becomes less about guidelines and more about judgement. The level of interest these questions attract is a reminder that uncertainty at the point of diagnosis remains one of the profession’s greatest challenges.

There was also strong engagement with questions about clinical processes and protocols, particularly in paediatric and critical care settings. Issues such as sedation weaning, transfusion reactions and pre-operative fasting may appear routine, but they carry significant safety implications. The popularity of these topics suggests clinicians are acutely aware that getting the details wrong can have serious consequences.

Some of the most-viewed questions revisited established procedures, such as arthroscopic lavage for osteoarthritis or the management of infected prostheses. These reflect a profession that is increasingly willing to question traditional practices in the light of evolving evidence.

Perhaps most tellingly, several high-interest topics extended beyond conventional biomedical decision-making. Questions about lifestyle influences, behavioural development and service innovations such as emergency department redirection hint at a broader shift in clinical thinking. Modern healthcare uncertainty is no longer confined to diagnosis and drug therapy. It increasingly includes systems, behaviours and patient expectations.

Strong evidence does not eliminate uncertainty

Looking at the strength of evidence behind these popular questions reveals a further, slightly uncomfortable truth.

Where the evidence base is relatively strong, clinicians are often still searching — not for answers about effectiveness, but for guidance on how to implement evidence safely and consistently. Questions about digital adherence interventions, procedural protocols and changing treatment pathways fall into this category. The challenge is not discovering what works, but applying it in complex real-world environments.

By contrast, the questions linked to more limited or moderate evidence often involve diagnostic ambiguity, rare clinical scenarios or organisational change. These are situations where clinicians cannot simply follow a recommendation. They must interpret incomplete information and make decisions under uncertainty.

In other words, stronger evidence does not remove doubt. It shifts the nature of clinical curiosity — from “does this work?” to “how do I use this in practice?”

A signal about modern clinical practice

The fact that these questions attract the most attention should make us pause. They represent collective uncertainty, not isolated gaps in knowledge. They highlight the everyday tensions clinicians face between evidence, experience and system pressures.

If we want decision-support tools and evidence resources to remain relevant, we need to recognise this reality. Clinicians are not only looking for definitive answers. They are looking for help navigating the messy, evolving landscape of modern healthcare.

Understanding what clinicians choose to read may therefore tell us more about the future of evidence-based practice than any guideline or research agenda.

Help us shape the next version of AskTrip

Before AskTrip officially launched, we were fortunate to have a fantastic group of clinicians and information specialists who volunteered to beta test the system. Their feedback was invaluable in helping us identify problems, refine features, and improve the overall experience.

Now, nine months on, we’re preparing the next phase of development – and we’d love to recruit a new group of volunteer testers to help us put a series of upcoming changes through their paces.

Many of these improvements come directly from user feedback. Others reflect things we’ve learned from analysing real-world questions and usage patterns. Together, we believe they represent a significant step forward for AskTrip, but we need your help to make sure we get them right.

A step-wise testing approach

We expect testing to take place in stages.

We’re making some substantial changes, and asking users to test everything at once could be overwhelming. It also risks more subtle issues being missed. Instead, we plan to introduce updates in phases so testers can focus on specific features and give more targeted feedback.

The first stage will focus on new work designed to reduce intent drift and avoid what we’ve previously described as “EBM wallpaper” (see this blog post for a fuller explanation).

Later stages are likely to include testing:

  • Longer, more detailed answers
  • A refreshed design and user interface
  • A new follow-up question / “continue the conversation” feature

Overall, we anticipate up to three testing phases.

What’s involved?

Taking part won’t be onerous. We’ll simply ask you to use the system as you normally would and share your impressions. This might include:

  • Trying specific types of questions
  • Comparing responses with the current version
  • Flagging anything confusing, unhelpful, or particularly good

We also hope there’ll be an element of fun in being among the first to try new features — and in helping shape a tool designed to support evidence-based clinical decisions.

Interested?

If you’d like to be involved, please get in touch (email: jon.brassey@tripdatabase.com)


We’d be delighted to have you help us shape the next evolution of AskTrip.

A record week for AskTrip

Last week marked a milestone for AskTrip: for the first time, we answered more than 500 clinical questions in a single week, reaching a new high of 542 questions answered.

Interestingly, the week began and ended with questions linked by a common theme – pain – yet illustrating the remarkable breadth of issues clinicians bring to AskTrip.

The first question of the week asked: What adverse effects might occur when carbamazepine and oxycodone are co-administered for pain management?

Here, the focus was on drug safety and interaction risk — a complex prescribing scenario involving multimorbidity, polypharmacy, and the need to balance analgesia with potential harms.

The final question of the week took us into a very different evidence space: What is the effectiveness of adding manual therapy to exercise therapy in reducing pain and disability in adults with chronic non-specific low back pain?

This reflects the non-pharmacological management of pain, where clinicians seek clarity on the value of physical and rehabilitative interventions supported by trials and systematic reviews.

Together, these two questions neatly capture what AskTrip is becoming known for – rapid, evidence-based answers across the full spectrum of clinical uncertainty. From medication safety to rehabilitation strategies, from individual prescribing decisions to broader questions of effectiveness, the diversity of questions continues to grow.

Surpassing 500 answers in a week is more than just a number. It reflects increasing trust from clinicians, expanding use at the point of care, and a widening recognition that high-quality evidence can, and should, be easier to access.

If this record week is any indication, the demand for fast, reliable clinical answers is only going in one direction.

How AI helped us find a hidden bug on Trip

Recently we had a brief problem on Trip where the site became unstable and temporarily crashed. What followed turned into an interesting example of how AI can help diagnose tricky technical issues.

The problem started when we noticed that some of our servers were repeatedly failing. At first, the cause wasn’t obvious. The system had been running smoothly, and the usual monitoring tools didn’t clearly show what was going wrong.

One of our developers downloaded the detailed system logs and tried something a little different. Instead of manually combing through thousands of lines of information, he asked Claude (an AI system) to analyse the logs and the relevant code.

Claude suggested a possible explanation: Under certain circumstances, the software could accidentally try to send two replies to the same request.

In web systems, each request must receive exactly one response. Once the system sends that reply, the connection is effectively finished. If the software tries to send another one, the server throws an error because the conversation is already closed.
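We haven’t published the code involved, but the general pattern is easy to sketch. Here is a minimal illustration in Node/Express – the route and messages are hypothetical, not Trip’s actual code:

```typescript
import express from "express";

const app = express();

// Hypothetical handler illustrating the double-response pattern.
app.get("/search", (req, res) => {
  if (!req.query.q) {
    // First response goes out here...
    res.status(400).send("Missing search query");
    // ...but without a `return`, execution continues below.
  }

  // Second response attempt on the same request. Node rejects this with
  // ERR_HTTP_HEADERS_SENT ("Cannot set headers after they are sent to the
  // client") because the connection is already finished.
  res.send(`Results for ${req.query.q}`);
});

app.listen(3000);
```

In a sketch like this, the fix is a one-liner: return after the first response so the handler stops. And unusual requests – such as a crawler omitting parameters the code expects – are exactly the kind of input that flushes out a path like this.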

Normally this wouldn’t happen often. But if it occurs repeatedly, those errors can accumulate and cause servers to fail.

And that’s exactly what happened.

It appears the issue was triggered by Google’s web crawler, which was sending a variety of unusual requests to the site. Those requests exposed a hidden bug in our code that had probably been sitting quietly there for some time.

Once the problem was identified, the fix was straightforward and has now been deployed.

The interesting part of the story is how quickly the issue was diagnosed. Debugging problems like this can often take hours of searching through logs and code. In this case, AI helped highlight the likely cause almost immediately.

It’s a small example of how AI is starting to act as a useful assistant for engineers, helping identify problems faster and keeping services running smoothly.

Learning from user feedback: how we’re improving AskTrip answers

Over the past few months we’ve received hundreds of individual pieces of feedback on AskTrip answers. Around 15% were low ratings. That might sound worrying, but I actually find the low scores the most valuable.

Why? Because they’re actionable.

People who are dissatisfied are far more likely to tell you about it, so the 15% is likely to be an overestimate of overall dissatisfaction. But each low score comes with something far more useful than a number: a clue about where the product isn’t meeting expectations. And when you look across hundreds of these, clear patterns start to emerge.

Here are the main things we learned.


1. Clinicians want answers that stay tightly focused on their question

One of the most common frustrations wasn’t that the information was wrong – it was that it drifted.

A clinician might ask a very specific question (a particular population, drug comparison, route, or clinical dilemma), but the answer sometimes broadened into a more general discussion of the topic.

Interesting? Yes.
Helpful for a decision? Not always.

The lesson for us is simple: relevance beats comprehensiveness. Staying locked onto the exact clinical question matters more than covering the wider subject area.


2. Confidence must match the strength of the evidence

Another pattern was what I think of as “EBM wallpaper” – answers that looked polished and evidence-based but were built on thin or indirect evidence.

Users don’t just want citations. They want honest calibration:

  • Strong evidence → clear conclusions
  • Limited evidence → say so early and plainly
  • No evidence → don’t dress it up

In other words, clinicians value honest uncertainty more than polished narrative.


3. When the evidence isn’t there, don’t guess

Sometimes there is no directly relevant research – or the question uses a term that isn’t recognised in the evidence.

In these situations, the risk for AI is to be “helpful” by filling the gap with general advice, assumptions, or plausible definitions. That can create confident answers that aren’t actually evidence-based.

Our approach will be different. When evidence is missing or uncertain, AskTrip will:

  • Say this clearly and early
  • Avoid speculation or invented interpretations
  • Suggest related questions that are more likely to return useful evidence

Sometimes the most helpful response isn’t a longer answer — it’s helping you ask the next, better question.


4. And finally… some people want more detail

Interestingly, the feedback wasn’t all about making answers shorter or tighter.

Around one third of users told us the opposite – they’d like longer, more detailed answers.

This highlights something important: clinicians use AskTrip in different ways. Some want a quick, decision-focused summary. Others want to explore the underlying evidence in depth.

So the challenge isn’t simply length – it’s flexibility.


What we’re changing next

This feedback isn’t just interesting – it’s directly shaping the next phase of AskTrip.

We’re actively working on two key improvements.

1. Better-calibrated answers
We’re refining how answers are generated so that they:

  • Stay tightly focused on the exact clinical question
  • Match confidence to the strength of the evidence
  • Say clearly when evidence is limited or absent
  • Avoid speculation or unnecessary narrative

2. A redesigned answer format
We’re moving toward a structure that supports different user needs:

  • A concise clinical summary by default – clear, decision-focused, and quick to read
  • Expandable detail – allowing users to explore the full evidence, studies, and context when they want more depth
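
As a rough sketch of how that two-layer structure might be represented – the field names here are ours for illustration, not AskTrip’s actual schema:

```typescript
// Illustrative shape for a two-layer answer. Field names are hypothetical –
// this is not AskTrip's actual data model.
interface AskTripAnswer {
  question: string;

  // Shown by default: clear, decision-focused, quick to read.
  summary: {
    conclusion: string;
    evidenceStrength: "strong" | "moderate" | "limited" | "absent";
    keyReferences: string[];
  };

  // Shown only on demand: the evidence scaffolding behind the headline.
  detail?: {
    studies: string[];            // trials, meta-analyses, guidelines
    safetyConsiderations: string; // trade-offs and cautions
    researchGaps: string;         // where the evidence is thin or indirect
  };
}
```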

In short:
Short by default. Deep on demand.


Why low scores are valuable

It’s easy to focus on average ratings or overall satisfaction. But the most useful feedback often comes from the edges – the cases where we didn’t meet expectations.

Those low scores aren’t failures. They’re signals.

And if we listen carefully, they help us do what AskTrip is designed to do in the first place:

Turn evidence into answers that clinicians can actually use – clearly, honestly, and at the level of detail they need.

When good evidence gets buried – and how Trip is fixing it

I introduced the idea of chunking in the post HTML Scissors towards the end of last year. Since then we’ve been working on delivering on the promise and things are starting to come online. Before expanding on that, I’ll restate the problem…

A significant element of how we order Trip search results is how relevant the search terms are to the documents in our index – and this is strongly influenced by term density: the more a document is focused on the topic, the higher it is likely to rank.

However, this creates an important problem.

Take a clinical guideline on asthma. It might be 10,000 words long, with a 1,000-word section devoted to diagnosis. That section is highly relevant to a search for asthma diagnosis. But across the document as a whole, only 10% of the content relates to diagnosis. From a search engine’s perspective, the topic is relatively diluted; so the guideline may be judged less relevant and appear lower in the results than shorter documents that focus entirely on diagnosis.

In other words, long, high-quality documents can be penalised simply because their relevant content is spread thinly.
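
To make the dilution effect concrete, here’s a toy sketch of term-density scoring. Real ranking functions (such as BM25) are far more sophisticated, and this isn’t Trip’s actual scoring code, but the effect is the same:

```typescript
// Toy relevance score: the fraction of a document's words that match the
// query term. Higher density = more "focused" = ranked higher.
function termDensity(text: string, term: string): number {
  const words = text.toLowerCase().split(/\s+/).filter((w) => w.length > 0);
  const hits = words.filter((w) => w.includes(term.toLowerCase())).length;
  return hits / words.length;
}

// A 10,000-word guideline mentioning "diagnosis" 100 times scores 0.01,
// while a 1,000-word page mentioning it 100 times scores 0.1 – ten times
// higher – even if the guideline's diagnosis section is the better evidence.
```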

So, we’re starting to work with chunking – cutting long documents into smaller, coherent elements. These chunks are appearing live in the Trip results and we’re getting quite excited! We haven’t ironed out all the issues yet, but using the technology live is the only way we’ll refine and improve it.
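
What does “cutting into coherent elements” look like? A naive sketch follows – the real pipeline (hinted at by the name HTML Scissors) works on document structure and is considerably more careful, but the shape of the idea is this:

```typescript
interface Chunk {
  parentTitle: string; // the original document's title
  chunkTitle: string;  // the section heading assigned to this chunk
  text: string;        // the section's content, indexed as its own unit
}

// Naive chunker: treat any short line without terminal punctuation as a
// section heading and start a new chunk there. A crude heuristic, purely
// for illustration.
function chunkByHeadings(title: string, lines: string[]): Chunk[] {
  const chunks: Chunk[] = [];
  let current: Chunk = { parentTitle: title, chunkTitle: title, text: "" };

  for (const raw of lines) {
    const line = raw.trim();
    const looksLikeHeading =
      line.length > 0 && line.length < 80 && !/[.:;?!]$/.test(line);

    if (looksLikeHeading) {
      if (current.text.trim().length > 0) chunks.push(current);
      current = { parentTitle: title, chunkTitle: line, text: "" };
    } else {
      current.text += raw + "\n";
    }
  }

  if (current.text.trim().length > 0) chunks.push(current);
  return chunks;
}
```

Each chunk is then indexed alongside its parent document, carrying both titles – which is exactly what you see in the example below.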

An example search that highlights chunking

A search for Meningococcal Chemoprophylaxis illustrates this well – the top result is a chunk rather than a whole document.

A few things to point out:

The document title is Guidance for public health management of meningococcal disease in the UK, and we have appended the chunk title Chemoprophylaxis in Healthcare Settings (Detailed) ‒ Chemoprophylaxis Recommendations in Healthcare Settings. As we chunk, we assign each chunk its own title to sit alongside the actual document title. Whether this continues to be displayed is an ongoing debate.

If you look at the document’s index, you will see that only 6 pages (pages 24–30) are about chemoprophylaxis – less than 10% of the 63-page document. As a result, the document as a whole would score relatively low for this topic and would be unlikely to appear near the top of the results, even though those six pages are highly relevant.

By treating those pages as a separate unit, the content becomes highly concentrated on chemoprophylaxis — increasing its term density and allowing it to rank much more appropriately for the search.

In short, chunking helps Trip find the relevant part, not just the relevant document.

That means long, authoritative sources are no longer penalised for covering multiple topics – and clinicians are more likely to see the evidence they need, faster.

We’re just getting started, and your searches will help us make it better.

Quiet changes like this don’t always get noticed – but they make a real difference to turning research into practice.
