One of the most consistent pieces of feedback we’ve had from users is simple: can we see more of the evidence behind the answer?
That’s led us to experiment with something new in AskTrip—two versions of the same response:
- A standard answer: quick, focused, decision-ready
- A detailed answer: longer, with more evidence, context, and transparency
At first glance, this looks like a question of length. The detailed version can be 50% to 3× longer, adding sections on safety, mechanisms, and research gaps, while the standard version sticks to the essentials.
But the more interesting finding is this:
The conclusion usually doesn’t change.
Across multiple examples—from migraine treatments to rare conditions like Dravet syndrome—both versions tend to land in the same place. The standard answer tells you what to do. The detailed answer shows you why that answer holds—and where it might not.
That distinction matters, because one of the known failure modes of AI-generated clinical answers is that they can sound confident even when the underlying evidence is thin, indirect, or inconsistent. The answer looks clean. The evidence behind it often isn’t.
The standard answer inevitably compresses that complexity. It has to—that’s what makes it useful. You get the headline: what works, how strong the evidence is, and what clinicians typically do.
The detailed answer reintroduces the complexity—but in a structured way. You start to see the scaffolding: the trials, the meta-analyses, the lack of head-to-head comparisons, the reliance on indirect evidence, the safety trade-offs. Not more opinion—more visibility.
Take a condition like Dravet syndrome. In practice, there are recognisable treatment patterns. But there isn’t a clean, evidence-based “algorithm” underpinning them—much of the approach is based on indirect comparisons and evolving consensus. A standard answer reflects the pattern. A detailed answer makes the gap explicit: this is what we do, but this isn’t backed by strong comparative evidence.
That’s the difference.
- Standard = decision-ready summary
- Detailed = evidence justification + context
And importantly:
The detailed answer doesn’t usually change what you do—
it changes how well you understand, and how far you trust, why you’re doing it.
If and when the conclusion does change between layers, that’s not a problem—it’s a signal. It tells us the evidence is more fragile than the headline suggests, and that’s exactly the kind of thing we want to surface.
This isn’t just about giving users “more.” It’s about addressing a real problem: how to avoid confident-sounding answers that mask uncertainty.
The two-layer approach is an attempt to separate two functions that are often forced together:
- fast, usable decision support
- transparent, honest representation of evidence
We’re still testing and refining this. But early signs suggest this split might be a better way for AI tools to handle clinical uncertainty—without forcing users to choose between speed and trust.