At AskTrip, we’ve always believed that transparency builds trust. That’s why I want to talk about something that’s getting a lot of attention in the world of AI: hallucinations.

What are hallucinations?

In simple terms, a hallucination is when a large language model (LLM) generates something that sounds convincing but isn’t accurate. These models are incredibly powerful, but they don’t “understand” in the way humans do – they predict plausible text rather than retrieve verified facts. Most of the time this works brilliantly, but sometimes it slips.

How we keep an eye on quality

We don’t leave this to chance. AskTrip has an active quality control system that monitors for hallucinations and other errors, and we log, track, and learn from every issue we find. On top of that, in collaboration with AI experts, we’re finalising a test bed – a safe environment where we can trial new methods aimed specifically at reducing hallucinations.

The kinds of hallucinations we’ve seen

Being upfront means sharing real examples. Here are three patterns we’ve spotted:

  1. Condition mismatch – A paper was returned as though it were relevant to a particular condition when, in fact, it wasn’t.
  2. Inserted numbers – The LLM provided a recovery figure. The number itself was correct and came from the paper, but the way it was presented made it look as though it came from a different source.
  3. Inference over quotation – Not quite a hallucination, but worth noting. Sometimes the LLM draws an inference from a study rather than sticking strictly to the words on the page.

How often does this happen?

Thankfully, not very often. Importantly, none of the issues we’ve found so far have materially changed the clinical answer, but even minor inaccuracies can matter in a clinical setting. That’s why we take this so seriously, and why users have a responsibility too: everyone who uses AskTrip agrees to a responsibility statement, which includes checking the facts and applying their own critical judgement.

What we’re doing about it

We’re working hard to make AskTrip even more reliable. That means:

  • Partnering with AI experts.
  • Stress-testing new approaches in our test bed.
  • Constantly monitoring, learning, and refining.

Why this matters for you

As a user, it’s important to know that hallucinations can happen, and we’ll always be open about it. The frequency is low and we’re actively working to reduce it further. But awareness is part of safe use – just as it is with any evidence-based tool.

Pulling it all together

So here’s the bottom line: hallucinations exist. We’re aware of them. We’re working hard to reduce them. And we want you, our users, to be aware too.

AskTrip is built on trust – and that means being transparent, even when it’s uncomfortable. By working together, we can keep improving and make evidence access safer and more reliable for everyone.