Why Usage Speaks Louder Than Words

In some ways, I didn’t even need to read the beta tester feedback. Why? Because the most compelling evidence was in the behavior itself: users kept coming back. That repeated engagement spoke volumes – it showed the system was delivering real value and gaining meaningful traction.

Positive Feedback Highlights

But we did ask for feedback, and it was broadly very positive. The headlines:

  • 70% were health professionals
  • Most asked 3+ questions
  • Accuracy was deemed high
  • The answers were deemed relevant and trustworthy
  • Speed – 70% said ‘very fast’ and 30% said ‘reassuringly paced’
  • 100% of health professionals would recommend the system to their colleagues

Here are a few standout quotes:

  • Thanks for the opportunity – I feel a product from the Trip family has particular value given your history in information architecture and providing credible, evidence-tracked, healthcare information support
  • It is very impressive to see the speed and capacity to extract and summarise data from evidence resources
  • Amazing system – would use very frequently in clinical practice!
  • Please continue this excellent initiative
  • Honestly, overall the database is intriguing. It has a resiliency and foundation that lends itself to be far more trustworthy and clinically focused than most other databases. I see it also as a great tool to teach med students about building blocks of clinical reasoning and research.

What’s Next: Immediate and Future Enhancements

As well as the positives, there was lots of constructive feedback, which falls into a number of stages of the Q&A process. Some examples of the issues:

  • Initial question processing – when a user submits a question we need to do some processing to better disambiguate it; for instance, one question we received was simply ‘liver elastography’.
  • Answer creation – we need to better handle the search process, e.g. send additional metadata, make the search more sensitive if too few results are returned, etc.
  • Answer design – the way we include references was problematic for many users, and there was also a wish for an overall strength/weakness-of-evidence statement to be included.
  • Answer placement – we need to add the Q&As to the Trip search index and to have systems in place to deal with duplicates.
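To make the ‘more sensitive search if too few results’ idea concrete, here is a minimal sketch. Everything in it is an assumption for illustration – the query function, the fallback strategies, and the thresholds are hypothetical, not Trip’s actual search API:

```python
# Hypothetical sketch: broaden a search when a strict query returns too few hits.
# The query function, modes, and min_results threshold are illustrative only.

def search_with_fallback(query_fn, question, min_results=5):
    """Run a precise search first; if it returns too few results,
    retry with progressively more sensitive settings."""
    strategies = [
        {"mode": "phrase"},     # exact phrase match (most precise)
        {"mode": "all_terms"},  # all terms present, any order
        {"mode": "any_terms"},  # any term matches (most sensitive)
    ]
    results = []
    for settings in strategies:
        results = query_fn(question, **settings)
        if len(results) >= min_results:
            return results
    return results  # best effort: last (most sensitive) attempt

# Toy in-memory query function standing in for a real search backend:
corpus = [
    "liver elastography overview",
    "elastography in fibrosis",
    "prostate cancer screening",
]

def toy_query(q, mode):
    terms = q.lower().split()
    if mode == "phrase":
        return [d for d in corpus if q.lower() in d]
    if mode == "all_terms":
        return [d for d in corpus if all(t in d for t in terms)]
    return [d for d in corpus if any(t in d for t in terms)]

hits = search_with_fallback(toy_query, "liver elastography", min_results=2)
```

Here the phrase and all-terms passes each find only one document, so the search widens to any-term matching and returns two. A real implementation would instead relax filters or expand synonyms against the actual index.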

All the above are seen as ‘immediate’ action points; by that I mean they will be done before we roll this out as an open beta on Trip. There are also some medium- to long-term improvements we need to make:

  • Add extra content types, e.g. drug information resources.
  • Use location information – if the user is from the USA then favour American guidelines.
  • For each Q&A, give additional prompts for follow-up questions. In other words, if a user asks ‘What are the pros and cons of prostate cancer screening?’ we might suggest follow-up questions such as ‘What is the best screening tool for prostate cancer?’ or ‘What are the mortality rates at the various stages of prostate cancer?’
  • Multi-lingual – allow users to ask questions in their own language and get the answer back in that same language (see Apoyando el uso del idioma español en Trip Database).
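The location idea above – favouring, say, American guidelines for a user in the USA – amounts to a small ranking boost. A minimal sketch follows; the field names, scores, and boost value are all illustrative assumptions, not how Trip actually ranks results:

```python
# Hypothetical sketch of location-aware ranking: boost guidelines whose
# issuing country matches the user's. Fields and boost size are illustrative.

def rank_guidelines(results, user_country):
    """Sort search results by relevance, with an assumed boost
    for guidelines issued in the user's own country."""
    def score(doc):
        base = doc["relevance"]
        if doc.get("country") == user_country:
            base += 0.5  # assumed boost for locally issued guidelines
        return base
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "NICE hypertension guideline", "country": "UK", "relevance": 0.9},
    {"title": "AHA hypertension guideline", "country": "USA", "relevance": 0.8},
]

ranked = rank_guidelines(results, user_country="USA")
```

With these toy numbers the AHA guideline (0.8 + 0.5) outranks the slightly more relevant NICE one (0.9) for a US user, while a UK user would see the reverse – the kind of trade-off the boost size would need to tune.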

In conclusion

The beta test has been energising and insightful. With such a strong foundation and clear areas to build on, we’re more confident than ever that we’re creating something genuinely valuable for clinical decision-making. The next phase? Opening up the beta and continuing to learn, refine, and improve – together with our users.