Over 30 people signed up to test our automated Q&A system, though it’s unclear how many actively participated. Even so, we received more than 200 questions – roughly 7 per sign-up on average. In reality, some testers asked just one or two questions while others were clearly more enthusiastic, which is a great sign that we’re on the right track!
We’re now moving into the feedback phase and have asked testers to share their views across several key areas:
- User characteristics: Confirmation of professional status
- Usage frequency: Number of questions asked during the trial
- Perceived accuracy: Subjective judgement of how well answers reflected the evidence
- Clinical relevance: Relevance of responses to the clinical scenario posed
- Trustworthiness: Level of trust placed in the answer content
- Responsiveness: Perceived speed of system response
- Answer format: Feedback on the structure and style of the response (e.g. narrative vs. quantitative balance, referencing)
- Likelihood of recommendation: Willingness to recommend the tool to colleagues
- Improvement suggestions: Opportunities to improve usability, content quality, or design
- Overall impressions: General feedback on value, potential for routine use, and any concerns
We’re a mix of nervous and excited – but that’s the whole point of testing. We know it’s not perfect, and with thoughtful feedback, we’re confident we can make it significantly better.
Let’s call it nervously optimistic.