Every question asked of AskTrip is a small signal: a clinician, somewhere, needed an answer they did not already have. Multiply that by 10,000 and a pattern begins to emerge – not just about what clinicians want to know, but about where medicine itself is falling short.
The pattern has two faces.
When a topic generates lots of questions and AskTrip can only return weak evidence, that is a research gap. The demand is real, the evidence base is thin, and the trials may need commissioning.
When a topic generates lots of questions and AskTrip can return strong evidence, yet clinicians keep asking the same things, that is a dissemination gap. The research has been done. The guidelines exist. But the knowledge is not reliably reaching the people making decisions.
Both represent gaps in the system, but they call for very different responses. One needs funders. The other needs better delivery.
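For readers who think in code, the two-sided test described above can be sketched as a simple rule over per-topic statistics. Everything here is illustrative: the field names, thresholds, and example numbers are invented for the sketch and do not come from the AskTrip dataset.

```python
from dataclasses import dataclass

@dataclass
class TopicStats:
    topic: str
    question_count: int           # how many questions the topic generated
    strong_evidence_share: float  # fraction answered with High/Good evidence
    repeat_share: float           # fraction of questions that are near-duplicates

def classify_gap(t: TopicStats,
                 min_questions: int = 50,     # illustrative threshold
                 strong_cutoff: float = 0.6,  # illustrative threshold
                 repeat_cutoff: float = 0.2   # illustrative threshold
                 ) -> str:
    """Label a topic as a research gap, a dissemination gap, or neither."""
    if t.question_count < min_questions:
        return "insufficient demand signal"
    if t.strong_evidence_share < strong_cutoff:
        return "research gap"        # real demand, thin evidence
    if t.repeat_share >= repeat_cutoff:
        return "dissemination gap"   # strong evidence, yet questions keep recurring
    return "no gap detected"

# Hypothetical topics with made-up numbers:
print(classify_gap(TopicStats("FND", 120, 0.3, 0.1)))   # research gap
print(classify_gap(TopicStats("AF", 400, 0.8, 0.35)))   # dissemination gap
```

The point of the sketch is the ordering of the checks: demand must be established first, then evidence strength separates the two failure modes, and only for well-evidenced topics does repetition signal a delivery problem.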
A research gap: Functional Neurological Disorder (FND)
FND is increasingly recognised as a common, disabling and costly condition. Awareness has finally arrived. The evidence base has not caught up.
In the AskTrip dataset, FND generates a meaningful volume of questions, yet many return only Limited or Moderate evidence. Clinicians are asking about treatment effectiveness, inpatient costs, ward length of stay, and how to manage the condition in both adults and children. Too often, they are not getting confident answers, because the trials largely do not exist.
A similar story plays out for postural orthostatic tachycardia syndrome (POTS). The same specific question, "Is gabapentin effective for POTS?", was asked independently by clinicians in different countries. None received a satisfactory answer, because one does not yet exist. Demand is rising, particularly among post-viral patients, yet the treatment evidence base remains thin and often observational.
A dissemination gap: atrial fibrillation (AF)
AF is one of the most questioned topics in the dataset, and the majority of those questions return High or Good quality evidence. The research is there. The guidelines exist in every major jurisdiction.
And yet “What is the best treatment for atrial fibrillation?” appears four times verbatim, asked by different clinicians at different institutions.
Rate versus rhythm control, anticoagulation thresholds in older adults, DOAC selection – these are areas with well-established, guideline-backed answers. The evidence is not obscure or absent. What this dataset captures is something different: established knowledge failing to travel the last mile to the clinician.
Why does the last mile fail? The dataset cannot yet tell us. The usual suspects are familiar: guidelines that are long or fragmented across jurisdictions, time pressure at the point of care, search tools that surface primary studies when a synthesis was needed, institutional habits that outlast their evidence. Most likely it is some combination, varying by topic. What the dataset does say, unambiguously, is that the gap is real – clinicians with access to a good search tool are still asking questions whose answers have been settled for years. Pinpointing exactly where that last mile breaks down is the next question.
Why this matters
Trip Database has been a search engine over the medical literature for nearly thirty years. AskTrip quietly turns it into something else as well.
Most of the infrastructure of evidence-based medicine is supply-side. Journals publish trials. Cochrane synthesises them. Guideline bodies translate evidence into recommendations. Each of those institutions can tell you, in different ways, what evidence exists. Far fewer can show, in real time, what clinicians are actually trying to find out.
AskTrip can.
Every question is a demand signal, and at scale those signals begin to describe the shape of clinical uncertainty itself: which trials should be commissioned, and which guidelines are not reaching their audience.
Two failure modes. One dataset. Visible at scale.