Cardiology is often held up as one of medicine’s most evidence-rich specialties. It has large trials going back decades, mature drug classes, well-developed international guidelines, and clear acute pathways. So when clinicians use an AI evidence tool to ask cardiology questions, what do they actually ask?
We looked at 250 recent questions tagged as Cardiology on AskTrip. The short version: the evidence base is genuinely strong – but the questions cluster in the places where guidelines run into messy patients. Clinicians are rarely asking “what works?” in the abstract. They are asking how to apply known evidence safely to the patient in front of them.
The evidence profile
Of the 250 questions, 84 (34%) were rated High, 42 (17%) Good, 111 (44%) Moderate, and 13 (5%) Limited. Just over half land at Good or High – a stronger profile than most specialty samples we’ve looked at. But Moderate remains the largest category, and that is the interesting bit. Even in a specialty with thousands of RCTs and well-maintained guidelines, a plurality of real-world questions don’t have a clean, directly applicable answer waiting in the literature.
The questions are also broad-front, not concentrated: only two pairs of exact duplicates appear across 250 questions. There is no thesis student iterating one question dozens of times. This is many different clinicians asking many different things.
The uncertainty appears after the “and”
The structural pattern that explains the Moderate-heavy distribution is this: clinicians know the typical evidence; they want to know what happens after the “and”.
Atrial fibrillation is familiar. AF and two stents is harder. Pulmonary embolism is familiar. PE and elevated ALT is harder. Hypertension is familiar. Hypertension and dental extraction, and recent intracerebral haemorrhage, and weight loss to normal BMI are all harder. Heart failure is familiar. Heart failure and CKD stage 3–5 is harder. Anticoagulation is familiar. Anticoagulation and patent foramen ovale and prior stroke and upcoming non-cardiac surgery is harder.
Concrete examples run throughout the corpus: DOACs in obesity, atorvastatin in a teenager, isolated systolic hypertension with already-low diastolic in an elderly patient, a 76-year-old on rivaroxaban with patent foramen ovale facing non-cardiac surgery, an 80-year-old with GFR 15 needing an AV fistula. Most score Moderate. That isn’t a failure of evidence – it’s the honest picture. Trial populations rarely look quite like the patient in front of you.
Anticoagulation is the clearest anxiety signal
If one therapeutic thread runs through the corpus, it is anticoagulation. Thirty-three questions touch it, in almost every difficult context: obesity, surgery, colonoscopy, ERCP, coronary thrombus, stents, pregnancy, liver dysfunction, CKD, prior stroke, high bleeding risk.
Clinicians know anticoagulation works. What they want to know is when it is safe, who benefits most, and what to do when bleeding, liver function, surgery, pregnancy or thrombosis complicate the calculation.
A small but telling pattern: three separate questions ask whether aspirin has a role in AF for stroke prevention, all rated High. The well-supported answer is “no, anticoagulation, not aspirin” – yet the question keeps being asked. That is a textbook dissemination gap: clear evidence, ongoing clinical uncertainty about applying it.
A second pattern is drug switching and peri-procedural pausing. Five questions are explicitly about transitioning between agents or stopping anticoagulation before an intervention, three of them rated Limited. The interruption of therapy is where guideline coverage runs thinnest.
Heart failure has shifted from “which drug?” to “how do we deliver it?”
A striking pattern is that heart failure questions no longer cluster around drug efficacy. Of 29 heart failure questions, only a handful are about which therapy works – and where they are (SGLT2 inhibitors in HFpEF; evidence-based HFrEF treatment), they tend to score High. The more numerous heart failure questions are about delivery: virtual wards for rapid GDMT up-titration, GDMT protocols in CKD stage 3–5, the cost-effectiveness of dedicated heart failure units, remote monitoring algorithms, engaging patients in self-management.
Heart failure has become, in clinical-question terms, an implementation specialty. Clinicians and service leads know what works; they want to know how to get it to patients reliably, especially in those with CKD, frailty, or complex comorbidity.
Acute care, devices and the front door
A meaningful slice of the corpus comes from the front door of healthcare – ambulance, ED, cath lab – where decisions are urgent. STEMI in endocarditis, MINOCA pharmacology, chest pain triage, OPQRST assessment, ambulance response times, hypotension, cardiogenic shock, septic shock with tachycardia, acute pulmonary oedema, whether oxygen is harmful in a heart attack with adequate saturations. Some of these need a synthesised guideline answer; some need a recent trial; some need a pragmatic “here is what most experienced people do.”
Cardiology is also a device, imaging and procedural specialty, and the questions reflect that. Twelve questions name a specific product: Visipaque versus Omnipaque for coronary angiography (a five-question mini-cluster), the Penumbra Element sheath, the Zebra catheter, Boston Scientific’s Embold coil system, the Impella device, the HeartInsight monitoring algorithm, Tebonin, Lanacordin. The Zebra catheter question – “Are there any published clinical studies on the Zebra catheter from Q’apel?” – rated Limited, which is the right rating for a niche single-vendor device. A well-calibrated “not much” is more useful than a confident-sounding hedge.
Where cardiology meets other specialties
One of the more interesting findings is how many cardiology questions are clearly asked by non-cardiologists. Twenty-five questions sit at specialty interfaces: clozapine in patients with pericardial effusion, upadacitinib in coronary stent patients, sildenafil’s visual side effects, QTc-prolonging medications in HR+/HER2− metastatic breast cancer, hypertension during dental extraction, anticoagulation around hernia surgery, stopping clopidogrel before colonoscopy in a stented patient, chest pain in long COVID, antihypertensives and psoriasis, tramadol-triggered hypertensive crises in paraganglioma, ADHD medications in heart failure, tamoxifen versus aromatase inhibitors in a stroke patient.
These are the questions psychiatrists, oncologists, dentists, GPs, gastroenterologists, dermatologists, rheumatologists and paediatricians need cardiology evidence for. A specialty-bounded textbook doesn’t answer them well, because they fall in the gap between disciplines. The question is genuinely “how do I avoid harming this patient’s heart while treating their other thing?”
A small aortic cluster that behaves differently
A coherent 14-question cluster on aortic disease has a distinct character. Unlike most of the corpus, these questions are not mainly about treatment. They are about recognition and monitoring: how aortic dissection presents, what leads to its underdiagnosis, sex differences in presentation, Marfan syndrome and connective-tissue disorders, AAA surveillance intervals, when surgery is indicated for penetrating aortic ulcers. The questions score consistently well – five rated High, none rated Limited. Clinicians know the evidence is there; they want help navigating it for diagnosis and surveillance.
What the Limited ratings tell us
Thirteen questions came back rated Limited. Every one points at a clinically recognisable thin spot in the evidence: ivabradine-to-beta-blocker transition, bisoprolol-to-verapamil switch (the combination is contraindicated), refractory HOCM, anticoagulation in segmental PE with elevated ALT, combined guidelines for hypertension and hyperlipidaemia (real guidelines split them), 40 mg enoxaparin once daily in AF, preoperative cardiovascular evaluation in an 80-year-old with GFR 15 needing AV fistula creation, DOACs versus enoxaparin in patients on supplemental oxygen, and a handful of others.
The Limited ratings cluster in three recognisable places: drug switching, peri-procedural anticoagulation timing, and the combinatorial complexity of multimorbid older patients. This is the opposite of confident handwaving. A calibrated “the evidence here genuinely thins out” is more clinically useful than a polished answer that papers over the gap.
The thread that runs through
Across drugs, devices, acute care, special populations, services and specialty interfaces, the same shape of question keeps recurring: I know roughly what the evidence says in the typical patient – but my patient is not the typical one, so how do I apply it safely here?
The trials exist. The guidelines exist. What clinicians are asking AI evidence tools for is the next step – translation into the specific case in front of them, with appropriate uncertainty when the literature can’t quite reach.
In cardiology, the evidence is often strong. But the patient is often complicated.