Step 1 – the search. This was a bit fiddly! I didn’t want to simply search for SSRI, so I added a few named SSRIs as well (citalopram, escitalopram, fluoxetine, paroxetine and sertraline). I also wanted to search for hot flashes OR hot flushes. A bit of mucking around with the syntax gave me the following:
("hot flushes" or "hot flashes") and (SSRIs or citalopram or escitalopram or fluoxetine or paroxetine or sertraline)
This returned 41 possible trials in Trip.
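The search string above can be sketched as two OR-groups joined by an AND. Here's a hedged illustration in Python (the `or_group` helper is mine, not part of Trip):

```python
# Illustrative sketch of how the boolean search string was assembled.
symptom_terms = ['"hot flushes"', '"hot flashes"']
drug_terms = ["SSRIs", "citalopram", "escitalopram",
              "fluoxetine", "paroxetine", "sertraline"]

def or_group(terms):
    """Wrap a list of terms in parentheses, joined by 'or'."""
    return "(" + " or ".join(terms) + ")"

# AND the two groups together: symptom terms AND drug terms.
query = " and ".join([or_group(symptom_terms), or_group(drug_terms)])
print(query)
# → ("hot flushes" or "hot flashes") and (SSRIs or citalopram or
#   escitalopram or fluoxetine or paroxetine or sertraline)
```

Keeping the term lists separate like this makes it easy to add another named SSRI without re-fiddling the parentheses.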
Step 2 – the analysis. I then went through the titles, selected 13 articles that I felt matched the research question, and pressed the ‘Analyse’ button, which gave a score of +0.42. This took about 5 minutes (it would have been quicker if I hadn’t messed up the search syntax).
Step 3 – checking. To improve the quality, I went through the articles to check the various components (effect and sample size). I had to adjust three of them (mainly the sample size), which gave a new figure of +0.31. NOTE: this step added an extra 3 minutes.
So, the final score is +0.31!
Now, time to check against the actual systematic review. They only included 11 trials (two fewer than we included, which raises an issue around the lack of quality assessment of our articles). But, their conclusion:
“SSRI use is associated with modest improvement in the severity and frequency of hot flashes but can also be associated with the typical profile of SSRI adverse effects.”
So, both systems indicate that SSRIs are potentially useful in managing menopausal hot flushes/flashes. On a broad level, Trip Rapid Review got the same result as the systematic review.
One issue relating to harm is worth noting: in the results they report “Adverse events did not differ from placebo”, which is interesting given that in the conclusion they report “...but can also be associated with the typical profile of SSRI adverse effects.” Essentially, the systematic review didn’t find any particular harm issues, so I assume the authors are relying on external data. I raise this because the Trip Rapid Review system is not – yet – trained to look for adverse events. But, in this case, I imagine a user could look at any drug reference system (e.g. the BNF) to highlight potential adverse events. More controversially, they could even look at the adverse events list in Wikipedia.
So, the same result using Trip Rapid Review in eight minutes; I shudder to think how long it took the seven authors of the systematic review – perhaps a year!?
So, what are the lessons learnt? For me, the following:
We use minimal critical appraisal; our main effort (but the most important one) is checking whether each article is a randomised trial or not. In future we could easily adapt the system to alter scores based on whether trials have been appraised. For instance, if a trial appears in Evidence Updates we could assume it’s valid.
I still have issues with the number/final score. We reported +0.31, but what does that mean? I favour trying to assign various narratives based on the score, for instance:
- +1.00 to +0.50 = Intervention is highly likely to be beneficial.
- +0.49 to +0.25 = Intervention is likely to be beneficial.
- +0.24 to -0.24 = Evidence is weak or ambiguous.
- -0.25 to -0.49 = Intervention is unlikely to be beneficial.
- -0.50 to -1.00 = Intervention is highly unlikely to be beneficial.
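The banding above could be sketched as a simple lookup. This is just an illustration of the proposed thresholds; the function name is mine, not a Trip feature:

```python
def score_narrative(score):
    """Map a Trip Rapid Review score (in the range -1 to +1)
    to a suggested narrative band."""
    if score >= 0.5:
        return "Intervention is highly likely to be beneficial."
    if score >= 0.25:
        return "Intervention is likely to be beneficial."
    if score > -0.25:
        return "Evidence is weak or ambiguous."
    if score > -0.5:
        return "Intervention is unlikely to be beneficial."
    return "Intervention is highly unlikely to be beneficial."

# Our hot flushes score of +0.31 lands in the second band.
print(score_narrative(0.31))
# → Intervention is likely to be beneficial.
```

On this reading, +0.31 would tell a user “likely to be beneficial”, which matches the systematic review’s “modest improvement” conclusion rather nicely.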
Trip Rapid Reviews is looking like a lot more than just an interesting experiment!
Remember Trip Rapid Reviews is free and really easy to use - why not try it for yourself?