In February I posted an article discussing ‘the near instantaneous meta-analysis’. In a nutshell – is there a way to very rapidly combine the results of multiple trials?
Since then we have been quite busy working on this project, helped by some external expertise and a recent research grant. Trip funded phase one, a proof of concept phase that allowed me to appreciate the challenges, limitations and opportunities that our approach presented. The results were great and since then we have been awarded a grant to move forward to phase two.
Phase two will create a working model for people to use. This will be quite a simple solution and will only be aimed at synthesising placebo-controlled trials (more complex, comparator trials will form phase three). The working model will work as follows:
- The user will enter the condition and the intervention (e.g. acne and antibiotics) into a modified search box.
- Our system will search just the controlled trials portion of Trip to identify suitable trials.
- We will then analyse these and present a score (more below on the scoring system).
- We will then have an area that explains the results, how we arrived at them and the ability for the user to alter certain aspects. This last bit is important as we’re relying on machines to ‘read’ the documents and extract pertinent information. This is unlikely to be foolproof and, while the system ‘learns’, it’ll need some feedback from users. But if the user does make alterations we will then re-analyse based on the updated information.
Quite simple really, and the first three steps will take less than a second (we hope).
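The steps above can be sketched as a simple pipeline. To be clear, this is a toy illustration: all the function names and data structures below are invented (Trip's actual implementation is not public), and naively pooling raw event counts across trials is not a proper meta-analytic method – it just shows the shape of the flow.

```python
# A crude sketch of the pipeline described above. All names and data
# structures are invented; naive pooling of event counts is for
# illustration only, not a real meta-analytic method.

def search_controlled_trials(condition, intervention, index):
    """Step 2: find trials in the controlled-trials index matching both terms."""
    return [t for t in index
            if condition in t["keywords"] and intervention in t["keywords"]]

def extract_effects(trials):
    """Step 3a: the machine 'reading' step -- here the numbers are pre-extracted."""
    return [(t["events_treat"], t["n_treat"], t["events_placebo"], t["n_placebo"])
            for t in trials]

def synthesise(effects):
    """Step 3b: pooled risk ratio (treatment event rate / placebo event rate)."""
    et = sum(e[0] for e in effects)
    nt = sum(e[1] for e in effects)
    ep = sum(e[2] for e in effects)
    npl = sum(e[3] for e in effects)
    return (et / nt) / (ep / npl)
```

Step four – user corrections – would then just be an edit to the extracted numbers followed by a fresh call to the synthesis step.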
As for the score, that’s an interesting area and we’re sure it’ll change over time. But the thinking at the moment is – what is most clinically useful? After all, our audience for this will be practising clinicians, not academics. As such we’re thinking that an effect size is not particularly intuitive. I really like the Clinical Evidence system for rating interventions, e.g. ‘Likely to be beneficial’, ‘Unknown effectiveness’.
However, I’m also struck by the systems used by Amazon and TripAdvisor to rate products and holidays. An item is given an overall score but you can easily see how the score is arrived at. When I use these I always look at the reasons people have given for 1 or 2 stars (i.e. people who have rated the item poor). Whichever system we use, we’ll make it very easy for users to differentiate good and bad aspects of an intervention.
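Either style of score could sit on top of the same pooled result. A minimal sketch, assuming the pooled figure is a risk ratio: the category labels echo Clinical Evidence, but the thresholds and the minimum-trials rule are invented purely for illustration, and the per-trial breakdown is the Amazon-style transparency idea.

```python
# Illustrative only: the thresholds and minimum-trials rule are invented;
# only the category labels echo the Clinical Evidence system.

def rate_intervention(pooled_risk_ratio, n_trials):
    """Map a pooled risk ratio to a Clinical Evidence-style verdict."""
    if n_trials < 2:
        return "Unknown effectiveness"  # too little evidence to call it
    if pooled_risk_ratio < 0.8:
        return "Likely to be beneficial"
    if pooled_risk_ratio > 1.2:
        return "Likely to be ineffective or harmful"
    return "Unknown effectiveness"

def per_trial_breakdown(effects):
    """Amazon/TripAdvisor-style transparency: each trial's own risk ratio,
    so users can see which trials push the overall score up or down.
    Each tuple is (events_treat, n_treat, events_placebo, n_placebo)."""
    return [(e[0] / e[1]) / (e[2] / e[3]) for e in effects]
```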
Hopefully, phase two will be released in 6-8 weeks, though probably not on broad release. It will be a gentle release, to a few people initially, which will allow us to alter the algorithms, allow for further machine learning etc.
I see this whole ‘instant review’ system taking a minimum of four phases. Hopefully, if phase two works as well as we think it will, funding will follow to allow us to move to phase three.