I recently gave an update on the progress of the system (see Automated rapid reviews). In it I highlight the variables we’ll be able to automatically assess for each paper (RCT or systematic review); a rough sketch of how these might be recorded together follows the list:
- P – population/disease
- I – intervention
- C – comparison (if there is one)
- Sentiment – does the trial favour the intervention or not
- Sample size – is this a large or small trial
- Risk of Bias – via RobotReviewer, which is already on the site
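To make the per-paper output concrete, here is a minimal sketch of how those variables might sit together in a single record. The `ResultRecord` class, its field names and the risk-of-bias labels are my own illustrative assumptions, not the system’s actual schema.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ResultRecord:
    """Hypothetical record of the variables extracted for one paper."""
    population: str               # P - population/disease
    intervention: str             # I - intervention
    comparison: Optional[str]     # C - comparison, if there is one
    favours_intervention: bool    # sentiment: does the trial favour the intervention?
    sample_size: int              # is this a large or small trial?
    risk_of_bias: str             # e.g. "low" or "high", via RobotReviewer
```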
As our system processes all the articles, we have to figure out how to create an output. The outline brief I’ve suggested to our designers is:
Clearly I’m no designer! But I hope you get the picture! The design works on two levels:
- Top level – for a given condition, each ‘blob’ will represent a single intervention. The size of the blob will indicate the size of the evidence (based on the sample sizes of the trials), the horizontal axis will show the date of the first trial for that intervention, and the vertical axis will indicate likely effectiveness (a rough code sketch of this view follows the list).
- Second level – if a user clicks on a blob in the top level, it will be unpacked to break down that intervention into its component trials, using the same plotting approach (sample size = size of blob, date of the individual trial on the horizontal axis, and effectiveness on the vertical).
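As a rough illustration of the top-level view only, here is a minimal matplotlib sketch. The `interventions` rows are made-up placeholder values purely for illustration, and the “effectiveness” scale and blob scaling factor are arbitrary assumptions, not the system’s real output.

```python
import matplotlib.pyplot as plt

# Made-up placeholder rows: (intervention, year of first trial,
# estimated effectiveness on an arbitrary 0-1 scale, total sample size).
interventions = [
    ("Intervention A", 1998, 0.7, 1200),
    ("Intervention B", 2005, 0.4, 300),
    ("Intervention C", 2012, 0.9, 2500),
]

fig, ax = plt.subplots()
for name, first_trial_year, effectiveness, total_n in interventions:
    # Blob size reflects the size of the evidence (total sample size);
    # the divisor is just an arbitrary display scaling.
    ax.scatter(first_trial_year, effectiveness, s=total_n / 2, alpha=0.5)
    ax.annotate(name, (first_trial_year, effectiveness))

ax.set_xlabel("Date of first trial for the intervention")
ax.set_ylabel("Likely effectiveness")
ax.set_title("Top-level view: one blob per intervention")
plt.show()
```

The second level would reuse the same plotting idea, but with one blob per individual trial rather than one per intervention.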
The final version will look nicer, and we’re exploring other visualisation techniques, such as this one.
This needs to be ready by the end of September, so just over three weeks!