It’s all hands on deck here as we rush to get a robust prototype model of our automated review system ready for viewing by the EU (next week in Luxembourg). Much of this work has been funded as part of our participation in the Horizon 2020 funded KConnect project (led by TUW, Vienna) and the EU like to see what they’re getting for their money.
Now that we’ve had a chance to play with the data generated by our systems, two things are apparent:
- This should, broadly, be a viable approach
- Full automation is still some way off (note: we’re not attempting to reproduce manual systematic reviews). We think the automation stage will get you ‘so far’, but the output will still need a second ‘polish’ pass to increase its robustness. We’re hoping this polish stage will take no more than 5–10 minutes.
I’m going to share one image below:
The image above shows a number of things, all generated automatically:
- Each trial or systematic review is shown as a single ‘blob’.
- Classification (x-axis) of trials based on perceived efficacy (does the drug work or not).
- Sample size – the bigger the ‘blob’, the bigger the trial.
- Intervention name (y-axis).
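The post doesn’t include any code, but a plot like the one described could be driven by data shaped roughly like this. Everything below is a hypothetical sketch – the trial records, intervention names, efficacy scores, and sample sizes are invented for illustration, not taken from our system:

```python
# Hypothetical trial records; in our system these fields would be
# extracted automatically from the trial reports.
trials = [
    {"intervention": "Drug A", "efficacy": 0.8, "sample_size": 1200},
    {"intervention": "Drug A", "efficacy": 0.3, "sample_size": 150},
    {"intervention": "Drug B", "efficacy": -0.2, "sample_size": 400},
]

# Give each intervention its own row on the y-axis.
interventions = sorted({t["intervention"] for t in trials})
y_of = {name: i for i, name in enumerate(interventions)}

# One 'blob' per trial: x = efficacy classification,
# y = intervention row, size proportional to the trial's sample size.
blobs = [
    (t["efficacy"], y_of[t["intervention"]], t["sample_size"])
    for t in trials
]
```

Feeding the `blobs` list to any scatter-plot routine (with the third element mapped to marker area) reproduces the layout described above.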
What you can’t see is that each trial is also being automatically assessed for bias via RobotReviewer.
For next week we need to keep improving the data quality, create a single estimate of effectiveness for each intervention (our version of a meta-analysis), and make it all look nice!
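The post doesn’t say how that single estimate will be computed. One simple stand-in for a full inverse-variance meta-analysis is a sample-size-weighted mean of the per-trial efficacy scores – the sketch below assumes that simplification, and the numbers in it are invented:

```python
def pooled_estimate(trials):
    """Sample-size-weighted mean effect across trials.

    A crude stand-in for a proper meta-analysis, which would
    weight by inverse variance and model heterogeneity.
    """
    total = sum(t["sample_size"] for t in trials)
    return sum(t["efficacy"] * t["sample_size"] for t in trials) / total

# Hypothetical trials for one intervention: the large trial
# dominates the pooled estimate.
trials = [
    {"efficacy": 0.8, "sample_size": 1200},
    {"efficacy": 0.3, "sample_size": 150},
]
estimate = pooled_estimate(trials)
```

Weighting by sample size means the pooled estimate sits much closer to the large trial’s result than to the small one’s, which matches the intuition the ‘blob’ sizes convey in the plot.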