After over a year of hard work we have just released the latest version of Trip. We had to rewrite the ENTIRE website, replacing code that, in some places, was over 15 years old!
We’ve tested this extensively so we’re hoping any issues will be minor, but if you spot an issue then please let me know: jon.brassey@tripdatabase.com.
Sometime tomorrow morning (1st July) we will switch over from the old site to the new site.
This has been a massive rewriting of code – it’s taken over 12 months – and it has been well tested over the last three months. However, it’d be naïve to think there’ll be no issues. But our development team are primed to act quickly so, if there are any disruptions, they shouldn’t take too long to fix.
If a user conducts a search for, say, prostate cancer screening, we can say those terms are linked. Now, if someone else searches for breast cancer screening, you can see there are linkages between those three terms too. But you can also link back to the previous search via the terms cancer and screening. So, why not map them? The images below are based on a really small sample of our clickstream data, but they map the connections between search terms.
The above is based on a small sample of search terms around UTI. The one below uses a related, but different, technique:
You can see that there are different circle sizes (representing the popularity of a search) and some lines are thicker than others, showing those terms were searched together more often. Below is an easier-to-read sample of the above:
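For the curious, the linking idea can be sketched in a few lines of Python. This is a toy example with made-up queries, not our real clickstream data: term popularity gives the circle sizes, and pair co-occurrence counts give the line thickness.

```python
from collections import Counter
from itertools import combinations

# Toy sample of search queries (invented for illustration).
queries = [
    "prostate cancer screening",
    "breast cancer screening",
    "cancer screening guidelines",
    "uti antibiotics",
]

term_counts = Counter()   # circle sizes: how popular each term is
edge_counts = Counter()   # line thickness: how often two terms co-occur

for q in queries:
    terms = sorted(set(q.split()))
    term_counts.update(terms)
    for a, b in combinations(terms, 2):
        edge_counts[(a, b)] += 1

# "cancer" and "screening" appear together in three of the four
# searches, so that line would be drawn thickest on the map.
print(edge_counts[("cancer", "screening")])  # 3
```

Feed the counts into any graph-drawing tool and you get maps much like the ones above.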
So what? Why am I sharing?
I can’t help feeling this is useful for highlighting search terms of interest for reviews. For instance, you may have five terms in your search; by harnessing the power of linked terms, a system might suggest a further ten that could be useful! A form of query expansion, perhaps?!
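A minimal sketch of that query-expansion idea, assuming we have mined co-occurrence counts from the clickstream (the terms and numbers below are invented, not real Trip data):

```python
from collections import Counter

# Hypothetical co-occurrence counts mined from clickstream data
# (illustrative numbers only).
co_occurrence = {
    "screening": Counter({"cancer": 120, "prostate": 60, "breast": 55, "psa": 40}),
    "cancer": Counter({"screening": 120, "prostate": 80, "breast": 75}),
}

def expand_query(terms, k=3):
    """Suggest up to k extra terms that were most often searched
    alongside the user's own terms."""
    scores = Counter()
    for t in terms:
        for linked, n in co_occurrence.get(t, Counter()).items():
            if linked not in terms:
                scores[linked] += n
    return [term for term, _ in scores.most_common(k)]

print(expand_query(["cancer", "screening"]))  # ['prostate', 'breast', 'psa']
```

A reviewer searching for two terms would be offered the terms other users most often paired with them.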
The Trip algorithm is great. To explain, the algorithm is the ‘behind the scenes’ way we order the results you see on the screen, and it works well.
However, that’s not to say it can’t be improved, and we are currently working with a number of academics to try to use our data to improve search methods generally (not just Trip). We have an accompanying paper, TripClick: The Log Files of a Large Health Web Search Engine. The idea is that, by using our clickstream data (what people search for, what they click on etc.), machine learning techniques can be used to improve search results.
What’s particularly exciting is that we have created a competition, pitting different academic centres against each other, to see who returns the best results. Yesterday we had our first academic centre to report results:
I’m happy for a number of reasons, mainly:
The improvement over baseline was large
It was from a team headed by Prof Allan Hanbury at TU Wien, the wonderful lead of Trip’s Horizon 2020 work a few years back.
The competition is likely to run for months and after that it’s a question of taking stock and seeing how we can utilise the techniques within Trip.
If we can improve on our search results, even marginally, it’ll be a great result.
Trip prides itself on its comprehensive coverage of clinical guidelines; we’re not aware of any resource that comes close in this regard. Given its importance, we have spent considerable resources trying to improve it further. One aspect has been our guideline grading project, which we hope to instigate sometime in the next 3-4 months.
More immediately, we are delighted to announce the first output from our project to mine PubMed for clinical guidelines. Up until this weekend all our guidelines were found via guideline repositories, professional association websites etc. This is great, but we’re aware that, for a number of reasons, this misses many guidelines. So, we have started a project to locate guidelines in PubMed and the first results are now available. We have focussed on national guidelines and have extracted guidelines from Japan, Poland, Brazil, France, Germany, Spain, Saudi Arabia, Italy and many others.
Due to this work we have added just under 1,000 new guidelines with more to follow over the next few months.
We’ve had a fair bit of feedback on the new site so I thought I’d share it and respond to a few comments. And, by way of an update, we’re working on a ‘snagging’ list of things that need fixing. With a fair wind, we’ll move the new site over by the end of May.
The vast majority of users found it easy or very easy to use, and a similar number felt confident in using the new system, although a few people fed back that they’ll need a bit of time to understand all the changes (BTW this key might help).
When we asked specifically what they liked, this was the sort of feedback:
Colours
Shift to the left-hand side of the ‘filter by evidence type’ – nice to see as it was a major worry for me!
Some specific comments being:
“Pretty much all of it. Particularly the quality rating for the primary studies.”
“Beautiful and simple”
When asked what they disliked:
‘Nothing’ was, thankfully, a common response
Access to certain features (eg Latest and Greatest, LibKey) – these we’re dealing with in our ‘snagging list’
Evidence maps – this is not a feature we’re going to retain for now. It served a purpose, but it’s out of date and things have moved on; I’d only want to reinstate it if we had the resources to do it properly!
Evidence pyramids – I feel your pain, it was something we gave up as part of the wider redesign ‘look and feel’
It’s not too late to give feedback yourself, please use this form.
And, finally, an important issue is that the new site has not crashed once 🙂
Our new site is in beta version (try it here https://labs2020.tripdatabase.com/Home) and we’re pretty close to working through the issues raised in beta-testing.
However, we want as much feedback as possible so, after trying the new site, please complete this form to let us know what you think.
We’ve been working on this for a while (eg Grading guidelines, an update). Well, the grading has continued and we’ve now graded over 250 guideline producers, covering the vast majority of the guidelines we cover.
The grading is a score from 0-8, with 8 being the highest score (click here for an explanation of the scoring system) and the distribution is as follows:
The Y-axis is the score, while the X-axis is the number of producers achieving that particular score. So, around 50 producers score the maximum of 8!
Nearly 50% score 7 or 8, which is encouraging. Lots also score 0 and this likely reflects the inaccessibility of methods for us to score. Their guideline might be great (seems unlikely) but if a user can’t see the methods how can they assess the ‘worth’ of the guideline?
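As a tiny illustration of how such a distribution is tallied (the grades below are invented, not our real producer scores):

```python
from collections import Counter

# Invented per-producer grades on the 0-8 scale (illustrative only).
grades = [8, 8, 8, 7, 7, 6, 5, 3, 0, 0]

distribution = Counter(grades)
share_top = (distribution[7] + distribution[8]) / len(grades)

print(distribution[8])  # 3 producers at the maximum score
print(share_top)        # 0.5 -> half of producers score 7 or 8
```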
After an enormous amount of work and testing we have released the new site in beta mode! Given the huge rewrite of code and irrespective of all the testing, we felt it was a big risk to simply replace the current/old site with the new one. So, for a short period of time they’ll run in parallel with a link from the top of the current/old site:
This takes you to the homepage (this one being for Pro subscribers, free users get a green colour scheme):
And here is the Pro results page:
Where to start with the changes?
Possibly the biggest change has been the shift of the filter by evidence type from the right to the left of the results page. I was remarkably attached to the old way, but our designer convinced me of the shift.
Other than that, the design has been cleaned up and there are some new icons:
Lots of other things to explore and click on (for instance the coloured ‘lozenges’ highlighting the evidence type – go on, click on one of them).
So, please go and try it now (via https://labs2020.tripdatabase.com/Home) and let us know what you think via our feedback form. We’ve already received some comments and they’re being worked on; thankfully, most of the issues appear to be minor. Alternatively, contact me directly: jon.brassey@tripdatabase.com, it’s always great to hear from you!