Trip Database Blog

Liberating the literature

List of outcomes

Our automated review system seeks to locate the PIC elements of PICO.  We didn’t have the resources to do the ‘O’ (outcomes), which seems a shame.  But we want to explore incorporating it, and an important starting point is having a list of outcomes.  Such a list doesn’t – as far as we can tell – exist.  I put an email request to the EBHC mailing list and got a number of responses that have started me down the route of compiling a list that might prove useful to me and others.

Another option to explore is scraping the outcomes listed in ct.gov (ClinicalTrials.gov) – a fairly straightforward task, but the outcomes aren’t ‘clean’.  By that I mean that, as these example outcomes from the site show, they’re quite long, which can be problematic! (A rough sketch of such a scrape follows the examples.)

  • Measure Metformin Induced effects in phosphorylation of S6K, 4E-BP-1 and AMPK via immunohistochemical analysis
  • Antibody response rate to measles at 6 weeks postvaccination
  • Change in Maximum Forced Expiratory Volume at One Second (FEV1)
  • Biomarkers will be identified to help predict future prostate cancer risks and patients likely to benefit from preventive strategies
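For anyone wanting to try this, below is a minimal sketch of what such a scrape might look like. It assumes the ClinicalTrials.gov v2 JSON API and the field names shown – treat both as assumptions rather than a description of anything we actually run.

    # Sketch: pull primary outcome measures from ClinicalTrials.gov.
    # Assumes the v2 JSON API and its field layout (an assumption on our part).
    import requests

    def fetch_outcomes(condition, page_size=50):
        resp = requests.get(
            "https://clinicaltrials.gov/api/v2/studies",
            params={"query.cond": condition, "pageSize": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        outcomes = []
        for study in resp.json().get("studies", []):
            module = study.get("protocolSection", {}).get("outcomesModule", {})
            for outcome in module.get("primaryOutcomes", []):
                outcomes.append(outcome.get("measure", ""))
        return outcomes

    if __name__ == "__main__":
        for measure in fetch_outcomes("asthma")[:10]:
            print(measure)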

A further option is the COMET Initiative (Core Outcome Measures in Effectiveness Trials) and I’ve reached out to them for help.

So, very much a work in progress! As I write this we have just one list, but we are hopeful we’ve located two others – one in CVD and one in schizophrenia.

List to date:

From the SONG Initiative (Standardised Outcomes in Nephrology)

fatigue
cardiovascular disease
vascular access
mortality
ability to travel
ability to work
anemia
blood pressure
anaemia
depression
dialysis adequacy
dialysis-free time
drop in blood pressure
hospitalization
hospitalisation
impact on friends
impact on family
infection
immunity
mobility
pain
potassium
target weight
washed out after dialysis
anxiety
stress
bone health
calcium
cognition
cramps
financial impact
food enjoyment
taste
itching
nausea
vomiting
parathyroid hormone
phosphate
restless legs
sexual function
sleep
insomnia

Upgrade to the site

We’ve just rolled out a broad range of upgrades to the site:

Latest and greatest – for a given topic you can now easily view the very latest documents as well as those most frequently visited (within the last 12 months).

Tour – to make it easier to orientate yourself to the site, available via the top navigation bar:

How to use Trip is now clearer and easier to follow.

Top of the results has been tidied up, with the images and videos being moved to the ‘refine’ area of the results page (on the right-hand side).

Sort by popularity is now possible via the following drop-down:

Differences between free and Pro are more clearly laid out – click here to view.

Search engine optimisation (SEO) to make it easier for users to find our content via Google and other search engines.

Trip Answers, our repository of clinical Q&As, has been refreshed to make it more user-friendly.

Sources searched by Trip

We get asked this fairly regularly and, due to the breadth of coverage, it’s really difficult to answer succinctly.  But this image is our best attempt at capturing it:

Note our use of the evidence pyramid!

With regard to PubMed (and PubMed Central), here is a bit more information:

  • All content from the top 500 journals only (based – broadly – on impact factor)
  • All the RCTs, whichever of PubMed’s 5,000+ journals they appear in
  • All systematic reviews, whichever of PubMed’s 5,000+ journals they appear in (Trip Pro only)
  • All peer-reviewed full-text articles from PubMed Central (Trip Pro only)

I suspect this page will be a ‘living’ document, frequently edited to improve it. So, if you have any questions please let me know – jon.brassey@tripdatabase.com

How searches for ‘alcohol’ reveal clinical interest/uncertainty

As part of the KConnect project we were able to create a wonderful set of analytic tools to analyse how people use the site.  My current favourite is one we’re calling ‘topics’.  For a given search term it analyses all the article titles that users have clicked on and groups them based on meaning.

This is important: a search for ‘alcohol’ reveals that users are interested in the term, but it is only when they click on a title that they ‘reveal’ their likely intention.  That is because search terms are typically 1-3 words while document titles contain many more.  Below is an analysis of searches for ‘alcohol’:

This shows that the most popular subject relates to alcohol withdrawal (as that is the major topic in the most popularly clicked titles).  But we can look in even more detail: within alcohol withdrawal we can see that baclofen, dexmedetomidine and benzodiazepines are the most popular sub-topics.
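We haven’t published the internals, but as a toy illustration of the idea – grouping clicked titles by the terms they share – something like the sketch below captures the flavour (the titles and the crude term-counting are invented for illustration; the real system groups by meaning):

    # Toy illustration only: surface the most telling terms across clicked titles.
    # The real 'topics' tool groups titles based on meaning; this is a crude proxy.
    from collections import Counter

    STOPWORDS = {"the", "of", "a", "in", "for", "and", "on", "with", "to"}

    def top_topics(clicked_titles, n=5):
        counts = Counter()
        for title in clicked_titles:
            terms = {t for t in title.lower().split() if t not in STOPWORDS}
            counts.update(terms)
        return counts.most_common(n)

    titles = [  # hypothetical clicked titles for the search 'alcohol'
        "Baclofen for alcohol withdrawal",
        "Benzodiazepines in alcohol withdrawal syndrome",
        "Dexmedetomidine for severe alcohol withdrawal",
    ]
    print(top_topics(titles))  # 'withdrawal' dominates, as in the real data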

I hope this is clear!

My conclusion is that this gives a clear insight into the Trip user (our users are almost exclusively health professionals, mainly using Trip to obtain trusted answers to their clinical questions).  But, more than that, it surely reflects the uncertainties/questions of the health service, making it an important component of research procurement – ensuring the topics funded meet the needs of the eventual users.

Oh yes, if you want me to generate some examples for topics of your interest then let me know.  I’m sure I can find time to generate a few more examples!

Automated reviews – timeline to release

We’ve revealed our automated review system, but it’s not yet ready for everyone to use.  We do, however, now have a timeline:

  • End of November – all the variables used are generated automatically, and significant work is under way to improve each of them.
  • December – the images we’ve shown do not do it justice so we’ve employed the services of a data visualisation expert.  She’s in demand so will not start till December and will deliver the results by the end of the month.
  • January – all systems will be plugged in and then released to a small number of people to beta-test
  • End of January – system released to the world.

So, there you have it: in just over three months’ time we should have our currently good system made truly amazing.

Boolean, truncation and other such things

Below are some search tips to help users get the most out of Trip! If we’ve missed anything then please let us know by emailing jon.brassey@tripdatabase.com. Thanks to Igor Brbre for assisting with this! We’ll highlight:

  • Boolean
  • Truncation
  • Phrase searching
  • Proximity searching
  • Date range
  • Title versus body of text
  • Other key ways to maximise results in Trip

Boolean

We support simple and complex Boolean searches using the operators AND, OR and NOT:
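For example, a query along these lines (an illustrative example of ours; exact grouping syntax may vary):

    (heart attack OR myocardial infarction) AND aspirin NOT warfarin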

Trip’s advanced search (Pro only feature) tends to make this easier!

Truncation

Use the asterisk:
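For example, child* will match child, children, childhood and so on.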

Phrase searching

Use quotation marks:
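For example, “prostate cancer” will match the exact phrase rather than the two words appearing separately.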

Proximity searching

This finds terms that are close to each other; how close is up to you (for example, ‘diabetes’ within three words of ‘diet’):

Date range

Title versus body of text

By default Trip searches all of the document text that we have indexed (this might be an abstract or the full text, depending on what the publisher has made freely available), but you can restrict your search to the title only:
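For example (illustrative queries; the title: prefix is assumed here):

    title:"prostate cancer"
    title:"prostate cancer" screening psa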

In the latter example it retrieves articles with ‘prostate cancer’ in the title and ‘screening’ and ‘psa’ anywhere in the body of the text.

Advanced search (Pro only feature)

Advanced search makes all the boolean much easier:

Combination searches (Pro only feature)

As ‘combination searches’ is too long for the navigation, we’ve called it ‘recent’ (open to suggestions for a better term!).  It allows you to combine complex searches:

Other stuff!

See image below for a number of key features:

NOTE: below each result there is a line of tools:

  • Tweet this – if you’re on Twitter you can tweet the result
  • Star this – if you’re signed in this will save it to view later (via the ‘Starred’ link at the top of the page)
  • Report broken link – if you find a dead link, press this and let us know
  • Related – finds related articles
  • ‘clicks’ – Pro-only feature; shows how many times an article has been clicked

Other features

  • Smart Search – helps prevent you missing vital references, really easy and powerful feature
  • Assessment of bias – for all controlled trials we use the latest automation techniques to assess trials for bias (critical appraisal)
  • Answer engine – using various algorithms we find the best available answer to the question (which we’ve also inferred from your search terms)

Automated reviews – why?

In my previous post on our automated review system I concentrated on describing the product. This is all fine, but one quotation I’m trying to remember in my day-to-day life is:

So, why do what we’re doing?  The reasons are multiple and include:

  • Years ago I had a conversation with someone and I ended up sending them a link to a set of Trip search results.  They said that was fine – if they had a few hours to read them, which they didn’t.  I started to wonder if there was a way of automatically scanning the literature to allow people to really easily get a feel for a topic area.  So, you might want to see which interventions are useful for a given condition (perhaps the first-line treatment has failed).  Our system will automatically generate a list of interventions with an estimate of likely effectiveness.
  • The Trip Answer Engine is hugely popular and we can use the technology to boost the content.  So, it solves a problem for Trip (how to boost coverage) and it solves the problem of quick access to information for a specific clinical question.  A user might search for acne and lasers, and we can pull through an answer along the lines of “Lasers have been studied in 12 RCTs and the results have been broadly favourable”.  We can link to all the articles (if the user has the time), but we can also rank the intervention, saying something like “Lasers are ranked 3 out of 8 for interventions in acne“.  This seems incredibly useful to me.
  • Updating guidelines and reviews is problematic.  With our system you could ‘watch’ an intervention and get alerts when new evidence is generated – or, even more usefully, when new research is published that contradicts the previous findings.
  • Rapid reviews are increasingly important and the automated system could form a core part of any semi-manual rapid review tool.
  • Intellectual challenge!

So, one product – multiple problems solved!

Automated reviews – very positive progress

NOTE: since publishing this we have a linked post, Automated reviews – why?, which complements this one!

This work has been supported via the KConnect project (a Horizon 2020 EU-funded project) and on Wednesday we had to present the work of the whole consortium at the EU in Luxembourg.  The response was overwhelmingly positive and so I wish to share a bit more of the work.

A slight bit of context: I have been involved in a number of automation projects, and these were typically seen as methods to supplement the human methodology – for instance, speeding up risk of bias assessment via tools such as RobotReviewer.  While these are great initiatives, I wanted to explore what could be done fully automatically.

At Luxembourg we presented our ‘product’. The product is a system that automatically synthesises randomised controlled trials (and potentially systematic reviews).  It is not finished but we have a very good first effort.  I’m not sure whether to classify it as a ‘proof of concept’ or ‘alpha’ version – not sure it matters!

All the images below are based on asthma and I have deliberately blurred out the intervention names.  The reason is that we are still improving the results (more below) and I would hate for people to be put off by making judgements based on a system that has yet to be optimised.

The first image is the current default view, which presents the interventions (y-axis) alphabetically with likely effectiveness on the x-axis. Note, as there are an awful lot of interventions for asthma we can’t show them all – so we’re simply showing a single screen-grab; the actual graph is considerably taller!

To orientate you (a toy re-creation of the chart’s encoding follows this list):

  • Each ‘blob’ represents a single intervention.  The size of the blob indicates the total population used in the trials.  So, a big blob indicates more participants in the various trials.
  • The horizontal positioning is based on our estimate of effectiveness (not effect size) and the further right it is the more effective the system estimates the intervention to be.  This is further indicated by the colours – green being better and red worse (traffic light colouring)!
  • Above the graph there is the ability to sort the interventions by number of trials, sample size, score etc.
  • The sample size refinement allows users to exclude trials that are below the size entered – as we know small trials tend to be less reliable.
  • The risk of bias refinement allows you to automatically remove trials that are not considered low risk of bias – another reliability measure.
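For the curious, here is that toy re-creation – invented data and matplotlib, purely to illustrate the visual encoding rather than to reproduce our actual charts:

    # Toy re-creation of the interventions chart: invented data, not Trip output.
    # Encoding: y = intervention, x = estimated effectiveness,
    # blob size = total trial participants, colour = traffic-light score.
    import matplotlib.pyplot as plt

    interventions = ["Intervention A", "Intervention B", "Intervention C"]
    effectiveness = [0.2, 0.55, 0.85]   # 0 = worse, 1 = better (made up)
    participants = [300, 2400, 900]     # total across trials (made up)

    fig, ax = plt.subplots()
    ax.scatter(
        effectiveness,
        range(len(interventions)),
        s=[p / 2 for p in participants],  # bigger blob = more participants
        c=effectiveness,
        cmap="RdYlGn",                    # red = worse, green = better
        vmin=0, vmax=1,
    )
    ax.set_yticks(range(len(interventions)))
    ax.set_yticklabels(interventions)
    ax.set_xlabel("Estimated effectiveness")
    plt.tight_layout()
    plt.show()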

This second graph shows the results arranged by effectiveness:

And, to reiterate, this is fully automatic and always up to date.  As new RCTs are published (PubMed only at present) they will be automatically added.  To me that’s incredibly cool.

While I would love to share this more widely there is still some work to do before I’m happy to open it up.  In internal testing we have identified a number of areas that could, realistically, be improved.  Nothing major; we’re just making sure we’ve done as much as we can – under the fully automatic banner – to ensure the biggest impact.

As far as I know, this is the first fully automated evidence synthesis/review tool. It is such a disruptive bit of technology that people will need to be convinced of its worth, and that will come with use and understanding.  David Moher wrote an editorial about the various synthesis methods being part of a family.  Is our technique the screaming baby of evidence synthesis, the eccentric uncle, the angry adolescent? You tell me….!

Automated reviews, something to show…

It’s all hands on deck here as we rush to get a robust prototype model of our automated review system ready for viewing by the EU (next week in Luxembourg).  Much of this work has been funded as part of our participation in the Horizon 2020-funded KConnect project (led by TUW, Vienna) and the EU like to see what they’re getting for their money.

Now we’ve had a chance to play with the data generated by our systems, two things are apparent:

  • This should, broadly, be a viable approach
  • Full automation is still a way off (note, we’re not attempting to reproduce manual systematic reviews).  We think the automation stage will get you ‘so far’, but it’ll still require a second level of ‘polish’ to increase the robustness.  We’re hoping this ‘polish’ stage will take no more than 5-10 minutes.

I’m going to share one image below:

In the image above we are showing a number of things, all generated automatically:

  • Each trial or systematic review is shown as a single ‘blob’.
  • Classification (x-axis) of trials based on perceived efficacy (does the drug work or not).
  • Sample size – the bigger the ‘blob’, the bigger the trial.
  • Intervention name (y-axis).

What you’re not seeing is the fact that each trial is being automatically assessed for bias via RobotReviewer.

For next week we need to keep improving the data quality and – for each intervention – create a single estimate of effectiveness (our version of meta-analysis) and make it look nice!
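As a flavour of what a ‘single estimate of effectiveness’ could mean, the sketch below pools per-trial effects with standard inverse-variance weights – a textbook fixed-effect approach, offered purely as an illustration rather than the method we’ve implemented:

    # Illustration only: pool per-trial effect estimates into a single number
    # using fixed-effect inverse-variance weighting (textbook approach; not
    # necessarily what our system does).
    def pooled_effect(effects, variances):
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_variance = 1.0 / sum(weights)
        return pooled, pooled_variance

    # Three hypothetical trials of one intervention: (effect, variance)
    effects, variances = [0.30, 0.45, 0.20], [0.04, 0.09, 0.02]
    print(pooled_effect(effects, variances))  # larger trials dominate the pool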
