Trip Database Blog

Liberating the literature

November 2022

Connected/related articles

This post is an attempt to ‘think aloud’ about connected/related articles… By that I mean: if you find an article you like, how can you quickly find others that are similar? We know that searching is imprecise: a user might find articles that match their intention at, say, results #2, #7 and #12, then lose interest and miss ones at #54 or #97.

At Trip we have had something called SmartSearch for years. It mines the Trip weblogs to highlight articles that have been co-clicked in the same search session: if a user clicks on articles #2, #7 and #12, we infer a connection between them. We have successfully mapped these connections, and doing so reveals a structure in the data. The example below shows a small sample of connections taken from searches for urinary tract infections:

Each blue square represents a document and the lines/edges are connections made by co-clicking the documents within the same search session. You can see from the annotation that these form clusters around topics. However, co-clicking is not perfect!
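To make the idea concrete, here is a minimal sketch of how co-click edges could be derived from session logs. The log format and document IDs are invented for illustration; the real SmartSearch pipeline will differ.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical click log: (session_id, clicked_document_id) pairs
clicks = [
    ("s1", "doc2"), ("s1", "doc7"), ("s1", "doc12"),
    ("s2", "doc7"), ("s2", "doc12"),
]

# Group clicked documents by search session
sessions = defaultdict(set)
for session_id, doc_id in clicks:
    sessions[session_id].add(doc_id)

# Every pair of documents clicked in the same session gets an edge;
# co-clicks in further sessions increase the edge weight
edges = defaultdict(int)
for docs in sessions.values():
    for a, b in combinations(sorted(docs), 2):
        edges[(a, b)] += 1

for (a, b), weight in sorted(edges.items(), key=lambda e: -e[1]):
    print(a, "--", b, "weight:", weight)
```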

Fortunately, there are other types of connections that I think we can use – semantic similarity and citations.

Semantic similarity: I’m thinking principally of PubMed’s related articles, which uses statistical methods to find articles with similar textual content, e.g.

At the top is the document of interest and below are the articles deemed semantically similar. So, these articles are all related and one could make connections between them.
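PubMed’s actual related-articles algorithm is probabilistic and more sophisticated, but a crude stand-in for ‘statistical textual similarity’ is TF-IDF plus cosine similarity. A sketch, with hypothetical abstracts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical abstracts; index 0 is the document of interest
abstracts = [
    "Antibiotic treatment of urinary tract infection in adults",
    "Trimethoprim versus nitrofurantoin for uncomplicated urinary tract infection",
    "Statin therapy for primary prevention of cardiovascular disease",
]

# Vectorise the texts and score every document against document 0
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
scores = cosine_similarity(tfidf[0], tfidf).flatten()

# Rank the other documents by similarity to the document of interest
for i in sorted(range(1, len(abstracts)), key=lambda i: -scores[i]):
    print(f"{scores[i]:.2f}  {abstracts[i]}")
```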

Citations: Articles typically list a bunch of references – so the article is citing those (backward citations). And any article can itself be cited (forward citations). Again, these can be treated as connections and mapped, e.g. (source):
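A citation graph is easy to sketch from reference lists alone: store each article’s references (backward citations) and invert the mapping to get forward citations. The paper IDs below are made up:

```python
# Hypothetical reference lists: paper -> papers it cites (backward citations)
references = {
    "A": ["B", "C"],
    "B": ["C"],
    "D": ["A", "C"],
}

# Invert the mapping to get forward citations: paper -> papers that cite it
cited_by = {}
for paper, refs in references.items():
    for ref in refs:
        cited_by.setdefault(ref, []).append(paper)

print(cited_by.get("C"))  # ['A', 'B', 'D'] -- papers citing C (forward)
print(references["A"])    # ['B', 'C']      -- papers A cites (backward)
```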

So, three types of connection: co-click, semantic similarity and citations. In isolation each has its issues, but combined they could be incredibly powerful. Well, that’s the theory…
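One simple way to combine the three signals is a weighted sum of per-source scores. The weights and scores below are entirely invented – choosing and tuning them would be the real work:

```python
# Invented per-source scores (each normalised to 0..1) for candidate
# articles relative to a chosen article; the weights are assumptions
WEIGHTS = {"coclick": 0.4, "semantic": 0.35, "citation": 0.25}

candidates = {
    "doc7":  {"coclick": 0.9, "semantic": 0.6, "citation": 0.3},
    "doc12": {"coclick": 0.7, "semantic": 0.8, "citation": 0.0},
    "doc54": {"coclick": 0.0, "semantic": 0.5, "citation": 0.9},
}

def combined_score(scores):
    # Weighted sum across the three connection types
    return sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)

for doc, scores in sorted(candidates.items(),
                          key=lambda kv: combined_score(kv[1]),
                          reverse=True):
    print(doc, round(combined_score(scores), 3))
```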

While I believe SmartSearch is brilliant, I don’t think we’ve implemented it particularly well. The main issue I have is that a user needs to ‘call’ the results. On one hand that’s not a big deal but it looks like this:

I’ve highlighted it in red so you don’t miss it (an important issue in itself), but it’s also not really telling the user why they should click. In other words, it has a weak ‘call to action’. In part this is because it’s not ‘real time’: the related articles are only calculated after the user clicks the button. I’m thinking that if we told users up front that there were, say, 25 closely connected articles and 7 very closely connected articles, possibly teasing what these were, it would be much more compelling.

Another consideration: the notion of connected articles can work on two levels – the individual article and a collection of articles.

Individual articles: Each article within Trip could feature other connected articles, be it via co-clicks, semantic similarity or citations. We could create a badge (thinking of the Altmetric Donut) that indicates to users how many connections there are.

Collection of articles: If a user clicks on more than one article we, in effect, add up the information from the individual articles. This allows some clever weightings to be brought in to highlight particularly important/closely connected articles.
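A sketch of that pooling, with hypothetical connection scores: sum each candidate’s scores across the articles the user has clicked, so that candidates connected to several of them rise to the top.

```python
from collections import Counter

# Hypothetical per-article connection scores: doc -> {connected_doc: score}
connections = {
    "doc2": {"doc7": 0.8, "doc12": 0.5},
    "doc7": {"doc2": 0.8, "doc12": 0.9, "doc54": 0.4},
}

# The user has clicked doc2 and doc7: pool the scores so that articles
# connected to *both* float to the top
selected = ["doc2", "doc7"]
pooled = Counter()
for doc in selected:
    for other, score in connections.get(doc, {}).items():
        if other not in selected:
            pooled[other] += score

for doc, score in pooled.most_common():
    print(doc, round(score, 2))  # doc12 1.4, doc54 0.4
```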

But what information is important/useful to the user? I’m seeing two types of display:

List: A list of articles, ranked by some weighting to reflect ‘closeness’, so those at the top are deemed closest to the article(s) chosen. We could enhance that by indicating which are systematic reviews, guidelines, etc.

Chronological list: As above but arranged by date. The chosen article(s) would be shown, and a user could then easily see more recent connected papers as well as more historical ones. The former is particularly useful for updating reviews!

Right, those are my thoughts, for now. They seem doable and coherent but am I missing something? Could this approach be made more useful? If you have any thoughts please let me know either in the comments or via email: jon.brassey@tripdatabase.com

UPDATE

One excellent bit of feedback is to add connections between clinical trial registry records and the subsequent published studies. This should be feasible. Similarly, link PROSPERO records (the register of ongoing systematic reviews).

UPDATE TWO

Another excellent idea – add retraction data.

Assessing systematic review quality (automatically)

We’re keen to help users use the best-quality evidence to inform their decisions. While we use the evidence pyramid to help express the hierarchy of evidence, there is a danger of it being too simplistic. For instance, not all systematic reviews are high-quality and some are, frankly, terrible.

We have been working on quality scores for RCTs and guidelines for some time, and both should be released by early 2023. Of equal importance, however, is scoring systematic reviews. Given that Trip covers hundreds of thousands of systematic reviews, any tool we introduce needs to be automated. Well, we’ve taken the first tentative steps…

We have devised a scoring system, capable of automation, and trialled it on a sample of 32 systematic reviews. We knew the assessments of the 32 before starting: they had been scored by a third party using ROBIS (as low, high or unclear risk of bias), and the data was freely available on the web. We then compared our scores against the ROBIS judgements, and this is what the graph looks like:

The y-axis is our score (range -3 to 8) and the x-axis is simply the index of the systematic review (15 were graded as low risk of bias, 9 as high and 8 as unclear).

For a first attempt the results are impressive and support the validity of the approach. The average score per risk-of-bias category is as follows:

  • Low – 5.3
  • Unclear – 3.75
  • High – 0.78
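For anyone curious, producing such per-category averages is straightforward once each review carries a score. A toy sketch with invented (ROBIS category, Trip score) pairs standing in for the real 32-review sample:

```python
from statistics import mean

# Invented pairs; real scores range from -3 to 8
scored = [
    ("low", 6), ("low", 7), ("low", 3),
    ("unclear", 4), ("unclear", 3),
    ("high", 1), ("high", 0),
]

# Bucket scores by ROBIS category, then average each bucket
by_category = {}
for category, score in scored:
    by_category.setdefault(category, []).append(score)

for category, scores in by_category.items():
    print(f"{category}: mean score {mean(scores):.2f}")
```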

We clearly need to spend more time trying to understand why, for instance, the third ‘low risk of bias’ systematic review scored so low in our system. But there’s time for that – time to adjust weightings and possibly add or remove scoring elements.

Bottom line: we’re well on the way to rolling out an automated systematic review scoring system that can help Trip users make better use of the evidence we cover.

Systematic reviews in Trip – a quick update

After our recent post on the subject I thought I’d explore the new systematic reviews added to Trip. Over the last week we uploaded 829 new systematic reviews from PubMed. To give a flavour of the coverage, here is a sample of the most recent:

  • The role of noninvasive scoring systems for predicting cardiovascular disease risk in patients with nonalcoholic fatty liver disease: a systematic review and meta-analysis.
  • A systematic review on microplastic pollution in water, sediments, and organisms from 50 coastal lagoons across the globe.
  • The effects of exposure to environmentally relevant PFAS concentrations for aquatic organisms at different consumer trophic levels: Systematic review and meta-analyses.
  • Provisional Versus Dual Stenting of Left Main Coronary Artery Bifurcation Lesions (from a Comprehensive Meta-Analysis).
  • The Impact of Cognitive Impairment on Clinical Outcomes After Transcatheter Aortic Valve Implantation (from a Systematic Review and Meta-Analysis).
  • A meta-analysis of the genetic contribution estimates to major indicators for ketosis in dairy cows.
  • Heterojunction photocatalysts for the removal of nitrophenol: A systematic review.
  • The effect of rhythmic movement on physical and cognitive functions among cognitively healthy older adults: A systematic review and meta-analysis.
  • Effectiveness of multicomponent training on physical performance in older adults: A systematic review and meta-analysis.
  • Molecular mechanism of the anti-inflammatory effects of plant essential oils: A systematic review.

An interesting mix, that’s for sure, and we should possibly explore removing non-human studies!

The above is a sample from PubMed; we also get systematic reviews from other sources:

  • Grey literature, which we explore manually on a monthly basis – this includes a host of health technology assessments
  • Third-party sources

The latter is not yet automated, but will be shortly. So, it wouldn’t surprise me if we end up adding 1,000+ systematic reviews to Trip every week!

There are an awful lot of systematic reviews being carried out!

Systematic reviews in Trip

The move to a new, stable system has allowed us to start really improving the quality of Trip. Trip is a hugely valuable tool, but it isn’t perfect, and the old system was creaking.

One immediate area for attention has been the way we grab systematic reviews. We have three main ways of adding systematic reviews to Trip:

  • A number of publishers are considered producers of systematic reviews and their content is not routinely added to PubMed – so we manually grab those records.
  • PubMed – we use a filter to identify systematic reviews
  • Others – we try to identify systematic reviews from a small number of third-party sources

The middle one, the PubMed filter, is a complex area to navigate given the tension between sensitivity and specificity. Too sensitive (trying to identify ALL systematic reviews) and you bring in a load of false positives; too specific (only identifying TRUE systematic reviews) and you miss a load of systematic reviews – false negatives.
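For the record, the two quantities are easy to compute against a hand-labelled sample of records; the labels and filter hits below are invented:

```python
# Evaluating a search filter against a hand-labelled sample of records
labels = [True, True, True, False, False, True, False, True]   # truly SRs?
flagged = [True, True, False, True, False, True, False, True]  # filter hit?

tp = sum(l and f for l, f in zip(labels, flagged))      # true positives
fn = sum(l and not f for l, f in zip(labels, flagged))  # missed SRs
fp = sum(f and not l for l, f in zip(labels, flagged))  # false alarms
tn = sum(not l and not f for l, f in zip(labels, flagged))

sensitivity = tp / (tp + fn)  # share of true SRs the filter catches
specificity = tn / (tn + fp)  # share of non-SRs it correctly skips

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```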

So, we’ve been carrying out a lot of tests on PubMed and have plumped for this filter:

(systematic review[sb] OR meta analy*[TI] OR metaanaly*[TI] OR "Meta-Analysis"[PT] OR "Systematic Review"[PT] OR "Systematic Reviews as Topic"[MeSH] OR "systematic review"[TI] OR "health technology assessment"[TI] OR "Technology Assessment, Biomedical"[MeSH])

At the time of writing the above search identifies 372,212 results (click here to try it yourself). We estimate the other sources contribute an additional 80-100,000 systematic reviews. So, we’re on our way to half a million!
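If you’d rather check the count programmatically, the filter can be run through NCBI’s E-utilities. A sketch using the standard esearch endpoint (the count will drift over time, and heavy use should include an API key per NCBI’s guidelines):

```python
import requests

# The filter string is from the post; the call itself is a standard
# E-utilities esearch request asking only for the hit count
FILTER = (
    '(systematic review[sb] OR meta analy*[TI] OR metaanaly*[TI] '
    'OR "Meta-Analysis"[PT] OR "Systematic Review"[PT] '
    'OR "Systematic Reviews as Topic"[MeSH] OR "systematic review"[TI] '
    'OR "health technology assessment"[TI] '
    'OR "Technology Assessment, Biomedical"[MeSH])'
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": FILTER, "retmode": "json", "retmax": 0},
)
count = resp.json()["esearchresult"]["count"]
print(f"PubMed systematic review filter matches: {count} records")
```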

The new PubMed filter will also be checked much more regularly than before, and the third route (third-party sources) is next – again, an improved filter and more regular checking.

Systematic reviews are hugely important in the EBM world, so we’re delighted with progress and hope our users will be too.
