This article, ‘Google’s Marissa Mayer: Social search is the future’, is fascinating!
We run various Q&A services and a popular theme for questions relates to lab tests. Questions include ‘When should we do one?’, ‘When should we re-test?’ etc.
Around two years ago the Journal of Clinical Pathology ran a series of papers, ‘Best practice in primary care pathology’, which proved very useful for our Q&A activities. Now these papers have been released, free access, on the web – BetterTesting. To make the resource even more useful, the papers have been ‘cut up’ into nearly 120 clinical Q&As. An example Q is:
Why is this resource so important? The disclaimer says:
“Clinical guidance is more difficult to formulate for laboratory medicine as the evidence base is frequently weaker than for medical interventions. The guidance contained in this site must therefore be used in the clinical context and cannot be taken in isolation, as either a minimum or acceptable standard.”
Basically, there is little evidence out there. This collection is a (sort of) ‘current state of our knowledge’ resource and is very welcome.
Ever one to be easily pleased, I have to say I’m delighted with the progress of the specialist search engines!
So if you have any suggestions for improvements, let me know!
Following on from the December 2007 BMJ article ‘Comparison of energy expenditure in adolescents when playing new generation and sedentary computer games: cross sectional study’, a new study has been reported in the UK’s Guardian:
Update: A letter in today’s BMJ reports:
“In the first clinic after Christmas we encountered a child who was complaining of tiredness and a sore shoulder, having been inactive over the past three years through illness. He had been given a new generation computer game for Christmas, and having checked everything else, we diagnosed Wii shoulder.
Anyone who has played interactive games on Wii sport will relate to this shoulder pain in muscles that have not been used for a long time. Within a day the discomfort wears off, as with exercise after a period of inactivity.”
Mahalo is the world’s first human-powered search engine, or so their website claims. Basically, it relies on users to create the search results for a given search term. An interesting, and as yet unproven, concept. However, I have a degree of sympathy with the approach. For a given search term you have ten (possibly up to thirty) key documents to deliver. Can an algorithm really figure out which are the ten best? This problem is compounded by users frequently entering unsophisticated search terms that return lots of results. Take TRIP: our most popular search terms are things such as asthma, hypertension, etc.
The introduction of the specialist searches on TRIP has given us some additional information. For instance, if a user goes to the cardiology site and searches for the term ‘asthma’, we know that the query is likely to be related to cardiology.
However, 95% of our users search TRIP via the main search, so we can assume very little about their intentions when they search. True, if they search for ‘prostate cancer screening’ that’s a specific search and there are not masses of results. But what about the search terms ‘asthma’ or ‘hypertension’? One obvious option is to produce search hints, allowing users to easily select search modifiers. For instance, the search term asthma might produce search modifiers such as ‘children’, ‘steroids’, ‘allergy’, ‘education’, etc. A simple click on one of the suggested terms produces a more focused search. This is fairly easy to do and I hope to introduce something along those lines this year.
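As a rough illustration only (this is not how TRIP actually works – the function name and the tiny query log below are invented), the modifiers could be mined from our own search logs by counting which words most often accompany a head term like asthma:

```python
from collections import Counter

def suggest_modifiers(term, query_log, max_suggestions=5):
    """Suggest refinement terms for `term` by counting the words that most
    often appear alongside it in previous multi-word searches."""
    co_terms = Counter()
    for query in query_log:
        words = query.lower().split()
        if term in words and len(words) > 1:
            co_terms.update(w for w in words if w != term)
    return [word for word, _ in co_terms.most_common(max_suggestions)]

# Invented log entries, purely for illustration
log = ["asthma children", "asthma steroids", "asthma education",
       "asthma children steroids", "hypertension elderly"]
print(suggest_modifiers("asthma", log))  # ['children', 'steroids', 'education']
```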
However, I’m particularly interested in the habits of previous searchers for a particular term. Can we aggregate these habits to produce a list of results based on click-throughs? So, instead of using the algorithm (or as an alternative), why not say: the most popular documents viewed by previous users for the term asthma are… and then list the top ten. I have serious doubts; for instance, I know the results will be skewed towards the existing top ten search results. Also, clicking on a paper doesn’t demonstrate that it’s useful – just that the title makes the document sound useful.
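To make the idea concrete, here is a minimal sketch of aggregating click-throughs per search term (the function name and the tiny click log are invented; in practice the data would come from our server logs):

```python
from collections import Counter

def popular_documents(click_log, term, top_n=10):
    """Return the documents most often clicked by previous users who searched
    for `term`. click_log is a list of (search_term, document_id) pairs."""
    clicks = Counter(doc for query, doc in click_log if query == term)
    return clicks.most_common(top_n)

# Invented click data, purely for illustration
log = [("asthma", "doc_17"), ("asthma", "doc_42"),
       ("asthma", "doc_17"), ("hypertension", "doc_99")]
print(popular_documents(log, "asthma"))  # [('doc_17', 2), ('doc_42', 1)]
```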
I think I need to get the designers in to see if we can re-jig the results page. In that way we might be able to squeeze in a relatively small box that highlights the most popular 5-10 results for a given search. Certainly one to ponder.
A question I’ve been wrestling with for a fair old while is how good our search results really are, and I’m now approaching a method to answer it. Arguably people vote with their keyboards, and the 600,000+ searches per month indicate we’re doing something right!
However, we’ve arrived at a different view. We know that each result gets a ‘score’ based on the relevancy of a particular document to the entered search term(s). This can range from 0 to over 10, although the best matches rarely score over 5. The score is based on a number of factors, such as whether the search term occurs in the document title, the relative density of the search term in the text, etc.
So in the near future we’ll be grabbing the average score for the top ten results for 10,000+ live searches. At the same time we’ll manually review a set of search results and classify them to see what we consider a reasonable score. So we might say that an average score of 1 is great, 0.7 is good, 0.5 is ok, and anything less is poor. We’ll then see how well the real searches on TRIP compare with our classification.
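For what it’s worth, here is a small sketch of the classification step using the provisional cut-offs above (the function and the sample scores are purely illustrative, not our actual data):

```python
def classify_search(result_scores, thresholds=(1.0, 0.7, 0.5)):
    """Bin a search by the average relevancy score of its top ten results,
    using the provisional cut-offs: >=1 great, >=0.7 good, >=0.5 ok,
    anything less poor."""
    top_ten = result_scores[:10]
    avg = sum(top_ten) / len(top_ten) if top_ten else 0.0
    great, good, ok = thresholds
    if avg >= great:
        return avg, "great"
    if avg >= good:
        return avg, "good"
    if avg >= ok:
        return avg, "ok"
    return avg, "poor"

# Invented relevancy scores for one live search; the average is roughly 0.67 -> 'ok'
print(classify_search([2.1, 1.4, 0.9, 0.6, 0.5, 0.4, 0.3, 0.2, 0.2, 0.1]))
```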
So what will the results tell us?
Well, a few things. I think it’ll give us an idea of what proportion of our searches return credible results. If we get a significant proportion of ‘poor’ results, it suggests we’ll need to address that. Who knows, we may even write it up as a paper!
At last, Search Wikia has been released, and I’ve had a very quick look at it. I like the very simple look and feel on both the home and results pages.
Results are interesting.
- A search for TRIP Database finds TRIP as the first result.
- Prostate cancer returns reasonable results.
- prostate cancer screening cochrane systematic review (I’m looking for the Cochrane SR on the topic) did not find anything close (such as the record on the cochrane.org site).
- evidence based medicine was pretty poor.
- prrostate returned no results, and no spelling correction was offered.
For each search term there is the ability to write a mini article – I guess as an introduction to the topic. Also, there is the ability to discuss the results, which brings up a wiki-style editor. I imagine that’s the area where you state that ‘this result is bad, why not….’
As the site says: “Wikia’s search engine concept is that of trusted user feedback from a community of users acting together in an open, transparent, public way. Of course, before we start, we have no user feedback data. So the results are pretty bad. But we expect them to improve rapidly in coming weeks, so please bookmark the site and return often.”
I’m intrigued; it’s way too early to see if this will develop into a rival to Google (or even a Google-killer). But in the forthcoming weeks/months I’ll return to see how the above searches improve.