Thursday, January 31, 2008
Monday, January 28, 2008
Around two years ago the Journal of Clinical Pathology ran a series of papers, 'Best practice in primary care pathology', which proved very useful for our Q&A activities. Now these papers have been released, free access, on the web - BetterTesting. To make them even more useful, the papers have been 'cut up' into nearly 120 clinical Q&As. An example is:
"Clinical guidance is more difficult to formulate for laboratory medicine as the evidence base is frequently weaker than for medical interventions. The guidance contained in this site must therefore be used in the clinical context and cannot be taken in isolation, as either a minimum or acceptable standard."
Basically, there is little evidence out there. This collection is a (sort of) 'current state of our knowledge' resource and is very welcome.
Wednesday, January 23, 2008
- So if you have any suggestions for improvements, let me know!
Sunday, January 20, 2008
Thursday, January 17, 2008
Dexterity boost from games consoles hones surgery skills
Update: A letter in today's BMJ reports:
"In the first clinic after Christmas we encountered a child who was complaining of tiredness and a sore shoulder, having been inactive over the past three years through illness. He had been given a new generation computer game for Christmas, and having checked everything else, we diagnosed Wii shoulder.
Anyone who has played interactive games on Wii sport will relate to this shoulder pain in muscles that have not been used for a long time. Within a day the discomfort wears off, as with exercise after a period of inactivity."
Wednesday, January 16, 2008
Sunday, January 13, 2008
Mahalo is the world's first human-powered search engine, or so its website claims. Basically, it relies on users to create the search results for a given search term. An interesting, and as yet unproven, concept. However, I have a degree of sympathy with the approach. For a given search term you have ten (possibly up to thirty) key documents to deliver. Can an algorithm really figure out which are the ten best? This problem is compounded by users frequently entering unsophisticated search terms that return lots of results. Take TRIP: our most popular search terms are things such as 'asthma', 'hypertension' etc.
The introduction of the specialist searches on TRIP has given us some additional information. For instance if a user goes to the cardiology site and searches for the term 'asthma' we know that this is likely to be related to cardiology.
However, 95% of our users search TRIP via the main search, so we can assume very little about their intentions when they search. True, if they search for 'prostate cancer screening' that's a specific search and there aren't masses of results. But what about the search terms 'asthma' or 'hypertension'? One obvious option is to produce search hints, allowing users to easily select search modifiers. For instance, the search term 'asthma' might produce modifiers such as 'children', 'steroids', 'allergy', 'education' etc. A simple click on one of the suggested terms produces a more focused search. This is fairly easy to do and I hope to introduce something this year.
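The hint idea above can be sketched very simply. This is a hypothetical toy, not TRIP code: the modifier table and function names are invented for illustration, and a real system would generate modifiers from query logs rather than hand-curate them.

```python
# Hypothetical sketch: map broad search terms to suggested modifiers.
# The table below is hand-invented; nothing here comes from TRIP itself.
SUGGESTED_MODIFIERS = {
    "asthma": ["children", "steroids", "allergy", "education"],
    "hypertension": ["elderly", "diuretics", "lifestyle", "pregnancy"],
}

def search_hints(term):
    """Return refined queries to offer alongside the results for a broad term."""
    base = term.strip().lower()
    return [f"{base} {mod}" for mod in SUGGESTED_MODIFIERS.get(base, [])]

print(search_hints("asthma"))
# -> ['asthma children', 'asthma steroids', 'asthma allergy', 'asthma education']
```

A click on 'asthma children' would then simply re-run the search with the narrower query; specific terms with no entry in the table produce no hints.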
However, I'm particularly interested in the search habits of previous users of a particular term. Can we aggregate these habits to produce a user-generated list of results based on click-through? So, instead of using the algorithm (or as an alternative), why not say: the most popular documents viewed by previous users for the term 'asthma' are... and then list the top ten. I have serious doubts; for instance, I know the results will be skewed towards the top ten search results. Also, clicking on a paper doesn't demonstrate that it's useful - just that the title makes the document sound useful.
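The aggregation itself is straightforward to sketch: count clicks per (term, document) pair and rank. Again, a minimal hypothetical example; the log format and document IDs are made up, and it deliberately ignores the position-bias problem just mentioned.

```python
from collections import Counter

# Hypothetical click log of (search_term, document_id) pairs;
# in practice this would come from the server logs.
CLICK_LOG = [
    ("asthma", "doc-guideline-2007"),
    ("asthma", "doc-cochrane-steroids"),
    ("asthma", "doc-guideline-2007"),
    ("hypertension", "doc-nice-ht"),
]

def most_popular(term, log, n=10):
    """Top-n documents that previous users clicked for this search term."""
    counts = Counter(doc for t, doc in log if t == term)
    return [doc for doc, _ in counts.most_common(n)]

print(most_popular("asthma", CLICK_LOG))
# -> ['doc-guideline-2007', 'doc-cochrane-steroids']
```

A small "most popular for this search" box on the results page would just render the output of something like this, capped at 5-10 entries.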
I think I need to get the designers in to see if we can re-jig the results page. In that way we might be able to squeeze in a relatively small box that highlights the most popular 5-10 results for a given search. Certainly one to ponder.
Thursday, January 10, 2008
However, we've arrived at a different view. We know that each result gets a 'score' based on the relevancy of a particular document to the entered search term(s). This can range from 0 to over 10, though the best matches rarely score over 5. The score is based on a number of factors, such as whether the search term occurs in the document title, the relative density of the search term in the text, etc.
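To make the idea concrete, here is a toy version of such a score using the two factors mentioned (title match and term density). The weights are entirely invented - TRIP's actual formula isn't described here - but the shape matches the description: a bonus for a title hit plus a density contribution, with typical good matches landing below 5.

```python
def relevancy_score(term, title, text):
    """Toy relevancy score: title-match bonus plus term density in the body.
    The weights (2.0 and 5) are invented for illustration only."""
    term = term.lower()
    words = text.lower().split()
    density = words.count(term) / len(words) if words else 0.0
    title_bonus = 2.0 if term in title.lower() else 0.0
    return round(title_bonus + 5 * density, 2)

score = relevancy_score(
    "asthma",
    "Asthma in children",
    "asthma management requires inhaled steroids for asthma control",
)
print(score)  # -> 3.25
```

A real engine would add many more factors (field weights, document length normalisation, link data), but even this toy shows why raw scores need calibrating before they mean anything.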
So in the near future we'll be grabbing the average score for the top ten results for 10,000+ live searches. At the same time we'll manually review a set of search results and classify them to see what we consider a reasonable score. So we might say that an average score of 1 is great, 0.7 is good, 0.5 is ok and anything less is poor. We'll then see how well the real searches on TRIP compare with our classification.
So what will the results tell us?
Well, a few things. I think it'll give us an idea of what proportion of our searches return credible results. If we get a significant proportion of 'poor' results, it suggests we'll need to address that. Who knows, we may even write it up as a paper!
Monday, January 07, 2008
Results are interesting.
- A search for 'TRIP Database' finds TRIP as the first result.
- 'Prostate cancer' returns reasonable results.
- 'prostate cancer screening cochrane systematic review' (I'm looking for the Cochrane SR on the topic) did not find anything close (such as the record on the cochrane.org site).
- 'evidence based medicine' - pretty poor.
- 'prrostate' returned no results and no spelling correction was offered.
For each search term there is the ability to write a mini-article - I guess as an introduction to the topic. There is also the ability to discuss the results, which brings up a wiki-style editor. I imagine that's the area where you state 'this result is bad, why not....'
As the site says "Wikia's search engine concept is that of trusted user feedback from a community of users acting together in an open, transparent, public way. Of course, before we start, we have no user feedback data. So the results are pretty bad. But we expect them to improve rapidly in coming weeks, so please bookmark the site and return often."
I'm intrigued; it's way too early to see if this will develop into a rival to Google (or even a Google-killer). But in the forthcoming weeks/months I'll return to see how the above searches improve.
Thursday, January 03, 2008
Wednesday, January 02, 2008
Google launched Scholar towards the end of 2004 and it has changed little in that time. I use it occasionally, but only as a last resort. I find the link-outs to the various publishers' sites confusing and painful. In normal Google I search, click and get the result. In Scholar I find a result, click, have to decide which publisher site to go to, the publisher then tries to flog me the full text, and only then do I get to the abstract. I think that indicates how spoilt I am!
What next for Scholar? I imagine Dean (at the UBC Academic Search - Google Scholar Blog) has much greater insight into this sort of thing. But if it were my business/project, I'd sit up at figures like this. The figures suggest that people aren't using it (or are moving away), and therefore there are two options:
- Leave it as it is and let it die
- Intervene, pump in some resources, figure out why people are switching off and improve it.
I'd prefer the second, but it's not as though they make any money out of Scholar, and they have dropped projects in the past.
Hopefully, we'll see the November 2008 figures...
Tuesday, January 01, 2008
We're going to roll out a few tweaks on the TRIP Database - nothing major, just a few issues that have arisen since the last major upgrade. With the new project not likely to go live till mid-2008, I doubt we'll see any significant developments to the main TRIP, although the specialist search engines are likely to see some activity.
An even more pointless blog than normal. I just wanted to post something using the new technology!