This is a question I’ve been wrestling with for a fair old while, and I’m now closing in on a method to answer it. Arguably people vote with their keyboards, and the 600,000+ searches per month indicate we’re doing something right!

However, we’ve arrived at a different view. We know that each result gets a ‘score’ based on the relevancy of a particular document to the entered search term(s). This can range from 0 to over 10, although the best matches rarely score over 5. The score is based on a number of factors, such as whether the search term occurs in the document title, the relative density of the search term in the text, and so on.
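To make that concrete, here’s a rough sketch of how a score along those lines might be computed. To be clear, the weights and the formula below are my own illustrative assumptions, not TRIP’s actual ranking code:

```python
def toy_relevancy_score(term, title, body):
    """Illustrative relevancy score: a term-in-title bonus plus term density.

    The weights (3.0 and 10.0) are invented for illustration; TRIP's real
    scoring will differ.
    """
    term = term.lower()
    words = body.lower().split()
    occurrences = sum(1 for word in words if term in word)
    density = occurrences / len(words) if words else 0.0

    score = 0.0
    if term in title.lower():
        score += 3.0          # bonus when the search term appears in the title
    score += 10.0 * density   # reward a higher density of the term in the text
    return score

print(toy_relevancy_score("asthma", "Asthma management in adults",
                          "Asthma is a chronic condition. Inhaled steroids..."))
```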

So in the near future we’ll be grabbing the average score of the top ten results for each of 10,000+ live searches. At the same time we’ll manually review a set of search results and classify them to see what we consider a reasonable score. So we might say that an average score of 1 is great, 0.7 is good, 0.5 is OK and anything less is poor. We’ll then see how well the real searches on TRIP compare with our classification.
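As a sketch of that banding step, assuming we’ve already got the average top-ten score for each search (the thresholds are the ones above; the data is made up):

```python
def classify(avg_score):
    """Map an average top-ten score to the proposed quality bands."""
    if avg_score >= 1.0:
        return "great"
    if avg_score >= 0.7:
        return "good"
    if avg_score >= 0.5:
        return "ok"
    return "poor"

# Made-up average top-ten scores for a handful of live searches.
average_scores = [1.3, 0.82, 0.41, 0.66, 0.95]
print([classify(s) for s in average_scores])
# ['great', 'good', 'poor', 'ok', 'good']
```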

So what will the results tell us?

Well, a few things. I think it’ll give us an idea of what proportion of our searches return credible results. If we get a significant proportion of ‘poor’ results, it suggests we’ll need to address that. Who knows, we may even write it up as a paper!
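Pulling the headline proportions out of the classified searches is then trivial; a quick sketch, again on made-up data:

```python
from collections import Counter

# Made-up bands for the classified searches (in reality, 10,000+ of them).
bands = ["great", "good", "poor", "ok", "good"]

counts = Counter(bands)
for band in ("great", "good", "ok", "poor"):
    print(f"{band}: {counts[band] / len(bands):.0%}")
```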