- Transparency. You're not doing a systematic review, so make that clear. The onus is on the Q&A service to ensure users are acutely aware of the potential drawbacks of the service.
- Feedback, be it from the person who asked the question or from other readers of the answers (assuming things are web-based). We receive lots of feedback, but we could make it easier to give - perhaps something as straightforward as a Digg-style thumbs up or thumbs down.
- Process standards. These cover things such as length of answer, speed of answer, referencing of material, etc. Not the most interesting!
- People skills. It's fine to say that the person asking the question is competent (or even excellent) at searching Medline, but that certainly doesn't make them good at answering questions! Understanding the question is essential before you even consider a search. How can you make a standard around that?
- Quality control. Is there a robust system in place?
The above is not a complete list, just some of the more memorable points.
The biggest drawback I found is that there is very little research in this area on which to base standards. For systematic reviews there is a vast amount of research on the actual process of conducting them; in Q&A there is virtually nothing.
My view, and it hasn't changed in ten years, is that we're not trying to do a systematic review; we're just trying to improve on what a clinician would do. I remember, 8-9 years ago, receiving criticism from a civil servant who suggested that what I did was negligent. I suggested to her that if our service is negligent, then surely she was too, in providing Medline to clinicians who are poorly trained to search it.
Although hard work, preparing for this talk has proved very useful; it's an area I've not given a great deal of thought to in the past. And there's still over a week to go, so plenty of time for reflection.