ISSN:
1432-2218
Keywords:
Clinical practice guidelines — Consensus development conference — Literature search — Publication bias — Retrieval bias
Source:
Springer Online Journal Archives 1860-2000
Topics:
Medicine
Notes:
Abstract
Background: Ideally, a consensus panel combines expert knowledge with external evidence derived from the literature. To date, many consensus conferences do not use a structured approach to search the literature, but simply compile an add-on reference list from all papers cited by the panelists. This study examined how well such panelists retrieved the relevant literature.
Methods: We used the reference lists of nine surgeons who took part in a consensus conference on common bile duct stones. We included all papers that were referred to as randomized controlled trials (RCTs). We then compared this list with a database search in order to calculate sensitivity and specificity.
Results: The nine experts cited between 35 and 518 papers, but only eight papers on average were RCTs. Of the 49 papers that the experts believed to be RCTs, only 23 actually were RCTs. The sensitivity and specificity for correctly identifying an RCT were 0.21 (95% CI, 0.11–0.30) and 0.80 (95% CI, 0.64–0.95), respectively. RCTs that included the word "randomized" in their title were significantly more likely to be identified (relative risk, 1.31; 95% CI, 1.18–1.45).
Conclusion: Our data indicate that consensus panelists usually do not perform systematic literature searches, but simply use their favorite papers to back up their arguments. Because this may lead to a biased selection of the evidence base on which the consensus statements are founded, a systematic search of all relevant articles should become a mandatory task in any consensus or guideline process.
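As a minimal sketch of the retrieval metrics reported in the abstract: sensitivity and specificity follow from a standard 2×2 classification table. The counts below are hypothetical, chosen only to illustrate the formulas; they are not the study's raw data (the abstract does not report the full table).

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity (TP / (TP + FN)) and specificity (TN / (TN + FP))
    from the four cells of a 2x2 classification table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical illustrative counts (NOT the study's actual data):
# tp = RCTs correctly cited as RCTs, fn = RCTs missed,
# tn = non-RCTs correctly excluded, fp = non-RCTs miscited as RCTs.
sens, spec = sensitivity_specificity(tp=23, fn=87, tn=80, fp=20)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

With these illustrative counts the formulas yield values matching the magnitudes reported in the abstract (sensitivity ≈ 0.21, specificity ≈ 0.80).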
Type of Medium:
Electronic Resource
URL:
http://dx.doi.org/10.1007/s004640000283