Quantifying accuracy improvement in sets of pooled judgments: does dialectical bootstrapping work?

Details

Resource 1: BIB_C60340BE1A69.P001.pdf (102.38 KB)
State: Public
Version: Author's accepted manuscript
Serval ID
serval:BIB_C60340BE1A69
Type
Article: article from a journal or magazine.
Collection
Publications
Institution
Title
Quantifying accuracy improvement in sets of pooled judgments: does dialectical bootstrapping work?
Journal
Psychological Science
Author(s)
White C.M., Antonakis J.
ISSN
1467-9280 (Electronic)
ISSN-L
0956-7976
Publication state
Published
Issued date
01/01/2013
Peer-reviewed
Yes
Volume
24
Number
1
Pages
115-116
Language
English
Notes
Publication types: Journal Article
Publication Status: ppublish
Abstract
Galton (1907) first demonstrated the "wisdom of crowds" phenomenon by averaging independent estimates of unknown quantities given by many individuals. Herzog and Hertwig (2009, in Psychological Science; hereafter H&H) showed that individuals' own estimates can be improved by asking them to make two estimates at separate times and averaging them. H&H claimed to observe far greater improvement in accuracy when participants received "dialectical" instructions to consider why their first estimate might be wrong before making their second estimate than when they received standard instructions. We reanalyzed H&H's data using measures of accuracy that are unrelated to the frequency of identical first and second responses and found that participants in both conditions improved their accuracy to an equal degree.
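The averaging idea the abstract describes can be illustrated with a minimal sketch (hypothetical numbers, not H&H's data or analysis): averaging a first and a second estimate of the same quantity tends to reduce the mean absolute error relative to the first estimates alone.

```python
import statistics

# Hypothetical estimates of a known quantity (illustrative only).
true_value = 100.0
first_estimates = [90.0, 120.0, 85.0, 110.0]   # each person's first guess
second_estimates = [105.0, 95.0, 115.0, 92.0]  # each person's second guess

def mean_abs_error(estimates, truth):
    """Mean absolute deviation of a set of estimates from the truth."""
    return statistics.mean(abs(e - truth) for e in estimates)

# Within-person averaging: pool each person's two estimates.
averaged = [(a + b) / 2 for a, b in zip(first_estimates, second_estimates)]

error_first = mean_abs_error(first_estimates, true_value)   # 13.75
error_avg = mean_abs_error(averaged, true_value)            # 2.75
print(error_first, error_avg)
```

Because the two guesses bracket the truth for most of these hypothetical respondents, their errors partially cancel when averaged; this is the "crowd within" effect that H&H studied and that the reanalysis quantifies.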
Keywords
Feedback, Psychological, Humans, Judgment, Meta-Analysis as Topic, Problem Solving, Statistics as Topic/methods
Pubmed
Web of Science
Create date
22/04/2012 9:55
Last modification date
20/08/2019 15:41