Publikationsserver der Universitätsbibliothek Marburg

Title: Social desirability in survey research: Can the list experiment provide the truth?
Author: Gosen, Stefanie
Further contributors: Wagner, Ulrich (Prof. Dr.)
Published: 2014
URI: https://archiv.ub.uni-marburg.de/diss/z2014/0228
DOI: https://doi.org/10.17192/z2014.0228
URN: urn:nbn:de:hebis:04-z2014-02288
DDC: Psychology
Title (trans.): Soziale Erwünschtheit in der Umfrageforschung: Kann das Listenexperiment die Wahrheit offenbaren?
Publication date: 2014-04-30
License: https://rightsstatements.org/vocab/InC-NC/1.0/

Document

Keywords:
soziale Erwünschtheit, list experiment, social desirability, survey research, sensitive questions, indirekte Surveymethoden, Umfrageforschung, indirect survey methods, Listenexperiment, sensitive Fragen

Summary:
The phenomenon of social desirability response bias in survey research has been discussed in social psychology and the social sciences for many years. Distortions often occur when a question or topic of interest is 'sensitive' (Lee, 1993), meaning that it has a potentially embarrassing, threatening, or stigmatizing character (Dalton, Wimbush, & Daily, 1994). To avoid socially desirable responses in self-reports, indirect survey methods have been applied; these techniques are intended to guarantee respondents' anonymity and thereby yield more valid self-reports (Tourangeau & Yan, 2007). One method that is supposed to achieve this goal is the list experiment. In a list experiment, a baseline condition answers a list of nonsensitive items, a test condition answers the same list plus the sensitive item, and respondents in both conditions report only the number of items they endorse; the difference between the condition means serves as an estimate of the proportion of people who agree with the sensitive item (a schematic computation is sketched after this summary). To determine the social desirability bias, this estimate is then compared with a direct self-report question. If a social desirability bias exists, the estimate of the list experiment should be higher than that of the direct question. However, the literature does not provide a consistent picture of how well the list experiment works, and a few published studies report complications with data collection and with the results of the list experiment (Biemer et al., 2005). The reasons for these inconsistencies are often not apparent. The aim of this dissertation was therefore to evaluate the validity and consistency of the list experiment and to identify specific factors that determine when it fails. The dissertation consists of two manuscripts that both evaluate the validity of the list experiment.

Manuscript #1 demonstrated the inconsistency of the list experiment in the field of prejudice research on the basis of three studies covering two survey modes and a panel dataset. In Study 1 (N=229, representative), the list experiment produced results in the expected direction and yielded a higher estimate than the direct self-report question. Study 2 (a modified repetition, N=445, representative) showed no significant difference between the two conditions of the list experiment, and the direct self-report item yielded a higher approval rate than the list experiment. To test the validity of the method and to identify factors that explain its failure, Study 3 (N=1,569, non-representative) compared three different list experiments, which once again produced inconsistent results. Furthermore, a factor was found that explains part of the inconsistency. The essential question was whether the increase in the mean occurs simply because the test condition contains more items. Hence, a condition with four nonsensitive items was compared to a condition with five nonsensitive items, and the analysis revealed a significant mean difference between the two. This finding has serious consequences for the validity of the list experiment, because the increase of the mean in the test condition then depends not only on the content of the particular items but also on the number of items. An additional test-retest panel analysis revealed that respondents answer more consistently over time when the baseline condition includes only four nonsensitive items.

Manuscript #2 identified several factors that can partly explain the inconsistent results of the list experiment. Study 1 used cognitive interviews (N=7) to show that respondents largely understood the list experiment but only partially perceived the sensitive item as sensitive. Study 2 (an experimental online study, N=1,878) tested whether the sensitive item influences agreement with the nonsensitive items (item difficulty); the approval rate for the nonsensitive items increased when a sensitive item was included. For the list experiment, this means that the mean in the test condition rises because of a shift in item difficulty and not, as the method presupposes, because of the content of the sensitive item. Study 3 (a replication of Study 2, N=948) tested the hypotheses again in a slightly varied design. Here, the first hypothesis was confirmed with exclusively nonsensitive items, and Study 3 corroborated the hypothesis that the procedure of reporting the number of 'yes' answers is distorted in general. This implies that within the list experiment the reported item counts are biased in the baseline condition as well as in the test condition. In sum, the results of the two manuscripts indicate that the list experiment does not yield valid and consistent results. They suggest that the process of answering a list experiment gives rise to factors that cause distortions and affect its overall functioning; in total, three moderating factors were found that occurred independently of one another or together.
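
To make the estimation logic described in the summary concrete, the following minimal Python sketch (not part of the dissertation) simulates a list experiment and computes the difference-in-means estimate alongside a direct question; the sample sizes, endorsement rates, and variable names are illustrative assumptions only.

# Minimal illustration of the list experiment estimator (item count technique).
# The data are simulated; sample sizes and endorsement rates are assumptions
# chosen only to show the computation, not results from the dissertation.
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # respondents per condition (assumed)

# Baseline condition: number of "yes" answers to 4 nonsensitive items.
baseline_counts = rng.binomial(4, 0.5, size=n)

# Test condition: the same 4 nonsensitive items plus the sensitive item,
# which 30% of the simulated respondents endorse.
test_counts = rng.binomial(4, 0.5, size=n) + rng.binomial(1, 0.30, size=n)

# List experiment estimate: difference between the condition means.
list_estimate = test_counts.mean() - baseline_counts.mean()

# Direct self-report question, deflated here to 18% to mimic social desirability.
direct_estimate = rng.binomial(1, 0.18, size=n).mean()

print(f"List experiment estimate: {list_estimate:.3f}")
print(f"Direct question estimate: {direct_estimate:.3f}")
# A social desirability bias would appear as list_estimate > direct_estimate;
# the dissertation shows that such a difference can also arise merely from the
# larger number of items in the test condition, which undermines the estimator.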

Bibliography / References

  1. Gawronski, B., & Bodenhausen, G. V. (2006). Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change. Psychological Bulletin, 132, 692–731.
  2. Nosek, B. A. (2005). Moderators of the relationship between implicit and explicit evaluation. Journal of Experimental Psychology: General, 134, 565–584.
  3. Bosson, J. K., Swann, W. B., & Pennebaker, J. W. (2000). Stalking the perfect measure of implicit self-esteem: The blind men and the elephant revisited? Journal of Personality and Social Psychology, 79, 631–643.
  4. Crowne, D., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology, 24, 349–354.
  5. Hofmann, W., Gschwendner, T., & Schmitt, M. (2005). On implicit-explicit consistency: The moderating role of individual differences in awareness and adjustment. European Journal of Personality, 19, 25–49.
  6. Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133, 859–883.
  7. McConahay, J. B. (1986). Modern racism, ambivalence, and the Modern Racism Scale. In J. F. Dovidio & S. L. Gaertner (Eds.), Prejudice, discrimination, and racism (pp. 91–125). Orlando, FL: Academic Press.
  8. Lensvelt-Mulders, G. J. L. M., Hox, J. J., van der Heijden, P. G. M., & Maas, C. J. M. (2005). Meta-analysis of randomized response research: Thirty-five years of validation. Sociological Methods & Research, 33, 319–348.
  9. Droitcour, J., Caspar, R. A., Hubbard, M. L., Parsley, T. L., Visscher, W., & Ezzati, T. M. (1991). The item count technique as a method of indirect questioning: A review of its development and a case study application. In P. P. Biemer, R. M. Groves, L. E. Lyberg, N. A. Mathiowetz, & S. Sudman (Eds.), Measurement errors in surveys (pp. 185–210). New York: Wiley.
  10. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
  11. Banse, R., Seise, J., & Zerbes, N. (2001). Implicit attitudes towards homosexuality: Reliability, validity, and controllability of the IAT. Zeitschrift für Experimentelle Psychologie, 48, 145–160.
  12. Gawronski, B. (2002). What does the implicit association test measure? A test of the convergent and discriminant validity of prejudice-related IATs. Experimental Psychology, 49, 171–180.
  13. Paulhus, D. L. (1984). Two-component models of socially desirable responding. Journal of Personality and Social Psychology, 46, 598–609.
  14. Lee, R. M. (1993). Doing research on sensitive topics. London: Sage.
  15. Nosek, B. A., Banaji, M. R., & Greenwald, A. G. (2002). Harvesting implicit group attitudes and beliefs from a demonstration website. Group Dynamics, 6, 101–115.
  16. Gawronski, B. (2009). Ten frequently asked questions about implicit measures and their frequently supposed, but not entirely correct answers. Canadian Psychology, 50, 141–150.
  17. Tsuchiya, T., Hirai, Y., & Ono, S. (2007). A study of the properties of the item count technique. Public Opinion Quarterly, 71, 253–272.
  18. Holbrook, A. L., & Krosnick, J. A. (2010). Social desirability bias in voter turnout reports: Tests using the item count technique. Public Opinion Quarterly, 74, 37–67.
  19. Butz, D. A., & Plant, E. A. (2009). Prejudice control and interracial relations: The role of motivation to respond without prejudice. Journal of Personality, 77, 1311–1342.
  20. Baron, A. S., & Banaji, M. R. (2006). The development of implicit attitudes: Evidence of race evaluations from ages 6 and 10 and adulthood. Psychological Science, 17, 52–58.
  21. Fazio, R. H., & Olson, M. A. (2003). Implicit measures in social cognition research: Their meaning and use. Annual Review of Psychology, 54, 297–327.
  22. Frantz, C., Cuddy, A. J., Burnett, M., Ray, H., & Hart, A. (2004). A threat in the computer: The race implicit association test as a stereotype threat experience. Personality and Social Psychology Bulletin, 30, 1611–1624.
  23. Nier, J. A. (2005). How dissociated are implicit and explicit racial attitudes? A bogus pipeline approach. Group Processes & Intergroup Relations, 8, 39–52.
  24. Cunningham, W. A., Preacher, K. J., & Banaji, M. R. (2001). Implicit attitude measures: Consistency, stability, and convergent validity. Psychological Science, 12, 163–170.
  25. De Houwer, J. (2006). What are implicit measures and why are we using them? In R. W. Wiers & A. W. Stacy (Eds.), Handbook of implicit cognition and addiction (pp. 11–28). Thousand Oaks, CA: Sage.
  26. Dovidio, J. F., Kawakami, K., & Gaertner, S. L. (2002). Implicit and explicit prejudice and interracial interaction. Journal of Personality and Social Psychology, 82, 62–68.
  27. Miller, J. D. (1984). A new survey technique for studying deviant behavior. Unpublished doctoral dissertation, George Washington University.
  29. Biemer, P. P., Jordan, B. K., Hubbard, M., & Wright, D. (2005). A test of the item count methodology for estimating cocaine use prevalence. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health (DHHS Publication No. SMA 05-4044, Methodology Series M-5, pp. 149–174). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
  30. Gawronski, B., & Conrey, F. R. (2004). Der Implizite Assoziationstest als Maß automatisch aktivierter Assoziationen: Reichweite und Grenzen [The Implicit Association Test as a measure of automatically activated associations: Scope and limits]. Psychologische Rundschau, 55, 118–126.
  31. Egloff, B., & Schmukle, S. C. (2003). Does social desirability moderate the relationship between implicit and explicit anxiety measures? Personality and Individual Differences, 35, 1697–1706.
  32. Schlauch, R. C., Lang, A. R., Plant, E. A., Christensen, R., & Donohue, K. F. (2009). Effect of alcohol on race-biased responding: The moderating role of internal and external motivations to respond without prejudice. Journal of Studies on Alcohol and Drugs, 70, 328–336.
  33. Riketta, M. (2006). Gender and socially desirable responding as moderators of the correlation between implicit and explicit self-esteem. Current Research in Social Psychology, 11, 14–28.
  34. Nosek, B. A. (2007). Implicit-explicit relations. Current Directions in Psychological Science, 16, 65–69.
  35. Epstein, S., Pacini, R., Denes-Raj, V., & Heier, H. (1996). Individual differences in intuitive-experiential and analytical-rational thinking styles. Journal of Personality and Social Psychology, 71, 390–405.
  36. Keller, J., Bohner, G., & Erb, H.-P. (2000). Intuitive und heuristische Verarbeitung - verschiedene Prozesse? Präsentation einer deutschen Fassung des "Rational-Experiential Inventory" sowie neuer Selbstberichtskalen zur Heuristiknutzung [Intuitive and heuristic processing - different processes? Presentation of a German version of the "Rational-Experiential Inventory" as well as new self-report scales for heuristic use]. Zeitschrift für Sozialpsychologie, 31, 87–101.
  39. Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74, 1464–1480.
  40. Fishbein, M., & Ajzen, I. (2010). Predicting and changing behavior: The reasoned action approach. New York: Taylor & Francis.
  41. Egloff, B., & Schmukle, S. C. (2002). Predictive validity of an Implicit Association Test for assessing anxiety. Journal of Personality and Social Psychology, 83, 1441–1455.
  43. Kuklinski, J. H., Sniderman, P. M., Knight, K., Piazza, T., Tetlock, P. E., Lawrence, G. R., & Mellers, B. (1997). Racial prejudice and attitudes toward affirmative action. American Journal of Political Science, 41, 402–419.
  44. Auspurg, K., Jann, B., Krumpal, I., & von Hermanni, H. (2012). Randomized-Response-Technik: Hope or Hype? Eine Meta-Analyse unter Berücksichtigung von Publication-Bias [Randomized-response technique: Hope or hype? A meta-analysis in consideration of publication bias]. Paper presented at the First Mini-Conference of the Center of Quantitative Methods of the University of Leipzig, Asking Sensitive Questions: Theory and Data Collection Methods.
  45. Coutts, E., & Jann, B. (2011). Sensitive questions in online surveys: Experimental results for the randomized response technique (RRT) and the unmatched count technique (UCT). Sociological Methods and Research, 40, 169–193.
  47. Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The implicit association test at age 7: A methodological and conceptual review. In J. A. Bargh (Ed.), Automatic processes in social thinking and behavior (pp. 265–292). Psychology Press.
  48. Tourangeau, R., Rips, L. J., & Rasinski, K. A. (2000). The psychology of survey response. Cambridge, England: Cambridge University Press.
  49. Edwards, A.L. (1957). The social desirability variable in personality assessment and research. New York: Dryden.
  50. Dalton, D. R., Wimbush, J. C., & Daily, C. M. (1994). Using the unmatched count technique (UCT) to estimate base rates for sensitive behavior. Personnel Psychology, 47, 817–828.
  51. Ahart, A. M., & Sackett, P. R. (2004). A new method of examining relationships between individual difference measures and sensitive behavior criteria: Evaluating the unmatched count technique. Organizational Research Methods, 7, 101–114.
  52. Hofmann, W., Gawronski, B., Gschwendner, T., Le, H., & Schmitt, M. (2005). A meta- analysis on the correlation between the implicit association test and explicit self-report measures. Personality and Social Psychology Bulletin, 31, 1369–1385.
  53. Stocké, V. (2004). Entstehungsbedingungen von Antwortverzerrungen durch soziale Erwünschtheit. Ein Vergleich der Rational-Choice Theorie und des Modells der Frame-Selektion [Determinants of response bias due to social desirability: A comparison of rational choice theory and the model of frame selection]. Zeitschrift für Soziologie, 33, 303–320.
  54. Huddy, L., & Feldman, S. (2009). On assessing the political effects of racial prejudice. Annual Review of Political Science, 12, 423–447.
  55. Greenwald, A. G., Poehlman, T. A., Uhlmann, E., & Banaji, M. R. (2009). Understanding and using the implicit association test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97, 17–41.


* The document is freely accessible on the Internet - notes on usage rights