Detection, avoidance, and compensation - three studies on extreme response style
Pargent, Florian
2017
English
Universitätsbibliothek der Ludwig-Maximilians-Universität München
Pargent, Florian (2017): Detection, avoidance, and compensation - three studies on extreme response style. Dissertation, LMU München: Fakultät für Psychologie und Pädagogik
PDF: Pargent_Florian.pdf (3MB)
Abstract

Extreme Response Style (ERS) describes individual differences in selecting extreme response options in Likert scale items, which are stable over time (Weijters et al., 2010b; Wetzel, Lüdtke, et al., 2016) and across different psychological constructs (Wetzel, Carstensen, & Böhnke, 2013). This thesis contains three empirical studies on the detection, avoidance, and compensation of ERS.

In the first study, we introduce a new method to detect ERS which uses an ERS index from heterogeneous items as a covariate in partial credit trees (PC trees; Komboz et al., 2016). This approach combines the objectivity of ERS indices from heterogeneous items (Greenleaf, 1992) with the threshold interpretation of ERS known from analyses with the ordinal mixed-Rasch model (Rost, 1991). We analyzed personality facets of 11,714 subjects from the German nonclinical normative sample of the Revised NEO Personality Inventory (NEO-PI-R; Ostendorf & Angleitner, 2004), and 3,835 participants of the longitudinal panel of the GESIS - Leibniz-Institute for the Social Sciences (GESIS, 2015), who filled out the Positive and Negative Affect Schedule (Krohne et al., 1996) and the Questionnaire of Spatial Strategies (Münzer & Hölscher, 2011). ERS was detected in all analyzed scales. The resulting pattern suggests that ERS reflects a stable trait with a continuous structure.

In the second study, we investigate whether data from items with dichotomous response formats are unaffected by ERS, as has been assumed in the literature (Wetzel, Carstensen, & Böhnke, 2013). In a paper-and-pencil questionnaire, 429 German psychology students completed the Shyness scale from the Revised Minnesota Multiphasic Personality Inventory (Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) and the Achievement Orientation scale from the Revised Freiburger Persönlichkeitsinventar (Fahrenberg et al., 2001). ERS was assessed by an ERS index from heterogeneous items, a binary ERS measure based on the classification of an ordinal mixed-Rasch model, and a binary self-report measure of ERS. The ERS measures were used as covariates in Rasch trees (Strobl et al., 2013) and DIF Lasso models (Tutz & Schauberger, 2015) of the dichotomous scales. We did not find any effect of ERS on dichotomous item responses. Adopting dichotomous response formats therefore seems to be a reasonable strategy to avoid ERS.

In the third study, we test whether instructions to give more or less extreme responses, depending on participants' individual response tendencies, can counterbalance the impact of ERS. In an online questionnaire, 788 German subjects completed the Impulsivity and Order facets of the NEO-PI-R three times under different ERS instructions. In the first round, a standard instruction was used. Participants in the experimental group received instructions for more or less extreme responses in the second and third rounds, while subjects in the control group responded under neutral instructions. ERS was measured by an ERS index from heterogeneous items and a self-report measure of ERS. Binary ERS classifications were used to create artificial datasets in which participants received an instruction that should either compensate for or aggravate their individual response tendencies. The predictive performance of Random Forest models (Breiman, 2001), in which self-reported impulsive and orderly behaviors were predicted from the item responses, was compared between the compensation, aggravation, and control settings. No differences in predictive performance were observed between the settings. Likewise, PC tree analyses suggest that ERS was still present in the compensation setting. Including ERS measures as predictors did not increase predictive performance when items were answered under standard instructions.
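To make the comparison in the third study more concrete, the following Python sketch illustrates the general idea of comparing cross-validated Random Forest performance between the compensation, aggravation, and control settings. It is not the thesis code (the original analyses were presumably carried out in R), and the file names, column names, and criterion variable are hypothetical placeholders.

```python
# Illustrative sketch only: compare Random Forest predictive performance
# across instruction settings. Files and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def rf_performance(df, item_cols, criterion_col):
    """Cross-validated R^2 of a Random Forest predicting the
    self-reported behavior criterion from the item responses."""
    model = RandomForestRegressor(n_estimators=500, random_state=1)
    scores = cross_val_score(model, df[item_cols], df[criterion_col],
                             cv=10, scoring="r2")
    return scores.mean()

# One data frame per artificial setting, each containing the Likert item
# responses and the behavioral criterion (hypothetical CSV files).
settings = {name: pd.read_csv(f"{name}.csv")
            for name in ("compensation", "aggravation", "control")}

item_cols = [c for c in settings["control"].columns if c.startswith("item_")]
for name, df in settings.items():
    print(name, round(rf_performance(df, item_cols, "behavior_score"), 3))
```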
Our findings are in line with simulation studies suggesting that ERS has a small impact on applied psychological measurement (Plieninger, 2016; Wetzel, Böhnke, & Rose, 2016). Future research on ERS could improve psychological measurement by considering continuous models of ERS (Jin & Wang, 2014; Tutz et al., 2016). In light of recent calls to turn psychology into a more predictive science (Chapman et al., 2016; Yarkoni & Westfall, 2017), investigating the impact of ERS on criterion validity should also have high priority.
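For reference, the ERS index from heterogeneous items used throughout the three studies can be sketched as follows. The Python snippet is only an illustration in the spirit of Greenleaf (1992), not code from the thesis: it computes, per person, the proportion of extreme responses across a set of content-heterogeneous Likert items, whereas Greenleaf's original procedure additionally requires items with low intercorrelations and means near the scale midpoint. Data and column names are hypothetical.

```python
# Illustrative sketch of a Greenleaf-style ERS index: per-person proportion
# of responses in the two extreme categories of heterogeneous Likert items.
import pandas as pd

def ers_index(responses: pd.DataFrame, scale_min: int = 1, scale_max: int = 5) -> pd.Series:
    """Proportion of each person's responses that fall in an extreme category."""
    extreme = (responses == scale_min) | (responses == scale_max)
    return extreme.mean(axis=1)

# Example: 5-point Likert responses of three persons to four heterogeneous items.
items = pd.DataFrame({"h1": [5, 3, 1], "h2": [1, 2, 5],
                      "h3": [5, 4, 2], "h4": [2, 3, 5]})
print(ers_index(items))  # person 1: 3 of 4 responses are extreme -> 0.75
```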