Preference and unfolding models (25)

Chair: Mark de Rooij, Wednesday 22nd July, 9.55 - 11.15, Lowercroft, School of Pythagoras.

Rung-Ching Tsai, Department of Mathematics, National Taiwan Normal University, Taipei, Taiwan and Ulf Böckenholt, McGill University, Montreal, Canada. Analysis of Preference Data: Beyond Scaling (24)

Shiu-Lien Wu, Department of Psychology, National Chung Cheng University, Chia-Yi, Taiwan and Wen-Chung Wang, Department of Educational Psychology, The Hong Kong Institute of Education. The multidimensional generalized graded unfolding model. (030)

Anna Brown, SHL Group, UK and Alberto Maydeu-Olivares, University of Barcelona, Spain. Improving forced-choice tests with IRT. (051)

Na Yang and Brian Habing, Statistics Department, University of South Carolina, USA. Distinguishing monotone and unfolding items when they both are present. (232)

ABSTRACTS

Analysis of Preference Data: Beyond Scaling (24)
Rung-Ching Tsai and Ulf Böckenholt
The random utility approach has been influential in the development of paired comparison and ranking models. In particular, Thurstonian models have been shown to be useful in scaling choice options and in providing an easy-to-interpret representation of preference data. Equally importantly, paired-comparison and ranking data can also provide a rich source of information about individual differences and perceived similarity relationships among choice alternatives. Recently, the modelling of such preference data was made easier by representing utilities as latent factors in a structural equation modelling (SEM) framework. Here we extend this SEM approach to model preference data simultaneously with other types of binary and/or ordinal data. Combining absolute and relative judgment data in this way can enrich our understanding of individual differences in multiple domains, including preferences and attitudes. A real-life survey dataset is analyzed to illustrate the proposed approach.
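The core of the Thurstonian scaling idea mentioned above can be sketched numerically. In the simplest case, the utilities of two options are jointly normal, and the probability of preferring one to the other is the normal CDF of their scaled mean difference. The function names and parameter values below are illustrative, not taken from the paper:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def thurstone_choice_prob(mu_i, mu_j, var_i=1.0, var_j=1.0, cov_ij=0.0):
    """P(option i preferred to option j) under a Thurstonian random-utility
    model: the utility difference U_i - U_j is normal with mean mu_i - mu_j
    and variance var_i + var_j - 2*cov_ij."""
    sd_diff = sqrt(var_i + var_j - 2.0 * cov_ij)
    return normal_cdf((mu_i - mu_j) / sd_diff)

# Equal scale values give indifference (probability 0.5); a higher
# scale value for option i pushes the preference probability toward 1.
print(thurstone_choice_prob(0.0, 0.0))
print(thurstone_choice_prob(1.0, 0.0))
```

Allowing the utility variances and covariances to vary across respondents is what lets such models capture the individual differences and similarity relations the abstract refers to.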

The multidimensional generalized graded unfolding model. (030)
Shiu-Lien Wu and Wen-Chung Wang
Graded unfolding models have been applied to fit Likert items. All the existing models are unidimensional: if there are multiple scales of Likert items, they have to be analyzed consecutively, one scale at a time. This consecutive unidimensional approach does not exploit the interrelationships between latent traits to increase measurement precision. Moreover, the correlation between latent traits will be underestimated due to measurement error if the Pearson correlation is computed from the person measures. To resolve these problems, we propose the multidimensional generalized graded unfolding model and apply the software WinBUGS to estimate its parameters. A real data set from the TIMSS 2003 background questionnaires about learning mathematics and sciences was analyzed, consisting of 4 scales, each with 5 to 7 four-point Likert items; a higher score represented higher agreement with the statements. The results showed that the multidimensional approach (joint analysis) yielded higher reliability estimates (.89 ~ .93) for the 4 latent traits than the unidimensional approach (.84 ~ .91), meaning that the 4 scales would have to be lengthened by 28% to 57% for the unidimensional approach to achieve the same measurement precision as the multidimensional approach. In addition, the multidimensional approach yielded higher correlation estimates between the 4 latent traits (.46 ~ .87) than the unidimensional approach (.25 ~ .54).
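The test-lengthening figures quoted above follow from the Spearman-Brown prophecy formula. The sketch below applies it to the endpoints of the reported reliability ranges; the actual pairings are per scale, so these numbers are only illustrative of how such percentages are obtained:

```python
def spearman_brown_lengthening(rho_current, rho_target):
    """Spearman-Brown prophecy: the factor k by which a test must be
    lengthened so that its reliability rises from rho_current to rho_target:
        k = rho_target * (1 - rho_current) / (rho_current * (1 - rho_target))
    """
    return (rho_target * (1.0 - rho_current)) / (rho_current * (1.0 - rho_target))

# Illustrative endpoints of the reported ranges (pairings per scale unknown):
k_low = spearman_brown_lengthening(0.91, 0.93)   # about 1.31, i.e. ~31% longer
k_high = spearman_brown_lengthening(0.84, 0.89)  # about 1.54, i.e. ~54% longer
print(round(k_low, 2), round(k_high, 2))
```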

Improving forced-choice tests with IRT. (051)
Anna Brown and Alberto Maydeu-Olivares
Forced-choice formats in personality assessment can have significant advantages in reducing response biases. However, the traditional approach, which treats relative decisions as if they were absolute (as for single-stimulus items), results in ipsative scales with problematic psychometric properties. We show how the decision process behind forced-choice responding can be suitably modelled by Thurstone’s theory, which attributes preferences to the relative utility values of the objects under comparison. Thurstonian models for ranking and paired comparisons, which are similar to second-order factor models for dichotomous data, can be applied to small ranking tasks, as shown by Maydeu-Olivares and Böckenholt (2005). For forced-choice questionnaires with many scales and items, however, these structural models cannot be estimated due to heavy parameterization. We introduce an IRT model for large multi-trait forced-choice tests that bypasses the latent utilities of items, directly linking the choices made to the broader latent traits measured by the test. We demonstrate how additional constraints on parameters follow from the Thurstonian models, and how individual scores on latent traits are estimated. Finally, we apply this approach to increase the efficiency of an existing forced-choice test: IRT modelling is used to select the best 75% of the items, and to produce individual scores that are not ipsative.
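The ipsative problem the abstract refers to is easy to demonstrate. Under classical scoring, each forced-choice block awards a fixed pool of rank points across traits, so every respondent's trait scores sum to the same constant and only relative standings are recoverable. The toy data and function below are hypothetical, purely to show the arithmetic:

```python
# Classical (non-IRT) scoring of a forced-choice test: in each block a
# respondent ranks k items, each keyed to a different trait, and the trait
# receives the rank-based points (hypothetical 3-trait, 4-block example).
def classical_scores(rankings):
    """rankings: list of blocks; each block maps trait -> points
    (e.g. 2 = most like me, 1 = in between, 0 = least like me)."""
    totals = {}
    for block in rankings:
        for trait, pts in block.items():
            totals[trait] = totals.get(trait, 0) + pts
    return totals

person_a = [{"A": 2, "B": 1, "C": 0}] * 4   # strongly prefers A-keyed items
person_b = [{"A": 0, "B": 1, "C": 2}] * 4   # strongly prefers C-keyed items
for person in (person_a, person_b):
    scores = classical_scores(person)
    print(scores, "sum =", sum(scores.values()))
# Both respondents' scores sum to the same constant (4 blocks x 3 points = 12):
# the ipsative property that the IRT approach described above avoids.
```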


Distinguishing monotone and unfolding items when they both are present. (232)
Na Yang and Brian Habing
Item response theory (IRT) has been widely used in analyzing educational and psychological assessments. Readily available IRT implementations allow for two common types of models: monotone models for dominance scales (cf. Guttman, 1950; Rasch, 1960/1980; Birnbaum, 1968; Mokken, 1971) and unfolding models for proximity scales (cf. Roberts, Donoghue and Laughlin, 2000). Currently, however, these two types of items can only be analyzed separately. A combined model allowing for simultaneous analysis of proximity and dominance items has been proposed, and the initial estimation results were very promising (Yang and Habing, 2009). More complex data sets are examined to see how the estimation procedure performs when there are more unfolding items, which are generally difficult to estimate in the dichotomous case.
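The contrast between the two item types above can be sketched with minimal response functions: a 2PL curve for the monotone (dominance) case, and, standing in for the GGUM cited above, a simple squared-distance kernel for the unfolding (proximity) case. The parameter values are arbitrary illustrations:

```python
from math import exp

def monotone_2pl(theta, a=1.5, b=0.0):
    """Monotone (dominance) item: 2PL item response function,
    strictly increasing in the latent trait theta."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))

def unfolding_prob(theta, delta=0.0):
    """Unfolding (proximity) item, simplified to a squared-distance kernel
    (the GGUM itself is more elaborate): agreement peaks when theta is
    near the item location delta and falls off on BOTH sides."""
    return exp(-((theta - delta) ** 2))

# A monotone item keeps rising with theta; an unfolding item is
# single-peaked, so respondents far above or far below delta both disagree.
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(monotone_2pl(theta), 3), round(unfolding_prob(theta), 3))
```

The estimation difficulty mentioned in the abstract stems from this single-peakedness: a low response to an unfolding item is ambiguous about whether the respondent sits above or below the item location.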