## Confirmatory and likelihood-based factor analysis (19)

*Chair: Vartan Choulikian, Wednesday 22nd July, 15.25 - 16.45, Dirac Room, Fisher Building.*

**S McKay-Curtis** and Elena A. Erosheva, *Department of Statistics, University of Washington, Seattle, USA*. An exploration of identifiability in bifactor models. (124)

**Fan Yang-Wallentin**, Hao Luo, *Uppsala University, Sweden*, Karl G Jöreskog, *Norwegian School of Management, Norway*. Confirmatory factor analysis of ordinal variables with misspecified models. (119)

**Soonmook Lee** and Inhye Kim, *Department of Psychology, Sungkyunkwan University, Seoul, Korea*. Approaches to handle identification problems in analysis of single indicator MTMM data. (251)

**Frans Oort**, *Department of Education, University of Amsterdam, The Netherlands*. Likelihood based confidence intervals in exploratory factor analysis. (235)

ABSTRACTS

**An exploration of identifiability in bifactor models**. (124)

*S McKay-Curtis and Elena A. Erosheva*

Identifiability of confirmatory factor analysis (CFA) models is one of the nagging problems in empirical research, particularly in the social sciences. Although many previous authors have proposed identifiability restrictions in special cases, necessary and sufficient conditions for identification of the general CFA model are not known, and identifiability issues must be investigated on a case-by-case basis. In this paper, we explore identifiability issues in bifactor models, a special case of the CFA model. The bifactor model defines a loading structure whereby all indicators load onto exactly two factors: a general factor, onto which all indicators load, and a secondary factor, onto which only a subset of the indicators load. We examine the identifiability of bifactor models by verifying theoretical results on the rank of the Jacobian matrix. Using these results we provide necessary conditions for bifactor models to be identified. We explore how identification problems can affect estimation and inference in both the Bayesian and frequentist statistical paradigms. We also present a simple "relabeling" scheme to apply to posterior draws from a Bayesian analysis when the model is known to be identified up to arbitrary sign changes of the loadings.
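A minimal numerical version of such a Jacobian rank check can be sketched as follows. This is an editorial illustration, not the authors' code: the six-item, two-group-factor configuration, the parameter values, and all function names are assumptions made here. The idea is to build the implied covariance matrix of a small bifactor model, differentiate its unique elements with respect to the free parameters by finite differences, and compare the rank of the resulting Jacobian with the number of parameters.

```python
import numpy as np

def bifactor_sigma(theta, p=6):
    """Implied covariance of a hypothetical six-item bifactor model:
    a general factor on all items plus group factors on items 0-2
    and 3-5, with free uniquenesses."""
    g, s1, s2, psi = theta[:p], theta[p:p + 3], theta[p + 3:p + 6], theta[p + 6:]
    lam = np.zeros((p, 3))
    lam[:, 0] = g        # general-factor loadings
    lam[:3, 1] = s1      # first group factor
    lam[3:, 2] = s2      # second group factor
    return lam @ lam.T + np.diag(psi)

def vech(m):
    """Stack the unique (lower-triangular) elements of a symmetric matrix."""
    return m[np.tril_indices(m.shape[0])]

def jacobian_rank(theta, eps=1e-6):
    """Rank of d vech(Sigma) / d theta by forward differences; full
    column rank is necessary for local identification."""
    base = vech(bifactor_sigma(theta))
    jac = np.empty((base.size, theta.size))
    for j in range(theta.size):
        t = theta.copy()
        t[j] += eps
        jac[:, j] = (vech(bifactor_sigma(t)) - base) / eps
    return np.linalg.matrix_rank(jac, tol=1e-4)

theta = np.concatenate([np.full(6, 0.7), np.full(3, 0.5),
                        np.full(3, 0.4), np.full(6, 0.3)])
rank = jacobian_rank(theta)  # compare with len(theta) == 18
```

For this particular two-group configuration the rank comes out below the 18 free parameters (the two groups' general-factor loadings can trade a common scale, with the group-factor loadings compensating), which is exactly the kind of local underidentification a rank check is meant to flag.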

**Confirmatory factor analysis of ordinal variables with misspecified models.** (119)

*Fan Yang-Wallentin, Hao Luo and Karl G Jöreskog*

Ordinal variables are common in many empirical investigations in the social and behavioral sciences. Researchers often apply the maximum likelihood method to fit structural equation models (SEM) to ordinal data. This assumes that the observed measures have normal distributions, which is not the case when the variables are ordinal. A better approach is to use polychoric correlations and fit the models using some robust method such as Robust Unweighted Least Squares (RULS), Robust Maximum Likelihood (RML), Weighted Least Squares (WLS), or Diagonally Weighted Least Squares (DWLS). In this simulation evaluation we study the behavior of RULS, RML, WLS, and DWLS in combination with polychoric correlations when the models are misspecified. We also study the effect of model size and number of categories on the parameter estimates, their standard errors, and the common chi-square measures of fit, both when the models are correct and when they are misspecified.
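For readers unfamiliar with the polychoric correlations used here, a minimal sketch of the idea (not the authors' implementation; the coarse grid-search estimator and all names below are simplifications assumed for illustration) is: estimate thresholds from the marginal category proportions, then choose the latent correlation that maximizes the bivariate-normal likelihood of the observed contingency table.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def polychoric_grid(x, y, grid=np.linspace(-0.99, 0.99, 199)):
    """Grid-search ML estimate of the polychoric correlation of two
    ordinal variables, assuming an underlying bivariate normal."""
    def thresholds(v):
        cats = np.unique(v)
        # thresholds from cumulative marginal proportions
        # (finite +/-8 stands in for +/- infinity)
        cum = np.array([np.mean(v <= c) for c in cats[:-1]])
        return np.concatenate([[-8.0], norm.ppf(cum), [8.0]]), cats
    a, xc = thresholds(x)
    b, yc = thresholds(y)
    counts = np.array([[np.sum((x == i) & (y == j)) for j in yc] for i in xc])
    best_r, best_ll = 0.0, -np.inf
    for r in grid:
        mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, r], [r, 1.0]])
        F = np.array([[mvn.cdf([ai, bj]) for bj in b] for ai in a])
        # rectangle probabilities for each cell of the contingency table
        P = F[1:, 1:] - F[:-1, 1:] - F[1:, :-1] + F[:-1, :-1]
        ll = np.sum(counts * np.log(np.clip(P, 1e-12, None)))
        if ll > best_ll:
            best_r, best_ll = r, ll
    return best_r

# discretize correlated latent normals into three ordered categories each
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=2000)
x = np.digitize(z[:, 0], [-0.5, 0.5])
y = np.digitize(z[:, 1], [-0.5, 0.5])
rho_hat = polychoric_grid(x, y)  # recovers a value near the true 0.5
```

In practice one would use a proper optimizer and the two-step or full ML estimators implemented in standard SEM software; the grid search is only meant to make the estimand concrete.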

**Approaches to handle identification problems in analysis of single indicator MTMM data**. (251)

*Soonmook Lee and Inhye Kim*

It has been widely recognized that the confirmatory factor analysis (CFA) approach to specifying method factors suffers from empirical identification problems in analyzing the covariance structure of multitrait-multimethod (MTMM) data with single indicators. Although specifying method effects as correlated uniquenesses could avoid the problems in part, we are more interested in remedying this problem within the framework of the CFA approach because of theoretical and substantive issues. It is also desirable to keep all possible method factors in the model rather than removing one of them, as shown elsewhere. In response to this demand, introducing a (spurious) mean structure to the CFA model has been proposed (Williams, 2007), increasing the degrees of freedom in the model. Another approach is to introduce a random intercept factor that is orthogonal to the factors in the current CFA model (Maydeu-Olivares & Coffman, 2006), decreasing the degrees of freedom in the model. We are interested in which approach would be more effective in remedying the potential problems of empirical nonidentification in CFA models of MTMM data with single indicators. These two approaches will be examined with simulated and empirical data.

**Likelihood based confidence intervals in exploratory factor analysis**. (235)

*Frans Oort*

Cudeck and O'Dell (1994) discuss how to estimate standard errors for the parameter estimates in unrestricted factor analysis. In maximum likelihood estimation, standard errors can be derived from the asymptotic covariance matrix of the initial factor loadings and the transformation parameters. The necessary partial derivatives are obtained through numerical approximation. Cudeck and O'Dell also present confidence intervals for parameter estimates. However, with maximum likelihood estimation it is also possible to obtain likelihood-based confidence intervals (Meeker & Escobar, 1995) for all parameter estimates. The purpose of the present paper is to show how likelihood-based confidence intervals can be obtained for all parameter estimates in rotated factor solutions. As an illustrative example, an oblique five-factor model will be fitted to the variance-covariance matrix of the 30 items of the NEO Personality Inventory, and intervals will be estimated for all 30 × 5 factor loadings and 10 factor correlations.
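A toy version of a likelihood-based interval in the Meeker and Escobar sense can be sketched for a one-parameter model (an exponential rate rather than a rotated factor loading; all names below are assumptions, and the factor-analytic case additionally requires profiling out nuisance parameters): the interval collects every parameter value whose log-likelihood lies within half the chi-square critical value of the maximum.

```python
import math

def loglik(lam, n, s):
    """Exponential log-likelihood: n*log(lam) - lam*sum(x)."""
    return n * math.log(lam) - lam * s

def likelihood_ci(x, crit=3.841):
    """95% likelihood-based CI for an exponential rate: all lam with
    2*(l(mle) - l(lam)) <= 3.841, the chi-square(1) critical value."""
    n, s = len(x), sum(x)
    mle = n / s
    lmax = loglik(mle, n, s)
    def g(lam):
        return 2.0 * (lmax - loglik(lam, n, s)) - crit
    def bisect(lo, hi):
        # g changes sign on [lo, hi]; halve the bracket until converged
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if g(lo) * g(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)
    return bisect(1e-9, mle), bisect(mle, 50.0 * mle)

x = [0.5, 1.2, 0.3, 2.0, 0.8, 1.5, 0.7, 1.1]
lo, hi = likelihood_ci(x)  # endpoints asymmetric around the MLE n/sum(x)
```

Unlike Wald intervals built from standard errors, the endpoints need not be symmetric around the estimate, which is the practical appeal of the likelihood-based construction.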

*References: Cudeck, R., & O'Dell, L. L. (1994). Applications of standard error estimates in unrestricted factor analysis: Significance tests for factor loadings and correlations. Psychological Bulletin, 115, 475-487; Meeker, W. Q., & Escobar, L. A. (1995). Teaching about approximate confidence regions based on maximum likelihood estimation. The American Statistician, 49, 48-53.*