1. You Say Tomato, I Say Radish: Can Brief Cognitive Assessments in the U.S. Health and Retirement Study Be Harmonized With Its International Partner Studies?
- Author
Kobayashi, Lindsay C; Gross, Alden L; Gibbons, Laura E; Tommet, Doug; Sanders, R Elizabeth; Choi, Seo-Eun; Mukherjee, Shubhabrata; Glymour, Maria; Manly, Jennifer J; Berkman, Lisa F; Crane, Paul K; Mungas, Dan M; and Jones, Richard N
- Subjects
Behavioral and Social Science; Aging; Basic Behavioral and Social Science; Mental health; Adolescent; Adult; Aged; Aged, 80 and over; Cognition; Cognitive Aging; Factor Analysis, Statistical; Female; Health Surveys; Humans; Longitudinal Studies; Male; Memory; Middle Aged; Models, Statistical; Multicenter Studies as Topic; Neuropsychological Tests; Psychometrics; Retirement; United States; Young Adult; Cognitive function; Health survey; International comparison; Item response theory; Statistical harmonization; Clinical Sciences; Sociology; Psychology; Gerontology
- Abstract
Objectives: To characterize the extent to which the brief cognitive assessments administered in the population-representative U.S. Health and Retirement Study (HRS) and its International Partner Studies can be considered to measure a single, unidimensional latent cognitive function construct. Methods: Cognitive function assessments were administered in face-to-face interviews in 12 studies across 26 countries (N = 155,690), including the U.S. HRS and selected International Partner Studies. We used the time point of the first cognitive assessment in each study to minimize differential practice effects across studies, and we documented cognitive test item coverage across studies. Using confirmatory factor analysis, we estimated single-factor general cognitive function models and bifactor models representing memory-specific and nonmemory-specific cognitive domains for each study, and we evaluated model fit and factor loadings across studies. Results: Despite relatively sparse and inconsistent cognitive item coverage across studies, every study had some cognitive test items in common with other studies. In all studies, the bifactor models with a memory-specific domain fit better than the single-factor general cognitive function models. The models fit the data at reasonable thresholds for the single-factor specification in 6 of the 12 studies and for the bifactor specification in all 12 studies. Discussion: The cognitive assessments in the U.S. HRS and its International Partner Studies reflect comparable underlying cognitive constructs. We discuss the assumptions underlying our methods, alternative approaches, and future directions for cross-national harmonization of cognitive aging data. (A notational sketch of the two model specifications follows this record.)
- Published
2021
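A minimal sketch of the two measurement models compared in the abstract, written in generic confirmatory factor analysis notation; the symbols are illustrative assumptions, not taken from the article. Here y_{ij} is the score of respondent i on cognitive test item j, \eta_i^{G} is a general cognitive function factor, \eta_i^{M} is a memory-specific factor assumed orthogonal to it, \lambda are factor loadings, and \varepsilon_{ij} is item-level error.

Single-factor model:
  y_{ij} = \lambda_j \, \eta_i^{G} + \varepsilon_{ij}

Bifactor model with a memory-specific domain:
  y_{ij} = \lambda_j^{G} \, \eta_i^{G} + \lambda_j^{M} \, \eta_i^{M} + \varepsilon_{ij},
  with \operatorname{Cov}(\eta^{G}, \eta^{M}) = 0 and \lambda_j^{M} = 0 for non-memory items.

Under this sketch, comparing fit of the two specifications within each study indicates whether memory items share variance beyond the general factor, which is the comparison the abstract reports.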