4,118 results for "computerized adaptive testing"
Search Results
2. ROAR-CAT: Rapid Online Assessment of Reading ability with Computerized Adaptive Testing.
- Author
-
Ma, Wanjing Anya, Richie-Halford, Adam, Burkhardt, Amy K., Kanopka, Klint, Chou, Clementine, Domingue, Benjamin W., and Yeatman, Jason D.
- Abstract
The Rapid Online Assessment of Reading (ROAR) is a web-based lexical decision task that measures single-word reading abilities in children and adults without a proctor. Here we study whether item response theory (IRT) and computerized adaptive testing (CAT) can be used to create a more efficient online measure of word recognition. To construct an item bank, we first analyzed data taken from four groups of students (N = 1960) who differed in age, socioeconomic status, and language-based learning disabilities. The majority of item parameters were highly consistent across groups (r =.78–.94), and six items that functioned differently across groups were removed. Next, we implemented a JavaScript CAT algorithm and conducted a validation experiment with 485 students in grades 1–8 who were randomly assigned to complete trials of all items in the item bank in either (a) a random order or (b) a CAT order. We found that, to achieve reliability of 0.9, CAT improved test efficiency by 40%: 75 CAT items produced the same standard error of measurement as 125 items in a random order. Subsequent validation in 32 public school classrooms showed that an approximately 3-min ROAR-CAT can achieve high correlations (r =.89 for first grade, r =.73 for second grade) with alternative 5–15-min individually proctored oral reading assessments. Our findings suggest that ROAR-CAT is a promising tool for efficiently and accurately measuring single-word reading ability. Furthermore, our development process serves as a model for creating adaptive online assessments that bridge research and practice. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
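The efficiency gain reported above comes from the standard CAT loop: estimate ability, administer the most informative remaining item, re-estimate, and stop once the standard error of measurement is low enough. A minimal sketch under a 2PL model (the item parameters and toy bank below are hypothetical, not ROAR-CAT's):

```python
import math

def p_correct(theta, a, b):
    """2PL probability that a person at ability theta answers an (a, b) item correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, bank, used):
    """Pick the unused item that is most informative at the current ability estimate."""
    return max((i for i in range(len(bank)) if i not in used),
               key=lambda i: fisher_info(theta, *bank[i]))

def standard_error(theta, administered, bank):
    """SEM = 1 / sqrt(total test information); a CAT stops when this drops below a target."""
    total = sum(fisher_info(theta, *bank[i]) for i in administered)
    return 1.0 / math.sqrt(total)
```

For a person at theta = 0 and a bank `[(1.0, -2.0), (1.0, 0.0), (1.0, 2.0)]`, `next_item` picks the middle-difficulty item, which is what makes CAT reach a target SEM with fewer items than a random ordering.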
3. Effect Of Content Balancing on Measurement Precision in Computer Adaptive Testing Applications.
- Author
-
ÜÇGÜL ÖCAL, İlkay and DOĞAN, Nuri
- Subjects
- *
STANDARD deviations , *COMPUTER adaptive testing , *MAXIMUM likelihood statistics , *EQUILIBRIUM testing , *GAUSSIAN distribution , *SAMPLE size (Statistics) - Abstract
This study aims to investigate the effect of content balancing, which involves equal and unequal weighting of content areas for dichotomous items in computerized adaptive testing (CAT), on measurement precision under different measurement conditions. In this simulation study, the small-sample condition comprised 250 individuals and the large-sample condition 500. The ability parameters of the individuals in each sample were generated to follow a normal distribution within the range of -3 to +3. Using the three-parameter logistic (3PL) item response model, a pool of 750 dichotomous items spanning five content areas was developed. The study considered sample size, ability estimation method (Maximum Likelihood Estimation and Expected A Posteriori), and termination rule (20 items, 60 items, and SE≤.30) as factors in the CAT algorithm for examining the effect of content balancing. For each CAT application, measurement precision was assessed by calculating the root mean square error (RMSE), bias, and fidelity coefficients, which were analyzed comparatively. The results showed that bias values were close to zero under all conditions. RMSE values were lowest when the test was terminated at 60 items across all conditions, while the standard-error termination rule and termination at 20 items produced similar values. Across all conditions, the highest fidelity coefficient was observed when the test terminated at 60 items; the fidelity coefficient did not vary significantly with the other variables. Implementing content balancing increased the average number of items by approximately one item across the ability estimation methods. While the average test length thus increased slightly with content balancing, measurement precision was maintained. 
Overall, the maximum item exposure rate decreased with content balancing when content areas were weighted equally, whereas it increased when they were weighted disproportionately. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
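Content balancing of the kind studied above is typically enforced by constraining item selection: before picking an item, choose the content area whose administered share falls furthest below its target weight, then select the most informative item within that area. A minimal sketch of the area-selection step (area names and target weights are hypothetical):

```python
def pick_area(counts, targets):
    """Choose the content area whose administered share falls furthest below its target.

    counts:  items administered so far per area, e.g. {"algebra": 3, "geometry": 1}
    targets: desired proportion of the test per area (weights sum to 1)
    """
    administered = sum(counts.values())
    def deficit(area):
        actual = counts[area] / administered if administered else 0.0
        return targets[area] - actual
    # Largest deficit wins; ties resolve in dict insertion order.
    return max(targets, key=deficit)
```

With equal targets this keeps area counts nearly uniform, which is consistent with the finding above that equal weighting lowers the maximum item exposure rate.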
4. An adaptive testing item selection strategy via a deep reinforcement learning approach.
- Author
-
Wang, Pujue, Liu, Hongyun, and Xu, Mingqi
- Subjects
- *
ARTIFICIAL neural networks , *DEEP reinforcement learning , *REINFORCEMENT learning , *DEEP learning , *ADAPTIVE testing , *COMPUTER adaptive testing - Abstract
Computerized adaptive testing (CAT) aims to present items that statistically optimize the assessment process by considering the examinee's responses and estimated trait levels. Recent developments in reinforcement learning and deep neural networks provide CAT with the potential to select items that utilize more information across all the items on the remaining tests, rather than just focusing on the next several items to be selected. In this study, we reformulate CAT under the reinforcement learning framework and propose a new item selection strategy based on the deep Q-network (DQN) method. Through simulated and empirical studies, we demonstrate how to monitor the training process to obtain the optimal Q-networks, and we compare the accuracy of the DQN-based item selection strategy with that of five traditional strategies—maximum Fisher information, Fisher information weighted by likelihood, Kullback‒Leibler information weighted by likelihood, maximum posterior weighted information, and maximum expected information—on both simulated and real item banks and responses. We further investigate how sample size and the distribution of the trait levels of the examinees used in training affect DQN performance. The results show that DQN achieves lower RMSE and MAE values than traditional strategies under simulated and real banks and responses in most conditions. Suggestions for the use of DQN-based strategies are provided, as well as their code. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
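Of the five traditional baselines named above, Fisher information weighted by likelihood lends itself to a compact sketch; the DQN strategy itself is beyond a short example. A minimal version under a 2PL model (the theta grid and item parameters are hypothetical):

```python
import math

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def likelihood(theta, responses):
    """Likelihood of observed responses; responses is a list of (a, b, x), x in {0, 1}."""
    L = 1.0
    for a, b, x in responses:
        p = p2pl(theta, a, b)
        L *= p if x else (1.0 - p)
    return L

def lw_info(item, responses, grid):
    """Fisher information of a candidate item, averaged over a theta grid
    with weights proportional to the response likelihood."""
    a, b = item
    num = sum(likelihood(t, responses) * a * a * p2pl(t, a, b) * (1.0 - p2pl(t, a, b))
              for t in grid)
    den = sum(likelihood(t, responses) for t in grid)
    return num / den
```

Unlike plain maximum Fisher information, this criterion hedges against an unreliable early theta estimate by spreading the evaluation over plausible theta values.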
5. Differential Performance of Computerized Adaptive Testing in Students With and Without Disabilities – A Simulation Study.
- Author
-
Ebenbeck, Nikola and Gebhardt, Markus
- Subjects
COMPUTER adaptive testing ,ADAPTIVE testing ,DIGITAL technology ,SPECIAL education ,DIGITAL learning - Abstract
Technologies that enable individualization for students have significant potential in special education. Computerized Adaptive Testing (CAT) refers to digital assessments that automatically adjust their difficulty level based on students' abilities, allowing for personalized, efficient, and accurate measurement. This article examines whether CAT performs differently for students with and without special educational needs (SEN). Two simulation studies were conducted using a sample of 709 third-grade students from general and special schools in Germany, who took a reading test. The results indicate that students with SEN were assessed with fewer items, reduced bias, and higher accuracy compared to students without SEN. However, measurement accuracy decreased, and test length increased for students whose abilities deviated more than two SD from the norm. We discuss potential adaptations of CAT for students with SEN in the classroom, as well as the integration of CAT with AI-supported feedback and tailored exercises within a digital learning environment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Computerized Adaptive Testing Framework Based on Excitation Block and Gumbel-Softmax
- Author
-
Chengsong Liu and Yan Wei
- Subjects
Computerized adaptive testing ,Gumbel-Softmax ,excitation block ,potential factors ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Computerized Adaptive Testing (CAT) is a personalized assessment method that adaptively selects the most suitable questions for students of different abilities based on their response data. Its primary goal is to effectively measure students’ proficiency in a specific subject in a shorter period. The selection algorithm is pivotal in CAT. Current algorithms inadequately consider the impact on question selection of knowledge-concept weights within questions and of student potential factors (e.g., memory). In addition, most algorithms focus primarily on accurately predicting students’ abilities, neglecting factors such as concept diversity and question exposure rate, which are essential for model effectiveness. Therefore, this paper introduces a new framework for CAT, GECAT. It proposes a selection algorithm based on an excitation block to learn the weight of each knowledge concept in a question and to analyze the impact of student potential factors on answering performance, thereby selecting more suitable questions for students. Additionally, it frames CAT as a reinforcement learning problem, introducing Gumbel-Softmax to provide students with diverse, non-repetitive, and valuable test questions. Experimental results on three real-world datasets demonstrate that the proposed CAT framework improves ACC and AUC by 0.71% and 0.86%, respectively, while reducing the question exposure rate and overlap rate by 1.33% and 1.59%, respectively.
- Published
- 2025
- Full Text
- View/download PDF
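The Gumbel-Softmax trick referenced above adds Gumbel noise to the selection logits and applies a temperature-controlled softmax, giving a differentiable relaxation of sampling one discrete item. A minimal standalone sketch (logits and temperature are hypothetical; in GECAT this would sit inside a neural selector):

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    """Return a softened one-hot distribution over candidates.

    Gumbel noise g = -log(-log(u)), u ~ Uniform(0, 1), makes argmax(logits + g)
    an exact sample from softmax(logits); dividing by a temperature tau and
    taking a softmax yields a differentiable approximation of that sample.
    """
    g = [-math.log(-math.log(rng.random())) for _ in logits]  # assumes u in (0, 1)
    z = [(l + gi) / tau for l, gi in zip(logits, g)]
    m = max(z)  # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]
```

As tau approaches 0 the output approaches a one-hot vector (a hard item choice); larger tau keeps gradients smooth during training.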
7. Psychometric properties of computerized adaptive testing for chronic obstructive pulmonary disease patient-reported outcome measurement
- Author
-
Jiajia Wang, Yang Xie, Zhenzhen Feng, and Jiansheng Li
- Subjects
Chronic obstructive pulmonary disease ,Patient-reported outcome ,Computerized adaptive testing ,Item response theory ,Reliability ,Validity ,Computer applications to medicine. Medical informatics ,R858-859.7 - Abstract
Abstract Background Computerized adaptive testing (CAT) is an effective way to reduce testing time, item redundancy, and response burden, and has been used to measure outcomes in many diseases. This study aimed to develop and validate a comprehensive disease-specific CAT for chronic obstructive pulmonary disease (COPD) patient-reported outcome measurement. Methods The discrimination and difficulty of the items from the modified patient-reported outcome scale for COPD (mCOPD-PRO) were analyzed using item response theory. The initial item, item selection method, ability estimation method, and stopping criteria were then set on the Concerto platform to form the CAT. Finally, reliability and validity were evaluated. Results The item discrimination ranged from 1.05 to 2.71, and the item difficulty ranged from − 3.08 to 3.65. The measurement reliability of the CAT ranged from 0.910 to 0.922 using the random method, and from 0.910 to 0.924 using the maximum Fisher information (MFI) method. Content validity was good. The correlation coefficients between the CAT theta and the COPD assessment test and modified Medical Research Council dyspnea scale scores were 0.628 and 0.540 (P < 0.001; P < 0.001) using the random method, and 0.347 and 0.328 (P = 0.007; P = 0.010) using the MFI method. On average, about 11 items (a 59.3% reduction) were administered using the random method, and about seven items (a 74.1% reduction) using the MFI method. The correlation coefficient between the CAT theta and mCOPD-PRO total scores was 0.919 (P < 0.001) using the random method and 0.760 (P < 0.001) using the MFI method. Conclusions The comprehensive disease-specific CAT for COPD patient-reported outcome measurement was well developed, with good psychometric properties, and can provide an efficient, accurate, and user-friendly measurement of patient-reported outcomes in COPD.
- Published
- 2024
- Full Text
- View/download PDF
8. A two‐step item bank calibration strategy based on 1‐bit matrix completion for small‐scale computerized adaptive testing.
- Author
-
Shen, Yawei, Wang, Shiyu, and Xiao, Houping
- Subjects
- *
COMPUTER adaptive testing , *ADAPTIVE testing , *ITEM response theory , *MISSING data (Statistics) , *PARAMETER estimation , *MULTIPLE imputation (Statistics) - Abstract
Computerized adaptive testing (CAT) is a widely embraced approach for delivering personalized educational assessments, tailoring each test to the real‐time performance of individual examinees. Despite its potential advantages, CAT's application in small‐scale assessments has been limited due to the complexities associated with calibrating the item bank using sparse response data and small sample sizes. This study addresses these challenges by developing a two‐step item bank calibration strategy that leverages the 1‐bit matrix completion method in conjunction with two distinct incomplete pretesting designs. We introduce two novel 1‐bit matrix completion‐based imputation methods specifically designed to tackle the issues associated with item calibration in the presence of sparse response data and limited sample sizes. To demonstrate the effectiveness of these approaches, we conduct a comparative assessment against several established item parameter estimation methods capable of handling missing data. This evaluation is carried out through two sets of simulation studies, each featuring different pretesting designs, item bank structures, and sample sizes. Furthermore, we illustrate the practical application of the methods investigated, using empirical data collected from small‐scale assessments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
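To give a feel for the imputation setting above, here is a deliberately crude stand-in that fills a missing entry of a sparse 0/1 response matrix from person and item marginals; the article's 1-bit matrix completion methods are low-rank approaches and substantially more sophisticated. The toy response matrix is hypothetical:

```python
def impute_probability(resp, i, j):
    """Crude marginal-based stand-in: estimate P(person i answers item j correctly)
    as the average of person i's observed success rate and item j's observed
    success rate. (The article's 1-bit matrix completion methods are low-rank
    models; this only illustrates filling a sparse binary response matrix.)

    resp: list of rows; entries are 1, 0, or None for unobserved responses."""
    row = [x for x in resp[i] if x is not None]
    col = [r[j] for r in resp if r[j] is not None]
    return (sum(row) / len(row) + sum(col) / len(col)) / 2.0
```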
9. The General Psychopathology 'p' Factor in Adolescence: Multi-Informant Assessment and Computerized Adaptive Testing.
- Author
-
Jones, Jason D., Boyd, Rhonda C., Sandro, Akira Di, Calkins, Monica E., Los Reyes, Andres De, Barzilay, Ran, Young, Jami F., Benton, Tami D., Gur, Ruben C., Moore, Tyler M., and Gur, Raquel E.
- Subjects
- *
COMPUTER adaptive testing , *ADAPTIVE testing , *ITEM response theory , *PATHOLOGICAL psychology , *MODEL theory - Abstract
Accumulating evidence supports the presence of a general psychopathology dimension, the p factor ('p'). Despite growing interest in the p factor, questions remain about how p is assessed. Although multi-informant assessment of psychopathology is commonplace in clinical research and practice with children and adolescents, almost no research has taken a multi-informant approach to studying youth p or has examined the degree of concordance between parent and youth reports. Further, estimating p requires assessment of a large number of symptoms, resulting in high reporter burden that may not be feasible in many clinical and research settings. In the present study, we used bifactor multidimensional item response theory models to estimate parent- and adolescent-reported p in a large community sample of youth (11–17 years) and parents (N = 5,060 dyads). We examined agreement between parent and youth p scores and associations with assessor-rated youth global functioning. We also applied computerized adaptive testing (CAT) simulations to parent and youth reports to determine whether adaptive testing substantially alters agreement on p or associations with youth global functioning. Parent-youth agreement on p was moderate (r =.44) and both reports were negatively associated with youth global functioning. Notably, 7 out of 10 of the highest loading items were common across reporters. CAT reduced the average number of items administered by 57%. Agreement between CAT-derived p scores was similar to the full form (r =.40) and CAT scores were negatively correlated with youth functioning. These novel results highlight the promise and potential clinical utility of a multi-informant p factor approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Investigating Approaches to Controlling Item Position Effects in Computerized Adaptive Tests.
- Author
-
Ma, Ye and Harris, Deborah J.
- Subjects
- *
ADAPTIVE testing , *ITEM response theory , *PSYCHOMETRICS , *COMPUTER adaptive testing - Abstract
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. The majority of previous research studies have focused on investigating IPE under linear testing. There is a lack of IPE research under adaptive testing. In addition, the existence of IPE might violate Item Response Theory (IRT)’s item parameter invariance assumption, which facilitates applications of IRT in various psychometric tasks such as computerized adaptive testing (CAT). Ignoring IPE might lead to issues such as inaccurate ability estimation in CAT. This article extends research on IPE by proposing and evaluating approaches to controlling position effects under an item‐level computerized adaptive test via a simulation study. The results show that adjusting IPE via a pretesting design (approach 3) or a pool design (approach 4) results in better ability estimation accuracy compared to no adjustment (baseline approach) and item‐level adjustment (approach 2). Practical implications of each approach as well as future research directions are discussed as well. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
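One simple way to formalize an item position effect is to let an item's difficulty drift with its administration position, which is exactly the kind of parameter non-invariance described above. The linear-drift 2PL below is a hypothetical illustration, not one of the article's four approaches:

```python
import math

def p_correct_with_position(theta, a, b, pos, drift):
    """2PL with a hypothetical linear position effect: an item administered at
    position pos behaves as if its difficulty were b + drift * pos, so a
    positive drift (e.g., fatigue) makes late items effectively harder."""
    b_eff = b + drift * pos
    return 1.0 / (1.0 + math.exp(-a * (theta - b_eff)))
```

Ignoring the drift term while scoring (i.e., using b instead of b_eff) is what produces the biased ability estimates that the article's pretesting-design and pool-design adjustments aim to prevent.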
11. Scoring the Outcomes of 1-3-3 Multistage Model of Computerised Adaptive Testing.
- Author
-
Razmadze, S.
- Subjects
- *
COMPUTER adaptive testing , *ADAPTIVE testing , *GAUSSIAN distribution , *EVALUATION methodology , *COMPARATIVE studies - Abstract
This paper presents an original method for evaluating the outcomes of adaptive testing under a multilevel (multistage) testing strategy. The set of possible testing outcomes consists of atypical elements of differing dimensionality; the paper defines criteria for comparing them, describes principles for ordering the set, and derives a final score. The ordering method is applied to outcome sets of the 1-3-3 multistage testing (MST) model and is used to estimate the results of computerized adaptive testing (CAT). The method is not tied to a specific testing procedure, as its use with the 1-3-3 model described in the paper confirms. To sort the set of testing outcomes, the function-criteria described in the initial article are used, and a comparative analysis of the obtained results is performed. The criterion for ordering testing outcomes need not be unique; the paper illustrates this through a comparative discussion of two samples. An original testing procedure is used to present the essence of the method; it is intended to be illustrative, since the described assessment method can be applied to similar strategies. The ordered outcome set is scored on a hundred-point scale according to the normal distribution. The applied results of this research have been developed as the "Adaptester" portal, available at: https://adaptester.com. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Methodological aspects of the highly adaptive testing design for PISA.
- Author
-
Fink, Aron, König, Christoph, and Frey, Andreas
- Subjects
ADAPTIVE testing ,ITEM response theory ,COMPUTER adaptive testing ,ALGORITHMS ,HAT design & hat making ,TEST design - Abstract
This methods paper describes the methodological and statistical underpinnings of the highly adaptive testing design (HAT), which was developed for the Programme for International Student Assessment (PISA). The aim of HAT is to allow for a maximum of adaptivity in selecting items while taking the constraints of PISA into account with appropriate computer algorithms. HAT combines established methods from the area of computerized adaptive testing (a) to improve item selection when items are nested in units, (b) to make use of the correlation between the dimensions measured, (c) to efficiently accomplish constraint management, (d) to control for item position effects, and (e) to foster students' test-taking experience. The algorithm is implemented using the programming language R and readers are provided with the necessary code. This should facilitate future implementations of the HAT design and inspire other adaptive testing designs that aim to maximize adaptivity while meeting constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Adaptive measurement of cognitive function based on multidimensional item response theory.
- Author
-
Gibbons, Robert D., Lauderdale, Diane S., Wilson, Robert S., Bennett, David A., Arar, Tesnim, and Gallo, David A.
- Subjects
COGNITIVE testing ,COMPUTER adaptive testing ,ADAPTIVE testing ,COGNITIVE processing speed ,ITEM response theory - Abstract
INTRODUCTION: Up to 20% of older adults in the United States have mild cognitive impairment (MCI), and about one‐third of people with MCI are predicted to transition to Alzheimer's disease (AD) within 5 years. Standard cognitive assessments are long and require a trained technician to administer. We developed the first computerized adaptive test (CAT) based on multidimensional item response theory (MIRT) to more precisely, rapidly, and repeatedly assess cognitive abilities across the adult lifespan. We present results for a prototype CAT (pCAT‐COG) for assessment of global cognitive function. METHODS: We sampled items across five cognitive domains central to neuropsychological testing (episodic memory [EM], semantic memory/language [SM], working memory [WM], executive function/flexible thinking, and processing speed [PS]). The item bank consists of 54 items, with 9 items of varying difficulty drawn from each of six different cognitive tasks. Each of the 54 items has 3 response trials, yielding an ordinal score (0–3 trials correct). We also include three long‐term memory items not designed for adaptive administration, for a total bank of 57 items. Calibration data were collected in person and online, calibrated using a bifactor MIRT model, and pCAT‐COG scores were validated against a technician‐administered neuropsychological battery. RESULTS: The bifactor MIRT model improved fit over a unidimensional IRT model (p < 0.0001). The global pCAT‐COG scores were inversely correlated with age (r = –0.44, p < 0.0001). Simulated adaptive administration of 11 items maintained a correlation of r = 0.94 with the total item bank scores. Significant differences between mild and no cognitive impairment (NCI) were found (effect size of 1.08 SD units). The pCAT‐COG correlated with a clinician‐based global measure (r = 0.64). 
DISCUSSION: MIRT‐based CAT is feasible and valid for the assessment of global cognitive impairment, laying the foundation for the development of a full CAT‐COG that will draw from a much larger item bank with both global and domain-specific measures of cognitive impairment. Highlights: As Americans age, the number at risk for developing cognitive impairment is increasing. Aging‐related declines in cognition begin decades prior to the onset of obvious cognitive impairment. Traditional assessment is burdensome and requires trained clinicians. We developed an adaptive testing framework using multidimensional item response theory. It is comparable to lengthier in‐person assessments that require trained psychometrists. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Psychometric properties of computerized adaptive testing for chronic obstructive pulmonary disease patient-reported outcome measurement.
- Author
-
Wang, Jiajia, Xie, Yang, Feng, Zhenzhen, and Li, Jiansheng
- Subjects
COMPUTER adaptive testing ,ITEM response theory ,CHRONIC obstructive pulmonary disease ,ADAPTIVE testing ,PSYCHOMETRICS - Abstract
Background: Computerized adaptive testing (CAT) is an effective way to reduce testing time, item redundancy, and response burden, and has been used to measure outcomes in many diseases. This study aimed to develop and validate a comprehensive disease-specific CAT for chronic obstructive pulmonary disease (COPD) patient-reported outcome measurement. Methods: The discrimination and difficulty of the items from the modified patient-reported outcome scale for COPD (mCOPD-PRO) were analyzed using item response theory. The initial item, item selection method, ability estimation method, and stopping criteria were then set on the Concerto platform to form the CAT. Finally, reliability and validity were evaluated. Results: The item discrimination ranged from 1.05 to 2.71, and the item difficulty ranged from − 3.08 to 3.65. The measurement reliability of the CAT ranged from 0.910 to 0.922 using the random method, and from 0.910 to 0.924 using the maximum Fisher information (MFI) method. Content validity was good. The correlation coefficients between the CAT theta and the COPD assessment test and modified Medical Research Council dyspnea scale scores were 0.628 and 0.540 (P < 0.001; P < 0.001) using the random method, and 0.347 and 0.328 (P = 0.007; P = 0.010) using the MFI method. On average, about 11 items (a 59.3% reduction) were administered using the random method, and about seven items (a 74.1% reduction) using the MFI method. The correlation coefficient between the CAT theta and mCOPD-PRO total scores was 0.919 (P < 0.001) using the random method and 0.760 (P < 0.001) using the MFI method. Conclusions: The comprehensive disease-specific CAT for COPD patient-reported outcome measurement was well developed, with good psychometric properties, and can provide an efficient, accurate, and user-friendly measurement of patient-reported outcomes in COPD. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Optimizing Oxford Shoulder Scores with computerized adaptive testing reduces redundancy while maintaining precision: an NHS England National Joint Registry analysis
- Author
-
Ahmed Barakat, Jonathan Evans, Christopher Gibbons, and Harvinder P. Singh
- Subjects
oxford shoulder score ,item response theory ,computerized adaptive testing ,patient-reported outcome measures ,national joint registry ,oxford shoulder scores ,patient-reported outcome measures (proms) ,root mean square ,shoulder surgeries ,shoulder ,arthroplasty ,shoulder arthroplasties ,intraclass correlation coefficient (icc) ,oxford knee score ,Diseases of the musculoskeletal system ,RC925-935 - Abstract
Aims: The Oxford Shoulder Score (OSS) is a 12-item measure commonly used for the assessment of shoulder surgeries. This study explores whether computerized adaptive testing (CAT) provides a shortened, individually tailored questionnaire while maintaining test accuracy. Methods: A total of 16,238 preoperative OSS were available in the National Joint Registry (NJR) for England, Wales, Northern Ireland, the Isle of Man, and the States of Guernsey dataset (April 2012 to April 2022). Prior to CAT, the foundational item response theory (IRT) assumptions of unidimensionality, monotonicity, and local independence were established. CAT compared sequential item selection with stopping criteria set at standard error (SE) < 0.32 and SE < 0.45 (equivalent to reliability coefficients of 0.90 and 0.80) to full-length patient-reported outcome measure (PROM) precision. Results: Confirmatory factor analysis (CFA) for unidimensionality exhibited satisfactory fit with root mean square standardized residual (RSMSR) of 0.06 (cut-off ≤ 0.08) but not with comparative fit index (CFI) of 0.85 or Tucker-Lewis index (TLI) of 0.82 (cut-off > 0.90). Monotonicity, measured by H value, yielded 0.482, signifying good monotonic trends. Local independence was generally met, with Yen’s Q3 statistic > 0.2 for most items. The median item count for completing the CAT simulation with a SE of 0.32 was 3 (IQR 3 to 12), while for a SE of 0.45 it was 2 (IQR 2 to 6). This constituted only 25% and 16%, respectively, when compared to the 12-item full-length questionnaire. Conclusion: Calibrating IRT for the OSS has resulted in the development of an efficient and shortened CAT while maintaining accuracy and reliability. Through the reduction of redundant items and implementation of a standardized measurement scale, our study highlights a promising approach to alleviate time burden and potentially enhance compliance with these widely used outcome measures. Cite this article: Bone Joint Res 2024;13(8):392–400.
- Published
- 2024
- Full Text
- View/download PDF
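The stopping thresholds above follow from a standard IRT identity: with the latent trait scaled to variance 1, reliability = 1 − SE². A one-line check of the correspondence the abstract cites:

```python
def reliability_from_se(se):
    """With the latent trait scaled to variance 1, marginal reliability = 1 - SE^2.
    This is why stopping at SE < 0.32 and SE < 0.45 corresponds to reliability
    coefficients of ~0.90 and ~0.80, respectively."""
    return 1.0 - se * se
```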
16. A Test Assembly Method for Adaptive Testing Incorporating Response Times (结合作答时间的自适应测验组卷方法).
- Author
-
李弘, 罗照盛, and 喻晓锋
- Abstract
With the development of information technology and the widespread application of computerized adaptive testing (CAT), response time data collected during testing have become increasingly accessible, and their application in the field of psychological measurement has become more widespread. Applying response time data and models to automated test assembly (ATA) holds significant importance for improving testing efficiency, increasing the precision of examinees' ability estimation during testing, and promoting educational equity. This paper systematically reviews test assembly methods and research results in CAT, comprehensively presenting test assembly methods that incorporate response time data. It explores how such methods address issues related to test duration constraints, testing efficiency, item bank security, and differences in examinees' response times. Further research on ATA methods that incorporate response time data, as well as the application of research findings to large-scale assessment projects in China, are important directions for future work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
17. Utilizing Real-Time Test Data to Solve Attenuation Paradox in Computerized Adaptive Testing to Enhance Optimal Design.
- Author
-
Chen, Jyun-Hong and Chao, Hsiu-Yi
- Abstract
To solve the attenuation paradox in computerized adaptive testing (CAT), this study proposes an item selection method, the integer programming approach based on real-time test data (IPRD), to improve test efficiency. The IPRD method turns information about the ability distribution of the population, drawn from real-time test data, into feasible test constraints used to reverse-assemble shadow tests for item selection, preventing the attenuation paradox via integer programming. A simulation study was conducted to thoroughly investigate IPRD performance. The results indicate that the IPRD method can efficiently improve CAT performance in terms of the precision of trait estimation and the satisfaction of all required test constraints, especially under stringent exposure control. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
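The shadow-test idea behind IPRD assembles, at each step, a full-length test that satisfies every constraint and then administers the best item from it. The sketch below is a brute-force stand-in for the integer-programming step, workable only for tiny banks; the area labels, information values, and constraints are hypothetical:

```python
from itertools import combinations

def shadow_test(areas, info, length, min_per_area):
    """Brute-force stand-in for the integer program: among all item subsets of the
    required length that meet the per-area minimums, return the subset with the
    largest total information. (A real shadow-test CAT solves this with an
    integer-programming solver, not enumeration.)

    areas:        content-area label per bank item, e.g. ["A", "A", "B", "B"]
    info:         Fisher information per bank item at the current theta estimate
    length:       required test length
    min_per_area: minimum item count per content area
    """
    best, best_info = None, -1.0
    for subset in combinations(range(len(areas)), length):
        counts = {}
        for i in subset:
            counts[areas[i]] = counts.get(areas[i], 0) + 1
        if all(counts.get(a, 0) >= m for a, m in min_per_area.items()):
            total = sum(info[i] for i in subset)
            if total > best_info:
                best, best_info = subset, total
    return best
```

The CAT then administers the most informative not-yet-given item of the returned subset and re-solves after each response.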
18. Efficiency of computerized adaptive testing with a cognitively designed item bank.
- Author
-
Hao Luo and Xiangdong Yang
- Subjects
COMPUTER adaptive testing ,STANDARD deviations - Abstract
An item bank is key to applying computerized adaptive testing (CAT). The traditional approach to developing an item bank requires content experts to design each item individually, which is a time-consuming and costly process. The cognitive design system (CDS) approach offers a solution by automating item generation. However, the CDS approach has a specific way of calibrating or predicting item difficulty that affects the measurement efficiency of CAT. A simulation study was conducted to compare the efficiency of CAT using both calibration and prediction models. The results show that, although the predictive model (linear logistic trait model; LLTM) shows a higher root mean square error (RMSE) than the baseline model (Rasch), it requires only a few additional items to achieve comparable RMSE. Importantly, the number of additional items needed decreases as the explanatory rate of the model increases. These results indicate that the slight reduction in measurement efficiency due to predicted item difficulty is acceptable. Moreover, the use of predicted item difficulty can significantly reduce or even eliminate the need for item pretesting, thereby reducing the costs associated with item calibration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
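The LLTM mentioned above models item difficulty as a weighted sum of cognitive-operation features, so new items can be assigned a predicted difficulty without pretesting. As a rough stand-in for its calibration, the sketch below fits the feature weights by ordinary least squares (the real LLTM estimates them inside the IRT likelihood; the feature matrix is hypothetical):

```python
def solve(A, b):
    """Solve the linear system A w = b by Gauss-Jordan elimination with pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def predict_difficulty(features, difficulties, new_item):
    """LLTM-style sketch: fit weights for cognitive-operation features by OLS
    (normal equations), then predict a new item's difficulty from its features."""
    k = len(features[0])
    xtx = [[sum(f[i] * f[j] for f in features) for j in range(k)] for i in range(k)]
    xty = [sum(f[i] * d for f, d in zip(features, difficulties)) for i in range(k)]
    w = solve(xtx, xty)
    return sum(wi * fi for wi, fi in zip(w, new_item))
```

The better the features explain calibrated difficulties (the "explanatory rate" above), the closer such predictions come to pretested parameters.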
19. Comparison of CAT Procedures at Low Ability Levels: A Simulation Study and Analysis in the Context of Students with Disabilities.
- Author
-
Şenel, Selma
- Subjects
COMPUTER adaptive testing ,MONTE Carlo method ,ADAPTIVE testing ,MAXIMUM likelihood statistics ,STUDENTS with disabilities ,GAUSSIAN distribution
- Published
- 2024
- Full Text
- View/download PDF
20. Investigating The Performance of Item Selection Algorithms in Cognitive Diagnosis Computerized Adaptive Testing.
- Author
-
AŞİRET, Semih and ÖMÜR SÜNBÜL, Seçil
- Subjects
- *
COGNITIVE ability , *COMPUTER adaptive testing , *ALGORITHMS , *PROBABILITY theory , *MATRICES (Mathematics) - Abstract
This study aimed to examine the performance of item selection algorithms in terms of measurement accuracy and computational time, using factors such as test length, number of attributes, and item quality in fixed-length CD-CAT, and in terms of average test length and computational time, using factors such as number of attributes and item quality in variable-length CD-CAT. Two separate simulation studies were conducted for the fixed- and variable-length tests. Item responses were generated according to the DINA model. Two item banks, each consisting of 480 items, were generated for 5 and 6 attributes and used for both the fixed- and variable-length tests. The Q-matrix was generated item by item and attribute by attribute. In the study, 3000 examinees were generated such that each examinee had a 50% chance of mastering each attribute. The cognitive patterns of the examinees were estimated using MAP. In the variable-length CD-CAT, the first-highest posterior probability threshold was 0.80 and the second-highest posterior probability threshold was 0.10. The CD-CAT administration and other analyses were conducted using R 3.6.1. For the fixed-length CD-CAT, it was concluded that an increase in the number of attributes resulted in a decrease in the pattern recovery rates of the item selection algorithms; conversely, these rates improved with higher item quality and longer test lengths. The highest pattern recovery rates were obtained from the JSD and MPWKL algorithms. In the variable-length CD-CAT, it was concluded that the average test length increased with the number of attributes and decreased with higher item quality. Across all conditions, the JSD algorithm yielded the shortest average test length. Additionally, the GDI algorithm had the shortest computation time in all scenarios, whereas the MPWKL algorithm exhibited the longest.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. The feasibility of computerized adaptive testing of the national benchmark test: A simulation study.
- Author
-
Ayanwale, Musa Adekunle and Ndlovu, Mdutshekelwa
- Subjects
COVID-19 pandemic ,ITEM response theory ,MONTE Carlo method ,COMPUTER software ,ALGORITHMS - Abstract
The COVID-19 pandemic has had a significant impact on high-stakes testing, including the national benchmark tests in South Africa. Current linear testing formats have been criticized for their limitations, leading to a shift towards Computerized Adaptive Testing (CAT). Assessments with CAT are more precise and take less time. Evaluation of CAT programs requires simulation studies. To assess the feasibility of implementing CAT in NBTs, SimulCAT, a simulation tool, was utilized. The SimulCAT simulation involved creating 10,000 examinees with a normal distribution characterized by a mean of 0 and a standard deviation of 1. A pool of 500 test items was employed, and specific parameters were established for the item selection algorithm, CAT administration rules, item exposure control, and termination criteria. The termination criteria required a standard error of less than 0.35 to ensure accurate ability estimation. The findings from the simulation study demonstrated that fixed-length tests provided higher testing precision without any systematic error, as indicated by measurement statistics like CBIAS, CMAE, and CRMSE. However, fixed-length tests exhibited a higher item exposure rate, which could be mitigated by selecting items with fewer dependencies on specific item parameters (a-parameters). On the other hand, variable-length tests demonstrated increased redundancy. Based on these results, CAT is recommended as an alternative approach for conducting NBTs due to its capability to accurately measure individual abilities and reduce testing duration. For high-stakes assessments like the NBTs, fixed-length tests are preferred as they offer superior testing precision while minimizing item exposure rates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
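The simulation setup described in this abstract (examinees drawn from N(0, 1), a 500-item pool, and termination once the standard error falls below 0.35) can be sketched as a minimal CAT loop. The 2PL item parameters, EAP quadrature grid, and maximum-information selection below are illustrative assumptions, not SimulCAT's actual internals.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 2PL item bank; sizes mirror the abstract, values are made up.
N_ITEMS = 500
a = rng.uniform(0.5, 2.0, N_ITEMS)   # discrimination
b = rng.normal(0.0, 1.0, N_ITEMS)    # difficulty

GRID = np.linspace(-4.0, 4.0, 81)        # quadrature grid for EAP
PRIOR = np.exp(-0.5 * GRID ** 2)         # standard-normal prior, unnormalized

def p_correct(theta, j):
    return 1.0 / (1.0 + np.exp(-a[j] * (theta - b[j])))

def simulate_cat(true_theta, se_target=0.35, max_len=50):
    used = []
    post = PRIOR.copy()
    theta_hat, se = 0.0, float("inf")
    items = np.arange(N_ITEMS)
    while se > se_target and len(used) < max_len:
        # maximum Fisher information at the current theta estimate
        p = p_correct(theta_hat, items)
        info = a ** 2 * p * (1.0 - p)
        info[used] = -np.inf
        j = int(np.argmax(info))
        x = rng.random() < p_correct(true_theta, j)  # simulated response
        pj = p_correct(GRID, j)
        post = post * (pj if x else 1.0 - pj)        # Bayesian update
        w = post / post.sum()
        theta_hat = float((GRID * w).sum())          # EAP estimate
        se = float(np.sqrt(((GRID - theta_hat) ** 2 * w).sum()))
        used.append(j)
    return theta_hat, se, len(used)

theta_hat, se, n = simulate_cat(true_theta=0.5)
```

With a bank this size, the loop typically stops well before the 50-item cap, which is the efficiency argument the abstract makes for CAT.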
22. 计算机自适应测验有效性检验的探索与优化 [Exploration and Optimization of Validity Testing for Computerized Adaptive Tests].
- Author
-
李心钰, 王 超, and 陆 宏
- Abstract
Copyright of Modern Educational Technology is the property of Editorial Board of Modern Educational Technology, Tsinghua University and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
23. Face and content validity of a mobile delirium screening tool adapted for use in the medical setting (eDIS‐MED): Welcome to the machine.
- Author
-
Eeles, Eamonn, Tronstad, Oystein, Teodorczuk, Andrew, Flaws, Dylan, Fraser, John F, and Dissanayaka, Nadeeka
- Subjects
MOBILE apps ,COMPUTER adaptive testing ,ELECTRIC power supplies to apparatus ,RESEARCH funding ,RESEARCH evaluation ,READABILITY (Literary style) ,HOSPITALS ,POCKET computers ,DESCRIPTIVE statistics ,TEST validity ,DELIRIUM ,MEDICAL screening ,EVALUATION - Abstract
Objectives: Following a user‐centred redesign and refinement process of an electronic delirium screening tool (eDIS‐MED), further accuracy assessment was performed prior to anticipated testing in the clinical setting. Methods: Content validity of each of the existing questions was evaluated by an expert group in the domains of clarity, relevance and importance. Questions with a Content Validity Index (CVI) <0.80 were reviewed by the development group for potential revision. Items with CVI <0.70 were discarded. Next, face validity of the entirety of the tests was conducted and readability measured. Results: A panel of five clinical experts evaluated the test battery comprising eDIS‐MED. The content validity process endorsed 61 items. The overall scale CVI was 0.92. Eighty‐eight per cent of the responses with regard to question relevancy, usefulness and appropriateness were positive. The questions were deemed fifth grade level and very easy to read. Conclusions: A revised electronic screening tool was shown to be accurate according to an expert group. A clinical validation study is planned. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Computerized Adaptive Testing
- Author
-
Shenghong, Dong and Kan, Zhang, editor
- Published
- 2024
- Full Text
- View/download PDF
25. The Impact of Generating Model on Preknowledge Detection in CAT
- Author
-
Gorney, Kylie, Chen, Jianshen, Bay, Luz, Wiberg, Marie, Kim, Jee-Seon, Hwang, Heungsun, editor, Wu, Hao, editor, and Sweet, Tracy, editor
- Published
- 2024
- Full Text
- View/download PDF
26. Adaptive measurement of cognitive function based on multidimensional item response theory
- Author
-
Robert D. Gibbons, Diane S. Lauderdale, Robert S. Wilson, David A. Bennett, Tesnim Arar, and David A. Gallo
- Subjects
Alzheimer's disease ,bifactor model ,cognitive impairment ,computerized adaptive testing ,multidimensional item response theory ,neuropsychological assessment ,Neurology. Diseases of the nervous system ,RC346-429 ,Geriatrics ,RC952-954.6 - Abstract
Abstract INTRODUCTION Up to 20% of older adults in the United States have mild cognitive impairment (MCI), and about one‐third of people with MCI are predicted to transition to Alzheimer's disease (AD) within 5 years. Standard cognitive assessments are long and require a trained technician to administer. We developed the first computerized adaptive test (CAT) based on multidimensional item response theory (MIRT) to more precisely, rapidly, and repeatedly assess cognitive abilities across the adult lifespan. We present results for a prototype CAT (pCAT‐COG) for assessment of global cognitive function. METHODS We sampled items across five cognitive domains central to neuropsychological testing (episodic memory [EM], semantic memory/language [SM], working memory [WM], executive function/flexible thinking, and processing speed [PS]). The item bank consists of 54 items, with 9 items of varying difficulty drawn from six different cognitive tasks. Each of the 54 items has 3 response trials, yielding an ordinal score (0–3 trials correct). We also include three long‐term memory items not designed for adaptive administration, for a total bank of 57 items. Calibration data were collected in‐person and online, calibrated using a bifactor MIRT model, and pCAT‐COG scores validated against a technician‐administered neuropsychological battery. RESULTS The bifactor MIRT model improved fit over a unidimensional IRT model (p < 0.0001). The global pCAT‐COG scores were inversely correlated with age (r = –0.44, p < 0.0001). Simulated adaptive administration of 11 items maintained a correlation of r = 0.94 with the total item bank scores. Significant differences between mild and no cognitive impairment (NCI) were found (effect size of 1.08 SD units). The pCAT‐COG correlated with a clinician‐based global measure (r = 0.64).
DISCUSSION MIRT‐based CAT is feasible and valid for the assessment of global cognitive impairment, laying the foundation for the development of a full CAT‐COG that will draw from a much larger item bank with both global and domain‐specific measures of cognitive impairment. Highlights: As Americans age, the number at risk for developing cognitive impairment is increasing. Aging‐related declines in cognition begin decades prior to the onset of obvious cognitive impairment. Traditional assessment is burdensome and requires trained clinicians. We developed an adaptive testing framework using multidimensional item response theory. It is comparable to lengthier in‐person assessments that require trained psychometrists.
- Published
- 2024
- Full Text
- View/download PDF
27. Methodological aspects of the highly adaptive testing design for PISA
- Author
-
Aron Fink, Christoph König, and Andreas Frey
- Subjects
computerized adaptive testing ,PISA ,testing ,measurement ,item response theory ,Psychology ,BF1-990 - Abstract
This methods paper describes the methodological and statistical underpinnings of the highly adaptive testing design (HAT), which was developed for the Programme for International Student Assessment (PISA). The aim of HAT is to allow for a maximum of adaptivity in selecting items while taking the constraints of PISA into account with appropriate computer algorithms. HAT combines established methods from the area of computerized adaptive testing (a) to improve item selection when items are nested in units, (b) to make use of the correlation between the dimensions measured, (c) to efficiently accomplish constraint management, (d) to control for item position effects, and (e) to foster students’ test-taking experience. The algorithm is implemented using the programming language R and readers are provided with the necessary code. This should facilitate future implementations of the HAT design and inspire other adaptive testing designs that aim to maximize adaptivity while meeting constraints.
- Published
- 2024
- Full Text
- View/download PDF
29. Recommendation with item response theory
- Author
-
Veldkamp, Karel, Grasman, Raoul, and Molenaar, Dylan
- Published
- 2024
- Full Text
- View/download PDF
30. Initial Validation of a Computerized Adaptive Test for Substance Use Disorder Identification in Adolescents.
- Author
-
Adams, Zachary W., Hulvershorn, Leslie A., Smoker, Michael P., Marriott, Brigid R., Aalsma, Matthew C., and Gibbons, Robert D.
- Subjects
- *
SUBSTANCE abuse diagnosis , *COMPUTER adaptive testing , *MENTAL health , *RESEARCH funding , *RESEARCH methodology evaluation , *DESCRIPTIVE statistics , *TELEMEDICINE , *ODDS ratio , *VIDEOCONFERENCING , *CONFIDENCE intervals , *COMPARATIVE studies , *ADOLESCENCE - Abstract
Computerized adaptive tests (CATs) are highly efficient assessment tools that couple low patient and clinician time burden with high diagnostic accuracy. A CAT for substance use disorders (CAT-SUD-E) has been validated in adult populations but has yet to be tested in adolescents. The purpose of this study was to perform initial evaluation of the K-CAT-SUD-E (i.e., Kiddy-CAT-SUD-E) in an adolescent sample compared to a gold-standard diagnostic interview. Adolescents (N = 156; aged 11–17) with diverse substance use histories completed the K-CAT-SUD-E electronically and the substance related disorders portion of a clinician-conducted diagnostic interview (K-SADS) via tele-videoconferencing platform. The K-CAT-SUD-E assessed both current and lifetime overall SUD and substance-specific diagnoses for nine substance classes. Using the K-CAT-SUD-E continuous severity score and diagnoses to predict the presence of any K-SADS SUD diagnosis, the classification accuracy ranged from excellent for current SUD (AUC = 0.89, 95% CI = 0.81, 0.95) to outstanding (AUC = 0.93, 95% CI = 0.82, 0.97) for lifetime SUD. Regarding current substance-specific diagnoses, the classification accuracy was excellent for alcohol (AUC = 0.82), cannabis (AUC = 0.83) and nicotine/tobacco (AUC = 0.90). For lifetime substance-specific diagnoses, the classification accuracy ranged from excellent (e.g., opioids, AUC = 0.84) to outstanding (e.g., stimulants, AUC = 0.96). K-CAT-SUD-E median completion time was 4 min 22 s compared to 45 min for the K-SADS. This study provides initial support for the K-CAT-SUD-E as a feasible accurate diagnostic tool for assessing SUDs in adolescents. Future studies should further validate the K-CAT-SUD-E in a larger sample of adolescents and examine its acceptability, feasibility, and scalability in youth-serving settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Generalizing computerized adaptive testing for problematic mobile phone use from Chinese adults to adolescents.
- Author
-
Lei, Guo, Xiaorui, Liu, and Tour, Liu
- Subjects
COMPUTER adaptive testing ,ADAPTIVE testing ,CHINESE people ,CELL phones ,ITEM response theory ,ADULTS - Abstract
The number of mobile phone users worldwide has increased in recent years. As people spend more time on their phones, negative effects such as problematic mobile phone use (PMPU) have become more pronounced. Many researchers have dedicated their efforts to developing questionnaires and revising tools to evaluate PMPU more accurately. Previous studies have demonstrated that CAT-PMPU for adults could significantly enhance measurement accuracy and efficiency. However, most of its items were developed for adults, and there are notable differences between adults and adolescents, making some items potentially unsuitable for the latter. Thus, this study aimed to generalize the adult version of CAT-PMPU to make it suitable for both adult and adolescent populations. A total of 740 Chinese adolescents and 980 Chinese adults participated in this study, completing online or paper-and-pencil questionnaires. Empirical data were then used to simulate CAT-PMPU, and measurement efficiency, accuracy, and reliability were compared between adults and adolescents under different stopping rules. The results showed that the generalized CAT-PMPU had promising measurement efficiency and accuracy, consistent with the adult version. In conclusion, the CAT-PMPU developed in this study not only exhibited satisfactory test reliability but also provides novel technical support for evaluating PMPU in both adolescent and adult populations, demonstrating its potential applicability in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Novel item selection strategies for cognitive diagnostic computerized adaptive testing: A heuristic search framework.
- Author
-
Cao, Xi, Lin, Ying, Liu, Dong, Zheng, Fudan, and Duh, Henry Been-Lirn
- Subjects
- *
COMPUTER adaptive testing , *HEURISTIC , *COGNITIVE testing , *TEST design , *DIAGNOSIS methods - Abstract
The computerized adaptive form of cognitive diagnostic testing, CD-CAT, has gained increasing attention in the domain of personalized measurements for its ability to categorize individual mastery status of fine-grained attributes more accurately and efficiently through administering items tailored to one's ability progressively. How to select the next item based on previous response(s) is crucial for the success of CD-CAT. Previous item selection strategies for CD-CAT have often followed a greedy or semi-greedy approach, which makes it difficult to strike a balance between diagnostic performance and item bank utilization. To address this issue, this study takes a graph perspective and transforms the item selection problem in CD-CAT into a path-searching problem, in which paths refer to possible test construction and nodes refer to individual items. A heuristic function is defined to predict the prospect of a path, indicating how well the corresponding test can diagnose the current examinee. Two search mechanisms with different biases towards item exposure control are proposed to approximate the optimal path with the best prospect. The first unused item on the resulting path is selected as the next item. The above components compose a novel CD-CAT item selection framework based on heuristic search. Simulation studies are conducted under a variety of conditions regarding bank designs, bank-quality conditions, and testing scenarios. The results are compared with different types of classic item selection strategies in CD-CAT, showing that the proposed framework can enhance bank utilization at a smaller cost of diagnostic performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Computerized adaptive testing to screen pre-school children for emotional and behavioral problems.
- Author
-
Theunissen, Meinou H. C., Eekhout, Iris, and Reijneveld, Sijmen A.
- Subjects
- *
COMPUTER adaptive testing , *PRESCHOOL children , *EMOTIONAL problems of children , *BEHAVIOR disorders in children , *CHILD Behavior Checklist - Abstract
Questionnaires to detect emotional and behavioral (EB) problems in preventive child healthcare (PCH) should be short; this potentially affects their validity and reliability. Computerized adaptive testing (CAT) could overcome this weakness. The aim of this study was to (1) develop a CAT to measure EB problems among pre-school children and (2) assess the efficiency and validity of this CAT. We used a Dutch national dataset obtained from parents of pre-school children undergoing a well-child care assessment by PCH (n = 2192, response 70%). Data regarded 197 items on EB problems, based on four questionnaires, the Strengths and Difficulties Questionnaire (SDQ), the Child Behavior Checklist (CBCL), the Ages and Stages Questionnaire: Social Emotional (ASQ:SE), and the Brief Infant–Toddler Social and Emotional Assessment (BITSEA). Using 80% of the sample, we calculated item parameters necessary for a CAT and defined a cutoff for EB problems. With the remaining part of the sample, we used simulation techniques to determine the validity and efficiency of this CAT, using as criterion a total clinical score on the CBCL. Item criteria were met by 193 items. This CAT needed, on average, 16 items to identify children with EB problems. Sensitivity and specificity compared to a clinical score on the CBCL were 0.89 and 0.91, respectively, for total problems; 0.80 and 0.93 for emotional problems; and 0.94 and 0.91 for behavioral problems. Conclusion: A CAT is very promising for the identification of EB problems in pre-school children, as it seems to yield an efficient, yet high-quality identification. This conclusion should be confirmed by real-life administration of this CAT. What is Known: • Studies indicate the validity of using computerized adaptive test (CAT) applications to identify emotional and behavioral problems in school-aged children. • Evidence is as yet limited on whether CAT applications can also be used with pre-school children. 
What is New: • The results of this study show that a computerized adaptive test is very promising for the identification of emotional and behavior problems in pre-school children, as it appears to yield an efficient and high-quality identification. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
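The sensitivity and specificity figures reported above compare binary CAT screening decisions against a clinical CBCL criterion score. A minimal sketch of that computation, using toy data rather than the study's:

```python
def sens_spec(screen, criterion):
    """Sensitivity and specificity of a binary screen vs. a binary criterion."""
    tp = sum(s and c for s, c in zip(screen, criterion))
    fn = sum((not s) and c for s, c in zip(screen, criterion))
    tn = sum((not s) and (not c) for s, c in zip(screen, criterion))
    fp = sum(s and (not c) for s, c in zip(screen, criterion))
    return tp / (tp + fn), tn / (tn + fp)

# Toy illustration: 10 children; criterion = clinical score on the CBCL
screen    = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
criterion = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
sens, spec = sens_spec(screen, criterion)  # 0.75 and 5/6
```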
34. Multidimensional Hybrid Computerized Adaptive Testing Based on Multidimensional Item Response Theory
- Author
-
Mingyu Shao, Jianan Sun, Jingwen Li, Shiyu Wang, and Yinghui Lai
- Subjects
Ability estimation accuracy ,computerized adaptive testing ,item exposure control ,multidimensional hybrid computerized adaptive testing ,multidimensional item response theory ,multidimensional multistage adaptive testing ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Computerized adaptive testing (CAT) and multistage adaptive testing (MST) are widely used to deliver assessment questions in the fields of psychometrics, educational measurement, and medical assessment. Hybrid computerized adaptive testing (HCAT), a novel and flexible approach that incorporates both modular and adaptively selected items, effectively integrates CAT and MST and inherits their respective strengths. Current HCAT focuses on unidimensional assessments, yet practical applications often require multidimensional assessments. Multidimensional item response theory (MIRT) models can provide accurate measurement of examinees’ multidimensional latent traits. Based on MIRT models, this study proposes an innovative approach for constructing multidimensional hybrid computerized adaptive testing (MHCAT), aimed at better accommodating complex testing demands. Simulation studies were conducted to evaluate MHCAT using both dichotomous and polytomous items. Results indicated that the fixed-length MHCAT achieved estimation accuracy similar to the fixed-length multidimensional CAT (MCAT), and the variable-length MHCAT had slightly higher estimation accuracy than the variable-length MCAT. Regarding item exposure control, both the fixed-length and variable-length MHCAT performed better than the MCAT. Empirical studies further validated the feasibility of MHCAT with several MIRT models. In summary, the proposed MHCAT presents promising performance in assessing examinees’ abilities while maintaining satisfactory item exposure control, providing a valuable approach for multidimensional assessments.
- Published
- 2024
- Full Text
- View/download PDF
35. Closed formula of test length required for adaptive testing with medium probability of solution
- Author
-
T. Kárász, Judit, Széll, Krisztián, and Takács, Szabolcs
- Published
- 2023
- Full Text
- View/download PDF
36. Development and psychometric evaluation of item banks for memory and attention – supplements to the EORTC CAT Core instrument
- Author
-
AA Rogge, MA Petersen, NK Aaronson, T Conroy, L Dirven, F Fischer, EJJ Habets, JC Reijneveld, M Rose, C Sleurs, M Taphoorn, KA Tomaszewski, H Vachon, T Young, M Groenvold, and on behalf of the EORTC Quality of Life Group
- Subjects
Cancer ,Cognitive functioning ,Computerized adaptive testing ,EORTC QLQ-C30 ,Item bank ,Self-report ,Computer applications to medicine. Medical informatics ,R858-859.7 - Abstract
Abstract Background Cancer patients may experience a decrease in cognitive functioning before, during and after cancer treatment. So far, the Quality of Life Group of the European Organisation for Research and Treatment of Cancer (EORTC QLG) developed an item bank to assess self-reported memory and attention within a single, cognitive functioning scale (CF) using computerized adaptive testing (EORTC CAT Core CF item bank). However, the distinction between different cognitive functions might be important to assess the patients’ functional status appropriately and to determine treatment impact. To allow for such assessment, the aim of this study was to develop and psychometrically evaluate separate item banks for memory and attention based on the EORTC CAT Core CF item bank. Methods In a multistep process including an expert-based content analysis, we assigned 44 items from the EORTC CAT Core CF item bank to the memory or attention domain. Then, we conducted psychometric analyses based on a sample used within the development of the EORTC CAT Core CF item bank. The sample consisted of 1030 cancer patients from Denmark, France, Poland, and the United Kingdom. We evaluated measurement properties of the newly developed item banks using confirmatory factor analysis (CFA) and item response theory model calibration. Results Item assignment resulted in 31 memory and 13 attention items. Conducted CFAs suggested good fit to a 1-factor model for each domain and no violations of monotonicity or indications of differential item functioning. Evaluation of CATs for both memory and attention confirmed well-functioning item banks with increased power/reduced sample size requirements (for CATs ≥ 4 items and up to 40% reduction in sample size requirements in comparison to non-CAT format). Conclusion Two well-functioning and psychometrically robust item banks for memory and attention were formed from the existing EORTC CAT Core CF item bank. 
These findings could support further research on self-reported cognitive functioning in cancer patients in clinical trials as well as in real-world evidence settings. A more precise assessment of attention and memory deficits in cancer patients will strengthen the evidence on the effects of cancer treatment for different cancer entities, and therefore contribute to shared and informed clinical decision-making.
- Published
- 2023
- Full Text
- View/download PDF
37. An intelligent vocabulary size measurement method for second language learner
- Author
-
Tian Xia, Xuemin Chen, Hamid R. Parsaei, and Feng Qiu
- Subjects
Vocabulary size test ,Computerized adaptive testing ,Intelligent vocabulary size measurement ,Artificial neural network ,Long short-term memory ,Robot testers ,Language and Literature - Abstract
Abstract This paper presents a new method for accurately measuring the vocabulary size of second language (L2) learners. Traditional vocabulary size tests (VSTs) are limited in capturing a tester's vocabulary and are often population-specific. To overcome these issues, we propose an intelligent vocabulary size measurement method that utilizes large numbers of simulated robot testers. They are equipped with randomized, word-frequency-based vocabularies to simulate the varying vocabularies of L2 learners. An intelligent vocabulary size test (IVST) is developed to precisely measure vocabulary size for any population. The robot testers "take" the IVST, which dynamically generates quizzes of varying difficulty, adapted in real time to the tester's estimated vocabulary size by an artificial neural network (ANN) through iterative learning. The effectiveness of the IVST is verified against the robot testers' known (visible) vocabularies. Additionally, we apply a long short-term memory (LSTM) model to further enhance the method's performance. The proposed method has demonstrated high reliability and effectiveness, achieving accuracies of 98.47% for the IVST and 99.87% for the IVST with LSTM. This novel approach provides a more precise and reliable method for measuring vocabulary size in L2 learners compared to traditional VSTs, offering potential benefits to language learners and educators.
- Published
- 2023
- Full Text
- View/download PDF
38. New Dizziness Impact Measures of Positional, Functional, and Emotional Status Were Supported for Reliability, Validity, and Efficiency
- Author
-
Daniel Deutscher, PT, MScPT, PhD, Deanna Hayes, PT, DPT, MS, and Michael A. Kallen, PhD, MPH
- Subjects
Computerized Adaptive Testing ,Dizziness ,Dizziness Handicap Inventory ,Functional status ,Item response theory ,Patient-reported outcome measures ,Medicine (General) ,R5-920 - Abstract
Objective: To calibrate the 25 items from the Dizziness Handicap Inventory (DHI) patient-reported outcome measure (PROM), using item response theory (IRT), into 1 or more item banks, and assess reliability, validity, and administration efficiency of scores derived from computerized adaptive test (CAT) or short form (SF) administration modes. Design: Retrospective cohort study. Setting: Outpatient rehabilitation clinics. Participants: Patients (N=28,815; women=69%; mean age [SD]=60 [18]) included in a large national dataset and assessed for dizziness-related conditions who responded to all DHI items at intake. Interventions: Not applicable. Main Outcome Measures: IRT model assumptions of unidimensionality, local item independence, item fit, and presence of differential item functioning (DIF) were evaluated. Generated scores were assessed for reliability, validity, and administration efficiency. Results: Patients were treated in 976 clinics from 49 US states for either vestibular-, brain injury-, or neck-related impairments. Three unidimensional item banks were calibrated, creating 3 distinct PROMs for Dizziness Functional Status (DFS, 13 items), Dizziness Positional Status (DPS, 4 items), and Dizziness Emotional Status (DES, 6 items). Two items did not fit into any domain. A DFS-CAT and a DFS 7-item SF were developed. Except for 2 items by age groups and 1 item by main impairment, no items were flagged for DIF; DIF impact was negligible. Median reliability estimates were 0.91, 0.72, and 0.79 for the DFS, DPS, and DES, respectively. Scores discriminated between patient groups in clinically logical ways and had a large effect size (>0.8), with acceptable floor and ceiling effects (
- Published
- 2024
- Full Text
- View/download PDF
39. Location-Matching Adaptive Testing for Polytomous Technology-Enhanced Items.
- Author
-
Kang, Hyeon-Ah, Arbet, Gregory, Betts, Joe, and Muntean, William
- Subjects
COMPUTER adaptive testing ,ADAPTIVE testing ,MONTE Carlo method ,DIRECT costing ,INFORMATION measurement - Abstract
The article presents adaptive testing strategies for polytomously scored technology-enhanced innovative items. We investigate item selection methods that match examinees' ability levels in location and explore ways to leverage test-taking speeds during item selection. Existing approaches to selecting polytomous items are mostly based on information measures and tend to suffer from an item pool usage problem. In this study, we introduce location indices for polytomous items and show that location-matched item selection significantly mitigates the usage problem and achieves more diverse item sampling. We also consider matching items' time intensities so that testing times can be regulated across examinees. A numerical experiment based on Monte Carlo simulation suggests that location-matched item selection achieves significantly better and more balanced item pool usage. Leveraging working speed in item selection markedly reduced both the average testing time and its variation across examinees. Both procedures incurred only a marginal measurement cost (in precision and efficiency) yet showed significant improvement in administrative outcomes. The experiment in two test settings also suggested that the procedures can lead to different administrative gains depending on the test design. [ABSTRACT FROM AUTHOR]
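The location-matching idea described in the abstract above can be sketched in a few lines. This is a toy illustration under invented step-difficulty parameters, not the authors' procedure: the location index here is simply the mean of an item's step difficulties, and selection picks the unadministered item whose location is nearest the current ability estimate.

```python
def item_location(step_difficulties):
    """Location index for a polytomous item: here, the mean of its step
    (threshold) difficulties -- one simple way to summarize where on the
    ability scale the item sits."""
    return sum(step_difficulties) / len(step_difficulties)

def select_location_matched(theta_hat, pool, administered):
    """Select the unadministered item whose location is closest to the
    current ability estimate theta_hat."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return min(candidates, key=lambda i: abs(item_location(pool[i]) - theta_hat))

# Hypothetical pool: each item is a list of step difficulties.
pool = [[-1.5, -0.5], [-0.2, 0.4], [0.8, 1.6], [2.0, 3.0]]
print(select_location_matched(0.1, pool, administered={0}))  # prints 1
```

Because location matching does not chase the highest-information item, ability-matched items across the difficulty range get sampled, which is the mechanism behind the improved pool usage the article reports.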
- Published
- 2024
- Full Text
- View/download PDF
40. The irtQ package: a user-friendly tool for item response theory-based test data analysis and calibration.
- Author
-
Hwanggyu Lim and Kyungseok Kang
- Subjects
COMPUTER adaptive testing ,STATISTICAL models ,RESEARCH methodology evaluation ,PROFESSIONAL licensure examinations ,PSYCHOMETRICS ,METADATA ,SOFTWARE architecture ,CALIBRATION ,COMPUTER assisted testing (Education) ,ALGORITHMS ,USER interfaces - Abstract
Computerized adaptive testing (CAT) has become a widely adopted test design for high-stakes licensing and certification exams, particularly in the health professions in the United States, due to its ability to tailor test difficulty in real time, reducing testing time while providing precise ability estimates. A key component of CAT is item response theory (IRT), which facilitates the dynamic selection of items based on examinees' ability levels during a test. Accurate estimation of item and ability parameters is essential for successful CAT implementation, necessitating convenient and reliable software to ensure precise parameter estimation. This paper introduces the irtQ package (http://CRAN.R-project.org/), which simplifies IRT-based analysis and item calibration under unidimensional IRT models. While it does not directly simulate CAT, it provides essential tools to support CAT development, including parameter estimation using marginal maximum likelihood estimation via the expectation-maximization algorithm, pre-test item calibration through fixed item parameter calibration and fixed ability parameter calibration methods, and examinee ability estimation. The package also enables users to compute item and test characteristic curves and information functions necessary for evaluating the psychometric properties of a test. This paper illustrates the key features of the irtQ package through examples using simulated datasets, demonstrating its utility in IRT applications such as test data analysis and ability scoring. By providing a user-friendly environment for IRT analysis, irtQ significantly enhances the capacity for efficient adaptive testing research and operations. Finally, the paper highlights additional core functionalities of irtQ, emphasizing its broader applicability to the development and operation of IRT-based assessments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Comparison of real data and simulated data analysis of a stopping rule based on the standard error of measurement in computerized adaptive testing for medical examinations in Korea: a psychometric study.
- Author
-
Dong Gi Seo, Jeongwook Choi, and Jinha Kim
- Subjects
COMPUTER adaptive testing ,STATISTICAL models ,RESEARCH funding ,DATA analysis ,ACADEMIC medical centers ,DESCRIPTIVE statistics ,EDUCATIONAL tests & measurements ,SIMULATION methods in education ,MEDICAL students ,MEASUREMENT errors ,PSYCHOMETRICS ,STATISTICS ,COMPARATIVE studies ,STANDARDS - Abstract
Purpose: This study aimed to compare and evaluate the efficiency and accuracy of computerized adaptive testing (CAT) under 2 stopping rules (standard error of measurement [SEM]=0.3 and 0.25) using both real and simulated data in medical examinations in Korea. Methods: This study employed post-hoc simulation and real data analysis to explore the optimal stopping rule for CAT in medical examinations. The real data were obtained from the responses of 3rd-year medical students during examinations in 2020 at Hallym University College of Medicine. Simulated data were generated in R using parameters estimated from a real item bank. Outcome variables included the number of examinees passing or failing under SEM values of 0.25 and 0.30, the number of items administered, and the correlation between ability estimates. The consistency of the real CAT results was evaluated by examining pass/fail agreement based on a cut score of 0.0. The efficiency of all CAT designs was assessed by comparing the average number of items administered under both stopping rules. Results: Both SEM 0.25 and SEM 0.30 provided a good balance between accuracy and efficiency in CAT. The real data showed minimal differences in pass/fail outcomes between the 2 SEM conditions, with a high correlation (r=0.99) between ability estimates. The simulation results confirmed these findings, indicating similar average item numbers between real and simulated data. Conclusion: The findings suggest that both SEM 0.25 and 0.30 are effective termination criteria in the context of the Rasch model, balancing accuracy and efficiency in CAT. [ABSTRACT FROM AUTHOR]
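An SEM-based stopping rule of the kind compared above can be sketched as a small simulation loop. This is a toy 2PL CAT with an invented item bank and grid-based EAP scoring, not the study's implementation: items are administered until the posterior SD (used as the SEM) drops to the threshold.

```python
import math
import random

random.seed(1)

GRID = [g / 10.0 for g in range(-40, 41)]  # theta grid from -4.0 to 4.0

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap(responses):
    """Grid-based EAP estimate and posterior SD under a N(0,1) prior.
    responses: list of ((a, b), u) pairs with u in {0, 1}."""
    weights = []
    for t in GRID:
        w = math.exp(-0.5 * t * t)  # unnormalized standard normal prior
        for (a, b), u in responses:
            p = p_2pl(t, a, b)
            w *= p if u else (1.0 - p)
        weights.append(w)
    total = sum(weights)
    mean = sum(t * w for t, w in zip(GRID, weights)) / total
    var = sum((t - mean) ** 2 * w for t, w in zip(GRID, weights)) / total
    return mean, math.sqrt(var)

def run_cat(bank, true_theta, sem_stop=0.30):
    """Administer items until the SEM reaches sem_stop or the bank runs out."""
    responses, used = [], set()
    theta, sem = 0.0, float("inf")
    while len(used) < len(bank) and sem > sem_stop:
        # maximum-information selection: with equal slopes this is the
        # unused item whose difficulty is closest to the current estimate
        j = min((i for i in range(len(bank)) if i not in used),
                key=lambda i: abs(bank[i][1] - theta))
        used.add(j)
        u = 1 if random.random() < p_2pl(true_theta, *bank[j]) else 0
        responses.append((bank[j], u))
        theta, sem = eap(responses)
    return theta, sem, len(responses)

bank = [(1.5, b / 10.0) for b in range(-20, 21)]  # 41 hypothetical 2PL items
theta_hat, sem, n_items = run_cat(bank, true_theta=0.5)
print(n_items, round(sem, 3))
```

Tightening the threshold from 0.30 to 0.25 in a loop like this directly trades extra items for lower measurement error, which is exactly the efficiency/accuracy balance the study quantifies.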
- Published
- 2024
- Full Text
- View/download PDF
42. Psychometric Assessment of an Item Bank for Adaptive Testing on Patient-Reported Experience of Care Environment for Severe Mental Illness: Validation Study.
- Author
-
Fernandes, Sara, Brousse, Yann, Zendjidjian, Xavier, Cano, Delphine, Riedberger, Jérémie, Llorca, Pierre-Michel, Samalin, Ludovic, Dassa, Daniel, Trichard, Christian, Laprevote, Vincent, Sauvaget, Anne, Abbar, Mocrane, Misdrahi, David, Berna, Fabrice, Lancon, Christophe, Coulon, Nathalie, El-Hage, Wissam, Rozier, Pierre-Emmanuel, Benoit, Michel, and Giordana, Bruno
- Subjects
COMPUTER adaptive testing ,PSYCHOTHERAPY patients ,BIPOLAR disorder ,CROSS-sectional method ,MEDICAL care research ,SCALE analysis (Psychology) ,MULTITRAIT multimethod techniques ,PEARSON correlation (Statistics) ,RESEARCH funding ,MENTAL health ,PSYCHIATRIC treatment ,ACADEMIC medical centers ,DATA analysis ,T-test (Statistics) ,RESEARCH methodology evaluation ,RESEARCH evaluation ,QUESTIONNAIRES ,LOGISTIC regression analysis ,SEVERITY of illness index ,SCHIZOPHRENIA ,DESCRIPTIVE statistics ,SIMULATION methods in education ,PSYCHOMETRICS ,RESEARCH methodology ,RESEARCH ,QUALITY of life ,STATISTICS ,ANALYSIS of variance ,HEALTH facilities ,CONFIDENCE intervals ,PATIENT satisfaction ,PUBLIC health ,QUALITY assurance ,DATA analysis software ,PSYCHOSOCIAL factors ,PATIENTS' attitudes ,MENTAL depression ,DISCRIMINANT analysis ,EVALUATION ,ADULTS - Abstract
Background: The care environment significantly influences the experiences of patients with severe mental illness and the quality of their care. While a welcoming and stimulating environment enhances patient satisfaction and health outcomes, psychiatric facilities often prioritize staff workflow over patient needs. Addressing these challenges is crucial to improving patient experiences and outcomes in mental health care. Objective: This study is part of the Patient-Reported Experience Measure for Improving Quality of Care in Mental Health (PREMIUM) project and aims to establish an item bank (PREMIUM-CE) and to develop computerized adaptive tests (CATs) to measure the experience of the care environment of adult patients with schizophrenia, bipolar disorder, or major depressive disorder. Methods: We performed psychometric analyses including assessments of item response theory (IRT) model assumptions, IRT model fit, differential item functioning (DIF), item bank validity, and CAT simulations. Results: In this multicenter cross-sectional study, 498 patients were recruited from outpatient and inpatient settings. The final PREMIUM-CE 13-item bank was sufficiently unidimensional (root mean square error of approximation=0.082, 95% CI 0.067-0.097; comparative fit index=0.974; Tucker-Lewis index=0.968) and showed an adequate fit to the IRT model (infit mean square statistic ranging between 0.7 and 1.0). DIF analysis revealed no item biases according to gender, health care settings, diagnosis, or mode of study participation. PREMIUM-CE scores correlated strongly with satisfaction measures (r=0.69-0.78; P<.001) and weakly with quality-of-life measures (r=0.11-0.21; P<.001). CAT simulations showed a strong correlation (r=0.98) between CAT scores and those of the full item bank, and around 79.5% (396/498) of the participants obtained a reliable score with the administration of an average of 7 items. 
Conclusions: The PREMIUM-CE item bank and its CAT version have shown excellent psychometric properties, making them reliable measures for evaluating the patient experience of the care environment among adults with severe mental illness in both outpatient and inpatient settings. These measures are a valuable addition to the existing landscape of patient experience assessment, capturing what truly matters to patients and enhancing the understanding of their care experiences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Item Selection Algorithm Based on Collaborative Filtering for Item Exposure Control.
- Author
-
Pan, Yiqin, Livne, Oren, Wollack, James A., and Sinharay, Sandip
- Subjects
COMPUTER adaptive testing ,ALGORITHMS ,PERFORMANCES - Abstract
In computerized adaptive testing, overexposure of items in the bank is a serious problem and might result in item compromise. We develop an item selection algorithm that utilizes the entire bank well and reduces the overexposure of items. The algorithm is based on collaborative filtering and selects an item in two stages. In the first stage, a set of candidate items whose expected performance matches the examinee's current performance is selected. In the second stage, an item that is approximately matched to the examinee's observed performance is selected from the candidate set. The expected performance of an examinee on an item is predicted by autoencoders. Experiment results show that the proposed algorithm outperforms existing item selection algorithms in terms of item exposure while incurring only a small loss in measurement precision. [ABSTRACT FROM AUTHOR]
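The two-stage selection idea can be sketched with a simpler stand-in for the second stage: the classic "randomesque" heuristic (random choice among the k most informative unused items) in place of the authors' autoencoder-based prediction. Item parameters below are invented.

```python
import math
import random

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """Fisher information of a 2PL item."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_two_stage(theta_hat, bank, used, k=5, rng=random):
    """Stage 1: build a candidate set of the k most informative unused
    items at theta_hat. Stage 2: pick one candidate at random (the
    exposure-control step; the article's algorithm instead matches the
    examinee's observed performance via autoencoder predictions)."""
    candidates = sorted((j for j in range(len(bank)) if j not in used),
                        key=lambda j: -item_info(theta_hat, *bank[j]))[:k]
    return rng.choice(candidates)

rng = random.Random(0)
bank = [(1.0, b / 4.0) for b in range(-8, 9)]  # 17 hypothetical 2PL items
pick = select_two_stage(0.0, bank, used=set(), k=5, rng=rng)
print(pick)
```

Spreading selections across a candidate set rather than always taking the single most informative item is what reduces overexposure, at the small cost in precision the article reports.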
- Published
- 2023
- Full Text
- View/download PDF
44. A Diagnostic Tree Model for Adaptive Assessment of Complex Cognitive Processes Using Multidimensional Response Options.
- Author
-
Davison, Mark L., Weiss, David J., DeWeese, Joseph N., Ersan, Ozge, Biancarosa, Gina, and Kennedy, Patrick C.
- Subjects
ADAPTIVE testing ,PHILOSOPHY of education ,READING comprehension ,PSYCHOMETRICS - Abstract
A tree model for diagnostic educational testing is described along with Monte Carlo simulations designed to evaluate measurement accuracy based on the model. The model is implemented in an assessment of inferential reading comprehension, the Multiple-Choice Online Causal Comprehension Assessment (MOCCA), through a sequential, multidimensional, computerized adaptive testing (CAT) strategy. Assessment of the first dimension, reading comprehension (RC), is based on the three-parameter logistic model. For diagnostic and intervention purposes, the second dimension, called process propensity (PP), is used to classify struggling students based on their pattern of incorrect responses. In the simulation studies, CAT item selection rules and stopping rules were varied to evaluate their effect on measurement accuracy along dimension RC and classification accuracy along dimension PP. For dimension RC, methods that improved accuracy tended to increase test length. For dimension PP, however, item selection and stopping rules increased classification accuracy without materially increasing test length. A small live-testing pilot study confirmed some of the findings of the simulation studies. Development of the assessment has been guided by psychometric theory, Monte Carlo simulation results, and a theory of instruction and diagnosis. [ABSTRACT FROM AUTHOR]
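The second-dimension idea above, classifying struggling readers by their pattern of incorrect responses, can be illustrated with a toy rule. The option labels and the modal-error rule below are hypothetical, not the MOCCA scoring model:

```python
from collections import Counter

def classify_propensity(responses):
    """Toy classifier for a 'process propensity' dimension: each incorrect
    option is keyed to an error type; a struggling reader is classified by
    the modal type among their incorrect choices (ties -> None).
    responses: list of option labels, 'C' = correct, others = error types."""
    errors = [r for r in responses if r != "C"]
    if not errors:
        return None  # no incorrect responses to classify
    counts = Counter(errors).most_common(2)
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie between error types: no clear propensity
    return counts[0][0]

# Hypothetical labels: 'P' and 'L' stand for two error types.
print(classify_propensity(["C", "P", "P", "L", "C", "P"]))  # prints P
```

In the actual assessment the classification is driven by an adaptive item selection and stopping rule, which is why the simulations can raise classification accuracy on this dimension without lengthening the test.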
- Published
- 2023
- Full Text
- View/download PDF
45. Item Difficulty Constrained Uniform Adaptive Testing
- Author
-
Kishida, Wakaba, Fuchimoto, Kazuma, Miyazawa, Yoshimitsu, Ueno, Maomi, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Wang, Ning, editor, Rebolledo-Mendez, Genaro, editor, Dimitrova, Vania, editor, Matsuda, Noboru, editor, and Santos, Olga C., editor
- Published
- 2023
- Full Text
- View/download PDF
46. Improving the Item Selection Process with Reinforcement Learning in Computerized Adaptive Testing
- Author
-
Pian, Yang, Chen, Penghe, Lu, Yu, Song, Guangchen, Chen, Pengtao, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Wang, Ning, editor, Rebolledo-Mendez, Genaro, editor, Dimitrova, Vania, editor, Matsuda, Noboru, editor, and Santos, Olga C., editor
- Published
- 2023
- Full Text
- View/download PDF
47. A Modified Method of Balancing Attribute Coverage in CD-CAT
- Author
-
Hsu, Chia-Ling, Huang, Zi-Yan, Lin, Chuan-Ju, Chen, Shu-Ying, Wiberg, Marie, editor, Molenaar, Dylan, editor, González, Jorge, editor, Kim, Jee-Seon, editor, and Hwang, Heungsun, editor
- Published
- 2023
- Full Text
- View/download PDF
48. Development and psychometric evaluation of item banks for memory and attention – supplements to the EORTC CAT Core instrument.
- Author
-
Rogge, AA, Petersen, MA, Aaronson, NK, Conroy, T, Dirven, L, Fischer, F, Habets, EJJ, Reijneveld, JC, Rose, M, Sleurs, C, Taphoorn, M, Tomaszewski, KA, Vachon, H, Young, T, and Groenvold, M
- Subjects
PSYCHOMETRICS ,ITEM response theory ,ADAPTIVE testing ,CONFIRMATORY factor analysis ,COGNITIVE ability - Abstract
Background: Cancer patients may experience a decrease in cognitive functioning before, during and after cancer treatment. So far, the Quality of Life Group of the European Organisation for Research and Treatment of Cancer (EORTC QLG) developed an item bank to assess self-reported memory and attention within a single, cognitive functioning scale (CF) using computerized adaptive testing (EORTC CAT Core CF item bank). However, the distinction between different cognitive functions might be important to assess the patients' functional status appropriately and to determine treatment impact. To allow for such assessment, the aim of this study was to develop and psychometrically evaluate separate item banks for memory and attention based on the EORTC CAT Core CF item bank. Methods: In a multistep process including an expert-based content analysis, we assigned 44 items from the EORTC CAT Core CF item bank to the memory or attention domain. Then, we conducted psychometric analyses based on a sample used within the development of the EORTC CAT Core CF item bank. The sample consisted of 1030 cancer patients from Denmark, France, Poland, and the United Kingdom. We evaluated measurement properties of the newly developed item banks using confirmatory factor analysis (CFA) and item response theory model calibration. Results: Item assignment resulted in 31 memory and 13 attention items. Conducted CFAs suggested good fit to a 1-factor model for each domain and no violations of monotonicity or indications of differential item functioning. Evaluation of CATs for both memory and attention confirmed well-functioning item banks with increased power/reduced sample size requirements (for CATs ≥ 4 items and up to 40% reduction in sample size requirements in comparison to non-CAT format). Conclusion: Two well-functioning and psychometrically robust item banks for memory and attention were formed from the existing EORTC CAT Core CF item bank. 
These findings could support further research on self-reported cognitive functioning in cancer patients, in clinical trials as well as in real-world evidence settings. A more precise assessment of attention and memory deficits in cancer patients will strengthen the evidence on the effects of cancer treatment for different cancer entities, and therefore contribute to shared and informed clinical decision-making. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. Computerized adaptive testing for the patient evaluation measure (PEM) in patients undergoing cubital tunnel syndrome surgery.
- Author
-
Teunissen, Joris S., Hovius, Steven E. R., Ulrich, Dietmar J. O., Issa, Fadi, Rodrigues, Jeremy N., and Harrison, Conrad J.
- Subjects
COMPUTER adaptive testing ,CUBITAL tunnel syndrome ,ITEM response theory ,MEASUREMENT errors ,ADAPTIVE testing - Abstract
In outcome measures, item response theory (IRT) validation can deliver interval-scaled high-quality measurement that can be harnessed using computerized adaptive tests (CATs) to pose fewer questions to patients. We aimed to develop a CAT by developing an IRT model for the Patient Evaluation Measure (PEM) for patients undergoing cubital tunnel syndrome (CuTS) surgery. Nine hundred and seventy-nine completed PEM responses of patients with CuTS in the United Kingdom Hand Registry were used to develop and calibrate the CAT. Its performance was then evaluated in a simulated cohort of 1000 patients. The CAT reduced the original PEM length from ten to a median of two questions (range two to four), while preserving a high level of precision (median standard error of measurement of 0.27). The mean error between the CAT score and full-length score was 0.08%. A Bland–Altman analysis showed good agreement with no signs of bias. The CAT version of the PEM can substantially reduce patient burden while enhancing construct validity by harnessing IRT for patients undergoing CuTS surgery. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. An intelligent vocabulary size measurement method for second language learner.
- Author
-
Xia, Tian, Chen, Xuemin, Parsaei, Hamid R., and Qiu, Feng
- Subjects
VOCABULARY ,LANGUAGE teachers ,ROBOTS ,ADAPTIVE testing ,ARTIFICIAL neural networks - Abstract
This paper presents a new method for accurately measuring the vocabulary size of second language (L2) learners. Traditional vocabulary size tests (VSTs) are limited in capturing a tester's vocabulary and are often population-specific. To overcome these issues, we propose an intelligent vocabulary size measurement method that utilizes a large number of simulated "robot" testers. They are equipped with randomized, word-frequency-based vocabularies to mimic the varied vocabularies of L2 learners. An intelligent vocabulary size test (IVST) is developed to precisely measure vocabulary size for any population. The robot testers "take" the IVST, which dynamically generates quizzes with varying levels of difficulty, adapted in real time to the tester's estimated vocabulary size using an artificial neural network (ANN) through iterative learning. The effectiveness of the IVST is verified against the robot testers' known (visible) vocabularies. Additionally, we apply a long short-term memory (LSTM) model to further enhance the method's performance. The proposed method has demonstrated high reliability and effectiveness, achieving accuracies of 98.47% for the IVST and 99.87% for the IVST with LSTM. This novel approach provides a more precise and reliable method for measuring vocabulary size in L2 learners compared to traditional VSTs, offering potential benefits to language learners and educators. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF