
LLMs are Biased Teachers: Evaluating LLM Bias in Personalized Education

Authors:
Weissburg, Iain
Anand, Sathvika
Levy, Sharon
Jeong, Haewon
Publication Year:
2024

Abstract

With the increasing adoption of large language models (LLMs) in education, concerns about inherent biases in these models have gained prominence. We evaluate LLMs for bias in the personalized educational setting, specifically focusing on the models' roles as "teachers". We reveal significant biases in how models generate and select educational content tailored to different demographic groups, including race, ethnicity, sex, gender, disability status, income, and national origin. We introduce and apply two bias score metrics--Mean Absolute Bias (MAB) and Maximum Difference Bias (MDB)--to analyze 9 open and closed state-of-the-art LLMs. Our experiments, which utilize over 17,000 educational explanations across multiple difficulty levels and topics, uncover that models perpetuate both typical and inverted harmful stereotypes.

Comment: 46 pages, 55 figures, dataset release pending publication
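The abstract names two bias metrics, Mean Absolute Bias (MAB) and Maximum Difference Bias (MDB), but does not give their formulas. The Python sketch below is one plausible reading, not the paper's exact definitions: it assumes each demographic group receives a scalar score for a generated explanation, treats MAB as the mean absolute deviation of group scores from the overall mean, and MDB as the largest gap between any two groups. The function names and example numbers are hypothetical.

```python
# Hypothetical sketch of the two bias metrics named in the abstract.
# Assumptions (not from the source): each demographic group has one scalar
# score per explanation; MAB = mean absolute deviation from the overall mean;
# MDB = largest gap between any two groups.

from statistics import mean


def mean_absolute_bias(group_scores: dict[str, float]) -> float:
    """Average absolute deviation of each group's score from the overall mean."""
    overall = mean(group_scores.values())
    return mean(abs(score - overall) for score in group_scores.values())


def maximum_difference_bias(group_scores: dict[str, float]) -> float:
    """Largest score gap between any two demographic groups."""
    scores = group_scores.values()
    return max(scores) - min(scores)


# Example: per-group scores for one generated explanation (made-up numbers).
scores = {"group_a": 0.82, "group_b": 0.74, "group_c": 0.79}
print(mean_absolute_bias(scores))       # ~0.029
print(maximum_difference_bias(scores))  # 0.08
```

In this reading, MAB summarizes overall dispersion across groups while MDB flags the single worst-case disparity; the paper's actual definitions may aggregate differently (e.g., over question difficulty levels or topics).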

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.14012
Document Type:
Working Paper