Examining fairness in machine learning applied to support families: A case study of preventive services.
- Source :
- Family Relations. Nov 2024, p1. 14p. 1 illustration.
- Publication Year :
- 2024
Abstract
- Objective: To evaluate the fairness of a machine learning (ML) model designed to assess the need for home visiting services, focusing on its performance across family characteristics. Background: ML models are increasingly used in family‐centered services; however, their fairness remains underexplored, particularly concerning family sociodemographic factors and service contexts. Methods: This study assessed the fairness of an ML model developed for home visiting services by examining false negative rates (FNRs) across subgroups, focusing in particular on the intersection of maternal ethnicity and nativity. Results: The ML model reduced FNRs from 52.9% to 22.1%, with the most notable improvements for children of Black mothers and for families with characteristics associated with high risk. However, the model was less effective for children of Asian and foreign‐born Hispanic mothers. Conclusion: Although the ML model substantially reduced FNRs across various family subgroups, disparities were observed. Implications: Understanding fairness in ML models requires a thoughtful approach that considers service context and impact on families from diverse backgrounds. Continued research and collaboration are necessary for the fair and inclusive use of ML models in family‐centered services. [ABSTRACT FROM AUTHOR]
Details
- Language :
- English
- ISSN :
- 0197-6664
- Database :
- Academic Search Index
- Journal :
- Family Relations
- Publication Type :
- Academic Journal
- Accession number :
- 180725095
- Full Text :
- https://doi.org/10.1111/fare.13114