MyThisYourThat for interpretable identification of systematic bias in federated learning for biomedical images
- Author
- Naumova K, Devos A, Karimireddy SP, Jaggi M, Hartley MA
- Abstract
Distributed collaborative learning is a promising approach for building predictive models from privacy-sensitive biomedical images. Here, several data owners (clients) train a joint model without sharing their original data. However, concealed systematic biases can compromise model performance and fairness. This study presents the MyThisYourThat (MyTH) approach, which adapts an interpretable prototypical part learning network to a distributed setting, enabling each client to visualize the feature differences learned by others on its own images: comparing one client's 'This' with others' 'That'. In our setting, four clients collaboratively train two diagnostic classifiers on a benchmark X-ray dataset. Without data bias, the global model reaches 74.14% balanced accuracy for cardiomegaly and 74.08% for pleural effusion. We show that with systematic visual bias in one client, the performance of the global models drops to near-random. We demonstrate how differences between local and global prototypes reveal biases and allow their visualization on each client's data without compromising privacy.
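The bias signal described in the abstract comes from comparing each client's locally adapted prototypes against the shared global ones. The sketch below is only a rough illustration of that idea, not the authors' implementation: the function name `prototype_drift`, the array shapes, the cosine-distance choice, and the simulated drifted prototype are all assumptions made for demonstration.

```python
import numpy as np

def prototype_drift(local_protos: np.ndarray, global_protos: np.ndarray) -> np.ndarray:
    """Cosine distance between each local prototype and its global counterpart.

    Both arrays are assumed to have shape (num_prototypes, embedding_dim).
    Large distances flag prototypes whose learned features diverge between a
    client's local model and the global model, which is the kind of signal the
    paper uses to surface systematic bias.
    """
    ln = local_protos / np.linalg.norm(local_protos, axis=1, keepdims=True)
    gn = global_protos / np.linalg.norm(global_protos, axis=1, keepdims=True)
    return 1.0 - np.sum(ln * gn, axis=1)

# Hypothetical usage: 10 prototypes with 128-dimensional embeddings.
rng = np.random.default_rng(0)
local_protos = rng.normal(size=(10, 128))
global_protos = local_protos.copy()
global_protos[3] += 5.0  # simulate one prototype drifting under biased client data
drift = prototype_drift(local_protos, global_protos)
print("most divergent prototype index:", int(np.argmax(drift)))
```

In practice the divergent prototypes would then be projected back onto each client's own images for visual inspection, as the abstract describes; the thresholding and visualization steps are omitted here.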
- Published
- 2024