1. Towards Explainable Deep Domain Adaptation
- Author
- Bobek, Szymon, Nowaczyk, Sławomir, Pashami, Sepideh, Taghiyarrenani, Zahra, and Nalepa, Grzegorz J.
- Abstract
In many practical applications, the data used for training a machine learning model and the data encountered at deployment do not follow the same distribution. Transfer learning and, in particular, domain adaptation make it possible to overcome this issue by adapting the source model to a new target data distribution, thereby generalizing knowledge from the source to the target domain. In this work, we present a method that makes the adaptation process more transparent by providing two complementary explanation mechanisms. The first mechanism explains how the source and target distributions are aligned in the latent space of the domain adaptation model. The second mechanism provides descriptive explanations of how the decision boundary changes in the adapted model with respect to the source model. Along with a description of the method, we also provide initial results obtained on a publicly available, real-life dataset. © The Author(s) 2024., Funding: The paper is funded from the XPM project funded by the National Science Centre, Poland under the CHIST-ERA programme (NCN UMO2020/02/Y/ST6/00070) and the Swedish Research Council under grant CHIST-ERA19-XAI-012 and by a grant from the Priority Research Area (DigiWorld) under the Strategic Programme Excellence Initiative at Jagiellonian University.
- Published
- 2024