Model Alignment Search
- Publication Year :
- 2025
Abstract
- When can we say that two neural systems are the same? The answer to this question is goal-dependent, and it is often addressed through correlative methods such as Representational Similarity Analysis (RSA) and Centered Kernel Alignment (CKA). How do we target functionally relevant similarity, and how do we isolate specific causal aspects of the representations? In this work, we introduce Model Alignment Search (MAS), a method for causally exploring distributed representational similarity. The method learns invertible linear transformations that align a subspace between two distributed networks' representations in which causal information can be freely interchanged. We first show that the method can be used to transfer values of specific causal variables -- such as the number of items in a counting task -- between networks with different training seeds. We then explore open questions in number cognition by comparing different types of numeric representations in models trained on structurally different tasks. Next, we examine differences between MAS and preexisting causal similarity methods, and lastly, we introduce a counterfactual latent auxiliary loss function that helps shape causally relevant alignments even when we do not have causal access to one of the two models during training.
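- The interchange idea described in the abstract can be illustrated with a minimal sketch: map the hidden states of two frozen networks into a shared space with learned invertible linear transforms, swap a small aligned subspace, and map back. This is an assumption-laden illustration, not the authors' implementation; PyTorch, the hidden sizes, the 8-dimensional aligned subspace, and names such as `rot_a` and `interchange` are all hypothetical.

```python
# Illustrative sketch (not the paper's code) of an interchange intervention
# through learned invertible linear maps. All dimensions are made up.
import torch
from torch import nn
from torch.nn.utils.parametrizations import orthogonal

D_A, D_B, ALIGN = 64, 64, 8  # hidden sizes of the two models; aligned-subspace size

# Orthogonal weights have an exact inverse (the transpose), so the
# rotate -> swap -> rotate-back round trip is lossless outside the swapped dims.
rot_a = orthogonal(nn.Linear(D_A, D_A, bias=False))
rot_b = orthogonal(nn.Linear(D_B, D_B, bias=False))

def interchange(h_a: torch.Tensor, h_b: torch.Tensor):
    """Swap the first ALIGN coordinates of the aligned representations."""
    z_a, z_b = rot_a(h_a), rot_b(h_b)                  # into the aligned space
    z_a_swp = torch.cat([z_b[:, :ALIGN], z_a[:, ALIGN:]], dim=1)
    z_b_swp = torch.cat([z_a[:, :ALIGN], z_b[:, ALIGN:]], dim=1)
    # nn.Linear computes h @ W.T, so multiplying by W inverts the orthogonal map.
    return z_a_swp @ rot_a.weight, z_b_swp @ rot_b.weight

# In training, the frozen task models would be run forward from the swapped
# hidden states, and rot_a / rot_b optimized so that each model behaves as if
# it held the other model's value of the causal variable (e.g., the item count).
h_a, h_b = torch.randn(4, D_A), torch.randn(4, D_B)
h_a_new, h_b_new = interchange(h_a, h_b)
```

- The orthogonal parametrization here is one convenient way to keep the transform invertible; any invertible linear map with a tracked inverse would serve the same illustrative purpose.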
- Subjects :
- Computer Science - Machine Learning
- Computer Science - Artificial Intelligence
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2501.06164
- Document Type :
- Working Paper