
A Benchmark Suite for Systematically Evaluating Reasoning Shortcuts

Authors:
Bortolotti, Samuele
Marconato, Emanuele
Carraro, Tommaso
Morettin, Paolo
van Krieken, Emile
Vergari, Antonio
Teso, Stefano
Passerini, Andrea
Publication Year:
2024

Abstract

The advent of powerful neural classifiers has increased interest in problems that require both learning and reasoning. These problems are critical for understanding important properties of models, such as trustworthiness, generalization, interpretability, and compliance with safety and structural constraints. However, recent research has observed that tasks requiring both learning and reasoning on background knowledge often suffer from reasoning shortcuts (RSs): predictors can solve the downstream reasoning task without associating the correct concepts with the high-dimensional data. To address this issue, we introduce rsbench, a comprehensive benchmark suite designed to systematically evaluate the impact of RSs on models by providing easy access to highly customizable tasks affected by RSs. Furthermore, rsbench implements common metrics for evaluating concept quality and introduces novel formal verification procedures for assessing the presence of RSs in learning tasks. Using rsbench, we highlight that obtaining high-quality concepts in both purely neural and neuro-symbolic models is a far-from-solved problem. rsbench is available at: https://unitn-sml.github.io/rsbench.
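To make the notion of a reasoning shortcut concrete, here is a minimal, self-contained sketch (not taken from the paper, and not the rsbench API) using an assumed toy XOR task: a predictor that flips both input concepts still produces the correct downstream label on every example, so label accuracy is perfect while concept accuracy is zero. All function names below are hypothetical.

```python
# Toy illustration of a reasoning shortcut (RS): a predictor that gets the
# downstream label right while getting every underlying concept wrong.
# The XOR task and the metrics below are illustrative assumptions only;
# they do not reflect the actual rsbench interface.

import itertools

def true_concepts(x):
    """Ground-truth concept extraction: identity on the two input bits."""
    return x

def shortcut_concepts(x):
    """RS predictor: flips both bits. Since (1-a) XOR (1-b) == a XOR b,
    the downstream XOR label is still always correct."""
    return tuple(1 - b for b in x)

def label(concepts):
    """Background knowledge: the label is the XOR of the two concepts."""
    a, b = concepts
    return a ^ b

inputs = list(itertools.product([0, 1], repeat=2))

label_hits = sum(label(shortcut_concepts(x)) == label(true_concepts(x))
                 for x in inputs)
concept_hits = sum(shortcut_concepts(x) == true_concepts(x) for x in inputs)

print(f"label accuracy:   {label_hits / len(inputs):.0%}")    # 100%
print(f"concept accuracy: {concept_hits / len(inputs):.0%}")  # 0%
```

The gap between the two printed accuracies is exactly the failure mode the benchmark targets: evaluating only the downstream label cannot distinguish this shortcut predictor from one that has learned the correct concepts.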

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.10368
Document Type:
Working Paper