
On the Undecidability of Artificial Intelligence Alignment: Machines that Halt

Authors:
de Melo, Gabriel Adriano
Maximo, Marcos Ricardo Omena De Albuquerque
Soma, Nei Yoshihiro
de Castro, Paulo Andre Lima
Publication Year:
2024

Abstract

The inner alignment problem, which asks whether an arbitrary artificial intelligence (AI) model satisfies a non-trivial alignment function of its outputs given its inputs, is undecidable. This is rigorously proved via Rice's theorem, equivalently by a reduction to Turing's Halting Problem, a proof sketch of which is presented in this work. Nevertheless, there is an enumerable set of provably aligned AIs that are constructed from a finite set of provably aligned operations. Therefore, we argue that alignment should be a property guaranteed by the AI architecture rather than a characteristic imposed post hoc on an arbitrary AI model. Furthermore, while the outer alignment problem concerns defining a judge function that captures human values and preferences, we propose that such a function must also impose a halting constraint guaranteeing that the AI model always reaches a terminal state in a finite number of execution steps. Our work presents examples and models that illustrate this constraint and the intricate challenges involved, advancing a compelling case for adopting an intrinsically hard-aligned approach to AI system architectures that ensures halting.

Comment: Submitted for the Scientific Reports AI Alignment Collection
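The reduction mentioned in the abstract follows the classic pattern: if a decider for a non-trivial alignment property of models existed, it could be used to decide halting. The sketch below illustrates that construction; the names `build_model` and `is_aligned` are illustrative assumptions, not identifiers from the paper.

```python
# Sketch of the reduction from the Halting Problem to inner alignment
# checking. Suppose a hypothetical decider is_aligned(model) existed for
# the non-trivial property "the model always outputs 0". Then for any
# program and input we can build a model whose alignment hinges on halting.

def build_model(program, x):
    """Construct a model that satisfies the alignment property
    ("always output 0") if and only if program(x) halts."""
    def model(_query):
        program(x)   # diverges forever iff program(x) does not halt
        return 0     # the "aligned" output, reached only after halting
    return model

# If is_aligned were computable, then
#   halts(program, x) == is_aligned(build_model(program, x)),
# making the Halting Problem decidable -- a contradiction, in line with
# Rice's theorem on non-trivial semantic properties.

# Demonstration with a halting program: the constructed model terminates
# and produces the aligned output.
halting_program = lambda n: n * n
m = build_model(halting_program, 3)
print(m(None))  # -> 0
```

Only the halting branch can be demonstrated concretely, of course; for a diverging `program`, the constructed model never returns, which is precisely why no checker can classify arbitrary models.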

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.08995
Document Type:
Working Paper