
MLTEing Models: Negotiating, Evaluating, and Documenting Model and System Qualities

Authors:
Maffey, Katherine R.
Dotterrer, Kyle
Niemann, Jennifer
Cruickshank, Iain
Lewis, Grace A.
Kästner, Christian
Publication Year:
2023
Publisher:
arXiv, 2023.

Abstract

Many organizations seek to ensure that machine learning (ML) and artificial intelligence (AI) systems work as intended in production but currently do not have a cohesive methodology in place to do so. To fill this gap, we propose MLTE (Machine Learning Test and Evaluation, colloquially referred to as "melt"), a framework and implementation to evaluate ML models and systems. The framework compiles state-of-the-art evaluation techniques into an organizational process for interdisciplinary teams, including model developers, software engineers, system owners, and other stakeholders. MLTE tooling supports this process by providing a domain-specific language that teams can use to express model requirements, an infrastructure to define, generate, and collect ML evaluation metrics, and the means to communicate results.

Comment: Accepted to the NIER Track of the 45th International Conference on Software Engineering (ICSE 2023)
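The workflow the abstract describes, negotiating a model requirement, collecting an evaluation metric, and communicating the result, can be illustrated with a minimal sketch. Note this is a hypothetical illustration of the general idea, not the actual MLTE API; the `Requirement` class and the measured value are assumptions for demonstration only.

```python
# Hypothetical sketch (not the real MLTE API): express a negotiated model
# requirement, validate a collected metric against it, and report the result.
from dataclasses import dataclass


@dataclass
class Requirement:
    """A model quality requirement agreed on by the team."""
    name: str
    threshold: float

    def validate(self, measured: float) -> bool:
        """Return True if the measured value meets the threshold."""
        return measured >= self.threshold


# Negotiated requirement: accuracy must be at least 0.90.
req = Requirement(name="accuracy", threshold=0.90)

# Metric collected during evaluation (stand-in value for illustration).
measured_accuracy = 0.93

# Produce a simple result record that could be shared with stakeholders.
result = {
    "requirement": req.name,
    "threshold": req.threshold,
    "measured": measured_accuracy,
    "passed": req.validate(measured_accuracy),
}
print(result)
```

In a real evaluation process, the requirement would come from the negotiation step among stakeholders, and the measured value from the metric-collection infrastructure rather than a hard-coded number.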

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....a74fc95a1d2dc3641400f7c73d0d5312
Full Text:
https://doi.org/10.48550/arxiv.2303.01998