
Has Your Pretrained Model Improved? A Multi-head Posterior Based Approach

Authors:
Aboagye, Prince
Zheng, Yan
Wang, Junpeng
Saini, Uday Singh
Dai, Xin
Yeh, Michael
Fan, Yujie
Zhuang, Zhongfang
Jain, Shubham
Wang, Liang
Zhang, Wei
Publication Year:
2024

Abstract

The emergence of pre-trained models has significantly impacted fields ranging from Natural Language Processing (NLP) and Computer Vision to relational datasets. Traditionally, these models are assessed through fine-tuned downstream tasks. However, this raises the question of how to evaluate these models more efficiently and more effectively. In this study, we explore a novel approach in which we leverage the meta-features associated with each entity as a source of worldly knowledge and employ entity representations from the models. We propose using the consistency between these representations and the meta-features as a metric for evaluating pre-trained models. Our method's effectiveness is demonstrated across various domains, including models trained on relational datasets, large language models, and image models.
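
The abstract does not spell out how the multi-head posterior is computed, but the core idea, scoring how well a model's entity embeddings agree with known meta-features, can be illustrated. Below is a minimal sketch, assuming a single categorical meta-feature per entity and using a Gaussian-mixture posterior over the embeddings as a stand-in for the paper's multi-head posterior; the function name consistency_score, the n_components parameter, and the mutual-information agreement measure are illustrative choices, not the authors' actual method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_mutual_info_score

def consistency_score(embeddings, meta_labels, n_components=8, seed=0):
    """Score agreement between cluster posteriors over entity embeddings
    and a categorical meta-feature. Higher = more consistent.

    NOTE: This is an illustrative proxy for the paper's posterior-based
    metric, not the authors' published procedure.
    """
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    # Hard cluster assignments derived from the fitted posteriors.
    cluster_ids = gmm.fit_predict(embeddings)
    # Agreement between posterior-based clusters and the meta-feature labels.
    return adjusted_mutual_info_score(meta_labels, cluster_ids)

# Toy usage: 200 entities, 32-dim embeddings, a 4-way categorical meta-feature.
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 32))
meta = rng.integers(0, 4, size=200)
print(consistency_score(emb, meta))
```

On real inputs, the intuition is that a better pre-trained model should yield a higher score, since its embeddings organize entities in a way the meta-features can explain; the random toy data above will score near zero.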

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2401.02987
Document Type:
Working Paper