
CL4CTR: A Contrastive Learning Framework for CTR Prediction

Authors:
Wang, Fangye
Wang, Yingxu
Li, Dongsheng
Gu, Hansu
Lu, Tun
Zhang, Peng
Gu, Ning
Publication Year:
2022

Abstract

Many Click-Through Rate (CTR) prediction works have focused on designing advanced architectures to model complex feature interactions but neglected the importance of feature representation learning, e.g., adopting a plain embedding layer for each feature, which results in sub-optimal feature representations and thus inferior CTR prediction performance. For instance, low-frequency features, which account for the majority of features in many CTR tasks, are less considered in standard supervised learning settings, leading to sub-optimal feature representations. In this paper, we introduce self-supervised learning to produce high-quality feature representations directly and propose a model-agnostic Contrastive Learning for CTR (CL4CTR) framework consisting of three self-supervised learning signals that regularize feature representation learning: contrastive loss, feature alignment, and field uniformity. The contrastive module first constructs positive feature pairs by data augmentation and then minimizes the distance between the representations of each positive pair via the contrastive loss. The feature alignment constraint forces the representations of features from the same field to be close, and the field uniformity constraint forces the representations of features from different fields to be distant. Extensive experiments verify that CL4CTR achieves the best performance on four datasets and exhibits excellent effectiveness and compatibility with various representative baselines.

Comment: WSDM 2023
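The feature alignment and field uniformity constraints described in the abstract can be sketched as simple pairwise losses over feature embeddings. The following is a minimal illustrative sketch, not the paper's exact formulation: the function names, the squared-distance form of the alignment term, and the cosine-similarity form of the uniformity term are assumptions chosen for clarity.

```python
import numpy as np

def feature_alignment(emb, field_of):
    # Hypothetical sketch: pull embeddings of features belonging to the
    # same field together by penalizing their squared distance.
    loss, n = 0.0, 0
    for i in range(len(emb)):
        for j in range(i + 1, len(emb)):
            if field_of[i] == field_of[j]:
                loss += float(np.sum((emb[i] - emb[j]) ** 2))
                n += 1
    return loss / max(n, 1)

def field_uniformity(emb, field_of):
    # Hypothetical sketch: push embeddings of features from different
    # fields apart by penalizing their cosine similarity.
    loss, n = 0.0, 0
    for i in range(len(emb)):
        for j in range(i + 1, len(emb)):
            if field_of[i] != field_of[j]:
                ei = emb[i] / np.linalg.norm(emb[i])
                ej = emb[j] / np.linalg.norm(emb[j])
                loss += float(ei @ ej)
                n += 1
    return loss / max(n, 1)

# Toy example: features 0 and 1 share a field, feature 2 is in another field.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
fields = [0, 0, 1]
print(feature_alignment(emb, fields))  # small: same-field embeddings are close
print(field_uniformity(emb, fields))
```

In the framework described above, such regularizers would be added to the supervised CTR loss; here they are shown in isolation only to make the two geometric constraints concrete.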

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2212.00522
Document Type:
Working Paper
Full Text:
https://doi.org/10.1145/3539597.3570372