Pruner: An Efficient Cross-Platform Tensor Compiler with Dual Awareness
- Author
Qiao, Liang; Shi, Jun; Hao, Xiaoyu; Fang, Xi; Zhao, Minfan; Zhu, Ziqi; Chen, Junshi; An, Hong; Li, Bing; Yuan, Honghui; and Wang, Xinyang
- Abstract
Tensor program optimization on Deep Learning Accelerators (DLAs) is critical for efficient model deployment. Although search-based Deep Learning Compilers (DLCs) have achieved significant performance gains over manual methods, they still suffer from the persistent challenges of low search efficiency and poor cross-platform adaptability. In this paper, we propose $\textbf{Pruner}$, which follows hardware/software co-design principles to hierarchically boost tensor program optimization. Pruner comprises two primary components: a Parameterized Static Analyzer ($\textbf{PSA}$) and a Pattern-aware Cost Model ($\textbf{PaCM}$). The former serves as a hardware-aware, formulaic performance analysis tool that guides the pruning of the search space, while the latter predicts the performance of tensor programs according to their critical data-flow patterns. Furthermore, to ensure effective cross-platform adaptation, we design a Momentum Transfer Learning ($\textbf{MTL}$) strategy using a Siamese network, which establishes a bidirectional feedback mechanism to improve the robustness of the pre-trained cost model. Extensive experimental results demonstrate the effectiveness and superiority of the proposed Pruner on various tensor program tuning tasks across both online and offline scenarios, with low resource overhead. The code is available at https://github.com/qiaolian9/Pruner.
- Published
- 2024