DOPpler: Parallel Measurement Infrastructure for Auto-tuning Deep Learning Tensor Programs
- Publication Year :
- 2023
Abstract
- The heterogeneity of Deep Learning models, libraries, and hardware poses an important challenge for improving model inference performance. Auto-tuners address this challenge via automatic tensor program optimization towards a target-device. However, auto-tuners incur a substantial time cost because their design necessitates measuring candidate tensor programs serially on an isolated target-device to minimize latency measurement inaccuracy. In this paper we propose DOPpler, a parallel auto-tuning measurement infrastructure. DOPpler achieves considerable auto-tuning speedup over conventional approaches whilst maintaining high-quality tensor program optimization. It accelerates the auto-tuning process with a parallel execution engine that efficiently executes candidate tensor programs in parallel across the CPU-host and GPU target-device, and overcomes measurement inaccuracy by introducing a high-precision on-device technique for measuring tensor program kernel latency. DOPpler automatically calculates the optimal degree of parallelism to provision fast and accurate auto-tuning for different tensor programs, auto-tuners and target-devices. Experiment results show that DOPpler reduces total auto-tuning time by 50.5% on average whilst achieving optimization gains equivalent to conventional auto-tuning infrastructure.
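- Illustration (not from the paper): the abstract's "high-precision on-device measurement" contrasts with timing kernels from the host, where launch and scheduling overhead inflates the measured latency. The sketch below shows one generic way to time a GPU kernel on-device using standard CUDA events; the kernel and all names are placeholders and are not DOPpler's actual implementation.
  // Minimal sketch of on-device kernel timing with CUDA events.
  // Events are recorded on the GPU stream, so elapsed time reflects
  // device execution rather than host-side launch overhead.
  #include <cstdio>
  #include <cuda_runtime.h>

  // Stand-in for a candidate tensor program kernel (hypothetical).
  __global__ void candidate_kernel(float *x, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) x[i] = x[i] * 2.0f + 1.0f;
  }

  int main() {
      const int n = 1 << 20;
      float *d_x;
      cudaMalloc(&d_x, n * sizeof(float));

      cudaEvent_t start, stop;
      cudaEventCreate(&start);
      cudaEventCreate(&stop);

      // Bracket the kernel launch with events recorded on the device timeline.
      cudaEventRecord(start);
      candidate_kernel<<<(n + 255) / 256, 256>>>(d_x, n);
      cudaEventRecord(stop);
      cudaEventSynchronize(stop);

      float ms = 0.0f;
      cudaEventElapsedTime(&ms, start, stop);
      printf("kernel latency: %.3f ms\n", ms);

      cudaEventDestroy(start);
      cudaEventDestroy(stop);
      cudaFree(d_x);
      return 0;
  }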
Details
- Database :
- OAIster
- Notes :
- Borowiec, Damian, Yeung, Ging-Fung, Friday, Adrian, Harper, R.H.R. and Garraghan, Peter (2023) DOPpler: Parallel Measurement Infrastructure for Auto-tuning Deep Learning Tensor Programs. IEEE Transactions on Parallel and Distributed Systems. ISSN 1045-9219 (In Press). Text, English.
- Publication Type :
- Electronic Resource
- Accession number :
- edsoai.on1381494071
- Document Type :
- Electronic Resource