1. Fast machine translation on parallel and massively parallel hardware
- Author
Bogoychev, Nikolay Veselinov; Lopez, Adam; Heafield, Kenneth
- Subjects
418; machine translation systems; memory speed; machine translation; efficient algorithms; processing speed; phrase tables; GPU-based n-gram language model; optimized CPU implementation; Moses; Marian
- Abstract
Parallel systems have been widely adopted in the field of machine translation because the raw computational power they offer is well suited to this computationally intensive task. However, programming for parallel hardware is not trivial, as it requires redesigning existing algorithms. In my thesis I design efficient algorithms for machine translation on parallel hardware. I identify memory accesses as the biggest bottleneck to processing speed and propose novel algorithms that minimize them. I present three distinct case studies in which minimizing memory access substantially improves speed. Starting with statistical machine translation, I design a phrase table that makes decoding ten times faster on a multi-threaded CPU. Next, I design a GPU-based n-gram language model that is twice as fast per £ as a highly optimized CPU implementation. Turning to neural machine translation, I design new stochastic gradient descent techniques that make end-to-end training twice as fast. The work in this thesis has been incorporated into two popular machine translation toolkits: Moses and Marian.
- Published
2019