
The Design Process for Google's Training Chips: TPUv2 and TPUv3.

Authors :
Norrie, Thomas
Patil, Nishant
Yoon, Doe Hyun
Kurian, George
Li, Sheng
Laudon, James
Young, Cliff
Jouppi, Norman
Patterson, David
Source :
IEEE Micro; Mar/Apr 2021, Vol. 41 Issue 2, p56-63, 8p
Publication Year :
2021

Abstract

Five years ago, few would have predicted that a software company like Google would build its own computers. Nevertheless, Google has been deploying computers for machine learning (ML) training since 2017, powering key Google services. These Tensor Processing Units (TPUs) are composed of chips, systems, and software, all co-designed in-house. In this paper, we detail the circumstances that led to this outcome, the challenges and opportunities observed, the approach taken for the chips, a quick review of performance, and finally a retrospective on the results. A companion paper describes the supercomputers built from these chips, the compiler, and a detailed performance analysis [Jou20].

Details

Language :
English
ISSN :
0272-1732
Volume :
41
Issue :
2
Database :
Complementary Index
Journal :
IEEE Micro
Publication Type :
Academic Journal
Accession Number :
149686576
Full Text :
https://doi.org/10.1109/MM.2021.3058217