120 GOPS Photonic tensor core in thin-film lithium niobate for inference and in situ training
- Authors
- Lin, Zhongjin; Shastri, Bhavin J.; Yu, Shangxuan; Song, Jingxiang; Zhu, Yuntao; Safarnejadian, Arman; Cai, Wangning; Lin, Yanmei; Ke, Wei; Hammood, Mustafa; Wang, Tianye; Xu, Mengyue; Zheng, Zibo; Al-Qadasi, Mohammed; Esmaeeli, Omid; Rahim, Mohamed; Pakulski, Grzegorz; Schmid, Jens; Barrios, Pedro; Jiang, Weihong
- Subjects
- LITHIUM niobate; WEIGHT training; ARTIFICIAL intelligence; PHOTONICS; MULTIPLICATION
- Abstract
Photonics offers a transformative approach to artificial intelligence (AI) and neuromorphic computing by enabling low-latency, high-speed, and energy-efficient computations. However, conventional photonic tensor cores face significant challenges in constructing large-scale photonic neuromorphic networks. Here, we propose a fully integrated photonic tensor core, consisting of only two thin-film lithium niobate (TFLN) modulators, a III-V laser, and a charge-integration photoreceiver. Despite its simple architecture, it is capable of implementing an entire layer of a neural network with a computational speed of 120 GOPS, while also allowing flexible adjustment of the number of inputs (fan-in) and outputs (fan-out). Our tensor core supports rapid in situ training with a weight-update speed of 60 GHz. Furthermore, it successfully classifies (supervised learning) and clusters (unsupervised learning) 112 × 112-pixel images through in situ training. To enable in situ training for clustering AI tasks, we offer a solution for performing multiplications between two negative numbers. The authors showcase a photonic tensor core on a TFLN platform that achieves a computational speed of 120 GOPS for neural networks, with in situ training capabilities that include multiplication of two negative numbers. The tensor core can efficiently process 112 × 112-pixel images, potentially scaling up AI tasks, and offers nanosecond latency without needing a digital processor. [ABSTRACT FROM AUTHOR] (A minimal numerical sketch of the computation summarized here follows this record.)
- Published
- 2024
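Since this record only abstracts the paper, the following is a minimal numerical sketch (Python with NumPy) of the computation the abstract describes: a matrix-vector product formed by streaming the element-wise products of an input vector with one weight row at a time and accumulating them, mimicking the two-modulator, charge-integration scheme at the arithmetic level only, plus an offset decomposition as one possible way to realize multiplication of two negative numbers using non-negative physical quantities. The function names, the `offset` parameter, and the decomposition itself are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def charge_integration_mvm(x, W):
    """Toy arithmetic model of a time-multiplexed photonic matrix-vector product.

    The input vector x and one row of W at a time are imagined as being streamed
    through two modulators; the detector yields their element-wise product, and a
    charge-integration receiver sums those products over the fan-in window,
    producing one output per integration period. The fan-in is set by the window
    length (len(x)) and the fan-out by the number of weight rows streamed.
    This mimics only the arithmetic, not the optics.
    """
    y = np.empty(W.shape[0])
    for j, w_row in enumerate(W):      # fan-out: one integration window per output
        products = w_row * x           # element-wise "modulation" products
        y[j] = products.sum()          # charge integration over the fan-in window
    return y

def signed_product_via_offsets(a, b, offset):
    """Multiply two signed scalars using only a non-negative physical product.

    Illustrative offset decomposition (an assumption, not necessarily the paper's
    scheme): shift both operands by `offset` so the physical multiplication
    (a + offset) * (b + offset) involves only non-negative factors, then subtract
    the known correction terms to recover a * b.
    """
    a_shift, b_shift = a + offset, b + offset   # non-negative if offset >= |a|, |b|
    return a_shift * b_shift - offset * a - offset * b - offset ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 16)              # fan-in of 16
    W = rng.uniform(-1.0, 1.0, (4, 16))         # fan-out of 4 (one network layer)
    print(np.allclose(charge_integration_mvm(x, W), W @ x))               # True
    print(np.isclose(signed_product_via_offsets(-0.3, -0.7, 1.0), 0.21))  # True
```

The offset decomposition trades one signed multiplication for a single non-negative product plus known corrections, which is a common way to map signed arithmetic onto intensity-like hardware signals; whether the paper uses this or a different encoding is not stated in the abstract.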