
Toward Reproducing Network Research Results Using Large Language Models

Authors :
Xiang, Qiao
Lin, Yuling
Fang, Mingjun
Huang, Bang
Huang, Siyong
Wen, Ridi
Le, Franck
Kong, Linghe
Shu, Jiwu
Publication Year :
2023

Abstract

Reproducing research results in the networking community is important for both academia and industry. The current best practice typically resorts to three approaches: (1) looking for publicly available prototypes; (2) contacting the authors to obtain a private prototype; and (3) manually implementing a prototype following the description in the publication. However, most published network research does not have public prototypes, and private prototypes are hard to obtain. As such, most reproduction effort is spent on manual implementation based on the publications, which is time-consuming, labor-intensive, and error-prone. In this paper, we boldly propose reproducing network research results using emerging large language models (LLMs). In particular, we first demonstrate its feasibility with a small-scale experiment, in which four students with essential networking knowledge each reproduce a different networking system published in prominent conferences and journals by prompt engineering ChatGPT. We report the observations and lessons from the experiment and discuss future open research questions raised by this proposal. This work raises no ethical issue.

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2309.04716
Document Type :
Working Paper