
Database Tuning using Natural Language Processing

Authors :
Immanuel Trummer
Source :
ACM SIGMOD Record. 50:27-28
Publication Year :
2021
Publisher :
Association for Computing Machinery (ACM), 2021.

Abstract

Introduction. We have seen significant advances in the state of the art in natural language processing (NLP) over the past few years [20]. These advances have been driven by new neural network architectures, in particular the Transformer model [19], as well as the successful application of transfer learning approaches to NLP [13]. Typically, training for specific NLP tasks starts from large language models that have been pre-trained on generic tasks (e.g., predicting obfuscated words in text [5]) for which large amounts of training data are available. Using such models as a starting point reduces task-specific training cost as well as the number of required training samples by orders of magnitude [7]. These advances motivate new use cases for NLP methods in the context of databases.
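The "predicting obfuscated words in text" pre-training task mentioned above corresponds to masked language modeling. As a minimal illustrative sketch (not taken from the paper), the snippet below queries a pre-trained masked language model via the HuggingFace transformers library; the bert-base-uncased checkpoint and the tuning-related example sentence are assumptions chosen for illustration only.

```python
# Minimal sketch: masked language modeling with a pre-trained model.
# Assumes the HuggingFace `transformers` library and the public
# `bert-base-uncased` checkpoint; the example sentence is illustrative.
from transformers import pipeline

# Load a model pre-trained on predicting obfuscated (masked) words.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to fill in the masked word of a tuning-related sentence.
predictions = fill_mask(
    "To speed up queries, increase the size of the database [MASK] pool."
)
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```

Starting task-specific training from such a pre-trained checkpoint, rather than from randomly initialized weights, is what reduces training cost and the number of required labeled samples, as the abstract notes.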

Details

ISSN :
01635808
Volume :
50
Database :
OpenAIRE
Journal :
ACM SIGMOD Record
Accession number :
edsair.doi...........4eb4f93c4a3f60ab25eddbaa63df899b
Full Text :
https://doi.org/10.1145/3503780.3503788