
Timely Reporting of Heavy Hitters Using External Memory.

Authors:
Singh, Shikha
Pandey, Prashant
Bender, Michael A.
Berry, Jonathan W.
Farach-Colton, Martín
Johnson, Rob
Kroeger, Thomas M.
Phillips, Cynthia A.
Source:
ACM Transactions on Database Systems; Dec 2021, Vol. 46, Issue 4, p1-35, 35p
Publication Year:
2021

Abstract

Given an input stream S of size N, a Φ-heavy hitter is an item that occurs at least ΦN times in S. The problem of finding heavy hitters is extensively studied in the database literature. We study a real-time heavy-hitters variant in which an element must be reported shortly after we see its T = ΦN-th occurrence (and hence it becomes a heavy hitter). We call this the Timely Event Detection (TED) Problem. The TED problem models the needs of many real-world monitoring systems, which demand accurate (i.e., no false negatives) and timely reporting of all events from large, high-speed streams with a low reporting threshold (high sensitivity). Like the classic heavy-hitters problem, solving the TED problem without false positives requires large space (Ω(N) words). Thus, in-RAM heavy-hitters algorithms typically sacrifice accuracy (i.e., allow false positives), sensitivity, or timeliness (i.e., use multiple passes). We show how to adapt heavy-hitters algorithms to external memory to solve the TED problem on large, high-speed streams while guaranteeing accuracy, sensitivity, and timeliness. Our data structures are limited only by I/O bandwidth (not latency) and support a tunable tradeoff between reporting delay and I/O overhead. With a small bounded reporting delay, our algorithms incur only a logarithmic I/O overhead. We implement and validate our data structures empirically using the Firehose streaming benchmark. Multithreaded versions of our structures can scale to process 11M observations per second before becoming CPU bound. In comparison, a naive adaptation of the standard heavy-hitters algorithm to external memory would be limited by the storage device's random I/O throughput, i.e., ≈100K observations per second.
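To make the problem definition concrete, the following is a minimal in-RAM sketch of exact, timely Φ-heavy-hitter detection: keep one counter per distinct item and report an item the moment it reaches its T = ⌈ΦN⌉-th occurrence. This is the naive exact baseline the abstract refers to (it needs Ω(N) words of state, the cost the paper's external-memory structures are designed to avoid), not the paper's algorithm; all names below are illustrative.

```python
import math
from collections import defaultdict

def timely_heavy_hitters(stream, phi, n):
    """Exact, zero-delay detection of Phi-heavy hitters.

    stream : iterable of items (the stream S, of total size n)
    phi    : heavy-hitter fraction (0 < phi <= 1)
    n      : stream length N, so the threshold is T = ceil(phi * n)

    Returns a list of (position, item) pairs: each item is reported at
    the stream position of its T-th occurrence. No false positives or
    false negatives, but one counter per distinct item is kept.
    """
    threshold = math.ceil(phi * n)
    counts = defaultdict(int)
    reports = []
    for pos, item in enumerate(stream):
        counts[item] += 1
        if counts[item] == threshold:
            # Report exactly once, at the T-th occurrence (zero delay).
            reports.append((pos, item))
    return reports

# With phi = 0.5 and N = 6, the threshold is T = 3, so 'a' is
# reported at position 4, where its third occurrence arrives:
# timely_heavy_hitters(['a', 'b', 'a', 'c', 'a', 'b'], 0.5, 6)
# -> [(4, 'a')]
```

The external-memory versions in the paper replace the single in-RAM dictionary with I/O-efficient structures, trading a small bounded reporting delay for logarithmic I/O overhead.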

Details

Language:
English
ISSN:
0362-5915
Volume:
46
Issue:
4
Database:
Complementary Index
Journal:
ACM Transactions on Database Systems
Publication Type:
Academic Journal
Accession Number:
154231596
Full Text:
https://doi.org/10.1145/3472392