991 results
Search Results
2. Gender and Computing Conference Papers.
- Subjects
- GENDER, CONFERENCE papers, SCHOLARLY publishing, COMPUTERS, WOMEN in computer science
- Abstract
The article reports on research investigating papers submitted to conferences of the Association for Computing Machinery (ACM) between 1966 and 2009, evaluating trends in and influences on women's authorship of computing-conference papers. Researchers found that the number of ACM conference papers grew from 1966 to 2009. They also found that while women remain severely underrepresented in computing, there was a substantial increase in women's share of published papers. They concluded that the increase may be due to women earning more Ph.D. degrees in computing.
- Published
- 2011
- Full Text
- View/download PDF
3. Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits.
- Author
- Limaye, Nutan, Srinivasan, Srikanth, and Tavenas, Sébastien
- Subjects
- ALGEBRA, POLYNOMIALS, CIRCUIT complexity, ALGORITHMS, DIRECTED acyclic graphs, LOGIC circuits
- Abstract
An Algebraic Circuit for a multivariate polynomial P is a computational model for constructing the polynomial P using only additions and multiplications. It is a syntactic model of computation, as opposed to the Boolean Circuit model, and hence lower bounds for this model are widely expected to be easier to prove than lower bounds for Boolean circuits. Despite this, we do not have superpolynomial lower bounds against general algebraic circuits of depth 3 (except over constant-sized finite fields) and depth 4 (over any field other than F2), while constant-depth Boolean circuit lower bounds have been known since the early 1980s. In this paper, we prove the first superpolynomial lower bounds against algebraic circuits of all constant depths over all fields of characteristic 0. We also observe that our super-polynomial lower bound for constant-depth circuits implies the first deterministic sub-exponential time algorithm for solving the Polynomial Identity Testing (PIT) problem for all small-depth circuits using the known connection between algebraic hardness and randomness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
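As background for the PIT connection mentioned in the abstract above: the randomized algorithm that such hardness results help derandomize is the classic Schwartz-Zippel identity test, sketched below in Python. The black-box interface and names are illustrative, not from the paper.
    import random

    def probably_zero(poly, n_vars, degree_bound, trials=20):
        # Schwartz-Zippel: a nonzero polynomial of total degree d evaluates to
        # zero at a random point from S^n with probability at most d/|S|.
        S = range(100 * degree_bound)
        for _ in range(trials):
            point = [random.choice(S) for _ in range(n_vars)]
            if poly(*point) != 0:
                return False  # witness found: definitely not the zero polynomial
        return True  # identically zero with high probability

    # (x + y)^2 - x^2 - 2xy - y^2 is identically zero, so the test accepts.
    assert probably_zero(lambda x, y: (x + y)**2 - x**2 - 2*x*y - y**2, 2, 2)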
4. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
- Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
- ALGORITHMS, SYSTEMS design, CYBER physical systems, COMPUTER scheduling, ARTIFICIAL intelligence, ARTIFICIAL neural networks, FIRST in, first out (Queuing theory)
- Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
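The priority-inversion effect described above can be illustrated with a toy comparison of FIFO against criticality-ordered processing. A minimal Python sketch with hypothetical task names, not the paper's actual framework:
    import heapq
    from collections import deque

    # (criticality, region): lower number = more critical.
    tasks = [(2, "roadside sign"), (0, "pedestrian ahead"), (1, "merging car")]

    fifo = deque(tasks)            # undifferentiated pipeline: arrival order wins
    crit = list(tasks)
    heapq.heapify(crit)            # criticality-aware pipeline: important data first

    print([fifo.popleft()[1] for _ in range(len(tasks))])     # sign, pedestrian, car
    print([heapq.heappop(crit)[1] for _ in range(len(tasks))])  # pedestrian, car, sign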
5. Rebuttal How-To: Strategies, Tactics, and the Big Picture in Research: Demystifying rebuttal writing.
- Author
- Danfeng (Daphne) Yao
- Subjects
- SCHOLARLY peer review, RESEARCH evaluation
- Abstract
The article offers suggestions for writing a rebuttal after one has submitted a paper to an academic conference and received reviews indicating that revisions are required for the paper to be accepted.
- Published
- 2024
- Full Text
- View/download PDF
6. Electronic Paper's Next Chapter.
- Author
- Kroeker, Kirk L.
- Subjects
- COLOR display systems, ELECTRONIC paper, INFORMATION display systems, ELECTRONIC book readers, ELECTRONIC books software, INFORMATION display systems industry
- Abstract
The article focuses on electronic paper technology for use in consumer electronic devices, such as electronic books. The author mentions that the biggest technological challenge is producing color displays for electronic paper. He suggests that new technology is necessary to render better-quality color inexpensively as well as to show moving images and other dynamic content. The author discusses research into electronic paper technology, including work by Prime View International, the company that manufactures the Amazon Kindle's display; Plastic Logic; and Philips Research.
- Published
- 2009
- Full Text
- View/download PDF
7. Are CS Conferences (Too) Closed Communities? Assessing whether newcomers have a more difficult time achieving paper acceptance at established conferences.
- Author
- Cabot, Jordi, Cánovas Izquierdo, Javier Luis, and Cosentino, Valerio
- Subjects
- COMPUTER science conferences, SOCIAL closure, SCHOLARLY peer review, INGROUPS (Social groups), MENTORING
- Abstract
The article discusses research on whether computer science (CS) conferences are open to papers from outsiders. Topics include the notion of CS conferences as closed communities in relation to shared cultures, processes for review of paper proposals, and the possibility of mentoring programs for young researchers.
- Published
- 2018
- Full Text
- View/download PDF
8. Boosting Fuzzer Efficiency: An Information Theoretic Perspective.
- Author
- Böhme, Marcel, Manès, Valentin J. M., and Sang Kil Cha
- Subjects
- ENTROPY (Information theory), UNCERTAINTY (Information theory), COMPUTER software testing, INFORMATION theory
- Abstract
This article discusses the concept of fuzzing as a learning process, using Shannon's entropy to quantify the efficiency of a fuzzer in discovering new behaviors of a program. The authors propose an entropy-based power schedule called "Entropic" for greybox fuzzing, assigning more energy to seeds that reveal more information about a program's behaviors. This approach is implemented in the popular greybox fuzzer LibFuzzer and has been integrated into Google and Microsoft's fuzzing platforms. The paper highlights that the efficiency of a fuzzer is determined by the average information each generated input reveals about a program's behaviors. The authors conducted experiments with over 250 open-source programs, demonstrating a substantial improvement in efficiency and confirming their hypothesis that an efficient fuzzer maximizes information.
- Published
- 2023
- Full Text
- View/download PDF
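To make the entropy argument above concrete, here is a minimal Python sketch of an entropy-proportional power schedule: each seed's Shannon entropy over the behaviors its mutants trigger determines its share of the fuzzing budget. The data layout is a simplification; the actual Entropic schedule in LibFuzzer differs in detail.
    import math

    def information(behavior_counts):
        # Shannon entropy (bits) of the behavior distribution a seed's mutants induce.
        total = sum(behavior_counts.values())
        return -sum((c / total) * math.log2(c / total)
                    for c in behavior_counts.values())

    def assign_energy(seeds, budget):
        # Energy proportional to entropy: seeds whose mutants still reveal
        # more information about the program get more fuzzing effort.
        h = {name: information(counts) for name, counts in seeds.items()}
        z = sum(h.values()) or 1.0
        return {name: budget * hs / z for name, hs in h.items()}

    seeds = {"seed_a": {1: 90, 2: 5, 3: 5},      # mutants mostly repeat behavior 1
             "seed_b": {1: 30, 4: 40, 5: 30}}    # mutants spread across behaviors
    print(assign_energy(seeds, budget=1000))     # seed_b receives more energy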
9. Bringing Industry Back to Conferences, and Paying for Results.
- Author
- Patterson, David and Bugayenko, Yegor
- Subjects
- COMPUTER science conferences, CONFERENCE papers, COMPUTER programming, LABOR productivity, TELECOMMUTING
- Abstract
The article presents blog postings on the topics of scientific papers at conferences and the problem of monitoring and paying for productivity from telecommuting computer programmers.
- Published
- 2020
- Full Text
- View/download PDF
10. Ethical Considerations in Network Measurement Papers.
- Author
- PARTRIDGE, CRAIG and ALLMAN, MARK
- Subjects
- COMPUTER networks, RESEARCH ethics, EVALUATION methodology, EXPERIMENTAL ethics
- Abstract
The article discusses the need for ethical considerations in computer network measurement papers so authors can justify ethical foundations for experimental methodologies. It covers the evolution of the field of ethics, the ability to extract information from measurement data, and legal issues related to network measurement.
- Published
- 2016
- Full Text
- View/download PDF
11. Research for Practice: OS Scheduling: Better scheduling policies for modern computing systems.
- Author
- KAFFES, KOSTIS
- Subjects
- COMPUTER operating systems, SCHEDULING software, CENTRAL processing units, LOAD balancing (Computer networks), LINUX operating systems
- Abstract
This article discusses operating system (OS) scheduling through three current papers selected and reviewed by Kostis Kaffes. The cited papers address OS scheduling in terms of performance, extensibility, and policy choice, examining the trade-off between latency and utilization in scheduling, the implementation of arbitrary scheduling policies, and policy choice on an application-by-application basis.
- Published
- 2023
- Full Text
- View/download PDF
12. Conference-Journal Hybrids.
- Author
- Grudin, Jonathan, Mark, Gloria, and Riedl, John
- Subjects
- SCHOLARLY publishing, CONFERENCE papers, SCHOLARLY peer review, CONFERENCES & conventions, MANAGEMENT
- Abstract
The article presents the author's views regarding the shift in the computer science profession from a focus on journal publications to conference publications. Topics include approaches to improve conference reviewing and management; information on the process of revision for science and engineering publications; and the different requirements for papers accepted for journal publications and conference presentations.
- Published
- 2013
- Full Text
- View/download PDF
13. ACM Europe Council's Best Paper Awards.
- Author
- JORGE, JOAQUIM, GLENCROSS, MASHHUDA, and QUIGLEY, AARON
- Subjects
- COMPUTER science research, RESEARCH papers (Students), COMPUTER science conferences
- Abstract
The authors report on awards given by the Association for Computing Machinery (ACM) for student research papers in computer science. They mention the number of awards presented by the ACM Europe Council, the aim of the awards to foster and recognize excellence in computer science, and both short- and long-term impacts of the awards.
- Published
- 2019
- Full Text
- View/download PDF
14. DIAMetrics: Benchmarking Query Engines at Scale.
- Author
- Deep, Shaleen, Gruenheid, Anja, Nagaraj, Kruthi, Naito, Hiro, Naughton, Jeff, and Viglas, Stratis
- Subjects
- BENCHMARKING (Management), SEARCH engines, SOFTWARE measurement, WEBOMETRICS, PROGRAM transformation
- Abstract
This paper introduces DIAMetrics: a novel framework for end-to-end benchmarking and performance monitoring of query engines. DIAMetrics consists of a number of components supporting tasks such as automated workload summarization, data anonymization, benchmark execution, monitoring, regression identification, and alerting. The architecture of DIAMetrics is highly modular and supports multiple systems by abstracting their implementation details and relying on common canonical formats and pluggable software drivers. The end result is a powerful unified framework that is capable of supporting every aspect of benchmarking production systems and workloads. DIAMetrics was developed at Google and is being used to benchmark various internal query engines. In this paper, we give an overview of DIAMetrics and discuss its design and implementation. Furthermore, we provide details about its deployment and example use cases. Given the variety of supported systems and use cases within Google, we argue that its core concepts can be used more widely to enable comparative end-to-end benchmarking in other industrial environments. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. Research for Practice: The Fun in Fuzzing.
- Author
- NAGY, STEFAN
- Subjects
- ANOMALY detection (Computer security), DEBUGGING, COMPUTER security vulnerabilities, COMPUTER software testing
- Abstract
The article presents information on software fuzzing, a process used for automated bug and vulnerability testing. Several papers are cited and discussed by Stefan Nagy, an assistant professor at the University of Utah. Software fuzzing uses new and unexpected inputs which are systematically generated in order to test programs. The cited papers provide information on topics such as specific classes of bugs and finding bugs in compilers, demonstrating the advantage of combining traditional testing methods with innovative techniques and machine learning in order to identify bugs in complex software systems.
- Published
- 2023
- Full Text
- View/download PDF
16. The Promise of Flexible Displays.
- Author
- Geller, Tom
- Subjects
- ELECTRONIC paper, TECHNOLOGICAL innovations, LIQUID crystal displays, ELECTRONIC books, ELECTRONICS, DESIGN & construction
- Abstract
The article discusses developments and innovations in computer displays that are lighter, more durable, and more flexible than liquid-crystal displays (LCD). The article states that the new technology consists of front planes and back planes. According to the article, front plane technology in the form of electronic paper, also called e-paper, is produced by Massachusetts-based E Ink Corporation, and used in digital book readers such as the Amazon Kindle, Barnes & Noble Nook, and Sony Reader Digital Book. The article describes the other front plane technology called organic light-emitting diodes (OLEDs), which emit light and consume more power than e-paper yet have a higher lumen-per-watt rating. The article also discusses various possibilities in incorporating glass in flexible displays.
- Published
- 2011
- Full Text
- View/download PDF
17. Conference Paper Selectivity and Impact.
- Author
- JILIN CHEN and KONSTAN, JOSEPH A.
- Subjects
- IMPACT factor (Citation analysis), CITATION analysis, COMPUTER science, SCIENTIFIC literature, CONFERENCES & conventions, RESEARCH evaluation
- Abstract
The article presents the results of a study which investigated the correlation between the acceptance rate and impact rating of conference papers in the field of computer science. The papers with the highest impact ratings were found to be associated with highly selective conferences, defined as those which rejected between 70 and 85 percent of papers submitted. Such papers, on average, had higher impact ratings than papers which were published in journals without being presented at conferences. A rejection rate of 85 percent or more tended to suppress submission levels and reduce impact factors, while an acceptance rate over 30 percent was associated with less prestigious conferences.
- Published
- 2010
- Full Text
- View/download PDF
18. Oracle, Where Shall I Submit My Papers?
- Author
- ELMACIOGLU, ERGIN and DONGWON LEE
- Subjects
- COMPUTER science education, SCHOLARLY publishing, COMPUTER training, SCHOLARLY communication, SCHOLARLY periodicals, ACADEMIC discourse, STUDENTS, CONFERENCES & conventions
- Abstract
The article discusses computer science (CS) conferences, presenting a methodology for determining the different characteristics of questionable versus respectable CS conferences. The CS discipline requires the verification of ideas through a rigorous peer-review process prior to publication in conferences, the article states. Topics include the pressures of CS researchers to publish academic papers, competition related to top-tier CS conferences, and program committee (PC) members of CS conferences who do not realize their conference is being operated merely for profit.
- Published
- 2009
- Full Text
- View/download PDF
19. DIARY AS DIALOGUE in Papermill Process Control.
- Author
- Robinson, Mike, Kovalainen, Mikko, and Auramäki, Esa
- Subjects
- INFORMATION retrieval, DIARIES (Blank-books), PAPER mills, PAPER industry workers, PROCESS control systems
- Abstract
The article focuses on the replacement of paper diaries with electronic diaries used by employees. Papermills are gigantic and complex machines, incorporating state-of-the-art technology. As part of a larger project, a papermill in Finland was used to provide support for factory-floor workers in process industries. Paper diaries were replaced with electronic diaries on one production line employing 35 workers. Entries constitute dialogues within and between work shifts, and partially with other organizational levels. The e-diary design was a cooperative effort between workers in an oil refinery, a superintendent and foreman from the papermill, and the research group. It was piloted in the oil refinery, and a refined prototype was used in the papermill. All 35 production-line staff workers received about four hours of training on the diary, including its underlying Microsoft Windows and Lotus Notes applications. E-diary entries are a text version of talking out loud, with extended spatial and temporal scope. INSET: An Example of Extended Dialogue Reject Carrier Problems in.....
- Published
- 2000
- Full Text
- View/download PDF
20. Achieving High Performance the Functional Way: Expressing High-Performance Optimizations as Rewrite Strategies.
- Author
- Hagedorn, Bastian, Lenfers, Johannes, Koehler, Thomas, Xueying Qin, Gorlatch, Sergei, and Steuwer, Michel
- Subjects
- OPL (Computer program language), PROGRAMMING languages, DOMAIN-specific programming languages, COMPUTER programming, ELECTRONIC data processing, COMPUTER software development
- Abstract
Optimizing programs to run efficiently on modern parallel hardware is hard but crucial for many applications. The predominantly used imperative languages force the programmer to intertwine the code describing functionality and optimizations. This results in a portability nightmare that is particularly problematic given the accelerating trend toward specialized hardware devices to further increase efficiency. Many emerging domain-specific languages (DSLs) used in performance-demanding domains such as deep learning attempt to simplify or even fully automate the optimization process. Using a high-level--often functional--language, programmers focus on describing functionality in a declarative way. In some systems such as Halide or TVM, a separate schedule specifies how the program should be optimized. Unfortunately, these schedules are not written in well-defined programming languages. Instead, they are implemented as a set of ad hoc predefined APIs that the compiler writers have exposed. In this paper, we show how to employ functional programming techniques to solve this challenge with elegance. We present two functional languages that work together--each addressing a separate concern. RISE is a functional language for expressing computations using well-known data-parallel patterns. ELEVATE is a functional language for describing optimization strategies. A high-level RISE program is transformed into a low-level form using optimization strategies written in ELEVATE. From the rewritten low-level program, high-performance parallel code is automatically generated. In contrast to existing high-performance domain-specific systems with scheduling APIs, in our approach programmers are not restricted to a set of built-in operations and optimizations but freely define their own computational patterns in RISE and optimization strategies in ELEVATE in a composable and reusable way. We show how our holistic functional approach achieves competitive performance with the state-of-the-art imperative systems such as Halide and TVM. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
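The core idea above, optimization strategies as ordinary composable functions, can be sketched in a few lines of Python. This is an illustrative encoding (expressions as nested tuples, strategies returning None on failure), not the actual RISE/ELEVATE API:
    # A strategy maps an expression to a rewritten expression, or None if it
    # does not apply. Expressions are nested tuples, e.g. ("+", "x", 0).

    def add_zero(e):  # rewrite rule: x + 0 -> x
        return e[1] if isinstance(e, tuple) and e[0] == "+" and e[2] == 0 else None

    def seq(s1, s2):  # combinator: apply s1, then s2; fail if either fails
        def go(e):
            r = s1(e)
            return s2(r) if r is not None else None
        return go

    def try_(s):      # combinator: apply s if possible, else leave unchanged
        def go(e):
            r = s(e)
            return e if r is None else r
        return go

    def topdown(s):   # traversal: apply s at the first node, root first, where it works
        def go(e):
            r = s(e)
            if r is not None:
                return r
            if isinstance(e, tuple):
                for i, child in enumerate(e):
                    rc = go(child)
                    if rc is not None:
                        return e[:i] + (rc,) + e[i + 1:]
            return None
        return go

    normalize = try_(topdown(add_zero))
    print(normalize(("*", ("+", "x", 0), "y")))  # ('*', 'x', 'y')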
21. INTERACTING WITH PAPER ON THE DIGITAL DESK.
- Author
- Wellner, Pierre
- Subjects
- DIGITAL electronics, COMPUTER systems, HUMAN-computer interaction, USER interfaces, DIGITAL image processing, MICROCOMPUTER workstations (Computers), CALCULATORS
- Abstract
The article focuses on DigitalDesk, a real physical desk on which one can stack papers, lay out favorite pencils and markers, and leave a coffee cup, but which is enhanced to provide some characteristics of an electronic workstation. A computer display is projected onto the desk, and video cameras pointed down at the desk feed an image-processing system that can sense what the user is doing. DigitalDesk has three important characteristics: it projects electronic images down onto the desk and onto paper documents, it responds to interaction with pens or bare fingers, and it can read paper documents placed on the desk. The DigitalDesk provides a computer-augmented environment in which paper gains electronic properties that allow it to overcome some of its physical limitations. The calculator is a simple and familiar application that can benefit from the DigitalDesk: it allows people to place ordinary paper documents on the desk and simply point at a printed number to enter it into the calculator.
- Published
- 1993
22. Speculative Taint Tracking (STT): A Comprehensive Protection for Speculatively Accessed Data.
- Author
- Jiyong Yu, Mengjia Yan, Khyzha, Artem, Morrison, Adam, Torrellas, Josep, and Fletcher, Christopher W.
- Subjects
- COMPUTER security, DATA protection, MALWARE prevention, COMPUTER architecture, COMPUTER performance
- Abstract
Speculative execution attacks present an enormous security threat, capable of reading arbitrary program data under malicious speculation, and later exfiltrating that data over microarchitectural covert channels. This paper proposes speculative taint tracking (STT), a high security and high performance hardware mechanism to block these attacks. The main idea is that it is safe to execute and selectively forward the results of speculative instructions that read secrets, as long as we can prove that the forwarded results do not reach potential covert channels. The technical core of the paper is a new abstraction to help identify all microarchitectural covert channels, and an architecture to quickly identify when a covert channel is no longer a threat. We further conduct a detailed formal analysis on the scheme in a companion document. When evaluated on SPEC06 workloads, STT incurs 8.5% or 14.5% performance overhead relative to an insecure machine. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
23. Actionable Auditing Revisited--Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products.
- Author
- Raji, Inioluwa Deborah and Buolamwini, Joy
- Subjects
- HUMAN facial recognition software, AUDITING, HUMAN skin color, GENDER, ERRORS, SOCIAL responsibility of business
- Abstract
Although algorithmic auditing has emerged as a key strategy to expose systematic biases embedded in software platforms, we struggle to understand the real-world impact of these audits and continue to find it difficult to translate such independent assessments into meaningful corporate accountability. To analyze the impact of publicly naming and disclosing performance results of biased AI systems, we investigate the commercial impact of Gender Shades, the first algorithmic audit of gender- and skin-type performance disparities in commercial facial analysis models. This paper (1) outlines the audit design and structured disclosure procedure used in the Gender Shades study, (2) presents new performance metrics from targeted companies such as IBM, Microsoft, and Megvii (Face++) on the Pilot Parliaments Benchmark (PPB) as of August 2018, (3) provides performance results on PPB by non-target companies such as Amazon and Kairos, and (4) explores differences in company responses as shared through corporate communications that contextualize differences in performance on PPB. Within 7 months of the original audit, we find that all three targets released new application program interface (API) versions. All targets reduced accuracy disparities between males and females and darker- and lighter-skinned subgroups, with the most significant update occurring for the darker-skinned female subgroup that underwent a 17.7-30.4% reduction in error between audit periods. Minimizing these disparities led to a 5.72-8.3% reduction in overall error on the Pilot Parliaments Benchmark (PPB) for target corporation APIs. The overall performance of non-targets Amazon and Kairos lags significantly behind that of the targets, with error rates of 8.66% and 6.60% overall, and error rates of 31.37% and 22.50% for the darker female subgroup, respectively. This is an expanded version of an earlier publication of these results, revised for a more general audience, and updated to include commentary on further developments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
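The audit methodology above reduces to comparing per-subgroup error rates and their gaps. A minimal Python sketch with made-up numbers (not the Pilot Parliaments Benchmark data):
    # Hypothetical (subgroup, prediction_correct) pairs; illustrative only.
    results = [("darker_female", False), ("darker_female", True),
               ("darker_male", True), ("darker_male", True),
               ("lighter_female", True), ("lighter_male", True)]

    def error_rates(results):
        totals, errors = {}, {}
        for group, correct in results:
            totals[group] = totals.get(group, 0) + 1
            errors[group] = errors.get(group, 0) + (not correct)
        return {g: errors[g] / totals[g] for g in totals}

    rates = error_rates(results)
    print(rates)                                            # per-subgroup error rate
    print("disparity:", max(rates.values()) - min(rates.values()))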
24. Traffic Classification in an Increasingly Encrypted Web.
- Author
- Akbari, Iman, Salahuddin, Mohammad A., Ven, Leni, Limam, Noura, Boutaba, Raouf, Mathieu, Bertrand, Moteau, Stephanie, and Tuffin, Stephane
- Subjects
- COMPUTER network management, CONVOLUTIONAL neural networks, DATA encryption, INTERNET traffic, COMPUTER network architectures
- Abstract
Traffic classification is essential in network management for a wide range of operations. Recently, it has become increasingly challenging with the widespread adoption of encryption in the Internet, for example, as a de facto standard in the HTTP/2 and QUIC protocols. In the current state of encrypted traffic classification using deep learning (DL), we identify fundamental issues in the way it is typically approached. For instance, although complex DL models with millions of parameters are being used, these models implement a relatively simple logic based on certain header fields of the TLS handshake, limiting model robustness to future versions of encrypted protocols. Furthermore, encrypted traffic is often treated as any other raw input for DL, while crucial domain-specific considerations are commonly ignored. In this paper, we design a novel feature engineering approach for encrypted Web protocols, and develop a neural network architecture based on stacked long short-term memory layers and convolutional neural networks. We evaluate our approach on a real-world Web traffic dataset from a major Internet service provider and mobile network operator. We achieve an accuracy of 95% in service classification with less raw traffic and a smaller number of parameters, outperforming a state-of-the-art method with nearly 50% fewer false classifications. We show that our DL model generalizes for different classification objectives and encrypted Web protocols. We also evaluate our approach on a public QUIC dataset with finer application-level granularity in labeling, achieving an overall accuracy of 99%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. Polymorphic Wireless Receivers.
- Author
- Restuccia, Francesco and Melodia, Tommaso
- Subjects
- WIRELESS communications, RADIOS, DEEP learning, SOFTWARE architecture, COMPUTER input-output equipment, SIGNAL processing, RADIO transmitters & transmission
- Abstract
Today's wireless technologies are largely based on inflexible designs, which make them inefficient and prone to a variety of wireless attacks. To address this key issue, wireless receivers will need to (i) infer on-the-fly the physical layer parameters currently used by transmitters; and if needed, (ii) change their hardware and software structures to demodulate the incoming waveform. In this paper, we introduce PolymoRF, a deep learning-based polymorphic receiver able to reconfigure itself in real time based on the inferred waveform parameters. Our key technical innovations are (i) a novel embedded deep learning architecture, called RFNet, which enables the solution of key waveform inference problems, and (ii) a generalized hardware/software architecture that integrates RFNet with radio components and signal processing. We prototype PolymoRF on a custom software-defined radio platform and show through extensive over-the-air experiments that PolymoRF achieves throughput within 87% of a perfect-knowledge Oracle system, thus demonstrating for the first time that polymorphic receivers are feasible. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. Sampling Near Neighbors in Search for Fairness.
- Author
- Aumüller, Martin, Har-Peled, Sariel, Mahabadi, Sepideh, Pagh, Rasmus, and Silvestri, Francesco
- Subjects
- DATA, FAIRNESS, DATABASE searching, SEARCH algorithms, COMPUTER algorithms, COMPUTER science, COMPUTER programming
- Abstract
Similarity search is a fundamental algorithmic primitive, widely used in many computer science disciplines. Given a set of points S and a radius parameter r > 0, the r-near neighbor (r-NN) problem asks for a data structure that, given any query point q, returns a point p within distance at most r from q. In this paper, we study the r-NN problem in the light of individual fairness and providing equal opportunities: all points that are within distance r from the query should have the same probability to be returned. The problem is of special interest in high dimensions, where Locality Sensitive Hashing (LSH), the theoretically leading approach to similarity search, does not provide any fairness guarantee. In this work, we show that LSH-based algorithms can be made fair, without a significant loss in efficiency. We propose several efficient data structures for the exact and approximate variants of the fair NN problem. Our approach works more generally for sampling uniformly from a subcollection of sets of a given collection and can be used in a few other applications. We also carried out an experimental evaluation that highlights the inherent unfairness of existing NN data structures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
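The fairness goal above, returning every point within distance r of the query with equal probability, is easy to state as a brute-force baseline; the paper's contribution is matching this output distribution at LSH-like speed. An illustrative Python sketch of the baseline:
    import math, random

    def fair_r_nn(points, q, r):
        # Brute force, O(n) per query: collect all points within distance r of
        # q and return one uniformly at random. The paper's data structures
        # achieve the same uniform output distribution sublinearly via LSH.
        in_range = [p for p in points if math.dist(p, q) <= r]
        return random.choice(in_range) if in_range else None

    pts = [(0.0, 0.0), (0.5, 0.1), (3.0, 3.0)]
    print(fair_r_nn(pts, q=(0.2, 0.0), r=1.0))  # uniform over the two nearby points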
27. To Be or Not To Be Cited in Computer Science.
- Subjects
- BIBLIOGRAPHICAL citations, COMPUTER scientists, CONFERENCE papers, DATABASE management, RESEARCH methodology, CHARTS, diagrams, etc., COMPUTER network resources
- Abstract
The article discusses relative relevant undercitation (RRU) in computer science databases including Scopus, Web of Science (WoS), and the Association for Computing Machinery (ACM) Digital Library (DL). According to the article, conference papers are not always as well represented as journal articles in citations, for reasons that include parsing technology and a traditional bias toward journals. Three experiments testing computer science databases for inaccuracies are described. Tables present researched information about citations by author and database.
- Published
- 2012
- Full Text
- View/download PDF
28. The Pros and Cons of the ‘PACM’ Proposal: Point.
- Author
- McKinley, Kathryn S.
- Subjects
- CONFERENCE papers, PERIODICAL articles, COMPUTER science, SCHOLARLY publishing
- Abstract
The article offers the author's comments on a proposal by the Publications Board of the Association for Computing Machinery to bring together conference papers and publishing of journal articles on computer science research. According to the author, compared to journal reviewing, conference process has clear advantages that this proposal maintains.
- Published
- 2015
- Full Text
- View/download PDF
29. The Pros and Cons of the ‘PACM’ Proposal: Counterpoint.
- Author
- Rosenblum, David S.
- Subjects
- CONFERENCE papers, PERIODICAL articles, COMPUTER science research, SCHOLARLY publishing
- Abstract
The article offers the author's comments on a proposal by the Publications Board of the Association for Computing Machinery in the U.S. to bring together conference papers and publishing of journal articles on computer science research. According to the author, linking journal publication to annual conference cycles arguably would encourage an annual cycle of submissions from authors.
- Published
- 2015
- Full Text
- View/download PDF
30. Creating the Internet of Biological and Bio-Inspired Things: Technical Perspective.
- Author
- Gollakota, Shyamnath
- Subjects
- BIOLOGICAL systems, INTERNET of things, ENVIRONMENTAL monitoring, DETECTORS
- Abstract
This article describes advancements toward the creation of the Internet of biological things and introduces a paper that marks one such milestone. Biological systems, such as bees and dandelions, exhibit remarkable capabilities like efficient energy use and long-range seed dispersal, which current IoT and embedded systems cannot match. Recent advancements aim to bridge this gap by developing tiny, bio-integrated devices like mSAIL, a lightweight sensor capable of tracking monarch butterflies over long distances using innovative methods like neural network-based location estimation. This research represents a significant step toward creating a network of bio-inspired devices that could transform fields such as environmental monitoring and healthcare by harnessing the strengths of biological systems.
- Published
- 2024
- Full Text
- View/download PDF
31. Improving Refugees' Integration with Online Resource Allocation: Technical Perspective.
- Author
- Freund, Daniel
- Subjects
- REFUGEE resettlement, RESOURCE allocation, ALGORITHMS, EMPLOYMENT
- Abstract
The article discusses a research paper that applies online resource allocation algorithms to refugee resettlement, aiming to improve refugees' integration into local communities and employment prospects. By utilizing concepts from algorithm design, such as balancing resource utilization and maintaining capacity for future refugees, the authors were able to enhance the employability metric for resettlement agencies like the Hebrew Immigrant Aid Society (HIAS) by approximately 10%. This research not only addresses critical societal issues but also highlights the potential of algorithms to positively impact real-world outcomes for vulnerable populations, encouraging collaboration between algorithm designers and practitioners on important societal problems.
- Published
- 2024
- Full Text
- View/download PDF
32. BioScript: Programming Safe Chemistry on Laboratories-on-a-Chip.
- Author
- Ott, Jason, Loveless, Tyson, Curtis, Chris, Lesani, Mohsen, and Brisk, Philip
- Subjects
- COMPUTER programming, BIOCHIPS, BIOCHEMISTRY, MICROFLUIDICS, DOMAIN-specific programming languages, SYNTAX (Grammar)
- Abstract
This paper introduces BioScript, a domain-specific language (DSL) for programmable biochemistry that executes on emerging microfluidic platforms. The goal of this research is to provide a simple, intuitive, and type-safe DSL that is accessible to life science practitioners. The novel feature of the language is its syntax, which aims to optimize human readability; the technical contribution of the paper is the BioScript type system. The type system ensures that certain types of errors, specific to biochemistry, do not occur, such as the interaction of chemicals that may be unsafe. Results are obtained using a custom-built compiler that implements the BioScript language and type system. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
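The class of error BioScript's type system rules out can be illustrated with a toy check in Python. The reactivity table and API below are hypothetical, and BioScript enforces the property statically at compile time rather than at run time as done here:
    # Hypothetical reactivity groups and unsafe combinations; illustrative only.
    UNSAFE = {frozenset({"acid", "base"}), frozenset({"oxidizer", "flammable"})}

    def mix(a, b):
        # Reject mixes whose reactivity groups are flagged as unsafe.
        if frozenset({a["group"], b["group"]}) in UNSAFE:
            raise TypeError(f"unsafe interaction: {a['name']} + {b['name']}")
        return {"name": f"mix({a['name']},{b['name']})", "group": "aqueous"}

    try:
        mix({"name": "HCl", "group": "acid"}, {"name": "NaOH", "group": "base"})
    except TypeError as err:
        print(err)  # unsafe interaction: HCl + NaOH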
33. Technical Perspective: What's All the Fuss about Fuzzing?
- Author
- Fraser, Gordon
- Subjects
- COMPUTER software testing, ENTROPY (Information theory), INFORMATION theory, TEST methods
- Abstract
The article discusses the significance of fuzzing as an effective method for testing software. Fuzzing involves feeding random or invalid test data to programs to discover potential crashes or vulnerabilities, often at scale. Fuzzing is categorized into black-box, grey-box, and white-box approaches, each with varying levels of information about the system under test. Unlike traditional test generation papers that focus on theory and code coverage, fuzzing emphasizes practical applications and bug discovery in real systems. The article introduces a paper that presents a novel twist to grey-box fuzzing, proposing a power schedule that selects seeds for mutation based on their potential to reveal new program behavior. This approach is underpinned by information theory and measures entropy as an efficiency metric for fuzzers. The article highlights the potential of fuzzing to bridge the gap between traditional test generation and practical testing methods, offering a valuable contribution to the field of software testing.
- Published
- 2023
- Full Text
- View/download PDF
34. On Sampled Metrics for Item Recommendation.
- Author
- Krichene, Walid and Rendle, Steffen
- Subjects
- RECOMMENDER systems, INFORMATION filtering systems, INTERNET, ALGORITHMS, SOFTWARE measurement
- Abstract
Recommender systems personalize content by recommending items to users. Item recommendation algorithms are evaluated by metrics that compare the positions of truly relevant items among the recommended items. To speed up the computation of metrics, recent work often uses sampled metrics where only a smaller set of random items and the relevant items are ranked. This paper investigates such sampled metrics in more detail and shows that they are inconsistent with their exact counterpart, in the sense that they do not persist relative statements, for example, recommender A is better than B, not even in expectation. Moreover, the smaller the sample size, the less difference there is between metrics, and for very small sample size, all metrics collapse to the AUC metric. We show that it is possible to improve the quality of the sampled metrics by applying a correction, obtained by minimizing different criteria. We conclude with an empirical evaluation of the naive sampled metrics and their corrected variants. To summarize, our work suggests that sampling should be avoided for metric calculation, however if an experimental study needs to sample, the proposed corrections can improve the quality of the estimate. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
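The inconsistency described above is easy to reproduce: an item ranked 300th among 10,000 looks like a top-10 hit when ranked against only 100 sampled negatives. A small Python simulation under a simplifying assumption (negatives sampled uniformly):
    import random

    def hit_at_k(rank, k):
        return 1.0 if rank <= k else 0.0

    def sampled_rank(true_rank, n_items, n_sample):
        # Each sampled negative outranks the relevant item with probability
        # (true_rank - 1) / (n_items - 1), the fraction of negatives above it.
        p = (true_rank - 1) / (n_items - 1)
        return 1 + sum(random.random() < p for _ in range(n_sample))

    random.seed(0)
    n, k, true_rank = 10_000, 10, 300
    exact = hit_at_k(true_rank, k)  # 0.0: a clear miss under the exact metric
    sampled = sum(hit_at_k(sampled_rank(true_rank, n, 100), k)
                  for _ in range(1000)) / 1000
    print(exact, sampled)           # the sampled Hit@10 looks near-perfect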
35. Resolution of the Burrows-Wheeler Transform Conjecture.
- Author
- Kempa, Dominik and Kociumaka, Tomasz
- Subjects
- COMPUTER programming, COMPUTERS in lexicography, ALGORITHMS, DATA structures, COMPUTER science
- Abstract
The Burrows-Wheeler Transform (BWT) is an invertible text transformation that permutes symbols of a text according to the lexicographical order of its suffixes. BWT is the main component of popular lossless compression programs (such as bzip2) as well as recent powerful compressed indexes (such as the r-index), central in modern bioinformatics. The compressibility of BWT is quantified by the number r of equal-letter runs in the output. Despite the practical significance of BWT, no nontrivial upper bound on r is known. By contrast, the sizes of nearly all other known compression methods have been shown to be either always within a polylog n factor (where n is the length of the text) from z, the size of the Lempel--Ziv (LZ77) parsing of the text, or much larger in the worst case (by an n^ε factor for ε > 0). In this paper, we show that r = O(z log² n) holds for every text. This result has numerous implications for text indexing and data compression; in particular: (1) it proves that many results related to BWT automatically apply to methods based on LZ77, for example, it is possible to obtain functionality of the suffix tree in O(z polylog n) space; (2) it shows that many text processing tasks can be solved in the optimal time assuming the text is compressible using LZ77 by a sufficiently large polylog n factor; and (3) it implies the first nontrivial relation between the number of runs in the BWT of the text and of its reverse. In addition, we provide an O(z polylog n)-time algorithm converting the LZ77 parsing into the run-length compressed BWT. To achieve this, we develop several new data structures and techniques of independent interest. In particular, we define compressed string synchronizing sets (generalizing the recently introduced powerful technique of string synchronizing sets) and show how to efficiently construct them. Next, we propose a new variant of wavelet trees for sequences of long strings, establish a nontrivial bound on their size, and describe efficient construction algorithms. Finally, we develop new indexes that can be constructed directly from the LZ77 parsing and efficiently support pattern matching queries on text substrings. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
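As background for the run count r bounded above, a quadratic-time teaching sketch in Python that builds the BWT from sorted rotations and counts equal-letter runs; production tools compute the BWT from suffix arrays instead:
    def bwt(text):
        # Burrows-Wheeler Transform via sorted rotations (teaching version).
        s = text + "$"  # sentinel, assumed smaller than every text symbol
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(rot[-1] for rot in rotations)

    def runs(s):
        # Number r of maximal equal-letter runs: the compressibility measure.
        return sum(1 for i, c in enumerate(s) if i == 0 or c != s[i - 1])

    t = "abracadabra"
    print(bwt(t), runs(bwt(t)), runs(t))  # BWT clusters symbols, reducing r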
36. Set the Configuration for the Heart of the OS: On the Practicality of Operating System Kernel Debloating.
- Author
- Hsuan-Chi Kuo, Jianyan Chen, Mohan, Sibin, and Tianyin Xu
- Subjects
- KERNEL operating systems, COMPUTER security, LINUX operating systems, OPEN source software, PROGRAMMING languages
- Abstract
This paper presents a study on the practicality of operating system (OS) kernel debloating, that is, reducing kernel code that is not needed by the target applications. Despite their significant benefits regarding security (attack surface reduction) and performance (fast boot time and reduced memory footprints), the state-of-the-art OS kernel debloating techniques are not widely adopted in practice, especially in production environments. We identify the limitations of existing kernel debloating techniques that hinder their practical adoption, including both accidental and essential ones. To understand these limitations, we build an advanced debloating framework named Cozart that enables us to conduct a number of experiments on different types of OS kernels (such as Linux and the L4 microkernel) with a wide variety of applications (such as HTTPD, Memcached, MySQL, NGINX, PHP, and Redis). Our experimental results reveal the challenges and opportunities in making OS kernel debloating practical. We share these insights and our experience to shed light on addressing the limitations of kernel debloating techniques in future research and development efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. A Position Paper on Computing and Communications.
- Author
- Dennis, Jack B.
- Subjects
- COMPUTER systems, TELECOMMUNICATION, INFORMATION science, INFORMATION technology, COMPUTERS, INFORMATION storage & retrieval systems
- Abstract
The effective operation of free enterprise in creating the envisioned information service industry is dependent upon three accomplishments: (1) the restructuring of our information processing industry so that a clear division of costs is made among computing, communications, and the development of information services; (2) the wide use of multiaccess system concepts so that information services may share in the use of computer installations and so that the cost of their construction is reasonable; and (3) the development of public, message-switched communications services so that adequate provisions are made for information security. [ABSTRACT FROM AUTHOR]
- Published
- 1968
- Full Text
- View/download PDF
38. Framing Sustainability as a Property of Software Quality.
- Author
- LAGO, PATRICIA, AKINLI KOÇAK, SEDEF, CRNKOVIC, IVICA, and PENZENSTADLER, BIRGIT
- Subjects
- SUSTAINABILITY, COMPUTER programming, CAR sharing, PAPER mills, COMPUTER software development
- Abstract
The article focuses on case studies on sustainability in computer programming. The cases investigated were a paper mill's energy control system and a car-sharing service in Germany. The authors argue that creating a sustainability analysis framework will allow software developers to analyze the environmental and social aspects of their work and to balance this with technical and economic considerations. However, they note that sustainability requirements will add to the system scope and will necessitate more work during the requirements phase. INSET: Software Sustainability..
- Published
- 2015
- Full Text
- View/download PDF
39. Economic and Business Dimensions The Gamification of Academia: Gaming the system.
- Author
- Flaherty, Sean and Gordon, Gregg
- Subjects
- HIGHER education, SOCIAL science research, PERIODICAL publishing, ACADEMIC fraud, FALSIFICATION of data, AUTHORSHIP, CORRUPT practices in research
- Abstract
The article discusses the establishment of a system to provide metrics and rank the academic value of social science research articles, authors, universities, and other organizations. Information is presented on the way the metrics began to have real-world implications for career advancement and tenure, tempting individuals to manipulate these metrics by methods such as crafting software scripts to boost their numbers and employing graduate students to repeatedly download their papers. The article ends by asking whether the prevalence of cheating within academia is a result of the pursuit of status within the intellectual hierarchy, or a consequence of 'gamification' infiltrating academics' lives and eliciting behavior patterns inherent to human nature.
- Published
- 2023
- Full Text
- View/download PDF
40. Worst-Case Topological Entropy and Minimal Data Rate for State Estimation of Switched Linear Systems.
- Author
- Berger, Guillaume O. and Jungers, Raphaël M.
- Subjects
- LINEAR systems, TOPOLOGICAL entropy, JOINT spectral radius, COMPUTER systems
- Abstract
In this paper, we study the problem of estimating the state of a switched linear system (SLS), when the observation of the system is subject to communication constraints. We introduce the concept of worst-case topological entropy of such systems, and we show that this quantity is equal to the minimal data rate (number of bits per second) required for the state estimation of the system under arbitrary switching. Furthermore, we provide a closed-form expression for the worst-case topological entropy of switched linear systems, showing that its evaluation reduces to the computation of the joint spectral radius (JSR) of some lifted switched linear system obtained from the original one by using tools from multilinear algebra, and thus can benefit from well-established algorithms for the stability analysis of switched linear systems. Finally, drawing on this expression, we describe a practical coder-decoder that estimates the state of the system and operates at a data rate arbitrarily close to the worst-case topological entropy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
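As classical context for the result above: for a single (non-switched) discrete-time linear system, the minimal data rate for state estimation is the log-sum of the unstable eigenvalue magnitudes, and the paper's worst-case notion generalizes this, with the joint spectral radius of a lifted system playing the role of the eigenvalues. A LaTeX statement of the classical single-mode baseline (background only, not the paper's formula):
    % Minimal bit rate for estimating the state of x_{k+1} = A x_k (single mode):
    R_{\min} \;=\; \sum_{i \,:\, |\lambda_i(A)| > 1} \log_2 |\lambda_i(A)|
    \quad \text{bits per time step.}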
41. Research for Practice: Crash Consistency: Keeping data safe in the presence of crashes is a fundamental problem.
- Author
- ALAGAPPAN, RAMNATTHAN
- Subjects
- COMPUTER system failures, COMPUTER system failure prevention, COMPUTER storage capacity, APPLICATION software, ELECTRONIC file management, COMPUTERS
- Abstract
This article discusses crash consistency: to keep data safe when crashes occur, each level of the system must be implemented correctly, and the system components' interfaces must be used correctly by applications. Several papers are cited within this article, exploring the file system (a lower-level component within the system), interface-level guarantees and bug-finders, crash-consistent programs, and how the newer concept of persistent memory interacts with system crashes.
- Published
- 2023
- Full Text
- View/download PDF
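A concrete instance of the application-level idioms these papers examine is the write-temp-then-rename pattern for crash-consistent file updates. A minimal POSIX-flavored Python sketch:
    import os

    def atomic_write(path, data: bytes):
        # Crash-consistent update: a reader sees either the old file or the
        # new one in its entirety, never a torn mix of the two.
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # persist the new contents before renaming
        os.rename(tmp, path)      # atomic replacement on POSIX file systems
        dir_fd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
        try:
            os.fsync(dir_fd)      # persist the rename itself (directory entry)
        finally:
            os.close(dir_fd)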
42. Research for Practice: Convergence.
- Author
- KLEPPMANN, MARTIN
- Subjects
- COMPUTER science research, CONSISTENCY models (Computers), PROGRAMMING languages, HUMAN-computer interaction, DATA management, DISTRIBUTED databases
- Abstract
The article presents a selection of research papers focusing on the theme of convergence, also known as eventual consistency, and specific computer science topics including systems, programming languages, human-computer interaction, and data management. Topics include conflict-free replicated data type (CRDT), consistency as logical monotonicity (CALM), and mergeable replicated data types (MRDTs).
- Published
- 2022
- Full Text
- View/download PDF
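The simplest example of the convergence property surveyed above is a grow-only counter CRDT: merge takes per-replica maxima, so replicas reach the same state regardless of message order or duplication. A minimal Python sketch:
    class GCounter:
        # Grow-only counter CRDT: one entry per replica; merge is pointwise max.
        def __init__(self, replica_id):
            self.id, self.counts = replica_id, {}

        def increment(self, n=1):
            self.counts[self.id] = self.counts.get(self.id, 0) + n

        def value(self):
            return sum(self.counts.values())

        def merge(self, other):
            # Commutative, associative, idempotent: the recipe for convergence.
            for rid, c in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), c)

    a, b = GCounter("a"), GCounter("b")
    a.increment(3); b.increment(2)
    a.merge(b); b.merge(a)
    assert a.value() == b.value() == 5  # both replicas converge to the same state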
43. Spectre Attacks: Exploiting Speculative Execution.
- Author
- Kocher, Paul, Horn, Jann, Fogh, Anders, Genkin, Daniel, Gruss, Daniel, Haas, Werner, Hamburg, Mike, Lipp, Moritz, Mangard, Stefan, Prescher, Thomas, Schwarz, Michael, and Yarom, Yuval
- Subjects
- COMPUTER hacking, CENTRAL processing units, INSTRUCTION set architecture, JUST-in-time systems, MICROPROCESSORS
- Abstract
Modern processors use branch prediction and speculative execution to maximize performance. For example, if the destination of a branch depends on a memory value that is in the process of being read, CPUs will try to guess the destination and attempt to execute ahead. When the memory value finally arrives, the CPU either discards or commits the speculative computation. Speculative logic is unfaithful in how it executes, can access the victim's memory and registers, and can perform operations with measurable side effects. Spectre attacks involve inducing a victim to speculatively perform operations that would not occur during correct program execution and which leak the victim's confidential information via a side channel to the adversary. This paper describes practical attacks that combine methodology from side-channel attacks, fault attacks, and return-oriented programming that can read arbitrary memory from the victim's process. More broadly, the paper shows that speculative execution implementations violate the security assumptions underpinning numerous software security mechanisms, such as operating system process separation, containerization, just-in-time (JIT) compilation, and countermeasures to cache timing and side-channel attacks. These attacks represent a serious threat to actual systems because vulnerable speculative execution capabilities are found in microprocessors from Intel, AMD, and ARM that are used in billions of devices. Although makeshift processor-specific countermeasures are possible in some cases, sound solutions will require fixes to processor designs as well as updates to instruction set architectures (ISAs) to give hardware architects and software developers a common understanding as to what computation state CPU implementations are (and are not) permitted to leak. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
44. Multi-Itinerary Optimization as Cloud Service.
- Author
- Cristian, Alexandru, Marshall, Luke, Negrea, Mihai, Stoichescu, Flavius, Cao, Peiwei, and Menache, Ishai
- Subjects
- CLOUD computing, TRAFFIC flow, ALGORITHMS, TRAVELING salesman problem, TRAVEL time (Traffic engineering)
- Abstract
In this paper, we describe multi-itinerary optimization (MIO)--a novel Bing Maps service that automates the process of building itineraries for multiple agents while optimizing their routes to minimize travel time or distance. MIO can be used by organizations with a fleet of vehicles and drivers, mobile salesforce, or a team of personnel in the field, to maximize workforce efficiency. It supports a variety of constraints, such as service time windows, duration, priority, pickup and delivery dependencies, and vehicle capacity. MIO also considers traffic conditions between locations, resulting in algorithmic challenges at multiple levels (e.g., calculating time-dependent travel-time distance matrices at scale and scheduling services for multiple agents). To support an end-to-end cloud service with turnaround times of a few seconds, our algorithm design targets a sweet spot between accuracy and performance. Toward that end, we build a scalable approach based on the ALNS metaheuristic. Our experiments show that accounting for traffic significantly improves solution quality: MIO finds efficient routes that avoid late arrivals, whereas traffic-agnostic approaches result in a 15% increase in the combined travel time and the lateness of an arrival. Furthermore, our approach generates itineraries with substantially higher quality than a cutting-edge heuristic (LKH), with faster running times for large instances. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
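The ALNS metaheuristic mentioned above follows a generic destroy-and-repair loop with adaptive operator weights. An illustrative Python skeleton (operator interfaces, reward constant, and acceptance rule are placeholders, not the MIO service's implementation; a full ALNS also accepts some worsening moves, e.g. simulated-annealing style):
    import random

    def alns(initial, destroy_ops, repair_ops, cost, iters=1000):
        # destroy_ops / repair_ops: lists of functions mapping a solution to a
        # (partially destroyed / repaired) solution.
        best = current = initial
        w_d = {op: 1.0 for op in destroy_ops}
        w_r = {op: 1.0 for op in repair_ops}
        for _ in range(iters):
            d = random.choices(list(w_d), weights=list(w_d.values()))[0]
            r = random.choices(list(w_r), weights=list(w_r.values()))[0]
            candidate = r(d(current))
            if cost(candidate) < cost(current):
                current = candidate
                w_d[d] += 0.1   # reward operators that found an improvement,
                w_r[r] += 0.1   # so they are chosen more often in later rounds
            if cost(candidate) < cost(best):
                best = candidate
        return best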
45. Should Conferences Meet Journals and Where? A Proposal for ‘PACM’.
- Author
- Konstan, Joseph A. and Davidson, Jack W.
- Subjects
- CONFERENCE papers, PERIODICALS, SCHOLARLY publishing, COMPUTER scientists
- Abstract
The article focuses on a proposal by the Association for Computing Machinery's Publications Board Conferences Committee to bring together conference and journal publishing. Different methods offered by the Association to publish conference papers in journals are presented. Many computer scientists have focused on limitations of conference publishing.
- Published
- 2015
- Full Text
- View/download PDF
46. Securing the Wireless Emergency Alerts System.
- Author
- Jihoon Lee, Gyuhong Lee, Jinsung Lee, Youngbin Im, Hollingsworth, Max, Wustrow, Eric, Grunwald, Dirk, and Sangtae Ha
- Subjects
- WIRELESS communications, EMERGENCY communication systems, 4G networks, CELL phones
- Abstract
Modern cell phones are required to receive and display alerts via the Wireless Emergency Alert (WEA) program, under the mandate of the Warning, Alert, and Response Act of 2006. These alerts include AMBER alerts, severe weather alerts, and (unblockable) Presidential Alerts, intended to inform the public of imminent threats. Recently, a test Presidential Alert was sent to all capable phones in the U.S., prompting concerns about how the underlying WEA protocol could be misused or attacked. In this paper, we investigate the details of this system and develop and demonstrate the first practical spoofing attack on Presidential Alerts, using a commercially available software-defined radio together with our modifications to open source software libraries. We find that with only four malicious portable base stations of a single Watt of transmit power each, almost all of a 50,000-seat stadium can be attacked with a 90% success rate. The real impact of such an attack would, of course, depend on the density of cellphones in range; fake alerts in crowded cities or stadiums could potentially result in cascades of panic. Fixing this problem will require a large collaborative effort between carriers, government stakeholders, and cellphone manufacturers. To seed this effort, we also propose three mitigation solutions to address this threat. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
47. A Year in Lockdown: How the Waves of COVID-19 Impact Internet Traffic.
- Author
- Feldmann, Anja, Gasser, Oliver, Lichtblau, Franziska, Pujol, Enric, Poese, Ingmar, Dietzel, Christoph, Wagner, Daniel, Wichtlhuber, Matthias, Tapiador, Juan, Vallina-Rodriguez, Narseo, Hohlfeld, Oliver, and Smaragdakis, Georgios
- Subjects
- COVID-19 pandemic, INTERNET traffic, INTERNET users, TECHNOLOGY & society, VIRTUAL private networks
- Abstract
In March 2020, the World Health Organization declared the Corona Virus 2019 (COVID-19) outbreak a global pandemic. As a result, billions of people were either encouraged or forced by their governments to stay home to reduce the spread of the virus. This caused many to turn to the Internet for work, education, social interaction, and entertainment. With the Internet demand rising at an unprecedented rate, the question of whether the Internet could sustain this additional load emerged. To answer this question, this paper reviews the impact of the first year of the COVID-19 pandemic on Internet traffic and analyzes the network's performance. To keep our study broad, we collect and analyze Internet traffic data from multiple locations at the core and edge of the Internet. From this, we characterize how traffic and application demands changed, describe the "new normal," and explain how the Internet reacted during these unprecedented times. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
48. Relative Status of Journal and Conference Publications in Computer Science.
- Author
- FREYNE, JILL, COYLE, LORCAN, SMYTH, BARRY, and CUNNINGHAM, PADRAIG
- Subjects
- PUBLISHING, COMPUTER science, COMPUTER logic, CONFERENCE papers, ACADEMIC discourse, INFORMATION technology
- Abstract
The article discusses the status of research papers published by computer science (CS) conferences, as compared with those published in CS journals. Debate has occurred over the proper way to assess the quality of the research presented in a paper. Problems exist in determining the quality of research in various journals due to the wide array of publication opportunities available. A scale created to measure the quality of conference papers in a variety of ways, such as citations and rejection rates, is discussed.
- Published
- 2010
- Full Text
- View/download PDF
49. Understanding Deep Learning (Still) Requires Rethinking Generalization.
- Author
- Chiyuan Zhang, Bengio, Samy, Hardt, Moritz, Recht, Benjamin, and Vinyals, Oriol
- Subjects
- MATHEMATICAL regularization, GENERALIZATION, ARTIFICIAL neural networks, ARTIFICIAL intelligence, DATA
- Abstract
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small gap between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models. We supplement this republication with a new section at the end summarizing recent progress in the field since the original version of this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
50. Constant Overhead Quantum Fault Tolerance with Quantum Expander Codes.
- Author
- Fawzi, Omar, Grospellier, Antoine, and Leverrier, Anthony
- Subjects
- QUANTUM computing, COMPUTER programming, ALGORITHMS, FAULT-tolerant computing
- Abstract
The threshold theorem is a seminal result in the field of quantum computing asserting that arbitrarily long quantum computations can be performed on a faulty quantum computer provided that the noise level is below some constant threshold. This remarkable result comes at the price of increasing the number of qubits (quantum bits) by a large factor that scales polylogarithmically with the size of the quantum computation we wish to realize. Minimizing the space overhead for fault-tolerant quantum computation is a pressing challenge that is crucial to benefit from the computational potential of quantum devices. In this paper, we study the asymptotic scaling of the space overhead needed for fault-tolerant quantum computation. We show that the polylogarithmic factor in the standard threshold theorem is in fact not needed and that there is a fault-tolerant construction that uses a number of qubits that is only a constant factor more than the number of qubits of the ideal computation. This result was conjectured by Gottesman who suggested to replace the concatenated codes from the standard threshold theorem by quantum error-correcting codes with a constant encoding rate. The main challenge was then to find an appropriate family of quantum codes together with an efficient classical decoding algorithm working even with a noisy syndrome. The efficiency constraint is crucial here: bear in mind that qubits are inherently noisy and that faults keep accumulating during the decoding process. The role of the decoder is therefore to keep the number of errors under control during the whole computation. On a technical level, our main contribution is the analysis of the small-set-flip decoding algorithm applied to the family of quantum expander codes. We show that it can be parallelized to run in constant time while correcting sufficiently many errors on both the qubits and the syndrome to keep the error under control. These tools can be seen as a quantum generalization of the bit-flip algorithm applied to the (classical) expander codes of Sipser and Spielman. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
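The small-set-flip decoder analyzed above generalizes the classical Sipser-Spielman bit-flip algorithm for (classical) expander codes, which is easy to sketch. A toy Python version over an explicit parity-check matrix (illustrative only; real expander codes are far larger):
    def bit_flip_decode(H, word, max_rounds=100):
        # H: parity-check matrix as a list of 0/1 rows. Repeatedly flip any bit
        # involved in more unsatisfied than satisfied checks; on a good
        # expander this corrects a constant fraction of errors.
        w = list(word)
        for _ in range(max_rounds):
            unsat = {i for i, row in enumerate(H)
                     if sum(r * x for r, x in zip(row, w)) % 2 == 1}
            if not unsat:
                return w  # all parity checks satisfied
            for j in range(len(w)):
                checks = [i for i, row in enumerate(H) if row[j]]
                if 2 * sum(i in unsat for i in checks) > len(checks):
                    w[j] ^= 1
                    break
            else:
                return None  # no flippable bit: decoding is stuck
        return None

    # Toy repetition code x1 = x2 = x3; checks compare adjacent bits.
    H = [[1, 1, 0], [0, 1, 1]]
    print(bit_flip_decode(H, [1, 0, 0]))  # -> [0, 0, 0]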