1,265 results for "Sadeghi, Ahmad-Reza"
Search Results
2. Lost and Found in Speculation: Hybrid Speculative Vulnerability Detection
- Author
Rostami, Mohamadreza, Zeitouni, Shaza, Kande, Rahul, Chen, Chen, Mahmoody, Pouya, Rajendran, Jeyavijayan, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security, Computer Science - Hardware Architecture
- Abstract
Microarchitectural attacks represent a challenging and persistent threat to modern processors, exploiting inherent design vulnerabilities to leak sensitive information or compromise systems. Of particular concern is the susceptibility of speculative execution, a fundamental performance-enhancing mechanism, to such attacks. We introduce Specure, a novel pre-silicon verification method that combines hardware fuzzing with Information Flow Tracking (IFT) to address speculative-execution leakages. Integrating IFT enables two significant and non-trivial enhancements over existing fuzzing approaches: (i) automatic detection of microarchitectural information-leakage vulnerabilities without a golden model and (ii) a novel Leakage Path coverage metric for efficient vulnerability detection. Specure identifies previously overlooked speculative-execution vulnerabilities on the RISC-V BOOM processor and explores the vulnerability search space 6.45x faster than existing fuzzing techniques. Moreover, Specure detects known vulnerabilities 20x faster.
- Published
- 2024
3. Phantom: Untargeted Poisoning Attacks on Semi-Supervised Learning (Full Version)
- Author
Knauer, Jonathan, Rieger, Phillip, Fereidooni, Hossein, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security
- Abstract
Deep Neural Networks (DNNs) can handle increasingly complex tasks, but they require rapidly expanding training datasets. Collecting data from platforms with user-generated content, such as social networks, has significantly eased the acquisition of large datasets for training DNNs. Despite these advancements, the manual labeling process remains a substantial challenge in terms of both time and cost. In response, Semi-Supervised Learning (SSL) approaches have emerged, where only a small fraction of the dataset needs to be labeled, leaving the majority unlabeled. However, leveraging data from untrusted sources like social networks also creates new security risks, as potential attackers can easily inject manipulated samples. Previous research on the security of SSL primarily focused on injecting backdoors into trained models, while less attention was given to the more challenging untargeted poisoning attacks. In this paper, we introduce Phantom, the first untargeted poisoning attack in SSL that disrupts the training process by injecting a small number of manipulated images into the unlabeled dataset. Unlike existing attacks, our approach requires adding only a few manipulated samples, such as posting images on social networks, without the need to control the victim. Phantom causes SSL algorithms to overlook the actual images' pixels and to rely only on the maliciously crafted patterns that Phantom superimposes on the real images. We show Phantom's effectiveness on 6 different datasets and 3 real-world social-media platforms (Facebook, Instagram, Pinterest). Already small fractions of manipulated samples (e.g., 5%) reduce the accuracy of the resulting model by 10%, with higher percentages leading to performance comparable to a naive classifier. Our findings demonstrate the threat of poisoning user-generated content platforms, rendering them unsuitable for SSL in specific tasks., Comment: To appear at ACM CCS 2024
- Published
- 2024
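The core attack step the Phantom abstract describes, superimposing a crafted pattern on a small fraction of unlabeled images, can be sketched as follows. This is my illustration under stated assumptions, not the authors' code; the function names, the blending factor `alpha`, and the 5% poisoning fraction are assumptions chosen to mirror the abstract's example.

```python
# Hedged sketch of untargeted SSL poisoning in the spirit of Phantom:
# blend a crafted pattern into a small fraction of the unlabeled pool so a
# model may key on the pattern instead of the real pixels. Illustrative only.
import random

def superimpose(image, pattern, alpha=0.15):
    """Blend a pattern into an image (images as flat lists of 0..255 floats)."""
    return [(1 - alpha) * p + alpha * q for p, q in zip(image, pattern)]

def poison_unlabeled(images, pattern, fraction=0.05, seed=0):
    """Poison a random `fraction` of the unlabeled dataset."""
    rng = random.Random(seed)
    idx = set(rng.sample(range(len(images)), max(1, int(fraction * len(images)))))
    return [superimpose(img, pattern) if i in idx else img
            for i, img in enumerate(images)]

clean = [[float(i % 256)] * 8 for i in range(100)]   # toy 8-pixel "images"
pattern = [255.0] * 8                                 # crafted trigger pattern
poisoned = poison_unlabeled(clean, pattern)
changed = sum(1 for a, b in zip(clean, poisoned) if a != b)
print(changed)  # 5 of 100 samples carry the pattern
```

In a real attack the pattern would be optimized rather than constant, and the poisoned images would be posted to the platforms the victim scrapes.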
4. Beyond Random Inputs: A Novel ML-Based Hardware Fuzzing
- Author
Rostami, Mohamadreza, Chilese, Marco, Zeitouni, Shaza, Kande, Rahul, Rajendran, Jeyavijayan, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Software Engineering, Computer Science - Hardware Architecture, Computer Science - Cryptography and Security, Computer Science - Machine Learning
- Abstract
Modern computing systems heavily rely on hardware as the root of trust. However, their increasing complexity has given rise to security-critical vulnerabilities that cross-layer attacks can exploit. Traditional hardware vulnerability detection methods, such as random regression and formal verification, have limitations. Random regression, while scalable, is slow at exploring hardware, and formal verification techniques are often hampered by manual effort and state explosion. Hardware fuzzing has emerged as an effective approach to exploring and detecting security vulnerabilities in large-scale designs like modern processors, outperforming traditional methods in coverage, scalability, and efficiency. However, state-of-the-art fuzzers struggle to achieve comprehensive coverage of intricate hardware designs within a practical timeframe, often falling short of a 70% coverage threshold. We propose a novel ML-based hardware fuzzer, ChatFuzz, to address this challenge. Our approach leverages LLMs like ChatGPT to understand processor language, focusing on machine codes and generating assembly code sequences. Reinforcement learning (RL) is integrated to guide the input generation process by rewarding the inputs using code coverage metrics. We use the open-source RISC-V-based RocketCore processor as our testbed. ChatFuzz achieves a condition coverage rate of 75% in just 52 minutes, whereas a state-of-the-art fuzzer requires a lengthy 30-hour window to reach similar condition coverage. Furthermore, our fuzzer attains 80% coverage when provided with a limited pool of 10 simulation instances/licenses within a 130-hour window. During this time, it executed a total of 199K test cases, of which 6K produced discrepancies with the processor's golden model. Our analysis identified more than 10 unique mismatches, including two new bugs in RocketCore and discrepancies from the RISC-V ISA Simulator.
- Published
- 2024
5. One for All and All for One: GNN-based Control-Flow Attestation for Embedded Devices
- Author
Chilese, Marco, Mitev, Richard, Orenbach, Meni, Thorburn, Robert, Atamli, Ahmad, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security, Computer Science - Machine Learning
- Abstract
Control-Flow Attestation (CFA) is a security service that allows an entity (verifier) to verify the integrity of code execution on a remote computer system (prover). Existing CFA schemes suffer from impractical assumptions, such as requiring access to the prover's internal state (e.g., memory or code), the complete Control-Flow Graph (CFG) of the prover's software, large sets of measurements, or tailor-made hardware. Moreover, current CFA schemes are inadequate for attesting embedded systems due to their high computational overhead and resource usage. In this paper, we overcome the limitations of existing CFA schemes for embedded devices by introducing RAGE, a novel, lightweight CFA approach with minimal requirements. RAGE can detect Code Reuse Attacks (CRA), including control- and non-control-data attacks. It efficiently extracts features from one execution trace and leverages Unsupervised Graph Neural Networks (GNNs) to identify deviations from benign executions. The core intuition behind RAGE is to exploit the correspondence between execution trace, execution graph, and execution embeddings to eliminate the unrealistic requirement of having access to a complete CFG. We evaluate RAGE on embedded benchmarks and demonstrate that (i) it detects 40 real-world attacks on embedded software; (ii) it withstands stress-testing with synthetic return-oriented programming (ROP) and data-oriented programming (DOP) attacks on the real-world embedded software benchmark Embench, achieving 98.03% (ROP) and 91.01% (DOP) F1-Score while maintaining a low False Positive Rate of 3.19%; and (iii) on OpenSSL, used by millions of devices, it achieves 97.49% and 84.42% F1-Score for ROP and DOP attack detection, with an FPR of 5.47%.
- Published
- 2024
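RAGE's feature-extraction step, turning one execution trace into an execution graph, can be sketched in a few lines. This is an illustrative assumption of mine, not RAGE's implementation (which feeds such graphs to a GNN); basic-block names and traces here are toy values.

```python
# Hedged sketch: convert an execution trace of basic-block identifiers into a
# weighted execution graph (edge = observed transition, weight = count), the
# kind of structure a GNN could embed to spot control-flow deviations.
from collections import Counter

def trace_to_graph(trace):
    """Return {(src, dst): count} for consecutive transitions in the trace."""
    return Counter(zip(trace, trace[1:]))

benign = ["a", "b", "c", "b", "c", "d"]
attack = ["a", "b", "x", "c", "d"]       # "x" models an injected gadget
g_benign = trace_to_graph(benign)
g_attack = trace_to_graph(attack)
new_edges = set(g_attack) - set(g_benign)
print(sorted(new_edges))  # transitions never observed in the benign graph
```

An anomaly detector trained only on benign graphs would flag the attack trace because it contains edges absent from every benign execution.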
6. DeepEclipse: How to Break White-Box DNN-Watermarking Schemes
- Author
Pegoraro, Alessandro, Segna, Carlotta, Kumari, Kavita, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security, Computer Science - Machine Learning
- Abstract
Deep Learning (DL) models have become crucial in digital transformation, raising concerns about their intellectual property rights. Different watermarking techniques have been developed to protect Deep Neural Networks (DNNs) from IP infringement, creating a competitive field of DNN watermarking and removal methods. The predominant watermarking schemes use white-box techniques, which involve modifying weights by adding a unique signature to specific DNN layers. On the other hand, existing attacks on white-box watermarking usually require knowledge of the specific deployed watermarking scheme or access to the underlying data for further training and fine-tuning. We propose DeepEclipse, a novel and unified framework designed to remove white-box watermarks. We present obfuscation techniques that significantly differ from existing white-box watermark-removal schemes. DeepEclipse can evade watermark detection without prior knowledge of the underlying watermarking scheme, additional data, or training and fine-tuning. Our evaluation reveals that DeepEclipse excels at breaking multiple white-box watermarking schemes, reducing watermark detection to random guessing while maintaining model accuracy similar to that of the original. Our framework showcases a promising solution to the ongoing challenges of DNN watermark protection and removal., Comment: To appear in the 33rd USENIX Security Symposium, August 2024, Philadelphia, PA, USA. 18 pages, 7 figures, 4 tables, 5 algorithms, 13 equations
- Published
- 2024
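One classic white-box obfuscation idea in the spirit the DeepEclipse abstract describes can be shown concretely: rescale two consecutive linear layers by an invertible diagonal matrix and its inverse, so every weight (where a signature might be embedded) changes while the model's function does not. This is my illustration of the general technique, not the paper's exact transforms; the matrices are toy values and the nonlinearity is omitted for simplicity.

```python
# Hedged sketch: functionally equivalent weight obfuscation. Replace
# W2 @ W1 with (W2 @ D^-1) @ (D @ W1) for an invertible diagonal D: the
# composite map is unchanged, but every individual weight differs.
def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

W1 = [[1.0, 2.0], [3.0, 4.0]]    # layer 1 (no nonlinearity, for simplicity)
W2 = [[0.5, -1.0], [2.0, 0.5]]   # layer 2
d = [3.0, 0.25]                  # diagonal of an invertible D

W1_obf = [[d[i] * w for w in row] for i, row in enumerate(W1)]  # D @ W1
W2_obf = [[w / d[j] for j, w in enumerate(row)] for row in W2]  # W2 @ D^-1

x = [1.0, -2.0]
y = matvec(W2, matvec(W1, x))
y_obf = matvec(W2_obf, matvec(W1_obf, x))
print(y, y_obf)  # identical outputs, different weights
```

With ReLU layers the same trick still works for positive scalings, since ReLU commutes with positive diagonal rescaling.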
7. WhisperFuzz: White-Box Fuzzing for Detecting and Locating Timing Vulnerabilities in Processors
- Author
Borkar, Pallavi, Chen, Chen, Rostami, Mohamadreza, Singh, Nikhilesh, Kande, Rahul, Sadeghi, Ahmad-Reza, Rebeiro, Chester, and Rajendran, Jeyavijayan
- Subjects
Computer Science - Cryptography and Security
- Abstract
Timing vulnerabilities in processors have emerged as a potent threat. As processors are the foundation of any computing system, identifying these flaws is imperative. Recently, fuzzing techniques, traditionally used for detecting software vulnerabilities, have shown promising results for uncovering vulnerabilities in large-scale hardware designs, such as processors. Researchers have adapted black-box or grey-box fuzzing to detect timing vulnerabilities in processors. However, they cannot identify the locations or root causes of these timing vulnerabilities, nor do they provide coverage feedback to give the designer confidence in the processor's security. To address the deficiencies of the existing fuzzers, we present WhisperFuzz, the first white-box fuzzer with static analysis, aiming to detect and locate timing vulnerabilities in processors and evaluate the coverage of microarchitectural timing behaviors. WhisperFuzz uses the fundamental nature of processors' timing behaviors, microarchitectural state transitions, to localize timing vulnerabilities. WhisperFuzz automatically extracts microarchitectural state transitions from a processor design at the register-transfer level (RTL) and instruments the design to monitor the state transitions as coverage. Moreover, WhisperFuzz measures the time a design-under-test (DUT) takes to process tests, identifying any minor, abnormal variations that may hint at a timing vulnerability. WhisperFuzz detects 12 new timing vulnerabilities across advanced open-source RISC-V processors: BOOM, Rocket Core, and CVA6. Eight of these violate the zero-latency requirements of the Zkt extension and are considered serious security vulnerabilities. Moreover, WhisperFuzz also pinpoints the locations of both the new and the existing vulnerabilities., Comment: Accepted to USENIX Sec'24
- Published
- 2024
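The abstract's last detection step, measuring how long the DUT takes to process each test and flagging minor abnormal variations, can be illustrated with a trivial outlier check. This is a hedged stand-in of mine; WhisperFuzz's actual analysis operates on RTL state-transition coverage, and the cycle counts and threshold below are invented.

```python
# Hedged sketch: flag tests whose processing time strays from the typical
# latency, the kind of timing variation that may hint at a timing channel.
from statistics import median

def flag_timing_outliers(cycles, threshold=3):
    """Return indices of tests whose cycle count deviates from the median."""
    m = median(cycles)
    return [i for i, c in enumerate(cycles) if abs(c - m) > threshold]

# Cycle counts for a batch of tests; test 4 is suspiciously slow.
samples = [100, 101, 99, 100, 142, 100, 98]
print(flag_timing_outliers(samples))  # [4]
```

A real white-box fuzzer would correlate such outliers with the microarchitectural state transitions exercised by the offending test to localize the root cause.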
8. FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning
- Author
Fereidooni, Hossein, Pegoraro, Alessandro, Rieger, Phillip, Dmitrienko, Alexandra, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security, Computer Science - Machine Learning
- Abstract
Federated learning (FL) is a collaborative learning paradigm allowing multiple clients to jointly train a model without sharing their training data. However, FL is susceptible to poisoning attacks, in which the adversary injects manipulated model updates into the federated model aggregation process to corrupt or destroy predictions (untargeted poisoning) or implant hidden functionalities (targeted poisoning or backdoors). Existing defenses against poisoning attacks in FL have several limitations, such as relying on specific assumptions about attack types, strategies, or data distributions, or failing to remain robust against advanced injection techniques while simultaneously maintaining the utility of the aggregated model. To address the deficiencies of existing defenses, we take a generic and completely different approach to detecting poisoning (targeted and untargeted) attacks. We present FreqFed, a novel aggregation mechanism that transforms the model updates (i.e., weights) into the frequency domain, where we can identify the core frequency components that inherit sufficient information about weights. This allows us to effectively filter out malicious updates during local training on the clients, regardless of attack types, strategies, and clients' data distributions. We extensively evaluate the efficiency and effectiveness of FreqFed in different application domains, including image classification, word prediction, IoT intrusion detection, and speech recognition. We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model., Comment: To appear in the Network and Distributed System Security (NDSS) Symposium 2024. 16 pages, 8 figures, 12 tables, 1 algorithm, 3 equations
- Published
- 2023
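FreqFed's core idea, moving updates into the frequency domain and filtering on low-frequency components, can be sketched with a naive DCT. This is my simplification under stated assumptions, not the paper's implementation: the clustering step is replaced by a crude distance-to-centroid cut, and all vectors are toy values.

```python
# Hedged sketch of frequency-domain filtering of FL updates: DCT each client
# update, keep the low-frequency components as a fingerprint, and drop
# updates far from the majority. Illustrative only.
import math

def dct(x):
    """Naive O(n^2) DCT-II of a list of floats."""
    N = len(x)
    return [sum(v * math.cos(math.pi * (n + 0.5) * k / N)
                for n, v in enumerate(x)) for k in range(N)]

def filter_updates(updates, keep=4):
    feats = [dct(u)[:keep] for u in updates]        # low-frequency fingerprints
    centroid = [sum(c) / len(feats) for c in zip(*feats)]
    dists = [math.dist(f, centroid) for f in feats]
    cutoff = sorted(dists)[len(dists) // 2]          # keep the closer half
    return [i for i, d in enumerate(dists) if d <= cutoff]

benign = [[0.1 * j for j in range(8)] for _ in range(4)]  # four similar updates
poisoned = [[5.0 - 0.1 * j for j in range(8)]]            # one outlier update
accepted = filter_updates(benign + poisoned)
print(accepted)  # the poisoned update (index 4) is filtered out
```

The real scheme clusters fingerprints (rather than thresholding on a centroid) so it needs no assumption about how many clients are malicious.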
9. MABFuzz: Multi-Armed Bandit Algorithms for Fuzzing Processors
- Author
Gohil, Vasudev, Kande, Rahul, Chen, Chen, Sadeghi, Ahmad-Reza, and Rajendran, Jeyavijayan
- Subjects
Computer Science - Cryptography and Security
- Abstract
As the complexity of processors keeps increasing, the task of effectively verifying their integrity and security becomes ever more daunting. The intricate web of instructions, microarchitectural features, and interdependencies woven into modern processors poses a formidable challenge for even the most diligent verification and security engineers. To tackle this growing concern, researchers have recently developed fuzzing techniques explicitly tailored for hardware processors. However, a prevailing issue with these hardware fuzzers is their heavy reliance on static strategies to make decisions in their algorithms. To address this problem, we develop a novel dynamic and adaptive decision-making framework, MABFuzz, that uses multi-armed bandit (MAB) algorithms to fuzz processors. MABFuzz is agnostic to, and hence applicable to, any existing hardware fuzzer. In designing MABFuzz, we encountered challenges related to the compatibility of MAB algorithms with fuzzers and to maximizing their efficacy for fuzzing. We overcame these challenges by modifying the fuzzing process and tailoring the MAB algorithms to accommodate the special requirements of hardware fuzzing. We integrated three widely used MAB algorithms into a state-of-the-art hardware fuzzer and evaluated them on three popular RISC-V-based processors. Experimental results demonstrate the ability of MABFuzz to cover a broader spectrum of processors' intricate landscapes, and to do so with remarkable efficiency. In particular, MABFuzz achieves up to 308x speedup in detecting vulnerabilities and up to 5x speedup in achieving coverage compared to a state-of-the-art technique., Comment: To be published at the Design, Automation and Test in Europe Conference, 2024
- Published
- 2023
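The MAB framing the abstract describes can be illustrated with a textbook epsilon-greedy bandit where each arm is a mutation operator and the reward models newly gained coverage. This is an assumption-laden sketch of the general technique, not MABFuzz's tailored algorithms; the reward probabilities and hyperparameters are invented.

```python
# Hedged sketch: epsilon-greedy bandit choosing among mutation operators,
# rewarded when a mutated test unlocks new coverage. Illustrative only.
import random

class EpsilonGreedy:
    def __init__(self, n_arms, eps=0.1, seed=0):
        self.eps, self.rng = eps, random.Random(seed)
        self.counts = [0] * n_arms       # pulls per operator
        self.values = [0.0] * n_arms     # running mean reward per operator

    def select(self):
        if self.rng.random() < self.eps:                 # explore
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy fuzzing loop: operator 2 tends to unlock the most new coverage.
true_reward = [0.1, 0.3, 0.9]
bandit = EpsilonGreedy(3)
rng = random.Random(1)
for _ in range(500):
    arm = bandit.select()
    bandit.update(arm, 1.0 if rng.random() < true_reward[arm] else 0.0)
best = max(range(3), key=lambda a: bandit.values[a])
print(best)  # index of the operator with the highest estimated reward
```

A real hardware fuzzer would replace the Bernoulli reward with a coverage delta measured on the design under test.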
10. DEMASQ: Unmasking the ChatGPT Wordsmith
- Author
Kumari, Kavita, Pegoraro, Alessandro, Fereidooni, Hossein, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security, Computer Science - Machine Learning
- Abstract
The potential misuse of ChatGPT and other Large Language Models (LLMs) has raised concerns regarding the dissemination of false information, plagiarism, academic dishonesty, and fraudulent activities. Consequently, distinguishing between AI-generated and human-generated content has emerged as an intriguing research topic. However, current text detection methods lack precision and are often restricted to specific tasks or domains, making them inadequate for identifying content generated by ChatGPT. In this paper, we propose an effective ChatGPT detector named DEMASQ, which accurately identifies ChatGPT-generated content. Our method addresses two critical factors: (i) the distinct biases in text composition observed in human- and machine-generated content and (ii) the alterations made by humans to evade previous detection methods. DEMASQ is an energy-based detection model that incorporates novel aspects, such as (i) optimization inspired by the Doppler effect to capture the interdependence between input text embeddings and output labels, and (ii) the use of explainable AI techniques to generate diverse perturbations. To evaluate our detector, we create a benchmark dataset comprising a mixture of prompts from both ChatGPT and humans, encompassing domains such as medical, open Q&A, finance, wiki, and Reddit. Our evaluation demonstrates that DEMASQ achieves high accuracy in identifying content generated by ChatGPT., Comment: To appear in the Network and Distributed System Security (NDSS) Symposium 2024. 15 pages, 3 figures, 6 tables, 3 algorithms, 6 equations
- Published
- 2023
11. Unleashing IoT Security: Assessing the Effectiveness of Best Practices in Protecting Against Threats
- Author
Pütz, Philipp, Mitev, Richard, Miettinen, Markus, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security
- Abstract
The Internet of Things (IoT) market is rapidly growing and is expected to double from 2020 to 2025. The increasing use of IoT devices, particularly in smart homes, raises crucial concerns about user privacy and security, as these devices often handle sensitive and critical information. Inadequate security designs and implementations by IoT vendors can lead to significant vulnerabilities. To address these vulnerabilities, institutions and organizations have published IoT security best practices (BPs) to guide manufacturers in ensuring the security of their products. However, there is currently no standardized approach for evaluating the effectiveness of individual BP recommendations. This leads to manufacturers investing effort in implementing less effective BPs while potentially neglecting measures with greater impact. In this paper, we propose a methodology for evaluating the security impact of IoT BPs and ranking them based on their effectiveness in protecting against security threats. Our approach involves translating identified BPs into concrete test cases that can be applied to real-world IoT devices to assess their effectiveness in mitigating vulnerabilities. We applied this methodology to evaluate the security of nine commodity IoT products, discovering 18 vulnerabilities. By empirically assessing the actual impact of BPs on device security, IoT designers and implementers can prioritize their security investments more effectively, improving security outcomes and optimizing limited security budgets.
- Published
- 2023
12. Accountability of Things: Large-Scale Tamper-Evident Logging for Smart Devices
- Author
Koisser, David and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security
- Abstract
Our modern world relies on a growing number of interconnected and interacting devices, leading to a plethora of logs establishing audit trails for all kinds of events. At the same time, logs are becoming increasingly important for forensic investigations, and thus an adversary will aim to alter logs to avoid culpability, e.g., by compromising the devices that generate and store them. It is therefore essential to ensure that no one can tamper with any logs without being detected. However, existing approaches to establishing tamper evidence of logs do not scale and cannot protect the increasingly large number of devices found today, as they impose large storage or network overheads. Additionally, most schemes do not provide an efficient mechanism to prove that individual events have been logged, which is needed to establish accountability when different devices interact. This paper introduces a novel scheme for practical large-scale tamper-evident logging with the help of a trusted third party. To achieve this, we present a new binary hash tree construction designed around timestamps to achieve constant storage overhead at a configured temporal resolution. Additionally, our design enables the efficient construction of shareable proofs that an event was indeed logged. Our evaluation shows that, using practical parameters, our scheme can localize any tampering of logs with sub-second resolution, at a constant overhead of ~8 KB per hour per device.
- Published
- 2023
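The building blocks the abstract names, a binary hash tree over logged events and shareable inclusion proofs, can be sketched with a plain Merkle tree. This is my illustration of the ingredient technique, not the paper's timestamp-bucketed construction; leaf contents and tree shape are toy assumptions.

```python
# Hedged sketch: Merkle tree over event hashes with a shareable inclusion
# proof, the classic mechanism behind tamper-evident logging.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels, leaves first, root level last."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:                 # duplicate last node on odd levels
            lvl = lvl + [lvl[-1]]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def prove(levels, index):
    """Collect (sibling hash, is-right-child) pairs from leaf to root."""
    proof = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        proof.append((lvl[index ^ 1], index % 2))
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = leaf
    for sib, is_right in proof:
        node = h(sib + node) if is_right else h(node + sib)
    return node == root

logs = [h(f"event-{i}".encode()) for i in range(5)]
levels = build_tree(logs)
root = levels[-1][0]
print(verify(logs[3], prove(levels, 3), root))  # True
```

Publishing only the root commits a device to all logged events; any later alteration of a leaf makes every proof against that root fail.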
13. FLAIRS: FPGA-Accelerated Inference-Resistant & Secure Federated Learning
- Author
Li, Huimin, Rieger, Phillip, Zeitouni, Shaza, Picek, Stjepan, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security
- Abstract
Federated Learning (FL) has become very popular since it enables clients to train a joint model collaboratively without sharing their private data. However, FL has been shown to be susceptible to backdoor and inference attacks. In the former, the adversary injects manipulated updates into the aggregation process; in the latter, the adversary leverages clients' local models to deduce their private data. Contemporary solutions to address the security concerns of FL are either impractical for real-world deployment due to high performance overheads or are tailored towards addressing specific threats, for instance, privacy-preserving aggregation or backdoor defenses. Given these limitations, our research delves into the advantages of harnessing the FPGA-based computing paradigm to overcome the performance bottlenecks of software-only solutions while mitigating backdoor and inference attacks. We utilize FPGA-based enclaves to address inference attacks during the aggregation process of FL. We adopt an advanced backdoor-aware aggregation algorithm on the FPGA to counter backdoor attacks. We implemented and evaluated our method on Xilinx VMK-180, yielding a significant speed-up of around 300 times on the IoT-Traffic dataset and more than 506 times on the CIFAR-10 dataset.
- Published
- 2023
14. Don't Shoot the Messenger: Localization Prevention of Satellite Internet Users
- Author
Koisser, David, Mitev, Richard, Chilese, Marco, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security
- Abstract
Satellite Internet plays an increasingly important role in geopolitical conflicts. This was affirmed when the conflict in Ukraine escalated at the beginning of 2022: the large-scale deployment of the Starlink satellite Internet service demonstrated the strategic importance of a free flow of information. Aside from military use, many citizens publish sensitive information on social media platforms to influence the public narrative. However, the use of satellite communication has proven to be dangerous, as the signals can be monitored by other satellites and used to triangulate the source on the ground. Unfortunately, the targeted killings of journalists have shown this threat to be effective. While the increasing deployment of satellite Internet systems gives citizens an unprecedented mouthpiece in conflicts, protecting them against localization is an unaddressed problem. To address this threat, we present AnonSat, a novel scheme to protect satellite Internet users from triangulation. AnonSat works with cheap off-the-shelf devices, leveraging long-range wireless communication to span a local network among satellite base stations. This allows rerouting users' communication through other satellite base stations, some distance away from each user, thus preventing their localization. AnonSat is designed for easy deployment and usability, which we demonstrate with a prototype implementation. Our large-scale network simulations using real-world data sets show the effectiveness of AnonSat in various practical settings.
- Published
- 2023
15. PSOFuzz: Fuzzing Processors with Particle Swarm Optimization
- Author
Chen, Chen, Gohil, Vasudev, Kande, Rahul, Sadeghi, Ahmad-Reza, and Rajendran, Jeyavijayan
- Subjects
Computer Science - Cryptography and Security
- Abstract
Hardware security vulnerabilities in computing systems compromise the security defenses of not only the hardware but also the software running on it. Recent research has shown that hardware fuzzing is a promising technique to efficiently detect such vulnerabilities in large-scale designs such as modern processors. However, current fuzzing techniques do not adjust their strategies dynamically toward faster and higher design space exploration, resulting in slow vulnerability detection, evident through their low design coverage. To address this problem, we propose PSOFuzz, which uses particle swarm optimization (PSO) to schedule the mutation operators and to generate initial input programs dynamically with the objective of detecting vulnerabilities quickly. Unlike traditional PSO, which finds a single optimal solution, we use a modified PSO that dynamically computes the optimal solution for selecting the mutation operators required to explore new design regions in hardware. We also address the challenge of inefficient initial seed generation by employing PSO-based seed generation. With these optimizations, our final formulation outperforms fuzzers without PSO. Experiments show that PSOFuzz achieves up to 15.25x speedup for vulnerability detection and up to 2.22x speedup for coverage compared to the state-of-the-art simulation-based hardware fuzzer., Comment: To be published in the proceedings of ICCAD, 2023
- Published
- 2023
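The optimizer PSOFuzz adapts can be shown in its textbook form: a swarm of particles iteratively pulled toward their personal and global bests. This is a generic PSO sketch under my own assumptions, not the paper's modified variant; the "coverage" objective, dimensions, and coefficients are invented for illustration.

```python
# Hedged sketch: standard particle swarm optimization minimizing a toy
# objective, standing in for scoring mutation-operator weightings.
import random

def pso(objective, dim=2, n_particles=10, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = min(pbest, key=objective)[:]        # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pos[i]) < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy objective: best "coverage" at (1, 2); the swarm should land close to it.
best = pso(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2)
print(best)
```

PSOFuzz's twist, per the abstract, is recomputing the optimum dynamically as the coverage landscape shifts, rather than converging on a single static solution as above.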
16. HyPFuzz: Formal-Assisted Processor Fuzzing
- Author
Chen, Chen, Kande, Rahul, Nguyen, Nathan, Andersen, Flemming, Tyagi, Aakash, Sadeghi, Ahmad-Reza, and Rajendran, Jeyavijayan
- Subjects
Computer Science - Cryptography and Security
- Abstract
Recent research has shown that hardware fuzzers can effectively detect security vulnerabilities in modern processors. However, existing hardware fuzzers do not fuzz hard-to-reach design spaces well. Consequently, these fuzzers cannot effectively fuzz security-critical control- and data-flow logic in the processors, and hence miss security vulnerabilities. To tackle this challenge, we present HyPFuzz, a hybrid fuzzer that leverages formal verification tools to help fuzz the hard-to-reach parts of the processors. To increase the effectiveness of HyPFuzz, we perform optimizations in time and space. First, we develop a scheduling strategy to prevent under- or over-utilization of the capabilities of formal tools and fuzzers. Second, we develop heuristic strategies to select points in the design space for the formal tool to target. We evaluate HyPFuzz on five widely-used open-source processors. HyPFuzz detected all the vulnerabilities detected by the most recent processor fuzzer and found three new vulnerabilities that were missed by previous extensive fuzzing and formal verification. This led to two new Common Vulnerabilities and Exposures (CVE) entries. HyPFuzz also achieves 11.68x faster coverage than the most recent processor fuzzer., Comment: To be published in the proceedings of the 32nd USENIX Security Symposium, 2023
- Published
- 2023
17. To ChatGPT, or not to ChatGPT: That is the question!
- Author
Pegoraro, Alessandro, Kumari, Kavita, Fereidooni, Hossein, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computation and Language
- Abstract
ChatGPT has become a global sensation. As ChatGPT and other Large Language Models (LLMs) emerge, concerns increase about their misuse in various ways, such as disseminating fake news, plagiarism, manipulating public opinion, cheating, and fraud. Hence, distinguishing AI-generated content from human-generated content becomes increasingly essential. Researchers have proposed various detection methodologies, ranging from basic binary classifiers to more complex deep-learning models. Some detection techniques rely on statistical characteristics or syntactic patterns, while others incorporate semantic or contextual information to improve accuracy. The primary objective of this study is to provide a comprehensive and contemporary assessment of the most recent techniques for ChatGPT detection. Additionally, we evaluated other AI-generated text detection tools that do not specifically claim to detect ChatGPT-generated content, to assess their performance on this task. For our evaluation, we curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains, as well as user-generated responses from popular social networking platforms. The dataset serves as a reference for assessing the performance of various techniques in detecting ChatGPT-generated content. Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
- Published
- 2023
18. ARGUS: Context-Based Detection of Stealthy IoT Infiltration Attacks
- Author
Rieger, Phillip, Chilese, Marco, Mohamed, Reham, Miettinen, Markus, Fereidooni, Hossein, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security, Computer Science - Machine Learning
- Abstract
IoT application domains, device diversity, and connectivity are rapidly growing. IoT devices control various functions in smart homes and buildings, smart cities, and smart factories, making these devices an attractive target for attackers. On the other hand, the large variability of different application scenarios and the inherent heterogeneity of devices make it very challenging to reliably detect abnormal IoT device behaviors and distinguish them from benign behaviors. Existing approaches for detecting attacks are mostly limited to attacks directly compromising individual IoT devices, or require predefined detection policies. They cannot detect attacks that utilize the control plane of the IoT system to trigger actions in an unintended/malicious context, e.g., opening a smart lock while the smart home residents are absent. In this paper, we tackle this problem and propose ARGUS, the first self-learning intrusion detection system for detecting contextual attacks on IoT environments, in which the attacker maliciously invokes IoT device actions to reach its goals. ARGUS monitors the contextual setting based on the state and actions of IoT devices in the environment. An unsupervised Deep Neural Network (DNN) is used for modeling the typical contextual device behavior and detecting actions taking place in abnormal contextual settings. This unsupervised approach ensures that ARGUS is not restricted to detecting previously known attacks but is also able to detect new attacks. We evaluated ARGUS on heterogeneous real-world smart-home settings and achieved at least an F1-Score of 99.64% for each setup, with a false positive rate (FPR) of at most 0.03%., Comment: To appear in the 32nd USENIX Security Symposium, August 2023, Anaheim, CA, USA
- Published
- 2023
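The contextual-anomaly idea above can be sketched in a few lines. This is a toy frequency table standing in for ARGUS's unsupervised DNN; the device actions and context names are made up for illustration:

```python
from collections import Counter

class ContextModel:
    """Toy stand-in for ARGUS's unsupervised model: it learns which
    (context, action) pairs occur during benign operation and flags an
    action as anomalous when it never occurred in the current context."""
    def __init__(self):
        self.seen = Counter()

    def train(self, events):
        # events: iterable of (context, action) pairs from benign traces
        for context, action in events:
            self.seen[(context, action)] += 1

    def is_anomalous(self, context, action):
        # An action is suspicious if it never occurred in this context
        return self.seen[(context, action)] == 0

benign_trace = [
    ("residents_home", "unlock_door"),
    ("residents_home", "lights_on"),
    ("residents_away", "lights_off"),
]
model = ContextModel()
model.train(benign_trace)

print(model.is_anomalous("residents_home", "unlock_door"))  # False: seen in benign trace
print(model.is_anomalous("residents_away", "unlock_door"))  # True: contextual attack
```

A real deployment has to model continuous sensor context and tolerate rare-but-benign combinations, which is what the DNN-based approach buys over this naive table.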
19. Oops..! I Glitched It Again! How to Multi-Glitch the Glitching-Protections on ARM TrustZone-M
- Author
-
Saß, Marvin, Mitev, Richard, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security - Abstract
Voltage Fault Injection (VFI), also known as power glitching, has proven to be a severe threat to real-world systems. In VFI attacks, the adversary disturbs the power supply of the target device, forcing it into illegitimate behavior. Various countermeasures have been proposed to address different types of fault injection attacks at different abstraction layers, requiring either modification of the underlying hardware or of the software/firmware at the machine instruction level. Moreover, only recently have individual chip manufacturers started to respond to this threat by integrating countermeasures into their products. Generally, these countermeasures aim at protecting against single fault injection (SFI) attacks, since Multiple Fault Injection (MFI) is believed to be challenging and sometimes even impractical. In this paper, we present μ-Glitch, the first Voltage Fault Injection (VFI) platform that is capable of injecting multiple, coordinated voltage faults into a target device while requiring only a single trigger signal. We provide a novel flow for Multiple Voltage Fault Injection (MVFI) attacks that significantly reduces the search complexity for fault parameters, as the search space increases exponentially with each additional fault injection. We evaluate and showcase the effectiveness and practicality of our attack platform on four real-world chips featuring TrustZone-M: the first two have interdependent backchecking mechanisms, while the second two additionally integrate countermeasures against fault injection. Our evaluation revealed that μ-Glitch can successfully inject four consecutive faults within an average time of one day. Finally, we discuss potential countermeasures to mitigate VFI attacks and additionally propose two novel attack scenarios for MVFI.
- Published
- 2023
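The parameter-search reduction mentioned in the abstract can be illustrated as follows. This is our reading of the idea, with a made-up success predicate standing in for actually glitching a chip:

```python
def sequential_fault_search(param_space, n_faults, succeeds):
    """Illustration of the search-reduction idea: instead of searching the
    joint space of all fault parameters (|space| ** n_faults candidates),
    fix the parameters of each successfully placed fault and only search
    the next fault's parameters (at most n_faults * |space| candidates)."""
    fixed = []
    for _ in range(n_faults):
        for params in param_space:
            if succeeds(fixed + [params]):
                fixed.append(params)
                break
        else:
            return None  # no working parameter value found for this fault
    return fixed

# Toy target: the "device" only misbehaves if glitches land at offsets 3 and 7.
target_offsets = [3, 7]
found = sequential_fault_search(
    range(10), 2, lambda candidate: candidate == target_offsets[:len(candidate)]
)
print(found)  # [3, 7]
```

The sketch assumes each fault's success is observable independently once the earlier faults are fixed, which is the property that makes the sequential search linear rather than exponential.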
20. AuthentiSense: A Scalable Behavioral Biometrics Authentication Scheme using Few-Shot Learning for Mobile Platforms
- Author
-
Fereidooni, Hossein, König, Jan, Rieger, Phillip, Chilese, Marco, Gökbakan, Bora, Finke, Moritz, Dmitrienko, Alexandra, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security - Abstract
Mobile applications are widely used for online services sharing a large amount of personal data online. One-time authentication techniques such as passwords and physiological biometrics (e.g., fingerprint, face, and iris) have their own advantages but also disadvantages, since they can be stolen or emulated and do not prevent access to the underlying device once it is unlocked. To address these challenges, complementary authentication systems based on behavioural biometrics have emerged. The goal is to continuously profile users based on their interaction with the mobile device. However, existing behavioural authentication schemes either (i) are not user-agnostic, meaning that they cannot dynamically handle changes in the user base without model re-training, or (ii) do not scale well to authenticate millions of users. In this paper, we present AuthentiSense, a user-agnostic, scalable, and efficient behavioural biometrics authentication system that enables continuous authentication and utilizes only motion patterns (i.e., accelerometer, gyroscope, and magnetometer data) while users interact with mobile apps. Our approach requires neither manually engineered features nor a significant amount of data for model training. We leverage a few-shot learning technique, called Siamese network, to authenticate users at a large scale. We perform a systematic measurement study and report the impact of parameters such as the interaction time needed for authentication and n-shot verification (comparison with enrollment samples) at the recognition stage. Remarkably, AuthentiSense achieves a high accuracy of up to 97% in terms of F1-score even when evaluated in a few-shot fashion that requires only a few behaviour samples per user (3 shots). Our approach accurately authenticates users after only 1 second of user interaction. For AuthentiSense, we report a FAR and FRR of 0.023 and 0.057, respectively., Comment: 16 pages, 7 figures
- Published
- 2023
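The n-shot verification step can be sketched as follows. The Siamese embedding network itself is omitted; embeddings are assumed precomputed, and the threshold is an arbitrary illustrative value:

```python
import math

def verify(session_emb, enrollment_embs, threshold=1.0):
    """Toy n-shot verification in the style of a Siamese scheme: accept the
    user if the mean distance between the session embedding and the n
    enrollment embeddings stays below a threshold."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    mean = sum(dist(session_emb, e) for e in enrollment_embs) / len(enrollment_embs)
    return mean < threshold

enrolled = [[0.1, 0.9], [0.2, 1.0], [0.0, 0.8]]  # 3-shot enrollment set
print(verify([0.15, 0.95], enrolled))  # True: close to the enrolled behaviour
print(verify([2.0, -1.0], enrolled))   # False: imposter far from enrollment
```

In the actual scheme the embeddings come from a network trained so that samples of the same user land close together, which is what makes a simple distance threshold meaningful.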
21. BayBFed: Bayesian Backdoor Defense for Federated Learning
- Author
-
Kumari, Kavita, Rieger, Phillip, Fereidooni, Hossein, Jadliwala, Murtuza, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence - Abstract
Federated learning (FL) allows participants to jointly train a machine learning model without sharing their private data with others. However, FL is vulnerable to poisoning attacks such as backdoor attacks. Consequently, a variety of defenses have recently been proposed, which have primarily utilized intermediary states of the global model (i.e., logits) or the distance of the local models (i.e., L2-norm) from the global model to detect malicious backdoors. However, as these approaches directly operate on client updates, their effectiveness depends on factors such as clients' data distribution or the adversary's attack strategies. In this paper, we introduce a novel and more generic backdoor defense framework, called BayBFed, which proposes to utilize probability distributions over client updates to detect malicious updates in FL: it computes a probabilistic measure over the clients' updates to keep track of any adjustments made in the updates, and uses a novel detection algorithm that can leverage this probabilistic measure to efficiently detect and filter out malicious updates. Thus, it overcomes the shortcomings of previous approaches that arise due to the direct usage of client updates, as our probabilistic measure will include all aspects of the local client training strategies. BayBFed utilizes two Bayesian Non-Parametric extensions: (i) a Hierarchical Beta-Bernoulli process to draw a probabilistic measure given the clients' updates, and (ii) an adaptation of the Chinese Restaurant Process (CRP), referred to by us as CRP-Jensen, which leverages this probabilistic measure to detect and filter out malicious updates. We extensively evaluate our defense approach on five benchmark datasets: CIFAR10, Reddit, IoT intrusion detection, MNIST, and FMNIST, and show that it can effectively detect and eliminate malicious updates in FL without deteriorating the benign performance of the global model.
- Published
- 2023
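At a very high level, the filtering structure can be illustrated with a toy stand-in. The per-coordinate median below replaces the paper's Hierarchical Beta-Bernoulli/CRP machinery, so this only conveys the "score updates against a probabilistic summary, then filter" shape, not the actual measure:

```python
import statistics

def filter_updates(updates, threshold=-1.0):
    """Very loose illustration of the high-level idea: score each client
    update against a robust summary of all updates and drop outliers.
    Here the summary is the per-coordinate median and the score is the
    negative squared deviation from it."""
    dim = len(updates[0])
    center = [statistics.median(u[i] for u in updates) for i in range(dim)]
    def score(u):
        return -sum((u[i] - center[i]) ** 2 for i in range(dim))
    return [u for u in updates if score(u) > threshold]

benign = [[0.1, 0.2], [0.12, 0.19], [0.09, 0.21]]
poisoned = [[5.0, -4.0]]
kept = filter_updates(benign + poisoned)
print(kept)  # the poisoned update is dropped, the benign ones survive
```

Using a median rather than a mean matters even in this toy: a single large poisoned update would drag the mean toward itself and cause benign updates to be filtered instead.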
22. Follow Us and Become Famous! Insights and Guidelines From Instagram Engagement Mechanisms
- Author
-
Tricomi, Pier Paolo, Chilese, Marco, Conti, Mauro, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Social and Information Networks ,Computer Science - Computers and Society ,Computer Science - Information Retrieval ,Computer Science - Machine Learning - Abstract
With 1.3 billion users, Instagram (IG) has also become a business tool. IG influencer marketing, expected to generate $33.25 billion in 2022, encourages companies and influencers to create trending content. Various methods have been proposed for predicting a post's popularity, i.e., how much engagement (e.g., Likes) it will generate. However, these methods are limited: First, they focus on forecasting likes, ignoring the number of comments, which became crucial in 2021. Second, studies often use biased or limited data. Third, researchers have focused on Deep Learning models to increase predictive performance, but these models are difficult to interpret. As a result, end-users can only estimate engagement after a post is created, which is inefficient and expensive. A better approach is to generate a post based on what people and IG like, e.g., by following guidelines. In this work, we uncover part of the underlying mechanisms driving IG engagement. To achieve this goal, we rely on statistical analysis and interpretable models rather than Deep Learning (black-box) approaches. We conduct extensive experiments using a worldwide dataset of 10 million posts created by 34K global influencers in nine different categories. With our simple yet powerful algorithms, we can predict engagement with an F1-score of up to 94%, making us comparable and even superior to Deep Learning-based methods. Furthermore, we propose a novel unsupervised algorithm for finding highly engaging topics on IG. Thanks to our interpretable approaches, we conclude by outlining guidelines for creating successful posts.
- Published
- 2023
23. DARWIN: Survival of the Fittest Fuzzing Mutators
- Author
-
Jauernig, Patrick, Jakobovic, Domagoj, Picek, Stjepan, Stapf, Emmanuel, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Software Engineering ,Computer Science - Cryptography and Security - Abstract
Fuzzing is an automated software testing technique broadly adopted by the industry. A popular variant is mutation-based fuzzing, which discovers a large number of bugs in practice. While the research community has studied mutation-based fuzzing for years now, the algorithms' interactions within the fuzzer are highly complex and can, together with the randomness in every instance of a fuzzer, lead to unpredictable effects. Most efforts to improve this fragile interaction focused on optimizing seed scheduling. However, real-world results like Google's FuzzBench highlight that these approaches do not consistently show improvements in practice. Another approach to improve the fuzzing process algorithmically is optimizing mutation scheduling. Unfortunately, existing mutation scheduling approaches have also failed to convince, either showing no real-world improvements or introducing too many user-controlled parameters whose configuration requires expert knowledge about the target program. This leaves the challenging problem of cleverly processing test cases and achieving a measurable improvement unsolved. We present DARWIN, a novel mutation scheduler and the first to show fuzzing improvements in a realistic scenario without the need to introduce additional user-configurable parameters, opening this approach to the broad fuzzing community. DARWIN uses an Evolution Strategy to systematically optimize and adapt the probability distribution of the mutation operators during fuzzing. We implemented a prototype based on the popular general-purpose fuzzer AFL. DARWIN significantly outperforms the state-of-the-art mutation scheduler and the AFL baseline in our own coverage experiment, in FuzzBench, and by finding 15 out of 21 bugs the fastest in the MAGMA benchmark. Finally, DARWIN found 20 unique bugs (including one novel bug), 66% more than AFL, in widely-used real-world applications.
- Published
- 2022
- Full Text
- View/download PDF
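The core loop, an Evolution Strategy adapting the mutation-operator distribution, can be sketched like this: a toy (1+1)-ES with a synthetic fitness function standing in for the coverage feedback a fuzzer would provide:

```python
import random

def evolve_mutation_weights(n_ops, fitness, generations=60, sigma=0.05):
    """Toy (1+1)-Evolution-Strategy in the spirit of DARWIN: perturb the
    probability distribution over mutation operators and keep the child
    whenever it scores at least as well as the parent. In a fuzzer the
    fitness would be coverage gained while sampling operators from the
    distribution; here the caller supplies it."""
    parent = [1.0 / n_ops] * n_ops
    for _ in range(generations):
        child = [max(1e-6, w + random.gauss(0, sigma)) for w in parent]
        total = sum(child)
        child = [w / total for w in child]  # renormalize to a distribution
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

random.seed(0)
ops = ["bitflip", "byteflip", "havoc"]
# Toy fitness: pretend 'havoc' (index 2) is the most productive operator.
weights = evolve_mutation_weights(len(ops), fitness=lambda w: w[2])
print(dict(zip(ops, (round(w, 2) for w in weights))))
```

Because only improving children are accepted, the weight of the productive operator can never fall below its starting value; the real scheduler's gain is doing this online, against noisy coverage signals, without user-tuned knobs.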
24. CrowdGuard: Federated Backdoor Detection in Federated Learning
- Author
-
Rieger, Phillip, Krauß, Torsten, Miettinen, Markus, Dmitrienko, Alexandra, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security ,Computer Science - Machine Learning - Abstract
Federated Learning (FL) is a promising approach enabling multiple clients to train Deep Neural Networks (DNNs) collaboratively without sharing their local training data. However, FL is susceptible to backdoor (or targeted poisoning) attacks. These attacks are initiated by malicious clients who seek to compromise the learning process by introducing specific behaviors into the learned model that can be triggered by carefully crafted inputs. Existing FL safeguards have various limitations: They are restricted to specific data distributions or reduce the global model accuracy due to excluding benign models or adding noise, are vulnerable to adaptive defense-aware adversaries, or require the server to access local models, allowing data inference attacks. This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in FL and overcomes the deficiencies of existing techniques. It leverages clients' feedback on individual models, analyzes the behavior of neurons in hidden layers, and eliminates poisoned models through an iterative pruning scheme. CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback. The evaluation results demonstrate that CrowdGuard achieves a 100% True-Positive-Rate and True-Negative-Rate across various scenarios, including IID and non-IID data distributions. Additionally, CrowdGuard withstands adaptive adversaries while preserving the original performance of protected models. To ensure confidentiality, CrowdGuard uses a secure and privacy-preserving architecture leveraging Trusted Execution Environments (TEEs) on both client and server sides., Comment: To appear in the Network and Distributed System Security (NDSS) Symposium 2024. Phillip Rieger and Torsten Krauß contributed equally to this contribution. 19 pages, 8 figures, 5 tables, 4 algorithms, 5 equations
- Published
- 2022
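The server-side decision on client feedback can be reduced to a toy majority vote. The actual system validates and aggregates feedback with a stacked clustering scheme inside TEEs, which is omitted here, and the model names are illustrative:

```python
from collections import Counter

def prune_models(feedback):
    """Toy majority-vote sketch of the server-side decision: each client
    reports which submitted models it considers poisoned, and the server
    drops every model flagged by more than half of the clients."""
    votes = Counter()
    for client_flags in feedback:
        votes.update(client_flags)
    n_clients = len(feedback)
    return {model for model, v in votes.items() if v > n_clients / 2}

# Three of four clients flag model_3; one (possibly rogue) flags model_1.
feedback = [{"model_3"}, {"model_3"}, {"model_3", "model_1"}, set()]
print(prune_models(feedback))  # {'model_3'}
```

The single dissenting vote against model_1 is outvoted, which is the property the stacked clustering hardens against coordinated rogue feedback.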
25. POSE: Practical Off-chain Smart Contract Execution
- Author
-
Frassetto, Tommaso, Jauernig, Patrick, Koisser, David, Kretzler, David, Schlosser, Benjamin, Faust, Sebastian, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security - Abstract
Smart contracts enable users to execute payments depending on complex program logic. Ethereum is the most notable example of a blockchain that supports smart contracts, which are leveraged for countless applications including games, auctions, and financial products. Unfortunately, the traditional method of running contract code on-chain is very expensive; on the Ethereum platform, for instance, fees have dramatically increased, rendering the system unsuitable for complex applications. A prominent solution to address this problem is to execute code off-chain and only use the blockchain as a trust anchor. While there has been significant progress in developing off-chain systems over the past years, current off-chain solutions suffer from various drawbacks including costly blockchain interactions, lack of data privacy, huge capital costs from locked collateral, or support for only a restricted set of applications. In this paper, we present POSE -- a practical off-chain protocol for smart contracts that addresses the aforementioned shortcomings of existing solutions. POSE leverages a pool of Trusted Execution Environments (TEEs) to execute the computation efficiently and to swiftly recover from accidental or malicious failures. We show that POSE provides strong security guarantees even if a large subset of parties is corrupted. We evaluate our proof-of-concept implementation with respect to its efficiency and effectiveness.
- Published
- 2022
- Full Text
- View/download PDF
26. Trusted Container Extensions for Container-based Confidential Computing
- Author
-
Brasser, Ferdinand, Jauernig, Patrick, Pustelnik, Frederik, Sadeghi, Ahmad-Reza, and Stapf, Emmanuel
- Subjects
Computer Science - Cryptography and Security - Abstract
Cloud computing has emerged as a cornerstone of today's computing landscape. More and more customers who outsource their infrastructure benefit from the manageability, scalability, and cost savings that come with cloud computing. These benefits are amplified by the trend towards microservices. Instead of renting and maintaining full VMs, customers increasingly leverage container technologies, which come with a much more lightweight resource footprint while also removing the need to emulate complete systems and their devices. However, privacy concerns hamper many customers from moving to the cloud and leveraging its benefits. Furthermore, regulatory requirements prevent the adoption of cloud computing in many industries, such as health care or finance. Standard software isolation mechanisms have proven to be insufficient if the host system is not fully trusted, e.g., when the cloud infrastructure is compromised by malicious third-party actors. Consequently, confidential computing is gaining increasing relevance in the cloud computing field. We present Trusted Container Extensions (TCX), a novel container security architecture, which combines the manageability and agility of standard containers with the strong protection guarantees of hardware-enforced Trusted Execution Environments (TEEs) to enable confidential computing for container workloads. TCX provides significant performance advantages compared to existing approaches while protecting container workloads and the data processed by them. Our implementation, based on AMD Secure Encrypted Virtualization (SEV), ensures integrity and confidentiality of data and services during deployment, and allows secure interaction between protected containers as well as with external entities. Our evaluation shows that our implementation induces a low performance overhead of 5.77% on the standard SPEC2017 benchmark suite.
- Published
- 2022
27. V'CER: Efficient Certificate Validation in Constrained Networks
- Author
-
Koisser, David, Jauernig, Patrick, Tsudik, Gene, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security - Abstract
We address the challenging problem of efficient trust establishment in constrained networks, i.e., networks that are composed of a large and dynamic set of (possibly heterogeneous) devices with limited bandwidth, connectivity, storage, and computational capabilities. Constrained networks are an integral part of many emerging application domains, from IoT meshes to satellite networks. A particularly difficult challenge is how to enforce timely revocation of compromised or faulty devices. Unfortunately, current solutions and techniques cannot cope with the idiosyncrasies of constrained networks, since they mandate frequent real-time communication with centralized entities, storage and maintenance of large amounts of revocation information, and incur considerable bandwidth overhead. To address the shortcomings of existing solutions, we design V'CER, a secure and efficient scheme for certificate validation that augments and benefits a PKI for constrained networks. V'CER utilizes unique features of Sparse Merkle Trees (SMTs) to perform lightweight revocation checks, while enabling collaborative operations among devices to keep them up-to-date when connectivity to external authorities is limited. V'CER can complement any PKI scheme to increase its flexibility and applicability, while ensuring fast dissemination of validation information independent of the network routing or topology. V'CER requires under 3 KB of storage per node to cover 10^6 certificates. We developed and deployed a prototype of V'CER on an in-orbit satellite, and our large-scale simulations demonstrate that V'CER decreases the number of requests for updates from external authorities by over 93% when nodes are intermittently connected., Comment: 18 pages, 7 figures, to be published at USENIX Security 2022
- Published
- 2022
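The sparse-Merkle-tree mechanism behind the lightweight revocation checks can be sketched as follows. This is a minimal toy (small tree height, integer keys, made-up leaf encoding), not the paper's construction:

```python
import hashlib

DEPTH = 16  # toy tree height; a deployment would index leaves by a full hash

def h(data):
    return hashlib.sha256(data).digest()

# Hash of an all-empty subtree at each level, so empty parts of the sparse
# tree are never materialized.
EMPTY = [h(b"")]
for _ in range(DEPTH):
    EMPTY.append(h(EMPTY[-1] + EMPTY[-1]))

class SparseMerkleTree:
    """Minimal sparse-Merkle-tree sketch: the issuer publishes the root; a
    constrained node needs only a DEPTH-long sibling path to prove a
    certificate revoked (leaf = h(b'revoked')) or not revoked (leaf = the
    empty-leaf hash), instead of storing the full revocation list."""
    def __init__(self):
        self.leaves = {}

    def revoke(self, key):
        self.leaves[key] = h(b"revoked")

    def _node(self, level, index):
        lo, hi = index << level, (index + 1) << level
        if not any(lo <= k < hi for k in self.leaves):
            return EMPTY[level]  # untouched subtree: use the cached hash
        if level == 0:
            return self.leaves[index]
        return h(self._node(level - 1, 2 * index) +
                 self._node(level - 1, 2 * index + 1))

    def root(self):
        return self._node(DEPTH, 0)

    def proof(self, key):
        path, idx = [], key
        for level in range(DEPTH):
            path.append(self._node(level, idx ^ 1))  # sibling at this level
            idx //= 2
        return path

def check(root, key, leaf, path):
    node, idx = leaf, key
    for sibling in path:
        node = h(node + sibling) if idx % 2 == 0 else h(sibling + node)
        idx //= 2
    return node == root

tree = SparseMerkleTree()
tree.revoke(5)
r = tree.root()
print(check(r, 5, h(b"revoked"), tree.proof(5)))  # True: cert 5 is revoked
print(check(r, 9, EMPTY[0], tree.proof(9)))       # True: cert 9 is not revoked
```

The point for constrained networks is that the verifier stores only the root and the proof is DEPTH hashes, independent of how many certificates the tree covers.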
28. Building Your Own Trusted Execution Environments Using FPGA
- Author
-
Armanuzzaman, Md, Sadeghi, Ahmad-Reza, and Zhao, Ziming
- Subjects
Computer Science - Cryptography and Security ,C.0 ,K.6.5 - Abstract
In recent years, we have witnessed unprecedented growth in using hardware-assisted Trusted Execution Environments (TEEs) or enclaves to protect sensitive code and data on commodity devices, thanks to new hardware security features such as Intel SGX and Arm TrustZone. Even though proprietary TEEs bring many benefits, they have been criticized for lack of transparency, vulnerabilities, and various restrictions. For example, existing TEEs only provide a static and fixed hardware Trusted Computing Base (TCB), which cannot be customized for different applications. Existing TEEs time-share a processor core with the Rich Execution Environment (REE), making execution less efficient and vulnerable to cache side-channel attacks. Moreover, TrustZone lacks hardware support for multiple TEEs, remote attestation, and memory encryption. In this paper, we present BYOTee (Build Your Own Trusted Execution Environments), an easy-to-use infrastructure for building multiple equally secure enclaves by utilizing commodity Field Programmable Gate Array (FPGA) devices. BYOTee creates enclaves with customized hardware TCBs, which include softcore CPUs, block RAMs, and peripheral connections, in FPGA on demand. Additionally, BYOTee provides mechanisms to attest the integrity of the customized enclaves' hardware and software stacks, including the bitstream, firmware, and Security-Sensitive Applications (SSAs) along with their inputs and outputs, to remote verifiers. We implement a BYOTee system for the Xilinx System-on-Chip (SoC) FPGA. The evaluations on the low-end Zynq-7000 system for four SSAs and 12 benchmark applications demonstrate the usage, security, effectiveness, and performance of the BYOTee framework.
- Published
- 2022
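The attestation of the customized enclave stack can be illustrated with a toy measurement chain. Component names are made up, and a real verifier would compare against signed reference values rather than a second local computation:

```python
import hashlib

def measure(components):
    """Toy measured-boot chain in the style of attesting an enclave's full
    stack: extend a running digest over bitstream, firmware, and SSA
    binary, so the final value commits to every component in order."""
    digest = b"\x00" * 32
    for blob in components:
        digest = hashlib.sha256(digest + hashlib.sha256(blob).digest()).digest()
    return digest.hex()

good = [b"bitstream-v1", b"firmware-v1", b"ssa-app"]
tampered = [b"bitstream-v1", b"firmware-evil", b"ssa-app"]
print(measure(good) == measure(tampered))  # False: tampering changes the digest
```

Chaining (rather than hashing components independently) also fixes the order: swapping firmware and bitstream would change the measurement even with identical blobs.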
29. Digital Contact Tracing Solutions: Promises, Pitfalls and Challenges
- Author
-
Nguyen, Thien Duc, Miettinen, Markus, Dmitrienko, Alexandra, Sadeghi, Ahmad-Reza, and Visconti, Ivan
- Subjects
Computer Science - Cryptography and Security - Abstract
The COVID-19 pandemic has caused many countries to deploy novel digital contact tracing (DCT) systems to boost the efficiency of manual tracing of infection chains. In this paper, we systematically analyze DCT solutions and categorize them based on their design approaches and architectures. We analyze them with regard to effectiveness, security, privacy, and ethical aspects and compare prominent solutions with regard to these requirements. In particular, we discuss the shortcomings of the Google and Apple Exposure Notification API (GAEN) that is currently widely adopted all over the world. We find that the security and privacy of GAEN have considerable deficiencies, as it can be compromised by severe, large-scale attacks. We also discuss other proposed approaches for contact tracing, including our proposal TRACECORONA, which are based on Diffie-Hellman (DH) key exchange and aim at tackling the shortcomings of existing solutions. Our extensive analysis shows that TRACECORONA fulfills the above security requirements better than deployed state-of-the-art approaches. We have implemented TRACECORONA, and its beta test version has been used by more than 2000 users without any major functional problems, demonstrating that there are no technical reasons to compromise on the requirements of DCT approaches., Comment: A core part of this paper is to be published in IEEE Transactions on Emerging Topics in Computing, DOI: 10.1109/TETC.2022.3216473
- Published
- 2022
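A DH-derived encounter token, the primitive the abstract attributes to TRACECORONA, can be sketched as follows. The group parameters are toy values (a 127-bit prime, far too small for real security) and the protocol details are our illustration, not the actual specification:

```python
import hashlib
import secrets

# Toy DH group for illustration only; a deployment would use a
# standardized large prime or elliptic-curve group.
P = 2 ** 127 - 1  # Mersenne prime M127
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def encounter_token(my_priv, their_pub):
    # Both devices derive the same DH shared secret from the exchanged
    # ephemeral public values and hash it into a per-encounter token.
    shared = pow(their_pub, my_priv, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).hexdigest()

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()
# During an encounter the devices exchange alice_pub/bob_pub (e.g., over
# BLE); both sides then derive the same token without a central server.
token_a = encounter_token(alice_priv, bob_pub)
token_b = encounter_token(bob_priv, alice_pub)
print(token_a == token_b)  # True
```

The appeal over broadcast-identifier designs is that the token is unique to the pair of devices in that encounter, so uploading it reveals the encounter only to the other party.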
30. TheHuzz: Instruction Fuzzing of Processors Using Golden-Reference Models for Finding Software-Exploitable Vulnerabilities
- Author
-
Tyagi, Aakash, Crump, Addison, Sadeghi, Ahmad-Reza, Persyn, Garrett, Rajendran, Jeyavijayan, Jauernig, Patrick, and Kande, Rahul
- Subjects
Computer Science - Cryptography and Security ,Computer Science - Hardware Architecture ,Computer Science - Software Engineering - Abstract
The increasing complexity of modern processors poses many challenges to existing hardware verification tools and methodologies for detecting security-critical bugs. Recent attacks on processors have shown the fatal consequences of uncovering and exploiting hardware vulnerabilities. Fuzzing has emerged as a promising technique for detecting software vulnerabilities. Recently, a few hardware fuzzing techniques have been proposed. However, they suffer from several limitations, including non-applicability to commonly used Hardware Description Languages (HDLs) like Verilog and VHDL, the need for significant human intervention, and the inability to capture many intrinsic hardware behaviors, such as signal transitions and floating wires. In this paper, we present the design and implementation of a novel hardware fuzzer, TheHuzz, that overcomes the aforementioned limitations and significantly improves the state of the art. We analyze the intrinsic behaviors of hardware designs in HDLs and then measure the coverage metrics that model such behaviors. TheHuzz generates assembly-level instructions to increase the desired coverage values, thereby finding many hardware bugs that are exploitable from software. We evaluate TheHuzz on four popular open-source processors and achieve 1.98x and 3.33x the speed compared to the industry-standard random regression approach and the state-of-the-art hardware fuzzer, DifuzzRTL, respectively. Using TheHuzz, we detected 11 bugs in these processors, including 8 new vulnerabilities, and we demonstrate exploits using the detected bugs. We also show that TheHuzz overcomes the limitations of formal verification tools from the semiconductor industry by comparing its findings to those discovered by the Cadence JasperGold tool., Comment: To be published in the proceedings of the 31st USENIX Security Symposium, 2022
- Published
- 2022
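The coverage-feedback loop common to hardware fuzzers of this kind can be sketched generically. The toy `run` function below stands in for RTL simulation plus coverage extraction, and the "instructions" are just integers:

```python
import random

def fuzz(seed_input, run, mutate, iterations=300):
    """Generic coverage-feedback fuzzing loop: mutate an input from the
    corpus, run it, and keep the mutant whenever it reaches coverage
    points no earlier input hit. 'run' returns the set of coverage points
    exercised by an input."""
    corpus = [seed_input]
    covered = set(run(seed_input))
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        new_cov = set(run(candidate))
        if new_cov - covered:  # new behavior observed: keep this input
            covered |= new_cov
            corpus.append(candidate)
    return corpus, covered

random.seed(7)
# Toy 'processor': coverage = which of 8 opcode classes a program exercises.
run = lambda prog: {op % 8 for op in prog}
mutate = lambda prog: [op ^ (1 << random.randrange(5)) for op in prog]
corpus, covered = fuzz([0b00001, 0b00010], run, mutate)
print(f"{len(covered)} of 8 toy coverage points reached")
```

The hardware-specific contribution sits inside `run` and the coverage metrics, e.g., modeling signal transitions and floating wires rather than the opcode classes used here.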
31. DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection
- Author
-
Rieger, Phillip, Nguyen, Thien Duc, Miettinen, Markus, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security ,Computer Science - Machine Learning - Abstract
Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing the data. Recently, several targeted poisoning attacks against FL have been introduced. These attacks inject a backdoor into the resulting model that allows adversary-controlled inputs to be misclassified. Existing countermeasures against backdoor attacks are inefficient and often merely aim to exclude deviating models from the aggregation. However, this approach also removes benign models of clients with deviating data distributions, causing the aggregated model to perform poorly for such clients. To address this problem, we propose DeepSight, a novel model filtering approach for mitigating backdoor attacks. It is based on three novel techniques that characterize the distribution of the data used to train model updates and measure fine-grained differences in the internal structure and outputs of NNs. Using these techniques, DeepSight can identify suspicious model updates. We also develop a scheme that can accurately cluster model updates. Combining the results of both components, DeepSight is able to identify and eliminate model clusters containing poisoned models with high attack impact. We also show that the backdoor contributions of possibly undetected poisoned models can be effectively mitigated with existing weight clipping-based defenses. We evaluate the performance and effectiveness of DeepSight and show that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data., Comment: 18 pages, 8 figures; to appear in the Network and Distributed System Security Symposium (NDSS)
- Published
- 2022
- Full Text
- View/download PDF
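The weight-clipping defense that the abstract says DeepSight composes with can be sketched in a few lines. The bound is an illustrative constant; real defenses derive it from the updates themselves:

```python
import math

def clip_update(update, bound):
    """L2 weight clipping: scale a client's model update down so its norm
    does not exceed the bound, which limits how much any single (possibly
    poisoned) update can move the aggregate."""
    norm = math.sqrt(sum(w * w for w in update))
    if norm <= bound:
        return update
    return [w * bound / norm for w in update]

benign = [0.1, -0.2, 0.05]
poisoned = [30.0, -40.0, 0.0]            # norm 50, far above the bound
print(clip_update(benign, bound=1.0))    # unchanged: already within the bound
print(clip_update(poisoned, bound=1.0))  # rescaled to norm 1
```

Clipping preserves the update's direction, so benign clients with unusual data are dampened rather than excluded, which is exactly the failure mode of exclusion-based defenses that the abstract criticizes.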
32. Chunked-Cache: On-Demand and Scalable Cache Isolation for Security Architectures
- Author
-
Dessouky, Ghada, Gruler, Alexander, Mahmoody, Pouya, Sadeghi, Ahmad-Reza, and Stapf, Emmanuel
- Subjects
Computer Science - Cryptography and Security - Abstract
Shared cache resources in multi-core processors are vulnerable to cache side-channel attacks. Recently proposed defenses have their own caveats: Randomization-based defenses are vulnerable to evolving attack algorithms, besides relying on weak cryptographic primitives, because they do not fundamentally address the root cause of cache side-channel attacks. Cache partitioning defenses, on the other hand, provide strict resource partitioning and effectively block all side-channel threats. However, they usually rely on way-based partitioning, which is not fine-grained and cannot scale to support a larger number of protection domains, e.g., in trusted execution environment (TEE) security architectures, besides degrading performance and often resulting in cache underutilization. To overcome the shortcomings of both approaches, we present a novel and flexible set-associative cache partitioning design for TEE architectures, called Chunked-Cache. Chunked-Cache enables an execution context to "carve" out an exclusive configurable chunk of the cache if the execution requires side-channel resilience. If side-channel resilience is not required, mainstream cache resources are freely utilized. Hence, our solution addresses the security-performance trade-off practically by enabling selective and on-demand utilization of side-channel-resilient caches, while providing well-grounded future-proof security guarantees. We show that Chunked-Cache provides side-channel-resilient cache utilization for sensitive code execution, with small hardware overhead, while incurring no performance overhead on the OS. We also show that it outperforms conventional way-based cache partitioning by 43%, while scaling significantly better to support a larger number of protection domains., Comment: Accepted on 3 Sept 2021 to appear at the Network and Distributed System Security Symposium (NDSS) 2022
- Published
- 2021
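The set-carving idea can be modeled in software. This is a toy allocator; the real design does the carving and per-domain set indexing in hardware:

```python
class ChunkedCacheAllocator:
    """Software model of the carve-out idea: a cache with N sets from which
    an execution domain can claim an exclusive chunk; lookups from that
    domain are then confined to its own sets, so other domains cannot
    build eviction-based side channels against it."""
    def __init__(self, n_sets):
        self.free = set(range(n_sets))
        self.chunks = {}  # domain -> list of exclusively owned set indices

    def carve(self, domain, n):
        if n > len(self.free):
            raise ValueError("not enough free cache sets")
        chunk = sorted(self.free)[:n]
        self.free -= set(chunk)
        self.chunks[domain] = chunk
        return chunk

    def set_index(self, domain, address, line_size=64):
        # Map an address into the domain's own chunk if it has one,
        # otherwise into the shared (free) sets.
        sets = self.chunks.get(domain) or sorted(self.free)
        return sets[(address // line_size) % len(sets)]

alloc = ChunkedCacheAllocator(n_sets=16)
enclave_chunk = alloc.carve("enclave_A", 4)
# All of the enclave's accesses land in its 4 exclusive sets:
indices = {alloc.set_index("enclave_A", a) for a in range(0, 64 * 100, 64)}
print(sorted(indices), "==", enclave_chunk)
```

Because the chunk size is configurable per domain, contexts without secrets keep using the full shared cache, which is the selective, on-demand trade-off the abstract describes.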
33. FakeWake: Understanding and Mitigating Fake Wake-up Words of Voice Assistants
- Author
-
Chen, Yanjiao, Bai, Yijie, Mitev, Richard, Wang, Kaibo, Sadeghi, Ahmad-Reza, and Xu, Wenyuan
- Subjects
Computer Science - Machine Learning ,Computer Science - Cryptography and Security - Abstract
In the area of the Internet of Things (IoT), voice assistants have become an important interface to operate smart speakers, smartphones, and even automobiles. To save power and protect user privacy, voice assistants send commands to the cloud only if a small set of pre-registered wake-up words are detected. However, voice assistants are shown to be vulnerable to the FakeWake phenomenon, whereby they are inadvertently triggered by innocent-sounding fuzzy words. In this paper, we present a systematic investigation of the FakeWake phenomenon from three aspects. To start with, we design the first fuzzy word generator to automatically and efficiently produce fuzzy words instead of searching through a swarm of audio materials. We manage to generate 965 fuzzy words covering the 8 most popular English and Chinese smart speakers. To explain the causes underlying the FakeWake phenomenon, we construct an interpretable tree-based decision model, which reveals phonetic features that contribute to false acceptance of fuzzy words by wake-up word detectors. Finally, we propose remedies to mitigate the effect of FakeWake. The results show that the strengthened models are not only resilient to fuzzy words but also achieve better overall performance on the original training datasets.
- Published
- 2021
34. ESCORT: Ethereum Smart COntRacTs Vulnerability Detection using Deep Neural Network and Transfer Learning
- Author
-
Lutz, Oliver, Chen, Huili, Fereidooni, Hossein, Sendner, Christoph, Dmitrienko, Alexandra, Sadeghi, Ahmad-Reza, and Koushanfar, Farinaz
- Subjects
Computer Science - Cryptography and Security ,Computer Science - Machine Learning - Abstract
Ethereum smart contracts are automated decentralized applications on the blockchain that describe the terms of the agreement between buyers and sellers, reducing the need for trusted intermediaries and arbitration. However, the deployment of smart contracts introduces new attack vectors into the cryptocurrency systems. In particular, programming flaws in smart contracts can be and have already been exploited to gain enormous financial profits. It is thus an emerging yet crucial issue to detect vulnerabilities of different classes in contracts in an efficient manner. Existing machine learning-based vulnerability detection methods are limited: they only inspect whether the smart contract is vulnerable, train individual classifiers for each specific vulnerability, or demonstrate multi-class vulnerability detection without extensibility considerations. To overcome the scalability and generalization limitations of existing works, we propose ESCORT, the first Deep Neural Network (DNN)-based vulnerability detection framework for Ethereum smart contracts that supports lightweight transfer learning on unseen security vulnerabilities and is thus extensible and generalizable. ESCORT leverages a multi-output NN architecture that consists of two parts: (i) a common feature extractor that learns the semantics of the input contract; (ii) multiple branch structures where each branch learns a specific vulnerability type based on features obtained from the feature extractor. Experimental results show that ESCORT achieves an average F1-score of 95% on six vulnerability types and a detection time of 0.02 seconds per contract. When extended to new vulnerability types, ESCORT yields an average F1-score of 93%. To the best of our knowledge, ESCORT is the first framework that enables transfer learning on new vulnerability types with minimal modification of the DNN model architecture and re-training overhead., Comment: 17 pages, 10 figures, 5 tables, 5 equations, 2 listings
- Published
- 2021
35. FLAME: Taming Backdoors in Federated Learning (Extended Version 1)
- Author
-
Nguyen, Thien Duc, Rieger, Phillip, Chen, Huili, Yalame, Hossein, Möllering, Helen, Fereidooni, Hossein, Marchal, Samuel, Miettinen, Markus, Mirhoseini, Azalia, Zeitouni, Shaza, Koushanfar, Farinaz, Sadeghi, Ahmad-Reza, and Schneider, Thomas
- Subjects
Computer Science - Cryptography and Security - Abstract
Federated Learning (FL) is a collaborative machine learning approach allowing participants to jointly train a model without having to share their private, potentially sensitive local datasets with others. Despite its benefits, FL is vulnerable to backdoor attacks, in which an adversary injects manipulated model updates into the model aggregation process so that the resulting model will provide targeted false predictions for specific adversary-chosen inputs. Proposed defenses against backdoor attacks based on detecting and filtering out malicious model updates consider only very specific and limited attacker models, whereas defenses based on differential privacy-inspired noise injection significantly deteriorate the benign performance of the aggregated model. To address these deficiencies, we introduce FLAME, a defense framework that estimates the sufficient amount of noise to be injected to ensure the elimination of backdoors while maintaining the model performance. To minimize the required amount of noise, FLAME uses a model clustering and weight clipping approach. Our evaluation of FLAME on several datasets stemming from application areas including image classification, word prediction, and IoT intrusion detection demonstrates that FLAME removes backdoors effectively with a negligible impact on the benign performance of the models. Furthermore, following the considerable attention that our research has received after its presentation at USENIX SEC 2022, FLAME has become the subject of numerous investigations proposing diverse attack methodologies in an attempt to circumvent it. As a response to these endeavors, we provide a comprehensive analysis of these attempts. 
Our findings show that these papers (e.g., 3DFed [36]) have not fully comprehended nor correctly employed the fundamental principles underlying FLAME, i.e., our defense mechanism effectively repels these attempted attacks., Comment: This extended version incorporates a novel section (Section 10) that provides a comprehensive analysis of recent proposed attacks, notably "3DFed: Adaptive and extensible framework for covert backdoor attack in federated learning" by Li et al. This new section addresses flawed assertions made in the papers that aim to bypass FLAME or misinterpreted its fundamental design principles
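FLAME's central mechanism (clip each client update to a data-driven norm bound so that poisoned updates lose their outsized influence, then add noise proportional to that bound) can be illustrated with a deliberately simplified sketch. The clustering step is omitted and the noise calibration is not the paper's; the function name and parameters are assumptions:

```python
import numpy as np

def flame_style_aggregate(updates, sigma=0.01, rng=None):
    """Clip each client update to the median L2 norm, average, then add
    Gaussian noise scaled by the clipping bound (simplified: FLAME also
    clusters updates before clipping and calibrates noise differently)."""
    rng = rng or np.random.default_rng(0)
    updates = np.asarray(updates, dtype=float)
    norms = np.linalg.norm(updates, axis=1)
    bound = np.median(norms)                      # data-driven clipping bound
    scale = np.minimum(1.0, bound / np.maximum(norms, 1e-12))
    clipped = updates * scale[:, None]            # oversized updates get shrunk
    aggregated = clipped.mean(axis=0)
    return aggregated + rng.normal(0.0, sigma * bound, size=aggregated.shape)

# Three benign updates and one oversized (potentially backdoored) one:
agg = flame_style_aggregate(
    [[1, 0, 0], [0, 1, 0], [0.5, 0.5, 0], [100, 100, 100]], sigma=0.0)
```

Since every clipped update has norm at most the median bound, the malicious update cannot dominate the average regardless of how large the adversary makes it.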
- Published
- 2021
36. DMA’n’Play: Practical Remote Attestation Based on Direct Memory Access
- Author
-
Surminski, Sebastian, Niesler, Christian, Davi, Lucas, Sadeghi, Ahmad-Reza, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Tibouchi, Mehdi, editor, and Wang, XiaoFeng, editor
- Published
- 2023
- Full Text
- View/download PDF
37. CURE: A Security Architecture with CUstomizable and Resilient Enclaves
- Author
-
Bahmani, Raad, Brasser, Ferdinand, Dessouky, Ghada, Jauernig, Patrick, Klimmek, Matthias, Sadeghi, Ahmad-Reza, and Stapf, Emmanuel
- Subjects
Computer Science - Cryptography and Security - Abstract
Security architectures providing Trusted Execution Environments (TEEs) have been an appealing research subject for a wide range of computer systems, from low-end embedded devices to powerful cloud servers. The goal of these architectures is to protect sensitive services in isolated execution contexts, called enclaves. Unfortunately, existing TEE solutions suffer from significant design shortcomings. First, they follow a one-size-fits-all approach offering only a single enclave type, however, different services need flexible enclaves that can adjust to their demands. Second, they cannot efficiently support emerging applications (e.g., Machine Learning as a Service), which require secure channels to peripherals (e.g., accelerators), or the computational power of multiple cores. Third, their protection against cache side-channel attacks is either an afterthought or impractical, i.e., no fine-grained mapping between cache resources and individual enclaves is provided. In this work, we propose CURE, the first security architecture, which tackles these design challenges by providing different types of enclaves: (i) sub-space enclaves provide vertical isolation at all execution privilege levels, (ii) user-space enclaves provide isolated execution to unprivileged applications, and (iii) self-contained enclaves allow isolated execution environments that span multiple privilege levels. Moreover, CURE enables the exclusive assignment of system resources, e.g., peripherals, CPU cores, or cache resources to single enclaves. CURE requires minimal hardware changes while significantly improving the state of the art of hardware-assisted security architectures. We implemented CURE on a RISC-V-based SoC and thoroughly evaluated our prototype in terms of hardware and performance overhead. CURE imposes a geometric mean performance overhead of 15.33% on standard benchmarks., Comment: Accepted to be published in the proceedings of the 30th USENIX Security Symposium (USENIX Security '21 )
- Published
- 2020
38. SoK: On the Security Challenges and Risks of Multi-Tenant FPGAs in the Cloud
- Author
-
Zeitouni, Shaza, Dessouky, Ghada, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security - Abstract
In their continuous growth and penetration into new markets, Field Programmable Gate Arrays (FPGAs) have recently made their way into hardware acceleration of machine learning among other specialized compute-intensive services in cloud data centers, such as Amazon and Microsoft. To further maximize their utilization in the cloud, several academic works propose the spatial multi-tenant deployment model, where the FPGA fabric is simultaneously shared among mutually mistrusting clients. This is enabled by leveraging the partial reconfiguration property of FPGAs, which allows to split the FPGA fabric into several logically isolated regions and reconfigure the functionality of each region independently at runtime. In this paper, we survey industrial and academic deployment models of multi-tenant FPGAs in the cloud computing settings, and highlight their different adversary models and security guarantees, while shedding light on their fundamental shortcomings from a security standpoint. We further survey and classify existing academic works that demonstrate a new class of remotely exploitable physical attacks on multi-tenant FPGA devices, where these attacks are launched remotely by malicious clients sharing physical resources with victim users. Through investigating the problem of end-to-end multi-tenant FPGA deployment more comprehensively, we reveal how these attacks actually represent only one dimension of the problem, while various open security and privacy challenges remain unaddressed. We conclude with our insights and a call for future research to tackle these challenges., Comment: 17 pages, 8 figures
- Published
- 2020
39. Trustworthy AI Inference Systems: An Industry Research View
- Author
-
Cammarota, Rosario, Schunter, Matthias, Rajan, Anand, Boemer, Fabian, Kiss, Ágnes, Treiber, Amos, Weinert, Christian, Schneider, Thomas, Stapf, Emmanuel, Sadeghi, Ahmad-Reza, Demmler, Daniel, Stock, Joshua, Chen, Huili, Hussain, Siam Umar, Riazi, Sadegh, Koushanfar, Farinaz, Gupta, Saransh, Rosing, Tajan Simunic, Chaudhuri, Kamalika, Nejatollahi, Hamid, Dutt, Nikil, Imani, Mohsen, Laine, Kim, Dubey, Anuj, Aysu, Aydin, Hosseini, Fateme Sadat, Yang, Chengmo, Wallace, Eric, and Norton, Pamela
- Subjects
Computer Science - Cryptography and Security ,Computer Science - Artificial Intelligence ,Computer Science - Hardware Architecture ,Computer Science - Computers and Society ,Computer Science - Machine Learning - Abstract
In this work, we provide an industry research view for approaching the design, deployment, and operation of trustworthy Artificial Intelligence (AI) inference systems. Such systems provide customers with timely, informed, and customized inferences to aid their decision, while at the same time utilizing appropriate security protection mechanisms for AI models. Additionally, such systems should also use Privacy-Enhancing Technologies (PETs) to protect customers' data at any time. To approach the subject, we start by introducing current trends in AI inference systems. We continue by elaborating on the relationship between Intellectual Property (IP) and private data protection in such systems. Regarding the protection mechanisms, we survey the security and privacy building blocks instrumental in designing, building, deploying, and operating private AI inference systems. For example, we highlight opportunities and challenges in AI systems using trusted execution environments combined with more recent advances in cryptographic techniques to protect data in use. Finally, we outline areas of further development that require the global collective attention of industry, academia, and government researchers to sustain the operation of trustworthy AI inference systems.
- Published
- 2020
40. Offline Model Guard: Secure and Private ML on Mobile Devices
- Author
-
Bayerl, Sebastian P., Frassetto, Tommaso, Jauernig, Patrick, Riedhammer, Korbinian, Sadeghi, Ahmad-Reza, Schneider, Thomas, Stapf, Emmanuel, and Weinert, Christian
- Subjects
Computer Science - Cryptography and Security - Abstract
Performing machine learning tasks in mobile applications yields a challenging conflict of interest: highly sensitive client information (e.g., speech data) should remain private while also the intellectual property of service providers (e.g., model parameters) must be protected. Cryptographic techniques offer secure solutions for this, but have an unacceptable overhead and moreover require frequent network interaction. In this work, we design a practically efficient hardware-based solution. Specifically, we build Offline Model Guard (OMG) to enable privacy-preserving machine learning on the predominant mobile computing platform ARM - even in offline scenarios. By leveraging a trusted execution environment for strict hardware-enforced isolation from other system components, OMG guarantees privacy of client data, secrecy of provided models, and integrity of processing algorithms. Our prototype implementation on an ARM HiKey 960 development board performs privacy-preserving keyword recognition using TensorFlow Lite for Microcontrollers in real time., Comment: Original Publication (in the same form): DATE 2020
- Published
- 2020
- Full Text
- View/download PDF
41. LeakyPick: IoT Audio Spy Detector
- Author
-
Mitev, Richard, Pazii, Anna, Miettinen, Markus, Enck, William, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security - Abstract
Manufacturers of smart home Internet of Things (IoT) devices are increasingly adding voice assistant and audio monitoring features to a wide range of devices including smart speakers, televisions, thermostats, security systems, and doorbells. Consequently, many of these devices are equipped with microphones, raising significant privacy concerns: users may not always be aware of when audio recordings are sent to the cloud, or who may gain access to the recordings. In this paper, we present the LeakyPick architecture that enables the detection of the smart home devices that stream recorded audio to the Internet without the user's consent. Our proof-of-concept is a LeakyPick device that is placed in a user's smart home and periodically "probes" other devices in its environment and monitors the subsequent network traffic for statistical patterns that indicate audio transmission. Our prototype is built on a Raspberry Pi for less than USD40 and has a measurement accuracy of 94% in detecting audio transmissions for a collection of 8 devices with voice assistant capabilities. Furthermore, we used LeakyPick to identify 89 words that an Amazon Echo Dot misinterprets as its wake-word, resulting in unexpected audio transmission. LeakyPick provides a cost effective approach for regular consumers to monitor their homes for unexpected audio transmissions to the cloud.
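The probe-and-observe idea (compare a device's outbound traffic during probing against its idle baseline) can be illustrated with a toy statistical check. This is a hypothetical heuristic, not LeakyPick's actual detection algorithm:

```python
from statistics import mean, stdev

def audio_burst_suspected(baseline_bps, probe_bps, z=3.0):
    """Flag a device when its outbound rate during probing exceeds the idle
    baseline by more than z standard deviations (toy heuristic; the paper's
    statistical detection differs)."""
    mu, sd = mean(baseline_bps), stdev(baseline_bps)
    return mean(probe_bps) > mu + z * max(sd, 1.0)

idle = [100, 110, 90, 105, 95]  # bytes/s while the home is quiet
leaking = audio_burst_suspected(idle, [5000, 5200, 4800])  # streams after probe
quiet = audio_burst_suspected(idle, [102, 98, 101])        # no reaction to probe
```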
- Published
- 2020
- Full Text
- View/download PDF
42. Mind the GAP: Security & Privacy Risks of Contact Tracing Apps
- Author
-
Baumgärtner, Lars, Dmitrienko, Alexandra, Freisleben, Bernd, Gruler, Alexander, Höchst, Jonas, Kühlberg, Joshua, Mezini, Mira, Mitev, Richard, Miettinen, Markus, Muhamedagic, Anel, Nguyen, Thien Duc, Penning, Alvar, Pustelnik, Dermot Frederik, Roos, Filipp, Sadeghi, Ahmad-Reza, Schwarz, Michael, and Uhl, Christian
- Subjects
Computer Science - Cryptography and Security ,Computer Science - Computers and Society - Abstract
Google and Apple have jointly provided an API for exposure notification in order to implement decentralized contact tracing apps using Bluetooth Low Energy, the so-called "Google/Apple Proposal", which we abbreviate by "GAP". We demonstrate that in real-world scenarios the current GAP design is vulnerable to (i) profiling and possibly de-anonymizing infected persons, and (ii) relay-based wormhole attacks that basically can generate fake contacts with the potential of affecting the accuracy of an app-based contact tracing system. For both types of attack, we have built tools that can easily be used on mobile phones or Raspberry Pis (e.g., Bluetooth sniffers). The goal of our work is to perform a reality check towards possibly providing empirical real-world evidence for these two privacy and security risks. We hope that our findings provide valuable input for developing secure and privacy-preserving digital contact tracing systems.
- Published
- 2020
43. V0LTpwn: Attacking x86 Processor Integrity from Software
- Author
-
Kenjar, Zijo, Frassetto, Tommaso, Gens, David, Franz, Michael, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security - Abstract
Fault-injection attacks have been proven in the past to be a reliable way of bypassing hardware-based security measures, such as cryptographic hashes, privilege and access permission enforcement, and trusted execution environments. However, traditional fault-injection attacks require physical presence, and hence, were often considered out of scope in many real-world adversary settings. In this paper we show this assumption may no longer be justified. We present V0LTpwn, a novel hardware-oriented but software-controlled attack that affects the integrity of computation in virtually any execution mode on modern x86 processors. To the best of our knowledge, this represents the first attack on x86 integrity from software. The key idea behind our attack is to undervolt a physical core to force non-recoverable hardware faults. Under a V0LTpwn attack, CPU instructions will continue to execute with erroneous results and without crashes, allowing for exploitation. In contrast to recently presented side-channel attacks that leverage vulnerable speculative execution, V0LTpwn is not limited to information disclosure, but allows adversaries to affect execution, and hence, effectively breaks the integrity goals of modern x86 platforms. In our detailed evaluation we successfully launch software-based attacks against Intel SGX enclaves from a privileged process to demonstrate that a V0LTpwn attack can successfully change the results of computations within enclave execution across multiple CPU revisions.
- Published
- 2019
44. GrandDetAuto: Detecting Malicious Nodes in Large-Scale Autonomous Networks
- Author
-
Abera, Tigist, Brasser, Ferdinand, Gunn, Lachlan J., Jauernig, Patrick, Koisser, David, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security - Abstract
Autonomous collaborative networks of devices are rapidly emerging in numerous domains, such as self-driving cars, smart factories, critical infrastructure, and Internet of Things in general. Although autonomy and self-organization are highly desired properties, they increase vulnerability to attacks. Hence, autonomous networks need dependable mechanisms to detect malicious devices in order to prevent compromise of the entire network. However, current mechanisms to detect malicious devices either require a trusted central entity or scale poorly. In this paper, we present GrandDetAuto, the first scheme to identify malicious devices efficiently within large autonomous networks of collaborating entities. GrandDetAuto functions without relying on a central trusted entity, works reliably for very large networks of devices, and is adaptable to a wide range of application scenarios thanks to interchangeable components. Our scheme uses random elections to embed integrity validation schemes in distributed consensus, providing a solution supporting tens of thousands of devices. We implemented and evaluated a concrete instance of GrandDetAuto on a network of embedded devices and conducted large-scale network simulations with up to 100000 nodes. Our results show the effectiveness and efficiency of our scheme, revealing logarithmic growth in run-time and message complexity with increasing network size. Moreover, we provide an extensive evaluation of key parameters showing that GrandDetAuto is applicable to many scenarios with diverse requirements.
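The random-election building block (choosing a small verifier committee unpredictably yet reproducibly from a large network, without a trusted coordinator) can be sketched as hash-based sortition. This is an assumed, simplified stand-in; GrandDetAuto's actual election protocol is more involved:

```python
import hashlib

def elect_verifiers(node_ids, round_seed, committee_size):
    """Hash-based sortition: rank nodes by SHA-256 of (seed, id) and take the
    lowest-ranked ones. Deterministic given the seed, so every honest node
    derives the same committee independently."""
    def ticket(node_id):
        return hashlib.sha256(f"{round_seed}:{node_id}".encode()).hexdigest()
    return sorted(node_ids, key=ticket)[:committee_size]

committee = elect_verifiers(list(range(100)), "round-1", committee_size=5)
```

Because no node controls the hash output, an adversary cannot predict or bias which devices will be asked to validate its integrity in a given round.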
- Published
- 2019
- Full Text
- View/download PDF
45. HybCache: Hybrid Side-Channel-Resilient Caches for Trusted Execution Environments
- Author
-
Dessouky, Ghada, Frassetto, Tommaso, and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security - Abstract
Modern multi-core processors share cache resources for maximum cache utilization and performance gains. However, this leaves the cache vulnerable to side-channel attacks, where timing differences in shared cache behavior are exploited to infer information on the victim's execution patterns, ultimately leaking private information. The root cause for these attacks is mutually distrusting processes sharing cache entries and accessing them in a deterministic manner. Various defenses against cache side-channel attacks have been proposed. However, they either degrade performance significantly, impose impractical restrictions, or can only defeat certain classes of these attacks. More importantly, they assume that side-channel-resilient caches are required for the entire execution workload and do not allow to selectively enable the mitigation only for the security-critical portion of the workload. We present a generic mechanism for a flexible and soft partitioning of set-associative caches and propose a hybrid cache architecture, called HybCache. HybCache can be configured to selectively apply side-channel-resilient cache behavior only for isolated execution domains, while providing the non-isolated execution with conventional cache behavior, capacity and performance. An isolation domain can include one or more processes, specific portions of code, or a Trusted Execution Environment. We show that, with minimal hardware modifications and kernel support, HybCache can provide side-channel-resilient cache only for isolated execution with a performance overhead of 3.5-5%, while incurring no performance overhead for the remaining execution workload. We provide a simulator-based and hardware implementation of HybCache to evaluate the performance and area overheads, and show how it mitigates typical access-based and contention-based cache attacks., Comment: Accepted on 18 June 2019 to appear in USENIX Security 2020
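The soft-partitioning idea (isolated domains confined to a subset of ways with randomized replacement, while the rest of the workload keeps conventional behavior over all ways) can be sketched for a single cache set. All names and parameters below are invented for illustration; this is a behavioral toy, not the HybCache hardware design:

```python
import random

class HybSet:
    """One set of a set-associative cache with soft partitioning: isolated
    domains may only occupy the first `iso_ways` ways and evict randomly
    (defeating deterministic reuse patterns); other execution uses all ways
    with LRU replacement."""

    def __init__(self, ways=8, iso_ways=4, seed=0):
        self.ways = [None] * ways
        self.iso_ways = iso_ways
        self.lru = []  # way indices, least recently used first
        self.rng = random.Random(seed)

    def access(self, tag, isolated):
        if tag in self.ways:                       # cache hit
            way, hit = self.ways.index(tag), True
        else:                                      # miss: pick a victim way
            hit = False
            if isolated:
                way = self.rng.randrange(self.iso_ways)
            else:
                free = [w for w, t in enumerate(self.ways) if t is None]
                way = free[0] if free else self.lru[0]
            self.ways[way] = tag
        if way in self.lru:
            self.lru.remove(way)
        self.lru.append(way)
        return hit

s = HybSet()
miss, hit = s.access("A", isolated=False), s.access("A", isolated=False)
s.access("secret", isolated=True)  # lands somewhere in ways 0..3, chosen randomly
```

The non-isolated path retains full capacity and deterministic LRU, which is the "no overhead for the remaining workload" property the abstract highlights.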
- Published
- 2019
46. ARM2GC: Succinct Garbled Processor for Secure Computation
- Author
-
Songhori, Ebrahim M., Riazi, M. Sadegh, Hussain, Siam U., Sadeghi, Ahmad-Reza, and Koushanfar, Farinaz
- Subjects
Computer Science - Cryptography and Security - Abstract
We present ARM2GC, a novel secure computation framework based on Yao's Garbled Circuit (GC) protocol and the ARM processor. It allows users to develop privacy-preserving applications using standard high-level programming languages (e.g., C) and compile them using off-the-shelf ARM compilers (e.g., gcc-arm). The main enabler of this framework is the introduction of SkipGate, an algorithm that dynamically omits the communication and encryption cost of the gates whose outputs are independent of the private data. SkipGate greatly enhances the performance of ARM2GC by omitting costs of the gates associated with the instructions of the compiled binary, which is known by both parties involved in the computation. Our evaluation on benchmark functions demonstrates that ARM2GC not only outperforms the current GC frameworks that support high-level languages, it also achieves efficiency comparable to the best prior solutions based on hardware description languages. Moreover, in contrast to previous high-level frameworks with domain-specific languages and customized compilers, ARM2GC relies on standard ARM compiler which is rigorously verified and supports programs written in the standard syntax., Comment: 13 pages
- Published
- 2019
47. Control Behavior Integrity for Distributed Cyber-Physical Systems
- Author
-
Adepu, Sridhar, Brasser, Ferdinand, Garcia, Luis, Rodler, Michael, Davi, Lucas, Sadeghi, Ahmad-Reza, and Zonouz, Saman
- Subjects
Computer Science - Cryptography and Security - Abstract
Cyber-physical control systems, such as industrial control systems (ICS), are increasingly targeted by cyberattacks. Such attacks can potentially cause tremendous damage, affect critical infrastructure or even jeopardize human life when the system does not behave as intended. Cyberattacks, however, are not new and decades of security research have developed plenty of solutions to thwart them. Unfortunately, many of these solutions cannot be easily applied to safety-critical cyber-physical systems. Further, the attack surface of ICS is quite different from what can be commonly assumed in classical IT systems. We present Scadman, a system with the goal to preserve the Control Behavior Integrity (CBI) of distributed cyber-physical systems. By observing the system-wide behavior, the correctness of individual controllers in the system can be verified. This allows Scadman to detect a wide range of attacks against controllers, like programmable logic controller (PLCs), including malware attacks, code-reuse and data-only attacks. We implemented and evaluated Scadman based on a real-world water treatment testbed for research and training on ICS security. Our results show that we can detect a wide range of attacks--including attacks that have previously been undetectable by typical state estimation techniques--while causing no false-positive warning for nominal threshold values., Comment: 15 pages, 8 figures
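The system-wide consistency checking can be illustrated with a toy plant model: predict the physical state from the global model and flag a controller whose reported state diverges. This is a hypothetical miniature with made-up names and rates, not Scadman's verification engine:

```python
def simulate_tank(level, inflow_on, outflow_on, steps, rate_in=1.0, rate_out=0.8):
    """System-wide physical model of a water tank actuated by a PLC."""
    for _ in range(steps):
        level += (rate_in if inflow_on else 0.0) - (rate_out if outflow_on else 0.0)
    return level

def cbi_consistent(reported_level, model_level, tolerance=0.5):
    """A PLC whose reported state diverges from the model prediction is
    flagged as potentially compromised (malware, code-reuse, data-only)."""
    return abs(reported_level - model_level) <= tolerance

predicted = simulate_tank(10.0, inflow_on=True, outflow_on=False, steps=5)  # 15.0
honest = cbi_consistent(15.1, predicted)   # within tolerance -> consistent
spoofed = cbi_consistent(20.0, predicted)  # diverges -> raise an alarm
```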
- Published
- 2018
48. When a Patch is Not Enough - HardFails: Software-Exploitable Hardware Bugs
- Author
-
Dessouky, Ghada, Gens, David, Haney, Patrick, Persyn, Garrett, Kanuparthi, Arun, Khattri, Hareesh, Fung, Jason M., Sadeghi, Ahmad-Reza, and Rajendran, Jeyavijayan
- Subjects
Computer Science - Cryptography and Security - Abstract
In this paper, we take a deep dive into microarchitectural security from a hardware designer's perspective by reviewing the existing approaches to detect hardware vulnerabilities during the design phase. We show that a protection gap currently exists in practice that leaves chip designs vulnerable to software-based attacks. In particular, existing verification approaches fail to detect specific classes of vulnerabilities, which we call HardFails: these bugs evade detection by current verification techniques while being exploitable from software. We demonstrate such vulnerabilities in real-world SoCs using RISC-V to showcase and analyze concrete instantiations of HardFails. Patching these hardware bugs may not always be possible and can potentially result in a product recall. We base our findings on two extensive case studies: the recent Hack@DAC 2018 hardware security competition, where 54 independent teams of researchers competed world-wide over a period of 12 weeks to catch inserted security bugs in SoC RTL designs, and an in-depth systematic evaluation of state-of-the-art verification approaches. Our findings indicate that even combinations of techniques will miss high-impact bugs due to the large number of modules with complex interdependencies and fundamental limitations of current detection approaches. We also craft a real-world software attack that exploits one of the RTL bugs from Hack@DAC that evaded detection and discuss novel approaches to mitigate the growing problem of cross-layer bugs at design time.
- Published
- 2018
49. Baseline functionality for security and control of commodity IoT devices and domain-controlled device lifecycle management
- Author
-
Miettinen, Markus, van Oorschot, Paul C., and Sadeghi, Ahmad-Reza
- Subjects
Computer Science - Cryptography and Security - Abstract
The emerging Internet of Things (IoT) drastically increases the number of connected devices in homes, workplaces and smart city infrastructures. This drives a need for means to not only ensure confidentiality of device-related communications, but for device configuration and management---ensuring that only legitimate devices are granted privileges to a local domain, that only authorized agents have access to the device and data it holds, and that software updates are authentic. The need to support device on-boarding, ongoing device management and control, and secure decommissioning dictates a suite of key management services for both access control to devices, and access by devices to wireless infrastructure and networked resources. We identify this core functionality, and argue for the recognition of efficient and reliable key management support---both within IoT devices, and by a unifying external management platform---as a baseline requirement for an IoT world. We present a framework architecture to facilitate secure, flexible and convenient device management in commodity IoT scenarios, and offer an illustrative set of protocols as a base solution---not to promote specific solution details, but to highlight baseline functionality to help domain owners oversee deployments of large numbers of independent multi-vendor IoT devices.
- Published
- 2018
50. Peek-a-Boo: I see your smart home activities, even encrypted!
- Author
-
Acar, Abbas, Fereidooni, Hossein, Abera, Tigist, Sikder, Amit Kumar, Miettinen, Markus, Aksu, Hidayet, Conti, Mauro, Sadeghi, Ahmad-Reza, and Uluagac, Selcuk
- Subjects
Computer Science - Cryptography and Security - Abstract
A myriad of IoT devices such as bulbs, switches, speakers in a smart home environment allow users to easily control the physical world around them and facilitate their living styles through the sensors already embedded in these devices. Sensor data contains a lot of sensitive information about the user and devices. However, an attacker inside or near a smart home environment can potentially exploit the innate wireless medium used by these devices to exfiltrate sensitive information from the encrypted payload (i.e., sensor data) about the users and their activities, invading user privacy. With this in mind, in this work, we introduce a novel multi-stage privacy attack against user privacy in a smart environment. It is realized utilizing state-of-the-art machine-learning approaches for detecting and identifying the types of IoT devices, their states, and ongoing user activities in a cascading style by only passively sniffing the network traffic from smart home devices and sensors. The attack effectively works on both encrypted and unencrypted communications. We evaluate the efficiency of the attack with real measurements from an extensive set of popular off-the-shelf smart home IoT devices utilizing a set of diverse network protocols like WiFi, ZigBee, and BLE. Our results show that an adversary passively sniffing the traffic can achieve very high accuracy (above 90%) in identifying the state and actions of targeted smart home devices and their users. To protect against this privacy leakage, we also propose a countermeasure based on generating spoofed traffic to hide the device states and demonstrate that it provides better protection than existing solutions., Comment: Update (May 13, 2020): This is the author's version of the work. It is posted here for your personal use. Not for redistribution. 
The definitive Version of Record was published in the 13th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec '20), July 8-10, 2020, Linz (Virtual Event), Austria, https://doi.org/10.1145/3395351.3399421
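The traffic-classification stage of such an attack can be illustrated with a toy fingerprinting step: summarize encrypted packets by their size distribution and match against known device fingerprints with a nearest-centroid rule. Feature choice, bin edges, and class names are assumptions here; the paper's cascading pipeline uses richer features and models:

```python
import numpy as np

def size_histogram(packet_sizes, bins=(0, 100, 300, 600, 1500)):
    """Fingerprint encrypted traffic by its packet-size distribution alone."""
    hist, _ = np.histogram(packet_sizes, bins=bins)
    return hist / max(hist.sum(), 1)

class NearestCentroid:
    """Assign a captured flow to the closest known device/state fingerprint."""

    def fit(self, X, labels):
        self.centroids = {
            lab: np.mean([x for x, l in zip(X, labels) if l == lab], axis=0)
            for lab in set(labels)
        }
        return self

    def predict(self, x):
        return min(self.centroids,
                   key=lambda lab: np.linalg.norm(x - self.centroids[lab]))

# Small control packets vs. large media packets, labeled from prior captures:
flows = ([60, 70, 80], [64, 72], [1200, 1400, 1300], [1100, 1450])
X = [size_histogram(f) for f in flows]
clf = NearestCentroid().fit(X, ["bulb", "bulb", "camera", "camera"])
pred_bulb = clf.predict(size_histogram([65, 75, 70]))
pred_camera = clf.predict(size_histogram([1250, 1350]))
```

Note that nothing in the payload is decrypted; packet sizes alone separate the two device classes, which is the privacy-leakage point the abstract makes.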
- Published
- 2018
- Full Text
- View/download PDF