75 results for "Sean Peisert"
Search Results
2. SoDa: An Irradiance-Based Synthetic Solar Data Generation Tool
- Author
-
David Pinney, Ciaran Roberts, Anna Scaglione, Sean Peisert, Sy-Toan Ngo, Raksha Ramakrishna, Ignacio Losada Carreño, and Daniel Arnold
- Subjects
Stochastic modelling, Computer science, Photovoltaic system, Irradiance, Radiation, Phasor measurement unit, Solar time, Solar power, Remote sensing - Abstract
In this paper, we present SoDa, an irradiance-based synthetic Solar Data generation tool that generates realistic sub-minute solar photovoltaic (PV) output power time series emulating the weather patterns of a given geographical location. Our tool relies on the National Solar Radiation Database (NSRDB) to obtain irradiance and weather data patterns for the site. Irradiance is mapped onto a PV model estimate of a solar plant’s 30-min power output, based on the configuration of the panel. The working hypothesis for generating high-resolution (e.g., 1 second) solar data is that the conditional distribution of the solar power output time series given the cloud density is the same for different locations. We therefore propose a stochastic model with switching behavior driven by the different weather regimes provided by the cloud type label in the NSRDB, and train the model parameters for the cloudy states on high-resolution solar power measurements from a Phasor Measurement Unit (PMU). In the paper we introduce the stochastic model and the methodology used to train its parameters. The numerical results show that our tool creates synthetic solar time series at high resolutions that are statistically representative of the measured solar power, and illustrate how to use the tool to create synthetic data for arbitrary sites within the footprint covered by the NSRDB.
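A minimal sketch of the kind of regime-switching generator the abstract describes. The two-state (clear/cloudy) Markov chain, transition probabilities, and noise levels below are illustrative assumptions; the actual SoDa tool derives its regimes from NSRDB cloud-type labels and trains the cloudy-state parameters on PMU measurements.

    import numpy as np

    def synthetic_pv_power(clear_sky_power,
                           p_stay=np.array([[0.99, 0.01], [0.02, 0.98]]),
                           sigma=(0.005, 0.08), seed=0):
        """Toy regime-switching generator: scale a clear-sky PV profile by a
        cloud-state-dependent multiplicative noise process (hypothetical parameters)."""
        rng = np.random.default_rng(seed)
        n = len(clear_sky_power)
        state = 0          # 0 = clear, 1 = cloudy
        factor = 1.0
        out = np.empty(n)
        for t in range(n):
            state = rng.choice(2, p=p_stay[state])       # Markov switch between regimes
            factor = np.clip(factor + rng.normal(0.0, sigma[state]), 0.0, 1.0)
            out[t] = clear_sky_power[t] * factor          # 1-second resolution output
        return out

    # Example: one hour of 1-second data around a clear-sky estimate of 5 kW
    profile = synthetic_pv_power(np.full(3600, 5.0))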
- Published
- 2023
3. Unsafe at Any Clock Speed: The Insecurity of Computer System Design, Implementation, and Operation
- Author
-
Sean Peisert
- Subjects
Computer Software, Defence & Security Studies, Computer Networks and Communications, Computation Theory and Mathematics, Electrical and Electronic Engineering, Strategic, Law, Data Format - Published
- 2022
- Full Text
- View/download PDF
4. Differentially Private Map Matching for Mobility Trajectories
- Author
-
Ammar Haydari, Chen-Nee Chuah, Michael Zhang, Jane Macfarlane, and Sean Peisert
- Published
- 2022
- Full Text
- View/download PDF
5. Message from the S-HPC 22 Workshop Chairs
- Author
-
Sean Peisert
- Published
- 2022
- Full Text
- View/download PDF
6. SoK: Limitations of Confidential Computing via TEEs for High-Performance Compute Systems
- Author
-
Ayaz Akram, Venkatesh Akella, Sean Peisert, and Jason Lowe-Power
- Published
- 2022
- Full Text
- View/download PDF
7. Reflections on the Past, Perspectives on the Future [From the Editors]
- Author
-
Sean Peisert
- Subjects
Computer Networks and Communications, Computer science, Engineering ethics, Electrical and Electronic Engineering, Law - Published
- 2021
- Full Text
- View/download PDF
8. Adam-based Augmented Random Search for Control Policies for Distributed Energy Resource Cyber Attack Mitigation
- Author
-
Daniel Arnold, Sy-Toan Ngo, Ciaran Roberts, Yize Chen, Anna Scaglione, and Sean Peisert
- Subjects
Optimization and Control (math.OC), FOS: Electrical engineering, electronic engineering, information engineering, FOS: Mathematics, Systems and Control (eess.SY), Electrical Engineering and Systems Science - Systems and Control, Mathematics - Optimization and Control - Abstract
Volt-VAR and Volt-Watt control functions are mechanisms that are included in distributed energy resource (DER) power electronic inverters to mitigate excessively high or low voltages in distribution systems. In the event that a subset of DER have had their Volt-VAR and Volt-Watt settings compromised as part of a cyber-attack, we propose a mechanism to control the remaining set of non-compromised DER to ameliorate large oscillations in system voltages and large voltage imbalances in real time. To do so, we construct control policies for individual non-compromised DER, directly searching the policy space using an Adam-based augmented random search (ARS). In this paper we show that, compared to previous efforts aimed at training policies for DER cybersecurity using deep reinforcement learning (DRL), the proposed approach is able to learn optimal (and sometimes linear) policies an order of magnitude faster than conventional DRL techniques (e.g., Proximal Policy Optimization).
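A minimal sketch of augmented random search with an Adam-style parameter update, the core idea named in the abstract. The rollout function, policy dimension, and all hyperparameters below are hypothetical stand-ins, not the paper's grid simulation or settings.

    import numpy as np

    def adam_ars(rollout, dim, steps=200, n_dirs=8, nu=0.05,
                 lr=0.02, b1=0.9, b2=0.999, eps=1e-8, seed=0):
        """Augmented random search with an Adam-style update (hypothetical parameters).
        `rollout(theta)` must return the episode reward for a linear policy theta."""
        rng = np.random.default_rng(seed)
        theta = np.zeros(dim)
        m = np.zeros(dim)
        v = np.zeros(dim)
        for t in range(1, steps + 1):
            deltas = rng.standard_normal((n_dirs, dim))
            # Finite-difference estimate of the ascent direction from +/- perturbations
            g = np.zeros(dim)
            for d in deltas:
                g += (rollout(theta + nu * d) - rollout(theta - nu * d)) * d
            g /= (2 * nu * n_dirs)
            # Adam moments applied to the ARS search direction
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g * g
            theta += lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)
        return theta

    # Toy usage: maximize a quadratic "reward" as a stand-in for the grid simulation
    best = adam_ars(lambda th: -np.sum((th - 1.0) ** 2), dim=4)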
- Published
- 2022
- Full Text
- View/download PDF
9. Perspectives for self-driving labs in synthetic biology
- Author
-
Hector G Martin, Tijana Radivojevic, Jeremy Zucker, Kristofer Bouchard, Jess Sustarich, Sean Peisert, Dan Arnold, Nathan Hillson, Gyorgy Babnigg, Jose M Marti, Christopher J Mungall, Gregg T Beckham, Lucas Waldburger, James Carothers, ShivShankar Sundaram, Deb Agarwal, Blake A Simmons, Tyler Backman, Deepanwita Banerjee, Deepti Tanjore, Lavanya Ramakrishnan, and Anup Singh
- Subjects
Technology, Biomedical Engineering, Bioengineering, Biological Sciences, Other Quantitative Biology (q-bio.OT), Quantitative Biology - Other Quantitative Biology, Engineering, Artificial Intelligence, FOS: Biological sciences, Humans, Synthetic Biology, Biotechnology - Abstract
Self-driving labs (SDLs) combine fully automated experiments with artificial intelligence (AI) that decides the next set of experiments. Taken to their ultimate expression, SDLs could usher in a new paradigm of scientific research, where the world is probed, interpreted, and explained by machines for human benefit. While there are functioning SDLs in the fields of chemistry and materials science, we contend that synthetic biology offers a unique opportunity because the genome provides a single target for affecting the incredibly wide repertoire of biological cell behavior. However, the level of investment required for the creation of biological SDLs is only warranted if directed towards solving difficult and enabling biological questions. Here, we discuss challenges and opportunities in creating SDLs for synthetic biology.
- Published
- 2022
- Full Text
- View/download PDF
10. Differentially Private K-means Clustering Applied to Meter Data Analysis and Synthesis
- Author
-
Nikhil Ravi, Anna Scaglione, Sachin Kadam, Reinhard Gentz, Sean Peisert, Brent Lunghino, Emmanuel Levijarvi, and Aram Shumavon
- Subjects
Signal Processing (eess.SP), General Computer Science, FOS: Electrical engineering, electronic engineering, information engineering, Generic health relevance, Interdisciplinary Engineering, Electrical Engineering and Systems Science - Signal Processing, Electrical and Electronic Engineering - Abstract
The proliferation of smart meters has resulted in a large amount of data being generated. It is increasingly apparent that methods are required for allowing a variety of stakeholders to leverage the data in a manner that preserves the privacy of the consumers. The sector is scrambling to define policies, such as the so-called '15/15 rule', to respond to the need. However, the current policies fail to adequately guarantee privacy. In this paper, we address the problem of allowing third parties to apply K-means clustering, obtaining customer labels and centroids for a set of load time series, under the framework of differential privacy. We leverage the method to design an algorithm that generates differentially private synthetic load data consistent with the labeled data. We test our algorithm's utility by answering summary statistics such as average daily load profiles for a 2-dimensional synthetic dataset and a real-world power load dataset.
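A minimal sketch of one way to make K-means differentially private, by releasing Laplace-perturbed cluster sums and counts at each iteration. The privacy-budget split, sensitivity bounds, and synthetic data below are illustrative assumptions, not the specific mechanism from the paper.

    import numpy as np

    def dp_kmeans(X, k=3, iters=10, eps_total=1.0, seed=0):
        """K-means where each iteration releases noisy cluster sums/counts
        (Laplace mechanism). Assumes features are pre-scaled to [0, 1] so the
        L1 sensitivity of a point's contribution to a cluster sum is X.shape[1]."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        eps_iter = eps_total / iters          # naive sequential-composition budget split
        centroids = X[rng.choice(n, k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                pts = X[labels == j]
                noisy_count = max(len(pts) + rng.laplace(0, 2 / eps_iter), 1.0)
                noisy_sum = pts.sum(0) + rng.laplace(0, 2 * d / eps_iter, size=d)
                centroids[j] = np.clip(noisy_sum / noisy_count, 0.0, 1.0)
        return centroids, labels

    # Toy usage on synthetic daily load profiles scaled to [0, 1]
    centroids, labels = dp_kmeans(np.random.default_rng(1).random((200, 24)))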
- Published
- 2022
11. A Framework for Evaluating BFT
- Author
-
James R. Clavin, Yue Huang, Xin Wang, Pradeep M. Prakash, Sisi Duan, Jianwu Wang, and Sean Peisert
- Published
- 2021
- Full Text
- View/download PDF
12. Position Papers for the ASCR Workshop on Cybersecurity and Privacy for Scientific Computing Ecosystems
- Author
-
Stacy Prowell, David Manz, Candace Culhane, Sheikh Ghafoor, Martine Kalke, Kate Keahey, Celeste Matarazzo, Chris Oehmen, Sean Peisert, and Ali Pinar
- Published
- 2021
- Full Text
- View/download PDF
13. Some Experiences in Developing Security Technology That Actually Get Used
- Author
-
Sean Peisert
- Subjects
Computer Networks and Communications, Computer science, Internet privacy, Electrical and Electronic Engineering, Law - Published
- 2019
- Full Text
- View/download PDF
14. Deep Reinforcement Learning for Mitigating Cyber-Physical DER Voltage Unbalance Attacks
- Author
-
Anna Scaglione, Sean Peisert, Sy-Toan Ngo, Daniel Arnold, Ciaran Roberts, and Alexandre Milesi
- Subjects
Computer science, Electrical engineering, Cyber-physical system, Attack surface, Power (physics), Electric power system, Control theory, Reinforcement learning, Transformer, Control logic - Abstract
The deployment of DER with smart-inverter functionality is increasing the controllable assets on power distribution networks and, consequently, the cyber-physical attack surface. Within this work, we consider the use of reinforcement learning as an online controller that adjusts DER Volt/Var and Volt/Watt control logic to mitigate network voltage unbalance. We specifically focus on the case where a network-aware cyber-physical attack has compromised a subset of single-phase DER, causing a large voltage unbalance. We show how deep reinforcement learning successfully learns a policy minimizing the unbalance, both during normal operation and during a cyber-physical attack. In mitigating the attack, the learned stochastic policy operates alongside legacy equipment on the network, i.e., tap-changing transformers, optimally adjusting the predefined DER control logic.
- Published
- 2021
- Full Text
- View/download PDF
15. Performance Analysis of Scientific Computing Workloads on General Purpose TEEs
- Author
-
Jason Lowe-Power, Sean Peisert, Anna Giannakou, Ayaz Akram, and Venkatesh Akella
- Subjects
Potential impact, Graph analytics, Memory management, General purpose, Computer science, Programming paradigm, Graph (abstract data type), Confidentiality, Computational science - Abstract
Scientific computing sometimes involves computation on sensitive data. Depending on the data and the execution environment, the HPC (high-performance computing) user or data provider may require confidentiality and/or integrity guarantees. To study the applicability of hardware-based trusted execution environments (TEEs) to enable secure scientific computing, we deeply analyze the performance impact of general purpose TEEs, AMD SEV, and Intel SGX, for diverse HPC benchmarks including traditional scientific computing, machine learning, graph analytics, and emerging scientific computing workloads. We observe three main findings: 1) SEV requires careful memory placement on large scale NUMA machines (1×–3.4× slowdown without and 1×–1.15× slowdown with NUMA aware placement), 2) virtualization—a prerequisite for SEV—results in performance degradation for workloads with irregular memory accesses and large working sets (1×–4× slowdown compared to native execution for graph applications) and 3) SGX is inappropriate for HPC given its limited secure memory size and inflexible programming model (1.2×–126× slowdown over unsecure execution). Finally, we discuss forthcoming new TEE designs and their potential impact on scientific computing.
- Published
- 2021
- Full Text
- View/download PDF
16. Learning from learning machines: improving the predictive power of energy-water-land nexus models with insights from complex measured and simulated data
- Author
-
James Brown, Michael Sohn, Utkarsh Mital, Dipankar Dwivedi, Haruko Wainwright, Carl Steefel, Eoin Brodie, William Collins, Daniel Jacobson, Michael Mahoney, Tianzhen Hong, Christoph Gehbauer, Doug Black, Thomas Kirchstetter, Daniel Arnold, and Sean Peisert
- Subjects
Computer science, Simulated data, Predictive power, Nexus (standard), Industrial engineering, Energy (signal processing) - Published
- 2021
- Full Text
- View/download PDF
17. Perspectives on the SolarWinds Incident
- Author
-
Terry Benzel, Atul Prakash, Jelena Mirkovic, James Bret Michael, Carl E. Landwehr, Hamed Okhravi, Fabio Massacci, Bruce Schneier, Mohammad Mannan, Sean Peisert, and Naval Postgraduate School (U.S.)
- Subjects
Source code, Computer Networks and Communications, Event (computing), Computer science, Computation Theory and Mathematics, Editorial board, Computer security, Data Format, Defence & Security Studies, Computer Software, Software, Malware, Electrical and Electronic Engineering, Law, Strategic - Abstract
A significant cybersecurity event has recently been discovered in which malicious actors gained access to the source code for the Orion monitoring and management software made by the company SolarWinds and inserted malware into that source code. When the software was distributed to and deployed by SolarWinds customers as part of an update, the malicious software could be used to surveil customers who unknowingly installed the malware and to gain potentially arbitrary control over the systems managed by Orion. This article describes brief perspectives from a few experts regarding the incident and probable solutions. One of the solutions is to improve government software procurement. Software is critical to national security. Any system for procuring that software needs to evaluate the security of the software and the security practices of the company, in detail, to ensure that they are sufficient to meet the security needs of the network in which the software is being installed. If these evaluations are made public, along with the list of companies that meet them, all network buyers can benefit from them.
- Published
- 2021
- Full Text
- View/download PDF
18. Lyapunov stability of smart inverters using linearized distflow approximation
- Author
-
Daniel Arnold, Eran Schweitzer, Ciaran Roberts, Shammya Shananda Saha, Sean Peisert, Nathan G. Johnson, and Anna Scaglione
- Subjects
Lyapunov function, Computer science, Stability criterion, Systems and Control (eess.SY), Electrical Engineering and Systems Science - Systems and Control, Clean energy, Renewable energy sources, Control theory, FOS: Electrical engineering, electronic engineering, information engineering, Voltage droop, Electrical and Electronic Engineering, Lyapunov stability, Energy, Renewable Energy, Sustainability and the Environment, AC power, Lipschitz continuity, Inverter, Voltage - Abstract
Fast-acting smart inverters that utilize preset operating conditions to determine real and reactive power injection/consumption can create voltage instabilities (over-voltage, voltage oscillations, and more) in an electrical distribution network if set-points are not properly configured. In this work, linear distribution power flow equations and droop-based Volt-Var and Volt-Watt control curves are used to analytically derive a stability criterion, using Lyapunov analysis, that includes the network operating condition. The methodology is generally applicable for control curves that can be represented as Lipschitz functions. The derived Lipschitz constants account for smart inverter hardware limitations for reactive power generation. A local policy is derived from the stability criterion that allows inverters to adapt their control curves by monitoring only local voltage, thus avoiding centralized control or information sharing with other inverters. The criterion is independent of the internal time-delays of smart inverters. Simulation results for inverters with and without the proposed stabilization technique demonstrate how smart inverters can mitigate voltage oscillations locally and mitigate real and reactive power flow disturbances at the substation under multiple scenarios. The study concludes with illustrations of how the control policy can dampen oscillations caused by solar intermittency and cyber-attacks.
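A small illustration of the objects the abstract works with: a droop-based Volt-Var curve written as a piecewise-linear (hence Lipschitz) function, and a numerical check of its Lipschitz constant, the quantity a stability criterion of this kind constrains. The breakpoints and limits below are hypothetical per-unit values, not those from the paper.

    import numpy as np

    def volt_var(v, v_low=0.95, v_high=1.05, q_max=0.44):
        """Piecewise-linear droop Volt-Var curve (hypothetical breakpoints, per-unit).
        Injects reactive power below v_low, absorbs above v_high, linear in between."""
        slope = 2 * q_max / (v_high - v_low)           # the curve's Lipschitz constant
        return float(np.clip(q_max - slope * (v - v_low), -q_max, q_max))

    # Numerical check that the curve is Lipschitz with constant `slope`
    vs = np.linspace(0.90, 1.10, 2001)
    qs = np.array([volt_var(v) for v in vs])
    L_est = np.max(np.abs(np.diff(qs) / np.diff(vs)))
    print(f"estimated Lipschitz constant: {L_est:.2f}")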
- Published
- 2021
19. Trustworthy Scientific Computing
- Author
-
Sean Peisert
- Subjects
Data sharing, Trustworthiness, General Computer Science, Computer science, Information and Computing Sciences, Data science, Information Systems - Abstract
Addressing the trust issues underlying the current limits on data sharing.
- Published
- 2021
20. Machine learning for metabolic engineering: A review
- Author
-
Jose Manuel Martí, Sai Vamshi R. Jonnalagadda, Christopher J. Petzold, Aindrila Mukhopadhyay, Reinhard Gentz, Christopher E. Lawson, Hector Garcia Martin, Joonhoon Kim, Deepti Tanjore, Sean Peisert, Steven W. Singer, Joshua G. Dunn, Tijana Radivojevic, Blake A. Simmons, and Nathan J. Hillson
- Subjects
Computer science, Data management, Bioengineering, Machine learning, Applied Microbiology and Biotechnology, Industrial Biotechnology, Metabolic engineering, Omics data, Synthetic biology, Deep learning, Gene Editing, Production (economics), Artificial intelligence, Algorithms, Biotechnology - Abstract
Machine learning provides researchers a unique opportunity to make metabolic engineering more predictable. In this review, we offer an introduction to this discipline in terms that are relatable to metabolic engineers and provide in-depth illustrative examples of leveraging omics data to improve production. We also include practical advice for the practitioner in terms of data management, algorithm libraries, computational resources, and important non-technical issues. A variety of applications is discussed, ranging from pathway construction and optimization to genetic editing optimization, cell factory testing, and production scale-up. Moreover, the promising relationship between machine learning and mechanistic models is thoroughly reviewed. Finally, the future perspectives and most promising directions for this combination of disciplines are examined.
- Published
- 2021
21. Supporting Cyber Security of Power Distribution Systems by Detecting Differences Between Real-time Micro-Synchrophasor Measurements and Cyber-Reported SCADA (Final Report)
- Author
-
Anna Scaglione, C. P. McParland, Aaron Snyder, Reinhard Gentz, Sean Peisert, Alex McEachern, Galen Rasche, Ciaran Roberts, and Mahdi Jamei
- Subjects
Distribution system, SCADA, Computer science, Computer security - Published
- 2020
- Full Text
- View/download PDF
22. Performance Analysis of Scientific Computing Workloads on Trusted Execution Environments
- Author
-
Jason Lowe-Power, Anna Giannakou, Sean Peisert, Venkatesh Akella, and Ayaz Akram
- Subjects
Computer science, Programming paradigm, Graph (abstract data type), Computational science - Abstract
Scientific computing sometimes involves computation on sensitive data. Depending on the data and the execution environment, the HPC (high-performance computing) user or data provider may require confidentiality and/or integrity guarantees. To study the applicability of hardware-based trusted execution environments (TEEs) to enable secure scientific computing, we deeply analyze the performance impact of AMD SEV and Intel SGX for diverse HPC benchmarks including traditional scientific computing, machine learning, graph analytics, and emerging scientific computing workloads. We observe three main findings: 1) SEV requires careful memory placement on large scale NUMA machines (1×–3.4× slowdown without and 1×–1.15× slowdown with NUMA aware placement), 2) virtualization—a prerequisite for SEV—results in performance degradation for workloads with irregular memory accesses and large working sets (1×–4× slowdown compared to native execution for graph applications) and 3) SGX is inappropriate for HPC given its limited secure memory size and inflexible programming model (1.2×–126× slowdown over unsecure execution). Finally, we discuss forthcoming new TEE designs and their potential impact on scientific computing.
- Published
- 2020
- Full Text
- View/download PDF
23. Anomaly Detection for Science DMZs Using System Performance Data
- Author
-
Christina Mao, Ross K. Gegan, Sean Peisert, Dipak Ghosal, and Matt Bishop
- Subjects
DBSCAN, Computer science, DMZ, Denial-of-service attack, Anomaly detection, Network monitoring, Cluster analysis, Context switch, Data transmission, Computer network - Abstract
Science DMZs are specialized networks that enable large-scale distributed scientific research, providing efficient and guaranteed performance while transferring large amounts of data at high rates. The high-speed performance of a Science DMZ is made viable via data transfer nodes (DTNs), which are therefore a critical point of failure. DTNs are usually monitored with network intrusion detection systems (NIDS). However, NIDS do not consider system performance data, such as network I/O interrupts and context switches, which can also be useful in revealing anomalous system performance potentially arising from external network-based attacks or insider attacks. In this paper, we demonstrate how system performance metrics can be applied towards securing a DTN in a Science DMZ network. Specifically, we evaluate the effectiveness of system performance data in detecting TCP-SYN flood attacks on a DTN using DBSCAN (a density-based clustering algorithm) for anomaly detection. Our results demonstrate that system interrupts and context switches can be used to successfully detect TCP-SYN floods, suggesting that system performance data could be effective in detecting a variety of attacks not easily detected through network monitoring alone.
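A minimal sketch of the detection stage the abstract describes: density-based clustering (DBSCAN) over host performance counters, with points that fall in no cluster treated as anomalies. The feature choice, synthetic telemetry, and DBSCAN parameters below are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Columns: interrupts/sec, context switches/sec (synthetic stand-in for DTN telemetry)
    normal = rng.normal([8000, 12000], [500, 800], size=(500, 2))
    flood = rng.normal([40000, 60000], [2000, 3000], size=(10, 2))   # simulated SYN flood
    X = StandardScaler().fit_transform(np.vstack([normal, flood]))

    # Label -1 marks noise points, i.e. samples outside any dense cluster
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)
    print("samples flagged as anomalous:", int(np.sum(labels == -1)))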
- Published
- 2020
24. Learning Behavior of Distribution System Discrete Control Devices for Cyber-Physical Security
- Author
-
Reinhard Gentz, Daniel Arnold, Alex McEachern, Sean Peisert, Mahdi Jamei, Ciaran Roberts, Anna Scaglione, Chuck McParland, and Emma Stewart
- Subjects
General Computer Science, Computer science, Network packet, Network security, Cyber-physical systems, Real-time computing, Data analysis, Power distribution, Intrusion detection system, Grid, Electric power system, Power system security, SCADA, Affordable and Clean Energy, Interdisciplinary Engineering, Electrical and Electronic Engineering, Control logic - Abstract
Conventional cyber-security intrusion detection systems monitor network traffic for malicious activity and indications that an adversary has gained access to the system. The approach discussed here expands the idea of a traditional intrusion detection system within electrical power systems, specifically power distribution networks, by monitoring the physical behavior of the grid. This is achieved through the use of high-rate distribution Phasor Measurement Units (PMUs), alongside SCADA packet analysis, for the purpose of monitoring the behavior of discrete control devices. In this work we present a set of algorithms for passively learning the control logic of voltage regulators and switched capacitor banks. Upon detection of an abnormal operation, the operator is alerted and further action can be taken. The proposed learning algorithms are validated on both simulated data and on measured PMU data from a utility pilot deployment site.
- Published
- 2020
25. Phasor Measurement Units Optimal Placement and Performance Limits for Fault Localization
- Author
-
Mahdi Jamei, Raksha Ramakrishna, Sean Peisert, Anna Scaglione, Teklemariam Tsegay Tesfay, Ciaran Roberts, and Reinhard Gentz
- Subjects
Observability, Computer Networks and Communications, Computer science, Intrusion detection system, Real-time computing, Fault (power engineering), Electrical and Electronic Engineering, Cyber-physical security, Communications Technologies, Sensors, Phasor measurement units, Circuit faults, Voltage measurement, Grid, Current measurement, Cluster detection, Fault location, Graph (abstract data type), Identification, Optimal PMU placement, Distributed Computing, Networking & Telecommunications - Abstract
In this paper, the performance limits of fault localization are investigated using synchrophasor data. The focus is on a non-trivial operating regime where the number of Phasor Measurement Unit (PMU) sensors available is insufficient to have full observability of the grid state. The proposed analysis uses the Kullback-Leibler (KL) divergence between the distributions corresponding to different fault location hypotheses associated with the observation model. This analysis shows that the most likely locations are concentrated in clusters of buses more tightly connected to the actual fault site, akin to graph communities. Consequently, a PMU placement strategy is derived that achieves a near-optimal resolution for localizing faults for a given number of sensors. The problem is also analyzed from the perspective of sampling a graph signal, examining how the placement of the PMUs, i.e., the spatial sampling pattern, and the topological characteristics of the grid affect the ability to successfully localize faults. To highlight the superior performance of the presented fault localization and placement algorithms, the proposed strategy is applied to modified IEEE 34-bus and IEEE 123-bus test cases and to data from a real distribution grid. Additionally, the detection of cyber-physical attacks is also examined, where PMU data and relevant Supervisory Control and Data Acquisition (SCADA) network traffic information are compared to determine if a network breach has affected the integrity of the system information and/or operations.
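A small sketch of the kind of hypothesis-distinguishability computation the abstract describes: the KL divergence between Gaussian measurement models for candidate fault sites, with low-divergence sites forming the cluster within which a fault cannot be resolved. The measurement model, covariance, and per-bus signatures below are hypothetical.

    import numpy as np

    def kl_gauss(mu_p, mu_q, cov):
        """KL divergence between two Gaussian hypotheses sharing covariance `cov`:
        0.5 * (mu_p - mu_q)^T cov^{-1} (mu_p - mu_q)."""
        diff = mu_p - mu_q
        return 0.5 * float(diff @ np.linalg.solve(cov, diff))

    def confusable_sites(means, cov, true_site, threshold=1.0):
        """Rank candidate fault sites by how distinguishable their predicted PMU
        signatures are from the true site; small KL means easily confused."""
        kl = {s: kl_gauss(means[true_site], mu, cov)
              for s, mu in means.items() if s != true_site}
        return sorted([s for s, v in kl.items() if v < threshold], key=kl.get)

    # Toy usage: 3 PMU measurements, hypothetical mean signatures per candidate bus
    cov = 0.01 * np.eye(3)
    means = {"bus4": np.array([1.0, 0.2, 0.1]),
             "bus5": np.array([1.02, 0.21, 0.1]),
             "bus9": np.array([0.5, 0.9, 0.4])}
    print(confusable_sites(means, cov, "bus4"))   # buses indistinguishable from bus4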
- Published
- 2020
26. Anomaly Detection Using Optimally Placed μPMU Sensors in Distribution Grids
- Author
-
Alex McEachern, Mahdi Jamei, Emma Stewart, Anna Scaglione, Chuck McParland, Sean Peisert, and Ciaran Roberts
- Subjects
Engineering, Situation awareness, Energy Engineering and Power Technology, Sensor fusion, Grid, Topology, Set (abstract data type), Units of measurement, Analytics, Key (cryptography), Anomaly detection, Data mining, Electrical and Electronic Engineering - Abstract
As the distribution grid moves toward a tightly monitored network, it is important to automate the analysis of the enormous amount of data produced by the sensors to increase the operators' situational awareness about the system. In this paper, focusing on micro-phasor measurement unit (μPMU) data, we propose a hierarchical architecture for monitoring the grid and establish a set of analytics and sensor fusion primitives for the detection of abnormal behavior in the control perimeter. Due to the key role of the μPMU devices in our architecture, an optimal μPMU placement with a limited number of sensors is also described that finds the best location of the devices with respect to our rules. The effectiveness of the proposed methods is tested through synthetic and real μPMU data.
- Published
- 2018
- Full Text
- View/download PDF
27. The medical science DMZ: a network design pattern for data-intensive medical science
- Author
-
Eli Dart, Anurag Shankar, Robert L. Grossman, William K. Barnett, Ari E. Berman, James Cuff, Sean Peisert, Brian Tierney, and Edward Balas
- Subjects
Health Insurance Portability and Accountability Act, Medical education, Computer science, DMZ, Process (engineering), Big data, Privacy laws of the United States, High-performance computing, Health Informatics, Context (language use), Research and Applications, Data science, Biomedical research, Data-intensive science, Computer communication networks, Engineering research, Cloud storage, Computer security - Abstract
Objective We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. Materials and Methods High-end networking, packet-filter firewalls, network intrusion-detection systems. Results We describe a “Medical Science DMZ” concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. Discussion The exponentially increasing amounts of “omics” data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research “Big Data.” The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. Conclusion By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.
- Published
- 2017
- Full Text
- View/download PDF
28. Iterative Analysis to Improve Key Properties of Critical Human-Intensive Processes
- Author
-
Borislava I. Simidchieva, Huong Phan, Leon J. Osterweil, Heather M. Conboy, Sean Peisert, George S. Avrunin, Lori A. Clarke, and Matt Bishop
- Subjects
Fault tree analysis, Model checking, Correctness, Process modeling, General Computer Science, Computer science, Process (engineering), Reliability engineering, Risk analysis (engineering), Iterative analysis, Key (cryptography), Software system, Safety, Risk, Reliability and Quality - Abstract
In this article, we present an approach for systematically improving complex processes, especially those involving human agents, hardware devices, and software systems. We illustrate the utility of this approach by applying it to part of an election process and show how it can improve the security and correctness of that subprocess. We use the Little-JIL process definition language to create a precise and detailed definition of the process. Given this process definition, we use two forms of automated analysis to explore whether specified key properties, such as security and safety policies, can be undermined. First, we use model checking to identify process execution sequences that fail to conform to event-sequence properties. After these are addressed, we apply fault tree analysis to identify when the misperformance of steps might allow undesirable outcomes, such as security breaches. The results of these analyses can provide assurance about the process; suggest areas for improvement; and, when applied to a modified process definition, evaluate proposed changes.
- Published
- 2017
- Full Text
- View/download PDF
29. Workflow automation in liquid chromatography mass spectrometry
- Author
-
Sean Peisert, Hector Garcia Martin, Reinhard Gentz, and Edward E. K. Baidoo
- Subjects
Computer science, Automation, Open source, Workflow, Networking and Information Technology R&D (NITRD), Fully automated, Liquid chromatography–mass spectrometry, Data mining, Noise level - Abstract
We describe the fully automated workflow developed for the ingestion and analysis of liquid chromatography mass spectrometry (LCMS) data. With the help of this computational workflow, we were able to replace two days of human data-analysis work with two hours of unsupervised computation time. In addition, the tool can also compute confidence intervals for all its results, based on the noise level present in the data. We leverage only open source tools and libraries in this workflow.
- Published
- 2019
30. Detecting control system misbehavior by fingerprinting programmable logic controller functionality
- Author
-
Sean Peisert, Reinhard Gentz, Melissa Stockman, and Dipankar Dwivedi
- Subjects
Feature engineering, Online and offline, Information Systems and Management, Cybersecurity, Computer science, Stuxnet, Cyber-physical systems, Convolutional neural network, Civil Engineering, Safety, Risk, Reliability and Quality, Programmable logic controller, Side channels, Computation Theory and Mathematics, Computer Science Applications, Random forest, Machine learning, Modeling and Simulation, Control system, Embedded system, Malware - Abstract
In recent years, attacks such as the Stuxnet malware have demonstrated that cyberattacks against control systems can cause extensive damage. These attacks can result in physical damage to the networked systems under their control. In this paper, we discuss our approach for detecting such attacks by distinguishing between programs running on a programmable logic controller (PLC) without having to monitor communications. Using power signatures generated by an attached, high-frequency power measurement device, we can identify what a PLC is doing and when an attack may have altered what the PLC should be doing. To accomplish this, we generated labeled data for testing our methods and applied feature engineering techniques and machine learning models. The results demonstrate that Random Forests and Convolutional Neural Networks classify programs with up to 98% accuracy for major program differences and 84% accuracy for minor differences. Our results can be used for both online and offline applications.
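A minimal sketch of the classification stage the abstract describes: compute simple statistics over fixed windows of a high-frequency power trace and train a Random Forest to distinguish programs. The window length, features, and synthetic traces below are illustrative assumptions rather than the paper's feature engineering.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def window_features(trace, win=1000):
        """Split a power trace into windows and compute simple per-window statistics."""
        wins = trace[: len(trace) // win * win].reshape(-1, win)
        return np.column_stack([wins.mean(1), wins.std(1), wins.max(1), wins.min(1),
                                np.abs(np.diff(wins, axis=1)).mean(1)])

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for two PLC programs with slightly different power draw
    prog_a = 5.0 + 0.3 * np.sin(np.linspace(0, 400, 200000)) + rng.normal(0, 0.05, 200000)
    prog_b = 5.1 + 0.2 * np.sin(np.linspace(0, 700, 200000)) + rng.normal(0, 0.05, 200000)
    X = np.vstack([window_features(prog_a), window_features(prog_b)])
    y = np.r_[np.zeros(200), np.ones(200)]

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    print("held-out accuracy:", clf.score(Xte, yte))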
- Published
- 2019
31. SPARCS: Stream-Processing Architecture applied in Real-time Cyber-physical Security
- Author
-
Joshua Boverhof, Dan Gunter, Reinhard Gentz, and Sean Peisert
- Subjects
Stream processing, SCADA, Affordable and Clean Energy, Computer science, Analytics, Real-time computing, Cyber-physical system, Bandwidth (computing), Control reconfiguration, Fault tolerance, Phasor measurement unit - Abstract
In this paper, we showcase a complete, end-to-end, fault tolerant, bandwidth- and latency-optimized architecture for real-time utilization of data from multiple sources that allows the collection, transport, storage, processing, and display of both raw data and analytics. This architecture can be applied to a wide variety of applications ranging from automation/control to monitoring and security. We propose a practical, hierarchical design that allows easy addition and reconfiguration of software and hardware components, while utilizing local processing of data at the sensor or field-site level ("fog computing") to reduce latency and upstream bandwidth requirements. The system supports multiple fail-safe mechanisms to guarantee the delivery of sensor data. We describe the application of this architecture to cyber-physical security (CPS) by supporting security monitoring of an electric distribution grid, through the collection and analysis of distribution-grid-level phasor measurement unit (PMU) data, as well as Supervisory Control And Data Acquisition (SCADA) communication in the control area network.
- Published
- 2019
32. Trusted CI Experiences in Cybersecurity and Service to Open Science
- Author
-
Ryan Kiser, Kay Avila, Barton P. Miller, John Zage, Andrew Adams, Scott Russell, Von Welch, Jim Marsteller, Jim Basney, Robert Cowles, Mark Krenz, Terry Fleury, Susan Sons, Florence Hudson, Sean Peisert, Dana Brunson, Elisa Heymann, Craig Jackson, and Jeannette Dopheide
- Subjects
Distributed systems, FOS: Computer and information sciences, Service (systems architecture), Open science, Computer Science - Cryptography and Security, Center of excellence, Security and protection, Computer security, Cryptography and Security (cs.CR), Risk management, Political science - Abstract
This article describes experiences and lessons learned from the Trusted CI project, funded by the US National Science Foundation to serve the community as the NSF Cybersecurity Center of Excellence. Trusted CI is an effort to address cybersecurity for the open science community through a single organization that provides leadership, training, consulting, and knowledge to that community. The article describes the experiences and lessons learned of Trusted CI regarding both cybersecurity for open science and managing the process of providing centralized services to a broad and diverse community.
- Published
- 2019
- Full Text
- View/download PDF
33. Low-Resolution Fault Localization Using Phasor Measurement Units with Community Detection
- Author
-
Anna Scaglione, Sean Peisert, and Mahdi Jamei
- Subjects
Computer science, FOS: Physical sciences, Systems and Control (eess.SY), Fault (power engineering), Unobservable, Units of measurement, FOS: Electrical engineering, electronic engineering, information engineering, Observability, Fault Localization, Electrical impedance, Phasor, Observable, PMU, Physics - Data Analysis, Statistics and Probability, Computer Science - Systems and Control, Heuristics, Algorithm, Community Detection - Abstract
A significant portion of the literature on fault localization assumes (more or less explicitly) that there are sufficient reliable measurements to guarantee that the system is observable. While several heuristics exist to break the observability barrier, they mostly rely on recognizing spatio-temporal patterns, without giving insight into how performance is tied to the system features and the sensor deployment. In this paper, we try to fill this gap and investigate the limitations and performance limits of fault localization using Phasor Measurement Units (PMUs) in the low-measurement regime, i.e., when the system is unobservable with the measurements available. Our main contribution is to show how one can leverage the scarce measurements to localize different types of distribution line faults (three-phase, single-phase to ground, ...) at the level of a sub-graph, rather than with the resolution of a line. We show that the resolution we obtain is strongly tied to the notion of graph clustering in network science.
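The abstract ties localization resolution to graph clustering. A tiny, purely illustrative sketch of that notion: community detection on a hypothetical feeder graph, where each detected community stands in for the sub-graph within which a fault could be localized when PMU coverage is scarce. The topology below is made up, and the paper's actual method rests on an analysis of the measurement model rather than modularity maximization.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Hypothetical radial feeder topology (bus numbers are made up)
    G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5), (3, 6), (6, 7), (7, 8), (8, 9), (7, 10)])

    # Each community is a stand-in for the sub-graph resolution of fault localization
    for i, community in enumerate(greedy_modularity_communities(G)):
        print(f"cluster {i}: buses {sorted(community)}")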
- Published
- 2018
34. Flowzilla: A Methodology for Detecting Data Transfer Anomalies in Research Networks
- Author
-
Anna Giannakou, Dan Gunter, and Sean Peisert
- Subjects
Screening and diagnosis, Computer science, Volume (computing), Anomaly detection, Random forest, Detection, Outlier, Network security, Network performance measurement, Data mining, Normality, Data transmission - Abstract
Research networks are designed to support high-volume scientific data transfers that span multiple network links. Like any other network, research networks experience anomalies. Anomalies are deviations from profiles of normality in a research network's traffic levels. Diagnosing anomalies is critical both for network operators and users (e.g., scientists). In this paper we present Flowzilla, a general framework for detecting and quantifying anomalies on scientific data transfers of arbitrary size. Flowzilla incorporates Random Forest Regression (RFR) for predicting the size of data transfers and utilizes an adaptive threshold mechanism for detecting outliers. Our results demonstrate that our framework achieves up to 92.5% detection accuracy. Furthermore, we are able to predict data transfer sizes up to 10 weeks after training with accuracy above 90%.
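A minimal sketch of the two ingredients the abstract names: a Random Forest regressor that predicts transfer size from flow metadata, and an adaptive threshold on the prediction residuals for flagging anomalous transfers. The features, synthetic flow records, and threshold rule (a multiple of the rolling median of recent absolute residuals) are illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    # Synthetic flow records: [hour of day, duration (s), parallel streams] -> bytes moved
    X = np.column_stack([rng.integers(0, 24, 5000),
                         rng.gamma(2, 60, 5000),
                         rng.integers(1, 9, 5000)])
    y = 1e7 * X[:, 1] * X[:, 2] * rng.lognormal(0, 0.2, 5000)

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:4000], y[:4000])

    # Adaptive threshold: flag flows whose absolute residual exceeds k times the
    # rolling median of recent absolute residuals
    resid = np.abs(model.predict(X[4000:]) - y[4000:])
    k, window = 5.0, 200
    rolling_med = np.array([np.median(resid[max(0, i - window):i + 1])
                            for i in range(len(resid))])
    anomalies = np.where(resid > k * rolling_med)[0]
    print(f"{len(anomalies)} of {len(resid)} test flows flagged")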
- Published
- 2018
- Full Text
- View/download PDF
35. Integrated Multi-Scale Data Analytics and Machine Learning for the Distribution Grid
- Author
-
Anthony R. Florita, Deepjyoti Deka, Matthew J. Reno, Scott Backhaus, Ciaran Roberts, Sean Peisert, Michael Chertkov, Emma Stewart, Andrey Y. Lokhov, Philip Top, Valerie Hendrix, and Thomas J. King
- Subjects
Analytics, Computer science, Interface (computing), Machine learning, Field (computer science), Resilience (network), Incipient failure, Validation, Data stream mining, Distribution Grid, Prediction, Grid, DER, Smart grid, Data analysis, Artificial intelligence, Verification - Abstract
We consider the field of machine learning and where it is, and is not, useful for the distribution grid and the buildings interface. While analytics in general is a growing field of interest, and is often seen as the golden goose in the burgeoning distribution grid industry, its application is often limited by communications infrastructure or by the lack of a focused technical use case. Overall, the linkage of analytics to purposeful application in the grid space has been limited. In this paper we consider machine learning as a subset of analytical techniques and discuss its ability, and its limitations, to enable the future distribution grid. To that end, we also consider the potential for mixing distributed and centralized analytics and the pros and cons of these approaches. There is an exponentially expanding volume of measured data being generated on the distribution grid which, with appropriate application of analytics, may be transformed into intelligible, actionable information that can be provided to the right actors, such as grid and building operators, at the appropriate time to enhance grid or building resilience, efficiency, and operations against various metrics or goals, such as total carbon reduction or other economic benefit to customers. While some basic analysis of these data streams can provide a wealth of information, computational and human limits on performing the analysis are becoming significant as data volumes and multi-objective concerns grow. Efficient, in-the-loop applications of analysis and machine learning are therefore being considered. This paper describes the benefits and limits of present machine-learning applications for use on the grid and presents a series of case studies that illustrate the potential benefits of developing advanced local multi-variate, machine-learning-based analytics applications.
- Published
- 2017
36. A Model of Owner Controlled, Full-Provenance, Non-Persistent, High-Availability Information Sharing
- Author
-
Sean Peisert, Ed Talbot, and Matt Bishop
- Subjects
Provable security, Computer science, Information sharing, Information Dissemination, Access control, Redaction, Computer security, Need to know, High availability, Confidentiality, Fault tolerance, ORCON - Abstract
In this paper, we propose principles of information control and sharing that support ORCON (ORiginator COntrolled access control) models while simultaneously improving the components of confidentiality, availability, and integrity needed to inherently support, when needed, responsibility-to-share policies, rapid information dissemination, data provenance, and data redaction. This new paradigm of providing unfettered and unimpeded access to information by authorized users while making access by unauthorized users impossible contrasts with historical approaches to information sharing, which have focused on need to know rather than need (or responsibility) to share.
- Published
- 2017
37. Security in high-performance computing environments
- Author
-
Sean Peisert
- Subjects
General Computer Science, Cybersecurity, High-throughput networks, Computer science, High-performance computing, Computer security, Supercomputer, Scientific computing, Information and Computing Sciences, Information Systems - Abstract
High-performance computing (HPC) systems have some similarities and some differences with traditional IT computing systems, which present both challenges and opportunities. One challenge is that HPC systems are 'high-performance' by definition, and so many traditional security techniques are not effective because they either cannot keep up with the system or reduce its performance. HPC systems tend to be used for very distinctive purposes, have much more regular and predictable activity, and contain highly custom hardware/software stacks. Each of these elements can provide a toehold for leveraging some aspect of the HPC platform to improve security.
- Published
- 2017
38. ASLR: How Robust Is the Randomness?
- Author
-
Jonathan Ganz and Sean Peisert
- Subjects
Address space layout randomization, Software, Computer science, Server, Computer security, Implementation, Randomness - Abstract
This paper examines the security provided by different implementations of Address Space Layout Randomization (ASLR). ASLR is a security mechanism that increases control-flow integrity by making it more difficult for an attacker to properly execute a buffer-overflow attack, even in systems with vulnerable software. The strength of ASLR lies in the randomness of the offsets it produces in memory layouts. We compare multiple operating systems, each compiled for two different hardware architectures, and measure the amount of entropy provided to a vulnerable application. Our paper is the first publication that we are aware of that quantitatively compares the entropy of different ASLR implementations. In addition, we provide a method for remotely assessing the efficacy of a particular security feature on systems that are otherwise unavailable for analysis, and highlight the need for independent evaluation of security mechanisms.
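A minimal sketch of the measurement idea: repeatedly launch a short-lived child process, record a randomized address from each run, and estimate the entropy of the observations. The child program here (a Python one-liner printing the address of a freshly allocated heap object) is a rough, illustrative stand-in; the paper's methodology instruments a vulnerable native application across multiple operating systems and architectures.

    import subprocess
    import sys
    from collections import Counter
    from math import log2

    # Child prints the address of a freshly allocated object (heap, ASLR-affected).
    CHILD = "print(id(object()))"

    def sample_addresses(runs=200):
        return [int(subprocess.run([sys.executable, "-c", CHILD],
                                   capture_output=True, text=True).stdout)
                for _ in range(runs)]

    def empirical_entropy_bits(values):
        counts = Counter(values)
        n = len(values)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    addrs = sample_addresses()
    print(f"distinct addresses: {len(set(addrs))} / {len(addrs)}")
    # Note: the empirical estimate is capped at log2(runs) bits; a fuller study
    # would vary the OS, architecture, and the memory region being sampled.
    print(f"empirical entropy: {empirical_entropy_bits(addrs):.2f} bits")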
- Published
- 2017
- Full Text
- View/download PDF
39. Online Thevenin parameter tracking using synchrophasor data
- Author
-
Sean Peisert, Emma Stewart, Alex McEachern, Anna Scaglione, Mahdi Jamei, Chuck McParland, and Ciaran Roberts
- Subjects
Smart grid, Computer science, Real-time computing, Phasor, Anomaly detection, Thévenin's theorem, Grid, Electrical impedance, Power (physics) - Abstract
There is significant interest in smart grid analytics based on phasor measurement data. One application is estimation of the Thevenin equivalent model of the grid from local measurements. In this paper, we propose methods using phasor measurement data to track Thevenin parameters at substations delivering power to both an unbalanced and balanced feeder. We show that for an unbalanced grid, it is possible to estimate the Thevenin parameters at each instant of time using only instantaneous phasor measurements. For balanced grids, we propose a method that is well-suited for online applications when the data is highly temporally-correlated over a short window of time. The effectiveness of the two methods is tested via simulation for two use-cases, one for monitoring voltage stability and the other for identifying cyber attackers performing “reconnaissance” in a distribution substation.
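A minimal sketch of the balanced-feeder case: fit a Thevenin source voltage and impedance to a short window of complex voltage and current phasors by least squares on the relation V[t] = E_th - Z_th * I[t]. The synthetic phasors, noise level, and window length below are illustrative assumptions, not the estimators developed in the paper.

    import numpy as np

    def thevenin_fit(V, I):
        """Least-squares fit of E_th and Z_th from complex phasor windows,
        using the model V[t] = E_th - Z_th * I[t]."""
        A = np.column_stack([np.ones_like(I), -I])      # unknowns: [E_th, Z_th]
        x, *_ = np.linalg.lstsq(A, V, rcond=None)
        return x[0], x[1]

    # Synthetic window: true source 1.02 pu, impedance 0.01+0.05j pu, varying load current
    rng = np.random.default_rng(0)
    I = (0.5 + 0.3 * rng.random(60)) * np.exp(-1j * 0.3)
    V = 1.02 - (0.01 + 0.05j) * I + 1e-4 * (rng.normal(size=60) + 1j * rng.normal(size=60))
    E_th, Z_th = thevenin_fit(V, I)
    print(f"E_th = {abs(E_th):.4f} pu, Z_th = {Z_th.real:.4f}{Z_th.imag:+.4f}j pu")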
- Published
- 2017
- Full Text
- View/download PDF
40. Automated Anomaly Detection in Distribution Grids Using uPMU Measurements
- Author
-
Sean Peisert, Anna Scaglione, Alex McEachern, Emma Stewart, Chuck McParland, Mahdi Jamei, and Ciaran Roberts
- Subjects
Operating point, Computer science, Real-time computing, Phasor, Transmission system, Intrusion detection system, Grid, Phasor measurement unit, Automation, Energy engineering, Transmission (telecommunications), SCADA, Anomaly detection, Data mining - Abstract
The impact of Phasor Measurement Units (PMUs) for providing situational awareness to transmission system operators has been widely documented. Micro-PMUs (μPMUs) are an emerging sensing technology that can provide similar benefits to Distribution System Operators (DSOs), enabling a level of visibility into the distribution grid that was previously unattainable. In order to support the deployment of these high resolution sensors, the automation of data analysis and prioritizing communication to the DSO becomes crucial. In this paper, we explore the use of μPMUs to detect anomalies on the distribution grid. Our methodology is motivated by growing concern about failures and attacks to distribution automation equipment. The effectiveness of our approach is demonstrated through both real and simulated data.
- Published
- 2017
- Full Text
- View/download PDF
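To illustrate the kind of anomaly detection the entry above describes, the following minimal Python sketch flags samples in a µPMU-style voltage-magnitude stream that deviate sharply from recent behavior using a rolling-window z-score. The window length, threshold, and synthetic data are assumptions for illustration only, not the authors' algorithm.

# Minimal sketch of threshold-based anomaly detection on a µPMU-style
# voltage-magnitude time series. The rolling z-score rule, window length,
# and synthetic data are illustrative assumptions, not the paper's method.
import numpy as np

def detect_anomalies(signal, window=120, threshold=5.0):
    """Return indices where a sample deviates strongly from the recent past."""
    anomalies = []
    for t in range(window, len(signal)):
        history = signal[t - window:t]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(signal[t] - mu) / sigma > threshold:
            anomalies.append(t)
    return anomalies

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    v = 1.0 + 0.001 * rng.standard_normal(10_000)  # per-unit voltage, quasi steady-state
    v[6_000] -= 0.05                               # injected sag, e.g. a fault or switching event
    print(detect_anomalies(v))                     # flags the injected disturbance at index 6000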
41. Monitoring Security of Networked Control Systems: It's the Physics
- Author
-
Chuck McParland, Sean Peisert, and Anna Scaglione
- Subjects
Computer Networks and Communications ,Computer science ,Energy management ,Control (management) ,Cyber-physical system ,Intrusion detection system ,Computer security ,computer.software_genre ,Insider ,Control system ,Safety engineering ,State (computer science) ,Electrical and Electronic Engineering ,Law ,computer - Abstract
Physical device safety is typically implemented locally using embedded controllers, whereas operations safety is primarily performed in control centers. Safe operations can be enhanced by correctly designed device-level control algorithms as well as protocols, procedures, and operator training at the control-room level, but all of these can fail. Moreover, these elements exchange data and issue commands via vulnerable communication layers. To secure these gaps and enhance operational safety, the authors believe command sequence monitoring must be combined with an awareness of physical device limitations and automata models that capture safety mechanisms. One way of doing this is by leveraging specification-based intrusion detection to monitor for physical constraint violations. This method can also verify that the physical infrastructure state is consistent with information and commands exchanged by controllers. This additional security layer enhances protection from both outsider attacks and insider mistakes.
- Published
- 2014
- Full Text
- View/download PDF
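A minimal sketch of the specification-based checking idea described in the entry above: commands destined for a device are validated against a declared physical envelope before being accepted. The device model, limits, and field names here are hypothetical examples of such specifications, not the article's implementation.

# Sketch of specification-based command checking for a networked controller.
# The device limits, command format, and rate rule are hypothetical examples
# of the "physical constraint" specifications the article discusses.
from dataclasses import dataclass

@dataclass
class DeviceSpec:
    min_setpoint: float   # lowest physically safe setpoint
    max_setpoint: float   # highest physically safe setpoint
    max_step: float       # largest allowed change per command

def check_command(spec: DeviceSpec, current: float, requested: float) -> list[str]:
    """Return a list of violated constraints (an empty list means the command looks safe)."""
    violations = []
    if not (spec.min_setpoint <= requested <= spec.max_setpoint):
        violations.append("setpoint outside physical operating range")
    if abs(requested - current) > spec.max_step:
        violations.append("setpoint change exceeds allowed ramp rate")
    return violations

# Example: a hypothetical voltage regulator limited to 0.95-1.05 p.u. with small steps.
spec = DeviceSpec(min_setpoint=0.95, max_setpoint=1.05, max_step=0.02)
print(check_command(spec, current=1.00, requested=1.20))   # both constraints violated
print(check_command(spec, current=1.00, requested=1.01))   # []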
42. Control Systems Security from the Front Lines
- Author
-
Paul Dorey, Dale Peterson, Zach Tudor, Eric Byres, Sean Peisert, and Jonathan Margulies
- Subjects
Control system security ,Cloud computing security ,Computer Networks and Communications ,Computer science ,Network security ,business.industry ,Information security ,Computer security model ,Computer security ,computer.software_genre ,Security information and event management ,Security engineering ,Security service ,Software security assurance ,Security through obscurity ,Network security policy ,Electrical and Electronic Engineering ,business ,Law ,computer - Abstract
As part of this special issue on control systems for the energy sector, guest editors Sean Peisert and Jonathan Margulies put together a roundtable discussion so readers can learn about the security challenges facing the industrial control system/SCADA world from those who are on the front lines. The discussion touches on some of the hard problems of securing mission-critical systems in the real world, including the challenges of securing 20-year-old legacy infrastructures, defining vendors' roles and responsibilities in security, and where research and new technologies are needed to fill today's security gaps.
- Published
- 2014
- Full Text
- View/download PDF
43. Selected Papers from the 2017 IEEE Symposium on Security and Privacy
- Author
-
Terry Benzel and Sean Peisert
- Subjects
Defence & Security Studies ,Computer Software ,Computer Networks and Communications ,business.industry ,Computer science ,Internet privacy ,Computation Theory and Mathematics ,Electrical and Electronic Engineering ,business ,Strategic ,Law ,Data Format - Abstract
For 38 years, the IEEE Symposium on Security and Privacy has been the premier forum for presenting computer security and electronic privacy developments and for bringing together leading researchers and practitioners. We invited authors to submit revised versions of their symposium papers, recast as articles suitable for publication in IEEE Security & Privacy magazine. Specifically, we asked the original authors to revise their papers to speak to the magazine's audience, which goes beyond the symposium's traditional academically focused audience to also include policymakers and practitioners.
- Published
- 2018
- Full Text
- View/download PDF
44. Multiclass classification of distributed memory parallel computations
- Author
-
Matt Bishop, Sean Whalen, and Sean Peisert
- Subjects
Self-organizing map ,Theoretical computer science ,Computer science ,Cloud computing ,Machine learning ,computer.software_genre ,Field (computer science) ,Multiclass classification ,Engineering ,Artificial Intelligence ,Communication patterns ,Physical Sciences and Mathematics ,Self-organizing maps ,business.industry ,Message passing ,Random forests ,Supercomputer ,Bayesian networks ,Signal Processing ,Distributed memory ,High performance computing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Software - Abstract
High Performance Computing (HPC) is a field concerned with solving large-scale problems in science and engineering. However, the computational infrastructure of HPC systems can also be misused, as demonstrated by the recent commoditization of cloud computing resources on the black market. As a first step towards addressing this, we introduce a machine learning approach for classifying distributed parallel computations based on communication patterns between compute nodes. We first provide relevant background on message passing and computational equivalence classes called dwarfs and describe our exploratory data analysis using self-organizing maps. We then present our classification results across 29 scientific codes using Bayesian networks and compare their performance against Random Forest classifiers. These models, trained with hundreds of gigabytes of communication logs collected at Lawrence Berkeley National Laboratory, perform well without any a priori information and address several shortcomings of previous approaches.
- Published
- 2013
- Full Text
- View/download PDF
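As a rough illustration of multiclass classification from communication-pattern features in the spirit of the entry above, the following sketch trains a Random Forest on a synthetic feature matrix (a stand-in for per-run message-passing statistics). The feature construction, class count, and model settings are illustrative assumptions; the paper's data and models differ.

# Sketch: classify computations from communication-pattern features.
# The synthetic features and Random Forest settings are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_per_class, n_features, n_classes = 200, 16, 4   # e.g., four "dwarf" classes

# Synthetic stand-in for aggregated MPI communication statistics per run.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))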
45. Techniques for the dynamic randomization of network attributes
- Author
-
William M.S. Stout, Adrian R. Chavez, and Sean Peisert
- Subjects
business.industry ,Computer science ,Software Defined Networking ,Distributed computing ,Dynamic Defense ,Overlay network ,Control reconfiguration ,Critical infrastructure ,Networking hardware ,Network simulation ,Moving Target Defense ,IP Address Hopping ,Software deployment ,Control system ,business ,Software-defined networking ,Computer Security ,Computer network - Abstract
Critical infrastructure control systems continue to rely on predictable communication paths and static configurations that allow easy access to our networked critical infrastructure around the world. This makes them attractive and easy targets for cyber-attack. We have developed technologies that address these attack vectors by automatically reconfiguring network settings. Applying these protective measures converts control systems into "moving targets" that proactively defend themselves against attack. This "Moving Target Defense" (MTD) revolves around periodically reconfiguring the network, securely communicating reconfiguration specifications to other network nodes as required, and ensuring that connectivity between nodes is uninterrupted. Software-Defined Networking (SDN) is leveraged to meet many of these goals. Our MTD approach prevents adversaries from targeting known static attributes of network devices and systems, and consists of three techniques: (1) network randomization for TCP/UDP ports; (2) network randomization for IP addresses; and (3) network randomization for network paths. In this paper, we describe the implementation of these technologies and discuss their individual and collective successes, deployment challenges, constraints and assumptions, and the performance implications of each technique.
- Published
- 2016
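A minimal sketch of the port/address "hopping" idea behind the techniques listed in the entry above: a shared secret and a time epoch deterministically select the current service port and address, so legitimate peers agree on the mapping while it appears random to an outsider. The key handling, epoch length, address range, and function names are illustrative assumptions, not the paper's implementation.

# Sketch of keyed, epoch-based network-attribute randomization (illustrative only).
import hashlib
import hmac
import ipaddress
import time

SECRET = b"shared-demo-key"        # hypothetical pre-shared key
EPOCH_SECONDS = 60                 # how often attributes are re-randomized

def _prf(label: str, epoch: int) -> int:
    """Keyed pseudorandom value derived from the label and the current epoch."""
    digest = hmac.new(SECRET, f"{label}:{epoch}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def current_port(epoch: int, low: int = 20000, high: int = 60000) -> int:
    return low + _prf("port", epoch) % (high - low)

def current_address(epoch: int, subnet: str = "10.10.0.0/16") -> str:
    net = ipaddress.ip_network(subnet)
    return str(net[_prf("addr", epoch) % net.num_addresses])

epoch = int(time.time()) // EPOCH_SECONDS
print(current_address(epoch), current_port(epoch))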
46. Security and Elections
- Author
-
Matt Bishop and Sean Peisert
- Subjects
electronic voting ,Computer Networks and Communications ,Electronic voting ,Computer science ,media_common.quotation_subject ,Internet privacy ,Security of data ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,Audit ,Social and Behavioral Sciences ,privacy ,Education ,Politics ,Engineering ,Computer security ,Voting ,General election ,election security ,computer security education ,elections ,Electrical and Electronic Engineering ,Fraudulent activities ,media_common
Elections are common to almost all societies. Periodically, groups of people determine their representatives, leaders, neighborhood spokespersons, corporate executives, or union representatives by casting ballots and counting votes using a variety of schemes. Those who don’t participate see others around them doing so. And stories abound about rigged elections or results considered compromised by accident or poor communication. US-based elections follow a general pattern of voter registration, determining items to vote on, generating ballots, distributing election materials to the polling places, voting, counting the votes, declaring winners, and auditing the results. The details differ among jurisdictions, but each step requires considerable care to ensure the election’s integrity. So, elections are an ideal mechanism for teaching about security. At the University of California, Davis, we teach numerous computer security classes for undergraduate majors and nonmajors and for graduate students. This column presents some of our experiences using elections and e-voting systems as lecture material and as a class project done with the Yolo County Elections Office.
- Published
- 2012
- Full Text
- View/download PDF
47. The Open Science Cyber Risk Profile: The Rosetta Stone for Open Science and Cybersecurity
- Author
-
Von Welch and Sean Peisert
- Subjects
Open science ,Computer Networks and Communications ,Computer science ,0211 other engineering and technologies ,02 engineering and technology ,Computer security ,computer.software_genre ,Security information and event management ,Risk profile ,050601 international relations ,Bridge (nautical) ,Threat ,Computer Software ,Electrical and Electronic Engineering ,Strategic ,Risk management ,021110 strategic, defence & security studies ,business.industry ,05 social sciences ,Software development ,Computation Theory and Mathematics ,Data Format ,0506 political science ,Defence & Security Studies ,business ,Law ,computer - Abstract
The Open Science Cyber Risk Profile (OSCRP) working group has created a document that motivates scientists by demonstrating how improving their security posture reduces the risks to their science. This effort aims to bridge the communication gap between scientists and IT security professionals and allows for the effective management of risks to open science caused by IT security threats.
- Published
- 2017
- Full Text
- View/download PDF
48. The IEEE Symposium on Security and Privacy, in Retrospect
- Author
-
Peter G. Neumann, Sean Peisert, and Marvin Schaefer
- Subjects
Cynicism ,Computer Networks and Communications ,Computer science ,George (robot) ,Attendance ,Media studies ,Relevance (law) ,Organizational structure ,Electrical and Electronic Engineering ,Computer security ,computer.software_genre ,Law ,computer - Abstract
Tracing the history of computer security and privacy is a mammoth undertaking, somewhat resembling efforts to combine archaeology and ethnology with a compendium of past and foreseen risks—and how different courses of history might have affected those risks in different ways. (For example, the University of Minnesota's NSF-funded collection of oral histories from influential people in this area is a wonderful effort to capture some of this information; https://wiki.umn.edu/CBI_ComputerSecurity/WebHome.) Tracing the history of the IEEE Symposium on Security and Privacy (SSP), the longest-running computer security research meeting, is considerably easier—and quite relevant to the somewhat shorter history of IEEE Security & Privacy magazine. Indeed, a previous article written for the proceedings of the 31st SSP did exactly that, so it seems unnecessary to duplicate it here. Instead, we focus more on SSP's evolution and its vital relevance to the research and development communities along its path from a community gathering to a premier security research meeting. We highlight some of the technological and engineering paradigms that SSP either stimulated or reflected in the intense discussions that ensued, and also to some extent SSP's potential impact on the world at large. SSP began in 1980 as the result of Stan Ames and George Davida wanting to hold a meeting with a few practitioners and others interested in security and privacy. That first gathering attracted 50 people who were all seriously involved in the field in one way or another. It was more like the traditional notion of a workshop, rather than the modern ACM/IEEE/Usenix notion of a workshop as a small conference. Initially with invited papers and panels, this informal setting morphed into calls for papers and then into active discussions of beliefs, apparent progress, and known open problems and challenges. There were few distractions in SSP's early years at the Claremont Resort (whose front door is in Oakland and back door in Berkeley). Over 31 years, SSP grew in depth, breadth, and organizational structure, with a mix of practical and academic participants, papers, panels, and occasional invited talks. In 2012, with the number of attendees having outgrown the Claremont fire laws, the symposium moved to San Francisco, with more than 450 people attending in 2013, despite restricted travel budgets and related factors. With attendance approaching 500, the symposium outgrew even the St. Francis in San Francisco. Now, it'll be held in San Jose, California—at least, in 2014 and 2015. SSP's early participants genuinely thought they were on track to find solutions to the computer security problem—until reality and justifiable cynicism entered the picture. When worked examples began to be available for study, recognition of the costs of security (efficiency, features, and sufficiency) and "new" discoveries (Shannon, Turing, Dijkstra, and Hoare) deepened the recognition that applications and experimental trends were just as important as theoretical research.
- Published
- 2014
- Full Text
- View/download PDF
49. Computer forensics in forensis
- Author
-
Sean Peisert, Keith Marzullo, and Matt Bishop
- Subjects
Computer errors ,Computer science ,Digital forensics ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,Computer forensics ,Audit ,Computer security ,computer.software_genre ,Data science ,Terminology ,Scientific method ,General Earth and Planetary Sciences ,Suspect ,computer ,General Environmental Science - Abstract
Different users apply computer forensic systems, models, and terminology in very different ways. They often make incompatible assumptions and reach different conclusions about the validity and accuracy of the methods they use to log, audit, and present forensic data. This is problematic, because these fields are related, and results from one can be meaningful to the others. We present several forensic systems and discuss situations in which they produce valid and accurate conclusions and also situations in which their accuracy is suspect. We also present forensic models and discuss areas in which they are useful and areas in which they could be augmented. Finally, we present some recommendations about how computer scientists, forensic practitioners, lawyers, and judges could build more complete models of forensics that take into account appropriate legal details and lead to scientifically valid forensic analysis.
- Published
- 2008
- Full Text
- View/download PDF
50. The Medical Science DMZ
- Author
-
Ari E. Berman, Brian Tierney, Eli Dart, James Cuff, Edward Balas, Anurag Shankar, Robert L. Grossman, Sean Peisert, and William K. Barnett
- Subjects
Biomedical Research ,Medical Records Systems, Computerized ,DMZ ,Computer science ,Process (engineering) ,High Performance Computing ,Health Informatics ,Context (language use) ,02 engineering and technology ,Computer security ,computer.software_genre ,Computing Methodologies ,03 medical and health sciences ,Computer Communication Networks ,0302 clinical medicine ,0202 electrical engineering, electronic engineering, information engineering ,Use case ,Reference architecture ,Computer Security ,Health Insurance Portability and Accountability Act ,Data science ,United States ,030220 oncology & carcinogenesis ,Government Regulation ,020201 artificial intelligence & image processing ,Data Intensive Science ,Engineering research ,Brief Communications ,Cloud storage ,computer ,Confidentiality - Abstract
Objective: We describe use cases and an institutional reference architecture for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. Materials and Methods: High-end networking, packet filter firewalls, network intrusion detection systems. Results: We describe a “Medical Science DMZ” concept as an option for secure, high-volume transport of large, sensitive data sets between research institutions over national research networks. Discussion: The exponentially increasing amounts of “omics” data, the rapid increase of high-quality imaging, and other rapidly growing clinical data sets have resulted in the rise of biomedical research “big data.” The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large data sets. Maintaining data-intensive flows that comply with HIPAA and other regulations presents a new challenge for biomedical research. Recognizing this, we describe a strategy that marries performance and security by borrowing from and redefining the concept of a “Science DMZ”—a framework that is used in physical sciences and engineering research to manage high-capacity data flows. Conclusion: By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.
- Published
- 2015
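The entry above centers on moving high-rate science flows through a lightweight, stateless allow-list filter (plus intrusion detection) rather than a deep-inspection firewall. The following minimal sketch models such an allow-list check for a data transfer node; the rule set, addresses, port, and flow-tuple format are hypothetical and not taken from the paper.

# Sketch of a stateless allow-list packet filter for a Science DMZ-style
# data transfer node. Addresses, networks, and the port are hypothetical.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass(frozen=True)
class Flow:
    src: str
    dst: str
    dst_port: int

# Hypothetical policy: collaborating institutions may reach the transfer node
# on the data-movement port only.
ALLOWED_SOURCES = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]
TRANSFER_NODE = ip_address("203.0.113.10")
DATA_PORT = 2811  # e.g., a GridFTP-style control port

def permitted(flow: Flow) -> bool:
    """Allow only known collaborators reaching the transfer node on the data port."""
    return (ip_address(flow.dst) == TRANSFER_NODE
            and flow.dst_port == DATA_PORT
            and any(ip_address(flow.src) in net for net in ALLOWED_SOURCES))

print(permitted(Flow("192.0.2.7", "203.0.113.10", 2811)))   # True: allowed collaborator, data port
print(permitted(Flow("192.0.2.7", "203.0.113.10", 22)))     # False: wrong destination port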