1,027 results for "005.8"
Search Results
2. Novel approaches to applied cybersecurity in privacy, encryption, security systems, web credentials, and education
- Author
-
Ruiz, Rodrigo de Souza
- Subjects
005.8, 000 Computer science, information & general works - Abstract
Applied cybersecurity is a domain that interconnects people, processes, technologies, usage environments and vulnerabilities in a complex manner. As a cybersecurity expert at CTI Renato Archer, a research institute of the Brazilian Ministry of Science, Technology and Innovations, the author developed novel approaches to help solve practical and practice-based problems in applied cybersecurity over the last ten years. The needs of government, industry and customers, and real-life problems in five categories: privacy, encryption, web credentials, security systems and education, were the research stimuli. Based on prior outputs, this thesis presents a cohesive narrative of the novel approaches in these categories, consolidating fifteen research publications. Customers and society in general expect that companies, universities and the government will protect them from cyber threats. The fifteen research papers that compose this thesis elucidate a broader context of cyber threats, errors in security software and gaps in cybersecurity education. The thesis's research points out that a large number of organisations are vulnerable to cyber threats and that procedures and practices around cybersecurity are questionable. Therefore, society expects a periodic reassessment of cybersecurity systems, practices and policies. Privacy has been extensively debated in many countries owing to its personal implications, with civil liberties and citizenship at stake. Since 2018, the GDPR has been in force in the EU and has been a milestone for the privacy of people and institutions. The novel work on privacy, supported by four research papers, discusses private-mode browsing in several browsers and shows how fragile the sense of privacy is. The secrets of different companies, countries and armed forces are entrusted to encryption technologies. Three research papers support the encryption element discussed in this thesis. It explores vulnerabilities in the most widely used encryption software. 
It provides data exposure scenarios showing how companies, governments and universities are vulnerable, and proposes best practices. Credentials are data that give someone the right to access a location or a system; they usually involve a login, a username, an email address, an access code and a password. It is customary to impose rigorous security demands on the credentials of a sensitive information system. The work on web credentials in this thesis, supported by one research paper, examines a novel experiment that permits an intruder to extract user credentials on home banking and e-commerce websites, revealing common cyber flaws and vulnerabilities. Antimalware systems are complex software engineering systems purposely designed to be safe and reliable despite numerous operational idiosyncrasies, and they have been deployed to protect information systems for decades. The novel work on security systems presented in the thesis, supported by five research papers, explores antimalware attacks and software engineering structure problems. Primary cybersecurity awareness is expected to come from school and university education, but the academic discourse is often dissociated from practice. The discussion, based on two research papers, presents a new insight into cybersecurity education and proposes an Index of Relevance in Cybersecurity (IRCS) to classify the computer science courses offered by UK universities according to the relevance of cybersecurity in their curricula. In a nutshell, the thesis presents a coherent and novel narrative of applied cybersecurity in five categories spanning software, systems and education.
- Published
- 2021
3. Privacy and utility in secure computations : optimal trade-offs through quantitative information flow
- Author
-
Ah-Fat, Patrick Wong Fen Kin and Huth, Michael
- Subjects
005.8 - Abstract
Secure Multi-Party Computation is a domain of Cryptography that enables several participants to compute a public function of their own private inputs, while preserving the secrecy of the inputs and without resorting to any trusted third party. Elaborate protocols have been designed to help participants collaborate in computing functions in this way. These protocols ensure that no information about the private inputs is ever revealed, apart from that which flows from the public and intended output of the computation. Intriguingly, the output of a computation, as a function of the inputs, inevitably leaks some information about the private inputs. The main objectives of this thesis are to further investigate this inevitable information flow, to propose a means of quantifying this leakage and to alleviate the risks it may generate. We introduce an attacker model based on a family of entropy-based measures that enable us to formally quantify the information that can be inferred about private inputs in secure computations. The measure of information flow that we use generalises and unifies the notions of Rényi entropy and g-entropy. Based on this model, we design randomising mechanisms that aim at enhancing participants' privacy by introducing a perturbation on function outputs, while guaranteeing a maximal distortion bound. We formally investigate optimal trade-offs between privacy of inputs and utility of output under different assumptions. We develop techniques that realise such trade-offs, which involve solving non-linear and non-convex optimisation problems as well as designing greedy and dynamic algorithms. We experimentally highlight the privacy gains that the solutions we obtain provide. Finally, we demonstrate that our analyses may scale to arbitrarily large input spaces in specific and well-defined applications by examining the special cases of secure three-party affine computations and digital goods auctions. 
We conclude by discussing this scalability issue along with the adaptation of our approach to continuous input spaces, which we believe may seed interesting prospects.
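The leakage the abstract describes can be made concrete with a toy calculation. The sketch below is not the thesis's generalised Rényi/g-entropy machinery: it computes plain min-entropy leakage, comparing an attacker's best guess about one party's input before and after observing the public output. The example function (a two-party sum of bits) and the uniform prior are illustrative assumptions.

```python
from fractions import Fraction
from math import log2

def min_entropy_leakage(inputs, f, target_index):
    """Min-entropy leakage about one party's input under a uniform joint prior.

    `inputs` lists the joint input tuples; `f` is the publicly revealed function.
    Prior vulnerability: probability of guessing the target input before the
    output is seen. Posterior vulnerability: expected probability of guessing
    it after seeing the output. Leakage is the log-ratio of the two.
    """
    n = len(inputs)
    # Prior: the attacker guesses the most common value of the target input.
    counts = {}
    for joint in inputs:
        counts[joint[target_index]] = counts.get(joint[target_index], 0) + 1
    prior = Fraction(max(counts.values()), n)

    # Posterior: for each observable output, guess the most likely target value.
    by_output = {}
    for joint in inputs:
        d = by_output.setdefault(f(*joint), {})
        d[joint[target_index]] = d.get(joint[target_index], 0) + 1
    posterior = Fraction(sum(max(d.values()) for d in by_output.values()), n)

    return float(prior), float(posterior), log2(posterior / prior)

# Two parties each hold a private bit; the intended public output is their sum.
joint_inputs = [(x, y) for x in (0, 1) for y in (0, 1)]
prior, post, leak = min_entropy_leakage(joint_inputs, lambda x, y: x + y, 0)
```

Even this innocuous sum leaks about 0.58 bits of min-entropy about the first party's bit, which is exactly the "inevitable information flow" the thesis sets out to quantify and mitigate.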
- Published
- 2021
- Full Text
- View/download PDF
4. A model for describing and encouraging cyber security knowledge sharing to enhance awareness
- Author
-
Alahmari, Saad
- Subjects
005.8 - Published
- 2021
- Full Text
- View/download PDF
5. Intrusion Detection Systems using Machine Learning and Deep Learning techniques
- Author
-
Hindy, Hanan, Coull, Natalie, Bayne, Ethan, ElSayed, Salma, and Bellekens, Xavier
- Subjects
005.8 - Abstract
The increased reliance on networked technologies has led to a digital transformation of general- and special-purpose networks that further interlace technologies and heterogeneous systems. The ever-evolving technological landscape of interconnected devices constantly expands the network attack surface, which has contributed to the number and complexity of cyber attacks in recent years. The analysis of network traffic through Intrusion Detection Systems (IDS) has become an essential element of the networking security toolset. To cope with the increased rate and complexity of cyber attacks, researchers have utilised Machine Learning (ML) and Deep Learning (DL) techniques to develop IDS capable of detecting new and zero-day attacks. However, the lack of large, realistic, and up-to-date datasets hinders the IDS development process. This thesis proposes an empirical investigation of ML and DL algorithms to detect known and unknown attacks in general- and special-purpose networks. The thesis further investigates how ML and DL algorithms can learn from a limited amount of data while retaining high accuracy. To this effect, a special-purpose IoT dataset is generated and evaluated against six ML techniques. The challenges and limitations of identifying anomalies in special-purpose networks are identified and discussed. In an attempt to reduce the need for large training datasets, this thesis investigates the utilisation of the Few-Shot learning paradigm to train IDS using a limited amount of data. For this purpose, Siamese networks are used and evaluated in three scenarios. This thesis further investigates the use of autoencoders to detect zero-day attacks. The zero-day attack detection experiments highlight the problem of discriminating benign-mimicking attacks. To overcome this challenge, an additional layer of feature abstraction is proposed to improve accuracy through the cumulative aggregation of network traffic. 
The results of this research demonstrate the effectiveness of the proposed approaches for IDS development. Siamese networks demonstrate their ability to learn from limited data. The proposed autoencoder models exhibit their potential to detect zero-day attacks. Finally, the significance of flow aggregation features in discriminating benign-mimicking attacks is demonstrated.
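The few-shot decision rule behind distance-based models such as Siamese networks can be sketched with a nearest-prototype classifier: each class is summarised by the mean of a handful of labelled embeddings, and a query flow takes the label of the closest prototype. The two-dimensional "embeddings" below are hand-made stand-ins for learned flow embeddings, not the thesis's trained networks.

```python
import numpy as np

def class_prototypes(support_x, support_y):
    """Mean embedding per class from a small labelled support set (few-shot)."""
    classes = sorted(set(support_y))
    protos = np.stack(
        [support_x[np.array(support_y) == c].mean(axis=0) for c in classes]
    )
    return classes, protos

def nearest_prototype(queries, classes, protos):
    """Label each query embedding with the class of its nearest prototype."""
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]

# Two labelled flows per class stand in for the support set.
support_x = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
support_y = ["benign", "benign", "attack", "attack"]
queries = np.array([[0.2, 0.3], [5.0, 4.0]])

classes, protos = class_prototypes(support_x, support_y)
labels = nearest_prototype(queries, classes, protos)
```

The appeal for IDS work is that adding a new attack class only requires a few labelled examples to form a prototype, rather than retraining on a large dataset.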
- Published
- 2021
6. Integration of cybersecurity in BIM-enabled facilities management organisations
- Author
-
Ghadiminia, Nikdokht, Mayouf, Mohammad, Cox, Sharon, and Krasniewicz, Jan
- Subjects
005.8, CAH11-01-03 - information systems, CAH11-01-07 - business computing - Abstract
Building Information Modelling (BIM) enables the creation, exchange and storage of digital information which represents digital and physical assets within a facility. The data within the in-use phase of a BIM project life cycle incorporates the highest level of detail, where the as-built data of the facilities are managed and maintained by facilities management (FM) organisations. The connection of BIM with FM systems facilitates access to as-built and as-maintained data of all components within a facility, which may enable control of the devices and systems within the facility. Hence, facilities and their occupants become ever more vulnerable to cyber-attacks with malicious intentions of harming the occupants or disrupting or destroying the facilities. Thus, effective cybersecurity management is required to protect data. Findings from the review of literature were summarised in a cybersecurity risk matrix, to bridge the concepts of cybersecurity and BIM in FM by unveiling the impact of a cybersecurity attack resulting in a compromise of the integrity, availability and confidentiality of data in various task areas of a BIM-enabled FM (BIM-FM) organisation. This emphasises the significance of effective and efficient management of cybersecurity in preserving the benefits associated with the implementation of BIM in FM. The literature review showed that both academia and industry are more focused on the technical aspects of using BIM in FM, which is often coupled with an overdependency on technical cybersecurity measures. Thus, investing in a mature implementation of BIM that includes cybersecurity considerations from a people and process perspective is often overlooked in FM organisations. This has resulted in an increased vulnerability to cybersecurity attacks that may compromise the potential BIM benefits in FM. 
Therefore, this study sought to shift focus to the people and process aspects of cybersecurity in BIM-enabled FM, by exploring the people- and process-related BIM and cybersecurity determinants that contribute to a more cybersecure BIM-FM. An inductive approach to the research facilitated a multi-disciplinary exploration of the concepts of BIM and cybersecurity, which resulted in the demarcation of the research focus to BIM-enabled facilities management organisations. This was followed by a literature review and qualitative analysis of secondary data from BIM maturity models and cybersecurity best practice guidelines to investigate the requirements of a cybersecure implementation of BIM in FM. Findings were structured to form the primary research framework, which was further enhanced and improved using empirical findings collected via 25 semi-structured interviews with facilities management professionals. Findings from the thematic analysis of the interviews were coalesced with the literature review findings to develop the BIMCS-FM framework upon the primary research framework. The BIMCS-FM framework presents the determinants of cybersecure BIM in FM and their interconnections, to assist BIM-FM organisations in their approach to cybersecurity management. The framework was validated using expert opinion, gathered via a semi-structured questionnaire that was qualitatively analysed to make final revisions to the framework. The BIMCS-FM framework acts as a prompting mechanism for BIM-FM organisations to integrate cybersecurity within all aspects of BIM in FM. It expands the scope of BIM maturity by incorporating cybersecurity considerations as part of the management of BIM in FM, creating a unified approach towards the management of both BIM and cybersecurity in FM. 
The application of this framework to BIM-FM can benefit from the future development of process models to enable the build-up of knowledge, skill sets, awareness and culture that is required for a cybersecure implementation of BIM. This study also provides a foundation for future research into the complexities of cybersecurity in protecting the digital information in various task areas of a BIM-FM organisation.
- Published
- 2021
7. An investigation to cybersecurity countermeasures for global Internet infrastructure
- Author
-
Hammood, Hayder
- Subjects
005.8 - Abstract
The Internet is composed of entities called Autonomous Systems (ASes). Each AS is managed by an Internet Service Provider (ISP). In turn, each group of ISPs is managed by a Regional Internet Registry (RIR), and all RIRs are managed by the Internet Assigned Numbers Authority (IANA). The different ASes are globally connected via the inter-domain routing protocol, the Border Gateway Protocol (BGP). BGP was designed to be scalable enough to handle massive Internet traffic; however, it has been studied for improvements owing to its lack of security. Furthermore, it relies on the Transmission Control Protocol (TCP), which in turn makes BGP vulnerable to whatever attacks TCP is vulnerable to. Thus, many researchers have worked on developing proposals for improving BGP security, since it is the only external protocol connecting the ASes around the globe. In this thesis, different security proposals are reviewed and discussed for their merits and drawbacks. With the aid of Artificial Immune Systems (AIS), the research reported in this thesis addresses Man-In-The-Middle (MITM) and message replay attacks. Other attacks are discussed regarding the benefits of using AIS to support BGP; however, the focus is on MITM and message replay attacks. This thesis reports on the evaluation of a novel Hybrid AIS model compared with existing methods of securing BGP, such as S-BGP and BGPsec, as well as the traditional Negative Selection AIS algorithm. The results demonstrate improved precision in detecting attacks for the Hybrid AIS model compared with the Negative Selection AIS. Higher precision was achieved with S-BGP and BGPsec, however, at the cost of higher end-to-end delays. The high precision shown in the collected results for S-BGP and BGPsec is largely due to S-BGP encrypting the data using public key infrastructure, while BGPsec utilises the IPsec security suite to encapsulate the exchanged BGP packets. 
Therefore, neither of the two methods (S-BGP and BGPsec) is considered an Intrusion Detection System (IDS). Furthermore, S-BGP and BGPsec lack decision-making capabilities and require administrative attention to mitigate an intrusion or cyberattack. The proposed Hybrid AIS, on the other hand, can remap the network topology as needed and optimise the path to the destination.
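The traditional Negative Selection AIS algorithm used as a baseline in this work can be sketched compactly. This is a generic textbook version, not the thesis's Hybrid AIS: random bit-string detectors are censored against a "self" set of normal traffic signatures, and any string that a surviving detector matches is flagged as anomalous. The r-contiguous-bits matching rule and all parameters here are illustrative.

```python
import random

def matches(detector, sample, r):
    """r-contiguous-bits rule: the detector fires if it agrees with the
    sample on r consecutive, aligned bit positions."""
    return any(detector[i:i + r] == sample[i:i + r]
               for i in range(len(sample) - r + 1))

def generate_detectors(self_set, n, length, r, seed=0):
    """Censoring phase: keep random candidates that match no 'self' string."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n:
        cand = ''.join(rng.choice('01') for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, r):
    """Detection phase: anything matched by a surviving detector is non-self."""
    return any(matches(d, sample, r) for d in detectors)

# Toy 'self' set standing in for signatures of normal BGP traffic.
self_set = ["00000000", "11111111"]
detectors = generate_detectors(self_set, n=50, length=8, r=4)
```

By construction, no detector ever matches the self set, so normal traffic produces no false alarm under this rule; coverage of non-self strings depends on how many detectors are generated.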
- Published
- 2021
8. National cybersecurity capacity building framework for countries in a transitional phase
- Author
-
Naseir, Mohamed Altaher Ben
- Subjects
005.8 - Abstract
Building cybersecurity capacity has increasingly become a subject of global concern, both in stable countries and in countries in a transitional phase. National and international Research & Technology Organisations (RTOs) have developed a plethora of guidelines and frameworks to help with the development of a national cybersecurity framework. The current state-of-the-art literature provides guidelines for developing national cybersecurity frameworks, but relatively little research has focused on the context of cybersecurity capacity building, especially for countries in the transitional stage. Countries in a transitional phase are typically characterised by civil war, political and economic upheaval, and the absence of the rule of law. This has resulted in a critical knowledge gap that must be addressed through empirical research to guide these countries in developing and implementing a cybersecurity capacity platform. This thesis proposes a National Cybersecurity Capacity Building Framework (NCCBF) that relies on a variety of existing standards, guidelines and practices to enable countries in a transitional phase to transform their current cybersecurity posture by applying activities that reflect desired outcomes. The NCCBF provides stability against unquantifiable threats and enhances security by embedding leading and lagging security performance measures at a national level. The NCCBF is inspired by the Design Science Research (DSR) methodology and guided by the IDEF0 modelling approach. Developing the framework involved two qualitative studies, using Interactive Management (IM) and focus groups as the main data elicitation approaches. These studies involved government officials, private-sector managers and general employees participating in security development from areas such as defence, e-services, the private sector, banking, the Digital Crime Unit, the Immigration and Foreigners Affairs Authority, the oil and gas sector and intelligence agencies. 
A set of objectives was derived from these studies to identify the key initiatives for the development of national cybersecurity capacity in the country. The research also used secondary data sources, such as government reports and global indices, to validate the results of the study. The findings suggest that countries in a transitional phase are vulnerable to cybersecurity risks, such as cybercrime and cyber terrorism, and that they lack cybersecurity capacity in areas such as adequate knowledge and awareness of cybersecurity, cybersecurity strategies and policies, technical controls, and incident response capabilities. Based on the research findings and analysis, the National Cybersecurity Capacity Building Framework (NCCBF) was constructed and evaluated, highlighting the key areas necessary for improving the cybersecurity capacity of countries in a transitional phase. Furthermore, the NCCBF was evaluated against a structured set of criteria in focus groups with experts from different countries, including some from countries that were in a transitional phase. The evaluation demonstrated the valuable contribution of the NCCBF in representing the challenges of national cybersecurity capacity building and the complexities associated with it.
- Published
- 2021
9. On the impact of privacy policy and app permissions linkage on users' disclosure decisions
- Author
-
Baalous, Rawan
- Subjects
005.8, QA75 Electronic computers. Computer science - Abstract
Older versions of Android (before version 6.0) require users to make privacy decisions during the app installation process. The privacy decision is either to accept all the requested permissions to access the user's data and install the app, or to stop the installation process. After several years of criticism of this Android permissions model, a new model (the run-time model) was announced starting from Android Marshmallow. In the run-time permissions model, the app is installed regardless of the required permissions. However, when the app needs access to users' private data (dangerous permissions), such as location or contacts, the user is prompted at the time the data is requested with allow and deny options. The context of accessing the user's data in the run-time permissions model may give users more information about the most likely purpose of the access request. However, requesting access to the user's storage, for example after the user presses an "upload photo" button, does not mean that the app will only access the user's storage for this purpose. After the permission is granted, the app may still access the user's storage for other legitimate purposes described in the app's privacy policy. Hence, it is important for users to know the rationales for dangerous permissions in order to make more privacy-informed decisions. Unfortunately, unlike Apple iOS, the Android run-time permissions model does not provide an option for including rationales in the standard permission request dialog. Android apps' privacy policies are still the main channel for informing users about data collection and usage practices. Nevertheless, the length of these policies discourages the majority of users from reading them. Therefore, it would be helpful to know whether presenting users with rationales for requesting dangerous permissions, extracted from apps' privacy policies, would help them make more privacy-informed decisions. To achieve this goal, this thesis addresses three challenges. 
The first challenge is to link privacy policy statements to the dangerous permissions used by Android apps in the run-time permissions model. To this end, we built a taxonomy of dangerous-permission-related phrases present in Android apps' privacy policies, since no previous work has provided such a dataset. We used this dataset as our gold standard. Given the amount of time and effort needed to build this dataset, the second challenge was to examine whether machine learning methods can help to quickly and effectively identify phrases relevant to dangerous permissions in Android apps' privacy policies. In this regard, we demonstrated the effectiveness of using semantic sentence embeddings for dangerous permission extraction. We compared the results generated by the sentence embedding model with the gold standard. The results provided insights into the strengths and limitations of sentence embeddings in extracting privacy-related information from privacy policy text. Finally, in a user study, we explored how the type of dangerous permission, the clarity of the dangerous permission rationales extracted from an Android app's privacy policy, and the clarity of the context of the resource accessed affect users' disclosure decisions. The knowledge gained from this experiment sheds more light on what users take into consideration when deciding to grant or deny data collection requests in the Android run-time permissions system.
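The linkage step can be pictured as a similarity search: each policy sentence is embedded and compared against short descriptions of the dangerous permissions, and the best-scoring permission is taken as the sentence's topic. The bag-of-words "embedding" below is a deliberately crude stand-in for the semantic sentence embeddings the thesis uses, and the permission descriptions are invented for illustration.

```python
from collections import Counter
from math import sqrt

def embed(sentence):
    """Toy stand-in for a sentence embedding: a bag-of-words count vector."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_permission(policy_sentence, permission_queries):
    """Rank candidate permission descriptions by similarity to one sentence."""
    scores = {p: cosine(embed(policy_sentence), embed(q))
              for p, q in permission_queries.items()}
    return max(scores, key=scores.get), scores

# Invented permission descriptions; a real pipeline would use richer text.
queries = {
    "LOCATION": "we collect your precise location data",
    "CONTACTS": "we access your contacts and address book",
}
sentence = "the app may collect location data to show nearby offers"
best, scores = best_permission(sentence, queries)
```

A semantic embedding would also match paraphrases ("where you are" vs "location"), which is exactly the limitation of the word-overlap stand-in shown here.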
- Published
- 2021
- Full Text
- View/download PDF
10. A framework for understanding and establishing an effective information security culture
- Author
-
Tolah, Alaa
- Subjects
005.8, Culture Framework, Information Security Culture, Human Behaviour, Insider Threats, Human Factor - Abstract
A key challenge facing organisations is information security, as security breaches pose a serious threat to sensitive information. Organisations face security risks in relation to their information assets, some of which stem from their own employees. Individuals who work in organisations can pose serious risks, even when investment is made to improve security control measures and other devices. Organisations need to focus on employee actions and behaviour to limit security failures, as they aim to establish an effective security culture with employees acting as a natural safeguard for information assets. However, the literature review highlights the lack of prior research models able to direct organisations towards an effective security culture, which is why the current research was conducted: to provide a comprehensive framework that demonstrates the key factors that affect security culture. The main objective was to implement a reliable and valid framework capable of focusing on human behaviour and directing organisations in their assessment and improvement of security culture. The research developed a comprehensive Information Security Culture and key Factors Framework (ISCFF) that correlates human factors with security culture and determines how the security of information assets is enhanced. The framework provides a level of structured direction to enhance security management and security culture assessment controls. The development of the framework is based on Alnatheer's (2012) model and a review of the academic literature on security culture. In the framework, a security culture comprises various factors in three categories: influential factors and organisational behaviour factors, which influence a security culture, and reflection factors, which constitute a security culture. 
The first category includes top management, security policy, security education and training, security risk analysis and assessment, and ethical conduct; the second includes personality traits and job satisfaction; and the third includes security awareness, security ownership, and security compliance. The framework was validated using a pragmatic, mixed-methods approach comprising qualitative and quantitative research, with the findings confirming the significance of the identified factors in the development of security culture. A semi-structured interview-based investigation was conducted with thirteen experienced security specialists from seven organisations. The interviews concluded that continuous guidance of employees towards relevant security training sessions and security awareness development enhances security culture. Additionally, an exploratory survey with 266 valid responses demonstrated the framework's validity and reliability through the use of an exploratory factor analysis (EFA) and a confirmatory factor analysis (CFA). Different hypothesised correlations were analysed using structural equation modelling (SEM), with the indirect exploratory effect of the moderators examined through a multi-group analysis (MGA). This research has shown that the framework is valid and achieved an acceptable fit with the data, enabling organisations to initiate and maintain an organisational security culture. This research fills an important gap concerning the significant relationship between personality traits and security culture. It also contributes to improving the knowledge of information security management by introducing a comprehensive information security culture and key factors framework in practice, which supports the cultivation and maintenance of a quality security culture. The framework factors are vital in justifying security culture acceptance. 
The framework can ultimately be used by organisations to construct their security culture through a process of enabling employees, shaping their assumptions and reducing the level of insider threat. It improves the ability to measure and assess an organisational security culture, and it helps in the design of employee security training for advancing security awareness, which in turn enhances the security culture.
- Published
- 2021
11. Economic drivers in security decisions in public Wi-Fi context
- Author
-
Sombatruang, Nisamanee
- Subjects
005.8 - Abstract
This thesis investigates economic drivers in security decisions in the context of public Wi-Fi. Four sets of studies took place. The first set examined the risks of public Wi-Fi today. An experimental rogue public Wi-Fi hotspot was set up for 150 hours, first in London, UK, in 2016, and then in Nara, Japan, in 2017. Sensitive data such as emails and login credentials were found to have been transmitted insecurely. The second set of studies examined decision-making and the drivers influencing users to use public Wi-Fi. Participants (106 in the UK, 103 in Japan) took part in scenario-based questionnaires. Findings showed that the desire to save mobile data allowance, a form of resource preservation heuristic tendency (RPHT), significantly prompted participants who regularly face mobile data constraints to use public Wi-Fi. The next study examined evidence in the wild. Participants (71, UK only) were recruited for three months to run My Wi-Fi Choices, an Android app developed to capture the factors driving decisions to use public Wi-Fi. The results emphasised the importance of RPHT in driving users to use public Wi-Fi; therefore, advising an individual trapped in mobile data RPHT to stop using public Wi-Fi entirely is futile, and alternative security advice is needed. This led to the last set of studies, which examined users' decisions to adopt a Virtual Private Network (VPN) app that can help mitigate public Wi-Fi risks. Discrete choice experiments were run with 243 participants (154 in the UK, 94 in Japan) to examine the attributes of a VPN app affecting user decisions. Various attributes of a VPN app were identified as drivers of its download, installation and actual use. Combining the knowledge gained from all the studies, this thesis proposes an RPHT-decision model explaining the effects of RPHT on security decisions.
- Published
- 2021
12. Software as a weapon : concepts, perceptions, and motivations in pursuit of a new technology of conflict
- Author
-
Silomon, Jantje, Roscoe, Bill, and Kello, Lucas
- Subjects
005.8, International relations, Computer science - Abstract
This thesis addresses the topic of 'Software as a Weapon' (SaaW) using a mixed-methods approach, bringing together elements of Computer Science, International Relations, and Strategic Studies. The thesis first addresses the nature of software, malware, and weaponised software via a questionnaire-based public solicitation with three groups of respondents: military officers, academics, and others. The results show consensus among participants that defensive software capabilities are vital to a state's security. However, depending on the training and background of respondents, questions pertaining to the nature of software, for example whether software should be treated like a physical object, or whether malware is a weapon, exhibit statistically significant differences. The second part of the thesis investigates the factors that contribute to an actor pursuing SaaW. It explores the proliferation debate and examines similarities and differences relative to traditional weapon groups, including nuclear, biological, and chemical weapons, as well as small arms and light weapons. These factors are then used to create a Bayesian Network model representing an actor's sources of impetus. From such a model, it is possible to reason about the interplay of complementary and competing forces. By accounting for restraining and motivating elements, the model introduces objectivity to the debate on actor motivation in the cyber domain, giving a variety of stakeholders a tool to evaluate actors' software weaponisation probabilities. To showcase and evaluate this model, three different actors are used, representing terrorists, state powers, and generic attackers. Quantitative data is combined with qualitative interviews, populating network nodes with prior probabilities and relative weightings of observed dependencies. 
An approach of weighting the relative influence strength of parent nodes is implemented, creating a linearly growing set of probability distributions. The results show that the probability of the generic actor pursuing SaaW is uncertain, which captures the nature of this scenario well. The state actor also shows ambivalence, but in this case high restraints are countered by almost equally high capabilities, whilst motivating forces are low. The terrorist actor, on the other hand, has a medium-to-low probability, driven by a lack of capabilities and limited motivations despite very low restraining factors. Overall, this thesis emphasises the interdisciplinary nature of cyber security, and provides novel tools and concepts from Computer Science, International Relations, and Strategic Studies to understand SaaW.
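The idea of weighting parent-node influences can be caricatured with a simple linear blend. The actual thesis model is a Bayesian Network populated with elicited probabilities; the weights and actor scores below are invented purely to show how capability and motivation push the pursuit probability up while restraint pushes it down.

```python
def pursuit_score(capability, motivation, restraint, weights=(0.4, 0.4, 0.2)):
    """Illustrative weighted blend of parent influences on the chance that an
    actor pursues software weaponisation. All inputs lie in [0, 1]; the
    weights are made up, not the thesis's elicited values."""
    w_cap, w_mot, w_res = weights
    raw = w_cap * capability + w_mot * motivation + w_res * (1.0 - restraint)
    return min(1.0, max(0.0, raw / sum(weights)))

# A generic actor with middling values lands near 0.5 -- "uncertain".
generic = pursuit_score(0.5, 0.5, 0.5)
# A capable but heavily restrained, weakly motivated state actor also
# hovers near the middle, mirroring the ambivalence described above.
state = pursuit_score(0.9, 0.2, 0.9)
```

A real Bayesian Network would instead propagate full conditional probability tables, but the qualitative behaviour, opposing forces cancelling towards an ambivalent middle, is the same.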
- Published
- 2021
13. Network-based advanced malware detection using multi-classifier machine learning
- Author
-
Almashhadani, Ahmad, Sezer, Sakir, and O'Kane, Philip
- Subjects
005.8 ,Network security ,network traffic analysis ,intrusion detection ,machine learning ,malware analysis ,ransomware ,domain generation algorithm (DGA) - Abstract
Over the past decade, cyber threats have significantly evolved in persistence and sophistication. Malware has been the primary weapon of choice for carrying out various cyberattacks. Host-based malware detection, as the primary line of defence, has evolved into the 'Achilles heel'. In particular, security-aware targeted attacks, comprising reconnaissance and delivery phases, are increasingly capable of identifying deployed security tools and disabling them without being detected. Hence, the deployment of an advanced, network-based Intrusion Detection System (IDS) has become an inevitable line of defence assisting host-based malware detection. Ransomware is a kind of advanced malware that has spread rapidly in recent years, causing massive financial losses for a broad range of victims, such as healthcare facilities, companies, and individuals. Modern host-based detection methods require the host to be infected before they can identify anomalies and detect the malware. By the time of infection, it may be too late, as some of the system's assets may already have been encrypted or exfiltrated by the malware. Conversely, the network-based approach can be an effective detection method, as most ransomware families attempt to contact command and control (C&C) servers before their harmful payloads are executed. Also, some recent ransomware families have evolved to combine the propagation properties of computer worms, enabling them to spread across networks. A network-based ransomware detection approach, which complements well-established host-based ransomware detection methods, can be one of the essential means for detecting ransomware attacks effectively. It can overcome the limitations of current ransomware defences while enabling early detection and timely deployment of countermeasures. The state of the art presents little research that focuses on network-based approaches for ransomware detection. 
This thesis investigates the use of machine learning techniques for detecting crypto-ransomware network activities. A thorough dynamic analysis of crypto-ransomware network traffic is carried out using a dedicated malware testbed. A set of 18 network-based features is extracted from several network protocols of Locky, one of the well-established ransomware families. A new classification scheme is introduced to classify the features into four types. A multi-feature, multi-classifier intrusion detection system is proposed and implemented for detecting the communications between ransomware and its C&C server. This new approach employs two independent classifiers working in parallel on two levels: packet and flow. The experimental evaluation of the presented detection system demonstrates high detection accuracy at each level: 97.92% and 97.08% respectively. Second, machine learning techniques are used to detect covert C&C channels established using a Domain Generation Algorithm (DGA). DGA is one of the main techniques deployed by ransomware and botnets to connect with attackers by generating many pseudorandom domain names. A malicious domain name detection system, called MaldomDetector, is introduced. The prototyped MaldomDetector can detect DGA-based communications before the malware is able to establish a successful connection with the C&C server, based only on the characters used in the domain name. MaldomDetector deploys a deterministic algorithm and easy-to-compute features extracted from the domain name's characters. It is not based on any probabilistic language model, making it a language-independent system, and does not utilise any data from an external site or wait for a DNS response packet; hence, it significantly reduces the time and computation required to classify domain names. The evaluation results demonstrate that MaldomDetector achieves a high accuracy of 98% in detecting different types of DGA-based domains. 
MaldomDetector can be employed as an early warning system to raise alarms about potential malicious DNS communications. Finally, a multi-feature, multi-classifier network-based system (MFMCNS) is presented for detecting ransomware propagation activities. A comprehensive analysis of ransomware traffic is performed, and two sets of features are extracted based on two independent flow levels: session-based and time-based. Two individual classifiers are built employing the two feature sets. The experimental results demonstrate high detection accuracy for the session-based and time-based classifiers, 99.88% and 99.66% respectively, validating the effectiveness of the extracted features. MFMCNS employs these classifiers in parallel on different levels, combining the classifiers' decisions using a fusion rule. Experimental results validate that the overall MFMCNS detection accuracy and reliability are thereby enhanced.
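The deterministic, character-level style of detection that MaldomDetector embodies can be sketched in a few lines. The features and thresholds below are illustrative stand-ins, not MaldomDetector's actual feature set:

```python
import math
from collections import Counter

def char_features(domain):
    """Cheap, language-independent features computed only from the
    characters of a domain label (no DNS response, no external data)."""
    label = domain.split('.')[0].lower()
    counts = Counter(label)
    n = len(label)
    entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
    vowels = sum(label.count(v) for v in 'aeiou')
    digits = sum(ch.isdigit() for ch in label)
    return {
        'entropy': entropy,            # randomness of character use
        'vowel_ratio': vowels / n,     # DGA names are often vowel-poor
        'digit_ratio': digits / n,
        'length': n,
    }

def looks_generated(domain):
    """Deterministic rule-of-thumb classifier (illustrative thresholds)."""
    f = char_features(domain)
    return f['entropy'] > 3.5 and f['vowel_ratio'] < 0.25

print(looks_generated('google'))            # dictionary-like name -> False
print(looks_generated('xj4k2qzv9w1tr8pl'))  # pseudorandom-looking -> True
```

Because everything is derived from the domain string itself, such a check can fire before any connection to the C&C server is attempted.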
- Published
- 2021
14. Entropic security: information, materiality, and cybersecurity
- Author
-
Fouad, Noran
- Subjects
005.8 ,HF5548.37 Security measures. Data recovery. Disaster recovery ,QA0076.9.A25 Access control. Computer security - Published
- 2021
15. Security risk assessment in systems of systems
- Author
-
Ki-Aries, Duncan
- Subjects
005.8 - Abstract
A System of Systems (SoS) is a set of independent systems that interoperate to achieve capabilities that none of the separate systems can achieve independently. The component systems may be independently operated or managed, and this may cause control problems. An area of particular concern is managing security of the large complex system that is the SoS, because development and operation of component systems may be done independently. Security vulnerabilities may arise at the SoS level that are not present or cannot be determined at the component system level. Security design and management processes typically operate only at the component system level. Within this thesis, the problem of security risk assessment at the SoS level is examined by identifying factors specific to SoSs, formulating a framework through which it can be managed, and creating a process with visualisation to support risk managers and security experts in making assessments of security risks for a SoS. Humans must be considered part of the SoS and feature in risks associated with security. A broadly qualitative methodology has been adopted, using interviews, case studies, and a scenario method in which prototype framework elements were tested. Two SoS examples, the Afghan Mission Network (AMN) and a SmartPowerchair SoS, were used to identify, combine, and apply relevant elements in a SoS context towards addressing the research problem. For the AMN, this included interviews and focus groups with stakeholders experienced in NATO security, risk, and network-based roles. The SmartPowerchair SoS, in contrast, was based on interviews and ongoing communication with a single stakeholder representative, the owner and user of the SoS. Based on the findings, OASoSIS has been developed as a framework combining the use of OCTAVE Allegro and CAIRIS to model and assess Information Security risk in the SoS context. The process for applying OASoSIS is detailed within the thesis. 
The first contribution of OASoSIS introduces a SoS characterisation process to support a SoS security risk assessment. The second contribution modifies a version of the OCTAVE Allegro Information Security risk assessment process to align with the SoS context. Risk data captured during a first-stage assessment then provides input for a third contribution that integrates concepts, models, and techniques with tool-support from CAIRIS to model the SoS information security risks. Two case studies relating to a Military Medical Evacuation SoS and a Canadian Emergency Response SoS were used to apply and validate the contributions. These were validated through input from expert Military Medical stakeholders experienced in NATO operations, and key Emergency Response SoS stakeholders with further input from an expert Emergency Management stakeholder. To further strengthen the validity of the end-to-end application of OASoSIS in future work, it would benefit from being implemented within the SoS design process for other SoS scenarios.
- Published
- 2021
16. Analysis of implementations and side-channel security of Frodo on embedded devices
- Author
-
Martinoli, Marco
- Subjects
005.8 - Abstract
Frodo is a post-quantum cryptographic scheme, submitted to the NIST post-quantum standardisation effort. In this context, my contribution is twofold. First of all, I apply several side-channel techniques to attack Frodo on an (emulated) ARM Cortex-M0. By using a single power consumption trace of a matrix multiplication involving secret material, I show how a divide-and-conquer technique can be used to mount an efficient key recovery attack, which however does not fully exploit the available leakage. Divide-and-conquer indeed assumes that leakage is independent across different subkeys, a limitation I overcome by mounting an extend-and-prune attack that exploits previously recovered subkeys to formulate an educated guess on intermediate variables. My study proceeds with the analysis of countermeasures: I show a deterministic countermeasure aimed at thwarting the extend-and-prune attack; I present a countermeasure that masks the Hamming weight, exploiting the fact that secret elements are much smaller than the space they live in; and finally I show how well-known countermeasures, such as blinding and masking, can be integrated into Frodo, and assess the corresponding overhead. My second contribution is a detailed analysis of the performance of Frodo on another embedded device, the ARM Cortex-M4. Although more powerful than the M0, this is still a very constrained environment in which not all the matrices needed in the computations can be fully stored in memory, as they are too large. On-the-fly generation of such matrices is therefore required. I take the optimisations a step further by utilising ARM assembly instructions to multiply and accumulate 16-bit values as halfwords of 32-bit registers. Finally, I challenge the need for cryptographically secure PRNGs for the generation of public matrices in favour of faster non-cryptographic PRNGs. 
The result is a dramatic improvement in performance accompanied by an educated discussion about whether doing so affects security.
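The Hamming-weight leakage that such attacks exploit, and why small secret coefficients make pruning effective, can be sketched with a simulated trace. The noise model, sample count, and candidate range below are illustrative, not the thesis's measurement setup:

```python
import random

def hw(x):
    """Hamming weight of a 16-bit value."""
    return bin(x & 0xFFFF).count('1')

def leak(value, noise=0.5):
    """One simulated power sample: Hamming weight plus Gaussian noise."""
    return hw(value) + random.gauss(0, noise)

def prune(samples, candidates):
    """Keep candidates consistent with the averaged leakage; an attack
    prunes like this per subkey, then extends the guess to the next
    intermediate value and prunes again (extend-and-prune)."""
    w = round(sum(samples) / len(samples))
    return [c for c in candidates if hw(c) == w]

random.seed(1)
secret = 5                                   # small secret, as in Frodo
samples = [leak(secret) for _ in range(200)]
pruned = prune(samples, range(16))
print(secret in pruned, len(pruned))
```

A single leakage class rarely identifies the secret uniquely, which is why the pruned set is then refined using further intermediate values; masking the Hamming weight, as in the countermeasure above, removes the signal this sketch relies on.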
- Published
- 2020
17. Network traffic behaviour profiling
- Author
-
Piskozub, Michal and Martinovic, Ivan
- Subjects
005.8 ,Cyber Security ,Computer Science - Abstract
Nowadays, computer networks have become incredibly complex due to the evolution of online services and the rapid growth in the number of smart devices such as smartphones, tablets and laptops. Most user information, even the most sensitive, is transmitted over the Internet. Unfortunately, this phenomenon has also attracted increasing interest from malware developers, who are able to find and exploit novel vulnerabilities in network devices to carry out their malicious intents. To tackle these threats, network analysts should be aided with advanced techniques to identify malicious traffic in order to guarantee the security of networks. In this thesis, we aim to reduce the asymmetric advantage of attackers by examining malware detection and classification using flow-level network traffic. Our methods explore the ability to extract network behaviours generated by malware. We further evaluate the challenge of working with the limited amount of data offered by flows to detect and classify the network traffic of malware. Malicious flows are intertwined with benign ones originating from a production network to simulate real-world settings. We gather one of the largest network flow datasets of malware in order to evaluate our proposals and show that we can detect unseen malware variants. Moreover, we explore the behaviour profiling of network hosts in order to identify them on large networks. We extract unique behaviours and show that we can work only with the amount of information exchanged by hosts in order to successfully extract their unique behaviours and hence distinguish them from others. We show that while such an approach could be used for the maintenance of networks, it may also be employed as an attack against network-based moving target defence (NMTD) systems, which is followed by countermeasures and guidelines to avoid such scenarios. 
Finally, we propose a novel method of storing network flow data in a domain-specific binary file format, motivated by the lack of sufficient methods for processing large-scale network data on the order of billions of flows. The binary format makes the analyses in this thesis possible, especially when working with the University of Oxford dataset, which contains more than 181 billion flows. We show that our binary format improves on the state of the art in terms of storage, while offering faster data processing techniques.
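A fixed-width binary flow record of the kind motivated above can be sketched with Python's standard struct module. The field layout here is an illustrative guess, not the thesis's actual format:

```python
import struct

# One fixed-width record per flow: 4-byte IPs, 2-byte ports, protocol,
# packet/byte counters, and a start timestamp. Fixed width means a file
# of N flows is exactly N * RECORD.size bytes and supports O(1) seeks
# to the i-th flow, unlike text formats such as CSV.
RECORD = struct.Struct('>IIHHBxQQQ')  # big-endian; 'x' is a pad byte

def pack_flow(src_ip, dst_ip, sport, dport, proto, pkts, byts, ts):
    return RECORD.pack(src_ip, dst_ip, sport, dport, proto, pkts, byts, ts)

def unpack_flow(buf):
    return RECORD.unpack(buf)

rec = pack_flow(0x0A000001, 0x08080808, 443, 51234, 6, 12, 9000, 1600000000)
print(len(rec), unpack_flow(rec))
```

At billions of flows, the difference between a 38-byte record and a ~100-byte text line is the difference between one disk and several, which is the storage argument the abstract makes.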
- Published
- 2020
18. Non-state actors and norms of responsible behaviour in cyberspace
- Author
-
Eggenschwiler, Jacqueline, Dunn Cavelty, Myriam, and Williams, Rebecca Ann
- Subjects
005.8 - Abstract
Computer systems and networks have become key determinants for the proper functioning of global markets, political institutions, and societies at large. Given their extensive reach into almost all areas of human activity, their safekeeping has become of strategic importance for a diverse range of actors. The proliferation of offensive cyberoperations, such as WannaCry or Petya/NotPetya, has spurred calls for normative measures of restraint, and behaviour-guiding rules of the road. Despite surging numbers of academic publications pertaining to cybersecurity generally, and norm-making processes specifically, the contributions of non-state actors to global cybersecurity governance efforts have remained under-theorised. With a view to offering correctives, this thesis examines the roles assumed by non-state actors in global cybersecurity norm formation processes. Specifically, it analyses how, in which capacities, and how effectively non-state protagonists engage in norm cultivation endeavours by surveying nine exploratory case studies, grouped into three stakeholder clusters, i.e. (a) civil society and academia, (b) corporate actors, and (c) expert communities. Triangulating different qualitative means and methods of data collection and analysis, this thesis suggests that non-state actors have come to exert discernible politico-legal influence over discussions about norms of responsible behaviour in cyberspace. Advancing empirically more informed and varied conceptualisations of the parts played by non-state actors in cybersecurity norm creation projects, this dissertation suggests that their roles can be systematised along the following profiles: (a) knowledge brokers, (b) awareness raisers, (c) norm leaders and cooperation incubators, (d) diplomatic change agents, (e) discussion feeders and gap fillers, (f) implementation assistants and capacity builders, and (g) custom shapers. 
The case studies reveal noteworthy variations in how non-state entities seek to shape actor behaviour and realise regulatory effects. The results of this inquiry go to show that non-state actors have to be taken seriously as key contributors to global cybersecurity steering efforts, and that their actions and authority have come to extend beyond advocacy or lobbying.
- Published
- 2020
19. Improving automated protocol verification : real world cryptography
- Author
-
Jackson, Dennis, Simpson, Andrew, and Cremers, Cas
- Subjects
005.8 ,Computer security - Abstract
Analysing the security of cryptographic protocols by hand is a challenging endeavour. It requires substantial expertise, weeks of intensive effort, and the resulting proof of security often arrives long after the protocol design has been finalised and even deployed. However, automated protocol verification tools, such as Tamarin and ProVerif, offer a compelling alternative as they require less expertise, promise quicker results and have been used to successfully analyse complex real world protocols such as TLS, 5G and Signal. These tools are built on an idealised model of cryptography, termed the symbolic model, which requires strong assumptions about the properties of cryptographic primitives. Consequently, automated protocol analyses may miss attacks which rely on properties of real primitives that are missing from the symbolic model. Furthermore, some protocols are not amenable to automated analysis as they use cryptographic primitives for which no suitable symbolic model currently exists. This motivates natural research questions: How closely does the contemporary symbolic model approximate the real world behaviour of common cryptographic primitives? Can we improve the accuracy and breadth of the symbolic model by developing new approaches to the underlying formalism? In this thesis, we set out to address these questions for two popular cryptographic primitives: digital signatures and Diffie-Hellman groups. The symbolic model of digital signatures was first published nearly two decades ago and is widely used. We uncover a startling mismatch between the standard cryptographic definition of signature scheme security and the symbolic description. We repair this mismatch through the development of a novel symbolic model which we use to discover a number of new attacks on deployed protocols. We also document a number of previous analyses which missed these attacks due to their traditional symbolic model. Next we consider Diffie-Hellman groups. 
Unlike digital signatures, symbolic Diffie-Hellman models have evolved considerably in recent years and we review the progress in this area, highlighting two key limitations. Firstly, that these models only describe prime order groups, despite the popularity of non-prime order groups in practice. We develop new symbolic models to remedy this and use them to discover new attacks on real world protocols. Secondly, that there is no effective procedure for analysing protocols which make use of the full field structure of Diffie-Hellman exponents. We develop a new formalism which can be used to mechanically analyse such protocols and demonstrate its effectiveness on hand-worked examples. Finally, we conclude by examining the methodology behind our approach. We argue that the techniques behind our new symbolic models can be applied to many other cryptographic primitives to yield more accurate symbolic models. We go on to summarise our contributions and sketch several promising lines of future work.
- Published
- 2020
20. Photonic integration of a directly phase-modulated source for Quantum Key Distribution
- Author
-
De Marco, Innocenzo and Razavi, Mohsen
- Subjects
005.8 - Published
- 2020
21. Kleptography and steganography in blockchains
- Author
-
Al-Salami, Nasser
- Subjects
005.8 - Abstract
Despite its vast proliferation, blockchain technology is still evolving and witnessing continuous technical innovation to address its numerous unresolved issues. An example of these issues is the excessive electrical power consumed by some consensus protocols. Furthermore, although various media reports have highlighted the existence of objectionable content in blockchains, this topic has not received sufficient research attention. Hence, this work investigates the threat and deterrence of arbitrary-content insertion in public blockchains, which poses a legal, moral, and technical challenge. In particular, the overall aim of this work is to thoroughly study the risk of manipulating the implementation of randomized cryptographic primitives in public blockchains to mount kleptographic attacks, establish steganographic communication, and store arbitrary content. As part of our study, we present three new kleptographic attacks on two of the most commonly used digital signatures: ring signatures and ECDSA. We also demonstrate our kleptographic attacks on two real cryptocurrencies: Bytecoin and Monero. Moreover, we illustrate the plausibility of hijacking public blockchains to establish steganographic channels. In particular, we design, implement, and evaluate the first blockchain-based broadcast communication tool on top of a real-world cryptocurrency. Furthermore, we explain the detrimental consequences of kleptography and steganography for the users and the future of blockchain technology. Namely, we show that kleptography can be used to surreptitiously steal users' secret signing keys, which are the most valuable and guarded secrets in public blockchains. After losing their keys, users of cryptocurrencies will inevitably lose their funds. In addition, we clarify that steganography can be used to establish subliminal communication and secretly store arbitrary content in public blockchains, which turns them into cheap cyberlockers. 
Consequently, participation in such blockchains, which are known to store unethical content, can be criminalized, hindering the future adoption of blockchains. After discussing the adverse effects of kleptographic and steganographic attacks on blockchains, we survey all of the existing techniques that can defend against these attacks. Finally, due to the shortcomings of the available techniques, we propose four countermeasures that ensure kleptography- and steganography-resistant public blockchains. Our countermeasures include two new cryptographic primitives and a generic steganography-resistant blockchain framework (SRBF). This framework presents a universal solution that deters steganography and practically achieves the right to be forgotten (RtbF) in blockchains, which represents a regulatory challenge for current immutable blockchains.
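Why subverted signature randomness is so damaging can be seen from the textbook ECDSA nonce-reuse algebra, sketched below with a toy modulus and the curve arithmetic abstracted away. This is a classical illustration of the danger, not one of the thesis's kleptographic constructions:

```python
# ECDSA signing algebra: s = k^-1 * (z + r*d) mod n, where d is the
# secret key, k the per-signature nonce, z the message hash, and r is
# derived from k (here fixed arbitrarily, a deliberate simplification).
# If the same k signs two messages, both k and d fall out linearly.
n = 101                      # toy prime group order
d, k, r = 42, 7, 55          # secret key, reused nonce, shared r

def sign(z):
    return pow(k, -1, n) * (z + r * d) % n

z1, z2 = 10, 33
s1, s2 = sign(z1), sign(z2)

# An observer sees (r, s1, z1) and (r, s2, z2) with the same r:
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
print(k_rec == k, d_rec == d)   # True True
```

A kleptographic attack goes further: rather than reusing the nonce, it derives it covertly so that only the attacker, holding a trapdoor, can recover the signing key from published signatures.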
- Published
- 2020
- Full Text
- View/download PDF
22. Organizational resilience : the case of cybersecurity
- Author
-
Hepfer, Manuel, Lawrence, Thomas B., and Powell, Thomas C.
- Subjects
005.8 - Published
- 2020
23. Inline and sideline approaches for low-cost memory safety in C
- Author
-
Nam, Myoung Jin and Greaves, David J.
- Subjects
005.8 ,Cyber security ,Software security - Abstract
System languages such as C or C++ are widely used for their high performance; however, allowing arbitrary pointer arithmetic and type casts introduces a risk of memory corruption. These memory errors cause unexpected termination of programs or, even worse, attackers can exploit them to alter the behaviour of programs or leak crucial data. Despite advances in memory safety solutions, high and unpredictable overhead remains a major challenge. Accepting that it is extremely difficult to achieve complete memory safety at a performance level suitable for production deployment, researchers attempt to strike a balance between performance, detection coverage, interoperability, precision, and detection timing. Some properties are much more desirable, e.g. interoperability with pre-compiled libraries. Comparatively less critical properties are sacrificed for performance, for example, tolerating longer detection delay or narrowing down detection coverage by performing approximate or probabilistic checking or detecting only certain errors. Modern solutions compete for performance. The performance of memory safety solutions is assessed by two major criteria: run-time and memory overhead. Researchers trade off and balance these metrics depending on a solution's purpose or placement. Many tolerate increased memory use for better speed, since memory safety enforcement is most desirable for troubleshooting or testing during development, where memory resources are not the main issue. Run-time overhead, considered more critical, is impacted by cache misses, dynamic instructions, DRAM row activations, branch predictions and other factors. This research proposes, implements, and evaluates MIU: Memory Integrity Utilities containing three solutions - MemPatrol, FRAMER and spaceMiu. 
MIU suggests new techniques for the practical deployment of memory safety by exploiting free resources, with the following focuses: (1) achieving memory safety with overhead < 1% by using concurrency and trading off prompt detection and coverage, yet providing eventual detection through a monitor-isolation design of an in-register monitor process and the use of AES instructions; (2) complete memory safety with near-zero false negatives, focusing on eliminating overhead that hardware support cannot resolve, by using a new tagged-pointer representation utilising the top unused bits of a pointer.
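The tagged-pointer idea can be sketched with plain integer arithmetic. This is a simulation of the bit manipulation only; the actual scheme operates on machine pointers, and the 16 spare top bits assume a 48-bit virtual address space:

```python
TAG_SHIFT = 48                      # x86-64 leaves the top 16 bits of
ADDR_MASK = (1 << TAG_SHIFT) - 1    # user pointers unused

def tag_pointer(addr, tag):
    """Pack a 16-bit metadata tag into the top bits of the address."""
    assert 0 <= tag < (1 << 16) and 0 <= addr <= ADDR_MASK
    return (tag << TAG_SHIFT) | addr

def untag(ptr):
    """Recover the raw address and the tag; the tag must be stripped
    (by a mask, or by hardware) before the pointer is dereferenced."""
    return ptr & ADDR_MASK, ptr >> TAG_SHIFT

p = tag_pointer(0x7f00_dead_beef, tag=9)
addr, tag = untag(p)
print(hex(addr), tag)   # 0x7f00deadbeef 9
```

Because the metadata travels inside the pointer itself, no shadow-memory lookup is needed to find it, which is the overhead-elimination argument made above.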
- Published
- 2020
24. On the foundations of proof-of-work based blockchain protocols
- Author
-
Panagiotakos, Georgios, Kiayias, Aggelos, and Etessami, Kousha
- Subjects
005.8 ,blockchain protocols ,provable security ,proof-of-work - Abstract
Proof-of-work (PoW) based blockchain protocols are protocols that organize data into blocks, connected through the use of a hash function to form chains, and that reach agreement by means of PoW, i.e., proofs whose generation requires spending some amount of computational power. This type of protocol rose to prominence with the advent of Bitcoin, the first protocol that provably implements a distributed transaction ledger against an adversary controlling less than half of the total computational power in the network, in a setting where protocol participants join and leave dynamically without the need for a registration service. Protocols in this class were also the first shown sufficient to solve consensus under similar conditions, a problem of fundamental importance in distributed computing. In this thesis, we explore foundational issues of PoW-based blockchain protocols that mainly have to do with the assumptions required to ensure their safe operation. We start by examining whether a common random string, shared at the start of the protocol execution among the protocol participants, is required to efficiently run such protocols. Bitcoin's security is based on the existence of such a string, called the genesis block. On the other hand, protocols found in previous works that do not assume such a setup are inefficient, in the sense that their round complexity strongly depends on the number of protocol participants. Our first contribution is the construction of efficient PoW-based blockchain protocols that provably implement a distributed ledger and consensus without such a setup. Next, we turn our attention to the PoW primitive. All previous analyses model PoW using a random oracle. While satisfactory as a sanity check, the random oracle methodology has received significant criticism and has been shown not to be sound. 
We make progress by introducing a non-idealized security model and appropriate computational assumptions that are sufficient to implement a distributed ledger or consensus when combined with the right PoW-based protocol. Finally, we analyze GHOST, a recently proposed blockchain protocol, and prove its security against a byzantine adversary under similar assumptions as Bitcoin. Previous works only considered specific attacks.
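The PoW primitive itself can be sketched as a hash-below-target search. The difficulty here is a toy value; real deployments use far smaller targets and richer block structure:

```python
import hashlib

def pow_solve(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(data || nonce) has at least
    `difficulty_bits` leading zero bits; expected work ~2^difficulty_bits
    hashes, while verification costs a single hash."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, 'big')).digest()
        if int.from_bytes(digest, 'big') < target:
            return nonce
        nonce += 1

def pow_verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, 'big')).digest()
    return int.from_bytes(digest, 'big') < (1 << (256 - difficulty_bits))

nonce = pow_solve(b'block: prev-hash + txs', 12)   # ~2^12 attempts on average
print(pow_verify(b'block: prev-hash + txs', nonce, 12))  # True
```

Analyses that model this hash as a random oracle treat each digest as an independent uniform draw; the thesis's contribution is to replace that idealisation with concrete computational assumptions.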
- Published
- 2020
- Full Text
- View/download PDF
25. Continuous authentication on mobile devices
- Author
-
Smith-Creasey, Max
- Subjects
005.8 ,QA75 Electronic computers. Computer science - Abstract
Mobile devices are one of the most popular technologies in the world. Sales have increased yearly and almost all households have at least one. They are used for personal and business purposes, storing a plethora of private data. The data stored on these devices could be used for malicious purposes if obtained by attackers. Recently, traditional authentication techniques have been shown to have flaws that enable attackers to bypass them. Furthermore, traditional techniques only authenticate once, which enables attackers to steal an unlocked device and still maintain the ability to access private data. Researchers have proposed continuous authentication techniques to mitigate these issues, but work is required to make such approaches both secure and usable. Most current schemes are limited in modalities and lack the flexibility that can enhance both security and usability. This thesis focuses on advancing continuous authentication on mobile devices by providing new and novel mechanisms. To achieve this, the thesis proposes the four novel contributions described in the following. The first contribution is a continuous authentication scheme based on a modality with little existing research: gesture typing. A novel set of six feature groupings is constructed to contain different types of features and capture the nuances of word-gestures. The results show the proposed technique performs better than other touchscreen-based features. Within this contribution, the activity performed during the typing sessions is also considered. The second contribution provides a scheme for continuous face authentication, including modules that current schemes were found to lack: liveness detection, contextual awareness and face tracking. Results show that each of these modules can provide significant benefits to security and usability (e.g., detection of illumination or activity context allows a template from the same context to be selected for enhanced accuracy). 
The third contribution is a multi-modal behavioural authentication scheme using passively collected sensor data to authenticate users. A set of novel techniques for modelling the uncertainty in the scores obtained from the sensors is produced. The scores, and the uncertainty in them, can then be fused using Dempster-Shafer theory. This scheme is shown to provide better accuracy than other commonly used fusion approaches due to its use of uncertainty. The last contribution joins the concepts of the previous contributions and employs touchscreen and face biometrics in an ensemble learning scheme to combine and enhance the biometrics. Furthermore, an adaptive threshold mechanism is introduced, against which the combined touchscreen and face score is compared. The threshold is adapted based on the score of the passively collected biometrics from the previous contribution. This approach is shown to yield enhanced and adaptable security and usability.
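Dempster-Shafer fusion of the kind described above can be sketched for a two-hypothesis frame, genuine versus impostor. The mass assignments below are illustrative, not scores from the thesis:

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule of combination for two mass functions keyed by
    frozenset subsets of the frame. Conflicting mass (pairs with empty
    intersection) is discarded and the remainder renormalised."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

G, I = frozenset('g'), frozenset('i')
THETA = G | I   # mass on the whole frame = "don't know"

# The touchscreen is fairly sure it sees the genuine user; the motion
# sensor is noisy, so much of its mass stays on THETA (uncertainty).
touch = {G: 0.7, I: 0.1, THETA: 0.2}
motion = {G: 0.4, I: 0.1, THETA: 0.5}
fused = dempster(touch, motion)
print(fused)
```

Keeping mass on the whole frame is what lets a noisy sensor say "I am unsure" rather than forcing a score, which is the advantage over a plain weighted average noted above.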
- Published
- 2020
26. Graph spectral domain data hiding
- Author
-
Al-khafaji, Hiba Mohammed Jaafar, Abhayaratne, Charith, and Chu, Xiaoli
- Subjects
005.8 - Abstract
Recent years have witnessed an increase in applications such as social, transportation, and sensor networks. The authentication and protection of these networks' data have become a major concern. Since these data are spread at arbitrary positions, without following a Cartesian grid, the techniques of classical signal processing cannot be applied to them. This thesis explores the recently advanced signal processing of graphs for spread spectrum data hiding to protect and authenticate data captured via these networks. In this research, we first explore the graph Fourier domain for data hiding. Our proposed method involves two models for reducing the embedding distortion in the host graph that results from hiding the secret data, and for enhancing the robustness of the embedded data against attacks, namely noise addition and deletion of random nodes' data. We consider two data hiding scenarios: non-blind and blind. The experimental results demonstrate that the proposed methods reduce the distortion, measured by MSE, by an average of 94% and 80% for the non-blind and blind algorithms, respectively. In addition, the robustness of the proposed method, measured by Hamming Distance (HD), is enhanced by an average of 93% and 99.8% for the non-blind algorithm and by an average of 60% and 71% for the blind algorithm after additive noise and node-data deletion, respectively. The second contribution focuses on proposing a new approach for reversible data hiding for unstructured data in the graph Fourier domain. The proposed methodology includes a model to reduce embedding distortion based on establishing the relationship between the value of the embedded bits and the MSE of the modified graph; our methodology includes another model to maximise the robustness of the embedded bits against additive noise. 
The experimental results demonstrate that the proposed method outperforms the previous methods by an average of 87% and 92% in terms of the embedding distortion, by an average of 54% and 86% in terms of the robustness against additive noise, and by an average of 97% and 99% in terms of reversibility of the original graph signal, respectively. The third contribution involves exploiting the Graph Wavelet Transform (GWT) properties for graph data hiding. We explore the graph wavelet transform to propose data hiding methods, including irreversible and reversible data hiding, with new models that minimise distortion in the host graph (resulting from hiding the secret bits) and enhance robustness against attacks. The experimental simulations show that the proposed GWT data hiding method outperforms the original data hiding methods (without using the proposed models) by an average of 99% and 99.4% for non-blind and blind data hiding, respectively, in terms of embedding distortion. The robustness of the GWT data hiding algorithms is enhanced by an average of 77%, 71%, 60% and 99% for the non-blind and blind algorithms after additive noise and node-data deletion, respectively. Similarly, the proposed GWT reversible data hiding method achieves better performance than the previous methods by an average of 68%, 82%, 78%, 92%, 95% and 99% in terms of the embedding distortion, robustness against additive noise and reversibility of the original signal, respectively.
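As a minimal sketch of the underlying machinery (not the thesis's specific distortion or robustness models), hiding a bit in the graph Fourier domain amounts to an eigendecomposition of the graph Laplacian followed by a perturbation of one spectral coefficient. The graph, host signal and embedding strength below are hypothetical.

```python
import numpy as np

def gft_embed(signal, adjacency, bit, coeff=-1, alpha=0.5):
    """Hide one bit in a graph signal via its graph Fourier spectrum.

    The GFT basis is the eigenvector matrix of the combinatorial
    Laplacian L = D - A (eigenvalues ascending, so coeff=-1 is the
    highest graph frequency). The bit shifts that coefficient by
    +/- alpha; alpha trades embedding distortion against robustness.
    """
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    _, U = np.linalg.eigh(L)                # columns of U: GFT basis
    spectrum = U.T @ signal                 # forward GFT
    spectrum[coeff] += alpha if bit else -alpha
    return U @ spectrum                     # inverse GFT

def gft_extract(marked, original, adjacency, coeff=-1):
    """Non-blind extraction: compare spectra of marked and host signals."""
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    _, U = np.linalg.eigh(L)
    return int((U.T @ marked)[coeff] > (U.T @ original)[coeff])

# 4-node path graph and a smooth host signal.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 1.1, 1.2, 1.3])
marked = gft_embed(x, A, bit=1)
```

Because the GFT basis is orthonormal, perturbing one coefficient by alpha changes the host signal by exactly alpha in Euclidean norm, which makes the distortion of a single embedding easy to reason about.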
- Published
- 2020
27. Towards analysing cyber physical systems in 3D virtual environments
- Author
-
Leahy, Fergus William, Dulay, Naranker, and McCann, Julie
- Subjects
005.8 - Abstract
Design, development and deployment of cyber-physical systems (CPS), in particular human-centric CPS, is growing at a rapid pace, with the continuing downward trend in silicon costs and increasing demand for automation in the home, office, cities and national infrastructure, to cut costs, increase efficiencies and use fewer natural resources. Human-centric CPS can play a pivotal part in improving human well-being and safety, improving security and emergency evacuation, and helping us meet the worldwide climate change targets set out by the Paris Agreement, through the development and deployment of smart homes, cities and countries by government, industry and academia to reduce energy and resource waste. However, simulating and analysing a human-centric CPS remains a challenging task, involving simulation of distributed systems in tandem with human mobility, real-world physics and phenomena. Instead, these simulations are often carried out in distinct, disconnected tools, resulting in poor end-to-end testing and disjointed analysis of the CPS in different scenarios. Cyber-physical systems interact with the physical world; hence, poorly tested systems can cause more harm than good, damaging infrastructure, harming people and even risking lives when systems fail. Existing tools for simulating and analysing human-centric cyber-physical systems do not support real-time simulation of environments with dynamic human mobility and phenomena-on-demand, instead supporting only static scenarios using recorded traces. In this thesis we present a novel cyber-physical system simulation framework for the simulation and analysis of human-centric cyber-physical systems. The framework utilises a video game engine to simulate the real world, create human mobility of individuals and crowds that adapt to the environment, and generate phenomena-on-demand. The CPS is simulated using the Cooja WSN simulator, providing accurate and scalable simulation of deployable code. 
Using the virtual world within the game engine, we have developed a novel approach to analysing simulations, coined 'visual diffing', enabling developers to visually compare the physical and virtual activity of two simulations simultaneously and intuitively within the context of the visual simulation. We demonstrate the framework through two human-centric CPS case studies, which evaluate the problems of presence-based office lighting and dynamic fire evacuation navigation. Both problems rely upon accurate simulation of the virtual environment, the human agents within it and the interaction between the cyber, physical and human aspects of the system. We demonstrate that the use of a game engine to simulate the real world, model human mobility using behaviour trees and enable visual diffing provides a feasible way to simulate and analyse human-centric CPS, to test and discover differences between different simulated scenarios. We also evaluate the performance and trade-offs of realising this approach.
- Published
- 2020
28. Fully Homomorphic Encryption applications : the strive towards practicality
- Author
-
Crawford, Jack Lik Hon
- Subjects
005.8 - Abstract
Fully Homomorphic Encryption (FHE) schemes are becoming ever more prevalent in the cryptography domain. They allow computation on encrypted data without the necessity of decryption, thus opening a plethora of new applications relating to cloud computing and cryptography. FHE schemes have generally been viewed as impractical in real-world scenarios, leading to a relatively slow uptake within industry despite the high level of interest in the topic. This has caused a lack of FHE applications, and thus various practical questions have not been tackled, due to such problems not arising or going unnoticed within research. This thesis explores three contrasting FHE applications, each of which contains new ideas and overcomes challenges within FHE. Namely, we analyse applications that require significant levels of bootstrapping, alternative data representations, as well as the possibility of using FHE in the anonymity domain. Proofs of concept have been developed for each application to demonstrate the feasibility of each idea. The aim of this research is to present the mathematics of FHE in a comprehensive manner to improve the accessibility of concepts within FHE. Furthermore, we analyse the usability and versatility of FHE in various scenarios with the aim of demonstrating the practicality of using FHE in a real-world setting.
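FHE itself is too involved for a short sketch, but the homomorphic property it generalises can be illustrated with textbook (unpadded) RSA, which is multiplicatively homomorphic: multiplying ciphertexts multiplies the underlying plaintexts. FHE schemes extend this to both addition and multiplication on ciphertexts, which suffices for evaluating arbitrary circuits. Toy, insecure parameters for illustration only:

```python
# Textbook RSA with toy parameters (never use such small primes,
# or unpadded RSA, in practice).
p, q = 61, 53
n = p * q                         # modulus
e = 17                            # public exponent
d = pow(e, -1, (p - 1) * (q - 1)) # private exponent (modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

# Homomorphic property: Enc(a) * Enc(b) mod n is an encryption of a*b,
# so the product is computed without ever decrypting a or b.
a, b = 7, 6
product_cipher = (enc(a) * enc(b)) % n
```

Decrypting `product_cipher` yields `a * b` directly, even though only ciphertexts were multiplied.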
- Published
- 2020
29. Risk analysis and management of security threats in virtualised information systems using predictive analytics
- Author
-
Kapasa, Rosemary Mulenga
- Subjects
005.8 ,QA75 Electronic computers. Computer science - Abstract
The use of online server applications has increased in recent years. To achieve the benefits of these technologies, cloud computing, with its ability to use virtual machine technologies to overcome limitations and guarantee security and quality of service to its end-user customers, is being used as a platform to run online server applications. This, however, brings about a number of security issues aimed specifically at virtual machine technologies. A number of security solutions, such as virtual machine introspection and intrusion detection, have been proposed and implemented, but the question of how to combat security issues in near-real or even real time still remains. To help answer this question, and to move a step beyond the existing solutions, which still use data mining techniques to combat the security issues of virtualisation, we propose the novel use of predictive analytics for risk analysis and management of security threats in virtualised information systems, and design and implement a novel predictive analytics framework used to build the corresponding predictive analytics model. In this project, we adopt the use of predictive analytics and demonstrate how it can be used for managing risks and security of virtualised environments. An experimental testbed for the simulation of attacks and data collection is set up. An exploratory data analytics process is carried out to prepare the data for predictive modelling. A predictive model is built from the results of the exploratory data analytics using a linear regression algorithm. The model is then validated and tested for predictive accuracy using Naïve Bayes and logistic regression algorithms, respectively. Time series algorithms are then used to build a time series predictive model that predicts attacks (DoS attacks in this case) in real time using new data. 
Designing and implementing the proposed predictive analytics model, which is aimed at monitoring, analysing and mitigating security threats in real time, successfully demonstrates the use of predictive analytics modelling as a security management tool for virtualised information systems, a novel contribution to virtualisation security.
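A minimal sketch of the kind of linear-regression predictive model described above, fitted by ordinary least squares. The feature set and training data here are hypothetical stand-ins, not the thesis's dataset:

```python
import numpy as np

# Hypothetical per-interval VM traffic features
# [packets/s, distinct source IPs, SYN ratio] and an observed DoS
# risk score for each monitoring interval.
X = np.array([
    [120,  15, 0.10],
    [135,  18, 0.12],
    [900, 210, 0.85],   # attack-like interval
    [140,  20, 0.11],
    [950, 260, 0.90],   # attack-like interval
], dtype=float)
y = np.array([0.1, 0.15, 0.95, 0.12, 0.98])

# Fit weights (with an intercept column) by ordinary least squares.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def risk(features):
    """Predicted DoS risk for a new monitoring interval."""
    return float(np.append(features, 1.0) @ w)
```

In a deployment this score would be computed on each new interval of collected flow data, with high scores triggering mitigation.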
- Published
- 2020
- Full Text
- View/download PDF
30. A mixed methods approach to understanding cyber-security vulnerability in the baby boomer population
- Author
-
Morrison, Benjamin Alan, Briggs, Pamela, and Coventry, Lynne
- Subjects
005.8 ,C800 Psychology ,G900 Others in Mathematical and Computing Sciences ,L300 Sociology - Abstract
The ongoing development and ubiquitous spread of technology has brought with it new threats and opportunities for online victimisation. Although human factors cyber-security research continues to try to mitigate these threats through the application of behavioural science, some users, such as older adults, remain at particular risk of cyber-attacks, yet remain heavily under-represented in the extant literature. This thesis outlines a mixed methods approach to understanding older adult cyber-security vulnerability. The thesis began by identifying a range of technological changes that take place during the transition into retirement, each of which offered avenues for subsequent cyber-security vulnerability. Through a large-scale online survey of retired older adults, these retirement-related factors were shown to be associated with engagement in risky online cyber-security behaviours, with an individual’s computer self-doubt identified as the strongest predictor. A second, qualitative study found that older adults see cyber-security as a stressful subject and demonstrated both the factors that influenced their confidence in engaging in cyber-security behaviours and their reasons for disengaging from such behaviours. A scale was developed to further understand older adults’ security-related stress, which was applied to understand their coping behaviours when faced with a cyber-security challenge. This was effective at predicting older adults’ engagement in dysfunctional coping, highlighting how security stress might promote cyber-security vulnerability. Finally, the research applied the transactional theory of stress and coping to older adults’ cyber-security, demonstrating its effectiveness in predicting both dysfunctional and problem-focussed coping strategies. 
The thesis provides new knowledge as to the factors which promote cyber-security vulnerability in older adults and outlines specific avenues as to how this vulnerability might manifest. Throughout this thesis, recommendations for policy makers, developers and future research are made and discussed in the context of existing literature.
- Published
- 2020
31. Tackling the challenges of information security incident reporting : a decentralized approach
- Author
-
Michail, A.
- Subjects
005.8 - Abstract
Information security incident under-reporting is unambiguously a business problem, as identified by a variety of sources, such as ENISA (2012), Symantec (2016) and Newman (2018). This research project identified the underlying issues that cause this problem and proposed a solution, in the form of an innovative artefact, which confronts a number of these issues. The project was conducted according to the requirements of the Design Science Research Methodology (DSRM) by Peffers et al. (2007). The research question set at the beginning of the project probed the feasibility of an incident reporting solution that would increase users' motivation to report incidents, by utilizing the positive features offered by existing solutions on the one hand, while also providing added value to users on the other. The comprehensive literature review chapter set the stage, identified the reasons for incident under-reporting, and evaluated the existing solutions, determining their advantages and disadvantages. The objectives of the proposed artefact were then set, and the artefact was designed and developed. The output of this development endeavour is “IRDA”, the first decentralized incident reporting application (DApp), built on “Quorum”, a permissioned blockchain implementation of Ethereum. Its effectiveness was demonstrated when six organizations agreed to use the developed artefact and performed a series of pre-defined actions in order to confirm the platform’s intended functionality. The platform was also evaluated using Venable et al.'s (2012) evaluation framework for DSR projects. This research project contributes to knowledge in various ways. It investigates blockchain and incident reporting, two domains which have not been extensively examined and whose available literature is rather limited. 
Furthermore, it identifies, compares, and evaluates the conventional reporting platforms available to date. In line with previous findings (e.g. Humphrey, 2017), it also confirms the lack of standard taxonomies for information security incidents. This work also contributes by creating a functional, practical artefact in the blockchain domain, a domain where, according to Taylor et al. (2019), most studies are either experimental proposals or theoretical concepts with limited practicality in solving real-world problems. Through the evaluation activity, and by conducting a series of non-parametric significance tests, it also suggests that IRDA can potentially increase users' motivation to report incidents. This thesis describes an original attempt to utilize the newly emergent blockchain technology, and its inherent characteristics, to address the concerns which actively contribute to the business problem. To the best of the researcher’s knowledge, there is currently no other solution offering similar benefits to users and organizations for incident reporting purposes. Through the accomplishment of the project’s pre-set objectives, the developed artefact provides a positive answer to the research question. The artefact, featuring increased anonymity, availability, immutability and transparency, as well as an overall lower cost, has the potential to increase organizations' motivation to report incidents, thus improving the currently dismaying statistics of incident under-reporting. The structure of this document follows the flow of activities described in the DSRM by Peffers et al. (2007), while also borrowing some elements from the nominal structure of an empirical research process, including the literature review chapter, the description of the selected research methodology, and the “discussion and conclusion” chapter.
- Published
- 2020
- Full Text
- View/download PDF
32. A policy-based management approach to security in cloud systems
- Author
-
Abwnawar, Nasser
- Subjects
005.8 - Abstract
In the era of service-oriented computing, ICT systems grow exponentially in size and complexity, becoming more and more dynamic and distributed, often spanning different geographical locations as well as multiple ownerships and administrative domains. At the same time, complex software systems are serving an increasing number of users accessing digital resources from various locations. In these circumstances, enabling efficient and reliable access control is becoming an inherently challenging task. A representative example is a hybrid cloud environment, where various parts of a distributed software system may be deployed locally, within a private data centre, or on a remote public cloud. Accordingly, valuable business information is expected to be transferred across these different locations, and yet to be protected from unauthorised or malicious access at all times. Even though existing access control approaches seem to provide a sufficient level of protection, they are often implemented in a rather coarse-grained and inflexible manner, such that access control policies are evaluated without taking into consideration the current locations of requested resources and requesting users. This results in a situation where, in a relatively ‘safe’ environment (e.g., a private enterprise network), unnecessarily complex and resource-consuming access control policies are put in place, while, conversely, in external, potentially ‘hostile’ network locations access control enforcement is insufficient. In these circumstances, it becomes desirable for an access control mechanism to distinguish between various network locations so as to enable a differentiated, fine-grained, and flexible approach to defining and enforcing access control policies for heterogeneous environments. 
For example, in its simplest form, more stringent and protective policies need to be in place wherever remote locations are concerned, whereas some constraints may be relaxed as soon as data is moved back to a local secure network. Accordingly, this PhD research effort aims to address the following research question: how can heterogeneous computing systems, spanning multiple physical and logical network locations as well as different administrative domains and ownerships, be enabled with support for location-aware access control policy enforcement, implementing differentiated, fine-grained access control depending on the current location of users and requested resources? To address this question, the presented thesis introduces the notions of ‘location’ and ‘location-awareness’ that underpin the design and implementation of a novel access control framework, which applies and enforces different access control policies depending on the current (physical and logical) network locations of policy subjects and objects. To achieve this, the approach takes the existing access control policy language SANTA, which is based on Interval Temporal Logic, and combines it with Topological Logic, thereby creating a holistic solution covering both the temporal and the spatial dimensions. As demonstrated by a hypothetical case study, based on a distributed cloud-based file sharing and storage system, the proposed approach has the potential to address the outlined research challenges and advance the state of the art in the field of access control in distributed heterogeneous ICT environments.
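A minimal sketch of location-aware policy evaluation, illustrative only: this is not the SANTA policy language, and the roles, locations and actions are hypothetical. The point is that the same request receives different decisions depending on the current network location of the requested resource.

```python
# Location-aware access rules: the resource's current network location
# is part of the policy key, so constraints tighten off-premises and
# relax when data is back in the private data centre.
POLICIES = {
    # (role, resource_location) -> permitted actions
    ('analyst', 'private_dc'):   {'read', 'write'},
    ('analyst', 'public_cloud'): {'read'},          # stricter off-premises
    ('admin',   'private_dc'):   {'read', 'write', 'delete'},
    ('admin',   'public_cloud'): {'read', 'write'},
}

def is_permitted(role, resource_location, action):
    """Location-aware check: deny by default, then look up the rule."""
    return action in POLICIES.get((role, resource_location), set())
```

The deny-by-default lookup means any unlisted (role, location) pair is rejected, which is the safe failure mode for a heterogeneous environment.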
- Published
- 2020
33. Towards real-time anomaly detection within X-ray security imagery : self-supervised adversarial training approach
- Author
-
Akcay, Samet
- Subjects
005.8 - Abstract
Automatic threat detection is an increasingly important area in X-ray security imaging since it is critical to aid screening operators in identifying concealed threats. Due to the cluttered and occluded nature of X-ray baggage imagery and limited dataset availability, few studies in the literature have systematically evaluated automated X-ray security screening. This thesis provides an exhaustive evaluation of the use of deep Convolutional Neural Networks (CNN) for the image classification and detection problems posed within the field. The use of transfer learning overcomes the limited availability of data examples for the objects of interest. A thorough evaluation reveals the superiority of CNN features over conventional hand-crafted features. Further experimentation also demonstrates the capability of supervised deep object detection techniques as object localization strategies within cluttered X-ray security imagery. By addressing the limitations of the current X-ray datasets, such as annotation and class imbalance, the thesis subsequently transitions its scope towards deep unsupervised techniques for the detection of anomalies, based on training on normal (benign) X-ray samples only. The proposed anomaly detection models within the thesis employ a conditional encoder-decoder generative adversarial network that jointly learns the generation of the high-dimensional image space and the inference of the latent space; minimizing the distance between these images and the latent vectors during training aids in learning the data distribution of the normal samples. As a result, a larger distance from this learned data distribution at inference time is indicative of an outlier from that distribution, i.e. an anomaly. Experimentation over several benchmark datasets, from varying domains, shows the model's efficacy and superiority over previous state-of-the-art approaches. 
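The core idea, fitting a model of normal samples only and scoring new samples by their distance from the learned distribution, can be sketched with a linear (PCA-based) autoencoder standing in for the thesis's conditional encoder-decoder GAN. The data here are synthetic:

```python
import numpy as np

# Synthetic "normal" data: mostly varying along one direction. The
# encoder/decoder pair below is a linear autoencoder obtained from
# PCA, a deliberately simple stand-in for the adversarial network.
rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0],
                                               [0.0, 0.2]])

mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = Vt[:1]                      # 1-D latent space learned from normal data

def anomaly_score(x):
    """Reconstruction error after encoding into the normal subspace."""
    z = (x - mean) @ basis.T        # encode
    recon = z @ basis + mean        # decode
    return float(np.linalg.norm(x - recon))
```

A sample consistent with the normal distribution reconstructs almost perfectly, while one off the learned subspace yields a large score, the "large distance means anomaly" criterion the abstract describes.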
Based on the current approaches and open problems in deep learning, the thesis finally provides discussion and future directions for X-ray security imagery.
- Published
- 2020
34. Negotiation transparency and consistency in configurable protocols : an empirical investigation
- Author
-
Alashwali, Eman Salem and Martin, Andrew
- Subjects
005.8 ,Computer Science ,Cyber Security - Abstract
Configurability (also known as agility) is a protocol design framework that allows protocols to support multiple values for parameters such as the protocol version and ciphersuite. At the beginning of a new protocol session, both communicating parties, e.g. client and server, negotiate these parameters to reach a mutual agreement on optimal values, which will be used for the rest of the session. The parameter negotiation phase is critical as it defines the security guarantees that the protocol can provide in a particular session. Hence, it has been an attractive target for downgrade attacks. While the literature has looked at the authenticity and integrity of parameter negotiation in configurable protocols to prevent downgrade attacks under the man-in-the-middle attacker model, negotiation transparency and consistency under other attacker models have been largely overlooked. Are there unexplored attacker models that can result in a downgrade? Can a semi-trusted server discriminate against its clients without being detected? Can two clients' requests to the same server receive inconsistent security guarantees? Can we achieve a better balance between security and backward compatibility? In this thesis we aim to answer these unexplored, interrelated questions, with a focus on the TLS protocol as one of the most important and widely used configurable protocols. To this end, we first introduce a taxonomy of downgrade attacks in the TLS protocol and application protocols using TLS. Second, we define three types of negotiation models based on a new notion we introduce, which we call the "negotiation power". Third, we introduce a novel attacker model which we call the "discriminatory" model. 
Fourth, through a measurement-based case study on the Forward Secrecy property and the TLS protocol, we find that there are indeed servers that select non-Forward-Secrecy ciphersuites despite supporting Forward Secrecy, showing that, in the same vein, discrimination downgrade attacks can go unnoticed. Fifth, through two measurement-based case studies in TLS and HTTPS, we quantify inconsistencies in HTTPS and TLS responses to requests that differ in subtle variables that are not expected to affect the received security guarantees. Namely, we quantify inconsistent server responses to requests with versus without the www. prefix, and to requests from different geographic locations. Finally, we examine the concept of "prior knowledge" as a means to reduce the downgrade attack surface. The results of this thesis introduce transparency and consistency as needed properties in configurable protocols, and show that they are not perfectly achieved in widely used protocols today, such as TLS and HTTPS.
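The Forward Secrecy measurement described above can be sketched as follows. This is an illustrative probe, not the thesis's measurement tooling: it uses Python's standard `ssl` module to read the negotiated version and ciphersuite, plus a heuristic name check for an ephemeral key exchange.

```python
import socket
import ssl

def is_forward_secret(cipher_name, tls_version):
    """Heuristic forward-secrecy check on a negotiated ciphersuite.

    TLS 1.3 suites always use an ephemeral (EC)DHE key exchange; for
    TLS 1.2 and below, the suite name must indicate an ephemeral
    exchange (ECDHE/DHE/EDH in OpenSSL naming).
    """
    if tls_version == 'TLSv1.3':
        return True
    return ('ECDHE' in cipher_name or cipher_name.startswith('DHE')
            or 'EDH' in cipher_name)

def probe(host, port=443):
    """Connect and report the negotiated version, suite and FS status."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name, version, _bits = tls.cipher()
            return version, name, is_forward_secret(name, version)
```

Running `probe` against the same host from different vantage points, or with varied ClientHello offers, is the kind of comparison the thesis's case studies perform at scale.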
- Published
- 2020
35. User profiling based on network application traffic monitoring
- Author
-
Shaman, Faisal
- Subjects
005.8 ,User profiling ,Network traffic analysis - Abstract
There is increasing interest in identifying users and behaviour profiling from network traffic metadata for traffic engineering and security monitoring. However, user identification and behaviour profiling in real-time network management remains a challenge, as the activities and underlying interactions of network applications are constantly changing. User behaviour is also changing and adapting in parallel, due to changes in the online interaction environment. A major challenge is how to detect user activity among generic network traffic in terms of identifying the user and his/her changing behaviour over time. Another issue is that relying only on computer network information (Internet Protocol [IP] addresses) directly to identify individuals who generate such traffic is not reliable due to user mobility and IP mobility (resulting from the widespread use of the Dynamic Host Configuration Protocol [DHCP]) within a network. In this context, this project aims to identify and extract a set of features that may be adequate for use in identifying users based on their network application activity and timing resolution to describe user behaviour. The project also provides a procedure for traffic capturing and analysis to extract the required profiling parameters; the procedure includes capturing flow traffic and then performing statistical analysis to extract the required features. This will help network administrators and internet service providers to create user behaviour traffic profiles in order to make informed decisions about policing and traffic management and investigate various network security perspectives. The thesis explores the feasibility of user identification and behaviour profiling in order to be able to identify users independently of their IP address. 
In order to maintain privacy and overcome the issues associated with encryption (which exists on an increasing volume of network traffic), the proposed approach utilises data derived from generic flow network traffic (NetFlow information). A number of methods and techniques have been proposed in prior research for user identification and behaviour profiling from network traffic information, such as port-based monitoring and profiling, deep packet inspection (DPI) and statistical methods. However, the statistical methods proposed in this thesis are based on extracting relevant features from network traffic metadata, which are utilised by the research community to overcome the limitations that occur with port-based and DPI techniques. This research proposes a set of novel statistical timing features extracted by considering application-level flow sessions identified through Domain Name System (DNS) filtering criteria and timing resolution bins: one-hour time bins (0-23) and quarter-hour time bins (0-95). The novel time bin features are utilised to identify users by representing their 24-hour daily activities by analysing the application-level network traffic based on an automated technique. The raw network traffic is analysed based on the development of a features extraction process in terms of representing each user’s daily usage through a combination of timing features, including the flow session, timing and DNS filtering for the top 11 applications. In addition, media access control (MAC) and IP source mapping (in a truth table) is utilised to ensure that profiling is allocated to the correct host, even if the IP addresses change. The feature extraction process developed for this thesis focuses more on the user, rather than machine-to-machine traffic, and the research has sought to use this information to determine whether a behavioural profile could be developed to enable the identification of users. 
Network traffic was collected and processed using the aforementioned feature extraction process for 23 users for a period of 60 days (8 May-8 July 2018). The traffic was captured from the Centre for Cyber Security, Communications and Network Research (CSCAN) at the University of Plymouth. The results of identifying and profiling users from extracted timing features behaviour show that the system is capable of identifying users with an average true positive identification rate (TPIR) based on hourly time bin features for the whole population of ~86% and ~91% for individual users. Furthermore, the results show that the system has the ability to identify users based on quarter-hour time bin features, with an average TPIR of ~94% for the whole population and ~96% for the individual user.
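The hourly time-bin features described above can be sketched as follows (an illustrative reconstruction, not the thesis's pipeline; the flow timestamps are hypothetical):

```python
from collections import Counter
from datetime import datetime

def hourly_bins(flow_timestamps):
    """Build the 24-dimensional hourly activity feature (bins 0-23).

    Each flow-session start time increments the bin for its hour of
    day; the normalised histogram summarises a user's daily rhythm.
    (The quarter-hour variant uses 96 bins: hour * 4 + minute // 15.)
    """
    counts = Counter(ts.hour for ts in flow_timestamps)
    total = sum(counts.values()) or 1
    return [counts.get(h, 0) / total for h in range(24)]

# Hypothetical flow-session start times for one user on one day.
flows = [datetime(2018, 5, 8, 9, 5), datetime(2018, 5, 8, 9, 40),
         datetime(2018, 5, 8, 13, 15), datetime(2018, 5, 8, 21, 55)]
profile = hourly_bins(flows)
```

A profile like this would be built per application (after DNS filtering) and compared across days to identify the user independently of the source IP address.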
- Published
- 2020
36. Building cyber defense training capacity
- Author
-
Moore, Erik
- Subjects
005.8 ,cybersecurity ,defense ,training ,education ,cyber defense ,virtualization ,agile ,cyber identity ,collaborative ,bit induction ,Digital Identity ,Psychometric ,risk management - Abstract
As society advances in terms of information technology, the dependency on cyber secure systems increases. Likewise, the need to enhance both the quality and relevance of education, training, and professional development for cybersecurity defenders increases proportionately. Without a continued supply of capable cyber defenders who can come to the challenge well-prepared and continuously advance their skills, the reliability and thus the value of information technology systems will be compromised to the point that new information-driven societal structures in commerce, banking, education, infrastructure, and others across the globe would be put at risk. The body of research presented here provides a progressive building of capacity to support information technology, cybersecurity, and cyber defense training efforts. The work starts by designing infrastructure virtualization methods and problem modeling, then advances to creating and testing tunable models for both technical and social-psychological support capabilities. The initial research was designed to increase the capacity of Regis University in education simulations and cyber competitions. As this was achieved, the goals evolved to include developing effective multi-agency cyber defense exercises for government and private sector participants. The research developing hands-on computer laboratory infrastructure presents novel methods for enhancing the delivery of training and cyber competition resources. The multi-method virtualization model describes a strategy for analyzing a broad range of virtualization services for making agile cyber competition, training, and laboratory spaces that are the technical underpinning of the effort. The work adapts the agile development method SCRUM for producing training events with limited resources. 
Parallel to agile training systems provisioning, the research includes designing a 3D virtual world avatar-based resource to help students develop spatial skills associated with physical security auditing. It consists of a virtual world datacenter and training program. The second category of contributions includes the presentation of new models for analyzing complex concepts in cybersecurity. These models provide students with tools that allow them to map out newly acquired skills and understanding within a larger context. One model maps how classical security challenges change as digital technologies are introduced, using a concept called “bit induction.” The other model maps out how technology can affect one’s sense of identity, and how to manage its disruption. The third area of contribution includes a rapid form of psychometric feedback, a customized quantitative longitudinal capability assessment, and an agile framework that is an extension of the earlier agile method adaptations. The most recent category of contribution extends the training analysis to the resultant training capabilities, providing new models that describe live operation using operational load analysis to characterize behaviors along an incident timeline. The results of this research include novel cybersecurity frameworks, analytical methods, and education deployment models, along with interpretation and documented implementation, to support education institutions in meeting the emerging risks of society. Specific contributions include new models for understanding the disruptiveness of cyberattacks, models for agilely and virtually deploying immersive hands-on laboratory experiences, and interdisciplinary approaches to education that meet new psycho-sociological challenges in cyber defense. 
These contributions extend the forefront of cybersecurity education and training in a coordinated way, contributing to the effectiveness and relevance of education solutions as society’s cybersecurity needs evolve.
- Published
- 2020
37. Optimizing deterrence strategies in state-state cyber conflicts : theoretical models for strategic cyber deterrence
- Author
-
Al Azwani, Nasser
- Subjects
005.8 ,QA75 Electronic computers. Computer science - Abstract
Deterrence has successfully prevented nuclear confrontation for more than five decades. The motive for conducting this research is to respond to the growth of cyber threats and to establish whether deterrence can stop state cyber adversaries from employing cyber threats against a state's cyberspace. Considerable effort has gone into developing defensive cyber technologies and solutions, yet massive cyber-attacks still occur and are growing in complexity, severity and quantity. States therefore need a new tactic for dealing with cyber threats rather than relying only on cyber defence or offence; despite the challenges, there is a reasonable chance that cyber deterrence can work. The research approach is to examine cyber deterrence theory, inspired by traditional deterrence theory combined with game-theoretic models. This approach responds to the argument in three dimensions. First, it analyses the relevance of credibility to deterrence assumptions, the reasons for associating the credibility of a cyber threat with a cyber deterrence strategy, and credibility's role in the success or failure of that strategy. The developed analytical model consists of two players involved in a cyber conflict. The selected case study assists in generating a clear understanding, from a real-life context, of the pivotal role of credibility in optimizing deterrence strategy. Second, a cyber escalation model is developed reflecting the failure of cyber threat credibility (as a threat of punishment) in deterring state cyber adversaries. The model explores whether the cyber escalation ladder remains limited to cyberspace or may extend to involve nuclear or other domains of conflict. Third, a deterrence-by-entanglement model is proposed as a new approach that could succeed as a cyber deterrence strategy where traditional deterrence models fall short. 
The deterrence-by-entanglement analysis moves from general deterrence concepts to a narrower investigation measuring the effectiveness of deterrence by entanglement in reducing conflict heat. It explores the degree to which cyber deterrence by entanglement can assist a state in deterring its cyber adversaries through a more peaceful approach. Each chapter concludes with a section prescribing strategies from which states can benefit in real-life practice. These strategies and lessons learned will assist states in understanding the essential requirements for developing their credibility in cyberspace and draw the lines for optimizing their cyber deterrence.
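The two-player analytical model and credibility argument above can be sketched as a minimal normal-form game (all payoff values below are hypothetical illustrations, not figures from the thesis): when retaliation is costly for the defender, the punishment threat is not credible and no pure-strategy equilibrium deters the attacker; a costless deterrent posture restores a deterrence equilibrium.

```python
from itertools import product

ATTACKER, DEFENDER = ["attack", "refrain"], ["retaliate", "ignore"]

def pure_nash_equilibria(payoffs):
    """Action pairs from which neither player gains by deviating unilaterally."""
    eqs = []
    for a, d in product(ATTACKER, DEFENDER):
        ua, ud = payoffs[(a, d)]
        best_a = all(ua >= payoffs[(a2, d)][0] for a2 in ATTACKER)
        best_d = all(ud >= payoffs[(a, d2)][1] for d2 in DEFENDER)
        if best_a and best_d:
            eqs.append((a, d))
    return eqs

# Costly retaliation: the threat is not credible, so deterrence fails
# (no pure-strategy equilibrium at all in this toy parameterisation).
costly = {("attack", "retaliate"): (-5, -3), ("attack", "ignore"): (4, -6),
          ("refrain", "retaliate"): (0, -1), ("refrain", "ignore"): (0, 0)}

# Costless deterrent posture: the threat becomes credible and
# ("refrain", "retaliate") is an equilibrium, i.e. deterrence succeeds.
credible = {("attack", "retaliate"): (-5, -3), ("attack", "ignore"): (4, -6),
            ("refrain", "retaliate"): (0, 0), ("refrain", "ignore"): (0, 0)}

print(pure_nash_equilibria(costly))    # []
print(pure_nash_equilibria(credible))  # [('refrain', 'retaliate')]
```

The contrast between the two payoff tables mirrors the thesis's point that the success or failure of a deterrence strategy hinges on the credibility of the threatened punishment.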
- Published
- 2020
38. Conceptualising adaptive cyber risk management : complexity, rationality and knowledge
- Author
-
Sallos, Mark and Yoruk, Esin
- Subjects
005.8 - Abstract
The increasing reliance of organisations on ICT-enabled interconnectivity for value creation has redefined the boundaries and attributes of potential security vulnerabilities (i.e. causal intricacy, scope, non-locality and non-linearity). Cybersecurity presents an epistemic climate that is distinctly hostile due to its domain-specific dynamics, complexity, dichotomous objectives, and effect on behavioural tendencies. Within the thesis, the local manifestation of these dynamics is described as a heuristic – a ‘knowledge problem’. This epistemic hostility hinders efforts to address and pre-empt the emerging threat of cybersecurity incidents in a manner that is proportional and contextually appropriate. The research argues that the degree of epistemic hostility faced by organisations, and its underpinning systemic and behavioural mechanisms, are inadequately represented in common inference-based constructs, like risk frameworks, which guide organisational practice, resulting in a ‘context-construct gap’. Throughout the thesis, these premises are deconstructed, explored and addressed in three dimensions: a literature based, theoretical analysis focused on the interaction between risk, complex systems, and ‘rationality’; an empirical, critical realist case study which explores and calibrates the postulated explanatory mechanisms in an illustrative real-world context; and a prescriptive formulation of an Adaptive Cyber Risk Management framework based on the theoretical and empirical findings of the study. The contribution includes a potential avenue for further cross-disciplinary enquiry into organisational cybersecurity management through the ‘knowledge-problem’ heuristic, which explores the pragmatic barriers to inference-based adaptation efforts. 
In addition, the Adaptive Cyber Risk Management framework proposes a conceptual logic to mitigate against the issues raised by the theoretical and empirical analysis, which include deep uncertainty, actor and decision maker bias, limited situational awareness, and systemic communication/coordination difficulties.
- Published
- 2020
39. Formalising cryptography using CryptHOL
- Author
-
Butler, David Thomas, Aspinall, David, and Gascon, Adria
- Subjects
005.8 ,formal verification ,cryptography ,Isabelle/HOL - Abstract
Security proofs are now a cornerstone of modern cryptography. Provable security has greatly increased the level of rigour of security statements; however, proofs of these statements often present informal or incomplete arguments. In fact, many proofs are still considered to be unverifiable due to their complexity and length. Formal methods offer one way to establish far higher levels of rigour and confidence in proofs, and tools have been developed to formally reason about cryptography and obtain machine-checked proofs of security statements. In this thesis we use the CryptHOL framework, embedded inside Isabelle/HOL, to reason about cryptography. First we consider two fundamental cryptographic primitives: Σ-protocols and commitment schemes. Σ-protocols allow a Prover to convince a Verifier that they know a value x without revealing anything beyond the fact that they know x. Commitment schemes allow a Committer to commit to a chosen value while keeping it hidden, and to reveal the value at a later time. We first formalise abstract definitions for both primitives and then prove multiple case studies and general constructions secure. A highlight of this part of the work is our general proof of the construction of commitment schemes from Σ-protocols. This result means that, within our framework, for every Σ-protocol proven secure we obtain, for free, a new commitment scheme that is also secure. We also consider compound Σ-protocols that allow for the proof of AND and OR statements. As a result of our formalisation effort we are able to highlight which of the different definitions of Σ-protocols from the literature is the correct one; in particular we show that the most widely used definition of Σ-protocols is not sufficient for the OR construction. 
To show our frameworks are usable we also formalise numerous other case studies of Σ-protocols and commitment schemes, namely the Σ-protocols by Schnorr, Chaum-Pedersen, and Okamoto, and the commitment schemes by Rivest and Pedersen. Second, we consider Multi-Party Computation (MPC). MPC allows multiple distrusting parties to jointly compute functions over their inputs while keeping those inputs private. We formalise frameworks to abstractly reason about two-party security in both the semi-honest and malicious adversary models and then instantiate them for numerous case studies and examples. A particularly important two-party MPC protocol is Oblivious Transfer (OT) which, in its simplest form, allows the Receiver to choose one of two messages from the other party, the Sender; the Receiver learns nothing of the other message held by the Sender, and the Sender does not learn which message the Receiver chose. Due to OT's fundamental importance we focus much of our formalisation effort here; a highlight of this section of our work is our general proof of security of a 1-out-of-2 OT (OT21) protocol in the semi-honest model that relies on Extended Trapdoor Permutations (ETPs). We formalise the construction assuming only that an ETP exists, meaning any instantiation for a known ETP only requires one to prove that it is in fact an ETP --- the security results on the protocol come for free. We demonstrate this by showing how the RSA collection of functions meets the definition of an ETP, and thus how the security results are obtained easily from the general proof. We also provide proofs of security for the Naor-Pinkas (OT21) protocol in the semi-honest model as well as a proof of security for the two-party GMW protocol --- a protocol that allows for the secure computation of any boolean circuit. The malicious model is more complex as the adversary can behave arbitrarily. 
In this setting we again consider an OT21 protocol and prove it secure with respect to our abstract definitions.
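As a concrete companion sketch (the thesis formalises these protocols in Isabelle/HOL; the toy group parameters below are invented for illustration and are far too small for real use), one honest round of Schnorr's Σ-protocol for knowledge of a discrete logarithm x with public value h = g^x runs as follows:

```python
import secrets

# Toy parameters (illustration only): an order-q subgroup of Z_p* with p = 2q + 1.
q = 1019            # prime subgroup order
p = 2 * q + 1       # 2039, a safe prime
g = 4               # generator of the order-q subgroup (a quadratic residue mod p)

def schnorr_round(x):
    """One honest run of Schnorr's Sigma-protocol for knowledge of x, h = g^x mod p."""
    h = pow(g, x, p)
    r = secrets.randbelow(q)           # prover's commitment randomness
    a = pow(g, r, p)                   # commitment, sent to the verifier
    e = secrets.randbelow(q)           # verifier's random challenge
    z = (r + e * x) % q                # prover's response
    # Verifier accepts iff g^z == a * h^e (mod p); completeness holds because
    # g^(r + e*x) = g^r * (g^x)^e.
    return pow(g, z, p) == (a * pow(h, e, p)) % p

print(schnorr_round(secrets.randbelow(q)))  # True
```

Special honest-verifier zero-knowledge and special soundness, which the thesis's abstract Σ-protocol definitions capture, are of course not demonstrated by this completeness check alone.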
- Published
- 2020
- Full Text
- View/download PDF
40. ICSrank : a security assessment framework for Industrial Control Systems (ICS)
- Author
-
Alhasawi, S.
- Subjects
005.8 ,QA75 Electronic computers. Computer science ,QA76 Computer software - Abstract
This thesis joins a lively dialogue in the technological arena on the issue of cybersecurity and, specifically, infrastructure cybersecurity as it relates to Industrial Control Systems. Infrastructure cybersecurity is concerned with the security of critical infrastructure that has significant value to the physical infrastructure of a country and is heavily reliant on IT, and with the security of that technology. It is an undeniable fact that key infrastructure such as the electricity grid, gas, air and rail transport control, and even water and sewerage services rely heavily on technology. Threats to such infrastructure have never been as serious as they are today, and the most sensitive case is the reliance on infrastructure requiring cybersecurity in the energy sector. The move to smart technology and automation is well under way: the Internet is witnessing an increasing number of connected industrial control systems (ICS), many of which do not follow security guidelines. Privacy and sensitive data are also an issue, as leaked sensitive information is manipulated by adversaries to accomplish certain agendas. Open Source Intelligence (OSINT) is adopted by defenders to improve protection and safeguard data. The research presented in this thesis proposes ICSrank, a novel security risk assessment framework for ICS devices based on OSINT. ICSrank ranks the risk level of online and offline ICS devices, categorizing, assessing and ranking OSINT data. ICSrank provides an additional layer of defence and mitigation in ICS security through the identification of risky OSINT and devices. Security best practice always begins with the identification of risk as a first step prior to security implementation. Risk is evaluated using mathematical algorithms that assess the OSINT data. The results achieved during the assessment and ranking process were informative and realistic, and the ICSrank framework produced security and risk levels that were more accurate and informative than traditional existing methods.
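A minimal sketch of the kind of OSINT-weighted ranking the abstract describes (the indicator names and weights below are invented for illustration; they are not the thesis's actual ICSrank algorithm):

```python
# Hypothetical binary OSINT indicators and weights for an ICS device.
weights = {"exposed_online": 0.4, "default_credentials": 0.25,
           "known_cve": 0.25, "leaked_docs": 0.1}

def icsrank_score(indicators):
    """Combine binary OSINT indicators into a risk score in [0, 1]."""
    return sum(weights[k] for k, present in indicators.items() if present)

devices = {
    "plc-a": {"exposed_online": True, "default_credentials": True,
              "known_cve": False, "leaked_docs": False},
    "rtu-b": {"exposed_online": False, "default_credentials": False,
              "known_cve": True, "leaked_docs": True},
}

# Rank devices from highest to lowest risk using only their OSINT footprint.
ranked = sorted(devices, key=lambda d: icsrank_score(devices[d]), reverse=True)
print(ranked)  # ['plc-a', 'rtu-b']
```

The point of the sketch is the workflow (identify risky OSINT, score, rank before implementing controls), not the particular weighting, which in the thesis is derived from its own mathematical assessment of the OSINT data.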
- Published
- 2020
- Full Text
- View/download PDF
41. Understanding the challenges of using personal data in media experiences
- Author
-
Sailaja, Neelima
- Subjects
005.8 ,QA 75 Electronic computers. Computer science - Abstract
This thesis explores the challenges associated with the turn to personal data in novel media experiences. Emergent media experiences are turning towards personal data as a resource to enhance the possibilities for innovation in media service provision. But while the capabilities presented by personal data are manifold, historically this shift has been seen to introduce many socio-technical challenges that confront the use of these experiences. It is the study and understanding of these challenges, as manifest within the scope of media experiences leveraging personal data, that this research turns to. The research studies this problem from the perspective of the two stakeholders involved: the users and the service providers. An overtly multidisciplinary approach is adopted, starting from the literature review, which engages with previous work from media research, technology, digital economy, law and ethics. A range of qualitative research methods, including informal interviews, focus groups, scenario-based design, design fiction, thematic analysis, grounded theory and endogenous topic analysis, are employed within the three studies reported here. The two formative studies seek to elicit user and service provider viewpoints on the challenges of using personal data in media experiences. This is followed by the co-design of a media experience that leverages personal data while including a ‘data dialogue’ that aims to respond to the challenges previously uncovered. This design is presented to users and service providers to evaluate their responses to the ‘data dialogue’ and to further probe the challenges of using personal data within the media experience. The contribution of this work falls into two categories: conceptual contributions and implications for design. The conceptual contributions explicate the following challenges, as reasoned by both users and service providers. 
They present the practically grounded subtleties embodied by these challenges when considered within the context of media experiences leveraging user personal data. This is done by comparing the findings of the studies reported here to build upon and contribute to previous conceptualisations of these challenges within literature from multiple disciplines. These conceptual contributions are: value, trust, privacy, transparency, control and accountability. The implications for design build upon these conceptual contributions to present practically reasoned sensitivities to be taken into account when designing media experiences that leverage personal data. These recommendations combine the viewpoints of users and service providers to present design considerations that are sensitive to the challenges raised by both parties, working towards responding to those challenges. These sensitivities are focused on the following challenges: trust, privacy, transparency, control and accountability. The conceptual engagement with these challenges highlights the importance of enabling users to take a more central role in this scenario, while the implications for design provide sensitivities that help realise this shift, working towards alleviating the challenges of both users and service providers.
- Published
- 2020
42. A novel framework for improving cyber security management and awareness for home users
- Author
-
Alotaibi, Fayez Ghazai S.
- Subjects
005.8 ,information security awareness ,information security management ,cyber security - Abstract
A wide and increasing range of different technologies, devices, platforms, applications and services are used every day by home users. In parallel, home users are experiencing a range of different online threats and attacks. Indeed, home users are increasingly being targeted because they lack knowledge and awareness of potential threats and of how to protect themselves. The increase in technologies and platforms also increases the burden upon a user to understand how to apply security across differing technologies, operating systems and applications, making the management of security across their technology portfolio increasingly troublesome and time-consuming. Thus, a more innovative, convenient and usable security management solution is vital. This thesis investigates current online awareness tools and reviews studies that try to enhance cybersecurity awareness and education among home users. It is evident from the analysis that most studies proposing “one-size-fits-all” solutions cannot provide users with tailored awareness content based on criteria such as each user's current needs, prior knowledge, and security priorities. The thesis proposes an approach for improving security management and awareness for home users by providing them with customised security awareness. A design science research methodology has been used for understanding the current problem and for creating and developing an artefact that can enhance security management and awareness for home users. A number of security controls and requirements were identified that need to be managed and monitored for different technologies and services. In addition, the research designed several preliminary interfaces showing the main components and aspects of the proposed solution based on HCI principles. 
A participant-based study was undertaken to get feedback on the initial design requirements and interfaces. A survey of 434 digital device users revealed a positive correlation between home users' security concern, knowledge and management across different security aspects. Positive feedback and valuable comments were received about the preliminary interface designs in terms of usability and functionality. This fed into a final design phase proposing a novel architecture for enhancing security management and awareness for home users. The proposed framework can create and assign different security policies for different digital devices. These assigned policies are monitored, checked and managed in order to review the user's compliance with them and to provide bespoke security awareness. In addition, a mock-up design was developed to simulate the proposed framework, showing the interactions between its different components and sections in order to visualise the main concepts and the functions that would be performed when it is deployed in a real environment. Ultimately, two separate focus group discussions, involving experts and end-users, were conducted to provide a comprehensive evaluation of the identified research problem and of the feasibility and effectiveness of the proposed approach. The overall feedback from the two discussions was positive, constructive and encouraging. The experts agreed that the identified research problem is important and real, and the participants agreed that the proposed framework is feasible and effective in improving security management and awareness for home users. The outcomes also showed a reasonable level of satisfaction from the participants with the different components and aspects of the proposed design.
- Published
- 2020
43. User-controlled cyber-security using automated key generation
- Author
-
Shnishah, Halima
- Subjects
005.8 ,encryption algorithms - Abstract
Traditionally, several different methods are capable of providing an adequate degree of security against the threats and attacks that exist for revealing cryptographic keys. Although almost all traditional methods give a good level of immunity to breaches of security keys, their biggest weakness is the dependency on third-party applications. Such dependency is not acceptable for high-security applications: it is more secure for the key generation process to be in the hands of the end users rather than a third party, and giving third parties access can make applications more vulnerable to data theft, security breaches or even a loss of integrity. In this research, the evolutionary computing tool Eureqa is used to generate encryption keys by modelling pseudo-random input data. Previous approaches using this tool required a calculation time too long for practical use, and addressing this drawback is the main focus of the research. The work proposes a number of new approaches to generating secret keys for the encryption and decryption of data files, comparing their ability to operate securely using a range of statistical tests and their ability to reduce calculation time using realistic practical assessments. Common performance tests include throughput, chi-square, histogram, encryption and decryption time, key sensitivity and entropy analysis. From the results of the statistical tests, it can be concluded that the proposed data encryption and decryption algorithms are both reliable and secure, eliminating the dependency on third-party applications for security keys. The approach also takes less time for users to generate highly secure keys compared with previously known techniques. 
The keys generated via Eureqa also have great potential to be adapted to data communication applications which require high security.
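As an illustrative sketch of one of the statistical tests listed above (entropy analysis; this is standard practice rather than the thesis's specific procedure), the byte-level Shannon entropy of a candidate ciphertext can be computed and compared against the near-8-bits-per-byte profile expected of sound encryption:

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; values near 8.0 indicate the
    uniform, structureless output expected of a sound cipher."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(byte_entropy(os.urandom(1 << 16)))  # close to 8.0, ciphertext-like
print(byte_entropy(b"aaaa" * 1000))       # 0.0: a constant stream has no entropy
```

A key-sensitivity check in the same spirit would re-encrypt with a one-bit key change and verify that roughly half the output bits flip; the entropy statistic above is the simplest member of that test battery.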
- Published
- 2020
- Full Text
- View/download PDF
44. Privacy : preserving third party data mining using cryptography
- Author
-
Almutairi, Nawal
- Subjects
005.8 - Abstract
The research presented in this thesis is directed at investigating and evaluating the use of cryptography to provide secure data analysis using a third party. The motivation is the emergence of Data Mining as a Service (DMaaS), which in turn has been motivated by cloud computing technology that offers the potential for reducing the operational cost of analysing data by utilising the storage and computing services provided by cloud service providers. DMaaS has also opened the door for collaborative data mining, whereby multiple data owners pool their data for analysis, using a cloud provider offering DMaaS, to gain some mutual benefit. The challenge is for the data analysis to be conducted in a secure manner. Data privacy can be substantially preserved using cryptography. With the emergence of Homomorphic Encryption (HE) schemes, encrypted data can, to an extent, be securely processed without decryption. However, current HE schemes impose constraints on the computation, both in terms of the arithmetic operations provided (not all operations required by data mining algorithms are supported) and computational overhead (multiplication can become very slow). Solutions introduced in the literature include: (i) resorting to Secure Multi-Party Computation (SMPC) protocols or (ii) substantial data owner involvement whenever unsupported operations are required. In both cases, the amount of data owner participation is significant, calling into question the advantages that DMaaS has to offer. The research presented in this thesis asks the question "Using cryptography, is it possible to securely, effectively and efficiently delegate data analysis to a third party data miner while minimising any required interaction with data owners?". The fundamental idea presented, so as to achieve secure DMaaS, is to use a proxy for the data rather than the data itself; in particular, to use distance matrices as the proxy. 
A range of distance matrix implementations are presented, each of increasing sophistication. The utility of the data proxy idea is illustrated using a collection of proposed secure data clustering and classification algorithms that operate over encrypted data. The thesis also introduces several encryption schemes designed to address the limitations of existing schemes in the context of DMaaS. Throughout the thesis two distinctive DMaaS scenarios are considered: the single data owner scenario and the multiple data owner scenario. The proposed distance matrices directed at the single data owner scenario are: (i) Updatable Distance Matrices (UDMs), (ii) Encrypted Updatable Distance Matrices (EUDMs), (iii) Encrypted Distance Matrices (EDMs) and (iv) Secure Chain Distance Matrices (SCDMs); while the distance matrices directed at the multiple data owner scenario are: (i) Global EDMs (GEDMs) and (ii) Super SCDMs (SSCDMs). The proposed concepts, schemes and secure data mining algorithms were evaluated using two categories of data: UCI datasets and randomly generated synthetic datasets. The synthetic datasets were used to evaluate the scalability of the proposed solutions by analysing the runtime as the data size increases. The evaluation was conducted to compare the operation of the proposed approaches with each other and with the relevant standard (insecure) algorithms. The evaluations considered the proposed approaches in terms of: (i) the amount and complexity of the data owner participation in preparing data and in participating when secure data mining was undertaken by the third party data miner (TPDM), (ii) the efficiency of the secure data mining algorithms, (iii) the accuracy of the proposed approaches, (iv) security and (v) scalability in the case of collaborative data mining approaches. The accuracy was measured by comparing the operation of the proposed algorithms to that of standard algorithms operating over unencrypted data. 
The evaluations indicated that the proposed solutions reduced the data owner participation, compared to alternative approaches, while maintaining the effectiveness of the data analysis.
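A minimal plain-text sketch of the data-proxy idea (the thesis's UDM/EUDM/EDM/SCDM variants additionally encrypt or chain the matrix entries; this toy version only shows why a pairwise distance matrix suffices for neighbour-based analysis without exposing the records themselves):

```python
# The data owner computes a pairwise distance matrix once; a third party data
# miner can then run neighbour-based clustering or classification using only
# this proxy, never seeing the raw records.
def distance_matrix(records):
    """Euclidean distances between every pair of records."""
    return [[sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5 for y in records]
            for x in records]

def nearest_neighbour(dm, i):
    """Index of the record closest to record i, computed from the proxy alone."""
    return min((j for j in range(len(dm)) if j != i), key=lambda j: dm[i][j])

records = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.1)]   # held only by the data owner
dm = distance_matrix(records)                     # the proxy handed to the TPDM
print(nearest_neighbour(dm, 0))  # 1
```

Many clustering and k-nearest-neighbour style algorithms consult only such pairwise distances, which is what makes the matrix a workable stand-in for the data in the DMaaS setting.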
- Published
- 2020
- Full Text
- View/download PDF
45. Physical layer key generation in resource constrained wireless communication networks
- Author
-
Moara-Nkwe, K.
- Subjects
005.8 ,QA75 Electronic computers. Computer science - Abstract
Secure wireless communication between resource constrained devices in dynamic deployment scenarios poses a significant challenge to cryptography. This is primarily because the dynamic nature of the deployment environment calls for sophisticated key management strategies, which usually require a trusted third party along with either a highly complex symmetric key management scheme or a public-key scheme. This places a significant burden on the computational resources of a node. Physical layer security (or information-theoretic security) aims to reduce this efficiency burden on devices and add an additional layer of location-based security. Physical layer key generation and refreshment is concerned with techniques for establishing and refreshing cryptographic keys using wireless channel measurements between legitimate nodes. Computational security-based public-key schemes usually derive their security from the difficulty of solving some mathematical problem, such as prime number factorisation or discrete logarithm computation. Practical physical layer-based schemes often derive their security from the difficulty of estimating particular wireless channel parameters, from a different location, with the same accuracy as a legitimately located node. In this thesis, the issue of Physical Layer Secure Key Generation (PLSKG) is discussed, and a novel pairwise PLSKG scheme and a novel Group Physical Layer Secure Key Generation (GPLSKG) scheme for resource constrained devices are proposed. The PLSKG scheme improves on the state of the art by proposing a key generation methodology that avoids the use of iterative quantisation for key reconciliation, reducing the loss of key entropy during the reconciliation process. 
The proposed GPLSKG scheme improves on the state of the art by i) generating keys in a way that provides a means of evaluating and bounding the entropy of the generated key with respect to an adversary and ii) reducing the number of probes that need to be used for key reconciliation in certain deployment scenarios. The proposed schemes are then implemented on off-the-shelf devices and the performance of the schemes evaluated and compared to current state-of-the-art schemes. The schemes are shown to improve the performance of existing state-of-the-art PLSKG schemes and achieve near 100% success rates at short distances. The thesis also presents results on the error bounding in PLSKG schemes and presents results showing how these bounds can be used to make the key generation process more secure. Moreover, the thesis also discusses practical considerations in the design of PLSKG schemes, focusing on areas that have only received cursory treatment in current literature.
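A sketch of the basic quantisation step underlying PLSKG schemes (mean-threshold quantisation over reciprocal channel measurements; the RSSI values below are invented, and the thesis's contribution is precisely to avoid iterative quantisation during reconciliation, which this toy version does not capture):

```python
from statistics import mean

def quantise(rssi_samples):
    """Threshold-quantise channel measurements into key bits (mean threshold)."""
    t = mean(rssi_samples)
    return [1 if s > t else 0 for s in rssi_samples]

# Channel reciprocity: both nodes observe nearly the same fading profile,
# perturbed by local measurement noise (hypothetical dBm values).
alice = [-52.1, -60.3, -48.7, -55.0, -63.2, -47.9]
bob   = [-52.4, -60.0, -49.1, -54.6, -63.5, -48.2]

ka, kb = quantise(alice), quantise(bob)
mismatches = sum(a != b for a, b in zip(ka, kb))
print(ka, kb, mismatches)  # one disagreeing bit that reconciliation must correct
```

An eavesdropper at a different location sees an uncorrelated fading profile and so cannot reproduce these bits; the residual mismatches between the legitimate nodes are what the key reconciliation stage, and its entropy leakage, are about.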
- Published
- 2020
- Full Text
- View/download PDF
46. A novel component based framework for covert data leakage detection
- Author
-
Nafea, H.
- Subjects
005.8 ,QA75 Electronic computers. Computer science - Abstract
Cyber-attacks are causing billions of dollars of losses every year, and data breaches are one of the major causes of these losses. Data breach/leakage is a serious threat to organisations, where any incident can inflict costs not limited to monetary loss but also extending to damage to organisational goodwill, branding and reputation. Steganography is the practice of writing hidden messages via a medium in such a way that only the sender and the intended recipient know about the hidden message. Steganography takes different forms, including text, image, audio, video and network/protocol steganography. Network steganography is increasingly being used by malware to facilitate data leakage. This study focuses on aspects of network steganography at different levels of network packets. Existing tools for data leakage prevention and detection are often bypassed by sophisticated techniques such as network steganography, due to several weaknesses of the existing detection systems. First, these techniques have high time and memory training complexities and require large training data sets; this is challenging as the amount of data generated every second becomes very large in many realms. Second, their false-positive rates are high, making them inaccurate. Finally, there is a lack of a framework catering for needs such as raising alerts, data monitoring, and updating/adapting the threshold value used for checking packets for covert data. To overcome these weaknesses, this study proposes a novel framework that includes continuous data monitoring, threshold maintenance and alert notification. The study also proposes a model based on statistical measures to detect covert data leakage, especially with regard to non-linear chaotic data. 
The main advantage of the proposed framework is its capability of providing more efficient results with tolerance/threshold values. Experimental outcomes indicate that the proposed framework performs better than state-of-the-art techniques in terms of accuracy and efficiency. Additionally, the proposed mathematical model can be used for on-the-fly detection of covert data, as opposed to offline processing methods.
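One way to sketch the threshold-based statistical detection the framework describes (the statistic and tolerance band below are illustrative assumptions, not the thesis's exact model): covert timing channels often distort the natural variability of packet inter-arrival times, so a simple statistic with an adaptively maintained tolerance band can flag suspicious flows.

```python
from statistics import mean, stdev

def covert_score(inter_arrival_times):
    """Coefficient of variation of packet timings; covert timing channels often
    impose unnatural regularity (low score) or artificial modulation (high score)."""
    return stdev(inter_arrival_times) / mean(inter_arrival_times)

def detect(times, band):
    """Flag a flow whose score falls outside the maintained tolerance band."""
    score = covert_score(times)
    return score < band[0] or score > band[1]

normal = [0.8, 1.3, 0.6, 2.1, 0.9, 1.7, 0.4, 1.1]          # bursty, human-like
covert = [1.0, 1.01, 0.99, 1.0, 1.0, 1.01, 0.99, 1.0]      # suspiciously regular
band = (0.1, 1.5)   # tolerance band; updated continuously in the framework

print(detect(normal, band), detect(covert, band))  # False True
```

The continuous monitoring and threshold-maintenance elements of the framework correspond to recomputing `band` as traffic characteristics drift, rather than fixing it offline.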
- Published
- 2020
- Full Text
- View/download PDF
47. CryptDB mechanism on graph databases
- Author
-
Aburawi, Nahla
- Subjects
005.8 - Abstract
The work presented in this thesis is concerned with database security. In particular, we address the problem of querying encrypted data in graph databases. The thesis considers the most popular database security methods from the literature: (i) multi-layered encryption and (ii) encryption adjustment. Encryption is one of the most effective ways to protect sensitive data in a database from various attacks. Querying encrypted data poses two challenges: either the data must be decrypted before querying, leaving it vulnerable to server-side attacks, or one must apply computationally expensive methods for querying the encrypted data.
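A toy sketch of the multi-layered encryption and adjustment idea (this is NOT real encryption; the HMAC/XOR construction below is invented purely to show how peeling an outer random layer down to a deterministic layer lets equality queries run on ciphertexts, the mechanism CryptDB-style systems rely on):

```python
import hashlib
import hmac
import os

K_DET = b"det-key"  # hypothetical key for the deterministic layer

def det_layer(value):
    """Deterministic layer: equal plaintexts map to equal ciphertexts."""
    return hmac.new(K_DET, value, hashlib.sha256).digest()

def rnd_layer(det_ct):
    """Random outer layer: a fresh nonce hides even equal inner ciphertexts."""
    nonce = os.urandom(16)
    pad = hashlib.sha256(nonce).digest()
    return nonce, bytes(a ^ b for a, b in zip(det_ct, pad))

def peel(nonce, rnd_ct):
    """'Adjustment': remove the RND layer, exposing the DET layer for equality."""
    pad = hashlib.sha256(nonce).digest()
    return bytes(a ^ b for a, b in zip(rnd_ct, pad))

a = rnd_layer(det_layer(b"alice"))
b = rnd_layer(det_layer(b"alice"))
print(a[1] != b[1])            # outer ciphertexts differ despite equal plaintexts
print(peel(*a) == peel(*b))    # after adjustment, equality becomes visible
```

In a graph database the same adjustment question arises per node and edge property, which is what makes query-driven layer peeling, and the security it gives up, the central trade-off studied.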
- Published
- 2020
48. Malware detection in security operation centres
- Author
-
AlAhmadi, Bushra Abdulrahman and Martinovic, Ivan
- Subjects
005.8 ,Computer Science ,Cyber Security - Abstract
Malware has evolved from viruses attacking single victims to more sophisticated malware with disruptive purposes. For example, WannaCry ransomware attacks led to hundreds of disruptions to NHS care in 2017. Although organisations might have invested in security technologies, their susceptibility to WannaCry hints that the problem goes beyond technology. Security Operations Centres (SOCs) are the first line of defence in an organisation, providing 24/7 monitoring, detection, and response to security attacks. This thesis aims to explore the challenges of malware detection in SOCs, providing recommendations for possible technological solutions. We begin by investigating the workflow SOC practitioners follow. Through semi-structured interviews, we characterise the analysts' role in the SOC and their interactions with the technological solutions for malware monitoring, detection, investigation and response. Our results highlight the overwhelming reliance on analysts throughout SOC operations, which might benefit from automation. We elicit the analysts' analytical thinking when making decisions, identifying the influential factors that might impact their decision making. Moreover, we investigate security practitioners' perspectives on the security monitoring tools deployed in SOCs and their perception of the high false-positive rates. By identifying the weaknesses and strengths of current SOC tools and the challenges in deploying network-monitoring tools, we derive recommendations for the development of future SOC tools. Understanding the type of malware is an essential step in determining the best response. Sometimes access to the infected host is not possible, and analysts turn to the network traffic for analysis. Hence, we propose a system that classifies network flow sequences to a malware family.
The proposed system is privacy-preserving and effective in classifying a binary to a malware family based on its network traffic, without requiring access to the malware binary itself. Behavioural malware detection approaches are found to be the most reliable by analysts. We propose a behaviour-based malware detection system that improves over the state of the art by detecting new or unseen malware. The system uses high-level behavioural network features, preserving the privacy of the monitored hosts. Using this system, malware's network activities are captured and modelled as a Markov chain. Because the Markov chains model general bot network behaviour, the system can detect new malware that has not been seen before, making it robust against malware evolution. The novelty of this research is to provide a systematic study of SOC processes, people, and technology, giving researchers an understanding of the challenges and opportunities within, bridging that knowledge gap, and thereby setting a better foundation for future research in the field.
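A minimal sketch of the Markov-chain idea described above: estimate first-order transition probabilities from observed flow-state sequences, then score new traffic by its average transition log-likelihood under the model. The state labels and function names are illustrative assumptions, not the thesis's actual feature set or classifier.

```python
import math
from collections import defaultdict

def train_markov(sequences):
    """Estimate first-order transition probabilities from state sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        model[a] = {b: c / total for b, c in nxt.items()}
    return model

def log_likelihood(model, seq, floor=1e-6):
    """Average log-probability of a sequence's transitions under the model;
    unseen transitions get the small probability `floor`. Higher scores mean
    the traffic looks more like the behaviour the model was trained on."""
    lp = sum(math.log(model.get(a, {}).get(b, floor))
             for a, b in zip(seq, seq[1:]))
    return lp / max(len(seq) - 1, 1)
```

Traffic matching the trained bot-like pattern scores a much higher likelihood than a sequence of transitions the model has never observed.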
- Published
- 2020
49. Appropriate security and confinement technologies : methods for the design of appropriate security and a case study on confinement technologies for desktop computers
- Author
-
Dodier-Lazaro, Steve
- Subjects
005.8 - Abstract
Despite significant research, desktop computers remain fundamentally insecure. Malware is a prime culprit for data breaches in organisations. A vast number of user machines are connected to the Internet unprotected, leading to global ransomware attacks costing billions to world economies. Confinement systems are technologies that can thwart malware. Several security researchers have claimed to have designed usable confinement, and both Microsoft and Apple have deployed confinement into their desktop OSs in the form of sandboxes, but application developers avoid supporting it. Commercial off-the-shelf confinement systems exist which users can adopt and use, but very few do. My thesis investigates the design process of confinement technologies, to understand why they are not in use. It is divided into two parts. Firstly, I examine why the methods of usable security may not be judicious when it comes to designing for adoption. I propose alternative methods and goals, focused on the adoption and appropriation of technologies rather than on their usability. This alternative process, named appropriate security, rests on four principles: security research is about users, not technology; it is about appropriation, not usability; it should not cause harm to users; and it should involve users in shaping security goals, rather than imposing others’ goals onto them. Next, I apply this approach to sandboxes, through a field study with users at risk of being disenfranchised by sandboxing if it were mandatory. In this study, I document users’ appropriations of their computers to elicit design requirements and to invent new types of file access policies for existing sandboxes. I build metrics and tools to evaluate the security provided by file access policies, and their cost to users. Using ground-truth data from users, I demonstrate that my policies (designed with users’ appropriations in mind) outperform existing ones in Windows on both security and usability.
I then co-design confinement-based services with users, based on their own experiences of security, and which provide actual value to them, as a way to bootstrap security adoption. This study demonstrates the substantial benefits of implementing an appropriate security design process.
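The trade-off this abstract describes, between the security a file-access policy provides and its cost to users, can be sketched with two toy metrics: how often a prefix-based policy would deny files a user actually accessed, versus how many reachable-but-never-accessed files it leaves exposed. The metric definitions and names below are assumptions for illustration, not the thesis's actual metrics or tools.

```python
def policy_metrics(policy_prefixes, accessed, all_files):
    """Toy evaluation of a prefix-based file-access policy.

    - usability_cost: fraction of actually-accessed files the policy denies
      (interruptions the user would suffer).
    - exposure: fraction of all files that the policy allows but the user
      never touched (attack surface left open to a compromised app).
    """
    def allowed(path):
        return any(path.startswith(pre) for pre in policy_prefixes)

    denied_needed = [f for f in accessed if not allowed(f)]
    exposed_unused = [f for f in all_files if allowed(f) and f not in accessed]
    return {
        "usability_cost": len(denied_needed) / max(len(accessed), 1),
        "exposure": len(exposed_unused) / max(len(all_files), 1),
    }
```

A policy with zero cost and zero exposure would allow exactly the files the user needs and nothing more; real policies trade one metric against the other, which is what ground-truth access data lets one measure.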
- Published
- 2020
50. Cyber supply chain risks in cloud computing : the effect of transparency on the risk assessment of SaaS applications
- Author
-
Akinrolabu, Olusola, Martin, Andrew, and New, Steve
- Subjects
005.8 ,Computer science ,Cyber Security - Abstract
While the cloud model has many economic and functional advantages, the increased external interactions of cloud applications have expanded the complexity of its architectures and reshaped its supply chain. Due to the variety of parties involved in cloud service delivery and the high degree of supplier autonomy, assessing cloud risks has become a challenge. Also, the widespread application of traditional frameworks to cloud risk assessment has several shortcomings, including the subjectivity of risk evaluation and an inability to measure cyber risk in complex systems. Recognising that recent work on cloud risk assessment has focussed on cloud consumer risks, we sought to address the cloud service provider (CSP) risk assessment challenge. This research began with an in-depth assessment of the literature on cloud risk assessment and supply chain transparency. We conducted surveys and semi-structured interviews to validate the transparency gap and establish its link with qualitative risk assessment methods. The results of these studies substantiated the need for more rigour in cloud risk assessments and provided evidence of how this can be improved with supply chain transparency. To address this gap, we proposed the Cyber Supply Chain Cloud Risk Assessment (CSCCRA) model: a quantitative and supply-chain-inclusive model targeted at Software-as-a-Service (SaaS) CSPs. The model comprises three main components, two of which are novel inclusions in cloud risk assessment: supply chain mapping and supplier security assessment. The CSCCRA model reflects the systems thinking approach, enabling CSPs to visualise information flow through the supply chain, assess supplier security posture, document assumptions regarding the risk factors, and appraise security controls. In evaluating the CSCCRA model, a three-step approach was adopted. First, the developed model was evaluated by the author and members of the academic community to ensure that it met our initial criteria.
Second, the model was face-validated by cloud and risk experts within the industry. Third, we conducted three real-world case studies, using the model to assess the risks of SaaS providers. The result of these evaluations confirmed the usefulness and applicability of the model for assessing cloud provider risks. Also, the case study results and subsequent development of the CSCCRA web application showed that a structured and systematic application of the proposed model within a SaaS organisation was capable of yielding objective and defensible results. The model demonstrated its utility by assisting stakeholders to quantify cloud risks, while also promoting cost-effective risk mitigation and optimal risk prioritisation. Overall, these results advance knowledge both for research and in practice, taking us one step further into improving cloud risk assessment.
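One simple way to make a supply-chain-inclusive assessment quantitative, sketched here under an assumed "weakest link" rule that is not the CSCCRA model's actual formula, is to map the supplier dependency graph and propagate each supplier's security-posture score through it: a SaaS offering is then rated no stronger than its weakest transitive supplier.

```python
def supplier_score(control_ratings):
    """Composite posture score in [0, 1]: mean of per-control ratings
    (e.g. from a supplier security questionnaire)."""
    return sum(control_ratings.values()) / len(control_ratings)

def chain_risk(graph, root, posture):
    """Propagate posture scores through a supplier dependency graph:
    a service is only as strong as its weakest (transitive) supplier."""
    weakest = posture[root]
    for dep in graph.get(root, []):
        weakest = min(weakest, chain_risk(graph, dep, posture))
    return weakest
```

Mapping the supply chain matters precisely because a strong-looking SaaS provider can inherit the low score of a fourth-party data centre it never assesses directly.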
- Published
- 2020