973 results for "COMPUTER software correctness"
Search Results
2. Isla: integrating full-scale ISA semantics and axiomatic concurrency models (extended version).
- Author
Armstrong, Alasdair, Campbell, Brian, Simner, Ben, Pulte, Christopher, and Sewell, Peter
- Subjects
INSTRUCTION set architecture, SOFTWARE verification, INTEGRATED circuit verification, COMPUTER software correctness, BINARY codes
- Abstract
Architecture specifications such as Armv8-A and RISC-V are the ultimate foundation for software verification and the correctness criteria for hardware verification. They should define the allowed sequential and relaxed-memory concurrency behaviour of programs, but hitherto there has been no integration of full-scale instruction-set architecture (ISA) semantics with axiomatic concurrency models, either in mathematics or in tools. These ISA semantics can be surprisingly large and intricate, e.g. 100k+ lines for Armv8-A. In this paper we present a tool, Isla, for computing the allowed behaviours of concurrent litmus tests with respect to full-scale ISA definitions, in the Sail language, and arbitrary axiomatic relaxed-memory concurrency models, in the Cat language. It is based on a generic symbolic engine for Sail ISA specifications. We equip the tool with a web interface to make it widely accessible, and illustrate and evaluate it for Armv8-A and RISC-V. The symbolic execution engine is valuable also for other verification tasks: it has been used in automated ISA test generation for the Arm Morello prototype architecture, extending Armv8-A with CHERI capabilities, and for Iris program-logic reasoning about binary code above the Armv8-A and RISC-V ISA specifications. By using full-scale and authoritative ISA semantics, Isla lets one evaluate litmus tests using arbitrary user instructions with high confidence. Moreover, because these ISA specifications give detailed and validated definitions of the sequential aspects of systems functionality, as used by hypervisors and operating systems, e.g. instruction fetch, exceptions, and address translation, our tool provides a basis for developing concurrency semantics for these. We demonstrate this for the Armv8-A instruction-fetch and virtual-memory models and examples of Simner et al. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
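The concurrent litmus tests mentioned in the Isla abstract are small multi-threaded programs paired with a question about a final state. As an illustrative sketch (not drawn from the paper itself), the classic message-passing (MP) shape in the herd/litmus format looks roughly like this, with the usual register and location naming conventions:

```text
AArch64 MP
{
0:X1=x; 0:X3=y;
1:X1=y; 1:X3=x;
}
 P0          | P1          ;
 MOV W0,#1   | LDR W0,[X1] ;
 STR W0,[X1] | LDR W2,[X3] ;
 MOV W2,#1   |             ;
 STR W2,[X3] |             ;
exists (1:X0=1 /\ 1:X2=0)
```

A tool such as Isla decides, with respect to a given axiomatic model, whether the final state named in the exists clause is allowed; here the question is whether P1 can observe the write to y while missing the earlier write to x.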
3. Keeper: Automated Testing and Fixing of Machine Learning Software.
- Author
Wan, Chengcheng, Liu, Shicheng, Xie, Sophie, Liu, Yuhan, Hoffmann, Henry, Maire, Michael, and Lu, Shan
- Subjects
COMPUTER software correctness, MACHINE learning, ENGINE testing, APPLICATION software, JUDGMENT (Psychology)
- Abstract
The increasing number of software applications incorporating machine learning (ML) solutions has led to the need for testing techniques. However, testing ML software requires tremendous human effort to design realistic and relevant test inputs and to judge software output correctness according to human common sense. Even when misbehavior is exposed, it is often unclear whether the defect is inside the ML API or in the surrounding code, and how to fix the implementation. This article tackles these challenges by proposing Keeper, an automated testing and fixing tool for ML software. The core idea of Keeper is to design pseudo-inverse functions that semantically reverse the corresponding ML task in an empirical way and proxy common human judgment of real-world data. It incorporates these functions into a symbolic execution engine to generate tests. Keeper also detects code smells that degrade software performance. Once misbehavior is exposed, Keeper attempts to change how ML APIs are used to alleviate the misbehavior. Our evaluation on a variety of applications shows that Keeper greatly improves branch coverage, while identifying 74 previously unknown failures and 19 code smells in 56 out of 104 applications. Our user studies show that 78% of end-users and 95% of developers agree with Keeper's detection and fixing results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Modeling and verification of software evolution using bigraphical reactive system.
- Author
Pal, Nisha and Yadav, Dharmendra Kumar
- Subjects
*COMPUTER software reusability, *SOFTWARE architecture, *COMPUTER software correctness, *SYSTEMS software, *COMPUTER software, *SOFTWARE verification
- Abstract
Changes are inevitable in software due to technology advancements and changes in business requirements. Making changes to software by inserting, deleting, or modifying code may cause the existing code to malfunction. Hence, a priori analysis is needed to capture such changes and ensure that the software continues to run smoothly. Making changes to software while it is in use is called dynamic evolution. Owing to the lack of formal modeling and verification, this dynamic evolution process of software systems has not become prominent. Hence, we use the bigraphical reactive system (BRS) technique to ensure that changes do not break the software's functionality (adversely affect the system). BRS provides a powerful framework for modeling, analyzing, and verifying the dynamic evolution of software systems, thereby ensuring the reliability and correctness of the evolving software system. In this paper, we propose a formal method for modeling and verifying the dynamic evolution process (changing user requirements at run time) using BRS. We use a bigraph to model software architectures and describe evolution rules for supporting dynamic changes to the software system. Finally, we use the BigMC model checker to validate this model against its properties and provide associated verification procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. eSTARK: extending STARKs with arguments.
- Author
Masip-Ardevol, Héctor, Baylina-Melé, Jordi, Guzmán-Albiol, Marc, and Muñoz-Tapia, Jose Luis
- Subjects
COMPUTER software correctness, VANILLA, POLYNOMIALS, ARGUMENT
- Abstract
STARK is a widely used transparent proof system that uses low-degree tests for proving the correctness of a computer program. STARK consumes an intermediate representation known as AIR that is more appropriate for programs with a relatively short and structured description. However, an AIR is not able to succinctly express non-equality constraints, leading to the incorporation of unwanted polynomials. We present the eSTARK protocol, a new probabilistic proof that generalizes the STARK family through the introduction of a more generic intermediate representation called eAIR. We describe eSTARK in the polynomial IOP model, which combines the optimized version of the STARK protocol with the incorporation of three arguments into the protocol. We also explain various techniques that enhance the vanilla STARK complexity, including optimizations applied to polynomial computations, and analyze the tradeoffs between controlling the constraint degree either at the representation of the AIR or inside the eSTARK itself. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Using automated software evaluation to improve the performance of breast radiographers in tomosynthesis screening.
- Author
Gennaro, Gisella, Povolo, Letizia, Del Genio, Sara, Ciampani, Lina, Fasoli, Chiara, Carlevaris, Paolo, Petrioli, Maria, Masiero, Tiziana, Maggetto, Federico, and Caumo, Francesca
- Subjects
*TOMOSYNTHESIS, *MEDICAL screening, *COMPUTER software correctness, *COMPUTER software quality control, *SOFTWARE development tools
- Abstract
Objective: To improve breast radiographers' individual performance by using automated software to assess the correctness of breast positioning and compression in tomosynthesis screening. Materials and methods: In this retrospective longitudinal analysis of prospective cohorts, six breast radiographers with varying experience in the field were asked to use automated software to improve their performance in breast compression and positioning. The software tool automatically analyzes craniocaudal (CC) and mediolateral oblique (MLO) views for their positioning quality by scoring them according to PGMI classifications (perfect, good, moderate, inadequate) and checking whether the compression pressure is within the target range. The positioning and compression data from the studies acquired before the start of the project were used as individual baselines, while the data obtained after the training were used to test whether conscious use of the software could help the radiographers improve their performance. The percentage of views rated perfect or good and the percentage of views in target compression were used as overall metrics to assess changes in performance. Results: Following the use of the software, all radiographers significantly increased the percentage of images rated as perfect or good in both CCs and MLOs. Individual improvements ranged from 7 to 14% for CC and 10 to 16% for MLO views. Moreover, most radiographers exhibited improved compression performance in CCs, with improvements up to 16%. Conclusion: Active use of a software tool to automatically assess the correctness of breast compression and positioning in breast cancer screening can improve the performance of radiographers. 
Clinical relevance statement: This study suggests that the use of a software tool for automatically evaluating correctness of breast compression and positioning in breast cancer screening can improve the performance of radiographers on these metrics, which may ultimately lead to improved screening outcomes. Key Points: • Proper breast positioning and compression are critical in breast cancer screening to ensure accurate diagnosis. • Active use of the software increased the quality of craniocaudal and mediolateral oblique views acquired by all radiographers. • Improved performance of radiographers is expected to improve screening outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. An architecture refactoring approach to reducing software hierarchy complexity.
- Author
Zhao, Yongxin, Wu, Wenhan, Fei, Yuan, Liu, Zhihao, Li, Yang, Yang, Yilong, Shi, Ling, and Zhang, Bo
- Subjects
*SOFTWARE refactoring, *COMPUTER software correctness, *COMPUTER software quality control, *SOFTWARE architecture, *COMPUTER programming, *BATTERY management systems
- Abstract
Summary: Software complexity is the very essence of computer programming. As complexity increases, the potential risks and defects of software systems increase, making software correctness analysis and software quality improvement more difficult. In this paper, we present a quantitative metric to describe the complexity of hierarchical software and a Complexity‐oriented Software Architecture Refactoring (CoSSR) approach to reduce that complexity. The main idea is to identify and then reassemble subcomponents into one hierarchical component that achieves minimum complexity under the solution algorithm. Moreover, the algorithm can be improved by introducing partition constraints, a heuristic search strategy, and spectral clustering. We implement the proposed method as an automated refactoring tool and demonstrate our algorithm through a case study of a battery management system (BMS). The results show that our approach is efficient and effective at reducing the complexity of hierarchical software systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Scenario-specific verification of system requirements consistency via time modeling.
- Author
Shi, Jingkai and Zheng, Liwei
- Subjects
SOFTWARE reliability, COMPUTER software correctness, RELIABILITY in engineering, NATURAL languages, TIME management, SOFTWARE verification
- Abstract
Traditional requirement verification methods often fail to fully consider the modeling of time-dependent behaviors during system execution, which may lead to inconsistent system requirements, thereby compromising software reliability and correctness. To address this limitation, this paper proposes the Scenario-Specific Verification Method for system requirements consistency via time modeling. The method converts use case descriptions and problem graphs into Clock Constraint Specification Language (CCSL) constraints, which are then transformed into clock graphs for verifying requirement consistency. The resulting constraints are solved with the Z3 solver to validate correctness, thereby enhancing system reliability. The novel time modeling approach provides techniques for verifying temporal dependencies in requirements. A case study demonstrates the efficiency and accuracy of the Scenario-Specific Verification Method in detecting inconsistencies. The main contributions are a technique for formalizing use case descriptions as CCSL constraints, improving on natural language, and a method that uses CCSL time modeling and clock graphs to analyze requirements for consistency. Overall, this paper offers an innovative solution for requirement verification using time modeling, contributing to the reliability and correctness of the system development process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
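The kind of consistency check described in the abstract above can be illustrated in miniature (this is a simplified stand-in, not the paper's CCSL-to-clock-graph pipeline): timing requirements over event timestamps can be written as difference constraints t_v - t_u <= c, and the requirement set is consistent exactly when the corresponding constraint graph has no negative cycle, which Bellman-Ford relaxation detects. The event names below are hypothetical.

```python
# Each requirement is a difference constraint t_v - t_u <= c over event
# timestamps, i.e. a weighted edge u -> v with weight c. The set is
# consistent iff the constraint graph has no negative cycle.

def consistent(events, constraints):
    """constraints: list of (u, v, c) meaning t_v - t_u <= c."""
    dist = {e: 0 for e in events}      # virtual source reaches every event
    # len(events) passes suffice: a pass with no relaxation means convergence,
    # while a negative cycle forces a relaxation on every pass.
    for _ in range(len(events)):
        changed = False
        for u, v, c in constraints:
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
                changed = True
        if not changed:
            return True                # converged: requirements consistent
    return False                       # still relaxing: negative cycle

events = ["req", "auth", "resp"]
ok = [
    ("auth", "req", -1),   # t_req - t_auth <= -1: auth strictly after req
    ("resp", "auth", -1),  # resp strictly after auth
    ("req", "resp", 10),   # t_resp - t_req <= 10: end-to-end deadline
]
print(consistent(events, ok))                           # True
# A requirement forcing auth to start more than 10 ticks after req
# contradicts the deadline above, yielding a negative cycle.
print(consistent(events, ok + [("auth", "req", -11)]))  # False
```

In the paper's setting the constraints come from CCSL rather than being written by hand, and an SMT solver such as Z3 plays the role of this check while also producing witness schedules.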
9. LoRe: A Programming Model for Verifiably Safe Local-first Software.
- Author
Haas, Julian, Mogk, Ragnar, Yanakieva, Elena, Bieniusa, Annette, and Mezini, Mira
- Subjects
*COMPUTER software correctness, *COMPILERS (Computer programs), *REACTIVE flow, *COMPUTER software
- Abstract
Local-first software manages and processes private data locally while still enabling collaboration between multiple parties connected via partially unreliable networks. Such software typically involves interactions with users and the execution environment (the outside world). The unpredictability of such interactions, paired with their decentralized nature, makes reasoning about the correctness of local-first software a challenging endeavor. Yet, existing solutions for developing local-first software do not provide support for automated safety guarantees and instead expect developers to reason about concurrent interactions in an environment with unreliable network conditions. We propose LoRe, a programming model and compiler that automatically verifies developer-supplied safety properties for local-first applications. LoRe combines the declarative data flow of reactive programming with static analysis and verification techniques to precisely determine concurrent interactions that violate safety invariants and to selectively employ strong consistency through coordination where required. We propose a formalized proof principle and demonstrate how to automate the process in a prototype implementation that outputs verified executable code. Our evaluation shows that LoRe simplifies the development of safe local-first software when compared to state-of-the-art approaches and that verification times are acceptable. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. A Deep Learning-Based Consistency Test Approach for Earth System Models on Heterogeneous Many-Core Systems.
- Author
Yangyang Yu, Shaoqing Zhang, Haohuan Fu, Dexun Chen, Yang Gao, Xiaopei Lin, Zhao Liu, and Xiaojing Lv
- Subjects
*DEEP learning, *COMPUTER software correctness, *HETEROGENEOUS computing, *SOFTWARE verification, *TEMPORAL integration, *HUMAN error
- Abstract
Physical and heat limits of semiconductor technology require the adoption of heterogeneous architectures in supercomputers to maintain a continuous increase in computing performance. The coexistence of general-purpose cores and accelerator cores, which usually employ different hardware architectures, can lead to bit-level differences, especially when we try to maximize the performance on both kinds of cores. Such differences further lead to unavoidable computational perturbations through temporal integration, which can blend with software or human errors. Software correctness verification in the form of quality assurance is a critically important step in the development and optimization of Earth system models (ESMs) on heterogeneous many-core systems with mixed perturbations from software changes and hardware updates. We have developed a deep learning-based consistency test approach for Earth system models, referred to as ESM-DCT. The ESM-DCT is based on an unsupervised bidirectional gated recurrent unit autoencoder (BGRU-AE) model, which can still detect the existence of software or human errors when taking hardware-related perturbations into account. We use the Community Earth System Model (CESM) on the new Sunway system as an example of a large-scale ESM to evaluate the ESM-DCT. The results show that, faced with the mixed perturbations caused by hardware designs and software changes in heterogeneous computing, the ESM-DCT can detect software or human errors when determining whether or not a model simulation is consistent with the original results in homogeneous computing. Our ESM-DCT tool provides an efficient and objective approach for verifying the reliability of the development and optimization of scientific computing models on heterogeneous many-core systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
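The consistency test described above follows a general reconstruction-error pattern: train an autoencoder on trusted runs, then flag a new run whose reconstruction error exceeds a threshold learned from the trusted ensemble. The sketch below illustrates only that pattern, with a PCA projection standing in for the BGRU-AE and synthetic data standing in for model output; it is not the ESM-DCT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Trusted ensemble: rows are simulation runs, columns are summary statistics.
# Hardware-level perturbations produce small deviations; software or human
# errors produce structurally different rows.
trusted = rng.normal(0.0, 1.0, size=(200, 16))

# PCA "autoencoder": project onto the top-k principal components and back.
mean = trusted.mean(axis=0)
_, _, vt = np.linalg.svd(trusted - mean, full_matrices=False)
k = 4
basis = vt[:k]                       # shared encoder/decoder weights

def recon_error(rows):
    Z = (rows - mean) @ basis.T      # encode
    R = Z @ basis + mean             # decode
    return np.linalg.norm(rows - R, axis=1)

# Threshold from the trusted ensemble's own reconstruction errors.
threshold = np.percentile(recon_error(trusted), 99)

ok_run = rng.normal(0.0, 1.0, size=(1, 16))  # consistent perturbation
bad_run = ok_run + 5.0                       # gross, structured error

print("threshold:", round(threshold, 2))
print("ok run error:", round(float(recon_error(ok_run)[0]), 2))
print("bad run error:", round(float(recon_error(bad_run)[0]), 2))
```

A run well above the threshold is flagged as inconsistent; the BGRU-AE in the paper serves the same role as the PCA projection here but can capture temporal structure in the simulation output.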
11. Analytical Solution for the Steady Seepage Field of an Anchor Circular Pit in Layered Soil.
- Author
Huang, Jirong, Gu, Lixiong, He, Zhen, and Yu, Jun
- Subjects
ANALYTICAL solutions, RETAINING walls, COMPUTER software correctness, BESSEL functions, FREE surfaces
- Abstract
An analytical study was carried out on an anchored circular pit with a submerged free surface in layered soil. The seepage field around the anchored circular pit was divided into three zones. The method of separation of variables was used to obtain series-form solutions for the head distribution in the cylindrical coordinate system for each of the three regions. Combined with the continuity conditions between the regions, the orthogonality of Bessel functions was used to obtain an explicit analytical solution of the seepage field in each region, and the infiltration line was determined. Comparison with the calculation results of the Plaxis 2D 8.5 software verified the correctness of the analytical solution. Based on the analytical solution, the influence of the pit radius and of the distance of the retaining wall from the top surface of the impermeable layer on the total head distribution on both sides of the retaining wall was analyzed, and the variation of the infiltration line with these parameters was determined. The results show that as the pit radius, r, decreased, the total head on both sides of the retaining wall gradually increased, and the height of the submerged surface drop also increased. As the distance, a, between the retaining wall and the impermeable boundary at the bottom increased, the hydraulic head on the outer side of the retaining wall decreased and the head on the inner side increased. The height of the submerged surface drop increased with decreasing depth of insertion of the retaining wall, and the depth of insertion had a greater influence on the submerged surface drop than the pit radius. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Verifying Correctness.
- Author
Hoffmann, Leah
- Subjects
*CRYPTOGRAPHERS, *CRYPTOGRAPHY, *MATHEMATICAL proofs, *MATHEMATICAL logic, *COMPUTER software correctness
- Abstract
An interview is presented with cryptographer Yael Tauman Kalai. She discusses her career as a Senior Principal Researcher at Microsoft Research and an adjunct professor at the Massachusetts Institute of Technology (MIT), her work on proof systems, specifically the Fiat-Shamir paradigm, and her development of certificates that would certify correctness of a computation.
- Published
- 2024
- Full Text
- View/download PDF
13. Model Transformation Testing and Debugging: A Survey.
- Author
TROYA, JAVIER, SEGURA, SERGIO, BURGUEÑO, LOLA, and WIMMER, MANUEL
- Subjects
*DEBUGGING, *COMPUTER software correctness, *SYSTEMS software, *COMMUNITIES
- Abstract
Model transformations are the key technique in Model-Driven Engineering (MDE) to manipulate and construct models. As a consequence, the correctness of software systems built with MDE approaches relies mainly on the correctness of model transformations, and thus, detecting and locating bugs in model transformations have been popular research topics in recent years. This surge of work has led to a vast literature on model transformation testing and debugging, which makes it challenging to gain a comprehensive view of the current state-of-the-art. This is an obstacle for newcomers to this topic and MDE practitioners to apply these approaches. This article presents a survey on testing and debugging model transformations based on the analysis of 140 papers on the topics. We explore the trends, advances, and evolution over the years, bringing together previously disparate streams of work and providing a comprehensive view of these thriving areas. In addition, we present a conceptual framework to understand and categorize the different proposals. Finally, we identify several open research challenges and propose specific action points for the model transformation community. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. A Hybridized Artificial Neural Network for Automated Software Test Oracle.
- Author
Kamaraj, K., Lanitha, B., Karthic, S., Prakash, P. N. Senthil, and Mahaveerakannan, R.
- Subjects
ARTIFICIAL neural networks, COMPUTER software testing, PARTICLE swarm optimization, COMPUTER software correctness, COMPUTER software quality control
- Abstract
Software testing is the methodology of analyzing software to test whether it works as anticipated, so as to boost its reliability and quality, two characteristics that are critical in present-day software applications. When testers want to perform scenario evaluations, test oracles are generally employed in the third phase. Upon test case execution and test outcome generation, it is essential to validate the results in order to establish the correctness of the software's behavior. Choosing a feasible technique for test case optimization and prioritization, along with an appropriate assessment of the application, reduces the fault detection effort with minimal loss of information and greatly reduces the cost of cleanup. A hybrid Particle Swarm Optimization with Stochastic Diffusion Search (PSO-SDS) based neural network and a hybrid Harmony Search with Stochastic Diffusion Search (HS-SDS) based neural network are proposed in this work. To evaluate performance, they are compared with a PSO-SDS based artificial neural network (PSO-SDS ANN) and an artificial neural network (ANN). The misclassification of correction output (MCO) of the HS-SDS neural network is 6.37 for 5 iterations, and it is well suited for automated testing. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
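The particle swarm optimization component named in the abstract above is a standard population-based optimizer. A minimal sketch of plain PSO minimizing a test function follows (the hybridization with stochastic diffusion search and the neural network training are not shown; all parameter values here are illustrative defaults, not the paper's):

```python
import random

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: minimize f over [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print(best_val)
```

In a test-oracle setting, the objective f would instead score candidate neural network weights or test-case orderings rather than a benchmark function.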
15. Design and Development of Information and Computational System for Energy Facilities' Impact Assessment on Environment †.
- Author
Kuzmin, Vladimir R., Vorozhtsova, Tatyana N., and Massel, Liudmila V.
- Subjects
WEB-based user interfaces, COMPUTATIONAL mathematics, ENERGY facilities, COMPUTER software correctness, SYSTEMS theory
- Abstract
In this article, we present the authors' information and computational system for assessing the impact of energy facilities on the environment. The necessity of such assessments and of developing this system is substantiated. We developed the system as a Web application using the agent-service approach. To develop a database for the system, we used ontological engineering of the energy and ecology domains. For the assessments, we developed a set of information subsystems that use approved regulatory methods. Our system can be used to assess the impact of both existing and planned energy facilities, and also to plan measures for reducing the harmful impact of such facilities. We also performed a set of computational experiments to test the developed system. The experiments confirmed the correctness of the methods used, and the results of one of them are presented in the article. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Using the STEGO Neural Network for Scintigraphic Image Analysis †.
- Author
Ulitin, Ivan, Barulina, Marina, and Velikanova, Marina
- Subjects
IMAGE segmentation, NEURAL computers, COMPUTER software correctness, CONVOLUTIONAL neural networks, MACHINE learning
- Abstract
Currently, neural networks are widely applied to the diagnosis of various diseases, including cancers of various localizations and stages. The vast majority of such solutions use supervised or unsupervised convolutional neural networks, which require a great deal of training data. Unsupervised image segmentation algorithms can be considered the preferred trend, since their use significantly reduces the complexity of neural network training, so developing them is one of the topical tasks of machine learning. Recently, a team of developers from Google, MIT, and Cornell University developed the STEGO algorithm, an unsupervised, non-convolutional neural network. As its authors state, the STEGO algorithm performs well on image segmentation problems compared with other machine learning models, and it does not need a large amount of training data, unlike the convolutional neural networks widely used for medical image analysis. The aim of this work is therefore to check the possibility of using this neural network for scintigraphy image segmentation by testing whether the STEGO algorithm is relevant when applied to a scintigraphy dataset. To achieve this goal, the intersection-over-union (IoU) metric was chosen for evaluating the correctness of the detected locations of metastases. The training dataset consists of scintigraphic images of patients with various types of cancer and various metastasis appearances. Another version of this metric (mIoU, mean intersection over union) was also used by the creators of STEGO to assess the model's ability to segment images with different kinds of content. Since the calculated metrics are not good enough, using this algorithm for scintigraphic image analysis is either not possible or requires the development of a special methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
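The IoU metric used in the study above has a standard definition: the ratio of the overlap between the predicted and ground-truth masks to their union, with mIoU averaging over examples. A small self-contained sketch on toy binary masks:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)

def mean_iou(preds, targets) -> float:
    """Mean IoU (mIoU) over a set of mask pairs."""
    return float(np.mean([iou(p, t) for p, t in zip(preds, targets)]))

# Toy 4x4 masks: 2 overlapping pixels, union of 6 -> IoU = 1/3
a = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
b = np.array([[0, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
print(round(iou(a, b), 4))  # 0.3333
```

For metastasis detection, `pred` would be the segmentation produced by the network and `target` the annotated region; a low IoU means the detected location overlaps poorly with the true one.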
17. A Formal Approach for Consistency Management in UML Models.
- Author
Wen, Hao, Wu, Jinzhao, Jiang, Jianmin, Tang, Guofu, and Hong, Zhong
- Subjects
COMPUTER software correctness, COMPUTER software development, SYSTEMS software, SOFTWARE verification, UNIFIED modeling language
- Abstract
Consistency is a significant indicator to measure the correctness of a software system over its lifecycle. It is inevitable that inconsistencies are introduced between different software artifacts during the software development process. In practice, developers perform consistency checking to detect inconsistencies and apply corresponding repairs to restore consistency. Even if all inconsistencies can be repaired, how to preserve consistency in the subsequent evolution should be considered. Consistency management (consistency checking and consistency preservation) is a challenging task, especially in the multi-view model-driven software development process. Although there are some efforts to discuss consistency management, most of them lack the support of formal methods. Our work aims to provide a framework for formal consistency management, which may be used in the practical software development process. A formal model, called a Structure model, is first presented for specifying the overall model-based structure of the software system. Next, the definition of consistency is given based on consistency rules. We then investigate consistency preservation under the following two situations. One is that if the initial system is inconsistent, then consistency can be restored through repairs. The other is that if the initial system is consistent, then consistency can be maintained through update propagation. To demonstrate the effectiveness of our approach, we finally present a case study with a prototype tool. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Modeling of processes of technological preparation of additive manufacturing based on synthetic and analytical models of surfaces.
- Author
Anamova, R. R. and Nartova, L. G.
- Subjects
*COMPUTER software correctness, *SURFACE roughness, *COMPUTER-aided design software, *SURFACE properties, *MECHANICAL engineering
- Abstract
In aircraft and mechanical engineering, parts with complex internal cavities (channels) are often found. Manufacturing such parts using additive technologies is easier than traditional methods. However, this raises the problem of obtaining the required surface quality, namely, a certain value of the surface roughness parameters. Prediction of the roughness of the channel surface of a part is one of the most important stages in the technological preparation of additive manufacturing. The aim of the study is to model a channel surface based on its analytical description for subsequent forecasting of its roughness. The article provides a systematization of the basic concepts, constructive properties of surfaces of various classes. The features of the design of surfaces defined by a synthetic (graphic) method and by means of a mathematical description have been studied. A metric is introduced for analytically defined surfaces based on the properties of their internal geometry. Specific examples of surfaces are considered. An algorithm is given for modeling tangent planes to channel surfaces in order to further predict the roughness of the channel surface during its manufacture using the additive manufacturing technology. Directions for further research should be related to the development of algorithmic and CAD software, as well as verification of the developed software for the correctness of the results obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Quantum software testing: State of the art.
- Author
García de la Barrera, Antonio, García‐Rodríguez de Guzmán, Ignacio, Polo, Macario, and Piattini, Mario
- Subjects
*COMPUTER software correctness, *SOFTWARE engineering, *COMPUTER software testing, *QUANTUM computing, *QUANTUM theory, *SOFTWARE engineers
- Abstract
Quantum computing is expected to exponentially outperform classical computing on a broad set of problems, including encryption, machine learning, and simulations. It has an impact, yet to be explored, on all of the software lifecycle's processes and techniques. Testing quantum software raises a significant number of challenges due to the unique properties of quantum physics, such as superposition and entanglement, and the stochastic behavior of quantum systems. It is, therefore, an open research issue. In this work, we offer a systematic mapping study of quantum software testing engineering, presenting a comprehensive view of the current state of the art. The main identified trends in testing techniques are (1) statistical approaches based on repeated measurements and (2) the use of Hoare‐like logics to reason about software correctness. Another relevant line of research is reversible circuit testing, which is partially applicable to quantum software unitary testing. Finally, we have observed a flourishing of secondary studies and frameworks supporting testing processes from 2018 onwards. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
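The first trend named in the abstract above, statistical testing based on repeated measurements, can be sketched as follows. The circuits, tolerance, and shot count are invented for illustration, and a seeded classical random generator stands in for quantum measurement.

```python
import random
from collections import Counter

def repeated_measurement_test(program, expected, shots=10000, tol=0.02, seed=0):
    """Run a stochastic program many times and check that the observed
    outcome frequencies match the expected distribution within `tol`."""
    rng = random.Random(seed)
    counts = Counter(program(rng) for _ in range(shots))
    return all(abs(counts[outcome] / shots - p) <= tol
               for outcome, p in expected.items())

# Toy stand-in for a Hadamard-then-measure circuit: '0'/'1' with p = 0.5 each.
def fair_circuit(rng):
    return '0' if rng.random() < 0.5 else '1'

# A buggy variant skewed toward '0'; the statistical test should reject it.
def biased_circuit(rng):
    return '0' if rng.random() < 0.7 else '1'
```

Real quantum test frameworks must also contend with the fact that measurement collapses the state, which is part of why repeated execution is the dominant strategy.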
20. Correctness of Flat Classification (Коректність пласкої класифікації).
- Author
-
Ярошинський, М. С., Сіроткін, О. В., Сінько, Д. П., Гунько, С. Б., and Манолюк, Д. О.
- Subjects
SEMANTIC networks (Information theory) ,CLASSIFICATION ,ISOMORPHISM (Mathematics) ,COMPUTER software correctness ,ONTOLOGY - Abstract
Classifications are widely used in semantic networks and decision support systems based on formal knowledge and are part of computer ontologies. Classifications and computer ontologies built on them are the result of the work of one or more experts. As a result, such classifications reflect the subjective view of the author or authors on the world and the relationship between the classes (concepts) of the created classification. In the work, the authors propose an approach that will allow assessing how correctly the classification is constructed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Sports training auxiliary decision support system based on neural network algorithm.
- Author
-
Wang, Tianyi
- Subjects
- *
DECISION support systems , *PHYSICAL training & conditioning , *MACHINE learning , *COMPUTER software correctness , *STIMULUS & response (Psychology) , *INTELLIGENT tutoring systems - Abstract
To improve sports training auxiliary decision-making, this paper analyzes the functional requirements of a sports training auxiliary system and improves a traditional machine learning algorithm. The domain adversarial neural network based on maximum entropy loss combines the ability of the maximum entropy loss to handle misclassified samples with classification loss and domain adversarial loss to address the inconsistent marginal distribution of category features between domains. Moreover, this paper takes sports decision-making as its core and introduces tasks of varying difficulty and video training into the research. In addition, this paper uses simulation software to measure the correctness of sports training in different scenarios along with response-latency data, and applies the neural network algorithm to the construction of the sports training auxiliary decision system. Finally, this paper designs experiments to study sports training recognition and sports training decision-making and builds an intelligent system on a simulation platform. The experimental results show that the system constructed in this paper provides a good sports training auxiliary decision function. The reliability of the method can be verified in practice in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Electromagnetic control based on Lie symmetry transformation.
- Author
-
Zheng Mingliang
- Subjects
- *
MAXWELL equations , *TRANSFORMATION optics , *COORDINATE transformations , *SYMMETRY , *COMPUTER software correctness - Abstract
A design method for electromagnetic metamaterials based on the Lie symmetry of Maxwell's equations is proposed and applied to the modulation of electromagnetic waves/light. First, the electromagnetic control model based on metamaterials is introduced. Then, according to the theory of Transformation Optics (TO), Lie symmetry analysis is applied to the coordinate transformation of the material's physical space, and the determining equations of the Lie symmetry, the key core of the method, are derived. Second, analytical forms of the constitutive parameters (permittivity and permeability) of metamaterials are introduced, which can be used to design all kinds of electromagnetic metamaterials. Finally, the Lie symmetry method is applied to the control of electromagnetic beam width. The results show that the metamaterial based on the Lie symmetry of Maxwell's equations has a good field distribution, and that the method overcomes the subjectivity of the traditional coordinate transformation in transformation optics. Wave simulation in COMSOL Multiphysics verifies the correctness of the Lie symmetry method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. Generation of C++ Code from Isabelle/HOL Specification.
- Author
-
Jiang, Dongchen and Xu, Bo
- Subjects
COMPUTER software correctness ,CODE generators ,C++ ,SOFTWARE reliability - Abstract
Automatic code generation plays an important role in ensuring the reliability and correctness of software programs. Reliable programs can be obtained automatically from verified program specifications by code generators. The target languages of existing code generators are mainly functional languages, which are less widely used than C/C++. As C/C++ is widely used in industry and in many fundamental software facilities, and the correctness verification of C/C++ programs is difficult and cumbersome, this paper provides an automatic conversion framework that generates C++ implementations from verified Isabelle/HOL specifications. The framework combines the verification convenience of Isabelle/HOL with the efficiency of C++. Since the correctness of the functional Isabelle/HOL specification can be guaranteed by interactive proofs, the correctness of the generated C++ implementation can also be maintained. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
24. Learning Relationship-Based Access Control Policies from Black-Box Systems.
- Author
-
IYER, PADMAVATHI and MASOUMZADEH, AMIRREZA
- Subjects
ACCESS control ,INFORMATION storage & retrieval systems ,COMPUTATIONAL complexity ,INFORMATION policy ,COMPUTER software correctness - Abstract
Access control policies are crucial in securing data in information systems. Unfortunately, such policies are often poorly documented, and gaps between their specification and implementation prevent the system users, and even its developers, from understanding the overall enforced policy of a system. To tackle this problem, we propose the first of its kind systematic approach for learning the enforced authorizations from a target system by interacting with and observing it as a black box. The black-box view of the target system provides the advantage of learning its overall access control policy without dealing with its internal design complexities. Furthermore, compared to the previous literature on policy mining and policy inference, we avoid exhaustive exploration of the authorization space by minimizing our observations. We focus on learning relationship-based access control (ReBAC) policy, and show how we can construct a deterministic finite automaton (DFA) to formally characterize such an enforced policy. We theoretically analyze our proposed learning approach by studying its termination, correctness, and complexity. Furthermore, we conduct extensive experimental analysis based on realistic application scenarios to establish its cost, quality of learning, and scalability in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
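The abstract above characterizes a learned ReBAC policy as a deterministic finite automaton. As a minimal sketch (the states, relationship labels, and policy below are invented for illustration, not taken from the paper), such a DFA can be run over a path of relationship labels to decide access:

```python
def dfa_accepts(delta, start, accepting, word):
    """Run a DFA given as a dict (state, symbol) -> state over `word`.
    A missing transition rejects (an implicit dead state)."""
    state = start
    for symbol in word:
        state = delta.get((state, symbol))
        if state is None:
            return False
    return state in accepting

# Hypothetical policy: the resource owner and the owner's direct friends
# may access a resource, but friends-of-friends may not.
delta = {("owner", "friend"): "friend"}
accepting = {"owner", "friend"}
```

Here `dfa_accepts(delta, "owner", accepting, ["friend"])` grants access, while the two-hop path `["friend", "friend"]` falls off the transition table and is rejected.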
25. Subsumption, correctness and relative correctness: Implications for software testing.
- Author
-
AlBlwi, Samia, Marsit, Imen, Khaireddine, Besma, Ayad, Amani, Loh, JiMeng, and Mili, Ali
- Subjects
- *
COMPUTER software testing , *COMPUTER software correctness , *RESEARCH questions , *DETECTORS , *ALGORITHMS - Abstract
Context. Several research areas emerged and have been proceeding independently when in fact they have much in common. These include: mutant subsumption and mutant set minimization; relative correctness and the semantic definition of faults; differentiator sets and their application to test diversity; generate-and-validate methods of program repair; and test suite coverage metrics. Objective. Highlight their analogies, commonalities and overlaps; explore their potential for synergy and shared research goals; unify several disparate concepts around a minimal set of artifacts. Method. Introduce and analyze a minimal set of concepts that enable us to model these disparate research efforts, and explore how these models may enable us to share insights between different research directions and advance their respective goals. Results. Capturing absolute (total and partial) correctness and relative (total and partial) correctness with a single concept: detector sets. Using the same concept to quantify the effectiveness of test suites, and proving that the proposed measure satisfies appealing monotonicity properties. Using the measure of test suite effectiveness to model mutant set minimization as an optimization problem, characterized by an objective function and a constraint. Generalizing the concept of mutant subsumption using the concept of differentiator sets. Identifying analogies between detector sets and differentiator sets, and inferring relationships between subsumption and relative correctness. Conclusion. This paper does not aim to answer any pressing research question as much as it aims to raise research questions that use the insights gained from one research venue to gain a fresh perspective on a related research issue.
• We point out that two concepts introduced independently in 2014 are actually equivalent: mutant subsumption, a partial ordering between mutants of a base program, which ranks mutants according to their ability to detect faults; and relative correctness, a partial ordering between programs, which ranks candidate programs according to how close they are to being correct with respect to a specification. • We revisit the concept of the detector set of a program with respect to a specification, and show how it can be used to redefine the properties of absolute (total and partial) correctness and relative (total and partial) correctness. • We revisit the concept of the differentiator set and show how it can be used to generalize the definition of mutant subsumption by considering the possibility that programs and their mutants may fail to converge for some inputs. • We use the concept of the detector set to define a measure of test suite effectiveness (semantic coverage), which has a number of appealing monotonicity properties. • We recast the problem of mutant set minimization as an optimization problem, by providing explicit definitions of the effectiveness of a mutant set, and showing that existing minimization algorithms preserve the effectiveness of mutant sets, as defined. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
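To make the central notion concrete, here is a toy rendering of a detector set and a derived test-suite effectiveness measure over a finite input domain. These are simplified stand-ins for the paper's formal definitions, and the spec, program, and domain are invented for illustration.

```python
def detector_set(program, spec, domain):
    """Inputs on which the program disagrees with the specification."""
    return {x for x in domain if program(x) != spec(x)}

def suite_effectiveness(program, spec, suite, domain):
    """Fraction of the program's detector set that the suite exposes;
    a correct program trivially scores 1.0."""
    detectors = detector_set(program, spec, domain)
    if not detectors:
        return 1.0
    return len(detectors & set(suite)) / len(detectors)

# Toy spec and a program with a single fault at x = 2.
spec = lambda x: x * x
buggy = lambda x: 0 if x == 2 else x * x
```

Note that adding tests to a suite can only keep or raise this measure, a simple instance of the kind of monotonicity property the abstract mentions.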
26. An anonymous verifiable random function with unbiasability and constant size proof.
- Author
-
Yao, Shuang and Zhang, Dawei
- Subjects
- *
RANDOM functions (Mathematics) , *UNIQUENESS (Mathematics) , *COMPUTER software correctness , *ANONYMITY , *BILINEAR forms - Abstract
Recently, the verifiable random function (VRF) has been frequently applied in secure consensus protocols and e-lottery systems to achieve random selection. In this context, how to build various secure verifiable random functions to suit increasing application requirements has received much attention. Therefore, in this paper, we propose a new type of anonymous verifiable random function (AVRF). Concretely, the proposed anonymous verifiable random function achieves identity anonymity of the prover based on the Decision Linear assumption, which is secure in bilinear groups, in contrast to the known anonymous verifiable random function, which relies on the Decisional Diffie–Hellman (DDH) assumption. Anonymity means that the verifier cannot recognize the prover's identity. In addition, our proposed AVRF is unbiasable: it provides unpredictability even under malicious key generation. Security analysis shows that the proposed scheme satisfies correctness, uniqueness, pseudorandomness, anonymity and unbiasability. Theoretical analysis results indicate that our anonymous verifiable random function is relatively efficient, and its proof size is constant regardless of input size. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Mathematical model and motion analysis of a wheeled vibro-impact locomotion system.
- Author
-
Korendiy, Vitaliy, Gursky, Volodymyr, Kachur, Oleksandr, Dmyterko, Petro, Kotsiumbas, Oleh, and Havrylchenko, Oleksandr
- Subjects
- *
MOTION analysis , *MATHEMATICAL models , *COMPUTER software correctness , *MATHEMATICAL optimization - Abstract
The paper is aimed at investigating the motion conditions of the wheeled vibro-impact locomotion system equipped with the twin crank-slider excitation mechanism and the additional braking mechanisms allowing only one-way rotation of the wheels. The novelty of the present research consists in the improved mathematical model describing the motion conditions of the vibro-impact system and the proposed parameters optimization technique that allows for maximizing the average translational velocity of the wheeled platform. The main idea of this technique is to provide the maximal velocities of internal bodies when they get in contact with the corresponding impact plates. The numerical modeling results describing the dynamic behavior of the vibro-impact system are obtained in Mathematica software and substantiate the correctness of the developed mathematical model and of the proposed parameters optimization technique. The paper can be of significant practical and scientific interest for researchers and engineers studying and improving the vibratory locomotion systems, e.g., for inspecting and cleaning the pipelines. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
28. A Method to Reduce Eddy Current Loss of Underwater Wireless Power Transmission by Current Control.
- Author
-
Wang, Jiale, Song, Baowei, and Wang, Yushan
- Subjects
EDDY current losses ,WIRELESS power transmission ,ENERGY harvesting ,ELECTROMAGNETIC fields ,COMPUTER software correctness ,ENERGY dissipation ,SIMULATION software - Abstract
In recent years, wireless power transmission (WPT) technology based on magnetic resonance has been extensively studied. However, in contrast to that in the air, wireless power transmission in seawater medium will be accompanied by inevitable energy loss, that is, eddy current loss (ECL), which will increase with the frequency and coil current. In this article, an equivalent circuit model of the eddy current loss of underwater wireless power transmission is established, two methods to reduce the eddy current loss are proposed, and the optimal modulus ratio for the coil current of the dual-coil wireless power transmission system to reduce eddy current loss is calculated. Electromagnetic field (EMF) simulation software verifies the correctness of the two methods, and it is concluded that increasing the phase difference of the coil current or controlling the coil current ratio to ensure that the optimal modulus ratio is in a certain range can reduce the eddy current loss effectively and improve the energy transmission efficiency of the system by about 4~5%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. On-the-Fly Repairing of Atomicity Violations in ARINC 653 Software.
- Author
-
Choi, Eu-teum, Kim, Tae-hyung, Jun, Yong-Kee, Lee, Seongjin, and Han, Mingyun
- Subjects
COMPUTER access control software ,COMPUTER software correctness ,COMPUTER software - Abstract
Airborne health management systems prevent functional failure caused by errors or faults in airborne software. On-the-fly repair of atomicity violations in ARINC 653 concurrent software is critical for guaranteeing the correctness of software execution. This paper introduces RAV (Repairing Atomicity Violation), which efficiently handles atomicity violations. RAV diagnoses an error on the fly by utilizing training results from the software, and repairs it by controlling access to the shared variable of the thread in which the error occurred. The evaluation of RAV measured the time overhead of applying methods from previous works and RAV to five synthetic programs containing an atomicity violation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
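For readers unfamiliar with the bug class RAV targets, a minimal example of an atomicity violation and a lock-based repair looks like this. It is a generic illustration of the bug pattern, not RAV's diagnosis-driven mechanism, and the account scenario is invented.

```python
import threading

class Account:
    """Shared state where a check and an update must execute atomically."""
    def __init__(self, balance):
        self.balance = balance
        self._lock = threading.Lock()

    def withdraw_unsafe(self, amount):
        # Atomicity violation: another thread may withdraw between
        # the balance check and the balance update, overdrawing the account.
        if self.balance >= amount:
            self.balance -= amount
            return True
        return False

    def withdraw_repaired(self, amount):
        # Repair: serialize access to the shared variable so the
        # check-then-update sequence executes as one atomic region.
        with self._lock:
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False
```

Both methods behave identically under a single thread; the difference only shows up under concurrent interleavings, which is what makes this class of bug hard to detect by ordinary testing.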
30. On methods and tools for rigorous system design.
- Author
-
Bliudze, Simon, Katsaros, Panagiotis, Bensalem, Saddek, and Wirsing, Martin
- Subjects
- *
SYSTEMS design , *COMPUTER software correctness , *SYSTEMS software , *SOFTWARE verification - Abstract
Full a posteriori verification of the correctness of modern software systems is practically infeasible due to the sheer complexity resulting from their intrinsic concurrent nature. An alternative approach consists of ensuring correctness by construction. We discuss the Rigorous System Design (RSD) approach, which relies on a sequence of semantics-preserving transformations to obtain an implementation of the system from a high-level model while preserving all the properties established along the way. In particular, we highlight some of the key requirements for the feasibility of such an approach, namely availability of (1) methods and tools for the design of correct-by-construction high-level models and (2) definition and proof of the validity of suitable domain-specific abstractions. We summarise the results of the extended versions of seven papers selected among those presented at the 1 st and the 2 nd International Workshops on Methods and Tools for Rigorous System Design (MeTRiD 2018–2019), indicating how they contribute to the advancement of the RSD approach. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
31. A Unified Analysis for the Free Vibration of the Sandwich Piezoelectric Laminated Beam with General Boundary Conditions under the Thermal Environment.
- Author
-
Gao, Guohua, Sun, Ningze, Shao, Dong, Tao, Yongqiang, and Wu, Wei
- Subjects
- *
LAMINATED composite beams , *FREE vibration , *HAMILTON'S principle function , *SHEAR (Mechanics) , *COMPUTER software correctness - Abstract
This article analyzes the free vibration characteristics of the sandwich piezoelectric beam under elastic boundary conditions and a thermal environment. According to the first-order shear deformation theory and Hamilton's principle, the thermo-electro-elastic coupling equations of the sandwich piezoelectric beam are obtained. Meanwhile, elastic boundary conditions composed of an array of springs are introduced, and the displacement variables and external potential energy of the beam are expressed as wave functions. Using the method of reverberation-ray matrix to integrate and solve the governing equations, a search algorithm based on golden-section search is introduced to calculate the required frequency parameters. A series of numerical results are compared with those reported in the literature and those obtained by simulation software to verify the correctness and versatility of the search algorithm. In addition, three parametric research cases are proposed to investigate the frequency parameters of sandwich piezoelectric beams under different elastic restraint conditions, material parameters, thickness ratios, temperature rises, and external electric potentials. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
32. ThreadAbs: A template to build verified thread-local interfaces with software scheduler abstractions.
- Author
-
Kim, Jieung, Koenig, Jérémie, Chen, Hao, Gu, Ronghui, and Shao, Zhong
- Subjects
- *
COMPUTER software , *COMPUTER software correctness , *SOFTWARE reliability , *RESOURCE allocation , *SOFTWARE engineering , *SOFTWARE verification - Abstract
This paper presents ThreadAbs, an extension of the layer-based software formal verification toolkit CCAL (Gu et al., 2018). ThreadAbs is specifically designed to provide better expressiveness and proof management for thread abstraction in multithreaded libraries. Thread abstraction isolates the behavior of each thread from the others when providing a top-level formal specification for software; compared to the original CCAL, ThreadAbs offers significant improvements in this regard. CCAL is a verification framework that enables a layered approach to building certified software, as demonstrated by multiple examples (Gu et al. 2016; Li et al. 2021; Shin et al. 2019). Its main targets usually include multithreaded libraries, which significantly improve the utilization and isolation of system resources but pose new challenges for formal verification. First, they require a sudden change in the granularity of concurrency during the implementation and verification of the target software. Typically, systems include software schedulers that are built on top of several underlying components in the system (e.g., thread spawn, yield, sleep, and wake-up). Because of the software scheduler, these systems can be divided into low-level components consisting of modules that the software scheduler depends on (e.g., allocators for shared resources and scheduling queues) and high-level components that use software schedulers (e.g., condition variables, semaphores, and IPCs). Formal verification of such systems therefore has to provide a proper method to handle these distinct features, which usually means abstracting other threads' behavior as much as possible to provide an independent thread model and its formal specification. Second, they require handling side effects from other threads, such as dynamic resource allocation from parents, with proper isolation of all threads from each other. 
CCAL has limited support for two crucial aspects of formal verification in multithreaded systems. First, its previous thread abstraction method does not properly handle the side effects caused by a parent thread during dynamic initial state allocation. Second, the proofs produced by CCAL are tied to CertiKOS, which makes it challenging to reuse them in similar proofs that use CCAL as their verification toolkit. To address these issues, we introduce ThreadAbs, a new mechanized methodology that provides proper thread abstraction to reason about multithreaded programs in conjunction with CCAL. We also extend the previous CertiKOS proof with ThreadAbs to demonstrate its usability and expressiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Empirical studies on software traceability: A mapping study.
- Author
-
Charalampidou, Sofia, Ampatzoglou, Apostolos, Karountzos, Evangelos, and Avgeriou, Paris
- Subjects
- *
CARTOGRAPHY software , *COMPUTER software correctness , *EMPIRICAL research , *MAINTAINABILITY (Engineering) - Abstract
During the last decades, software traceability has been studied in a large number of studies, from different perspectives (e.g., how to create traces and what are its benefits). This large body of knowledge needs to be better explored and exploited by both practitioners and researchers: we need an overview of different aspects of traceability and a structured way to assess and compare existing work in order to extend it with new research or apply it in practice. Thus, we have conducted a secondary study on this large corpus of primary studies, focusing on empirical studies on software traceability, without setting any further restrictions in terms of investigating a specific domain or concrete artifacts. The study explores the goals of existing approaches and the empirical methods used for their evaluation. Its main contributions are the investigation of (a) the type of artifacts linked through traceability approaches; (b) the benefits of using artifact traceability approaches; (c) the ways of measuring their benefit; and (d) the research methods used. The results of the study suggest that (i) requirements artifacts are dominant in traceability; (ii) the research corpus focuses on the proposal of novel techniques for establishing traceability; and (iii) the main benefits are the improvement of software correctness and maintainability. Finally, although many studies include some empirical validation, there is still room for improvements and research methods that can be used more extensively. The obtained results are discussed under the prism of both researchers and practitioners and are compared against the state-of-the-art. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
34. Blame and coercion: Together again for the first time.
- Author
-
SIEK, JEREMY G., THIEMANN, PETER, and WADLER, PHILIP
- Subjects
- *
PROGRAMMING languages , *COMPUTER software correctness , *BISIMULATION , *MATHEMATICAL optimization , *PROOF theory - Abstract
C#, Dart, Pyret, Racket, TypeScript, VB: many recent languages integrate dynamic and static types via gradual typing. We systematically develop four calculi for gradual typing and the relations between them, building on and strengthening previous work. The calculi are as follows: $\lambda{B}$ , based on the blame calculus of Wadler and Findler (2009); $\lambda{C}$ , inspired by the coercion calculus of Henglein (1994); $\lambda{S}$ inspired by the space-efficient calculus of Herman, Tomb, and Flanagan (2006); and $\lambda{T}$ based on the threesome calculus of Siek and Wadler (2010). While $\lambda{B}$ and $\lambda{T}$ are little changed from previous work, $\lambda{C}$ and $\lambda{S}$ are new. Together, $\lambda{B}$ , $\lambda{C}$ , $\lambda{S}$ , and $\lambda{T}$ provide a coherent foundation for design, implementation, and optimization of gradual types. We define translations from $\lambda{B}$ to $\lambda{C}$ , from $\lambda{C}$ to $\lambda{S}$ , and from $\lambda{S}$ to $\lambda{T}$. Much previous work lacked proofs of correctness or had weak correctness criteria; here we demonstrate the strongest correctness criterion one could hope for, that each of the translations is fully abstract. Each of the calculi reinforces the design of the others: $\lambda{C}$ has a particularly simple definition, and the subtle definition of blame safety for $\lambda{B}$ is justified by the simple definition of blame safety for $\lambda{C}$. Our calculus $\lambda{S}$ is implementation-ready: the first space-efficient calculus that is both straightforward to implement and easy to understand. We give two applications: first, using full abstraction from $\lambda{C}$ to $\lambda{S}$ to establish an equational theory of coercions; and second, using full abstraction from $\lambda{B}$ to $\lambda{S}$ to easily establish the Fundamental Property of Casts, which required a custom bisimulation and six lemmas in earlier work. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
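The notion of blame that runs through these calculi can be conveyed with a toy dynamic check (invented for illustration; the paper's calculi are far richer): each cast carries a label, and a runtime type failure is attributed to the side whose cast failed.

```python
class Blame(Exception):
    """A failed cast, attributed to the party named by its label."""
    def __init__(self, label):
        super().__init__(f"blame {label}")
        self.label = label

def cast(value, expected_type, label):
    """Check a value against a type; on failure, blame the labelled side."""
    if not isinstance(value, expected_type):
        raise Blame(label)
    return value

def typed_inc(x):
    # The untyped caller is blamed for a bad argument,
    # the typed callee for a bad result.
    x = cast(x, int, "caller")
    return cast(x + 1, int, "callee")
```

This mirrors the slogan of blame safety: a well-typed region of the program is never blamed, only the casts at the boundary with untyped code.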
35. Software Verification using State Diagrams.
- Author
-
Bhowmik, Madhuparna, Chowdhary, Aastha, and Rudra, Bhawana
- Subjects
SOFTWARE verification ,COMPUTER software correctness ,LOGICAL fallacies ,ALGORITHMS ,COMPUTER software development - Abstract
During the development of software, a programmer may unknowingly commit many logical errors, so that the software does not accord with its requirements. Such logical errors affect the correctness of the software. The requirements specify important properties of the software, and knowledge of these properties characterizes the expected behavior of the software, which can be leveraged to find certain logical errors. This paper proposes a method that helps to find bugs and describes a way by which the programmer can specify software requirements. Based on these programmer-specified requirements, the system can automatically verify the software. Moreover, the method proposed in this paper does not need the expected result of a test case to verify the software's correctness; the proposed algorithm relies entirely on the requirements specified by the programmer to find bugs in the software. The software verification process and the algorithm used are explained with the help of a case study. The paper highlights the advantages of the proposed method and algorithm for software verification, along with the implementation details. [ABSTRACT FROM AUTHOR]
- Published
- 2021
36. The simplest binary word with only three squares.
- Author
-
Gabric, Daniel and Shallit, Jeffrey
- Subjects
COMPUTER software correctness ,SQUARE ,VOCABULARY - Abstract
We re-examine previous constructions of infinite binary words containing few distinct squares with the goal of finding the "simplest", in a certain sense. We exhibit several new constructions. Rather than using tedious case-based arguments to prove that the constructions have the desired property, we rely instead on theorem-proving software for their correctness. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
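A square here is a factor of the form xx. As a brute-force sketch (not the authors' theorem-prover-backed argument), the distinct squares occurring in a finite binary word can be enumerated directly:

```python
def distinct_squares(w):
    """Return the set of distinct squares (factors of the form xx) in w."""
    squares = set()
    n = len(w)
    for i in range(n):
        # A square of root length L occupies positions i .. i + 2L - 1.
        for L in range(1, (n - i) // 2 + 1):
            if w[i:i + L] == w[i + L:i + 2 * L]:
                squares.add(w[i:i + 2 * L])
    return squares
```

For example, `distinct_squares("0011")` yields `{"00", "11"}`; checking long prefixes of a candidate infinite word this way only gives evidence, which is why the authors rely on theorem-proving software for actual proofs.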
37. Beyond Relational Databases: Preserving the Data.
- Author
-
Ramalho, José Carlos, Ferreira, Bruno, Faria, Luis, and Ferreira, Miguel
- Subjects
- *
RELATIONAL databases , *INFORMATION storage & retrieval systems , *ELECTRONIC information resources , *DATABASES , *COMPUTER software correctness - Abstract
Relational databases are one of the main technologies supporting information assets in today's organizations. They are designed to store, organize and retrieve digital information, and are such a fundamental part of information systems that most would not be able to function without them. Very often, the information contained in databases is irreplaceable or prohibitively expensive to reacquire; therefore, steps must be taken to ensure that the information within databases is preserved. This paper describes a methodology for long-term preservation of relational databases based on information extraction and format migration to a preservation format. It also presents a tool that was developed to support this methodology: Database Preservation Toolkit (DBPTK), as well as the processes and formats needed to preserve databases. The DBPTK connects to live relational databases and extracts information into formats more adequate for long-term preservation. Supported preservation formats include the SIARD 2, created by a cooperation between the Swiss Federal Archives and the E-ARK project that is becoming a standard in the area. DBPTK has a flexible plugin-based architecture enabling its use for other purposes like database upgrade and database migration between different systems. Presented real case scenarios demonstrate the usefulness, correctness and performance of the tool. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
38. Why There is no General Solution to the Problem of Software Verification.
- Author
-
Symons, John and Horner, Jack K.
- Subjects
- *
SOFTWARE engineering , *SOFTWARE verification , *COMPUTER software correctness , *PHILOSOPHY of science - Abstract
How can we be certain that software is reliable? Is there any method that can verify the correctness of software for all cases of interest? Computer scientists and software engineers have informally assumed that there is no fully general solution to the verification problem. In this paper, we survey approaches to the problem of software verification and offer a new proof for why there can be no general solution. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
39. Parallel self‐testing for device‐independent verifiable blind quantum computation.
- Author
-
Xu, Qingshan, Tan, Xiaoqing, Huang, Rui, and Zeng, Xiaodan
- Subjects
QUANTUM computing ,TECHNOLOGY ,TESTING ,COST ,COMPUTER software correctness - Abstract
With advances in experimental quantum computing, the requirement of verifying the correctness of quantum computation is urgent. Recent protocols for device-independent verifiable blind quantum computation provide a fruitful solution. However, all existing approaches have relatively high overhead. In this paper, we present a parallel self-testing technique to certify the presence of tensor products of Pauli observables on a maximally entangled state. We then utilize our parallel self-testing to propose a device-independent verification protocol. Finally, compared to other existing protocols, our scheme has a lower overhead, costing O(n^{11} log n) Bell pairs, where n is the size of the original computation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
40. Modelling Features-Based Birthmarks for Security of End-to-End Communication System.
- Author
-
Li, Meilian, Nazir, Shah, Khan, Habib Ullah, Shahzad, Sara, and Amin, Rohul
- Subjects
NEVUS ,TELECOMMUNICATION systems ,COMPUTER software industry ,COMPUTER software correctness ,SYSTEMS software ,END-to-end delay - Abstract
A feature-based software birthmark is an essential property of software that can be used to detect software theft and for many other purposes, such as assessing the security of end-to-end communication systems. Research shows that combining a feature-based software birthmark with birthmark estimation yields an accurate and powerful method for detecting software piracy and measuring its extent in a piece of software; it can also guide developers in improving the security of end-to-end communication systems. The modern software industry needs an unbiased method for efficiently comparing the feature-based birthmarks of software, and more concretely for detecting software piracy and assessing the security of end-to-end communication systems. In this paper, we propose a mathematical model, based on a differential system, to represent the feature-based software birthmark. The model provides a distinctive representation of the feature-based birthmark of software, which can then be used for comparing birthmarks and assessing the security of end-to-end communication systems. Results show that the proposed model is effective and correct for feature-based software birthmark comparison and security assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
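The abstract above describes a differential-system model, which is not reproduced here; as a simpler stand-in, the general idea of comparing feature-based birthmarks can be sketched with feature frequency vectors and a containment score, a similarity measure commonly used in birthmark-based piracy detection (all names and features below are illustrative):

```python
# Toy birthmark comparison: a birthmark is a multiset of extracted features,
# and containment measures how much of the original survives in a suspect.
from collections import Counter

def birthmark(features):
    """Build a birthmark as a frequency vector of extracted features."""
    return Counter(features)

def containment(suspect, original):
    """Fraction of the original's birthmark found in the suspect (0..1)."""
    if not original:
        return 0.0
    overlap = sum((suspect & original).values())  # multiset intersection
    return overlap / sum(original.values())

orig = birthmark(["call:strcpy", "loop", "call:strcpy", "cmp"])
susp = birthmark(["call:strcpy", "loop", "cmp", "call:printf"])
score = containment(susp, orig)  # a high score suggests copied code
```

A threshold on the containment score (chosen empirically) would then separate likely-pirated programs from independently written ones.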
41. Properties and numerical simulation for self-weight consolidation of the dredged material.
- Author
-
Wang, Liang, Sun, Jinshan, Zhang, Minsheng, Yang, Lijing, Li, Lei, and Yan, Jinhui
- Subjects
- *
DREDGING spoil , *PORE water pressure , *COMPUTER software correctness , *PARTICLE size distribution , *EXTRACTION apparatus , *COMPOSITE columns - Abstract
Large-strain self-weight consolidation is widely used for the management of dredged material (DM) in dredged material disposal sites. A multilayer vacuum extraction method is developed, consisting of a settling column, a pore pressure measurement apparatus and a multilayer vacuum extraction apparatus. The interface height, water content, excess pore water pressure and grain size distribution, as well as the compressibility and permeability relationships involved in self-weight consolidation, are determined. Experimental results show that the measured data are reasonable and the method is feasible. Gibson's governing equation for one-dimensional finite-strain consolidation is discretised using a modified upwind difference scheme, and a corresponding computer program is written. The computed interface height settlement, void ratio and excess pore water pressure closely approximate the laboratory results, validating the analytical model, the finite difference solution and the correctness of the computer program. The model can provide a satisfactory prediction of the self-weight consolidation of the DM. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
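Gibson's large-strain equation is nonlinear, but the finite-difference time-marching pattern the abstract describes can be illustrated with the simpler linear Terzaghi consolidation equation du/dt = c_v d2u/dz2. This sketch is a stand-in under that simplifying assumption, not the paper's program:

```python
# Explicit finite-difference march for 1-D consolidation (linear Terzaghi
# form used for clarity; Gibson's large-strain equation is nonlinear but
# is discretised with the same marching pattern).
import numpy as np

def consolidate(u0, cv, dz, dt, steps):
    """March excess pore pressure u forward in time; drained top, sealed base."""
    r = cv * dt / dz**2
    assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        un = u.copy()
        un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
        un[0] = 0.0       # free-draining surface: u = 0
        un[-1] = un[-2]   # impermeable base: du/dz = 0
        u = un
    return u

# 0.9 m column, initially 10 kPa excess pressure below a drained surface
u = consolidate([0.0] + [10.0] * 9, cv=1e-7, dz=0.1, dt=3600.0, steps=500)
```

With r = c_v dt / dz^2 kept below 0.5 the scheme is monotone, so the pressure profile dissipates smoothly from the drained surface downward, mirroring the measured excess pore water pressure decay.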
42. A Survey of Timing Verification Techniques for Multi-Core Real-Time Systems.
- Author
-
MAIZA, CLAIRE, RIHANI, HAMZA, RIVAS, JUAN M., GOOSSENS, JOËL, ALTMEYER, SEBASTIAN, and DAVIS, ROBERT I.
- Subjects
- *
SCIENTIFIC literature , *COMPUTER software correctness - Abstract
This survey provides an overview of the scientific literature on timing verification techniques for multi-core real-time systems. It reviews the key results in the field from its origins around 2006 to the latest research published up to the end of 2018. The survey highlights the key issues involved in providing guarantees of timing correctness for multi-core systems. A detailed review is provided covering four main categories: full integration, temporal isolation, integrating interference effects into schedulability analysis, and mapping and allocation. The survey concludes with a discussion of the advantages and disadvantages of these different approaches, identifying open issues, key challenges, and possible directions for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
43. A dual-liquid prism driven by an STM32 microcontroller.
- Author
-
胡思哲, 葛 屹, 邓天豪, and 张 谦
- Subjects
COMPUTER software correctness ,POWER resources ,LIQUID surfaces ,ELECTRONIC control ,ELECTRONIC systems - Abstract
Copyright of Chinese Journal of Liquid Crystal & Displays is the property of Chinese Journal of Liquid Crystal & Displays and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2020
- Full Text
- View/download PDF
44. Field distribution and dispersion characteristics of a coaxial oversized slow wave structure with deep corrugation operating on high-order mode.
- Author
-
Deng, Bingfang, He, Juntao, Ling, Junpu, and Wang, Lei
- Subjects
- *
SLOW wave structures , *COMPUTER software correctness , *DISPERSION (Chemistry) , *SIMULATION software - Abstract
Compared with a traditional slow wave structure (SWS) operating in the fundamental mode, an oversized SWS with deep corrugation operating in a high-order mode has a higher power-handling capacity and a lower beam density. However, for these new structures the traditional Floquet harmonic expansion method cannot be used to analyze the dispersion characteristics. In this paper, we present a new analysis method based on mode matching that is suitable for an SWS with corrugation of arbitrary depth. Moreover, because the fields are matched at the corrugated surfaces of the inner and outer conductors, the method applies whether the conductors are corrugated or uniform. The method is validated by software simulation: the simulation results verify the correctness of the theory and show that the theoretical calculation is more convenient. Using the presented method, the dispersion characteristics, field distribution and coupling impedance are theoretically derived and analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
45. VERIFICATION AND VALIDATION OF A SOFTWARE: A REVIEW OF THE LITERATURE.
- Author
-
Altaie, Atica M., Alsarraj, Rasha Gh., and Al-Bayati, Asmaa H.
- Subjects
SOFTWARE validation ,SOFTWARE verification ,COMPUTER software quality control ,COMPUTER software correctness ,COMPUTER software development - Abstract
With the growth of the Internet, building software has become essential, yet software projects are difficult to carry through successfully, and delivering software of high quality is a necessity. This can be accomplished by applying Verification and Validation (V&V) procedures throughout the development process. The main aim of V&V is to check whether the software being built meets the needs and specifications of its clients. V&V is a collection of testing and analysis activities spanning the software's full life cycle. Rapid developments in software V&V have been important in creating approaches and tools that identify potential concurrency bugs and thereby verify the correctness of software. The main aim of this study is a retrospective review of various research efforts in software V&V and a comparison between them. In today's competitive software world, developers must deliver quality products on time, verify that the software functions properly, and validate the product against each of the client's requirements. The significance of V&V in software development lies in maintaining software quality, and V&V approaches are used in all stages of the System Development Life Cycle. Furthermore, this study presents the objectives of V&V and describes V&V tools that can be used in the software development process to improve software quality. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
46. Secure and trusted partial grey-box verification.
- Author
-
Cai, Yixian, Karakostas, George, and Wassyng, Alan
- Subjects
- *
SOFTWARE architecture , *SYSTEMS development , *COMPUTER software correctness , *SOFTWARE verification - Abstract
A crucial aspect in the development of software-intensive systems is verification. This is the process of checking whether the system has been implemented in compliance with its specification. In many situations, the manufacture of one or more components of the system is outsourced. We study the case of how a third party (the verifier) can verify an outsourced component effectively, without access to all the details of the internal design of that component built by the developer. We limit the design detail that is made available to the verifier to a diagram of interconnections between the different design units within the component, but encrypt the design details within the units and also the intermediate values passed between the design units. We formalize this notion of limited information using tabular expressions to describe the functions in both the specifications and the design. The most common form of verification is testing, and it is known that black-box testing of the component is not effective enough in deriving test cases that will adequately determine the correctness of the implementation, and the safety of its behaviour. We have developed protocols that allow for the derivation of test cases that take advantage of the design details disclosed as described above. We can regard this as partial grey-box testing that does not compromise the developer's secret information. Our protocols work with both trusted and untrusted developers, as well as trusted and untrusted verifiers, and allow for the checking of the correctness of the verification process itself by any third party, and at any time. Currently our results are derived under the simplifying assumption that the software design units are linked acyclically. We leave the lifting of this assumption as an open problem for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
47. Approximation algorithms for querying incomplete databases.
- Author
-
Greco, Sergio, Molinaro, Cristian, and Trubitsyna, Irina
- Subjects
- *
APPROXIMATION algorithms , *POLYNOMIAL time algorithms , *DATABASES , *COMPUTER software correctness , *ALGORITHMS - Abstract
Certain answers are a widely accepted semantics of query answering over incomplete databases. As their computation is a coNP-hard problem, recent research has focused on developing (polynomial-time) evaluation algorithms with correctness guarantees, that is, techniques computing a sound but possibly incomplete set of certain answers. The aim is to make the computation of certain answers feasible in practice, settling for under-approximations. In this paper, we present novel evaluation algorithms with correctness guarantees, which provide better approximations than current techniques while retaining polynomial-time data complexity. The central tools of our approach are conditional tables and the conditional evaluation of queries. We propose different strategies to evaluate conditions, leading to different approximation algorithms: more accurate evaluation strategies have higher running times, but they pay off with more certain answers being returned. Thus, our approach offers a suite of approximation algorithms enabling users to choose the technique that best meets their needs in terms of balance between efficiency and quality of the results. • Algorithms to compute sound sets of certain query answers over incomplete databases. • Approximation algorithms trading off evaluation time vs. quality of the approximation. • Suite of algorithms enabling users to choose the method that best meets their needs. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
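The idea of a sound but possibly incomplete set of certain answers can be seen in a toy example. This is not the paper's conditional-table algorithm, only the baseline notion it improves on: a selection returns a tuple only when its value is known, so everything returned is certain, at the cost of missing answers that would require reasoning about nulls:

```python
# Toy under-approximation of certain answers over an incomplete table.
# A tuple qualifies for the selection column = value only if its value is
# known (non-null) and equal to value: sound, but possibly incomplete.
NULL = None

def certain_select(table, column, value):
    """Under-approximate the certain answers of sigma_{column = value}(table)."""
    return [row for row in table
            if row[column] is not NULL and row[column] == value]

people = [
    {"name": "ann", "city": "rome"},
    {"name": "bob", "city": NULL},   # unknown city: excluded, to stay sound
    {"name": "eve", "city": "rome"},
]
answers = certain_select(people, "city", "rome")
```

Every returned tuple is an answer in every completion of the database; the algorithms surveyed in the abstract refine exactly this kind of evaluation, using conditions on nulls to recover more of the missed certain answers.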
48. FPGA-based hardware implementation of a BAN authentication algorithm.
- Author
-
邓 鸿, 林金朝, 庞 宇, and 赵艳霞
- Subjects
BODY area networks ,INTELLIGENT sensors ,COMPUTER software correctness ,POWER resources ,DATA privacy - Abstract
Copyright of Journal of Chongqing University of Posts & Telecommunications (Natural Science Edition) is the property of Chongqing University of Posts & Telecommunications and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2019
- Full Text
- View/download PDF
49. The Influence of Aggregate Diameter Description on the Correctness of the Sedimentation Model – CFD Investigations.
- Author
-
Moskal, A. and Fus, A. A.
- Subjects
- *
MASS transfer coefficients , *COMPUTATIONAL fluid dynamics , *SEDIMENTATION & deposition , *DIAMETER , *COMPUTER software correctness - Abstract
The results of numerical simulations of aggregates undergoing sedimentation show that the settling of particles is influenced by many parameters, including the assumed aggregate diameter. The commonly used equivalent-sphere approach does not fully reflect the movement of the particles. CFD (Computational Fluid Dynamics) simulations were performed for falling aggregates, and the diameters most representative of each particle were identified. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
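One reason the assumed aggregate diameter matters so much is that, in the laminar regime, Stokes' terminal settling velocity grows with the square of the particle diameter, so a poorly chosen equivalent diameter distorts the predicted settling strongly. A minimal sketch under standard Stokes-law assumptions (not the paper's CFD setup; the material values below are generic):

```python
# Stokes terminal settling velocity of a sphere in the laminar regime:
# v = (rho_p - rho_f) * g * d^2 / (18 * mu), so v scales with d squared.

def stokes_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity [m/s] of a sphere of diameter d [m]
    in a fluid of density rho_f [kg/m^3] and viscosity mu [Pa*s]."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

v_small = stokes_velocity(50e-6, rho_p=2650.0)   # 50 um quartz-density particle
v_large = stokes_velocity(100e-6, rho_p=2650.0)  # doubling d quadruples v
```

Because of this quadratic sensitivity, replacing an irregular aggregate by the wrong equivalent sphere can misestimate its settling velocity by a large factor, which is what the CFD comparison in the abstract quantifies.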
50. A FRAMEWORK FOR VALIDATING INFORMATION SYSTEMS RESEARCH BASED ON A PLURALIST ACCOUNT OF TRUTH AND CORRECTNESS.
- Author
-
Mingers, John and Standing, Craig
- Subjects
INFORMATION storage & retrieval systems ,DESIGN science ,COMPUTER software correctness ,ACTION research ,COMPUTER simulation - Abstract
Research in information systems includes a wide range of approaches that make a contribution in terms of knowledge, understanding, or practical developments. In these days of "fake news" and vast amounts of spurious internet content, scholarly research needs above all to be able to demonstrate its validity -- are its findings true, or its recommendations correct? However, empirical studies show that discussion of validity in research is often weak. In this paper we examine the nature of truth, and relatedly of correctness, in order to construct a validation framework that can potentially encompass all the varied forms of research. Within philosophy, there has been much debate about the nature of truth -- is it correspondence, coherence, consensus, or pragmatism? In fact, current debates revolve around the idea of a pluralist view of truth -- that there may be different forms of truth depending on context or domain. Related to truth is the wider concept of correctness -- propositions may be true (and therefore correct), but correctness can also be applied to actions, performances, or behavior. Based on these two concepts, we develop a framework for research validity and apply it to a range of research forms including positivist, mathematical, interpretive, design science, critical and action-oriented. The benefits are: i) a greater and more explicit focus on validity criteria will produce better research; ii) a single framework can provide some commonality between what at times seem to be conflicting approaches to research; iii) making criteria explicit should encourage debate and further development. [ABSTRACT FROM AUTHOR]
- Published
- 2019