582 results
Search Results
2. An Efficient Quantum Circuits Optimizing Scheme Compared with QISKit (Short Paper)
- Author
-
Hong Xiang, Xin Zhang, and Tao Xiang
- Subjects
Scheme (programming language), Computer science, Quantum Physics, Quantum circuit, Quantum gate, Computer engineering, Qubit, Quantum algorithm, Quantum, Quantum computer, Electronic circuit - Abstract
Recently, the development of quantum chips has made great progress: the number of qubits is increasing and fidelity is getting higher. However, the qubits of these chips are not always fully connected, which raises additional barriers to implementing quantum algorithms and programming quantum programs. In this paper, we introduce a general circuit optimizing scheme that can efficiently adjust and optimize quantum circuits for an arbitrary given qubit layout by adding extra quantum gates, exchanging qubits and merging single-qubit gates. Compared with the optimizing algorithm of IBM’s QISKit, our scheme consumes only 74.7% of the quantum gates and 12.9% of the execution time on average.
- Published
- 2019
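The circuit-adjustment idea in the abstract above (inserting extra gates and exchanging qubits so two-qubit gates respect a restricted connectivity) can be sketched for the simplest case, a line topology. This is an illustrative sketch, not the paper's algorithm; `route_on_line` and its gate tuples are invented names.

```python
def route_on_line(gates, n):
    """Insert SWAPs so each CNOT acts on adjacent wires of a line topology.

    gates: list of (control, target) logical-qubit pairs; n: qubit count.
    Returns the physical gate list plus the final wire -> logical mapping.
    """
    phys = list(range(n))                  # phys[wire] = logical qubit held there
    routed = []
    for c, t in gates:
        wc, wt = phys.index(c), phys.index(t)
        step = 1 if wc < wt else -1
        while abs(wc - wt) > 1:            # walk the control next to the target
            routed.append(("SWAP", wc, wc + step))
            phys[wc], phys[wc + step] = phys[wc + step], phys[wc]
            wc += step
        routed.append(("CNOT", wc, wt))
    return routed, phys
```

For a CNOT between qubits 0 and 2 on a 3-wire line, one SWAP suffices before the CNOT becomes local; real routers (including QISKit's) additionally search for orderings that minimize the inserted gate count.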
3. Daisy - Framework for Analysis and Optimization of Numerical Programs (Tool Paper)
- Author
-
Anastasiia Izycheva, Eva Darulova, Fariha Nasir, Fabian Ritter, Robert Bastian, and Heiko Becker
- Subjects
Modular structure, Computer science, Scala, Computation, Reuse, Input language, Computer engineering, Optimization methods, Fixed-point arithmetic - Abstract
Automated techniques for the analysis and optimization of finite-precision computations have recently garnered significant interest. Most of these were, however, developed independently; as a consequence, reuse and combination of the techniques is challenging, and many of the underlying building blocks have been re-implemented several times, including in our own tools. This paper presents a new framework, called Daisy, which provides in a single tool the main building blocks for accuracy analysis of floating-point and fixed-point computations that have emerged from recent related work. Together with its modular structure and optimization methods, Daisy allows developers to easily recombine, explore and develop new techniques. Daisy’s input language, a subset of Scala, and its limited dependencies furthermore make it user-friendly and portable.
- Published
- 2018
4. An Interference-Aware Strategy for Co-locating High Performance Computing Applications in Clouds
- Author
-
Maicon Melo Alves, Lúcia Maria de A. Drummond, Yuri Frota, and Luan Teylo
- Subjects
Computer engineering, Virtual machine, Computer science, Benchmark (computing), Supercomputer, Heuristics, Interference (wave propagation) - Abstract
Cross-interference may happen when applications share a common physical machine, negatively affecting their performance. This problem occurs frequently when high performance applications are executed in clouds. Some papers in the related literature have considered this problem when proposing strategies for Virtual Machine Placement; however, they have neither employed a suitable method for predicting interference nor considered minimizing the number of used physical machines and the interference at the same time. In this paper, we present a solution based on the Iterated Local Search framework to solve the Interference-aware Virtual Machine Placement Problem for HPC applications in Clouds (IVMP). This problem aims to minimize, at the same time, the interference suffered by HPC applications which share common physical machines and the number of physical machines used to allocate them. Experiments were conducted in a real scenario using applications from the oil and gas industry and from the HPCC benchmark. They showed that our method reduced interference by more than 40%, using the same number of physical machines as the most widely employed heuristics for the problem.
- Published
- 2020
5. Graphics Processing Methods Based on Deep Learning in the Context of Big Data
- Author
-
Wei Yang
- Subjects
Computer engineering, Computer science, Synchronization (computer science), Graph traversal, Overhead (computing), Hardware acceleration, Image processing, Central processing unit, Load balancing (computing), Graphics - Abstract
The purpose of this article is to conduct research on graphics processing (GP) methods based on DL in the context of big data (BD). First, this article studies the characteristics of current BD and investigates related applications. Second, based on DL, it proposes a method for processing graphics content that uses three different DL network structures to extract graphic features; the OpenMP parallel model is used to optimize the graph search algorithm in parallel on the CPU platform, exploiting the principle of program locality, reducing synchronization overhead and balancing load. Third, in view of the irregular memory accesses of the graph search algorithm, an FPGA hardware accelerator for the algorithm is customized using BD technology. Finally, it introduces development tools for DL algorithm hardware acceleration and compares them with traditional processing methods. Comparative experiments show that the DL-based image processing method proposed in this paper achieves an average recognition rate about 10% higher than traditional processing methods, providing an important reference for the development of image processing.
- Published
- 2021
6. Training with Reduced Precision of a Support Vector Machine Model for Text Classification
- Author
-
Marcin Pietron, Dominik Żurek, and Kazimierz Wiatr
- Subjects
Reduction (complexity), Support vector machine, Computer engineering, Computer science, Computation, Memory footprint, Central processing unit, General-purpose computing on graphics processing units, Quantization (image processing), Field-programmable gate array - Abstract
This paper presents the impact of using quantization on the efficiency of multi-class text classification in the training process of a support vector machine (SVM). This work focuses on comparing the efficiency of an SVM model trained using reduced precision with its original form. The main advantage of using quantization is the decrease in computation time and memory footprint on dedicated hardware platforms which support low-precision computation, such as GPUs (16-bit) or FPGAs (any bit width). The paper presents the impact of reducing the precision of the SVM training process on text classification accuracy. The CPU implementation was performed using the OpenMP library. Additionally, results of the GPGPU implementation using double, single and half precision are presented.
- Published
- 2021
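The abstract above does not give implementation details; as a rough illustration of reduced-precision SVM training, the sketch below runs a hinge-loss subgradient update in NumPy at float64 and float16 and compares accuracies on a toy separable dataset. The Pegasos-style update and all names are our own choices, not the paper's.

```python
import numpy as np

def train_linear_svm(X, y, dtype, lr=0.1, lam=0.01, epochs=50):
    """Hinge-loss subgradient descent carried out entirely at `dtype` precision."""
    X, y = X.astype(dtype), y.astype(dtype)
    w = np.zeros(X.shape[1], dtype=dtype)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (xi @ w)
            # Subgradient of lam/2*||w||^2 + max(0, 1 - margin)
            grad = lam * w - (yi * xi if margin < 1 else 0)
            w = (w - dtype(lr) * grad).astype(dtype)
    return w

rng = np.random.default_rng(0)
y = np.where(rng.integers(0, 2, 200) == 0, -1.0, 1.0)
X = rng.normal(scale=0.5, size=(200, 2)) + np.outer(y, [1.5, 1.5])  # separable classes
accuracy = lambda w: np.mean(np.sign(X @ w.astype(np.float64)) == y)
w64 = train_linear_svm(X, y, np.float64)
w16 = train_linear_svm(X, y, np.float16)
```

On well-separated data the float16 model reaches essentially the same accuracy as float64, which is the effect the paper quantifies on real text data and real low-precision hardware.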
7. Brief Review of Low-Power GPU Techniques
- Author
-
Pragati Sharma and Hussain Al-Asaad
- Subjects
Constraint (information theory), Scope (project management), Exploit, Computer engineering, Computer science, Feature (computer vision), Overhead (computing), Architecture, Baseline (configuration management), Implementation - Abstract
This paper reviews and analyzes different power-saving techniques in GPUs. The survey attempts to analyze and understand the architectural aspects of novel GPU design suggestions that researchers have proposed as cost-effective responses to rising GPU power consumption. Some of these power-saving insights venture into the space of system-level low-power implementations, while others exploit application-dependent optimizations, making this study an interesting mix of architecture-, application-, and system-level implementations. This paper also focuses on underlining the impact of any power-saving feature in terms of area overhead and timing-constraint latencies. This gives the reader a complete picture of the scope and challenges in the low-power GPU exploration space. The study is limited by time and hardware resources, and there is an excellent opportunity to quantify the reviewed techniques and compare the results against a common baseline.
- Published
- 2021
8. A Self-adaptivity Indoor Ranging Algorithm Based on Channel State Information with Weight Gray Prediction Model
- Author
-
Joon Goo Park and Jingjing Wang
- Subjects
Computational complexity theory, Computer engineering, Orthogonal frequency-division multiplexing, Channel state information, Computer science, MIMO, Physical layer, Ranging, Communications protocol, Communication channel - Abstract
With the development of Wi-Fi technology, the IEEE 802.11n communication protocol and subsequent wireless LAN protocols use multiple-input multiple-output (MIMO), orthogonal frequency-division multiplexing (OFDM) and other technologies. Channel characteristics between Wi-Fi transceivers can be estimated at the physical layer and stored in the form of channel state information (CSI). In this paper, we propose a CSI-based indoor ranging method using a gray prediction model that generates CSI predictions. The paper also provides experimental comparisons of our proposed data generation method with existing indoor ranging methods. Experimental results show that the proposed approach achieves a significant ranging accuracy improvement over an effective existing CSI ranging method, while incurring much lower computational complexity. The proposed method can also obtain more accurate ranging results in corner areas of the indoor environment where Wi-Fi signals can hardly be obtained.
- Published
- 2021
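The grey prediction family the abstract above builds on is typically the classic GM(1,1) model; the paper's weighted variant is not specified, so as background the sketch below implements plain GM(1,1) forecasting with NumPy (function name ours).

```python
import numpy as np

def gm11_predict(x0, k_ahead=1):
    """Fit a GM(1,1) grey model to series x0 and forecast k_ahead future values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # mean sequence of x1
    # Least-squares fit of x0(k) = -a*z1(k) + b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    # Time response: x1_hat(k) = (x0(0) - b/a) * exp(-a*k) + b/a
    x1_hat = (x0[0] - b / a) * np.exp(-a * np.arange(n + k_ahead)) + b / a
    x0_hat = np.diff(x1_hat)                      # restore by first differences
    return x0_hat[-k_ahead:]
```

GM(1,1) tracks near-exponential trends from very few samples, which is why it suits predicting smoothly drifting CSI measurements.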
9. Adaptive Tensor-Train Decomposition for Neural Network Compression
- Author
-
Yang Zhou, Zengrui Zhao, Dongxiao Yu, and Yanwei Zheng
- Subjects
Rank (linear algebra), Artificial neural network, Computer science, Computation, Dynamic programming, Acceleration, Computer engineering, Decomposition (computer science), Quantization (image processing), Mobile device - Abstract
Directly deploying complex deep neural networks on mobile devices with limited computing power and battery life is difficult and costly. This paper aims to solve this problem by improving the compactness of the model and the efficiency of computation. Building on MobileNet, a mainstream lightweight neural network, we propose an Adaptive Tensor-Train Decomposition (ATTD) algorithm to solve the cumbersome problem of finding the optimal decomposition rank. Since tensor-train decomposition yields little forward acceleration on the GPU side, our strategy of choosing lower decomposition dimensions and a moderate decomposition rank, together with the use of dynamic programming, effectively reduces the number of parameters and the amount of computation. We have also set up a real-time target network for mobile devices. Supported by extensive experimental results, the method proposed in this paper greatly reduces the number of parameters and the amount of computation, improving the model’s inference speed on mobile devices.
- Published
- 2021
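Tensor-train decomposition itself (though not the paper's adaptive rank search) can be sketched with the standard TT-SVD algorithm: repeatedly reshape, take a truncated SVD, and keep the left factor as a third-order core.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a tensor into TT cores of shape (r_prev, n_k, r_next),
    with all TT ranks capped at max_rank."""
    shape = tensor.shape
    cores, r_prev = [], 1
    mat = np.asarray(tensor, dtype=float)
    for n_k in shape[:-1]:
        mat = mat.reshape(r_prev * n_k, -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))                 # rank truncation
        cores.append(U[:, :r].reshape(r_prev, n_k, r))
        mat = S[:r, None] * Vt[:r]                # carry the remainder forward
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_tensor(cores):
    """Contract TT cores back into a dense tensor."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([res.ndim - 1], [0]))
    return res[0, ..., 0]
```

With unrestricted ranks the reconstruction is exact; shrinking `max_rank` trades accuracy for a much smaller parameter count, which is the compression knob ATTD tunes automatically.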
10. SANSCrypt: Sporadic-Authentication-Based Sequential Logic Encryption
- Author
-
Yinghua Hu, Shahin Nazarian, Kaixin Yang, and Pierluigi Nuzzo
- Subjects
Hardware security module, Authentication, Sequential logic, Finite-state machine, Computer engineering, Computer science, Key (cryptography), State (computer science), Encryption, Reset (computing) - Abstract
Sequential logic encryption is a countermeasure against reverse engineering of sequential circuits, based on modifying the original finite state machine of the circuit such that the circuit enters a wrong state upon being reset. A user must apply a certain sequence of input patterns, i.e., a key sequence, for the circuit to transition to the correct state. The circuit then remains functional unless it is powered off or reset again. Most sequential encryption methods require the correct key to be applied only once. In this paper, we propose a novel Sporadic-Authentication-Based Sequential Logic Encryption method (SANSCrypt) that circumvents the potential vulnerability associated with a single-authentication mechanism. SANSCrypt adds a new temporal dimension to logic encryption by requiring the user to sporadically perform multiple authentications according to a protocol based on pseudo-random number generation. We provide implementation details of SANSCrypt and present a design that is amenable to time-sensitive applications. In SANSCrypt, the authentication task does not significantly disrupt normal circuit operation, as it can be interrupted or postponed upon request from a high-priority task with minimal impact on overall performance. Analysis and validation results on a set of benchmark circuits show that SANSCrypt offers substantial output corruptibility if the key sequences are applied incorrectly. Moreover, it exhibits exponential resilience to existing attacks, including SAT-based attacks, while maintaining a reasonably low overhead.
- Published
- 2021
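The two ingredients named in the abstract above, key-sequence authentication from reset and PRNG-scheduled re-authentication, can be modeled with a toy FSM. This is purely illustrative and is not the paper's construction; the class, constants and output strings are invented.

```python
import random

class SequentialLockFSM:
    """Toy model of sequential logic encryption with sporadic re-authentication."""
    def __init__(self, key_seq, seed=0):
        self.key_seq = list(key_seq)
        self.rng = random.Random(seed)     # stands in for the on-chip PRNG
        self.reset()

    def reset(self):
        self.locked = True                 # reset always lands in the locked state
        self.pending = list(self.key_seq)
        self.until_reauth = None

    def clock(self, inp):
        if self.locked:
            # A wrong input restarts authentication from scratch.
            if self.pending and inp == self.pending[0]:
                self.pending.pop(0)
                if not self.pending:
                    self.locked = False
                    self.until_reauth = self.rng.randint(3, 6)
            else:
                self.pending = list(self.key_seq)
            return "corrupted"             # outputs are corrupted while locked
        self.until_reauth -= 1
        if self.until_reauth == 0:
            self.locked = True             # sporadically demand the key again
            self.pending = list(self.key_seq)
        return f"out({inp})"
```

A single-authentication scheme corresponds to `until_reauth` never expiring; SANSCrypt's point is that an attacker who extracts one unlocking sequence still fails at the next PRNG-scheduled authentication.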
11. Implementation of Neural Network Regression Model for Faster Redshift Analysis on Cloud-Based Spark Platform
- Author
-
Pavan Chakraborty, Krishna Pratap Singh, Snehanshu Saha, and Snigdha Sen
- Subjects
Task (computing), Computer engineering, Artificial neural network, Computer science, Spark (mathematics), Scalability, Process (computing), CPU time, Cloud computing, Distributed File System - Abstract
Since observational astronomy has recently turned into data-driven astronomy, effectively analyzing this huge volume of data to extract useful information is becoming an important and essential task day by day. In this paper, we developed a neural network model to analyze redshift data of millions of extragalactic objects. To do so, two different approaches for faster training of neural networks are proposed. The first approach trains the model using a Lipschitz-based adaptive learning rate on a single node/machine, whereas the second approach processes astronomy data in a multinode clustered environment; it can scale up to accommodate multiple nodes when necessary to handle bulk data using Apache Spark and Elephas. Additionally, this paper addresses scalability and storage issues by implementing the model on the cloud. We used the distributed processing capability of Spark, which reads data directly from HDFS (Hadoop Distributed File System) across multiple machines, and our experimental results show that these approaches reduce training time and CPU time tremendously, which is a crucial requirement when dealing with extensive datasets. Although we tested our experiment on a subset of the data, the approach can be scaled to process data of any size without much hurdle.
- Published
- 2021
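The "Lipschitz-based adaptive learning rate" is only named, not specified, in the abstract above. A common version of the idea estimates a local Lipschitz constant of the gradient from successive iterates and steps with its inverse; the NumPy sketch below shows that heuristic on a toy objective (the paper's exact scheme may differ, and all names here are ours).

```python
import numpy as np

def lipschitz_adaptive_gd(grad, x0, steps=100, eps=1e-12):
    """Gradient descent with step size 1/L_hat, where L_hat is a secant
    estimate of the gradient's local Lipschitz constant."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - 0.01 * g_prev                 # small bootstrap step
    for _ in range(steps):
        g = grad(x)
        # L_hat = ||g(x) - g(x_prev)|| / ||x - x_prev||
        L_hat = np.linalg.norm(g - g_prev) / (np.linalg.norm(x - x_prev) + eps)
        lr = 1.0 / (L_hat + eps)               # adaptive learning rate
        x_prev, g_prev = x, g
        x = x - lr * g
    return x
```

No learning-rate schedule has to be hand-tuned: for a quadratic with unit curvature the estimate is exact and the method converges almost immediately.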
12. Efficient and Secure Digital Signature Scheme for Post Quantum Epoch
- Author
-
Maksim Iavich, Andrii Volodymyrovych Tolbatov, Sergiy Gnatyuk, Lela Mirtskhulava, and Giorgi Iashvili
- Subjects
Computer science, Hash function, Quantum key distribution, Encryption, Signature (logic), Computer engineering, Digital signature, Function (engineering), Quantum, Quantum computer - Abstract
A massive release of quantum computers is expected in the near future. Quantum computers can easily break the cryptographic schemes used in practice, so classical encryption systems have become vulnerable to quantum computer-based attacks. This motivates research efforts looking for encryption schemes that are immune to such attacks. This paper describes digital signature schemes that are safe against quantum computer attacks but have various efficiency problems: the signature size is very large and one-way functions are used many times during the signing process. The paper offers ways of reducing the signature size and accelerating the use of one-way functions. It is proposed to integrate quantum key distribution algorithms into the scheme, and to use a BLAKE-family hash function as the one-way function.
- Published
- 2021
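The size problem mentioned above is concrete in hash-based one-time signatures. A minimal Lamport scheme, shown below with BLAKE2b as the one-way function (the abstract suggests a BLAKE-family hash; the scheme and parameters here are a textbook illustration, not the paper's construction), needs 256 preimage pairs as the private key and an 8 KB signature for one 256-bit digest.

```python
import os, hashlib

H = lambda b: hashlib.blake2b(b, digest_size=32).digest()  # one-way function

def keygen(bits=256):
    """One secret pair per message bit; publish the hashes of both halves."""
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(bits)]
    pk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, pk

def sign(msg, sk):
    """Reveal, per digest bit, the secret half selected by that bit."""
    d = H(msg)
    bits = [(d[i // 8] >> (i % 8)) & 1 for i in range(len(sk))]
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(msg, sig, pk):
    d = H(msg)
    bits = [(d[i // 8] >> (i % 8)) & 1 for i in range(len(pk))]
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits)))
```

Every signature reveals 256 x 32 bytes of key material and each key pair may sign only once, which is exactly the kind of overhead the paper's optimizations target.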
13. Layout-Agnostic Order-Batching Optimization
- Author
-
Jacek Malec, Johan Oxenstierna, and Volker Krueger
- Subjects
Strong solutions, Order picking, Computer engineering, Order (exchange), Computer science, Discrete optimization, Benchmark (computing), Travel cost, State (computer science), Material handling - Abstract
Order-batching is an important methodology in warehouse material handling. This paper addresses three identified shortcomings in the current literature on order-batching optimization. The first concerns the overly large dependence on conventional warehouse layouts. The second is a lack of proposed optimization methods capable of producing approximate solutions in minimal computational time. The third is a scarcity of benchmark datasets, which are necessary for data-driven performance evaluation. This paper introduces an optimization algorithm, SBI, capable of generating reasonably strong solutions to order-batching problems for any warehouse layout at great speed. On an existing benchmark dataset for a conventional layout, Foodmart, results show that the algorithm on average used 6.9% of the computational time and 105.8% of the travel cost relative to the state of the art. New benchmark instances and proposed solutions for various layouts and problem settings have been shared in a public repository.
- Published
- 2021
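SBI's internals are not described in the abstract above. As background, a generic greedy seed-and-extend batching heuristic over arbitrary pick-location coordinates (hence layout-agnostic) looks like this, with a nearest-neighbour tour as the travel-cost proxy; all names and the cost model are our own assumptions.

```python
import math

def tour_len(points, depot=(0.0, 0.0)):
    """Nearest-neighbour tour length through all pick locations, from/to the depot."""
    left, pos, total = list(points), depot, 0.0
    while left:
        nxt = min(left, key=lambda p: math.dist(pos, p))
        total += math.dist(pos, nxt)
        left.remove(nxt)
        pos = nxt
    return total + math.dist(pos, depot)

def greedy_batch(orders, capacity):
    """orders: {order_id: [(x, y), ...]}.  Seed each batch with the costliest
    remaining order, then greedily add the order that raises the tour least."""
    todo = dict(orders)
    batches = []
    while todo:
        seed = max(todo, key=lambda o: tour_len(todo[o]))
        batch, picks = [seed], list(todo.pop(seed))
        while todo and len(batch) < capacity:
            best = min(todo, key=lambda o: tour_len(picks + todo[o]) - tour_len(picks))
            batch.append(best)
            picks += todo.pop(best)
        batches.append(batch)
    return batches
```

Because the only layout knowledge lives in the distance function, swapping in a rectilinear-aisle or arbitrary-graph metric changes nothing else, which is the essence of layout-agnostic batching.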
14. Application of Generative Neural Networks and Nondestructive Testing in Defect Detection Problem
- Author
-
P. V. Vasiliev and Alexander Senichev
- Subjects
Data set, Set (abstract data type), Computer engineering, Artificial neural network, Computer science, Nondestructive testing, Ultrasonic sensor, Object (computer science), Convolutional neural network, Field (computer science) - Abstract
The development of machine learning methods has given new impetus to the field of solving inverse mechanics problems. Many papers show the successful combination of well-proven ultrasonic, magnetic and thermal non-destructive testing techniques with the latest methods based on machine learning, and neural networks in particular. This paper demonstrates the potential of applying machine learning methods to the task of two-dimensional ultrasonic imaging. We have built a model of acoustic ultrasonic non-destructive testing that simulates the sounding of the inspected object and the propagation of an ultrasonic wave in it. Through a set of numerical experiments we created a data set for training convolutional neural networks. The presented results show a high degree of informativeness of the ultrasonic response and its agreement with the real shape of the internal defect inside the inspected object.
- Published
- 2021
15. A Multi-GPU Approach to Fast Wildfire Hazard Mapping
- Author
-
Donato D'Ambrosio, Salvatore Di Gregorio, Giuseppe A. Trunfio, Rocco Rongo, Giuseppe Filippone, and William Spataro
- Subjects
Computer engineering, Computer science, Simulation modeling, Process (computing), Point (geometry), Forestry, Multi GPU, Raster graphics, General-purpose computing on graphics processing units, Supercomputer, Cellular automaton - Abstract
Burn probability maps (BPMs) are among the most effective tools to support strategic wildfire and fuels management. In such maps, an estimate of the probability of being burned by a wildfire is assigned to each point of a raster landscape. A typical approach to building BPMs is based on the explicit propagation of thousands of fires using accurate simulation models. However, given the high number of required simulations, for a large area such processing usually requires high performance computing. In this paper, we propose a multi-GPU approach for accelerating the process of BPM building. The paper illustrates some alternative implementation strategies and discusses the achieved speedups on a real landscape.
- Published
- 2014
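The BPM construction described above (many simulated fires, per-cell burn counts) reduces to a Monte Carlo loop. The multi-GPU parallelization is the paper's contribution; the serial sketch below only shows the estimator itself, using a toy stochastic spread model rather than the paper's cellular-automaton fire model.

```python
import random

def simulate_fire(grid_w, grid_h, ignition, p_spread, rng):
    """One stochastic fire: each burning cell ignites each 4-neighbour with p_spread."""
    burned = {ignition}
    frontier = [ignition]
    while frontier:
        x, y = frontier.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < grid_w and 0 <= ny < grid_h and (nx, ny) not in burned:
                if rng.random() < p_spread:
                    burned.add((nx, ny))
                    frontier.append((nx, ny))
    return burned

def burn_probability_map(grid_w, grid_h, n_sims, p_spread=0.4, seed=0):
    """Per-cell burn frequency over n_sims fires with random ignition points."""
    rng = random.Random(seed)
    counts = [[0] * grid_w for _ in range(grid_h)]
    for _ in range(n_sims):
        ign = (rng.randrange(grid_w), rng.randrange(grid_h))
        for x, y in simulate_fire(grid_w, grid_h, ign, p_spread, rng):
            counts[y][x] += 1
    return [[c / n_sims for c in row] for row in counts]
```

Each simulated fire is independent, so the outer loop partitions trivially across GPUs or nodes, which is exactly what makes the problem a good multi-GPU target.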
16. Making Picnic Feasible for Embedded Devices
- Author
-
Christian Steger, Johannes Winkler, and Andreas Wallner
- Subjects
Scheme (programming language), Standardization, Digital signature, Computer engineering, Computer science, Picnic, Process (computing), Usable, Mathematical proof, Block cipher - Abstract
Picnic is a post-quantum digital signature scheme whose security is based on the difficulty of inverting a symmetric block cipher and on zero-knowledge proofs. Picnic is one of the alternate candidates of the third round of the standardization process; hence, it could be standardized in case any weakness is found in the round-three candidates. Based on our paper at the 23rd Euromicro Conference ([6]), we found an optimization that reduces memory usage to make Picnic usable on IoT devices. This paper focuses on approaches for implementing this optimization. As a proof of concept, we implemented Picnic on an ST Nucleo-L476RG and measured the cycle counts of the implementation.
- Published
- 2020
17. Tensor-Based CUDA Optimization for ANN Inferencing Using Parallel Acceleration on Embedded GPU
- Author
-
Rameshkumar G. Ramaswamy, Ahmed Khamis Abdullah Al Ghadani, and Waleeja Mateen
- Subjects
Artificial neural network, Process (engineering), Computer science, Image processing, Mobile robot, Automation, Identification (information), CUDA, Computer engineering, Robot - Abstract
With image processing, robots acquired visual perception skills, enabling them to become autonomous. Since the emergence of Artificial Intelligence (AI), sophisticated tasks such as object identification have become possible through inferencing Artificial Neural Networks (ANN). Be that as it may, Autonomous Mobile Robots (AMR) are Embedded Systems (ESs) with limited on-board resources. Thus, efficient techniques in ANN inferencing are required for real-time performance. This paper presents the process of optimizing ANN inferencing using tensor-based optimization on an embedded Graphical Processing Unit (GPU) with the Compute Unified Device Architecture (CUDA) platform for parallel acceleration on ES. This research evaluates a renowned network, You-Only-Look-Once (YOLO), on an NVIDIA Jetson TX2 System-On-Module (SOM). The findings of this paper show a significant improvement in inferencing speed in terms of Frames-Per-Second (FPS), up to 3.5 times the non-optimized inferencing speed. Furthermore, the current CUDA model and TensorRT optimization techniques are studied, comments are made on their implementation for inferencing, and improvements are proposed based on the results acquired. These findings will contribute to ES developers, and industries will benefit from real-time inferencing performance for AMR automation solutions.
- Published
- 2020
18. Profiling Dilithium Digital Signature Traces for Correlation Differential Side Channel Attacks
- Author
-
Charis Dimopoulos, Odysseas Koufopavlou, and Apostolos P. Fournaris
- Subjects
Profiling (computer programming), Standardization, Computer science, Cryptography, Dilithium, Timing attack, Computer engineering, Digital signature, NIST, Side channel attack - Abstract
A significant concern for the candidate schemes of the NIST post-quantum cryptography standardization project is the protection they offer against side-channel attacks. One of the candidate schemes currently in the NIST standardization race is the Dilithium signature scheme. This post-quantum signature solution has been analyzed for side-channel attack resistance, especially against timing attacks. Expanding our attention to other types of side-channel analysis, this work focuses on correlation-based differential side-channel attacks on the polynomial multiplication operation of Dilithium digital signature generation. In this paper, we describe how a Correlation Power Attack should be adapted for Dilithium signature generation and describe the attack process to be followed. We determine the conditions for such an attack to be feasible (isolation of polynomial coefficient multiplication in power traces) and create a power trace profiling paradigm for the Dilithium signature scheme executed in embedded systems, to show that these conditions can be met in practice. Expanding the methodology of recent works that mainly use simulations for power trace collection, power trace capturing and profiling analysis of the signature generation process was successfully done on a noisy, commercial off-the-shelf ARM Cortex-M4 embedded system.
- Published
- 2020
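The correlation attack workflow sketched in the abstract above (build a leakage model per key guess, correlate it against measured traces, keep the guess with the highest correlation) can be demonstrated end-to-end on simulated traces. Below, traces leak the Hamming weight of one coefficient product mod Dilithium's q = 8380417 plus Gaussian noise; the secret value, search window and noise level are arbitrary demo choices, not values from the paper.

```python
import numpy as np

def hw(x):
    """Hamming weight of a non-negative integer."""
    return bin(int(x)).count("1")

def cpa_recover_coeff(q=8380417, secret=1234, n_traces=300, noise=2.0, seed=0):
    """Simulated CPA: rank candidate secrets by Pearson correlation between a
    Hamming-weight leakage model and noisy simulated power traces."""
    rng = np.random.default_rng(seed)
    cs = rng.integers(0, q, n_traces)                       # known multiplicands
    traces = np.array([hw(secret * c % q) for c in cs]) \
             + rng.normal(0, noise, n_traces)               # leakage + noise
    candidates = range(1200, 1300)                          # small demo window
    def corr(guess):
        model = np.array([hw(guess * c % q) for c in cs])
        return abs(np.corrcoef(model, traces)[0, 1])
    return max(candidates, key=corr)
```

Only the correct guess produces a model correlated with the traces; wrong guesses correlate near zero, so a few hundred traces suffice even at this noise level.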
19. Parallel Software Testing Sequence Generation Method Target at Full Covering Tested Behaviors
- Author
-
Wenjie Zhong, Tao Sun, Xin Guo, Xiaoyun Wan, and Ting Zhang
- Subjects
Sequence, Computer engineering, Computer science, Intersection (set theory), State space, System testing, State (computer science), Software system, Net (mathematics), System model - Abstract
Parallel software system testing is very difficult because parallel behaviors cause the number of states to expand sharply. In this paper, a testing sequence generation method is proposed, targeting full coverage of tested behaviors and related behaviors over all execution paths. Because the state spaces of parallel software systems are often large, this paper focuses on the sub-graph of the system model’s state space that contains all firings of tested and related behaviors. First, the software system is modeled with a Colored Petri Net (CPN), called the system model (SM), and every tested or related behavior is also modeled with CPN, called a Behavior Model Unit (BMU). Then, the method uses mapping and intersection operations, among others, to generate the test sequences. Practice shows that this method is efficient and achieves full coverage.
- Published
- 2020
20. QR Code Watermarking for Digital Images
- Author
-
Willy Susilo, Joonsang Baek, Jongkil Kim, and Yang-Wai Chow
- Subjects
Digital rights management, Computer science, Digital media, Digital image, Computer engineering, Robustness (computer science), Information hiding, Code (cryptography), Error detection and correction, Digital watermarking - Abstract
With the growing use of online digital media, it is becoming increasingly challenging to protect copyright and intellectual property. Data hiding techniques like digital watermarking can be used to embed data within a signal for purposes such as digital rights management. This paper investigates a watermarking technique for digital images using QR codes. The advantage of using QR codes for watermarking is that the QR code structure provides error correction and high data capacity. This paper proposes a QR code watermarking technique and examines its robustness and security against common digital image attacks.
- Published
- 2020
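The paper's specific embedding technique is not given in the abstract above. As background, the simplest image-watermarking mechanism, least-significant-bit embedding of a binary (QR-like) matrix, looks like the NumPy sketch below; in a real system, the QR code's built-in error correction is what lets a decoder tolerate bits flipped by image attacks.

```python
import numpy as np

def embed_lsb(image, bits):
    """Embed a flat 0/1 array into the least-significant bits of image pixels."""
    flat = image.flatten().astype(np.uint8)          # flatten() copies; cover untouched
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Read back the first n_bits least-significant bits."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)            # stand-in cover image
qr_bits = rng.integers(0, 2, (25, 25)).flatten().astype(np.uint8) # stand-in QR matrix
marked = embed_lsb(cover, qr_bits)
recovered = extract_lsb(marked, qr_bits.size)
```

Each pixel changes by at most one grey level, so the watermark is imperceptible, though plain LSB embedding is fragile; robust schemes like the paper's embed in transform-domain coefficients instead.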
21. Design and Implementation of AES on FPGA for Security of IOT Data
- Author
-
Shelly R. Wankhede, Dinesh B. Bhoyar, and Swati K. Modod
- Subjects
Computer science, Advanced Encryption Standard, Hash function, Cryptography, Encryption, Public-key cryptography, Symmetric-key algorithm, Computer engineering, Ciphertext, Cryptosystem, Block cipher - Abstract
The world is surrounded by and interconnected through the Internet, together with the edge devices that connect us to the network and let us communicate with our surroundings. In this paper we discuss different cryptographic algorithms and their disadvantages, and how the Advanced Encryption Standard (AES) is applied, since it has had a great effect on cryptography and on the processes carried out during encryption and decryption. Encryption and decryption operate on 128-bit blocks, which may be given in hexadecimal, binary or any other format, with key expansion used to produce the ciphertext. In cryptography there are three classes of key handling: symmetric, asymmetric and hash functions; in this paper the same symmetric key is used for both encryption and decryption. With a 128-bit key length, AES performs 10 rounds: the initial round adds the round key to the state, and subsequent rounds apply the cipher’s transformations, with the same expanded symmetric key applied to both encryption and decryption. As it is a lossless operation and a symmetric-key block cipher, many applications can be built on this algorithm. We limit our focus to 128-bit AES encryption and decryption, coded in Verilog/VHDL. The proposed paper describes a private-key cryptosystem with a fixed key size.
- Published
- 2020
22. Pilot Assignment vs Soft Pilot Reuse to Surpass the Pilot Contamination Problem: A Comparative Study in the Uplink Phase
- Author
-
Abdelfettah Belhabib, Moha M'Rabet Hassani, and Mohamed Boulouird
- Subjects
Constraint (information theory), Set (abstract data type), Base station, Computer engineering, Computer science, MIMO, Telecommunications link, Quality (business), Graph coloring, Reuse - Abstract
Massive MIMO (M-MIMO) technology offers substantial improvements in the performance and quality of communications between users and Base Stations (BSs). However, this technology is still limited by a harmful constraint known as the Pilot Contamination (PC) problem. To beat this problem, which appears strongly in multi-cell M-MIMO systems, two approaches have been adopted as a medicine for it: the first is based on assigning extra pilot sequences to different users, while the second reuses the same set of pilots in different cells. This paper briefly reviews the PC problem in M-MIMO systems. Thereafter, it analyzes and compares two decontaminating strategies built on the above approaches: Soft Pilot Reuse (SPR) and Pilot Assignment based on Weighted Graph Coloring (WGC-PA). The analysis presented in this paper is focused on the uplink phase (i.e., the reverse link).
- Published
- 2020
23. A Hardware in the Loop Benchmark Suite to Evaluate NIST LWC Ciphers on Microcontrollers
- Author
-
Jürgen Mottok, Enrico Pozzobon, and Sebastian Renner
- Subjects
Standardization ,Computer science ,business.industry ,Hardware-in-the-loop simulation ,02 engineering and technology ,Benchmarking ,020202 computer hardware & architecture ,Test case ,Software ,Computer engineering ,Design rationale ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,NIST ,020201 artificial intelligence & image processing ,business - Abstract
The National Institute of Standards and Technology (NIST) started the standardization process for lightweight cryptography algorithms in 2018. By the end of the first round, 32 submissions had been selected as 2nd round candidates. NIST allowed designers of 2nd round submissions to provide small updates on both their specifications and implementation packages. In this work, we introduce a benchmarking framework for evaluating the performance of NIST Lightweight Cryptography (LWC) candidates on embedded platforms. We show the features and application of the framework and explain its design rationale. Moreover, we describe how we aim to present up-to-date performance figures throughout the NIST LWC competition. In this paper, we present an excerpt of our software benchmarking results regarding the speed and memory requirements of selected ciphers. All up-to-date results, including benchmarks of different test cases for multiple variants of each 2nd round algorithm on five different microcontrollers, are periodically published to a public website. While initially only the reference implementations were available, the ability to automatically test the performance of the candidate algorithms on multiple platforms becomes especially relevant as more optimized implementations are developed. Finally, we show how the framework can be extended in different directions: support for more target platforms can easily be added, different kinds of algorithms can be tested, and other test metrics can be acquired. The focus of this paper lies on the framework design and testing methodology rather than on the current results, especially for reference code.
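A minimal sketch of the kind of software speed measurement such a framework automates might look as follows; `toy_cipher` and the timing parameters are hypothetical placeholders, not part of the NIST LWC framework itself.

```python
import timeit

def benchmark(cipher_fn, message: bytes, repeats: int = 5) -> float:
    """Return the best per-call wall-clock time (seconds) over several runs,
    averaging 100 calls per run to smooth out timer noise."""
    timer = timeit.Timer(lambda: cipher_fn(message))
    return min(timer.repeat(repeat=repeats, number=100)) / 100

def toy_cipher(msg: bytes) -> bytes:
    """Hypothetical stand-in for a candidate cipher implementation."""
    return bytes((b + 1) % 256 for b in msg)

t = benchmark(toy_cipher, b"\x00" * 64)
print(f"toy_cipher: {t * 1e6:.2f} us per call")
```

A real harness would additionally record memory usage and run the same test cases on each target microcontroller.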
- Published
- 2020
24. Improving and Optimizing Verification and Testing Techniques for Distributed Information Systems
- Author
-
Moez Krichen
- Subjects
Model-based testing ,Correctness ,Computer science ,Process (engineering) ,020206 networking & telecommunications ,020207 software engineering ,02 engineering and technology ,Set (abstract data type) ,System under test ,Computer engineering ,0202 electrical engineering, electronic engineering, information engineering ,Information system ,Formal verification ,Component placement - Abstract
In this paper, we deal with two validation techniques which may be adopted for improving the quality and ensuring the correctness of Distributed Information Systems: Formal Verification and Model-Based Testing. The first consists in checking the correctness of a mathematical model used to describe the behavior of the considered system before its implementation. The second consists in deriving test suites from the adopted model, executing them, and finally deducing verdicts about the correctness of the system under test. In both cases, we need to tackle the state-explosion challenge, which corresponds to reaching a very large space of states and consuming a very long time during the validation process. To solve this problem we propose a set of appropriate techniques taken from the literature. We also identify a set of techniques which may be used for optimizing the test component placement procedure.
- Published
- 2020
25. Concrete Crack Pixel Classification Using an Encoder Decoder Based Deep Learning Architecture
- Author
-
Umme Hafsa Billah, Alireza Tavakkoli, and Hung Manh La
- Subjects
0209 industrial biotechnology ,Pixel ,business.industry ,Computer science ,Deep learning ,Robust statistics ,02 engineering and technology ,01 natural sciences ,Sample (graphics) ,Bridge (nautical) ,Image (mathematics) ,010309 optics ,020901 industrial engineering & automation ,Computer engineering ,0103 physical sciences ,Enhanced Data Rates for GSM Evolution ,Artificial intelligence ,Underwater ,business - Abstract
Civil infrastructure inspection in hazardous areas such as underwater beams, bridge decks, etc., is a perilous task. In addition, other factors like labor intensity, time, etc. influence the inspection of infrastructure. Recent studies [11] indicate that autonomous inspection of civil infrastructure can eradicate most of the problems stemming from manual inspection. In this paper, we address the problem of detecting cracks in concrete surfaces. Most recent crack detection techniques use deep architectures. However, efficiently finding the exact location of cracks remains a difficult problem. Therefore, a deep architecture is proposed in this paper to identify the exact location of cracks. Our architecture labels each pixel as crack or non-crack, which eliminates the need for the post-processing techniques used in the current literature [5, 11]. Moreover, acquiring enough data for learning is another challenge in concrete defect detection. According to previous studies, only 10% of an image contains edge pixels (in our case, defective areas) [31]. We propose a robust data augmentation technique to alleviate the need for collecting more crack image samples. The experimental results show that, with our method, significant accuracy can be obtained with far fewer data samples. Our proposed method also outperforms existing methods of concrete crack classification.
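The augmentation idea (multiplying scarce crack samples without collecting new images) can be sketched with simple geometric transforms; this is an illustrative stand-in, as the paper's own augmentation technique may differ:

```python
def augment(image):
    """Generate flipped/rotated variants of a 2D pixel grid (list of lists).

    A simple augmentation sketch: horizontal flip, vertical flip, and a
    90-degree clockwise rotation turn one labeled crack sample into four
    training samples. (The paper's actual scheme may differ.)
    """
    h_flip = [row[::-1] for row in image]            # mirror left-right
    v_flip = image[::-1]                             # mirror top-bottom
    rot90 = [list(row) for row in zip(*image[::-1])] # rotate 90 degrees
    return [image, h_flip, v_flip, rot90]

sample = [[0, 1],
          [0, 0]]        # tiny binary crack/non-crack mask
variants = augment(sample)
print(len(variants))     # 4 training samples from one labeled image
```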
- Published
- 2019
26. A Mixed Method of Parallel Software Auto-Tuning Using Statistical Modeling and Machine Learning
- Author
-
P.A. Ivanenko, O.S. Novak, Anatoliy Doroshenko, and Olena Yatsenko
- Subjects
Hardware architecture ,Sorting algorithm ,Source code ,Artificial neural network ,Computer science ,Computation ,media_common.quotation_subject ,Statistical model ,02 engineering and technology ,Reuse ,01 natural sciences ,Parallel software ,Computer engineering ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,010306 general physics ,media_common - Abstract
A mixed method combining formal and auto-tuning approaches, aimed at maximizing the efficiency of parallel programs (in terms of execution time), is proposed. The formal approach is based on algorithmic algebra and on tools for the automated design and synthesis of programs from high-level algorithm specifications (schemes). Parallel software auto-tuning is the method of adjusting some structural parameters of a program to a target hardware platform to speed up computation as much as possible. Previously, we developed a framework intended to automate the generation of an auto-tuner from a program's source code. However, auto-tuning for complex and nontrivial parallel systems is usually time-consuming due to the empirical evaluation of a huge number of parameter-value combinations of the initial parallel program in a target environment. In this paper, we extend our approach with statistical modeling and neural network algorithms that significantly reduce the space of possible parameter combinations. The improvement consists in automatically training a neural network model on the results of “traditional” tuning cycles and subsequently replacing some auto-tuner calls with an evaluation from the statistical model. The method allows, in particular, transferring knowledge about the influence of parameters on program performance between “similar” (in terms of hardware architecture) computing environments for the same applications. The idea is to reuse a model trained on data from a similar environment. The use of the method is illustrated by the example of tuning a parallel sorting program which combines several sorting methods.
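The core idea (replacing expensive auto-tuner runs with predictions from a model trained on earlier tuning cycles) can be sketched as follows; the cost function and the nearest-neighbor predictor are hypothetical stand-ins for the paper's statistical and neural-network models:

```python
import random

def measure(config):
    """Stand-in for an expensive empirical tuning run (hypothetical cost
    model: best at 8 threads and block size 64, lower is better)."""
    threads, block = config
    return abs(threads - 8) + abs(block - 64) / 16

def predict(model, config):
    """Nearest-neighbor stand-in for the trained statistical model:
    return the measured cost of the most similar sampled configuration."""
    nearest = min(model, key=lambda c: abs(c[0] - config[0]) + abs(c[1] - config[1]))
    return model[nearest]

random.seed(0)
space = [(t, b) for t in (1, 2, 4, 8, 16) for b in (32, 64, 128)]
sampled = random.sample(space, 5)             # a few "traditional" tuning cycles
model = {c: measure(c) for c in sampled}      # training data from real runs
best = min(space, key=lambda c: predict(model, c))  # model replaces tuner calls
print(best)
```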
- Published
- 2019
27. Enhancing an Attack to DSA Schemes
- Author
-
Dimitrios Poulakis, Marios Adamoudis, and Konstantinos A. Draziotis
- Subjects
business.industry ,Computer science ,Data_MISCELLANEOUS ,Elliptic Curve Digital Signature Algorithm ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Public-key cryptography ,Digital Signature Algorithm ,Computer engineering ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,business - Abstract
In this paper, we improve the theoretical background of the attacks on DSA schemes from a previous paper, and we present some new, more practical attacks.
- Published
- 2019
28. Improving Transition Probability for Detecting Hardware Trojan Using Weighted Random Patterns
- Author
-
Kshirod Chandra Mohapatra, M. Nirmala Devi, and M. Priyatharishini
- Subjects
050101 languages & linguistics ,Number generator ,business.industry ,Computer science ,05 social sciences ,Information processing ,02 engineering and technology ,Adversary ,Python (programming language) ,Software ,Computer engineering ,Hardware Trojan ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,business ,Electronic hardware ,computer ,computer.programming_language - Abstract
Computer system security has traditionally concerned the security of software and the information it processes; the underlying hardware used for information processing has been considered trusted. Emerging attacks from Hardware Trojans (HTs) violate this root of trust. These attacks take the form of malicious modifications of electronic hardware at different stages and pose a major security concern for the electronics industry. An adversary can mount an HT on a net of the circuit that has a low transition probability. In this paper, an improvement of the transition probability using test points and weighted random patterns is proposed. Improving the transition probability can accelerate the detection of HTs. This paper implements weighted random number generator techniques to improve the transition probability. The technique is evaluated on ISCAS'85 benchmark circuits using Python and the Synopsys TetraMAX tool.
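The effect of weighted random patterns on transition probability can be illustrated with a small simulation; the 4-input AND net and the weight values below are hypothetical examples, not the paper's benchmark setup:

```python
import random

def transition_probability(weight: float, trials: int = 20000) -> float:
    """Estimate the toggle probability of y = a AND b AND c AND d under
    random input patterns where each input is 1 with probability `weight`."""
    rng = random.Random(42)
    toggles, prev = 0, 0
    for _ in range(trials):
        y = int(all(rng.random() < weight for _ in range(4)))
        toggles += (y != prev)   # count output transitions between patterns
        prev = y
    return toggles / trials

uniform = transition_probability(0.5)    # equiprobable random patterns
weighted = transition_probability(0.9)   # inputs biased toward 1
print(f"uniform: {uniform:.3f}, weighted: {weighted:.3f}")
```

Biasing the inputs toward 1 raises the rarely-activated net's toggle rate, which is exactly what makes a Trojan attached to that net easier to excite.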
- Published
- 2019
29. Fault Analysis in Analog Circuits Through Language Manipulation and Abstraction
- Author
-
Enrico Fraccaroli, Franco Fummi, Mark Zwolinski, Francesco Stefanni, Große, D., Vinco, S., and Patel, H.
- Subjects
010302 applied physics ,Functional safety ,Analogue electronics ,Computer science ,Control engineering ,02 engineering and technology ,Fault injection ,Fault (power engineering) ,01 natural sciences ,Field (computer science) ,020202 computer hardware & architecture ,Computer engineering ,visual_art ,0103 physical sciences ,Electronic component ,0202 electrical engineering, electronic engineering, information engineering ,visual_art.visual_art_medium ,Instrumentation (computer programming) ,Abstraction (linguistics) - Abstract
Each year automotive systems become smarter thanks to their enhancement with sensing, actuation and computation features. Recent advancements in the field of autonomous driving have further increased the complexity of the electronic components used to provide such services. ISO 26262 represents the natural response to the growing concerns about the functional safety of electrical safety-related systems in this area. However, while the functional safety analysis of digital devices is a fairly mature methodology, the same analysis for analog components is still in its infancy. This paper explores the problem of fault analysis in analog circuits and how it can be integrated into design processes with minimum effort. The methodology is based on analog language manipulation, analog fault instrumentation and automatic abstraction. An efficient and comprehensive flow for performing such an activity is proposed and applied to complex case studies.
- Published
- 2018
30. An Efficient Technique of Detecting Program Plagiarism Through Program Slicing
- Author
-
Hwanchul Jung, Junhyun Park, Jongseok Lee, and Jangwu Jo
- Subjects
Identifier ,Statement (computer science) ,Computer engineering ,Computer science ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Program slicing ,020201 artificial intelligence & image processing ,Plagiarism detection ,02 engineering and technology ,State (computer science) - Abstract
In this paper, we examine a state-of-the-art plagiarism detection technique called GPLAG. This technique can detect five popular plagiarism tactics: format alteration, identifier renaming, statement reordering, control replacement, and code insertion. Its drawback is that it becomes inefficient as the program dependence graph (PDG) grows. To resolve this time-complexity problem, this paper proposes performing program slicing first and PDG comparison later. The original program and the plagiarized program are sliced respectively; then the PDG of the original program's slice is compared with the PDG of the plagiarized program's slice. Since program slicing reduces the size of the PDG, the time to compare PDGs can be reduced. We choose the most-used variables as the slicing criterion, so accuracy is maintained even after program slicing. Experiments show that efficiency is enhanced while accuracy is also maintained.
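The slicing step can be sketched over a toy dependence graph; the statement names and dependencies below are illustrative, not GPLAG's actual PDG representation:

```python
def backward_slice(deps, criterion):
    """Backward slice: all statements the criterion (transitively) depends on.

    `deps` maps each statement to the statements it depends on; this is a
    toy stand-in for a program dependence graph.
    """
    stack, seen = [criterion], {criterion}
    while stack:
        for d in deps.get(stack.pop(), []):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

# s1: x = input(); s2: y = input(); s3: z = x + 1; s4: print(z); s5: print(y)
deps = {"s3": ["s1"], "s4": ["s3"], "s5": ["s2"]}
slice_z = backward_slice(deps, "s4")   # slice on the chain using variable z
print(sorted(slice_z))                 # ['s1', 's3', 's4']: a smaller PDG to compare
```

Comparing the sliced PDGs (3 statements here instead of 5) is what reduces the pairwise comparison cost.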
- Published
- 2018
31. A Transparent View on Approximate Computing Methods for Tuning Applications
- Author
-
Michael Bromberger and Wolfgang Karl
- Subjects
010302 applied physics ,Approximate computing ,Computer science ,media_common.quotation_subject ,020207 software engineering ,02 engineering and technology ,01 natural sciences ,Set (abstract data type) ,Computer engineering ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Quality (business) ,State (computer science) ,Representation (mathematics) ,media_common ,Abstraction (linguistics) - Abstract
Approximation-tolerant applications allow a system designer to improve traditional design values by slightly decreasing the quality of results. Approximate computing methods introduced at various system layers are the right tools to exploit this potential. However, finding a suitable tuning for a set of methods, during design or run time, according to the constraints and the system state, is tough. Therefore, this paper presents an approach that leads to a transparent view of different approximation methods. This transparent and abstract view can be exploited by tuning approaches to find suitable parameter settings for the current purpose. Furthermore, the presented approach takes into account multiple objectives as well as conventional methods that influence traditional design values. Besides this novel representation approach, the paper introduces a first tuning approach that exploits it.
- Published
- 2018
32. Efficient Load Balancing Techniques for Graph Traversal Applications on GPUs
- Author
-
Nicola Bombieri and Federico Busato
- Subjects
020203 distributed computing ,Computer engineering ,Graph traversal ,Computer science ,Graph traversal ,Graph traversal, Load Balancing, GPU ,GPU ,0202 electrical engineering, electronic engineering, information engineering ,020207 software engineering ,02 engineering and technology ,Load balancing (computing) ,Graph ,Load Balancing
Efficiently implementing a load balancing technique in graph traversal applications for GPUs is a critical task. Load balancing is a key feature of GPU applications, as it can significantly impact overall application performance. Different strategies have been proposed to deal with this issue. Nevertheless, the efficiency of each strongly depends on the graph characteristics, and no single strategy is the best solution for every graph. This paper presents three different balancing techniques and how they have been implemented to fully exploit the GPU architecture. It also proposes a set of support strategies that can be modularly applied to the main balancing techniques to better address the graph characteristics. The paper presents an analysis and comparison of the three techniques and support strategies against the best state-of-the-art solutions over a large dataset of representative graphs. The analysis allows statically identifying, given the graph characteristics, the best combination of supports for each of the proposed techniques, and shows that such a solution is more efficient than the state-of-the-art techniques.
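One common balancing idea for skewed-degree frontiers (flattening per-vertex work with a prefix sum and locating owners by binary search) can be sketched in Python; whether this corresponds to any of the paper's three techniques is an assumption:

```python
import bisect
from itertools import accumulate

def assign_work(degrees, num_threads):
    """Map each unit of traversal work to its owning vertex via a prefix sum
    plus binary search, then split the flat work array evenly so threads
    receive equal shares regardless of degree skew."""
    offsets = list(accumulate(degrees))          # cumulative edge counts
    total = offsets[-1]
    # owner[w] = index of the vertex that owns work unit w
    owner = [bisect.bisect_right(offsets, w) for w in range(total)]
    per = -(-total // num_threads)               # ceil division
    return [owner[i:i + per] for i in range(0, total, per)]

# a skewed frontier: one hub vertex with degree 6, three leaves with degree 1
chunks = assign_work([6, 1, 1, 1], num_threads=3)
print(chunks)  # [[0, 0, 0], [0, 0, 0], [1, 2, 3]]: 3 work units per thread
```

Without the flattening step, a naive vertex-per-thread mapping would give one thread 6 edges and the others 1 each.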
- Published
- 2018
33. Implementation of Near-Fault Forward Directivity Effects in Seismic Design Codes
- Author
-
Sinan Akkar and Saed Moghimi
- Subjects
Set (abstract data type) ,Computer engineering ,Computer science ,Probabilistic logic ,Probabilistic seismic hazard analysis ,Waveform ,Directivity ,Near fault ,Seismic analysis - Abstract
Near-fault ground motions exhibiting forward directivity effects are critical for seismic design because they impose very large seismic demands on buildings due to their large-amplitude pulselike waveforms. The current challenge in seismic design codes is to recommend simple (easy-to-apply) yet proper rules to explain the near-fault forward directivity (NFFD) phenomenon for seismic demands. This effort is not new and has been the subject of research for over two decades. This paper contributes to these efforts and proposes an alternative set of rules to modify the elastic design spectrum of 475-year and 2475-year return periods for NFFD effects. The directivity rules discussed here are evolved from a relatively large number of probabilistic earthquake scenarios (probabilistic seismic hazard assessment, PSHA) that employ two recent directivity models. The paper first gives the background of the probabilistic earthquake scenarios and then introduces the proposed NFFD rules for seismic design codes. We conclude the paper by presenting some cases with the proposed rules to see how spectral amplitudes modify due to directivity.
- Published
- 2018
34. Combining Tools for Optimization and Analysis of Floating-Point Computations
- Author
-
Eva Darulova, Heiko Becker, Zachary Tatlock, and Pavel Panchekha
- Subjects
Lead (geology) ,Floating point ,Computer engineering ,Computer science ,Computation ,0202 electrical engineering, electronic engineering, information engineering ,020207 software engineering ,010103 numerical & computational mathematics ,02 engineering and technology ,0101 mathematics ,Round-off error ,01 natural sciences - Abstract
Recent renewed interest in optimizing and analyzing floating-point programs has led to a diverse array of new tools for numerical programs. These tools are often complementary, each focusing on a distinct aspect of numerical programming. Building reliable floating-point applications typically requires addressing several of these aspects, which makes easy composition essential. This paper describes the composition of two recent floating-point tools: Herbie, which performs accuracy optimization, and Daisy, which performs accuracy verification. We find that the combination provides numerous benefits to users, such as being able to use Daisy to check whether Herbie’s unsound optimizations improved the worst-case roundoff error, as well as benefits to tool authors, including uncovering a number of bugs in both tools. The combination also allowed us to compare the different program rewriting techniques implemented by these tools for the first time. The paper lays out a road map for combining other floating-point tools and for surmounting common challenges.
- Published
- 2018
35. A Clustering Algorithm for the DAP Placement Problem in Smart Grid
- Author
-
Yulong Ying, Yanxiao Zhao, Jun Huang, Guodong Wang, and Robb M. Winter
- Subjects
Smart grid ,Computer engineering ,Smart meter ,Computer science ,010102 general mathematics ,Network partition ,Distance minimization ,02 engineering and technology ,0101 mathematics ,021001 nanoscience & nanotechnology ,0210 nano-technology ,Cluster analysis ,01 natural sciences - Abstract
In this paper, we investigate the DAP placement problem and propose solutions to reduce the distance between data aggregation points (DAPs) and smart meters. The DAP placement problem is formulated with two objectives: average distance minimization and maximum distance minimization. The concept of network partition is introduced, and practical algorithms are developed to address the DAP placement problem. Extensive simulations are conducted based on a real suburban neighborhood topology. The simulation results verify that the proposed solutions remarkably reduce the communication distance between DAPs and their associated smart meters.
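One way to approach the average-distance objective is a Lloyd-style (k-means) partition; this baseline sketch is only illustrative and is not the paper's network-partition algorithm:

```python
import math

def place_daps(meters, k, iters=10):
    """K-means-style sketch for DAP placement: iteratively assign smart
    meters to their nearest DAP, then move each DAP to its cluster's
    centroid to shrink the average meter-to-DAP distance. (The paper's
    algorithms also target the maximum distance.)"""
    daps = list(meters[:k])                        # simple deterministic seeding
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for m in meters:
            nearest = min(range(k), key=lambda j: math.dist(m, daps[j]))
            clusters[nearest].append(m)
        daps = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else daps[i]
                for i, cl in enumerate(clusters)]
    return daps

# two neighborhoods of smart meters; DAPs converge to their centroids
meters = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
print(place_daps(meters, k=2))
```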
- Published
- 2018
36. A Preview on MIMO Systems in 5G New Radio
- Author
-
Razvan-Florentin Trifan, Andrei Alexandru Enescu, and Constantin Paleologu
- Subjects
Transmission (telecommunications) ,Standardization ,Computer engineering ,Computer science ,Channel state information ,MIMO ,Context (language use) ,Precoding ,5G ,Term (time) - Abstract
Proposals for 3GPP 5G New Radio (NR) are already being discussed. Although participants offer various suggestions for the first 5G standardization, common ideas can already be identified. The purpose of this paper is to anticipate, in the context of massive multiple-input multiple-output (MIMO) systems, the main directions the standard will focus on; for example, a unified transmission scheme, multi-level channel state information (CSI), and non-linear precoding (NLP). The latter, an alternative to the linear precoding employed in Long Term Evolution (LTE), is analyzed in detail in the paper.
- Published
- 2018
37. Fast Homomorphic Encryption Based on CPU-4GPUs Hybrid System in Cloud
- Author
-
Jianping Xu, Xinfa Dai, Jing Xia, and Zhong Ma
- Subjects
021110 strategic, defence & security studies ,Cloud computing security ,business.industry ,Computer science ,Pipeline (computing) ,0211 other engineering and technologies ,Data security ,Homomorphic encryption ,Cloud computing ,0102 computer and information sciences ,02 engineering and technology ,Encryption ,01 natural sciences ,Parallel processing (DSP implementation) ,Computer engineering ,010201 computation theory & mathematics ,Hybrid system ,business - Abstract
Security is an ever-present consideration for applications and data in the cloud computing environment. As an important method of performing computations directly on encrypted data, without any need for decryption or compromising privacy, homomorphic encryption is an increasingly popular topic in research on protecting the privacy of data in cloud security. However, owing to the high computational complexity of homomorphic encryption, it imposes a heavy workload on computing resources in the cloud computing paradigm. Motivated by this observation, this paper proposes a fast parallel scheme for the DGHV algorithm based on a CPU-4GPUs hybrid system. Our main contribution is a parallel stream-processing scheme for large-scale data encryption on a CPU-4GPUs hybrid system that is as fast as possible. In particular, the proposed method applies a CPU-4GPUs parallel implementation to accelerate the encryption operation of the DGHV algorithm, reducing the time duration, and provides a comparative performance study. We also design a pipeline architecture for stream processing in the CPU-4GPUs hybrid system to further accelerate DGHV encryption. The experimental results show that our method gains more than 91% improvement in run time over the serial addition operation and 70% improvement over the serial multiplication operation with the DGHV algorithm, respectively.
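The DGHV scheme the paper parallelizes encrypts a bit m over the integers as c = q·p + 2r + m, with p the secret key and r small noise; a serial toy version (tiny, insecure parameters chosen for illustration) shows why homomorphic addition and multiplication work:

```python
import random

rng = random.Random(7)
p = (1 << 33) + 2 * rng.getrandbits(32) + 1   # secret odd key, > 2^33 (toy size)

def encrypt(m: int) -> int:
    """DGHV-style toy encryption of a bit: c = q*p + 2*r + m."""
    q = rng.getrandbits(64)
    r = rng.randrange(1 << 8)      # noise 2r+m must stay well below p
    return q * p + 2 * r + m

def decrypt(c: int) -> int:
    """Recover the bit: (c mod p) mod 2, valid while noise < p."""
    return (c % p) % 2

a, b = encrypt(1), encrypt(0)
assert decrypt(a + b) == 1 ^ 0     # ciphertext addition = XOR of plaintext bits
assert decrypt(a * b) == 1 & 0     # ciphertext multiplication = AND of bits
print(decrypt(a), decrypt(b))
```

Each ciphertext operation is independent integer arithmetic, which is what makes the workload amenable to the bulk GPU parallelization the paper describes.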
- Published
- 2018
38. Practical Attacks Against the Walnut Digital Signature Scheme
- Author
-
Ward Beullens and Simon R. Blackburn
- Subjects
Scheme (programming language) ,Structure (mathematical logic) ,business.industry ,Computer science ,010102 general mathematics ,Cryptography ,02 engineering and technology ,01 natural sciences ,Signature (logic) ,law.invention ,Digital signature ,Computer engineering ,law ,0202 electrical engineering, electronic engineering, information engineering ,NIST ,Cryptosystem ,020201 artificial intelligence & image processing ,0101 mathematics ,business ,Cryptanalysis ,computer ,computer.programming_language - Abstract
Recently, NIST started the process of standardizing quantum-resistant public-key cryptographic algorithms. WalnutDSA, the subject of this paper, is one of the 20 proposed signature schemes that are being considered for standardization. Walnut relies on a one-way function called E-Multiplication, which has a rich algebraic structure. This paper shows that this structure can be exploited to launch several practical attacks against the Walnut cryptosystem. The attacks work very well in practice; it is possible to forge signatures and compute equivalent secret keys for the 128-bit and 256-bit security parameters submitted to NIST in less than a second and in less than a minute respectively.
- Published
- 2018
39. Design and Implementation of a Vision System on an Innovative Single Point Micro-machining Device for Tool Tip Localization
- Author
-
Marcelo Fajardo-Pruna, Hilde Pérez, Luis López-Estrada, Antonio Vizán, and Lidia Sánchez-González
- Subjects
Range (mathematics) ,Stereopsis ,Computer engineering ,Machining ,Degree (graph theory) ,Computer science ,Machine vision ,Numerical control ,Single point ,Simulation - Abstract
This paper proposes an innovative single point cutting device that requires less maintenance than traditional micro-milling machines, since the required cutting tools are simpler and easier to develop. This satisfies market demands in micro-manufacturing, where devices must be more accurate and cheaper, and able to create a diverse range of shapes and geometries with a high degree of accuracy. A stereo vision system has been implemented, as an alternative to current technologies on the market, to locate the tool tip and thereby make the proper corrections on the CNC machine. In this paper the development of such a system is explored and discussed. Experimental results show the accuracy of the proposed system, with a measurement error of \({\pm }3\,{\upmu }\mathrm{m}\).
- Published
- 2017
40. Performance Analysis of a Proposed Architecture for Remote Construction Machines Diagnostics
- Author
-
Ousmane Sadio, Claude Lishou, and Ibrahima Ngom
- Subjects
Engineering ,computer.internet_protocol ,business.industry ,CPU time ,Cryptographic protocol ,Identifier ,Computer engineering ,Remote diagnostics ,Host Identity Protocol ,business ,computer ,Mobility management ,Protocol (object-oriented programming) ,Host (network) ,Computer network - Abstract
Vehicle manufacturers are showing an increased interest in remote diagnostics, which improves customer relationship management and has commercial value. The special nature of construction machines means that architectures and applications proposed for vehicular communication are not optimized for these types of vehicles. This paper proposes a new remote diagnostic architecture, based on IEEE 802.11n, for construction machines communicating with the on-site infrastructure. The mobile Internet is used to transmit the collected data to the central monitoring application. We discuss the possible network selection algorithms, mobility mechanisms and security protocols that can be implemented on the On-Board Units (OBUs). The proposed architecture is based on the Host Identity Protocol (HIP). This protocol introduces a Host Identifier (HI) for naming endpoints and uses intensive cryptographic operations for security and mobility management. Due to their minimal computing capabilities, many single-board computers used as OBUs struggle with applications that require intensive CPU usage. Thus, a performance analysis of HIP implemented on single-board computers is presented at the end of this paper.
- Published
- 2017
41. Improved Encryption Padding for ECC System with Provable Security
- Author
-
Li Zichen, Zhang Fengjuan, Zhang Yaze, and Yang Yatao
- Subjects
Provable security ,Computer science ,business.industry ,Padding oracle attack ,Data_CODINGANDINFORMATIONTHEORY ,Computer security ,computer.software_genre ,Encryption ,Padding ,Random oracle ,Computer Science::Hardware Architecture ,Computer engineering ,Probabilistic encryption ,Computer Science::Multimedia ,Ciphertext ,Hardware_ARITHMETICANDLOGICSTRUCTURES ,business ,computer ,Optimal asymmetric encryption padding ,Computer Science::Cryptography and Security - Abstract
In order to solve the security problem of the ECC cryptosystem, the security deficiency of elliptic curve encryption is described first in this paper. Then, the method of OAEP (Optimal Asymmetric Encryption Padding) in the random oracle model is adopted to enhance the security of the existing ECC encryption system. An improved encryption padding scheme for the ECC cryptosystem, namely EOAEP (ECC OAEP), is proposed and designed. Under the one-wayness assumption of the encryption function, we prove that our scheme satisfies adaptive chosen-ciphertext security by using the game-hopping technique in the random oracle model.
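An OAEP-style two-branch padding round-trip can be sketched with hash-based masks; this simplified version (using SHA-256 for both mask generators and omitting EOAEP's ECC specifics) is only an illustration of the padding idea, not the paper's scheme:

```python
import hashlib
import os

def mask(seed: bytes, n: int) -> bytes:
    """Simplified MGF1-style mask generator, modeled as a random oracle."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def oaep_pad(m: bytes, r: bytes) -> bytes:
    X = xor(m, mask(r, len(m)))    # data branch masked by G(r)
    Y = xor(r, mask(X, len(r)))    # seed branch masked by H(X) (here H = G)
    return X + Y

def oaep_unpad(padded: bytes, mlen: int) -> bytes:
    X, Y = padded[:mlen], padded[mlen:]
    r = xor(Y, mask(X, len(Y)))    # recover the random seed
    return xor(X, mask(r, len(X))) # unmask the message

msg = b"elliptic curve!!"
r = os.urandom(16)
assert oaep_unpad(oaep_pad(msg, r), len(msg)) == msg   # padding round-trips
```

The padded value is what would then be fed into the one-way (here, ECC) encryption function.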
- Published
- 2017
42. A New Approach for Automatic Development of Reconfigurable Real-Time Systems
- Author
-
Nicolas Treves, Wafa Lakhdhar, Mohamed Khalgui, and Rania Mzid
- Subjects
Source code ,Computer engineering ,POSIX ,Computer science ,media_common.quotation_subject ,Redundancy (engineering) ,Control reconfiguration ,Uniprocessor system ,Integer programming ,Real-time operating system ,Implementation ,media_common - Abstract
In industry, reconfigurable real-time systems are specified as a set of implementations and tasks with timing constraints. Reconfiguration allows moving from one implementation to another by adding/removing real-time tasks. Implementing those systems as threads generates complex system code due to the large number of threads and the redundancy between the implementation sets. This paper presents an approach for software synthesis in reconfigurable uniprocessor real-time embedded systems. Going from the specification to program source code, this approach aims at minimizing the number of threads and the redundancy between the implementation sets while preserving system feasibility. The proposed approach adopts Mixed Integer Linear Programming (MILP) techniques in the exploration phase in order to provide a feasible and optimal task model. An optimal reconfigurable POSIX-based code for the system is manually generated as an output of this technique. An application to a case study and a performance evaluation show the effectiveness of the proposed approach.
- Published
- 2017
43. A QR Code Watermarking Approach Based on the DWT-DCT Technique
- Author
-
Wei Zong, Yang-Wai Chow, Joseph Tonien, and Willy Susilo
- Subjects
Computer science ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,020206 networking & telecommunications ,Watermark ,02 engineering and technology ,Digital image ,Computer engineering ,Robustness (computer science) ,Information hiding ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Discrete cosine transform ,020201 artificial intelligence & image processing ,Error detection and correction ,Digital watermarking - Abstract
The rapid growth in Internet and communication technology has facilitated an escalation in the exchange of digital multimedia content. This has resulted in an increase in copyright infringement, which has led to greater demand for more robust copyright protection mechanisms. Digital watermarking is a means of detecting ownership and illegal use of digital products. This paper presents an approach to watermarking images by embedding QR code information in a digital image. The idea of the proposed scheme is to capitalize on the error correction mechanism inherent in the QR code structure in order to increase the robustness of the watermark. By employing the QR code's error correction mechanism, watermark information contained within a watermarked image can potentially be decoded even if the image has been altered or distorted by an adversary. This paper studies the characteristics of the proposed scheme and presents experimental results examining the robustness and security of the QR code watermarking approach.
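The robustness principle the scheme relies on can be demonstrated in miniature. QR codes actually use Reed-Solomon codes; the sketch below substitutes a much simpler repetition code with majority-vote decoding, purely to show why error-corrected watermark bits survive distortion that would destroy raw bits. All names here are illustrative, not the paper's implementation.

```python
def encode(bits, k=5):
    """Repeat each bit k times (a toy stand-in for QR's Reed-Solomon coding)."""
    return [b for bit in bits for b in [bit] * k]

def distort(coded, positions):
    """Model adversarial alteration as flipping the bits at the given positions."""
    out = list(coded)
    for i in positions:
        out[i] ^= 1
    return out

def decode(coded, k=5):
    """Majority vote over each k-block recovers the original bit as long as
    fewer than k // 2 + 1 repeats in the block were corrupted."""
    return [int(sum(coded[i:i + k]) > k // 2) for i in range(0, len(coded), k)]
```

In the watermarking setting, the encoded bits would be embedded in mid-frequency DWT-DCT coefficients; image compression or tampering then plays the role of `distort`.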
- Published
- 2017
44. Sanitizing Sensitive Data: How to Get It Right (or at Least Less Wrong…)
- Author
-
Roderick Chapman
- Subjects
Computer engineering ,Computer science ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,020207 software engineering ,02 engineering and technology ,Coding (social sciences) - Abstract
Coding standards and guidance for secure programming call for sensitive data to be “sanitized” before being de-allocated. This paper considers what this really means in technical terms, why it is actually rather difficult to achieve, and how such a requirement can be realistically implemented and verified, concentrating on the facilities offered by Ada and SPARK. The paper closes with a proposed policy and coding standard that can be applied and adapted to other projects.
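The core idea of sanitization (overwrite before release) can be sketched as follows. The paper itself works in Ada and SPARK, where a central difficulty it discusses is that a compiler may elide stores to memory that is never read again; this Python sketch sidesteps that issue and only illustrates the programming pattern. The helper name is hypothetical.

```python
def sanitize(buf: bytearray) -> None:
    """Overwrite a sensitive buffer in place before it is de-allocated.
    A mutable bytearray is used because str/bytes objects are immutable
    in Python and therefore cannot be scrubbed after use."""
    for i in range(len(buf)):
        buf[i] = 0

# keep key material in a mutable buffer so it can be scrubbed
key = bytearray(b"hunter2-session-key")
# ... use key ...
sanitize(key)
```

In C one would reach for `memset_s` or `explicit_bzero`, which are specified not to be optimized away; verifying that the scrub actually happens in the generated code is precisely the hard part the paper addresses.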
- Published
- 2017
45. Stopwatch Automata-Based Model for Efficient Schedulability Analysis of Modular Computer Systems
- Author
-
Alevtina B. Glonina and Anatoly Bahmurov
- Subjects
Model checking ,0209 industrial biotechnology ,Correctness ,business.industry ,Computer science ,020207 software engineering ,02 engineering and technology ,Modular design ,Integrated modular avionics ,Software implementation ,Automaton ,law.invention ,020901 industrial engineering & automation ,Computer engineering ,law ,0202 electrical engineering, electronic engineering, information engineering ,Equivalence (formal languages) ,business ,Stopwatch - Abstract
In this paper we propose a stopwatch automata-based model of modular computer system operation. The model provides the ability to perform schedulability analysis for a wide class of modular computer systems. It is formally proven that the model satisfies a set of correctness requirements, and that all traces generated by interpreting the model are equivalent for schedulability-analysis purposes. This trace equivalence allows any single trace to be used for analysis, which makes the proposed approach much more efficient than model checking, especially for parallel systems with many simultaneous events. A software implementation of the proposed approach is also presented in the paper.
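A trace-based schedulability check can be illustrated with a discrete-time simulation. This is a simplified stand-in for the timed-automata model, under assumed conventions: tasks are `(wcet, period)` pairs with deadline equal to period, scheduling is preemptive rate-monotonic, and a deadline miss is detected when a job is still unfinished at its next release.

```python
def simulate_rm(tasks, horizon):
    """Simulate preemptive rate-monotonic scheduling tick by tick and
    return True iff no job misses its deadline within the horizon."""
    remaining = [0] * len(tasks)   # work left in each task's current job
    for t in range(horizon):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:         # a new job of task i is released
                if remaining[i] > 0:
                    return False   # previous job missed its deadline
                remaining[i] = c
        # run the highest-priority (shortest-period) ready task for one tick
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            j = min(ready, key=lambda i: tasks[i][1])
            remaining[j] -= 1
    return True
```

Simulating one hyperperiod suffices for strictly periodic task sets; the paper's contribution is proving that, in its automata model, any one generated trace is representative, so exhaustive state-space exploration is unnecessary.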
- Published
- 2017
46. The Integrated Approach to Solving Large-Size Physical Problems on Supercomputers
- Author
-
Igor Chernykh, Alexey V. Snytnikov, Igor Kulikov, Anna Sapetina, Boris Glinskiy, and Dmitry Weins
- Subjects
Computer science ,020209 energy ,Numerical analysis ,Parallel algorithm ,Context (language use) ,02 engineering and technology ,Supercomputer architecture ,Integrated approach ,01 natural sciences ,Computer engineering ,0103 physical sciences ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,State (computer science) ,010303 astronomy & astrophysics ,Efficient energy use - Abstract
This paper presents the results obtained by the authors in applying an integrated approach to solving geoseismics, astrophysics, and plasma physics problems on high-performance computers. In the context of mathematical modeling of physical processes, the integrated approach means constructing a physico-mathematical model of a phenomenon, a numerical method, a parallel algorithm, and its software implementation that makes efficient use of the supercomputer architecture. With this approach, it becomes relevant to compare not only the methods of solving a problem but also its physical and mathematical statements, aiming at the most effective implementation on the chosen computing architecture. The scalability of the algorithms is investigated with the multi-agent system AGNES, which simulates the behavior of computing nodes based on the current characteristics of the computing equipment. Special attention is also given to the energy efficiency of the algorithms.
- Published
- 2017
47. An FPGA-Based Real-Time Moving Object Tracking Approach
- Author
-
Mingsong Chen, Wenjie Chen, Yangyang Ma, Daojing He, and Zhilei Chai
- Subjects
Hardware architecture ,Matching (graph theory) ,Video Graphics Array ,business.industry ,Computer science ,Computation ,020207 software engineering ,Tracking system ,02 engineering and technology ,Computer engineering ,Feature (computer vision) ,Video tracking ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,business ,Energy (signal processing) - Abstract
Due to the high complexity of matching computation, real-time object tracking is generally a very challenging task in practical applications. This paper proposes a new algorithm for moving object tracking that improves the traditional KLT algorithm by using motion information for feature-point selection, avoiding irrelevant feature points residing in the background area. Moreover, the paper designs the hardware architecture of the FPGA part to accelerate the computation by exploiting the inherent parallelism of the algorithm. The proposed algorithm significantly reduces the computation time. Experimental results show that our algorithm implemented on an FPGA-SoC (Zynq 7020, 667 MHz) requires only 0.030 s to handle a VGA-resolution frame, which is suitable for real-time tracking. This achieves up to a \(30{\times }\) performance improvement over a desktop PC (i3, 3.4 GHz), or \(370{\times }\) over an ARM (Cortex-A8, 1 GHz). The experiments also show that our approach consumes significantly less energy than the PC and the ARM for the same workload, which indicates that it is suitable for energy-critical systems.
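The motion-based feature selection step can be sketched with simple frame differencing: only candidate points lying in regions that changed between consecutive frames are kept for tracking. This is an illustrative reading of the abstract, not the paper's FPGA pipeline; frames are plain 2-D lists of grayscale intensities and the threshold is an assumption.

```python
def moving_mask(prev, curr, thresh=20):
    """Frame difference: mark pixels whose intensity changed noticeably
    between two consecutive grayscale frames."""
    return [[abs(a - b) > thresh for a, b in zip(row_p, row_c)]
            for row_p, row_c in zip(prev, curr)]

def select_features(candidates, mask):
    """Keep only candidate feature points (y, x) that lie in moving
    regions, discarding static background points before KLT tracking."""
    return [(y, x) for y, x in candidates if mask[y][x]]
```

On the FPGA, the per-pixel difference and threshold are exactly the kind of embarrassingly parallel operation that maps well onto hardware, which is what makes this pre-filter nearly free there.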
- Published
- 2017
48. PLC-Based Systems for Data Acquisition and Supervisory Control of Environment-Friendly Energy-Saving Technologies
- Author
-
Yuriy P. Kondratenko, Oleksiy V. Kozlov, and Oleksiy Korobko
- Subjects
Economic efficiency ,Automatic control ,Computer science ,business.industry ,Process (engineering) ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,020206 networking & telecommunications ,Control engineering ,02 engineering and technology ,Software ,Data acquisition ,SCADA ,Computer engineering ,Supervisory control ,Control system ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,business - Abstract
This paper presents the development of PLC-based systems for data acquisition and supervisory control of environment-friendly, energy-saving high-tech processes. The functional structure and main components of PLC-based SCADA systems for such technological processes are given, with examples of SCADA applications in the design of PLC-based systems for monitoring and automatic control of (a) ecopyrogenesis (EPG) and (b) thermoacoustic technological processes. The paper considers criteria for the energy and economic efficiency of the EPG technological process. The functional structures, software and hardware implementation, and multi-level human-machine interfaces of the developed PLC-based systems for data acquisition and supervisory control are given. Considerable attention is paid to the particulars of computing the technological parameters of the ecopyrogenesis and thermoacoustic processes in the proposed SCADA systems. The developed PLC-based SCADA systems provide a significant increase in the energy and economic efficiency criteria of the EPG and TAD complexes, high-precision control of both technological processes, monitoring of current technological parameters using indirect methods for parameter measurement and identification, and automatic control with high quality indicators and optimal parameters.
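The supervisory layer of such a system ultimately reduces to scanning acquired process values and flagging out-of-range conditions. The sketch below is a minimal, hypothetical illustration of that loop; the reading values, limits, and function name are invented for the example and are not taken from the paper.

```python
def supervise(samples, low, high):
    """Minimal supervisory check: scan a list of acquired process values
    and report out-of-range alarms as (index, value) pairs."""
    return [(i, v) for i, v in enumerate(samples) if not (low <= v <= high)]

# hypothetical temperature trend acquired from an EPG reactor sensor
readings = [452.0, 455.3, 461.7, 512.9, 458.2]
alarms = supervise(readings, low=440.0, high=480.0)
```

A real PLC-based SCADA system would do this cyclically against live tag values, with hysteresis and alarm acknowledgment; the point here is only the shape of the acquisition-then-supervision data flow.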
- Published
- 2016
49. Local Search Approach to Genetic Programming for RF-PAs Modeling Implemented in FPGA
- Author
-
José Cruz Núñez Pérez, Emigdio Z-Flores, Leonardo Trujillo, and J. R. Cárdenas Valdez
- Subjects
Mathematical optimization ,Syntax (programming languages) ,Computer science ,business.industry ,Heuristic (computer science) ,Amplifier ,020206 networking & telecommunications ,Genetic programming ,02 engineering and technology ,Hardware emulation ,Computer engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Local search (optimization) ,Radio frequency ,business ,Field-programmable gate array - Abstract
This paper presents a genetic programming (GP) approach enhanced with a local search heuristic (GP-LS) to emulate the conversion curves of a Doherty 7 W @ 2.11 GHz radio frequency (RF) power amplifier (PA). GP has been shown to be a powerful modeling tool but can be hampered by slow convergence and computational cost. The proposal combines the explorative search of standard GP, which builds the syntax of the solution, with numerical methods that perform an exploitative, greedy local optimization of the evolved structures. The results are compared with traditional modeling techniques, particularly the memory polynomial model (MPM). The main contribution of the paper is the design, comparison, and hardware emulation of GP-LS for real FPGA applications. The experimental results show that GP-LS can outperform the standard MPM and suggest a promising new direction for future work on digital pre-distortion (DPD) that requires complex behavioral models.
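The exploitative half of GP-LS can be sketched in isolation: suppose GP has already fixed the structure of an expression, say `a * x + b`, and only the numeric constants remain to be tuned. The coordinate-descent routine below is a simple stand-in for the paper's local search heuristic; the structure, step schedule, and data are assumptions for illustration.

```python
def mse(params, xs, ys):
    """Squared error of the fixed structure a*x + b against the data."""
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def local_search(params, xs, ys, step=0.5, iters=200):
    """Greedy coordinate descent: the exploitative phase that tunes the
    constants of an expression whose syntax GP has already evolved."""
    best = list(params)
    for _ in range(iters):
        improved = False
        for i in range(len(best)):
            for delta in (step, -step):
                trial = list(best)
                trial[i] += delta
                if mse(trial, xs, ys) < mse(best, xs, ys):
                    best = trial
                    improved = True
        if not improved:
            step /= 2  # refine the search radius once no move helps
    return best
```

In GP-LS this tuning runs inside the evolutionary loop, so the fitness of each candidate syntax reflects its best achievable constants rather than the arbitrary ones it was born with.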
- Published
- 2016
50. Experimentation System for Path Planning Applied to 3D Printing
- Author
-
Iwona Pozniak-Koszalka, Andrzej Kasprzak, Mateusz Wojcik, and Leszek Koszalka
- Subjects
Engineering drawing ,Fractal ,Zigzag ,Computer engineering ,Computer science ,business.industry ,Path (graph theory) ,Process (computing) ,3D printing ,Motion planning ,business - Abstract
This paper is focused on finding efficient path-generating algorithms. Such algorithms take part in the path-planning process, which is the last stage of model processing for 3D printing. The paper provides a comparative analysis of five implemented path-generating algorithms based on four strategies: the ZigZag, Contour, Spiral, and Fractal strategies. The analysis is based on the results of experiments made with the designed and implemented experimentation system, which supports two types of experiments: simulation and physical. The simulation experiments are performed with a programming application written in C#; the physical experiments use FDM technology for 3D printing. Based on the obtained results, we conclude that the algorithm based on the ZigZag strategy appears to perform better than the algorithms based on the other approaches.
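A ZigZag (boustrophedon) infill path is simple to generate: sweep parallel lines across the region, alternating direction on each row so the tool head never travels back across the part. The sketch below covers an axis-aligned rectangle; it is an illustration of the strategy, not the paper's C# implementation.

```python
def zigzag_path(width, height, spacing):
    """Generate a boustrophedon (ZigZag) infill path over a width x height
    rectangle, as a list of (x, y) waypoints with the given line spacing."""
    path = []
    y = 0.0
    left_to_right = True
    while y <= height:
        row = [(0.0, y), (float(width), y)]
        path.extend(row if left_to_right else row[::-1])
        left_to_right = not left_to_right  # alternate sweep direction
        y += spacing
    return path
```

Its appeal for 3D printing is exactly what this shows: every travel move between rows is only one `spacing` long, so nozzle travel time stays near the minimum for full-area coverage.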
- Published
- 2016