34,742 results for "Redundancy (engineering)"
Search Results
2. Good plumbing design
- Author
-
Fennel, John
- Published
- 2024
3. Robot Manipulator Redundancy Resolution
- Author
-
Yunong Zhang and Long Jin
- Subjects
- Robots--Control systems, Manipulators (Mechanism), Redundancy (Engineering)
- Abstract
Introduces a revolutionary, quadratic-programming-based approach to solving long-standing problems in the motion planning and control of redundant manipulators. This book describes a novel quadratic programming approach to solving redundancy resolution problems with redundant manipulators. Known as the "QP-unified motion planning and control of redundant manipulators" theory, it systematically solves difficult optimization problems of inequality-constrained motion planning and control of redundant manipulators that have plagued robotics engineers and systems designers for more than a quarter century. An example of redundancy resolution could involve a robotic limb with six joints, or degrees of freedom (DOFs), with which to position an object. As only five numbers are required to specify the position and orientation of the object, the robot can move with one remaining DOF through practically infinite poses while performing a specified task. In this case redundancy resolution refers to the process of choosing an optimal pose from among that infinite set. A critical issue in robotic systems control, the redundancy resolution problem has been widely studied for decades, and numerous solutions have been proposed. This book investigates various approaches to motion planning and control of redundant robot manipulators and describes the most successful strategy developed thus far for resolving redundancy resolution problems.
Provides a fully connected, systematic, methodological, consecutive, and easy approach to solving redundancy resolution problems. Describes a new approach to time-varying Jacobian matrix pseudoinversion, applied to redundant-manipulator kinematic control. Introduces the QP-based unification of robots' redundancy resolution. Illustrates the effectiveness of the methods presented using a large number of computer-simulation results based on PUMA560, PA10, and planar robot manipulators. Provides technical details for all schemes and solvers presented, so that readers can adopt and customize them for specific industrial applications. Robot Manipulator Redundancy Resolution is essential reading for advanced undergraduates and graduate students of robotics, mechatronics, mechanical engineering, tracking control, neural dynamics/neural networks, numerical algorithms, computation and optimization, simulation and modelling, and analog and digital circuits. It is also a valuable working resource for practicing robotics engineers, systems designers, and industrial researchers.
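The redundancy-resolution example in the abstract above (one spare DOF yielding infinitely many poses for the same task) can be illustrated with the classical pseudoinverse-plus-null-space scheme that the book's QP formulation generalizes. The following is a minimal NumPy sketch of that classical scheme, not the authors' QP method; the function name and toy Jacobian are illustrative only.

```python
import numpy as np

def redundant_ik_step(J, dx, dq_pref):
    """One velocity-level redundancy-resolution step.

    J       -- m x n task Jacobian with n > m (redundant manipulator)
    dx      -- desired task-space velocity, shape (m,)
    dq_pref -- preferred joint velocity, realized only in the null space
    """
    J_pinv = np.linalg.pinv(J)            # minimum-norm particular solution
    N = np.eye(J.shape[1]) - J_pinv @ J   # projector onto the null space of J
    return J_pinv @ dx + N @ dq_pref      # task motion + self-motion

# Toy case: 1 task DOF, 2 joints, so infinitely many dq satisfy J @ dq = dx.
J = np.array([[1.0, 1.0]])
dq = redundant_ik_step(J, np.array([1.0]), np.array([0.5, -0.5]))
# Whatever dq_pref is chosen, the task constraint J @ dq = dx holds exactly.
```

The QP approach described in the book replaces this fixed pseudoinverse rule with an inequality-constrained optimization over the same solution set.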
- Published
- 2017
4. On-Device Deep Multi-Task Inference via Multi-Task Zipping
- Author
-
Zheng Yang, Xiaoxi He, Xu Wang, Lothar Thiele, Jiahang Wu, and Zimu Zhou
- Subjects
Scheme (programming language), Computer Networks and Communications, Computer science, Distributed computing, Inference, Error function, Task (computing), Redundancy (engineering), Electrical and Electronic Engineering, Latency (engineering), Mobile device, Software, Merge (linguistics) - Abstract
Future mobile devices are anticipated to perceive, understand, and react to the world on their own by running multiple correlated deep neural networks locally on-device. Yet the complexity of deep models needs to be trimmed down to fit in mobile storage and memory. Previous studies squeeze the redundancy within a single model. In this work, we reduce the redundancy across multiple models. We propose Multi-Task Zipping (MTZ), a framework to merge correlated, pre-trained deep neural networks for cross-model compression. Central to MTZ is a layer-wise neuron sharing and incoming weight updating scheme that induces a minimal change in the error function. MTZ inherits information from each model and demands light retraining to re-boost the accuracy of the individual tasks. MTZ supports typical network layers and applies to inference tasks with different input domains. Evaluations show that MTZ can fully merge the hidden layers of two VGG-16 networks. Moreover, MTZ can effectively merge nine residual networks for diverse inference tasks, as well as models for different input domains. With the joint model merged by MTZ, the latency to switch between these tasks on memory-constrained devices is reduced by 8.71.
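The layer-wise neuron sharing idea can be caricatured as pairing neurons from two pre-trained layers by weight similarity and sharing the averaged weights. The sketch below is a toy stand-in with illustrative names, not the published MTZ scheme, which instead selects pairs to minimize the induced change in the error function and then retrains.

```python
import numpy as np

def zip_layers(W_a, W_b):
    """Greedily pair neurons (rows of the weight matrices) of two layers
    by L2 distance and replace each pair with its mean, so the two models
    share one set of neurons for this layer."""
    used, shared = set(), []
    for row in W_a:
        # nearest not-yet-shared neuron in the second layer
        _, j = min((np.linalg.norm(row - W_b[j]), j)
                   for j in range(len(W_b)) if j not in used)
        used.add(j)
        shared.append((row + W_b[j]) / 2.0)
    return np.array(shared)

W_a = np.array([[1.0, 0.0], [0.0, 1.0]])
W_b = np.array([[0.0, 0.9], [1.1, 0.0]])
W_shared = zip_layers(W_a, W_b)  # half the parameters of the two layers
```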
- Published
- 2023
5. A Fast, Reliable, Opportunistic Broadcast Scheme With Mitigation of Internal Interference in VANETs
- Author
-
Dan Keun Sung, Xinming Zhang, and Hui Zhang
- Subjects
Computer Networks and Communications, Computer science, Retransmission, Latency (audio), Interference (wave propagation), Relay, Metric (mathematics), Redundancy (engineering), Overhead (computing), Electrical and Electronic Engineering, Software, Selection (genetic algorithm), Computer network - Abstract
In VANETs, it is important to support fast and reliable multi-hop broadcast for safety-related applications. The performance of multi-hop broadcast schemes is greatly affected by relay selection strategies. However, the relationship between relay selection strategies and the expected broadcast performance has not yet been fully characterized. Furthermore, conventional broadcast schemes usually attempt to minimize the waiting-time difference between adjacent relay candidates to reduce the waiting-time overhead, which makes the relay selection process vulnerable to internal interference arising from retransmissions by previous forwarders and transmissions from redundant relays. In this paper, we jointly take relay selection and internal interference mitigation into account and propose a fast, reliable, opportunistic multi-hop broadcast scheme. The scheme utilizes a novel metric, the expected broadcast speed, for relay selection; a delayed retransmission mechanism to mitigate the adverse effect of retransmissions from previous forwarders; and an expected-redundancy-probability-based mechanism to mitigate the adverse effect of redundant relays. The performance evaluation results show that the proposed scheme yields the best broadcast performance among the four compared schemes in terms of broadcast coverage ratio and end-to-end delivery latency.
- Published
- 2023
6. Synthesis of Large-Scale Instant IoT Networks
- Author
-
Pradipta Ghosh, Marcos A. M. Vieira, Gunjan Verma, Jonathan Bunton, Ramesh Govindan, Kevin S. Chan, Paulo Tabuada, Dimitrios Pylorof, and Gaurav S. Sukhatme
- Subjects
Optimization problem, Linear programming, Computer Networks and Communications, Wireless ad hoc network, Computer science, Distributed computing, Convex optimization, Redundancy (engineering), Electrical and Electronic Engineering, Network topology, Wireless sensor network, Software, Satisfiability - Abstract
While most networks have long lifetimes, temporary network infrastructure is often useful for special events, pop-up retail, or disaster response. An instant IoT network is one that is rapidly constructed, used for a few days, then dismantled. We consider the synthesis of instant IoT networks in urban settings. This synthesis problem must satisfy complex and competing constraints: sensor coverage, line-of-sight visibility, and network connectivity. The central challenge in our synthesis problem is quickly scaling to large regions while producing cost-effective solutions. We explore two qualitatively different representations of the synthesis problem, using satisfiability modulo convex optimization (SMC) and mixed-integer linear programming (MILP). The former is more expressive for our problem than the latter, but is less well suited to solving optimization problems like ours. We show how to express our network synthesis in these frameworks. To scale to problem sizes beyond what these frameworks can handle, we develop a hierarchical synthesis technique that independently synthesizes networks in sub-regions of the deployment area and then combines them. We find that, while MILP outperforms SMC in some settings at smaller problem sizes, the fact that SMC's expressivity matches our problem ensures that it uniformly generates better-quality solutions at larger problem sizes.
- Published
- 2023
7. Enhanced Deep Blind Hyperspectral Image Fusion
- Author
-
Ronghui Zhan, Xueyang Fu, Wu Wang, Yue Huang, Xinghao Ding, Weihong Zeng, and Liyan Sun
- Subjects
Artificial neural network, Computer Networks and Communications, Computer science, Deep learning, Multispectral image, Normalization (image processing), Hyperspectral imaging, Pattern recognition, Computer Science Applications, Artificial Intelligence, Feature (computer vision), Redundancy (engineering), Artificial intelligence, Image resolution, Software - Abstract
The goal of hyperspectral image fusion (HIF) is to reconstruct high-spatial-resolution hyperspectral images (HR-HSI) by fusing low-spatial-resolution hyperspectral images (LR-HSI) and high-spatial-resolution multispectral images (HR-MSI) without loss of spatial and spectral information. Most existing HIF methods are designed on the assumption that the observation models are known, which is unrealistic in many scenarios. To address this blind HIF problem, we propose a deep-learning-based method that optimizes the observation model and the fusion process iteratively and alternately during reconstruction to enforce bidirectional data consistency, which leads to better spatial and spectral accuracy. However, general deep neural networks inherently suffer from information loss, preventing us from achieving this bidirectional data consistency. To solve this problem, we enhance the blind HIF algorithm by making part of the deep neural network invertible, applying a slightly modified spectral normalization to the weights of the network. Furthermore, in order to reduce spatial distortion and feature redundancy, we introduce a Content-Aware ReAssembly of FEatures module and an SE-ResBlock module into our network. The former module helps to boost the fusion performance, while the latter makes our model more compact. Experiments demonstrate that our model performs favorably against compared methods in terms of both non-blind HIF fusion and semi-blind HIF fusion.
- Published
- 2023
8. Graph Fusion Network-Based Multimodal Learning for Freezing of Gait Detection
- Author
-
Mohammed Bennamoun, Zhiyong Wang, Kun Hu, Kaylena A. Ehgoetz Martens, Ah Chung Tsoi, Markus Hagenbuchner, and Simon J.G. Lewis
- Subjects
Modality (human–computer interaction), Modalities, Artificial neural network, Computer Networks and Communications, Computer science, Machine learning, Computer Science Applications, Multimodal learning, Gait (human), Artificial Intelligence, Redundancy (engineering), Adjacency list, Artificial intelligence, Representation (mathematics), Software - Abstract
Freezing of gait (FoG) is identified as a sudden and brief episode of movement cessation despite the intention to continue walking. It is one of the most disabling symptoms of Parkinson's disease (PD) and often leads to falls and injuries. Many computer-aided FoG detection methods have been proposed that use data collected from unimodal sources, such as motion sensors, pressure sensors, and video cameras. However, there have been limited efforts on multimodal methods that maximize the value of all the information collected from different modalities in clinical assessments and improve FoG detection performance. Therefore, in this study, a novel end-to-end deep architecture, namely the graph fusion neural network (GFN), is proposed for multimodal learning-based FoG detection by combining footstep pressure maps and video recordings. GFN constructs multimodal graphs by treating the encoded features of each modality as vertex-level inputs and measures their adjacency patterns to construct complementary FoG representations, thus reducing the representation redundancy among different modalities. In addition, since GFN is devised to process multimodal graphs of arbitrary structures, it is expected to achieve superior performance with inputs containing missing modalities, compared to the alternative unimodal methods. A multimodal FoG dataset was collected, which included clinical assessment videos and footstep pressure sequences of 340 trials from 20 PD patients. Our proposed GFN demonstrates great promise for multimodal FoG detection, with an area under the curve (AUC) of 0.882. To the best of our knowledge, this is one of the first studies to utilize multimodal learning for automated FoG detection, which offers significant opportunities for better patient assessments and clinical trials in the future.
- Published
- 2023
9. A game of snakes and ladders: Restraints of trade and ladder clauses
- Author
-
Wilson, John and Allen, Robert
- Published
- 2018
10. Computational Intelligent Sensor-Rank Consolidation Approach for Industrial Internet of Things (IIoT)
- Author
-
Patan Rizwan, Mahammad Shareef Mekala, and Mohammad S. Khan
- Subjects
Computer Networks and Communications, Heuristic (computer science), Computer science, Deep learning, Stability (learning theory), Cohesion (computer science), Data loss, Computer Science Applications, Intelligent sensor, Hardware and Architecture, Signal Processing, Redundancy (engineering), Benchmark (computing), Artificial intelligence, Data mining, Information Systems - Abstract
Continuous field monitoring and sensor-data search remain essential elements underpinning the Industrial Internet of Things (IIoT). Most existing systems rely on spatial coordinates or semantic keywords to retrieve the required data, which are not comprehensive constraints because of sensor cohesion and the haphazardness of unique localization. To address this issue, we propose a deep-learning-inspired sensor-rank consolidation (DLi-SRC) system comprising a set of three algorithms: first, a sensor cohesion algorithm based on a Lyapunov approach to accelerate sensor stability; second, a sensor unique-localization algorithm based on a rank-inferior measurement index to avoid data redundancy and data loss; and third, a heuristic directive algorithm to improve data-search efficiency, which returns appropriately ranked sensor results according to the search specification. We conducted thorough simulations to assess the effectiveness of DLi-SRC. The outcomes reveal significant performance gains over benchmark standard approaches, including improvements in search efficiency and service quality, a 91% enhancement in sensor existence rate, and a 49% gain in sensor energy.
- Published
- 2023
11. EEG Feature Selection via Global Redundancy Minimization for Emotion Recognition
- Author
-
Long Ye, Xia Wu, Fulin Wei, Xueyuan Xu, Qing Li, and Tianyuan Jia
- Subjects
Computer science, Feature extraction, Feature selection, Pattern recognition, Electroencephalography, Human-Computer Interaction, Correlation, Discriminative model, Feature (machine learning), Redundancy (engineering), Artificial intelligence, Software, Selection (genetic algorithm) - Abstract
A common drawback of EEG-based emotion recognition is that volume conduction effects of the human head introduce interchannel dependence and result in highly correlated information among most EEG features. These highly correlated EEG features cannot provide extra useful information, and they actually reduce the performance of emotion recognition. However, the existing feature selection methods, commonly used to remove redundant EEG features for emotion recognition, ignore the correlation between the EEG features or utilize a greedy strategy to evaluate the interdependence, which leads to the algorithms retaining the correlated and redundant features with similar feature scores in the EEG feature subset. To solve this problem, we propose a novel EEG feature selection method for emotion recognition, termed global redundancy minimization in orthogonal regression (GRMOR). GRMOR can effectively evaluate the dependence among all EEG features from a global view and then select a discriminative and nonredundant EEG feature subset for emotion recognition. To verify the performance of GRMOR, we utilized three EEG emotional data sets (DEAP, SEED, and HDED) with different numbers of channels (32, 62, and 128). The experimental results demonstrate that GRMOR is a promising tool for redundant feature removal and informative feature selection from highly correlated EEG features.
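The redundancy problem this abstract describes, where correlated features receive similar scores and survive selection together, can be made concrete with a crude pairwise-correlation filter. The greedy sketch below is only an illustration of redundancy removal, not the GRMOR algorithm, which evaluates redundancy globally via orthogonal regression; the function name and threshold are illustrative.

```python
import numpy as np

def drop_redundant(X, threshold=0.95):
    """Keep a feature only if its absolute Pearson correlation with every
    already-kept feature stays below `threshold`; return kept column
    indices. A greedy stand-in for global redundancy minimization."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(corr.shape[0]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([base,                                          # feature 0
               2.0 * base + 1e-6 * rng.normal(size=(200, 1)), # near-duplicate
               rng.normal(size=(200, 1))])                    # independent
kept = drop_redundant(X)  # the near-duplicate of feature 0 is dropped
```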
- Published
- 2023
12. Exam form automation using facial recognition
- Author
-
Mohd Kaif Ahmed, Hilal Ahmad Mir, Mohd Tabish Siddiqui, and Bishnu Deo Kumar
- Subjects
Web server, Computer science, NumPy, General Medicine, Python (programming language), Automation, Facial recognition system, Task (project management), Set (abstract data type), Human–computer interaction, Redundancy (engineering)
Data Science and Machine Learning have set modern trends for automation, and it is sensible to simplify day-to-day activities related to these domains. One such task is the repetitive submission of the same personal details when filling in any online form for academic or other purposes. This repetitiveness is dilatory, and machine learning supports automating the re-entry of personal details each time we fill in a form, instead of manual entry. To eliminate this redundancy, we propose "Exam Form Automation Using Facial Recognition", which includes real-time face recognition via webcam, pre-storing data on web servers with the pandas and NumPy libraries for Python, and then automating the form entries using the Selenium library for Python. The system captures a real-time image of a user and collects the relevant details from data hosted on a web server, thus precluding the monotonous procedure of manual entry.
- Published
- 2023
13. An Efficient Sharing Grouped Convolution via Bayesian Learning
- Author
-
Duan Bin, Qi Sun, Tinghuan Chen, Qianru Zhang, Bei Yu, Guoqing Li, Hao Geng, and Meng Zhang
- Subjects
Structure (mathematical logic), Computer Networks and Communications, Computer science, Bayesian probability, Pattern recognition, Bayesian inference, Convolutional neural network, Computer Science Applications, Convolution, Correlation, Artificial Intelligence, Redundancy (engineering), Artificial intelligence, Software - Abstract
Compared with traditional convolutions, grouped convolutions are promising in terms of both model performance and parameter count. However, existing models with grouped convolution still exhibit parameter redundancy. In this article, concerning the grouped convolution, we propose a sharing grouped convolution structure to reduce parameters. To efficiently eliminate parameter redundancy and improve model performance, we propose a Bayesian sharing framework to transform the vanilla grouped convolution into the sharing structure. Intragroup correlation and intergroup importance are introduced into the prior of the parameters. We handle the maximum Type-II likelihood estimation problem for the intragroup correlation and intergroup importance with a group LASSO-type algorithm. The prior mean of the sharing kernels is iteratively updated. Extensive experiments demonstrate that, on different grouped convolutional neural networks, the proposed sharing grouped convolution structure with the Bayesian sharing framework can reduce parameters and improve prediction accuracy. The proposed sharing framework can reduce parameters by up to 64.17%. For ResNeXt-50 with the sharing grouped convolution on the ImageNet dataset, network parameters can be reduced by 96.875% in all grouped convolutional layers, and accuracies are improved to 78.86% and 94.54% for top-1 and top-5, respectively.
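The parameter saving that grouping itself provides (before any Bayesian sharing) is easy to verify by counting weights: splitting channels into g independent groups divides the weight count of a convolution by g. A small sketch with an illustrative helper name, not code from the article:

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a 2-D convolution with a square k x k kernel and
    `groups` channel groups (bias terms omitted)."""
    assert c_in % groups == 0 and c_out % groups == 0
    # each group independently maps c_in/groups channels to c_out/groups
    return groups * (c_in // groups) * (c_out // groups) * k * k

dense = conv_params(128, 128, 3)               # standard convolution
grouped = conv_params(128, 128, 3, groups=32)  # 32x fewer weights
```

The article's sharing structure then reduces the per-group kernels further by letting groups share kernels under the Bayesian prior.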
- Published
- 2022
14. Backstepping Control of Open-Chain Linkages Actuated by Antagonistic Hill Muscles.
- Author
-
Warner, Holly, Richter, Hanz, and van den Bogert, Antonie J.
- Subjects
Kinematic chains, Muscles, Dynamic models - Abstract
For human-machine interaction, the forward progression of technology, particularly controls, regularly brings about new possibilities. Indeed, healthcare applications have flourished in recent years, including robotic rehabilitation, exercise, and prosthetic devices. Testing these devices with human subjects is inherently risky and frequently inconsistent. This work offers a novel simulation framework for overcoming many of these difficulties. Specifically, generating a closed-loop dynamic model of a human or a human subsystem that can connect to device simulations allows simulated human-machine interaction. In this work, a muscle-actuated open kinematic chain linkage is generated to simulate the human, and a backstepping controller based on inverse dynamics is derived. The control architecture directly addresses muscle redundancy, and two options to resolve this redundancy are evaluated. The specific case of a muscle-actuated arm linkage is developed to illustrate the framework. Trajectory tracking is achieved in simulation. The muscles recruited to meet the tracking goal are in agreement with the method used to solve the redundancy problem. In the future, coupling such simulations to any relevant machine simulation will provide safe, insightful pre-prototype test results.
- Published
- 2020
15. Software-Based Fault-Detection Technique for Object Tracking in Autonomous Vehicles
- Author
-
Barcelona Supercomputing Center, Medaglini, Alessio, Bartolini, Sandro, Mandó, Gianluca, Quiñones, Eduardo, and Royuela Alcázar, Sara
- Abstract
Autonomous vehicles are nowadays gaining popularity in many different sectors, from automotive to aviation, and find application in increasingly complex and strategic contexts. In this domain, Obstacle Detection and Avoidance Systems (ODAS) are crucial and, since they are safety-critical systems, they must employ fault-detection and management techniques to maintain correct behavior. One of the most popular techniques for obtaining a reliable system is redundancy, at both the hardware and the software level. With the objective of improving fault detection while having little impact on the programmability of the system, this paper introduces a general and lightweight monitoring technique based on a user-directed observer design pattern, which monitors the validity of predicates over state variables of the algorithms in execution. This can increase the fault-detection capability and even bring forward the detection time of some faults that would be caught by replication only at later times. Results are evaluated on a real-world use case from the railway domain, and show how the proposed fault-detection mechanism can increase the overall reliability of the system by up to 24.4% compared to replication alone in crowded scenarios over the entire tracking process, and by up to 43.9% in specific phases.
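The user-directed observer pattern described above, which monitors the validity of predicates over state variables of the running algorithms, can be sketched in a few lines. All names below are illustrative, not the paper's API:

```python
class PredicateObserver:
    """Checks user-supplied predicates over a state snapshot and records
    violations: a lightweight software fault detector that can flag some
    faults before replication-based comparison would."""
    def __init__(self):
        self.predicates = []   # (name, callable) pairs
        self.violations = []   # (name, offending state) pairs

    def watch(self, name, predicate):
        self.predicates.append((name, predicate))

    def check(self, state):
        for name, pred in self.predicates:
            if not pred(state):
                self.violations.append((name, dict(state)))

obs = PredicateObserver()
# Example invariant for an obstacle tracker: distances are never negative.
obs.watch("nonnegative_distance", lambda s: s["distance"] >= 0.0)
obs.check({"distance": 12.5})   # consistent state, nothing recorded
obs.check({"distance": -3.0})   # violated predicate is recorded
```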
- Published
- 2023
16. A Secure Sensor Fusion Framework for Connected and Automated Vehicles Under Sensor Attacks
- Author
-
Tianci Yang and Chen Lv
- Subjects
Exploit, Computer Networks and Communications, Computer science, Wireless network, Real-time computing, Sensor fusion, Computer Science Applications, Hardware and Architecture, Control theory, Signal Processing, Redundancy (engineering), Platoon, Resilience (network), Information Systems, Vulnerability (computing) - Abstract
As typical applications of cyber-physical systems (CPSs), connected and automated vehicles (CAVs) are able to measure the surroundings and share local information with the other vehicles by using multi-modal sensors and wireless networks. CAVs are expected to increase safety, efficiency, and capacity of our transportation systems. However, the increasing usage of sensors has also increased the vulnerability of CAVs to sensor faults and adversarial attacks. Anomalous sensor values resulting from malicious cyberattacks or faulty sensors may cause severe consequences or even fatalities. In this paper, we increase the resilience of CAVs to faults and attacks by using multiple sensors for measuring the same physical variable to create redundancy. We exploit this redundancy and propose a sensor fusion algorithm for providing a robust estimate of the correct sensor information with bounded errors independent of the attack signals, and for attack detection and isolation. The proposed sensor fusion framework is applicable to a large class of security-critical CPSs. To minimize the performance degradation resulting from the usage of estimation for control, we provide an H∞ controller for CACC-equipped CAVs. The designed controller is capable of stabilizing the closed-loop dynamics of each vehicle in the platoon while reducing the joint effect of estimation errors and communication channel noise on the tracking performance and string behavior of the vehicle platoon. Numerical examples are presented to illustrate the effectiveness of our methods.
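A simple way to see how measuring one physical variable with several sensors buys attack resilience is median fusion: with fewer than half the sensors compromised, the fused value stays within the range of the honest readings. This is a textbook illustration, not the paper's estimator, which additionally provides bounded-error guarantees and attack isolation.

```python
import statistics

def fuse(readings):
    """Median fusion of redundant readings of one physical variable:
    robust to any minority of arbitrarily corrupted sensors."""
    return statistics.median(readings)

honest = [20.1, 19.9, 20.0]            # three sensors measuring ~20.0
attacked = honest + [500.0, -500.0]    # two sensors under attack (a minority)
estimate = fuse(attacked)              # still within the honest range
```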
- Published
- 2022
17. A Highly Efficient Model to Study the Semantics of Salient Object Detection
- Author
-
Ming-Ming Cheng, Ali Borji, Zheng Lin, Meng Wang, Shang-Hua Gao, and Yong-Qiang Tan
- Subjects
Source code, Computer science, Applied Mathematics, Feature extraction, Semantics, Machine learning, Object detection, Reduction (complexity), Computational Theory and Mathematics, Artificial Intelligence, Information leakage, Feature (machine learning), Redundancy (engineering), Neural networks (computer), Computer Vision and Pattern Recognition, Artificial intelligence, Algorithms, Software - Abstract
CNN-based salient object detection (SOD) methods achieve impressive performance. However, the way semantic information is encoded in them and whether they are category-agnostic is less explored. One major obstacle in studying these questions is the fact that SOD models are built on top of the ImageNet pre-trained backbones which may cause information leakage and feature redundancy. To remedy this, here we first propose an extremely light-weight holistic model tied to the SOD task that can be freed from classification backbones and trained from scratch, and then employ it to study the semantics of SOD models. With the holistic network and representation redundancy reduction by a novel dynamic weight decay scheme, our model has only 100K parameters, ∼ 0.2% of parameters of large models, and performs on par with SOTA on popular SOD benchmarks. Using CSNet, we find that a) SOD and classification methods use different mechanisms, b) SOD models are category insensitive, c) ImageNet pre-training is not necessary for SOD training, and d) SOD models require far fewer parameters than the classification models. The source code is publicly available at https://mmcheng.net/sod100k/.
- Published
- 2022
18. A General Framework for Feature Selection Under Orthogonal Regression With Global Redundancy Minimization
- Author
-
Xueyuan Xu, Xia Wu, Feiping Nie, Fulin Wei, and Wei Zhong
- Subjects
Computer science, Feature selection, Pattern recognition, Filter (signal processing), Computer Science Applications, Computational Theory and Mathematics, Discriminative model, Margin (machine learning), Feature (machine learning), Redundancy (engineering), Artificial intelligence, Total least squares, Subspace topology, Information Systems - Abstract
Feature selection has attracted a lot of attention in obtaining discriminative and non-redundant features from high-dimension data. Compared with traditional filter and wrapper methods, embedded methods can obtain a more informative feature subset by fully considering the importance of features in the classification tasks. However, the existing embedded methods emphasize the above importance of features and mostly ignore the correlation between the features, which leads to retain the correlated and redundant features with similar scores in the feature subset. To solve the problem, we propose a novel supervised embedded feature selection framework, called feature selection under global redundancy minimization in orthogonal regression (GRMOR). The proposed framework can effectively recognize redundant features from a global view of redundancy among the features. We also incorporate the large margin constraint into GRMOR for robust multi-class classification. Compared with the traditional embedded methods based on least square regression, the proposed framework utilizes orthogonal regression to preserve more discriminative information in the subspace, which can help accurately rank the importance of features in the classification tasks. Experimental results on twelve public datasets demonstrate that the proposed framework can obtain superior classification performance and redundancy removal performance than twelve other feature selection methods.
- Published
- 2022
19. S-CoEA: Subproblems Co-Solving Evolutionary Algorithm for Uncertain Optimization
- Author
-
Jie Chen, Juan Li, Ling Wang, and Bin Xin
- Subjects
Mathematical optimization, Covariance matrix, Computer science, Evolutionary algorithm, Sampling (statistics), Computer Science Applications, Human-Computer Interaction, Test case, Control and Systems Engineering, Redundancy (engineering), A priori and a posteriori, Probability distribution, Electrical and Electronic Engineering, Evolution strategy, Algorithms, Software, Information Systems - Abstract
Existing techniques for dealing with uncertain optimization problems (UOPs) mostly rely on the preference information of decision makers (DMs) or on knowledge of the probability distributions of the uncertainties. In practice, accurate preferences and distribution information are hard to obtain due to a lack of knowledge, and it is risky to make assumptions about this information when DMs do not have sufficient knowledge of the problem. This article treats UOPs in an a posteriori manner and proposes a subproblem co-solving evolutionary algorithm (EA) for UOPs, namely S-CoEA. It decomposes a UOP into a series of correlated subproblems using the proposed decomposition strategy, embedded with an original ordered weighted-sum (OWS) operator. These subproblems are formulated as different aggregations of sampled function values and represent different preferences over the uncertainties. They are co-solved in parallel using information from neighboring subproblems. The sampling strategy is used to gather distribution information about the uncertain functions and to alleviate the detrimental effects of the uncertainties. A sample-updating scheme based on historical information is presented to further improve the performance of S-CoEA. The proposed S-CoEA is compared with two state-of-the-art competitors: the EA with the exponential sampling method (E-sampling) and the population-controlled covariance matrix self-adaptation evolution strategy (pcCMSA-ES). Numerical experiments are conducted on a series of test instances with various characteristics and different strength levels of uncertainty. The results show that S-CoEA outperforms or performs competitively against the competitors on the majority of 26 continuous test instances and four test cases of discrete redundancy allocation problems.
- Published
- 2022
20. Deep Canonical Correlation Analysis Using Sparsity-Constrained Optimization for Nonlinear Process Monitoring
- Author
-
Ying Yang, Wanquan Liu, Xianchao Xiu, and Zhonghua Miao
- Subjects
Artificial neural network ,Computer science ,Process (computing) ,Constrained optimization ,Computer Science Applications ,Nonlinear system ,Control and Systems Engineering ,Benchmark (computing) ,Redundancy (engineering) ,Electrical and Electronic Engineering ,Representation (mathematics) ,Canonical correlation ,Algorithm ,Information Systems - Abstract
This paper proposes an efficient nonlinear process monitoring method (DCCA-SCO) by integrating canonical correlation analysis (CCA), deep auto-encoder neural networks (DAENNs), and sparsity constrained optimization (SCO). Specifically, DAENNs are first used to learn a nonlinear function automatically, which characterizes intrinsic features of the original data. Then, the CCA is performed in that low-dimensional representation space to extract the most correlated variables. In addition, the SCO is imposed to reduce the redundancy of the hidden representation. Unlike other deep CCA methods, the DCCA-SCO provides a new nonlinear method that is able to learn a nonlinear mapping with a sparse prior. The validity of the proposed DCCA-SCO is extensively demonstrated on the benchmark Tennessee Eastman (TE) process and the diesel generator (DG) process. In particular, compared with the classical CCA, the fault detection rate is increased by 8.00% for the fault IDV(11) in the TE process.
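The CCA step at the core of DCCA-SCO can be illustrated with plain linear CCA. This is a minimal sketch, not the paper's method: in DCCA-SCO the inputs would be the low-dimensional codes learned by the deep auto-encoders and a sparsity constraint would be imposed, whereas here raw data and a small ridge term (`reg`, an assumption) are used.

```python
import numpy as np

def cca(X, Y, k=1, reg=1e-6):
    """Plain linear CCA: find projection directions maximizing the
    correlation between X and Y via SVD of the whitened cross-covariance."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Cholesky-based whitening: L^{-1} C L^{-T} = I
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy.T)
    A = Wx.T @ U[:, :k]      # projection for X
    B = Wy.T @ Vt[:k].T      # projection for Y
    return A, B, s[:k]       # s holds the canonical correlations
```

On two views sharing a common latent signal, the first canonical correlation recovered this way is close to the true shared correlation.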
- Published
- 2022
21. Distributed Information Exchange With Low Latency for Decision Making in Vehicular Fog Computing
- Author
-
Xu Wu, Jie Zhang, Victor C. M. Leung, and Junbin Liang
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Node (networking) ,Real-time computing ,Cloud computing ,Computer Science Applications ,law.invention ,Hardware and Architecture ,Relay ,law ,Signal Processing ,Redundancy (engineering) ,Latency (engineering) ,Driving range ,business ,Information exchange ,Information Systems - Abstract
Traditional decision making in a vehicle network involves uploading vehicle sensing data to faraway cloud platforms and then returning correlated results to the vehicles. These data are voluminous and highly redundant, which causes high communication latency and degrades vehicle applications. Vehicular fog computing (VFC) is a new network paradigm that uses local fog nodes for decision making. However, achieving distributed information exchange with low latency is challenging because vehicle mobility keeps the connectivity of the vehicle network low. In this paper, a distributed information exchange scheme with low latency in VFC is proposed. First, considering the frequent changes in vehicle positions and the randomness of driving routes, public transportation facilities with a wider driving range, such as buses and taxis, are used as fog nodes to increase the probability of uploading data. The fog nodes then dynamically adjust the data sampling frequency according to the time-space correlation of the data, so that only nonredundant data are received. To minimize the interruption latency caused by accidents during an exchange, the fog nodes evaluate and predict connection states among themselves and their neighboring vehicles when establishing exchanges. If a fog node finds that a vehicle cannot complete an information exchange because it may move outside the communication range in the near future, it recalculates an optimized relay route for the vehicle using mixed-integer programming. Theoretical analysis and simulation results show that, compared with existing work, the proposed scheme can exchange all vehicle data completely with lower latency.
- Published
- 2022
22. Compressed Sensing Based Low-Power Multi-View Video Coding and Transmission in Wireless Multi-Path Multi-Hop Networks
- Author
-
Tommaso Melodia, Zhangyu Guan, and Nan Cen
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Real-time computing ,Data_CODINGANDINFORMATIONTHEORY ,Video quality ,Compressed sensing ,Transmission (telecommunications) ,Distortion ,Redundancy (engineering) ,Wireless ,Electrical and Electronic Engineering ,business ,Wireless sensor network ,Encoder ,Software ,Decoding methods ,Communication channel - Abstract
Wireless multimedia sensor networks (WMSNs) are increasingly deployed for surveillance, monitoring, and Internet-of-Things (IoT) sensing applications in which a set of cameras capture and compress local images and then transmit the data to a remote controller. The captured local images may also be compressed in a multi-view fashion to reduce the redundancy among overlapping views. In this paper, we present a novel paradigm for compressed-sensing-enabled multi-view coding and streaming in WMSNs. We first propose a new encoding and decoding architecture for multi-view video systems based on compressed sensing (CS) principles, composed of cooperative sparsity-aware block-level rate-adaptive encoders, feedback channels, and independent decoders. The proposed architecture leverages the properties of CS to overcome many limitations of traditional encoding techniques, specifically massive storage requirements and high computational complexity. We then present a modeling framework that exploits this coding architecture: the resulting mathematical problem minimizes power consumption by jointly determining the encoding rate and multi-path rate allocation subject to distortion and energy constraints. Extensive performance evaluation shows that the proposed framework can transmit multi-view streams with guaranteed video quality at lower power consumption.
- Published
- 2022
23. Generating Unit Tests for Documentation
- Author
-
Ashvitha Sridharan, Martin P. Robillard, Alexa Hernandez, and Mathieu Nassif
- Subjects
FOS: Computer and information sciences ,Source code ,Computer science ,Artifact (software development) ,Computer Science - Software Engineering ,Documentation ,Software ,Redundancy (engineering) ,Unit testing ,Test (assessment) ,Software Engineering (cs.SE) ,Template ,Software engineering - Abstract
Software projects capture information in various kinds of artifacts, including source code, tests, and documentation. Such artifacts routinely encode redundant information, e.g., when a specification encoded in the source code is also separately tested and documented. Without supporting technology, such redundancy easily leads to inconsistencies and a degradation of documentation quality. We designed a tool-supported technique, called DScribe, that leverages the redundancy between tests and documentation to generate consistent and checkable documentation and unit tests from a single source of information. DScribe generates unit tests and documentation fragments based on a novel template and artifact generation technology. By pairing test and documentation generation, DScribe provides a mechanism to automatically detect and replace outdated documentation. Our evaluation of the Apache Commons IO library revealed that of 835 specifications about exception handling, 85% were untested or incorrectly documented, and that DScribe could be used to automatically generate 97% of the tests and documentation.
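The single-source idea behind DScribe can be sketched as two templates rendered from one specification. This is a hypothetical illustration, not DScribe's actual template language: the spec fields, template strings, and Java snippets below are all assumptions made for the example.

```python
from string import Template

# One specification, rendered twice -- once as a Javadoc fragment,
# once as a JUnit test -- so documentation and tests cannot drift apart.
DOC = Template("@throws $exception if $param is $condition")
TEST = Template("""\
@Test
public void ${method}_throws${exception}() {
    assertThrows($exception.class, () -> $receiver.$method($badValue));
}""")

def generate(spec):
    """Render both artifacts from a single specification dict."""
    return DOC.substitute(spec), TEST.substitute(spec)

spec = {"method": "read", "exception": "NullPointerException",
        "param": "input", "condition": "null",
        "receiver": "stream", "badValue": "null"}
doc, test = generate(spec)
```

Because both artifacts come from the same dict, regenerating them after a spec change keeps the doc fragment and the test consistent by construction.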
- Published
- 2022
24. Non-Structured DNN Weight Pruning—Is It Beneficial in Any Platform?
- Author
-
Yanzhi Wang, Zhezhi He, Sia Huat Tan, Sheng Lin, Deliang Fan, Xiaolong Ma, Linfeng Zhang, Zhengang Li, Geng Yuan, Kaisheng Ma, Xuehai Qian, Xue Lin, and Shaokai Ye
- Subjects
Lossless compression ,Artificial neural network ,Computer Networks and Communications ,Computer science ,business.industry ,Machine learning ,computer.software_genre ,Computer Science Applications ,Artificial Intelligence ,Key (cryptography) ,Redundancy (engineering) ,Hardware acceleration ,Pruning (decision trees) ,Artificial intelligence ,Quantization (image processing) ,business ,computer ,Software ,Dram - Abstract
Large deep neural network (DNN) models pose a key challenge to energy efficiency because off-chip DRAM accesses consume significantly more energy than arithmetic or SRAM operations. This motivates intensive research on model compression, with two main approaches. Weight pruning leverages the redundancy in the number of weights; it can be performed in a non-structured manner, which offers higher flexibility and pruning rate but incurs index accesses due to irregular weight positions, or in a structured manner, which preserves the full matrix structure at a lower pruning rate. Weight quantization leverages the redundancy in the number of bits per weight. Compared to pruning, quantization is much more hardware-friendly and has become a "must-do" step for FPGA and ASIC implementations; thus, any evaluation of the effectiveness of pruning should be on top of quantization. The key open question is: with quantization, what kind of pruning (non-structured versus structured) is most beneficial? This question is fundamental because the answer determines the design aspects we should focus on to avoid the diminishing returns of certain optimizations. This article provides a definitive answer to the question for the first time. First, we build ADMM-NN-S by extending and enhancing ADMM-NN, a recently proposed joint weight pruning and quantization framework, with algorithmic support for structured pruning, dynamic ADMM regulation, and masked mapping and retraining. Second, we develop a methodology for a fair and fundamental comparison of non-structured and structured pruning in terms of both storage and computation efficiency. Our results show that ADMM-NN-S consistently outperforms the prior art: 1) it achieves 348×, 36×, and 8× overall weight pruning on LeNet-5, AlexNet, and ResNet-50, respectively, with (almost) zero accuracy loss, and 2) we demonstrate for the first time that fully binarized (for all layers) DNNs can be lossless in accuracy in many cases. These results provide a strong baseline and lend credibility to our study. Under the proposed comparison framework, with the same accuracy and quantization, the results show that non-structured pruning is not competitive in terms of either storage or computation efficiency. We therefore conclude that structured pruning has greater potential than non-structured pruning, and we encourage the community to focus on DNN inference acceleration with structured sparsity.
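The storage argument above can be made concrete with a toy comparison. This sketch is not ADMM-NN-S: it uses simple magnitude pruning and a rough bit-count model, and the 4-bit quantization and 16-bit index width in the usage below are assumptions chosen only to illustrate why per-weight indices hurt non-structured pruning.

```python
import numpy as np

def nonstructured_prune(W, sparsity):
    """Zero the smallest-magnitude individual weights."""
    k = int(W.size * sparsity)
    thresh = np.sort(np.abs(W), axis=None)[k]
    return np.where(np.abs(W) < thresh, 0.0, W)

def structured_prune(W, sparsity):
    """Zero whole rows (e.g. output channels) with the smallest L2 norm,
    preserving a dense matrix structure for the surviving rows."""
    norms = np.linalg.norm(W, axis=1)
    k = int(W.shape[0] * sparsity)
    out = W.copy()
    out[np.argsort(norms)[:k]] = 0.0
    return out

def storage_bits(W, weight_bits, index_bits=0):
    """Rough storage estimate: non-structured pruning pays index_bits per
    surviving weight to record its irregular position; structured pruning
    needs no per-weight index."""
    return np.count_nonzero(W) * (weight_bits + index_bits)
```

At equal sparsity and equal quantization, the per-weight index overhead makes the non-structured variant cost more bits, which is the effect the article's comparison framework quantifies rigorously.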
- Published
- 2022
25. Distributed Deployment in UAV-Assisted Networks for a Long-Lasting Communication Coverage
- Author
-
Xingwei Wang, Xiaojie Liu, Novella Bartolini, Jie Jia, and Jianhui Lv
- Subjects
Scheme (programming language) ,Service (systems architecture) ,Computer Networks and Communications ,Computer science ,Real-time computing ,Computer Science Applications ,Control and Systems Engineering ,Software deployment ,Distributed algorithm ,UAV communication cellular networks coverage ,Redundancy (engineering) ,Wireless ,Electrical and Electronic Engineering ,Voronoi diagram ,Energy (signal processing) ,Information Systems - Abstract
Owing to their flexible and quick movement, unmanned aerial vehicles (UAVs) have been widely used to assist wireless communications. A challenging problem is how to deploy UAVs to provide communication services to more user devices over a long period while minimizing the number of UAVs. In this article, we propose a UAV network system with UAV deletion, due to redundancy or energy exhaustion, and UAV insertion, to provide long-lasting communication services. A distributed algorithm based on virtual Coulomb forces and the Voronoi diagram is then proposed to deploy UAVs, improving communication coverage while turning redundant UAVs off. On the one hand, to improve communication coverage, we propose two moving schemes for the UAVs. On the other hand, to save energy, we propose a sleeping scheme that turns several redundant UAVs off. In particular, the redundancy of a UAV is evaluated according to a proposed definition of the average local coverage rate. Simulation results demonstrate that the proposed algorithm can deploy a minimal number of UAVs to provide sufficient communication coverage and that the presented network system can offer long-lasting service.
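One iteration of the virtual-Coulomb-force spreading idea can be sketched as follows. This is a hedged toy in 2-D, not the paper's algorithm: the force law `(R - d) / d²`, the step size, and the cutoff at the communication radius are assumptions; the paper additionally combines the force with a Voronoi partition and a sleeping scheme.

```python
import math

def coulomb_move(uavs, comm_radius, step=0.1):
    """One iteration of a virtual-Coulomb-force spread: each UAV is pushed
    away from neighbours closer than comm_radius, which expands coverage."""
    moved = []
    for i, (xi, yi) in enumerate(uavs):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(uavs):
            if i == j:
                continue
            d = math.hypot(xi - xj, yi - yj)
            if 0 < d < comm_radius:
                # repulsive force grows as UAVs crowd together
                f = (comm_radius - d) / (d * d)
                fx += f * (xi - xj) / d
                fy += f * (yi - yj) / d
        moved.append((xi + step * fx, yi + step * fy))
    return moved
```

Iterating this update drives overlapping UAVs apart until their pairwise distances approach the communication radius, after which the net force vanishes.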
- Published
- 2022
26. A novel feature selection approach based on constrained eigenvalues optimization
- Author
-
Nadjia Benblidia and Amina Benkessirat
- Subjects
General Computer Science ,Computer science ,business.industry ,Feature selection ,Pattern recognition ,Task (project management) ,Constraint (information theory) ,Feature (computer vision) ,Benchmark (computing) ,Redundancy (engineering) ,Relevance (information retrieval) ,Artificial intelligence ,business ,Eigenvalues and eigenvectors - Abstract
In real-life classification applications it is often tricky to select model features that ensure adequate sample classification, given a large number of candidate features. Our main contribution is threefold: (1) evaluate the relevance and redundancy of features; (2) define the feature selection problem as an eigenvalue computation problem with a linear constraint; (3) select the best features efficiently. We considered 20 UCI benchmark datasets to validate and test our approach. The results were compared with those obtained using one of the most widely used approaches, namely mRMR, with the conventional features, and with two modern state-of-the-art approaches. The experimental results revealed that our approach can improve the classification task while using only 20% of the conventional features.
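A relevance-minus-redundancy eigenvalue formulation can be sketched in a few lines. This is a loose sketch in the spirit of contribution (2), not the paper's constrained formulation: scoring by the leading eigenvector of a matrix with feature-target correlations on the diagonal and negated pairwise correlations off the diagonal is an assumption made for illustration.

```python
import numpy as np

def eigen_feature_scores(X, y):
    """Rank features by the leading eigenvector of a
    relevance-minus-redundancy matrix: diagonal = |corr(feature, y)|,
    off-diagonal = -|corr(feature_i, feature_j)|."""
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    rel = np.abs(Xc.T @ yc / len(y))             # relevance to the target
    red = np.abs(np.corrcoef(Xc, rowvar=False))  # pairwise redundancy
    Q = np.diag(rel) - (red - np.eye(X.shape[1]))
    w, V = np.linalg.eigh(Q)
    scores = np.abs(V[:, -1])                    # leading eigenvector
    return np.argsort(scores)[::-1]              # best features first
```

A feature strongly correlated with the target but uncorrelated with the rest dominates the leading eigenvector and is ranked first.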
- Published
- 2022
27. Delta Encoding Correction for Mobile Edge Caching in Internet of Vehicles
- Author
-
Ning Zhang, Danyang Wang, Zan Li, Zhijuan Hu, and Liwei Ren
- Subjects
Divide and conquer algorithms ,Mobile edge computing ,Computer Networks and Communications ,Computer science ,Delta encoding ,Real-time computing ,Computer Science Applications ,Upload ,Control and Systems Engineering ,Encoding (memory) ,Redundancy (engineering) ,Cache ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,Information Systems - Abstract
As one of the key components of intelligent transportation systems, the Internet of Vehicles enables vehicles to acquire and exchange essential information with each other and with other facilities. As infrastructure that directly connects vehicles to the Internet, the roadside unit (RSU) is responsible for collecting information generated by vehicles and transmitting it to the control center for analysis. However, the ever-increasing number of vehicles and sensors produces a large amount of data, and the data collected by vehicles within a collection period contain redundancy; both factors can severely reduce the efficiency of data storage and upload. In this article, we consider that each RSU hosts a mobile edge computing server and present delta encoding correction for saving data cache space and transmission time. First, we propose a new cost model and an evaluation method for delta encoding of the COPY/ADD class. Then, the types of delta encoding and the corresponding merging rules are defined. Based on these rules, we propose a minimum delta encoding cost (MDC) algorithm, which adopts a divide-and-conquer strategy to obtain a superior correcting sequence of delta encodings. Theoretical analysis proves that the proposed MDC algorithm generates the encoding sequence with the smallest total cost without affecting information reconstruction. In addition, we conduct experiments on a randomly produced synthetic dataset. The results show that, compared with the Greedy, One-Pass, and 1.5-Pass algorithms, the delta encoding output by the MDC algorithm performs better at saving cache costs.
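COPY/ADD delta encoding, the class of encodings the MDC algorithm optimizes, can be sketched with a naive greedy encoder and its decoder. This is only a baseline illustration under assumed op formats: the article's MDC algorithm additionally merges and reorders such operations to minimize a total cost, which this sketch does not attempt.

```python
def delta_encode(base, target, min_copy=3):
    """Greedy COPY/ADD delta: emit COPY(offset, length) when a run of at
    least min_copy target bytes appears in base, otherwise ADD literals."""
    ops, i, literal = [], 0, bytearray()
    while i < len(target):
        best_off, best_len = -1, 0
        for off in range(len(base)):            # naive O(n*m) match search
            l = 0
            while (off + l < len(base) and i + l < len(target)
                   and base[off + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_off, best_len = off, l
        if best_len >= min_copy:
            if literal:
                ops.append(("ADD", bytes(literal))); literal = bytearray()
            ops.append(("COPY", best_off, best_len))
            i += best_len
        else:
            literal.append(target[i]); i += 1
    if literal:
        ops.append(("ADD", bytes(literal)))
    return ops

def apply_delta(base, ops):
    """Reconstruct the target from the base and the COPY/ADD sequence."""
    out = bytearray()
    for op in ops:
        out += base[op[1]:op[1] + op[2]] if op[0] == "COPY" else op[1]
    return bytes(out)
```

The cache-cost trade-off the article studies is visible even here: a COPY costs a fixed-size (offset, length) record regardless of run length, while an ADD grows with the literal payload.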
- Published
- 2022
28. A Novel Unsupervised Approach to Heterogeneous Feature Selection Based on Fuzzy Mutual Information
- Author
-
Zhong Yuan, Pengfei Zhang, Hongmei Chen, Tianrui Li, and Jihong Wan
- Subjects
Computer science ,business.industry ,Applied Mathematics ,Feature selection ,Pattern recognition ,Fuzzy logic ,ComputingMethodologies_PATTERNRECOGNITION ,Computational Theory and Mathematics ,Artificial Intelligence ,Control and Systems Engineering ,Feature (computer vision) ,Redundancy (engineering) ,Relevance (information retrieval) ,Anomaly detection ,Fuzzy mutual information ,Artificial intelligence ,Cluster analysis ,business - Abstract
Aiming at the problem of effectively selecting relevant features from heterogeneous data without decision labels, a novel feature selection approach is studied based on fuzzy mutual information in fuzzy rough set theory. First, the fuzzy relevance of each feature is defined using fuzzy mutual information, and the fuzzy conditional relevance is then given. Next, the fuzzy redundancy is defined as the difference between the fuzzy relevance and the fuzzy conditional relevance, and an evaluation index of feature importance is obtained using the idea of unsupervised minimum redundancy and maximum relevance. Finally, a fuzzy mutual information-based unsupervised feature selection (FMIUFS) algorithm is designed to select feature sequences. Extensive experiments are conducted on public datasets, comparing against six unsupervised feature selection algorithms; the selected features are evaluated by classification, clustering, and outlier detection methods. Experimental results show that the proposed algorithm can select fewer heterogeneous features while maintaining or improving the performance of learning algorithms.
- Published
- 2022
29. Systematic approach for the design of trigeneration systems based on reliability aspects
- Author
-
CHEMECA (2015 : Melbourne, Vic.), Andiappan, Viknesh, Tan, Raymond R, Aviso, Kathleen B, and Ng, Denny KS
- Published
- 2015
30. Cyberspace Endogenous Safety and Security
- Author
-
Jiangxing Wu
- Subjects
Difficult problem ,Environmental Engineering ,General Computer Science ,Computer science ,Materials Science (miscellaneous) ,General Chemical Engineering ,General Engineering ,Energy Engineering and Power Technology ,Computer security ,computer.software_genre ,Systems architecture ,Redundancy (engineering) ,ComputingMilieux_COMPUTERSANDSOCIETY ,Architecture ,Cyberspace ,computer ,Communication channel ,Coding (social sciences) - Abstract
Uncertain security threats caused by vulnerabilities and backdoors are the most serious and difficult problem in cyberspace. This paper analyzes the philosophical and technical causes of the existence of so-called “dark functions” such as system vulnerabilities and backdoors, and points out that endogenous security problems cannot be completely eliminated at the theoretical and engineering levels; rather, it is necessary to develop or utilize the endogenous security functions of the system architecture itself. In addition, this paper gives a definition for and lists the main technical characteristics of endogenous safety and security in cyberspace, introduces endogenous security mechanisms and characteristics based on dynamic heterogeneous redundancy (DHR) architecture, and describes the theoretical implications of a coding channel based on DHR.
- Published
- 2022
31. Multibank Optimized Redundancy Analysis Using Efficient Fault Collection
- Author
-
Sungho Kang, Hayoung Lee, Donghyun Han, and Hogyeong Kim
- Subjects
Reduction (complexity) ,Computer science ,Spare part ,Redundancy (engineering) ,Process (computing) ,Faulty cell ,Electrical and Electronic Engineering ,Performance improvement ,Repair rate ,Fault (power engineering) ,Computer Graphics and Computer-Aided Design ,Software ,Reliability engineering - Abstract
With technological advancements, the density and capacity of memory are rapidly increasing. As the number of memory cells grows, the difficulty of fault analysis and the number of faults also increase; hence, yield and test cost have become essential issues in memory manufacturing. Many manufacturers use redundancy analysis (RA) to improve memory yield and decrease test cost. However, most conventional RA methods require a lengthy analysis time to find a repair solution, and it is difficult to obtain an optimal repair rate with conventional RA algorithms. Although several algorithms using various spare structures have been proposed to improve performance, the improvements have not been groundbreaking. In this paper, a new multi-bank optimized redundancy analysis (MORA) algorithm is proposed. It achieves a very high repair rate and a drastic reduction in analysis time compared with conventional RA algorithms using various spare structures. During testing, the proposed algorithm stores faulty-cell information efficiently, so the analysis time can be shortened by a pre-solution pass over the proposed fault storage spaces before the repair analysis. Additionally, the proposed spare structures are used to increase the repair rate. The experimental results reveal that the proposed algorithm achieves a very high repair rate at a faster speed than conventional RA algorithms.
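The standard pre-solution step in memory redundancy analysis is the must-repair rule, which the sketch below illustrates. This is textbook RA preprocessing, not MORA itself: the fault format and spare counts are assumptions, and MORA's multi-bank fault storage and search stage are not modeled here.

```python
def must_repair(faults, spare_rows, spare_cols):
    """Must-repair preprocessing for redundancy analysis: a row with more
    faults than the available spare columns can only be fixed by a spare
    row (and symmetrically for columns). Returns the forced repairs and
    the faults left for the search stage."""
    rows, cols = {}, {}
    for r, c in faults:
        rows.setdefault(r, set()).add(c)
        cols.setdefault(c, set()).add(r)
    forced_rows = {r for r, cs in rows.items() if len(cs) > spare_cols}
    forced_cols = {c for c, rs in cols.items() if len(rs) > spare_rows}
    remaining = [(r, c) for r, c in faults
                 if r not in forced_rows and c not in forced_cols]
    return forced_rows, forced_cols, remaining
```

Forced repairs shrink the search space before the (NP-complete) spare-allocation search, which is the same motivation MORA's fault-collection structures serve.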
- Published
- 2022
32. SAVE: Efficient Privacy-Preserving Location-Based Service Bundle Authentication in Self-Organizing Vehicular Social Networks
- Author
-
Xiaolei Dong, Tianhui Zhou, Ying Chen, Kim-Kwang Raymond Choo, Zhenfu Cao, and Jun Zhou
- Subjects
Authentication ,Revocation ,business.industry ,Computer science ,Mechanical Engineering ,Key distribution ,Computer Science Applications ,Bundle ,Automotive Engineering ,Location-based service ,Redundancy (engineering) ,Hash chain ,Key (cryptography) ,business ,Computer network - Abstract
Self-organizing vehicular social networks underpin many location-based services (LBS), such as those that collect and share environmental information (e.g., traffic and weather conditions) among vehicular users and the infrastructure. There are, however, security and privacy considerations in sharing such information, and one popular approach is to design lightweight authentication solutions for LBS. Existing approaches may suffer from limitations such as significant computational and/or storage overheads, latency, and time delays, and are consequently impractical for resource-constrained on-board units. In this paper, we propose an efficient privacy-preserving LBS bundle authentication scheme (hereafter referred to as SAVE) based on secure redundancy filtering in self-organizing vehicular social networks. First, an enhanced self-healing key distribution protocol with distributed revocation is proposed to reduce the communication cost of retransmitting lost key material and to resist free-riding attacks, enhancing authentication efficiency. Based on it, a generalized version of online/offline aggregate signatures is proposed to achieve batch LBS bundle verification from an arbitrary one-way function with the multiplicative homomorphism property. Finally, an efficient zero-knowledge range proof based on a lightweight one-way hash chain is designed to decide the redundancy of LBS bundles without disclosing vehicular users' location privacy. A formal security proof and extensive simulation results demonstrate that SAVE achieves identity privacy, two levels of location privacy, and practicality.
- Published
- 2022
33. A Novel Coding Scheme for Large-Scale Point Cloud Sequences Based on Clustering and Registration
- Author
-
Shing Shin Cheng, Weixun Zuo, Xuebin Sun, Yuxiang Sun, and Ming Liu
- Subjects
Lossless compression ,Control and Systems Engineering ,Computer science ,Distortion ,Point cloud ,Redundancy (engineering) ,Electrical and Electronic Engineering ,Lossy compression ,Cluster analysis ,Residual ,Algorithm ,Volume (compression) - Abstract
Due to the huge volume of point cloud data, storing and transmitting it is currently difficult and expensive in autonomous driving. Drawing on the high-efficiency video coding (HEVC) framework, we propose a novel compression scheme for large-scale point cloud sequences, in which several techniques are developed to remove spatial and temporal redundancy. The proposed strategy consists of three main parts: intra-coding, inter-coding, and residual data coding. For intra-coding, inspired by the depth modeling modes (DMMs) in 3-D HEVC (3-D-HEVC), a cluster-based prediction method is proposed to remove spatial redundancy. For inter-coding, a point cloud registration algorithm transforms two adjacent point clouds into the same coordinate system; by computing the residual map of their corresponding depth images, the temporal redundancy can be removed. Finally, the residual data are compressed by either lossless or lossy methods. Our approach can handle multiple types of point cloud data, from simple to complex. The lossless method compresses the point cloud data to 3.63% of its original size with intra-coding and 2.99% with inter-coding, without distance distortion. Experiments on the KITTI dataset also demonstrate that our method outperforms recent well-known methods.
- Published
- 2022
34. Detection and Isolation of Sensor Attacks for Autonomous Vehicles: Framework, Algorithms, and Validation
- Author
-
Danwei Wang, Yuanzhe Wang, Qipeng Liu, Ehsan Mihankhah, and Chen Lv
- Subjects
Extended Kalman filter ,Discriminator ,Computer science ,Robustness (computer science) ,Orientation (computer vision) ,Mechanical Engineering ,Automotive Engineering ,Real-time computing ,Detector ,Redundancy (engineering) ,CUSUM ,Residual ,Computer Science Applications - Abstract
This paper investigates the cyber-security problem for autonomous vehicles under sensor attacks. In particular, a model-based framework is proposed that can detect sensor attacks and identify their sources, enabling secure localization of self-driving vehicles. To ensure robustness against cyber-attacks, sensor redundancy is introduced, that is, multiple sensors are deployed, each providing real-time pose observations of the vehicle. A bank of attack detectors is developed to capture anomalies in each sensor measurement, each detector combining an extended Kalman filter (EKF) with a cumulative sum (CUSUM) discriminator. The EKFs recursively estimate the vehicle position and orientation, while each CUSUM discriminator analyzes the residual generated by its EKF to detect possible deviation of the sensor measurement from the pose expected under the vehicle's mathematical model. To monitor inconsistency among multiple sensor measurements, an auxiliary detector is introduced that fuses observations from multiple sensors. Based on the results of all the detectors, a rule-based isolation scheme identifies the anomalous sensor. The effectiveness of the proposed framework is demonstrated on real vehicle data.
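The CUSUM discriminator applied to an EKF residual stream can be sketched in a few lines. This is the standard two-sided CUSUM recursion, not the paper's tuned detector: the `drift` and `threshold` values are assumptions, and in the paper the residual would be the EKF innovation rather than a raw list.

```python
def cusum(residuals, drift=0.5, threshold=5.0):
    """Two-sided CUSUM over a residual stream: accumulate deviations
    beyond `drift` and raise an alarm when either cumulative sum crosses
    `threshold`. Returns the alarm index, or -1 if no shift is detected."""
    s_pos = s_neg = 0.0
    for k, r in enumerate(residuals):
        s_pos = max(0.0, s_pos + r - drift)
        s_neg = max(0.0, s_neg - r - drift)
        if s_pos > threshold or s_neg > threshold:
            return k
        # both sums decay toward zero while the residual stays small,
        # so brief noise spikes do not trigger the detector
    return -1
```

A persistent bias injected by a sensor attack accumulates across samples and trips the threshold a few steps after onset, while a clean residual stream never alarms.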
- Published
- 2022
35. Lightweight hash-based de-duplication system using the self detection of most repeated patterns as chunks divisors
- Author
-
Loay E. George and Saja Taha Ahmed
- Subjects
General Computer Science ,Computer science ,Hash function ,Parallel computing ,Set (abstract data type) ,MD5 ,Scalability ,Chunking (psychology) ,Redundancy (engineering) ,Overhead (computing) ,Data deduplication - Abstract
Data reduction has gained growing emphasis due to the rapid, unsystematic increase in digital data and has become a sensible approach in big data systems. Data deduplication is a technique for optimizing storage requirements and plays a vital role in eliminating redundancy in large-scale storage. Although it is robust at finding suitable chunk-level breakpoints for redundancy elimination, it faces three key problems: (1) low chunking performance, which makes the chunking stage a bottleneck; (2) large variation in chunk size, which reduces deduplication efficiency; and (3) hash-computing overhead. To handle these challenges, this paper proposes a technique for finding proper cut-points among chunks using a set of commonly repeated patterns (CRP); it picks the most frequent sequences of adjacent bytes (i.e., contiguous segments of bytes) as breakpoints. In addition, a scalable lightweight triple-leveled hashing function (LT-LH) is proposed to mitigate the cost of hash processing and storage overhead; the number of hash levels used in the tests was three, a number that depends on the size of the data to be deduplicated. To evaluate the performance of the proposed technique, a set of tests was conducted to analyze the dataset characteristics and choose a near-optimal length of bytes used as divisors to produce chunks. The performance assessment also determined the system parameter values that lead to an enhanced deduplication ratio and reduce the system resources needed for deduplication. The results demonstrate that the CRP algorithm is 15 times faster than the basic sliding window (BSW) approach and about 10 times faster than two thresholds two divisors (TTTD), and that the proposed LT-LH is five times faster than Secure Hash Algorithm 1 (SHA-1) and Message-Digest Algorithm 5 (MD5), with better storage saving.
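The CRP idea of cutting at the most frequently repeated byte pattern can be sketched as follows. This is a hedged toy, not the paper's implementation: the divisor length, the single-divisor split, and the use of SHA-256 in place of the paper's lightweight LT-LH are all assumptions made to keep the example short.

```python
from collections import Counter
import hashlib

def find_divisor(data, length=2, top=1):
    """Pick the most frequent byte pattern of the given length as the
    chunk divisor, following the CRP idea of cutting at commonly repeated
    patterns instead of rolling-hash breakpoints."""
    counts = Counter(bytes(data[i:i + length])
                     for i in range(len(data) - length + 1))
    return [p for p, _ in counts.most_common(top)]

def chunk_and_dedup(data, divisors):
    """Split data after each divisor occurrence, then deduplicate the
    chunks by content hash (SHA-256 stands in for LT-LH here)."""
    chunks, start, i = [], 0, 0
    dlen = len(divisors[0])
    while i <= len(data) - dlen:
        if bytes(data[i:i + dlen]) in divisors:
            chunks.append(bytes(data[start:i + dlen]))
            start = i + dlen
            i += dlen
        else:
            i += 1
    if start < len(data):
        chunks.append(bytes(data[start:]))
    store = {hashlib.sha256(c).hexdigest(): c for c in chunks}
    return chunks, store
```

Because repeated content produces identical chunks, the hash-keyed store holds only one copy per distinct chunk, which is where the storage saving comes from.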
- Published
- 2022
36. Combinatorial Test Generation for Multiple Input Models With Shared Parameters
- Author
-
Chang Rao, Yu Lei, Raghu N. Kacker, Jin Guo, Nan Li, D. Richard Kuhn, and Y. Zhang
- Subjects
Set (abstract data type) ,Computer science ,Combinatorial testing ,Redundancy (engineering) ,Algorithm ,Multiple input ,Software ,Single test ,Test (assessment) - Abstract
Combinatorial testing typically considers a single input model and creates a single test set that achieves t-way coverage. This paper addresses the problem of combinatorial test generation for multiple input models with shared parameters. We formally define the problem and propose an efficient approach to generating multiple test sets, one for each input model, that together satisfy t-way coverage for all of these input models while minimizing the amount of redundancy between these test sets. We report an experimental evaluation that applies our approach to five real-world applications. The results show that our approach can significantly reduce the amount of redundancy between the test sets generated for multiple input models and perform better than a post-optimization approach.
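The core redundancy-avoidance idea, skipping t-way combinations already covered by a sibling model's test set over shared parameters, can be sketched for t = 2. This is a toy greedy generator under assumed data structures, not the paper's algorithm, and its exhaustive candidate enumeration only suits small models.

```python
from itertools import combinations, product

def pairs_of(test, params):
    """All parameter-value pairs a single test covers."""
    return {((p1, test[p1]), (p2, test[p2]))
            for p1, p2 in combinations(sorted(params), 2)}

def greedy_pairwise(params, domains, covered=frozenset()):
    """Greedy 2-way test generation that skips pairs already covered,
    e.g. by another input model's test set over shared parameters."""
    target = {(a, b)
              for p1, p2 in combinations(sorted(params), 2)
              for a, b in product([(p1, v) for v in domains[p1]],
                                  [(p2, v) for v in domains[p2]])}
    remaining = target - set(covered)
    tests = []
    while remaining:
        # pick the candidate test covering the most uncovered pairs
        best = max((dict(zip(sorted(params), vals))
                    for vals in product(*(domains[p] for p in sorted(params)))),
                   key=lambda t: len(pairs_of(t, params) & remaining))
        tests.append(best)
        remaining -= pairs_of(best, params)
    return tests
```

Passing the pairs covered by a first model's tests as `covered` shrinks the second model's test set, which is exactly the cross-model redundancy reduction the abstract describes.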
- Published
- 2022
37. TherMa-MiCs: Thermal-Aware Scheduling for Fault-Tolerant Mixed-Criticality Systems
- Author
-
Sepideh Safari, Mohsen Ansari, Shaahin Hessabi, Pourya Gohari-Nazari, Heba Khdr, and Jorg Henkel
- Subjects
Mixed criticality ,Multi-core processor ,Process (engineering) ,Computer science ,Quality of service ,Reliability (computer networking) ,Scheduling (production processes) ,Fault tolerance ,Hardware_PERFORMANCEANDRELIABILITY ,Reliability engineering ,Computational Theory and Mathematics ,Hardware and Architecture ,Signal Processing ,Redundancy (engineering) - Abstract
Multicore platforms are becoming the dominant trend in designing Mixed-Criticality Systems (MCSs), which integrate applications of different levels of criticality into the same platform. The availability of multiple cores on a single chip provides opportunities to employ fault-tolerant techniques to ensure the reliability of MCSs. However, applying fault-tolerant techniques increases the power consumption of the chip, and thereby on-chip temperatures might rise beyond safe limits. To prevent thermal emergencies, urgent countermeasures like DVFS or DPM will be triggered to cool down the chip. Such countermeasures, however, might not only lead to suspending low-criticality tasks, but also to violating the timing constraints of high-criticality tasks. It is therefore indispensable to consider a temperature constraint within the scheduling process of fault-tolerant MCSs. To this end, this paper presents, for the first time, a thermal-aware scheduling scheme for fault-tolerant MCSs, named TherMa-MiCs, which satisfies the temperature constraint jointly with the timing constraints of the high-criticality tasks, while attempting to maximize the QoS of low-criticality tasks under the predefined constraints. At the same time, a reliability target is satisfied by employing the well-known N-Modular Redundancy fault-tolerant technique. Experimental results show that our proposed scheme meets the temperature and timing constraints while improving the QoS of low-criticality tasks by an average of 44%.
- Published
- 2022
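N-Modular Redundancy, the fault-tolerance technique named above, runs N replicas of a task and accepts the majority output. A minimal majority voter, for illustration only (not the TherMa-MiCs implementation):

```python
from collections import Counter

def nmr_vote(outputs):
    """Accept the value produced by a strict majority of the N replica outputs;
    raise if no majority exists (the fault cannot be masked)."""
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority: fault not masked")
    return value
```

With N = 3 (triple-modular redundancy), any single faulty replica is outvoted by the two correct ones.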
38. Summarization With Self-Aware Context Selecting Mechanism
- Author
-
Hou Shuai, Wenyu Chen, Yuguo Liu, Hong Qu, and Li Huang
- Subjects
Sequence ,Relation (database) ,Computer science ,business.industry ,Context (language use) ,computer.software_genre ,Automatic summarization ,Semantics ,Computer Science Applications ,Human-Computer Interaction ,Control and Systems Engineering ,Redundancy (engineering) ,Learning ,Relevance (information retrieval) ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Encoder ,computer ,Software ,Natural language processing ,Natural Language Processing ,Information Systems - Abstract
Within natural language processing, representation learning is a foundational problem, especially in sequence-to-sequence tasks where outputs are generated relying entirely on the learned representations of the source sequence. Classic methods generally assume that every word occurring in the source sequence, having more or less influence on the target sequence, should be considered when generating outputs. As the summarization task requires the output sequence to retain only the essence of the input, full consideration of the source sequence may not work well, which calls for methods able to discard misleading noise words. Motivated by this, with both relevance retention and redundancy removal in mind, we propose a summarization learning model that implements an encoder with rich contextual information and a decoder with an integrated selecting mechanism. Specifically, we equip the encoder with an asynchronous bidirectional parallel structure in order to obtain abundant semantic representations. The decoder, unlike classic attention-based works, employs a self-aware context selecting mechanism to generate summaries more productively. We evaluate the proposed methods on three benchmark summarization corpora. The experimental results demonstrate the effectiveness and applicability of the proposed framework relative to several established and state-of-the-art summarization methods.
- Published
- 2022
39. Dynamic Proof of Data Possession and Replication With Tree Sharing and Batch Verification in the Cloud
- Author
-
Hua Zhang, Wei Guo, Wenmin Li, Fei Gao, Su-Juan Qin, Qiao-Yan Wen, and Zhengping Jin
- Subjects
Information Systems and Management ,Computer Networks and Communications ,business.industry ,Computer science ,Distributed computing ,Cloud computing ,Computer security model ,Replication (computing) ,Computer Science Applications ,Tree (data structure) ,Hardware and Architecture ,Server ,Bandwidth (computing) ,Redundancy (engineering) ,business ,Cloud storage - Abstract
Cloud storage attracts a growing number of clients. For high data availability, some clients require their files to be replicated and stored on multiple servers. Because clients are generally charged based on the redundancy level they require, it is critical for them to obtain convincing evidence that all replicas are stored correctly and updated to the latest version. In this paper, we propose a dynamic proof of data possession and replication (DPDPR) scheme, which is proved to be secure in the defined security model. Our scheme shares a single authenticated tree across multiple replicas, which reduces the tree's storage cost significantly. It also allows batch verification for multiple challenged leaves and can verify multiple replicas in a single batch, which considerably saves bandwidth and computation resources during the audit process. We evaluate the DPDPR's performance and compare it with the most closely related scheme. The evaluation results show that our scheme saves almost 66% of the tree's storage cost for three replicas, and obtains almost 60% and 80% efficiency improvements in overall bandwidth and computation costs, respectively, when three replicas are checked and each is challenged with 460 blocks.
- Published
- 2022
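Tree sharing across replicas can be illustrated with a toy Merkle tree in which each leaf commits to every replica's copy of the same block, so a single tree authenticates all replicas. This is a hedged sketch of the sharing idea only, not the DPDPR construction:

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def shared_leaf(replica_copies):
    """One leaf commits to every replica's copy of the same block."""
    return h(b"".join(h(copy) for copy in replica_copies))

def merkle_root(leaves):
    """Fold leaves pairwise up to a single authenticated root."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Corrupting any single replica's copy of any block changes the shared root, so one tree, rather than one tree per replica, suffices for auditing.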
40. A Multi-Criteria Approach for Fast and Robust Representative Selection from Manifolds
- Author
-
George K. Atia, Michael Geo, and Mahlagha Sedghi
- Subjects
Computer science ,Sampling (statistics) ,02 engineering and technology ,computer.software_genre ,Data structure ,01 natural sciences ,Synthetic data ,Computer Science Applications ,Computational Theory and Mathematics ,Robustness (computer science) ,0103 physical sciences ,Outlier ,0202 electrical engineering, electronic engineering, information engineering ,Redundancy (engineering) ,020201 artificial intelligence & image processing ,Data mining ,010306 general physics ,Representation (mathematics) ,computer ,Selection (genetic algorithm) ,Information Systems - Abstract
The problem of representative selection amounts to sampling few informative exemplars from large datasets. Existing approaches to data selection often fall short of simultaneously handling non-linear data structures, sampling concise and non-redundant subsets, rejecting outliers, and yielding interpretable outcomes. This paper presents a novel representative selection approach, dubbed MOSAIC, for drawing descriptive sketches of arbitrary manifold structures. Resting upon a novel quadratic formulation, MOSAIC advances a multi-criteria selection approach that maximizes the global representation power of the sampled subset, ensures novelty of the samples by minimizing redundancy, and rejects disruptive information by effectively detecting outliers. Theoretical analyses shed light on geometrical characterization of the obtained sketch and reveal that the sampled representatives maximize a well-defined notion of data coverage in a transformed space. In addition, we present a highly scalable randomized implementation of the proposed algorithm shown to bring about substantial speedups. MOSAIC's superiority in achieving the desired characteristics of a representative subset all at once while exhibiting remarkable robustness to various outlier types is demonstrated via extensive experiments conducted on both real and synthetic data with comparisons to state-of-the-art algorithms.
- Published
- 2022
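The multi-criteria flavor of the selection (coverage maximization plus redundancy minimization) can be imitated with a simple greedy heuristic. This is only a rough illustration of the objective, not MOSAIC's quadratic formulation; the similarity kernel and weighting are arbitrary choices:

```python
import math

def similarity(a, b):
    """Gaussian kernel on squared Euclidean distance (arbitrary choice)."""
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)))

def select_representatives(points, k, redundancy_weight=0.5):
    """Greedy: reward covering the whole dataset, penalize similarity
    to representatives already chosen."""
    chosen = []
    while len(chosen) < k:
        best, best_score = None, -math.inf
        for i, p in enumerate(points):
            if i in chosen:
                continue
            coverage = sum(similarity(p, q) for q in points)
            redundancy = sum(similarity(p, points[j]) for j in chosen)
            score = coverage - redundancy_weight * len(points) * redundancy
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen
```

On two well-separated clusters, the redundancy penalty pushes the second pick into the cluster not yet represented.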
41. Efficient Communication Scheduling for Parameter Synchronization of DML in Data Center Networks
- Author
-
Weihong Yang, Zukai Jiang, Shuqi Li, and Yang Qin
- Subjects
Multicast ,Computer Networks and Communications ,business.industry ,Computer science ,Node (networking) ,Bandwidth (signal processing) ,Synchronization ,Bottleneck ,Computer Science Applications ,Scheduling (computing) ,Transmission (telecommunications) ,Control and Systems Engineering ,Redundancy (engineering) ,business ,Computer network - Abstract
It is common practice to speed up machine learning (ML) training by distributing it across a cluster of computing nodes. Data-parallel distributed ML (DML) training relieves the pressure on individual computing nodes; however, the communication traffic introduced during parameter synchronization becomes the bottleneck of DML training. We identify two primary causes of this bottleneck: high contention among concurrent communications, and a large volume of redundant transmission in the push and pull stages of parameter synchronization. To address these issues, we propose a novel Group Stale Synchronous Parallel (GSSP) scheme, which divides the nodes into groups and coordinates the groups to synchronize in a circular order. GSSP mitigates network contention and is proven to converge. We provide an analysis of the optimal number of groups based on bandwidth and buffer size. To reduce traffic redundancy, we propose a multicast-based scheme that generates multicast trees by minimizing link overlap and allocates transmission rates to multicast flows by solving a min-max optimization problem. Finally, we conduct extensive simulations to evaluate the performance of our proposals. We simulate parameter transmission for All-Reduce and parameter-server architectures in a Fat-Tree topology with traffic traces of ML models. Simulation results show that our proposals make DML training communication-efficient by mitigating contention and reducing redundancy.
- Published
- 2022
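The group-based synchronization can be sketched as follows: partition the workers into groups and let the groups take turns synchronizing. A toy illustration of the scheduling idea (the round-robin partitioning rule and names are assumptions, not the paper's GSSP algorithm):

```python
def make_groups(nodes, num_groups):
    """Partition workers into groups by round-robin assignment."""
    return [nodes[i::num_groups] for i in range(num_groups)]

def sync_schedule(groups, num_rounds):
    """Each round, exactly one group pushes/pulls parameters, so groups
    never contend with one another for network bandwidth."""
    return [groups[r % len(groups)] for r in range(num_rounds)]
```

Because only one group communicates per round, contention is bounded by the group size rather than the cluster size.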
42. Automatic Text Summarization by Providing Coverage, Non-Redundancy, and Novelty Using Sentence Graph
- Author
-
S. R. Balasundaram and P Krishnaveni
- Subjects
General Computer Science ,business.industry ,Computer science ,Novelty ,Redundancy (engineering) ,Graph (abstract data type) ,Artificial intelligence ,business ,computer.software_genre ,computer ,Automatic summarization ,Natural language processing ,Sentence - Abstract
The day-to-day growth of online information necessitates intensive research in automatic text summarization (ATS). ATS software produces a summary by extracting important information from the original text. With the help of summaries, users can easily read and understand the documents of interest. Most approaches to ATS use only local properties of the text; moreover, the numerous properties make sentence selection difficult and complicated. This article therefore uses graph-based summarization to exploit the structural and global properties of the text. It introduces a maximal-clique-based sentence selection (MCBSS) algorithm to select important, non-redundant sentences that cover all concepts of the input text. The MCBSS algorithm finds novel information using maximal cliques (MCs). Experimental results with Recall-Oriented Understudy for Gisting Evaluation (ROUGE) on the Timeline dataset show that the proposed work outperforms the existing graph algorithms Bushy Path (BP), Aggregate Similarity (AS), and TextRank (TR).
- Published
- 2022
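Maximal cliques in a sentence-similarity graph can be enumerated with the classic Bron-Kerbosch algorithm. The sketch below shows the enumeration step only; it is not the MCBSS algorithm itself, and the adjacency representation is illustrative:

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques.
    `adj` maps each node to its set of neighbors."""
    cliques = []
    def expand(r, p, x):
        if not p and not x:
            cliques.append(r)  # r cannot be extended: it is maximal
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}
    expand(set(), set(adj), set())
    return cliques
```

Each clique is a group of mutually similar sentences; a selection step would then keep one representative per clique to cover all concepts without redundancy.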
43. Safe and High-Performance Control Allocation
- Author
-
Ricardo Decastro
- Subjects
Lyapunov function ,Exploit ,Computer science ,Control (management) ,Degrees of freedom (mechanics) ,Computer Science Applications ,Computer Science::Robotics ,Nonlinear system ,symbols.namesake ,Computer Science::Systems and Control ,Control and Systems Engineering ,Control theory ,symbols ,Redundancy (engineering) ,Transient (oscillation) ,Electrical and Electronic Engineering ,Actuator - Abstract
This paper focuses on the control of nonlinear systems with redundant actuation, where the number of actuators is higher than the number of controllable degrees of freedom. To effectively handle actuation redundancy, we propose a control allocation (CA) approach that combines numerical optimization methods with control Lyapunov and control barrier functions. The inclusion of control Lyapunov functions enhances the CA's ability to minimize performance loss in the presence of unattainable virtual inputs. The integration of barrier functions allows the CA to exploit actuation redundancy to enforce safety constraints. The proposed approach is applied to the control of an over-actuated system, demonstrating superior transient and safety properties when compared to classical CA algorithms.
- Published
- 2022
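For a single virtual input, the unconstrained core of control allocation reduces to a minimum-norm distribution of effort over the actuators. The sketch below shows that closed form, with naive clamping standing in for the barrier-function constraints; a real CA solver would redistribute the clipped effort, which this toy does not:

```python
def allocate(B, v, limits):
    """Minimum-norm allocation u = B^T (B B^T)^-1 v for one virtual input,
    where B is a 1 x m actuator-effectiveness row, then clamp each
    actuator command to its limits (a crude stand-in for barrier constraints)."""
    g = sum(b * b for b in B)          # B B^T, a scalar for a single virtual input
    u = [b * v / g for b in B]
    return [max(lo, min(hi, ui)) for ui, (lo, hi) in zip(u, limits)]
```

With two equally effective actuators, a virtual demand of 2.0 splits evenly; tightening one actuator's limit clips its share, illustrating why optimization-based CA must then re-allocate to the remaining actuators.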
44. Self-Healing of Redundant FLASH ADCs
- Author
-
Hala Youssef Darweesh, Candid Reig, and Gildas Leger
- Subjects
Offset (computer science) ,Comparator ,Computer science ,Transistor ,law.invention ,Flash (photography) ,Hardware and Architecture ,law ,Spare part ,Calibration ,Electronic engineering ,Redundancy (engineering) ,Electrical and Electronic Engineering ,Software ,Selection (genetic algorithm) - Abstract
For the design of high-speed ADCs, the traditional speed-accuracy trade-off can only be resolved at the expense of power consumption. Using fast, small transistors takes full advantage of technology scaling but induces large amounts of random variability. Redundancy has been proposed as a way to cope with variability in FLASH converters and to open up the trade-off: the offsets of redundant comparators are measured, and only the best candidates are selected and powered up. However, candidate selection is usually carried out in the foreground, so a lot of silicon area is occupied by comparators that will be used only once, during calibration. In this paper we show how such an approach, combined with an on-line selection mechanism, can take advantage of random variations, put these spare comparators to use, and lead to extremely resilient self-healing circuits.
- Published
- 2022
45. Beyond Redundancy : How Geographic Redundancy Can Improve Service Availability and Reliability of Computer-Based Systems
- Author
-
Eric Bauer, Randee Adams, Daniel Eustace, Eric Bauer, Randee Adams, and Daniel Eustace
- Subjects
- Computer input-output equipment--Reliability, Computer networks--Reliability, Redundancy (Engineering)
- Abstract
While geographic redundancy can obviously be a huge benefit for disaster recovery, it is far less obvious what benefit is feasible and likely for more typical non-catastrophic hardware, software, and human failures. Georedundancy and Service Availability provides both a theoretical and practical treatment of the feasible and likely benefits of geographic redundancy for both service availability and service reliability. The text provides network/system planners, IS/IT operations folks, system architects, system engineers, developers, testers, and other industry practitioners with a general discussion about the capital expense/operating expense tradeoff that frames system redundancy and georedundancy.
- Published
- 2012
46. Assessing flooding impact to riverine bridges: an integrated analysis
- Author
-
Dakota Mascarenas, Maria Pregnolato, Paul D. Bates, Andrew O. Winter, Andrew D. Sen, and Michael R. Motley
- Subjects
021110 strategic, defence & security studies ,Flood myth ,Transport network ,0211 other engineering and technologies ,02 engineering and technology ,Flooding (computer networking) ,Goods and services ,Proof of concept ,Natural hazard ,Redundancy (engineering) ,Environmental science ,General Earth and Planetary Sciences ,Environmental planning ,Bank erosion - Abstract
Flood events are the most frequent cause of infrastructure damage among natural hazards, and global changes (climate, socioeconomic, technological) are likely to increase this damage. Transportation infrastructure systems are responsible for moving people, goods and services, and for ensuring connections within and among urban areas. A failed link in these systems can impact the community by threatening evacuation capability, recovery operations and the overall economy. Bridges are critical links in the wider urban system since they have little redundancy and a high (re)construction cost. Riverine bridges are particularly prone to failure during flood events; in fact, the risks to bridges from high river flows and erosion have been recognized as crucial at a global level. The interaction of flow, structure and network is complex and not fully understood. This study aims to establish a rigorous, multiphysics modeling approach for assessing the hydrodynamic forces impacting inundated bridges and the subsequent structural response, while understanding the consequences of such impact on the surrounding network. The objectives of this study are to model hydrodynamic forces as demands on the bridge structure, to advance a performance evaluation of the structure under the modeled loading, and to assess the overall impact at the system level. The flood-prone city of Carlisle (UK) is used as a case study and proof of concept. Implications of the hydrodynamic impact on the performance and functionality of the surrounding transport network are discussed. This research will help to fill the gap in current guidance for the design and assessment of bridges within the overall transport system.
- Published
- 2022
47. Cellular Structure-Based Fault-Tolerance TSV Configuration in 3D-IC
- Author
-
Wenhao Sun, Xiaoqing Wen, Song Chen, Qi Xu, and Yi Kang
- Subjects
Through-silicon via ,Heuristic (computer science) ,Computer science ,Reliability (computer networking) ,Three-dimensional integrated circuit ,Fault tolerance ,Hardware_PERFORMANCEANDRELIABILITY ,Topology ,Computer Graphics and Computer-Aided Design ,symbols.namesake ,Lagrangian relaxation ,Hardware_INTEGRATEDCIRCUITS ,Redundancy (engineering) ,symbols ,Overhead (computing) ,Electrical and Electronic Engineering ,Software - Abstract
In three-dimensional integrated circuits (3D-ICs), the through-silicon via (TSV) is a critical technique for providing vertical connections. However, yield is one of the key obstacles to adopting TSV-based 3D-IC technology in industry. Various fault-tolerance structures that use redundant TSVs to repair faulty functional TSVs have been proposed in the literature for yield and reliability enhancement, but TSV repair paths under a delay constraint cannot always be generated due to the lack of appropriate repair algorithms. In this paper, we propose an effective TSV repair strategy for the cellular TSV redundancy architecture that takes the delay overhead into account. First, we prove that the cellular structure-based fault-tolerance TSV configuration with the delay constraint (CSFTC) is equivalent to the length-bounded multi-commodity flow (LBMCF) problem. Next, an integer linear programming formulation is presented to solve the LBMCF problem. Finally, to speed up the fault-tolerance structure configuration process, an efficient Lagrangian-relaxation-based heuristic method is proposed. Experimental results demonstrate that, compared with state-of-the-art fault-tolerance structures, the proposed method provides high yield and low delay overhead.
- Published
- 2022
48. Message-Locked Searchable Encryption: A New Versatile Tool for Secure Cloud Storage
- Author
-
Rongmao Chen, Willy Susilo, Lv Xixiang, Xueqiao Liu, Joseph Tonien, and Guomin Yang
- Subjects
Scheme (programming language) ,Service quality ,Service (systems architecture) ,Information Systems and Management ,Computer Networks and Communications ,business.industry ,Computer science ,Cloud computing ,Encryption ,Computer Science Applications ,Hardware and Architecture ,Redundancy (engineering) ,Data deduplication ,business ,computer ,Cloud storage ,computer.programming_language ,Computer network - Abstract
Message-Locked Encryption (MLE) is a useful tool for enabling deduplication over encrypted data in cloud storage. It can significantly improve cloud service quality by eliminating redundancy to save storage resources, and hence user cost, while also providing defense against different types of attacks, such as duplicate-faking and brute-force attacks. A typical MLE scheme focuses only on deduplication. On the other hand, supporting search operations on stored content is another essential requirement for cloud storage. In this paper, we present a message-locked searchable encryption (MLSE) scheme in a dual-server setting, which simultaneously supports deduplication and enables users to perform search operations over encrypted data. In addition, it supports both multi-keyword and negative-keyword searches. We formulate the security notions of MLSE and prove that our scheme satisfies all the security requirements. Moreover, we provide an interesting extension of our construction to support Proof of Storage (PoS). Compared with existing solutions, MLSE achieves better functionality and efficiency, and hence enables a more versatile and efficient cloud storage service.
- Published
- 2022
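The core MLE idea, deriving the key from the message itself (convergent encryption) so that identical plaintexts produce identical ciphertexts the server can deduplicate, can be sketched as follows. This toy is for illustration only; it omits MLSE's searchability and its dual-server hardening against brute-force attacks:

```python
import hashlib

def _keystream(key, n):
    """Expand a key into n pseudo-random bytes via counter-mode hashing."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def mle_encrypt(message):
    """Convergent encryption: key = H(message), so equal plaintexts yield
    equal ciphertexts and the server can deduplicate without decrypting."""
    key = hashlib.sha256(message).digest()
    ct = bytes(a ^ b for a, b in zip(message, _keystream(key, len(message))))
    tag = hashlib.sha256(key).hexdigest()  # deduplication tag seen by the server
    return tag, ct
```

The server compares tags to detect duplicates while learning nothing beyond equality of plaintexts, which is also why deterministic MLE needs extra defenses for predictable messages.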
49. Distributed Redundant Placement for Microservice-based Applications at the Edge
- Author
-
Shuiguang Deng, Zijie Liu, Jianwei Yin, Hailiang Zhao, and Schahram Dustdar
- Subjects
FOS: Computer and information sciences ,020203 distributed computing ,Information Systems and Management ,Edge device ,Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,Computer Science Applications ,Scheduling (computing) ,Software Engineering (cs.SE) ,Computer Science - Software Engineering ,Computer Science - Distributed, Parallel, and Cluster Computing ,Hardware and Architecture ,High availability ,0202 electrical engineering, electronic engineering, information engineering ,Memory footprint ,Redundancy (engineering) ,Stochastic optimization ,Distributed, Parallel, and Cluster Computing (cs.DC) ,business ,Edge computing - Abstract
Multi-access Edge Computing (MEC) is booming as a promising paradigm that pushes computation and communication resources from the cloud to the network edge in order to provide services and perform computations. With container technologies, mobile devices with a small memory footprint can run composite microservice-based applications without time-consuming transfers over the backbone network. Service placement at the edge is important for putting MEC into practice. However, current state-of-the-art research does not sufficiently take the composite property of services into consideration. Besides, although Kubernetes has certain abilities to heal container failures, high availability cannot be ensured due to the heterogeneity and variability of edge sites. To deal with these problems, we propose a distributed redundant placement framework, SAA-RP, and a GA-based Server Selection (GASS) algorithm for microservice-based applications with a sequential combinatorial structure. We formulate a stochastic optimization problem that accounts for the uncertainty of microservice requests, and then decide, for each microservice, how it should be deployed, with how many instances, and on which edge sites to place them. Benchmark policies are implemented in two scenarios, with and without redundancy allowed. Numerical results based on a real-world dataset verify that GASS significantly outperforms all the benchmark policies.
- Published
- 2022
50. Solar Power Prediction Based on Satellite Measurements – A Graphical Learning Method for Tracking Cloud Motion
- Author
-
Haixiang Zang, Lilin Cheng, Zhinong Wei, Tao Ding, and Guoqiang Sun
- Subjects
Pixel ,Computer science ,business.industry ,Photovoltaic system ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Energy Engineering and Power Technology ,Directed graph ,Electric power system ,Redundancy (engineering) ,Graph (abstract data type) ,Electrical and Electronic Engineering ,business ,Algorithm ,Image resolution ,Solar power - Abstract
The stochastic cloud cover on photovoltaic (PV) panels affects solar power output, producing high instability in the integrated power systems. Tracking cloud motion from satellite images is an effective approach for short-term PV power forecasting. However, since the temporal variations of these images are noisy and non-stationary, pixel-sensitive prediction methods are needed to balance forecast precision against the huge computational burden of large image sizes. Hence, a graphical learning framework is proposed for intra-hour PV power prediction. By simulating cloud motion using bi-directional extrapolation, a directed graph is generated representing the pixel values from multiple frames of historical images. The nodes and edges in the graph denote the shapes and motion directions of the regions of interest (ROIs) in the satellite images. A spatial-temporal graph neural network (GNN) is then proposed to process the graph. Compared with conventional deep-learning models, a GNN is more flexible with respect to varying input sizes and can therefore handle dynamic ROIs. In comparative studies, the proposed method greatly reduces the redundancy of the image inputs without sacrificing the visual scope, and slightly improves prediction accuracy.
- Published
- 2022