7,050 results for "Distributed database"
Search Results
2. BIIoVT: Blockchain-Based Secure Storage Architecture for Intelligent Internet of Vehicular Things
- Author
-
Pradip Kumar Sharma, Yi Pan, Jong Hyuk Park, and Sushil Kumar Singh
- Subjects
Vehicular ad hoc network, Distributed database, Emerging technologies, Computer science, Cloud computing, Computer security, Computer Science Applications, Distributed hash table, Human-Computer Interaction, Hardware and Architecture, Smart city, The Internet, Electrical and Electronic Engineering, Information exchange - Abstract
Today, the rapid growth of vehicles connected to the Internet enables various services for consumers, including traffic management, traffic safety, and entertainment. The Vehicular Ad-Hoc Network (VANET) is one of the most prominent and emerging technologies in the Internet of Vehicular Things (IoVT). This technology helps fulfill requirements such as robust information exchange and infotainment among vehicles in the smart city environment. Still, it faces challenges such as centralization, storage, security, and privacy, because all city vehicular networks send vehicle- and road-related data directly to the cloud. This article proposes BIIoVT, a Blockchain-based Secure Storage Architecture for the Intelligent Internet of Vehicular Things (IIoVT), to mitigate the above-mentioned issues. Blockchain provides security and privacy at each city's vehicular network and decentralized storage at the cloud layer with a Distributed Hash Table (DHT). The article also examines how the vehicular network offers a secure platform. Validation results show that the proposed architecture achieves an outstanding balance of secure storage and efficiency for the IoVT compared to existing methods.
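The storage layer here rests on a Distributed Hash Table. As a rough, generic illustration of how a DHT maps records to storage nodes (not the authors' BIIoVT implementation; the node names, record key, and the choice of SHA-256 consistent hashing are all assumptions):

```python
import hashlib
from bisect import bisect_right

class ConsistentHashDHT:
    """Minimal consistent-hashing ring: maps record keys to storage nodes.

    Illustrative sketch only; real DHTs (e.g., Kademlia) add routing,
    replication, and failure handling on top of this mapping.
    """
    def __init__(self, nodes):
        # Place each node on a hash ring, sorted by its hash position.
        self.ring = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(key):
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # The first node clockwise from the key's hash owns the record.
        hashes = [h for h, _ in self.ring]
        i = bisect_right(hashes, self._h(key)) % len(self.ring)
        return self.ring[i][1]

dht = ConsistentHashDHT(["edge-node-1", "edge-node-2", "cloud-node-1"])
print(dht.node_for("vehicle:42:trajectory"))  # hypothetical record key
```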
- Published
- 2022
3. Joint Data Collection and Resource Allocation for Distributed Machine Learning at the Edge
- Author
-
Min Chen, Jianchun Liu, Yang Xu, He Huang, Haichuan Wang, Zeyu Meng, and Hongli Xu
- Subjects
Distributed database, Edge device, Computer Networks and Communications, Computer science, Quality of service, Approximation algorithm, Machine learning, Data modeling, Resource allocation, Artificial intelligence, Enhanced Data Rates for GSM Evolution, Electrical and Electronic Engineering, Software, Edge computing - Abstract
Under the paradigm of Edge Computing, the enormous data generated at the network edge can be processed locally. To make full use of these widely distributed data, we focus on an edge computing system that conducts distributed machine learning using gradient-descent-based approaches. To ensure the system's performance, there are two major challenges: how to collect data from multiple data source nodes for training jobs, and how to allocate the limited resources on each edge server among these jobs. In this paper, we jointly consider the two challenges for distributed training (without service requirements), aiming to maximize the system throughput while ensuring the system's quality of service (QoS). Specifically, we formulate the joint problem as a mixed-integer non-linear program, which is NP-hard, and propose an efficient approximation algorithm. Furthermore, we take service placement into consideration for diverse training jobs and propose an approximation algorithm for that setting as well. We also show that our proposed algorithm achieves a constant bipartite approximation under many practical situations. We build a testbed to evaluate the effectiveness of our proposed algorithms in a practical scenario. Extensive simulation and testbed results show that the proposed algorithms can improve the system throughput by 56%-69% compared with conventional algorithms.
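The paper's approximation algorithm tackles an NP-hard mixed-integer program; the sketch below is only a toy greedy stand-in for that idea, packing training jobs onto edge servers by throughput-per-resource density. The job and server fields are invented for illustration:

```python
def greedy_allocate(jobs, capacity):
    """Greedy approximation: sort jobs by throughput per unit of demanded
    resource and pack them onto edge servers while capacity lasts.

    jobs: list of (job_id, throughput, resource_demand)
    capacity: dict server -> remaining resource units
    Returns a job -> server assignment (jobs that do not fit are dropped).
    """
    assignment = {}
    for job_id, thr, demand in sorted(jobs, key=lambda j: j[1] / j[2],
                                      reverse=True):
        # Best-fit-decreasing flavor: try the emptiest server first.
        server = max(capacity, key=capacity.get)
        if capacity[server] >= demand:
            assignment[job_id] = server
            capacity[server] -= demand
    return assignment

jobs = [("j1", 10.0, 4), ("j2", 6.0, 1), ("j3", 8.0, 5)]
print(greedy_allocate(jobs, {"edge-A": 6, "edge-B": 4}))
```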
- Published
- 2022
4. Adaptive Federated Learning on Non-IID Data With Resource Constraint
- Author
-
Rajendra Akerkar, Song Guo, Deze Zeng, Jie Zhang, Yufeng Zhan, Zhihao Qu, and Qifeng Liu
- Subjects
Hyperparameter, Independent and identically distributed random variables, Distributed database, Computer science, Machine learning, Theoretical Computer Science, Data modeling, Stochastic gradient descent, Computational Theory and Mathematics, Hardware and Architecture, Server, Key (cryptography), Reinforcement learning, Artificial intelligence, Software - Abstract
Federated learning (FL) has been widely recognized as a promising approach that enables individual end-devices to cooperatively train a global model without exposing their own data. One of the key challenges in FL is the non-independent and identically distributed (Non-IID) data across the clients, which decreases the efficiency of stochastic gradient descent (SGD) based training. Moreover, clients with different data distributions may bias the global model update, resulting in degraded model accuracy. To tackle the Non-IID problem in FL, we aim to optimize the local training process and the global aggregation simultaneously. For local training, we analyze the effect of hyperparameters (e.g., the batch size, the number of local updates) on the training performance of FL. Guided by a toy example and theoretical analysis, we are motivated to mitigate the negative impacts of Non-IID data by selecting a subset of participants and adaptively adjusting their batch sizes. A deep reinforcement learning based approach is proposed to adaptively control the training of local models and the phase of global aggregation. Extensive experiments on different datasets show that our method can improve the model accuracy by up to 30%, compared to state-of-the-art approaches.
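A minimal sketch of the kind of round such a scheme controls: FedAvg-style aggregation where a subset of clients is selected and each runs local SGD with its own batch size. The linear-regression local loss, the random selection (standing in for the paper's RL policy), and all hyperparameters are assumptions:

```python
import numpy as np

def fedavg_round(global_w, clients, select_k, rng=np.random.default_rng(0)):
    """One FedAvg-style round with client selection and per-client batch
    sizes. clients: list of dicts with 'data' = (X, y) and 'batch'.
    """
    chosen = rng.choice(len(clients), size=select_k, replace=False)
    updates, weights = [], []
    for i in chosen:
        X, y = clients[i]["data"]
        w = global_w.copy()
        b = clients[i]["batch"]
        for start in range(0, len(X), b):              # one local epoch
            xb, yb = X[start:start + b], y[start:start + b]
            grad = 2 * xb.T @ (xb @ w - yb) / len(xb)  # linear-regression loss
            w -= 0.01 * grad
        updates.append(w)
        weights.append(len(X))                         # weight by data size
    return np.average(updates, axis=0, weights=np.array(weights, float))

rng = np.random.default_rng(1)
clients = [{"data": (rng.normal(size=(32, 3)), rng.normal(size=32)), "batch": 8}
           for _ in range(5)]
print(fedavg_round(np.zeros(3), clients, select_k=3))
```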
- Published
- 2022
5. Online Pricing and Trading of Private Data in Correlated Queries
- Author
-
Fu Xiao, Jie Li, Hui Cai, Fan Ye, Yuanyuan Yang, and Yanmin Zhu
- Subjects
Distributed database, Computer science, Regret, Computational Theory and Mathematics, Hardware and Architecture, Web browsing history, Signal Processing, Targeted advertising, Differential privacy, Web navigation, Data mining, Dimension (data warehouse), Commoditization - Abstract
With the commoditization of private data, data trading that accounts for user privacy protection has become a fascinating research topic. Trading private web browsing histories brings huge economic value to data consumers when leveraged for targeted advertising, and online pricing of these private data further helps achieve more realistic data trading. In this paper, we study the simultaneous trading and pricing of multiple correlated queries on private web browsing history data. We propose CTRADE, a novel online data CommodiTization fRamework for trAding multiple correlateD queriEs over private data. CTRADE first devises a modified matrix mechanism to perturb query answers. In particular, it quantifies privacy loss under a relaxation of classical differential privacy and a newly devised mechanism with relaxed matrix sensitivity, and further compensates data owners for their diverse privacy losses in a satisfying manner. CTRADE then proposes an ellipsoid-based query pricing mechanism under a given linear market value model, which exploits the geometry of the ellipsoid to explore and exploit a near-optimal dynamic price at each round. In particular, the proposed mechanism incurs a low cumulative regret, quadratic in the dimension of the feature vector and logarithmic in the number of total rounds. Through experiments on real data, our analysis and evaluation results demonstrate that CTRADE balances total error and privacy preferences well within acceptable running time, produces a convergent cumulative regret over rounds, and achieves the desired economic properties of budget balance, individual rationality, and truthfulness.
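CTRADE's perturbation generalizes a textbook building block: answering a query under epsilon-differential privacy with Laplace noise and paying the owner more as the privacy loss grows. The sketch below shows only that building block, not the modified matrix mechanism; the linear compensation rule and prices are assumptions:

```python
import numpy as np

def perturb_and_compensate(true_answer, sensitivity, epsilon, unit_price,
                           rng=np.random.default_rng(0)):
    """Answer a query with the Laplace mechanism (noise scale =
    sensitivity / epsilon) and compensate the data owner in proportion
    to epsilon: a larger epsilon means less noise but more privacy loss.
    """
    noisy = true_answer + rng.laplace(scale=sensitivity / epsilon)
    compensation = unit_price * epsilon
    return noisy, compensation

answer, pay = perturb_and_compensate(true_answer=1234.0, sensitivity=1.0,
                                     epsilon=0.5, unit_price=0.02)
print(f"noisy answer={answer:.1f}, owner compensation=${pay:.4f}")
```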
- Published
- 2022
6. Efficient closed high-utility pattern fusion model in large-scale databases
- Author
-
Youcef Djenouri, Jerry Chun-Wei Lin, and Gautam Srivastava
- Subjects
Fusion, Speedup, Correctness, Database, Distributed database, Computer science, Scale (chemistry), Information fusion, Hardware and Architecture, Signal Processing, Architecture, Completeness (statistics), Software, Information Systems - Abstract
High-Utility Itemset Mining (HUIM) has been a major topic in recent decades, since it reveals profit strategies for industrial decision-making. Most existing works have focused on mining high-utility itemsets from databases, yielding large numbers of patterns; however, exact decisions are still challenging to make from such large amounts of discovered knowledge. Closed high-utility itemset mining (CHUIM) provides a smart way to present concise high-utility itemsets, which can be more effective for making correct decisions. However, none of the existing works have focused on handling large-scale databases so as to integrate knowledge discovered from several distributed databases. In this paper, we first present a large-scale information fusion architecture to integrate closed high-utility patterns discovered from several distributed databases. A generic composite model is used to cluster transactions by their relevant correlation, which ensures the correctness and completeness of the fusion model. The well-known MapReduce framework is then deployed in the developed DFM-Miner algorithm to handle big datasets for information fusion and integration. Experiments compare the approach to the state-of-the-art CHUI-Miner and CLS-Miner algorithms for mining closed high-utility patterns, and the results indicate that the designed model handles large-scale databases well, with lower memory usage. Moreover, the designed MapReduce framework speeds up the mining of closed high-utility patterns in the developed fusion system.
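For intuition about what "utility" means here, the following naive sketch computes itemset utilities (quantity times unit profit) over a toy transaction database and filters by a minimum utility threshold. It enumerates exhaustively and does not enforce closedness or the paper's MapReduce fusion; real miners such as CHUI-Miner prune with utility upper bounds:

```python
from itertools import combinations

# Toy transaction database: item -> (quantity, unit_profit) per transaction.
DB = [
    {"a": (2, 5), "b": (1, 3)},
    {"a": (1, 5), "c": (4, 1)},
    {"b": (2, 3), "c": (2, 1)},
]

def utility(itemset, tx):
    """Utility of an itemset in one transaction, or 0 if any item is absent."""
    if not all(i in tx for i in itemset):
        return 0
    return sum(q * p for i, (q, p) in tx.items() if i in itemset)

def high_utility_itemsets(db, min_util):
    """Naive HUIM by full enumeration (exponential; for illustration only)."""
    items = sorted({i for tx in db for i in tx})
    result = {}
    for r in range(1, len(items) + 1):
        for iset in combinations(items, r):
            u = sum(utility(iset, tx) for tx in db)
            if u >= min_util:
                result[iset] = u
    return result

print(high_utility_itemsets(DB, min_util=12))  # {('a',): 15, ('a','b'): 13}
```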
- Published
- 2021
7. Digital Twin for Federated Analytics Using a Bayesian Approach
- Author
-
Dan Wang, Dawei Chen, Zhu Han, and Yifei Zhu
- Subjects
Information privacy, Distributed database, Computer Networks and Communications, Computer science, Monte Carlo method, Bayesian probability, Markov process, Markov chain Monte Carlo, Computer Science Applications, Data modeling, Hardware and Architecture, Analytics, Signal Processing, Data mining, Information Systems - Abstract
We are now in an information era, and the volume of data is growing explosively. However, due to privacy concerns, it is very common that data cannot be freely shared among the data-generating devices. Federated analytics was recently proposed to derive analytical insights among data-generating devices without exposing the raw data, only the intermediate analytics results. Note that the computing resources at the data-generating devices are limited, which makes on-device execution of computing-intensive tasks challenging. We thus propose to apply the digital twin technique, which emulates the resource-limited physical/end side while utilizing the rich resources at the virtual/computing side. Nevertheless, using the digital twin technique to assist federated analytics while preserving distributed data privacy is challenging. To address this challenge, this work first formulates a problem on digital twin-assisted federated distribution discovery. Then, we propose a federated Markov chain Monte Carlo with delayed rejection (FMCMC-DR) method to estimate the representative parameters of the global distribution. We combine a rejection-acceptance sampling technique and a delayed rejection technique, allowing our method to explore the full state space. Finally, we evaluate FMCMC-DR against the Metropolis-Hastings (MH) algorithm and the random walk Markov chain Monte Carlo method (RW-MCMC) using numerical experiments. The results show our algorithm outperforms the other two methods by 50% and 95% in contour accuracy, respectively, and has a better convergence rate.
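As a single-node illustration of the delayed-rejection building block (not the federated FMCMC-DR protocol itself), a random-walk Metropolis-Hastings sampler with one extra retry stage might look like the following; the proposal scales and the standard-normal target are assumptions:

```python
import numpy as np

def mh_delayed_rejection(log_target, x0, n_steps, scale=1.0,
                         rng=np.random.default_rng(0)):
    """Random-walk MH with one delayed-rejection stage: if the first
    proposal is rejected, try a second, smaller move with the corrected
    two-stage acceptance ratio instead of giving up immediately.
    """
    x, chain = x0, [x0]
    for _ in range(n_steps):
        y1 = x + rng.normal(scale=scale)
        a1 = min(1.0, np.exp(log_target(y1) - log_target(x)))
        if rng.random() < a1:
            x = y1
        else:
            y2 = x + rng.normal(scale=0.5 * scale)     # second-stage retry
            a1_rev = min(1.0, np.exp(log_target(y1) - log_target(y2)))
            # Two-stage DR ratio with the first-stage Gaussian proposal term.
            q_ratio = np.exp(-((y1 - y2) ** 2 - (y1 - x) ** 2)
                             / (2 * scale ** 2))
            num = np.exp(log_target(y2) - log_target(x)) * q_ratio * (1 - a1_rev)
            if rng.random() < min(1.0, num / (1 - a1 + 1e-12)):
                x = y2
        chain.append(x)
    return np.array(chain)

log_normal = lambda x: -0.5 * x ** 2                   # standard normal target
samples = mh_delayed_rejection(log_normal, x0=3.0, n_steps=5000)
print(samples.mean(), samples.std())                   # ~0 and ~1
```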
- Published
- 2021
8. Decentralized Video Input Authentication as an Edge Service for Smart Cities
- Author
-
Ronghua Xu, Yu Chen, and Deeraj Nagothu
- Subjects
Authentication, Service (systems architecture), Situation awareness, Distributed database, Computer science, Frame (networking), Computer security, Computer Science Applications, Human-Computer Interaction, Hardware and Architecture, Disinformation, The Internet, Enhanced Data Rates for GSM Evolution, Electrical and Electronic Engineering - Abstract
Situational awareness is essential for a safe and sustainable urban environment. While the wide deployment of the Internet of Video Things (IoVT) enables efficient monitoring of footage in Smart Cities, it also attracts attackers and abusers. Disinformation injected into the IoVT can mislead city administrations, policy makers, and emergency responders, and lead to disastrous consequences. It is very challenging to authenticate each video stream from pervasively deployed IoVT devices in a timely manner. This article presents a Blockchain Enhanced Video input Authentication (BEVA) scheme as an edge service to fight against visual-layer attacks on smart IoVT systems, such as online false video injection and offline video stream tampering attacks. The BEVA scheme leverages the electrical network frequency (ENF) embedded in video recordings as an environmental fingerprint for online false frame detection. Blockchain is integrated to enable a decentralized security networking infrastructure, and it provides an immutable, traceable, and auditable distributed ledger for the ENF-based authentication scheme without relying on a third-party trust authority. This secure-by-design solution efficiently provides trusted and secure services in the IoVT of modern Smart Cities.
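For intuition about ENF fingerprinting, the sketch below estimates the mains frequency from a 1-D signal by locating the spectral peak near the 60 Hz nominal value. BEVA extracts ENF traces from video frames; the FFT-on-an-audio-like-signal setup here is a simplification, and the sampling parameters are assumptions:

```python
import numpy as np

def estimate_enf(signal, fs, nominal=60.0, band=1.0):
    """Estimate the electrical network frequency (ENF) by finding the
    windowed-FFT peak within +/- band Hz of the nominal mains frequency."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs > nominal - band) & (freqs < nominal + band)
    return freqs[mask][np.argmax(spectrum[mask])]

fs = 1000.0
t = np.arange(0, 8.0, 1.0 / fs)
hum = np.sin(2 * np.pi * 59.5 * t)                 # mains hum drifting low
rng = np.random.default_rng(0)
print(estimate_enf(hum + 0.1 * rng.normal(size=t.size), fs))  # ~59.5
```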
- Published
- 2021
9. SPDS: A Secure and Auditable Private Data Sharing Scheme for Smart Grid Based on Blockchain
- Author
-
Zhou Su, Ning Zhang, Zhenyu Zhou, Xin Sun, Jianfei Chen, Yuntao Wang, and Zhiyuan Ye
- Subjects
Atomicity, Information privacy, Smart contract, Distributed database, Computer science, Computer security, Computer Science Applications, Data sharing, Smart grid, Control and Systems Engineering, Overhead (computing), Electrical and Electronic Engineering, Information Systems, Efficient energy use - Abstract
The exponential growth of data generated by increasing numbers of smart meters and smart appliances brings huge potential for more efficient energy production, pricing, and personalized energy services in smart grids. However, it also causes severe concerns due to improper use of individuals' private data, as well as the lack of transparency and auditability of data usage. To bridge this gap, in this article we propose a secure and auditable private data sharing (SPDS) scheme under the data-processing-as-a-service mode in the smart grid. Specifically, we first present a novel blockchain-based framework for trust-free private data computation and data usage tracking, where smart contracts specify fine-grained data usage policies (i.e., who can access what kinds of data, for what purposes, at what price) while the distributed ledgers keep an immutable and transparent record of data usage. A trusted execution environment based off-chain smart contract execution mechanism is exploited as well to process confidential user datasets and relieve the computation overhead of blockchain systems. A two-phase atomic delivery protocol is designed to ensure the atomicity of data transactions in computing result release and payment. Furthermore, based on contract theory, optimal contracts are designed under information asymmetry to stimulate users' participation and high-quality data sharing while optimizing the payoff of the energy service provider. Extensive simulation results demonstrate that the proposed SPDS can effectively improve the payoffs of participants compared with conventional schemes.
- Published
- 2021
10. Data Base Management Systems Query Optimization Techniques for Distributed Database Systems
- Author
-
Yashvi Barot
- Subjects
Database, Distributed database, Computer science, Management system, Base (topology), Query optimization - Abstract
The fundamental goal of this thesis is to introduce various models for single as well as multiple query processing in distributed database systems that result in lower query processing cost. One of the significant issues in the design and implementation of Distributed Database Management Systems (DDBMS) is efficient query processing. The goal of distributed query optimization reduces to minimizing the amount of data to be communicated among sites for processing a given query. The problem of query processing in distributed database systems has been studied extensively in the literature. In most algorithms, the qualification of the query contains a sequence of operations. In such cases, when the operations are executed according to their order in the sequence, the result of one operation may be an operand to the next. Since the operations depend on each other, only one operation at one site is executed at any instant, even though the environment is distributed, and the systems at all other sites remain idle for this query. Another model, the Totally Reducible Relation Model (CRK Model), which permits parallelism and processes multiple operations simultaneously at all relevant sites, is presented. It is assumed that the operations are in the form of conjunctions, so every operation can be processed independently. In this model, at any instant, the relations at every relevant site are totally reduced by the corresponding sets of every applicable operation (selections, semijoins, and joins) simultaneously. As a result, every relation is scanned only once to process all applicable operations, reducing I/O cost.
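The semijoin reducer at the heart of such strategies is easy to sketch: ship only the projected join column of the remote relation, reduce the local relation, and only then transfer it for the final join. The toy relations below are invented; this is the classic building block, not the thesis's CRK model:

```python
def semijoin(r_rows, s_rows, attr):
    """R semijoin S: keep only tuples of R whose join attribute appears in S.
    Shipping the projected column of S (small) instead of all of R (large)
    is the classic way to cut inter-site transfer cost."""
    s_keys = {row[attr] for row in s_rows}        # projected column of S
    return [row for row in r_rows if row[attr] in s_keys]

# Site 1 holds EMP, site 2 holds DEPT; the query joins them on 'dept'.
EMP = [{"name": "ann", "dept": 1}, {"name": "bob", "dept": 2},
       {"name": "eve", "dept": 3}]
DEPT = [{"dept": 1, "city": "Oslo"}, {"dept": 3, "city": "Pune"}]

# Ship only DEPT's join column to site 1, reduce EMP locally, then ship
# the (smaller) reduced EMP to site 2 for the final join.
print(semijoin(EMP, DEPT, "dept"))   # 'bob' is filtered out before transfer
```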
- Published
- 2021
11. A Newly Proposed Cloud Computing Based Strengthen Distributed Database System Model and Its Applications in Database Modelling
- Author
-
Shivankur Thapliyal
- Subjects
Database, Distributed database, Computer science, Cloud computing, System model - Abstract
In the modern information age, day-to-day transactions of huge sensitive data sets, on the order of petabytes (2^50 bytes) and yottabytes (2^80 bytes), are increasing drastically and at enormous speed in CLOUD data storage environments. The CLOUD storage environment is one of the most capable and reliable platforms for storing large data sets at both the enterprise and local levels, because it provides online access that lets users fetch or restore their data at any geographical location by logging in with the corresponding credentials. But maintaining and spreading such large data sets becomes very complex with respect to consistency and data security, because maintaining them with full consistency and integrity is a genuinely demanding task. In this paper we propose a distributed database management system for a CLOUD interface that also preserves data security, fully honoring the CIA (Confidentiality, Integrity, Availability/Authenticity) triad of information security. We also improve on the mechanisms of traditional distributed database management systems: to preserve information and enable recovery after failures, traditional systems may store data belonging to the same person at different locations, whereas the newly proposed architecture stores all information and records belonging to the same person in one database, while still allowing the location of the data to change, that is, contents residing in one database may be moved to another database while the security features are preserved. This model is also capable of running traditional distributed database management systems. A detailed description of the model and of the communication infrastructure among different CLOUDs appears in the upcoming sections of this paper. Keywords: Cloud based Distributed Database system model, Distributed system, Distributed Database model of CLOUD, Cloud Distributed Database, CLOUD based database systems
- Published
- 2021
12. Distributed query optimization strategies for cloud environment
- Author
-
Rasha M. badry, Mostafa R. Kaseb, and Samar Sh. Haytamy
- Subjects
Service (systems architecture), Database, Distributed database, Computer science, Computational intelligence, Cloud computing, Query optimization, Query plan, Server, The Internet - Abstract
Cloud computing services are provided over the internet, including computing, servers, storage, databases, networking, software, and many more. Customers obtain services from cloud providers and pay according to their usage. Nowadays, Database as a Service is becoming a more popular offering in cloud computing. Due to faster response and higher reliability, distributed databases are used in the cloud for storing and managing data, and enterprises are transferring their stored data to cloud computing centers. In a distributed database system in a cloud environment, the relations required by a query plan may be stored at numerous sites, which exponentially increases the number of potential equivalent plan alternatives to consider when searching for an optimal Query Execution Plan; exhaustively exploring all potential plans in such a large search space is not computationally feasible. Therefore, to improve performance in the cloud, query optimization mechanisms are paramount for minimizing data processing time and maximizing resource utilization. This paper reviews the different techniques used for query optimization, to provide researchers with a complete view of query processing and its optimization techniques.
- Published
- 2021
13. Integrated Blockchain and Cloud Computing Systems: A Systematic Survey, Solutions, and Challenges
- Author
-
Jinglin Zou, Kkwang Raymond Choo, Huaqun Wang, Neeraj Kumar, Debiao He, and Sherali Zeadally
- Subjects
Service (systems architecture), Blockchain, General Computer Science, Distributed database, Computer science, Service management, Cloud computing, Computer security, Theoretical Computer Science, Security service, Backup, Data integrity - Abstract
Cloud computing is a network model of on-demand access for sharing pools of configurable computing resources. Compared with conventional service architectures, cloud computing introduces new security challenges in secure service management and control, privacy protection, data integrity protection in distributed databases, data backup, and synchronization. Blockchain can be leveraged to address these challenges, partly due to underlying characteristics such as transparency, traceability, decentralization, security, immutability, and automation. We present a comprehensive survey of how blockchain is applied to provide security services in the cloud computing model, and we analyze the research trends of blockchain-related techniques in current cloud computing models. During the review, we also briefly investigate how cloud computing can affect blockchain, especially the performance improvements that cloud computing can provide for the blockchain. Our contributions include the following: (i) summarizing the possible architectures and models of the integration of blockchain and cloud computing and the roles of cloud computing in blockchain; (ii) classifying and discussing recent, relevant works based on different blockchain-based security services in the cloud computing model; (iii) briefly investigating what improvements cloud computing can provide for the blockchain; (iv) introducing the current development status in industry and among major cloud providers in the direction of combining cloud and blockchain; (v) analyzing the main barriers and challenges of integrated blockchain and cloud computing systems; and (vi) providing recommendations for future research and improvement on the integration of blockchain and cloud systems.
- Published
- 2021
14. Data Life Aware Model Updating Strategy for Stream-Based Online Deep Learning
- Author
-
Donglin Yang, Wei Rang, Yu Wang, and Dazhao Cheng
- Subjects
Online model, Distributed database, Computer science, Deep learning, Training (meteorology), Machine learning, Continuous training, Data modeling, Computational Theory and Mathematics, PageRank, Hardware and Architecture, Middleware, Signal Processing, Artificial intelligence - Abstract
Many deep learning applications deployed in dynamic environments change over time, and their training models are supposed to be continuously updated with streaming data to guarantee better descriptions of data trends. However, most state-of-the-art learning frameworks support offline training methods well while omitting online model updating strategies. In this work, we propose and implement iDlaLayer, a thin middleware layer on top of existing training frameworks that streamlines the support and implementation of online deep learning applications. In pursuit of good model quality and fast data incorporation, we design a Data Life Aware model updating strategy (DLA), which builds training data samples according to the contributions of data from different life stages and considers the training cost consumed in model updating. We evaluate iDlaLayer's performance through simulations and experiments based on TensorflowOnSpark with three representative online learning workloads. Our experimental results demonstrate that iDlaLayer reduces the overall elapsed time of ResNet, DeepFM, and PageRank by 11.3, 28.2, and 15.2 percent, respectively, compared to the periodic update strategy. It further achieves an average 20 percent decrease in training cost and about a 5 percent improvement in model quality against the traditional continuous training method.
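A hedged sketch of the "data life aware" intuition: sample the update batch from the stream buffer with probability decaying in data age, so fresh data dominates but old data still contributes. The exponential decay and half-life are assumptions, and DLA additionally weighs training cost, which is omitted here:

```python
import numpy as np

def life_aware_sample(buffer, ages, batch, half_life=100.0,
                      rng=np.random.default_rng(0)):
    """Draw a training batch where a sample's weight halves every
    `half_life` age units.

    buffer: ndarray of samples; ages: ndarray of sample ages.
    """
    weights = np.exp(-np.log(2) * np.asarray(ages) / half_life)
    probs = weights / weights.sum()
    idx = rng.choice(len(buffer), size=batch, replace=False, p=probs)
    return buffer[idx]

stream = np.arange(1000, dtype=float)        # toy samples
ages = np.arange(1000)[::-1]                 # oldest sample has age 999
print(life_aware_sample(stream, ages, batch=8))  # skews toward fresh data
```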
- Published
- 2021
15. Security Issues of Web Services of Large Scale Distributed Database
- Author
-
Manish Jivtode
- Subjects
Distributed database, Scale (ratio), Database, Computer science, Web service - Abstract
Cloud computing is viewed as one of the most promising technologies in computing today. It is a new concept of large-scale distributed computing that provides an open platform for every user on a pay-per-use basis. Cloud computing provides a number of interfaces and APIs for interacting with the services offered to users. With the development of web-service-based distributed applications, the security of data across the various layers of distributed computing has become another important subject. This study describes the security of data accessed in a distributed environment over those various layers.
- Published
- 2021
16. Multicenter Privacy-Preserving Cox Analysis Based on Homomorphic Encryption
- Author
-
Jing-Song Li, Shiqiang Zhu, Yu Tian, Yao Lu, and Tianshu Zhou
- Subjects
Protocol (science), Biomedical Research, Distributed database, Information Dissemination, Computer science, Homomorphic encryption, Encryption, Computer Science Applications, Data modeling, Data sharing, Information sensitivity, Health Information Management, Privacy, Humans, Generalizability theory, Data mining, Electrical and Electronic Engineering, Computer Security, Biotechnology - Abstract
The Cox proportional hazards model is one of the most widely used methods for analyzing survival data. Data from multiple data providers are required to improve the generalizability and confidence of the results of Cox analysis; however, such data sharing may result in leakage of sensitive information, leading to financial fraud, social discrimination or unauthorized data abuse. Some privacy-preserving Cox regression protocols have been proposed in past years, but they lack either security or functionality. In this paper, we propose a privacy-preserving Cox regression protocol for multiple data providers and researchers. The proposed protocol allows researchers to train models on horizontally or vertically partitioned datasets while providing privacy protection for both the sensitive data and the trained models. Our protocol utilizes threshold homomorphic encryption to guarantee security. Experimental results demonstrate that with the proposed protocol, Cox regression model training over 9 variables in a dataset of 113,035 samples takes approximately 44 min, and the trained model is almost the same as that obtained with the original nonsecure Cox regression protocol; therefore, our protocol is a potential candidate for practical real-world applications in multicenter medical research.
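The plaintext objective such a protocol trains is the Cox partial likelihood; a minimal sketch (Breslow form, no tie handling, no encryption) is below. The toy cohort is invented, and the threshold-homomorphic-encryption layer of the paper is deliberately omitted:

```python
import numpy as np

def cox_neg_log_partial_likelihood(beta, X, time, event):
    """Negative log partial likelihood of the Cox proportional hazards
    model (Breslow approximation, no ties handling).

    X: covariates (n, p); time: survival times; event: 1 = event, 0 = censored.
    """
    risk = X @ beta
    order = np.argsort(-time)                  # sort by descending time
    risk, event = risk[order], event[order]
    # Running log-sum-exp gives log of the risk set (all with time >= t_i).
    log_risk_set = np.logaddexp.accumulate(risk)
    return -np.sum(event * (risk - log_risk_set))

rng = np.random.default_rng(0)
X = rng.normal(size=(113, 9))                  # 9 covariates, toy cohort
time = rng.exponential(scale=10.0, size=113)
event = rng.integers(0, 2, size=113).astype(float)
print(cox_neg_log_partial_likelihood(np.zeros(9), X, time, event))
```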
- Published
- 2021
17. Hiding in the Crowd: Federated Data Augmentation for On-Device Learning
- Author
-
Mehdi Bennis, Seong-Lyun Kim, Eunjeong Jeong, Hyesung Kim, Jihong Park, and Seungeun Oh
- Subjects
Information privacy, Contextual image classification, Distributed database, Computer Networks and Communications, Computer science, Intelligent decision support system, Sample (statistics), Data modeling, Artificial Intelligence, Server, Oversampling, Data mining - Abstract
To cope with the lack of on-device machine learning samples, this article presents a distributed data augmentation algorithm, coined federated data augmentation (FAug). In FAug, devices share a tiny fraction of their local data, i.e., seed samples, and collectively train a synthetic sample generator that can augment the local datasets of the devices. To further improve FAug, we introduce a multihop-based seed sample collection method and an oversampling technique that mixes up the collected seed samples. Both approaches benefit from the crowd of devices, by hiding data privacy from preceding hops and by feeding diverse seed samples. In image classification tasks, simulations demonstrate that the proposed FAug frameworks yield stronger privacy guarantees, lower communication latency, and higher on-device ML accuracy.
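A minimal sketch of the oversampling-by-mixing step (mixup-style interpolation of collected seed samples); the generator training and multihop collection of FAug are omitted, and the Beta mixing coefficient is an assumption:

```python
import numpy as np

def mixup_oversample(seeds, labels, n_new, alpha=0.4,
                     rng=np.random.default_rng(0)):
    """Generate synthetic samples by convexly mixing random pairs of seed
    samples and their (one-hot) labels."""
    out_x, out_y = [], []
    for _ in range(n_new):
        i, j = rng.integers(0, len(seeds), size=2)
        lam = rng.beta(alpha, alpha)
        out_x.append(lam * seeds[i] + (1 - lam) * seeds[j])
        out_y.append(lam * labels[i] + (1 - lam) * labels[j])
    return np.stack(out_x), np.stack(out_y)

seeds = np.random.default_rng(1).normal(size=(10, 28 * 28))  # toy images
labels = np.eye(10)                                          # one-hot labels
x_aug, y_aug = mixup_oversample(seeds, labels, n_new=32)
print(x_aug.shape, y_aug.shape)                              # (32, 784) (32, 10)
```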
- Published
- 2021
18. A Novel Regression Prediction Method for Electronic Nose Based on Broad Learning System
- Author
-
Yu Wang, Hao Cui, Xiaoyan Peng, and Pengfei Jia
- Subjects
Data processing, Electronic nose, Distributed database, Computer science, Random mapping, Least squares, Regression, Support vector machine, Linear regression, Data mining, Electrical and Electronic Engineering, Instrumentation - Abstract
When an electronic nose (E-nose) is used to predict the concentration of a mixed gas, traditional regression prediction algorithms may lead to unsatisfactory prediction results and long training times. To improve the accuracy of regression prediction and reduce the training time, we propose a regression prediction algorithm based on the broad learning system (BLS) to predict the concentration of mixed gas. To further improve the accuracy of the model's predictions, we optimize the various parameters of the model. We then change the model's initial random mapping weight assignment method to further improve its data processing ability; the improved model is called GBLS. In the data processing experiments, we use a mixture of methane and ethylene as the test gas for the proposed GBLS model. We compare GBLS with other existing methods, including back-propagation neural networks (BPNN), least squares support vector machines (LSSVM), the extreme learning machine (ELM), and linear regression (LR). Experimental results show that the proposed GBLS outperforms the other methods.
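For readers unfamiliar with broad learning systems, the sketch below shows the generic BLS recipe: random linear feature nodes, nonlinear enhancement nodes, and a closed-form ridge-regression readout with no backpropagation. It is not the tuned GBLS of the paper; the layer sizes, tanh activation, regularization, and toy data are assumptions:

```python
import numpy as np

def bls_fit_predict(X_tr, y_tr, X_te, n_feat=20, n_enh=40, reg=1e-2,
                    rng=np.random.default_rng(0)):
    """Minimal broad learning system regressor."""
    d = X_tr.shape[1]
    Wf = rng.normal(size=(d, n_feat))            # feature-node weights
    We = rng.normal(size=(n_feat, n_enh))        # enhancement-node weights

    def expand(X):
        Z = X @ Wf                               # feature nodes (linear)
        H = np.tanh(Z @ We)                      # enhancement nodes
        return np.hstack([Z, H])

    A = expand(X_tr)
    # Ridge readout in closed form: W = (A^T A + reg*I)^-1 A^T y
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y_tr)
    return expand(X_te) @ W

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))                    # toy e-nose sensor readings
y = X[:, 0] * 2 - np.sin(X[:, 1])                # toy gas concentration
pred = bls_fit_predict(X[:150], y[:150], X[150:])
print(np.mean((pred - y[150:]) ** 2))            # test MSE
```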
- Published
- 2021
19. Cross-Layer Distributed Control Strategy for Cyber Resilient Microgrids
- Author
-
Mohammad Shahidehpour, Xuan Liu, Quan Zhou, Abdullah Abusorrah, Ahmed Alabdulwahab, and Liang Che
- Subjects
Lyapunov function, General Computer Science, Distributed database, Computer science, Denial-of-service attack, Computer security, Decentralised system, Telecommunications network, Microgrid, Isolation (database systems), Resilience (network) - Abstract
The widespread adoption of communication and control infrastructures will not only improve microgrid system performance in normal conditions but also increase microgrid cybersecurity risks. Potential cyberattacks can degrade microgrid performance by corrupting and intercepting data exchanges among participating DERs, whereby microgrids deviate from desired operating conditions and stable microgrid operation is jeopardized. In this paper, a cross-layer control strategy is proposed to enhance microgrid resilience against false data injection (FDI) and denial of service (DoS) attacks. On the one hand, the proposed control strategy does not interfere with normal microgrid operations when there are no cyberattacks. On the other hand, it can effectively mitigate the impacts of FDI and DoS attacks on microgrids without relying on prompt detection and isolation of cyberattacks. The stability of the proposed control strategy is demonstrated using Lyapunov theory under different scenarios, both without and with FDI and DoS attacks. The effectiveness of the proposed cross-layer resilient control strategy against cyberattacks is validated in a 12-bus microgrid system using time-domain PSCAD/EMTDC simulations.
- Published
- 2021
20. Industrial Big Data Modeling and Monitoring Framework for Plant-Wide Processes
- Author
-
Le Yao and Zhiqiang Ge
- Subjects
Distributed database, Computer science, Reliability (computer networking), Big data, Process (computing), Probabilistic logic, Fault detection and isolation, Computer Science Applications, Data modeling, Control and Systems Engineering, Data mining, Electrical and Electronic Engineering, Latent variable model, Information Systems - Abstract
This article proposes a distributed parallel modeling and monitoring framework for plant-wide processes with big data. Here, "distributed" carries two layers of meaning. One is the spatially distributed modeling and hierarchical monitoring of a plant-wide process with multiple operating units. The other is the distributed parallel modeling of big process data with various features. Under this framework, a distributed parallel mixture probabilistic latent variable model is proposed, based on the stochastic variational inference algorithm and the parameter server architecture, to cope with big process data. The model is then used to develop plant-wide hierarchical and distributed process monitoring algorithms, in which multilevel monitoring indexes and fault contribution indexes are established based on a Bayesian fusion algorithm for process fault detection and diagnosis. Performance comparison and visualization on an industrial plant-wide process case have demonstrated the reliability and superiority of the proposed algorithm and framework.
- Published
- 2021
21. Hierarchical fuzzy neural networks with privacy preservation for heterogeneous big data
- Author
-
Yu-Cheng Chang, Ye Shi, Leijie Zhang, and Chin-Teng Lin
- Subjects
Information privacy, Computer science, Big data, Machine learning, Artificial Intelligence, Code (cryptography), Hierarchy, Distributed database, Artificial neural network, Applied Mathematics, Backpropagation, Computational Theory and Mathematics, Control and Systems Engineering, Scalability, Artificial intelligence - Abstract
Heterogeneous big data poses many challenges in machine learning. Its enormous scale, high dimensionality, and inherent uncertainty make almost every aspect of machine learning difficult, from providing enough processing power to maintaining model accuracy to protecting privacy. Perhaps the most imposing problem, however, is that big data is often interspersed with sensitive personal data. Hence, we propose a privacy-preserving hierarchical fuzzy neural network to address these technical challenges while also alleviating privacy concerns. The network is trained with a two-stage optimization algorithm: the parameters at the low levels of the hierarchy are learned with a scheme based on the well-known alternating direction method of multipliers (ADMM), which does not reveal local data to other agents, while coordination at the high levels of the hierarchy is handled by the alternating optimization method, which converges very quickly. The entire training procedure is scalable and fast and does not suffer from the gradient vanishing problems of backpropagation-based methods. Comprehensive simulations on both regression and classification tasks demonstrate the effectiveness of the proposed model. Our code is available online.
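The low-level ADMM idea can be illustrated on a much simpler model: consensus ADMM for ridge regression across agents, where only local weight vectors are exchanged and raw data stays private. This is a sketch of the idea, not the paper's fuzzy-network training; rho, lambda, and the iteration count are assumptions:

```python
import numpy as np

def consensus_admm_ridge(datasets, dim, rho=1.0, lam=0.1, iters=50):
    """Consensus ADMM: each agent solves a local least-squares problem on
    its private (X, y); only x_k and the dual u_k are shared, never data.
    """
    K = len(datasets)
    x = [np.zeros(dim) for _ in range(K)]        # local models
    u = [np.zeros(dim) for _ in range(K)]        # scaled dual variables
    z = np.zeros(dim)                            # global consensus model
    for _ in range(iters):
        for k, (X, y) in enumerate(datasets):    # local (private) updates
            A = X.T @ X + rho * np.eye(dim)
            x[k] = np.linalg.solve(A, X.T @ y + rho * (z - u[k]))
        total = sum(xk + uk for xk, uk in zip(x, u))
        z = rho * total / (lam + K * rho)        # ridge-shrunk consensus
        for k in range(K):                       # dual ascent
            u[k] += x[k] - z
    return z

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
def make_agent():
    X = rng.normal(size=(40, 3))
    return X, X @ w_true + 0.1 * rng.normal(size=40)
print(consensus_admm_ridge([make_agent() for _ in range(3)], dim=3))
```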
- Published
- 2022
22. Hybrid fragmentation of medical images’ attributes for multidimensional indexing
- Author
-
Ali Asghar Safaei
- Subjects
Distributed database, Computer Networks and Communications, Computer science, Search engine indexing, Market fragmentation, Set (abstract data type), DICOM, Index (publishing), Application domain, Pairwise comparison, Data mining, Software - Abstract
Medical images are growing dramatically in both quantity and application in the data era, contributing to the emerging Big Data problem. Searching for and finding the proper medical image among such a huge number of medical images is not possible without a proper medical search engine. Indexing is the back-end process in information retrieval systems in which documents are annotated with index entries so that they can be retrieved more accurately and efficiently. Most indexing techniques for medical images are content-based, which makes them more complex and time-consuming than text-based ones. In this paper, a text-based medical image indexing technique is proposed that takes medical images' attributes, fragments them with the hybrid fragmentation approach used in distributed database design, and re-forms each attribute fragment into a hierarchy, constructing a multidimensional index. The hybrid fragmentation approach applies both horizontal and vertical fragmentation to the medical images' attributes provided in the headers of standard medical image formats (such as DICOM). Horizontal fragmentation uses the values of image attributes (i.e., it depends on image content and properties), whilst vertical fragmentation uses the pairwise affinity and correlation of the attributes in the application domain (i.e., it is application dependent). The proposed hybrid-fragmentation-based indexing of medical images thus considers both image properties and application statistics together to provide better functionality. As the experimental performance evaluation results illustrate, the proposed multidimensional indexing provides better retrieval precision than a single index or a set of multiple indexes, since it considers the semantic relationships of the medical images' attributes via hybrid (horizontal and vertical) fragmentation. Moreover, the hybrid-fragmentation-based indexing also outperforms the vertical-fragmentation-based multidimensional medical image indexing technique in terms of precision, recall, and response time.
- Published
- 2021
23. Optimization of correlate subquery based on distributed database
- Author
-
Tianze Pang, Chenyu Zhang, Wenjie Liu, and Yantao Yue
- Subjects
Distributed database, Computer science, rule-based optimization, General Engineering, correlate subquery optimization, Data mining - Abstract
Subqueries are widely used in databases. A subquery can be classified as correlated or non-correlated according to whether it depends on a table of its parent query. For a correlated subquery, a tuple must be taken from the parent query before the subquery can be executed, that is, the content of the subquery is evaluated repeatedly. The disk access cost of this strategy is very large, and in a distributed database the data communication overhead makes per-tuple evaluation of the subquery even less efficient. Therefore, for this class of subqueries, building on existing query optimization strategies and the characteristics of distributed databases, we propose a query optimization strategy for distributed databases based on converting subqueries into join queries, eliminating redundant clauses in subqueries, and eliminating aggregate functions. The effectiveness of the proposed optimization strategy is verified by experiment.
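The core rewrite, converting a correlated subquery into a join, can be demonstrated end-to-end with sqlite3. The table and data are invented; a distributed engine gains far more than this single-node demo, since the join avoids re-running the inner query once per outer tuple across sites:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp(name TEXT, dept INTEGER, salary REAL);
    INSERT INTO emp VALUES ('ann',1,90),('bob',1,60),('eve',2,80),('joe',2,70);
""")

# Correlated form: the inner AVG is conceptually re-evaluated per outer row.
correlated = """
    SELECT name FROM emp e
    WHERE salary > (SELECT AVG(salary) FROM emp WHERE dept = e.dept)
"""

# Decorrelated form: compute the aggregate once per group, then join.
decorrelated = """
    SELECT e.name
    FROM emp e
    JOIN (SELECT dept, AVG(salary) AS avg_sal FROM emp GROUP BY dept) d
      ON e.dept = d.dept
    WHERE e.salary > d.avg_sal
"""

# Both return employees paid above their department's average.
print(sorted(con.execute(correlated)))    # [('ann',), ('eve',)]
print(sorted(con.execute(decorrelated)))  # same result, evaluated as a join
```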
- Published
- 2021
24. Distributed Deep Learning for Remote Sensing Data Interpretation
- Author
-
Javier Plaza, Juan M. Haut, Juan-Antonio Rico-Gallego, Sergio Moreno-Álvarez, Mercedes E. Paoletti, and Antonio Plaza
- Subjects
Earth observation, Data processing, Source code, Distributed database, Computer science, Big data, Cloud computing, Data cube, Grid computing, Electrical and Electronic Engineering, Remote sensing - Abstract
As a newly emerging technology, deep learning (DL) is a very promising field in big data applications. Remote sensing often involves huge data volumes obtained daily by numerous in-orbit satellites. This makes it a perfect target area for data-driven applications. Nowadays, technological advances in terms of software and hardware have a noticeable impact on Earth observation applications, more specifically in remote sensing techniques and procedures, allowing for the acquisition of data sets with greater quality at higher acquisition ratios. This results in the collection of huge amounts of remotely sensed data, characterized by their large spatial resolution (in terms of the number of pixels per scene), and very high spectral dimensionality, with hundreds or even thousands of spectral bands. As a result, remote sensing instruments on spaceborne and airborne platforms are now generating data cubes with extremely high dimensionality, imposing several restrictions in terms of both processing runtimes and storage capacity. In this article, we provide a comprehensive review of the state of the art in DL for remote sensing data interpretation, analyzing the strengths and weaknesses of the most widely used techniques in the literature, as well as an exhaustive description of their parallel and distributed implementations (with a particular focus on those conducted using cloud computing systems). We also provide quantitative results, offering an assessment of a DL technique in a specific case study (source code available: https://github.com/mhaut/cloud-dnn-HSI ). This article concludes with some remarks and hints about future challenges in the application of DL techniques to distributed remote sensing data interpretation problems. We emphasize the role of the cloud in providing a powerful architecture that is now able to manage vast amounts of remotely sensed data due to its implementation simplicity, low cost, and high efficiency compared to other parallel and distributed architectures, such as grid computing or dedicated clusters.
- Published
- 2021
25. A Novel Adaptive Gradient Compression Scheme: Reducing the Communication Overhead for Distributed Deep Learning in the Internet of Things
- Author
-
Jianqiang Li, Peng Luo, Victor C. M. Leung, Jianyong Chen, and F. Richard Yu
- Subjects
Scheme (programming language), Distributed database, Computer Networks and Communications, Computer science, Node (networking), Deep learning, Computer Science Applications, Transmission (telecommunications), Computer engineering, Hardware and Architecture, Compression (functional analysis), Signal Processing, Convergence (routing), Artificial intelligence, Edge computing, Information Systems - Abstract
Distributed deep learning deployed in an edge computing environment is a promising approach for extracting accurate information from raw sensor data in the Internet of Things (IoT). But distributed training suffers from heavy communication overheads between a master node and multiple compute nodes due to the frequent transmission of gradients, which limits the training efficiency of distributed deep learning. In this article, we propose a novel algorithm named ProbComp-LPAC (ProbComp: probability compression; LPAC: layer parameters adaptive compression), which can reduce the communication overhead and improve the training efficiency of distributed deep learning. ProbComp-LPAC adopts a probability equation to select the gradients and uses different compression rates in different layers of deep neural networks. Compared with other methods, such as adaptive compression (AdaComp) and lazily aggregated quantized compression (LAQ), ProbComp-LPAC is not only faster in training speed but also higher in test accuracy.
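A hedged sketch in the spirit of magnitude-driven probabilistic gradient selection with per-layer rates (the paper's exact probability equation and rate schedule are not reproduced here, so both are assumptions): each gradient entry is kept with probability proportional to its magnitude, scaled to meet the layer's target rate, and rescaled to keep the estimate unbiased:

```python
import numpy as np

def prob_compress(grads_by_layer, rates, rng=np.random.default_rng(0)):
    """Sparsify gradients layer by layer before transmission.

    grads_by_layer: list of ndarrays (one per layer);
    rates: target fraction of entries to keep per layer.
    """
    out = []
    for g, rate in zip(grads_by_layer, rates):
        mag = np.abs(g)
        # Keep probability grows with magnitude; expected keep ratio ~ rate.
        p = np.minimum(1.0, rate * g.size * mag / (mag.sum() + 1e-12))
        mask = rng.random(g.shape) < p
        # Divide kept entries by p so the sparse gradient stays unbiased.
        out.append(np.where(mask, g / np.maximum(p, 1e-12), 0.0))
    return out

rng = np.random.default_rng(1)
grads = [rng.normal(size=(64, 32)), rng.normal(size=(10,))]
compressed = prob_compress(grads, rates=[0.01, 0.5])  # denser rate for the
print([float((c != 0).mean()) for c in compressed])   # small output layer
```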
- Published
- 2021
26. A Pre-Large Weighted-Fusion System of Sensed High-Utility Patterns
- Author
-
Gautam Srivastava, Matin Pirouz, Jerry Chun-Wei Lin, Yuanfa Li, and Unil Yun
- Subjects
Focus (computing), Distributed database, Computer science, Sensor fusion, Commingling, Set (abstract data type), Software, Knowledge integration, Data mining, Electrical and Electronic Engineering, Instrumentation, Data integration - Abstract
Within the current transportation infrastructure, we have seen a steady increase in the use of sensor technologies. These sensors individually produce large amounts of data that then need to be fused and understood. Data commingling and data integration are difficult tasks when such data is processed centrally, which can require costly hardware and software techniques. Over the past few years, high-utility pattern mining (HUPM) has gained popularity due to its growing capability to identify useful information and knowledge in stored database data, as compared to traditional frequent pattern mining. Existing works on HUPM mostly focus on mining the set of HUPs from one data source, which cannot be implemented in real-world multi-source scenarios. In this paper, we present a pre-large weighted high-utility pattern (PWHUP) fusion framework for integrating HUPs from different sensed data sources. The proposed PWHUP algorithm considers the size of each data source to discover more relevant HUPs for integration, which is more applicable to real-life applications and scenarios in transportation as well as other data fusion settings. Moreover, the pre-large concept is applied to maintain the suggested patterns for later integration, which greatly improves the effectiveness of the proposed algorithm. Our in-depth experiments show that the designed approach performs well for knowledge integration and outperforms existing non-integration solutions in precision, recall, and runtime.
- Published
- 2021
27. Embedding Blockchain Technology Into IoT for Security: A Survey
- Author
-
Yang Lu, Li Da Xu, and Ling Li
- Subjects
Blockchain, Distributed database, Computer Networks and Communications, Computer science, Communications system, Encryption, Computer security, Computer Science Applications, Interoperation, Hardware and Architecture, Signal Processing, Key (cryptography), The Internet, Database transaction, Information Systems - Abstract
In recent years, the Internet of Things (IoT) has made great progress. The interconnection between the IoT and the Internet enables real-time information processing and transaction implementation through heterogeneous intelligent devices. But the security, privacy, and reliability of the IoT are key challenges that limit its development. The features of the blockchain, such as decentralization, the consensus mechanism, data encryption, and smart contracts, are suitable for building distributed IoT systems that prevent potential attacks and reduce transaction costs. As a decentralized and transparent database platform, blockchain has the potential to raise IoT security to a higher level. This article systematically analyzes the state of the art of blockchain-based IoT security, paying special attention to the security features, issues, technologies, approaches, and related scenarios in blockchain-embedded IoT. The integration and interoperation of blockchain and the IoT is an important and foreseeable development in computational communication systems.
- Published
- 2021
28. Distributed Data Storage and Fusion for Collective Perception in Resource-Limited Mobile Robot Swarms
- Author
-
Tim Antonelli, Carlo Pinciroli, Daniel Jeswin Nallathambi, and Nathalie Majcherczyk
- Subjects
Control and Optimization, Computer science, Biomedical Engineering, Machine learning, Semantics, Artificial Intelligence, Distributed data store, Distributed database, Mechanical Engineering, Cognitive neuroscience of visual object recognition, Mobile robot, Data structure, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Robot, Computer Vision and Pattern Recognition, Artificial intelligence, Classifier (UML) - Abstract
In this paper, we propose an approach to the distributed storage and fusion of data for collective perception in resource-limited robot swarms. We demonstrate our approach in a distributed semantic classification scenario. We consider a team of mobile robots, in which each robot runs a pre-trained classifier of known accuracy to annotate objects in the environment. We provide two main contributions: (i) a decentralized, shared data structure for efficient storage and retrieval of the semantic annotations, specifically designed for low-resource mobile robots; and (ii) a voting-based, decentralized algorithm to reduce the variance of the calculated annotations in the presence of imperfect classification. We discuss the theory and implementation of both contributions and perform an extensive set of realistic simulated experiments to evaluate the performance of our approach.
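The variance-reducing vote fusion can be sketched as a log-odds-weighted vote, where a robot with a more accurate classifier counts for more. The weighting rule and toy labels below are assumptions, not the paper's exact decentralized algorithm:

```python
import math
from collections import Counter

def fuse_annotations(votes, accuracy):
    """Fuse object labels reported by several robots with a weighted vote:
    each vote counts its robot's classifier log-odds, so reliable robots
    speak louder.

    votes: list of (robot_id, label); accuracy: robot_id -> accuracy in (0, 1).
    """
    score = Counter()
    for robot, label in votes:
        a = accuracy[robot]
        score[label] += math.log(a / (1.0 - a))
    return score.most_common(1)[0][0]

votes = [("r1", "chair"), ("r2", "table"), ("r3", "chair")]
accuracy = {"r1": 0.7, "r2": 0.9, "r3": 0.6}
print(fuse_annotations(votes, accuracy))  # the high-accuracy 'table' vote wins
```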
- Published
- 2021
29. TOSDS: Tenant-centric Object-based Software Defined Storage for Multitenant SaaS Applications
- Author
-
Aditi Sharma and Parmeet Kaur
- Subjects
Computer science, Location, Storage model, Research Article - Computer Engineering and Computer Science, Virtualization, Multitenancy, Distributed database, Bin, Software-defined storage, Multidisciplinary, Database, Software as a service, Unstructured data, Object storage, Computer data storage, Scalability - Abstract
Enormous amounts of unstructured data, such as images, videos, emails, sensor data, and documents of multiple types, are generated daily by varied applications. Apart from the challenges related to collecting or processing this data, its efficient storage is also a significant challenge, since such data does not conform to any predefined storage model. Therefore, any enterprise dealing with huge amounts of unstructured data requires a scalable storage system that can provide data durability and availability at low cost. This paper proposes a tenant-centric approach to developing an object-based software-defined storage system for multitenant SaaS applications. We present TOSDS (Tenant-centric Object-based Software Defined Storage), a system that can efficiently meet the storage requirements of users or tenants with diverse needs who are using a multitenant SaaS application. The experimental verification of TOSDS illustrates its effectiveness in storage utilization as well as tenant isolation.
- Published
- 2021
30. Co-Maintained Database Based on Blockchain for IDSs: A Lifetime Learning Framework
- Author
-
Maode Ma and Junwei Liang
- Subjects
Security analysis, Blockchain, Distributed database, Database, Computer Networks and Communications, Computer science, Wireless ad hoc network, Probabilistic logic, Intrusion detection system, Electrical and Electronic Engineering, Architecture, Classifier (UML) - Abstract
The Intrusion Detection System (IDS) is one of the most important approaches in cyber security for protecting networks against both inner and outer threats. Apart from traditional networks, IDSs have been implemented in various emerging networks, such as mobile networks and Vehicular Ad hoc Networks (VANETs). However, a critical problem for IDSs is that their detection capacity gradually decays as unknown attacks emerge. It is necessary to constantly retrain IDSs with a more extensive database, but security institutes usually lack the motivation to persistently update and maintain such a database for the public. Thus, in this paper, a lifetime learning framework is proposed for IDSs with a blockchain-based database (bc-DB). In the proposed framework, the blockchain-based database is multilaterally maintained by security institutes and universities, using Data Coins (DCoins) as the incentive. In addition, a Lifetime Learning IDS (LL-IDS) is designed as the complement of the bc-DB for common IDS users. The LL-IDS employs the Growing Hierarchical Self-Organizing Map with probabilistic relabeling (GHSOM-pr) as its classifier; its flexible, hierarchical architecture grows to fit the changing bc-DB. Security analysis and simulation experiments show that the proposed lifetime learning framework is both secure and effective in attack detection.
- Published
- 2021
31. Linear Fuzzy Clustering of Distributed Databases Considering Privacy Preservation
- Author
- Akira Notsu, Katsuhiro Honda, Seiki Ubukata, and Kohei Kunisawa
- Subjects
Fuzzy clustering ,Distributed database ,Computer science ,Data mining ,computer.software_genre ,computer - Published
- 2021
32. Requirements Engineering Tools: An Evaluation
- Author
- José Luis Fernández-Alemán, Mohamed Hosni, Christof Ebert, Aurora Vizcaíno, Juan Manuel Carrillo de Gea, and Joaquín Nicolás
- Subjects
Distributed database ,Requirements engineering ,business.industry ,Computer science ,Process (engineering) ,Software as a service ,020207 software engineering ,02 engineering and technology ,Engineering management ,Documentation ,0202 electrical engineering, electronic engineering, information engineering ,business ,Alice (programming language) ,computer ,Software ,computer.programming_language - Abstract
"If you don't know where you are going, any road will get you there." Alice from Alice in Wonderland was told this obvious piece of wisdom when she asked for directions. We all know this wisdom from navigating through the fog of insufficient requirements when working on projects. Clear goals can be achieved; unclear goals are sure to be missed. Requirements engineering (RE) is the disciplined and systematic approach (i.e., "engineering") for elicitation, documentation, analysis, agreement, verification, and management of requirements while considering market, technical, and economic goals. "Disciplined" is about culture, and "systematic" demands process and tools, which is our focus here.
- Published
- 2021
33. Secure Information Fusion using Local Posterior for Distributed Cyber-Physical Systems
- Author
- Xiuming Liu, Jiangchuan Liu, and Edith C.-H. Ngai
- Subjects
Distributed database ,Computer Networks and Communications ,Computer science ,Cyber-physical system ,Probabilistic logic ,Word error rate ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Telecommunications network ,Linear subspace ,Data integrity ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,Data mining ,Electrical and Electronic Engineering ,computer ,Software - Abstract
In modern distributed cyber-physical systems (CPS), information fusion often plays a key role in automated and self-adaptive decision-making processes. However, given the heterogeneous and distributed nature of modern CPSs, it is a great challenge to operate them under compromised data integrity and unreliable communication links. In this paper, we study the distributed state estimation problem under the false data injection attack (FDIA) with probabilistic communication networks. We propose an integrated "detection + fusion" solution, which is based on the Kullback-Leibler divergences (KLDs) between local posteriors and therefore does not require the exchange of raw sensor data. For the FDIA detection step, the KLDs are used to cluster nodes in the probability space and to partition the space into secure and insecure subspaces. By approximating the distribution of the KLDs with a general $\chi^2$ distribution and calculating its tail probability, we provide an analysis of the detection error rate. For the information fusion step, we discuss the potential risk of double counting the shared prior information in the KLD-based consensus formulation method. We show that if the local posteriors are updated from a shared prior, an increased number of neighbouring nodes leads to diminished information gain. To overcome this problem, we propose a near-optimal distributed information fusion solution with properly weighted prior and data likelihood. Finally, we present simulation results for the integrated solution and discuss the impact of network connectivity on the empirical detection error rate and the accuracy of state estimation.
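As a small numerical illustration of the detection step (my own toy setup, not the paper's exact formulation), the following computes the KLD between two Gaussian local posteriors and compares it against a chi-squared tail threshold; the dimension, covariances, and significance level are arbitrary choices:

```python
import numpy as np
from scipy.stats import chi2

def kld_gauss(m0, S0, m1, S1):
    """KL divergence D(N(m0,S0) || N(m1,S1)) for multivariate Gaussians."""
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

k = 2
m_local, S_local = np.array([0.0, 0.0]), np.eye(k)
m_peer,  S_peer  = np.array([4.0, 0.0]), np.eye(k)  # a suspiciously distant posterior

# Under agreement the KLD stays small; flag the peer when the divergence
# lands in the tail of a chi-squared reference (significance level 1%).
threshold = chi2.ppf(0.99, df=k) / 2.0
print(kld_gauss(m_local, S_local, m_peer, S_peer) > threshold)  # True -> insecure subspace
```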
- Published
- 2021
34. CovChain: Blockchain-Enabled Identity Preservation and Anti-Infodemics for COVID-19
- Author
- Pallav Kumar Deb, Sudip Misra, and Anandarup Mukherjee
- Subjects
Immutability ,Blockchain ,Distributed database ,Computer Networks and Communications ,business.industry ,Computer science ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,Computer security ,computer.software_genre ,Encryption ,Upload ,Hardware and Architecture ,Identity preservation ,Server ,0202 electrical engineering, electronic engineering, information engineering ,business ,computer ,Software ,Information Systems - Abstract
The intangible nature of the COVID-19 virus and its changing strains have created panic and fear, leading to increased discrimination against COVID-19 positive patients and their families in society. Moreover, the spread of infodemics creates daunting challenges for peaceful livelihood and for fighting the pandemic. In this article, we propose CovChain, a blockchain-based Internet of Things solution for identity preservation and anti-infodemics. Toward this, we propose storing patient information on a private blockchain, exploiting its immutability to help combat infodemics. We further propose encrypting the data using a distributed attribute-based encryption (d-ABE) scheme to restrict access to information based on clearance level. We also propose reducing the load on the miners by using geographically aware fog nodes. Since blocks in a blockchain cannot be modified in place, we remove the blocks corresponding to recovered patients from the main chain and store their information in a forked CovChain. The cardinality of the two chains allows the cloud servers to maintain a global census. Through system implementations, we demonstrate the feasibility of CovChain on resource-constrained devices, with tolerable delays of 1 s, upload and download rates of 35 kb/s, and CPU (single-core) and memory utilization under 70 percent.
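The mechanism for removing recovered patients from an append-only structure is the interesting part; the toy hash chain below (entirely hypothetical, not CovChain's implementation) rebuilds the main chain without a recovered patient's block and appends that block to a fork, so the two cardinalities still sum to the census:

```python
import hashlib, json

def make_block(prev_hash: str, payload: dict) -> dict:
    body = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return {"prev": prev_hash, "data": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def rebuild_without(chain, patient_id):
    """Re-chain all blocks except the patient's; the moved block joins a fork."""
    main, fork, prev = [], [], "GENESIS"
    for blk in chain:
        if blk["data"].get("patient") == patient_id:
            fork.append(make_block(fork[-1]["hash"] if fork else "FORK", blk["data"]))
        else:
            new = make_block(prev, blk["data"])
            main.append(new)
            prev = new["hash"]
    return main, fork

chain, prev = [], "GENESIS"
for pid in ("p1", "p2", "p3"):
    blk = make_block(prev, {"patient": pid, "status": "positive"})
    chain.append(blk)
    prev = blk["hash"]

main, fork = rebuild_without(chain, "p2")  # p2 has recovered
print(len(main), len(fork))                # 2 1 -> census = |main| + |fork|
```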
- Published
- 2021
35. Modeling and Analysis of Data Trading on Blockchain-Based Market in IoT Networks
- Author
- Israel Leyva-Mayorga, Amari N. Lewis, Lam Duc Nguyen, and Petar Popovski
- Subjects
FOS: Computer and information sciences ,Distributed ledger ,Distributed databases ,Smart Contract ,Smart contract ,narrowband IoT (NB-IoT) ,Computer Networks and Communications ,Computer science ,Internet of Things ,Smart City ,02 engineering and technology ,Computer security ,computer.software_genre ,Data modeling ,Computer Science - Networking and Internet Architecture ,Blockchain ,Smart city ,NB-IoT ,distributed ledger technology (DLT) ,0202 electrical engineering, electronic engineering, information engineering ,Networking and Internet Architecture (cs.NI) ,Data collection ,Downlink ,Distributed database ,Sensors ,Data models ,Distributed Ledger Technology ,020206 networking & telecommunications ,Data Trading ,Benchmarking ,Internet of Things (IoT) ,Computer Science Applications ,Computer Science - Distributed, Parallel, and Cluster Computing ,smart city ,Hardware and Architecture ,Signal Processing ,Data analysis ,Distributed, Parallel, and Cluster Computing (cs.DC) ,Mobile device ,computer ,Information Systems - Abstract
Mobile devices with embedded sensors for data collection and environmental sensing create a basis for a cost-effective approach to data trading. For example, such data can relate to pollution and gas emissions and be used to check compliance with national and international regulations. The current approach to IoT data trading relies on a centralized third-party entity to negotiate between data consumers and data providers, which is inefficient and insecure on a large scale. In comparison, a decentralized approach based on distributed ledger technologies (DLT) enables data trading while ensuring trust, security, and privacy. However, due to a limited understanding of the communication efficiency between sellers and buyers, there is still a significant gap in benchmarking data trading protocols in IoT environments. Motivated by this knowledge gap, we introduce a model for DLT-based IoT data trading over the Narrowband Internet of Things (NB-IoT) system, intended to support massive environmental sensing. We characterize the communication efficiency of three basic DLT-based IoT data trading protocols over NB-IoT connectivity in terms of latency and energy consumption. The model and analyses of these protocols provide a benchmark for IoT data trading applications.
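As a deliberately simplified stand-in for such a benchmark model (all message counts and radio parameters below are invented placeholders, not the paper's measured NB-IoT values), one can accumulate per-message airtime and transmit energy per protocol:

```python
# Toy first-order model: latency = sum of per-message airtimes, energy = power * airtime.
MSG_BITS = 1024          # placeholder payload size per protocol message
UPLINK_BPS = 20_000      # placeholder NB-IoT uplink rate
TX_POWER_W = 0.2         # placeholder transmit power

def cost(n_messages: int):
    airtime = n_messages * MSG_BITS / UPLINK_BPS   # seconds on air
    energy = TX_POWER_W * airtime                  # joules spent transmitting
    return airtime, energy

# Hypothetical message counts for three trading protocols of increasing chattiness.
for name, msgs in [("direct", 2), ("escrowed", 4), ("on-chain settle", 7)]:
    t, e = cost(msgs)
    print(f"{name:15s} latency={t:.3f}s energy={e:.3f}J")
```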
- Published
- 2021
36. Probeware for the Modern Era: IoT Dataflow System Design for Secondary Classrooms
- Author
- Sherry Hsi, Seth Van Doren, and Leslie G. Bondaryk
- Subjects
Multimedia ,Distributed database ,Computer science ,Dataflow ,business.industry ,Interface (computing) ,05 social sciences ,General Engineering ,050301 education ,Dataflow programming ,Cloud computing ,computer.software_genre ,Computer Science Applications ,Education ,Variety (cybernetics) ,ComputingMilieux_COMPUTERSANDEDUCATION ,Systems design ,0501 psychology and cognitive sciences ,The Internet ,business ,0503 education ,computer ,050107 human factors - Abstract
Sensor systems have the potential to make abstract science phenomena concrete for K–12 students. Internet of Things (IoT) sensor systems provide a variety of benefits for modern classrooms, creating the opportunity for global data production, orienting learners to the opportunities and drawbacks of distributed sensor and control systems, and reducing classroom hardware burden by allowing many students to “listen” to the same data stream. To date, few robust IoT classroom systems have emerged, partially due to lack of appropriate curriculum and student-accessible interfaces, and partially due to lack of classroom-compliant server technology. In this article, we present an architecture and sensor kit system that addresses issues of sensor ubiquity, acquisition clarity, data transparency, reliability, and security. The system has a dataflow programming interface to support both science practices and computational data practices, exposing the movement of data through programs and data files. The IoT Dataflow System supports authentic uses of computational tools for data production through this distributed cloud-based system, overcoming a variety of implementation challenges specific to making programs run for arbitrary duration on a variety of sensors. In practice, this system provides a number of unique yet unexplored educational opportunities. Early results show promise for Dataflow as a valuable learning technology from research conducted in a high school classroom.
- Published
- 2021
37. Minimizing Training Time of Distributed Machine Learning by Reducing Data Communication
- Author
- Jie Wu, Yubin Duan, and Ning Wang
- Subjects
020203 distributed computing ,Distributed database ,Computer Networks and Communications ,business.industry ,Computer science ,Heuristic ,Graph partition ,Volume (computing) ,02 engineering and technology ,010501 environmental sciences ,Machine learning ,computer.software_genre ,01 natural sciences ,Partition (database) ,Computer Science Applications ,Data modeling ,Control and Systems Engineering ,Server ,0202 electrical engineering, electronic engineering, information engineering ,Bipartite graph ,Artificial intelligence ,business ,computer ,0105 earth and related environmental sciences - Abstract
Due to the additive property of most machine learning objective functions, training can be distributed to multiple machines. Distributed machine learning is an efficient way to deal with the rapid growth of data volume, at the cost of extra inter-machine communication. One common implementation is the parameter server system, which contains two types of nodes: worker nodes, used for calculating updates, and server nodes, used for maintaining parameters. We observe that inefficient communication between workers and servers may slow down the system. Therefore, we propose a graph partition problem that partitions data among workers and parameters among servers such that the total training time is minimized. The problem is NP-complete. We investigate a two-step heuristic approach that first partitions the data and then partitions the parameters, and we consider the trade-off between partitioning time and the saving in training time. We also adapt a multilevel graph partitioning approach to the bipartite setting. We implement both approaches on an open-source parameter server platform, PS-lite. Experimental results on synthetic and real-world datasets show that both approaches can improve communication efficiency by up to 14 times compared with random partitioning.
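A bare-bones illustration of the two-step idea, using greedy heuristics of my own rather than the paper's algorithms, first balances data samples across workers and then places the most heavily accessed parameters first across servers:

```python
from collections import defaultdict

# Each training sample touches a sparse set of parameter ids.
samples = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}, {0, 2}]
n_workers, n_servers = 2, 2

# Step 1: spread samples across workers to balance computation load.
workers = [[] for _ in range(n_workers)]
for s in samples:
    min(workers, key=len).append(s)

# Step 2: place parameters on servers, heaviest-traffic parameters first,
# always onto the currently least-loaded server (an LPT-style greedy; the
# paper instead solves a bipartite multilevel graph partitioning problem).
demand = defaultdict(lambda: [0] * n_workers)   # param -> per-worker access counts
for w, batch in enumerate(workers):
    for s in batch:
        for p in s:
            demand[p][w] += 1

placement, server_load = {}, [0] * n_servers
for p, counts in sorted(demand.items(), key=lambda kv: -sum(kv[1])):
    srv = min(range(n_servers), key=lambda i: server_load[i])
    placement[p] = srv
    server_load[srv] += sum(counts)

print(placement)     # parameter id -> server id
print(server_load)   # total parameter-access traffic per server
```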
- Published
- 2021
38. Cpds: Enabling Compressed and Private Data Sharing for Industrial Internet of Things Over Blockchain
- Author
- Yumo Li, Youshui Lu, Xiaofeng Chen, Saiyu Qi, and Yuanqing Zheng
- Subjects
Service (systems architecture) ,Blockchain ,Traceability ,Distributed database ,business.industry ,Computer science ,Data management ,020208 electrical & electronic engineering ,Access control ,02 engineering and technology ,Computer security ,computer.software_genre ,Computer Science Applications ,Data sharing ,Empirical research ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,business ,computer ,Information Systems - Abstract
The Internet of Things (IoT) is a promising technology for providing product traceability in industrial systems. By using sensing and networking techniques, an IoT-enabled industrial system lets its participants efficiently track products and record their status during the production process. However, current industrial IoT systems lack a unified product data sharing service, which prevents participants from acquiring trusted traceability of products. Using emerging blockchain technology to build such a service is a promising direction. However, directly storing product data on a blockchain incurs efficiency and privacy issues in data management due to its distributed infrastructure. In response, we propose Cpds, a compressed and private data sharing framework that provides efficient and private management of product data stored on the blockchain. Cpds devises two new mechanisms to store compressed and policy-enforced product data on the blockchain. As a result, multiple industrial participants can efficiently share product data with fine-grained access control in a distributed environment, without relying on a trusted intermediary. We conduct extensive empirical studies and demonstrate the feasibility of Cpds in improving the efficiency and security of product data storage on the blockchain.
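The compress-then-protect pipeline with only a digest anchored on-chain can be sketched roughly as follows (zlib and the third-party cryptography package are stand-ins; Cpds's actual mechanisms, including its fine-grained access-control policies, are more elaborate):

```python
import hashlib
import zlib
from cryptography.fernet import Fernet  # assumed third-party dependency

record = b'{"product": "X-100", "station": "paint", "status": "pass"}' * 10

compressed = zlib.compress(record)       # shrink the payload before sharing
key = Fernet.generate_key()              # in Cpds, key release is policy-governed
token = Fernet(key).encrypt(compressed)  # only authorized parties obtain `key`

# Keep the bulky ciphertext off-chain; anchor only its digest on the blockchain
# so any participant can later verify the integrity of what they receive.
on_chain_digest = hashlib.sha256(token).hexdigest()

received = token                         # e.g. fetched from off-chain storage
assert hashlib.sha256(received).hexdigest() == on_chain_digest
assert zlib.decompress(Fernet(key).decrypt(received)) == record
```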
- Published
- 2021
39. Research and implementation of HTAP for distributed database
- Author
- Ouya Pei, Jintao Gao, Wenjie Liu, and Changhong Jing
- Subjects
Decision support system ,Distributed database ,Database ,Transaction processing ,Relational database ,Computer science ,Online analytical processing ,data analysis ,General Engineering ,htap ,InformationSystems_DATABASEMANAGEMENT ,TL1-4050 ,02 engineering and technology ,computer.software_genre ,Pipeline (software) ,Data warehouse ,olap ,020204 information systems ,distributed database ,0202 electrical engineering, electronic engineering, information engineering ,Online transaction processing ,020201 artificial intelligence & image processing ,computer ,Motor vehicles. Aeronautics. Astronautics - Abstract
Data processing can be roughly divided into two categories: online transaction processing (OLTP) and online analytical processing (OLAP). OLTP is the main application of traditional relational databases and covers basic daily transaction processing, such as bank transaction records. OLAP is the main application of data warehouse systems; it supports more complex data analysis operations, focuses on decision support, and provides intuitive analysis results. As the amount of data processed by enterprises continues to increase, distributed databases have gradually replaced stand-alone databases and become mainstream. However, the business currently supported by distributed databases is mainly OLTP applications, and OLAP implementations are lacking. This paper proposes an HTAP implementation for the distributed database CBase, which adds an OLAP capability to CBase and can readily handle analysis of large amounts of data.
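At its simplest, an HTAP front end must decide whether each statement belongs to the transactional or the analytical engine; the toy router below is hypothetical and far cruder than CBase's implementation, dispatching on telltale analytical constructs:

```python
import re

ANALYTICAL = re.compile(r"\b(GROUP BY|SUM|AVG|COUNT|CUBE|ROLLUP)\b", re.I)

def route(sql: str) -> str:
    """Send aggregate-heavy reads to the OLAP engine, everything else to OLTP."""
    if sql.strip().upper().startswith("SELECT") and ANALYTICAL.search(sql):
        return "OLAP"   # columnar / analytical replica
    return "OLTP"       # row store handling transactions

print(route("INSERT INTO orders VALUES (1, 'widget')"))                 # OLTP
print(route("SELECT region, SUM(amount) FROM orders GROUP BY region"))  # OLAP
```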
- Published
- 2021
40. Learning-Empowered Privacy Preservation in Beyond 5G Edge Intelligence Networks
- Author
- Xiaohu You, Jiamin Li, Dongming Wang, Jun Xu, and Pengcheng Zhu
- Subjects
Service (systems architecture) ,Distributed database ,Computer science ,End user ,Service management ,020206 networking & telecommunications ,02 engineering and technology ,Computer security ,computer.software_genre ,Computer Science Applications ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,Differential privacy ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,computer ,5G - Abstract
Edge intelligence is a key enabler for powerful applications, offering artificial-intelligence-empowered computation and caching services in proximity to end users. However, massive data interaction among edge nodes and distributed information storage in the service process make edge networks vulnerable to privacy threats. Moreover, the upcoming beyond-fifth-generation (B5G) era brings new features to edge services in terms of system complexity and operation mode, which pose significant challenges to user privacy protection. To address these challenges, we first investigate privacy concerns in B5G edge intelligence networks and design privacy-driven application clarification strategies. To meet the different privacy requirements of various edge applications, we propose a learning-empowered privacy-preserving scheme, which adaptively applies data perturbation in a multi-mode differential privacy (DP) approach. We then present a case study that implements the proposed schemes in edge caching service management. Numerical results demonstrate that the schemes efficiently improve edge service utility without loss of privacy.
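The multi-mode DP idea, a different perturbation strength per application class, can be sketched with the classic Laplace mechanism (the modes and epsilon values below are invented for illustration; the paper's scheme chooses and applies perturbation adaptively):

```python
import numpy as np

# Hypothetical privacy budgets: stricter (smaller epsilon) for more sensitive modes.
EPS_BY_MODE = {"location": 0.1, "content_cache": 0.5, "popularity_stats": 2.0}

def laplace_perturb(value: float, sensitivity: float, mode: str) -> float:
    """Classic Laplace mechanism: noise scale = sensitivity / epsilon."""
    eps = EPS_BY_MODE[mode]
    return value + np.random.laplace(loc=0.0, scale=sensitivity / eps)

true_request_count = 120.0
for mode in EPS_BY_MODE:
    print(mode, round(laplace_perturb(true_request_count, sensitivity=1.0, mode=mode), 2))
```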
- Published
- 2021
41. Blockchain for social memory institutions: Functional value and possibilities
- Subjects
Cryptocurrency ,Copying ,Distributed database ,Knowledge space ,Computer science ,Legal liability ,05 social sciences ,General Medicine ,050905 science studies ,Computer security ,computer.software_genre ,Transparency (behavior) ,0509 other social sciences ,050904 information & library sciences ,Database transaction ,computer ,Hacker - Abstract
Decentralized storage and the resulting permanence of stored data, resilience against hacker attacks, transaction history recording, and complete transparency make blockchain technology attractive well beyond cryptocurrencies and economic transactions. The author reviews the world experience of applying the technology to various activities of social memory institutions and, in particular, individual blockchain-based programs. The technology makes it possible to provide control and insurance for works of art, to protect copyright, to prevent illegal copying, to store digital copies and original works created in the digital environment, and to integrate resources using the key functionality of distributed databases. The possibilities and prospects for Russia are evaluated; the need for a regulatory foundation defining the core functionality and legal liability of blockchain processes is emphasized. The possibility of using the technology to build a single knowledge space as an integrative model of digital museum, archival and library resources is analyzed.
- Published
- 2021
42. Cryptomining Detection in Container Clouds Using System Calls and Explainable Machine Learning
- Author
- Ibrahim M. Elfadel, Rupesh Raj Karn, Hai Huang, Sahil Suneja, and Prabhakar Kudva
- Subjects
Distributed database ,Computer science ,business.industry ,Process (computing) ,Cloud computing ,computer.file_format ,computer.software_genre ,Machine learning ,System administrator ,Computational Theory and Mathematics ,Hardware and Architecture ,Signal Processing ,Container (abstract data type) ,Malware ,Artificial intelligence ,Executable ,business ,computer - Abstract
The use of containers in cloud computing has been steadily increasing. With the emergence of Kubernetes, the management of applications inside containers (or pods) is simplified. Kubernetes allows automated actions like self-healing, scaling, rollbacks, and updates for application management. At the same time, security threats have also evolved, with attacks on pods to perform malicious actions. Among recent malware types, cryptomining has emerged as one of the most serious threats through its hijacking of server resources for cryptocurrency mining. During application deployment and execution in the pod, a cryptomining process, started by a hidden malware executable, can run in the background, so a method to detect malicious cryptomining software running inside Kubernetes pods is needed. One feasible strategy is to use machine learning (ML) to identify and classify pods based on whether or not they contain a running cryptomining process. In addition to such detection, the system administrator will need an explanation of the reason(s) for the ML classification outcome. The explanation will justify and support disruptive administrative decisions such as pod removal or a restart with a new image. In this article, we describe the design and implementation of an ML-based detection system for anomalous pods in a Kubernetes cluster that monitors Linux-kernel system calls (syscalls). Several types of cryptominer images are used as containers within an anomalous pod, and several ML models are built to detect such pods in the presence of numerous healthy cloud workloads. Explainability is provided using SHAP, LIME, and a novel auto-encoding-based scheme for LSTM models. Seven evaluation metrics are used to compare and contrast the explainable models of the proposed ML cryptomining detection engine.
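A minimal version of such a pipeline, syscall-frequency features into a tree model with SHAP explanations, might look like the following (synthetic features and labels; the paper additionally handles LSTM models with its auto-encoding scheme):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import shap  # assumed third-party dependency

rng = np.random.default_rng(0)
syscalls = ["read", "write", "futex", "mmap", "sched_yield", "clock_gettime"]

# Synthetic per-pod syscall frequency vectors: miners busy-spin (futex/yield heavy).
healthy = rng.poisson([50, 40, 5, 10, 2, 5], size=(200, 6))
miner   = rng.poisson([10, 5, 60, 5, 50, 40], size=(200, 6))
X = np.vstack([healthy, miner]).astype(float)
y = np.array([0] * 200 + [1] * 200)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explain one flagged pod: which syscalls pushed the model toward "miner"?
shap_values = shap.TreeExplainer(model).shap_values(X[-1:])
# shap returns per-class arrays in older versions, a stacked array in newer ones.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
print(dict(zip(syscalls, np.round(vals[0], 3))))
```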
- Published
- 2021
43. Differentially Private Publication of Vertically Partitioned Data
- Author
- Peng Tang, Xiang Cheng, Shao Huaxi, Rui Chen, and Sen Su
- Subjects
021110 strategic, defence & security studies ,Information privacy ,Distributed database ,Computer science ,0211 other engineering and technologies ,02 engineering and technology ,Data publishing ,computer.software_genre ,Set (abstract data type) ,Tree (data structure) ,Joint probability distribution ,Differential privacy ,Data mining ,Electrical and Electronic Engineering ,computer ,Decision tree model - Abstract
In this paper, we study the problem of publishing vertically partitioned data under differential privacy, where different attributes of the same set of individuals are held by multiple parties. In this setting, with the assistance of a semi-trusted curator, the involved parties aim to collectively generate an integrated dataset while satisfying differential privacy for each local dataset. Based on the latent tree model (LTM), we present a differentially private latent tree (DPLT) approach, which is, to the best of our knowledge, the first approach to solving this challenging problem. In DPLT, the parties and the curator collaboratively identify the latent tree that best approximates the joint distribution of the integrated dataset, from which a synthetic dataset can be generated. The fundamental advantage of adopting LTM is that we can use the connections between a small number of latent attributes derived from each local dataset to capture the cross-dataset dependencies of the observed attributes in all local datasets such that the joint distribution of the integrated dataset can be learned with little injected noise and low computation and communication costs. DPLT is backed up by a series of novel techniques, including two-phase latent attribute generation (TLAG), tree index based correlation quantification (TICQ) and distributed Laplace perturbation protocol (DLPP). Extensive experiments on real datasets demonstrate that DPLT offers desirable data utility with low computation and communication costs.
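One standard construction for distributed Laplace perturbation, which may differ from the paper's DLPP in detail, exploits the infinite divisibility of the Laplace distribution: each of n parties adds the difference of two Gamma(1/n, b) draws, and the shares sum to a single Laplace(b) noise term:

```python
import numpy as np

rng = np.random.default_rng(0)

def party_noise_share(n_parties: int, scale: float) -> float:
    # Differences of Gamma(shape=1/n, scale=b) draws sum across n parties to Laplace(b).
    return rng.gamma(1.0 / n_parties, scale) - rng.gamma(1.0 / n_parties, scale)

n, b = 5, 1.0 / 0.5            # 5 parties, epsilon = 0.5, sensitivity = 1
true_count = 240.0
noisy = true_count + sum(party_noise_share(n, b) for _ in range(n))
print(noisy)                   # no single party knows the total injected noise

# Sanity check: aggregated shares are distributed as Laplace(b).
agg = [sum(party_noise_share(n, b) for _ in range(n)) for _ in range(100_000)]
print(np.std(agg), (2 * b**2) ** 0.5)   # empirical vs theoretical std b*sqrt(2)
```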
- Published
- 2021
44. When Do We Need the Blockchain?
- Author
- Gautam Das, Deepak Puthal, Elias Kougianos, and Saraju P. Mohanty
- Subjects
Blockchain ,Distributed database ,Security solution ,business.industry ,Computer science ,020206 networking & telecommunications ,Cryptography ,02 engineering and technology ,Computer security ,computer.software_genre ,Computer Science Applications ,Human-Computer Interaction ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,business ,computer ,Simple (philosophy) - Abstract
The questions “When do we use the blockchain?” or “Where do we use the blockchain?” or “Why do we use the blockchain?” look simple, but these questions become complicated due to the increasing utilization of the blockchain. These days, it is being explored as a standard security solution. The blockchain has unique properties, and these are entirely different from other standard security or cryptography solutions. This article gives a summary of the uses of the blockchain, while answering the above three important questions.
- Published
- 2021
45. A Heterogeneous Distributed Computing Model Based on Web Service
- Author
- Haitao Li
- Subjects
Service (systems architecture) ,Distributed database ,Computer science ,General Chemical Engineering ,Distributed computing ,Deadlock ,Heterogeneous database system ,computer.software_genre ,Industrial and Manufacturing Engineering ,Consistency (database systems) ,General Materials Science ,Cache ,Web service ,Database transaction ,computer - Abstract
Building on an in-depth study of existing database synchronization models, and aiming to improve the system's cross-platform capability and to facilitate the construction of information platforms for small and medium-sized enterprises, this paper proposes a heterogeneous distributed computing scheme based on Web services. The scheme uses JMS for message transmission between systems and web service technology for cross-platform data reading and writing. For distributed transaction processing, the two-phase commit protocol is improved to reduce the probability of system deadlock and to ensure the consistency of distributed database data. To improve the performance of the distributed database system, caching is introduced, and a way of integrating the cache with database transaction processing is proposed, which ensures the validity of cached data. The architecture is aimed at application developers, who can build efficient distributed database systems on top of it. Finally, the architecture is applied to the back-end management system of a mobile express-delivery service. The running results show that the architecture can meet the business requirements of synchronizing distributed heterogeneous databases.
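For readers unfamiliar with the baseline being improved, here is a minimal sketch of classic two-phase commit with in-process stubs (not the paper's improved variant, which additionally reduces deadlock probability):

```python
class Participant:
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.committed = name, healthy, False

    def prepare(self) -> bool:
        # Phase 1: vote. A real participant persists a prepare record here.
        return self.healthy

    def commit(self):
        self.committed = True        # Phase 2: make local changes durable

    def rollback(self):
        self.committed = False       # Phase 2 (abort path): undo local work

def two_phase_commit(participants) -> bool:
    if all(p.prepare() for p in participants):   # unanimous YES required
        for p in participants:
            p.commit()
        return True
    for p in participants:                       # any NO aborts everyone
        p.rollback()
    return False

dbs = [Participant("orders"), Participant("inventory", healthy=False)]
print(two_phase_commit(dbs))                     # False: global abort
```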
- Published
- 2021
46. Processing, Management Concepts, and Cloud Computing
- Author
- Nandhini Abirami R, Balamurugan Balusamy, Seifedine Kadry, and Amir H. Gandomi
- Subjects
Distributed database ,Computer science ,business.industry ,Distributed computing ,Big data ,Cloud computing ,Virtualization ,computer.software_genre ,Parallel processing (DSP implementation) ,Scalability ,Systems architecture ,business ,Cloud storage ,computer - Abstract
This chapter deals with the concepts behind the processing of big data, such as parallel processing, distributed data processing, batch processing, and real-time processing. There are two basic types of data processing: centralized and distributed. A shared-everything architecture is a system architecture in which all resources, such as storage, memory, and processors, are shared. A shared-nothing architecture is a distributed system architecture in which multiple systems are interconnected to make the system scalable. The main purpose of virtualization in big data is to provide a single point of access to data aggregated from multiple sources. Big data and cloud computing are two fast-evolving paradigms that are driving a revolution in various fields of computing. Cloud computing infrastructure is broadly classified into three types: public cloud, private cloud, and hybrid cloud. Cloud storage adopts a distributed file system and a distributed database.
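To make the shared-nothing, parallel-processing idea concrete, the illustrative snippet below lets each worker count words over its own data partition with no shared state, merging results afterwards:

```python
from multiprocessing import Pool

def word_count(partition):
    # Each worker sees only its own partition: no shared memory, no locks.
    counts = {}
    for line in partition:
        for w in line.split():
            counts[w] = counts.get(w, 0) + 1
    return counts

if __name__ == "__main__":
    data = ["big data big", "cloud data", "big cloud cloud"]
    partitions = [data[0:1], data[1:2], data[2:3]]      # one partition per worker
    with Pool(3) as pool:
        partials = pool.map(word_count, partitions)     # parallel, shared-nothing
    total = {}
    for part in partials:                               # merge phase
        for w, c in part.items():
            total[w] = total.get(w, 0) + c
    print(total)   # {'big': 3, 'data': 2, 'cloud': 3}
```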
- Published
- 2021
47. Checking causal consistency of distributed databases
- Author
- Ahmed Bouajjani, Rachid Zennou, Mohammed Erradi, Ranadeep Biswas, and Constantin Enea
- Subjects
FOS: Computer and information sciences ,Numerical Analysis ,Theoretical computer science ,Weak consistency ,Distributed database ,Computer science ,Databases (cs.DB) ,020207 software engineering ,Causal consistency ,02 engineering and technology ,Partition (database) ,CAP theorem ,Computer Science Applications ,Theoretical Computer Science ,Datalog ,Computational Mathematics ,Computer Science - Databases ,Computer Science - Distributed, Parallel, and Cluster Computing ,Computational Theory and Mathematics ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Distributed, Parallel, and Cluster Computing (cs.DC) ,computer ,Software ,computer.programming_language - Abstract
The CAP theorem shows that (strong) Consistency, Availability, and Partition tolerance cannot be ensured together. Causal consistency is one of the weak consistency models that can be implemented to ensure availability and partition tolerance in distributed systems. In this work, we propose a tool to automatically check the conformance of distributed/concurrent system executions to causal consistency models. Our approach reduces the problem of checking whether an execution is causally consistent to solving Datalog queries. The reduction is based on complete characterizations of the executions violating causal consistency in terms of the existence of cycles in suitably defined relations between the operations occurring in these executions. We have implemented the reduction in a testing tool for distributed databases and carried out several experiments on real case studies, showing the efficiency of the suggested approach.
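The heart of the characterization is that a violation appears as a cycle in a suitably defined relation over operations; a bare-bones version of that final check (plain DFS over a hand-built relation, standing in for the Datalog reduction) looks like this:

```python
def has_cycle(edges):
    """DFS cycle detection over a relation given as {node: set(successors)}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}

    def visit(n):
        color[n] = GRAY
        for m in edges.get(n, ()):
            if color.get(m, WHITE) == GRAY or (color.get(m, WHITE) == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(edges))

# write(x)=1 happens-before read(x)=1, which happens-before write(x)=2,
# yet write(x)=2 is also ordered before write(x)=1: a causality cycle.
relation = {"w1": {"r1"}, "r1": {"w2"}, "w2": {"w1"}}
print(has_cycle(relation))   # True -> execution is not causally consistent
```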
- Published
- 2021
48. An automatic clustering technique for query plan recommendation
- Author
- Mehdi Hosseinzadeh, Aso Mohammad Darwesh, Nima Jafari Navimipour, Arash Sharifi, and Elham Azhir
- Subjects
DBSCAN ,Information Systems and Management ,Distributed database ,Computer science ,05 social sciences ,050301 education ,Dunn index ,02 engineering and technology ,Reuse ,computer.software_genre ,Query optimization ,Computer Science Applications ,Theoretical Computer Science ,Query plan ,Artificial Intelligence ,Control and Systems Engineering ,Schema (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data mining ,Cluster analysis ,0503 education ,computer ,Software - Abstract
The query optimizer is responsible for identifying the most efficient Query Execution Plans (QEPs). Distributed database relations may be kept in several places, which results in a dramatic increase in the number of alternative query plans. The query optimizer cannot exhaustively explore the alternative query plans in a vast search space at reasonable computational cost. Hence, reusing previously generated plans instead of generating new plans for new queries is an efficient technique for query processing. To improve the accuracy of clustering, we rewrite the queries to standardize their structures. Furthermore, a term-frequency (TF) representation scheme is used to convert the queries into vectors. In this paper, we introduce a multi-objective automatic query plan recommendation method that combines incremental DBSCAN and NSGA-II. The quality of incremental DBSCAN results is influenced by MinPts (minimum points) and Eps (epsilon). Two cluster validity indices, the Dunn index and the Davies-Bouldin index, are optimized simultaneously to measure the goodness of a solution. Comparative results against incremental DBSCAN and K-means are reported with respect to an external cluster validity index, the adjusted Rand index (ARI). Across different types of query workloads, the introduced method outperforms the other well-known approaches.
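A stripped-down version of the pipeline, TF vectors for normalized queries, DBSCAN clustering, and Davies-Bouldin validation, fits in a few lines of scikit-learn (Eps and MinPts are fixed here, whereas the paper tunes them with NSGA-II against the Dunn index as well):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import DBSCAN
from sklearn.metrics import davies_bouldin_score

queries = [
    "select name from users where id = ?",
    "select name from users where email = ?",
    "select sum(total) from orders group by region",
    "select avg(total) from orders group by region",
]

X = CountVectorizer().fit_transform(queries).toarray()    # TF vectors
labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(X)    # fixed parameters for the demo

print(labels)                                             # e.g. [0 0 1 1]
if len(set(labels)) > 1:
    print(davies_bouldin_score(X, labels))                # lower is better
```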
- Published
- 2021
49. Liability within the scope of Cloud Computing services
- Author
- Katarzyna Chałubińska-Jentkiewicz
- Subjects
data protection ,Distributed database ,business.industry ,Computer science ,personal data ,Big data ,lcsh:Law ,Cloud computing ,Internet hosting service ,Service provider ,intellectual property ,Computer security ,computer.software_genre ,digital content ,Electronic mail ,lcsh:Political institutions and public administration (General) ,Information and Communications Technology ,new technology ,lcsh:JF20-2112 ,business ,Cloud storage ,computer ,digital heritage ,lcsh:K - Abstract
Acquiring large amounts of data, creating large digital data sets, and then processing and analyzing them (Big Data) for the purpose of generating artificial intelligence (AI) solutions is one of the key challenges for economic development and national security. Data have become a resource that will determine the power and the geopolitical and geoeconomic position of countries and regions in the 21st century. The model of data storage and processing in distributed databases has changed in recent years. Since hosting appeared among ICT services, we have been dealing with a new type of service, ASP (Application Service Provider): the provision of applications over ICT networks. The ASP guarantees the customer access to a dedicated application running on a server. Cloud Computing, on the other hand, gives many users simultaneously the opportunity to use the resources of a shared infrastructure (Murphy n.d.). The use of the CC model is more effective in many respects. Cloud Computing offers three basic services: data storage in the cloud (cloud storage), applications in the cloud (cloud applications), and computing in the cloud (compute cloud). Website hosting and electronic mail are still the most frequently chosen Cloud Computing services. The article attempts to explain the question of liability for content stored in Cloud Computing.
- Published
- 2021
50. DISTRIBUTED DATABASE MANAGEMENT SYSTEM (DBMS) ARCHITECTURES AND DISTRIBUTED DATA INDEPENDENCE
- Author
- O Orlunwo Placida and A Prince Oghenekaro
- Subjects
Distributed database ,Database ,Computer science ,Management system ,computer.software_genre ,computer ,Data independence - Published
- 2021