301 results
Search Results
2. High-Performance Distributed Computing with Smartphones
- Author
-
Ishikawa, Nadeem, Nomura, Hayato, Yoda, Yuya, Uetsuki, Osamu, Fukunaga, Keisuke, Nagoya, Seiji, Sawara, Junya, Ishihata, Hiroaki, and Senoguchi, Junsuke; Zeinalipour, Demetris, Blanco Heras, Dora, Pallis, George, Herodotou, Herodotos, Trihinas, Demetris, Balouek, Daniel, Diehl, Patrick, Cojean, Terry, Fürlinger, Karl, Kirkeby, Maja Hanne, Nardelli, Matteo, and Di Sanzo, Pierangelo, editors
- Published
- 2024
- Full Text
- View/download PDF
3. PyDaskShift: Automatically Convert Loop-Based Sequential Programs to Distributed Parallel Programs
- Author
-
Islam, Agm and Speegle, Greg; Han, Henry, editor
- Published
- 2024
- Full Text
- View/download PDF
4. Otsu Segmentation and Deep Learning Models for the Detection of Melanoma
- Author
-
Mustafa, Mohammed Ahmed, Allami, Zainab Failh, Arabi, Mohammed Yousif, Abdulhasan, Maki Mahdi, Ghadir, Ghadir Kamil, and Al-Tmimi, Hayder Musaad; Botto-Tobar, Miguel, Zambrano Vizuete, Marcelo, Montes León, Sergio, Torres-Carrión, Pablo, and Durakovic, Benjamin, editors
- Published
- 2024
- Full Text
- View/download PDF
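Result 4 pairs Otsu segmentation with deep learning for melanoma detection. As a reader's aid, the Otsu step can be sketched in a few lines of pure Python: pick the histogram threshold that maximizes between-class variance. The function name and the toy pixel data below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch of Otsu's method: choose the threshold that
# maximizes between-class variance of a grayscale histogram.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = 0      # background (below-threshold) pixel count
    sum_b = 0.0  # background intensity sum
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A bimodal toy "image": dark lesion pixels vs. bright skin pixels.
pixels = [10, 12, 11, 13, 10] * 20 + [200, 210, 205, 198, 202] * 20
t = otsu_threshold(pixels)
mask = [p > t for p in pixels]  # True = bright (foreground) pixels
```

On a bimodal image such as a dark lesion on bright skin, the returned threshold falls in the gap between the two intensity clusters, and thresholding then yields the segmentation mask.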
5. EOCSim: A CloudSim-Based Simulator for Earth Observation Data Processing in Clouds
- Author
-
Lalayan, Arthur, Astsatryan, Hrachya, and Giuliani, Gregory; Lirkov, Ivan and Margenov, Svetozar, editors
- Published
- 2024
- Full Text
- View/download PDF
6. Efficiently Distributed Federated Learning
- Author
-
Mittone, Gianluca, Birke, Robert, and Aldinucci, Marco; Zeinalipour, Demetris, Blanco Heras, Dora, Pallis, George, Herodotou, Herodotos, Trihinas, Demetris, Balouek, Daniel, Diehl, Patrick, Cojean, Terry, Fürlinger, Karl, Kirkeby, Maja Hanne, Nardelli, Matteo, and Di Sanzo, Pierangelo, editors
- Published
- 2024
- Full Text
- View/download PDF
7. Distributed System for Scientific and Engineering Computations with Problem Containerization and Prioritization
- Author
-
Sokolov, Aleksander, Larionov, Andrey, and Mukhtarov, Amir; Vishnevskiy, Vladimir M., Samouylov, Konstantin E., and Kozyrev, Dmitry V., editors
- Published
- 2024
- Full Text
- View/download PDF
8. Scalability of blockchain: a comprehensive review and future research direction.
- Author
-
Rao, Iqra Sadia, Kiah, M. L. Mat, Hameed, M. Muzaffar, and Memon, Zain Anwer
- Subjects
DISTRIBUTED computing, SCIENCE databases, PARALLEL processing, DATA science, PARALLEL programming, BLOCKCHAINS
- Abstract
This comprehensive review paper examines the challenges faced by blockchain technology in terms of scalability and proposes potential solutions and future research directions. Scalability poses a significant hurdle for Bitcoin and Ethereum, manifesting as low throughput, extended transaction delays, and excessive energy consumption, thereby compromising efficiency. The current state of blockchain scalability is analyzed, encompassing the limitations of existing solutions such as Sharding and off-chain scaling. Various proposed remedies, including layer 2 scaling solutions, consensus mechanisms, and alternative approaches, are investigated. The paper also explores the impact of scalability on diverse blockchain applications and identifies potential future research directions by integrating data science techniques with blockchain technology. Notably, nearly 110 primary research papers from reputable scientific databases like Scopus, IEEE Xplore, Science Direct, and Web of Science were reviewed, demonstrating that scalability in blockchain comprises several elements. Transaction throughput and network latency emerge as the most prominent concerns. Consequently, this review offers future research avenues to address scalability challenges by leveraging data science techniques like distributed computing and parallel processing to divide and process vast datasets across multiple machines. The synergy between data science and blockchain holds promise as an optimal solution. Overall, this up-to-date understanding of blockchain scalability is invaluable to researchers, practitioners, and policy makers engaged in this domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. AN OPTIASSIGN-PSO BASED OPTIMISATION FOR MULTI-OBJECTIVE MULTI-LEVEL MULTI-TASK SCHEDULING IN CLOUD COMPUTING ENVIRONMENT.
- Author
-
VENKATA JWALA, GODDANTI N. S. S. L., POOJA, PONAKAMPALLI, SHARON, NEELA SHINY, SULTHANA, SHAIK REASHMA, and SURESH, CHINTALAPUDI V.
- Subjects
PARTICLE swarm optimization, VIRTUAL machine systems, DISTRIBUTED computing, CLOUD computing, FLEXIBLE structures
- Abstract
Cloud computing is a prominent and evolving distributed computing paradigm that provides users with on-demand services through a network of diverse autonomous systems with flexible computational structures. The significance of task scheduling becomes evident, serving as a vital component in elevating cloud computing's overall performance. Streamlining cost-effective execution and optimizing resource utilization are key objectives, given the NP-hard nature of the task scheduling problem. Although numerous meta-heuristic techniques have been explored to address task allocation challenges, ample opportunities remain for the development of optimal strategies. This paper presents a state-of-the-art task assignment model that revolves around OptiAssign particle swarm optimization (PSO), with a strong emphasis on the crucial role played by efficient dependency handling and multi-level task scheduling. The primary aim of this model is to optimize the utilization of virtual machine capacities, simultaneously minimizing execution time, makespan, wait time, and overall execution costs within a variety of distributed computing systems. This novel algorithm showcases outstanding performance when compared to traditional approaches in task scheduling, highlighting the importance of skillful dependency management and the implementation of multi-level task scheduling strategies. The results of this study further affirm the effectiveness of the model in addressing the inherent complexities of scenarios involving intricate task dependencies and diverse scheduling priorities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Anomalous process detection for Internet of Things based on K-Core.
- Author
-
Yue Chang, Teng Hu, Fang Lou, Tao Zeng, Mingyong Yin, Siqi Yang, Shaowei Wang, and Sheng Chen
- Subjects
INTERNET of things, INTRUSION detection systems (Computer security), COMPUTER security, ARTIFICIAL intelligence, DISTRIBUTED computing, OPTIMIZATION algorithms
- Abstract
In recent years, Internet of Things security incidents have occurred frequently, often accompanied by malicious events. Therefore, anomaly detection is an important part of Internet of Things security defense. In this paper, we create a process whitelist based on the K-Core decomposition method for detecting anomalous processes in IoT devices. The method first constructs an IoT process network according to the relationships between processes and IoT devices. Subsequently, it creates a whitelist and detects anomalous processes. Our work innovatively transforms process data into a network framework, employing K-Core analysis to identify core processes that signify high popularity. Then, a threshold-based filtering mechanism is applied to formulate the process whitelist. Experimental results show that the unsupervised method proposed in this paper can accurately detect anomalous processes on real-world datasets. Therefore, we believe our algorithm can be widely applied to anomalous process detection, ultimately enhancing the overall security of the IoT. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
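Result 10 builds its whitelist from the K-Core decomposition of a process network. A minimal pure-Python sketch of the k-core idea (the function name and toy edge list are illustrative assumptions; the paper's process–device graph construction and threshold filtering are not reproduced here): repeatedly delete nodes of degree below k until every remaining node has degree at least k.

```python
# Minimal k-core sketch: nodes surviving the peeling process form the
# k-core, i.e., the maximal subgraph where every node has degree >= k.
def k_core(edges, k):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    changed = True
    while changed:
        changed = False
        for node in list(adj):
            if len(adj[node]) < k:
                # Remove the low-degree node and detach it from neighbors.
                for nbr in adj.pop(node):
                    adj[nbr].discard(node)
                changed = True
    return set(adj)

# Processes co-occurring on many devices form a dense core (whitelist
# candidates); a rarely seen process hangs off the periphery.
edges = [("proc_a", "proc_b"), ("proc_b", "proc_c"),
         ("proc_a", "proc_c"), ("proc_a", "proc_d"),
         ("proc_b", "proc_d"), ("proc_c", "proc_d"),
         ("proc_x", "proc_a")]  # proc_x: low-degree outlier
core = k_core(edges, 2)
```

Here the four densely connected processes survive the 2-core peeling, while the low-degree outlier `proc_x` is stripped away, mirroring the paper's "high popularity implies core" intuition.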
11. Sublinear Algorithms in T-Interval Dynamic Networks.
- Author
-
Jahja, Irvan and Yu, Haifeng
- Subjects
TIME complexity, ALGORITHMS, DISTRIBUTED computing, UNDIRECTED graphs, DETERMINISTIC algorithms, DISTRIBUTED algorithms, SPANNING trees, TOPOLOGY
- Abstract
We consider standard T-interval dynamic networks, under the synchronous timing model and the broadcast CONGEST model. In a T-interval dynamic network, the set of nodes is always fixed and there are no node failures. The edges in the network are always undirected, but the set of edges in the topology may change arbitrarily from round to round, as determined by some adversary and subject to the following constraint: for every T consecutive rounds, the topologies in those rounds must contain a common connected spanning subgraph. Let H_r be the maximum (in terms of number of edges) such subgraph for rounds r through r + T - 1. We define the backbone diameter d of a T-interval dynamic network to be the maximum diameter of all such H_r's, for r ≥ 1. We use n to denote the number of nodes in the network. Within such a context, we consider a range of fundamental distributed computing problems including Count/Max/Median/Sum/LeaderElect/Consensus/ConfirmedFlood. Existing algorithms for these problems all have time complexity of Ω(n) rounds, even for T = ∞ and even when d is as small as O(1). This paper presents a novel approach/framework, based on the idea of massively parallel aggregation. Following this approach, we develop a novel deterministic Count algorithm with O(d^3 log^2 n) complexity, for T-interval dynamic networks with T ≥ c·d^2 log^2 n. Here c is a (sufficiently large) constant independent of d, n, and T. To our knowledge, our algorithm is the very first such algorithm whose complexity does not contain a Θ(n) term. This paper further develops novel algorithms for solving Max/Median/Sum/LeaderElect/Consensus/ConfirmedFlood, while incurring O(d^3 polylog(n)) complexity. Again, for all these problems, our algorithms are the first ones whose time complexity does not contain a Θ(n) term. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Computational Modeling of Ganglion Cell Bicolor Opponent Receptive Fields and FPGA Adaptation for Parallel Arrays.
- Author
-
Wei, Hui and Yao, Wenbo
- Subjects
LATERAL geniculate body, BIOLOGICAL systems, VISUAL pathways, DISTRIBUTED computing, PARALLEL programming
- Abstract
The biological system is not a perfect system, but it is a relatively complete system. It is difficult to realize the lower power consumption and high parallelism that characterize biological systems if lower-level information pathways are ignored. In this paper, we focus on the K, M and P pathways of visual signal processing from the retina to the lateral geniculate nucleus (LGN). We model the visual system at a fine-grained level to ensure efficient information transmission while minimizing energy use. We also implement a circuit-level distributed parallel computing model on FPGAs. The results show that we are able to transfer information with low energy consumption and high parallelism. The Artix-7 family of xc7a200tsbv484-1 FPGAs can reach a maximum frequency of 200 MHz and a maximum parallelism of 600, and a single receptive field model consumes only 0.142 W of power. This can be useful for building assistive vision systems for small and light devices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. The Method and Experiment of Micro-Crack Identification Using OFDR Strain Measurement Technology.
- Author
-
Chen, Bin, Yang, Jun, Zhang, Dezhi, Liu, Wenxiang, Li, Jin, and Zhang, Min
- Subjects
OPTICAL fibers, DISTRIBUTED computing, SPATIAL resolution, MICROCRACKS, FIBERS
- Abstract
The precise evaluation of micro-crack sizes and locations is crucial for the safe operation of structures. Traditional detection techniques, however, suffer from low spatial resolution, making it difficult to accurately locate micrometer-scale cracks. A method and experimental study were proposed in this paper for identifying and locating micro-cracks using optical fiber strain sensing based on OFDR to address this issue. The feasibility of this method for micro-crack detection was verified by the combination of a polyimide-coated sensing optical fiber (PISOF) and tight sheath sensing optical fiber (TSSOF). A calculation method for micro-crack widths based on distributed optical fiber strain curves was established, and the test results of different optical fibers were compared. Through multiple verification experiments, it was found that the strain peak curves of both fiber types could accurately locate micro-cracks with a precision of 1 mm. Additionally, the crack widths could be obtained by processing the distributed strain curves using a computational model, enabling the accurate capture of micro-crack characteristics at the 10 μm level. A strong linear relationship was observed between the optical fiber stretching length and the crack width. Notably, the relative error in calculating the crack width from the strain curve of PI fiber was very small, while a linear relationship existed between the maximum strain value of the TSSOF and the crack width, allowing for the calculation of the crack width based on the maximum strain value. This further validated the feasibility of the method designed in this paper for the analysis of micro-crack characteristic parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. The integration strategy of information system based on artificial intelligence big data technology in metaverse environment.
- Author
-
Lin, Yechuan and Liu, Shixing
- Subjects
SYSTEM integration, INFORMATION technology, ARTIFICIAL intelligence, INFORMATION storage & retrieval systems, SHARED virtual environments
- Abstract
The concept of the meta-universe is still in its early stages, but many leading tech companies have invested heavily in research and development for this technology. The development of meta-smart cities is a significant trend. In the meta-universe environment, integrating information systems is crucial for analyzing AI big data. Establishing an integrated platform for medical information systems is key to advancing information technology. In the context of the meta-universe, creating an efficient and unified integration platform to eliminate medical information silos and reduce system integration costs has become a pressing issue in medical informatization. This paper proposes a medical information system integration method based on an integration platform and utilizing cloud computing technology as a data center. The core business layer uses the integration software "Ensemble" as the integration platform. The underlying data center employs a Hadoop storage cluster with distributed data storage and parallel computing technology, and existing scheduling algorithms are studied and analyzed to enhance the resource scheduling algorithm for small medical file data. The effectiveness of the algorithm is simulated and verified on an experimental platform, demonstrating improved efficiency in resource scheduling. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. A Study of Ethereum's Transition from Proof-of-Work to Proof-of-Stake in Preventing Smart Contracts Criminal Activities.
- Author
-
Hall, Oliver J., Shiaeles, Stavros, and Li, Fudong
- Subjects
CRYPTOCURRENCIES, BLOCKCHAINS, TECHNOLOGICAL innovations, PROGRAMMING languages, DISTRIBUTED computing
- Abstract
With the ever-increasing advancement in blockchain technology, security is a significant concern when substantial investments are involved. This paper explores known smart contract exploits used in previous and current years. The purpose of this research is to provide a point of reference for users interacting with blockchain technology or smart contract developers. The primary research gathered in this paper analyses unique smart contracts deployed on a blockchain by investigating the Solidity code involved and the transactions on the ledger linked to these contracts. A disparity was found in the techniques used in 2021 compared to 2023 after Ethereum moved from a Proof-of-Work blockchain to a Proof-of-Stake one, demonstrating that with the advancement in blockchain technology, there is also a corresponding advancement in the level of effort bad actors exert to steal funds from users. The research concludes that as users become more wary of malicious smart contracts, bad actors continue to develop more sophisticated techniques to defraud users. It is recommended that even though this paper outlines many of the currently used techniques by bad actors, users who continue to interact with smart contracts should consistently stay up to date with emerging exploitations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Harmonizing Dimensionality: Unveiling the Prowess of Variational Auto-Encoder in Spark for Big Data Processing.
- Author
-
Jawad, Wasnaa and Al-Bakry, Abbas
- Subjects
DISTRIBUTED computing, MACHINE learning, BIG data
- Abstract
In the dynamic realm of big data processing, conquering the challenges imposed by high-dimensional datasets is imperative. This paper introduces a groundbreaking advancement in dimensionality reduction, employing Variational Auto-Encoder (VAE) within the Spark distributed framework. The deliberate selection of the "TLC" dataset, representative of New York City taxi trips with inherent high dimensionality, highlights the practicality of our approach. Our research showcases the virtuoso performance of VAE, achieving an impressive 95.12% reduction ratio and 89.26% accuracy. This highlights VAE's ability to elegantly distill essential information while discarding superfluous dimensions, achieving a harmonious balance between reduction and accuracy. Furthermore, building on the demonstrated superiority of Spark over Hadoop in prior successes, our adoption of VAE aligns with the overarching goal of enhancing big data processing. Spark's consistent advantage as a distributed framework reaffirms its reliability in handling diverse machine learning algorithms. This paper not only contributes to the advancement of machine learning in big data processing but also underscores the adaptability, versatility, and consistent performance of our approach across various methodologies and frameworks. The success of VAE in reducing dimensionality, coupled with Spark's inherent advantages, positions this research as a valuable contribution to the exploration of advanced techniques in distributed big data processing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. JUNO distributed computing system.
- Author
-
Zhang, Xiaomei
- Subjects
NEUTRINOS, NEUTRONS, DISTRIBUTED computing, DATA management, COMPUTER networks
- Abstract
The Jiangmen Underground Neutrino Observatory (JUNO) [1] is a multipurpose neutrino experiment and the determination of the neutrino mass hierarchy is its primary physics goal. JUNO is going to start data taking in 2024 and plans to use distributed computing infrastructure for the data processing and analysis tasks. The JUNO distributed computing system has been designed and built based on DIRAC [2]. Since last year, the official Monte Carlo (MC) production has been running on the system, and petabytes of massive MC data have been shared among JUNO data centers through this system. In this paper, an overview of the JUNO distributed computing system will be presented, including workload management system, data management, and condition data access system. Moreover, the progress of adapting the system to support token-based AAI [3] and HTTP-TPC [4] will be reported. Finally, the paper will mention the preparations for the upcoming JUNO data-taking. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. A Comprehensive Analysis of Various Tokenizers for Arabic Large Language Models.
- Author
-
Qarah, Faisal and Alsanoosy, Tawfeeq
- Subjects
LANGUAGE models, ARABIC language, NATURAL language processing, PARSING (Computer grammar), NATURAL languages, RESEARCH personnel
- Abstract
Pretrained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text using pretraining on large-scale corpora. Tokenization plays a significant role in the process of lexical analysis. Tokens become the input for other natural language processing (NLP) tasks, like semantic parsing and language modeling. However, there is a lack of research on the evaluation of the impact of tokenization on the Arabic language model. Therefore, this study aims to address this gap in the literature by evaluating the performance of various tokenizers on Arabic large language models (LLMs). In this paper, we analyze the differences between WordPiece, SentencePiece, and BBPE tokenizers by pretraining three BERT models using each tokenizer while measuring the performance of each model on seven different NLP tasks using 29 different datasets. Overall, the model pretrained with text tokenized using the SentencePiece tokenizer significantly outperforms the other two models that utilize WordPiece and BBPE tokenizers. The results of this paper will assist researchers in developing better models, making better decisions in selecting the best tokenizers, improving feature engineering, and making models more efficient, thus ultimately leading to advancements in various NLP applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. A Survey of IoT Frameworks for Low-Powered Devices.
- Author
-
CAZACU, Andrei-Robert
- Subjects
DISTRIBUTED computing, INTERNET of things, TECHNOLOGICAL innovations, EDGE computing, CLOUD computing, SMART devices
- Abstract
Thanks to technological advancements, our lives are getting more intertwined with the connected world as more smart devices come to market. As such, the clear separation between things (end-devices in IoT systems) and human-operated devices is increasingly blurred. This led to the creation of the term Internet of Everything (IoE), defined by Cisco as "the networked connection of people, process, data, and things" [1]. The main difference between IoT and IoE is the inclusion of people in the ecosystem, which greatly increases the number of connected parties. This increase in connected parties strains our existing infrastructure, which relies on cloud computing for performing most operations. Even though this resource provides heaps of computational power, the weak link in this scenario is the network, where all the connected devices can easily overload the available bandwidth, leading to slow response speeds and low general availability. The answer to this problem lies with a technology that already exists and is not yet fully exploited as a distributed computing powerhouse: IoT. This paper aims to summarise the concept of computing at the edge, common architectural patterns, and existing solutions, while also discussing real-world applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. A Survey of Machine Learning in Edge Computing: Techniques, Frameworks, Applications, Issues, and Research Directions.
- Author
-
Jouini, Oumayma, Sethom, Kaouthar, Namoun, Abdallah, Aljohani, Nasser, Alanazi, Meshari Huwaytim, and Alanazi, Mohammad N.
- Subjects
MACHINE learning, EDGE computing, DISTRIBUTED computing, DEEP learning, DATA privacy, ELECTRONIC data processing, MICROCONTROLLERS
- Abstract
Internet of Things (IoT) devices often operate with limited resources while interacting with users and their environment, generating a wealth of data. Machine learning models interpret such sensor data, enabling accurate predictions and informed decisions. However, the sheer volume of data from billions of devices can overwhelm networks, making traditional cloud data processing inefficient for IoT applications. This paper presents a comprehensive survey of recent advances in models, architectures, hardware, and design requirements for deploying machine learning on low-resource devices at the edge and in cloud networks. Prominent IoT devices tailored to integrate edge intelligence include Raspberry Pi, NVIDIA's Jetson, Arduino Nano 33 BLE Sense, STM32 Microcontrollers, SparkFun Edge, Google Coral Dev Board, and Beaglebone AI. These devices are boosted with custom AI frameworks, such as TensorFlow Lite, OpenEI, Core ML, Caffe2, and MXNet, to empower ML and DL tasks (e.g., object detection and gesture recognition). Both traditional machine learning (e.g., random forest, logistic regression) and deep learning methods (e.g., ResNet-50, YOLOv4, LSTM) are deployed on devices, distributed edge, and distributed cloud computing. Moreover, we analyzed 1000 recent publications on "ML in IoT" from IEEE Xplore using support vector machine, random forest, and decision tree classifiers to identify emerging topics and application domains. Hot topics included big data, cloud, edge, multimedia, security, privacy, QoS, and activity recognition, while critical domains included industry, healthcare, agriculture, transportation, smart homes and cities, and assisted living. 
The major challenges hindering the implementation of edge machine learning include encrypting sensitive user data for security and privacy on edge devices, efficiently managing resources of edge nodes through distributed learning architectures, and balancing the energy limitations of edge devices and the energy demands of machine learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. PERFORMANCE COMPARISON OF APACHE SPARK AND HADOOP FOR MACHINE LEARNING BASED ITERATIVE GBTR ON HIGGS AND COVID-19 DATASETS.
- Author
-
SEWAL, PIYUSH and SINGH, HARI
- Subjects
DISTRIBUTED computing, GRAPH algorithms, BATCH processing, REGRESSION trees, COMPUTING platforms, SQL, MACHINE learning
- Abstract
In the realm of distributed computing frameworks, such as Apache Spark and MapReduce Hadoop, the efficacy of these frameworks varies across diverse applications and algorithms contingent upon distinctive evaluation metrics and critical parameters. This research paper diligently scrutinizes the extant body of research that compares these two frameworks concerning said evaluation metrics and parameters. Subsequently, it conducts empirical investigations to authenticate the performance of these frameworks in the context of an iterative Gradient Boosting Tree Regression (GBTR) algorithm. Remarkably, the comparative analyses in previous studies encompass a spectrum of iterative machine learning regression and classification techniques, batch processing, SQL, and Graph processing algorithms. Furthermore, numerous investigations have explored the application of machine learning algorithms encompassing logistic regression, Page Rank, K-Means, KNN, and the HiBench suite. This paper presents a comparison between the two distributed computing platforms on iterative GBTR for the classification task on the HIGGS dataset from the physics domain and for the regression task on the Covid-19 dataset from the healthcare domain. The empirical findings corroborate that Apache Spark exhibits superior execution speed in iterative tasks when the available physical memory significantly exceeds the dataset size. Conversely, Hadoop outperforms Spark when dealing with substantial datasets or constrained physical memory resources. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. L1-Smooth SVM with Distributed Adaptive Proximal Stochastic Gradient Descent with Momentum for Fast Brain Tumor Detection.
- Author
-
Chuandong Qin, Yu Cao, and Liqun Meng
- Abstract
Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes. Machine learning models have become key players in automating brain tumor detection. Gradient descent methods are the mainstream algorithms for solving machine learning models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L1-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. Firstly, the smooth hinge loss is introduced as the loss function of the SVM. It avoids the issue of nondifferentiability at the zero point encountered by the traditional hinge loss function during gradient descent optimization. Secondly, the L1 regularization method is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent (PGD) with momentum, and distributed adaptive PGD with momentum (DPGD), are proposed and applied to the L1-Smooth SVM. Distributed computing is crucial in large-scale data analysis, with its value manifested in extending algorithms to distributed clusters, thus enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, enabling full utilization of the computer's multi-core resources. Due to the sparsity induced by L1 regularization on parameters, it exhibits significantly accelerated convergence speed. From the perspective of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants have achieved cutting-edge accuracy and efficiency in brain tumor detection. From pre-trained models, both the PGD and DPGD outperform other models, boasting an accuracy of 95.21%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
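Result 22 replaces the standard hinge loss with a smooth hinge so that gradient descent never hits a non-differentiable point, and handles L1 regularization with a proximal step. One common quadratic smoothing and the corresponding soft-thresholding operator can be sketched as follows (a generic textbook formulation, not necessarily the paper's exact loss; function names are illustrative):

```python
# A common smoothing of the hinge loss: quadratic near the margin,
# linear beyond it, so the loss is differentiable everywhere.
def smooth_hinge(z):
    # z = y * f(x), the signed margin of an example.
    if z >= 1.0:
        return 0.0
    if z <= 0.0:
        return 0.5 - z
    return 0.5 * (1.0 - z) ** 2

def smooth_hinge_grad(z):
    # Derivative w.r.t. z; continuous at z = 0 and z = 1.
    if z >= 1.0:
        return 0.0
    if z <= 0.0:
        return -1.0
    return z - 1.0

# L1 proximal (soft-thresholding) step used after each gradient step
# in proximal gradient descent; drives small weights exactly to zero.
def soft_threshold(w, lam):
    return [max(abs(wi) - lam, 0.0) * (1 if wi > 0 else -1) for wi in w]
```

The gradient is continuous at both kinks (z = 0 and z = 1), and the soft-threshold step is what gives proximal gradient descent the exact sparsity the abstract attributes to L1 regularization.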
23. Pseudospheres: combinatorics, topology and distributed systems
- Author
-
Alberto, Luis
- Published
- 2024
- Full Text
- View/download PDF
24. Enhancing Data Analytics in Environmental Sensing Through Cloud IoT Integration.
- Author
-
Verma, Rohan, Taneja, Harsh, Singh, Kiran Deep, and Singh, Prabh Deep
- Subjects
INTERNET of things, ENVIRONMENTAL monitoring, DATA security, CLOUD computing, VIRTUAL machine systems
- Abstract
Transformational advances in environmental sensing have been made possible by the convergence of Cloud Computing and the Internet of Things (IoT). The potential for these technologies is to work together and improve environmental monitoring data analytics. Integrating IoT in the cloud creates a robust environment for advanced analytics by overcoming previous hurdles in data granularity, real-time monitoring, and geographical coverage. These developments aren't without their own set of difficulties, however, including data security, interoperability, and ethical concerns. This paper explores integrating Cloud IoT with environmental sensing. The paper addresses these issues and emphasises the need to establish ethical data-gathering methods, implement standardised communication protocols, and address privacy concerns. The study concludes by examining the challenges, concerns, and potential of integrating Cloud IoT. The integration of Cloud Computing and IoT not only transforms environmental sensing but also provides a solid foundation for collaborative research, data-driven decision-making, and environmentally conscious management. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. FedAdaSS: Federated Learning with Adaptive Parameter Server Selection Based on Elastic Cloud Resources.
- Author
-
Xu, Yuwei, Zhao, Baokang, Zhou, Huan, and Su, Jinshu
- Subjects
FEDERATED learning ,MACHINE learning ,ARTIFICIAL intelligence ,DISTRIBUTED computing ,EDGE computing - Abstract
The rapid expansion of artificial intelligence (AI) applications has raised significant concerns about user privacy, prompting the development of privacy-preserving machine learning (ML) paradigms such as federated learning (FL). FL enables the distributed training of ML models, keeping data on local devices and thus addressing the privacy concerns of users. However, challenges arise from the heterogeneous nature of mobile client devices, partial engagement in training, and non-independent identically distributed (non-IID) data, leading to performance degradation and optimization objective bias in FL training. With the development of 5G/6G networks and the integration of cloud and edge computing resources, globally distributed cloud computing resources can be effectively utilized to optimize the FL process. Selecting the parameter server for each round through a dedicated mechanism incurs no additional monetary cost, reduces network latency overhead, and balances the objectives of communication optimization and low-engagement mitigation, which cannot be achieved simultaneously in the single-server frameworks of existing works. In this paper, we propose the FedAdaSS algorithm, an adaptive parameter server selection mechanism designed to optimize the training efficiency of each round of FL training by selecting the most appropriate server as the parameter server. Our approach leverages the flexibility of cloud resource computing power and allows organizers to strategically select servers for data broadcasting and aggregation, thus improving training performance while maintaining cost efficiency. The FedAdaSS algorithm estimates the utility of client systems and servers and incorporates an adaptive random reshuffling strategy that selects the optimal server in each round of the training process. Theoretical analysis confirms the convergence of FedAdaSS under strong convexity and L-smoothness assumptions, and comparative experiments within the FLSim framework demonstrate a 12%–20% reduction in rounds-to-accuracy compared to Federated Averaging (FedAvg) with random reshuffling under a single server. Furthermore, FedAdaSS effectively mitigates performance loss caused by low client engagement, reducing the loss indicator by 50%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
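A minimal sketch of the two ingredients the abstract above contrasts: FedAvg-style weighted aggregation (the baseline) and a per-round server choice by estimated utility. The names `fedavg_aggregate` and `select_server` are hypothetical; FedAdaSS's actual utility estimation and adaptive random reshuffling are not reproduced.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg: average client model parameters, weighted by each
    client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

def select_server(server_utils):
    """Hypothetical per-round parameter-server choice: pick the server
    with the highest estimated utility (e.g., a bandwidth/latency/cost
    score), instead of using one fixed server."""
    return int(np.argmax(server_utils))
```

Each round would then broadcast from, and aggregate at, `select_server(...)` rather than a single fixed coordinator.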
26. StreamFilter: a framework for distributed processing of range queries over streaming data with fine-grained access control.
- Author
-
Safaee, Shahab, Mirabi, Meghdad, and Safaei, Ali Asghar
- Subjects
DISTRIBUTED computing ,DATA management ,DATA distribution ,ACCESS control ,INDEXING ,TREES - Abstract
Access control is a fundamental component of any data management system, ensuring the prevention of unauthorized data access. Within the realm of data streams, it plays a crucial role in query processing by facilitating authorized access to them. This paper introduces the StreamFilter framework, which focuses on securely processing queries with range filters over streaming data. Leveraging the Role-Based Access Control model, the StreamFilter framework enables the specification of fine-grained access policies at various levels of granularity, such as tuples and attributes, through a bit string structure. To enhance the search operation during data stream query processing, the framework employs a distributed indexing method, constructing a set of smaller B+Tree indices rather than a single large B+Tree index. Furthermore, it seamlessly integrates access authorization evaluation with query processing, efficiently filtering unauthorized parts from the query results. The experimental results demonstrate an approximately 50% increase in efficiency for processing queries with range filters compared to the post-filtering strategy. This improvement is observed across all types of data distribution, including uniform, skewed, and hyper-skewed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
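The integrated evaluation of a range predicate together with tuple- and attribute-level bit-string permissions might look like the following single-pass sketch. The function name, the bit layout (one bit per row and one per column), and masking with `None` are all assumptions for illustration, not the paper's API.

```python
def filter_range_query(rows, low, high, key, role_row_mask, role_attr_mask):
    """Evaluate the range predicate and the role's bit-string
    permissions in one pass, instead of post-filtering results.

    role_row_mask:  bit i set  -> row i is visible to the role
    role_attr_mask: bit j set  -> column j is readable by the role
    """
    out = []
    for i, row in enumerate(rows):
        if not (low <= row[key] <= high):
            continue                      # range predicate fails
        if not (role_row_mask >> i) & 1:
            continue                      # tuple-level bit: row hidden
        # attribute-level bits: mask out unauthorized columns
        out.append(tuple(v if (role_attr_mask >> j) & 1 else None
                         for j, v in enumerate(row)))
    return out
```

In StreamFilter the predicate side would be served by the distributed B+Tree indices rather than a linear scan; the sketch only shows the authorization logic.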
27. ServiceNet: resource-efficient architecture for topology discovery in large-scale multi-tenant clouds.
- Author
-
Gama Garcia, Angel, Alcaraz Calero, Jose M., Mora Mora, Higinio, and Wang, Qi
- Subjects
5G networks ,RESOURCE management ,TOPOLOGY ,TENANTS - Abstract
Modern computing infrastructures are evolving due to virtualisation, especially with the advent of 5G and future technologies. While this transition offers numerous benefits, it also presents challenges. Consequently, understanding these complex systems, including networks, services, and their interconnections, is crucial. This paper introduces ServiceNet, a groundbreaking architecture that accurately performs the important task of providing understanding of a multi-tenant architecture by discovering the complete topology, crucial in the realm of high-performance distributed computing. Experimental results have been carried out in different scenarios in order to validate our approach, demonstrating the effectiveness of our approach in comprehensive multi-tenant topology discovery. The experiments, involving up to forty tenant, highlight the adaptability of ServiceNet as a valuable tool for real-time monitoring in topology discovery purposes, even in challenging scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Leveraging Large Language Models and BERT for Log Parsing and Anomaly Detection.
- Author
-
Zhou, Yihan, Chen, Yan, Rao, Xuanming, Zhou, Yukang, Li, Yuxin, and Hu, Chao
- Subjects
LANGUAGE models ,DISTRIBUTED computing ,TRANSFORMER models ,CHATGPT ,COMPUTER systems - Abstract
Computer systems and applications generate large amounts of logs to measure and record information, which is vital to protect the systems from malicious attacks and useful for repairing faults, especially with the rapid development of distributed computing. Among various logs, the anomaly log is beneficial for operations and maintenance (O&M) personnel to locate faults and improve efficiency. In this paper, we utilize a large language model, ChatGPT, for the log parser task. We choose the BERT model, a self-supervised framework for log anomaly detection. BERT, an embedded transformer encoder, with a self-attention mechanism can better handle context-dependent tasks such as anomaly log detection. Meanwhile, it is based on the masked language model task and next sentence prediction task in the pretraining period to capture the normal log sequence pattern. The experimental results on two log datasets show that the BERT model combined with an LLM performed better than other classical models such as Deelog and Loganomaly. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Parallel and Distributed Frugal Tracking of a Quantile †.
- Author
-
Epicoco, Italo, Pulimeno, Marco, and Cafaro, Massimo
- Subjects
DISTRIBUTED computing ,SKEWNESS (Probability theory) ,PARALLEL programming ,NETWORK performance ,USER experience - Abstract
In this paper, we deal with the problem of monitoring network latency. Latency is a key network metric related to both network performance and quality of service, since it directly impacts the overall user experience. High latency leads to unacceptably slow response times of network services, and may increase network congestion and reduce throughput, in turn disrupting communications and the user experience. A common approach to monitoring network latency takes into account the frequently skewed distribution of latency values, and therefore specific quantiles are monitored, such as the 95th, 98th, and 99th percentiles. We present a comparative analysis of the speed of convergence of the sequential FRUGAL-1U, FRUGAL-2U, and EASYQUANTILE algorithms, and the design and analysis of parallel, message-passing-based versions of these algorithms that can be used for monitoring network latency quickly and accurately. Distributed versions are also discussed. Extensive experimental results are provided and discussed as well. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
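The sequential FRUGAL-1U algorithm named above is simple enough to sketch in full: one unit of memory, with the estimate nudged by ±1 per stream item so that it drifts toward the q-th quantile. The function name is ours; the update rule follows the published algorithm.

```python
import random

def frugal_1u(stream, q, seed=0):
    """FRUGAL-1U quantile estimator: keep a single integer estimate m.
    On item x: if x > m, increment m with probability q;
    if x < m, decrement m with probability 1 - q."""
    rng = random.Random(seed)
    m = 0
    for x in stream:
        r = rng.random()
        if x > m and r > 1 - q:
            m += 1
        elif x < m and r > q:
            m -= 1
    return m
```

The ±1 steps make the estimator extremely cheap but slow to converge, which is exactly the trade-off the paper's convergence-speed comparison examines.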
30. An Integrated Software-Defined Networking–Network Function Virtualization Architecture for 5G RAN–Multi-Access Edge Computing Slice Management in the Internet of Industrial Things.
- Author
-
Chiti, Francesco, Morosi, Simone, and Bartoli, Claudio
- Subjects
DISTRIBUTED computing ,SOFTWARE-defined networking ,INTERNET of things ,INDUSTRIAL management ,MANUFACTURING processes - Abstract
The Internet of Things (IoT), namely, the set of intelligent devices equipped with sensors and actuators and capable of connecting to the Internet, has now become an integral part of the most competitive industries, as it enables optimization of production processes and reduction in operating costs and maintenance time, together with improving the quality of products and services. More specifically, the term Industrial Internet of Things (IIoT) identifies the system which consists of advanced Internet-connected equipment and analytics platforms specialized for industrial activities, where IIoT devices range from small environmental sensors to complex industrial robots. This paper presents an integrated high-level SDN-NFV architecture enabling clusters of smart devices to interconnect and manage the exchange of data with distributed control processes and databases. In particular, it focuses on 5G RAN-MEC slice management in the IIoT context. The proposed system is emulated by means of two distinct real-time frameworks, demonstrating improvements in connectivity, energy efficiency, end-to-end latency and throughput. In addition, its scalability, modularity and flexibility are assessed, making this framework suitable for testing advanced and more complex applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Smart IoT SCADA System for Hybrid Power Monitoring in Remote Natural Gas Pipeline Control Stations.
- Author
-
Waqas, Muhammad and Jamil, Mohsin
- Subjects
HYBRID power systems ,INTELLIGENT control systems ,SUPERVISORY control & data acquisition systems ,SUPERVISORY control systems ,DISTRIBUTED computing - Abstract
A pipeline network is the most efficient and rapid way to transmit natural gas from source to destination. The smooth operation of natural gas pipeline control stations depends on electrical equipment such as data loggers, control systems, surveillance, and communication devices. Besides having a reliable and consistent power source, such control stations must also have cost-effective and intelligent monitoring and control systems. Distributed processes are monitored and controlled using supervisory control and data acquisition (SCADA) technology. This paper presents an Internet of Things (IoT)-based, open-source SCADA architecture designed to monitor a Hybrid Power System (HPS) at a remote natural gas pipeline control station, addressing the limitations of existing proprietary and non-configurable SCADA architectures. The proposed system comprises voltage and current sensors acting as Field Instrumentation Devices for required data collection, an ESP32-WROOM-32E microcontroller that functions as the Remote Terminal Unit (RTU) for processing sensor data, a Blynk IoT-based cloud server functioning as the Master Terminal Unit (MTU) for historical data storage and human–machine interactions (HMI), and a GSM SIM800L module and a local WiFi router for data communication between the RTU and MTU. Considering the remote locations of such control stations and the potential lack of 3G, 4G, or Wi-Fi networks, two configurations that use the GSM SIM800L and a local Wi-Fi router are proposed for hardware integration. The proposed system exhibited a low power consumption of 3.9 W and incurred an overall cost of 40.1 CAD, making it an extremely cost-effective solution for remote natural gas pipeline control stations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. A Review on Software‐Defined Networking for Internet of Things Inclusive of Distributed Computing, Blockchain, and Mobile Network Technology: Basics, Trends, Challenges, and Future Research Potentials.
- Author
-
Shafiq, Shakila, Rahman, Md. Sazzadur, Shaon, Shamim Ahmed, Mahmud, Imtiaz, Hosen, A. S. M. Sanwar, and Longo, Francesco
- Subjects
DISTRIBUTED computing ,MOBILE computing ,TELECOMMUNICATION ,EDGE computing ,INTERNET of things ,BLOCKCHAINS - Abstract
Internet of things (IoT) and software‐defined networking (SDN) are two relatively recent developments in the field of communication technology that have emerged in response to the growing demand for more efficient, flexible, and dynamic network architectures. As both of these concepts are new, they have received increasing attention from academic or industrial sources to emphasize their potential for integration. This study is aimed at reviewing the literature on SDN for IoT (SDN‐IoT) published from 2014 to 2022 and presenting insights and directions for future research, with a particular focus on cloud, fog, and edge computing. The study collects data from Science Direct, IEEE Explore, and Google Scholar and objectively selects 126 papers and conducts metadata analysis. The study articulates the challenges of managing and orchestrating IoT systems and how SDN can be used to address these challenges by enabling dynamic and flexible network configurations. It delineates not only the function of blockchain (BC) technology in securing and managing IoT networks but also how SDN can be utilized to incorporate BC‐based solutions. Additionally, the potential of SDN for mobile networks is explored, which are increasingly being used to support IoT devices. Finally, this study outlines the issues, challenges, and potential future research directions that may present opportunities for the researchers working in this field, underscoring the demand for more in‐depth investigation and advancement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. GT-scheduler: a hybrid graph-partitioning and tabu-search based task scheduler for distributed data stream processing systems.
- Author
-
Hadian, Hamid and Sharifi, Mohsen
- Subjects
DISTRIBUTED computing ,METAHEURISTIC algorithms ,INFORMATION storage & retrieval systems ,NP-hard problems ,HEURISTIC algorithms ,TABU search algorithm - Abstract
The continual increase in the amount of data generated by social media, IoT devices, and monitoring systems has motivated the use of Distributed Data Stream Processing (DSP) systems to harness data in a real-time manner. The scheduling of processing tasks in DSP systems across the machines in a cluster or cloud environment is an NP-Hard problem. Different scheduling schemes have been proposed to address the scheduling problem, but most fail to take into account runtime adaptation and workload changes after initial scheduling. In this paper, we propose a new scheduler (GT-Scheduler) that leverages a heuristic, rule-based algorithm to schedule tasks at near-optimal performance, alongside a meta-heuristic algorithm for runtime adaptation. Firstly, K-way graph partitioning divides the tasks in an application graph according to their communication patterns, placing tasks with the highest amount of communication near each other to limit increases in the topology response time. Secondly, instead of assigning individual tasks to worker nodes, partitions of tasks are assigned to nodes using a greedy strategy. If the capacity of the nodes is insufficient to host a specific partition of tasks, that partition is iteratively divided by the k-partitioning algorithm and assigned to a suitable node. Runtime adaptation lies in detecting overutilized worker nodes and reassigning their tasks by exploiting Tabu-Search and a new scoring strategy, finding a solution in which no worker node is overutilized. GT-Scheduler is implemented on standard Apache Storm and, using standard benchmarks, it is shown that GT-Scheduler outperforms R-Storm and the Online-Scheduler by at least 35% in reducing the topology response time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
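The greedy partition-to-node placement step described above can be sketched as follows. This is a simplification: the k-way partitioner, real capacity modeling, and the Tabu-Search runtime adaptation are not reproduced, and the names are hypothetical.

```python
def greedy_assign(partition_loads, capacities):
    """Greedy placement: take partitions in decreasing load order and
    put each on the node with the most remaining capacity. Partitions
    that fit nowhere are returned for further k-way splitting."""
    remaining = list(capacities)
    placement, to_split = {}, []
    for p in sorted(range(len(partition_loads)),
                    key=lambda i: -partition_loads[i]):
        node = max(range(len(remaining)), key=lambda n: remaining[n])
        if remaining[node] >= partition_loads[p]:
            remaining[node] -= partition_loads[p]
            placement[p] = node
        else:
            to_split.append(p)  # would be re-partitioned (not shown)
    return placement, to_split
```

The real scheduler additionally keeps heavily communicating tasks inside one partition, so this placement also minimizes cross-node traffic.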
34. Software Testing Using Cuckoo Search Algorithm with Machine Learning Techniques.
- Author
-
N., Deepashree and Parveen, M. Sahina
- Subjects
SOFTWARE failures ,TIME complexity ,MACHINE learning ,DISTRIBUTED computing ,FRUIT flies - Abstract
Software defects are any errors, flaws, bugs, mistakes, or failures in a piece of software that might cause the programme to produce incorrect or unexpected results. Defects almost always increase both the time and money needed to finish a project, and finding and fixing bugs is a laborious and expensive process in and of itself. While it is unrealistic to expect to completely eradicate all defects from a project, their severity may be mitigated. Predicting where bugs may appear in code, known as software defect prediction (SDP), is therefore an important part of software quality assurance: the goal of each software development project should be to provide a bug-free product, and software of a high calibre should have few bugs. A software metric is a quantitative or qualitative evaluation of some aspect of the programme or its requirements. One of the more recent population-based algorithms, Cuckoo Search (CS), was inspired by the flight patterns of some cuckoo species as well as the Lévy flight patterns of other birds and fruit flies, and meets the requirements for global convergence. KNN is a significant non-parametric supervised learning technique. This paper presents an overview of Stochastic Diffusion Search (SDS) in the form of a social metaphor to illustrate the processes by which SDS allots resources. SDS addresses best-fit pattern identification and matching difficulties using a novel probabilistic method. As a multi-agent population-based global search and optimization method, SDS is a distributed model of computing that makes use of interaction amongst basic agents.
The behaviour of SDS is described by studying its resource allocation, convergence to the global optimum, resilience, minimum convergence criterion, and linear time complexity within a rigorous mathematical framework, setting it apart from many nature-inspired search algorithms. This paper proposes a hybrid optimization strategy based on CSSDS techniques. By using the global search strategy of the SDS algorithm, this hybridization aims to enhance the cuckoo's search for the optimum host nest: the SDS method is used to place the cuckoo egg in the most advantageous location. When compared to other classifiers, PC2's improved performance may be attributed to its higher recall values. When compared to the Naive Bayes and Radial Basis Neural Network classifiers, the KNN performs 7.64% and 2.20% better, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
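A minimal 1-D Cuckoo Search sketch with Lévy-flight steps (generated via Mantegna's algorithm) and abandonment of the worst nests. This illustrates plain CS only, not the paper's CSSDS hybrid with Stochastic Diffusion Search, and all names and parameter values are hypothetical.

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm: draw a heavy-tailed, Levy-distributed step."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta *
              2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, lo, hi, n_nests=15, pa=0.25, iters=200, seed=1):
    """Minimize f on [lo, hi]: new eggs via Levy flights around the
    best nest; a fraction pa of the worst nests abandoned each round."""
    rng = random.Random(seed)
    nests = [rng.uniform(lo, hi) for _ in range(n_nests)]
    for _ in range(iters):
        best = min(nests, key=f)
        for i in range(n_nests):
            step = 0.01 * levy_step(rng) * (nests[i] - best)
            cand = min(hi, max(lo, nests[i] + step))
            j = rng.randrange(n_nests)       # compare with a random nest
            if f(cand) < f(nests[j]):
                nests[j] = cand
        nests.sort(key=f)                    # best first
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = rng.uniform(lo, hi)   # abandon the worst nests
    return min(nests, key=f)
```

In the paper's hybrid, the abandonment/relocation step would be guided by SDS agent interaction rather than uniform resampling.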
35. WLCG Transition from X.509 to Tokens. Status, Plans, and Timeline.
- Author
-
Dack, Thomas, Agostini, Federica, Basney, Jim, Cornwall, Linda, De Stefano Jr, John Steven, Dykstra, Dave, Giacomini, Francesco, Litmaath, Maarten, Miccoli, Roberta, Sallé, Mischa, Short, Hannah, and Vianello, Enrico
- Subjects
DISTRIBUTED computing ,WORKFLOW ,MANAGEMENT ,CLOUD computing ,REMOTE computing - Abstract
Since 2017, the Worldwide LHC Computing Grid (WLCG) has been working towards enabling token-based authentication and authorization throughout its entire middleware stack. Following the initial publication of the WLCG Token Schema v1.0 in 2019, OAuth2.0 token workflows have been integrated across grid middleware. There are many complex challenges to be addressed before the WLCG can be end-to-end token-based, including not just technical hurdles but also interoperability with the wider authentication and authorization landscape. This paper presents the status of the WLCG coordination and deployment work, and how it relates to software providers and partner communities. The authors also detail how the WLCG token transition timeline has progressed, and how it has changed since its publication. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Utilizing Distributed Heterogeneous Computing with PanDA in ATLAS.
- Author
-
Maeno, Tadashi, Alekseev, Aleksandr, Barreiro Megino, Fernando Harald, De, Kaushik, Guan, Wen, Karavakis, Edward, Klimentov, Alexei, Korchuganova, Tatiana, Lin, FaHui, Nilsson, Paul, Wenaus, Torre, Yang, Zhaoyu, and Zhao, Xin
- Subjects
DISTRIBUTED computing ,WORKFLOW ,MANAGEMENT ,CLOUD computing ,REMOTE computing - Abstract
In recent years, advanced and complex analysis workflows have gained increasing importance in the ATLAS experiment at CERN, one of the large scientific experiments at LHC. Support for such workflows has allowed users to exploit remote computing resources and service providers distributed worldwide, overcoming limitations on local resources and services. The spectrum of computing options keeps increasing across the Worldwide LHC Computing Grid (WLCG), volunteer computing, high-performance computing, commercial clouds, and emerging service levels like Platform-as-a-Service (PaaS), Container-as-a-Service (CaaS) and Function-as-a-Service (FaaS), each one providing new advantages and constraints. Users can significantly benefit from these providers, but at the same time, it is cumbersome to deal with multiple providers, even in a single analysis workflow with fine-grained requirements coming from their applications' nature and characteristics. In this paper, we will first highlight issues in geographically distributed heterogeneous computing, such as the insulation of users from the complexities of dealing with remote providers, smart workload routing, complex resource provisioning, seamless execution of advanced workflows, workflow description, pseudo-interactive analysis, and integration of PaaS, CaaS, and FaaS providers. We will also outline solutions developed in ATLAS with the Production and Distributed Analysis (PanDA) system and future challenges for LHC Run4. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Accounting and monitoring tools enhancement for Run 3 in the ATLAS distributed computing.
- Author
-
Alekseev, Aleksandr, Barberis, Dario, and Svatos, Michal
- Subjects
ACCOUNTING ,DISTRIBUTED computing ,DASHBOARDS (Management information systems) ,MANAGEMENT information systems ,INFRASTRUCTURE (Economics) - Abstract
The ATLAS experiment at the LHC utilizes complex multicomponent distributed systems for processing (PanDA WMS) and managing (Rucio) data. The complexity of the relationships between components, the amount of data being processed and the continuous development of new functionalities of the critical systems are the main challenges to consider when creating monitoring and accounting tools able to adapt to this dynamic environment in a short time. To overcome these challenges, ATLAS uses the unified monitoring infrastructure (UMA) provided by CERN-IT since 2018, which accumulates information from distributed data sources and then makes it available for different ATLAS distributed computing user groups. The information is displayed using Grafana dashboards. Based on the information provided, they can be grouped as "data transfers", "site accounting", "jobs accounting" and so on. These monitoring tools are used daily by ATLAS members to spot and fix issues. In addition, LHC Run 3 required the implementation of significant changes in the monitoring and accounting infrastructure to collect and process data collected by ATLAS during the LHC run. This paper describes the recent enhancements to the UMA-based monitoring and accounting dashboards. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. A game theoretic framework for distributed computing with dynamic set of agents
- Author
-
Dhamal, Swapnil, Ben-Ameur, Walid, Chahed, Tijani, Altman, Eitan, Sunny, Albert, and Poojary, Sudheer
- Published
- 2024
- Full Text
- View/download PDF
39. EVOLUTION OF CRYPTOCURRENCIES AND THEIR UTILIZATION IN THE DIGITAL ECONOMY.
- Author
-
Grasic, Anej and Vidnjevic, Marko
- Subjects
CRYPTOCURRENCY exchanges ,ECONOMICS ,ASSETS (Accounting) ,BLOCKCHAINS ,DISTRIBUTED computing - Abstract
Cryptocurrencies and blockchain technology have revolutionized finance and technology, presenting unparalleled opportunities for innovation. Emerging with Bitcoin in 2009, the crypto ecosystem has significantly evolved, encompassing a wide range of digital assets and decentralized applications. This paper delves into the origins of cryptocurrency, the dynamics of crypto transactions, and the transformative potential of blockchain technology. It examines various types of cryptocurrencies, including coins and tokens. Also, it explores the multiple layers of blockchain technology, from foundational infrastructure to advanced applications. Despite facing challenges, like market volatility and regulatory uncertainty, continuous collaboration and innovation drive the field forward. By adopting responsible development practices, stakeholders can harness the potential of cryptocurrencies and blockchain to foster a more inclusive, transparent, and efficient financial ecosystem. This evolution promises to redefine traditional financial systems and enable new forms of economic participation and digital interaction, ultimately contributing to a more equitable global digital economy worldwide. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Near-optimal distributed computation of small vertex cuts.
- Author
-
Parter, Merav and Petruschka, Asaf
- Subjects
DISTRIBUTED algorithms ,GRAPH connectivity ,DISTRIBUTED computing ,GRAPH algorithms ,SUBGRAPHS ,WARMUP - Abstract
We present near-optimal algorithms for detecting small vertex cuts in the CONGEST model of distributed computing. Despite extensive research in this area, our understanding of the vertex connectivity of a graph is still incomplete, especially in the distributed setting. To this date, all distributed algorithms for detecting cut vertices suffer from an inherent dependency on the maximum degree of the graph, Δ. Hence, in particular, there is no truly sub-linear time algorithm for this problem, not even for detecting a single cut vertex. We take a new algorithmic approach for vertex connectivity which allows us to bypass the existing Δ barrier. As a warm-up to our approach, we show a simple Õ(D)-round randomized algorithm for computing all cut vertices in a D-diameter n-vertex graph. This improves upon the O(D + Δ/log n)-round algorithm of [Pritchard and Thurimella, ICALP 2008]. Our key technical contribution is an Õ(D)-round randomized algorithm for computing all cut pairs in the graph, improving upon the state-of-the-art O(Δ·D^4)-round algorithm by [Parter, DISC '19]. Note that even for the considerably simpler setting of edge cuts, Õ(D)-round algorithms are currently known only for detecting pairs of cut edges. Our approach is based on employing the well-known linear graph sketching technique [Ahn, Guha and McGregor, SODA 2012] along with the heavy-light tree decomposition of [Sleator and Tarjan, STOC 1981]. Combining this with a careful characterization of the survivable subgraphs allows us to determine the connectivity of G \ {x, y} for every pair x, y ∈ V using Õ(D) rounds. We believe that the tools provided in this paper are useful for omitting the Δ-dependency even for larger cut values. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
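For contrast with the distributed Õ(D)-round algorithms described above, the classical centralized cut-vertex computation is a single DFS with low-link values; the sketch below shows that baseline, not the paper's CONGEST algorithm.

```python
def cut_vertices(adj):
    """Articulation points (cut vertices) via DFS low-link values.
    adj[u] is the list of neighbours of node u."""
    n = len(adj)
    disc = [0] * n
    low = [0] * n
    visited = [False] * n
    cuts = set()
    timer = [1]

    def dfs(u, parent):
        visited[u] = True
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if visited[v]:
                low[u] = min(low[u], disc[v])   # back edge
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent != -1 and low[v] >= disc[u]:
                    cuts.add(u)   # no back edge climbs above u
        if parent == -1 and children > 1:
            cuts.add(u)           # root is a cut vertex iff >1 DFS child

    for s in range(n):
        if not visited[s]:
            dfs(s, -1)
    return cuts
```

The distributed difficulty is precisely that this DFS is inherently sequential; the paper replaces it with graph sketching over a heavy-light decomposed BFS tree.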
41. Dynamic Task Decomposition and a Long-Duration Assurance Mechanism for Space-Oriented Distributed Computing (面向空间分布式计算的动态任务分解及长时保障机制).
- Author
-
锁啸天, 杨雅婷, and 嵩天
- Abstract
Copyright of Journal of Frontiers of Computer Science & Technology is the property of Beijing Journal of Computer Engineering & Applications Journal Co Ltd. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
42. Good Flood Bed: An Energy-Efficient and Controlled Concurrent Transmission Protocol.
- Author
-
Khanmirza, Hamed and Maroufi, Salman
- Subjects
PARENT-child relationships ,SPANNING trees ,ENERGY consumption ,DISTRIBUTED computing - Abstract
Flood-based communication protocols are attractive due to their easy and fast network startup, their resiliency to communication or node loss, and their support for node mobility. Concurrent-transmission protocols are flooding-based protocols that synchronously enable low-latency, network-wide communication. They also provide energy-efficient data dissemination with higher delivery reliability than traditional flooding. This paper proposes the Good Flood Bed (GFB) method to further reduce the energy consumption of networks operating on concurrent transmission, by preventing floods in parts of the network where the flooded data is not needed. GFB starts with a distributed leader-election process to choose a root node. Then, the root node builds a spanning tree among the nodes. The tree establishes a parent-child relationship and forms a new controlled flood bed. We show the effectiveness of the proposed method by simulation and by experimenting on a small-scale BLE5 test bed. Experiments prove that this method has the potential to cut network energy consumption by 50%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
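The startup phase described above (leader election followed by a spanning tree that fixes parent-child relations) can be sketched centrally as follows, assuming the node with the smallest ID wins the election; the names are hypothetical and the actual protocol runs distributed over concurrent transmissions.

```python
from collections import deque

def build_flood_bed(adj, ids):
    """Elect the node with the smallest ID as root, then build a BFS
    spanning tree from it; floods are later confined to tree edges,
    forming the controlled 'flood bed'."""
    root = min(range(len(ids)), key=lambda i: ids[i])
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u   # fix the parent-child relationship
                q.append(v)
    return root, parent
```

Restricting forwarding to `parent`/child links is what lets GFB suppress floods in branches that carry no interested receivers.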
43. AI‐enabled blockchain and SDN‐integrated IoT security architecture for cyber‐physical systems.
- Author
-
Wang, Sen, Zhang, Jie, and Zhang, Tianhui
- Subjects
BLOCKCHAINS ,INFORMATION sharing ,INTERNET of things ,DISTRIBUTED computing ,RANDOM forest algorithms - Abstract
To address the IoT security problem, in this paper we propose and evaluate a blockchain-based DDoS attack mitigation method, and construct a model for detecting and sharing DDoS abnormal-traffic information. The experimental results show that, as the number of decision trees increases, the training time of the random forest (RF) based DDoS attack detection model grows, with a minimum of 14 s; the testing time settles at 1 s, and the recognition accuracy of DDoS attacks keeps improving, ultimately exceeding 99.8%. For 100 and 2000 pieces of DDoS abnormal-traffic information, digitally signing with the ECDSA algorithm takes only 0.1 s and 5 s, and signature verification takes only 0.1 s and 9 s, respectively. Compared to a conventional cyber-physical-system IoT security architecture, one that integrates AI empowerment, blockchain, and SDN achieves a higher joint-defense success rate. This scheme is thus conducive to promoting joint defense against DDoS attacks and ensuring the security of the Internet of Things. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Blockchain for Vehicle Registration, Transferring and Management Process in Sri Lanka.
- Author
-
Malintha, Chamod, Diyasena, Deshanjali, and Madushanka, Tiroshan
- Subjects
BLOCKCHAINS ,AUTOMOBILE registration fees ,DISTRIBUTED computing ,ADMINISTRATIVE fees - Abstract
In Sri Lanka, the widespread occurrence of fraudulent activities in vehicle registration processes presents notable challenges, especially regarding ownership disputes and discrepancies in vehicle history. This study first investigates the drawbacks of the existing manual registration system, which leads to delays in registration, ownership transfer, and modification processes, thus contributing to fraudulent activities in the secondary vehicle market. These challenges arise from centralized storage systems, which are vulnerable to single points of failure and data integrity compromise due to third-party involvement. To address these deficiencies, as a solution, this paper recommends the adoption of blockchain technology, utilizing its decentralized and distributed nature to ensure the security, reliability, and transparency of vehicle information management. Specifically, a blockchain-based system is proposed and developed on the Ethereum network, incorporating smart contracts to streamline key functions of the Sri Lankan government's vehicle registration process. These functions include new vehicle registration, reproducible certificate issuance, ownership transfers, modifications, and comprehensive vehicle history maintenance. The system provides public access to vehicle details, including historical data, via a user-friendly mobile application. Ultimately, this study contributes to establishing a secure and reliable method that simplifies the vehicle registration process, mitigating security breaches and data tampering risks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
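The core property the proposed system relies on — an append-only, tamper-evident vehicle history — can be sketched with a hash-chained record log. This illustrates the blockchain idea only; it is not the Ethereum smart-contract implementation the paper develops, and the field names are assumptions.

```python
import hashlib
import json

def record_hash(record, prev_hash):
    """Hash a registration record together with the previous entry's hash,
    chaining entries so tampering with any record breaks all later hashes."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify(chain):
    """Recompute every link; any mismatch reveals tampering."""
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

chain = []
append(chain, {"plate": "CAB-1234", "owner": "A"})
append(chain, {"plate": "CAB-1234", "owner": "B"})  # ownership transfer
```

On a real blockchain the chain is replicated across many nodes, which removes the single point of failure the abstract attributes to centralized storage.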
45. Gathering Over Heterogeneous Meeting Nodes.
- Author
-
Chakraborty, Abhinav, Bhagat, Subhash, and Mukhopadhyaya, Krishnendu
- Subjects
GRAPH theory ,MOBILE robots ,MULTIPLICITY (Mathematics) ,DISTRIBUTED computing ,GRID computing - Abstract
We consider two finite and disjoint sets of homogeneous robots deployed at the nodes of an infinite grid graph. The grid graph also comprises two finite and disjoint sets of prefixed meeting nodes located over the nodes of the grid. The objective of our study is to design a distributed algorithm that gathers all the robots belonging to the first team at one of the meeting nodes belonging to the first type, and all the robots in the second team must gather at one of the meeting nodes belonging to the second type. The robots can distinguish between the two types of meeting nodes. However, a robot cannot identify its team members. This paper assumes the strongest adversarial model, namely the asynchronous scheduler. We have characterized all the initial configurations for which the gathering problem is unsolvable. For the remaining initial configurations, the paper proposes a distributed gathering algorithm. Assuming the robots are capable of global-weak multiplicity detection, the proposed algorithm solves the problem within a finite time period. The algorithm runs in Θ(dn) moves and O(dn) epochs, where d is the diameter of the minimum enclosing rectangle of all the robots and meeting nodes in the initial configuration, and n is the total number of robots in the system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. A maximum dual CUSUM chart for joint monitoring of process mean and variance.
- Author
-
Haq, Abdul and Ali, Qamar
- Subjects
MONTE Carlo method ,STATISTICAL process control ,DISTRIBUTED computing ,QUALITY control charts - Abstract
A dual chart provides more sensitivity than the conventional chart when the shift size is known to vary within a given interval. In this paper, we propose a maximum dual CUSUM (MDC) chart for monitoring joint shifts (lying in different intervals) in the mean and variance of a normally distributed process. The Monte Carlo simulation method is used to estimate the zero-state and steady-state run-length properties of the MDC chart, including the average run-length (ARL), expected weighted run-length (EWRL) and expected relative ARL (ERARL). Based on detailed run-length comparisons in terms of the EWRL and ERARL, the MDC chart is found to outperform the maximum adaptive EWMA (MAE) and maximum weighted adaptive CUSUM (MWAC) charts when detecting a range of joint shift sizes. Moreover, the diagnostic abilities of the MDC chart are also studied. Real and simulated datasets are considered to demonstrate the implementation of the MAE, MWAC and MDC charts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
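The building block of any CUSUM-type chart — recursive upper and lower cumulative sums with reference value k, combined into a single max statistic — can be sketched as follows. This shows a plain two-sided CUSUM for the mean only, not the full MDC design; the sample values are invented for illustration.

```python
def cusum(samples, target, k):
    """Two-sided CUSUM: C+ accumulates upward shifts from the target,
    C- accumulates downward ones. Returns max(C+, C-) at each step,
    the kind of single 'maximum' statistic a max-type chart plots."""
    c_plus = c_minus = 0.0
    stats = []
    for x in samples:
        c_plus = max(0.0, c_plus + (x - target) - k)
        c_minus = max(0.0, c_minus - (x - target) - k)
        stats.append(max(c_plus, c_minus))
    return stats

# In-control samples keep the statistic at 0; a mean shift makes it climb.
stats = cusum([10.1, 9.9, 10.0, 12.0, 12.2, 12.1], target=10.0, k=0.5)
```

A signal is raised when the plotted statistic crosses a control limit chosen (e.g., by Monte Carlo simulation) to achieve a target in-control ARL.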
47. RANDOMIZED KACZMARZ IN ADVERSARIAL DISTRIBUTED SETTING.
- Author
-
Huang, Longxiu, Li, Xia, and Needell, Deanna
- Subjects
PROBLEM solving ,LARGE scale systems - Abstract
Developing large-scale distributed methods that are robust to the presence of adversarial or corrupted workers is an important part of making such methods practical for real-world problems. In this paper, we propose an iterative approach that is adversary-tolerant for convex optimization problems. By leveraging simple statistics, our method ensures convergence and is capable of adapting to adversarial distributions. Through simulations, we demonstrate the efficiency of our approach for solving convex problems in the presence of adversaries, its ability to identify adversarial workers with high accuracy, and its tolerance of varying adversary rates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
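The classical randomized Kaczmarz iteration that this work makes adversary-tolerant solves Ax = b by repeatedly projecting the iterate onto the hyperplane of a randomly chosen row. The sketch below shows only that baseline iteration (with uniform row sampling and an invented example system), not the paper's adversarial-setting modifications.

```python
import random

def randomized_kaczmarz(A, b, iters=200, seed=0):
    """Baseline randomized Kaczmarz: at each step pick a row i at random
    and project: x <- x + (b_i - <a_i, x>) / ||a_i||^2 * a_i."""
    rng = random.Random(seed)
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        i = rng.randrange(len(A))
        a = A[i]
        residual = b[i] - sum(a[j] * x[j] for j in range(n))
        norm_sq = sum(v * v for v in a)
        x = [x[j] + residual / norm_sq * a[j] for j in range(n)]
    return x

# Consistent 2x2 system with solution (1, 2).
A = [[2.0, 1.0], [1.0, 3.0]]
b = [4.0, 7.0]
x = randomized_kaczmarz(A, b)
```

In the distributed adversarial setting, some workers report corrupted updates; the paper's contribution is filtering those with simple statistics while keeping this convergence behavior.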
48. ENHANCED FEATURE-DRIVEN MULTI-OBJECTIVE LEARNING FOR OPTIMAL CLOUD RESOURCE ALLOCATION.
- Author
-
I., UMA MAHESWARA RAO and SASTRY, J. K. R.
- Subjects
RESOURCE allocation ,DISTRIBUTED computing ,COMPUTER programming ,BIG data ,RANDOM forest algorithms ,MACHINE learning ,CLOUD computing - Abstract
In cloud networks, especially those with distributed computing setups and data centers, resource allocation is one of the biggest obstacles: system performance must be balanced against affordability, operational reliability, and energy efficiency. Recognizing the importance of improving resource allocation methodologies in these complex cloud computing systems, this paper proposes Enhanced Feature-Driven Multi-Objective Learning for Optimal Cloud Resource Allocation (OCRA), which integrates recent machine learning techniques with established concepts from cloud computing research. OCRA analyzes historical records of CPU, memory, disk and network usage, and assimilates large data sets such as past SLA compliance rates, workload frequencies over time, resource allocations, and patterns of service requests. An adaptive mechanism is one of the defining traits of the model: it anticipates changes in resource demand and adjusts supply immediately, responding rapidly to sudden or unexpected fluctuations. Multi-Objective Random Forests are at the core of OCRA. Each decision tree is designed with a particular performance objective in mind, and combining the trees into a Random Forest ensemble increases both the model's predictive accuracy and its stability. Pareto optimization is used to maintain a balance among performance indicators, avoiding excessive focus on any single objective. OCRA is evaluated empirically through experimental studies on key performance indicators such as Resource Utilization Rate and Quality of Service (QoS) Adherence Rate. 
OCRA is energy-efficient, an important attribute in today's environmentally conscious world, and does not sacrifice performance. In terms of speed, flexibility and overall efficiency, OCRA outperforms contemporary cloud resource allocation approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
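The Pareto-optimization step the abstract mentions — keeping only candidate allocations that no other allocation dominates on every objective — can be sketched as a non-dominated filter. The objective tuples (utilization, QoS adherence, energy efficiency) and their values are invented for illustration; this is the generic technique, not OCRA's implementation.

```python
def dominates(a, b):
    """a dominates b if a is at least as good on every objective
    (all objectives here are to be maximized) and strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep the candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# (utilization, QoS adherence, energy efficiency) per candidate allocation.
candidates = [(0.9, 0.8, 0.6), (0.7, 0.9, 0.7), (0.6, 0.7, 0.5)]
front = pareto_front(candidates)  # the third candidate is dominated
```

Restricting choices to the front is what prevents the optimizer from over-committing to one indicator at the expense of the others.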
49. Application of artificial neural synapses in soft robots.
- Author
-
Ding, Xuanting
- Subjects
ARTIFICIAL neural networks ,ROBOTS ,FAULT tolerance (Engineering) ,FAULT-tolerant computing ,DISTRIBUTED computing ,SYNAPSES ,PARALLEL processing - Abstract
Artificial neural networks are considered one of the effective ways to enable soft robots to achieve high-performance control, owing to significant advantages such as massively parallel processing, distributed storage of information, adaptivity, and fault tolerance. Artificial neural networks are composed of interconnected microelectronic components, whose most basic units are artificial neural synaptic devices such as atomic switches, memristors, and synaptic transistors. This paper first introduces the state of research on soft robots and artificial neural synapses, anticipates the demands soft robots place on artificial neural synapses, summarizes the difficulties and problems that may be encountered in applying artificial neural synapses to soft robots, and finally points out the importance and feasibility of artificial neural synapses in the research and development of soft robots. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Exploration of High-Dimensional Grids by Finite State Machines.
- Author
-
Dobrev, Stefan, Narayanan, Lata, Opatrny, Jaroslav, and Pankratov, Denis
- Subjects
FINITE state machines ,TIME complexity ,GRID computing ,DETERMINISTIC algorithms ,DISTRIBUTED computing ,POLYNOMIAL time algorithms - Abstract
We consider the problem of finding a "treasure" at an unknown point of an n-dimensional infinite grid, n ≥ 3, by initially collocated finite automaton (FA) agents. Recently, the problem has been well characterized for 2 dimensions for deterministic as well as randomized FA agents, in both synchronous and semi-synchronous models (Brandt et al. in Proceedings of the 32nd International Symposium on Distributed Computing (DISC), LIPIcs 121:13:1–13:17, 2018; Emek et al. in Theor Comput Sci 608:255–267, 2015). It has been conjectured that n + 1 randomized FA agents are necessary to solve this problem in the n-dimensional grid (Cohen et al. in Proceedings of the 28th SODA, SODA '17, pp 207–224, 2017). In this paper we disprove the conjecture in a strong sense: we show that three randomized synchronous FA agents suffice to explore an n-dimensional grid for any n. Our algorithm is optimal in terms of the number of agents. Our key insight is that a constant number of FA agents can, by their positions and movements, implement a stack that stores the path being explored. We also show how to implement our algorithm using four randomized semi-synchronous FA agents, four deterministic synchronous FA agents, or five deterministic semi-synchronous FA agents. We give a different, no-stack algorithm that uses 4 deterministic semi-synchronous FA agents for the 3-dimensional grid; this is provably optimal in the number of agents and the exploration cost and, surprisingly, matches the result for 2 dimensions. For n ≥ 4, the time complexity of the stack-based algorithms mentioned above is exponential in the distance D of the treasure from the starting point of the agents. We show that in the deterministic case, one additional finite automaton agent brings the time down to a polynomial. We also show that any algorithm using 3 synchronous deterministic FA agents in 3 dimensions must travel beyond Ω(D^{3/2}) from the origin. 
Finally, we show that all the above algorithms can be generalized to unoriented grids. More specifically, six deterministic semi-synchronous FA agents are sufficient to locate the treasure in an unoriented n-dimensional grid. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF