14,956 results for "EDGE computing"
Search Results
2. FPGA-Based High-Speed Energy-Efficient 32-Bit Fixed-Point MAC Architecture for DSP Application in IoT Edge Computing.
- Author
Nagar, Mitul Sudhirkumar, Patel, Sohan H., and Engineer, Pinalkumar
- Subjects
DIGITAL signal processing, EDGE computing, ARCHITECTURAL design, GATE array circuits, WORK design
- Abstract
Designing high-speed and energy-efficient blocks for image and digital signal processing (DSP) architectures is an evolving research field. This work designs a high-speed, energy-efficient multiply-accumulate (MAC) unit to augment the performance of field-programmable gate array (FPGA)-based accelerators and softcore processors. Three discrete 32-bit fixed-point signed MAC architectures were designed in Verilog and synthesized for the Zynq 7000 ZedBoard to identify the most efficient MAC architecture. The ultimate goal is a fast, energy-efficient MAC unit whose speed matches the DSP48 block, reducing the latency of IoT edge computing. For the proposed Booth radix-4 Dadda (BR4D)-based MAC, energy efficiency was achieved in both partial product generation (PPG) and partial product addition (PPA). At the PPG stage, the width of the partial product (PP) terms was optimized with Bewick's sign extension to reduce power consumption. At the PPA stage, Dadda-based reduction of the number of PP rows shortens the critical path delay (CPD). The proposed BR4D MAC unit reduces dynamic power, CPD, power-delay product (PDP), and energy-delay product (EDP) by 22%, 9%, 29%, and 36%, respectively, compared to a standard Booth radix-4 Wallace tree (BR4WT)-based MAC. Furthermore, the hybrid MACs (BR4WT and BR4D) were compared with current state-of-the-art (SoA) designs, and the proposed BR4D MAC is 47% faster than the corresponding SoA design. The proposed BR4D was also evaluated under frequency scaling, reducing the clock in steps of 10 MHz from the maximum usable frequency (MUF) of 64 MHz down to 10 MHz, to assess its suitability for low-power applications. Reducing the clock frequency by 84% reduces power consumption in the same proportion and speed by 38%. Additionally, the proposed design helps extend the battery life of IoT end nodes, reducing energy consumption and EDP by 76% and 61%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
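The BR4D design above hinges on Booth radix-4 recoding, which halves the number of partial products before a Dadda tree sums them. As a rough functional illustration in plain Python (not the paper's Verilog hardware), assuming the standard radix-4 Booth digit recoding:

```python
# Booth radix-4 recoding: scan overlapping 3-bit groups of the multiplier and
# emit digits in {-2,-1,0,1,2}; each digit yields one partial product, so a
# 32-bit operand needs 16 partial products instead of 32.
def booth_radix4_digits(multiplier: int, width: int = 32):
    """Recode a signed (two's-complement) multiplier into radix-4 Booth digits."""
    m = multiplier & ((1 << width) - 1)          # two's-complement bit pattern
    bits = [(m >> i) & 1 for i in range(width)]  # LSB first
    bits = [0] + bits + [bits[-1]]               # pad b_{-1} = 0, sign-extend
    digits = []
    for i in range(0, width, 2):
        b_lo, b_mid, b_hi = bits[i], bits[i + 1], bits[i + 2]
        digits.append(-2 * b_hi + b_mid + b_lo)
    return digits

def booth_mac(acc: int, a: int, b: int, width: int = 32) -> int:
    """acc += a * b, with the product built from radix-4 partial products."""
    product = 0
    for i, d in enumerate(booth_radix4_digits(b, width)):
        product += (d * a) << (2 * i)            # one partial product per digit
    return acc + product

assert booth_mac(10, -7, 13) == 10 + (-7 * 13)
```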
3. Data transmission optimization in edge computing using multi-objective reinforcement learning.
- Author
Li, Xiaole, Liu, Haitao, and Wang, Haifeng
- Subjects
REINFORCEMENT learning, OPTIMIZATION algorithms, COMPUTER network traffic, EDGE computing, DATA transmission systems
- Abstract
Reducing network energy consumption and balancing workload are two key optimization goals for data transmission in the edge computing field. However, these two goals can conflict in some cases and fail to reach their optima simultaneously. In this paper, we design a new data transmission optimization algorithm using multi-objective reinforcement learning. We design a vector of rewards for the two objectives and update the Pareto-approximate set over multiple state steps to approach the optimal solution. In every step, we classify the candidate links into four levels for path selection. We aggregate network traffic to construct a minimum topology subset, minimizing the number of occupied devices to reduce energy consumption. We optimize the load distribution on the selected links, minimizing the maximum congestion factor to balance workload. For action selection, we leverage a roulette-based Chebyshev scalarization function to solve the weight selection problem for the multiple objectives and enforce exploration to avoid falling into local optima. To improve the convergence rate, we design a heuristic factor to control the search of the solution space and enhance the guiding effect of the existing optimal solution. Simulation results show that the proposed algorithm achieves good performance in energy saving and load balancing at the same time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
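The action-selection step above rests on Chebyshev scalarization of vectorial Q-values. A minimal sketch of that mechanism with illustrative weights and utopian point z* (the paper's roulette-based weight selection and full learning loop are omitted):

```python
import numpy as np

def chebyshev_score(q_vec, weights, z_star):
    """Smaller is better: weighted max distance to the utopian point z*."""
    return np.max(weights * np.abs(z_star - q_vec))

def select_action(Q, weights, z_star, eps=0.1, rng=np.random.default_rng()):
    """Q: (n_actions, n_objectives) vectorial Q-values for the current state."""
    if rng.random() < eps:                        # enforced exploration
        return int(rng.integers(len(Q)))
    scores = np.array([chebyshev_score(q, weights, z_star) for q in Q])
    return int(np.argmin(scores))                 # closest to utopian point wins

Q = np.array([[0.9, 0.2],   # action 0: good energy saving, poor load balance
              [0.5, 0.6],   # action 1: balanced
              [0.1, 0.9]])  # action 2: poor energy saving, good load balance
print(select_action(Q, weights=np.array([0.5, 0.5]), z_star=np.array([1.0, 1.0])))
```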
4. MWformer: a novel low computational cost image restoration algorithm.
- Author
Liao, Jing, Peng, Cheng, Jiang, Lei, Ma, Yihua, Liang, Wei, Li, Kuan-Ching, and Poniszewska-Maranda, Aneta
- Subjects
IMAGE reconstruction, WAVELET transforms, DEEP learning, EDGE computing, MACHINE learning
- Abstract
The development of the Internet of Things has led to a surge in edge devices. Image detection, one of the algorithms commonly run in edge computing, is affected by environmental factors such as weather, light, air humidity, smoke, and dust, so an image restoration algorithm is needed to preprocess images in practice. Most recently proposed deep learning image restoration algorithms rely on general-purpose servers with high computational overhead to minimize these environmental effects. Edge devices are limited in size, power consumption, and computing performance, so deep learning-based image restoration algorithms perform poorly on them. In this work, we propose an image restoration algorithm that combines the wavelet transform and the transformer, named MWformer, to reduce computational overhead; we optimize the feature map size, network structure, and network depth, and introduce the wavelet transform to reduce the number of hyperparameters. Experimental tests on multiple public datasets covering various image restoration tasks show that the proposed MWformer maintains high performance across numerous restoration tasks, with a computational overhead that is, on average, only 10% of that of state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
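The wavelet step that lets a model like MWformer shrink its feature maps can be illustrated with a single-level 2D Haar transform, which splits an even-sized map into four half-resolution sub-bands. A minimal NumPy sketch (the transformer side of MWformer is not shown):

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar transform: (H, W) -> four (H/2, W/2) sub-bands.
    Assumes H and W are even."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency approximation (LL)
    lh = (a - b + c - d) / 2   # detail sub-band
    hl = (a + b - c - d) / 2   # detail sub-band
    hh = (a - b - c + d) / 2   # detail sub-band
    return ll, lh, hl, hh

ll, lh, hl, hh = haar_dwt2(np.random.rand(64, 64))
print(ll.shape)   # (32, 32): a quarter of the pixels per sub-band
```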
5. IoT Computing Collaboration and Data-Aware Routing Algorithm for Edge Computing and DL.
- Author
Li, Bo, Tang, JinHong, Yang, Zhihe, Jiang, Qing, and Wei, Jingyang
- Subjects
OPTIMIZATION algorithms, MOBILE computing, EDGE computing, ROUTING algorithms, DEEP learning, MARKOV processes, MULTICASTING (Computer networks)
- Abstract
With the development of mobile edge computing (MEC) and neural-network Deep Learning (DL), more and more scholars are studying the combination of the two. This paper studies the application of mobile edge computing and neural-network DL to IoT computing collaboration and data-aware routing algorithms. It discusses deployment options for MEC technology and ETSI MEC, combines mobile edge computing with DL, designs an optimization algorithm based on the Markov decision process and feature-representation learning, and then analyzes and optimizes IoT computing and the VANET routing algorithm. To give the optimization a clearer direction, the paper also designs edge computing model training and comparison experiments, a DL algorithm comparison experiment, and a routing algorithm simulation with performance analysis. Based on the experimental results, the design is optimized and compared with traditional IoT computing and routing algorithms. The conclusion is that the computing efficiency of the proposed edge computing- and DL-based IoT computing is 21.33% higher than that of traditional IoT computing, and the efficiency of the proposed routing algorithm is 9.29% higher than that of the traditional routing algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Security-aware energy-efficient design for mobile edge computing network operating with finite blocklength codes.
- Author
Shi, Chenhao, Hu, Yulin, Zhu, Yao, and Schmeink, Anke
- Subjects
PHYSICAL layer security, EDGE computing, MOBILE computing, ENERGY consumption, COMPUTER systems
- Abstract
Energy efficiency and physical-layer security are crucial considerations in the advancement of mobile edge computing systems. This paper addresses the trade-off between secure reliability and energy consumption in finite blocklength (FBL) communications. Specifically, we examine a three-node scenario involving a user, a legitimate edge computing server, and an eavesdropper, where the user offloads sensitive data to the edge server while facing potential eavesdropping threats. We propose an optimization framework aimed at minimizing energy consumption while ensuring secure reliability, decomposing the problem into manageable subproblems. By demonstrating the convexity of the objective function with respect to the variables, we establish the existence of an optimal parameter selection for the problem, which implies that practical parameter optimization can significantly enhance system performance. Our numerical results demonstrate that applying the FBL regime and a retransmission mechanism can effectively reduce the energy consumption of the system while ensuring secure reliability. Quantitatively, the retransmission mechanism performs 33.1% better than no retransmission, and the FBL regime performs 13.1% better than infinite blocklength (IBL) coding. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
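FBL analyses like the one above are commonly grounded in the normal approximation R ≈ C - sqrt(V/n) * Q^-1(eps). A numeric sketch for a complex AWGN channel, with illustrative SNR, blocklength, and error target (the paper's secrecy-aware optimization is not reproduced):

```python
import math
from statistics import NormalDist

def fbl_rate(snr: float, n: int, eps: float) -> float:
    """Normal-approximation achievable rate at blocklength n, error prob eps."""
    C = math.log2(1 + snr)                                  # Shannon capacity
    V = (1 - 1 / (1 + snr) ** 2) * math.log2(math.e) ** 2   # channel dispersion
    return C - math.sqrt(V / n) * NormalDist().inv_cdf(1 - eps)

# Short packets pay a visible rate penalty relative to capacity:
print(fbl_rate(snr=10.0, n=200, eps=1e-5))   # bits per channel use, < log2(11)
```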
7. Fog-assisted de-duplicated data exchange in distributed edge computing networks.
- Author
Said, Ghawar, Ghani, Anwar, Ullah, Ata, Alzahrani, Abdulrahman, Azeem, Muhammad, Ahmad, Rashid, and Kim, Do-Hyeun
- Subjects
DISTRIBUTED computing, EDGE computing, OPERATING costs, DATA warehousing, DATA transmission systems, INTERNET of things
- Abstract
The Internet of Things (IoT) generates substantial data through sensors for diverse applications, such as healthcare services. This article addresses the challenge of efficiently utilizing resources in resource-scarce IoT-enabled sensors to enhance data collection, transmission, and storage. Redundant data transmission from sensors covering overlapping areas incurs additional communication and storage costs. Existing schemes, namely Asymmetric Extremum (AE) and Rapid Asymmetric Maximum (RAM), employ fixed- and variable-sized windows during chunking. However, these schemes face issues when selecting the index value that decides the variable window size, which may remain zero or very low, resulting in poor deduplication. This article resolves this issue with the proposed Controlled Cut-point Identification Algorithm (CCIA), designed to restrict the variable-sized window to a certain threshold. The index value deciding the threshold is always larger than half the size of the fixed window, which helps find more duplicates, while an upper-limit offset avoids unnecessarily large windows that would incur extensive computation costs. Extensive simulations were performed by deploying Windows Communication Foundation services in the Azure cloud. The results demonstrate the superiority of CCIA on various metrics, including chunk number, average chunk size, minimum and maximum chunk number, variable chunking size, and probability of failure for cut-point identification. In comparison to its competitors, RAM and AE, CCIA exhibits better performance across key parameters. Specifically, CCIA outperforms them in total number of chunks (6.81%, 14.17%), average number of chunks (4.39%, 18.45%), and minimum chunk size (153%, 190%). These results highlight the effectiveness of CCIA in optimizing data transmission and storage within IoT systems, showcasing its potential for improved resource utilization and reduced operational costs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
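The CCIA idea above, bounding where a variable-size chunk may end, can be sketched as content-defined chunking with the cut-point search forced above half the fixed window and capped by an upper offset. Everything below (window sizes, mask, and the hash used as a rolling-hash stand-in) is an illustrative guess, not the paper's algorithm:

```python
import hashlib

FIXED_WIN = 8 * 1024
MIN_CUT = FIXED_WIN // 2 + 1   # lower bound: always beyond half the fixed window
MAX_CUT = 2 * FIXED_WIN        # upper offset: avoid oversized chunks

def chunks(data: bytes, mask: int = 0x1FFF):
    """Yield chunks whose boundaries are content-defined within [MIN_CUT, MAX_CUT]."""
    start = 0
    while start < len(data):
        end = min(start + MAX_CUT, len(data))
        cut = end                               # fall back to the upper limit
        for i in range(start + MIN_CUT, end):
            h = int.from_bytes(
                hashlib.blake2b(data[i - 8:i], digest_size=8).digest(), "big")
            if h & mask == 0:                   # content-defined cut point
                cut = i
                break
        yield data[start:cut]
        start = cut

data = bytes(range(256)) * 200                  # ~51 KB of sample bytes
print([len(c) for c in chunks(data)])           # duplicate regions chunk identically
```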
8. Efficient microservices offloading for cost optimization in diverse MEC cloud networks.
- Author
Mahesar, Abdul Rasheed, Li, Xiaoping, and Sajnani, Dileep Kumar
- Subjects
MOBILE computing, EDGE computing, ARCHITECTURAL style, MOBILE apps, CLOUD computing
- Abstract
In recent years, mobile applications have proliferated across domains such as E-banking, Augmented Reality, E-Transportation, and E-Healthcare. These applications are often built using microservices, an architectural style where the application is composed of independently deployable services focusing on specific functionalities. Mobile devices cannot process these microservices locally, so traditionally, cloud-based frameworks using cost-efficient Virtual Machines (VMs) and edge servers have been used to offload these tasks. However, cloud frameworks suffer from extended boot times and high transmission overhead, while edge servers have limited computational resources. To overcome these challenges, this study introduces a Microservices Container-Based Mobile Edge Cloud Computing (MCBMEC) environment and proposes an innovative framework, Optimization Task Scheduling and Computational Offloading with Cost Awareness (OTSCOCA). This framework addresses Resource Matching, Task Sequencing, and Task Scheduling to enhance server utilization, reduce service latency, and improve service bootup times. Empirical results validate the efficacy of MCBMEC and OTSCOCA, demonstrating significant improvements in server efficiency, reduced service latency, faster service bootup times, and notable cost savings. These outcomes underscore the pivotal role of these methodologies in advancing mobile edge computing applications amidst the challenges of edge server limitations and traditional cloud-based approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. An optimization scheme for vehicular edge computing based on Lyapunov function and deep reinforcement learning.
- Author
Zhu, Lin, Tan, Long, Li, Bingxian, and Tian, Huizi
- Subjects
DEEP reinforcement learning, MOBILE computing, COMPUTER networks, EDGE computing, DIGITAL twins, VEHICLE routing problem
- Abstract
Traditional vehicular edge computing research usually ignores the mobility of vehicles, the dynamic variability of the vehicular edge environment, the large amount of real-time data required for vehicular edge computing, the limited resources of edge servers, and collaboration issues. In response to these challenges, this article proposes a vehicular edge computing optimization scheme based on the Lyapunov function and deep reinforcement learning. In this scheme, Digital Twin (DT) technology is used to simulate the vehicular edge environment: an edge server DT simulates the environment under each edge server, and a base station DT simulates the entire vehicular edge system. Based on real-time data from the DT simulation, the paper defines a Lyapunov function to recast the migration cost of vehicle tasks between servers as a multi-objective dynamic optimization problem, which it solves with the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. Experimental results show that, compared with other algorithms, this scheme can effectively optimize the allocation and collaboration of vehicular edge computing resources and reduce the delay and energy consumption caused by vehicle task processing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. QoS-aware edge server placement for collaborative predictive maintenance in industrial internet of things.
- Author
Mehta, Aman and Verma, Rahul Kumar
- Subjects
INTERNET of things, PLANT maintenance, SYSTEM downtime, REMAINING useful life, FEDERATED learning, END-to-end delay
- Abstract
Machine failures during the manufacturing process can have severe consequences, causing extensive downtime and financial losses. Hence, predictive maintenance (PdM) plays a crucial role within the Industrial Internet of Things (IIoT) by estimating the remaining useful life (RUL) of machines so that proactive maintenance measures can be taken to mitigate potential failures and minimize disruptions. RUL estimation is enabled by gathering and processing the data sensed by sensors mounted on and around the machines at a central server (base station or cloud) after analyzing the failure patterns. However, this approach imposes a significant load on network bandwidth and leads to poor response times for the monitoring system because a large volume of sensed data must be transmitted to the central server for processing. Moreover, due to reliance on a single computing resource, problems such as inefficient resource utilization, frequent offloading, and single-point failure become major challenges. To address these issues, this article proposes an edge computing-enabled predictive maintenance framework called the Collaborative Predictive Maintenance Framework (CollabRULe), which first identifies optimal locations for edge server placement in the deployment region by considering QoS parameters such as energy, delay, and connectivity. It then uses federated learning for predictive maintenance by estimating the RUL of the machines. Simulation results show that the proposed mechanism effectively reduces overall network energy consumption and end-to-end delay by ≈60% and ≈35%, respectively, compared to state-of-the-art approaches. Furthermore, it shows significant improvement in RUL prediction accuracy compared to its counterparts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
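The placement step above scores candidate sites on energy, delay, and connectivity. A toy greedy sketch of such QoS-weighted placement (the weights, scoring rule, and site data are stand-ins, not the paper's CollabRULe method):

```python
def place_edge_servers(sites, k, w_energy=0.3, w_delay=0.4, w_conn=0.3):
    """sites: {name: (energy_cost, avg_delay, connectivity)}. Lower energy and
    delay are better (enter the score negatively); higher connectivity is better."""
    def score(name):
        e, d, c = sites[name]
        return -w_energy * e - w_delay * d + w_conn * c
    return sorted(sites, key=score, reverse=True)[:k]

sites = {"gateway-1": (2.0, 10.0, 0.9),
         "gateway-2": (1.0, 25.0, 0.6),
         "gateway-3": (1.5, 12.0, 0.8)}
print(place_edge_servers(sites, k=2))   # ['gateway-1', 'gateway-3']
```

A real deployment would normalize each criterion before weighting; the raw sums here are only to show the shape of the trade-off.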
11. DGCQN: a RL and GCN combined method for DAG scheduling in edge computing.
- Author
Qin, Bin, Lei, Qinyang, and Wang, Xin
- Subjects
EDGE computing, REINFORCEMENT learning, DIRECTED acyclic graphs, CONVOLUTIONAL neural networks, HETEROGENEOUS computing, SCHEDULING
- Abstract
Edge computing is an emerging paradigm that enables low-latency and high-performance computing at the network edge. However, effectively scheduling complex and interdependent tasks on heterogeneous and dynamic edge computing nodes presents a significant challenge in meeting users' real-time response requirements. To solve this problem, this paper proposes DGCQN, a scheduling network that leverages reinforcement learning and graph convolutional neural networks to learn an optimal scheduling strategy. The proposed method embeds the graph structure of Directed Acyclic Graph (DAG) tasks and the node information of Kubernetes (K8s) clusters into a Q-value function, guiding the DQN network in selecting the best action at each step. The method is evaluated across various DAG tasks and edge computing scenarios. Compared with HEFT, DQN, and GOSU, the task completion time of the proposed method is reduced by about 20%, 10%, and 1.5%, respectively. The results demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
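The scheduling loop the abstract describes, scoring (ready task, node) actions with a learned Q value, can be mimicked with a plain greedy loop. The `q_value` stub below is a heuristic placeholder for the GCN+DQN network:

```python
def schedule(dag, durations, q_value, n_nodes=2):
    """dag: {task: set_of_predecessors}. Returns (task, node) picks in order."""
    done, free_at, order = set(), {n: 0.0 for n in range(n_nodes)}, []
    while len(done) < len(dag):
        ready = [t for t in dag if t not in done and dag[t] <= done]
        # pick the (task, node) action the Q-function scores highest
        task, node = max(((t, n) for t in ready for n in free_at),
                         key=lambda a: q_value(a, free_at, durations))
        free_at[node] += durations[task]
        done.add(task)
        order.append((task, node))
    return order

dag = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
durations = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 1.0}
# earliest-finish heuristic standing in for the trained DGCQN Q-network:
heuristic_q = lambda a, free_at, dur: -(free_at[a[1]] + dur[a[0]])
print(schedule(dag, durations, heuristic_q))
```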
12. A reliability prediction model for a multistate cloud/edge-based network based on a deep neural network.
- Author
Huang, Ding-Hsiang, Huang, Cheng-Fu, and Lin, Yi-Kuei
- Subjects
ARTIFICIAL neural networks, WEB services, EDGE computing, PREDICTION models, CLOUD computing
- Abstract
Network reliability, hereafter called multistate stochastic cloud/edge-based network (MCEN) reliability, is defined as the probability that demands can be satisfied by an MCEN. It can be regarded as a performance indicator of the MCEN's service capability. Existing algorithms produce all minimal system-state vectors to calculate MCEN reliability. However, this approach cannot deliver MCEN reliability in time once the MCEN scale grows complicated in the Industry 4.0 environment. To provide MCEN reliability for immediate decision making, a deep neural network (DNN) architecture is developed as a prediction model for MCEN reliability, so that MCEN capability under varied data can be learned promptly. To train the reliability prediction model, MCEN information is transformed into a suitable format, and the DNN settings, including the relevant functions, are defined with appropriate hyperparameters using Bayesian Optimization. An illustrative case and a practical Amazon Web Services case demonstrate the availability and efficiency of the prediction model for MCEN reliability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
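The core idea, learning a fast surrogate for reliability instead of enumerating minimal system-state vectors, can be sketched with a small regressor on synthetic data (the paper's DNN architecture and Bayesian-optimized hyperparameters are not reproduced; the state encoding and target below are toy stand-ins):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((2000, 8))                        # encoded network states (capacities, demand)
y = 1.0 / (1.0 + np.exp(-(X.sum(axis=1) - 4)))   # toy reliability target in (0, 1)

# Once trained, prediction is near-instant, unlike enumeration-based methods.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
print(model.predict(X[:3]))
```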
13. Intelligent Edge-powered Data Reduction: A Systematic Literature Review.
- Author
Pioli, Laércio, de Macedo, Douglas D. J., Costa, Daniel G., and Dantas, Mario A. R.
- Published
- 2024
- Full Text
- View/download PDF
14. Image Guidance Encoder-Decoder Model in Image Captioning and Its Application.
- Author
Zhen Yang, Ziwei Zhou, Chaoyang Wang, and Liang Xu
- Subjects
EDGE computing, ATTENTION, ENCODING
- Abstract
This paper introduces a new network model, the Image Guidance Encoder-Decoder Model (IG-ED), designed to enhance the efficiency of image captioning and improve predictive accuracy. IG-ED, a fusion of the convolutional network VGGNet-16 and a long short-term memory (LSTM) network, is built on the encoder-decoder structure, and image captioning performance improves significantly when leveraging it. Network training unfolds in a series of steps. Initially, the input image is convolved by the VGGNet-16 network, producing a 512-dimensional vector. Concurrently, each word in the image's caption is encoded into a corresponding 512-dimensional vector consistent with the image feature dimension. These two vectors form the input to the decoding process. Subsequently, the vectors are fed into the redesigned fusion LSTM (F-LSTM) network at different time steps to gradually train the parameters of the IG-ED framework. Training concludes when a loss function indicates convergence. The IG-ED model's performance is evaluated using CIDEr and seven other metrics on the MSCOCO 2014 dataset, and the results show substantial improvements over the "Adaptive Attention Mode" and "Neural Talk" networks. Additionally, the parameter count of the IG-ED architecture is significantly lower than that of the "Adaptive Attention Mode" network, reducing computational resource requirements and enabling the network to run on edge computing devices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
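The guidance wiring described above, a 512-d image feature prepended to 512-d word embeddings feeding an LSTM decoder, can be condensed into a PyTorch sketch (VGGNet-16 feature extraction and the F-LSTM fusion details are omitted; sizes are illustrative):

```python
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    def __init__(self, vocab_size, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)      # word -> 512-d vector
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, img_feat, captions):
        # prepend the 512-d image feature as the first "token" (the guidance)
        tokens = self.embed(captions)                        # (B, T, 512)
        seq = torch.cat([img_feat.unsqueeze(1), tokens], 1)  # (B, T+1, 512)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                              # next-word logits

model = TinyCaptioner(vocab_size=10000)
logits = model(torch.randn(2, 512), torch.randint(0, 10000, (2, 12)))
print(logits.shape)   # torch.Size([2, 13, 10000])
```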
15. WIDESim: A Toolkit for Simulating Resource Management Techniques Of Scientific Workflows in Distributed Environments with Graph Topology.
- Author
Rayej, Mohammad Amin, Siar, Hajar, Hamzei, Ahmadreza, Majidi Yazdi, Mohammad Sadegh, Mohammadian, Parsa, and Izadi, Mohammad
- Abstract
Modeling IoT applications in distributed computing systems as workflows enables automating their execution, and different types of workflow-based applications appear in the literature. Executing IoT applications using device-to-device (D2D) communications in distributed computing systems, especially edge paradigms, requires direct communication between devices in a network with a graph topology. This paper introduces WIDESim, a toolkit for simulating resource management of scientific workflows with different structures in distributed environments with graph topology. The proposed simulator enables dynamic resource management and scheduling. We have validated the performance of WIDESim against standard simulators and also evaluated it in real-world distributed computing scenarios. The results indicate that WIDESim's performance is close to that of existing standard simulators, in addition to its improvements, and demonstrate the satisfactory performance of the extended features incorporated within WIDESim. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. A Quasi-Oppositional Learning-based Fox Optimizer for QoS-aware Web Service Composition in Mobile Edge Computing.
- Author
Sharif, Ramin Habibzadeh, Masdari, Mohammad, Ghaffari, Ali, and Gharehchopogh, Farhad Soleimanian
- Abstract
Web service-based edge computing networks are now widespread, and their users are increasing dramatically. Network users request various services with specific Quality-of-Service (QoS) values. QoS-aware Web Service Composition (WSC) methods assign available services to users' tasks and significantly affect user satisfaction. Various methods have been proposed to solve the QoS-aware WSC problem; however, the field remains an active research area since the dimensions of these networks, the number of their users, and the variety of provided services are growing remarkably. Consequently, this study presents an enhanced Fox Optimizer (FOX)-based framework named EQOLFOX to solve QoS-aware web service composition problems in edge computing environments. In this regard, quasi-oppositional learning is utilized in EQOLFOX to diminish the zero-orientation nature of the FOX algorithm. Likewise, a reinitialization strategy is included to enhance EQOLFOX's exploration capability. Besides, a new phase with two new movement strategies is introduced to improve searching abilities. A multi-best strategy is also recruited to escape local optima and guide the population more optimally. Eventually, a greedy selection approach is employed to augment the convergence rate and exploitation capability. EQOLFOX is applied to ten real-life and artificial web-service-based edge computing environments, each with four different task counts, to evaluate its proficiency. The obtained results are compared with the DO, FOX, JS, MVO, RSA, SCA, SMA, and TSA algorithms both numerically and visually. The experimental results indicate the effectiveness of the contributions and the competency of EQOLFOX. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
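The quasi-oppositional learning step that EQOLFOX adds to FOX has a standard definition: for a candidate x in [lo, hi], sample uniformly between the interval centre and the opposite point lo + hi - x. A sketch under that standard definition (the population and bounds are illustrative):

```python
import numpy as np

def quasi_opposite(x, lo, hi, rng=np.random.default_rng()):
    """Quasi-opposite point: uniform between the interval centre (lo + hi) / 2
    and the opposite point lo + hi - x, per quasi-oppositional-based learning."""
    centre = (lo + hi) / 2.0
    opposite = lo + hi - x
    return rng.uniform(np.minimum(centre, opposite), np.maximum(centre, opposite))

pop = np.random.rand(5, 3) * 10          # candidate service compositions in [0, 10]
print(quasi_opposite(pop, 0.0, 10.0))    # quasi-opposite candidates to evaluate too
```

Evaluating both a candidate and its quasi-opposite roughly doubles the chance that one of them starts near the optimum, which is the usual rationale for the technique.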
17. Resource Allocation Using Deep Deterministic Policy Gradient-Based Federated Learning for Multi-Access Edge Computing.
- Author
Zhou, Zheyu, Wang, Qi, Li, Jizhou, and Li, Ziyuan
- Abstract
The study focuses on utilizing the computational resources present in vehicles to enhance the performance of multi-access edge computing (MEC) systems. While vehicles are typically equipped with computational services for vehicle-centric Internet of Vehicles (IoV) applications, their resources can also be leveraged to reduce the workload on edge servers and improve task processing speed in MEC scenarios. Previous research efforts have overlooked the potential resource utilization of passing vehicles, which can be a valuable addition to MEC systems alongside parked cars. This study introduces an assisted MEC scenario where a base station (BS) with an edge server serves various devices, parked cars, and vehicular traffic. A cooperative approach using Deep Deterministic Policy Gradient (DDPG)-based Federated Learning is proposed to optimize resource allocation and job offloading. This method enables the transfer of device operations from devices to the BS or from the BS to vehicles based on specific requirements. The proposed system also considers the duration for which a vehicle can provide job offloading services within the range of the BS before leaving. The objective of the DDPG-FL method is to minimize the overall priority-weighted task computation time. Through simulation results and a comparison with three other schemes, the study demonstrates the superiority of the proposed method in seven different scenarios. The findings highlight the potential of incorporating vehicular resources in MEC systems, showcasing improved task processing efficiency and overall system performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. A general framework for metaverse based on parallel computing and HPC.
- Author
Al Khaldy, Mohammad Ali, Al-Qerem, Ahmad, Aldweesh, Amjad, and Alauthman, Mohammad
- Subjects
ARTIFICIAL intelligence, SHARED virtual environments, CUSTOMER experience, PARALLEL programming, ELECTRONIC data processing
- Abstract
As the virtual and physical universes merge within the developing metaverse, demand has risen sharply for real-time, interactive, and immersive experiences. The metaverse's ability to efficiently analyze and render the complicated links and information supplied by users is critical to realizing that goal. These demanding computational requirements are increasingly supported by parallel processing, and high-performance computing (HPC) is unquestionably key to this domain. The integrative framework presented in this paper addresses the core challenges of latency, flexibility, and ease of use while integrating parallel computing into the metaverse. The system enables prompt handling of user actions and quick response times by distributing calculations over multiple processors, which is essential for a seamless user experience. It also manages the vast amount of metaverse content and interactions as well as the various data processing needs. The paper examines the parallel-processing difficulties inherent in this unique setting, including creating scalable and energy-efficient parallel algorithms that account for load balancing and resource allocation. It highlights the need to democratize parallel computing resources to support metaverse expansion while emphasizing the importance of data protection and security protocols in multi-user settings. The synergy between metaverse development and advances in parallel computing promises to push boundaries, enabling remarkable degrees of virtual immersion and collaboration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Image super‐resolution reconstruction based on implicit image functions.
- Author
Lin, Hai and Yang, JunJie
- Subjects
MULTILAYER perceptrons, IMAGE reconstruction, DEEP learning, COMPUTING platforms, EDGE computing, IMAGE reconstruction algorithms
- Abstract
Image super-resolution (SR) reconstruction is a key technique for improving image quality and detail. Conventional methods are frequently limited by interpolation, filtering, or statistical approaches and thus cannot reconstruct high-quality, continuously enlarged images with detailed information. This study proposes an image SR reconstruction network model, called LALNet, based on implicit image functions and a residual multilayer perceptron (RAMLP) with an attention mechanism. Through the implicit image function and RAMLP with attention, high-quality SR reconstruction with continuous scale factors is achieved, and LALNet can run on embedded edge computing platforms. The method has the following advantages: the lightweight network structure reduces computing requirements, the implicit image functions and RAMLP improve reconstruction quality, and the attention mechanism suppresses artefacts and distortions. Experimental results show that LALNet outperforms traditional and other deep learning methods in reconstruction performance and computational efficiency. This research provides new ideas and methods for the further development of image SR reconstruction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Smart connected farms and networked farmers to improve crop production, sustainability and profitability.
- Author
Singh, Asheesh K., Balabaygloo, Behzad J., Bekee, Barituka, Blair, Samuel W., Fey, Suzanne, Fotouhi, Fateme, Gupta, Ashish, Jha, Amit, Martinez-Palomares, Jorge C., Menke, Kevin, Prestholt, Aaron, Tanwar, Vishesh K., Xu Tao, Vangala, Anusha, Carroll, Matthew E., Das, Sajal K., DePaula, Guilherme, Kyveryga, Peter, Sarkar, Soumik, and Segovia, Michelle
- Subjects
DATA privacy, DATA analytics, AGRICULTURE, SOCIOECONOMICS, INNOVATION adoption
- Abstract
To meet the grand challenges of agricultural production, including climate change impacts on crop production, a tight integration of social science, technology, and agriculture experts, including farmers, is needed. Rapid advances in information and communication technology, precision agriculture, and data analytics are creating a perfect opportunity for the creation of smart connected farms (SCFs) and networked farmers. A connected and coordinated farmer network provides unique advantages to farmers to enhance farm production and profitability while tackling adverse climate events. The aim of this article is to provide a comprehensive overview of the state of the art in SCFs, including advances in engineering, computer science, data science, social science, and economics, covering data privacy, sharing, and technology adoption. More specifically, we provide a comprehensive review of the key components of SCFs and the crucial elements necessary for their success: high-speed connections, sensors for data collection, and edge, fog, and cloud computing, along with innovative wireless technologies that enable a cyber-agricultural system. We also cover the adoption of these technologies, which involves important considerations around data analysis, privacy, and the sharing of data on platforms. From a social science and economics perspective, we examine the net benefits and potential barriers to data sharing within agricultural communities, and the behavioral factors influencing the adoption of SCF technologies. The focus of this review is the state of the art in smart connected farms with sufficient technological infrastructure; however, the information included herein can be utilized in geographies and farming systems that are adopting digital technologies and want to develop SCFs. Overall, taking a holistic view that spans technical, social, and economic dimensions is key to understanding the impacts and future trajectory of smart and connected farms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. A scalable blockchain-enabled federated learning architecture for edge computing.
- Author
Ren, Shuyang, Kim, Eunsam, and Lee, Choonhwa
- Subjects
FEDERATED learning, ELECTRONIC data processing, DEEP learning, EDGE computing, INTELLIGENT buildings
- Abstract
Various deep learning techniques, including blockchain-based approaches, have been explored to unlock the potential of edge data processing and resultant intelligence. However, existing studies often overlook the resource requirements of blockchain consensus processing in typical Internet of Things (IoT) edge network settings. This paper presents our FLCoin approach. Specifically, we propose a novel committee-based method for consensus processing in which committee members are elected via the FL process. Additionally, we employed a two-layer blockchain architecture for federated learning (FL) processing to facilitate the seamless integration of blockchain and FL techniques. Our analysis reveals that the communication overhead remains stable as the network size increases, ensuring the scalability of our blockchain-based FL system. To assess the performance of the proposed method, experiments were conducted using the MNIST dataset to train a standard five-layer CNN model. Our evaluation demonstrated the efficiency of FLCoin. With an increasing number of nodes participating in the model training, the consensus latency remained below 3 s, resulting in a low total training time. Notably, compared with a blockchain-based FL system utilizing PBFT as the consensus protocol, our approach achieved a 90% improvement in communication overhead and a 35% reduction in training time cost. Our approach ensures an efficient and scalable solution, enabling the integration of blockchain and FL into IoT edge networks. The proposed architecture provides a solid foundation for building intelligent IoT services. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Beehive Smart Detector Device for the Detection of Critical Conditions That Utilize Edge Device Computations and Deep Learning Inferences.
- Author
Kontogiannis, Sotirios
- Subjects
MACHINE learning, CONVOLUTIONAL neural networks, DECISION support systems, COLONY collapse disorder of honeybees, COMPUTER systems, FUZZY neural networks, DEEP learning
- Abstract
This paper presents a new edge detection process implemented in an embedded IoT device called the Bee Smart Detection node to detect catastrophic apiary events. Such events include swarming, queen loss, and the detection of Colony Collapse Disorder (CCD) conditions. Two deep learning sub-processes are used for this purpose. The first uses a fuzzy multi-layered neural network of variable depth, called fuzzy-stranded-NN, to detect CCD conditions based on temperature and humidity measurements inside the beehive. The second utilizes a deep learning CNN model to detect swarming and queen loss cases based on sound recordings. The proposed processes have been implemented in autonomous Bee Smart Detection IoT devices that transmit their measurements and detection results to the cloud over Wi-Fi. The BeeSD devices have been tested for ease of use, autonomous operation, deep learning model inference accuracy, and inference execution speed. The author presents experimental results of the fuzzy-stranded-NN model for detecting critical conditions and of the deep learning CNN models for detecting swarming and queen loss. The stranded-NN achieved accuracy of up to 95%, while the ResNet-50 model achieved accuracy of up to 99% for detecting swarming or queen loss events. The ResNet-18 model is a faster-inference replacement for ResNet-50, achieving up to 93% accuracy. Finally, cross-comparison of the deep learning models with machine learning ones shows that the deep learning models provide at least 3–5% better accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Presenting the COGNIFOG Framework: Architecture, Building Blocks and Road toward Cognitive Connectivity.
- Author
Adame, Toni, Amri, Emna, Antonopoulos, Grigoris, Azaiez, Selma, Berne, Alexandre, Camargo, Juan Sebastian, Kakoulidis, Harry, Kleisarchaki, Sofia, Llamedo, Alberto, Prasinos, Marios, Psara, Kyriaki, and Shumaiev, Klym
- Subjects
ARTIFICIAL intelligence, REAL-time computing, MACHINE learning, COMPUTER systems, UBIQUITOUS computing
- Abstract
In the era of ubiquitous computing, the challenges imposed by the increasing demand for real-time data processing, security, and energy efficiency call for innovative solutions. The emergence of fog computing has provided a promising paradigm to address these challenges by bringing computational resources closer to data sources. Despite its advantages, the fog computing characteristics pose challenges in heterogeneous environments in terms of resource allocation and management, provisioning, security, and connectivity, among others. This paper introduces COGNIFOG, a novel cognitive fog framework currently under development, which was designed to leverage intelligent, decentralized decision-making processes, machine learning algorithms, and distributed computing principles to enable the autonomous operation, adaptability, and scalability across the IoT–edge–cloud continuum. By integrating cognitive capabilities, COGNIFOG is expected to increase the efficiency and reliability of next-generation computing environments, potentially providing a seamless bridge between the physical and digital worlds. Preliminary experimental results with a limited set of connectivity-related COGNIFOG building blocks show promising improvements in network resource utilization in a real-world-based IoT scenario. Overall, this work paves the way for further developments on the framework, which are aimed at making it more intelligent, resilient, and aligned with the ever-evolving demands of next-generation computing environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Corun: Concurrent Inference and Continuous Training at the Edge for Cost-Efficient AI-Based Mobile Image Sensing.
- Author
Liu, Yu, Andhare, Anurag, and Kang, Kyoung-Don
- Subjects
IMAGE recognition (Computer vision), ARTIFICIAL intelligence, EDGE computing, MOBILE apps, SMARTWATCHES, DEEP learning
- Abstract
Intelligent mobile image sensing powered by deep learning analyzes images captured by cameras from mobile devices, such as smartphones or smartwatches. It supports numerous mobile applications, such as image classification, face recognition, and camera scene detection. Unfortunately, mobile devices often lack the resources necessary for deep learning, leading to increased inference latency and rapid battery consumption. Moreover, the inference accuracy may decline over time due to potential data drift. To address these issues, we introduce a new cost-efficient framework, called Corun, designed to simultaneously handle multiple inference queries and continual model retraining/fine-tuning of a pre-trained model on a single commodity GPU in an edge server to significantly improve the inference throughput, upholding the inference accuracy. The scheduling method of Corun undertakes offline profiling to find the maximum number of concurrent inferences that can be executed along with a retraining job on a single GPU without incurring an out-of-memory error or significantly increasing the latency. Our evaluation verifies the cost-effectiveness of Corun. The inference throughput provided by Corun scales with the number of concurrent inference queries. However, the latency of inference queries and the length of a retraining epoch increase at substantially lower rates. By concurrently processing multiple inference and retraining tasks on one GPU instead of using a separate GPU for each task, Corun could reduce the number of GPUs and cost required to deploy mobile image sensing applications based on deep learning at the edge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Edge Computing and Fault Diagnosis of Rotating Machinery Based on MobileNet in Wireless Sensor Networks for Mechanical Vibration.
- Author
Huang, Yi, Liang, Shuang, Cui, Tingqiong, Mu, Xiaojing, Luo, Tianhong, Wang, Shengxue, and Wu, Guangyong
- Subjects
REAL-time computing, WIRELESS sensor networks, VIBRATION (Mechanics), PROCESS capability, SENSOR networks, MICROCONTROLLERS
- Abstract
With the rapid development of the Industrial Internet of Things in rotating machinery, the amount of data sampled by mechanical vibration wireless sensor networks (MvWSNs) has increased significantly, straining bandwidth capacity. Concurrently, safety requirements for rotating machinery have escalated, necessitating enhanced real-time data processing capabilities. Conventional methods, reliant on experiential approaches, have proven inefficient in meeting these evolving challenges. To this end, a fault detection method for rotating machinery based on MobileNet in MvWSNs is proposed to address these intractable issues. The small, lightweight deep learning model helps realize near-real-time sensing and fault detection, lightening the communication load on MvWSNs. The trained deep learning model is deployed on the MvWSN sensor node, an edge computing platform developed with embedded STM32 microcontrollers (STMicroelectronics International NV, Geneva, Switzerland). Data acquisition, data processing, and data classification are all executed on the computing- and energy-constrained sensor node. The experimental results demonstrate that the proposed fault detection method achieves an accuracy of about 0.99 on the DDS dataset and 0.98 on the MvWSN sensor node. Furthermore, the final transmitted data size is only 0.1% of the original data size. The method is also time-saving: detection completes within 135 ms, whereas transmitting the raw data to the monitoring center takes about 1000 ms when there are four sensor nodes in the network. Thus, the proposed edge computing method shows good application prospects for fault detection and control of rotating machinery with high time sensitivity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. GVC-YOLO: A Lightweight Real-Time Detection Method for Cotton Aphid-Damaged Leaves Based on Edge Computing.
- Author
Zhang, Zhenyu, Yang, Yunfan, Xu, Xin, Liu, Liangliang, Yue, Jibo, Ding, Ruifeng, Lu, Yanhui, Liu, Jie, and Qiao, Hongbo
- Subjects
COTTON aphid, EDGE computing, REAL-time computing, PEST control, AGRICULTURE, COTTON
- Abstract
Cotton aphids (Aphis gossypii Glover) pose a significant threat to cotton growth, exerting detrimental effects on both yield and quality. Conventional methods for pest and disease surveillance in agricultural settings suffer from a lack of real-time capability. The use of edge computing devices for real-time processing of cotton aphid-damaged leaves captured by field cameras holds significant practical research value for large-scale disease and pest control measures. The mainstream detection models are generally large in size, making it challenging to achieve real-time detection on edge computing devices with limited resources. In response to these challenges, we propose GVC-YOLO, a real-time detection method for cotton aphid-damaged leaves based on edge computing. Building upon YOLOv8n, lightweight GSConv and VoVGSCSP modules are employed to reconstruct the neck and backbone networks, thereby reducing model complexity while enhancing multiscale feature fusion. In the backbone network, we integrate the coordinate attention (CA) mechanism and the SimSPPF network to increase the model's ability to extract features of cotton aphid-damaged leaves, balancing the accuracy loss of the model after becoming lightweight. The experimental results demonstrate that the size of the GVC-YOLO model is only 5.4 MB, a decrease of 14.3% compared with the baseline network, with a reduction of 16.7% in the number of parameters and 17.1% in floating-point operations (FLOPs). The mAP@0.5 and mAP@0.5:0.95 reach 97.9% and 90.3%, respectively. The GVC-YOLO model is optimized and accelerated by TensorRT and then deployed onto the embedded edge computing device Jetson Xavier NX for detecting cotton aphid damage video captured from the camera. Under FP16 quantization, the detection speed reaches 48 frames per second (FPS). In summary, the proposed GVC-YOLO model demonstrates good detection accuracy and speed, and its performance in detecting cotton aphid damage in edge computing scenarios meets practical application needs. This research provides a convenient and effective intelligent method for the large-scale detection and precise control of pests in cotton fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. P-CA: Privacy-Preserving Convolutional Autoencoder-Based Edge–Cloud Collaborative Computing for Human Behavior Recognition.
- Author
Wang, Haoda, Qiu, Chen, Zhang, Chen, Xu, Jiantao, and Su, Chunhua
- Subjects
ARTIFICIAL neural networks, EDGE computing, HUMAN behavior, PERFORMANCE technology, DEEP learning, RECOGNITION (Psychology)
- Abstract
With the development of edge computing and deep learning, intelligent human behavior recognition has spawned extensive applications in smart worlds. However, current edge computing technology faces performance bottlenecks due to limited computing resources at the edge, which prevent the deployment of advanced deep neural networks. In addition, there is a risk of privacy leakage during interactions between the edge and the server. To tackle these problems, we propose an effective, privacy-preserving edge–cloud collaborative interaction scheme based on WiFi, named P-CA, for human behavior sensing. In our scheme, a convolutional autoencoder neural network is split into two parts: the shallow layers are deployed on the edge side for inference and privacy-preserving processing, while the deep layers are deployed on the server side to leverage its computing resources. Experimental results based on datasets collected from real testbeds demonstrate the effectiveness and considerable performance of P-CA. The recognition accuracy remains at 88%, compared with about 94.8% without the mixing operation. In addition, the proposed P-CA achieves better recognition accuracy than two state-of-the-art methods, i.e., FedLoc and PPDFL, by 2.7% and 2.1%, respectively, while maintaining privacy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Maximizing Computation Rate for Sustainable Wireless-Powered MEC Network: An Efficient Dynamic Task Offloading Algorithm with User Assistance.
- Author
He, Huaiwen, Huang, Feng, Zhou, Chenghao, Shen, Hong, and Yang, Yihong
- Subjects
WIRELESS power transmission, MOBILE computing, MATHEMATICAL analysis, EDGE computing, POWER resources
- Abstract
In the Internet of Things (IoT) era, Mobile Edge Computing (MEC) significantly enhances the efficiency of smart devices but is limited by battery life. Wireless Power Transfer (WPT) addresses this issue by providing a stable energy supply. However, effectively managing overall energy consumption remains a critical and under-addressed requirement for the network's sustainable operation and growth. In this paper, we consider a WPT-MEC network with user cooperation to mitigate the double near–far effect for mobile nodes (MDs) far from the base station. We formulate the problem of maximizing the long-term computation rate under a power consumption constraint as a multi-stage stochastic optimization (MSSO) problem. This approach is tailored to a sustainable WPT-MEC network, accounting for the dynamic, varying MEC environment, including randomness in task arrivals and fluctuating channels. We introduce a virtual queue to transform the time-average energy constraint into a queue stability problem. Using the Lyapunov optimization technique, we decouple the stochastic optimization problem into a deterministic problem for each time slot, which can be further transformed into a convex problem and solved efficiently. Our proposed algorithm works efficiently online without requiring further system information. Extensive simulation results demonstrate that the proposed algorithm outperforms baseline schemes, achieving approximately a 4% enhancement while maintaining queue stability. Rigorous mathematical analysis and experimental results show that the algorithm achieves an [O(1/V), O(V)] trade-off between computation rate and queue stability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
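The virtual-queue device used above is standard Lyapunov optimization: the time-average energy budget becomes a backlog that the per-slot drift-plus-penalty objective penalizes. A minimal sketch (the symbols e_t, e_avg, and V follow the usual convention, not necessarily the paper's notation):

```python
def virtual_queue_step(Q: float, e_t: float, e_avg: float) -> float:
    """Q(t+1) = max(Q(t) + e(t) - e_avg, 0): the backlog grows whenever the
    slot's energy use e(t) exceeds the long-run budget e_avg."""
    return max(Q + e_t - e_avg, 0.0)

def drift_plus_penalty(rate: float, e_t: float, Q: float, V: float) -> float:
    """Per-slot score for a candidate decision: larger V favors computation
    rate; a large backlog Q makes energy-hungry choices unattractive."""
    return V * rate - Q * e_t
```

Maximizing `drift_plus_penalty` slot by slot is what yields the [O(1/V), O(V)] rate/queue-stability trade-off cited in the abstract.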
29. A secure and efficient authentication key agreement scheme for industrial internet of things based on edge computing.
- Author
Zhu, Wenlong, Chen, Xuexiao, and Jiang, Linmei
- Subjects
ELLIPTIC curve cryptography, EDGE computing, INDUSTRIALISM, DATA security, INTERNET of things, PUBLIC key cryptography
- Abstract
The Industrial Internet of Things (IIoT) is a pivotal driving force behind the intelligent transformation of the global industrial system, bringing about a gradual shift in production methods. However, because users complete remote access and data transmission through open channels, ensuring user legitimacy and data security has become an important issue in IIoT. In addition, resource-limited users and devices must collect and publish messages while processing instant messages obtained from the network in a timely manner. Leveraging edge computing effectively reduces message transmission time, addressing the high computing pressure and prolonged response times associated with cloud server computing. Therefore, investigating authenticated key agreement based on edge computing in the IIoT environment holds substantial theoretical significance and practical importance. This article introduces an IIoT authentication scheme that ensures both security and efficiency by integrating elliptic curve cryptography and three-factor authentication. Within this scheme, the user and the edge server complete secure authentication. Security analysis proves that the proposed scheme meets the essential security requirements, and performance analysis shows that it achieves higher security, supports more functions, and incurs lower computational and communication costs than related schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Delay Optimization for Wireless Powered Mobile Edge Computing with Computation Offloading via Deep Learning.
- Author
Lei, Ming, Fu, Zhe, and Yu, Bocheng
- Subjects
MOBILE computing, MACHINE learning, DEEP learning, EDGE computing, LINEAR programming
- Abstract
Mobile edge computing (MEC), specifically wireless powered mobile edge computing (WPMEC), can achieve superior real-time data analysis and intelligent processing. In WPMEC, different user nodes (UNs) harvest significantly different amounts of energy, which results in longer delays for lower-energy UNs when data are offloaded to MEC servers. This study quantifies the delays in energy harvesting and task offloading to edge servers in WPMEC with user cooperation, and investigates a method for transferring the tasks that need to be offloaded to edge servers as quickly as possible. The problem is formulated as an optimization model that minimizes the delay, including the time required for energy harvesting and task offloading. Because the problem is NP-hard, a delay-optimal approximation algorithm (DOPA) is proposed. Finally, with training data generated by the DOPA, a deep learning-based online offloading (DLOO) framework is designed to predict the transmission power of each UN. Once each UN's transmission power is obtained, the original model reduces to a linear programming problem, which substantially lowers the computational complexity of the DOPA for solving the mixed-integer linear programming problem, especially in large-scale networks. Numerical results show that, compared with non-cooperation methods for WPMEC, the proposed algorithm significantly reduces the total delay. Additionally, in the delay optimization process for a scale of six UNs, the average computation time of the DLOO is only 0.2% of that of the DOPA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Combining Edge Computing-Assisted Internet of Things Security with Artificial Intelligence: Applications, Challenges, and Opportunities.
- Author
Rupanetti, Dulana and Kaabouch, Naima
- Subjects
ARTIFICIAL intelligence, DATA privacy, COMPUTER network security, MACHINE learning, INTERNET security
- Abstract
The integration of edge computing with IoT (EC-IoT) systems provides significant improvements in addressing security and privacy challenges in IoT networks. This paper examines the combination of EC-IoT and artificial intelligence (AI), highlighting practical strategies to improve data and network security. The published literature has suggested decentralized and reliable trust measurement mechanisms and security frameworks designed explicitly for IoT-enabled systems. Therefore, this paper reviews the latest attack models threatening EC-IoT systems and their impacts on IoT networks. It also examines AI-based methods to counter these security threats and evaluates their effectiveness in real-world scenarios. Finally, this survey aims to guide future research by stressing the need for scalable, adaptable, and robust security solutions to address evolving threats in EC-IoT environments, focusing on the integration of AI to enhance the privacy, security, and efficiency of IoT systems while tackling the challenges of scalability and resource limitations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Dynamic Edge-Based High-Dimensional Data Aggregation with Differential Privacy.
- Author
Chen, Qian, Ni, Zhiwei, Zhu, Xuhui, Lyu, Moli, Liu, Wentao, and Xia, Pingfan
- Subjects
DATA privacy, UPLOADING of data, EDGE computing, INFORMATION sharing, PRIVACY
- Abstract
Edge computing enables efficient data aggregation for services such as data sharing and analysis in distributed IoT applications. However, uploading dynamic high-dimensional data to an edge server for efficient aggregation is challenging, and uploading such data directly carries a significant risk of privacy leakage. Therefore, we propose an edge-based differential privacy data aggregation method leveraging progressive UMAP with a dynamic time window based on LSTM (EDP-PUDL). First, a dynamic time window model based on a long short-term memory (LSTM) network is developed to divide the dynamic data. Then, progressive uniform manifold approximation and projection (UMAP) with differential privacy is performed to reduce the dimension of the window data while preserving privacy. The privacy budget is determined by the data volume and each attribute's Shapley value before the DP noise is added. Finally, privacy analysis and experimental comparisons demonstrate that EDP-PUDL preserves user privacy while achieving superior aggregation efficiency and availability compared to other algorithms for dynamic high-dimensional data aggregation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. LIME-Mine: Explainable Machine Learning for User Behavior Analysis in IoT Applications.
- Author
-
Cai, Xiaobo, Zhang, Jiajin, Zhang, Yue, Yang, Xiaoshan, and Han, Ke
- Subjects
ARTIFICIAL neural networks ,EDGE computing ,BEHAVIORAL assessment ,MACHINE learning ,INTERNET of things - Abstract
In Internet of Things (IoT) applications, user behavior is influenced by factors such as network structure, user activity, and location. Extracting valuable patterns from user activity traces can lead to smarter, more personalized IoT applications and an improved user experience. This paper proposes a LIME-based user behavior preference mining algorithm that leverages Explainable AI (XAI) techniques to interpret user behavior data and extract user preferences. A black-box neural network model is trained to predict user behavior, and LIME is then used to approximate its predictions with a local linear model, identifying the key features that influence user behavior. This analysis reveals user behavioral patterns and preferences, such as habits at specific times, locations, and device states. Incorporating this behavioral information into the resource scheduling process, combined with a feedback mechanism, establishes a network that actively discovers user demand. Our approach, utilizing edge computing capabilities, continuously fine-tunes and optimizes resource scheduling, actively adapting to user perceptions. Experimental results demonstrate the effectiveness of the feedback control in satisfying diverse user resource requests, enhancing user satisfaction, and improving system resource utilization. [ABSTRACT FROM AUTHOR]
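The local linear approximation at the heart of LIME can be sketched directly: perturb the instance, weight the neighbours by proximity, and fit a weighted ridge regression whose coefficients serve as local feature importances. The snippet below is a minimal, self-contained version of that idea with a toy black-box model, not the paper's pipeline.

import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_fn, x, n_samples=500, kernel_width=0.75, seed=0):
    """Approximate predict_fn around x with a locally weighted linear model."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.3, size=(n_samples, x.size))   # perturbed neighbours
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)              # proximity kernel
    model = Ridge(alpha=1.0)
    model.fit(Z, predict_fn(Z), sample_weight=w)
    return model.coef_                                        # local feature importances

# toy black box: a "behaviour score" driven mostly by feature 0 (e.g., time of day)
black_box = lambda Z: 2.0 * Z[:, 0] - 0.5 * Z[:, 2]
print(lime_explain(black_box, np.array([0.5, 0.1, 0.9])))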
- Published
- 2024
- Full Text
- View/download PDF
34. A Multi-Dimensional Reverse Auction Mechanism for Volatile Federated Learning in the Mobile Edge Computing Systems.
- Author
-
Hong, Yiming, Zheng, Zhaohua, and Wang, Zizheng
- Subjects
MACHINE learning ,FEDERATED learning ,MOBILE computing ,EDGE computing ,COMPUTER systems - Abstract
Federated learning (FL) can break down data silos by allowing multiple data owners to collaboratively train shared machine learning models without disclosing local data in mobile edge computing. However, incentivizing these clients to actively participate in training while ensuring efficient convergence and high test accuracy of the model is an important open issue. Traditional methods often use a reverse auction framework but ignore client volatility. This paper proposes a multi-dimensional reverse auction mechanism (MRATR) that considers the uncertainty of client training time as well as reputation. First, we introduce reputation to objectively reflect the data quality and training stability of each client. Next, we transform the goal of maximizing social welfare into an optimization problem, which is proven to be NP-hard. We then propose the multi-dimensional auction mechanism MRATR, which finds an optimal client selection and task allocation strategy while accounting for clients' volatility and differences in data quality. The mechanism's computational complexity is polynomial, and it promotes the rapid convergence of FL task models while ensuring near-optimal social welfare maximization and achieving high test accuracy. Finally, the effectiveness of the mechanism is verified through simulation experiments: compared with a series of other mechanisms, MRATR converges faster and achieves higher test accuracy on both the CIFAR-10 and IMAGE-100 datasets. [ABSTRACT FROM AUTHOR]
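A much-simplified flavour of reputation-aware client selection in a reverse auction can be given in a few lines: score each bid by reputation-weighted data quality per unit cost and buy greedily under a budget. This greedy rule is only a sketch of the general idea; it is not the MRATR mechanism, whose selection and payment rules carry the guarantees described above.

def select_clients(bids, budget):
    """Greedy reverse auction: bids = [(client, cost, data_quality, reputation)].
    Rank bids by reputation-weighted quality per unit cost and accept them
    greedily until the budget runs out (a simplification, not MRATR itself)."""
    ranked = sorted(bids, key=lambda b: b[3] * b[2] / b[1], reverse=True)
    winners, spent = [], 0.0
    for client, cost, quality, rep in ranked:
        if spent + cost <= budget:
            winners.append(client)
            spent += cost
    return winners, spent

bids = [("c1", 4.0, 0.9, 0.95), ("c2", 2.0, 0.6, 0.70),
        ("c3", 5.0, 0.95, 0.40), ("c4", 1.0, 0.5, 0.90)]
print(select_clients(bids, budget=7.0))   # -> (['c4', 'c1', 'c2'], 7.0)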
- Published
- 2024
- Full Text
- View/download PDF
35. Traffic-Aware Intelligent Association and Task Offloading for Multi-Access Edge Computing.
- Author
-
Nugroho, Avilia Kusumaputeri and Kim, Taewoon
- Subjects
EDGE computing ,EVIDENCE gaps ,CLOUD computing ,PREDICTION models ,DEADLINES - Abstract
Edge computing is a promising technology, especially for offloading users' computationally heavy tasks. The close proximity of edge computing units to users minimizes network latency, thereby enabling delay-sensitive applications. Although optimal resource provisioning and task offloading in edge computing are widely studied in the literature, some critical research gaps remain. In this study, we propose a traffic-aware optimal association and task-offloading approach. The proposed method does not rely solely on the average rate of offloading requests, which can differ from actual values in real time. Instead, it uses an intelligent, high-precision prediction model to forecast future offloading requests, allowing resource provisioning to be based on future sequences of requests rather than on average values. Additionally, we propose an optimization-based approach that can meet task deadlines, which is crucial for mission-critical applications. Finally, the proposed approach distributes the computing load over multiple time steps, ensuring that future resource scheduling and task-offloading decisions can be made with a certain level of flexibility. The approach is extensively evaluated under various scenarios and configurations to validate its effectiveness. The proposed deep learning model achieves a request prediction error of 0.0338 (RMSE). In addition, compared to a greedy approach, the proposed approach reduces the use of local and cloud computing from 0.02 and 18.26 to 0.00 and 0.62, respectively, while increasing edge computing usage from 1.31 to 16.98, which can effectively prolong the lifetime of user devices and reduce network latency. [ABSTRACT FROM AUTHOR]
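The provisioning idea, forecasting the request sequence rather than relying on its average, can be sketched with a small LSTM regressor on a synthetic load trace. Everything below (the sinusoidal load, window length, and network size) is an assumption for illustration; the paper's prediction model and optimization stage are not reproduced.

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
t = np.arange(600, dtype=np.float32)
requests = 10 + 5 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 0.5, 600)  # synthetic load

W = 24                                              # look-back window
X = np.stack([requests[i:i + W] for i in range(len(requests) - W)])[..., None]
y = requests[W:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=(W, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:500], y[:500], epochs=5, verbose=0)

pred = model.predict(X[500:], verbose=0).ravel()
rmse = float(np.sqrt(np.mean((pred - y[500:]) ** 2)))
print("RMSE:", rmse)                                # provision edge capacity to pred, not the mean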
- Published
- 2024
- Full Text
- View/download PDF
36. Application of fuzzy logic control theory combined with target tracking algorithm in unmanned aerial vehicle target tracking.
- Author
-
Li, Cong, Zhao, Wenyi, Zhao, Liuxue, Ju, Li, and Zhang, Hongyu
- Subjects
- *
TRANSFORMER models , *CENTRAL processing units , *DRONE aircraft , *TRACKING algorithms , *FUZZY logic - Abstract
This paper aims to improve the target tracking capability of unmanned aerial vehicles (UAVs). First, a fuzzy logic-based control model is created that adjusts the UAV's flight attitude in response to the target's motion and changes in the surrounding environment. Then, an edge computing-based target tracking framework is created: by deploying edge devices around the UAV, the computation for target recognition and position prediction is transferred from the central processing unit to the edge nodes. Finally, the Vision Transformer model is adopted for target recognition: the image is divided into uniform blocks, and an attention mechanism captures the relationships between blocks to enable real-time image analysis. For position prediction, a particle filter combines historical data and sensor inputs to produce a high-precision estimate of the target position. Experimental results across different scenes show that the fuzzy logic control algorithm shortens the average target capture time by 20% compared with the traditional proportional-integral-derivative (PID) method, from 5.2 s to 4.2 s, and reduces the average tracking error by 15%, from 0.8 m to 0.68 m. Meanwhile, under environmental changes and changes in target motion, the algorithm is more robust: the fluctuation range of its tracking error is only half that of traditional PID. These results demonstrate the effectiveness of applying fuzzy logic control theory to UAV target tracking. [ABSTRACT FROM AUTHOR]
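A minimal flavour of the fuzzy control step can be sketched with triangular membership functions and three rules mapping horizontal tracking error to a yaw-rate command. The membership shapes, rule consequents, and gains below are invented for illustration and are not the paper's controller.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_yaw_rate(err):
    """Map horizontal tracking error (deg) to a yaw-rate command (deg/s)
    with three rules; shapes and gains are illustrative only."""
    mu = {"neg": tri(err, -30, -15, 0), "zero": tri(err, -10, 0, 10), "pos": tri(err, 0, 15, 30)}
    out = {"neg": -20.0, "zero": 0.0, "pos": 20.0}      # rule consequents (singletons)
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values()) + 1e-9
    return num / den                                    # centroid defuzzification

for e in (-12.0, -3.0, 0.0, 8.0):
    print(e, "->", round(float(fuzzy_yaw_rate(e)), 2))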
- Published
- 2024
- Full Text
- View/download PDF
37. Optimizing energy efficiency in MEC networks: a deep learning approach with Cybertwin-driven resource allocation.
- Author
-
Lilhore, Umesh Kumar, Simaiya, Sarita, Dalal, Surjeet, Faujdar, Neetu, Alroobaea, Roobaea, Alsafyani, Majed, Baqasah, Abdullah M., and Algarni, Sultan
- Subjects
COST functions ,MOBILE computing ,EDGE computing ,RESOURCE allocation ,ENERGY consumption ,DEEP learning - Abstract
Cybertwin (CT) is an innovative network structure that digitally mirrors humans and objects in a virtual environment, with Cybertwin instances playing a significantly greater role than regular VMs. Cybertwin-driven networks, combined with Mobile Edge Computing (MEC), provide practical options for transmitting IoT-enabled data. This research introduces a hybrid methodology that integrates deep learning with Cybertwin-driven resource allocation to enable energy-efficient workload offloading and resource management in MEC networks. Offloading work is essential in MEC networks because many applications require significant resources. The Cybertwin-driven approach considers user mobility, virtualization, processing power, load migrations, and resource demand as crucial elements in the offloading decision-making process. The model optimizes job allocation between on-premises and remote execution using a task-offloading strategy to reduce the operating burden on the MEC network. It employs a hybrid partitioning approach and a cost function to allocate resources efficiently; this cost function accounts for the energy consumption and service delays associated with job assignment, execution, and fulfilment. The model computes the cost of several segmentation and offloading procedures and chooses the lowest-cost option to improve energy efficiency and performance. A deep learning architecture called "CNN-LSTM-TL", built from pre-trained transfer learning models, performs the energy-efficient task offloading, with batch normalization used to speed up model training and improve its robustness. The model is trained and assessed on an extensive public mobile edge computing dataset. The experimental findings confirm the efficacy of the proposed methodology, indicating a 20% decrease in energy usage compared with conventional methods while achieving comparable or superior performance. Simulation studies emphasize the advantages of incorporating Cybertwin-driven insights into resource allocation and workload offloading. This research advances energy-efficient and resource-aware MEC networks through Cybertwin-driven techniques. [ABSTRACT FROM AUTHOR]
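The cost-function comparison at the core of the offloading decision can be sketched as a weighted sum of energy and delay for local versus edge execution. All model parameters below (CPU frequencies, uplink rate, energy coefficient) are placeholders, and the CNN-LSTM-TL predictor and Cybertwin layer are omitted.

def offload_cost(bits, cycles, alpha=0.5):
    """Weighted energy+delay cost for local vs. edge execution (toy model)."""
    f_local, kappa = 1e9, 1e-27                    # device CPU freq, switched-capacitance coefficient
    rate, p_tx, f_edge = 5e6, 0.3, 10e9            # uplink rate, tx power, edge CPU freq
    local = {"delay": cycles / f_local, "energy": kappa * cycles * f_local ** 2}
    edge = {"delay": bits / rate + cycles / f_edge, "energy": p_tx * bits / rate}
    cost = lambda m: alpha * m["energy"] + (1 - alpha) * m["delay"]
    return ("edge" if cost(edge) < cost(local) else "local", cost(local), cost(edge))

print(offload_cost(bits=2e6, cycles=8e9))          # heavy task: offloading should win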
- Published
- 2024
- Full Text
- View/download PDF
38. A computation offloading strategy for multi-access edge computing based on DQUIC protocol.
- Author
-
Yang, Peng, Ma, Ruochen, Yi, Meng, Zhang, Yifan, Li, Bing, and Bai, Zijian
- Subjects
- *
EDGE computing , *DEEP reinforcement learning , *REINFORCEMENT learning , *COMMUNICATION policy , *STATISTICAL decision making - Abstract
Computation offloading can efficiently expand edge resources and is widely used for computing-intensive and delay-sensitive tasks. Existing offloading strategies fail to address both packet loss and the performance degradation caused by channel noise, which leads to serious encoding and retransmission costs when offloading over traditional communication protocols. To address these issues, we propose a dynamic analog-digital coding QUIC (DQUIC) protocol to ensure the efficiency and reliability of edge computing data transmission. The DQUIC protocol uses a dynamic encoding method based on the communication state of consecutive slots to handle burst errors at a small encoding cost. Moreover, we design a dynamic multi-access edge computing (MEC) model that uses the DQUIC protocol for communication and considers the impact of channel noise on the communication rate and the channel packet loss rate. In this dynamic MEC environment, the double deep Q-learning (DDQN) algorithm is used to solve the offloading decision problem and find the optimal offloading strategy. The experimental results demonstrate that our DQUIC-based computation offloading strategy surpasses strategies based on the QUIC and Coco protocols in a dynamic MEC environment. [ABSTRACT FROM AUTHOR]
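The decision component rests on double deep Q-learning, whose defining trick is that the online network selects the next action while the target network evaluates it, reducing overestimation bias. The snippet below shows that target computation with small random tables standing in for the paper's deep networks over the dynamic MEC state.

import numpy as np

def ddqn_target(q_online, q_target, reward, next_state, gamma=0.95):
    """Double DQN: the online network picks the next action,
    the target network evaluates it."""
    a_star = int(np.argmax(q_online[next_state]))         # selection: online net
    return reward + gamma * q_target[next_state, a_star]  # evaluation: target net

rng = np.random.default_rng(3)
n_states, n_actions = 8, 3                                # e.g. channel state x offload choice
q_online = rng.normal(size=(n_states, n_actions))
q_target = rng.normal(size=(n_states, n_actions))
y = ddqn_target(q_online, q_target, reward=1.0, next_state=5)
print("TD target:", y)                                    # regress Q(s, a) toward y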
- Published
- 2024
- Full Text
- View/download PDF
39. Millisecond‐scale behaviours of plankton quantified in vitro and in situ using the Event‐based Vision Sensor.
- Author
-
Takatsuka, Susumu, Miyamoto, Norio, Sato, Hidehito, Morino, Yoshiaki, Kurita, Yoshihisa, Yabuki, Akinori, Chen, Chong, and Kawagucci, Shinsuke
- Subjects
- *
MOTION capture (Human mechanics) , *COMPUTER vision , *PARTICLE motion , *ENERGY consumption , *EDGE computing - Abstract
The Event-based Vision Sensor (EVS) is a bio-inspired sensor that captures the detailed motions of objects, aiming to become the 'eyes' of machines such as self-driving cars. Compared with conventional frame-based image sensors, the EVS offers extremely fast motion capture, equivalent to 10,000 fps even with standard optical settings, a high dynamic range for brightness, and lower memory and energy consumption. Here, we developed 22 characteristic features for analysing the motions of aquatic particles from raw EVS data and tested the applicability of the EVS to analysing plankton behaviour. Laboratory cultures of six species of zooplankton and phytoplankton were observed, confirming species-specific motion periodicities of up to 41 Hz. We applied machine learning to automatically classify particles into four categories of zooplankton and passive particles, achieving an accuracy of up to 86%. During an in situ deployment of the EVS at the bottom of Lake Biwa, several particles exhibited distinct cumulative trajectories with periodicities in their motion (up to 16 Hz), suggesting that they were living organisms with rhythmic behaviour. We also used the EVS in the deep sea, observing particles with active motion and periodicities over 40 Hz. Our application of the EVS, particularly its millisecond-scale temporal resolution and wide dynamic range, provides a new avenue for investigating organismal behaviour characterised by rapid and periodic motions. The EVS will likely be applicable in the near future to the automated monitoring of plankton behaviour via edge computing on autonomous floats, as well as to quantifying rapid cellular-level activities under microscopy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Leveraging Edge Computing for Video Data Streaming in UAV-Based Emergency Response Systems.
- Author
-
Sarkar, Mekhla and Sahoo, Prasan Kumar
- Subjects
- *
DEEP reinforcement learning , *REINFORCEMENT learning , *STREAMING video & television , *BANDWIDTH allocation , *EDGE computing - Abstract
The rapid advancement of technology has greatly expanded the capabilities of unmanned aerial vehicles (UAVs) in the wireless communication and edge computing domains. A primary objective of UAVs is the seamless transfer of video data streams to emergency responders. However, live video streaming is inherently latency sensitive: the value of video frames diminishes with any delay in the stream. This becomes particularly critical during emergencies, where live video provides vital information about current conditions. Edge computing addresses this latency issue by bringing computing resources closer to users. Nonetheless, the mobile nature of UAVs necessitates trajectory supervision alongside the management of computation and networking resources, so efficient system optimization is required to maximize the overall effectiveness of the collaborative system with limited UAV resources. This study explores a scenario where multiple UAVs collaborate with end users and edge servers to establish an emergency response system. The proposed approach considers the entire emergency response pipeline, from the incident site to video distribution at the user level, and includes an adaptive resource management strategy that leverages deep reinforcement learning to simultaneously address video streaming latency, UAV and user mobility, and varied bandwidth resources. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Low-Cost, Low-Power Edge Computing System for Structural Health Monitoring in an IoT Framework.
- Author
-
Hidalgo-Fort, Eduardo, Blanco-Carmona, Pedro, Muñoz-Chavero, Fernando, Torralba, Antonio, and Castro-Triguero, Rafael
- Subjects
- *
STRUCTURAL health monitoring , *WIRELESS sensor networks , *COMPUTER systems , *MODAL analysis , *EDGE computing , *COMPUTER firmware - Abstract
A complete low-power, low-cost, wireless solution for bridge structural health monitoring is presented. The monitoring nodes have a modular, low-power hardware design built around a control and resource management board called the CoreBoard and a dedicated sensing board called the SensorBoard. The firmware is designed as parallelised FreeRTOS tasks that manage the hardware resources and implement the Random Decrement Technique to minimize the amount of data transmitted securely over the NB-IoT network. The presented solution is validated by characterizing its energy consumption, which guarantees more than 10 years of autonomy with one 8-minute monitoring session per day, and through two deployments: a pilot laboratory structure and the Eduardo Torroja bridge in Posadas (Córdoba, Spain). The results are compared with two calibrated commercial systems, yielding an error below 1.72% in modal analysis frequencies. The architecture and the results obtained position the presented design as a new solution in the state of the art; its autonomy, low cost, and the graphical device management interface presented enable its deployment and integration in the current IoT paradigm. [ABSTRACT FROM AUTHOR]
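The Random Decrement Technique mentioned above is what lets a node transmit a short free-decay signature instead of a long raw record: segments starting at each crossing of a trigger level are ensemble-averaged so that the random excitation cancels. The following sketch applies that idea to a synthetic vibration trace; the trigger level, segment length, and signal model are illustrative, not the deployed firmware.

import numpy as np

def random_decrement(x, trigger, seg_len):
    """Average the segments following each upward crossing of `trigger`;
    the ensemble mean approximates the structure's free-decay response."""
    idx = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0] + 1
    idx = idx[idx + seg_len <= len(x)]
    if len(idx) == 0:
        raise ValueError("no trigger crossings found")
    segs = np.stack([x[i:i + seg_len] for i in idx])
    return segs.mean(axis=0), len(idx)

# toy bridge response: a 2.4 Hz mode buried in broadband noise
fs = 200.0
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(4)
x = 0.5 * np.sin(2 * np.pi * 2.4 * t) + rng.normal(0, 0.5, t.size)
sig, n = random_decrement(x, trigger=x.std(), seg_len=int(2 * fs))
print(f"averaged {n} segments; transmit {sig.size} samples instead of {x.size}")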
- Published
- 2024
- Full Text
- View/download PDF
42. Policy Compression for Intelligent Continuous Control on Low-Power Edge Devices.
- Author
-
Avé, Thomas, De Schepper, Tom, and Mets, Kevin
- Subjects
- *
DEEP reinforcement learning , *REINFORCEMENT learning , *AUTONOMOUS robots , *INTELLIGENT control systems , *MOBILE robots - Abstract
Interest in deploying deep reinforcement learning (DRL) models on low-power edge devices, such as Autonomous Mobile Robots (AMRs) and Internet of Things (IoT) devices, has risen significantly due to the potential for real-time inference, which eliminates the latency and reliability issues of wireless communication, and the privacy benefits of processing data locally. However, deploying such energy-intensive models on power-constrained devices is not always feasible, which has led to the development of model compression techniques that reduce the size and computational complexity of DRL policies. Policy distillation, the most popular of these methods, first lowers the number of network parameters by transferring the behavior of a large teacher network to a smaller student model before the students are deployed at the edge. This works well for deterministic policies that operate over discrete actions; however, many power-constrained real-world tasks, such as those in robotics, are formulated with continuous action spaces, which are not supported. In this work, we extend the policy distillation method to compress DRL models designed for continuous control tasks, with an emphasis on maintaining the stochastic nature of continuous DRL algorithms. Experiments show that our methods can compress such policies by up to 750% while maintaining, or even exceeding by up to 41%, their teacher's performance on two popular continuous control tasks. [ABSTRACT FROM AUTHOR]
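For stochastic continuous-control policies, distillation can be cast as minimizing the closed-form KL divergence between the teacher's and student's Gaussian action distributions, which preserves the student's stochasticity. The sketch below uses the standard diagonal-Gaussian KL with toy linear policy heads; it is an assumption-laden illustration, not the paper's method or networks.

import torch

def gaussian_kl(mu_t, std_t, mu_s, std_s):
    """KL(teacher || student) for diagonal Gaussian policies, summed over
    action dimensions -- keeps the student's stochasticity explicit."""
    return (torch.log(std_s / std_t)
            + (std_t ** 2 + (mu_t - mu_s) ** 2) / (2 * std_s ** 2) - 0.5).sum(-1).mean()

# toy teacher/student heads over a batch of states (stand-ins, not the paper's nets)
torch.manual_seed(0)
states = torch.randn(64, 8)
teacher = torch.nn.Linear(8, 4)                  # frozen "large" policy head
student = torch.nn.Linear(8, 4)                  # small deployable policy head
with torch.no_grad():
    t_out = teacher(states)
    mu_t, std_t = t_out[:, :2], torch.nn.functional.softplus(t_out[:, 2:]) + 1e-3
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
for _ in range(200):
    out = student(states)
    mu_s, std_s = out[:, :2], torch.nn.functional.softplus(out[:, 2:]) + 1e-3
    loss = gaussian_kl(mu_t, std_t, mu_s, std_s)
    opt.zero_grad(); loss.backward(); opt.step()
print("final distillation KL:", float(loss))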
- Published
- 2024
- Full Text
- View/download PDF
43. Integration of Tracking, Re-Identification, and Gesture Recognition for Facilitating Human–Robot Interaction.
- Author
-
Lee, Sukhan, Lee, Soojin, and Park, Hyunwoo
- Subjects
- *
AUTONOMOUS robots , *TRANSPORTATION of patients , *NURSES as patients , *EDGE computing , *PATIENT monitoring , *MOBILE robots - Abstract
For successful human–robot collaboration, it is crucial to establish and sustain quality interaction between humans and robots, making effective facilitation of human–robot interaction (HRI) essential. The evolution of robot intelligence now enables robots to take a proactive role in initiating and sustaining HRI, allowing humans to concentrate on their primary tasks. In this paper, we introduce the Robot-Facilitated Interaction System (RFIS), in which mobile robots perform identification, tracking, re-identification, and gesture recognition in an integrated framework to ensure anytime readiness for HRI. We implemented the RFIS on an autonomous mobile robot used for transporting a patient, demonstrating proactive, real-time, and user-friendly interaction with a caretaker involved in monitoring and nursing the patient. The implementation focused on the efficient and robust integration of the various interaction facilitation modules within a real-time HRI system operating in an edge computing environment. Experimental results show that the RFIS, as a comprehensive system integrating caretaker recognition, tracking, re-identification, and gesture recognition, provides a high overall quality of interaction, with average accuracies exceeding 90% during real-time operation at 5 FPS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. A Review of Recent Hardware and Software Advances in GPU-Accelerated Edge-Computing Single-Board Computers (SBCs) for Computer Vision.
- Author
-
Iqbal, Umair, Davies, Tim, and Perez, Pascal
- Subjects
- *
COMPUTER vision , *SINGLE-board computers , *ARTIFICIAL intelligence , *SMART cities , *CITY traffic - Abstract
Computer Vision (CV) has become increasingly important for Single-Board Computers (SBCs) due to their widespread deployment in addressing real-world problems. Specifically, in the context of smart cities, there is an emerging trend of developing end-to-end video analytics solutions for urban challenges such as traffic management, disaster response, and waste management. However, deploying CV solutions on SBCs presents several pressing challenges (e.g., limited computational power, inefficient energy management, and real-time processing needs) that hinder their use at scale. Graphical Processing Units (GPUs) and software-level developments have recently emerged to address these challenges and elevate the performance of SBCs, but this remains an active area of research, and the literature lacks a comprehensive review of such recent and rapidly evolving advancements on both the software and hardware fronts. The presented review provides a detailed overview of the existing GPU-accelerated edge-computing SBCs and of software advancements, including algorithm optimization techniques, packages, development frameworks, and hardware-specific deployment packages. The review offers a subjective comparative analysis based on critical factors to help applied Artificial Intelligence (AI) researchers assess the existing state of the art and select the combinations best suited to their specific use cases. Finally, the paper discusses potential limitations of the existing SBCs and highlights future research directions in this domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Visual Navigation Algorithms for Aircraft Fusing Neural Networks in Denial Environments.
- Author
-
Gao, Yang, Wang, Yue, Tian, Lingyun, Li, Dongguang, and Wang, Fenming
- Subjects
- *
OBJECT recognition (Computer vision) , *COMPUTING platforms , *FEATURE extraction , *EDGE computing , *MONOCULARS - Abstract
A lightweight visual navigation algorithm for aircraft that fuses neural networks is proposed to address the limited computing power available during offline operation of aircraft edge computing platforms in satellite-denied environments with complex working scenarios. The algorithm uses object detection to label dynamic objects within complex scenes and eliminates the associated dynamic feature points, enhancing feature extraction quality and thereby improving navigation accuracy. The algorithm was validated on an aircraft edge computing platform and compared with existing methods through experiments on the TUM public dataset and physical flight experiments. The experimental results show that, while satisfying the real-time operation requirements of the system, the proposed algorithm improves navigation accuracy and is more robust than the monocular ORB-SLAM2 method. [ABSTRACT FROM AUTHOR]
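The dynamic feature point elimination step can be sketched simply: detect keypoints, then discard any that fall inside the bounding boxes an object detector flags as dynamic, so only static features feed pose tracking. In the snippet below the detector output is stubbed with a hard-coded box and the frame is random noise; it is an illustration of the filtering step, not the paper's system.

import numpy as np
import cv2

def remove_dynamic_keypoints(keypoints, dynamic_boxes):
    """Keep only keypoints outside every dynamic-object box (x1, y1, x2, y2)."""
    kept = []
    for kp in keypoints:
        x, y = kp.pt
        if not any(x1 <= x <= x2 and y1 <= y <= y2 for x1, y1, x2, y2 in dynamic_boxes):
            kept.append(kp)
    return kept

img = np.random.randint(0, 255, (480, 640), dtype=np.uint8)   # stand-in frame
orb = cv2.ORB_create(1000)
kps = orb.detect(img, None)
boxes = [(100, 80, 260, 300)]          # pretend the detector flagged a moving object here
static_kps = remove_dynamic_keypoints(kps, boxes)
print(len(kps), "->", len(static_kps), "static keypoints for pose tracking")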
- Published
- 2024
- Full Text
- View/download PDF
46. Comparison of Reinforcement Learning Algorithms for Edge Computing Applications Deployed by Serverless Technologies.
- Author
-
Femminella, Mauro and Reali, Gianluca
- Subjects
- *
MACHINE learning , *ARTIFICIAL intelligence , *EDGE computing , *COMPUTER systems , *DATA protection - Abstract
Edge computing is currently considered among the most promising technological areas for implementing many types of applications. In particular, IoT-type applications can benefit from reduced latency and better data protection. However, the price typically paid for these opportunities is a reduced amount of resources compared to the traditional cloud environment; indeed, sometimes only one computing node is available. In these situations, it is essential to introduce computing and memory resource management techniques that optimize resources while still guaranteeing acceptable performance in terms of latency and rejection probability. For this reason, the use of serverless technologies managed by reinforcement learning algorithms is an active area of research. In this paper, we explore and compare the performance of several machine learning algorithms for managing horizontal function autoscaling in a serverless edge computing system. In particular, we use open serverless technologies, deployed in a Kubernetes cluster, to experimentally fine-tune the performance of the algorithms. The results obtained both illuminate some basic mechanisms of edge computing systems and related technologies that determine system performance, and guide configuration choices for systems in operation. [ABSTRACT FROM AUTHOR]
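A tabular Q-learning toy captures the horizontal autoscaling loop being tuned here: the state is the observed load and current replica count, the action adds or removes a replica, and the reward penalizes both a backlog (a latency proxy) and reserved capacity. The environment below is invented and far simpler than a real Kubernetes-based serverless platform.

import random

ACTIONS = (-1, 0, 1)                       # scale down / hold / scale up (replicas)

def reward(replicas, arrivals):
    """Toy environment: backlog grows with arrivals, drains with replicas."""
    backlog = max(0, arrivals - 2 * replicas)
    return -(backlog + 0.3 * replicas)     # penalize latency proxy and reserved capacity

Q = {}
random.seed(5)
replicas = 1
for step in range(5000):
    arrivals = random.choice((0, 2, 4, 6))
    state = (arrivals, replicas)
    if random.random() < 0.1:                                   # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q.get((state, act), 0.0))
    replicas = min(5, max(1, replicas + a))
    r = reward(replicas, arrivals)
    best_next = max(Q.get(((arrivals, replicas), act), 0.0) for act in ACTIONS)
    Q[(state, a)] = Q.get((state, a), 0.0) + 0.1 * (r + 0.9 * best_next - Q.get((state, a), 0.0))
print("learned action at (6 arrivals, 1 replica):",
      max(ACTIONS, key=lambda act: Q.get(((6, 1), act), 0.0)))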
- Published
- 2024
- Full Text
- View/download PDF
47. AI-Driven QoS-Aware Scheduling for Serverless Video Analytics at the Edge.
- Author
-
Giagkos, Dimitrios, Tzenetopoulos, Achilleas, Masouros, Dimosthenis, Xydis, Sotirios, Catthoor, Francky, and Soudris, Dimitrios
- Subjects
- *
DEEP reinforcement learning , *REINFORCEMENT learning , *COMMUNICATION infrastructure , *VIDEO processing , *DATA analytics - Abstract
Today, video analytics are becoming extremely popular due to the increasing need to extract valuable information from videos available in public sharing services and from camera-driven streams in IoT environments. To avoid data communication overheads, a common practice is to place computation close to the data source rather than offloading it to the Cloud. Typically, video analytics are organized as separate tasks, each with different resource requirements (e.g., computational- vs. memory-intensive tasks). The serverless computing paradigm is a promising approach for mapping such applications, enabling fine-grained deployment and management on a per-function and per-device basis. However, there is a tradeoff between QoS adherence and resource efficiency: performance variability due to function co-location and prevalent resource heterogeneity makes maintaining QoS challenging, while resource efficiency is essential to avoid waste such as unnecessary power consumption and CPU reservation. In this paper, we present Darly, a QoS-, interference-, and heterogeneity-aware Deep Reinforcement Learning-based Scheduler for serverless video analytics deployments on distributed Edge nodes. The proposed framework incorporates a DRL agent that exploits performance counters to identify the levels of interference and the degree of heterogeneity in the underlying Edge infrastructure, and combines this information with user-defined QoS requirements to improve resource allocation by deciding the placement, migration, or horizontal scaling of serverless functions. We evaluate Darly on a typical Edge cluster with a real-world workflow composed of commonly used serverless video analytics functions and show that our approach schedules the deployed functions efficiently, satisfying multiple QoS requirements for up to 91.6% (Profile-based) of the total requests under dynamic conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Optimizing Task Offloading for Power Line Inspection in Smart Grid Networks with Edge Computing: A Game Theory Approach.
- Author
-
Lu, Xu, Yuan, Sihan, Nian, Zhongyuan, Mu, Chunfang, and Li, Xi
- Subjects
- *
ELECTRIC lines , *EDGE computing , *ELECTRIC power distribution grids , *GAME theory , *POWER resources - Abstract
In the power grid, inspection robots enhance operational efficiency and safety by inspecting power lines and sharing the resulting information. Edge computing improves computational efficiency by positioning resources close to the data source, supporting real-time fault detection and line monitoring; however, large data volumes and high latency pose challenges, and existing offloading strategies often neglect task divisibility and priority, resulting in low efficiency and poor system performance. This paper constructs a power grid inspection offloading scenario in Python 3.11.2 to study and improve various offloading strategies. Simulation of a game-theory-based distributed computation offloading strategy reveals problems of high latency and low resource utilization. To address these, an improved game-theory-based strategy is proposed that optimizes task allocation and priority settings. By integrating local and edge computing resources, resource utilization is enhanced and latency is significantly reduced. Simulations show that the improved strategy lowers communication latency, enhances system performance, and increases resource utilization in the power grid inspection context, offering valuable insights for related research. [ABSTRACT FROM AUTHOR]
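The game-theoretic core of such a strategy can be sketched with best-response dynamics: each robot repeatedly chooses local or edge execution to minimize its own delay given the others' choices, with edge delay growing in the number of offloaders, until no robot wants to switch. The delays below are made up, and this is a generic congestion-game sketch rather than the paper's improved strategy.

def edge_delay(n_offloaders):
    return 1.0 + 0.8 * n_offloaders       # congestion: edge delay grows with load (toy)

local_delay = [2.2, 1.5, 3.0, 2.8, 1.2]   # per-robot local execution delays (assumed)
choice = [0] * len(local_delay)           # 0 = local, 1 = offload to edge

changed = True
while changed:                            # best-response dynamics until a fixed point
    changed = False
    for i in range(len(choice)):
        others = sum(choice) - choice[i]
        best = 1 if edge_delay(others + 1) < local_delay[i] else 0
        if best != choice[i]:
            choice[i] = best
            changed = True
print("equilibrium offloading decisions:", choice)

Because this is a congestion game with a potential function, the loop is guaranteed to terminate at a Nash equilibrium.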
- Published
- 2024
- Full Text
- View/download PDF
49. APCSMA: Adaptive Personalized Client-Selection and Model-Aggregation Algorithm for Federated Learning in Edge Computing Scenarios.
- Author
-
Ma, Xueting, Ma, Guorui, Liu, Yang, and Qi, Shuhan
- Subjects
- *
FEDERATED learning , *MACHINE learning , *EDGE computing , *ALGORITHMS , *HETEROGENEITY - Abstract
With the rapid advancement of the Internet and big data technologies, traditional centralized machine learning methods struggle with large-scale datasets. Federated Learning (FL), an emerging distributed machine learning paradigm, enables multiple clients to collaboratively train a global model while preserving privacy, and edge computing is likewise recognized as a critical technology for handling massive datasets. However, client heterogeneity in edge computing environments can severely degrade the performance of the resulting models. This study introduces an Adaptive Personalized Client-Selection and Model-Aggregation Algorithm, APCSMA, aimed at optimizing FL performance in edge computing settings. The algorithm evaluates each client's contribution from the real-time performance of its local model and the cosine similarity between the local and global models, and designs a ContriFunc function to quantify that contribution. The server then selects clients and assigns weights during model aggregation based on these contributions. Moreover, the algorithm accommodates personalized needs in local model updates rather than simply overwriting local models with the global one. Extensive experiments on the FashionMNIST and Cifar-10 datasets, simulating three data distributions with Dirichlet parameters dir = 0.1, 0.3, and 0.5, show accuracy improvements of 3.9%, 1.9%, and 1.1% on FashionMNIST and 31.9%, 8.4%, and 5.4% on Cifar-10, respectively. [ABSTRACT FROM AUTHOR]
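The contribution-scoring and aggregation step can be sketched as follows: each client's weight mixes its local accuracy with the cosine similarity between its model and the global model, and the server averages the local models under those weights. The scoring function below is a toy stand-in for the paper's ContriFunc, with random vectors in place of real model parameters.

import numpy as np

def aggregate(global_w, local_ws, local_accs, lam=0.5):
    """Weight each client by a toy ContriFunc-style score that mixes local
    accuracy with cosine similarity to the global model, then average."""
    g = global_w.ravel()
    scores = []
    for w, acc in zip(local_ws, local_accs):
        cos = float(w.ravel() @ g / (np.linalg.norm(w) * np.linalg.norm(g) + 1e-12))
        scores.append(lam * acc + (1 - lam) * max(cos, 0.0))
    scores = np.array(scores) / (np.sum(scores) + 1e-12)
    return sum(s * w for s, w in zip(scores, local_ws)), scores

rng = np.random.default_rng(6)
global_w = rng.normal(size=(10,))
locals_ = [global_w + rng.normal(0, s, 10) for s in (0.1, 0.5, 2.0)]   # increasingly divergent clients
new_global, weights = aggregate(global_w, locals_, local_accs=[0.9, 0.8, 0.4])
print("aggregation weights:", np.round(weights, 3))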
- Published
- 2024
- Full Text
- View/download PDF
50. A novel blockchain enabled resource allocation and task offloading strategy in cloud computing environment.
- Author
-
Senthilkumar, G., Madhusudhan, K. N., Jeyasheela, Y., and Ajitha, P.
- Subjects
RESOURCE allocation ,CLOUD computing ,EDGE computing ,ELECTRONIC data processing ,BLOCKCHAINS ,BIG data ,ENERGY consumption - Abstract
Large amounts of processing resources are required to process sensed raw big data as they are generated. Furthermore, because sensed data are typically privacy sensitive, blockchain technology can be used to address privacy concerns. This study examines a multiuser mobile offloading network consisting of a remote cloud server and an edge node. We formulate the offloading problem as the joint optimization of every user's task offloading decision, the allocation of computation resources among edge-executed applications, and the assignment of radio resources among all remotely processed applications, with the goal of minimizing the maximum weighted cost over all users. Compared with benchmark approaches, the simulation results show that, thanks to this collaboration, the proposed algorithm achieves optimal results in terms of both energy consumption and delay. Finally, a resource allocation and offloading strategy with 93% efficiency is obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF