Search Results
361 results for "Load prediction"
2. Load prediction of parcel pick-up points: model-driven vs data-driven approaches.
- Author
-
Nguyen, Thi-Thu-Tam, Cabani, Adnane, Cabani, Iyadh, De Turck, Koen, and Kieffer, Michel
- Subjects
FORECASTING, CONSUMERS
- Abstract
Pick-Up Points (PUPs) represent an alternative delivery option for online purchases. Parcels are delivered at a reduced cost to PUPs and remain there until they are picked up by customers or returned to the original warehouse once their sojourn time expires. When the chosen PUP is overloaded, the parcel may be refused and delivered to the next available PUP on the carrier tour. This paper presents and compares forecasting approaches for the load of a PUP to help PUP management companies balance delivery flows and reduce PUP overload. The parcel life-cycle is taken into account in the forecasting process via models of the flow of parcel orders, the parcel delivery delays, and the pick-up process. Model-driven and data-driven approaches are compared in terms of load-prediction accuracy. For the considered example, the best approach (which makes use of the relationship of the load with the delivery and pick-up processes) is able to predict the load up to 4 days ahead with mean absolute errors ranging from 3.16 parcels (1 day ahead) to 8.51 parcels (4 days ahead) for a PUP with an average load of 45 parcels. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. PredXGBR : A Machine Learning Framework for Short-Term Electrical Load Prediction †.
- Author
-
Zabin, Rifat, Haque, Khandaker Foysal, and Abdelgawad, Ahmed
- Subjects
ELECTRICAL load, RECURRENT neural networks, ENERGY consumption, ELECTRIC utilities, ARTIFICIAL intelligence
- Abstract
The growing demand for consumer-end electrical load is driving the need for smarter management of power sector utilities. In today's technologically advanced society, efficient energy usage is critical, leaving no room for waste. To prevent both electricity shortage and wastage, electrical load forecasting is the most practical remedy. However, conventional and probabilistic methods are less adaptive to acute, micro, and unusual changes in the demand trend. With the recent development of artificial intelligence (AI), machine learning (ML) has become the most popular choice due to its higher accuracy based on time-, demand-, and trend-based feature extraction. Thus, we propose an Extreme Gradient Boosting (XGBoost) regression-based model, PredXGBR-1, which employs short-term lag features to predict hourly load demand. The novelty of PredXGBR-1 lies in its focus on short-term lag autocorrelations to enhance adaptability to micro-trends and demand fluctuations. Validation across five datasets, representing electrical load in the eastern and western USA over a 20-year period, shows that PredXGBR-1 outperforms a long-term feature-based XGBoost model, PredXGBR-2, and state-of-the-art recurrent neural network (RNN) and long short-term memory (LSTM) models. Specifically, PredXGBR-1 achieves a mean absolute percentage error (MAPE) between 0.98% and 1.2% and an R² value of 0.99, significantly surpassing PredXGBR-2's R² of 0.61 and delivering up to 86.8% improvement in MAPE compared to LSTM models. These results confirm the superior performance of PredXGBR-1 in accurately forecasting short-term load demand. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
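The PredXGBR abstract above centres on short-term lag features feeding an XGBoost regressor. Below is a minimal sketch of that general idea, assuming an hourly load series; the synthetic data, 24-hour lag window, and hyperparameters are illustrative and not taken from the paper.

```python
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

def make_lag_features(load: pd.Series, n_lags: int = 24) -> pd.DataFrame:
    """Build short-term lag features (the previous n_lags hours) for every hourly sample."""
    df = pd.DataFrame({f"lag_{k}": load.shift(k) for k in range(1, n_lags + 1)})
    df["target"] = load
    return df.dropna()

# Illustrative hourly load series (two years of synthetic data with a daily cycle).
hours = pd.date_range("2022-01-01", periods=2 * 8760, freq="h")
daily = 10 * np.sin(2 * np.pi * (np.arange(len(hours)) % 24) / 24)
load = pd.Series(50 + daily + np.random.default_rng(0).normal(size=len(hours)), index=hours)

data = make_lag_features(load)
X, y = data.drop(columns="target"), data["target"]
split = int(0.8 * len(data))                      # chronological train/test split

model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X.iloc[:split], y.iloc[:split])

pred = model.predict(X.iloc[split:])
mape = np.mean(np.abs((y.iloc[split:] - pred) / y.iloc[split:])) * 100
print(f"test MAPE: {mape:.2f}%")
```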
4. Research on Joint Forecasting Technology of Cold, Heat, and Electricity Loads Based on Multi-Task Learning.
- Author
-
Han, Ruicong, Jiang, He, Wei, Mofan, and Guo, Rui
- Subjects
OPTIMIZATION algorithms, TECHNOLOGICAL forecasting, HIGH speed trains, PREDICTION models, TIME series analysis, LOAD forecasting (Electric power systems), ASYNCHRONOUS learning
- Abstract
The cooperative optimization and dispatch of an integrated energy system (IES) depend on accurate load forecasts. A multivariate-load joint prediction model based on the combination of multi-task learning (MTL) and dynamic time warping (DTW) is proposed to address the limited accuracy of prediction models caused by the fragmentation of the multivariate load coupling relationship and the absence of future time-series information. Firstly, the MTL model, based on the bidirectional long short-term memory (BiLSTM) neural network, extracts the coupling information among the multivariate loads and performs a preliminary prediction; secondly, the DTW algorithm clusters and splices the load data that are similar to the target value as the input features of the model; finally, the BiLSTM-attention model is used for secondary prediction, and an improved Bayesian optimization algorithm is applied for adaptive selection of optimal hyperparameters. A model interpretation technique based on the game-theoretic SHapley Additive exPlanations (SHAP) is introduced to determine the validity of the liquidity indicator and the asynchronous relationship between the significance of the indicator and its actual contribution. The prediction results show that the joint prediction model proposed in this paper has higher training speed and prediction accuracy than traditional single-load prediction models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
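The record above uses dynamic time warping (DTW) to find historical load segments similar to the target before feeding a BiLSTM-attention predictor. The sketch below shows only the DTW-similarity step on illustrative daily profiles; the multi-task BiLSTM stages are not reproduced.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two load series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Rank historical daily load profiles by DTW similarity to a target day, then use the
# most similar profiles as extra model inputs (the splicing idea described in the abstract).
target = np.sin(np.linspace(0, 2 * np.pi, 96))                       # illustrative 15-min profile
history = [np.sin(np.linspace(0, 2 * np.pi, 96) + p) for p in (0.1, 0.5, 1.5)]
ranked = sorted(range(len(history)), key=lambda k: dtw_distance(target, history[k]))
print("most similar historical day:", ranked[0])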
5. Wind tunnel balance load prediction method based on an improved transit search algorithm [基于改进凌日搜索算法的风洞天平载荷预测方法].
- Author
-
王碧玲, 周激, 沈力华, 刘博宇, and 王奔
- Published
- 2024
- Full Text
- View/download PDF
6. AN INTERNET OF THINGS TASK SCHEDULING FRAMEWORK BASED ON AGILE VIRTUAL NETWORK ON DEMAND SERVICE MODEL.
- Author
-
QIQUN LIU
- Subjects
RESOURCE allocation, ON-demand computing, SEARCH algorithms, QUALITY of service, PREDICTION models
- Abstract
In order to improve the efficiency of cloud computing resource utilization and to avoid computing resource allocation and scheduling lagging behind load changes, the author proposes a method for on-demand allocation and elastic scheduling of cloud computing resources based on network load prediction. First, taking Wikimedia network load data as the research object, the author proposes an adaptive two-stage multi-network load prediction method based on LSTM (the ATSMNN-LSTM method). This method classifies the input network load data into climbing and descending types according to its trend and characteristics, and adaptively routes the data to the LSTM prediction model that matches its type. Second, the author proposes a computing-resource search algorithm that maximizes cloud service revenue based on network load prediction (the MaxCSPR-NWP algorithm). Under the premise of ensuring task service quality and system stability, the algorithm allocates cloud computing resources on demand and schedules them elastically in advance according to the predicted network load. The experimental results show that the ATSMNN-LSTM method yields more accurate network load predictions than other load prediction methods, and that the MaxCSPR-NWP algorithm effectively converts the predicted network load into the required number of cloud servers. The algorithm therefore achieves early allocation and scheduling of cloud computing resources, avoiding the degradation of task service quality and resource utilization efficiency caused by allocation that lags behind load changes, while realizing on-demand allocation and elastic scheduling aimed at improving cloud service revenue. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Satellite Internet of Things Load Estimation and Prediction
- Author
-
MAO Xiwen, WANG Haitao, and ZHANG Gengxin
- Subjects
satellite IoT, load prediction, load estimation, machine learning, Applied optics. Photonics, TA1501-1820
- Abstract
【Objective】With the rapid development of the satellite Internet of Things (IoT), a large number of short-burst users are aggravating collisions and interference among users of the access network. To address this issue, several organizations and individuals have put forward dynamic access schemes. However, most of the proposed dynamic access schemes need to know the exact number of future time-slot access applications. Some load estimation schemes have been proposed in the literature, but their accuracy is not high, and they can only estimate the load of the current time slot.【Methods】To solve this issue, we propose a load estimation method based on the leading-code state and parameter estimation, as well as a load prediction method based on machine learning. The load estimation method analyzes the relationship between the probability of leading codes being in different states within a time slot of the satellite IoT and the number of access requests in that slot. It derives the maximum-likelihood parameter estimation expression and uses the maximum-likelihood method to estimate the current time-slot load. The load prediction method based on machine learning takes the estimated load values as its historical data, combining a Long Short-Term Memory (LSTM) network and an Auto-Regressive Moving Average (ARMA) model to predict the load of future time slots.【Results】The simulation results show that the estimation error of the load estimation method based on the leading-code state and parameter estimation is less than 1%. The comprehensive error of the load prediction method, which uses the estimated load values as historical machine learning data, is about 6%.【Conclusion】The error of the proposed load estimation and prediction method is within an acceptable range, thus providing accurate future-slot access requests for dynamic access schemes.
- Published
- 2024
- Full Text
- View/download PDF
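The abstract above derives a maximum-likelihood estimate of the current slot load from the states of the leading codes (preambles). The paper's exact expression is not given in the record, so the sketch below illustrates a generic occupancy-based estimator under the simplifying assumption that each device picks a preamble uniformly at random; it is an illustration of the idea, not the authors' formula.

```python
import numpy as np

def estimate_slot_load(n_idle: int, n_preambles: int) -> float:
    """Occupancy-based estimate of the number of access attempts in a slot from the count of
    idle preambles. Assumes each attempt picks a preamble uniformly at random, so a preamble
    is idle with probability roughly exp(-G/N); inverting that relation gives the estimate."""
    p_idle = max(n_idle, 1) / n_preambles        # clamp to avoid log(0) when no preamble is idle
    return -n_preambles * np.log(p_idle)

# Illustrative check against one simulated slot.
rng = np.random.default_rng(0)
N, true_load = 64, 80
choices = rng.integers(0, N, size=true_load)     # each device picks one of N preambles at random
n_idle = N - len(np.unique(choices))
print(true_load, round(estimate_slot_load(n_idle, N), 1))
```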
8. Edge computing resource scheduling method based on container elastic scaling.
- Author
-
Wang, Huaijun, Deng, Erhao, Li, Junhuai, and Zhang, Chenfei
- Subjects
CONVOLUTIONAL neural networks, EDGE computing, MARKOV processes, ELECTRONIC data processing, PREDICTION models, REINFORCEMENT learning
- Abstract
Edge computing is a crucial technology to solve the problem of computing resources and bandwidth required for extensive edge data processing, as well as for meeting the real-time demands of applications. Container virtualization technology has become the underlying technical basis for edge computing due to its efficient performance. Because the traditional container scaling strategy has issues such as long response times, low resource utilization, and unpredictable container application loads, this article proposes a method for scheduling edge computing resources based on the elastic scaling of containers. Firstly, a container load prediction model (Trend Enhanced-Temporal Convolutional Network, TE-TCN) is designed based on the temporal convolutional neural network, which features an encoder-decoder structure. The encoder extracts potential temporal relationship features from the historical data of the container load, while the decoder identifies the trend item of the container load through the trend enhancement module. Subsequently, the information extracted by the encoder and decoder is fed into the fully connected layer to facilitate container load prediction using the dual-input ResNet method. Secondly, Markov decision process (MDP) is used to model the elastic expansion problem of containers in multi-objective optimization. Utilizing the prediction outcomes of the TE-TCN load prediction model, a time-varying action space is formulated to address the issue of excessive action space in conventional reinforcement learning. Subsequently, a predictive container scaling strategy based on reinforcement learning is devised to align with the application load patterns in the container environment, enabling adaptation to the surge in traffic generated by the container environment. Finally, the experimental results on the WorldCup98 dataset and the real dataset show that the TE-TCN model can accurately predict the container load change. Experiments in the actual environment demonstrate that the proposed strategy reduces the average response time by 16.2% when the burst load arrives, and increases the average CPU utilization by 44.6% when the jitter load occurs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
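The TE-TCN described above is built on a temporal convolutional network. The PyTorch sketch below shows only the core ingredient, dilated causal convolutions with residual connections predicting the next load value from a history window; the paper's encoder-decoder structure, trend-enhancement module, and reinforcement-learning scaler are not reproduced, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution that only looks at past time steps (left padding)."""
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

class TinyTCN(nn.Module):
    """Stack of dilated causal convolutions followed by a linear head that predicts
    the next load value from a window of past samples."""
    def __init__(self, hidden: int = 16, levels: int = 4):
        super().__init__()
        self.inp = nn.Conv1d(1, hidden, 1)
        self.blocks = nn.ModuleList(
            [CausalConv1d(hidden, kernel_size=3, dilation=2 ** i) for i in range(levels)]
        )
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, time) of past load values
        h = self.inp(x.unsqueeze(1))
        for block in self.blocks:
            h = torch.relu(block(h)) + h           # residual connection, length preserved
        return self.head(h[:, :, -1])              # last time step -> next-step prediction

model = TinyTCN()
window = torch.randn(8, 64)                        # 8 illustrative windows of 64 past samples
print(model(window).shape)                         # -> torch.Size([8, 1])
```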
9. MULTILEVEL ENSEMBLE MODEL FOR LOAD PREDICTION ON HOSTS IN FOG COMPUTING ENVIRONMENT.
- Author
-
BAWA, Shabnam, RANA, Prashant Singh, and TEKCHANDANI, Rajkumar
- Subjects
VIRTUAL machine systems, DECISION trees, RANDOM forest algorithms, MACHINE learning, PREDICTION models
- Abstract
With the growing demand for various IoT applications, fog nodes frequently become overloaded. Fog computing requires effective load balancing to maximize resource utilization, and it is essential to determine the load on each host to achieve workload consolidation. Various parameters, including CPU utilization, the number of CPU cores, RAM, memory allocated, memory available, disk I/O, and network I/O, are employed to better comprehend host workloads. In the proposed work, the host's load status is classified into three categories (under-loaded, balanced, and overloaded) using an ensemble approach. Further, the proposed work considers three case studies in which varying numbers of virtual machines (VMs) are executed with various parameter combinations. In each case study, a different number of VMs are executed in parallel on two different platforms. In the proposed study, we predicted the load on multiple hosts by employing a variety of advanced machine-learning models. To construct an ensemble model, we selected the models with higher accuracy based on the retrieved performance evaluation criteria. The ensemble method is applied to deal with the worst-case scenario of model prediction. For the selected case studies, the Random Forest, AdaBoost, Gradient Boosting, and Decision Tree models perform better than other models. These state-of-the-art predictive models are outperformed by our proposed ensemble model, which achieves an improved accuracy of nearly 82% by correctly classifying hosts as overloaded, under-loaded, or balanced. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
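The ensemble in the record above combines Random Forest, AdaBoost, Gradient Boosting, and Decision Tree classifiers to label hosts as under-loaded, balanced, or overloaded. A minimal sketch with scikit-learn follows; the synthetic host metrics and the rule used to generate labels are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative host metrics: CPU %, cores, RAM used (GB), disk I/O and network I/O (MB/s).
rng = np.random.default_rng(1)
X = rng.uniform([0, 1, 0, 0, 0], [100, 64, 256, 500, 1000], size=(2000, 5))
y = np.digitize(X[:, 0], [40, 80])            # 0=under-loaded, 1=balanced, 2=overloaded (toy rule)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier()),
                ("ada", AdaBoostClassifier()),
                ("gb", GradientBoostingClassifier()),
                ("dt", DecisionTreeClassifier())],
    voting="hard",                            # majority vote across the four base models
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))
```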
10. Power Load Prediction Algorithm Based on Wavelet Transform.
- Author
-
Xu Chen, Haomiao Zhang, Chao Zhang, Zhiqiang Cheng, and Yinzhe Xu
- Subjects
PATTERN recognition systems, MACHINE learning, TECHNOLOGICAL innovations, DIGITAL twins, PROCESS capability
- Abstract
To address the environmental impact, low efficiency, and poor accuracy of existing power load prediction methods, this study innovatively proposes a power load prediction system that combines wavelet transform with digital twin technology. Compared with similar power load prediction methods, the proposed method achieved the highest power load prediction accuracy rate of 97.26%, with the lowest MAPE and RMSE being only 3.96% each. Our proposed method has good noise resistance and overcomes the disadvantage of traditional power load prediction methods that are easily affected by the environment. Moreover, the false detection rate of the load information data obtained from the power system in the Fuxin area from 2022 to 2023 was less than 5%, further verifying the reliability of the proposed method. This achievement is attributed to the powerful signal processing capabilities of the discrete wavelet transform, advanced pattern recognition and prediction capabilities of these three deep learning network algorithms, and the intelligence of digital twin technology. The combination of these three elements has brought new technological breakthroughs to the field of power load prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Satellite Internet of Things Load Estimation and Prediction [卫星物联网负载量估计及预测].
- Author
-
茹习文, 王海涛, and 张更新
- Published
- 2024
- Full Text
- View/download PDF
12. Two-Stage IoT Computational Task Offloading Decision-Making in MEC with Request Holding and Dynamic Eviction.
- Author
-
Wang, Dayong, Bakar, Kamalrulnizam Bin Abu, and Isyaku, Babangida
- Subjects
EDGE computing, INTERNET of things, ENERGY consumption, DECISION making, EVICTION, DEEP learning
- Abstract
The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task offloading in Multi-access Edge Computing (MEC). However, existing task-offloading optimization methods typically assume that MEC's computing resources are unlimited, and there is a lack of research on the optimization of task offloading when MEC resources are exhausted. In addition, existing solutions decide whether to accept an offloaded task request based only on the decision result of the current time slot, and lack support for multiple retries in subsequent time slots, causing TDs to miss potential offloading opportunities. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource release estimation are integrated to infer the probability of a request being accepted in a subsequent time slot. The framework continuously learns optimized decision-making experience to increase the success rate of task offloading based on deep learning technology. Simulation results show that TSODF reduces TDs' total energy consumption and task execution delay and improves the task offloading rate and system resource utilization compared to the benchmark method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Optimization of Electricity Load Forecasting Model based on Multivariate Time Series Analysis.
- Author
-
Zhuo Wang, Yuchen Luo, Wei Wu, Lei Cao, and Zhun Li
- Subjects
SHORT-term memory, LONG-term memory, TIME series analysis, ENERGY consumption, ECONOMIC equilibrium, DEMAND forecasting
- Abstract
Due to rising demand and expanding economies, forecasting electricity loads is vital for electrical system management. Precise forecasts ensure both economic stability and effective utilization. Prediction establishes the basis for generating schedules and managing energy, which is crucial for power stations and transmission facilities. The purpose of this research is to develop an efficient load prediction approach. Hence, this study presents a novel fine-tuned backtracking search-driven log-sigmoid recurrent network (FBS-LRN) framework for improved thermal electricity load prediction. In the proposed framework, the FBS optimization strategy is applied to a recurrent network based on long short-term memory (LSTM) with a log-sigmoid activation function. The FBS optimization approach is first employed to tune the LSTM's internal characteristics, addressing the issue that the LSTM's performance is affected by the unpredictability of its internal properties. Next, the electricity load forecasting framework based on the proposed FBS-LRN is implemented on the Python platform and evaluated against several criteria. The comprehensive study reveals that the suggested approach has superior prediction accuracy and efficacy compared to current models. Higher-quality load forecasts can aid planning for thermal power production and consumption in the electrical system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
14. Hourly load prediction based feature selection scheme and hybrid CNN‐LSTM method for building's smart solar microgrid.
- Author
-
Da, Thao Nguyen, Cho, Ming‐Yuan, and Thanh, Phuong Nguyen
- Subjects
FEATURE selection, MACHINE learning, CONVOLUTIONAL neural networks, MICROGRIDS, BATTERY management systems, DEEP learning
- Abstract
Short-term load prediction is a critical operation in the peak demand administration and power generation scheduling of buildings that integrate a smart solar microgrid (SSM). Many research studies have shown that hybrid deep learning strategies achieve better accuracy and feasibility in practical applications than individual algorithms. Moreover, many buildings have integrated a rooftop SSM with a battery management system (BMS) to enhance energy efficiency management. However, traditional methodologies only process the weather parameters and power demand information for short-term load prediction, ignoring the data collected from the SSM and BMS by advanced metering infrastructure (AMI), which could improve prediction accuracy. In this research, extensive accumulated data of the building and SSM are collected before the methodology is implemented. Considering the diversity of accumulated parameters from the SSM and BMS, an adaptive convolutional neural network long short-term memory (CNN-LSTM) is proposed for hourly electrical load prediction. The CNN extracts the critical large-scale input features, while the LSTM achieves more accurate forecasts. A Pearson correlation matrix is calculated for the feature selection scheme across different data units. Hyperparameter tuning is used to obtain the optimized hybrid CNN-LSTM algorithm. K-fold cross-validation is employed for verification against other deep learning algorithms, including LSTM, GRU, CNN, and Bi-LSTM. The results show that the hybrid CNN-LSTM achieves improvements of 20.57%, 29.63%, and 19.06% in MSE, MAE, and MAPE, and 21.24%, 22.02%, and 3.82% in validation MSE, MAE, and MAPE, respectively. The hybrid CNN-LSTM combined with the feature selection scheme achieves superior prediction accuracy, demonstrating its suitability for integration into the energy management system (EMS) of the building's SSM. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
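The record above pairs Pearson-correlation feature selection with a hybrid CNN-LSTM. The sketch below shows one plausible shape of both pieces, a correlation-threshold filter and a Conv1d-into-LSTM model in PyTorch; the threshold, layer sizes, column names, and feature count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import pandas as pd
import torch
import torch.nn as nn

def select_features(df: pd.DataFrame, target: str, threshold: float = 0.3) -> list:
    """Keep inputs whose absolute Pearson correlation with the target exceeds the threshold."""
    corr = df.corr(numeric_only=True)[target].drop(target)
    return corr[corr.abs() >= threshold].index.tolist()

class CNNLSTM(nn.Module):
    """Conv1d extracts local patterns from each input window, an LSTM models the temporal
    dependence, and a linear head outputs the next-hour load."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.conv = nn.Conv1d(n_features, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                  # x: (batch, time, features)
        h = torch.relu(self.conv(x.transpose(1, 2)))       # -> (batch, hidden, time)
        out, _ = self.lstm(h.transpose(1, 2))              # -> (batch, time, hidden)
        return self.head(out[:, -1])                       # last step -> next-hour load

# Illustrative use: pick correlated inputs, then feed 24-step windows of them to the model.
frame = pd.DataFrame(np.random.default_rng(0).normal(size=(500, 5)),
                     columns=["temp", "humidity", "pv_power", "soc", "load"])
frame["load"] += 0.8 * frame["temp"]                       # make one input clearly relevant
kept = select_features(frame, target="load")
model = CNNLSTM(n_features=len(kept))
windows = torch.randn(4, 24, len(kept))                    # (batch, time, selected features)
print(kept, model(windows).shape)
```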
15. End-to-End Top-Down Load Forecasting Model for Residential Consumers.
- Author
-
Parkash, Barkha, Lie, Tek Tjing, Li, Weihua, and Tito, Shafiqur Rahman
- Subjects
BOX-Jenkins forecasting, CONSUMERS
- Abstract
This study presents an efficient end-to-end (E2E) learning approach for the short-term load forecasting of hierarchically structured residential consumers based on the principles of a top-down (TD) approach. This technique employs a neural network to predict load at lower hierarchical levels based on the aggregated load at the top. A simulation is carried out with nine years (2013 to 2021) of energy consumption data from 50 houses located in the United States of America. Simulation results demonstrate that the E2E model, which uses a single model for different nodes and is based on the principles of a top-down approach, shows strong potential for improving forecasting accuracy, making it a valuable tool for grid planners. Model inputs are derived from the aggregated residential category and the specific cluster targeted for forecasting. The proposed model can accurately forecast any residential consumption cluster without requiring any hyperparameter adjustments. According to the experimental analysis, the E2E model outperformed a two-stage methodology and benchmarked Seasonal Autoregressive Integrated Moving Average (SARIMA) and Support Vector Regression (SVR) models by a mean absolute percentage error (MAPE) of 2.27%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Multi‐Task Residential Short‐Term Load Prediction Based on Contrastive Learning.
- Author
-
Zhang, Wuqing, Li, Jianbin, Wu, Sixing, and Guo, Yiguo
- Subjects
ELECTRIC power production, ELECTRIC power consumption, FORECASTING, CONSUMPTION (Economics)
- Abstract
Load forecasting is crucial for the operation and planning of electricity generation, transmission, and distribution. In the context of short-term electricity load prediction for residential users, single-task learning methods fail to consider the relationship among multiple residential users and have limited feature extraction capabilities for residential load data. It is challenging to obtain sufficient information from individual residential user load predictions, resulting in poor prediction performance. To address these issues, we propose a framework for multi-task residential short-term load prediction based on contrastive learning. Firstly, clustering techniques are used to select residential users with similar electricity consumption patterns. Secondly, contrastive learning is employed for pre-training, extracting trend and seasonal feature representations of load sequences and thereby enhancing the feature extraction capability for residential load data. Lastly, a multi-task learning prediction framework is utilized to learn shared information among multiple residential users' loads, enabling short-term load prediction for multiple residences. The proposed load prediction framework has been implemented on two real-world load data sets, and the experimental results demonstrate that it effectively reduces the prediction errors for residential load prediction. © 2024 Institute of Electrical Engineers of Japan and Wiley Periodicals LLC. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Research on Summer Cooling Load Prediction of Combined Cooling, Heating and Power System
- Author
-
Qi, Peiwen, Gao, Wenzhong, Wen, Fushuan, editor, and Aris, Ishak Bin, editor
- Published
- 2024
- Full Text
- View/download PDF
18. Edge computing resource scheduling method based on container elastic scaling
- Author
-
Huaijun Wang, Erhao Deng, Junhuai Li, and Chenfei Zhang
- Subjects
Container elastic scaling, Convolutional neural network, Load prediction, Reinforcement learning, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Edge computing is a crucial technology to solve the problem of computing resources and bandwidth required for extensive edge data processing, as well as for meeting the real-time demands of applications. Container virtualization technology has become the underlying technical basis for edge computing due to its efficient performance. Because the traditional container scaling strategy has issues such as long response times, low resource utilization, and unpredictable container application loads, this article proposes a method for scheduling edge computing resources based on the elastic scaling of containers. Firstly, a container load prediction model (Trend Enhanced-Temporal Convolutional Network, TE-TCN) is designed based on the temporal convolutional neural network, which features an encoder-decoder structure. The encoder extracts potential temporal relationship features from the historical data of the container load, while the decoder identifies the trend item of the container load through the trend enhancement module. Subsequently, the information extracted by the encoder and decoder is fed into the fully connected layer to facilitate container load prediction using the dual-input ResNet method. Secondly, Markov decision process (MDP) is used to model the elastic expansion problem of containers in multi-objective optimization. Utilizing the prediction outcomes of the TE-TCN load prediction model, a time-varying action space is formulated to address the issue of excessive action space in conventional reinforcement learning. Subsequently, a predictive container scaling strategy based on reinforcement learning is devised to align with the application load patterns in the container environment, enabling adaptation to the surge in traffic generated by the container environment. Finally, the experimental results on the WorldCup98 dataset and the real dataset show that the TE-TCN model can accurately predict the container load change. Experiments in the actual environment demonstrate that the proposed strategy reduces the average response time by 16.2% when the burst load arrives, and increases the average CPU utilization by 44.6% when the jitter load occurs.
- Published
- 2024
- Full Text
- View/download PDF
19. Energy-efficient optimization strategy based on elastic data migration in big data streaming platform
- Author
-
Yonglin PU, Xiaolong XU, Jiong YU, Ziyang LI, and Binglei GUO
- Subjects
stream computing, load prediction, resource constraint, data migration, energy-efficient, Telecommunication, TK5101-6720
- Abstract
Focused on the problem that the stream computing platform suffers from high energy consumption and low efficiency due to the lack of consideration for energy efficiency in the design process, an energy-efficient optimization strategy based on elastic data migration in a big data streaming platform (EEDM-BDSP) was proposed. Firstly, models of load prediction and resource judgment were set up, and a load prediction algorithm was designed to predict the load tendency and determine node resource occupancy, so as to find nodes with resource overload and redundancy. Secondly, models of the resource constraint and the optimal data migration were set up, and an optimal data migration algorithm was proposed to migrate data with the purpose of improving node resource utilization. Finally, an energy consumption model was set up to calculate the energy consumption saved by the cluster after data migration. The experimental results show that EEDM-BDSP responds to changes in node resources in the cluster in a timely manner, and that resource utilization and energy efficiency are improved.
- Published
- 2024
- Full Text
- View/download PDF
20. Research on electric vehicle charging load prediction method based on spectral clustering and deep learning network.
- Author
-
Fang Xin, Xie Yang, Wang Beibei, Xu Ruilin, Mei Fei, and Zheng Jianyong
- Subjects
DEEP learning, ELECTRIC charge, ELECTRIC vehicles, ARTIFICIAL neural networks, STATISTICAL sampling, CONVOLUTIONAL neural networks
- Abstract
This article presents a method for predicting the charging load of electric vehicles (EVs) using spectral clustering and deep learning networks. Monte Carlo simulation is used to generate historical load data, and charging-load features are extracted from the historical data of a charging station in Nanjing, China. Gray correlation analysis measures the similarity of curve shapes, and a curve similarity matrix is constructed from these morphological characteristics; spectral clustering then divides the data into clusters with distinct charging-load characteristics. For each cluster, a CNN-LSTM model is constructed that combines the spatial feature extraction of CNNs with the temporal modeling of LSTMs to forecast the charging load. Compared with traditional prediction models without clustering, the proposed method shows improved accuracy and can support the planning and management of charging infrastructure. Further research is needed to address limitations such as data dependency, uncertainty handling, and model complexity. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
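The method above clusters daily charging-load curves by shape similarity before training one CNN-LSTM per cluster. The sketch below illustrates only the clustering step with scikit-learn's SpectralClustering on a precomputed affinity matrix; plain correlation is used as a stand-in for the grey-correlation similarity named in the abstract, and the profiles are synthetic.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Illustrative daily charging-load profiles (96 quarter-hour points each).
rng = np.random.default_rng(2)
profiles = np.vstack([np.roll(np.sin(np.linspace(0, 2 * np.pi, 96)), s)
                      + 0.1 * rng.standard_normal(96)
                      for s in rng.integers(0, 96, size=200)])

# Affinity matrix from pairwise correlation of curve shapes, mapped into [0, 1].
sim = (np.corrcoef(profiles) + 1) / 2

labels = SpectralClustering(n_clusters=3, affinity="precomputed", random_state=0).fit_predict(sim)
print(np.bincount(labels))        # cluster sizes; a CNN-LSTM would then be trained per cluster
```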
21. Informer-based prediction of load and photovoltaic output coefficients [基于Informer的负荷及光伏出力系数预测].
- Author
-
缪月森, 夏红军, 黄宁洁, 李云, and 周世杰
- Published
- 2024
- Full Text
- View/download PDF
22. Study on tiered storage algorithm based on heat correlation of astronomical data.
- Author
-
Ye, Xin-Chen, Zhang, Hai-Long, Wang, Jie, Zhang, Ya-Zhou, Du, Xu, Wu, Han, and Riccio, Giuseppe
- Subjects
RADIO telescopes, GEODETIC astronomy, ELECTRONIC data processing, ALGORITHMS, CLOUD storage, PULSAR detection
- Abstract
With the surge in astronomical data volume, modern astronomical research faces significant challenges in data storage, processing, and access. The I/O bottleneck issue in astronomical data processing is particularly prominent, limiting the efficiency of data processing. To address this issue, this paper proposes a tiered storage algorithm based on the access characteristics of astronomical data. The C4.5 decision tree algorithm is employed as the foundation to implement an astronomical data access correlation algorithm. Additionally, a data copy migration strategy is designed based on tiered storage technology to achieve efficient data access. Preprocessing tests were conducted on 418 GB of NSRT (Nanshan Radio Telescope) formaldehyde spectral line data, showcasing that tiered storage can potentially reduce data processing time by up to 38.15%. Similarly, utilizing 802.2 GB of data from FAST (Five-hundred-meter Aperture Spherical radio Telescope) observations for pulsar search data processing tests, the tiered storage approach demonstrated a maximum reduction of 29.00% in data processing time. In concurrent testing of data processing workflows, the proposed astronomical data heat correlation algorithm in this paper achieved an average reduction of 17.78% in data processing time compared to centralized storage. Furthermore, in comparison to traditional heat algorithms, it reduced data processing time by 5.15%. The effectiveness of the proposed algorithm is positively correlated with the associativity between the algorithm and the processed data. The tiered storage algorithm based on the characteristics of astronomical data proposed in this paper is poised to provide algorithmic references for large-scale data processing in the field of astronomy in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
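The tiered-storage work above classifies astronomical data "heat" with a C4.5 decision tree to decide which copies belong on the fast tier. scikit-learn has no C4.5 implementation, so the sketch below uses an entropy-based CART tree as a close stand-in; the access-log features and labelling rule are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative access-log features per file: recent access count, days since last access,
# file size (GB), and whether related files were accessed in the same pipeline run.
rng = np.random.default_rng(3)
X = np.column_stack([rng.poisson(5, 1000), rng.exponential(10, 1000),
                     rng.uniform(0.1, 50, 1000), rng.integers(0, 2, 1000)])
hot = ((X[:, 0] > 5) & (X[:, 1] < 7)) | (X[:, 3] == 1)     # toy rule labelling "hot" data

# Entropy-based tree as a stand-in for C4.5; predictions drive copy migration to the fast tier.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4).fit(X, hot)
print("move to fast tier:", tree.predict(X[:5]))
```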
23. SHORT TERM LOAD PREDICTION OF REGIONAL HEATING AND HEAT STORAGE SYSTEM BASED ON NEURAL NETWORK.
- Author
-
Yang LIU and HuiJie LIU
- Subjects
HEAT storage, BOX-Jenkins forecasting, BACK propagation
- Abstract
Accurate heat load prediction is the key to achieving fine control, energy conservation, and carbon reduction in regional hydronics. Taking the regional hydronics of a city in northern China as the research object, the author uses a back-propagation neural network (BPNN), a genetic algorithm (GA)-optimized BPNN (GA-BPNN), and an autoregressive integrated moving average model combined with BPNN (ARIMA-BPNN) to predict its heat load, and compares the accuracy and applicability of each prediction method. The results indicate that GA-BPNN has the smallest prediction error, followed by ARIMA-BPNN, but the latter requires less data for prediction. In practical engineering, if a sufficient amount of heat load data is available, it is recommended to use GA-BPNN; if only a small amount of data is available, the ARIMA-BPNN prediction method can be used. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Energy-efficient optimization strategy based on elastic data migration in big data streaming platform.
- Author
-
PU Yonglin, XU Xiaolong, YU Jiong, LI Ziyang, and GUO Binglei
- Abstract
Focused on the problem that the stream computing platform was suffering from the high energy consumption and low efficiency due to the lack of consideration for energy efficiency in designing process, an energy-efficient optimization strategy based on elastic data migration in big data streaming platform (EEDM-BDSP) was proposed. Firstly, models of the load prediction and the resource judgment were set up, and the load prediction algorithm was designed, which predicted the load tendency and determine node resource occupancy, so as to find nodes of resource overload and redundancy. Secondly, models of the resource constraint and the optimal data migration were set up, and the optimal data migration algorithm was proposed, which data migration for the purpose of improving node resource utilization. Finally, model of the energy consumption was set up to calculate the energy consumption saved by the cluster after data migration. The experimental results show that the EEDM-BDSP changes node resources in the cluster can responded on time, the resource utilization and the energy-efficient are improved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Real-Time Prewarning Method of Subway Turnout Jamming Failure Based on Short-Term Load Prediction and Ensemble LVQ Learning
- Author
-
Hao Wen, Jie Xiao, and Guangxiang Xie
- Subjects
Subway, turnout, jamming failure, real-time prewarning, load prediction, ensemble LVQ learning, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Turnout is the key equipment for realizing the turnback of subway trains. Frequent movements and environmental changes often lead to abnormal increases in resistance during turnout conversion, resulting in jamming failures that directly affect traffic safety and efficiency. In order to effectively foresee the risk of failure during operation and minimize its adverse effects, a real-time turnout jamming failure prewarning method is proposed. Firstly, a weighted grey prediction machine using PSO for weight optimization (PSO-WGPM) is proposed to predict the short-term action load index, and the predicted index series is used to characterize future changes in the resistance state of the turnout. Secondly, the predicted index and the generated index are cascaded into a risk identification index series, and a multi-dimensional hybrid prewarning feature set with time continuity is constructed from the time-domain characteristics and overload statistical characteristics of the risk identification index series. The prewarning feature set is then fed into a learning vector quantization (LVQ) network for prewarning discrimination, and ensemble learning based on a hybrid voting strategy is designed to obtain the final prewarning result, in order to suit learning from unbalanced, small-scale samples. After a prewarning is issued, the time range in which the jamming failure will occur is inferred from the overload rate of the predicted index series. The experimental results show that the proposed method improves the prewarning success rate and effectively controls false alarms; it can infer the time range of fault occurrence, achieving prewarning accuracy; and the prewarning calculation time is much shorter than the failure lead time, meeting the timeliness requirements of applications.
- Published
- 2024
- Full Text
- View/download PDF
26. Machine Learning Approaches for Power System Parameters Prediction: A Systematic Review
- Author
-
Tolulope David Makanju, Thokozani Shongwe, and Oluwole John Famoriji
- Subjects
Frequency, load prediction, machine learning, power system, voltage prediction, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Prediction in the power system network is crucial as the network expands. Several methods have been used to predict the load on a network, from short-term to long-term load prediction, to ensure adequate planning for future use. Since the power system network is dynamic, predicting other parameters, such as voltage and frequency, is also necessary for effective planning against contingencies. Also, most power systems are interconnected networks; using isolated variables to predict any part of the network tends to reduce prediction accuracy. This review analyzed different machine learning approaches used for load, frequency, and voltage prediction in power systems and proposed a machine learning predictive approach that uses network topology behavior as the input variables to the model. The proposed approach was tested using a regression model, a decision tree regressor, and long short-term memory. The analysis results indicate that with network topology behavior as the model input, predictions are more accurate than when isolated variables of a particular bus in a network are used. This work suggests that network topology behavior data should be used for prediction in a power system network rather than isolated data of a particular bus or exogenous data. Therefore, this research recommends that the accuracy of different predictive models be tested on power system parameters by hybridizing the network topology behavior dataset and the exogenous dataset.
- Published
- 2024
- Full Text
- View/download PDF
27. ECP: Error-Aware, Cost-Effective and Proactive Network Slicing Framework
- Author
-
Amr E. Aboeleneen, Alaa A. Abdellatif, Aiman M. Erbad, and Amr M. Salem
- Subjects
Reinforcement learning, network slicing, load prediction, smart health, error-correction, Telecommunication, TK5101-6720, Transportation and communications, HE1-9990
- Abstract
Recent advancements in Software Defined Networks (SDN), Open Radio Access Network (O-RAN), and 5G technology have significantly expanded the capabilities of wireless networks, extending beyond mere data transmission. This progression has led to the emergence of Virtual Networks (VN) and Network Slicing, enabling industries to enhance their services and applications by establishing virtual networks that utilize shared physical infrastructure. Many works in the literature have considered optimizing the allocation of on-demand slices, assuming the absolute availability of resources and accurate knowledge of their load. However, accurately allocating future network slices remains challenging due to errors in load prediction, diverse Key Performance Indicators (KPIs), resource price variations, and the potential for over- or under-provisioning. This study presents a two-phase intelligent approach to address these challenges. The framework proactively predicts different slice loads while considering prediction errors in optimizing future slices with varied KPIs in a cost-efficient manner. Specifically, our method utilizes historical load data per service and employs AI-based forecasts for service load prediction. Subsequently, it employs a Deep Reinforcement Learning (DRL) agent on O-RAN's virtual Control Unit (vCU) and virtual Distributed Unit (vDU) to correct prediction errors and optimize the cost of slice allocation based on service KPI requirements, ultimately pre-allocating future network slices at reduced costs. Through experimental validation against various baselines and state-of-the-art solutions, we demonstrate the efficacy of our proposed solution, achieving a notable reduction (37-51%) in the average cost of allocated slices while requiring only 1.5-7% additional resources compared to the state of the art.
- Published
- 2024
- Full Text
- View/download PDF
28. Chaotic Equilibrium Optimizer-Based Green Communication With Deep Learning Enabled Load Prediction in Internet of Things Environment
- Author
-
Mohammed Aljebreen, Marwa Obayya, Hany Mahgoub, Saud S. Alotaibi, Abdullah Mohamed, and Manar Ahmed Hamza
- Subjects
Internet of Things, green communication, load prediction, clustering process, equilibrium optimizer, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Currently, there is an emerging requirement for applications related to the Internet of Things (IoT). Although the potential of IoT applications is huge, there are recurring limitations, namely energy optimization, device heterogeneity, memory, security, privacy, and load balancing (LB), that must be addressed. Such constraints must be optimized to enhance the network's efficiency. Hence, the core objective of this study is to formulate an intelligent cluster head (CH) selection method to establish green communication in IoT. Therefore, this study develops a chaotic equilibrium optimizer-based green communication with deep learning-enabled load prediction (CEOGC-DLLP) in the IoT environment. The presented CEOGC-DLLP technique mainly accomplishes green communication via clustering and future load prediction processes. To do so, the presented CEOGC-DLLP model derives the CEOGC technique with a fitness function encompassing multiple parameters. In addition, the presented CEOGC-DLLP technique follows the deep belief network (DBN) model for the load prediction process, which helps to balance the load among the IoT devices for effective green communication. The experimental assessment of the CEOGC-DLLP technique is performed and the outcomes are investigated under different aspects. The comparison study demonstrates the superiority of the CEOGC-DLLP method over existing techniques, with a maximum throughput of 64662 packets and a minimum MSE of 0.2956.
- Published
- 2024
- Full Text
- View/download PDF
29. Research on Random Access of Satellite Internet of Things Based on Load Prediction
- Author
-
Xiwen MAO, Haitao WANG, and Gengxin ZHANG
- Subjects
satellite internet of things, random access, load prediction, load estimation, spread spectrum communication, Information technology, T58.5-58.64
- Abstract
During the vigorous development of the satellite internet of things, random access protocols are regarded as the most suitable communication protocols due to their low signaling overhead, high transmission efficiency, high flexibility, and ease of implementation. However, because of drastic load changes, a conventional random access mode only has advantages within a specific load range, and dynamic allocation of random access resources requires an accurate number of time-slot access applications. In view of the above problems, an adaptive random access scheme was proposed, which adaptively selects the appropriate random access mode according to the future load so as to improve the system throughput. The load can be obtained through the proposed load estimation and prediction scheme. When narrowband interference exists in the transmission environment, spread-spectrum communication is introduced so that the adaptive random access scheme can still operate successfully. Moreover, the proposed adaptive scheme also improves the system throughput.
- Published
- 2023
- Full Text
- View/download PDF
30. Study on tiered storage algorithm based on heat correlation of astronomical data
- Author
-
Xin-Chen Ye, Hai-Long Zhang, Jie Wang, Ya-Zhou Zhang, Xu Du, and Han Wu
- Subjects
tiered storage, astronomical data processing, load prediction, decision tree, high performance computing, Astronomy, QB1-991, Geophysics. Cosmic physics, QC801-809
- Abstract
With the surge in astronomical data volume, modern astronomical research faces significant challenges in data storage, processing, and access. The I/O bottleneck issue in astronomical data processing is particularly prominent, limiting the efficiency of data processing. To address this issue, this paper proposes a tiered storage algorithm based on the access characteristics of astronomical data. The C4.5 decision tree algorithm is employed as the foundation to implement an astronomical data access correlation algorithm. Additionally, a data copy migration strategy is designed based on tiered storage technology to achieve efficient data access. Preprocessing tests were conducted on 418GB NSRT (Nanshan Radio Telescope) formaldehyde spectral line data, showcasing that tiered storage can potentially reduce data processing time by up to 38.15%. Similarly, utilizing 802.2 GB data from FAST (Five-hundred-meter Aperture Spherical radio Telescope) observations for pulsar search data processing tests, the tiered storage approach demonstrated a maximum reduction of 29.00% in data processing time. In concurrent testing of data processing workflows, the proposed astronomical data heat correlation algorithm in this paper achieved an average reduction of 17.78% in data processing time compared to centralized storage. Furthermore, in comparison to traditional heat algorithms, it reduced data processing time by 5.15%. The effectiveness of the proposed algorithm is positively correlated with the associativity between the algorithm and the processed data. The tiered storage algorithm based on the characteristics of astronomical data proposed in this paper is poised to provide algorithmic references for large-scale data processing in the field of astronomy in the future.
- Published
- 2024
- Full Text
- View/download PDF
31. Power Load Prediction Based on Multi-IoT Monitoring Sensors and Protection Detection Response Recovery Network Security Model.
- Author
-
Yiming Zhang, Qi Huang, Shaoyang Yin, Xin Luo, and Shuo Ding
- Subjects
DEEP learning, SMART power grids, CONVOLUTIONAL neural networks, COMPUTER network security, AUTOREGRESSIVE models, DETECTORS, TREND analysis
- Abstract
With the expansion and deployment of smart metering in power grid management and control, the need for security protection in the power system is continuously growing. However, the current construction of a comprehensive defense system for terminal data is inadequate. In this paper, we report a study on power loads to address the security challenges facing grid management, using the protection detection response recovery (PDRR) network security model as the basis. Firstly, we design an end-to-end security perception architecture using IoT technology and develop an optimization model for monitoring sensor information. In addition, we construct a data aggregation model that improves adversarial domain adaptation and incorporates deep convolutional neural networks to extract features. The proposed model enhances short-term load forecasting by combining linear predictions from autoregressive models with the nonlinear trend analysis capabilities of deep learning models. The performance of the proposed method is compared with those of the Adam and stochastic gradient descent (SGD) optimizers. Experimental results confirm that the proposed method ensures reliable data transmission, facilitates effective classification aggregation of heterogeneous data, and yields fast and accurate load forecasting results. Furthermore, the proposed method enhances the robustness of the model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. ETSA-LP: ENSEMBLE TIME-SERIES APPROACH FOR LOAD PREDICTION IN CLOUD.
- Author
-
VERMA, Shveta and BALA, Anju
- Subjects
TIME series analysis, DYNAMIC loads, ERROR rates, FORECASTING, RESEARCH personnel, CLOUD storage, CLOUD computing
- Abstract
Cloud computing offers seemingly unlimited services, allowing researchers to access resources on demand for deploying various applications. However, since the demand for cloud resources is dynamic, it significantly affects the load on the system. Thus, this research emphasizes deploying a dynamic and autonomic load prediction framework. This paper proposes an Ensemble Time-Series Approach for Load Prediction (ETSA-LP), which integrates various time-series analysis techniques for predicting CPU and memory utilization. To evaluate the efficiency of the proposed approach, a series of experiments on Google and PlanetLab traces have been conducted in a real cloud environment. The results were compared across different performance metrics and models, and the model with the highest accuracy and lowest error rate was selected as the best among others. The proposed ensemble approach gives the best performance over the existing models, showing a remarkable accuracy improvement while reducing the error rate and execution time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
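ETSA-LP combines several time-series techniques into one load predictor. The sketch below shows the general ensemble idea, a weighted blend of simple forecasters for the next CPU-utilization sample; the component models and weights are illustrative, not the ones evaluated in the paper.

```python
import numpy as np

def naive(history):            return history[-1]
def moving_average(history):   return float(np.mean(history[-12:]))
def exp_smoothing(history, a=0.3):
    level = history[0]
    for x in history[1:]:
        level = a * x + (1 - a) * level
    return level

def ensemble_forecast(history, weights=(0.2, 0.4, 0.4)):
    """Weighted combination of simple time-series forecasters for the next utilization sample."""
    preds = np.array([naive(history), moving_average(history), exp_smoothing(history)])
    return float(np.dot(weights, preds))

cpu = list(40 + 10 * np.sin(np.arange(200) / 8))   # illustrative CPU-utilization trace (%)
print(ensemble_forecast(cpu))
```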
33. Nearly Zero-Energy Building Load Forecasts through the Competition of Four Machine Learning Techniques.
- Author
-
Qin, Haosen, Yu, Zhen, Li, Zhengwei, Li, Huai, and Zhang, Yunyun
- Subjects
MACHINE learning, BUILDING-integrated photovoltaic systems, ENERGY consumption of buildings, FEATURE selection, COOLING loads (Mechanical engineering), OFFICE buildings, FORECASTING
- Abstract
Heating, ventilation and air conditioning (HVAC) systems account for approximately 50% of the total energy consumption in buildings. Advanced control and optimal operation, seen as key technologies in reducing the energy consumption of HVAC systems, indispensably rely on an accurate prediction of the building's heating/cooling load. Therefore, the goal of this research is to develop a model capable of making such accurate predictions. To streamline the process, this study employs sensitivity and correlation analysis for feature selection, thereby eliminating redundant parameters, and addressing distortion problems caused by multicollinearity among input parameters. Four model identification methods including multivariate polynomial regression (MPR), support vector regression (SVR), multilayer perceptron (MLP), and extreme gradient boosting (XGBoost) are implemented in parallel to extract value from diverse building datasets. These models are trained and selected autonomously based on statistical performance criteria. The prediction models were deployed in a nearly zero-energy office building, and the impacts of feature selection, training set size, and real-world uncertainty factors were analyzed and compared. The results showed that feature selection considerably improved prediction accuracy while reducing model dimensionality. The research also recognized that prediction accuracy during model deployment can be influenced significantly by factors like personnel mobility during holidays and weather forecast uncertainties. Additionally, for nearly zero-energy buildings, the thermal inertia of the building itself can considerably impact prediction accuracy in certain scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
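Entry 33 above trains four model families in parallel after correlation-based feature selection and keeps the statistically best one. The sketch below mirrors that workflow with scikit-learn stand-ins (GradientBoostingRegressor replaces XGBoost so no extra package is needed); the correlation threshold and synthetic data are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=12, n_informative=6,
                       noise=5.0, random_state=0)          # stand-in building data

# Correlation-based feature selection: keep inputs well correlated with the load.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
X_sel = X[:, corr > 0.1]                                   # assumed threshold

candidates = {
    "MPR": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                                      random_state=0)),
    "GBR (XGBoost stand-in)": GradientBoostingRegressor(random_state=0),
}

scores = {name: cross_val_score(model, X_sel, y, cv=5, scoring="r2").mean()
          for name, model in candidates.items()}
print(scores, "-> selected:", max(scores, key=scores.get))
```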
34. FedGrid: A Secure Framework with Federated Learning for Energy Optimization in the Smart Grid.
- Author
-
Gupta, Harshit, Agarwal, Piyush, Gupta, Kartik, Baliarsingh, Suhana, Vyas, O. P., and Puliafito, Antonio
- Subjects
- *
FEDERATED learning , *SMART power grids , *RENEWABLE energy sources , *GRIDS (Cartography) , *RENEWABLE natural resources , *DATA privacy , *WIND power - Abstract
In the contemporary energy landscape, power generation comprises a blend of renewable and non-renewable resources, with the majority of electrical energy still supplied by non-renewable sources such as coal and gas. Renewable energy resources are challenged by their dependence on unpredictable weather conditions: solar energy hinges on clear skies, and wind energy relies on consistent and sufficient wind flow. Because of the finite supply and detrimental environmental impact of non-renewable energy sources, dependence on them must be reduced. This can be achieved by precisely predicting renewable energy generation using a data-driven approach. Electric load prediction accuracy plays a very significant role in this system: given an appropriate estimate of residential and commercial load, a strategy can be defined for supplying it efficiently from renewable and non-renewable sources through a smart grid, which analyzes demand and supply and devises the supply mechanism accordingly. Predicting these components, i.e., power generation and load, involves a data-driven approach in which sensitive data (such as user electricity consumption patterns and weather data near power generation sites) are used for model training, raising data privacy and security concerns. Hence, this work proposes Federated Smart Grid (FedGrid), a secure framework that predicts renewable energy generation and forecasts electric load in a privacy-oriented manner through federated learning. The framework collectively analyzes these predictive models for efficient electric supply. [ABSTRACT FROM AUTHOR] (A brief illustrative sketch of federated averaging follows this record.)
- Published
- 2023
- Full Text
- View/download PDF
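FedGrid, described above, trains forecasting models without centralizing raw consumption data. The snippet below is a minimal, framework-free sketch of the federated-averaging step that underlies most such schemes; the local linear model, client data, and round count are assumptions, not the FedGrid implementation.

```python
import numpy as np

def local_update(w, X, y, lr=0.01, epochs=20):
    """One client's gradient-descent refinement of the shared linear model."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([0.6, -0.3, 0.8])

# Each client holds private (features, load) data that never leaves the client.
clients = []
for _ in range(4):
    X = rng.normal(size=(120, 3))
    y = X @ true_w + rng.normal(0, 0.1, 120)
    clients.append((X, y))

w_global = np.zeros(3)
for round_ in range(30):                        # communication rounds
    local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # FedAvg: weight each client's model by its sample count.
    w_global = np.average(local_models, axis=0, weights=sizes)

print("recovered weights:", w_global.round(3))
```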
35. Assessment of modeling methods for predicting load resulting from hydrogen-air detonation.
- Author
-
Chen, Di, Wu, Chengqing, and Li, Jun
- Subjects
- *
RENEWABLE energy transition (Government policy) , *CHEMICAL kinetics , *CHEMICAL models , *CHEMICAL reactions , *EXPLOSIVES , *RESEARCH personnel - Abstract
As hydrogen becomes an increasingly vital component in the transition toward a sustainable energy system, its flammable and detonable properties necessitate a comprehensive understanding of its explosive characteristics. This study evaluated the accuracy and computational efficiency of an innovative numerical approach that integrates the CESE compressible CFD solver, a chemical reaction model, and a structural FEM solver within LS-DYNA to predict hydrogen detonation loads. Comparisons were made with the commonly used energy-equivalent methods, i.e., the TNT equivalent method and the high-pressure volume method, which utilizes multi-material ALE techniques. Hydrogen detonation test results from open-air space, open-air space with a blast wall, and semi-confined space were compared against numerical simulations. The results revealed that, for scaled distances exceeding 0.79 m/kg^(1/3), all three methods accurately predicted the peak overpressure. The TNT equivalent method exhibited an unexpectedly high energy efficiency factor exceeding 0.51, significantly surpassing the recommended range of 0.01–0.1 for typical vapor cloud accidents. The CESE-chemistry coupling method excelled in capturing overpressure duration and structural response owing to its consideration of chemical kinetics. As the scaled distance was reduced to 0.37 m/kg^(1/3), the CESE-chemistry coupling method maintained its proficiency in modelling pressure waves, while the TNT equivalent method overestimated peak pressure by 494%. Conversely, the high-pressure volume method underestimated the peak pressures within or near the H2–air cloud. Nevertheless, the CESE-chemistry coupling method required significantly higher computational costs, taking 15–20 times more computational time than the other two methods, with 60–70% of the total spent solving the chemical kinetics. The study concludes that for scenarios involving close scaled distances (less than 0.37 m/kg^(1/3)), or where the structure is located inside or near the gas cloud, the CESE-chemistry coupling method may be preferred despite its higher computational demands; for simulations prioritizing computational efficiency at larger scaled distances, the TNT equivalent method or the high-pressure volume method is recommended. These findings offer guidelines for researchers and engineering professionals engaged in assessing and mitigating the risks associated with hydrogen explosion accidents in the pursuit of safe and sustainable hydrogen utilization. [ABSTRACT FROM AUTHOR] (The scaled-distance and TNT-equivalence relations used here are recalled after this record.)
- Published
- 2023
- Full Text
- View/download PDF
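For readers unfamiliar with the scaled-distance notation in the entry above, the standard Hopkinson–Cranz scaling and the usual form of the TNT-equivalence conversion are recalled below; the symbols follow common blast-engineering convention rather than the paper's own notation, and the TNT detonation energy is an approximate textbook value.

```latex
% Hopkinson--Cranz scaled distance: standoff R scaled by the cube root of charge mass W.
Z = \frac{R}{W^{1/3}} \quad [\mathrm{m/kg^{1/3}}]

% TNT-equivalent mass of a fuel--air cloud: \eta is the energy efficiency factor
% (the paper reports values above 0.51, versus 0.01--0.1 typical for vapor clouds),
% m_f the fuel mass, H_f its heat of combustion, and E_{TNT} \approx 4.5\,\mathrm{MJ/kg}.
W_{\mathrm{TNT}} = \eta \, \frac{m_{f} \, H_{f}}{E_{\mathrm{TNT}}}
```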
36. Research on Random Access for the Satellite Internet of Things Based on Load Prediction.
- Author
-
茆习文, 王海涛, and 张更新
- Published
- 2023
- Full Text
- View/download PDF
37. Enhancing Load Prediction Accuracy using Optimized Support Vector Regression Models.
- Author
-
OLAWUYI, Abdulsemiu, AJEWOLE, Titus, LAWAL, Muyideen, and AWOFOLAJU, Tolulope
- Subjects
SUPPORT vector machines ,PREDICTION models ,MECHANICAL loads ,PARAMETER estimation ,PERFORMANCE evaluation - Abstract
This paper investigates the effect of Support Vector Regression (SVR) hyperparameter optimization on electrical load prediction. Accurate and robust load prediction helps policy makers in the energy sector make informed decisions and reduce losses. To this end, Bayesian optimization was employed to tune the hyperparameters, namely the regularization parameter and the epsilon, which were then used for the load prediction. In addition, the effect of the sliding-window length was evaluated, with window sizes varied from 1 to 5. The results showed that a sliding window of 1 produced the best-optimized hyperparameters, achieving the best evaluation metrics of 0.01912 MSE and 0.09493 MAE. [ABSTRACT FROM AUTHOR] (A brief illustrative sketch of SVR hyperparameter tuning follows this record.)
- Published
- 2023
- Full Text
- View/download PDF
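The entry above tunes the SVR regularization parameter C and epsilon with Bayesian optimization over sliding windows. The sketch below reproduces the overall loop with a plain random search standing in for the Bayesian optimizer (to stay dependency-free); the search ranges, the window of 1, and the synthetic load series are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, mean_absolute_error

def windowed(series, window):
    """Turn a load series into (previous `window` values -> next value) pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]

rng = np.random.default_rng(0)
load = 30 + 8 * np.sin(np.arange(600) / 12) + rng.normal(0, 1.5, 600)
X, y = windowed(load, window=1)                  # sliding window of 1, as in the paper
split = int(0.8 * len(y))

best = (np.inf, (None, None), None)
for _ in range(40):                              # random search in place of Bayesian opt.
    C = 10 ** rng.uniform(-1, 2)                 # assumed range for C
    eps = 10 ** rng.uniform(-3, 0)               # assumed range for epsilon
    model = make_pipeline(StandardScaler(), SVR(C=C, epsilon=eps))
    model.fit(X[:split], y[:split])
    mse = mean_squared_error(y[split:], model.predict(X[split:]))
    if mse < best[0]:
        best = (mse, (C, eps), model)

mse, (C, eps), model = best
mae = mean_absolute_error(y[split:], model.predict(X[split:]))
print(f"best C={C:.3g}, epsilon={eps:.3g}, MSE={mse:.4f}, MAE={mae:.4f}")
```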
38. A Deep Learning-Based Model for Room Cooling Load Prediction.
- Author
-
林 越 and 刘廷章
- Published
- 2023
- Full Text
- View/download PDF
39. A Data-Driven Approach to Ship Energy Management: Incorporating Automated Tracking System Data and Weather Information.
- Author
-
Ünlübayir, Cem, Mierendorff, Ulrich Hermann, Börner, Martin Florian, Quade, Katharina Lilith, Blömeke, Alexander, Ringbeck, Florian, and Sauer, Dirk Uwe
- Subjects
ENERGY management ,HYBRID power systems ,WATER currents ,FUEL cells ,PROPULSION systems - Abstract
This research paper presents a data-based energy management method for a vessel that predicts upcoming load demands from weather information and data from its automated tracking system. The vessel is powered by a hybrid propulsion system consisting of a high-temperature fuel cell system, which covers the base load, and a battery system, which compensates for the fuel cell's limited dynamic response to load fluctuations. The developed energy management method predicts the load demand of the next time steps by analyzing physical relationships in the operational and positional data of a real vessel. This allows steadier operation of the fuel cell, reduces the stress factors that lead to accelerated aging, and increases the resource efficiency of the propulsion system. Since large ships already record tracking data of their cruises and no a priori training is required to adjust the energy management, the proposed method can be implemented with little additional computational effort. The functionality of the method was verified using data from a real ship and records of the water currents in the North Sea. The accuracy of the load prediction is 2.7%, and the attenuation of the fuel cell's power output could be increased by approximately 32%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Construction of a Building Load Prediction Model Based on XGBoost and Neural Networks.
- Author
-
魏东, 杨洁婷, 韩少然, and 朱准
- Abstract
To address the heavy workload of feature selection in building load prediction models and the difficulty of improving their generalization ability, a method for building load feature selection and prediction based on extreme gradient boosting (XGBoost) and a neural network was proposed. The XGBoost algorithm was used to train the filtered data, and the optimal feature subset was determined based on the mean absolute percentage error (MAPE) to improve model accuracy and generalization ability. The Bayesian regularization algorithm was used to train the feedforward neural network, reducing network structure complexity during training and preventing overfitting, thereby further improving generalization. Experimental results of load prediction for a commercial building show that the mean squared error (MSE) of the model is reduced by 43.29% after feature selection, effectively improving prediction accuracy. The neural network is trained using both the Bayesian regularization and Levenberg-Marquardt (L-M) algorithms; over 5 experiments, the former achieves average reductions of 87.08% in root mean squared error (RMSE) and 85.33% in MAPE, effectively improving the prediction model's generalization ability. [ABSTRACT FROM AUTHOR] (A brief illustrative sketch of MAPE-based feature selection follows this record.)
- Published
- 2023
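The entry above selects the feature subset that minimizes MAPE with XGBoost and then trains a regularized feedforward network. The sketch below illustrates that selection step, with scikit-learn's GradientBoostingRegressor standing in for XGBoost and an L2-regularized MLP standing in for Bayesian regularization; the greedy importance-ranked search and synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

X, y = make_regression(n_samples=800, n_features=15, n_informative=7,
                       noise=8.0, random_state=0)
y = y - y.min() + 100                            # shift targets positive so MAPE is meaningful
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

gbr = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)   # XGBoost stand-in
order = np.argsort(gbr.feature_importances_)[::-1]

# Greedily grow the feature subset ranked by importance; keep the subset with lowest MAPE.
best_mape, best_k = np.inf, None
for k in range(1, X.shape[1] + 1):
    cols = order[:k]
    m = GradientBoostingRegressor(random_state=0).fit(X_tr[:, cols], y_tr)
    mape = mean_absolute_percentage_error(y_te, m.predict(X_te[:, cols]))
    if mape < best_mape:
        best_mape, best_k = mape, k

cols = order[:best_k]
# L2-regularized MLP as a simple stand-in for Bayesian-regularized training.
mlp = MLPRegressor(hidden_layer_sizes=(32,), alpha=1e-2, max_iter=3000,
                   random_state=0).fit(X_tr[:, cols], y_tr)
mlp_mape = mean_absolute_percentage_error(y_te, mlp.predict(X_te[:, cols]))
print(f"selected {best_k} features, GBR MAPE={best_mape:.4f}, MLP MAPE={mlp_mape:.4f}")
```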
41. A Deep Learning Method with Fused Multi-Attention Mechanisms for Shield Load Prediction.
- Author
-
陈 城, 史培新, 王占生, and 贾鹏蛟
- Subjects
- *
DEEP learning , *TIME series analysis , *FORECASTING - Abstract
Shield load is the main performance indicator of a shield tunneling machine, and accurate load prediction is essential for ensuring the safety and efficiency of the shield and the stability of the surrounding environment. Recognizing the limitations of traditional prediction methods, this paper proposes a hybrid model (CBM) that combines a convolutional neural network (CNN), bi-directional long short-term memory (BiLSTM), and an attention mechanism to predict the shield load accurately from the high-dimensional features and time-series characteristics of the data. The proposed model not only extracts these high-dimensional features and time-series characteristics but also highlights the importance of key features and important time-node information. The experimental results show that, compared with existing models, the proposed model achieves higher prediction performance, with prediction accuracies of 94.2% for thrust and 96.2% for torque. [ABSTRACT FROM AUTHOR] (A brief illustrative sketch of a CNN–BiLSTM–attention network follows this record.)
- Published
- 2023
- Full Text
- View/download PDF
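The CBM model above chains a CNN, a BiLSTM, and an attention mechanism. Below is a minimal Keras sketch of such an architecture for two-output load regression (thrust and torque); the layer sizes, sequence length, and feature count are illustrative assumptions, not the published configuration.

```python
import numpy as np
from tensorflow.keras import layers, Model

seq_len, n_features = 30, 8                    # assumed window length and sensor channels

inputs = layers.Input(shape=(seq_len, n_features))
x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inputs)  # local features
x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)              # temporal context
x = layers.Attention()([x, x])                 # self-attention over time steps
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(2)(x)                   # thrust and torque

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

# Tiny random batch just to confirm the graph runs end to end.
X = np.random.rand(64, seq_len, n_features).astype("float32")
y = np.random.rand(64, 2).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
model.summary()
```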
42. IntDEM: an intelligent deep optimized energy management system for IoT-enabled smart grid applications
- Author
-
Ganesh, P. M. Jai, Sundaram, B. Meenakshi, Balachandran, Praveen Kumar, and Mohammad, Gouse Baig
- Published
- 2024
- Full Text
- View/download PDF
43. On-Demand Allocation of Cryptographic Computing Resource with Load Prediction
- Author
-
Cao, Xiaogang, Li, Fenghua, Geng, Kui, Xie, Yingke, Kou, Wenlong, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wang, Ding, editor, Liu, Zheli, editor, and Chen, Xiaofeng, editor
- Published
- 2023
- Full Text
- View/download PDF
44. Research on Microgrid Load Prediction Based on GWO-LSSVM
- Author
-
Zhu, Ye, Zhang, Siliang, Su, Dan, Bao, ShuangShuang, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Li, Jian, editor, Xie, Kaigui, editor, Hu, Jianlin, editor, and Yang, Qingxin, editor
- Published
- 2023
- Full Text
- View/download PDF
45. LSTM Short-Term Load-Prediction Model for IES Including Electricity Price and Attention Mechanism
- Author
-
Sheng, Yuemao, Feng, Rongqiang, Lu, Hai, and Ma, Yongsheng, editor
- Published
- 2023
- Full Text
- View/download PDF
46. PredXGBR: A Machine Learning Based Short-Term Electrical Load Forecasting Architecture
- Author
-
Zabin, Rifat, Barua, Labanya, Ahmed, Tofael, Das, Swagatam, Series Editor, Bansal, Jagdish Chand, Series Editor, Ahmad, Mohiuddin, editor, Uddin, Mohammad Shorif, editor, and Jang, Yeong Min, editor
- Published
- 2023
- Full Text
- View/download PDF
47. Modelling aileron and spoiler deflections with the linear frequency domain method (LFD) for subsonic flight conditions
- Author
-
Govindan, Kuharaaj and Bier, Niko
- Published
- 2023
- Full Text
- View/download PDF
48. End-to-End Top-Down Load Forecasting Model for Residential Consumers
- Author
-
Barkha Parkash, Tek Tjing Lie, Weihua Li, and Shafiqur Rahman Tito
- Subjects
E2E forecasting ,load prediction ,neural networks ,top-down hierarchical forecasting ,representative load profile ,Technology - Abstract
This study presents an efficient end-to-end (E2E) learning approach for short-term load forecasting of hierarchically structured residential consumers based on the principles of a top-down (TD) approach. The technique employs a neural network to predict load at lower hierarchical levels from the aggregated load at the top. A simulation is carried out with nine years (2013 to 2021) of energy consumption data from 50 houses located in the United States of America. Simulation results demonstrate that the E2E model, which uses a single model for different nodes and is based on top-down principles, shows considerable potential for improving forecasting accuracy, making it a valuable tool for grid planners. Model inputs are derived from the aggregated residential category and the specific cluster targeted for forecasting. The proposed model can accurately forecast any residential consumption cluster without requiring hyperparameter adjustments. According to the experimental analysis, the E2E model outperformed a two-stage methodology and benchmarked Seasonal Autoregressive Integrated Moving Average (SARIMA) and Support Vector Regression (SVR) models by a mean absolute percentage error (MAPE) of 2.27%. (A brief illustrative sketch of the single-model top-down setup follows this record.)
- Published
- 2024
- Full Text
- View/download PDF
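The E2E record above uses one shared model that maps the aggregated (top-level) load to the load of any lower-level cluster. The sketch below shows one plausible way to frame that: aggregate history plus a cluster identifier as inputs, cluster load as the target, MAPE as the metric. The cluster structure, model, and data are assumptions for illustration, not the published pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
hours = 24 * 200
base = 5 + 2 * np.sin(2 * np.pi * np.arange(hours) / 24)
clusters = np.stack([base * s + rng.normal(0, 0.2, hours) for s in (0.5, 1.0, 1.5)])
aggregate = clusters.sum(axis=0)                 # top-level (aggregated) load

lag = 24
rows, targets = [], []
for t in range(lag, hours):
    for c in range(clusters.shape[0]):
        # One shared model: aggregate history + one-hot cluster id -> cluster load.
        one_hot = np.eye(clusters.shape[0])[c]
        rows.append(np.concatenate([aggregate[t - lag:t], one_hot]))
        targets.append(clusters[c, t])
X, y = np.array(rows), np.array(targets)

split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
model.fit(X[:split], y[:split])
mape = mean_absolute_percentage_error(y[split:], model.predict(X[split:]))
print(f"single shared model, held-out MAPE: {mape:.3f}")
```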
49. Energy Consumption Optimization in Data Centers Using LSTM-Based Load Prediction and Dynamic Resource Allocation
- Author
-
Dang Dujun
- Subjects
energy consumption optimization ,lstm ,load prediction ,dynamic resource allocation ,genetic algorithms ,data centers ,68t05 ,Mathematics ,QA1-939 - Abstract
With the rapid growth of data centers, optimizing energy consumption has become a critical challenge. This paper proposes an energy management framework that integrates Long Short-Term Memory (LSTM) networks for load prediction with a dynamic resource allocation algorithm to optimize energy consumption in data centers. The LSTM model is employed to accurately predict future workloads based on historical data, enabling the proactive adjustment of resource utilization. The dynamic resource allocation algorithm, informed by LSTM-based predictions, dynamically manages server operations and cooling systems through techniques such as dynamic voltage and frequency scaling (DVFS) and server consolidation, ensuring that resources are allocated efficiently in real time. The proposed framework incorporates a multi-objective optimization approach to balance energy savings and system performance. We use genetic algorithms to fine-tune resource allocation based on the predicted load, optimizing for energy consumption and response time. Experimental results show that our method achieves significant energy savings compared to traditional static resource management approaches, with minimal impact on service quality. This research highlights the potential of machine learning-driven optimization algorithms in reducing the environmental footprint of data centers while maintaining operational efficiency.
- Published
- 2024
- Full Text
- View/download PDF
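The framework in entry 49 pairs LSTM load forecasts with a genetic algorithm that balances energy and performance. The sketch below shows a toy genetic algorithm of that flavor: given predicted per-task loads, it evolves an assignment of tasks to servers that minimizes a weighted sum of active-server energy and load imbalance (a crude proxy for response time). The fitness weights, capacities, and sizes are assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_servers = 40, 6
predicted_load = rng.uniform(1, 10, n_tasks)     # stand-in for LSTM-predicted task loads
CAPACITY, IDLE_POWER, ENERGY_W, IMBALANCE_W = 60.0, 20.0, 1.0, 5.0

def fitness(assignment):
    """Lower is better: energy of powered-on servers plus load-imbalance penalty."""
    per_server = np.bincount(assignment, weights=predicted_load, minlength=n_servers)
    active = per_server > 0
    energy = IDLE_POWER * active.sum() + per_server.sum()     # idle + load-proportional power
    imbalance = per_server[active].std() if active.any() else 0.0
    overload = np.clip(per_server - CAPACITY, 0, None).sum()  # hard penalty for overload
    return ENERGY_W * energy + IMBALANCE_W * imbalance + 100.0 * overload

pop = rng.integers(0, n_servers, size=(60, n_tasks))          # initial random assignments
for generation in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]                    # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, len(parents), 2)]
        cut = rng.integers(1, n_tasks)                        # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        mutate = rng.random(n_tasks) < 0.05                   # per-gene mutation
        child[mutate] = rng.integers(0, n_servers, mutate.sum())
        children.append(child)
    pop = np.array(children)

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("best fitness:", round(fitness(best), 2),
      "servers used:", int((np.bincount(best, minlength=n_servers) > 0).sum()))
```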
50. Research on Cloud Computing-based Music Resource Sharing Platform Technology and Its Promotion of Employment and Entrepreneurship
- Author
-
Wei Wei, Du Yuelin, Li Wei, and Wang Yan
- Subjects
cloud computing ,gru ,cnn ,load prediction ,resource scheduling ,68t42 ,Mathematics ,QA1-939 - Abstract
Music resource-sharing platforms can broaden students' employment paths and enhance their employability. This paper designs load prediction and resource scheduling algorithms to construct a music resource-sharing platform based on cloud computing technology. The mean predictor's forecasts are combined with those of the GRU-CNN model, and a weighted average is calculated to enhance the accuracy and stability of the predictions. An integrated migration scheduling method is then designed to balance the load of the cloud environment and make resource utilization more reasonable through four processes (migration triggering, container selection, node selection, and container migration) in a coarse-grained dynamic scheduling approach that treats nodes as the scheduling objects. After the platform is constructed, the designed algorithms are evaluated for load prediction and resource scheduling, and regression analysis is performed to examine the role of music resource-sharing platforms in promoting employment and entrepreneurship. After resource scheduling by the proposed algorithm, the pressure on overloaded nodes is greatly alleviated: the CPU and memory resource utilization of node Node1 fall from 80.96% and 62.69% to 58.03% and 43.36%, respectively, while the resources of the low-load node Node4 are effectively utilized, with its memory usage rising from 6.86% to 49.52%, leaving resource usage across the cluster more balanced and reasonable. At the 1% statistical significance level, the impact of music resource-sharing platform development on employment quality is 0.194, whereas industrial structure does not contribute significantly to employment quality, indicating that under the current economic situation emerging economic forms such as music resource-sharing platforms are the main driving force for improving employment quality, and that the development of such platforms generally improves the employment quality of the labor force. (A brief illustrative sketch of the weighted-average forecast combination follows this record.)
- Published
- 2024
- Full Text
- View/download PDF
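The platform above blends a mean predictor with a GRU-CNN model through a weighted average. The snippet below shows the weighting step in isolation, with placeholder arrays standing in for the mean predictor and GRU-CNN outputs and weights derived from validation errors; everything here is an illustrative assumption rather than the paper's code.

```python
import numpy as np

def combine(pred_mean, pred_grucnn, y_val, val_mean, val_grucnn):
    """Weight two forecasters by their inverse MAE on a validation window."""
    errs = np.array([np.mean(np.abs(val_mean - y_val)),
                     np.mean(np.abs(val_grucnn - y_val))])
    w = 1.0 / (errs + 1e-9)
    w /= w.sum()
    return w[0] * pred_mean + w[1] * pred_grucnn, w

# Placeholder forecasts for the next 6 intervals of CPU utilization (percent).
rng = np.random.default_rng(0)
y_val = 50 + rng.normal(0, 2, 24)                 # recent observed load
val_mean = np.full(24, y_val.mean())              # mean predictor on the validation window
val_grucnn = y_val + rng.normal(0, 1, 24)         # stand-in for GRU-CNN validation output
pred_mean = np.full(6, y_val.mean())
pred_grucnn = 50 + rng.normal(0, 1, 6)

combined, weights = combine(pred_mean, pred_grucnn, y_val, val_mean, val_grucnn)
print("weights:", weights.round(3), "combined forecast:", combined.round(2))
```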