246 results for "data interpolation"
Search Results
2. Double-layer stacking optimization for electricity theft detection considering data incompleteness and intra-class imbalance
- Author
- Ge, Leijiao, Li, Jingjing, Du, Tianshuo, and Hou, Luyang
- Published
- 2025
3. Multimodal data fusion for geo-hazard prediction in underground mining operation
- Author
- Liang, Ruiyu, Zhang, Chengguo, Huang, Chaoran, Li, Binghao, Saydam, Serkan, Canbulat, Ismet, and Munsamy, Lesley
- Published
- 2024
4. Research on urban water demand prediction based on machine learning and feature engineering
- Author
- Dongfei Yan, Yi Tao, Jianqi Zhang, and Huijia Yang
- Subjects
- data cleaning, data interpolation, feature engineering, machine learning, water demand prediction, water supply system, Water supply for domestic and industrial purposes, TD201-500, River, lake, and water-supply engineering (General), TC401-506
- Abstract
Urban water demand prediction is not only the foundation of water resource planning and management but also an important component of water supply system optimization and scheduling, so predicting future water demand is of great significance. For univariate time series data, outliers can be handled through data preprocessing. The data input dimension is then increased through feature engineering, and finally the LightGBM (Light Gradient Boosting Machine) model is used to predict future water demand. The results demonstrate that cubic polynomial interpolation outperforms the Prophet model and the linear method on missing-value interpolation tasks. For water demand prediction, the LightGBM model shows excellent forecasting performance and can effectively predict future demand trends: the evaluation indicators MAPE (mean absolute percentage error) and NSE (Nash–Sutcliffe efficiency coefficient) on the test dataset are 4.28% and 0.94, respectively. These indicators can provide a scientific basis for short-term prediction by water supply enterprises. HIGHLIGHTS: Interpolation of raw training data may not necessarily improve the performance of predictive models. Accurate prediction of univariate data can be achieved through feature engineering and machine learning.
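The gap-filling comparison above translates almost directly into pandas. A minimal, hypothetical sketch (synthetic demand series, not the study's data; Prophet is omitted):

```python
# Illustrative only: linear vs. cubic interpolation over a simulated gap.
import numpy as np
import pandas as pd

t = np.arange(240)                                 # hourly time steps
true = pd.Series(50 + 10 * np.sin(2 * np.pi * t / 24))

demand = true.copy()
demand.iloc[40:46] = np.nan                        # simulate a 6-hour gap

filled_linear = demand.interpolate(method="linear")
filled_cubic = demand.interpolate(method="cubic")  # cubic spline via SciPy

for name, s in (("linear", filled_linear), ("cubic", filled_cubic)):
    print(name, (s - true).abs().iloc[40:46].mean())  # error on the gap only
```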
- Published
- 2024
5. Assessment of monthly to daily streamflow disaggregation methods: A case study of the Nile River Basin
- Author
- Mohamed Refaat Elgendy, Paulin Coulibaly, Sonia Hassini, Wael El-Dakhakhni, Yasser Elsaie, Mesfin Benti Tolera, Samuel Dagalo Hatiye, and Mekonen Ayana
- Subjects
- Nile River Basin, Monthly streamflow disaggregation, Data interpolation, Data processing, Hydrology, Water management, Physical geography, GB3-5030, Geology, QE1-996.5
- Abstract
Study region: the Nile River Basin. Study focus: The lack of observed streamflow data at short time scales poses a critical challenge for calibrating and validating hydrologic models. Many disaggregation methods have therefore been developed, with varying relative performance and no clear indication of the optimal choice. This study assesses eight monthly-to-daily streamflow disaggregation methods at 21 major subbasin outlets in the Nile River Basin (NRB) to identify the best-performing ones. These methods include one proportionality method and seven interpolation methods: linear, 2nd-order spline, 3rd-order spline, Piecewise Cubic Hermite Interpolating Polynomial (Pchip), Modified Akima (MAkima), mean-preserving 2nd-order spline, and mean-preserving 3rd-order spline. We assessed these methods using three metrics and visual inspection. New hydrologic insights for the region: The interpolation methods performed better than the proportionality method, although their performance decreased at stations with high daily streamflow fluctuations. The interpolation methods were similar in mimicking the daily values but differed significantly in preserving the mass balance. The mean-preserving 3rd-order interpolation method (Lai 22) was the best at preserving the mass balance and capturing the low, moderate and high flows, and was therefore selected to generate the daily flow data in the NRB. The results of this study can guide the choice of a reliable method for obtaining daily streamflow data, which is important for hydrologic and water management studies in the NRB.
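For readers who want to try the interpolation family assessed here, a minimal SciPy sketch (illustrative numbers; the study's mean-preserving variants add a mass-balance correction not shown):

```python
# Anchor monthly mean flows at mid-month and interpolate to daily values.
import numpy as np
from scipy.interpolate import Akima1DInterpolator, PchipInterpolator

month_mid = np.array([15, 45, 74, 105, 135, 166])   # day-of-year anchors
monthly_flow = np.array([120.0, 150.0, 210.0, 180.0, 90.0, 60.0])  # m^3/s

days = np.arange(15, 167)
daily_pchip = PchipInterpolator(month_mid, monthly_flow)(days)
# "makima" (modified Akima) requires SciPy >= 1.13; drop it for plain Akima.
daily_makima = Akima1DInterpolator(month_mid, monthly_flow, method="makima")(days)
```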
- Published
- 2024
6. GS-PIA Algorithm for Bi-cubic B-spline Interpolation Surfaces.
- Author
- Yuchen Xiang and Chengzhi Liu
- Subjects
- INTERPOLATION, GAUSS-Seidel method, ALGORITHMS
- Abstract
The progressive iterative approximation (PIA) is a versatile method for interpolating or fitting a given data set. The convergence behavior of PIA plays a pivotal role in determining the computational efficiency of data interpolation or fitting. This paper introduces an accelerated iterative approach, GS-PIA, derived from the Gauss-Seidel splitting, specifically designed for interpolating data points. The convergence and computational cost of the proposed iterative method are discussed. Numerical results underscore the superiority of GS-PIA in data interpolation when compared to other existing methods. [ABSTRACT FROM AUTHOR]
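A toy sketch of the plain PIA iteration that GS-PIA accelerates (the Gauss-Seidel splitting itself is not implemented here): with B-spline collocation matrix A, PIA repeats P ← P + (Q − AP) until the spline through the control points P interpolates the data Q.

```python
# Illustrative PIA loop for cubic B-spline data interpolation.
import numpy as np
from scipy.interpolate import BSpline

Q = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 3.0], [3.0, 1.0], [4.0, 0.0]])
t = np.linspace(0.0, 1.0, len(Q))     # data parameters (chord length is nicer)
k = 3                                 # cubic
knots = np.r_[[0.0] * k, np.linspace(0.0, 1.0, len(Q) - k + 1), [1.0] * k]
A = BSpline.design_matrix(t, knots, k).toarray()

P = Q.copy()                          # PIA starts from the data points
for _ in range(200):
    P += Q - A @ P                    # converges when rho(I - A) < 1
print(np.abs(A @ P - Q).max())        # interpolation residual, ~0
```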
- Published
- 2024
7. From data acquisition to validation: a complete workflow for predicting individual customer lifetime value.
- Author
- Nie, Dongyun, Scriney, Michael, Liang, Xiaoning, and Roantree, Mark
- Subjects
- CUSTOMER lifetime value, BUSINESS planning, ACQUISITION of data, BUSINESS partnerships, CORPORATE profits
- Abstract
Customer lifetime value (CLV) is a core measure that allows companies to predict the potential net profit from future relationships with their customers. It is a metric computed by recording customer behavior over the long term, and it helps to build customized business strategies. However, existing research either focuses on a conceptual model of CLV or assumes that all variables required for its computation are readily available. In this research, we employ a real customer dataset of insurance policies, provided by a large business partner, to construct a holistic framework that covers all aspects of CLV computation, and we develop an extensive validation process aimed at verifying our results and understanding which CLV models perform best in the insurance context. The framework addresses the creation of a unified customer record, classification of customers into ranked groups, and interpolation of missing parameters, through to the calculation and validation of individual CLV values. Our method also includes a robust validation with both subjective and objective evaluations of our findings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
8. Aerodynamic parameter identification technology for guided projectiles and rockets.
- Author
- 康其庄, 王康健, 易文俊, 段耀泽, and 夏悠然
- Abstract
Copyright of Journal of Ordnance Equipment Engineering is the property of Chongqing University of Technology and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
9. Climate change and its consequences on the climatic zoning of Coffea canephora in Brazil.
- Author
- Lorençone, Pedro Antonio, de Oliveira Aparecido, Lucas Eduardo, Lorençone, João Antonio, Botega, Guilherme Torsoni, Lima, Rafael Fausto, and de Souza Rolim, Glauco
- Subjects
- CLIMATIC zones, CLIMATE change adaptation, COFFEE, METEOROLOGICAL databases, ATMOSPHERIC temperature
- Abstract
Coffee production has a large share in Brazilian agribusiness and a cultural and social importance in the country. Worldwide, Brazil is the largest producer of arabica coffee and the second largest of the canephora species; in 2020, national production was 14.3 million bags of canephora coffee. Few studies on the adaptation of canephora coffee to climate change can be found in the literature. Thus, our goal was to identify areas suitable for Coffea canephora cultivation in Brazil under the CMIP5 climate change framework. The study was carried out for the entire country using average air temperature data for the entire year, for November, and for the coldest month, as well as the average annual accumulated water deficit for the period 1960–2020. These data were gathered from the Meteorological Database for Teaching and Research (BDMEP) of the National Institute of Meteorology of Brazil, INMET (Brazil 1992). Furthermore, the BCC-CSM1.1 climate model was used at 125 × 125 km resolution to simulate future climate using WorldClim 2 data for 2041–2080, under Representative Concentration Pathway (RCP) scenarios 2.6, 4.5, 6.0, and 8.5. Potential climate changes can negatively impact canephora coffee plantations in all of the CMIP5 RCP scenarios studied. The BCC-CSM1.1 scenarios showed a 65% reduction in the total area suitable for coffee cultivation in Brazil. Rondônia and Bahia were the states with the greatest impact of climate change, showing the largest reduction in areas suitable for canephora coffee growth. Both states are currently major C. canephora producers, so the impact could directly compromise the regional economy. Thermal excess was the most common class in the future scenarios, averaging 56.76% of the entire country. [ABSTRACT FROM AUTHOR]
- Published
- 2024
10. A K-Means-Based Interpolation Algorithm With Lp-Norm and Feature Weighting
- Author
- Yipeng Miao and Yenan Xu
- Subjects
- Clustering, Lp-norm, feature weights, incomplete data, data interpolation, data labeling, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
The integrity of data is crucial for the majority of existing data analysis methods, yet datasets are often incomplete and unbalanced as a result of collection and organization, which affects analysis accuracy. Existing interpolation algorithms often overlook feature importance, resulting in either cumbersome processes or underutilization of the information in the data. This study introduces an interpolation method based on a clustering algorithm, focused on improving the accuracy and efficiency of missing-data processing. We first formalize the data interpolation problem and, because the information carried by the data itself matters for interpolation, propose a scheme that combines clustering and interpolation: the Lp norm is used as the similarity measure in the K-means clustering algorithm, together with a controllable feature-weighting formula based on the current data segmentation. Methodologically, clustering and interpolation are synchronized by iteratively updating a cost function over the optimization variables. Experimental results demonstrate significant improvements of the proposed interpolation algorithm over traditional techniques, particularly in tasks such as data labeling and classification on real datasets.
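A toy version of the cluster-then-impute idea, with ordinary K-means standing in for the paper's Lp-norm, feature-weighted variant:

```python
# Alternate cluster assignment with refilling missing cells by cluster means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 4)) + np.repeat([[0.0], [4.0], [8.0]], 30, axis=0)
mask = rng.random(X.shape) < 0.1            # knock out ~10% of the cells
X[mask] = np.nan

X_hat = np.where(mask, np.nanmean(X, axis=0), X)   # start from column means
for _ in range(20):
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_hat)
    for c in range(3):
        in_c = labels == c
        fill = X_hat[in_c].mean(axis=0)            # cluster-wise means
        X_hat[in_c] = np.where(mask[in_c], fill, X_hat[in_c])
```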
- Published
- 2024
11. Preconditioned geometric iterative methods for B-spline interpolation.
- Author
- Liu, Chengzhi, Qiu, Yue, and Zhang, Li
- Abstract
The geometric iterative method (GIM) is widely used in data interpolation/fitting, but its slow convergence affects the computational efficiency. Recently, much work has been done to guarantee the acceleration of GIM in the literature. This work aims to accelerate the convergence rate by introducing a preconditioning technique. After constructing the preconditioner, we preprocess the progressive iterative approximation (PIA) and its variants, called the preconditioned GIMs. We show that the proposed preconditioned GIMs converge, and the extra computation cost of the preconditioning technique is negligible. Several numerical experiments are given to demonstrate that our preconditioner can accelerate the convergence rate of PIA and its variants. [ABSTRACT FROM AUTHOR]
- Published
- 2024
12. Assessments of Gravity Data Gridding Using Various Interpolation Approaches for High-Resolution Geoid Computations.
- Author
- Karaca, Onur, Erol, Bihter, and Erol, Serdar
- Subjects
- GEOID, KRIGING, GEOLOGICAL statistics, INTERPOLATION algorithms, GRAVITY, GRAVITY anomalies, INTERPOLATION, ARTIFICIAL neural networks
- Abstract
This article investigates the role of different approaches and interpolation methods in gridding terrestrial gravity anomalies. In this regard, first of all, simple and complete Bouguer anomalies are considered in gravity data gridding. In the comparison results of gridding these two Bouguer anomaly datasets, the effect of the high-frequency contribution of topographic gravitation (by means of the terrain correction) is clarified. After that, the role of the used interpolation algorithm on the resulting grid of mean gravity anomalies and hence on the geoid modeling accuracy is inspected. For this purpose, four different interpolation methods including geostatistical Kriging, nearest neighbor, inverse distance to a power (IDP), and artificial neural networks (ANNs) are applied. Here, the IDP and nearest neighbor methods represent simple-structured algorithms among the interpolation methods tested in this study. The ANN method, on the other hand, is preferred as a complex, optimization-based soft computing method that has been applied in recent years. In addition, the geostatistical Kriging method is one of the conventional methods that is mostly applied for gridding gravity data in geodesy and geophysics. The calculated gravity anomalies in grids are employed in high-resolution geoid model computations using the least squares modifications of Stokes formula with additive corrections (LSMSA) technique. The investigations are carried out using the test datasets of Auvergne, France that are provided by the International Service for the Geoid for scientific research. It is concluded that the interpolation algorithms affect the gravity gridding results and hence the geoid model determination. The ANN method does not provide superior results compared to the conventional algorithms in gravity gridding. The geoid model with 4.1 cm accuracy is computed in the test area. [ABSTRACT FROM AUTHOR]
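Of the four interpolators compared, inverse distance to a power is simple enough to sketch in a few lines (synthetic stations, not the Auvergne data):

```python
# IDW gridding: each grid node is a d^-p weighted average of scattered values.
import numpy as np

def idw(xy_obs, values, xy_query, p=2.0, eps=1e-12):
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** p            # eps avoids division by zero
    return (w * values).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
obs = rng.uniform(0, 100, (200, 2))     # station coordinates (km)
anom = np.sin(obs[:, 0] / 20) + 0.1 * rng.normal(size=200)  # synthetic anomaly
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = idw(obs, anom, np.column_stack([gx.ravel(), gy.ravel()])).reshape(50, 50)
```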
- Published
- 2024
13. Missing Data Imputation Method Combining Random Forest and Generative Adversarial Imputation Network.
- Author
- Ou, Hongsen, Yao, Yunan, and He, Yi
- Subjects
- GENERATIVE adversarial networks, MISSING data (Statistics), PROBABILISTIC generative models, RANDOM forest algorithms, STANDARD deviations, INTERPOLATION algorithms, K-nearest neighbor classification
- Abstract
(1) Background: To address missing time-series data caused by the acquisition system or external factors, a missing time-series data interpolation method based on a random forest and a generative adversarial interpolation network is proposed. (2) Methods: First, the positions of the missing parts of the data are calibrated, and a trained random forest performs the first interpolation pass. Its output is then used as the input to the generative adversarial interpolation network, which interpolates the calibrated positions a second time, combining the advantages of the two algorithms to bring the interpolation result closer to the true value. (3) Results: The filling performance of the algorithm is tested on a bearing dataset, and the root mean square error (RMSE) is used to evaluate the interpolation results. The RMSEs of the combined random forest and generative adversarial interpolation network in the single-segment and multi-segment missing-data cases are only 0.0157, 0.0386, and 0.0527, better than the random forest algorithm, the generative adversarial interpolation network algorithm, and the K-nearest neighbor algorithm alone. (4) Conclusions: The proposed algorithm performs well on each dataset and provides a reference method for the field of data filling. [ABSTRACT FROM AUTHOR]
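Only the first stage of this pipeline is easy to reproduce with off-the-shelf tools; a hedged sketch using scikit-learn's experimental IterativeImputer with random forests (the GAIN refinement stage is not shown):

```python
# Random-forest-based first-pass imputation on synthetic data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[rng.random(X.shape) < 0.1] = np.nan     # simulate missing sensor readings

first_pass = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=5, random_state=0,
).fit_transform(X)                        # would feed a GAIN refinement step
```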
- Published
- 2024
14. Research on Interpolation Method for Missing Electricity Consumption Data.
- Author
- Junde Chen, Jiajia Yuan, Weirong Chen, Zeb, Adnan, Suzauddola, Md., and Nanehkaran, Yaser A.
- Subjects
- ELECTRIC power consumption, POWER distribution networks, MISSING data (Statistics), POWER resources, ELECTRIC lines, DATA warehousing
- Abstract
Missing values are one of the main causes of dirty data, and without high-quality data there can be no reliable analysis results or precise decision-making; the data warehouse therefore needs consistently integrated, high-quality data. In the power system, the electricity consumption data of some large users cannot always be collected normally, and the resulting missing data affect the calculation of power supply and eventually lead to large errors in the daily power line loss rate. For this problem, this study proposes a group method of data handling (GMDH) based interpolation method for distribution networks and applies it to actually collected electricity data. First, the dependent and independent variables are defined from the original data, and the upper and lower limits of the missing values are determined from prior knowledge or existing data; all missing data are randomly initialized within these limits. Then, the GMDH network is built to obtain the optimal-complexity model, which predicts the missing data and replaces the previously imputed consumption values. This process is repeated until the imputed values no longer change. Under a relatively small noise level (α = 0.25), the proposed approach achieves a maximum error of no more than 0.605%. Experimental findings demonstrate the efficacy and feasibility of the approach, which transforms incomplete data into complete data and provides a strong basis for the electricity theft diagnosis and metering fault analysis of electricity enterprises. [ABSTRACT FROM AUTHOR]
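The iterative scheme reads naturally as a fit-predict loop; a hedged sketch with a gradient-boosting regressor standing in for the GMDH network (bounds, data and tolerances are hypothetical):

```python
# Random-start-within-bounds imputation, refined until the values settle.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def iterative_impute(X, y, miss, lo, hi, n_iter=20, tol=1e-3):
    """X: features; y: consumption with NaNs at boolean mask `miss`."""
    rng = np.random.default_rng(0)
    y = y.copy()
    y[miss] = rng.uniform(lo, hi, miss.sum())     # random start within bounds
    for _ in range(n_iter):
        model = GradientBoostingRegressor().fit(X, y)
        new = np.clip(model.predict(X[miss]), lo, hi)
        if np.max(np.abs(new - y[miss])) < tol:   # stop when values stabilise
            break
        y[miss] = new
    return y
```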
- Published
- 2024
15. COMPONENTS OF THE POLISH LPI IN RELATION TO MACROECONOMIC VARIABLES. COINTEGRATION ANALYSIS.
- Author
- KARP, Piotr
- Subjects
- REVERSE logistics, TARIFF laws, TIME series analysis, QUALITY of service, ECONOMIC development, VALUE (Economics)
- Abstract
Purpose: The aim of the article is to analyze Poland's logistics potential as measured by the LPI (Logistics Performance Index). The impact of trade volume, infrastructure development and service quality on individual LPI components was assessed. Estimating the relationships between the components of the Polish LPI and macroeconomic variables makes it possible to assess the strength of those relationships and their sensitivity to the economic situation, to draw conclusions about areas that are more sensitive or require repair, and to indicate how the economy influences the TSL (Transport-Shipping-Logistics) sector. Design/methodology/approach: The analysis was carried out using time series cointegration methods, which enable the analysis of long- and short-term relationships and the identification of the areas most sensitive to particular factors. Interpolation methods were also used to obtain consistent time series. Findings: The development of infrastructure and an increase in the level of services have a positive impact on all aspects measured by the LPI components. In turn, growth in trade exchange, as increased demand for the TSL sector, affects four of the six components; in two cases, border services and on-time delivery, the relationship is negative. This highlights the main points limiting the growth of LPI ratings and, indirectly, the trade and economic development of Poland. Research limitations/implications: Limited data availability influenced the choice of method, and the short time series and data interpolation used in the study may reduce the accuracy of the estimations. Practical implications: The econometric analysis indicates weaknesses in the Polish logistics sector; improvements in customs regulations and expansion of infrastructure may improve the functioning of the TSL sector, which should be of particular interest to policy makers concerned with economic growth and the LPI rating. Originality/value: This is the first paper that uses econometric tools to compare the components of the LPI with macro variables. [ABSTRACT FROM AUTHOR]
- Published
- 2024
16. Prediction of rockburst intensity grade based on a convolutional neural network.
- Author
- 李康楠, 吴雅琴, 杜锋, 张翔, and 王乙桥
- Subjects
- CONVOLUTIONAL neural networks, MISSING data (Statistics), PREDICTION models, INTERPOLATION, DEEP learning
- Abstract
Copyright of Coal Geology & Exploration is the property of Xian Research Institute of China Coal Research Institute and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
17. Research on criterion of support point selection of radial basis function in hypersonic heat flux interpolation
- Author
- HONG Haifeng, KANG Zhicong, and XIE Liang
- Subjects
- radial basis function, data interpolation, support point selection, error criterion, interpolation precision, Motor vehicles. Aeronautics. Astronautics, TL1-4050
- Abstract
Radial basis functions are used for hypersonic heat flux interpolation. To avoid the problem that support points selected by the traditional single absolute-error criterion interpolate inaccurately where the heat flux is small, this paper proposes a radial basis function interpolation procedure with a double error criterion: the absolute-error criterion is used to select a certain number of support points, and the relative-error criterion is then used to select a further set of points. Numerical experiments show that the double error criterion ensures interpolation accuracy at both large and small heat fluxes and avoids the negative heat flux values that the traditional single absolute-error criterion tends to produce in the interpolation results.
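A hedged sketch of a greedy support-point loop with such a double error criterion (synthetic flux field; the paper's exact selection procedure may differ):

```python
# Grow the support set by largest absolute error, then by largest relative
# error, so regions of small heat flux are not ignored.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (400, 2))                    # sample coordinates
q = 1e4 * np.exp(-8 * ((X[:, 0] - 0.5) ** 2 + X[:, 1] ** 2))  # heat flux

idx = list(rng.choice(len(X), 5, replace=False))   # small initial support set
for crit in ["abs"] * 20 + ["rel"] * 20:           # double error criterion
    err = np.abs(RBFInterpolator(X[idx], q[idx])(X) - q)
    if crit == "rel":
        err = err / np.maximum(np.abs(q), 1e-8)    # protects small-flux regions
    err[idx] = -np.inf                             # never re-pick a support
    idx.append(int(np.argmax(err)))
```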
- Published
- 2023
18. Building a 5D Database of Heliostats Flux Distributions: Toward High Flexibility High Accuracy Flux Control for Odeillo's Big Solar Furnace
- Author
- Emmanuel Guillot, Michaël Tessonneaud, and Jean-Louis Sans
- Subjects
- Solar Furnace, Flux Measurements, Camera, Radiometer, Data Interpolation, Heliostats, Physics, QC1-999
- Abstract
To increase the flexibility and control of the concentrated solar flux delivered to processes and materials tested at the big solar furnace in Odeillo (up to 1000 kW and 10,000 kW/m²), extensive and accurate knowledge of the individual contribution of each heliostat is required for all of its aiming points. For a solar furnace with double mirror reflection and heavily changing shadowing of the solar beams, recording this behaviour systematically would require too much experimental time, whether directly (flux measurements of all configurations) or indirectly (ray-tracing simulation based on the real optical configuration measured at both macroscopic and microscopic scales). This work presents a trade-off method based on real flux measurements at sampled configurations, which are then interpolated to fill in the missing data. Five dimensions are sampled: heliostats (1D) + aiming directions (2D) + flux distributions (2D). An evaluation of the method based on actual experimental data is presented, with a discussion of the experience.
- Published
- 2023
19. Use of Artificial Vision during the Lye Treatment of Sevillian-Style Green Olives to Determine the Optimal Time for Terminating the Cooking Process.
- Author
- Gordillo, Miguel Calixto López, Madueño-Luna, Antonio, Luna, José Miguel Madueño, and Ramírez-Juidías, Emilio
- Subjects
- ARTIFICIAL vision, OLIVE, JUDGMENT (Psychology), MANUFACTURING processes, COOKING, STORAGE tanks
- Abstract
This study focuses on characterizing the temporal evolution of the surface affected by industrial treatment with NaOH within the processing tanks during the lye treatment stage of Manzanilla table olives. The lye treatment process is affected by multiple variables, such as ambient temperature, the initial temperature of the olives before lye treatment, the temperature of the NaOH solution, the concentration of the solution, and the variety and size of the olives, which determine the speed of the process. Traditionally, an expert relying on subjective judgement manages the cooking process empirically, leading to variability in when the cook is terminated. In this study, we introduce an artificial vision system that determines, deterministically, the percentage of lye treatment achieved at each moment of the cooking process; furthermore, with an interpolator that accumulates values during the lye treatment, it is possible to anticipate the completion of the cooking by indicating when two-thirds, three-fourths, or some other fraction of the interior surface will be reached, with an error of less than 10% relative to the optimal moment. Knowing this moment is crucial for proper processing, as it affects subsequent stages of the manufacturing process and the quality of the final product. [ABSTRACT FROM AUTHOR]
- Published
- 2023
20. Shot-gather Reconstruction using a Deep Data Prior-based Neural Network Approach.
- Author
- Rodríguez-López, Luis, León-López, Kareth, Goyes-Peñafiel, Paul, Galvis, Laura, and Arguello, Henry
- Subjects
- DEEP learning, SEISMIC surveys, FEATURE extraction, PRIOR learning, INTERPOLATION, DATABASES
- Abstract
Seismic surveys are often affected by environmental obstacles or restrictions that prevent regular sampling in seismic acquisition. To address missing data, various methods, including deep learning techniques, have been developed to extract features from complex information, albeit with the limitation of requiring external seismic databases. While previous works have primarily focused on trace reconstruction, missing shot-gathers directly impact the seismic processing flow and represent a major challenge in seismic data regularization. In this paper, we propose DIPsgr, a seismic shot-gather reconstruction method that uses only the incomplete seismic acquisition measurements to estimate their missing information employing unsupervised deep learning. Numerical experiments on three databases demonstrate that DIPsgr recovers the complete set of traces in each shot-gather, with preserved information and seismic events. [ABSTRACT FROM AUTHOR]
- Published
- 2023
21. Seismic data interpolation using deeply supervised U‐Net++ with natural seismic training sets.
- Author
- Wu, Geng, Liu, Yang, Liu, Cai, Zheng, Zhisheng, and Cui, Yang
- Subjects
- RANDOM noise theory, BANDPASS filters, SEPARATION of variables, MISSING data (Statistics), DEEP learning, IMAGING systems in seismology
- Abstract
Interpolation techniques provide an effective means of recovering missing traces, and in recent years many researchers have applied deep learning methods to seismic data interpolation. Generally, one can choose synthetic data as a training set; however, the features of synthetic data are always inconsistent with those of field data, which may lead to inaccurate interpolation. Meanwhile, U-Net is a common network structure in seismic data interpolation, but its four fixed downsampling and upsampling stages have limited adaptability to different data. In this study, a deep learning method based on U-Net++ is proposed for seismic data interpolation, containing U-Nets of different depths connected by skip pathways, with the best depth chosen for different seismic data by deep supervision. Furthermore, a new strategy for building training sets is designed: frequency-wavenumber (f-k) bandpass filters convert natural images into a natural seismic training set, which has a stronger generalization capability than synthetic training data. The characteristics of the new training set effectively improve the accuracy of missing-data reconstruction. Compared with the conventional U-Net and traditional interpolation techniques such as the Fourier Bregman method, the proposed method produces more accurate and reasonable interpolation results, and it can reconstruct both irregularly and regularly missing seismic data, even in the presence of strong random noise and aliasing. Synthetic and field data tests showed the effectiveness, robustness and generalization of the proposed method. [ABSTRACT FROM AUTHOR]
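The training-set trick is the most reusable idea here; a hedged sketch of f-k band-limiting an image so it can serve as a "natural seismic" training sample (a random array stands in for a photograph):

```python
# Band-limit an image in the frequency-wavenumber (f-k) domain.
import numpy as np

img = np.random.default_rng(0).random((128, 128))  # stand-in natural image
F = np.fft.fftshift(np.fft.fft2(img))
f = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))[:, None]  # "time" frequency
band = (np.abs(f) > 0.02) & (np.abs(f) < 0.25)     # crude bandpass mask
filtered = np.fft.ifft2(np.fft.ifftshift(F * band)).real
```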
- Published
- 2023
22. A Scheme for Sensor Data Reconstruction in Smart Home
- Author
- Du, Yegang
- Editor
- Yang, Min, Chen, Chao, and Liu, Yang
- Published
- 2021
23. BRDF Measurement of Real Materials Using Handheld Cameras
- Author
- Otani, Haru and Komuro, Takashi
- Editor
- Bebis, George, Athitsos, Vassilis, Yan, Tong, Lau, Manfred, Li, Frederick, Shi, Conglei, Yuan, Xiaoru, Mousas, Christos, and Bruder, Gerd
- Published
- 2021
24. Rational Quartic Spline Interpolation and Its Application in Signal Processing
- Author
- Harim, Noor Adilla, Karim, Samsul Ariffin Abdul, Othman, Mahmod, Ghaffar, Abdul, and Nisar, Kottakkaran Sooppy
- Editor
- Abdul Karim, Samsul Ariffin, Saad, Nordin, and Kannan, Ramani
- Published
- 2021
25. Use of Artificial Vision during the Lye Treatment of Sevillian-Style Green Olives to Determine the Optimal Time for Terminating the Cooking Process
- Author
- Miguel Calixto López Gordillo, Antonio Madueño-Luna, José Miguel Madueño Luna, and Emilio Ramírez-Juidías
- Subjects
- table olive, sodium hydroxide, artificial vision, data interpolation, Chemical technology, TP1-1185
- Abstract
This study focuses on characterizing the temporal evolution of the surface affected by industrial treatment with NaOH within the processing tanks during the lye treatment stage of Manzanilla table olives. The lye treatment process is affected by multiple variables, such as ambient temperature, the initial temperature of the olives before lye treatment, the temperature of the NaOH solution, the concentration of the solution, and the variety and size of the olives, which determine the speed of the process. Traditionally, an expert relying on subjective judgement manages the cooking process empirically, leading to variability in when the cook is terminated. In this study, we introduce an artificial vision system that determines, deterministically, the percentage of lye treatment achieved at each moment of the cooking process; furthermore, with an interpolator that accumulates values during the lye treatment, it is possible to anticipate the completion of the cooking by indicating when two-thirds, three-fourths, or some other fraction of the interior surface will be reached, with an error of less than 10% relative to the optimal moment. Knowing this moment is crucial for proper processing, as it affects subsequent stages of the manufacturing process and the quality of the final product.
- Published
- 2023
26. Repair missing data to improve corporate credit risk prediction accuracy with multi-layer perceptron.
- Author
- Yang, Mei, Lim, Ming K., Qu, Yingchi, Li, Xingzhi, and Ni, Du
- Subjects
- MULTILAYER perceptrons, MISSING data (Statistics), CREDIT risk, FORECASTING, COLUMNS, MACHINE learning
- Abstract
Data loss has become an inevitable phenomenon in corporate credit risk (CCR) prediction, and to preserve the integrity of the information available for subsequent analysis and prediction, missing data must be repaired as accurately as possible. To solve the problem of missing data in credit classification, this study proposes a multi-layer perceptron ensemble (MLP–ESM) model that performs data interpolation and prediction simultaneously to predict CCR. The model makes full use of the non-missing information and interpolates the more incomplete columns using columns with fewer missing values. In this way, it extracts not only the data features needed for missing-data interpolation but also the structural relationships between the prediction target and the existing data, achieving interpolation and prediction at the same time. The results show that the MLP–ESM model can effectively interpolate and predict a missing CCR dataset, with a prediction accuracy of 83.11%, better than traditional machine learning models. This shows that the interpolated dataset supports a better prediction effect. [ABSTRACT FROM AUTHOR]
- Published
- 2022
27. MULTIVARIATE INTERPOLATION USING POLYHARMONIC SPLINES
- Author
- Karel Segeth
- Subjects
- data interpolation, smooth interpolation, polyharmonic spline, Fourier transform, Engineering (General). Civil engineering (General), TA1-2040
- Abstract
Measuring data and processing them further is a fundamental activity in all branches of science and technology, and data interpolation has long been an important part of computational mathematics. In this paper, we are concerned with interpolation by polyharmonic splines in arbitrary dimension. We show the connection of this interpolation with interpolation by radial basis functions and with smooth interpolation by generating functions, which provide means for minimizing the L2 norm of chosen derivatives of the interpolant. This can be useful in 2D and 3D, e.g., in the construction of geographic information systems or in computer-aided geometric design. We prove the properties of the piecewise polyharmonic spline interpolant and present a simple 1D example to illustrate them.
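In 2D, the polyharmonic spline phi(r) = r² log r is the thin-plate spline, which SciPy exposes directly; a minimal sketch:

```python
# Scattered-data interpolation with a polyharmonic (thin-plate) spline.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (50, 2))                   # scattered sample sites
vals = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1]   # measured values
spline = RBFInterpolator(pts, vals, kernel="thin_plate_spline")
print(spline(np.array([[0.5, 0.5]])))              # evaluate at a new point
```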
- Published
- 2021
28. A Bayesian Approach for Interpolating Clear-Sky MODIS Land Surface Temperatures on Areas With Extensive Missing Data
- Author
- Yuhong Chen, Zhuotong Nan, Shuping Zhao, and Yi Xu
- Subjects
- Bayesian approach, data fusion, data interpolation, empirical orthogonal function (EOF), land surface temperature (LST), similarity theory, Ocean engineering, TC1501-1800, Geophysics. Cosmic physics, QC801-809
- Abstract
The MODIS land surface temperature (LST) products contain large areas of missing data due to cloud contamination. Interpolating clear-sky-equivalent LSTs over those areas is the first step in a stepwise approach toward fully recovering the missing data. A previous study (the Yu method) implemented an effective clear-sky interpolation method, especially targeting large areas of missing data. The Yu method postulates several global reference LST images that contain over 90% valid pixels and are assumed to have a close statistical relationship to the interpolated images. In practice, however, such reference images are rarely available throughout a one-year cycle, and the time gaps between the available reference images and the interpolated images are often large, compromising interpolation accuracy. In this study, we address these weaknesses and propose a novel clear-sky interpolation approach that uses multiple temporally proximate images as references, from which multiple initial estimates are made by an empirical orthogonal function method and then fused by a Bayesian approach into a best estimate. The proposed approach was compared in two experiments to the Yu method and two other widely used methods, harmonic analysis of time series and co-kriging. Both experiments demonstrate the superiority of the proposed approach over those established methods, as evidenced by the higher spatial correlation coefficients (0.90-0.94) and lower root-mean-square errors (1.19-3.64 °C) it achieved when measured against original data that were intentionally removed.
- Published
- 2021
29. A 3D geological hybrid interpolation method integrating boreholes and geological cross-sections.
- Author
- 李健, 王心宇, 刘沛溶, 马玉荣, and 王广印
- Abstract
Copyright of Journal of Zhengzhou University (Natural Science Edition) is the property of Journal of Zhengzhou University (Natural Science Edition) Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
30. Data Pre-Processing Using Neural Processes for Modeling Personalized Vital-Sign Time-Series Data.
- Author
- Sharma, Pulkit, Shamout, Farah E., Abrol, Vinayak, and Clifton, David A.
- Subjects
- GAUSSIAN processes, MISSING data (Statistics), ELECTRONIC health records, LATENT variables, PROBABILISTIC generative models, INFORMATION modeling, MULTIPLE imputation (Statistics)
- Abstract
Clinical time-series data retrieved from electronic medical records are widely used to build predictive models of adverse events to support resource management. Such data are often sparse and irregularly sampled, which makes many common machine learning methods difficult to apply. Missing values may be interpolated by carrying the last value forward or through linear regression. Gaussian process (GP) regression is also used for imputation, and often for re-sampling time series at regular intervals, but GPs can require extensive, and likely ad hoc, investigation to determine model structure, such as an appropriate covariance function. This is challenging for multivariate real-world clinical data, in which time-series variables exhibit dynamics that differ from one another. In this work, we construct generative models that estimate missing values in clinical time-series data using a neural latent variable model known as a Neural Process (NP). The NP model employs a conditional prior distribution in the latent space to learn global uncertainty in the data by modelling variations at a local level. In contrast to conventional generative modelling, this prior is not fixed but is itself learned during training, so the NP model can adapt to the dynamics of the available clinical data. We propose a variant of the NP framework for efficient modelling of the mutual information between the latent and input spaces, ensuring meaningful learned priors. Experiments using the MIMIC III dataset demonstrate the effectiveness of the proposed approach compared to conventional methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
31. Interpolation Methods to Improve Data Quality of Indoor Positioning Data for Dairy Cattle
- Author
- Keni Ren, Moudud Alam, Per Peetz Nielsen, Maya Gussmann, and Lars Rönnegård
- Subjects
- ultra-wideband, dairy cow, indoor positioning, data interpolation, precision livestock farming, Veterinary medicine, SF600-1100
- Abstract
Position data from real-time indoor positioning systems are increasingly used for studying individual and social behavior in dairy herds. However, missing data make reliable continuous activity monitoring and behavior studies difficult. This study investigates the pattern of missing data and alternative interpolation methods in an ultra-wideband real-time indoor positioning system in a free-stall barn. We collected 3 months of position data from a Swedish farm with around 200 cows. Data sampled over 6 days from 69 cows were used to determine the location and duration of missing data, and data from the 20 cows with the most reliable tags were selected to compare four interpolation methods (last observed position, linear interpolation, cubic spline interpolation, and modified Akima interpolation). Comparing the observed data with interpolations of simulated missing data, the mean error distance ranged from around 55 cm, using the last observed position, to around 17 cm for modified Akima, which had the lowest error distance for all investigated activities (resting, walking, standing, feeding). Larger error distances were found in areas where the cows walk and turn, such as the corner between the feeding area and the cubicles. Modified Akima interpolation is expected to be useful in subsequent analyses of data gathered with real-time indoor positioning systems.
- Published
- 2022
32. Annual CO2 Budget Estimation From Chamber-Based Flux Measurements on Intensively Drained Peat Meadows: Effect of Gap-Filling Strategies
- Author
- Weier Liu, Christian Fritz, Stefan T. J. Weideveld, Ralf C. H. Aben, Merit van den Berg, and Mandy Velthuis
- Subjects
- closed-chamber methods, drained peatland, carbon dioxide, CO2 flux modeling, data interpolation, Environmental sciences, GE1-350
- Abstract
Estimating annual CO2 budgets on drained peatlands is important for understanding the significance of CO2 emissions from peatland degradation and for evaluating the effectiveness of mitigation techniques. The closed-chamber technique is widely used in combination with gap-filling of CO2 fluxes by parameter fitting of empirical models of ecosystem respiration (Reco) and gross primary production (GPP). However, numerous gap-filling strategies suited to different circumstances are available, and they can produce large variance in annual budget estimates, so guidance on the selection of a gap-filling methodology and on its influence on the results is needed. Here, we propose a framework of gap-filling methods with four tiers of increasing model complexity at the structural and temporal levels. Tier one is a simple parameter fitting of basic empirical models on an annual basis. Tier two adds structural complexity by including extra environmental factors such as grass height, groundwater level and drought condition. Tier three introduces temporal complexity by separating the annual datasets into seasons. Tier four is a campaign-specific parameter-fitting approach, representing the highest temporal complexity. The methods were demonstrated on two chamber-based CO2 flux datasets, one of which was previously published; model performance was compared in terms of error statistics, and annual budget estimates were indirectly validated against carbon export values. In conclusion, the different gap-filling methodologies gave similar annual estimates but different intra-annual CO2 fluxes, which did not affect the detection of treatment effects. The campaign-wise gap-filling at Tier four gave the best model performance, while Tier three seasonal gap-filling produced satisfactory results throughout, even under data scarcity. Given the need for more complete carbon balances in drained peatlands, our four-tier framework can serve as methodological guidance for handling chamber-measured CO2 fluxes, which is fundamental to understanding emissions from degraded peatlands and their mitigation. The performance of models on intra-annual data should be validated in future research against continuously measured CO2 flux data.
- Published
- 2022
33. Pixelwise Dynamic Convolution Neural Network for LiDAR Depth Data Interpolation.
- Author
- Kim, Wonjik, Tanaka, Masayuki, Okutomi, Masatoshi, and Sasaki, Yoko
- Abstract
A light detection and ranging (LiDAR) system is an important means of collecting precise, omni-directional 3D information about the surroundings at a high sampling frequency. However, due to the architecture of a LiDAR sensor, LiDAR data typically contain much less information in the vertical direction than in the horizontal direction, and they are often defective as a result of the measurement process. Previous research has interpolated LiDAR data using a high-resolution reference image, but such methods suffer from calibration errors caused by vibration or shock during operation and do not consider defective data. Here, we propose a reference-free interpolation method that accounts for defective LiDAR data using a two-step network architecture: it first exploits the rich information in the horizontal direction of the LiDAR data to restore the defective data and then increases the vertical resolution. Since the proposed method is reference-free, it avoids the calibration issue. We confirm that it outperforms existing interpolation methods and is effective as a pre-processing step in other applications; for example, it can improve the performance of human segmentation. We also propose a pixelwise dynamic convolution layer as a component of the interpolation method, which adaptively processes each pixel with variable kernels, whereas existing dynamic layers use only fixed kernels. Experimental results show that the proposed dynamic layer is more effective than existing dynamic layers for LiDAR data interpolation. [ABSTRACT FROM AUTHOR]
- Published
- 2021
34. SPATIAL DEPENDENCE DEGREE AND SAMPLING NEIGHBORHOOD INFLUENCE ON INTERPOLATION PROCESS FOR FERTILIZER PRESCRIPTION MAPS
- Author
- Lucas R. do Amaral and Diego D. Della Justina
- Subjects
- data interpolation, soil sampling, geostatistics, site-specific management, Agriculture (General), S1-972
- Abstract
Data interpolation is widely required in precision agriculture, but its effectiveness depends on the characteristics of the dataset, making it necessary to evaluate several parameters. This study aimed to identify how the spatial interpolators Kriging and Inverse Distance Weighting are influenced by the degree of spatial dependence of the variables analyzed and by the number of neighbors considered in the interpolation process (the sampling neighborhood). Soil samples were collected from three sugarcane fields. Through an optimization process, we verified that the sampling neighborhood influences the accuracy of the interpolations, but there is no standard recommendation to follow; the best sampling neighborhood must therefore be optimized case by case when preparing fertilizer prescription maps. Evaluating the performance of the interpolations is always important for inferring the reliability of the prescription maps, since no index that measures the degree of spatial dependence is effective on its own. Because large prediction errors can occur when spatial dependence is poorly modeled, one cannot expect a crop response to the continuous application of fertilizers at variable rates; working with homogeneous soil zones can be an interesting palliative approach. This study guides precision agriculture practitioners on points that should be carefully considered when interpolating data to generate fertilizer prescription maps.
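The neighborhood optimization the authors recommend can be approximated by cross-validating over candidate neighbor counts; a hedged sketch with a distance-weighted k-NN regressor standing in for IDW with a fixed neighborhood (soil values are synthetic):

```python
# Pick the sampling neighborhood that minimises cross-validated MAE.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, (120, 2))                   # sample locations (m)
p2o5 = 30 + 0.2 * xy[:, 0] + rng.normal(0, 3, 120)   # hypothetical soil attribute

for n in (4, 8, 12, 16):
    knn = KNeighborsRegressor(n_neighbors=n, weights="distance")
    mae = -cross_val_score(knn, xy, p2o5, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(n, round(mae, 2))                          # keep the best n
```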
- Published
- 2019
35. Data Imputation of Wind Turbine Using Generative Adversarial Nets with Deep Learning Models
- Author
- Qu, Fuming, Liu, Jinhai, Hong, Xiaowei, and Zhang, Yu
- Editor
- Cheng, Long, Leung, Andrew Chi Sing, and Ozawa, Seiichi
- Published
- 2018
36. A Missing Type-Aware Adaptive Interpolation Framework for Sensor Data.
- Author
- Chen, Lingqiang, Li, Guanghui, Huang, Guangyan, and Shi, Pei
- Subjects
- AIR quality monitoring stations, INTERPOLATION, MATRIX decomposition, PEARSON correlation (Statistics), INTERPOLATION algorithms, MISSING data (Statistics)
- Abstract
Missing data problems often occur in Internet-of-Things domains. This article proposes a missing-type-aware interpolation framework (IMA) for data loss in city-wide environmental monitoring systems that contain many scattered stations. To interpolate data as accurately as possible, IMA considers three aspects of the information, i.e., spatiotemporal correlations, all attributes of one measurement, and all values, and accordingly develops three methods to estimate the missing data. First, we develop an improved multiviewer method, which uses the spatiotemporal correlation of data from neighboring stations to estimate randomly missing values. Second, we propose a new multi-eXtreme Gradient Boosting (multi-XGBoost) method that uses the values of the co-occurring and correlated correct attributes to predict the value of the missing attribute. Third, we take advantage of matrix factorization to estimate the missing parts when the data of the interpolation matrix are not entirely missing. To avoid the influence of uncorrelated data, IMA calculates Pearson's correlation coefficient between the data of each pair of stations and uses the data from each station's top-k most highly correlated neighbors to form the interpolation matrix. Furthermore, because of the complexity of missing-data cases, IMA attaches confidence levels to each of the three prediction methods; for example, if the multiviewer method fails, IMA weights all valid results by their confidence levels. We conducted experiments on two real-world datasets from air quality monitoring stations in Beijing, both containing numerous missing measurements. Experimental results show that IMA outperforms the other methods in interpolating the missing measurements in terms of accuracy and effectiveness; compared with the most closely related method, IMA improves the interpolation accuracy from 0.818 to 0.849 on a small dataset and from 0.214 to 0.759 on a large one. [ABSTRACT FROM AUTHOR]
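IMA's neighbor-screening step is straightforward to sketch; hypothetical station names and synthetic readings:

```python
# Keep each station's top-k Pearson-correlated neighbours for interpolation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
base = rng.normal(size=200).cumsum()                 # shared city-wide signal
readings = pd.DataFrame(
    {f"station_{i}": base + rng.normal(scale=1 + i, size=200) for i in range(6)}
)

corr = readings.corr(method="pearson")
k = 3
top_k = {s: corr[s].drop(s).nlargest(k).index.tolist() for s in corr.columns}
# a missing value at station s would be estimated from readings[top_k[s]]
```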
- Published
- 2021
37. MULTIVARIATE INTERPOLATION USING POLYHARMONIC SPLINES.
- Author
- SEGETH, KAREL
- Subjects
- INTERPOLATION, SPLINES, RADIAL basis functions, POLYHARMONIC functions, SPLINE theory, NORMED rings
- Published
- 2021
38. DATA INTERPOLATION BASED ON CONTEXTUAL ANALYSIS FOR GENERATING TOMOGRAPHIC IMAGES IN CONCRETE SPECIMEN.
- Author
- Quispe Urure, Roger Manuel, Garcia de Carvalho, Marco Antonio, Nascimento Moura, Marinara Andrade, and Cristina dos Santos Ferreira, Gisleiva
- Subjects
- INTERPOLATION, CONTEXTUAL analysis, TOMOGRAPHY, THRESHOLDING algorithms, CONCRETE
- Abstract
In the construction field, inspections of concrete structures can be carried out using non-destructive tests such as ultrasonic tomography. Usually, images generated by this technique are improved by means of spatial data interpolation. This research proposes a new spatial interpolation technique for generating tomographic images of concrete. The approach was originally used on wood tomographic images and is based on contextual analysis according to the location of the point to be interpolated; the influence zone of each route obtained in the concrete ultrasonic test is also considered. The ultrasonic tests were performed on cylindrical specimens with and without PVC tubes, simulating voids in the concrete. The results of our analysis are compared to ground-truth images and to those obtained by Inverse Distance Weighting, using an accuracy metric and image processing operations. The results indicate high accuracy values (greater than 90%), i.e., the spatial interpolation technique is promising for identifying the presence of voids in concrete elements. [ABSTRACT FROM AUTHOR]
- Published
- 2020
39. Investigations of a functional version of a blending surface scheme for regular data interpolation.
- Author
- Mann, Stephen
- Subjects
- INTERPOLATION, POLYHEDRA, POLYNOMIALS
- Abstract
This paper describes an implementation and tests of a blending scheme for regularly sampled data interpolation, and in particular studies the order of approximation of the method. The implementation is a special case of an earlier scheme by Fang for fitting a parametric surface that interpolates the vertices of a closed polyhedron with n-sided faces, where a surface patch is constructed for each face of the polyhedron and neighbouring faces can meet with a user-specified order of continuity. The specialization described in this paper considers functions of the form z = f(x, y) with the patches meeting with C2 continuity. This restriction allows investigation of the order of approximation, and it is shown that the functional version of Fang's scheme has polynomial precision. • Analyzed and tested a functional form of a parametric data interpolation scheme. • Proved that this functional scheme has the polynomial precision of the subsurfaces constructed in the initial step of the scheme. • Gave several surface constructions for the initial step with polynomial precision of different degrees. [ABSTRACT FROM AUTHOR]
- Published
- 2024
40. Comparison of manifold learning algorithms used in FSI data interpolation of curved surfaces
- Author
- Liu, Ming-min, Li, L.Z., and Zhang, Jun
- Published
- 2017
41. Efficient Coupling of Fluid and Acoustic Interaction on Massive Parallel Systems
- Author
- Krupp, Verena, Masilamani, Kannan, Klimach, Harald, and Roller, Sabine
- Editor
- Resch, Michael M., Bez, Wolfgang, Focht, Erich, Patel, Nisarg, and Kobayashi, Hiroaki
- Published
- 2016
42. Shot-gather Reconstruction using a Deep Data Prior-based Neural Network Approach
- Author
- Rodríguez López, Luis Miguel, León López, Kareth Marcela, Goyes Peñafiel, Yesid Paul, Galvis Martínez, Laura Carolina, and Arguello Fuentes, Henry
- Abstract
Seismic surveys are often affected by environmental obstacles or restrictions that prevent regular sampling in seismic acquisition. To address missing data, various methods, including deep learning techniques, have been developed to extract features from complex information, albeit with the limitation of requiring external seismic databases. While previous works have primarily focused on trace reconstruction, missing shot-gathers directly impact the seismic processing flow and represent a major challenge in seismic data regularization. In this paper, we propose DIPsgr, a seismic shot-gather reconstruction method that uses only the incomplete seismic acquisition measurements to estimate their missing information employing unsupervised deep learning. Numerical experiments on three databases demonstrate that DIPsgr recovers the complete set of traces in each shot-gather, with preserved information and seismic events.
- Published
- 2023
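DIPsgr builds on the deep-image-prior idea: a randomly initialized convolutional network is fitted to the incomplete gather alone, with the loss evaluated only on the acquired traces, so no external training database is needed. The following is a minimal generic sketch of that idea in PyTorch, not the authors' architecture; the network size, mask, and synthetic stand-in data are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: 256 time samples x 64 receivers; mask marks acquired traces.
T, R = 256, 64
observed = torch.randn(1, 1, T, R)             # stand-in for a real shot gather
mask = (torch.rand(1, 1, 1, R) > 0.3).float()  # 1 = trace was acquired

net = nn.Sequential(                           # small conv net acting as the prior
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 1, T, R)                    # fixed random input, never updated
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):                       # fit the observed traces only
    opt.zero_grad()
    loss = ((net(z) - observed) ** 2 * mask).mean()
    loss.backward()
    opt.step()

full_gather = net(z).detach()                  # includes the in-painted traces
```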
43. Use of artificial vision during the lye treatment of Sevillian-style green olives to determine the optimal time for terminating the cooking process
- Author
-
Universidad de Sevilla. Departamento de Ingeniería Aeroespacial y Mecánica de Fluidos, Universidad de Sevilla. Departamento de Ingeniería Gráfica, Universidad de Sevilla. AGR280: Ingeniería Rural, Universidad de Sevilla. RNM162: Composición, Arquitectura y Medio Ambiente, López Gordillo, Miguel Calixto, Madueño Luna, Antonio, Madueño Luna, José Miguel, and Ramírez Juidias, Emilio
- Abstract
This study focuses on characterizing the temporal evolution of the surface affected by industrial treatment with NaOH within the processing tanks during the lye treatment stage of Manzanilla table olives. The lye treatment process is affected by multiple variables, such as ambient temperature, the initial temperature of the olives before lye treatment, the temperature of the NaOH solution, the concentration of the solution, the variety of olives, and their size, all of which determine the speed of the lye treatment process. Traditionally, an expert, relying on subjective judgement, manages the cooking process empirically, leading to variability in the timing of its termination. In this study, we introduce a system that uses artificial vision to determine, in a deterministic way, the percentage of lye treatment achieved at each moment of the cooking process; furthermore, with an interpolator that accumulates values during the lye treatment, it is possible to anticipate the completion of the cooking by indicating the moment when two-thirds, three-fourths, or some other fraction of the interior surface will be reached, with an error of less than 10% relative to the optimal moment. Knowing this moment is crucial for proper processing, as it affects subsequent stages of the manufacturing process and the quality of the final product. (A code sketch follows this record.)
- Published
- 2023
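In its simplest reading, the anticipation step above reduces to inverse interpolation of a monotone cooking curve: given the accumulated fraction of treated surface over time, estimate when a target fraction such as two-thirds or three-fourths will be reached. A minimal sketch, with an invented measurement log standing in for the vision system's output:

```python
import numpy as np

# Hypothetical log: (elapsed minutes, fraction of interior surface reached by NaOH)
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
frac = np.array([0.00, 0.18, 0.35, 0.49, 0.61, 0.70])

def eta_for_fraction(target, frac, t):
    """Predict when a target fraction (e.g. 2/3 or 3/4) will be reached
    by inverse linear interpolation of the monotone cooking curve."""
    return float(np.interp(target, frac, t))

print(eta_for_fraction(2 / 3, frac, t))  # estimated minutes until 2/3 is reached
```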
44. Solution-Processed Flexible Temperature Sensor Array for Highly Resolved Spatial Temperature and Tactile Mapping Using ESN-Based Data Interpolation.
- Author
-
Nakamura H, Ezaki R, Matsumura G, Chung CC, Hsu YC, Peng YR, Fukui A, Chueh YL, Kiriya D, and Takei K
- Abstract
High-performance flexible temperature sensors are crucial in various technological applications, such as monitoring environmental conditions and human healthcare. The ideal characteristics of these sensors for stable temperature monitoring include scalability, mechanical flexibility, and high sensitivity. Moreover, simplicity and low power consumption will be essential for temperature sensor arrays in future integrated systems. This study introduces a solution-based approach for creating a V2O5 nanowire network temperature sensor on a flexible film. Through optimization of the fabrication conditions, the sensor exhibits remarkable performance, sustaining long-term stability (>110 h) with minimal hysteresis and excellent sensitivity (∼-1.5%/°C). In addition, this study employs machine learning techniques for data interpolation among sensors, thereby enhancing the spatial resolution of temperature measurements and adding tactile mapping without increasing the sensor count. Introducing this methodology results in an improved understanding of temperature variations, advancing the capabilities of flexible-sensor arrays for various applications.
- Published
- 2024
- Full Text
- View/download PDF
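The record above interpolates readings between physical sensors with an echo state network (ESN): a fixed random reservoir driven by the inputs, with only a ridge-regression readout being trained. The paper's exact architecture is not reproduced here; the sketch below is a generic NumPy ESN trained to predict a held-out sensor from its neighbours, and the sizes, spectral radius, and synthetic signals are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: estimate an unmeasured sensor from 8 neighbouring sensors.
n_in, n_res, T = 8, 200, 1000
U = rng.standard_normal((T, n_in))        # neighbour time series (stand-in data)
y = 0.1 * U.sum(axis=1, keepdims=True)    # stand-in target sensor

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

# drive the reservoir and collect its states
X = np.zeros((T, n_res))
x = np.zeros(n_res)
for step in range(T):
    x = np.tanh(W_in @ U[step] + W @ x)
    X[step] = x

# ridge-regression readout fitted on a training prefix
lam = 1e-2
A, b = X[:800], y[:800]
W_out = np.linalg.solve(A.T @ A + lam * np.eye(n_res), A.T @ b)
pred = X[800:] @ W_out                    # interpolated readings on the test span
```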
45. Use of Multiple Low Cost Carbon Dioxide Sensors to Measure Exhaled Breath Distribution with Face Mask Type and Wearing Behaviour
- Author
-
Naveed Salman, Muhammad Waqas Khan, Michael Lim, Amir Khan, Andrew H. Kemp, and Catherine J. Noakes
- Subjects
face mask ,CO2 sensors ,COVID-19 ,data interpolation ,Chemical technology ,TP1-1185 - Abstract
The use of cloth face coverings and face masks has become widespread in light of the COVID-19 pandemic. This paper presents a method of using low-cost, wirelessly connected carbon dioxide (CO2) sensors to measure the effects of properly and improperly worn face masks on the concentration distribution of exhaled breath around the face. Four types of face masks are used in two indoor environment scenarios. CO2, as a proxy for exhaled breath, is measured with the Sensirion SCD30 CO2 sensor, and data are transferred wirelessly to a base station. The exhaled CO2 is measured in four directions at various distances from the head of the subject and interpolated to create spatial heat maps of CO2 concentration. Statistical analysis using Friedman's analysis of variance (ANOVA) test is carried out to test the null hypothesis that the distribution of CO2 is the same between different experimental conditions. Results suggest that CO2 concentrations vary little with the type of mask used; however, improper use of the face mask results in a statistically different spatial distribution of CO2 concentration. The use of low-cost sensors with a visual interpolation tool could provide an effective method of demonstrating the importance of proper mask wearing to the public. (A code sketch follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
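Two building blocks of the study above, spatial interpolation of point sensor readings into a heat map and the Friedman test across experimental conditions, can be sketched with SciPy. The sensor positions, readings, and condition offsets below are invented placeholders:

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.stats import friedmanchisquare

# Hypothetical sensor layout: (x, y) positions around the head (metres)
# and mean CO2 readings (ppm).
pts = np.array([[0, 0.2], [0.2, 0], [0, -0.2], [-0.2, 0], [0, 0.4], [0.4, 0]])
ppm = np.array([1500.0, 1200.0, 900.0, 1100.0, 800.0, 700.0])

# interpolate onto a regular grid for a spatial heat map (NaN outside the hull)
gx, gy = np.mgrid[-0.5:0.5:50j, -0.5:0.5:50j]
heat = griddata(pts, ppm, (gx, gy), method="cubic")

# Friedman test: the same sensors measured under three mask conditions
cond_a = ppm
cond_b = ppm * 0.95 + 20
cond_c = ppm * 1.10 - 50
stat, p = friedmanchisquare(cond_a, cond_b, cond_c)
print(f"Friedman statistic = {stat:.2f}, p = {p:.3f}")
```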
46. M-SRPCNN: A Fully Convolutional Neural Network Approach for Handling Super Resolution Reconstruction on Monthly Energy Consumption Environments
- Author
-
Iván de-Paz-Centeno, María Teresa García-Ordás, Oscar García-Olalla, Javier Arenas, and Héctor Alaiz-Moretón
- Subjects
super resolution perception ,super resolution of energy ,data interpolation ,convolutional neural network ,deep-learning ,Technology - Abstract
We propose M-SRPCNN, a fully convolutional generative deep neural network to recover missing historical hourly data from a sensor based on the historic monthly energy consumption. The network reconstructs the load profile while preserving the overall monthly consumption, which makes it suitable to effectively replace energy apportioning systems. Experiments demonstrate that M-SRPCNN can effectively reconstruct load curves from single-month overall values, outperforming traditional apportioning systems. (A code sketch follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
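A defining constraint in the record above is that the reconstructed hourly profile must preserve the overall monthly consumption. One hedged way to realize this, not necessarily the authors' M-SRPCNN architecture, is a small transposed-convolution decoder whose output is rescaled to the monthly total; the class name, layer sizes, and the 720-hour month are assumptions, and training against real load profiles is omitted.

```python
import torch
import torch.nn as nn

HOURS = 720  # hours in an assumed 30-day month

class MonthToHours(nn.Module):
    """Toy fully convolutional decoder: one monthly total in, an hourly
    load profile out, rescaled so the profile sums to the monthly value."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(1, 16, 4, stride=4),    # length 1 -> 4
            nn.ReLU(),
            nn.ConvTranspose1d(16, 16, 4, stride=4),   # length 4 -> 16
            nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 45, stride=45),  # length 16 -> 720
            nn.Softplus(),                             # non-negative loads
        )

    def forward(self, monthly):                  # monthly: (batch, 1, 1)
        prof = self.net(monthly)                 # (batch, 1, 720)
        scale = monthly / prof.sum(dim=-1, keepdim=True).clamp_min(1e-8)
        return prof * scale                      # mass-preserving rescale

model = MonthToHours()
hourly = model(torch.tensor([[[450.0]]]))        # e.g. 450 kWh in the month
print(hourly.shape, hourly.sum().item())         # (1, 1, 720), sum ~ 450
```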
47. Deep Interpretable Early Warning System for the Detection of Clinical Deterioration.
- Author
-
Shamout, Farah E., Zhu, Tingting, Sharma, Pulkit, Watkinson, Peter J., and Clifton, David A.
- Subjects
RECEIVER operating characteristic curves ,HOSPITAL wards ,RANDOM variables ,DEEP learning ,GROUND penetrating radar - Abstract
Assessment of physiological instability preceding adverse events on hospital wards has previously been investigated through clinical early warning score systems. Early warning scores are simple to use, yet they treat data as independent and identically distributed random variables. Deep learning applications are able to learn from sequential data; however, they lack interpretability and are thus difficult to deploy in clinical settings. We propose the ‘Deep Early Warning System’ (DEWS), an interpretable end-to-end deep learning model that interpolates temporal data and predicts the probability of an adverse event, defined as the composite outcome of cardiac arrest, mortality or unplanned ICU admission. The model was developed and validated using routinely collected vital signs of patients admitted to the Oxford University Hospitals between 21st March 2014 and 31st March 2018. We extracted 45 314 vital-sign measurements as a balanced training set and 359 481 vital-sign measurements as an imbalanced testing set to mimic a real-life setting of emergency admissions. DEWS achieved higher accuracy than the National Early Warning Score, the state of the art currently implemented in clinical settings, in terms of the overall area under the receiver operating characteristic curve (AUROC) (0.880 vs. 0.866) and when evaluated independently for each of the three outcomes. Our attention-based architecture was able to recognize ‘historical’ trends in the data that are most correlated with the predicted probability. With high sensitivity, improved clinical utility and increased interpretability, our model can be easily deployed in clinical settings to supplement existing EWS systems. [ABSTRACT FROM AUTHOR] (A code sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
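The headline comparison in the record above is an AUROC of 0.880 versus 0.866 for the baseline score. Computing such a comparison is straightforward with scikit-learn; the labels and scores below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic evaluation set: 1 = adverse event within the horizon, 0 = none.
y_true = rng.integers(0, 2, 1000)
baseline_score = y_true * 0.5 + rng.normal(0, 0.6, 1000)  # weaker separation
deep_score = y_true * 0.8 + rng.normal(0, 0.6, 1000)      # stronger separation

print("baseline AUROC:", roc_auc_score(y_true, baseline_score))
print("deep model AUROC:", roc_auc_score(y_true, deep_score))
```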
48. Improved Interpolation and Anomaly Detection for Personal PM2.5 Measurement.
- Author
-
JinSoo Park and Sungroul Kim
- Subjects
STANDARD deviations ,INTRUSION detection systems (Computer security) - Abstract
With the development of technology, especially technologies related to artificial intelligence (AI), the fine-dust data acquired by various personal monitoring devices are of great value as training data for predicting future fine-dust concentrations and alerting people to potential danger. However, most of the fine-dust data obtained from those devices include either missing or abnormal values caused by various factors such as sensor malfunction, transmission errors, or storage errors. This paper presents methods to interpolate the missing data and detect anomalies in PM2.5 time-series data. We validated the performance of our methods by comparing them to well-known existing methods using our personal PM2.5 monitoring data. Our results showed that the proposed interpolation method improves root mean square error (RMSE) by more than 25% over most existing methods, and the proposed anomaly detection method achieves fairly accurate results even for highly capricious fine-dust data. These proposed methods are expected to contribute greatly to improving the reliability of the data. [ABSTRACT FROM AUTHOR] (A code sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
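The two tasks of the record above, gap interpolation and anomaly detection in a PM2.5 time series, can be sketched with pandas. This is a generic baseline (time-aware interpolation plus a rolling median/MAD z-score), not the paper's proposed method; the series, gap, spike, and threshold are invented:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Hypothetical minute-level PM2.5 series with a missing block and a spike.
idx = pd.date_range("2024-01-01", periods=500, freq="min")
pm = pd.Series(20 + 5 * np.sin(np.arange(500) / 30) + rng.normal(0, 1, 500), idx)
pm.iloc[100:110] = np.nan          # missing block (e.g. transmission error)
pm.iloc[250] = 300.0               # sensor spike

# fill the gap with time-aware interpolation
filled = pm.interpolate(method="time")

# flag anomalies via a robust rolling z-score (median + MAD)
med = filled.rolling(31, center=True, min_periods=1).median()
mad = (filled - med).abs().rolling(31, center=True, min_periods=1).median()
z = 0.6745 * (filled - med) / mad.replace(0, np.nan)
anomalies = filled[z.abs() > 3.5]
print(anomalies)                   # should isolate the injected spike
```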
49. CHRONOscope: Application for the Interactive Visualization of Carbon-14 and Beryllium-10 Atmospheric data.
- Author
-
Neocleous, Andreas, Kuitems, Margot, Scifo, Andrea, and Dee, Michael
- Subjects
CARBON cycle ,BERYLLIUM ,CLIMATE change ,DATA visualization ,RADIOCARBON dating - Abstract
Information about the global climate, the carbon cycle, changes in solar activity, and a number of other atmospheric processes is preserved in the carbon-14 and beryllium-10 records. However, these isotope datasets are large and cumbersome to work with. We have designed a self-contained, easy-to-use application that allows for more efficient analysis of different periods and patterns of interest. For several applications in atmospheric modelling, a pre-processing stage is applied to the isotope datasets in order to interpolate the data and mitigate their low temporal resolution. In CHRONOscope, we included linear and non-linear methods of interpolation with interactive parameter optimization. The resultant interpolated data can be extracted for further use. The main functionalities of CHRONOscope include the importation and superimposition of external data, quick navigation through the data with the use of markers, expression of the carbon-14 results in both Δ14C and yr BP form, separation of the data by source, and the visualization of associated error bars. We make this free software available as standalone applications for both Windows and Mac operating systems. [ABSTRACT FROM AUTHOR] (A code sketch follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
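The pre-processing described above, putting a low-resolution isotope record onto a finer grid with linear and non-linear interpolation, can be sketched with SciPy; the decadal Δ14C values below are invented placeholders, and PCHIP stands in for an unspecified non-linear method:

```python
import numpy as np
from scipy.interpolate import interp1d, PchipInterpolator

# Hypothetical decadal Δ14C record to be resampled onto an annual grid.
years = np.array([1000.0, 1010.0, 1020.0, 1030.0, 1040.0, 1050.0])
d14c = np.array([-18.2, -17.5, -19.1, -20.3, -19.8, -18.9])

annual = np.arange(1000, 1051)
linear = interp1d(years, d14c)(annual)           # linear interpolation
shaped = PchipInterpolator(years, d14c)(annual)  # shape-preserving non-linear
```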
50. Polyharmonic splines generated by multivariate smooth interpolation.
- Author
-
Segeth, Karel
- Subjects
SPLINE theory ,SPLINES ,INTERPOLATION ,RADIAL basis functions ,GEOGRAPHIC information systems ,GEOMETRICAL constructions ,COMPUTER-aided design ,SIGNAL processing - Abstract
Polyharmonic splines of order m satisfy the polyharmonic equation of order m in n variables. Moreover, if employed as basis functions for interpolation, they are radial functions. We are concerned with the problem of constructing a smooth interpolation formula presented as the minimizer of suitable functionals subject to interpolation constraints for n ≥ 1; this is the principal motivation of the paper. We show a particular procedure for determining the interpolation formula that in a natural way leads to a linear combination of polyharmonic splines of a fixed order, possibly complemented with lower-order polynomial terms. If it is advantageous for the interpolant in the problem solved to be a polyharmonic spline, we can construct such an interpolant directly using the multivariate smooth approximation technique. The smoothness of the spline can be chosen a priori. Smooth interpolation can be very useful, e.g., in signal processing, computer-aided geometric design, or the construction of geographic information systems. A 1D computational example is presented. [ABSTRACT FROM AUTHOR] (A code sketch follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
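For the common case m = 2 and n = 2, the polyharmonic spline interpolant discussed above is the thin-plate spline, and SciPy's RBFInterpolator constructs exactly this radial-basis form complemented with a low-order polynomial term. A minimal sketch on invented scattered data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

# Scattered 2-D data; thin-plate splines are the order m = 2 polyharmonic
# splines in two variables, complemented here with a linear polynomial term.
pts = rng.uniform(-1, 1, (50, 2))
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])

tps = RBFInterpolator(pts, vals, kernel="thin_plate_spline", degree=1)

# evaluate the interpolant on a regular grid
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 40),
                            np.linspace(-1, 1, 40)), axis=-1).reshape(-1, 2)
surface = tps(grid)
```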