344 results
Search Results
2. Noise Reduction with Inference Based on Fuzzy Rule Interpolation at an Infinite Number of Activating Points: Toward Fuzzy Rule Learning in a Unified Inference Platform.
- Author
-
Kiyohiko Uehara and Kaoru Hirota
- Subjects
NOISE control, FUZZY systems, INTERPOLATION, FUZZY control systems, ALGORITHMS, NUMERICAL analysis - Abstract
In order to provide a unified platform for fuzzy inference and fuzzy rule learning with noise-corrupted data, a method is proposed for reducing noise in learning data on the basis of a fuzzy inference method called α-GEMINAS (α-level-set and generalized-mean-based inference with fuzzy rule interpolation at an infinite number of activating points). It is expected to prevent fuzzy rules from overfitting to noise in learning data, especially when less learning data is available for fuzzy rule optimization. The proposed method is named α-GEMI-ES (α-GEMINAS-based local-evolution toward slight linearity for global smoothness) in this paper. α-GEMI-ES iteratively performs α-GEMINAS and reduces the noise in each iteration. This paper mathematically proves that α-GEMI-ES effectively reduces the noise. The noise-reduction process is decisive and thus relies less on trial-and-error-based progress. The noise is reduced by a large amount in the early iterations, and the amount of reduction decelerates in the later iterations, where the deviation in the learning data is suppressed to a great extent. This property makes it easy to determine the termination conditions for the iterative process. Simulation results demonstrate that α-GEMI-ES properly reduces noise as the mathematical proof suggests. The above-mentioned properties indicate that α-GEMI-ES is feasible in practice for the unified platform. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
3. Study on 3D Terrain Mapping Method Based on Triangulation.
- Author
-
Wang Qinglin, Yao Hua, and Wang Yukun
- Subjects
TRIANGULATION, GEODESY, ALGORITHMS, NUMERICAL analysis, MATHEMATICAL analysis - Abstract
A new method for generating three-dimensional terrain maps from discrete points is presented in this paper. The terrain-generation process builds a triangulation from the discrete points, uses interpolation to calculate elevation data on a regular grid, and then applies the Diamond-Square algorithm from fractal theory to further refine the grid data. The results of programming experiments show that realistic terrain maps can be obtained from a small number of terrain feature points, which is suitable for rendering virtual scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2010
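A minimal Python sketch of the Diamond-Square refinement step mentioned in the abstract above. This is the generic textbook form of the algorithm, not the authors' implementation; the grid size, corner elevations, and roughness parameter are hypothetical.

```python
import numpy as np

def diamond_square(grid, roughness=0.5, rng=None):
    """Refine a (2**k + 1) x (2**k + 1) elevation grid in place.

    Known coarse elevations (here the four corners) seed the recursion; the
    remaining points are filled by averaging neighbours and adding random
    perturbations that shrink at every level.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = grid.shape[0]
    step, scale = n - 1, roughness
    while step > 1:
        half = step // 2
        # Diamond step: centre of each square = mean of its 4 corners + noise.
        for y in range(half, n, step):
            for x in range(half, n, step):
                corners = (grid[y - half, x - half] + grid[y - half, x + half] +
                           grid[y + half, x - half] + grid[y + half, x + half])
                grid[y, x] = corners / 4.0 + rng.uniform(-scale, scale)
        # Square step: each edge midpoint = mean of its in-bounds neighbours + noise.
        for y in range(0, n, half):
            for x in range((y + half) % step, n, step):
                neigh = [grid[y + dy, x + dx]
                         for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half))
                         if 0 <= y + dy < n and 0 <= x + dx < n]
                grid[y, x] = sum(neigh) / len(neigh) + rng.uniform(-scale, scale)
        step, scale = half, scale * roughness
    return grid

# Hypothetical usage: four surveyed corner elevations refined to a 129 x 129 grid.
terrain = np.zeros((129, 129))
terrain[0, 0], terrain[0, -1], terrain[-1, 0], terrain[-1, -1] = 10.0, 12.0, 8.0, 11.0
terrain = diamond_square(terrain, roughness=0.6)
```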
4. Algorithm implementation and numerical analysis for the two-dimensional tempered fractional Laplacian.
- Author
-
Sun, Jing, Nie, Daxin, and Deng, Weihua
- Subjects
NUMERICAL analysis, LEVY processes, FINITE differences, POISSON'S equation, ALGORITHMS, INTERPOLATION, TRAPEZOIDS - Abstract
The tempered fractional Laplacian is the generator of the tempered isotropic Lévy process. This paper provides a finite difference discretization for the two-dimensional tempered fractional Laplacian by using the weighted trapezoidal rule and bilinear interpolation. The discretization is then used to solve the tempered fractional Poisson equation with homogeneous Dirichlet boundary condition, and the error estimate is also derived. Numerical experiments verify the predicted convergence rates and the effectiveness of the schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
5. An Unsymmetric FDTD Subgridding Algorithm With Unconditional Stability.
- Author
-
Yan, Jin and Jiao, Dan
- Subjects
FINITE difference time domain method, SYMMETRY (Physics), ELECTRON tube grids, ALGORITHMS, NUMERICAL analysis - Abstract
To preserve accuracy in a grid with arbitrary subgrids, a finite-difference time-domain (FDTD) subgridding scheme, in general, would result in an unsymmetric numerical system. Such a numerical system can have complex-valued eigenvalues, which will render a traditional explicit time marching of FDTD absolutely unstable. In this paper, we develop an accurate FDTD subgridding algorithm suitable for arbitrary subgridding settings with arbitrary contrast ratios between the normal grid and the subgrid. Although the resulting system matrix is also unsymmetric, we develop a time-marching method to overcome the stability problem without sacrificing the matrix-free merit of the original FDTD. This method is general, which is also applicable to other subgridding algorithms whose underlying numerical systems are unsymmetric. The proposed FDTD subgridding algorithm is then further made unconditionally stable, thus permitting the use of a time step independent of space step. Extensive numerical experiments involving both 2- and 3-D subgrids with various contrast ratios have demonstrated the accuracy, stability, and efficiency of the proposed subgridding algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
6. Fine-granularity inference and estimations to network traffic for SDN.
- Author
-
Jiang, Dingde, Huo, Liuwei, and Li, Ya
- Subjects
COMPUTER networks, COMPUTER simulation, PHYSICAL sciences, INTERPOLATION - Abstract
An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end network traffic matrix is a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix at fine time granularity from sampled traffic traces, which is a hard inverse problem. Different from previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, cubic spline interpolation is used to obtain smooth reconstruction values. To attain an accurate end-to-end network traffic estimate at fine time granularity, we perform a weighted geometric average of the two interpolation results. The simulation results show that our approaches are feasible and effective. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
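A rough sketch of the blending idea described in the abstract above: a smooth cubic-spline reconstruction and a rougher "fractal-style" reconstruction of coarsely sampled traffic are combined by a weighted geometric average. The midpoint-displacement routine is only a simplified stand-in for the paper's fractal interpolation, and the sample values, grids, and blending weight w are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def midpoint_refine(t, y, levels=3, roughness=0.4, rng=None):
    """Very simplified 'fractal-style' refinement: repeatedly insert perturbed midpoints."""
    rng = np.random.default_rng(0) if rng is None else rng
    t, y = np.asarray(t, float), np.asarray(y, float)
    for _ in range(levels):
        tm = 0.5 * (t[:-1] + t[1:])
        ym = 0.5 * (y[:-1] + y[1:]) + roughness * np.abs(np.diff(y)) * rng.standard_normal(tm.size)
        t2, y2 = np.empty(t.size + tm.size), np.empty(t.size + tm.size)
        t2[0::2], t2[1::2] = t, tm          # interleave old samples and new midpoints
        y2[0::2], y2[1::2] = y, ym
        t, y, roughness = t2, y2, roughness * 0.5
    return t, y

# Coarse traffic samples, e.g. one measurement every 5 time units (values hypothetical).
t_coarse = np.arange(0.0, 60.0, 5.0)
x_coarse = np.array([12, 15, 14, 20, 26, 22, 18, 17, 21, 25, 23, 19], float)

t_fine = np.arange(0.0, 55.0 + 1e-9, 0.5)                     # target fine time grid
t_f, x_f = midpoint_refine(t_coarse, x_coarse)
rough = np.interp(t_fine, t_f, x_f)                           # fractal-style estimate
smooth = CubicSpline(t_coarse, x_coarse)(t_fine)              # cubic-spline estimate

w = 0.5                                                       # blending weight (hypothetical)
estimate = np.abs(rough) ** w * np.abs(smooth) ** (1.0 - w)   # weighted geometric average
```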
7. A Reliable Vector-Valued Rational Interpolation and Its Existence Study.
- Author
-
Xiaolin Zhu
- Subjects
INTERPOLATION, APPROXIMATION theory, NUMERICAL analysis, ALGORITHMS, EQUATIONS - Abstract
This paper presents a modified Thiele-Werner algorithm to construct a kind of reliable vector-valued rational interpolants (RVRIs) and then studies their existence. The reliability of this method means that if a solution of the basic vector-valued rational interpolation problem exists, the method given in this paper finds it. A method for testing the existence of RVRIs and some methods for dealing with unattainable points of RVRIs are also given. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
8. Ultrasonic Signal Decomposition via Matching Pursuit with an Adaptive and Interpolated Dictionary.
- Author
-
Lu, Yinghui and Michaels, Jennifer E.
- Subjects
INTERPOLATION, ALGORITHMS, NUMERICAL analysis, APPROXIMATION theory, ULTRASONIC waves, SPECTRUM analysis - Abstract
Matching pursuit is an iterative method whereby a signal is decomposed into a linear combination of functions that are selected from a redundant dictionary. In the original paper by Mallat and Zhang, a dictionary of Gabor functions is proposed. Each Gabor function is the product of a Gaussian function with a complex sinusoid, and is specified by time, frequency and scale. Since these functions are qualitatively and quantitatively very similar to ultrasonic echoes, it is appropriate to use the matching pursuit method to decompose ultrasonic signals to locate and identify discrete echoes embedded in complex signals. In this paper, a modified implementation of the matching pursuit algorithm is described, where the algorithm is specifically designed for an efficient decomposition of ultrasonic signals. The size of the wavelet dictionary is adaptively determined by the spectrum of the ultrasonic signal and is further controlled by additional physically meaningful restrictions. In each iterative step, the pursuit of the matching function begins with a coarse grid in the parameter space of the dictionary, and the highest energy matching function is found by interpolation of this coarse grid over the parameters. The algorithm is applied to a variety of measured ultrasonic signals. Signals consisting of multiple echoes are successfully decomposed, and the individual wavelets are well-matched to the original echoes. © 2007 American Institute of Physics [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
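A small matching-pursuit sketch in the spirit of the abstract above: Gaussian-windowed (Gabor-like) atoms on a coarse parameter grid are greedily fitted to a signal. The grid sizes and test signal are hypothetical, and the paper's adaptive dictionary sizing and interpolation refinement of the coarse-grid maximum are omitted.

```python
import numpy as np

def gabor_atom(n, fs, t0, f0, sigma):
    """Unit-energy Gaussian-windowed cosine (a real Gabor-like atom)."""
    t = np.arange(n) / fs
    g = np.exp(-0.5 * ((t - t0) / sigma) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))
    nrm = np.linalg.norm(g)
    return g / nrm if nrm > 0 else g

def matching_pursuit(x, fs, n_iter=3):
    """Greedily peel off the best-matching atom from the residual, n_iter times."""
    n = x.size
    residual = np.asarray(x, float).copy()
    t0s = np.linspace(0.0, n / fs, 32)            # candidate arrival times
    f0s = np.linspace(0.05, 0.45, 20) * fs        # candidate centre frequencies
    sigmas = np.array([4.0, 8.0, 16.0]) / fs      # candidate envelope widths
    echoes = []
    for _ in range(n_iter):
        best = None
        for t0 in t0s:
            for f0 in f0s:
                for s in sigmas:
                    g = gabor_atom(n, fs, t0, f0, s)
                    c = residual @ g              # correlation with the residual
                    if best is None or abs(c) > abs(best[0]):
                        best = (c, t0, f0, s, g)
        c, t0, f0, s, g = best
        residual -= c * g                         # subtract the matched component
        echoes.append({"coef": c, "t0": t0, "f0": f0, "sigma": s})
    return echoes, residual

# Hypothetical two-echo test signal.
fs = 100.0
t = np.arange(512) / fs
sig = (np.exp(-0.5 * ((t - 1.0) / 0.08) ** 2) * np.cos(2 * np.pi * 20.0 * (t - 1.0)) +
       0.6 * np.exp(-0.5 * ((t - 3.0) / 0.08) ** 2) * np.cos(2 * np.pi * 25.0 * (t - 3.0)))
echoes, res = matching_pursuit(sig, fs, n_iter=2)
```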
9. A New DEM Generalization Method Based on Watershed and Tree Structure.
- Author
-
Chen, Yonggang, Ma, Tianwu, Chen, Xiaoyin, Chen, Zhende, Yang, Chunju, Lin, Chenzhi, and Shan, Ligang
- Subjects
DIGITAL elevation models, FOREST canopies, WATERSHEDS, GEODATABASES, TOPOGRAPHY - Abstract
DEM generalization is the basis of multi-dimensional observation and of expressing and analyzing the terrain; it is also at the core of building multi-scale geographic databases. Thus, many researchers have studied both the theory and the methods of DEM generalization. This paper proposes a new method of terrain generalization that extracts feature points based on a tree-model construction which considers the nested relationship of watershed characteristics. The paper used the 5 m resolution DEM of the Jiuyuan gully watersheds in the Loess Plateau as the original data and extracted the feature points in every single watershed to reconstruct the DEM. Generalization from a 1:10000 DEM to a 1:50000 DEM was achieved by computing the best threshold, which is 0.06. In the last part of the paper, the height accuracy of the generalized DEM is analyzed by comparing it with some other classic methods, such as aggregation, resampling, and VIP, against the original 1:50000 DEM. The outcome shows that the method performs well. The method can choose the best threshold according to the target generalization scale to decide the density of the feature points in the watershed. Meanwhile, it preserves the skeleton of the terrain, which can meet the needs of different levels of generalization. Additionally, through overlapped contour comparison, elevation statistical parameters, and slope and aspect analysis, we found that the W8D algorithm performs well and effectively in terrain representation. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
10. Copula-Based Approach to Synthetic Population Generation.
- Author
-
Jeong, Byungduk, Lee, Wonjoon, Kim, Deok-Soo, and Shin, Hayong
- Subjects
SIMULATION methods & models, TRANSPORTATION planning, ITERATIVE methods (Mathematics), DEPENDENCE (Statistics), APPLIED mathematics - Abstract
Generating synthetic baseline populations is a fundamental step of agent-based modeling and simulation, which is growing fast in a wide range of socio-economic areas including transportation planning research. Traditionally, in many commercial and non-commercial microsimulation systems, the iterative proportional fitting (IPF) procedure has been used for creating the joint distribution of individuals when combining a reference joint distribution with target marginal distributions. Although IPF is simple, computationally efficient, and rigorously founded, it is unclear whether IPF sufficiently preserves the dependence structure of the reference joint table when fitting it to target margins. In this paper, a novel method is proposed based on the copula concept in order to provide an alternative approach to the problem that IPF resolves. The dependency characteristic measures were computed and the results from the proposed method and IPF were compared. In most test cases, the proposed method outperformed IPF in preserving the dependence structure of the reference joint distribution. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
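The abstract above uses iterative proportional fitting (IPF) as the baseline the copula approach is compared against. A minimal IPF sketch for a two-dimensional seed table follows; the seed counts, margin totals, and variable names are hypothetical.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-9, max_iter=1000):
    """Fit a 2-D seed (reference) table to target row/column margins by
    alternately rescaling rows and columns (classic iterative proportional fitting)."""
    table = np.asarray(seed, float).copy()
    for _ in range(max_iter):
        table *= (row_targets / table.sum(axis=1))[:, None]   # match row margins
        table *= (col_targets / table.sum(axis=0))[None, :]   # match column margins
        if (np.allclose(table.sum(axis=1), row_targets, atol=tol) and
                np.allclose(table.sum(axis=0), col_targets, atol=tol)):
            break
    return table

# Hypothetical reference joint distribution (e.g. household type x income band).
seed = np.array([[20.0, 30.0, 10.0],
                 [15.0, 25.0, 40.0]])
row_targets = np.array([100.0, 150.0])      # target totals per household type
col_targets = np.array([90.0, 80.0, 80.0])  # target totals per income band
fitted = ipf(seed, row_targets, col_targets)
```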
11. How Reed-Solomon Codes Can Improve Steganographic Schemes.
- Author
-
Fontaine, Caroline and Galand, Fabien
- Subjects
RESEARCH methodology, ELECTRIC distortion, ALGORITHMS, NUMERICAL analysis, REED-Solomon codes, ERROR-correcting codes, EMBEDDING theorems, INTERPOLATION, APPROXIMATION theory - Abstract
The use of syndrome coding in steganographic schemes tends to reduce distortion during embedding. The most complete model comes from the wet papers approach (J. Fridrich et al., 2005), which allows locking positions that cannot be modified. Recently, binary BCH codes have been investigated and seem to be good candidates in this context (D. Schönfeld and A. Winkler, 2006). Here, we show that Reed-Solomon codes are twice as good with respect to the number of locked positions; in fact, they are optimal. First, a simple and efficient scheme based on Lagrange interpolation is provided to achieve the optimal number of locked positions. We also consider a new and more general problem, mixing wet papers (locked positions) and simple syndrome coding (low number of changes) in order to face not only passive but also active wardens. Using list decoding techniques, we propose an efficient algorithm that enables an adaptive tradeoff between the number of locked positions and the number of changes. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
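The abstract above builds its embedding scheme on Lagrange interpolation, the mechanism underlying Reed-Solomon codes. A minimal sketch of Lagrange interpolation over a prime field follows; it recovers polynomial coefficients from (position, symbol) pairs. The prime, the points, and the function name are hypothetical, and the full wet-paper embedding and the GF(2^m) arithmetic of practical RS codes are not shown.

```python
def lagrange_interpolate(points, p):
    """Coefficients (lowest degree first) of the unique polynomial of degree
    < len(points) passing through the given (x, y) pairs over GF(p), p prime."""
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1                    # i-th Lagrange basis polynomial
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            # Multiply the basis polynomial by (x - xj).
            nxt = [0] * (len(basis) + 1)
            for k, b in enumerate(basis):
                nxt[k] = (nxt[k] - xj * b) % p
                nxt[k + 1] = (nxt[k + 1] + b) % p
            basis = nxt
            denom = (denom * (xi - xj)) % p
        scale = (yi * pow(denom, -1, p)) % p     # modular inverse needs Python 3.8+
        for k, b in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * b) % p
    return coeffs

# Hypothetical example over GF(257): recover a degree-3 polynomial from 4 symbols.
print(lagrange_interpolate([(1, 5), (2, 100), (3, 42), (7, 13)], 257))
```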
12. Causal Cubic Splines: Formulations, Interpolation Properties and Implementations.
- Author
-
Petrinović, Davor
- Subjects
SPLINES, INTERPOLATION, MIMO systems, STATISTICAL sampling, MATRICES (Mathematics), POLYNOMIALS, ALGORITHMS, NUMERICAL analysis, COMPUTATIONAL complexity - Abstract
The paper presents two formulations of causal cubic splines with equidistant knots. Both are based on a causal direct B-spline filter with a parallel or cascade implementation. In either implementation, the causal part of the impulse response is realized with an efficient infinite-impulse-response (IIR) structure, while only the anticausal part is approximated with a finite-order finite-impulse-response (FIR) filter. The resulting cubic coefficients are computed from the causal B-spline coefficients by using a third-order output FIR filter with either a single-input multiple-output (SIMO) or multiple-input multiple-output (MIMO) structure, depending on the chosen formulation of the cubic spline. The paper demonstrates and proves that the properties of the resulting causal splines are quite different depending on whether they are based on the more popular B-spline formulation or on the somewhat neglected tridiagonal matrix formulation. It is shown that the proposed low-complexity but accurate causal interpolators can be realized for many practical applications with a delay of only a few samples. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
13. Dynamic Phasor Estimates for Power System Oscillations.
- Author
-
de la O. Serna, José Antonio
- Subjects
PULSED power systems, FREQUENCIES of oscillating systems, BANDPASS filters, ALGORITHMS, NUMERICAL analysis, FLUCTUATIONS (Physics) - Abstract
Since its invention, the phasor has essentially been considered as a steady-state concept. Up to now, this assumption has shaped most of the algorithms for phasor estimation. This paper breaks that old paradigm by relaxing the static phasor concept to a dynamic one, i.e., the dynamic phasor, which is one complex time function with movement freedom. This paper presents the algorithm to approximate the dynamic phasor by a second-order Taylor polynomial and compares its phasor estimate to the traditional one. This approximation leads to the definition of the phasor state vector, which contains not only the estimate of the dynamic phasor but the estimates of its derivatives as well. These new estimates improve the accuracy of oscillation estimation by including new Taylor details into the interpolation process. Errors on the order of 10^-4 are achieved with this approximation in bandpass signals over observation intervals of two cycles. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
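A rough sketch of the idea in the abstract above: fit a low-order Taylor envelope to a short window of samples by least squares, yielding the phasor and its derivatives. This is a generic Taylor-style estimator under hypothetical signal parameters, not the paper's exact algorithm.

```python
import numpy as np

def taylor_phasor(x, fs, f0, order=2):
    """Least-squares estimate of a dynamic phasor p(t) = p0 + p1*t + ... + pK*t**K
    such that x[k] ~ Re{ p(t_k) * exp(j*2*pi*f0*t_k) }, with t centred on the window."""
    n = x.size
    t = (np.arange(n) - (n - 1) / 2.0) / fs          # time axis centred on the window
    w = 2 * np.pi * f0 * t
    cols = []
    for i in range(order + 1):                       # real and imaginary parts of each p_i
        cols.append((t ** i) * np.cos(w))
        cols.append(-(t ** i) * np.sin(w))
    A = np.column_stack(cols)
    theta, *_ = np.linalg.lstsq(A, x, rcond=None)
    return theta[0::2] + 1j * theta[1::2]            # [p0, p1, ..., pK]

# Two-cycle window of a 50 Hz signal with a slowly varying amplitude (hypothetical values).
fs, f0 = 2000.0, 50.0
t = np.arange(int(2 * fs / f0)) / fs
x = (1.0 + 0.05 * t) * np.cos(2 * np.pi * f0 * t + 0.3)
p = taylor_phasor(x, fs, f0)
amp, phase = np.abs(p[0]), np.angle(p[0])            # phasor estimate at the window centre
```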
14. Kernel CMAC With Improved Capability.
- Author
-
Horváth, Gábor and Szabó, Tamás
- Subjects
KERNEL functions, ALGORITHMS, NUMERICAL analysis, FOUNDATIONS of arithmetic, INTERPOLATION - Abstract
The cerebellar model articulation controller (CMAC) has some attractive features, namely fast learning capability and the possibility of efficient digital hardware implementation. Although CMAC was proposed many years ago, several open questions have been left even for today. The most important ones are about its modeling and generalization capabilities. The limits of its modeling capability were addressed in the literature, and recently, certain questions of its generalization property were also investigated. This paper deals with both the modeling and the generalization properties of CMAC. First, a new interpolation model is introduced. Then, a detailed analysis of the generalization error is given, and an analytical expression of this error for some special cases is presented. It is shown that this generalization error can be rather significant, and a simple regularized training algorithm to reduce this error is proposed. The results related to the modeling capability show that there are differences between the one-dimensional (1-D) and the multidimensional versions of CMAC. This paper discusses the reasons of this difference and suggests a new kernel-based interpretation of CMAC. The kernel interpretation gives a unified framework. Applying this approach, both the 1-D and the multidimensional CMACs can be constructed with similar modeling capability. Finally, this paper shows that the regularized training algorithm can be applied for the kernel interpretations too, which results in a network with significantly improved approximation capabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
15. CONVERGENCE ANALYSIS OF THE GENERALIZED EMPIRICAL INTERPOLATION METHOD.
- Author
-
MADAY, Y., MULA, O., and TURINICI, G.
- Subjects
INTERPOLATION, APPROXIMATION theory, FUNCTIONAL analysis, NUMERICAL analysis, ALGORITHMS - Abstract
Let F be a compact set of a Banach space X. This paper analyzes the "generalized empirical interpolation method," which, given a function f ∈ F, builds an interpolant J_n[f] in an n-dimensional subspace X_n ⊂ X from the knowledge of n outputs (σ_i(f))_{i=1}^n, where σ_i ∈ X' and X' is the dual space of X. The space X_n is built with a greedy algorithm that is adapted to F in the sense that it is generated by elements of F itself. The algorithm also selects the linear functionals (σ_i)_{i=1}^n from a dictionary Σ ⊂ X'. In this paper, we study the interpolation error max_{f∈F} ‖f − J_n[f]‖_X by comparing it with the best possible performance on an n-dimensional space, i.e., the Kolmogorov n-width of F in X, d_n(F,X). For polynomial or exponential decay rates of d_n(F,X), we prove that the interpolation error has the same behavior modulo the norm of the interpolation operator. Sharper results are obtained in the case where X is a Hilbert space. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
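A compact sketch of the (non-generalized) empirical interpolation method that the abstract above builds on, with point evaluations standing in for the dictionary of linear functionals Σ; the snapshot family and basis size are hypothetical.

```python
import numpy as np

def eim(snapshots, n_basis):
    """Greedy empirical interpolation with point-evaluation functionals.

    `snapshots`: array (n_points, n_snapshots) of functions sampled on a grid,
    playing the role of the compact set F. Returns the basis matrix Q
    (n_points, n_basis) and the selected interpolation point indices."""
    F = np.asarray(snapshots, float)
    j = np.argmax(np.max(np.abs(F), axis=0))        # snapshot with largest sup-norm
    q = F[:, j]
    pts = [int(np.argmax(np.abs(q)))]               # first interpolation point
    Q = [q / q[pts[0]]]
    for _ in range(1, n_basis):
        B = np.column_stack(Q)
        coef = np.linalg.solve(B[pts, :], F[pts, :])  # interpolate all snapshots
        resid = F - B @ coef
        j = np.argmax(np.max(np.abs(resid), axis=0))  # worst-approximated snapshot
        r = resid[:, j]
        p = int(np.argmax(np.abs(r)))                 # where its residual peaks
        pts.append(p)
        Q.append(r / r[p])
    return np.column_stack(Q), pts

def eim_interpolate(Q, pts, f):
    """Interpolant J_n[f]: the element of span(Q) matching f at the selected points."""
    c = np.linalg.solve(Q[pts, :], f[pts])
    return Q @ c

# Hypothetical parametric family exp(-mu*x) as the set F.
x = np.linspace(0.0, 1.0, 200)
snaps = np.array([np.exp(-mu * x) for mu in np.linspace(0.5, 5.0, 40)]).T
Q, pts = eim(snaps, n_basis=5)
f = np.exp(-2.3 * x)
err = np.max(np.abs(f - eim_interpolate(Q, pts, f)))
```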
16. Numerical study of turbulent channel flow with strong temperature gradients.
- Author
-
Bamdad Lessani and Miltiadis V. Papalexandris
- Subjects
TURBULENCE, CHANNELS (Hydraulic engineering), TEMPERATURE, EDDY flux, ALGORITHMS, NUMERICAL analysis, SIMULATION methods & models, INTERPOLATION, MATHEMATICAL models of fluid dynamics - Abstract
Purpose - This paper sets out to perform a detailed numerical study of turbulent channel flow with strong temperature gradients using large-eddy simulations. Design/methodology/approach - A recently developed time-accurate algorithm based on a predictor-corrector time integration scheme is used in the simulations. Spatial discretization is performed on a collocated grid system using a flux interpolation technique. This interpolation technique avoids the pressure odd-even decoupling problem that is typically encountered in collocated grids. The eddy viscosity is calculated with the extension of the dynamic Smagorinsky model to variable-density flows. Findings - The mean velocity profile at the cold side deviates from the classical isothermal logarithmic law of the wall. Nonetheless, at the hot side, there is a better agreement between the present results and the isothermal law of the wall. Further, the numerical study predicts that the turbulence kinetic energy near the cold wall is higher than near the hot one. In other words heat addition tends to laminarize the channel flow. The temperature fluctuations were also higher in the vicinity of the cold wall, even though the peak of these fluctuations occurs at the side of the hot wall. Practical implications - The findings of the paper have applications in the design and analysis of convective heat transfer equipment such as heat exchangers and cooling systems of nuclear reactors. Originality/value - The paper presents the first numerical results for non-isothermal turbulent channel flow with high wall-temperature ratios (up to 9). These findings can be of interest to scientists carrying out research in turbulent flows. [ABSTRACT FROM AUTHOR]
- Published
- 2008
17. One-for-All: Grouped Variation Network-Based Fractional Interpolation in Video Coding.
- Author
-
Liu, Jiaying, Xia, Sifeng, Yang, Wenhan, Li, Mading, and Liu, Dong
- Subjects
INTERPOLATION, VIDEO codecs, VIDEO compression, ALGORITHMS, NUMERICAL analysis - Abstract
Fractional interpolation is used to provide sub-pixel level references for motion compensation in the interprediction of video coding, which attempts to remove temporal redundancy in video sequences. Traditional handcrafted fractional interpolation filters face the challenge of modeling discontinuous regions in videos, while existing deep learning-based methods are either designed for a single quantization parameter (QP), only generating half-pixel samples, or need to train a model for each sub-pixel position. In this paper, we present a one-for-all fractional interpolation method based on a grouped variation convolutional neural network (GVCNN). Our method can deal with video frames coded using different QPs and is capable of generating all sub-pixel positions at one sub-pixel level. Also, by predicting variations between integer-position pixels and sub-pixels, our network offers more expressive power. Moreover, we perform specific measurements in training data generation to simulate practical situations in video coding, including blurring the down-sampled sub-pixel samples to avoid aliasing effects and coding integer pixels to simulate reconstruction errors. In addition, we analyze the impact of the size of blur kernels theoretically. Experimental results verify the efficiency of GVCNN. Compared with HEVC, our method achieves 2.2% in bit saving on average and up to 5.2% under low-delay P configuration. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
18. High capacity reversible data hiding with interpolation and adaptive embedding.
- Author
-
Wahed, Md. Abdul and Nyeem, Hussain
- Subjects
REVERSIBLE data hiding (Computer science), INTERPOLATION, DIGITAL image processing, BIG data, VERTEBRATES, LABOR economics - Abstract
A new Interpolation-based Reversible Data Hiding (IRDH) scheme is reported in this paper. For different applications of an IRDH scheme to digital image, video, multimedia, big-data and biological data, the embedding capacity requirement usually varies. Disregarding this important consideration, existing IRDH schemes do not offer a good embedding rate-distortion performance for payloads of varying size. To attain this varying capacity requirement with our proposed adaptive embedding, we formulate a capacity control parameter and propose to utilize it to determine a minimum set of embeddable bits in a pixel. Additionally, we use a logical (or bit-wise) correlation between the embeddable pixel and estimated versions of an embedded pixel. Thereby, while a wide range between the upper and lower limits of the embedding capacity is maintained, a given capacity requirement within that range is also attained with better embedded-image quality. Computational modeling of all new processes of the scheme is presented, and the performance of the scheme is evaluated with a set of popular test images. Experimental results show that, compared to prominent IRDH schemes, the proposed scheme achieves a significantly better embedding rate-distortion performance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
19. Implementation and assessment of the black body bias correction in quantitative neutron imaging.
- Author
-
Carminati, Chiara, Boillat, Pierre, Schmid, Florian, Vontobel, Peter, Hovind, Jan, Morgano, Manuel, Raventos, Marc, Siegwart, Muriel, Mannes, David, Gruenzweig, Christian, Trtik, Pavel, Lehmann, Eberhard, Strobl, Markus, and Kaestner, Anders
- Subjects
COMPUTED tomography, IMAGE reconstruction, SIMULATION methods & models, DATA analysis, ALGORITHMS - Abstract
We describe in this paper the experimental procedure, the data treatment and the quantification of the black body correction: an experimental approach to compensate for scattering and systematic biases in quantitative neutron imaging based on experimental data. The correction algorithm is based on two steps; estimation of the scattering component and correction using an enhanced normalization formula. The method incorporates correction terms into the image normalization procedure, which usually only includes open beam and dark current images (open beam correction). Our aim is to show its efficiency and reproducibility: we detail the data treatment procedures and quantitatively investigate the effect of the correction. Its implementation is included within the open source CT reconstruction software MuhRec. The performance of the proposed algorithm is demonstrated using simulated and experimental CT datasets acquired at the ICON and NEUTRA beamlines at the Paul Scherrer Institut. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
20. A real-time interpolation strategy for transition tool path with C2 and G2 continuity.
- Author
-
Wang, Hui, Wu, Jianhua, Liu, Chao, and Xiong, Zhenhua
- Subjects
INTERPOLATION, ALGORITHMS, MATHEMATICAL optimization, APPROXIMATION theory, NUMERICAL analysis - Abstract
A typical interpolation strategy for line segments consists of a transition scheme, a look-ahead ACC/DEC scheduling, and an interpolation algorithm. Of these three parts, the main computation occurs in the first and second. Some research work has been carried out to decrease the computation in the previous literature, but these methods occupy a lot of computing resources for the optimization process during the calculation of transition curve parameters and feed rates. Consequently, the computational efficiency of the interpolation strategy is greatly reduced. To deal with this issue, a real-time interpolation strategy is proposed in this paper. In the transition scheme, a Bézier curve is utilized to smooth the line segments. Based on the relationship among the approximation error, the approximation radius, and the transition curve, the curve can be directly generated when the approximation error is given. In the ACC/DEC scheduling, a 3-segment feed rate profile with jerk continuity is constructed. Meanwhile, a look-ahead planning based on the Backward Scanning and Forward Revision (BSFR) algorithm is utilized to eliminate redundant computation. Compared with Zhao's and Shi's strategies, the proposed strategy has the merits of C2 and G2 continuity for the tool path, jerk continuity for the tool movement, and distinguished real-time performance for interpolation. Experiments on a 3D pentagram and a 2D butterfly are carried out with the different strategies, and their results demonstrate that the interpolation efficiency can be greatly improved with the proposed strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
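A small sketch of the transition idea in the abstract above: replace the corner between two line segments by a cubic Bézier blend. The control-point placement here only guarantees tangent (G1) continuity for a caller-chosen transition length d; the paper's construction additionally tunes the control points from the prescribed approximation error to obtain C2/G2 continuity.

```python
import numpy as np

def bezier_corner_blend(p_prev, p_corner, p_next, d, samples=20):
    """Cubic Bézier that starts a distance d before the corner on the incoming
    segment and ends a distance d after it on the outgoing segment."""
    p_prev, p_corner, p_next = (np.asarray(p, float) for p in (p_prev, p_corner, p_next))
    u = p_corner - p_prev
    u /= np.linalg.norm(u)                              # incoming direction
    v = p_next - p_corner
    v /= np.linalg.norm(v)                              # outgoing direction
    b0, b3 = p_corner - d * u, p_corner + d * v         # transition start / end points
    b1, b2 = p_corner - d * u / 3.0, p_corner + d * v / 3.0   # inner control points
    t = np.linspace(0.0, 1.0, samples)[:, None]
    return ((1 - t) ** 3 * b0 + 3 * (1 - t) ** 2 * t * b1 +
            3 * (1 - t) * t ** 2 * b2 + t ** 3 * b3)

# Hypothetical 90-degree corner smoothed with a transition length of 2 units.
curve = bezier_corner_blend([0.0, 0.0], [10.0, 0.0], [10.0, 10.0], d=2.0)
```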
21. Approximate sparse spectral clustering based on local information maintenance for hyperspectral image classification.
- Author
-
Yan, Qing, Ding, Yun, Zhang, Jing-Jing, Xun, Li-Na, and Zheng, Chun-Hou
- Subjects
HYPERSPECTRAL imaging systems, COMPUTATIONAL complexity, CLUSTER analysis (Statistics), APPROXIMATION theory, APPLIED mathematics - Abstract
Sparse spectral clustering (SSC) has become one of the most popular clustering approaches in recent years. However, its high computational complexity prevents its application to large-scale datasets such as hyperspectral images (HSIs). In this paper, we propose two efficient approximate sparse spectral clustering methods for HSI clustering in which clustering performance is improved by utilizing local information among the data. First, we construct a smaller representative dataset on which sparse spectral clustering is performed. Then the labels of ground objects are extended to the whole dataset based on the local information, according to two extension strategies. In the first, local interpolation is utilized to improve the extension of the clustering result. In the other, the label extension is cast as a subspace embedding problem and is fulfilled by locally linear embedding (LLE). Several experiments on HSIs demonstrate that the proposed algorithms are effective for HSI clustering. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
22. Novel DCT-Based Image Up-Sampling Using Learning-Based Adaptive k-NN MMSE Estimation.
- Author
-
Hung, Kwok-Wai and Siu, Wan-Chi
- Subjects
LEARNING, ALGORITHMS, NUMERICAL analysis, DISCRETE cosine transforms, MATHEMATICAL transformations - Abstract
Image up-sampling in the discrete cosine transform (DCT) domain is a challenging problem because DCT coefficients are de-correlated, such that it is nontrivial to estimate directly high-frequency DCT coefficients from observed low-frequency DCT coefficients. In the literature, DCT-based up-sampling algorithms usually pad zeros as high-frequency DCT coefficients or estimate such coefficients with limited success mainly due to the nonadaptive estimator and restricted information from a single observed image. In this paper, we tackle the problem of estimating high-frequency DCT coefficients in the spatial domain by proposing a learning-based scheme using an adaptive k-nearest neighbor weighted minimum mean squares error (MMSE) estimation framework. Our proposed scheme makes use of the information from precomputed dictionaries to formulate an adaptive linear MMSE estimator for each DCT block. The scheme is able to estimate high-frequency DCT coefficients with very successful results. Experimental results show that the proposed up-sampling scheme produces the minimal ringing and blocking effects, and significantly better results compared with the state-of-the-art algorithms in terms of peak signal-to-noise ratio (more than 1 dB), structural similarity, and subjective quality measurements. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
23. Two-grid Algorithms for The Solution of 2D Semilinear Singularly-perturbed Convection-diffusion Equations Using an Exponential Finite Difference Scheme.
- Author
-
Vulkov, L. and Zadorin, A. I.
- Subjects
HEAT equation, ALGORITHMS, INTERPOLATION, LINEAR systems, NUMERICAL analysis - Abstract
In this paper we solve by an exponentially-fitted difference scheme a singularly perturbed nonlinear two-dimensional convection-diffusion equation. To find the solution of the nonlinear algebraic system we use both Newton and Picard methods. We propose a new version of the two-grid method with an exponential interpolation. In the first step the nonlinear differential problem is solved on a "coarse" grid of size H. In the second step, the problem is linearized around an appropriate interpolant of the solution computed in the first step, and the linear problem is then solved on a fine grid of size h (h < H^(2^m), m = 1, 2, ...), where m is the number of Newton (Picard) iterations for the difference scheme. The convergence of the discrete solutions is always ε-uniform. We count the number of arithmetical operations to illustrate the computational cost of the algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
24. Method for the Non-linear Identification of Aircraft Parameters by Testing Maneuvers.
- Author
-
Boguslavskiy, I. A.
- Subjects
EQUATIONS, STATISTICS, NUMERICAL analysis, APPROXIMATION theory, ALGORITHMS, INTERPOLATION, EQUATIONS of motion, LAGRANGE equations - Abstract
In this paper, we describe a variant solution to a common problem in applied statistics: we offer a method for estimating the parameters of a dynamic system from observed magnitudes that statistically depend on the sequence of unobserved states of the system. The method is realized by means of the multipolynomial approximations algorithm (the MPA algorithm). The method is validated by applying it to a problem of correcting finite sets of nominal experimental data on which nominal functions are constructed by means of interpolation from the current states of the system. Nominal experimental data are presented on a finite set of points covering the domains of definition of the nominal functions. The nominal equations of motion of the dynamical system are defined by the nominal functions. In this paper, the concrete example of the nominal equations of motion corresponds to the longitudinal motion of an aircraft similar to the F-16. The nominal functions are the calculated aerodynamic characteristics. The nominal experimental data are recorded by means of experiments in a wind tunnel. The outcomes of measurements of the parameters of motion of the aircraft act as inputs to the MPA algorithm on a segment of real flight. The MPA algorithm produces a 32×1 vector of estimates of parameters, which are additive corrections to the nominal experimental data. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
25. Robust and automatic motion-capture data recovery using soft skeleton constraints and model averaging.
- Author
-
Tits, Mickaël, Tilmanne, Joëlle, and Dutoit, Thierry
- Subjects
DATA recovery, ELECTRONIC data processing, SKELETON, SPORTS sciences, HUMAN-computer interaction - Abstract
Motion capture allows accurate recording of human motion, with applications in many fields, including entertainment, medicine, sports science and human computer interaction. A common difficulty with this technology is the occurrence of missing data, due to occlusions, or recording conditions. Various models have been proposed to estimate missing data. Some are based on interpolation, low-rank properties or inter-correlations. Others involve dataset matching or skeleton constraints. While the latter have the advantage of promoting a realistic motion estimation, they require prior knowledge of skeleton constraints, or the availability of a prerecorded dataset. In this article, we propose a probabilistic averaging method of several recovery models (referred to as Probabilistic Model Averaging (PMA) in this paper), based on the likelihoods of the distances between body points. This method has the advantage of being automatic, while allowing an efficient gap data recovery. To support and validate the proposed method, we use a set of four individual recovery models, based on linear/nonlinear regression in local coordinate systems. Finally, we propose two heuristic algorithms to enforce skeleton constraints in the reconstructed motion, which can be used on any individual recovery model. For validation purposes, random gaps were introduced into motion-capture sequences, and the effects of factors such as the number of simultaneous gaps, gap length and sequence duration were analyzed. Results show that the proposed probabilistic averaging method yields better recovery than (i) each of the four individual models and (ii) two recent state-of-the-art models, regardless of gap length, sequence duration and number of simultaneous gaps. Moreover, both of our heuristic skeleton-constraint algorithms significantly improve the recovery for 7 out of 8 tested motion-capture sequences (p < 0.05), for 10 simultaneous gaps of 5 seconds. The code is available for free download at: . [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
26. An improved algorithm for the estimation of the root mean square value as an optimal solution for commercial measurement equipment.
- Author
-
Bulat, Marina, Mirković, Stefan, Gazivoda, Nemanja, Pejić, Dragan, Urekar, Marjan, and Antić, Boris
- Subjects
ROOT-mean-squares, NUMERICAL integration, SIGNAL sampling, ALGORITHMS - Abstract
• Numerical methods used for the estimation of the RMS. • Simpson's 1/3 rule and Simpson's 3/8 rule modified for the purpose of general application. • The modified Simpson's rules do not necessitate more complex mathematical calculations than those used in the existing methods. • The modifications provide better measurement results for a lower ratio of the sampling frequency to the signal frequency. • The modified methods do not require the manufacturers of commercial measurement equipment to additionally invest in it. This paper demonstrates that direct changes in the algorithm for the estimation of the root mean square value of a voltage signal of an arbitrary waveform can lead to improved performance and lower measurement uncertainty of commercially available instruments without requiring any upgrade of their existing hardware. The research conducted and presented here is an original contribution to the development of estimation techniques and mathematical models for measurement-oriented purposes regardless of the number of samples in the given period, relying on mathematical calculations of the same complexity as in the methods already in use. The theoretical approach examines the problem of numerical integration, focusing on a modified Simpson's 1/3 rule and a modified Simpson's 3/8 rule used for the estimation of the root mean square value when a small number of samples per period is available. It highlights the limitations of Simpson's 1/3 rule and Simpson's 3/8 rule, and shows that the newly proposed algorithm is optimal with respect to measurement accuracy and precision even in cases when the ratio of the sampling frequency and the signal's fundamental frequency is low. All theoretical results have been validated experimentally. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
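The abstract above is about RMS estimation from few samples per period via Simpson-type quadrature. The sketch below shows the standard composite Simpson's 1/3 rule applied to the squared samples next to the plain mean-of-squares baseline; it does not implement the paper's modified rules, and the sampling setup is hypothetical.

```python
import numpy as np

def rms_mean(x):
    """Baseline: square root of the mean of the squared samples."""
    return np.sqrt(np.mean(np.square(x)))

def rms_simpson(x):
    """RMS over one period using the composite Simpson's 1/3 rule on x**2.

    `x` must contain the samples of one full period including both endpoints
    (x[0] and x[-1] lie one period apart), with an even number of intervals.
    """
    y = np.square(np.asarray(x, float))
    n = y.size - 1                        # number of intervals
    if n % 2:
        raise ValueError("Simpson's 1/3 rule needs an even number of intervals")
    integral = (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum()) / 3.0  # in units of h
    return np.sqrt(integral / n)          # divide by the period length n*h; h cancels

# Hypothetical example: 8 intervals per period of a pure sine (true RMS = 1/sqrt(2)).
t = np.linspace(0.0, 1.0, 9)
x = np.sin(2 * np.pi * t)
print(rms_mean(x[:-1]), rms_simpson(x))
```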
27. 3D craniofacial registration using thin-plate spline transform and cylindrical surface projection.
- Author
-
Chen, Yucong, Zhao, Junli, Deng, Qingqiong, and Duan, Fuqing
- Subjects
FACE perception, COGNITIVE psychology, CEREBROSPINAL fluid, BODY fluids, NERVOUS system - Abstract
Craniofacial registration is used to establish point-to-point correspondence in a unified coordinate system among human craniofacial models. It is the foundation of craniofacial reconstruction and other craniofacial statistical analysis research. In this paper, a non-rigid 3D craniofacial registration method using the thin-plate spline transform and cylindrical surface projection is proposed. First, gradient descent optimization is utilized to improve a cylindrical surface fitting (CSF) for the reference craniofacial model. Second, the thin-plate spline transform (TPST) is applied to deform a target craniofacial model to the reference model. Third, the cylindrical surface projection (CSP) is used to derive the point correspondence between the reference and deformed target models. To accelerate the procedure, the iterative closest point (ICP) algorithm is used to obtain a rough correspondence, which can provide a possible intersection area for the CSP. Finally, the inverse TPST is used to map the obtained corresponding points from the deformed target craniofacial model to the original model, which can be realized directly by the correspondence between the original target model and the deformed target model. Three types of registration, namely reflexive, involutive and transitive registration, are carried out to verify the effectiveness of the proposed craniofacial registration algorithm. Comparison with methods in the literature shows that the proposed method is more accurate. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
28. Interpolation-Based Modeling of MIMO LPV Systems.
- Author
-
De Caigny, Jan, Camino, Juan F., and Swevers, Jan
- Subjects
POLYNOMIALS, INTERPOLATION, MATHEMATICAL models, MATHEMATICAL optimization, NUMERICAL analysis, INVARIANTS (Mathematics), ALGORITHMS - Abstract
This paper presents State-space Model Interpolation of Local Estimates (SMILE), a technique to estimate linear parameter-varying (LPV) state-space models for multiple-input multiple-output (MIMO) systems whose dynamics depends on multiple time-varying parameters, called scheduling parameters. The SMILE technique is based on the interpolation of linear time-invariant models estimated for constant values of the scheduling parameters. As the linear time-invariant models can be either continuous- or discrete-time, both continuous- and discrete-time LPV models can be obtained. The underlying interpolation technique is formulated as a linear least-squares problem that can be efficiently solved. The proposed technique yields homogeneous polynomial LPV models in the multi-simplex that are numerically well-conditioned and therefore suitable for LPV control synthesis. The potential of the SMILE technique is demonstrated by computing a continuous-time interpolating LPV model for an analytic mass-spring-damper system and a discrete-time interpolating LPV model for a mechatronic XY-motion system based on experimental data. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
29. An Algorithm for Direct Multiplication of B-Splines.
- Author
-
Xianming Chen, Riesenfeld, Richard F., and Cohen, Elaine
- Subjects
ALGORITHMS, SPLINES, CALCULUS of tensors, INTERPOLATION, POLAR forms (Mathematics), NUMERICAL analysis - Abstract
B-spline multiplication, that is, finding the coefficients of the product B-spline of two given B-splines, is useful as an end result, in addition to being an important prerequisite component of many other symbolic computation operations on B-splines. Algorithms for B-spline multiplication standardly use indirect approaches such as nodal interpolation or computing the product of each set of polynomial pieces using various bases. The original direct approach is complicated. B-spline blossoming provides another direct approach that can be straightforwardly translated from mathematical equation to implementation; however, that algorithm does not scale well with the degree or dimension of the subject tensor product B-splines. To address the difficulties mentioned heretofore, we present the Sliding Windows Algorithm (SWA), a new blossoming-based algorithm for the multiplication of two B-spline curves, two B-spline surfaces, or any two general multivariate B-splines. Note to Practitioners--Geometric kernels in commercial CAD systems typically use B-splines to represent smooth curves and surfaces. Geometric inquiry (such as curvature) on such curves and surfaces requires the fundamental mathematical operation of multiplying two B-splines. There are a few existing algorithms in the CAD community for performing B-spline multiplication. All of them are indirect methods, in the sense of either using some sampling and interpolation strategy or leaving the domain of B-spline representation. The only direct multiplication, reported in the early 1990s, actually only solved the problem from a purely mathematical perspective; it is so inefficient as to be infeasible for any practical usage. The present paper re-examines this initial idea of direct B-spline multiplication, finds some simple characteristics of the apparently combinatorial problem, and designs a set of efficient algorithms, known as the Sliding Windows Algorithm (SWA). [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
30. BINARY TREE IMAGE CODING ALGORITHM BASED ON NON-SEPARABLE WAVELET TRANSFORM VIA LIFTING SCHEME.
- Author
-
WANG, CHENG-YOU, HOU, ZHENG-XIN, and YANG, AI-PING
- Subjects
WAVELETS (Mathematics), ALGORITHMS, IMAGE compression, NUMERICAL analysis, INTERPOLATION - Abstract
In recent years, image coding based on the wavelet transform has made rapid progress. In this paper, the quincunx lifting scheme in the wavelet transform is introduced, and all-phase interpolation filter banks that can be used in the lifting scheme for prediction and update are designed. Based on the basic idea of the set partitioning in hierarchical trees (SPIHT) algorithm, a binary tree image coding algorithm is proposed. Just like SPIHT, the encoding algorithm can be stopped at any compressed file size or left to run until the compressed file is a representation of a nearly lossless image. The experimental results on test images show that, compared with the SPIHT algorithm, the PSNRs of the proposed algorithm are superior by about 0.5 dB at the same bit rates and the subjective quality of the reconstructed images is also better. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
31. Wavelet-Based Approach to Character Skeleton.
- Author
-
Xinge You and Yuan Yan Tang
- Subjects
ALGORITHMS, IMAGE processing, INTERPOLATION, APPROXIMATION theory, NUMERICAL analysis - Abstract
The character skeleton plays a significant role in character recognition. The strokes of a character may consist of two regions, i.e., singular and regular regions. The intersections and junctions of the strokes belong to the singular region, while the straight and smooth parts of the strokes are categorized as the regular region. Therefore, a skeletonization method requires two different processes to treat the skeletons in these two different regions. All traditional skeletonization algorithms are based on the symmetry analysis technique. The major problems of these methods are as follows. 1) The computation of the primary skeleton in the regular region is indirect, so its implementation is sophisticated and costly. 2) The extracted skeleton cannot be exactly located on the central line of the stroke. 3) The captured skeleton in the singular region may be distorted by artifacts and branches. To overcome these problems, a novel scheme for extracting the character skeleton based on the wavelet transform is presented in this paper. This scheme consists of two main steps, namely: a) extraction of the primary skeleton in the regular region and b) amendment processing of the primary skeletons and their connection in the singular region. A direct technique is used in the first step, where a new wavelet-based symmetry analysis is developed for finding the central line of the stroke directly. A novel method called smooth interpolation is designed in the second step, where a smooth operation is applied to the primary skeleton, and, thereafter, the interpolation compensation technique is proposed to link the primary skeleton, so that the skeleton in the singular region can be produced. Experiments are conducted and positive results are achieved, which show that the proposed skeletonization scheme is applicable not only to binary images but also to gray-level images, and the skeleton is robust against noise and affine transforms. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
32. Kernel Regression for Image Processing and Reconstruction.
- Author
-
Takeda, Hiroyuki, Farsiu, Sina, and Milanfar, Peyman
- Subjects
KERNEL functions, GEOMETRIC function theory, REGRESSION analysis, IMAGE processing, ALGORITHMS, NUMERICAL analysis - Abstract
In this paper, we make contact with the field of non-parametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
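A minimal sketch of zeroth-order kernel regression for denoising, in the spirit of the framework in the abstract above: with a purely spatial Gaussian kernel it reduces to Gaussian filtering, and adding a photometric term gives a bilateral filter, one of the special cases the paper discusses. Bandwidths, window radius, and the test image are hypothetical.

```python
import numpy as np

def kernel_regression_denoise(img, h_spatial=1.5, h_photo=None, radius=3):
    """Zeroth-order (Nadaraya-Watson) kernel regression on a grayscale image.

    With a purely spatial Gaussian kernel this reduces to Gaussian filtering;
    adding a photometric term (h_photo) yields a bilateral filter."""
    img = np.asarray(img, float)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * h_spatial ** 2))
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = w_spatial
            if h_photo is not None:          # data-adaptive (bilateral) weighting
                w = w_spatial * np.exp(-(patch - img[i, j]) ** 2 / (2.0 * h_photo ** 2))
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

# Hypothetical test: denoise a noisy ramp image.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
smoothed = kernel_regression_denoise(noisy)                  # Gaussian filtering
edge_aware = kernel_regression_denoise(noisy, h_photo=0.2)   # bilateral filtering
```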
33. Multiresolution Elastic Registration of X-Ray Angiography Images Using Thin-Plate Spline.
- Author
-
Jian Yang, Yongtian Wang, Songyuan Tang, Shoujun Zhou, Yue Liu, and Wufan Chen
- Subjects
X-rays, RADIOSCOPIC diagnosis, DIGITAL angiography, ALGORITHMS, CLINICAL medicine, NUMERICAL analysis, MEDICAL radiography, INTERPOLATION, VISUAL perception - Abstract
X-ray angiography, a powerful technique for the visualization of blood vessels, has been widely used in clinical practice. However, due to unavoidable motion of patient, the subtraction images often suffer from misregistration artifacts. In order to improve the quality of subtraction images, registration algorithms are often employed before direct subtraction of mask and live images. A novel multiresolution elastic registration algorithm is proposed for the registration of the digital angiographic images using thin-plate spline (TPS). Our main contribution is a multiresolution search strategy specifically designed for the template matching method. In this strategy, the mask image is decomposed to coarse and fine sub-image blocks iteratively using the pyramid approach. Experimental results show that the multiresolution refinement strategy is well adapted to the template matching method, and can achieve better performance than comparable single step algorithms, because local minima can be overcome by the gradual coarse-to-fine approach that also ensures convergence. Registration results of four typical similarity measures, namely energy of histogram of differences (EHD), mutual information (MI), correlation and sum of squared differences (SSD), are compared. Three different interpolation methods, including nearest-neighbor, bilinear and bicubic, are also tested and compared. The overall conclusion is that the multiresolution refinement algorithm based on EHD combined with the bicubic interpolation method is very robust and effective for the registration of X-ray angiography images, which can obtain sub-pixel registration accuracy and is fully automatic. In addition, the objective measurement method developed in this paper on simulated data makes it possible to quantitatively evaluate the quality of the elastic registration results. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
34. Edge-Forming Methods for Color Image Zooming.
- Author
-
Youngjoon Cha and Seongjai Kim
- Subjects
IMAGE processing, ALGORITHMS, COMPUTER graphics, INTERPOLATION, NUMERICAL analysis, DIFFERENTIAL equations - Abstract
This paper introduces edge-forming schemes for the zooming of color images by general magnification factors. In order to remove or reduce artifacts arising in image interpolation, such as image blur and the checkerboard effect, an edge-forming method is suggested to be applied as a postprocess to standard interpolation methods. The method is based on nonconvex nonlinear partial differential equations. The equations are carefully discretized, incorporating numerical schemes of anisotropic diffusion, to be able to form reliable edges satisfactorily. The alternating direction implicit (ADI) method is employed for an efficient simulation of the model. It has been numerically verified that the resulting algorithm can form clear edges in 2 to 3 ADI iterations. Various results are given to show the effectiveness and reliability of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
35. An Interpolation Technique Based on Grey Relational Theory.
- Author
-
Ke Hongfa and Chen Yongguang
- Subjects
NUMERICAL analysis, INTERPOLATION, INFORMATION theory, ALGORITHMS, MATHEMATICAL analysis - Abstract
Based on grey relational theory, a technique to do the interpolation for indexed sequence is proposed in this paper. The simulation results show that the proposed interpolation technique is feasible and effective. [ABSTRACT FROM AUTHOR]
- Published
- 2006
36. Adaptive Reconstruction of Intermediate Views From Stereoscopic Images.
- Author
-
Liang Zhang, Demin Wang, and Vincent, André
- Subjects
STEREOSCOPIC views, ALGORITHMS, NUMERICAL analysis, VIEWS, ALGEBRA, INTERPOLATION - Abstract
This paper deals with disparity estimation and the reconstruction of intermediate views from stereoscopic images. Using block-wise maximum-likelihood (ML) disparity estimation, it was found that the Laplacian model outperformed the Cauchy and Gaussian models in terms of disparity compensation errors and the number of correspondence matches. The disparity values in occluded regions were then determined using both object-based and reliability-based interpolation. Finally, an adaptive technique was used to interpolate the intermediate views. One distinguishing characteristic of this algorithm is that the left and right-eye images were projected onto the plane of the intermediate view to be reconstructed. This resulted in two projected images. The intermediate view was created using a weighted average of these two projected images with the weights based on the quality of the corresponding areas of the projected images. Subjective examination of the reconstructed images indicate that they have high image quality and good stable depth when viewed stereoscopically. An objective evaluation with the test image sequence "Flower Garden" shows that the proposed algorithm can achieve a peak signal-to-noise ratio gain of around 1 dB, when compared to a reference algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
37. Joint Design of Interpolation Filters and Decision Feedback Equalizers.
- Author
-
Mu-Huo Cheng and Tsai-Sheng Kao
- Subjects
ALGORITHMS, ITERATIVE methods (Mathematics), NUMERICAL analysis, INTERPOLATION, APPROXIMATION theory, STOCHASTIC convergence - Abstract
This paper presents an algorithm to jointly design interpolation filters and decision feedback equalizers in the minimum mean-square error sense, so that the joint design freedom neglected in conventional designs is exploited to improve receiver performance. The algorithm comprises an iteration of two alternating simple quadratic minimization operations and ensures convergence. A simulation example for the raised-cosine channel demonstrates that an improvement over the conventional design can be achieved via this approach. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
38. Black-Box Modeling of Passive Systems by Rational Function Approximation.
- Author
-
Gao, Rong, Mekonnen, Yidnekachew S., Beyene, Wendemagegnehu T., and Schutt-Ainé, Jose E.
- Subjects
INTERPOLATION, CHEBYSHEV approximation, ALGORITHMS, POLYNOMIALS, APPROXIMATION theory, NUMERICAL analysis - Abstract
In this paper, a rational interpolation approach is used to approximate the transfer function of passive systems characterized by sampled data. Orthogonal polynomials are used to improve the numerical stability of the ill-conditioned Vandermonde-like interpolation matrix associated with the ordinary power series. First, the poles of the system are obtained by efficiently and accurately transforming the coefficients of the orthogonal polynomials to the ordinary power series using Clenshaw's recurrence algorithm. Then, the residues are solved in real or in complex conjugate pairs to ensure a physically realizable system. Finally, the passivity of the system is enforced by applying certain constraints on the poles and residues of the system. The performances of the three most common orthogonal polynomials, Legendre and Chebyshev of the first and second kinds, are also compared to that of the power series. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
39. Adaptive Homogeneity-Directed Demosaicing Algorithm.
- Author
-
Hirakawa, Keigo and Parks, Thomas W.
- Subjects
HOMOGENEITY, ALGORITHMS, ALGEBRA, MATHEMATICS, NUMERICAL analysis - Abstract
A cost-effective digital camera uses a single-image sensor, applying alternating patterns of red, green, and blue color filters to each pixel location. A way to reconstruct a full three-color representation of color images by estimating the missing pixel components in each color plane is called a demosaicing algorithm. This paper presents three inherent problems often associated with demosaicing algorithms that incorporate two-dimensional (2-D) directional interpolation: misguidance color artifacts, interpolation color artifacts, and aliasing. The level of misguidance color artifacts present in two images can be compared using metric neighborhood modeling. The proposed demosaicing algorithm estimates missing pixels by interpolating in the direction with fewer color artifacts. The aliasing problem is addressed by applying filterbank techniques to 2-D directional interpolation. The interpolation artifacts are reduced using a nonlinear iterative procedure. Experimental results using digital images confirm the effectiveness of this approach. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
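A minimal sketch of the kind of 2-D directional interpolation that entry 39 above builds on: the missing green value at a red/blue site of a Bayer mosaic is interpolated along the direction with the smaller gradient. The homogeneity metric and filterbank stages of the actual algorithm are not reproduced; the gradient rule below is a common heuristic used only for illustration.

```python
import numpy as np

def green_directional(mosaic, i, j):
    """Estimate the missing green value at a red/blue site (i, j) of a Bayer
    mosaic by interpolating along the direction with the smaller gradient.
    Assumes 2 <= i, j < shape-2 so that all required neighbours exist.
    """
    m = mosaic.astype(float)
    # Candidate interpolations from the green neighbours.
    g_h = 0.5 * (m[i, j - 1] + m[i, j + 1])
    g_v = 0.5 * (m[i - 1, j] + m[i + 1, j])
    # Directional gradients: green differences plus same-color curvature.
    grad_h = abs(m[i, j - 1] - m[i, j + 1]) + abs(2 * m[i, j] - m[i, j - 2] - m[i, j + 2])
    grad_v = abs(m[i - 1, j] - m[i + 1, j]) + abs(2 * m[i, j] - m[i - 2, j] - m[i + 2, j])
    return g_h if grad_h <= grad_v else g_v

# Toy mosaic with a strong vertical edge between columns 2 and 3.
mosaic = np.tile(np.array([[10.0, 10.0, 10.0, 200.0, 200.0, 200.0]]), (6, 1))
print(green_directional(mosaic, 3, 3))   # 200.0: interpolates along the edge, not across it
```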
40. New Edge Dependent Deinterlacing Algorithm Based on Horizontal Edge Pattern.
- Author
-
Park, Min Kyu, Kang, Moon Gi, Nam, Kichul, and Oh, Sang Gun
- Subjects
DIGITAL image processing ,DIGITAL electronics ,HIGH definition television ,IMAGE quality of television cameras ,ALGORITHMS ,NUMERICAL analysis ,INTERPOLATION - Abstract
In this paper, we propose a new deinterlacing algorithm: an edge dependent interpolation (EDI) algorithm based on a horizontal edge pattern. A conventional EDI algorithm generally gives visually better performance than other deinterlacing algorithms that use a single field; however, it produces unpleasant results when the edge direction is estimated incorrectly. In order to detect the edge direction more accurately, we use not only simple differences but also edge patterns. Experimental results indicate that the proposed algorithm outperforms conventional approaches with respect to both objective and subjective criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
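For context on entry 40 above, a minimal sketch of plain edge-based line averaging, the kind of conventional edge-dependent interpolation the paper improves on with horizontal edge patterns. The three-direction comparison below is a textbook baseline, not the proposed method.

```python
import numpy as np

def ela_interpolate(above, below):
    """Edge-based line averaging for one missing scan line.

    above, below : the existing lines directly above and below (1-D arrays)
    For each pixel, three candidate directions are compared and the average is
    taken along the direction with the smallest intensity difference.
    """
    above = above.astype(float)
    below = below.astype(float)
    n = above.size
    out = np.empty(n)
    for j in range(n):
        jl, jr = max(j - 1, 0), min(j + 1, n - 1)
        candidates = [
            (abs(above[jl] - below[jr]), 0.5 * (above[jl] + below[jr])),  # above-left / below-right
            (abs(above[j] - below[j]),   0.5 * (above[j] + below[j])),    # vertical
            (abs(above[jr] - below[jl]), 0.5 * (above[jr] + below[jl])),  # above-right / below-left
        ]
        out[j] = min(candidates, key=lambda c: c[0])[1]
    return out

# A diagonal edge: directional averaging keeps it sharp where vertical averaging would blur it.
above = np.array([0, 0, 0, 0, 255, 255])
below = np.array([0, 0, 255, 255, 255, 255])
print(ela_interpolate(above, below))   # [0, 0, 0, 255, 255, 255]
```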
41. Numerical analysis of local interpolation error for 2D-MLFMA.
- Author
-
Shinichiro Ohnuki and Weng Cho Chew
- Subjects
INTERPOLATION ,ALGORITHMS ,ERROR analysis in mathematics ,NUMERICAL analysis ,MATHEMATICAL statistics - Abstract
This paper discusses error control of local interpolation for the 2D MLFMA and proposes a way to select proper parameters in terms of both numerical accuracy and computational cost. When the conditions derived in this paper are satisfied, the error can be controlled at the same level as global interpolation while the computational cost remains lower than that of the global approach. © 2002 Wiley Periodicals, Inc. Microwave Opt Technol Lett 36: 8–12, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.10655 [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
42. FUX-Sim: Implementation of a fast universal simulation/reconstruction framework for X-ray systems.
- Author
-
Abella, Monica, Serrano, Estefania, Garcia Blas, Javier, García, Ines, De Molina, Claudia, Carretero, Jesus, and Desco, Manuel
- Subjects
X-ray detection ,RADIOLOGY ,MEDICAL imaging systems ,THREE-dimensional imaging ,ALGORITHM software ,COMPUTER simulation equipment ,CUDA (Computer architecture) ,OPENCL (Computer program language) ,EQUIPMENT & supplies - Abstract
The availability of digital X-ray detectors, together with advances in reconstruction algorithms, creates an opportunity for bringing 3D capabilities to conventional radiology systems. The downside is that reconstruction algorithms for non-standard acquisition protocols are generally based on iterative approaches that involve a high computational burden. The development of new flexible X-ray systems could benefit from computer simulations, which may enable performance to be checked before expensive real systems are implemented. The development of simulation/reconstruction algorithms in this context poses three main difficulties. First, the algorithms deal with large data volumes and are computationally expensive, thus leading to the need for hardware and software optimizations. Second, these optimizations are limited by the high flexibility required to explore new scanning geometries, including fully configurable positioning of source and detector elements. And third, the evolution of the various hardware setups increases the effort required for maintaining and adapting the implementations to current and future programming models. Previous works lack support for completely flexible geometries and/or compatibility with multiple programming models and platforms. In this paper, we present FUX-Sim, a novel X-ray simulation/reconstruction framework that was designed to be flexible and fast. Optimized implementation for different families of GPUs (CUDA and OpenCL) and multi-core CPUs was achieved thanks to a modularized approach based on a layered architecture and parallel implementation of the algorithms for both architectures. A detailed performance evaluation demonstrates that for different system configurations and hardware platforms, FUX-Sim maximizes performance with the CUDA programming model (5 times faster than other state-of-the-art implementations). Furthermore, the CPU and OpenCL programming models allow FUX-Sim to be executed over a wide range of hardware platforms. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
43. A fast and robust interpolation filter for airborne lidar point clouds.
- Author
-
Chen, Chuanfa, Li, Yanyan, Zhao, Na, Guo, Jinyun, and Liu, Guolin
- Subjects
INTERPOLATION ,COSINE transforms ,COHEN'S kappa coefficient (Statistics) ,PERFORMANCE of optical radar - Abstract
A fast and robust interpolation filter based on a finite-difference thin plate spline (TPS) is proposed in this paper. The proposed method employs the discrete cosine transform to efficiently solve the linear system of TPS equations for gridded data, and uses a pre-defined weight function of the fitting residuals to reduce the effect of outliers and misclassified non-ground points on the accuracy of the reference ground surface. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were employed to compare the performance of the proposed method with that of the multi-resolution hierarchical classification method (MHC). Results indicate that, with respect to kappa coefficient and total error, the proposed method is on average more accurate than MHC: specifically, it is 1.03 and 1.32 times as accurate as MHC in terms of kappa coefficient and total error, respectively. More importantly, the proposed method is on average more than 8 times faster than MHC. In comparison with some recently developed methods, the proposed algorithm also achieves good performance. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
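The residual-weighting idea in entry 43 above can be illustrated with a much simpler stand-in for the reference ground surface: an iteratively reweighted quadratic surface fit in which points lying well above the current surface are down-weighted. The DCT-based TPS solver of the paper is not reproduced; the weight function, thresholds, and synthetic data below are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_surface(x, y, z, w):
    """Weighted least-squares fit of a quadratic surface z ~ f(x, y)."""
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
    return A @ coef

# Synthetic "lidar" data: gently sloping ground plus a cluster of elevated non-ground points.
n = 500
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)
z = 0.05 * x + 0.02 * y + rng.normal(0, 0.1, n)
z[:50] += 8.0                       # e.g. vegetation/buildings above the ground

w = np.ones(n)
for _ in range(5):
    ref = fit_surface(x, y, z, w)
    r = z - ref                      # residuals w.r.t. the reference surface
    s = 1.4826 * np.median(np.abs(r - np.median(r)))    # robust scale estimate (MAD)
    # Points far *above* the surface are down-weighted; points at or below it are kept.
    w = np.where(r > 2.0 * s, np.exp(-((r / (2.0 * s)) ** 2)), 1.0)

ground = w > 0.5
print("points kept as ground:", ground.sum(), "of", n)   # roughly the 450 true ground points
```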
44. Solving global shallow water equations on heterogeneous supercomputers.
- Author
-
Fu, Haohuan, Gan, Lin, Yang, Chao, Xue, Wei, Wang, Lanning, Wang, Xinliang, Huang, Xiaomeng, and Yang, Guangwen
- Subjects
SHALLOW-water equations ,SUPERCOMPUTERS ,ATMOSPHERIC models ,FIELD programmable gate arrays ,GRAPHICS processing units - Abstract
The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, the inclusion of more component models, more complicated physics schemes, and larger ensembles. As recent improvements in computing power mostly come from an increasing number of nodes per system and the integration of heterogeneous accelerators, scaling the computing problems onto more nodes and various kinds of accelerators has become a challenge for model development. This paper describes our efforts to develop a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Array) cards. We propose a generalized partition scheme for the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations of both computing and memory access patterns, we achieve a speedup of around 8 to 20 times when comparing one hybrid GPU or MIC node with one 12-core CPU node. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement in performance. On heterogeneous supercomputers such as Tianhe-1A and Tianhe-2, our framework achieves nearly ideal linear scaling efficiency, with sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation of the programming paradigms of the various accelerator architectures (GPU, MIC, FPGA) for global atmospheric simulation, to form a picture of both the potential performance benefits and the programming effort involved. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
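The balanced-partition idea in entry 44 above boils down to splitting the domain in proportion to measured per-device throughput so that CPU and accelerator finish at roughly the same time. A minimal sketch with hypothetical throughput numbers (not taken from the paper):

```python
def balanced_split(n_columns, throughput_cpu, throughput_acc):
    """Split a grid of n_columns between CPU and accelerator so that
    (columns / throughput) is roughly equal on both sides.
    Throughputs are measured columns-per-second (illustrative numbers below).
    """
    frac_acc = throughput_acc / (throughput_cpu + throughput_acc)
    cols_acc = round(n_columns * frac_acc)
    cols_cpu = n_columns - cols_acc
    return cols_cpu, cols_acc

# Hypothetical benchmark: the accelerator processes columns ~9x faster than the CPU part.
cpu_cols, acc_cols = balanced_split(n_columns=2048, throughput_cpu=1.0, throughput_acc=9.0)
print(cpu_cols, acc_cols)                # 205 1843
print(cpu_cols / 1.0, acc_cols / 9.0)    # similar per-device completion times
```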
45. A Novel Multi-Receiver Signcryption Scheme with Complete Anonymity.
- Author
-
Pang, Liaojun, Yan, Xuxia, Zhao, Huiyang, Hu, Yufei, and Li, Huixian
- Subjects
LAGRANGE equations ,POLYNOMIALS ,APPLIED mathematics ,COMPUTER simulation ,NUMERICAL analysis - Abstract
Anonymity has become more and more important to multi-receiver schemes and has recently been considered by many researchers. To protect receiver anonymity, the first multi-receiver scheme based on the Lagrange interpolating polynomial was proposed in 2010. To ensure sender anonymity, the concept of the ring signature was proposed in 2005; this scheme was later shown to have weaknesses, and a completely anonymous multi-receiver signcryption scheme was subsequently proposed, in which sender anonymity is achieved by improving the ring signature and receiver anonymity is again achieved with the Lagrange interpolating polynomial. Unfortunately, the Lagrange interpolation method was shown to fail to protect the anonymity of receivers, because each authorized receiver can judge whether anyone else is authorized. The completely anonymous multi-receiver signcryption scheme mentioned above therefore protects only sender anonymity. In this paper, we propose a new completely anonymous multi-receiver signcryption scheme in which a new polynomial technique replaces the Lagrange interpolating polynomial; it mixes the identity information of the receivers into a single ciphertext element and prevents authorized receivers from identifying the others. Along with receiver anonymity, the proposed scheme preserves sender anonymity, and decryption fairness and public verification are also provided. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
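Entry 45 above revolves around the Lagrange interpolating polynomial used to bind receiver identities to ciphertext elements. A minimal sketch of Lagrange interpolation over a prime field is given below; the actual signcryption construction, the hash functions, and the replacement polynomial technique proposed in the paper are not reproduced, and the prime and points are illustrative.

```python
def lagrange_eval(points, x, p):
    """Evaluate at x the unique polynomial of degree < len(points) passing
    through the given (xi, yi) pairs, with all arithmetic modulo the prime p.
    """
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

p = 2**61 - 1                            # a Mersenne prime, used here only for illustration
points = [(3, 17), (5, 42), (11, 7)]     # e.g. hashed receiver identities -> shares
for xi, yi in points:
    assert lagrange_eval(points, xi, p) == yi   # interpolation reproduces each point
print(lagrange_eval(points, 0, p))              # value encoded at x = 0
```

Because every authorized receiver can evaluate the same interpolating polynomial, such a receiver can test whether a given identity lies on it, which is precisely the anonymity leak the entry describes.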
46. Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater.
- Author
-
Zahid, Erum, Hussain, Ijaz, Spöck, Gunter, Faisal, Muhammad, Shabbir, Javid, M. AbdEl-Salam, Nasser, and Hussain, Tajammal
- Subjects
GROUNDWATER analysis ,PHYSIOLOGICAL effects of sodium ,BLOOD pressure ,HYPERTENSION ,PREDICTION theory ,MATHEMATICAL optimization - Abstract
Sodium is an integral part of water, and an excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled, and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram models (exponential, Gaussian, spherical, and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides the minimum mean universal kriging variance for both adding and deleting locations during sampling design. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
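As background for entry 46 above, a minimal ordinary-kriging sketch at a single prediction location with an assumed exponential variogram. The paper itself uses universal and Bayesian universal kriging with likelihood-based parameter estimation, which this toy example does not attempt; coordinates, concentrations, and variogram parameters below are hypothetical.

```python
import numpy as np

def exp_variogram(h, nugget=0.1, sill=1.0, range_param=25.0):
    """Exponential variogram gamma(h); parameters are illustrative, not fitted."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / range_param))

def ordinary_kriging(coords, values, target):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = exp_variogram(d)
    np.fill_diagonal(gamma, 0.0)         # gamma(0) = 0 by definition
    n = len(values)
    # Kriging system with a Lagrange multiplier enforcing sum(weights) = 1.
    A = np.block([[gamma, np.ones((n, 1))],
                  [np.ones((1, n)), np.zeros((1, 1))]])
    g0 = exp_variogram(np.linalg.norm(coords - target, axis=1))
    b = np.append(g0, 1.0)
    sol = np.linalg.solve(A, b)
    weights, mu = sol[:n], sol[n]
    prediction = weights @ values
    variance = weights @ g0 + mu          # kriging (prediction error) variance
    return prediction, variance

coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [12.0, 12.0]])
sodium = np.array([40.0, 55.0, 35.0, 60.0])          # hypothetical concentrations (mg/L)
pred, var = ordinary_kriging(coords, sodium, np.array([5.0, 5.0]))
print(pred, var)
```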
47. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models.
- Author
-
Jeronymo, Daniel Cavalcanti and Coelho, Antonio Augusto Rodrigues
- Subjects
FUZZY logic ,HAMMERSTEIN equations ,HYPERCUBES ,INTERPOLATION ,MIMO systems ,KERNEL functions - Abstract
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in the control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unit hypercube space enables multivariable interpolation in N dimensions. Because the membership functions act as interpolation kernels, the choice of membership function determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline, or Lanczos interpolator, among others. The proposed interpolator is presented as a solution to the problem of modeling static nonlinearities, since it is capable of modeling both a function and its inverse. Three case studies from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained with respect to performance metrics such as set-point tracking, control variation, and robustness. The results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for the implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
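The kernel view of interpolation in entry 47 above can be illustrated in its simplest case: triangular (hat) membership functions conjoined by a product over dimensions reduce to ordinary multilinear interpolation on the unit hypercube. The general FLHI kernels (cubic, spline, Lanczos) and the Takagi-Sugeno machinery are not shown; the sketch below only makes the membership-as-kernel idea concrete.

```python
import itertools
import numpy as np

def multilinear_unit_cube(corner_values, t):
    """Interpolate inside the unit hypercube [0, 1]^N.

    corner_values : dict mapping corner tuples such as (0, 1, 0) to scalar values
    t             : query point, sequence of N coordinates in [0, 1]
    Triangular memberships mu_0(t) = 1 - t and mu_1(t) = t are combined by a
    product over the dimensions; this is exactly multilinear interpolation.
    """
    t = np.asarray(t, dtype=float)
    result = 0.0
    for corner in itertools.product((0, 1), repeat=len(t)):
        membership = np.prod(np.where(np.array(corner) == 1, t, 1.0 - t))
        result += membership * corner_values[corner]
    return result

# 2-D example: bilinear interpolation from the four corners of the unit square.
values = {(0, 0): 1.0, (1, 0): 3.0, (0, 1): 2.0, (1, 1): 4.0}
print(multilinear_unit_cube(values, [0.5, 0.5]))   # 2.5
print(multilinear_unit_cube(values, [1.0, 0.0]))   # 3.0 (reproduces a corner exactly)
```

Swapping the hat functions for other kernels changes the interpolation characteristics while the conjunction-over-dimensions structure stays the same, which is the design choice the entry emphasizes.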
48. An improved joint dictionary training method for single image super resolution.
- Author
-
Zeng, Lei, Xiaofeng Li, and Xu, Jin
- Subjects
IMAGE processing ,NUMERICAL analysis ,ALGORITHMS ,IMAGE quality analysis ,INTERPOLATION - Abstract
Purpose - The purpose of this paper is to introduce an improved method for the joint training of low- and high-resolution dictionaries for single-image super resolution and to evaluate the proposed method by simulations. Design/methodology/approach - Sparse representations of low-resolution image patches are used to reconstruct high-resolution image patches with the high-resolution dictionary. The scheme weights the dictionaries in the high- and low-resolution spaces with different factors during training, so that better reconstructed images can be achieved by placing more emphasis on the high-resolution space. Findings - An improved joint training algorithm based on K-SVD is developed with flexible weight factors on the dictionaries in the high- and low-resolution spaces. Experimental results show that the proposed scheme outperforms classic bicubic interpolation and the neighbor-embedding learning-based method. Originality/value - Using flexible weight factors in the joint training of the dictionaries for super resolution yields better reconstruction results. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
49. Particle Swarm Algorithm for the Shortest Cubic Spline Interpolation.
- Author
-
Shang Gao, Zaiyue Zhang, and Cungen Cao
- Subjects
NUMERICAL analysis ,ALGORITHMS ,CUBIC equations ,INTERPOLATION ,COMPUTER graphics - Abstract
Spline technology has applications in CAD, CAM, and computer graphics systems. Based on an analysis of cubic spline interpolation, the problem of finding the shortest cubic spline interpolant is discussed in this paper, and a particle swarm algorithm for this problem is presented. Finally, a numerical example is given; its results show the efficacy of the proposed model and algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2009
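For entry 49 above, the quantity being minimized is the length of a cubic spline interpolant. A sketch of that arc-length objective using SciPy is given below; the abstract does not spell out which parameters the particle swarm searches over, so only the objective evaluation is shown, with natural boundary conditions and illustrative knots assumed.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_length(x_knots, y_knots, samples=2000):
    """Arc length of the natural cubic spline interpolating (x_knots, y_knots),
    approximated by integrating sqrt(1 + f'(x)^2) on a dense grid.
    """
    spline = CubicSpline(x_knots, y_knots, bc_type="natural")
    xs = np.linspace(x_knots[0], x_knots[-1], samples)
    slope = spline(xs, 1)                      # first derivative of the spline
    return np.trapz(np.sqrt(1.0 + slope**2), xs)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 0.5, 2.0, 1.5])
print(spline_length(x, y))   # objective value a particle-swarm search would minimize
```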
50. Frequency-Adaptive Network Modeling for Integrative Simulation of Natural and Envelope Waveforms in Power Systems and Circuits.
- Author
-
Strunz, Kai, Shintaku, Rachel, and Feng Gao
- Subjects
ALGORITHMS ,HILBERT transform ,NUMERICAL integration ,DEFINITE integrals ,INTERPOLATION ,NUMERICAL analysis - Abstract
Algorithms for the simulation of transients in electric power systems and circuits can be classified into two major categories. For the simulation of diverse transients in ac and dc networks, the algorithms process instantaneous signals in the time domain to track natural waveforms as observed in reality. For the simulation of lower frequency transients that modulate ac carriers in ac networks, algorithms that process phasor signals to track envelope waveforms are popular. The methodology proposed in this paper uses analytic signals to bridge the merits of instantaneous and phasor signals and enable the efficient simulation of both natural and envelope waveforms as well as the smooth transition between the two. The key enabling method, referred to as frequency-adaptive simulation of transients (FAST), is distinguished by the introduction of the shift frequency as a simulation parameter in addition to the time-step size. This distinguishes the methodology from the known methods of power system and circuit simulation, which only use the setting of the time-step size to adapt the simulation process. By setting the shift frequency to a nonzero value, the Fourier spectra of the analytic signals are shifted and adapted according to the waveform type of interest. This adds value because different types of transients with and without an ac carrier can be simulated efficiently and accurately within one and the same simulation run. To provide compatibility with existing tools, the numerical integration is formulated to model the network branches such that nodal analysis can readily be used to construct the overall network model. Calculations of the accuracy as well as test studies that cover network energization and de-energization, angle modulation, and amplitude modulation substantiate the claims made and demonstrate the application of the methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
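The shift-frequency idea in entry 50 above can be illustrated at the signal level: form the analytic signal of an amplitude-modulated carrier and shift its spectrum by the carrier frequency so that only the slowly varying envelope remains, which can then be tracked with large time steps. This is a signal-processing sketch only, not the paper's network-branch formulation; the sample rate, carrier frequency, and modulation are arbitrary choices.

```python
import numpy as np
from scipy.signal import hilbert

fs, f_carrier = 10_000.0, 60.0                 # sample rate and ac carrier frequency
t = np.arange(0, 1.0, 1.0 / fs)
envelope = 1.0 + 0.3 * np.sin(2 * np.pi * 2.0 * t)     # slow amplitude modulation
v = envelope * np.cos(2 * np.pi * f_carrier * t)        # natural (instantaneous) waveform

analytic = hilbert(v)                                   # analytic signal: v + j*H{v}
# Shift frequency = carrier frequency: what remains varies only as fast as the envelope.
shifted = analytic * np.exp(-1j * 2 * np.pi * f_carrier * t)

err = np.abs(np.abs(shifted) - envelope)
print(err[1000:-1000].max())   # small away from the ends (Hilbert-transform edge effects)
# With the shift frequency set to zero, shifted == analytic and the natural waveform
# is recovered exactly as its real part: np.real(analytic) == v.
```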