15 results
Search Results
2. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting
- Author
-
Van Than Dung, Tegoeh Tjahjowidodo, Rongling Wu, and School of Mechanical and Aerospace Engineering
- Subjects
Polynomial ,Computer science ,Statistics as Topic ,lcsh:Medicine ,02 engineering and technology ,Least squares ,Polynomials ,Computer Applications ,Mathematical and Statistical Techniques ,0202 electrical engineering, electronic engineering, information engineering ,Medicine and Health Sciences ,Parametric equation ,lcsh:Science ,Immune Response ,Multidisciplinary ,Direct method ,Applied Mathematics ,Simulation and Modeling ,Curve Fitting ,Multidisciplinary Sciences ,Ordinary least squares ,Physical Sciences ,Curve fitting ,Science & Technology - Other Topics ,Computer-Aided Design ,020201 artificial intelligence & image processing ,Algorithm ,Algorithms ,Research Article ,Optimization ,Computer and Information Sciences ,Immunology ,Clonal Selection ,Research and Analysis Methods ,Knot (unit) ,OPTIMIZATION APPROACH ,Computer Graphics ,Computer Simulation ,ALGORITHM ,Science & Technology ,B-spline ,lcsh:R ,Biology and Life Sciences ,020207 software engineering ,Computing Methods ,Algebra ,Family of curves ,lcsh:Q ,POINTS ,Nonlinear Least Squares Method ,Mathematical Functions ,Mathematics ,APPROXIMATION - Abstract
B-spline functions are widely used in many industrial applications such as computer graphics, computer-aided design, computer-aided manufacturing, and computer numerical control. Recently, demand has arisen, e.g. in the reverse engineering (RE) field, to employ B-spline curves for non-trivial cases involving curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is identifying the number of knots and their locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting curves of any form with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. In the second step, the knots are optimized, in both location and continuity level, by a non-linear least squares technique. The B-spline function is then obtained by solving an ordinary least squares problem. The performance of the proposed method is validated on various numerical experimental data, with and without simulated noise, generated by a B-spline function and by deterministic parametric functions. The paper also benchmarks the proposed method against existing methods in the literature. The proposed method is shown to reconstruct B-spline functions from sampled data within acceptable tolerance, and to be applicable to any type of curve, from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications. Published in PLOS ONE, vol. 12, issue 3 (United States).
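To illustrate just the final stage described in this abstract — the ordinary least squares solve once knots are fixed — the NumPy sketch below builds a clamped B-spline basis via the Cox-de Boor recursion and fits coefficients by least squares. This is not the paper's two-step knot-optimization algorithm; the function names and the knot choice are illustrative.

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Evaluate all B-spline basis functions of `degree` at points x
    (Cox-de Boor recursion; `knots` is the full clamped knot vector)."""
    x = np.asarray(x, dtype=float)
    B = np.zeros((len(x), len(knots) - 1))
    for i in range(len(knots) - 1):
        B[:, i] = (x >= knots[i]) & (x < knots[i + 1])
    # close the right endpoint: put x == last knot in the last non-empty span
    last = max(i for i in range(len(knots) - 1) if knots[i] < knots[i + 1])
    B[x == knots[-1], last] = 1.0
    for d in range(1, degree + 1):
        Bn = np.zeros((len(x), len(knots) - d - 1))
        for i in range(len(knots) - d - 1):
            left = knots[i + d] - knots[i]
            right = knots[i + d + 1] - knots[i + 1]
            if left > 0:
                Bn[:, i] += (x - knots[i]) / left * B[:, i]
            if right > 0:
                Bn[:, i] += (knots[i + d + 1] - x) / right * B[:, i + 1]
        B = Bn
    return B

def fit_bspline(x, y, interior_knots, degree=3):
    """Least-squares B-spline fit with fixed (possibly non-uniform) knots."""
    t = np.r_[[x.min()] * (degree + 1), interior_knots, [x.max()] * (degree + 1)]
    A = bspline_basis(x, t, degree)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return t, coef, A @ coef

x = np.linspace(0.0, 1.0, 50)
y = x ** 3                      # a cubic is exactly representable by a cubic spline
t, coef, yfit = fit_bspline(x, y, interior_knots=[0.5])
max_err = np.max(np.abs(yfit - y))
```

In the paper's setting the interior knots would come from the bisecting/optimization steps rather than being hand-picked.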
- Published
- 2017
3. Point Set Denoising Using Bootstrap-Based Radial Basis Function
- Author
-
Ahmad Ramli, Ahmad Abd. Majid, and Khang Jie Liew
- Subjects
Computer science ,Noise reduction ,Geometry ,lcsh:Medicine ,010103 numerical & computational mathematics ,02 engineering and technology ,Research and Analysis Methods ,Polynomials ,01 natural sciences ,Mathematical and Statistical Techniques ,Tangents ,0202 electrical engineering, electronic engineering, information engineering ,Radial basis function ,Statistical Methods ,0101 mathematics ,lcsh:Science ,Numerical Analysis ,Principal Component Analysis ,Signal processing ,Multidisciplinary ,Approximation Methods ,Applied Mathematics ,Simulation and Modeling ,Point set ,lcsh:R ,Signal Processing, Computer-Assisted ,020207 software engineering ,Models, Theoretical ,Noise Reduction ,Interpolation ,Spline (mathematics) ,Algebra ,Data Interpretation, Statistical ,Physical Sciences ,Multivariate Analysis ,Signal Processing ,Principal component analysis ,Engineering and Technology ,lcsh:Q ,Algorithm ,Mathematics ,Algorithms ,Statistics (Mathematics) ,Smoothing ,Research Article - Abstract
This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
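A minimal NumPy sketch of the core ingredient — a smoothed thin-plate spline fit to scattered points, where a smoothing parameter `lam` plays the role the paper selects by bootstrap test-error estimation — is given below. The bootstrap selection and k-nearest-neighbour projection steps are omitted; all names are ours.

```python
import numpy as np

def tps_smooth(pts, z, lam):
    """Fit a smoothed thin-plate spline z = f(x, y) to scattered 2-D points.
    lam > 0 trades data fidelity for smoothness (lam = 0 interpolates)."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    r = np.where(d == 0, 1.0, d)           # avoid log(0); U(0) = 0 anyway
    K = r ** 2 * np.log(r)                 # TPS kernel U(r) = r^2 log r
    P = np.column_stack([np.ones(n), pts]) # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + lam * np.eye(n)        # lam*I is the smoothing penalty
    A[:n, n:] = P
    A[n:, :n] = P.T
    sol = np.linalg.solve(A, np.r_[z, np.zeros(3)])
    w, a = sol[:n], sol[n:]
    return K @ w + P @ a                   # smoothed heights at the data points

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(100, 2))
true = 1.0 + 2.0 * pts[:, 0] + 3.0 * pts[:, 1]
noisy = true + rng.normal(0.0, 0.3, 100)
smoothed = tps_smooth(pts, noisy, lam=1.0)
err_noisy = np.mean(np.abs(noisy - true))
err_smooth = np.mean(np.abs(smoothed - true))
```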
- Published
- 2016
4. Improving eye-tracking calibration accuracy using symbolic regression
- Author
-
Christophe Hurter, Vsevolod Peysakhovich, Almoctar Hassoumi, Ecole Nationale de l'Aviation Civile - ENAC (FRANCE), Institut Supérieur de l'Aéronautique et de l'Espace - ISAE-SUPAERO (FRANCE), Ecole Nationale de l'Aviation Civile (ENAC), Département Conception et conduite des véhicules Aéronautiques et Spatiaux (DCAS), and Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO)
- Subjects
Male ,Polynomial ,Eye Movements ,Computer science ,Computer Vision ,Normal Distribution ,Selection Markers ,Symbolic regression ,02 engineering and technology ,Polynomials ,Transfer function ,Autre ,Reflexes ,Medicine and Health Sciences ,0202 electrical engineering, electronic engineering, information engineering ,050207 economics ,Ground truth ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,05 social sciences ,Regression analysis ,Reflex, Vestibulo-Ocular ,Cameras ,Optical Equipment ,Physical Sciences ,Calibration ,Regression Analysis ,Engineering and Technology ,Medicine ,Female ,020201 artificial intelligence & image processing ,Anatomy ,Algorithm ,Algorithms ,Research Article ,Adult ,Computer and Information Sciences ,Ocular Anatomy ,Computation ,Science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Equipment ,Fixation, Ocular ,Research and Analysis Methods ,Young Adult ,Ocular System ,0502 economics and business ,Humans ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Molecular Biology Techniques ,Molecular Biology ,Reproducibility of Results ,Biology and Life Sciences ,Pupil ,Marker Genes ,Pursuit, Smooth ,Algebra ,Eyes ,Eye tracking ,Eye-tracking ,Head ,Mathematics ,Neuroscience - Abstract
Eye tracking systems have recently seen a diversity of novel calibration procedures, including smooth pursuit and vestibulo-ocular reflex based calibrations. These approaches allow more data to be collected than the standard 9-point calibration. However, the computation of the mapping function that produces planar gaze positions from pupil features is mostly based on polynomial regression, and little work has investigated alternative approaches. This paper fills that gap by providing a new calibration computation method based on symbolic regression. Instead of making prior assumptions about the polynomial transfer function between input and output records, symbolic regression seeks an optimal model among different types of functions and their combinations. This approach offers an interesting perspective in terms of flexibility and accuracy. We therefore designed two experiments in which we collected ground truth data to compare vestibulo-ocular and smooth pursuit calibrations based on symbolic regression, each using either a marker or a finger as a target, resulting in four different calibrations. As a result, we improved calibration accuracy by more than 30%, with reasonable extra computation time.
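For context, the polynomial-regression baseline the paper improves upon can be sketched in a few lines: a fixed second-order feature map from pupil coordinates to gaze coordinates, fit by least squares. The feature set below is a common convention, not necessarily the one used in the paper, and the synthetic mapping is purely illustrative.

```python
import numpy as np

def poly_features(p):
    """Second-order polynomial terms often used in gaze calibration."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def calibrate(pupil, gaze):
    """Least-squares fit of the pupil-to-gaze mapping (one column per axis)."""
    coef, *_ = np.linalg.lstsq(poly_features(pupil), gaze, rcond=None)
    return coef

def predict(coef, pupil):
    return poly_features(pupil) @ coef

rng = np.random.default_rng(42)
pupil = rng.uniform(-1, 1, size=(40, 2))
# synthetic ground-truth mapping that lies inside the model class
gaze = np.column_stack([
    2 + 3 * pupil[:, 0] + 0.5 * pupil[:, 0] * pupil[:, 1],
    1 - 2 * pupil[:, 1] + 0.2 * pupil[:, 0] ** 2,
])
coef = calibrate(pupil, gaze)
max_err = np.max(np.abs(predict(coef, pupil) - gaze))
```

Symbolic regression, by contrast, searches over the functional form itself rather than fixing the feature map in advance.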
- Published
- 2019
5. Efficient estimation of generalized linear latent variable models
- Author
-
David I. Warton, Francis K. C. Hui, Wesley Brooks, Sara Taskinen, Riki Herliansyah, and Jenni Niku
- Subjects
0106 biological sciences ,Multivariate statistics ,Multivariate analysis ,Computer science ,Binomials ,01 natural sciences ,Polynomials ,010104 statistics & probability ,Amoebas ,tilastolliset mallit ,estimointi ,Protozoans ,Likelihood Functions ,Multidisciplinary ,Approximation Methods ,Statistical Models ,Simulation and Modeling ,Applied Mathematics ,Statistics ,Linear model ,Eukaryota ,Laplace's method ,Data Interpretation, Statistical ,Physical Sciences ,Vertebrates ,Medicine ,Algorithm ,Algorithms ,Research Article ,Optimization ,Science ,Latent variable ,Research and Analysis Methods ,010603 evolutionary biology ,generalized linear latent variable models ,Set (abstract data type) ,Birds ,Animals ,Computer Simulation ,0101 mathematics ,ta112 ,Organisms ,Biology and Life Sciences ,Statistical model ,Marginal likelihood ,Algebra ,Amniotes ,Multivariate Analysis ,Linear Models ,Mathematics ,Software - Abstract
Generalized linear latent variable models (GLLVMs) are popular tools for modeling multivariate, correlated responses. Such data are often encountered, for instance, in ecological studies, where presence-absences, counts, or biomass of interacting species are collected from a set of sites. Until very recently, the main challenge in fitting GLLVMs has been the lack of computationally efficient estimation methods. For likelihood-based estimation, several closed-form approximations for the marginal likelihood of GLLVMs have been proposed, but efficient implementations have been lacking in the literature. To fill this gap, we show in this paper how to obtain computationally convenient estimation algorithms based on either the Laplace approximation or the variational approximation method, combined with automatic optimization techniques implemented in R software. An extensive set of simulation studies is used to assess the performance of the different methods, showing that the variational approximation method used in conjunction with automatic optimization offers a powerful tool for estimation.
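The Laplace approximation mentioned here replaces an intractable integral over the latent variable with a Gaussian integral around the posterior mode. A toy one-dimensional version (Poisson response with a single Gaussian latent, far simpler than a full GLLVM, with all names ours) can make the idea concrete:

```python
import numpy as np
from math import lgamma

def log_joint(x, y, sigma):
    # log p(y | x) + log p(x): Poisson(y | exp(x)) with x ~ N(0, sigma^2)
    return (y * x - np.exp(x) - lgamma(y + 1)
            - 0.5 * x ** 2 / sigma ** 2 - 0.5 * np.log(2 * np.pi * sigma ** 2))

def laplace_marginal(y, sigma, iters=50):
    """Laplace approximation of p(y) = integral of exp(log_joint) over x."""
    x = 0.0
    for _ in range(iters):                        # Newton iterations for the mode
        g = y - np.exp(x) - x / sigma ** 2        # gradient of log_joint in x
        h = -np.exp(x) - 1.0 / sigma ** 2         # second derivative (negative)
        x -= g / h
    return np.exp(log_joint(x, y, sigma)) * np.sqrt(2 * np.pi / -h)

# reference value by brute-force quadrature on a fine grid
y, sigma = 3, 1.0
grid = np.linspace(-8.0, 8.0, 20001)
truth = np.exp(log_joint(grid, y, sigma)).sum() * (grid[1] - grid[0])
approx = laplace_marginal(y, sigma)
rel_err = abs(approx - truth) / truth
```

In a real GLLVM the latent is multivariate and the mode-finding is done per site, but the structure of the approximation is the same.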
- Published
- 2019
6. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications
- Author
-
Mohd Amran Mohd Radzi, M. Z. A. Ab Kadir, Suhaidi Shafie, Ahmad H. Sabry, and Wan Zuha Wan Hasan
- Subjects
Polynomial ,Atmospheric Science ,Computer science ,020209 energy ,lcsh:Medicine ,02 engineering and technology ,Wind ,Research and Analysis Methods ,Transfer function ,Polynomials ,Electric power system ,Meteorology ,Mathematical and Statistical Techniques ,0202 electrical engineering, electronic engineering, information engineering ,Range (statistics) ,Solar Energy ,Renewable Energy ,Statistical Methods ,Least-Squares Analysis ,lcsh:Science ,Parametric statistics ,Wind Power ,Multidisciplinary ,Bode plot ,Numerical analysis ,Applied Mathematics ,Simulation and Modeling ,lcsh:R ,020206 networking & telecommunications ,Mathematical Concepts ,Models, Theoretical ,Curve Fitting ,Energy and Power ,Algebra ,Frequency domain ,Physical Sciences ,Photovoltaic Power ,Curve fitting ,Earth Sciences ,Engineering and Technology ,lcsh:Q ,Alternative Energy ,Algorithm ,Mathematical Functions ,Mathematics ,Algorithms ,Statistics (Mathematics) ,Research Article ,Forecasting - Abstract
The power system profile varies continually due to random load changes or environmental effects, such as transients generated by device switching. An accurate mathematical model is therefore important, because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit curves to empirical data such as wind, solar, and demand power rates. This paper proposes a modified parametric methodology for determining a system's modeling equations, based on the Bode plot equations and the vector fitting (VF) algorithm, by fitting the experimental data points. The modification is derived from the familiar VF algorithm, a robust numerical method, and extends its application range beyond frequency-domain modeling to all power curves. Four case studies are addressed and compared with several common methods. Judged by minimal RMSE, the results show clear improvements in data fitting over the other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation.
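The linear-least-squares stage at the heart of vector fitting — with poles held fixed, solve for residues and a constant term in a pole-residue model — can be sketched as follows. The full VF algorithm also relocates the poles iteratively; this sketch (our names, synthetic data) shows only the linear step.

```python
import numpy as np

def fit_residues(s, H, poles):
    """Linear LS stage of vector fitting: with poles p_k fixed, solve for
    residues r_k and constant d in H(s) ~ sum_k r_k/(s - p_k) + d."""
    A = np.column_stack([1.0 / (s[:, None] - np.asarray(poles)),
                         np.ones(len(s))])
    x, *_ = np.linalg.lstsq(A, H, rcond=None)
    return x[:-1], x[-1]

# synthetic frequency response with known residues, poles and offset
w = np.logspace(-1, 2, 200)
s = 1j * w
H = 2.0 / (s + 1.0) + 5.0 / (s + 10.0) + 0.3
residues, d = fit_residues(s, H, poles=[-1.0, -10.0])
```

Because the model is linear in the residues once the poles are known, a single complex least-squares solve recovers them exactly here.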
- Published
- 2018
7. Gridding discretization-based multiple stability switching delay search algorithm: The movement of a human being on a controlled swaying bow
- Author
-
Libor Pekař, Roman Prokop, and Radek Matušů
- Subjects
0209 industrial biotechnology ,Polynomial ,Computer science ,lcsh:Medicine ,Control Systems ,02 engineering and technology ,Polynomials ,Systems Science ,Mathematical and Statistical Techniques ,020901 industrial engineering & automation ,Search algorithm ,Reflexes ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:Science ,Numerical Analysis ,Transfer Functions ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,Linear system ,Physical Sciences ,Bilinear transform ,Engineering and Technology ,020201 artificial intelligence & image processing ,System Instability ,Algorithm ,Algorithms ,Research Article ,Interpolation ,Computer and Information Sciences ,Discretization ,Movement ,Computation ,Stability (learning theory) ,Research and Analysis Methods ,Humans ,Eigenvalues and eigenvectors ,Characteristic polynomial ,System Stability ,lcsh:R ,Biology and Life Sciences ,Root locus ,Control Engineering ,Algebra ,lcsh:Q ,Mathematical Functions ,Mathematics ,Neuroscience - Abstract
Delay is a significant phenomenon in the dynamics of many human-related systems, including biological ones. Among other things, it has a decisive impact on system stability, and the study of this influence is often mathematically demanding. This paper presents a computationally simple numerical gridding algorithm for determining stability-margin delay values in multiple-delay linear systems. The characteristic quasi-polynomial, whose roots decide stability, is subjected to iterative discretization by means of a pre-warped bilinear transformation. Linear and quadratic interpolation are then applied to obtain an associated characteristic polynomial with integer powers. The roots of the associated characteristic polynomial closely estimate the roots of the original characteristic quasi-polynomial, which agree with the system's eigenvalues. Since the stability border is crossed by the leading root, the switching root locus is refined using the regula falsi interpolation method. The methodology is implemented on, and verified by, a numerical bio-cybernetic example of stabilizing a human being's movement on a controlled swaying bow. The advantages of the proposed algorithm are the possibility of rapidly computing polynomial zeros with standard technical-computing software, the low level of mathematical knowledge required, and sufficiently high precision in estimating the root loci. The relationship to the direct-search QuasiPolynomial (mapping) Rootfinder algorithm and the computational complexity are discussed as well. The algorithm is also applicable to systems with non-commensurate delays. © 2017 Pekař et al.
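The core trick — discretizing a quasi-polynomial so that its roots can be found with an ordinary polynomial rootfinder — can be illustrated on the scalar example s + a·e^{-sτ} = 0. The sketch below (our construction, not the paper's algorithm; no pre-warping or interpolation refinement) substitutes the bilinear map for s and z^{-N} for the delay term, then calls `numpy.roots`.

```python
import numpy as np

def quasi_poly_roots(a, tau, N=50):
    """Estimate roots of s + a*exp(-s*tau) = 0 via the bilinear substitution
    s = (2/T)(z-1)/(z+1) and exp(-s*tau) ~ z**(-N), with T = tau/N.
    Multiplying through by z**N (z+1) gives (2/T)(z-1) z**N + a (z+1) = 0."""
    T = tau / N
    p = np.zeros(N + 2)
    p[0] = 2.0 / T          # coefficient of z^(N+1)
    p[1] = -2.0 / T         # coefficient of z^N
    p[-2] += a              # coefficient of z^1
    p[-1] += a              # coefficient of z^0
    z = np.roots(p)
    return (2.0 / T) * (z - 1.0) / (z + 1.0)   # map back to the s-plane

roots = quasi_poly_roots(a=1.0, tau=1.0)
# known dominant root of s + e^{-s} = 0, i.e. s = W(-1) (Lambert W function)
s_true = -0.3181315 + 1.3372357j
closest = roots[np.argmin(np.abs(roots - s_true))]
```

The dominant (rightmost) root's real part is what decides stability; the paper refines such estimates further with regula falsi.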
- Published
- 2017
8. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems
- Author
-
Paul H. Garthwaite, Nick Andrews, Andre Charlett, C. Paddy Farrington, Doyo Gragn Enki, and Angela Noufaily
- Subjects
Epidemiology ,Scoring rule ,Binomials ,lcsh:Medicine ,Disease ,01 natural sciences ,Polynomials ,Geographical locations ,Disease Outbreaks ,010104 statistics & probability ,0302 clinical medicine ,Mathematical and Statistical Techniques ,Public health surveillance ,Medicine and Health Sciences ,Medicine ,Public Health Surveillance ,030212 general & internal medicine ,lcsh:Science ,Disease surveillance ,Multidisciplinary ,Data Processing ,Mathematical Models ,Applied Mathematics ,Simulation and Modeling ,3. Good health ,Europe ,Infectious Diseases ,England ,Physical Sciences ,Probability distribution ,Information Technology ,Algorithm ,Algorithms ,Research Article ,Computer and Information Sciences ,Infectious Disease Control ,Disease Surveillance ,Research and Analysis Methods ,03 medical and health sciences ,Humans ,False Positive Reactions ,0101 mathematics ,Models, Statistical ,business.industry ,lcsh:R ,Outbreak ,Probability Theory ,Probability Distribution ,United Kingdom ,Algebra ,Infectious disease (medical specialty) ,Infectious Disease Surveillance ,lcsh:Q ,People and places ,business ,RA ,Mathematics ,Test data - Abstract
A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed, and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks, and this adaptation was applied here. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace.
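A heavily simplified version of such an exceedance detector — estimate a Poisson baseline from historical weekly counts and flag the current week if its tail probability is small — looks like this. Real surveillance algorithms (e.g. the Farrington method) additionally model trend, seasonality and overdispersion; this sketch, with illustrative names and thresholds, shows only the basic idea.

```python
import numpy as np
from math import exp

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), by summing the pmf up to k-1."""
    p, cdf = exp(-lam), 0.0
    for i in range(k):
        cdf += p
        p *= lam / (i + 1)
    return 1.0 - cdf

def flag_outbreak(history, current, alpha=0.01):
    """Flag `current` if it is improbably large under a Poisson baseline
    whose mean is estimated from the historical counts."""
    lam = np.mean(history)
    return poisson_sf(current, lam) < alpha

baseline = np.array([4, 6, 5, 5, 7, 4, 5, 6, 5, 4, 5, 6] * 4)  # ~1 year of weeks
```

A count of 20 against a baseline mean of about 5 would be flagged; a count of 7 would not.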
- Published
- 2016
9. Penalized Multi-Way Partial Least Squares for Smooth Trajectory Decoding from Electrocorticographic (ECoG) Recording
- Author
-
Tetiana Aksenova and Andrey Eliseyev
- Subjects
Man-Computer Interface ,0301 basic medicine ,Moving horizon estimation ,Polynomial ,Computer science ,lcsh:Medicine ,Monkeys ,Polynomials ,Mathematical and Statistical Techniques ,0302 clinical medicine ,Partial least squares regression ,lcsh:Science ,Mammals ,Brain Mapping ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,Signal Processing, Computer-Assisted ,Brain-Computer Interfaces ,Calibration ,Vertebrates ,Physical Sciences ,Engineering and Technology ,Ensemble Kalman filter ,Fast Kalman filter ,Kalman Filter ,Algorithm ,Algorithms ,Statistics (Mathematics) ,Decoding methods ,Research Article ,Primates ,Optimization ,Imaging Techniques ,Movement ,Neuroimaging ,Research and Analysis Methods ,Online Systems ,03 medical and health sciences ,Extended Kalman filter ,Imaging, Three-Dimensional ,Humans ,Animals ,Least-Squares Analysis ,Statistical Methods ,Recursive least squares filter ,Models, Statistical ,lcsh:R ,Neurosciences ,Organisms ,Reproducibility of Results ,Biology and Life Sciences ,Kalman filter ,Algebra ,030104 developmental biology ,Amniotes ,Human Factors Engineering ,lcsh:Q ,Electrocorticography ,Mathematics ,030217 neurology & neurosurgery ,Forecasting ,Neuroscience - Abstract
In this paper, decoding algorithms for motor-related BCI systems for continuous upper-limb trajectory prediction are considered. Two methods for smooth prediction, namely Sobolev and Polynomial Penalized Multi-Way Partial Least Squares (PLS) regressions, are proposed. The methods are compared with the Multi-Way Partial Least Squares and Kalman filter approaches. The comparison demonstrates that the proposed methods combine the prediction accuracy of the PLS family of algorithms with the trajectory smoothness of the Kalman filter. In addition, the prediction delay is significantly lower for the proposed algorithms than for the Kalman filter approach. The proposed methods could be applied in a wide range of applications beyond neuroscience.
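For readers unfamiliar with the PLS family, a bare-bones single-response PLS regression (NIPALS, without the paper's multi-way tensor structure or smoothness penalties; all names ours) can be sketched in NumPy. With as many components as predictors it reproduces ordinary least squares, which the test below checks.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """PLS1 (NIPALS): extract components of X most covariant with y."""
    xm, ym = X.mean(0), y.mean()
    Xc, yc = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)          # weight vector
        t = Xc @ w                         # score
        tt = t @ t
        p = Xc.T @ t / tt                  # X loading
        qk = (yc @ t) / tt                 # y loading
        Xc = Xc - np.outer(t, p)           # deflate
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    beta = W @ np.linalg.solve(P.T @ W, q)  # regression coefficients
    return xm, ym, beta

def pls1_predict(model, X):
    xm, ym, beta = model
    return (X - xm) @ beta + ym

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 2.0, -0.5]) + 0.01 * rng.normal(size=100)
model = pls1_fit(X, y, n_comp=3)           # full rank: equals OLS
pred = pls1_predict(model, X)
```

The paper's penalized multi-way variants add smoothness constraints on the coefficient tensor on top of this basic machinery.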
- Published
- 2016
10. Robust and automatic motion-capture data recovery using soft skeleton constraints and model averaging
- Author
-
Joëlle Tilmanne, Mickaël Tits, and Thierry Dutoit
- Subjects
Computer science ,Video Recording ,lcsh:Medicine ,02 engineering and technology ,Polynomials ,Database and Informatics Methods ,Mathematical and Statistical Techniques ,Medicine and Health Sciences ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:Science ,Musculoskeletal System ,Numerical Analysis ,Multidisciplinary ,Heuristic ,Applied Mathematics ,Simulation and Modeling ,Biomechanical Phenomena ,Data Accuracy ,Physical Sciences ,Regression Analysis ,020201 artificial intelligence & image processing ,Anatomy ,Sequence Analysis ,Kalman Filter ,Algorithm ,Statistics (Mathematics) ,Algorithms ,Research Article ,Interpolation ,Bioinformatics ,Movement ,Sequence Databases ,Linear Regression Analysis ,Research and Analysis Methods ,Motion capture ,Bone and Bones ,Data recovery ,Motion ,Motion estimation ,Humans ,Statistical Methods ,Skeleton ,business.industry ,lcsh:R ,Probabilistic logic ,Biology and Life Sciences ,020207 software engineering ,Statistical model ,Missing data ,Biological Databases ,Algebra ,lcsh:Q ,business ,Mathematics ,Software - Abstract
Motion capture allows accurate recording of human motion, with applications in many fields, including entertainment, medicine, sports science and human-computer interaction. A common difficulty with this technology is missing data, caused by occlusions or recording conditions. Various models have been proposed to estimate missing data. Some are based on interpolation, low-rank properties or inter-correlations; others involve dataset matching or skeleton constraints. While the latter have the advantage of promoting realistic motion estimation, they require prior knowledge of the skeleton constraints or the availability of a prerecorded dataset. In this article, we propose a probabilistic averaging method over several recovery models (referred to as Probabilistic Model Averaging, PMA), based on the likelihoods of the distances between body points. The method has the advantage of being automatic while allowing efficient recovery of gapped data. To support and validate the proposed method, we use a set of four individual recovery models based on linear/nonlinear regression in local coordinate systems. Finally, we propose two heuristic algorithms to enforce skeleton constraints in the reconstructed motion, which can be used with any individual recovery model. For validation, random gaps were introduced into motion-capture sequences, and the effects of factors such as the number of simultaneous gaps, gap length and sequence duration were analyzed. Results show that the proposed probabilistic averaging method yields better recovery than (i) each of the four individual models and (ii) two recent state-of-the-art models, regardless of gap length, sequence duration and number of simultaneous gaps. Moreover, both heuristic skeleton-constraint algorithms significantly improve the recovery for 7 out of 8 tested motion-capture sequences (p < 0.05), for 10 simultaneous gaps of 5 seconds.
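The model-averaging idea — fill each gap with several recovery models and combine their outputs — can be shown in miniature on a single 1-D marker trajectory. This is a much-simplified stand-in for the paper's likelihood-weighted averaging (equal weights, two toy models, our names), not its actual algorithm.

```python
import numpy as np

def recover_gaps(t, y):
    """Fill NaN gaps with the average of two simple recovery models:
    piecewise-linear interpolation and a global cubic polynomial fit."""
    miss = np.isnan(y)
    lin = np.interp(t, t[~miss], y[~miss])                    # model 1
    cub = np.polyval(np.polyfit(t[~miss], y[~miss], 3), t)    # model 2
    out = y.copy()
    out[miss] = 0.5 * (lin[miss] + cub[miss])                 # equal-weight average
    return out

t = np.linspace(0.0, 3.0, 61)
clean = np.sin(t)                 # ground-truth trajectory of one coordinate
gappy = clean.copy()
gappy[20:26] = np.nan             # a synthetic occlusion gap
recovered = recover_gaps(t, gappy)
gap_err = np.max(np.abs(recovered[20:26] - clean[20:26]))
```

In the paper, the weights come from likelihoods of inter-marker distances, so models that violate the (implicit) skeleton are down-weighted automatically.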
The code is available for free download at: https://github.com/numediart/MocapRecovery.
- Published
- 2018
11. Marathon: An open source software library for the analysis of Markov-Chain Monte Carlo algorithms
- Author
-
Annabell Berger and Steffen Rechner
- Subjects
FOS: Computer and information sciences ,Polynomial ,Discrete Mathematics (cs.DM) ,Computer science ,Monte Carlo method ,lcsh:Medicine ,Infographics ,Polynomials ,01 natural sciences ,Upper and lower bounds ,Mathematical and Statistical Techniques ,lcsh:Science ,Multidisciplinary ,Markov chain mixing time ,Mathematical Models ,Applied Mathematics ,Simulation and Modeling ,Libraries, Digital ,Sampling (statistics) ,Random walk ,Markov Chains ,010201 computation theory & mathematics ,Physical Sciences ,symbols ,Probability distribution ,Monte Carlo Method ,Graphs ,Algorithm ,Algorithms ,Network Analysis ,Research Article ,Gibbs sampling ,Computer and Information Sciences ,Markov Models ,0102 computer and information sciences ,Research and Analysis Methods ,Markov model ,Hybrid Monte Carlo ,symbols.namesake ,0101 mathematics ,Markov chain ,Data Visualization ,lcsh:R ,010102 general mathematics ,Eigenvalues ,Markov chain Monte Carlo ,Probability Theory ,Probability Distribution ,Algebra ,Linear Algebra ,Random Walk ,Computer Science - Mathematical Software ,lcsh:Q ,Mathematical Software (cs.MS) ,Software ,Mathematics ,Computer Science - Discrete Mathematics - Abstract
In this paper, we consider the Markov-chain Monte Carlo (MCMC) approach for random sampling of combinatorial objects. The running time of such an algorithm depends on the total mixing time of the underlying Markov chain and is unknown in general. For some Markov chains, upper bounds on this total mixing time exist but are too large to be applicable in practice. We try to answer the question of whether the total mixing time is close to its upper bounds, or whether there is a significant gap between them. In doing so, we present the software library marathon, which is designed to support the analysis of MCMC-based sampling algorithms. The main application of this library is to compute properties of so-called state graphs, which represent the structure of Markov chains. We use marathon to investigate the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graph realizations. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound obtained by the famous canonical path method is several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time.
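For a chain small enough to write down explicitly, both the exact mixing time and the spectral bound mentioned in the abstract can be computed directly. The sketch below (our code, a lazy random walk on a 6-cycle as the example chain) uses the standard bound t_mix(ε) ≤ log(1/(ε·π_min)) / (1 − λ*) for reversible chains, with λ* the second-largest eigenvalue modulus.

```python
import numpy as np

def spectral_bound(P, pi, eps=0.25):
    """Spectral upper bound on the mixing time of a reversible chain."""
    lam_star = np.sort(np.abs(np.linalg.eigvals(P)))[-2]
    return np.log(1.0 / (eps * pi.min())) / (1.0 - lam_star)

def mixing_time(P, pi, eps=0.25):
    """Exact mixing time: iterate from every start state until the worst-case
    total-variation distance to the stationary distribution drops below eps."""
    dist = np.eye(len(pi))
    t = 0
    while True:
        t += 1
        dist = dist @ P
        if 0.5 * np.abs(dist - pi).sum(axis=1).max() <= eps:
            return t

n = 6                                   # lazy random walk on an n-cycle
I = np.eye(n)
P = 0.5 * I + 0.25 * (np.roll(I, 1, axis=0) + np.roll(I, -1, axis=0))
pi = np.full(n, 1.0 / n)
t_mix = mixing_time(P, pi)
bound = spectral_bound(P, pi)
```

For the chains marathon targets, the state graph is far too large to enumerate; the library's point is to compute such quantities on non-trivial instances.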
- Published
- 2015
12. A fast and robust interpolation filter for airborne lidar point clouds
- Author
-
Chuanfa Chen, Guolin Liu, Jinyun Guo, Yanyan Li, and Na Zhao
- Subjects
Computer and Information Sciences ,Weight function ,010504 meteorology & atmospheric sciences ,Computer science ,0211 other engineering and technologies ,Point cloud ,lcsh:Medicine ,02 engineering and technology ,Research and Analysis Methods ,Polynomials ,01 natural sciences ,Remote Sensing ,Discrete cosine transform ,lcsh:Science ,Terrain ,Geographic Areas ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Numerical Analysis ,Lidar ,Multidisciplinary ,Geography ,Applied Mathematics ,Simulation and Modeling ,lcsh:R ,Linear system ,Correction ,Geomorphology ,Computing Methods ,Interpolation ,Algebra ,Photogrammetry ,Physical Sciences ,Outlier ,Earth Sciences ,Benchmark (computing) ,Engineering and Technology ,lcsh:Q ,Algorithm ,Mathematics ,Algorithms ,Research Article ,Urban Areas - Abstract
A fast and robust interpolation filter based on a finite-difference thin-plate spline (TPS) is proposed in this paper. The proposed method employs the discrete cosine transform to efficiently solve the linear system of TPS equations for gridded data, and uses a pre-defined weight function of the fitting residuals to reduce the effect of outliers and misclassified non-ground points on the accuracy of the reference ground surface. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were employed to compare the performance of the proposed method with that of the multi-resolution hierarchical classification method (MHC). Results indicate that, in terms of kappa coefficient and total error, the proposed method is on average more accurate than MHC: specifically, it is 1.03 and 1.32 times as accurate as MHC on kappa coefficient and total error, respectively. More importantly, the proposed method is on average more than 8 times faster than MHC. In comparison with other recently developed methods, the proposed algorithm also performs well.
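The DCT trick for gridded data works because the DCT diagonalizes the finite-difference penalty, turning the penalized least-squares solve into a per-frequency gain. A 1-D NumPy sketch (our construction; the paper works on 2-D grids with robust weights on top of this) is:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (n x n); its transpose is its inverse."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def dct_smooth(y, s):
    """Penalized LS smoothing of evenly gridded data: the DCT diagonalizes
    the second-difference penalty, so the solution applies a gain
    1/(1 + s*lam_k^2) to each DCT coefficient."""
    n = len(y)
    C = dct_matrix(n)
    lam = 2.0 - 2.0 * np.cos(np.pi * np.arange(n) / n)  # Laplacian eigenvalues
    return C.T @ ((1.0 / (1.0 + s * lam ** 2)) * (C @ y))

rng = np.random.default_rng(3)
x = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(x)
noisy = clean + rng.normal(0.0, 0.3, x.size)
smoothed = dct_smooth(noisy, s=100.0)
rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_smooth = np.sqrt(np.mean((smoothed - clean) ** 2))
```

In practice one would use a fast DCT (e.g. `scipy.fft.dct`) rather than a dense matrix, which is what makes the filter fast on large lidar grids.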
- Published
- 2017
13. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models
- Author
-
Antonio Augusto Rodrigues Coelho and Daniel Cavalcanti Jeronymo
- Subjects
Optimization ,Computer and Information Sciences ,0209 industrial biotechnology ,Polynomial ,Computer science ,Kernel Functions ,MathematicsofComputing_NUMERICALANALYSIS ,lcsh:Medicine ,02 engineering and technology ,Research and Analysis Methods ,Polynomials ,Systems Science ,Fuzzy logic ,Matrix (mathematics) ,020901 industrial engineering & automation ,Fuzzy Logic ,020401 chemical engineering ,Robustness (computer science) ,0204 chemical engineering ,Operator Theory ,lcsh:Science ,Numerical Analysis ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,Multivariable calculus ,lcsh:R ,Computing Methods ,Interpolation ,Nonlinear system ,Spline (mathematics) ,Model predictive control ,Lanczos resampling ,Algebra ,Nonlinear Dynamics ,Physical Sciences ,lcsh:Q ,Hypercube ,Inverse function ,Algorithm ,Mathematics ,Algorithms ,Nonlinear Systems ,Research Article - Abstract
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in the control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as the kernel functions of an interpolator. Conjunction of membership functions in a unit hypercube space enables multivariable interpolation in N dimensions. Because the membership functions act as interpolation kernels, their choice determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, among others. The proposed interpolator is presented as a solution to the problem of modeling static nonlinearities, since it can model both a function and its inverse. Three case studies from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained on performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method to modeling Hammerstein nonlinearities and their inverse functions for implementing an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC).
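The membership-as-kernel idea can be seen in a two-input toy: with triangular membership functions on a uniform grid, the normalized membership degrees are exactly linear-interpolation weights, and their product-t-norm conjunction gives bilinear interpolation on the hypercube. This sketch uses our own names and only the triangular kernel; other kernels (cubic, spline, Lanczos) would swap in different membership shapes.

```python
import numpy as np

def memberships(grid, x):
    """Triangular membership functions on a uniform grid; normalized
    membership degrees coincide with linear-interpolation weights."""
    h = grid[1] - grid[0]
    mu = np.maximum(0.0, 1.0 - np.abs(x - grid) / h)
    return mu / mu.sum()

def flhi_interp(gx, gy, values, x, y):
    """2-D interpolation as a conjunction (product t-norm) of 1-D memberships."""
    W = np.outer(memberships(gx, x), memberships(gy, y))
    return float(np.sum(W * values))

gx = np.linspace(0.0, 1.0, 5)
gy = np.linspace(0.0, 1.0, 5)
GX, GY = np.meshgrid(gx, gy, indexing='ij')
values = 2.0 * GX + 3.0 * GY          # a linear surface is reproduced exactly
z = flhi_interp(gx, gy, values, 0.3, 0.7)
```

Tabulating an inverse nonlinearity on the grid and interpolating it the same way is what enables the Hammerstein compensator described in the abstract.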
- Published
- 2016
14. Queues with Dropping Functions and General Arrival Processes
- Author
-
Pawel Mrozowski and Andrzej Chydzinski
- Subjects
0209 industrial biotechnology ,Time Factors ,Distribution (number theory) ,Economics ,Computer science ,lcsh:Medicine ,Social Sciences ,02 engineering and technology ,Markov Processes ,Polynomials ,Mathematical and Statistical Techniques ,020901 industrial engineering & automation ,Salaries ,0202 electrical engineering, electronic engineering, information engineering ,Computer Networks ,lcsh:Science ,Queue ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,Computer Science::Performance ,Autocorrelation ,Physical Sciences ,symbols ,Engineering and Technology ,Algorithm ,Statistics (Mathematics) ,Algorithms ,Research Article ,Computer and Information Sciences ,Markov Models ,Equipment ,Markov process ,Research and Analysis Methods ,Markov model ,symbols.namesake ,Statistical Methods ,Internet ,lcsh:R ,Numerical Analysis, Computer-Assisted ,020206 networking & telecommunications ,Function (mathematics) ,Models, Theoretical ,Probability Theory ,Algebra ,Labor Economics ,Signal Processing ,lcsh:Q ,Bulk queue ,Mathematics - Abstract
In a queueing system with a dropping function, an arriving customer can be denied service (dropped) with a probability that is a function of the queue length at the time of that customer's arrival. The potential applicability of such a mechanism is very wide, because by choosing the shape of this function one can easily manipulate several performance characteristics of the queueing system. In this paper we carry out an analysis of the queueing system with a dropping function and a very general model of the arrival process: a model which includes batch arrivals and interarrival-time autocorrelation, and allows fitting the actual shape of the interarrival-time distribution and its moments. For such a system we obtain formulas for the distribution of the queue length and the overall customer loss ratio. The analytical results are accompanied by numerical examples computed for several dropping functions.
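The dropping-function mechanism described above can be sketched with a small discrete-time simulation. This is an illustrative toy model only (a Bernoulli arrival/service approximation, not the general batch-Markovian arrival process the paper analyzes); the function names and the linear dropping function are assumptions.

```python
import random

def simulate_drop_queue(arrival_p, service_p, drop_fn, steps, seed=0):
    """Discrete-time sketch of a queue with a dropping function.

    Each arriving customer is dropped with probability drop_fn(n), where n
    is the queue length at the moment of arrival. Returns the overall
    customer loss ratio.
    """
    rng = random.Random(seed)
    queue = 0
    arrivals = dropped = 0
    for _ in range(steps):
        if rng.random() < arrival_p:            # a customer arrives
            arrivals += 1
            if rng.random() < drop_fn(queue):   # dropped on arrival
                dropped += 1
            else:
                queue += 1
        if queue > 0 and rng.random() < service_p:  # a service completes
            queue -= 1
    return dropped / max(arrivals, 1)

def linear_drop(n, lo=5, hi=15):
    """Linear dropping function: drop nothing below lo, everything above hi."""
    return min(1.0, max(0.0, (n - lo) / (hi - lo)))

loss = simulate_drop_queue(0.6, 0.5, linear_drop, 100_000)
print(f"overall loss ratio ~ {loss:.3f}")
```

Changing the shape of `drop_fn` directly shifts the loss ratio and the queue-length distribution, which is the tuning knob the abstract refers to.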
- Published
- 2016
15. Implementation and assessment of the black body bias correction in quantitative neutron imaging
- Author
-
Muriel Siegwart, Christian Gruenzweig, M. Morgano, Eberhard Lehmann, Anders Kaestner, Jan Hovind, Markus Strobl, Pierre Boillat, Chiara Carminati, Peter Vontobel, David Mannes, Florian Schmid, Marc Raventós, and Pavel Trtik
- Subjects
Databases, Factual ,Computer science ,Normalization (image processing) ,02 engineering and technology ,Neutron scattering ,Polynomials ,Diagnostic Radiology ,Scattering ,Data acquisition ,0202 electrical engineering, electronic engineering, information engineering ,Medicine and Health Sciences ,Image Processing, Computer-Assisted ,Neutron Scattering ,Tomography ,Numerical Analysis ,Multidisciplinary ,Phantoms, Imaging ,Physics ,Radiology and Imaging ,021001 nanoscience & nanotechnology ,3. Good health ,Neutron Diffraction ,Data Acquisition ,Physical Sciences ,Medicine ,0210 nano-technology ,Algorithm ,Algorithms ,Interpolation ,Research Article ,Normalization (statistics) ,Computer and Information Sciences ,Imaging Techniques ,020209 energy ,Science ,Neuroimaging ,Research and Analysis Methods ,Bias ,Diagnostic Medicine ,Computer Simulation ,Particle Physics ,Nuclear Physics ,Nucleons ,Neutrons ,Reproducibility ,Neutron imaging ,Experimental data ,Biology and Life Sciences ,Water ,Computed Axial Tomography ,Algebra ,Lead ,Tomography, X-Ray Computed ,Mathematics ,Copper ,Software ,Neuroscience - Abstract
We describe in this paper the experimental procedure, the data treatment and the quantification of the black body correction: an experimental approach to compensate for scattering and systematic biases in quantitative neutron imaging based on experimental data. The correction algorithm is based on two steps: estimation of the scattering component, and correction using an enhanced normalization formula. The method incorporates correction terms into the image normalization procedure, which usually includes only open beam and dark current images (open beam correction). Our aim is to show its efficiency and reproducibility: we detail the data treatment procedures and quantitatively investigate the effect of the correction. Its implementation is included within the open source CT reconstruction software MuhRec. The performance of the proposed algorithm is demonstrated using simulated and experimental CT datasets acquired at the ICON and NEUTRA beamlines at the Paul Scherrer Institut.
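The "enhanced normalization" idea can be sketched as follows: standard open beam correction divides the dark-current-corrected projection by the dark-current-corrected open beam, and the black body approach additionally subtracts an estimated scattering component from both terms. This is a minimal illustrative sketch, not the paper's exact formula or the MuhRec implementation; all names and the toy values are assumptions.

```python
import numpy as np

def normalize(img, open_beam, dark, scatter_img=None, scatter_ob=None):
    """Sketch of open beam normalization with an optional scattering term.

    Without the scatter arguments this is the usual open beam correction:
    (img - dark) / (open_beam - dark). With them, the estimated scattering
    component is subtracted from projection and open beam before dividing.
    """
    num = img - dark
    den = open_beam - dark
    if scatter_img is not None:
        num = num - scatter_img
    if scatter_ob is not None:
        den = den - scatter_ob
    # Guard against division by (near-)zero in the corrected open beam
    return num / np.clip(den, 1e-12, None)

# Toy 2x2 example with a uniform hypothetical scattering estimate
dark = np.zeros((2, 2))
ob = np.full((2, 2), 1000.0)
img = np.full((2, 2), 600.0)
scatter = np.full((2, 2), 100.0)
print(normalize(img, ob, dark, scatter, scatter))  # uniform (600-100)/(1000-100)
```

In the toy example the uncorrected transmission would be 0.60, while the scatter-corrected value is 500/900, about 0.556, showing how an unsubtracted scattering background biases quantitative transmission values.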
- Full Text
- View/download PDF