97 results
Search Results
2. Probabilistic solutions to DAEs learning from physical data.
- Author
- Wu, Zongmin and Zhang, Ran
- Subjects
- CONTINUOUS time systems, STATISTICS, NON-Newtonian fluids, ALGEBRAIC equations, APPROXIMATION theory, DIFFERENTIAL inclusions
- Abstract
A nonlinear chaotic differential/algebraic equation (DAE) has been established to simulate the nonuniform oscillations of a sphere falling in a non-Newtonian fluid. The DAE is obtained solely by learning the experimental data with a sparse optimization method. However, the deterministic solution becomes increasingly inaccurate for long-time approximation of the continuous system. In this paper, we introduce two probabilistic solutions for computing the full DAE: the random branch selection iteration (RBSI) and the random switching iteration (RSI). The samples are also taken as the reference trajectory to learn the random parameters. The proposed probabilistic solutions can be regarded as discrete analogues of differential inclusions and switching DAEs, respectively. They are also compared with a deterministic method, the backward differentiation formula (BDF). The deterministic methods yield only a limited subset of the possible solutions, while the RSI can include all possible trajectories. The numerical results and a statistical information criterion show that RSI successfully reveals the sustained instabilities of the motion and its long-time chaotic behavior. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
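The abstract does not spell out the random switching iteration; as a toy illustration of the idea (randomly selecting among the solution branches of the algebraic constraint at each step), consider a hypothetical scalar system whose constraint x² = y admits two branches x = ±√y. The dynamics and names below are illustrative, not the paper's model.

```python
import math
import random

def rsi_toy(y0, steps, rng):
    """Toy random switching iteration: evolve the differential part,
    then randomly select one branch of the algebraic constraint
    x**2 == y (hypothetical example system, not the paper's DAE)."""
    trajectory = []
    y = y0
    for _ in range(steps):
        y = y + 0.1 * (1.0 - y)            # toy differential update for y
        branch = rng.choice([+1.0, -1.0])  # random branch selection
        x = branch * math.sqrt(y)          # enforce the algebraic constraint
        trajectory.append((x, y))
    return trajectory

rng = random.Random(0)
traj = rsi_toy(0.5, 20, rng)
# every iterate satisfies the algebraic constraint
assert all(abs(x * x - y) < 1e-9 for x, y in traj)
```

Collecting many such randomly switched trajectories is what lets the method cover solution branches that a single deterministic integration would miss.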
3. Statistically testing the validity of analytical and computational approximations to the chemical master equation.
- Author
- Jenkinson, Garrett and Goutsias, John
- Subjects
- CHEMICAL equations, APPROXIMATION theory, CHEMICAL reactions, STOCHASTIC analysis, STATISTICS, PHENOMENOLOGICAL theory (Physics), NUMERICAL solutions to equations
- Abstract
The master equation is used extensively to model chemical reaction systems with stochastic dynamics. However, and despite its phenomenological simplicity, it is not in general possible to compute the solution of this equation. Drawing exact samples from the master equation is possible, but can be computationally demanding, especially when estimating high-order statistical summaries or joint probability distributions. As a consequence, one often relies on analytical approximations to the solution of the master equation or on computational techniques that draw approximate samples from this equation. Unfortunately, it is not in general possible to check whether a particular approximation scheme is valid. The main objective of this paper is to develop an effective methodology to address this problem based on statistical hypothesis testing. By drawing a moderate number of samples from the master equation, the proposed techniques use the well-known Kolmogorov-Smirnov statistic to reject the validity of a given approximation method or accept it with a certain level of confidence. Our approach is general enough to deal with any master equation and can be used to test the validity of any analytical approximation method or any approximate sampling technique of interest. A number of examples, based on the Schlögl model of chemistry and the SIR model of epidemiology, clearly illustrate the effectiveness and potential of the proposed statistical framework. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
4. Statistical metamodeling of dynamic network loading.
- Author
- Song, Wenjing, Han, Ke, Wang, Yiou, Friesz, Terry L., and del Castillo, Enrique
- Subjects
- MATHEMATICAL models, STATISTICS, KRIGING, NONLINEAR theories, APPROXIMATION theory, LIPSCHITZ spaces
- Abstract
Dynamic traffic assignment (DTA) models rely on a network performance module known as dynamic network loading (DNL), which expresses flow propagation, flow conservation, and travel delay at a network level. The DNL defines the so-called network delay operator, which maps a set of path departure rates to a set of path travel times (or costs). It is widely known that the delay operator is not available in closed form, and has undesirable properties that severely complicate DTA analysis and computation, such as discontinuity, non-differentiability, non-monotonicity, and computational inefficiency. This paper proposes a fresh take on this important and difficult issue by providing a class of surrogate DNL models based on a statistical learning method known as Kriging. We present a metamodeling framework that systematically approximates DNL models and is flexible in the sense of allowing the modeler to make trade-offs among model granularity, complexity, and accuracy. It is shown that such surrogate DNL models yield highly accurate approximations (with errors below 8%) and superior computational efficiency (9 to 455 times faster than conventional DNL procedures such as those based on the link transmission model). Moreover, these approximate DNL models admit closed-form and analytical delay operators, which are Lipschitz continuous and infinitely differentiable, with closed-form Jacobians. We provide in-depth discussions on the implications of these properties to DTA research and model applications. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
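The paper's metamodel specifics are not given in the abstract; as a minimal sketch of the Kriging idea (a smooth interpolant standing in for an expensive simulator), here is a tiny noise-free 1-D example with a squared-exponential kernel and two hypothetical training runs of the expensive model. The closed-form 2x2 inverse keeps it self-contained.

```python
import math

def kernel(x1, x2, length=1.0):
    """Squared-exponential covariance."""
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def kriging_predict(xs, ys, xstar):
    """Noise-free Kriging mean prediction from two training points,
    using the explicit 2x2 matrix inverse for K^{-1} y."""
    a = kernel(xs[0], xs[0]); b = kernel(xs[0], xs[1])
    c = kernel(xs[1], xs[0]); d = kernel(xs[1], xs[1])
    det = a * d - b * c
    w0 = (d * ys[0] - b * ys[1]) / det
    w1 = (-c * ys[0] + a * ys[1]) / det
    return kernel(xstar, xs[0]) * w0 + kernel(xstar, xs[1]) * w1

xs, ys = [0.0, 2.0], [1.0, 3.0]  # hypothetical (input, expensive-model output)
# a noise-free Kriging surrogate interpolates its training data exactly
assert abs(kriging_predict(xs, ys, 0.0) - 1.0) < 1e-9
assert abs(kriging_predict(xs, ys, 2.0) - 3.0) < 1e-9
```

Between training points the prediction is a smooth, differentiable blend, which is exactly the property the paper exploits to obtain well-behaved delay operators.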
5. MATI: An efficient algorithm for influence maximization in social networks.
- Author
- Rossi, Maria-Evgenia G., Shi, Bowen, Tziortziotis, Nikolaos, Malliaros, Fragkiskos D., Giatsidis, Christos, and Vazirgiannis, Michalis
- Subjects
- EPIDEMICS, SOCIAL networks, SOCIAL movements, APPROXIMATION theory, VIRAL marketing
- Abstract
Influence maximization has attracted a lot of attention due to its numerous applications, including diffusion of social movements, the spread of news, viral marketing, and outbreaks of diseases. The objective is to discover a group of users that are able to maximize the spread of influence across a network. The greedy algorithm gives a solution to the influence maximization problem with a good approximation ratio. Nevertheless, it does not scale well to large datasets. In this paper, we propose Matrix Influence (MATI), an efficient algorithm that can be used under both the Linear Threshold and Independent Cascade diffusion models. MATI is based on the precalculation of influence by taking advantage of the simple paths in each node's neighborhood. An extensive empirical analysis has been performed on multiple real-world datasets showing that MATI has competitive performance when compared to other well-known algorithms with regard to running time and expected influence spread. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
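MATI's own path-based precalculation is not detailed in the abstract; the greedy baseline it improves upon can be sketched under the Independent Cascade model. The toy graph is hypothetical, and the propagation probability is fixed at 1 so the cascade reduces to deterministic reachability (no Monte Carlo averaging needed).

```python
def cascade_spread(graph, seeds):
    """Independent Cascade spread with edge probability 1:
    the influenced set is just the nodes reachable from the seeds."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nb in graph.get(node, []):
            if nb not in active:        # with p = 1 every edge fires
                active.add(nb)
                frontier.append(nb)
    return len(active)

def greedy_influence(graph, k):
    """Greedy seed selection: repeatedly add the node with the
    largest marginal gain in spread."""
    seeds = []
    nodes = set(graph) | {v for nbs in graph.values() for v in nbs}
    for _ in range(k):
        best = max((n for n in nodes if n not in seeds),
                   key=lambda n: cascade_spread(graph, seeds + [n]))
        seeds.append(best)
    return seeds

# star graph: the hub reaches everyone, so greedy picks it first
star = {"hub": ["a", "b", "c", "d"]}
assert greedy_influence(star, 1) == ["hub"]
assert cascade_spread(star, ["hub"]) == 5
```

With probabilistic edges the spread function would be estimated by repeated simulation, which is precisely the cost that MATI's precomputation is designed to avoid.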
6. Modeling Link-Level Crash Frequency Using Integrated Geospatial Land Use Data and On-Network Characteristics.
- Author
- Duddu, Venkata R. and Pulugurtha, Srinivas S.
- Subjects
- CONJUGATE gradient methods, ARTIFICIAL neural networks, APPROXIMATION theory, LEAST absolute deviations (Statistics), ESTIMATION theory
- Abstract
The primary focus of this paper is to develop models to estimate link-level crash frequency using land use data extracted and integrated through the use of a distance gradient method. The on-network characteristics were added to the integrated land use characteristics database and were also used in the development and validation of link-level crash frequency estimation models. Both statistical and backpropagation neural network (BPNN)-based approaches were tested and evaluated for modeling. Mean absolute deviation (MAD), median error, 85th percentile error, and root-mean-square error (RMSE) were computed to validate the developed link-level crash frequency estimation models and compare the two approaches. The results obtained from validation of the link-level crash frequency estimation models indicate that the computed errors are low for models based on both statistical and neural network approaches. Both approaches have reasonably good predictive capability and can be used to estimate crash frequency. The role of predictor (including integrated land use) variables on crash frequency along links can be easily understood using outputs from the statistical modeling approach. Also, findings indicate that models based on integrated land use and on-network characteristics (excluding traffic volume) have good predictive capability and can be used as surrogate data to estimate crash frequency if traffic volume data are not available. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
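The four validation measures named above are standard; a minimal sketch of how they might be computed on hypothetical observed and predicted crash counts (the percentile rule used here is one simple convention, not necessarily the paper's):

```python
import math

def validation_metrics(observed, predicted):
    """MAD, median error, 85th-percentile error, and RMSE of the
    absolute prediction errors."""
    errs = sorted(abs(o - p) for o, p in zip(observed, predicted))
    n = len(errs)
    mad = sum(errs) / n
    median = errs[n // 2] if n % 2 else (errs[n // 2 - 1] + errs[n // 2]) / 2
    p85 = errs[min(n - 1, math.ceil(0.85 * n) - 1)]  # simple percentile rule
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    return mad, median, p85, rmse

obs = [4, 2, 7, 5]    # hypothetical link-level crash counts
pred = [3, 2, 9, 5]   # hypothetical model predictions
mad, median, p85, rmse = validation_metrics(obs, pred)
assert mad == 0.75                 # (1 + 0 + 2 + 0) / 4
assert median == 0.5               # mean of the middle errors 0 and 1
assert rmse == math.sqrt(5 / 4)    # sqrt((1 + 0 + 4 + 0) / 4)
```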
7. Method for the Non-linear Identification of Aircraft Parameters by Testing Maneuvers.
- Author
- Boguslavskiy, I. A.
- Subjects
- EQUATIONS, STATISTICS, NUMERICAL analysis, APPROXIMATION theory, ALGORITHMS, INTERPOLATION, EQUATIONS of motion, LAGRANGE equations
- Abstract
In this paper, we describe a variant solution for a common problem in applied statistics: estimating the parameters of a dynamic system from observed magnitudes that statistically depend on the unobserved sequence of states of the system. The method is realized by means of the multipolynomial approximations algorithm (the MPA algorithm). The method is validated by applying it to a problem of correcting finite sets of nominal experimental data, on which nominal functions are constructed by means of interpolation from the current states of the system. Nominal experimental data are presented on a finite set of points covering the domains of definition of the nominal functions. The nominal equations of motion of the dynamical system are defined by the nominal functions. In this paper, as a concrete example, the nominal equations of motion correspond to the longitudinal motion of an aircraft similar to the F-16. The nominal functions are the calculated aerodynamic characteristics. The nominal experimental data are recorded by means of experiments in a wind tunnel. The measured motion parameters of the aircraft over a segment of real flight serve as inputs to the MPA algorithm. The MPA algorithm produces a 32×1 vector of parameter estimates, which are additive corrections to the nominal experimental data. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
8. Approximation of distributions by using the Anderson–Darling statistic.
- Author
- Liebscher, Eckhard
- Subjects
- APPROXIMATION theory, DISTRIBUTION (Probability theory), ASYMPTOTIC normality, STATISTICS, ASYMPTOTIC efficiencies
- Abstract
In practice, it is often not possible to find an appropriate family of distributions which can be used for fitting the sample distribution with high precision. In these cases, it seems to be opportune to search for the best approximation by a family of distributions instead of an exact fit. In this paper, we consider the Anderson–Darling statistic with plugged-in minimum distance estimator for the parameter vector. We prove asymptotic normality of the Anderson–Darling statistic which is used for a test of goodness of approximation. Moreover, we introduce a measure of discrepancy between the sample distribution and the model class. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
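As a sketch of the statistic itself (not the paper's plugged-in minimum-distance version), here is the Anderson–Darling A² for a sample tested against a fully specified standard normal, using the usual order-statistic formula:

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def anderson_darling(sample):
    """A^2 statistic against a fully specified N(0, 1) distribution:
    A^2 = -n - (1/n) * sum (2i-1) [ln F(x_(i)) + ln(1 - F(x_(n+1-i)))]."""
    xs = sorted(sample)
    n = len(xs)
    s = 0.0
    for i, x in enumerate(xs, start=1):
        s += (2 * i - 1) * (math.log(normal_cdf(x))
                            + math.log(1.0 - normal_cdf(xs[n - i])))
    return -n - s / n

centered = [-1.0, -0.5, 0.0, 0.5, 1.0]
shifted = [x + 3.0 for x in centered]
# a sample centered like N(0,1) fits far better than a shifted copy
assert anderson_darling(centered) < anderson_darling(shifted)
```

The paper's setting replaces the fixed N(0, 1) with a parametric family whose parameters are estimated by minimum distance, which changes the statistic's distribution; the formula above is only the raw building block.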
9. Accurate Tree-based Missing Data Imputation and Data Fusion within the Statistical Learning Paradigm.
- Author
- D'Ambrosio, Antonio, Aria, Massimo, and Siciliano, Roberta
- Subjects
- STATISTICAL matching, STATISTICS, INFORMATION science, MISSING data (Statistics), DATA editing, ALGORITHMS, PERFORMANCE evaluation, APPROXIMATION theory
- Abstract
The framework of this paper is statistical data editing, specifically how to edit or impute missing or contradictory data and how to merge two independent data sets presenting some lack of information. Assuming a missing-at-random mechanism, this paper provides an accurate tree-based methodology for both missing data imputation and data fusion that is justified within the Statistical Learning Theory of Vapnik. It considers both an incremental variable imputation method, to improve computational efficiency, and boosted trees, to gain in prediction accuracy with respect to other methods. As a result, the best approximation of the structural risk (also known as irreducible error) is reached, thus minimizing the generalization (or prediction) error of imputation. Moreover, the methodology is distribution-free: it holds independently of the underlying probability law generating the missing data values. Performance analysis is discussed considering simulation case studies and real-world applications. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
10. Large deviations of realized volatility
- Author
- Kanaya, Shin and Otsu, Taisuke
- Subjects
- DEVIATION (Statistics), MARKET volatility, STATISTICS, APPROXIMATION theory, PROBABILITY theory, CENTRAL limit theorem
- Abstract
This paper studies large and moderate deviation properties of a realized volatility statistic of high frequency financial data. We establish a large deviation principle for the realized volatility when the number of high frequency observations in a fixed time interval increases to infinity. Our large deviation result can be used to evaluate tail probabilities of the realized volatility. We also derive a moderate deviation rate function for a standardized realized volatility statistic. The moderate deviation result is useful for assessing the validity of normal approximations based on the central limit theorem. In particular, it clarifies that there exists a trade-off between the accuracy of the normal approximations and the path regularity of an underlying volatility process. Our large and moderate deviation results complement the existing asymptotic theory on high frequency data. In addition, the paper contributes to the literature of large deviation theory in that the theory is extended to a high frequency data environment. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
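The statistic studied above is simple to state: realized volatility is the sum of squared high-frequency returns over a fixed interval. A sketch with hypothetical intraday log-prices:

```python
import math

def realized_volatility(log_prices):
    """Realized volatility: the sum of squared high-frequency
    log-returns over the interval."""
    returns = [b - a for a, b in zip(log_prices, log_prices[1:])]
    return sum(r * r for r in returns)

# hypothetical intraday log-price path
prices = [0.00, 0.01, -0.01, 0.02, 0.01]
rv = realized_volatility(prices)
# returns are 0.01, -0.02, 0.03, -0.01 -> RV = 0.0001+0.0004+0.0009+0.0001
assert abs(rv - 0.0015) < 1e-12
vol = math.sqrt(rv)  # realized volatility on the standard-deviation scale
```

The paper's large-deviation results concern the tail behavior of this quantity as the number of intraday observations grows.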
11. Another look at statistical learning theory and regularization
- Author
- Cherkassky, Vladimir and Ma, Yunqian
- Subjects
- LEARNING, STATISTICS, MATHEMATICAL functions, APPROXIMATION theory, REGRESSION analysis, ERROR analysis in mathematics, PREDICTION models, PROBABILITY theory
- Abstract
The paper reviews and highlights distinctions between function-approximation (FA) and VC theory and methodology, mainly within the setting of regression problems and a squared-error loss function, and illustrates empirically the differences between the two when data is sparse and/or input distribution is non-uniform. In FA theory, the goal is to estimate an unknown true dependency (or ‘target’ function) in regression problems, or posterior probability in classification problems. In VC theory, the goal is to ‘imitate’ unknown target function, in the sense of minimization of prediction risk or good ‘generalization’. That is, the result of VC learning depends on (unknown) input distribution, while that of FA does not. This distinction is important because regularization theory originally introduced under clearly stated FA setting [Tikhonov, N. (1963). On solving ill-posed problem and method of regularization. Doklady Akademii Nauk USSR, 153, 501–504; Tikhonov, N., & V. Y. Arsenin (1977). Solution of ill-posed problems. Washington, DC: W. H. Winston], has been later used under risk-minimization or VC setting. More recently, several authors [Evgeniou, T., Pontil, M., & Poggio, T. (2000). Regularization networks and support vector machines. Advances in Computational Mathematics, 13, 1–50; Hastie, T., Tibshirani, R., & Friedman, J. (2001). The elements of statistical learning: Data mining, inference and prediction. Springer; Poggio, T. and Smale, S., (2003). The mathematics of learning: Dealing with data. Notices of the AMS, 50 (5), 537–544] applied constructive methodology based on regularization framework to learning dependencies from data (under VC-theoretical setting). However, such regularization-based learning is usually presented as a purely constructive methodology (with no clearly stated problem setting). This paper compares FA/regularization and VC/risk minimization methodologies in terms of underlying theoretical assumptions.
The control of model complexity, using regularization and using the concept of margin in SVMs, is contrasted in the FA and VC formulations. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
12. Near-exact distributions for the sphericity likelihood ratio test statistic
- Author
- Marques, Filipe J. and Coelho, Carlos A.
- Subjects
- STATISTICS, DISTRIBUTION (Probability theory), ASYMPTOTIC distribution, PROBABILITY theory, APPROXIMATION theory
- Abstract
In this paper three near-exact distributions are developed for the sphericity test statistic. The exact probability density function of this statistic is usually represented through the use of the Meijer G function, which renders the computation of quantiles impossible even for a moderately large number of variables. The main purpose of this paper is to obtain near-exact distributions that lie closer to the exact distribution than the asymptotic distributions while, at the same time, correspond to density and cumulative distribution functions practical to use, allowing for an easy determination of quantiles. In addition to this, two asymptotic distributions that lie closer to the exact distribution than the existing ones were developed. Two measures are considered to evaluate the proximity between the exact and the asymptotic and near-exact distributions developed. As a reference we use the saddlepoint approximations developed by Butler et al. [1993. Saddlepoint approximations for tests of block independence, sphericity and equal variances and covariances. J. Roy. Statist. Soc., Ser. B 55, 171–183] as well as the asymptotic distribution proposed by Box. [Copyright Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
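The statistic whose distribution the paper approximates can be sketched directly: the sphericity likelihood ratio compares the determinant of the sample covariance matrix with a power of its averaged trace, Λ = det(S) / (tr(S)/p)^p. A hypothetical 2x2 case (the paper's near-exact distributions are not reproduced here):

```python
def sphericity_lambda(s):
    """Sphericity LR statistic Lambda = det(S) / (tr(S)/p)^p
    for a 2x2 sample covariance matrix S."""
    p = 2
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    tr = s[0][0] + s[1][1]
    return det / (tr / p) ** p

# a perfectly spherical covariance gives Lambda = 1 (no evidence
# against sphericity); correlation pulls Lambda below 1
spherical = [[2.0, 0.0], [0.0, 2.0]]
correlated = [[2.0, 1.0], [1.0, 2.0]]
assert sphericity_lambda(spherical) == 1.0
assert sphericity_lambda(correlated) == 0.75
```

Computing quantiles of Λ under the null is the hard part, which is exactly where the near-exact distributions of the paper come in.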
13. Reduced Support Vector Machines: A Statistical Theory.
- Author
- Yuh-Jye Lee and Su-Yun Huang
- Subjects
- COMPUTER science, STATISTICS, MATRICES (Mathematics), APPROXIMATION theory, TASK analysis, ALGORITHMS
- Abstract
In dealing with large data sets, the reduced support vector machine (RSVM) was proposed for the practical objective to overcome some computational difficulties as well as to reduce the model complexity. In this paper, we study the RSVM from the viewpoint of sampling design, its robustness, and the spectral analysis of the reduced kernel. We consider the nonlinear separating surface as a mixture of kernels. Instead of a full model, the RSVM uses a reduced mixture with kernels sampled from a certain candidate set. Our main results center on two major themes. One is the robustness of the random subset mixture model. The other is the spectral analysis of the reduced kernel. The robustness is judged by a few criteria as follows: 1) model variation measure; 2) model bias (deviation) between the reduced model and the full model; and 3) test power in distinguishing the reduced model from the full one. For the spectral analysis, we compare the eigenstructures of the full kernel matrix and the approximation kernel matrix. The approximation kernels are generated by uniform random subsets. The small discrepancies between them indicate that the approximation kernels can retain most of the relevant information for learning tasks in the full kernel. We focus on some statistical theory of the reduced set method mainly in the context of the RSVM. The use of a uniform random subset is not limited to the RSVM. This approach can act as a supplemental algorithm on top of a basic optimization algorithm, wherein the actual optimization takes place on the subset-approximated data. The statistical properties discussed in this paper are still valid. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
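The reduced-kernel construction itself is simple to sketch: instead of the full n x n kernel matrix, the RSVM works with the n x m rectangular matrix whose columns correspond to a uniform random subset of size m. Toy data and bandwidth below are hypothetical.

```python
import math
import random

def rbf(u, v, gamma=1.0):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-gamma * (u - v) ** 2)

def reduced_kernel(data, m, rng):
    """n x m reduced kernel: rows index all n points, columns index
    a uniform random subset of size m."""
    subset = rng.sample(data, m)
    matrix = [[rbf(x, z) for z in subset] for x in data]
    return matrix, subset

data = [0.0, 0.5, 1.0, 1.5, 2.0]
rng = random.Random(0)
k_red, subset = reduced_kernel(data, 2, rng)
assert len(k_red) == 5 and all(len(row) == 2 for row in k_red)
# each column of the reduced kernel is a column of the full kernel
full = [[rbf(x, z) for z in data] for x in data]
for j, z in enumerate(subset):
    assert [row[j] for row in k_red] == [full[i][data.index(z)]
                                         for i in range(len(data))]
```

The paper's spectral analysis compares the eigenstructure of this rectangular approximation (via its Gram matrix) with that of the full kernel.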
14. The Impact of Intensity in Surveillance of Cyclical Processes.
- Author
- Andersson, Eva
- Subjects
- STATISTICS, PROBABILITY theory, APPROXIMATION theory, CONTROL theory (Engineering), ESTIMATION theory, MATHEMATICAL models
- Abstract
In many cyclical processes it is of interest to have a system for on-line detection of turning points. This could be detection of the turns in leading economic indicators in order to predict the next turn of the business cycle. In natural family planning we want to detect the peak in the human menstrual cycle in order to predict the most fertile phase. Another example is detection of an influenza outbreak, by monitoring the weekly reports of the number of patients showing influenza-like symptoms. The methodology of statistical surveillance is used here to construct alarm systems, consisting of an alarm statistic and an alarm limit. At each time point a new observation becomes available and the alarm system is used to make a decision as to whether the turning point has occurred or not. The same methodology is used in control charts. Optimal alarm systems are based on the likelihood ratio, the ratio between the likelihood functions for the in-control process and the out-of-control process. The optimal likelihood ratio method for surveillance is based on the assumption that the parametric model for the cyclical process is known. This rather strong assumption can be avoided by using an approximation, the maximum likelihood ratio method, where a non-parametric estimation procedure is used. The estimation is made using only monotonicity restrictions; no parametric restrictions are placed on the cycles. In this way, mis-specification of the parameters is avoided and thus we are sure that the false alarms are really controlled at the nominal level. The aim of this paper is to evaluate how two likelihood ratio based methods for on-line detection (the likelihood ratio method and the maximum likelihood ratio method) react to different assumptions about the intensity, i.e., different assumptions regarding how often we can expect a turn. Information about the previous intensity of the change point process is included in the methods by the empirical density of the change point times.
This paper compares the performances of the likelihood ratio method and the maximum likelihood ratio method when an empirical intensity is used (based on a bell-shaped density) and when a constant intensity is used. In addition, these methods are compared and evaluated for the situation when no information about the time of the turn is used. The results of the study in this paper show that the likelihood ratio method with the empirical intensity works well except for early turns, when the time until detection is very long. For early turns the likelihood ratio method with a constant intensity, without any information about the intensity, gives quicker detection. The maximum likelihood ratio method with the empirical intensity only works well for those turning point times that are the most likely according to the empirical density. Early turns take a long time to detect and late alarms have low predictive value. For the maximum likelihood ratio approach, the constant intensity gives the shortest expected delay and high predictive values. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
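The paper's likelihood-ratio methods are specialized to turning-point detection, but the shared mechanic (an alarm statistic updated at each time point and compared with an alarm limit) can be sketched with the classical one-sided CUSUM chart for an upward mean shift. Parameters here are illustrative, not the paper's.

```python
def cusum_alarm(observations, reference=0.5, limit=2.0):
    """One-sided CUSUM: accumulate evidence of an upward shift and
    alarm as soon as the statistic crosses the alarm limit."""
    g = 0.0
    for t, x in enumerate(observations):
        g = max(0.0, g + x - reference)  # alarm statistic update
        if g > limit:
            return t                     # first alarm time
    return None                          # no alarm raised

in_control = [0.1, -0.2, 0.3, 0.0, 0.2, -0.1]
shifted = [0.1, -0.2, 1.5, 1.4, 1.6, 1.5]
assert cusum_alarm(in_control) is None
assert cusum_alarm(shifted) == 4
```

The likelihood ratio methods of the paper replace this fixed reference value with likelihood ratios between pre-turn and post-turn regimes, optionally weighted by an empirical intensity of turn times.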
15. SUCCESSIVE RANK-ONE APPROXIMATIONS FOR NEARLY ORTHOGONALLY DECOMPOSABLE SYMMETRIC TENSORS.
- Author
- CUN MU, HSU, DANIEL, and GOLDFARB, DONALD
- Subjects
- APPROXIMATION theory, TENSOR algebra, SIGNAL processing, MACHINE learning, STATISTICS, PERTURBATION theory
- Abstract
Many idealized problems in signal processing, machine learning, and statistics can be reduced to the problem of finding the symmetric canonical decomposition of an underlying symmetric and orthogonally decomposable (SOD) tensor. Drawing inspiration from the matrix case, the successive rank-one approximation (SROA) scheme has been proposed and shown to yield this tensor decomposition exactly, and a plethora of numerical methods have thus been developed for the tensor rank-one approximation problem. In practice, however, the inevitable errors (say) from estimation, computation, and modeling necessitate that the input tensor can only be assumed to be a nearly SOD tensor--i.e., a symmetric tensor slightly perturbed from the underlying SOD tensor. This paper shows that even in the presence of perturbation, SROA can still robustly recover the symmetric canonical decomposition of the underlying tensor. It is shown that when the perturbation error is small enough, the approximation errors do not accumulate with the iteration number. Numerical results are presented to support the theoretical findings. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
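For the matrix (order-2) case that the abstract says inspired SROA, the scheme is: find the dominant rank-one component by power iteration, subtract it off, and repeat. A sketch on a symmetric 2x2 matrix (the tensor generalization is not shown):

```python
import math

def power_iteration(a, iters=100):
    """Dominant eigenpair of a symmetric 2x2 matrix via power iteration."""
    v = [1.0, 1.0]
    for _ in range(iters):
        w = [a[0][0] * v[0] + a[0][1] * v[1],
             a[1][0] * v[0] + a[1][1] * v[1]]
        norm = math.hypot(w[0], w[1])
        v = [w[0] / norm, w[1] / norm]
    lam = sum(v[i] * sum(a[i][j] * v[j] for j in range(2)) for i in range(2))
    return lam, v

def sroa(a):
    """Successive rank-one approximation: extract the dominant
    rank-one term, deflate the matrix, and repeat."""
    lam1, v1 = power_iteration(a)
    deflated = [[a[i][j] - lam1 * v1[i] * v1[j] for j in range(2)]
                for i in range(2)]
    lam2, v2 = power_iteration(deflated)
    return (lam1, v1), (lam2, v2)

a = [[3.0, 0.0], [0.0, 1.0]]
(l1, v1), (l2, v2) = sroa(a)
assert abs(l1 - 3.0) < 1e-8 and abs(l2 - 1.0) < 1e-8
```

The paper's contribution is the perturbation analysis: if the input is only approximately orthogonally decomposable, the errors of these successive extractions do not accumulate.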
16. MOMENTS OF THE DISTRIBUTION OF SAMPLE SIZE IN A SPRT.
- Author
- Ghosh, B. K.
- Subjects
- MOMENTS method (Statistics), STATISTICAL sampling, ARITHMETIC, DISTRIBUTION (Probability theory), DIFFERENTIABLE functions, PROBABILITY theory, APPROXIMATION theory, EQUATIONS, STATISTICS
- Abstract
The article discusses moments of the distribution of the sample size N in a sequential probability ratio test (SPRT). The present paper provides the variance and the third and fourth moments of N. The details are worked out in five common applications of the SPRT. The relation of the variance of N to the truncation of a SPRT is also discussed. A. Wald indicated in passing how one can obtain the moments of N, but in the only published work in which the author has encountered a general expression for the variance of N, that expression is incorrect. The moments can be obtained by using J. Wolfowitz's results or by differentiating Wald's fundamental identity twice. In many practical applications of the SPRT, μ and the moments in the derived equations are differentiable functions of a real-valued parameter. The limiting expressions for the moments can then be determined by standard methods of mathematical analysis. However, for the third and fourth moments the actual technique may involve an excessive amount of arithmetic.
- Published
- 1969
- Full Text
- View/download PDF
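The closed-form moment expressions are not reproduced in this record; as a hedged illustration, the distribution of the SPRT sample size N and its empirical moments can be examined by simulation. The setting below (testing N(0,1) against N(1,1) on data from the alternative, with Wald's approximate thresholds for α = β = 0.05) is a hypothetical example, not one of the paper's five applications.

```python
import math
import random

def sprt_sample_size(rng, mu0=0.0, mu1=1.0, alpha=0.05, beta=0.05):
    """Run one SPRT of N(mu0,1) vs N(mu1,1) on data drawn from
    N(mu1,1); return the sample size N at the decision."""
    a = math.log(beta / (1 - alpha))  # lower log-threshold
    b = math.log((1 - beta) / alpha)  # upper log-threshold
    llr, n = 0.0, 0
    while a < llr < b:
        x = rng.gauss(mu1, 1.0)
        # log-likelihood-ratio increment for one Gaussian observation
        llr += (mu1 - mu0) * x - (mu1 ** 2 - mu0 ** 2) / 2.0
        n += 1
    return n

rng = random.Random(42)
samples = [sprt_sample_size(rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((n - mean) ** 2 for n in samples) / len(samples)
assert all(n >= 1 for n in samples)
assert var >= 0.0
```

Wald's first-moment approximation puts E[N] near ((1-β)·b + β·a) divided by the expected log-likelihood-ratio increment (0.5 here), roughly 5 to 6 observations; the simulated moments can be checked against such formulas.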
17. Collective Motions of Heterogeneous Swarms.
- Author
- Szwaykowska, Klementyna, Romero, Luis Mier-y-Teran, and Schwartz, Ira B.
- Subjects
- HETEROGENEOUS computing, ROBOTICS, MATHEMATICAL functions, APPROXIMATION theory, COMPUTER simulation
- Abstract
The emerging collective motions of swarms of interacting agents are a subject of great interest in application areas ranging from biology to physics and robotics. In this paper, we conduct a careful analysis of the collective dynamics of a swarm of self-propelled heterogeneous, delay-coupled agents. We show the emergence of collective motion patterns and segregation of populations of agents with different dynamic properties; both of these behaviors (pattern formation and segregation) emerge naturally in our model, which is based on self-propulsion and attractive pairwise interactions between agents. We derive the bifurcation structure for emergence of different swarming behaviors in the mean field as a function of physical parameters and verify these results through simulation. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
18. IMPROVED TRANSFORMED STATISTICS FOR THE TEST OF ONE FACTOR INDEPENDENCE FROM THE OTHER TWO IN AN R × S × T CONTINGENCY TABLE.
- Author
- Takasumi Kobe, Nobuhiro Taneichi, and Yuri Sekiya
- Subjects
- STATISTICS, CONTINGENCY tables, CHI-squared test, CHI-square distribution, APPROXIMATION theory
- Abstract
We consider φ-divergence statistics Cϕ for the test of one factor independence from the other two in an r × s × t contingency table. Statistics Cϕ include the statistics Ra based on the power divergence as a special case. Statistic R0 is the log likelihood ratio statistic and R1 is Pearson's X2 statistic. Statistic R2/3 corresponds to the statistic for the goodness-of-fit test recommended by Cressie and Read (1984). Statistics Cϕ have the same chi-square limiting distribution under the hypothesis that one factor and the other two are independent. In this paper, assuming that the distribution of Cϕ is continuous, we derive an approximate expression, based on a multivariate Edgeworth expansion, for the distribution of Cϕ under this hypothesis. Using the expression, we propose a new approximation of the distribution of Cϕ. In addition, on the basis of the approximation, we obtain transformed statistics that improve the speed of convergence of Cϕ to its chi-square limiting distribution. By numerical comparison in the case of Ra, we show that the transformed statistics perform well for small samples. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
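The power-divergence family referred to above has a standard closed form (Cressie and Read, 1984). A sketch for a flattened table of observed counts O against expected counts E, with the special cases a → 0 (the log likelihood ratio G²) and a = 1 (Pearson's X²):

```python
import math

def power_divergence(observed, expected, a):
    """Cressie-Read power-divergence statistic R_a.
    a -> 0 gives the log likelihood ratio G^2; a = 1 gives Pearson's X^2."""
    if a == 0:
        return 2.0 * sum(o * math.log(o / e)
                         for o, e in zip(observed, expected))
    return (2.0 / (a * (a + 1.0))) * sum(
        o * ((o / e) ** a - 1.0) for o, e in zip(observed, expected))

obs = [10.0, 20.0, 30.0, 40.0]   # hypothetical observed cell counts
exp = [15.0, 15.0, 35.0, 35.0]   # expected counts under independence
pearson = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
# a = 1 recovers Pearson's X^2 exactly (totals agree here)
assert abs(power_divergence(obs, exp, 1.0) - pearson) < 1e-12
# the recommended a = 2/3 statistic lies between G^2 and X^2 for these data
g2 = power_divergence(obs, exp, 0)
r23 = power_divergence(obs, exp, 2.0 / 3.0)
assert min(g2, pearson) <= r23 <= max(g2, pearson)
```

All members of this family share the same chi-square limit; the paper's transformed statistics accelerate convergence to that limit for small samples.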
19. Maximum Entropy Approximation.
- Author
- Sukumar, N.
- Subjects
- MAXIMUM entropy method, PHYSICS, APPROXIMATION theory, STATISTICS, DISTRIBUTION (Probability theory)
- Abstract
In this paper, the construction of scattered data approximants is studied using the principle of maximum entropy. For under-determined and ill-posed problems, Jaynes's principle of maximum information-theoretic entropy is a means for least-biased statistical inference when insufficient information is available. Consider a set of distinct nodes {x_i} (i = 1, …, n) in R^d, and a point p with coordinate x that is located within the convex hull of the set {x_i}. The convex approximation of a function u(x) is written as u^h(x) = Σ_i φ_i(x) u_i, where the φ_i ≥ 0 are known as shape functions, and u^h must reproduce affine functions (d = 2): Σ_i φ_i = 1, Σ_i φ_i x_i = x, Σ_i φ_i y_i = y. We view the shape functions as a discrete probability distribution, and the linear constraints as the expectation of a linear function. For n > 3, the problem is under-determined. To obtain a unique solution, we compute the φ_i by maximizing the uncertainty H(φ) = −Σ_i φ_i log φ_i, subject to the above three constraints. In this approach, only the nodal coordinates are used, and neither the nodal connectivity nor any user-defined parameters are required to determine the φ_i, which are the defining characteristics of a mesh-free Galerkin approximant. Numerical results for the φ_i are obtained using a convex minimization algorithm, and shape function plots are presented for different nodal configurations. © 2005 American Institute of Physics [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
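In one dimension the construction reduces to a single-parameter problem: maximizing the entropy subject to the constraints gives shape functions of the exponential form φ_i ∝ exp(λ x_i), with the one multiplier λ chosen so that Σ φ_i x_i = x. A sketch with a bisection solve (node layout is hypothetical; the paper's d = 2 case needs a vector of multipliers and a convex minimization instead):

```python
import math

def maxent_shape_functions(nodes, x, tol=1e-12):
    """1-D maximum-entropy shape functions: phi_i proportional to
    exp(lam * x_i), with lam found by bisection so that the
    functions reproduce the point x."""
    def phis(lam):
        w = [math.exp(lam * xi) for xi in nodes]
        z = sum(w)
        return [wi / z for wi in w]

    lo, hi = -60.0, 60.0                  # bracket for the multiplier
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        # the constrained mean is strictly increasing in lam
        if sum(p * xi for p, xi in zip(phis(mid), nodes)) < x:
            lo = mid
        else:
            hi = mid
    return phis((lo + hi) / 2.0)

nodes = [0.0, 1.0, 2.0]
phi = maxent_shape_functions(nodes, 0.7)
assert all(p >= 0.0 for p in phi)                        # non-negativity
assert abs(sum(phi) - 1.0) < 1e-9                        # partition of unity
assert abs(sum(p * xi for p, xi in zip(phi, nodes)) - 0.7) < 1e-6
```

The three asserted properties (non-negativity, partition of unity, linear reproduction) are exactly the constraints stated in the abstract.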
20. Generalized Baskakov-Szász type operators.
- Author
- Agrawal, P.N., Gupta, Vijay, Sathish Kumar, A., and Kajla, Arun
- Subjects
- OPERATOR theory, APPROXIMATION theory, STATISTICS, STOCHASTIC convergence, MATHEMATICAL bounds, ESTIMATION theory
- Abstract
In the present paper, we introduce generalized Baskakov-Szász type operators and study some approximation properties of these operators, e.g., rate of convergence in ordinary and simultaneous approximation, statistical convergence and the estimate of the rate of convergence for absolutely continuous functions having a derivative coinciding a.e. with a function of bounded variation. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
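The generalized operators themselves are not given in the abstract; as a hedged illustration of the Szász-type building block, here is the classical Szász–Mirakyan operator S_n(f; x) = e^{-nx} Σ_k f(k/n)(nx)^k / k!, with the series truncated where the Poisson weights are negligible:

```python
import math

def szasz(f, n, x, kmax=400):
    """Classical Szasz-Mirakyan operator (truncated series).
    The weights are Poisson(n*x) probabilities, accumulated
    incrementally to avoid overflow in (n*x)**k / k!."""
    total = 0.0
    weight = math.exp(-n * x)  # Poisson weight for k = 0
    for k in range(kmax + 1):
        total += f(k / n) * weight
        weight *= (n * x) / (k + 1)
    return total

n, x = 50, 0.5
# the operator reproduces constants and linear functions exactly
assert abs(szasz(lambda t: 1.0, n, x) - 1.0) < 1e-9
assert abs(szasz(lambda t: t, n, x) - x) < 1e-9
# and approximates smoother functions with O(1/n) error:
# S_n(t^2; x) = x^2 + x/n, so the error here is 0.01
err = abs(szasz(lambda t: t * t, n, x) - x * x)
assert err < 0.02
```

The paper's generalized Baskakov-Szász operators modify this construction; the reproduction and rate-of-convergence properties checked above are the kind of approximation properties it studies.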
21. Korovkin type approximation theorems proved via -statistical convergence.
- Author
- Aktuğlu, Hüseyin
- Subjects
- APPROXIMATION theory, MATHEMATICS theorems, STATISTICS, STOCHASTIC convergence, GROUP extensions (Mathematics), MATHEMATICAL sequences
- Abstract
Abstract: The concept of statistical convergence was introduced by H. Fast, and studied by various authors. Recently, by using the idea of statistical convergence, M. Balcerzak, K. Dems and A. Komisarski introduced a new type of convergence for sequences of functions called equistatistical convergence. In the present paper we introduce the concepts of -statistical convergence and -statistical convergence of order . We show that -statistical convergence is a non-trivial extension of ordinary and statistical convergences. Moreover we show that -statistical convergence includes statistical convergence, -statistical convergence, and lacunary statistical convergence. We also introduce the concept of -equistatistical convergence which is a non-trivial extension of equistatistical convergence. Moreover, we prove that -equistatistical convergence lies between -statistical pointwise convergence and -statistical uniform convergence. Finally we prove Korovkin type approximation theorems via -statistical uniform convergence of order and -equistatistical convergence of order . [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
22. Advanced Research in the Field of Instruments for Use in the Probabilistic Study of Security.
- Author
-
Vătăsescu, Mihail, Vătăsescu, Mihaela, Vasilescu, Gabriel Dragoş, and Dan Lemle, Ludovic
- Subjects
- *
QUANTITATIVE research , *PROBABILITY theory , *MATHEMATICAL analysis , *NUMERICAL analysis , *APPROXIMATION theory - Abstract
Work systems are complex structures with specific characteristics, which require the use of probabilistic methods based on dedicated instruments, both in the design phase and in the analysis and evaluation stage, to ensure safe working conditions within them. This paper presents a methodological approach to the probabilistic and statistical analysis and assessment of risk scenarios identified in work systems. The statistical approach is based on quantifying what is rational and observable; the probabilistic extrapolation then infers from these statistics the probability of occurrence of less common events that cannot be observed directly. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
23. CS-RBF interpolation of surfaces with vertical faults from scattered data.
- Author
-
Izquierdo, Diego, de Silanes, María Cruz López, Parra, María Cruz, and Torrens, Juan José
- Subjects
- *
DATA analysis , *PROBLEM solving , *NUMERICAL analysis , *RADIAL basis functions , *APPROXIMATION theory , *STATISTICS , PROBLEM solving ability testing - Abstract
Abstract: In this paper, the problem of the interpolation of explicit surfaces with vertical faults from scattered data is studied. A new interpolation scheme for compactly supported radial basis functions is proposed which is well adapted to our purpose. This method is based on the construction of an interpolant of the same type in dimension 3 by moving the centers to a convenient auxiliary bivariate explicit surface. Several numerical and graphical examples are shown which illustrate the efficiency of the method. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
24. ON MULTIPLICATIVE λ-APPROXIMATIONS AND SOME GEOMETRIC APPLICATIONS.
- Author
-
NEWMAN, ILAN and RABINOVICH, YURI
- Subjects
- *
SET theory , *APPROXIMATION theory , *FUNCTIONAL analysis , *DIMENSION reduction (Statistics) , *STATISTICS - Abstract
Let F be a set system over an underlying finite set X, and let μ be a nonnegative measure over X; i.e., for every S ⊆ X, μ(S) = Σ_{x∈S} μ(x). A measure μ* on X is called a multiplicative λ-approximation of μ on (F,X) if for every S ∈ F it holds that aμ(S) ≤ μ*(S) ≤ bμ(S), and b/a = λ ≥ 1. The central question raised and partially answered in the present paper is about the existence of meaningful structural properties of F implying that for any μ on X there exists a (1+ε)/(1−ε)-approximation μ* supported on a small subset of X. It turns out that the parameter that governs the support size of a multiplicative approximation is the triangular rank of F, trk(F). It is defined as the maximal length of a sequence of sets {S_i}_{i=1}^t in F such that for all 1 < i ≤ t, S_i ∪j
- Published
- 2013
- Full Text
- View/download PDF
25. Generalized weighted statistical convergence and application.
- Author
-
Belen, C. and Mohiuddine, S.A.
- Subjects
- *
GENERALIZATION , *STOCHASTIC convergence , *STATISTICS , *APPLICATION software , *SUMMABILITY theory , *APPROXIMATION theory - Abstract
Abstract: The object of this paper is to introduce the concepts of weighted λ-statistical convergence and statistical summability . We also establish some inclusion relations and some related results for these new summability methods. Further, we determine a Korovkin type approximation theorem through statistical summability and we show that our approximation theorem is stronger than the classical Korovkin theorem by using classical Bernstein polynomials. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
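For context on the comparison in the entry above: the classical Bernstein polynomials are B_n(f; x) = Σ_{k=0}^n f(k/n) C(n,k) x^k (1−x)^{n−k}, and Korovkin's theorem reduces uniform convergence B_n f → f to checking the three test functions 1, x, x². A small illustrative sketch (not taken from the paper):

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Evaluate the n-th Bernstein polynomial of f at a point x in [0, 1]."""
    k = np.arange(n + 1)
    w = np.array([comb(n, int(j)) for j in k], dtype=float)  # binomial weights
    return float((f(k / n) * w * x**k * (1 - x)**(n - k)).sum())

# Korovkin test function t^2: the identity B_n(t^2; x) = x^2 + x(1-x)/n
# makes the O(1/n) convergence explicit. At x = 0.5, n = 100 this is 0.2525.
approx = bernstein(lambda t: t**2, 100, 0.5)
```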
26. Statistical analysis of the stochastic solution processes of 1-D stochastic Navier–Stokes equation using WHEP technique
- Author
-
El-Tawil, Magdy A. and El-Shekhipy, AbdelHafeez A.
- Subjects
- *
STOCHASTIC analysis , *NAVIER-Stokes equations , *PERTURBATION theory , *STATISTICS , *MATHEMATICAL expansion , *APPROXIMATION theory , *CASE studies - Abstract
Abstract: In this paper, the one dimensional Navier–Stokes equation under stochastic excitation is analysed using the WHEP technique. The Wiener–Hermite expansion properties are used together with the perturbation technique (WHEP) to obtain an approximate formula for the ensemble average, variance and higher statistical moments of the solution process. Some case studies are considered to illustrate the method of analysis. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
27. Operators constructed by means of q-Lagrange polynomials and A-statistical approximation
- Author
-
Mursaleen, M., Khan, Asif, Srivastava, H.M., and Nisar, K.S.
- Subjects
- *
LINEAR operators , *POLYNOMIALS , *LAGRANGE equations , *APPROXIMATION theory , *MATHEMATICAL proofs , *STATISTICS - Abstract
Abstract: In this paper we construct some positive linear operators by means of q-Lagrange polynomials and prove some approximation results via A-statistical convergence. We also define and study the rate of A-statistical approximation of these operators by using the notion of modulus of continuity and Lipschitz class. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
28. High-quality bilingual subtitle document alignments with application to spontaneous speech translation
- Author
-
Tsiartas, Andreas, Ghosh, Prasanta, Georgiou, Panayiotis, and Narayanan, Shrikanth
- Subjects
- *
SPEECH , *TRANSLATING & interpreting , *STATISTICS , *TRANSCRIPTION , *MACHINE theory , *DATA extraction , *COMPARATIVE studies , *APPROXIMATION theory - Abstract
Abstract: In this paper, we investigate the task of translating spontaneous speech transcriptions by employing aligned movie subtitles in training a statistical machine translator (SMT). In contrast to the lexical-based dynamic time warping (DTW) approaches to bilingual subtitle alignment, we align subtitle documents using time-stamps. We show that subtitle time-stamps in two languages are often approximately linearly related, which can be exploited for extracting high-quality bilingual subtitle pairs. On a small tagged data-set, we achieve a performance improvement of 0.21 F-score points compared to the traditional DTW alignment approach and 0.39 F-score points compared to a simple line-fitting approach. In addition, we achieve a performance gain of 4.88 BLEU score points in spontaneous speech translation experiments using the aligned subtitle data obtained by the proposed alignment approach compared to that obtained by the DTW based alignment approach, demonstrating the merit of the time-stamps based subtitle alignment scheme. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
29. Graph characterizations from von Neumann entropy
- Author
-
Han, Lin, Escolano, Francisco, Hancock, Edwin R., and Wilson, Richard C.
- Subjects
- *
GRAPH theory , *VON Neumann algebras , *ENTROPY , *STATISTICS , *THERMODYNAMICS , *APPROXIMATION theory , *SPECTRAL geometry - Abstract
Abstract: In this paper we explore how the von Neumann entropy can be used as a measure of graph complexity. We also develop a simplified form for the von Neumann entropy of a graph that can be computed in terms of node degree statistics. We compare the resulting complexity with Estrada's heterogeneity index, which measures the heterogeneity of the node degree across a graph, and reveal a new link between Estrada's index and the commute time on a graph. Finally, we explore how the von Neumann entropy can be used in conjunction with thermodynamic depth. This measure has been shown to overcome problems associated with iso-spectrality encountered when using complexity measures based on spectral graph theory. Our experimental evaluation of the simplified von Neumann entropy explores (a) the accuracy of the underlying approximation, (b) a comparison with alternative graph characterizations, and (c) the application of the entropy-based thermodynamic depth to characterize protein–protein interaction networks. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
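For reference, the quantity approximated in the entry above can be computed exactly from the Laplacian spectrum. The sketch below uses the convention ρ = L/tr(L); note Han et al. work with a normalized Laplacian and a degree-based approximation, so this is only an illustrative exact baseline.

```python
import numpy as np

def von_neumann_entropy(adj):
    """Exact von Neumann graph entropy: -sum l_i log l_i over the eigenvalues
    of the density matrix rho = L / trace(L), with L the graph Laplacian."""
    L = np.diag(adj.sum(axis=1)) - adj
    lam = np.linalg.eigvalsh(L / np.trace(L))
    lam = lam[lam > 1e-12]                 # convention: 0 log 0 = 0
    return float(-(lam * np.log(lam)).sum())

# Triangle graph: Laplacian eigenvalues 0, 3, 3 -> rho eigenvalues 0, 1/2, 1/2,
# so the entropy is log 2.
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
H = von_neumann_entropy(adj)
```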
30. APPROXIMATION FOR PERIODIC FUNCTIONS VIA STATISTICAL A-SUMMABILITY.
- Author
-
KARAKUŞ, S. and DEMIRCI, K.
- Subjects
- *
APPROXIMATION theory , *MATHEMATICAL functions , *STATISTICS , *SUMMABILITY theory , *STOCHASTIC convergence , *LINEAR operators , *REAL numbers - Abstract
In this paper, using the concept of statistical A-summability, which is stronger than A-statistical convergence, we prove a Korovkin type approximation theorem for sequences of positive linear operators defined on C*(ℝ), the space of all 2π-periodic and continuous functions on ℝ, the set of all real numbers. We also compute the rates of statistical A-summability of sequences of positive linear operators. [ABSTRACT FROM AUTHOR]
- Published
- 2012
31. Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation.
- Author
-
Sugiyama, Masashi, Suzuki, Taiji, and Kanamori, Takafumi
- Subjects
- *
MATCHING theory , *PARAMETER estimation , *PROBABILITY theory , *STATISTICS , *APPROXIMATION theory , *DIVERGENCE theorem , *MATHEMATICAL analysis - Abstract
Estimation of the ratio of probability densities has attracted a great deal of attention since it can be used for addressing various statistical paradigms. A naive approach to density-ratio approximation is to first estimate numerator and denominator densities separately and then take their ratio. However, this two-step approach does not perform well in practice, and methods for directly estimating density ratios without density estimation have been explored. In this paper, we first give a comprehensive review of existing density-ratio estimation methods and discuss their pros and cons. Then we propose a new framework of density-ratio estimation in which a density-ratio model is fitted to the true density-ratio under the Bregman divergence. Our new framework includes existing approaches as special cases, and is substantially more general. Finally, we develop a robust density-ratio estimation method under the power divergence, which is a novel instance in our framework. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
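To make the "direct estimation" idea in the entry above concrete, here is a least-squares sketch in the spirit of uLSIF (squared loss is one instance of the Bregman framework the paper unifies). The kernel centers, width, and regularizer are arbitrary illustrative choices, not the authors' settings.

```python
import numpy as np

def fit_density_ratio(x_nu, x_de, centers, sigma=0.5, lam=1e-3):
    """Fit r(x) = p_nu(x)/p_de(x) ~ sum_l a_l k(x, c_l) with Gaussian kernels
    by minimizing the regularized squared error under p_de (uLSIF-style)."""
    K = lambda x: np.exp(-(x[:, None] - centers[None, :])**2 / (2 * sigma**2))
    H = K(x_de).T @ K(x_de) / len(x_de)    # empirical E_de[k k^T]
    h = K(x_nu).mean(axis=0)               # empirical E_nu[k]
    a = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return lambda x: K(x) @ a              # ratio model; no density estimates

rng = np.random.default_rng(0)
x_nu = rng.normal(0.0, 1.0, 2000)          # numerator sample
x_de = rng.normal(0.0, 2.0, 2000)          # denominator sample
r = fit_density_ratio(x_nu, x_de, centers=np.linspace(-4.0, 4.0, 20))
```

For these two Gaussians the true ratio peaks at the origin (value 2) and decays in the tails, which the fitted model should reproduce qualitatively.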
32. -statistic with side information
- Author
-
Yuan, Ao, He, Wenqing, Wang, Binhuan, and Qin, Gengsheng
- Subjects
- *
STATISTICS , *INFORMATION theory , *EMPIRICAL research , *CONFIDENCE intervals , *NUMERICAL analysis , *APPROXIMATION theory , *DISTRIBUTION (Probability theory) - Abstract
Abstract: In this paper we study -statistics with side information incorporated using the method of empirical likelihood. Some basic properties of the proposed statistics are investigated. We find that by implementing the side information properly, the proposed -statistics can have smaller asymptotic variance than the existing -statistics in the literature. The proposed -statistics can achieve asymptotic efficiency in a formal sense and their weak limits admit a convolution result. We also find that the corresponding -likelihood ratio procedure, as well as the -empirical likelihood based confidence interval construction, do not benefit from incorporating side information, a result that is consistent with the result under the standard empirical likelihood ratio procedure. The impact of incorrect side information implementation in the proposed -statistics is also explored. Simulation studies are conducted to assess the finite sample performance of the proposed method. The numerical results show that with side information implemented, the reduction in asymptotic variance can be substantial in some cases, and the coverage probability of the confidence interval using the -empirical likelihood ratio based method outperforms that of the normal approximation based method, in particular when the underlying distribution is skewed. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
33. Combined small scale high dimensional model representation.
- Author
-
Özay, Evrim and Demiralp, Metin
- Subjects
- *
MATHEMATICAL models , *MULTIVARIATE analysis , *APPROXIMATION theory , *ALGORITHMS , *PERTURBATION theory , *MATHEMATICAL functions , *STATISTICS - Abstract
Nowadays the utilization of High Dimensional Model Representation (HDMR), an algorithm for approximating multivariate functions, is becoming more pervasive in applications of approximation theory. This extensive usage motivates new work on HDMR, to obtain better solutions when approximating multivariate functions. One such effort is the recently developed "Combined Small Scale High Dimensional Model Representation (CSSHDMR)". This new scheme not only optimises HDMR results but also provides good approximation with fewer terms than HDMR does. This paper presents the theory and numerical results of the new method and shows that it is possible to approximate multivariate functions by keeping only the constant term of HDMR. From this aspect CSSHDMR can be used in any scientific problem that involves multivariate functions, from chemistry to statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
34. A Galerkin/neural approach for the stochastic dynamics analysis of nonlinear uncertain systems
- Author
-
Betti, Michele, Biagini, Paolo, and Facchini, Luca
- Subjects
- *
GALERKIN methods , *STOCHASTIC systems , *STRUCTURAL dynamics , *NONLINEAR mechanics , *APPROXIMATION theory , *PARAMETER estimation , *STATISTICS - Abstract
Abstract: The paper presents a Galerkin/neural approach (GNa) for the dynamics analysis of nonlinear mechanical systems affected by parameter randomness. Various procedures are available in the specialised literature to evaluate the response statistics of such systems, but a choice must sometimes be made between simple methods (which often provide unreliable solutions) and more complex methods (which provide accurate solutions at a heavy computational cost). The proposed method, in which a Galerkin approach is combined with a neural one (essentially an RBF expansion approximating the system response), can be a valid alternative to the classical procedures. Furthermore, the proposed Galerkin/neural approach introduces an error parameter which provides an effective criterion for accepting or rejecting the obtained approximate solution. To validate the proposed approach, several nonlinear systems with random parameters are introduced as case studies, and the results (the main moments of the response process) are compared with Monte Carlo Simulation (MCS). [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
35. Extended Sum-of-Sinusoids-Based Simulation for Rician Fading Channels in Vehicular Ad Hoc Networks.
- Author
-
Wang, Yuhao, Xing, Xing, and Chen, Siyue
- Subjects
- *
AD hoc computer networks , *VEHICLES , *SIMULATION methods & models , *APPROXIMATION theory , *STATISTICS , *COMPUTER network security , *COMPUTER network resources - Abstract
In this paper, we propose an extended reference model and two novel Sum-of-Sinusoids (SoS) propagation models (a statistical and a deterministic simulation model) that account for the Rician K-factor and the vehicle speed ratio in Vehicular Ad Hoc Networks (VANETs). Our models cover comprehensive wave-propagation scenes in VANETs, including infrastructure-to-vehicle (I2V) channels in LOS or NLOS environments and inter-vehicle communication (IVC) channels in LOS or NLOS environments. Analysis of the statistical properties of the proposed models shows that the statistics of the new models match those of the reference model over a large range of normalized time delays. The proposed models show improved approximations to the desired auto-correlation and faster convergence as the Rician K-factor and vehicle speed ratio increase. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
36. Statistical approximation by Meyer-König and Zeller operators of finite type based on the -integers
- Author
-
Trif, Tiberiu
- Subjects
- *
STATISTICS , *APPROXIMATION theory , *OPERATOR theory , *STOCHASTIC convergence , *MATHEMATICS theorems , *FUNCTIONAL analysis , *INTEGERS - Abstract
Abstract: Gavrea and Trif [I. Gavrea, T. Trif, The rate of convergence by certain new Meyer-König and Zeller operators of finite type, Rend. Circ. Mat. Palermo (2) Suppl. 76 (2005) 375–394] introduced a sequence of Meyer-König and Zeller operators "of finite type" and investigated the rate of convergence of these operators for continuous functions. In the present paper we generalize these operators to the framework of -calculus. By deriving a sharp estimate of the second moment, we establish a Bohman–Korovkin type approximation theorem for the new -operators via -statistical convergence. We also compute the rate of -statistical convergence of the -operators in terms of Peetre's functional. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
37. Statistical approximation of a kind of Kantorovich type -Szász–Mirakjan operators
- Author
-
Örkcü, Mediha and Doğru, Ogün
- Subjects
- *
STATISTICS , *APPROXIMATION theory , *KANTOROVICH method , *POSITIVE operators , *CONTINUITY , *FUNCTIONAL analysis - Abstract
Abstract: In the present paper, we introduce a Kantorovich type generalization of the -Szász–Mirakjan operators and investigate their -statistical approximation properties. Also, the rate of -statistical convergence of the proposed sequence of linear positive operators is obtained by a weighted modulus of continuity. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
38. Radiation efficiency of submerged rectangular plates
- Author
-
Cheng, Zhao, Fan, Jun, Wang, Bin, and Tang, Weilin
- Subjects
- *
STRUCTURAL plates , *STATISTICS , *UNDERWATER tunnels , *RADIATION , *STIFFNESS (Mechanics) , *VIBRATION (Mechanics) , *APPROXIMATION theory - Abstract
Abstract: The average radiation efficiency of point-excited submerged rectangular plates is investigated using two methods: a deterministic analysis and a statistical approach. In the deterministic analysis, the effect of the mutual impedance due to water loading on the velocity of the plate is illustrated analytically using a modal summation method. The cross-modal contributions to the average radiation efficiency are averaged to zero by averaging over all possible excitation positions on the plate. In the statistical approach, by analyzing the engineering formulae for the average radiation efficiency in air, this paper modifies the formulae to be applicable in water. Numerical comparisons show that the modified formulae reflect the average level in the frequency region controlled by corner modes and are accurate enough in the region controlled by monopole and edge modes. On this basis, approximate expressions for predicting the average radiation efficiency of submerged beam-stiffened rectangular plates are proposed. The main differences between air loading and water loading are considered. Firstly, as dry modes are used in the analysis instead of the real vibration modes in water, the vibration of the stiffened plate is determined not only by the first mode but by several modes simultaneously at low frequencies. Secondly, the "corner mode region" becomes inappropriate when the plate is stiffened. The proposed formulae are validated numerically for stiffened plates of different sizes, thicknesses, and numbers of stiffeners. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
39. -statistical approximation of generalized Szász–Mirakjan–Beta operators
- Author
-
Özarslan, Mehmet Ali and Aktuğlu, Hüseyin
- Subjects
- *
APPROXIMATION theory , *GENERALIZATION , *OPERATOR theory , *MATHEMATICAL proofs , *STOCHASTIC convergence , *POSITIVE systems , *STATISTICS , *ESTIMATION theory - Abstract
Abstract: In this paper we prove a Korovkin type approximation theorem and obtain the rate of convergence of the generalized Szász–Mirakjan–Beta operators by means of modulus of continuity and elements of Lipschitz class. Furthermore we give the -statistical approximation theorem for these operators and investigate the case which provides the best estimation. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
40. Statistical approximation properties of high order operators constructed with the Chan–Chyan–Srivastava polynomials
- Author
-
Erkuş-Duman, Esra and Duman, Oktay
- Subjects
- *
APPROXIMATION theory , *DIFFERENTIAL operators , *POLYNOMIALS , *STOCHASTIC convergence , *STATISTICS , *MULTIVARIATE analysis , *LINEAR operators - Abstract
Abstract: In this paper, by including high order derivatives of functions being approximated, we introduce a general family of the linear positive operators constructed by means of the Chan–Chyan–Srivastava multivariable polynomials and study a Korovkin-type approximation result with the help of the concept of A-statistical convergence, where A is any non-negative regular summability matrix. We obtain a statistical approximation result for our operators, which is more applicable than the classical case. Furthermore, we study the A-statistical rates of our approximation via the classical modulus of continuity. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
41. Maximum of the weighted Kaplan-Meier tests for the two-sample censored data.
- Author
-
Lee, Seung-Hwan
- Subjects
- *
ESTIMATION theory , *COMPUTER simulation , *STATISTICS , *COMBINATORIAL designs & configurations , *APPROXIMATION theory , *MATHEMATICAL functions , *GAUSSIAN processes , *ASYMPTOTIC distribution - Abstract
In this paper, some versatile test procedures are considered, which are useful for the case where the association of survival functions is unclear. These procedures are based on a maximum of a class of the weighted Kaplan-Meier statistics. The weight functions used in the procedures account for both the censored and non-censored data points. Numerical simulations with these weight functions show that, under various levels of censoring, the proposed procedures perform well across a wide range of alternative configurations. Implementation of the proposed procedures is illustrated in a real data example. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
42. Rigid-body point-based registration: The distribution of the target registration error when the fiducial registration errors are given
- Author
-
Seginer, A.
- Subjects
- *
IMAGE registration , *ERRORS , *DIGITAL image processing , *APPROXIMATION theory , *MEDICAL imaging systems , *STATISTICS - Abstract
Abstract: Medical guidance systems often employ several data sources using different coordinate systems. In order to map positions from one coordinate system to the other, these guidance systems usually employ rigid-body point-based registration, using pairs of fiducial points: pairs which describe the same physical positions, but in different coordinate systems. The customary test for the quality of the registration is the fiducial registration error (FRE), which is the root-mean-square of the mismatch between the fiducials in each pair (after the registration). The FRE, however, does not give an answer to the question which is usually of interest, and that is the accuracy at a "target" point which is not part of the set of fiducial points. The statistics of the target registration error (TRE) have been studied before and approximate expressions were derived, but those expressions require as input the unknown true fiducial positions. In the present paper, it is proven that by replacing these unknowable true positions with the known measured positions in the expression for mean-square TRE, a higher order approximation is achieved. In other words, it is shown that more accurate estimates are obtained by using less accurate, but available, inputs. Furthermore, in previous approximations FRE and TRE were shown to be statistically independent, whereas here, due to the higher approximation level, it is shown that a slight dependence exists. Thus, the knowledge of FRE can in fact be employed to improve predictions of the TRE statistics. These results are supported by simulations and hold even for fiducial localization error (FLE) distributions with large standard deviations. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
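The rigid-body point-based registration underlying the FRE/TRE discussion above is classically solved in closed form via SVD (the Arun/Kabsch algorithm). Below is a minimal sketch of that registration step plus the FRE computation; it illustrates the setting only, not the paper's higher-order TRE estimate.

```python
import numpy as np

def register_rigid(A, B):
    """Least-squares rigid transform (R, t) with B ~ A @ R.T + t (Kabsch/Arun)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))       # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def fre(A, B, R, t):
    """Fiducial registration error: RMS fiducial mismatch after registration."""
    resid = A @ R.T + t - B
    return float(np.sqrt((resid ** 2).sum(axis=1).mean()))

# Noiseless check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(1)
A = rng.random((10, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
B = A @ R_true.T + np.array([0.1, 0.2, 0.3])
R, t = register_rigid(A, B)
```

With noisy fiducials the residual FRE is nonzero, which is exactly the quantity whose relation to TRE the paper studies.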
43. Optimal experimental designs when an independent variable is potentially censored.
- Author
-
López-Fidalgo, Jesús and Garcet-Rodríguez, Sandra
- Subjects
- *
MATHEMATICAL variables , *STATISTICS , *APPROXIMATION theory , *OPTIMAL designs (Statistics) , *MATHEMATICAL optimization , *MATHEMATICAL programming - Abstract
This paper considers the problem of constructing optimal approximate designs when an independent variable might be censored. The problem is which design should be applied in practice to obtain the best approximate design when the censoring distribution is assumed known in advance. The approaches for finite and continuous design spaces deserve separate attention. In both cases, equivalence theorems and algorithms are provided in order to calculate optimal designs. Some examples illustrate this approach for D-optimality. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
44. Consensus-based Page's test in sensor networks
- Author
-
Braca, Paolo, Marano, Stefano, Matta, Vincenzo, and Willett, Peter
- Subjects
- *
SENSOR networks , *STATISTICS , *DISTRIBUTION (Probability theory) , *SIGNAL processing , *STATISTICAL matching , *APPROXIMATION theory , *SIMULATION methods & models - Abstract
Abstract: Page's test is a well-known statistical technique to approach quickest detection problems, namely the detection of an abrupt change in the statistical distribution of a certain monitored phenomenon. Running consensus is a recently proposed signal processing procedure aimed at reaching agreement among the nodes of a fully flat network, and its peculiar feature is the simultaneity of two stages: that of acquiring new measurements by the sensors, and that of data fusion involving inter-sensor communications. In this paper we study a quickest detector based on the running consensus scheme, and compare it to a bank of independent Page's tests. Exploiting insights from previous studies, we propose closed-form analytical approximations of the performances of these detection schemes and address a comparison in terms of relative efficiencies. The approximated performance figures are then checked by simulation to validate the analysis and to investigate non-asymptotic scenarios. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
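Page's test referenced above is the CUSUM procedure: accumulate the log-likelihood ratio of the post-change versus pre-change model, clamp at zero, and raise an alarm when a threshold is crossed. A minimal single-sensor sketch (not the consensus variant the paper studies), assuming a known Gaussian mean shift:

```python
import numpy as np

def page_cusum(x, mu0=0.0, mu1=1.0, sigma=1.0, h=10.0):
    """Page's test for a mean shift mu0 -> mu1 in i.i.d. Gaussian data.
    Returns the index of the first alarm, or None if no alarm is raised."""
    W = 0.0
    for n, xn in enumerate(x):
        # log-likelihood ratio of N(mu1, s^2) vs N(mu0, s^2) for one sample
        llr = (mu1 - mu0) * (xn - (mu0 + mu1) / 2.0) / sigma**2
        W = max(0.0, W + llr)              # clamp at zero (Page's recursion)
        if W > h:
            return n
    return None

rng = np.random.default_rng(0)
# Change point at n = 200: mean jumps from 0 to 1.
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)])
alarm = page_cusum(x)
```

The expected detection delay is roughly h divided by the post-change drift of the statistic (here 0.5 per sample), while the clamped pre-change drift makes false alarms exponentially rare in h.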
45. APTA: advanced probability-based tolerance analysis of products.
- Author
-
Gayton, Nicolas, Beaucaire, Paul, Bourinet, Jean-Marc, Duc, Emmanuel, Lemaire, Maurice, and Gauvrit, Laurent
- Subjects
- *
PRODUCT design , *PRODUCT quality , *TECHNICAL specifications , *PROBABILITY theory , *STATISTICS , *APPROXIMATION theory - Abstract
In mass production, the customer defines the constraints of assembled products by functional and quality requirements. The functional requirements are expressed by the designer through the chosen dimensions, which are linked by linear equations in the case of a simple stack-up, or non-linear equations in a more complex case. The customer quality requirements are defined by the maximum allowable number of out-of-tolerance assemblies. The aim of this paper is to show that quality requirements can be accurately predicted at the design stage thanks to a better knowledge of the statistical characteristics of the process. The authors propose an approach named Advanced Probability-based Tolerance Analysis (APTA), which assesses the defect probability (called PD) that the assembled product does not conform to the functional requirements. This probability depends on the requirements (nominal value, tolerance, capability levels) set by the designer for each part of the product and on the knowledge of the production devices that will produce batches with variable statistical characteristics (mean value, standard deviation). The interest of the proposed methodology is shown for linear and non-linear equations related to industrial products manufactured by the RADIALL SA Company. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
46. Statistical summability and approximation by de la Vallée-Poussin mean
- Author
-
Mursaleen, M. and Alotaibi, A.
- Subjects
- *
SUMMABILITY theory , *APPROXIMATION theory , *STATISTICS , *STOCHASTIC convergence , *DENSITY functionals , *MATHEMATICAL analysis - Abstract
Abstract: In this paper we define a new type of summability method via statistical convergence by using the density and -summability. We further apply our new summability method to prove a Korovkin type approximation theorem. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
47. Statistical approximation to max-product operators
- Author
-
Karakuş, Sevda and Demirci, Kamil
- Subjects
- *
APPROXIMATION theory , *MATHEMATICAL sequences , *STATISTICS , *OPERATOR theory , *STOCHASTIC convergence , *MATHEMATICAL models - Abstract
Abstract: In this paper, using the concept of statistical -convergence which is stronger than statistical convergence, we obtain a statistical approximation theorem for a general sequence of max-product operators, including Shepard type operators, although its classical limit fails. We also compute the corresponding statistical rates of the approximation. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
48. Efficient Estimation of Crosstalk Statistics in Random Wire Bundles With Lacing Cords.
- Author
-
Bellan, Diego and Pignari, Sergio A.
- Subjects
- *
ESTIMATION theory , *CROSSTALK , *ELECTRIC extension cords , *MULTICONDUCTOR transmission lines , *STATISTICS , *NUMERICAL analysis , *ELECTRONIC modulation , *APPROXIMATION theory - Abstract
This paper deals with crosstalk estimation in a set of individually insulated wires laced together to form a bundle and running above a ground plane. A simplified and numerically efficient multiconductor transmission line model is developed, involving a single reference cross section with pseudocircular shape and a staircase approximation for the wire trajectories. The presence of lacing cords along the bundle is modeled as a sinusoidal modulation of the reference cross-section dimension. It is shown that crosstalk in wire bundles is mainly affected by the relative positions of the generator and receptor wires, whose paths must be accurately represented as smooth curved trajectories, whereas it proves substantially insensitive to rough modeling of the paths of the remaining wires. The weak random nonuniformity of the generator and receptor wires proves to be the fundamental property to reproduce in order to obtain physically sound crosstalk predictions. This is enforced by deducing the staircase approximation of the generator and receptor wires from a preliminary representation of only that wire pair in terms of splines with a very small number of nodes. In contrast, the staircase trajectories of the remaining wires are obtained by direct mapping onto the reference cross section. Model performance is assessed by comparing crosstalk predictions against measurement data and theoretical results available in the technical literature. Predictions compare appreciably well with measurements, also in the most critical case of high-impedance loads. The coupling distribution shows a nonsymmetrical structure and a long tail to the right, as several other crosstalk analyses have shown. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
49. On the Linear Relationship between Loop Current Retreat Latitude and Eddy Separation Period.
- Author
-
Lugo-Fernández, Alexis and Leben, Robert R.
- Subjects
- *
OCEAN currents , *EDDIES , *ARTIFICIAL satellites , *STATISTICAL correlation , *VORTEX motion , *APPROXIMATION theory , *MATHEMATICAL functions , *EMPIRICAL research , *ALTIMETERS - Abstract
A linear correlation exists between the retreat latitude of the Loop Current following eddy separation and the subsequent eddy separation period. This empirical relationship was first identified in satellite altimeter-derived Loop Current metrics. In this paper, a simple vorticity model of the Loop Current is used to provide a semitheoretical basis for this relationship. After suitable scaling approximations, the theory predicts that the LC separation period is a linear function of retreat latitude, which agrees well with altimeter-derived empirical results. Specifically, the predicted slope and y intercept agree to within 9% and 2%, respectively, with the altimetry-derived values. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
50. Existence and decay of solutions to the two-dimensional fractional quasigeostrophic equation.
- Author
-
Pu, Xueke and Guo, Boling
- Subjects
- *
MATHEMATICAL programming , *STATISTICS , *GALERKIN methods , *SOBOLEV spaces , *APPROXIMATION theory , *EXISTENCE theorems , *FRACTIONAL calculus - Abstract
This paper studies the fractional quasigeostrophic equation with modified dissipativity. We prove the global existence of weak solutions by employing the Galerkin approximation method, and when α ∈ (1/2, 1), the weak solution is unique. Finally, a decay estimate for solutions in Sobolev norms is given. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF