1,921 results
Search Results
2. Bayesian two-sided group chain sampling plan for beta binomial distribution under quality regions
- Author
- Hafeez, Waqar and Aziz, Nazrina
- Published
- 2022
3. Improved binomial and Poisson approximations to the Type-A operating characteristic function
- Author
- Chukhrova, Nataliya and Johannssen, Arne
- Published
- 2019
4. The Uses and Usefulness of Binomial Probability Paper
- Author
- John W. Tukey and Frederick Mosteller
- Subjects
Paper, Statistics and Probability, Biometry, Binomial approximation, Statistics as Topic, Negative binomial distribution, Continuity correction, Binomial test, Binomial distribution, Statistics, Humans, Applied mathematics, Central binomial coefficient, Statistics, Probability and Uncertainty, Binomial proportion confidence interval, Probability, Count data, Mathematics
- Abstract
This article describes certain uses of Binomial Probability Paper. This graph paper was designed to facilitate the employment of R. A. Fisher's inverse sine transformation for proportions. The transformation itself is designed to adjust binomially distributed data so that the variance will not depend on the true value of the proportion p, but only on the sample size n. In addition, binomial data so transformed more closely approximate normality than the raw data. The usefulness of plotting binomial data in rectangular coordinates, using a square-root scale for the number observed in each category, was first pointed out by Fisher and Mather [10]. The graph paper under discussion is specially ruled to make this mode of plotting both simple and rapid. A graduated quadrant makes the angular transformation (p = cos²φ or p = sin²φ) easily available at the same time. Most tests of counted data can be made quickly, easily and with what is usually adequate accuracy with this paper. Some 22 examples ar...
- Published
- 1949
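The variance-stabilizing property that motivates this graph paper can be checked numerically. A minimal sketch (plain Python, standard library only; the function name and parameters are illustrative, not from the original article):

```python
import math
import random

def stabilized_variance(p, n, trials=20000, seed=1):
    """Sample variance of arcsin(sqrt(p_hat)) for Binomial(n, p) proportions.

    Fisher's angular transformation makes this variance approximately
    1 / (4 n), independent of the true proportion p.
    """
    rng = random.Random(seed)
    xs = []
    for _ in range(trials):
        successes = sum(rng.random() < p for _ in range(n))
        xs.append(math.asin(math.sqrt(successes / n)))
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

# For n = 50 the target value 1 / (4 n) is 0.005, whatever p is; the
# raw-proportion variance p (1 - p) / n, by contrast, changes with p.
variances = {p: stabilized_variance(p, 50) for p in (0.1, 0.3, 0.5)}
```

Across very different true proportions the transformed variance stays near 1/(4n), which is what makes a fixed-ruling graph paper possible.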
5. Software reliability estimation using Bayesian approach
- Author
- Vasanthi, Thankappan and Arulmozhi, Ganapathy
- Published
- 2013
6. Zero‐modified discrete distributions for operational risk modelling
- Author
- Mouatassim, Younès
- Published
- 2012
7. Some applications of Binomial Probability Paper in genetic analyses
- Author
- J. H. A. Ferguson
- Subjects
Binomial distribution, Statistics, Genetics, Plant Science, Horticulture, Biology, Agronomy and Crop Science, Exposition (narrative)
- Abstract
The article deals with “Binomial Probability Paper”, designed by Mosteller and Tukey. This paper provides a quick approximate method for solving various genetic problems. After a brief mathematical exposition, some uses are given in four examples. The accuracy appears to be surprisingly good; even for small samples (n=8) the accuracy is satisfactory for all practical uses.
- Published
- 1956
8. An enhanced multilevel secure data dissemination approximate solution for future networks.
- Author
- Otoom, Mohammad Mahmood, Jemmali, Mahdi, Sarhan, Akram Y., Achour, Imen, Alsaduni, Ibrahim, and Omri, Mohamed Nazih
- Subjects
BINOMIAL distribution, METAHEURISTIC algorithms, DATA protection
- Abstract
Sensitive data, such as financial, personal, or classified governmental information, must be protected throughout its life cycle. This paper studies the problem of safeguarding transmitted data based on data categorization techniques. This research aims to use a novel routine as a new meta-heuristic to enhance a data-categorization-based traffic classification technique in which private data is classified into multiple confidentiality levels. As a result, two packets belonging to the same confidentiality level cannot be transmitted through two routers simultaneously, ensuring a high level of data protection. Such a problem is NP-hard (non-deterministic polynomial-time hard); therefore, a scheduling algorithm is applied to minimize the total transmission time over the two considered routers. To measure the proposed scheme's performance, two distributions, uniform and binomial, were used to generate packet transmission time datasets. The experimental results show that the most efficient algorithm is the Best-Random Algorithm (BR˜), recording 0.028 s with an average gap of less than 0.001 in 95.1% of cases compared to all proposed algorithms. In addition, BR˜ is compared to the best algorithm proposed in the literature, the Modified Decreasing Estimated-Transmission Time algorithm (MDETA). The results show that BR˜ is the best in 100% of cases, whereas MDETA reaches the best results in only 48%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
9. Distributed Consensus Estimation for Networked Multi-Sensor Systems under Hybrid Attacks and Missing Measurements.
- Author
- Cheng, Zhijian, Yang, Lan, Yuan, Qunyao, Long, Yinren, and Ren, Hongru
- Subjects
DISTRIBUTED algorithms, BINOMIAL distribution, CYBERTERRORISM, RANDOM variables, COMPUTATIONAL complexity, COMMUNICATION models
- Abstract
Cyber-security research on networked multi-sensor systems is crucial due to the vulnerability to various types of cyberattacks. For the development of effective defense measures, attention is required to gain insight into the complex characteristics and behaviors of cyber attacks from the attacker's perspective. This paper aims to tackle the problem of distributed consensus estimation for networked multi-sensor systems subject to hybrid attacks and missing measurements. To account for both random denial of service (DoS) attacks and false data injection (FDI) attacks, a hybrid attack model on the estimator-to-estimator communication channel is presented. The characteristics of missing measurements are defined by random variables that satisfy the Bernoulli distribution. Then a modified consensus-based distributed estimator, integrated with the characteristics of hybrid attacks and missing measurements, is presented. For reducing the computational complexity of the optimal distributed estimation method, a scalable suboptimal distributed consensus estimator is designed. Sufficient conditions are further provided for guaranteeing the stability of the proposed suboptimal distributed estimator. Finally, a simulation experiment on aircraft tracking is executed to validate the effectiveness and feasibility of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
10. On coefficients of a new Ma-Minda type class connected to binomial distribution.
- Author
- Khazali, Salam and Najafzadeh, Shahram
- Subjects
BINOMIAL distribution, COEFFICIENTS (Statistics), MATHEMATICAL convolutions, SET theory, MATHEMATICAL series
- Abstract
In this paper, we define a new Ma-Minda type class based on binomial distribution series. Our investigation will be focused on the coefficients of the function f belonging to that class. [ABSTRACT FROM AUTHOR]
- Published
- 2024
11. scds: computational annotation of doublets in single-cell RNA sequencing data
- Author
- Dennis Kostka and Abha S. Bais
- Subjects
Statistics and Probability, Computer science, Sequencing data, Gene Expression, Biochemistry, Bioconductor, Annotation, Molecular Biology, Base Sequence, Sequence Analysis, RNA, Pattern recognition, Original Papers, Computer Science Applications, Binomial distribution, Computational Mathematics, Identification (information), Computational Theory and Mathematics, RNA, Artificial intelligence, Single-Cell Analysis, Software
- Abstract
Motivation: Single-cell RNA sequencing (scRNA-seq) technologies enable the study of transcriptional heterogeneity at the resolution of individual cells and have an increasing impact on biomedical research. However, it is known that these methods sometimes wrongly consider two or more cells as single cells, and that a number of so-called doublets is present in the output of such experiments. Treating doublets as single cells in downstream analyses can severely bias a study’s conclusions, and therefore computational strategies for the identification of doublets are needed.
Results: With scds, we propose two new approaches for in silico doublet identification: Co-expression based doublet scoring (cxds) and binary classification based doublet scoring (bcds). The co-expression based approach, cxds, utilizes binarized (absence/presence) gene expression data and, employing a binomial model for the co-expression of pairs of genes, yields interpretable doublet annotations. bcds, on the other hand, uses a binary classification approach to discriminate artificial doublets from original data. We apply our methods and existing computational doublet identification approaches to four datasets with experimental doublet annotations and find that our methods perform at least as well as the state of the art, at comparably little computational cost. We observe appreciable differences between methods and across datasets and that no approach dominates all others. In summary, scds presents a scalable, competitive approach that allows for doublet annotation of datasets with thousands of cells in a matter of seconds.
Availability and implementation: scds is implemented as a Bioconductor R package (doi: 10.18129/B9.bioc.scds).
Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2019
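The binomial co-expression idea behind cxds can be illustrated in a few lines. A simplified sketch, not the scds implementation (function names are invented for illustration): under independence, the number of cells in which two binarized genes are both "on" follows Binomial(N, p_a·p_b), so a tail probability flags pairs whose co-expression departs from that expectation.

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed exactly from the pmf."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def coexpression_pvalue(a, b):
    """a, b: binarized (0/1) expression of two genes across the same cells.

    Under independence, the co-expression count is Binomial(N, p_a * p_b);
    a small upper-tail p-value marks pairs co-expressed more than expected.
    """
    n = len(a)
    pa, pb = sum(a) / n, sum(b) / n
    k = sum(x and y for x, y in zip(a, b))
    return binom_sf(k, n, pa * pb)
```

A pair that is always on in the same cells gets a small p-value; a pair whose on/off patterns are unrelated does not.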
12. Based on Knowledge Recognition and Using Binomial Distribution Function to Establish a Mathematical Model of Random Selection of Test Questions in the Test Bank.
- Author
- Chang, Yuan
- Subjects
DISTRIBUTION (Probability theory), MATHEMATICAL functions, MATHEMATICAL models, RANDOM variables, BINOMIAL distribution
- Abstract
With the in-depth development of social reforms, making enterprise online examinations scientific has become increasingly urgent and important. The key to scientific examinations is the automation and rationalization of question setting; the construction and realization of the test question bank is therefore correspondingly important. A central difficulty in realizing a test question bank is how to randomly select questions from a large pool so that the average difficulty, discriminability, and reliability of the resulting test are satisfactory. To address this difficulty, the author draws on experience in constructing test question banks and uses discrete binomially distributed random variables to establish a first mathematical model for question selection. By determining the form of the test questions and the distribution of their difficulty, and then selecting questions with a random function, better results can be achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2021
13. Fast Fusion Clustering via Double Random Projection.
- Author
- Wang, Hongni, Li, Na, Zhou, Yanqiu, Yan, Jingxin, Jiang, Bei, Kong, Linglong, and Yan, Xiaodong
- Subjects
RANDOM projection method, OPTIMIZATION algorithms, BINOMIAL distribution, HIERARCHICAL clustering (Cluster analysis), POINT processes, K-means clustering
- Abstract
In unsupervised learning, clustering is a common starting point for data processing. The convex or concave fusion clustering method is a novel approach that is more stable and accurate than traditional methods such as k-means and hierarchical clustering. However, the optimization algorithm used with this method can be slowed down significantly by the complexity of the fusion penalty, which increases the computational burden. This paper introduces a random projection ADMM algorithm based on the Bernoulli distribution and develops a double random projection ADMM method for high-dimensional fusion clustering. These new approaches significantly outperform the classical ADMM algorithm due to their ability to significantly increase computational speed by reducing complexity and improving clustering accuracy by using multiple random projections under a new evaluation criterion. We also demonstrate the convergence of our new algorithm and test its performance on both simulated and real data examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
14. Comments on the Bernoulli Distribution and Hilbe's Implicit Extra-Dispersion.
- Author
- Griffith, Daniel A.
- Subjects
RANDOM variables, BINOMIAL distribution, BELL pepper, REAL numbers, AUTOCORRELATION (Statistics), INTUITION
- Abstract
For decades, conventional wisdom maintained that binary 0–1 Bernoulli random variables cannot contain extra-binomial variation. Taking an unorthodox stance, Hilbe actively disagreed, especially for correlated observation instances, arguing that the universally adopted diagnostic Pearson or deviance dispersion statistics are insensitive to a variance anomaly in a binary context, and hence simply fail to detect it. However, having the intuition and insight to sense the existence of this departure from standard mathematical statistical theory, but being unable to effectively isolate it, he classified this particular over-/under-dispersion phenomenon as implicit. This paper explicitly exposes his hidden quantity by demonstrating that the variance in/deflation it represents occurs in an underlying predicted beta random variable whose real number values are rounded to their nearest integers to convert to a Bernoulli random variable, with this discretization masking any materialized extra-Bernoulli variation. In doing so, asymptotics linking the beta-binomial and Bernoulli distributions show another conventional wisdom misconception, namely a mislabeling substitution involving the quasi-Bernoulli random variable; this undeniably is not a quasi-likelihood situation. A public bell pepper disease dataset exhibiting conspicuous spatial autocorrelation furnishes empirical examples illustrating various features of this advocated proposition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
15. The Point Binomial and Probability Paper
- Author
- Frank H. Byron
- Subjects
Binomial distribution, Beta negative binomial distribution, Binomial approximation, Poisson binomial distribution, Statistics, Negative binomial distribution, Multinomial distribution, Central binomial coefficient, Negative multinomial distribution, Mathematics
- Published
- 1935
16. The Global Health Security index and Joint External Evaluation score for health preparedness are not correlated with countries' COVID-19 detection response time and mortality outcome
- Author
- Abdinasir Yusuf Osman, Yu-Mei Chang, Camilla T. O. Benfield, Mohammad Hasan, Francine Ntoumi, Richard Kock, Md. Jamal Uddin, Najmul Haider, Alexei Yavlinsky, Alimuddin Zumla, and Osman Dar
- Subjects
China, risk analysis, Epidemiology, Pneumonia, Viral, Population, Negative binomial distribution, Global Health, Rate ratio, surveillance system, Betacoronavirus, Linear regression, Global health, Humans, Medicine, education, Pandemics, Original Paper, SARS-CoV-2, GHS index, JEE, COVID-19, Comorbidity, Confidence interval, Binomial Distribution, Infectious Diseases, Relative risk, pandemic preparedness, Coronavirus Infections, Demography
- Abstract
Global Health Security Index (GHSI) and Joint External Evaluation (JEE) are two well-known health security and related capability indices. We hypothesised that countries with higher GHSI or JEE scores would have detected their first COVID-19 case earlier, and would experience lower mortality outcome compared to countries with lower scores. We evaluated the effectiveness of GHSI and JEE in predicting countries' COVID-19 detection response times and mortality outcome (deaths/million). We used two different outcomes for the evaluation: (i) detection response time, the duration of time to the first confirmed case detection (from 31st December 2019 to 20th February 2020 when every country's first case was linked to travel from China) and (ii) mortality outcome (deaths/million) until 11th March and 1st July 2020, respectively. We interpreted the detection response time alongside previously published relative risk of the importation of COVID-19 cases from China. We performed multiple linear regression and negative binomial regression analysis to evaluate how these indices predicted the actual outcome. The two indices, GHSI and JEE were strongly correlated (r = 0.82), indicating a good agreement between them. However, both GHSI (r = 0.31) and JEE (r = 0.37) had a poor correlation with countries' COVID-19–related mortality outcome. Higher risk of importation of COVID-19 from China for a given country was negatively correlated with the time taken to detect the first case in that country (adjusted R2 = 0.63–0.66), while the GHSI and JEE had minimal predictive value. In the negative binomial regression model, countries' mortality outcome was strongly predicted by the percentage of the population aged 65 and above (incidence rate ratio (IRR): 1.10 (95% confidence interval (CI): 1.01–1.21) while overall GHSI score (IRR: 1.01 (95% CI: 0.98–1.01)) and JEE (IRR: 0.99 (95% CI: 0.96–1.02)) were not significant predictors. 
GHSI and JEE had lower predictive value for detection response time and mortality outcome due to COVID-19. We suggest introduction of a population healthiness parameter, to address demographic and comorbidity vulnerabilities, and reappraisal of the ranking system and methods used to obtain the index based on experience gained from this pandemic.
- Published
- 2020
17. A note on Craig's paper on the minimum of binomial variates
- Author
- B. K. Shah
- Subjects
Statistics and Probability, Binomial approximation, Applied Mathematics, General Mathematics, Negative binomial distribution, Continuity correction, Agricultural and Biological Sciences (miscellaneous), Negative multinomial distribution, Binomial distribution, Beta-binomial distribution, Statistics, Multinomial distribution, Statistics, Probability and Uncertainty, Binomial proportion confidence interval, General Agricultural and Biological Sciences, Mathematics
- Abstract
Craig (1962) described an application of order statistics from a binomial distribution and gave an approximate expression for the mean value of the minimum of two binomial variables having the same probability of success and the same number of trials. In this note we suggest another approximation for the mean and also for the standard deviation of the minimum, using normal order statistics tabulated by Teichroew (1956).
- Published
- 1966
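The normal-order-statistics approximation suggested in this note can be checked by simulation. A rough sketch (assuming the two-sample normal order-statistic value E[min(X, Y)] ≈ np − σ/√π with σ² = np(1 − p); names are illustrative):

```python
import math
import random

def mean_min_binomial(n, p, trials=50000, seed=7):
    """Monte Carlo estimate of E[min(X, Y)] for i.i.d. X, Y ~ Binomial(n, p)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = sum(rng.random() < p for _ in range(n))
        y = sum(rng.random() < p for _ in range(n))
        total += min(x, y)
    return total / trials

n, p = 20, 0.5
estimate = mean_min_binomial(n, p)
# Normal order statistics for two i.i.d. variables give
# E[min] = mean - sigma / sqrt(pi):
approximation = n * p - math.sqrt(n * p * (1 - p) / math.pi)
```

For moderate np the simulated mean of the minimum sits close to the normal-order-statistic value, and well below the common mean np.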
18. Distributed asynchronous measurement system fusion estimation based on inverse covariance intersection algorithm.
- Author
- Guo, Taishan, Wang, Mingquan, Zhou, Shuyu, and Song, Wenai
- Subjects
KALMAN filtering, BINOMIAL distribution, RANDOM sets, ALGORITHMS, DATA integrity, INFORMATION measurement
- Abstract
For state estimation of multi-source asynchronous measurement systems with missing measurements, this paper proposes a distributed sequential inverse covariance intersection (DSICI) fusion algorithm based on a conditional Kalman filtering method. It is mainly divided into a synchronized state-space module, a local filtering module, and a fusion estimation module. The missing measurements occurring in the system are modelled by a set of random variables obeying a Bernoulli distribution. The synchronized state-space module uses a state iteration method to synchronize the asynchronous measurement system at the moment of measurement update, ensuring the integrity of the measurement information. The local filtering module uses a conditional Kalman filtering algorithm for filter estimation. The reliability of the local filtering results is guaranteed because the local estimator is designed to exchange information with neighbouring sensors. The fusion estimation module designs a DSICI fusion algorithm with higher accuracy that satisfies consistency, which fuses the filtering results provided by each sensor when the correlation between multiple sensors is unknown. Simulation examples demonstrate the excellent performance of the proposed algorithm, with a 33% improvement in accuracy over existing algorithms and an iteration time of less than 3 ms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
19. Nonlinear event-based state estimation using particle filter under packet loss.
- Author
- Gasmi, Elhadi, Sid, Mohamed Amine, and Hachana, Oussama
- Subjects
KALMAN filtering, BINOMIAL distribution, NONLINEAR estimation, RANDOM variables, DISCRETE systems, COVARIANCE matrices
- Abstract
In this research paper, we investigate the problem of remote state estimation for nonlinear discrete systems. Specifically, we focus on scenarios where event-triggered sensor schedules are utilized and where packet drops occur between the sensor and the estimator. In the sensor scheduler, the SOD mechanism is proposed to decrease the amount of data transmitted from the sensor to a remote estimator, and the packet drops are modeled with random variables obeying the Bernoulli distribution. As a consequence of packet drops, the assumption of Gaussianity no longer holds at the estimator side. By fully considering the non-linearity and non-Gaussianity of the dynamic system, this paper develops an event-triggered particle filter algorithm to relieve the communication burden and achieve an appropriate estimation accuracy. First, we derive an explicit expression for the likelihood function when an event trigger occurs and the possible occurrence of packet dropout is taken into consideration. Then, using a special form of sequential Monte Carlo algorithm, the posterior distribution is approximated and the corresponding minimum mean-squared error is derived. By contrasting the error covariance matrix with the posterior Cramér–Rao lower bound, the estimator's performance is assessed. An illustrative numerical example shows the effectiveness of the proposed design.
- Limited-bandwidth networked systems require sensor scheduling algorithm design.
- An event-based particle filter is designed in conjunction with scheduling algorithms and packet dropout.
- With the Cramér–Rao lower bound, the proposed scheduling algorithms clearly improve the filter performance.
- Innovation scheduling allows robustness against packet dropouts.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
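The Bernoulli packet-drop model used in this abstract is easy to reproduce in a toy setting. A minimal scalar sketch (all system constants and the fixed gain are invented for illustration; a real design would use the filtering machinery the paper describes):

```python
import random

def run_estimator(arrival_prob, steps=20000, seed=3):
    """Scalar state estimation with Bernoulli packet drops (a toy sketch).

    State: x_{k+1} = 0.95 x_k + w_k.  Measurement y_k = x_k + v_k reaches
    the estimator only when an i.i.d. Bernoulli(arrival_prob) variable is 1;
    on a drop the estimator simply propagates its prediction.
    Returns the mean squared estimation error.
    """
    rng = random.Random(seed)
    a, gain = 0.95, 0.5                          # system matrix, fixed gain
    x, xhat, sse = 0.0, 0.0, 0.0
    for _ in range(steps):
        x = a * x + rng.gauss(0.0, 1.0)          # process noise w_k
        xhat = a * xhat                          # time update (prediction)
        if rng.random() < arrival_prob:          # gamma_k ~ Bernoulli
            y = x + rng.gauss(0.0, 0.5)          # measurement noise v_k
            xhat = xhat + gain * (y - xhat)      # measurement update
        sse += (x - xhat) ** 2
    return sse / steps
```

Raising the arrival probability lowers the estimation error, which is the basic trade-off event-triggered scheduling navigates.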
20. Limit Distributions of Products of Independent and Identically Distributed Random 2 × 2 Stochastic Matrices: A Treatment with the Reciprocal of the Golden Ratio.
- Author
- Chakraborty, Santanu
- Subjects
GOLDEN ratio, STOCHASTIC matrices, BINOMIAL distribution, RANDOM walks
- Abstract
Consider a sequence (X_n)_{n≥1} of i.i.d. 2 × 2 stochastic matrices with each X_n distributed as μ. This μ is described as follows. Let (C_n, D_n)^T denote the first column of X_n and, for a given real r with 0 < r < 1, let r^{-1}C_n and r^{-1}D_n each follow Bernoulli distributions with parameters p_1 and p_2, respectively, where 0 < p_1, p_2 < 1. Clearly, the weak limit of the sequence μ^n, namely λ, is known to exist, and its support is contained in the set of all 2 × 2 rank-one stochastic matrices. In a previous paper, we considered 0 < r ≤ 1/2 and obtained λ explicitly. We showed that λ is supported on countably many points, each with positive λ-mass. Of course, the case 0 < r ≤ 1/2 is tractable, but the case r > 1/2 is very challenging. Considering the extreme nontriviality of this case, we stick to a very special such r, namely r = (√5 − 1)/2 (the reciprocal of the golden ratio), briefly mention the challenges in this nontrivial case, and completely identify λ for a very special situation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
21. An Analysis of an Open Source Binomial Random Variate Generation Algorithm †.
- Author
- Cicirello, Vincent A.
- Subjects
BINOMIAL distribution, PROBABILITY theory, ALGORITHMS, OPEN source software, RANDOMIZATION (Statistics)
- Abstract
The binomial distribution is the probability distribution of the number of successes for a sequence of n independent trials with success probability p. Efficiently generating binomial random variates is important in many modeling and simulation applications, such as in medicine, risk management, and fraud and anomaly detection, among others. A variety of algorithms exist for generating binomial random variates. This paper concerns the algorithm chosen for ρμ, an open source Java library for efficient randomization, which uses a hybrid of two existing binomial random variate algorithms: the BTPE Algorithm (Binomial, Triangle, Parallelogram, Exponential) and the inverse transform for cases that BTPE cannot handle. BTPE uses rejection sampling, and BTPE's authors originally provided an analytical formula for the expected number of iterations in terms of n and p. That expression is complicated to interpret in practical contexts. I explore BTPE by instrumenting ρμ's implementation to empirically analyze its acceptance/rejection behavior to gain further insights into its runtime performance. Although the number of iterations depends upon n and p, my experiments show that the average number of iterations is always under two, and that the average number of random uniform variates required to generate a single random binomial is under four (two per iteration). Thus, when analyzing the runtime of a simulation algorithm that includes steps generating random binomials, one can consider such steps to have a constant runtime. [ABSTRACT FROM AUTHOR]
- Published
- 2023
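For context, the inverse-transform half of the hybrid described above can be sketched in a few lines (this is a generic textbook version, not the ρμ code; BTPE itself, with its triangle/parallelogram/exponential envelopes, is considerably more involved):

```python
import random

def binomial_inverse_transform(n, p, rng):
    """Generate one Binomial(n, p) variate by CDF inversion (assumes 0 < p < 1).

    Walks the pmf with the recurrence
      P(X = k+1) = P(X = k) * (n - k) / (k + 1) * p / (1 - p),
    the simple method libraries typically pair with BTPE for small n * p.
    """
    assert 0.0 < p < 1.0
    u = rng.random()
    q = 1.0 - p
    pk = q ** n                  # P(X = 0)
    k, cdf = 0, pk
    while u > cdf and k < n:
        pk *= (n - k) / (k + 1) * p / q
        k += 1
        cdf += pk
    return k

rng = random.Random(5)
samples = [binomial_inverse_transform(10, 0.3, rng) for _ in range(20000)]
```

The expected number of recurrence steps grows with n·p, which is exactly why a rejection method like BTPE takes over for large n·p.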
22. Limit Distributions for the Estimates of the Digamma Distribution Parameters Constructed from a Random Size Sample.
- Author
- Kudryavtsev, Alexey and Shestakov, Oleg
- Subjects
NEGATIVE binomial distribution, STATISTICAL sampling, SAMPLE size (Statistics), CONTINUOUS distributions, BINOMIAL distribution, ASYMPTOTIC normality, GAMMA distributions
- Abstract
In this paper, we study a new type of distribution that generalizes distributions from the gamma and beta classes that are widely used in applications. The estimators for the parameters of the digamma distribution obtained by the method of logarithmic cumulants are considered. Based on the previously proved asymptotic normality of the estimators for the characteristic index and the shape and scale parameters of the digamma distribution constructed from a fixed-size sample, we obtain a statement about the convergence of these estimators to the scale mixtures of the normal law in the case of a random sample size. Using this result, asymptotic confidence intervals for the estimated parameters are constructed. A number of examples of the limit laws for sample sizes with special forms of negative binomial distributions are given. The results of this paper can be widely used in the study of probabilistic models based on continuous distributions with an unbounded non-negative support. [ABSTRACT FROM AUTHOR]
- Published
- 2023
23. Comparative assessment of different familial aggregation methods in the context of large and unstructured pedigrees
- Author
- Peter P. Pramstaller, Cristian Pattaro, Christian X. Weichenberger, Johannes Rainer, and Francisco S. Domingues
- Subjects
Statistics and Probability, Computer science, Pedigree chart, Context (language use), Penetrance, Familial hypercholesterolemia, Machine learning, Biochemistry, Bioconductor, Null distribution, Humans, Molecular Biology, Statistical hypothesis testing, Molecular Epidemiology, Molecular epidemiology, Genetics and Population Analysis, Family aggregation, Genetic Variation, Original Papers, Computer Science Applications, Pedigree, Computational Mathematics, Binomial Distribution, Computational Theory and Mathematics, Artificial intelligence, Software
- Abstract
Motivation: Familial aggregation analysis is an important early step for characterizing the genetic determinants of phenotypes in epidemiological studies. To facilitate this analysis, a collection of methods to detect familial aggregation in large pedigrees has been made available recently. However, efficacy of these methods in real world scenarios remains largely unknown. Here, we assess the performance of five aggregation methods to identify individuals or groups of related individuals affected by a Mendelian trait within a large set of decoys. We investigate method performance under a representative set of combinations of causal variant penetrance, trait prevalence and number of affected generations in the pedigree. These methods are then applied to assess familial aggregation of familial hypercholesterolemia and stroke, in the context of the Cooperative Health Research in South Tyrol (CHRIS) study.
Results: We find that in some situations statistical hypothesis testing with a binomial null distribution achieves performance similar to methods that are based on kinship information, while kinship based methods perform better when information is available on fewer generations. Potential case families from the CHRIS study are reported and the results are discussed taking into account insights from the performance assessment.
Availability and implementation: The familial aggregation analysis package is freely available at the Bioconductor repository, http://www.bioconductor.org/packages/FamAgg.
Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2018
24. bayNorm: Bayesian gene expression recovery, imputation and normalisation for single cell RNA-sequencing data
- Author
- Philipp Thomas, François Bertaux, Claire Stefanelli, Wenhao Tang, Vahid Shahrezaei, Malika Saint, Samuel Marguerat, The Leverhulme Trust, and Engineering & Physical Science Research Council (EPSRC)
- Subjects
Statistics and Probability, Normalization (statistics), Computer science, Bioinformatics, Bayesian probability, Inference, Gene Expression, Biochemistry, Bayes' theorem, Prior probability, Imputation (statistics), Molecular Biology, In Situ Hybridization, Fluorescence, Sequence Analysis, RNA, Gene Expression Profiling, Pattern recognition, Bayes Theorem, Missing data, Original Papers, Computer Science Applications, Binomial distribution, Computational Mathematics, Computational Theory and Mathematics, RNA, Data mining, Artificial intelligence, Single-Cell Analysis, Likelihood function, Software
- Abstract
Motivation: Normalization of single-cell RNA-sequencing (scRNA-seq) data is a prerequisite to their interpretation. The marked technical variability, high amounts of missing observations and batch effect typical of scRNA-seq datasets make this task particularly challenging. There is a need for an efficient and unified approach for normalization, imputation and batch effect correction.
Results: Here, we introduce bayNorm, a novel Bayesian approach for scaling and inference of scRNA-seq counts. The method’s likelihood function follows a binomial model of mRNA capture, while priors are estimated from expression values across cells using an empirical Bayes approach. We first validate our assumptions by showing this model can reproduce different statistics observed in real scRNA-seq data. We demonstrate using publicly available scRNA-seq datasets and simulated expression data that bayNorm allows robust imputation of missing values generating realistic transcript distributions that match single molecule fluorescence in situ hybridization measurements. Moreover, by using priors informed by dataset structures, bayNorm improves accuracy and sensitivity of differential expression analysis and reduces batch effect compared with other existing methods. Altogether, bayNorm provides an efficient, integrated solution for global scaling normalization, imputation and true count recovery of gene expression measurements from scRNA-seq data.
Availability and implementation: The R package ‘bayNorm’ is published on Bioconductor at https://bioconductor.org/packages/release/bioc/html/bayNorm.html. The code for analyzing data in this article is available at https://github.com/WT215/bayNorm_papercode.
Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2019
25. iRSpot-Pse6NC: Identifying recombination spots in Saccharomyces cerevisiae by incorporating hexamer composition into general PseKNC
- Author
-
Guo-Qing Liu, Feng-Biao Guo, Kuo-Chen Chou, Wei Chen, Hao Lin, Hui Yang, and Wang-Ren Qiu
- Subjects
0301 basic medicine ,Recombination hotspot ,Computer science ,SVM ,Saccharomyces cerevisiae ,Computational biology ,Random hexamer ,Applied Microbiology and Biotechnology ,Genome ,03 medical and health sciences ,Molecular Biology ,Ecology, Evolution, Behavior and Systematics ,Recombination, Genetic ,biology ,Computational Biology ,Cell Biology ,Sequence Analysis, DNA ,biology.organism_classification ,Support vector machine ,Binomial distribution ,030104 developmental biology ,5-step rules ,Key hexamers ,PseKNC ,Homologous recombination ,Recombination ,Recombination spot ,Webserver ,Algorithms ,Software ,Developmental Biology ,Research Paper - Abstract
Meiotic recombination is initiated by meiotic double-strand DNA breaks. In some regions the frequency of DNA recombination is relatively high, while in other regions it is lower: the former are usually called "recombination hotspots", while the latter "recombination coldspots". Information about the hot and cold spots may provide important clues for understanding the mechanism of genome evolution. Therefore, it is important to accurately predict these spots. In this study, we rebuilt the benchmark dataset by unifying its samples to the same length (131 bp). On this foundation and using an SVM (Support Vector Machine) classifier, a new predictor called "iRSpot-Pse6NC" was developed by incorporating the key hexamer features into the general PseKNC (Pseudo K-tuple Nucleotide Composition) via the binomial distribution approach. It has been observed via rigorous cross-validations that the proposed predictor is superior to its counterparts in overall accuracy, stability, sensitivity and specificity. For the convenience of most experimental scientists, the web-server for iRSpot-Pse6NC has been established at http://lin-group.cn/server/iRSpot-Pse6NC, by which users can easily obtain their desired results without the need to go through the detailed mathematical equations involved.
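The "binomial distribution approach" for picking key hexamers is, in essence, an over-representation test: a hexamer is retained when its count in hotspot samples is improbably high under a binomial null. A sketch with hypothetical counts (not the paper's data):

```python
from math import comb

def binom_sf(n, k, p):
    """Right-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: a hexamer occurs 40 times in the benchmark set,
# 30 of them in hotspot samples, while hotspots hold half of the sequence
# data, so the null occurrence probability is p0 = 0.5.
p_value = binom_sf(40, 30, 0.5)
# A small p_value flags the hexamer as hotspot-enriched, so it is kept
# as a feature for the SVM.
```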
- Published
- 2018
26. Iterative learning fault-tolerant control for networked batch processes with event-triggered transmission strategy and data dropouts.
- Author
-
Tang, Jirui and Sheng, Li
- Subjects
ITERATIVE learning control ,FAULT tolerance (Engineering) ,DATA transmission systems ,BINOMIAL distribution ,STOCHASTIC systems ,LINEAR matrix inequalities - Abstract
This paper studies the iterative learning fault-tolerant control (ILFTC) problem for networked batch processes with an event-triggered transmission strategy and data dropouts. During the transmission of the input signal, the event-triggered mechanism is adopted to reduce the number of updated data items. The data dropouts are assumed to obey the Bernoulli random binary distribution. The objective of this paper is to design a state feedback controller such that the system is fault-tolerant and satisfies the robust performance requirement. By combining 2D stochastic system theory and the linear matrix inequality (LMI) technique, some sufficient conditions are given to ensure the existence of the designed controller. Finally, an example of nozzle pressure control is utilized to verify the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
27. RMSE-minimizing confidence intervals for the binomial parameter
- Author
-
Feng, Kexin, Leemis, Lawrence M., and Sasinowska, Heather
- Published
- 2022
- Full Text
- View/download PDF
28. µσ² -Beta and µσ² -Beta Binomial Regression Models.
- Author
-
CEPEDA-CUERVO, EDILBERTO
- Subjects
REGRESSION analysis ,BINOMIAL distribution ,BETA distribution ,SCHOOL absenteeism ,PARAMETERIZATION - Abstract
Copyright of Colombian Journal of Statistics / Revista Colombiana de Estadística is the property of Universidad Nacional de Colombia and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
29. Decision-theoretic designs for a series of trials with correlated treatment effects using the Sarmanov multivariate beta-binomial distribution
- Author
-
Nigel Stallard, Siew Wan Hee, and Nicholas R. Parsons
- Subjects
Statistics and Probability ,Biometry ,Bayesian decision theory ,Population ,01 natural sciences ,law.invention ,Special Issue: ISCB 2016 ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Randomized controlled trial ,Frequentist inference ,law ,Prior probability ,Statistics ,Econometrics ,Humans ,030212 general & internal medicine ,0101 mathematics ,education ,QA ,correlated trials ,Mathematics ,Statistical hypothesis testing ,Bayes estimator ,education.field_of_study ,Clinical Trials as Topic ,Models, Statistical ,General Medicine ,R1 ,3. Good health ,Binomial distribution ,Binomial Distribution ,Treatment Outcome ,Beta-binomial distribution ,backward induction ,bivariate beta distribution ,Sarmanov beta‐binomial ,Multivariate Analysis ,Statistics, Probability and Uncertainty ,Research Paper - Abstract
The motivation for the work in this article is the setting in which a number of treatments are available for evaluation in phase II clinical trials and where it may be infeasible to try them concurrently because the intended population is small. This paper introduces an extension of previous work on decision‐theoretic designs for a series of phase II trials. The program encompasses a series of sequential phase II trials with interim decision making and a single two‐arm phase III trial. The design is based on a hybrid approach where the final analysis of the phase III data is based on a classical frequentist hypothesis test, whereas the trials are designed using a Bayesian decision‐theoretic approach in which the unknown treatment effect is assumed to follow a known prior distribution. In addition, as treatments are intended for the same population it is not unrealistic to consider treatment effects to be correlated. Thus, the prior distribution will reflect this. Data from a randomized trial of severe arthritis of the hip are used to test the application of the design. We show that the design on average requires fewer patients in phase II than when the correlation is ignored. Correspondingly, the time required to recommend an efficacious treatment for phase III is quicker.
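For context, a Sarmanov-type bivariate prior with beta marginals can be written (one common parameterization; the paper's exact form may differ):

```latex
f(p_1, p_2) = f_1(p_1)\, f_2(p_2)\,\bigl[1 + \omega\,(p_1 - \mu_1)(p_2 - \mu_2)\bigr],
```

where $f_i$ is a Beta$(\alpha_i, \beta_i)$ density with mean $\mu_i = \alpha_i/(\alpha_i + \beta_i)$ and $\omega$ controls the correlation between the two treatment effects ($\omega = 0$ recovers independent priors). Compounding each marginal with a binomial likelihood for the trial responses then yields correlated beta-binomial outcomes across the series of trials.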
- Published
- 2018
30. How well do RNA-Seq differential gene expression tools perform in a complex eukaryote? A case study in Arabidopsis thaliana
- Author
-
Kimon, Froussios, Nick J, Schurch, Katarzyna, Mackinnon, Marek, Gierliński, Céline, Duc, Gordon G, Simpson, and Geoffrey J, Barton
- Subjects
Binomial Distribution ,Sequence Analysis, RNA ,Arabidopsis ,Gene Expression ,RNA-Seq ,Original Papers ,Software - Abstract
Motivation RNA-seq experiments are usually carried out in three or fewer replicates. In order to work well with so few samples, differential gene expression (DGE) tools typically assume the form of the underlying gene expression distribution. In this paper, the statistical properties of gene expression from RNA-seq are investigated in the complex eukaryote, Arabidopsis thaliana, extending and generalizing the results of previous work in the simple eukaryote Saccharomyces cerevisiae. Results We show that, consistent with the results in S.cerevisiae, more gene expression measurements in A.thaliana are consistent with being drawn from an underlying negative binomial distribution than either a log-normal distribution or a normal distribution, and that the size and complexity of the A.thaliana transcriptome does not influence the false positive rate performance of nine widely used DGE tools tested here. We therefore recommend the use of DGE tools that are based on the negative binomial distribution. Availability and implementation The raw data for the 17 WT Arabidopsis thaliana datasets is available from the European Nucleotide Archive (E-MTAB-5446). The processed and aligned data can be visualized in context using IGB (Freese et al., 2016), or downloaded directly, using our publicly available IGB quickload server at https://compbio.lifesci.dundee.ac.uk/arabidopsisQuickload/public_quickload/ under ‘RNAseq>Froussios2019’. All scripts and commands are available from github at https://github.com/bartongroup/KF_arabidopsis-GRNA. Supplementary information Supplementary data are available at Bioinformatics online.
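The negative binomial model recommended here is usually written with a mean and a dispersion parameter, so that the variance is mu + phi*mu^2. A self-contained sketch of that parameterization (not the paper's code):

```python
from math import lgamma, log, exp

def nb_logpmf(k, mu, phi):
    """Negative binomial log-pmf with mean mu and dispersion phi
    (variance = mu + phi * mu**2), the form common to DGE tools."""
    r = 1.0 / phi                      # NB size parameter
    return (lgamma(k + r) - lgamma(r) - lgamma(k + 1)
            + r * log(r / (r + mu)) + k * log(mu / (r + mu)))

# Sanity check: the pmf sums to one for mu = 5, phi = 0.5.
total = sum(exp(nb_logpmf(k, 5.0, 0.5)) for k in range(500))
```

When phi approaches 0 the variance collapses to mu and the model reduces to Poisson, which is the boundary case the tested DGE tools guard against.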
- Published
- 2018
31. Estimating error models for whole genome sequencing using mixtures of Dirichlet-multinomial distributions
- Author
-
David J. Winter, Rachel S. Schwartz, Steven H. Wu, Donald F. Conrad, and Reed A. Cartwright
- Subjects
0301 basic medicine ,Statistics and Probability ,Mutation rate ,DNA Copy Number Variations ,Genotyping Techniques ,Genomics ,Biology ,Sensitivity and Specificity ,Biochemistry ,Dirichlet distribution ,03 medical and health sciences ,symbols.namesake ,Statistics ,Humans ,Copy-number variation ,Molecular Biology ,Models, Statistical ,Whole Genome Sequencing ,Genome, Human ,Genetics and Population Analysis ,Original Papers ,Computer Science Applications ,Binomial distribution ,Computational Mathematics ,030104 developmental biology ,Computational Theory and Mathematics ,symbols ,Probability distribution ,Multinomial distribution ,Statistical Distributions - Abstract
Motivation: Accurate identification of genotypes is an essential part of the analysis of genomic data, including in the identification of sequence polymorphisms, linking mutations with disease and determining mutation rates. Biological and technical processes that adversely affect genotyping include copy-number variation, paralogous sequences, library preparation, sequencing error and reference-mapping biases, among others. Results: We modeled the read depth for all data as a mixture of Dirichlet-multinomial distributions, resulting in significant improvements over previously used models. In most cases the best model comprised two distributions. The major-component distribution is similar to a binomial distribution with low error and low reference bias. The minor-component distribution is overdispersed with higher error and reference bias. We also found that sites fitting the minor component are enriched for copy number variants and low complexity regions, which can produce erroneous genotype calls. By removing sites that do not fit the major component, we can improve the accuracy of genotype calls. Availability and Implementation: Methods and data files are available at https://github.com/CartwrightLab/WuEtAl2017/ (doi:10.5281/zenodo.256858). Contact: cartwright@asu.edu Supplementary information: Supplementary data are available at Bioinformatics online.
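A Dirichlet-multinomial arises by compounding: allele proportions are drawn from a Dirichlet, then read counts from a multinomial, which produces the overdispersion relative to a plain multinomial that the minor component captures. A rough illustration with hypothetical parameters (not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(1)

def dirichlet_multinomial(n, alpha, size):
    """Compound draw: p ~ Dirichlet(alpha), counts ~ Multinomial(n, p).
    Smaller alpha values give heavier overdispersion."""
    ps = rng.dirichlet(alpha, size=size)
    return np.stack([rng.multinomial(n, p) for p in ps])

# 5000 sites, 100 reads each, two alleles with symmetric alpha = (2, 2).
reads = dirichlet_multinomial(100, alpha=[2.0, 2.0], size=5000)
```

For a heterozygous site a plain Binomial(100, 0.5) would have variance 25; the Dirichlet-multinomial counts here are far more spread out, mimicking reference bias and error at problematic sites.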
- Published
- 2017
32. Sequence-based predictive modeling to identify cancerlectins
- Author
-
Wei Chen, Hong-Yan Lai, Hua Tang, Hao Lin, and Xin-Xin Chen
- Subjects
0301 basic medicine ,Support Vector Machine ,SVM ,Genomics ,Machine learning ,computer.software_genre ,03 medical and health sciences ,Chen ,cancerlectins ,Lectins ,Neoplasms ,Humans ,Amino Acid Sequence ,binomial distribution ,Databases, Protein ,Sequence ,biology ,business.industry ,optimal tripeptides ,Reproducibility of Results ,Feature description ,biology.organism_classification ,Support vector machine ,Binomial distribution ,030104 developmental biology ,ROC Curve ,Oncology ,Christian ministry ,Artificial intelligence ,business ,computer ,Jackknife resampling ,Algorithms ,Research Paper - Abstract
Lectins are a diverse type of glycoproteins or carbohydrate-binding proteins that have a wide distribution across various species. They can specifically identify and exclusively bind to a certain kind of saccharide group. Cancerlectins are a group of lectins that are closely related to cancer and play a major role in the initiation, survival, growth, metastasis and spread of tumors. Several computational methods have emerged to discriminate cancerlectins from non-cancerlectins, which promotes the study of pathogenic mechanisms and the clinical treatment of cancer. However, the predictive accuracies of most of these techniques are very limited. In this work, by constructing a benchmark dataset based on the CancerLectinDB database, a new amino acid sequence-based strategy for feature description was developed, and the binomial distribution was then applied to screen the optimal feature set. Ultimately, an SVM-based predictor was built to distinguish cancerlectins from non-cancerlectins, and achieved an accuracy of 77.48% with an AUC of 85.52% in jackknife cross-validation. The results revealed that our prediction model performs better than published predictive tools.
- Published
- 2017
33. A novel WGF-LN based edge driven intelligence for wearable devices in human activity recognition.
- Author
-
Menaka, S. R., Prakash, M., Neelakandan, S., and Radhakrishnan, Arun
- Subjects
SUPERVISED learning ,FEATURE extraction ,DEEP learning ,MACHINE learning ,HUMAN activity recognition ,S-matrix theory ,SCATTER diagrams ,BINOMIAL distribution - Abstract
Human activity recognition (HAR) is one of the key applications of health monitoring that requires continuous use of wearable devices to track daily activities. The most efficient supervised machine learning (ML)-based approaches for predicting human activity are based on a continuous stream of sensor data. Sensor data analysis for human activity recognition using conventional algorithms and deep learning (DL) models shows promising results, but evaluating their ambiguity in decision-making is still challenging. In order to solve these issues, the paper proposes a novel Wasserstein gradient flow legonet (WGF-LN)-based human activity recognition system. At first, the input data is pre-processed. From the pre-processed data, the features are extracted using Haar Wavelet mother-Symlet wavelet coefficient scattering feature extraction (HS-WSFE). After that, the features of interest are selected from the extracted features using Binomial Distribution integrated Golden Eagle Optimization (BD-GEO). The important features are then post-processed using the scatter plot matrix method. The post-processed features are finally fed into the WGF-LN for classifying human activities. The experimental results demonstrate the efficacy of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Bi-Univalent Functions Based on Binomial Series-Type Convolution Operator Related with Telephone Numbers.
- Author
-
Bayram, Hasan, Vijaya, Kaliappan, Murugusundaramoorthy, Gangadharan, and Yalçın, Sibel
- Subjects
TELEPHONE numbers ,TELEPHONE operators ,UNIVALENT functions ,BINOMIAL distribution ,ANALYTIC functions - Abstract
This paper introduces two novel subclasses of the function class Σ for bi-univalent functions, leveraging generalized telephone numbers and Binomial series through convolution. The exploration is conducted within the domain of the open unit disk. We delve into the analysis of initial Taylor-Maclaurin coefficients | a 2 | and | a 3 | , deriving insights and findings for functions belonging to these new subclasses. Additionally, Fekete-Szegö inequalities are established for these functions. Furthermore, the study unveils a range of new subclasses of Σ , some of which are special cases, yet have not been previously explored in conjunction with telephone numbers. These subclasses emerge as a result of hybrid-type convolution operators. Concluding from our results, we present several corollaries, which stand as fresh contributions in the domain of involution numbers involving hybrid-type convolution operators. [ABSTRACT FROM AUTHOR]
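The paper works with generalized telephone numbers; the classical (non-generalized) telephone numbers it builds on count involutions of an n-element set and satisfy a simple recurrence:

```python
def telephone(n):
    """Classical telephone (involution) numbers:
    T(n) = T(n-1) + (n-1) * T(n-2), with T(0) = T(1) = 1."""
    a, b = 1, 1                      # T(0), T(1)
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return a if n == 0 else b

first_terms = [telephone(n) for n in range(7)]  # 1, 1, 2, 4, 10, 26, 76
```

The recurrence reflects the two cases for the element n in an involution: it is a fixed point (T(n-1) ways) or it is paired with one of the other n-1 elements (T(n-2) ways each).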
- Published
- 2023
- Full Text
- View/download PDF
35. Sharper Concentration Inequalities for Median-of-Mean Processes.
- Author
-
Teng, Guangqiang, Li, Yanpeng, Tian, Boping, and Li, Jie
- Subjects
EMPIRICAL research ,MOTHERS ,BINOMIAL distribution - Abstract
The Median-of-Mean (MoM) estimation is an efficient statistical method for handling data with contamination. In this paper, we propose a variance-dependent MoM estimation method using the tail probability of a binomial distribution. The bound of this method is better than the classical Hoeffding method under mild conditions. This method is then used to study the concentration of variance-dependent MoM empirical processes and sub-Gaussian intrinsic moment norm. Finally, we give the bound of the variance-dependent MoM estimator with distribution-free contaminated data. [ABSTRACT FROM AUTHOR]
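The MoM estimator itself is simple to state: partition the sample into blocks, average each block, and report the median of the block means. A minimal sketch on contaminated data (illustrative, not the paper's variance-dependent variant):

```python
import numpy as np

rng = np.random.default_rng(2)

def median_of_means(x, k):
    """Split x into k blocks, average each block, and return the median
    of the block means; robust to a minority of contaminated blocks."""
    blocks = np.array_split(np.asarray(x, dtype=float), k)
    return float(np.median([b.mean() for b in blocks]))

# Standard normal data (true mean 0) plus five enormous outliers.
data = np.concatenate([rng.standard_normal(995), np.full(5, 1e6)])
rng.shuffle(data)
naive = data.mean()                 # dragged far from zero by the outliers
robust = median_of_means(data, 20)  # stays near zero
```

With 20 blocks, the 5 outliers can corrupt at most 5 block means, so the median over the remaining clean blocks is unaffected; the binomial tail bound in the paper quantifies exactly how unlikely it is that a majority of blocks is corrupted.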
- Published
- 2023
- Full Text
- View/download PDF
36. Reliability of Ensemble Climatological Forecasts.
- Author
-
Huang, Zeqing, Zhao, Tongtiegang, Tian, Yu, Chen, Xiaohong, Duan, Qingyun, and Wang, Hao
- Subjects
PROBABILITY density function ,CUMULATIVE distribution function ,GAMMA distributions ,PARETO distribution ,BINOMIAL distribution ,FORECASTING ,EXTREME value theory ,DISTRIBUTION (Probability theory) ,CENSORING (Statistics) - Abstract
Ensemble climatological forecasts play a critical part in benchmarking the predictive performance of hydroclimatic forecasts. Accounting for the skewness and censoring characteristics of hydroclimatic variables, ensemble climatological forecasts can be generated by the log, Box-Cox and log-sinh transformations, by the combinations of the Bernoulli distribution with the Gaussian, Gamma, log-normal, generalized extreme value, generalized logistic and Pearson type III distributions and by the non-parametric resampling, empirical cumulative distribution function and kernel density estimation methods. This paper concentrates on the reliability of the 12 types of ensemble climatological forecasts. Specifically, mathematical formulations are presented and large-sample tests are devised to verify the forecast reliability for the Multi-Source Weighted-Ensemble Precipitation version 2 across the globe. Climatological forecasts of monthly precipitation over 18,425 grid cells are generated for 30 years under leave-one-year-out cross validation, leading to 6,633,000 (12 × 18425 × 30) sets of ensemble climatological forecasts. The results point out that the reliability of climatological forecasts varies considerably across the 12 methods, particularly in regions with high hydroclimatic variability. One observation is that climatological forecasts tend to deviate from the distributions of observations when there is inadequate flexibility to fit precipitation data. Another observation is that ensemble spreads can be overly wide when there exist overfits of sample-specific noises in cross validation. Through the tests of global precipitation, the robustness of the log-sinh transformation and the Bernoulli-Gamma distribution is highlighted. Overall, the investigations can serve as guidance on the use of transformations, distributions and non-parametric methods in generating climatological forecasts.
Plain Language Summary: Ensemble climatological forecasts have been extensively used as the benchmark to evaluate forecast skill. That is, forecasts generated by a certain forecasting model are skillful when they outperform climatological forecasts and otherwise they are not. In practice, ensemble climatological forecasts are generated by different methods, including the log, Box-Cox and log-sinh transformations, the combinations of the Bernoulli distribution with the Gaussian, Gamma, log-normal, generalized extreme value, generalized logistic and Pearson type III distributions and the non-parametric resampling, empirical cumulative distribution function and kernel density estimation methods. It is important to investigate the pros and cons of different types of climatological forecasts. Focusing on reliability, that is, statistical consistency between forecasts and observations, this paper has devised large-sample tests of global monthly precipitation. The results show that owing to hydroclimatic variability, different types of climatological forecasts exhibit varying characteristics of reliability. On the one hand, climatological forecasts can deviate from observations when there is inadequate flexibility to fit precipitation data, especially for the Bernoulli-Gaussian distribution. On the other hand, ensemble spreads can be too wide when there exist overfits of sample-specific noises in cross validation. Among the 12 methods, the robustness of the log-sinh transformation and the Bernoulli-Gamma distribution is highlighted. Key Points: Climatological forecasts can be generated by using data transformations, statistical distributions and non-parametric methods. The reliability of climatological forecasts generated by different methods is shown to vary considerably in large-sample tests. The robustness of the log-sinh transformation and the Bernoulli-Gamma distribution is illustrated through the tests of global precipitation. [ABSTRACT FROM AUTHOR]
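The Bernoulli-Gamma construction highlighted above is a two-part model: a Bernoulli draw decides whether the month is wet at all, and a Gamma draw supplies the precipitation amount. A sketch with hypothetical parameters (not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(3)

def bernoulli_gamma(p_wet, shape, scale, size):
    """Climatological ensemble draw: zero precipitation with probability
    1 - p_wet, otherwise a Gamma(shape, scale) amount."""
    wet = rng.random(size) < p_wet
    amounts = rng.gamma(shape, scale, size)
    return np.where(wet, amounts, 0.0)

# e.g. 60% wet months, mean wet-month precipitation = shape * scale = 20.
ens = bernoulli_gamma(p_wet=0.6, shape=2.0, scale=10.0, size=100_000)
```

The point mass at zero handles the censoring of precipitation, while the Gamma part accommodates skewness, which is why this combination proves robust across climates.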
- Published
- 2023
- Full Text
- View/download PDF
37. PHOTON STATISTICS AND NON-LOCAL PROPERTIES OF A TWO-QUBIT-FIELD SYSTEM IN THE EXCITED NEGATIVE BINOMIAL DISTRIBUTION.
- Author
-
ALMARASHI, Abdullah M., ABDEL-KHALEK, Sayed, and KUNDU, Debasis
- Subjects
NEGATIVE binomial distribution ,BINOMIAL distribution ,QUANTUM entropy ,PHOTONS ,PHOTON counting ,FISHER information - Abstract
In this paper, a quantum scheme for a two-qubit system (2QS) and field initially prepared in the excited negative binomial distribution is presented. The field photon statistics is detected from the evolution of the Mandel parameter, while the evolution of von Neumann entropy detects the nonlocal correlation between the 2QS and radiation field. The concurrence is used to detect the qubit-qubit entanglement during the time evolution. The dynamical properties of single-qubit and two-qubit quantum Fisher information are investigated. We visualize the number of photon excitations on the field in negative binomial states with influence of photon success probability. A connection is provided between the dynamical behaviors of these statistical quantities. We have found that the proposed quantities are strongly influenced by the number of excited photons of the field in negative binomial states and photon success probability. [ABSTRACT FROM AUTHOR]
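The Mandel parameter mentioned in this abstract has a compact definition that can be checked numerically; an illustrative sketch (not the paper's quantum-dynamical calculation):

```python
import numpy as np

rng = np.random.default_rng(4)

def mandel_q(photon_counts):
    """Mandel Q = (<(dn)^2> - <n>) / <n>: Q = 0 for Poissonian light,
    Q > 0 for super-Poissonian photon statistics."""
    n = np.asarray(photon_counts, dtype=float)
    return float((n.var() - n.mean()) / n.mean())

poissonian = rng.poisson(10.0, 200_000)
# Negative binomial photon numbers: mean 5, variance 10, so Q should be ~1.
super_poissonian = rng.negative_binomial(5, 0.5, 200_000)
```

Negative binomial photon-number distributions are always super-Poissonian (variance exceeds the mean), consistent with the field states studied in the paper.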
- Published
- 2022
- Full Text
- View/download PDF
38. Backstepping Control Strategy of an Autonomous Underwater Vehicle Based on Probability Gain.
- Author
-
Peng, Yudong, Guo, Longchuan, and Meng, Qinghua
- Subjects
AUTONOMOUS underwater vehicles ,NONLINEAR systems ,BACKSTEPPING control method ,REMOTE submersibles ,PROBABILITY theory ,BINOMIAL distribution ,ADAPTIVE control systems - Abstract
In this paper, an underwater robot system with nonlinear characteristics is studied by a backstepping method. Based on the state preservation problem of an Autonomous Underwater Vehicle (AUV), this paper applies the backstepping probabilistic gain controller to the nonlinear system of the AUV for the first time. Under the comprehensive influence of underwater resistance, turbulence and driving force, the motion of the AUV exhibits strong coupling, strong nonlinearity and an unpredictable state; output feedback can then resolve the problem of the unmeasurable state. In order to achieve a good control effect and extend the cruising range of the AUV, first, the state error is selected as a new control objective. The control of the system is thereby transformed into the selection of system parameters, which greatly simplifies the calculation. Second, this paper introduces the concept of a stochastic backstepping control strategy, in which the robot's actuators work discontinuously: the actuator works only when there is a random disturbance, and the control effect is not diminished. Finally, the backstepping probabilistic gain controller designed for the nonlinear system is applied to the simulation model for verification, and the final result confirms the effectiveness of the controller design. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
39. Memory sampled-data control design for attitude stabilization of uncertain spacecraft with randomly missing measurements.
- Author
-
Moorthy, Janani, Balasubramani, Visakamoorthi, Palanisamy, Muthukumar, and Hur, Sung-ho
- Subjects
ARTIFICIAL satellite attitude control systems ,SPACE vehicles ,LINEAR matrix inequalities ,BINOMIAL distribution ,MATRIX inequalities ,INTEGRAL inequalities ,STABILITY criterion - Abstract
• Memory sampled-data control is applied for the uncertain rigid spacecraft model. • Randomly missing measurement in the system output obeys the Bernoulli distribution. • It tolerates uncertainty and disturbances using the H ∞ technique. • A novel looped and delay-dependent LKF is constructed for the proposed systems. • It gives the minimum disturbance attenuation level compared with existing works. This paper focuses on stabilizing the attitude of the rigid spacecraft under the sampled-data control design technique with constant communication delay. The system dynamics with model uncertainty (perturbation) and missing measurements are considered. The aim is to design a desirable attitude control that ensures the stabilization of the rigid spacecraft with an optimal H ∞ performance level. A novel Lyapunov–Krasovskii functional is developed, incorporating looped characteristics, for the proposed study on the rigid spacecraft model by developing relevant new terms. Making use of the Lyapunov function as well as free-matrix-based integral inequality, sufficient stability criteria are established to assure the stability of the rigid spacecraft model. Furthermore, the desired sampled-data control gain matrices are acquired through the solution of linear matrix inequalities, which guarantees the asymptotic stability of the rigid spacecraft model. Finally, the numerical simulation demonstrates the efficacy of the theoretical study proposed for the rigid spacecraft model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Switching-like event-triggered control of uncertain NCSs under delay distribution and DoS attacks.
- Author
-
Xie, Minxia and Yin, Xiuxia
- Subjects
DENIAL of service attacks ,BINOMIAL distribution ,DATA packeting ,STABILITY criterion ,TIME-varying systems ,FUZZY neural networks ,MATRIX inequalities ,LINEAR matrix inequalities - Abstract
The paper addresses the switching-like event-triggered control of uncertain networked control systems with time-varying delay under DoS attacks. First of all, to reduce the communication burden, a switching-like event-triggered mechanism is designed to automatically select the trigger condition according to whether the system is under DoS attacks, which has the advantage of reducing the number of data packets transmitted. Secondly, unlike the traditional assumption of time-varying delay, here the delay satisfies a condition with known probability, and a novel time-delay system model is proposed for the networked control systems, which can yield a larger upper bound on the delay. Then, by using both the Lyapunov functional method and the linear matrix inequality technique, we obtain sufficient conditions for uncertain networked control systems to be exponentially stable in the mean square sense. Furthermore, under the common limitations of the maximum number of continuous packet losses caused by the DoS attacks and the delay, the stability criterion is derived, which can be used to estimate the communication parameters and the security controller gain. Finally, through two simulation examples, a larger upper bound on the time delay, fewer trigger events and a faster convergence rate are obtained, which verify the validity of our theoretical analysis. • An event-triggered mechanism is designed which can switch the trigger threshold. • A new delay model based on the Bernoulli distribution is introduced. • The Lyapunov function method and linear matrix inequality techniques are used. • The uncertain networked control systems remain robustly exponentially stable in the mean square sense under DoS attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Evaluation of Natural Robustness of Best Constant Weights to Random Communication Breakdowns.
- Author
-
Kenyeres, Martin, Kenyeres, Jozef, and Burget, Radim
- Subjects
WIRELESS sensor networks ,FAULT tolerance (Engineering) ,ROBUST control ,ENERGY consumption ,BINOMIAL distribution - Abstract
One of the most crucial aspects of algorithm design for wireless sensor networks is failure tolerance. A high natural robustness and an effectively bounded execution time are factors that can significantly optimize the overall energy consumption, and therefore a great emphasis is laid on these aspects in many applications from the area of wireless sensor networks. This paper addresses the robustness of the optimized Best Constant weights of Average Consensus with a stopping criterion (i.e. the algorithm is executed in a finite time) and their five variations with a lower mixing parameter (i.e. slower variants) to random communication breakdowns modeled as a stochastic event following a Bernoulli distribution. We choose three metrics, namely the deviation of the least precise final estimates from the average, the convergence rate expressed as the number of iterations to consensus, and the deceleration of each initial setup, in order to evaluate the robustness of various initial setups of Best Constant weights under a varying failure probability and over 30 random geometric graphs of either strong or weak connectivity. Our contribution is to find the most robust initial setup of Best Constant weights according to numerical experiments executed in Matlab. Finally, the experimentally obtained results are discussed, compared to the results from the error-free executions, and our conclusions are compared with the conclusions from related papers. [ABSTRACT FROM AUTHOR]
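Constant-weight average consensus with Bernoulli link failures can be sketched in a few lines (illustrative mixing parameter and graph, not the paper's Best Constant optimization):

```python
import numpy as np

rng = np.random.default_rng(5)

def consensus_run(x0, A, eps, p_fail, iters):
    """Constant-weight average consensus; every undirected link fails
    independently with probability p_fail at each iteration."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        up = np.triu(rng.random(A.shape) > p_fail, 1)
        W = A * (up + up.T)                     # symmetric surviving links
        # x_i += eps * sum_j W_ij (x_j - x_i); symmetry preserves the average
        x = x + eps * (W @ x - W.sum(axis=1) * x)
    return x

# 6-node ring graph with a stable constant mixing parameter eps < 2/lambda_max.
A = np.zeros((6, 6), dtype=int)
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1
x0 = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
x = consensus_run(x0, A, eps=0.3, p_fail=0.2, iters=300)
```

Because failed links drop symmetrically, every iteration preserves the network average exactly; failures only slow convergence, which is the natural-robustness property the paper quantifies.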
- Published
- 2018
- Full Text
- View/download PDF
42. Polyester: simulating RNA-seq datasets with differential transcript expression
- Author
-
Ben Langmead, Jeffrey T. Leek, Alyssa C. Frazee, and Andrew E. Jaffe
- Subjects
Computer science ,Chromosomes, Human, Pair 22 ,genetic processes ,Word error rate ,RNA-Seq ,computer.software_genre ,01 natural sciences ,Biochemistry ,Bioconductor ,010104 statistics & probability ,Software ,Protein Isoforms ,Differential (infinitesimal) ,Regulation of gene expression ,0303 health sciences ,education.field_of_study ,High-Throughput Nucleotide Sequencing ,Original Papers ,Computer Science Applications ,Europe ,Polyester ,Binomial Distribution ,Computational Mathematics ,Computational Theory and Mathematics ,Data mining ,Algorithms ,Statistics and Probability ,Sequence analysis ,Population ,Computational biology ,Set (abstract data type) ,03 medical and health sciences ,Humans ,natural sciences ,0101 mathematics ,education ,Molecular Biology ,030304 developmental biology ,Gene transcript ,Sequence Analysis, RNA ,business.industry ,Gene Expression Profiling ,Computational Biology ,Chromosome ,RNA ,Statistical model ,Genetics, Population ,Gene Expression Regulation ,Haplotypes ,business ,computer - Abstract
Motivation: Statistical methods development for differential expression analysis of RNA sequencing (RNA-seq) requires software tools to assess accuracy and error rate control. Since true differential expression status is often unknown in experimental datasets, artificially constructed datasets must be utilized, either by generating costly spike-in experiments or by simulating RNA-seq data. Results: Polyester is an R package designed to simulate RNA-seq data, beginning with an experimental design and ending with collections of RNA-seq reads. Its main advantage is the ability to simulate reads exhibiting isoform-level differential expression across biological replicates for a variety of experimental designs. Data generated by Polyester are a reasonable approximation to real RNA-seq data, and standard differential expression workflows can recover the differential expression set in the simulation by the user. Availability and implementation: Polyester is freely available from Bioconductor (http://bioconductor.org/). Contact: jtleek@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2015
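The count model underlying RNA-seq simulators of this kind can be sketched as a negative binomial (gamma-Poisson mixture) per transcript, with a user-chosen fold change between groups. This is a hedged illustration of the statistical idea only, in Python rather than R; the function names, parameters, and data are assumptions, not Polyester's API.

```python
import math
import random

# Hedged sketch of the count model behind RNA-seq simulators such as
# Polyester: per-replicate transcript counts drawn from a negative
# binomial (gamma-Poisson mixture), with a user-set fold change between
# groups. Names and parameters are illustrative, not Polyester's API.

def poisson(lam, rng):
    # Knuth's method; adequate for the moderate means used here
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def nb_counts(mean, dispersion, n_reps, rng):
    # NB via gamma-Poisson: Var = mean + dispersion * mean**2
    shape = 1.0 / dispersion
    return [poisson(rng.gammavariate(shape, mean * dispersion), rng)
            for _ in range(n_reps)]

rng = random.Random(42)
baseline = 200.0      # mean count for one transcript in group A
fold_change = 2.0     # the differential expression "set by the user"
group_a = nb_counts(baseline, 0.05, 10, rng)
group_b = nb_counts(baseline * fold_change, 0.05, 10, rng)
```

A differential expression workflow run on many such transcripts (most with `fold_change = 1`) can then be scored against the known truth, which is the evaluation loop the abstract describes.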
43. Event-Triggered Consensus of General Linear Multiagent Systems With Data Sampling and Random Packet Losses.
- Author
-
Wang, Fei, Wen, Guoguang, Peng, Zhaoxia, Huang, Tingwen, and Yu, Yongguang
- Subjects
MULTIAGENT systems ,LINEAR matrix inequalities ,LINEAR systems ,BINOMIAL distribution ,STATISTICAL sampling ,STOCHASTIC systems - Abstract
This paper investigates the event-triggered consensus of linear multiagent systems with periodic data sampling mechanisms, where random packet losses are taken into account. The random packet losses occur in the communication links with a certain probability and follow a Bernoulli distribution. A novel distributed control protocol is designed based on the combined measurement to achieve mean square consensus. By using Riccati inequalities and linear matrix inequalities, an event-triggered condition with fewer parameters is also designed to reduce the number of information updates. The interaction among the control gain matrix, the sampling interval, and the packet loss probability is used to describe the consensus conditions. The maximum sampling interval is presented explicitly. It is shown that the proposed event-triggered strategy with the data sampling mechanism avoids Zeno behavior and continuous monitoring of the states. Simulations are provided to verify the proposed control strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
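The two mechanisms combined in the abstract above, an event-triggered transmission rule and Bernoulli packet losses, can be sketched for a single sampled signal. This is an illustrative toy, not the paper's multiagent protocol: the threshold rule, loss probability, and trajectory below are all made-up assumptions.

```python
import random

# Illustrative sketch (not the paper's protocol): an agent samples its
# state periodically but only broadcasts when the deviation from the
# last successfully transmitted value exceeds a threshold; each
# broadcast is then lost independently with probability p_loss.
def simulate(states, threshold=0.5, p_loss=0.3, seed=7):
    rng = random.Random(seed)
    last_sent = states[0]        # receiver's current view of the agent
    received = [last_sent]
    transmissions = losses = 0
    for x in states[1:]:
        if abs(x - last_sent) > threshold:   # event-triggered condition
            transmissions += 1
            if rng.random() < p_loss:        # Bernoulli packet loss
                losses += 1                  # receiver keeps stale value
            else:
                last_sent = x
        received.append(last_sent)
    return received, transmissions, losses

trajectory = [0.0, 0.2, 0.9, 1.0, 1.8, 1.9, 2.6]
received, tx, lost = simulate(trajectory)
```

The point of the event trigger is visible in `tx`: updates are sent only when the state has drifted, rather than at every sampling instant, which is how the paper reduces the number of information updates.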
44. Process Capability and Performance Indices for Discrete Data.
- Author
-
Alevizakos, Vasileios
- Subjects
PROCESS capability ,BINOMIAL distribution ,NEGATIVE binomial distribution - Abstract
Process capability and performance indices (PCIs and PPIs) are used in industry to provide numerical measures of the capability and performance of various processes. The majority of the literature refers to PCIs and PPIs for continuous data. The aim of this paper is to compute the classical indices for discrete data following a Poisson, binomial, or negative binomial distribution using various transformation techniques. A simulation study under different process situations and comparisons with other existing PCIs for discrete data are also presented. The methodology for computing the indices is easy to use, so one can assess process capability and performance without difficulty. Three examples are further provided to illustrate the application of the transformation techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
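The transform-then-compute idea in the abstract above can be illustrated for Poisson-type count data: stabilize the variance with a square-root transform (the Anscombe variant is used here as one common choice, not necessarily the paper's), then evaluate a classical Cpk-style index on the transformed scale. Data and specification limits below are invented for demonstration.

```python
import math
import statistics

# Hedged illustration of computing a classical capability index for
# discrete (Poisson-type) data: apply the Anscombe square-root
# transform sqrt(x + 3/8) to stabilize the variance, transform the
# spec limits identically, then compute Cpk on the transformed scale.
def cpk_discrete(counts, lsl, usl):
    t = [math.sqrt(c + 3.0 / 8.0) for c in counts]
    t_lsl = math.sqrt(lsl + 3.0 / 8.0)
    t_usl = math.sqrt(usl + 3.0 / 8.0)
    mu, sigma = statistics.fmean(t), statistics.stdev(t)
    # classical Cpk: nearest spec margin over three standard deviations
    return min(t_usl - mu, mu - t_lsl) / (3.0 * sigma)

defect_counts = [4, 6, 5, 3, 7, 5, 4, 6, 5, 4]   # made-up sample
index = cpk_discrete(defect_counts, lsl=0, usl=12)
```

Transforming the limits with the same function as the data is the step that makes the classical continuous-data formula meaningful for counts.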
45. Binomial-discrete Erlang-truncated exponential mixture and its application in cancer disease.
- Author
-
El-Alosey, Alaa R. and Eledum, Hussein
- Subjects
UNCERTAINTY (Information theory) ,CUMULATIVE distribution function ,DISTRIBUTION (Probability theory) ,PROBABILITY theory ,MAXIMUM likelihood statistics ,ORDER statistics ,WEIBULL distribution ,BINOMIAL distribution - Abstract
Among diseases, cancer exhibits the fastest global spread, presenting a substantial challenge for patients, their families, and the communities they belong to. This paper is devoted to modeling such a disease as a special case. A newly proposed distribution called the binomial-discrete Erlang-truncated exponential (BDETE) is introduced. The BDETE is a mixture of the binomial distribution with the number of trials (parameter n) drawn from a discrete Erlang-truncated exponential distribution. A comprehensive mathematical treatment of the proposed distribution is provided, with expressions for its density, cumulative distribution function, survival function, failure rate function, quantile function, moment generating function, Shannon entropy, order statistics, and stress-strength reliability. The distribution's parameters are estimated using the maximum likelihood method. Two real-world lifetime count data sets from cancer disease, both of which are right-skewed and over-dispersed, are fitted using the proposed BDETE distribution to evaluate its efficacy and viability. We expect the findings to become standard works in probability theory and its related fields. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
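The two-stage mixture construction behind BDETE-type models can be sketched generically: draw the number of trials N from a discrete mixing law, then draw the observation from Binomial(N, p). As a stand-in for the paper's discrete Erlang-truncated exponential mixing distribution, a simple geometric law is used here purely for illustration; all parameters are assumptions.

```python
import random

# Sketch of the binomial-mixture construction: N ~ (discrete mixing
# law), X | N ~ Binomial(N, p). The geometric mixing law below is a
# stand-in for the paper's discrete Erlang-truncated exponential.
def sample_mixture(p, rng, mix_mean=8.0):
    # stand-in mixing law: geometric on {0, 1, 2, ...} with mean mix_mean
    q = 1.0 / (1.0 + mix_mean)
    n = 0
    while rng.random() > q:
        n += 1
    # binomial draw realized as n Bernoulli trials
    return sum(1 for _ in range(n) if rng.random() < p)

rng = random.Random(3)
sample = [sample_mixture(0.4, rng) for _ in range(5000)]
# Mixing over N inflates the variance relative to a fixed-n binomial,
# matching the over-dispersion noted for the paper's count data.
```

The over-dispersion is the qualitative point: Var(X) = p²·Var(N) + E[N]·p(1−p) exceeds the mean whenever the mixing law has enough spread, which is why such mixtures fit right-skewed, over-dispersed counts.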
46. Adaptive event-triggered control for almost sure stability for vehicle platooning under interference and stochastic attacks.
- Author
-
Li, Zhicheng, Zhao, Hui, Wang, Yang, Ren, Yu, Chen, Zhilie, and Chen, Chao
- Subjects
ADAPTIVE control systems ,BINOMIAL distribution ,DENIAL of service attacks ,WIRELESS channels ,STABILITY criterion - Abstract
This paper considers the adaptive event-triggered strategy and controller design of vehicle platooning under stochastic attacks and interference in the communication channels. Denial of Service (DoS) models with Bernoulli and Markovian distributions are introduced. In designing the controller, a stability criterion is presented for the stochastic jamming attacks to guide the controller toward almost sure string stability; meanwhile, for the special safety requirements and the reduction of system interference, an asymmetric event-triggered strategy framework is presented that adapts to the transmission environment and the different safety requirements, and that balances the principal concerns in different situations. Finally, an example demonstrates the controller performance of the vehicle platooning with the Bernoulli- and Markovian-distribution DoS models, showing that the presented methods are effective. • Considering the different safety requirements of vehicle platooning and the influence of the SINR, the DoS attack probabilistic model is built by analyzing the wireless communication channel under stochastic jamming attacks and interference among vehicles. • Considering the stability of the system and the jamming attacks, the controller design method can be refreshed in different communication environments and safety requirements to reduce the shock to the mechanical system of the vehicle and avoid collisions. • Considering safety and focusing on the characteristics of the platooning system itself, we present the adaptive event-triggered method, which further reduces the communication frequency, another difference from previous results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
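The two attack models named in the abstract above differ in their loss patterns even at the same average loss rate: i.i.d. Bernoulli drops are memoryless, while a two-state Markov (Gilbert-Elliott style) model produces bursty losses. The sketch below contrasts them; the transition probabilities are made-up parameters, not values from the paper.

```python
import random

# Illustrative contrast of the two DoS/loss models from the abstract:
# i.i.d. Bernoulli drops vs. a two-state Markov chain whose "attack"
# state causes bursty losses. Parameters are invented for illustration.
def bernoulli_losses(n, p_drop, rng):
    return [rng.random() < p_drop for _ in range(n)]

def markov_losses(n, p_enter_attack, p_leave_attack, rng):
    attacked, out = False, []
    for _ in range(n):
        if attacked:
            attacked = rng.random() > p_leave_attack  # stay attacked
        else:
            attacked = rng.random() < p_enter_attack  # attack begins
        out.append(attacked)   # packets are lost while under attack
    return out

rng = random.Random(11)
iid = bernoulli_losses(10000, 0.2, rng)
# stationary loss rate p_enter/(p_enter + p_leave) = 0.05/0.25 = 0.2,
# matching the Bernoulli rate, but losses arrive in bursts of mean 5
bursty = markov_losses(10000, 0.05, 0.2, rng)
```

Burstiness is why the Markovian model stresses a platooning controller differently than the Bernoulli one: consecutive lost packets compound the spacing error before any update gets through.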
47. Optimal distribution-free concentration for the log-likelihood function of Bernoulli variables.
- Author
-
Ren, Zhonggui
- Subjects
CONCENTRATION functions ,BINOMIAL distribution - Abstract
This paper aims to establish distribution-free concentration inequalities for the log-likelihood function of Bernoulli variables, meaning that the tail bounds are independent of the parameters. Moreover, Bernstein's and Bennett's inequalities with optimal constants are obtained. A simulation study shows significant improvements over previous results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
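As background for the abstract above, the classical (non-optimal) Bernstein inequality for a sum of centered Bernoulli(p) variables can be checked numerically: P(S_n − np ≥ t) ≤ exp(−t² / (2(np(1−p) + t/3))). The sketch below compares this textbook bound with an empirical tail frequency; it does not reproduce the paper's sharpened constants, and the parameter values are illustrative.

```python
import math
import random

# Numeric sanity check of the classical Bernstein inequality for
# S_n = sum of n Bernoulli(p) variables:
#   P(S_n - n*p >= t) <= exp(-t^2 / (2*(n*p*(1-p) + t/3)))
def bernstein_bound(n, p, t):
    v = n * p * (1.0 - p)          # sum of the individual variances
    return math.exp(-t * t / (2.0 * (v + t / 3.0)))

def empirical_tail(n, p, t, trials=20000, seed=5):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(1 for _ in range(n) if rng.random() < p)
        if s - n * p >= t:
            hits += 1
    return hits / trials

n, p, t = 50, 0.3, 8.0
bound = bernstein_bound(n, p, t)
freq = empirical_tail(n, p, t)
# The classical bound dominates the observed frequency by a wide
# margin, which is the slack that optimal-constant versions tighten.
```

The gap between `bound` and `freq` illustrates what "optimal constants" buy: the distribution-free bounds of the paper sit closer to the true tail than the textbook inequality.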
48. Discrete quasiprobability distributions involving Bernoulli polynomials.
- Author
-
Almutairi, Bander
- Subjects
BERNOULLI polynomials ,BINOMIAL distribution ,BERNOULLI numbers ,POISSON distribution - Abstract
The aim of this short paper is to present a new family of discrete densities with two parameters based on Bernoulli numbers and polynomials. We use the properties of such numbers in order to compute the first moments and the density of a finite sum of such independent variables. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
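The building blocks of the abstract above, Bernoulli numbers and polynomials, can be computed exactly with the standard recurrence; this utility sketch does not reproduce the paper's two-parameter densities, only the underlying objects.

```python
from fractions import Fraction
from math import comb

# Exact Bernoulli numbers B_0..B_m via the standard recurrence
#   sum_{k=0}^{n} C(n+1, k) B_k = 0  (so B_1 = -1/2 convention),
# and Bernoulli polynomials B_n(x) = sum_k C(n,k) B_{n-k} x^k.
def bernoulli_numbers(m):
    B = [Fraction(1)]
    for n in range(1, m + 1):
        s = sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n))
        B.append(-s / (n + 1))
    return B

def bernoulli_poly(n, x, B=None):
    B = B or bernoulli_numbers(n)
    return sum(comb(n, k) * B[n - k] * Fraction(x) ** k
               for k in range(n + 1))

B = bernoulli_numbers(6)
# B_1 = -1/2, B_2 = 1/6, and odd Bernoulli numbers beyond B_1 vanish
```

Working in `Fraction` keeps the recurrence exact, which matters because Bernoulli numbers are rationals with rapidly growing denominators.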
49. Forecasting binary outcomes in soccer.
- Author
-
Mattera, Raffaele
- Subjects
SOCCER tournaments ,FORECASTING ,SOCCER ,SPORTS statistics ,BINOMIAL distribution ,BINARY sequences ,SOCCER fans ,FUTUROLOGISTS - Abstract
Several studies deal with the development of advanced statistical methods for predicting football match results. These predictions are then used to construct profitable betting strategies. Even if the most popular bets are based on whether one expects a team to win, lose, or draw the next game, nowadays a variety of other outcomes are available for betting purposes. While some of these events are binary in nature (e.g. the occurrence of red cards), others can be recast as binary outcomes. In this paper we propose a simple framework, based on score-driven models, that obtains accurate forecasts of binary outcomes in soccer matches. To show the usefulness of the proposed statistical approach, two applications, to the English Premier League and the Italian Serie A, are provided, predicting red card occurrence, Under/Over, and Goal/No Goal events. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
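The score-driven mechanism named in the abstract above can be sketched for a binary series: a time-varying logit f_t is updated by the scaled score of the Bernoulli log-likelihood, which under a logistic link is simply y_t − p_t. This is a generic GAS-style recursion; the parameter values are illustrative, not estimates from the paper's Premier League or Serie A applications.

```python
import math

# Hedged sketch of a score-driven (GAS-type) filter for binary
# outcomes: f_{t+1} = omega + beta*f_t + alpha*(y_t - p_t), where
# p_t = sigmoid(f_t) is the one-step-ahead event probability.
def score_driven_probs(y, omega=0.0, alpha=0.5, beta=0.9, f0=0.0):
    f, probs = f0, []
    for obs in y:
        p = 1.0 / (1.0 + math.exp(-f))   # logistic link
        probs.append(p)                   # one-step-ahead forecast
        score = obs - p                   # Bernoulli score w.r.t. f
        f = omega + beta * f + alpha * score
    return probs

# a streak of "event occurred" pushes the forecast probability upward
events = [1, 1, 1, 1, 0, 1, 1, 1]
p = score_driven_probs(events)
```

Because the update is driven by the likelihood score, the filter raises the forecast probability after observed events and lowers it after non-events, without requiring any exogenous covariates.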
50. Extended Dissipative Filtering for Persistent Dwell-Time Switched Systems With Packet Dropouts.
- Author
-
Shen, Hao, Xing, Mengping, Yan, Huaicheng, and Park, Ju H.
- Subjects
BINOMIAL distribution ,DISCRETE-time systems ,FILTERS & filtration ,INFORMATION storage & retrieval systems ,MATHEMATICAL decoupling - Abstract
This paper mainly investigates the problem of extended dissipative filter design for switched discrete-time systems in which the variation of switching signals among subsystems is governed by the persistent dwell-time strategy. Considering that, in practical applications, congestion of the communication channels between systems and filters may cause data loss, packet dropouts are taken into account to make the issue under consideration more general. Meanwhile, since the mode information of the system may sometimes be inaccessible, we consider the design of a unified filter comprising mode-dependent and mode-independent filters simultaneously. A white sequence obeying the Bernoulli distribution is used to express the random switching between the filters. The main purpose of this paper is to find a suitable filter design method that ensures the resulting error system is exponentially mean-square stable and extended dissipative. Sufficient conditions are established to guarantee the solvability of the addressed problem. By using an appropriate decoupling method, conditions that can be readily solved are obtained. To demonstrate the correctness and effectiveness of the proposed method, two illustrative examples are given in the final part of this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF