412 results for "Bayes' rule"
Search Results
2. Motivated Belief Updating and Rationalization of Information.
- Author
Drobner, Christoph and Goerg, Sebastian J.
- Subjects
DECISION making, BEHAVIORAL economics, TASK performance
- Abstract
We study belief updating about relative performance in an ego-relevant task. Manipulating the perceived ego relevance of the task, we show that subjects substantially overweight positive information relative to negative information because they derive direct utility from holding positive beliefs. This finding provides a behavioral explanation why and how overconfidence can evolve in the presence of objective information. Moreover, we document that subjects who receive more negative information downplay the ego relevance of the task. These findings suggest that subjects use two alternative strategies to protect their ego when presented with objective information. This paper was accepted by Marie Claire Villeval, behavioral economics and decision analysis. Funding: The authors gratefully acknowledge financial support from the ExperimenTUM. Supplemental Material: The online appendices and data files are available at https://doi.org/10.1287/mnsc.2023.02537. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Semi‐supervised Gaussian mixture modelling with a missing‐data mechanism in R.
- Author
Lyu, Ziyang, Ahfock, Daniel, Thompson, Ryan, and McLachlan, Geoffrey J.
- Subjects
SUPERVISED learning, GAUSSIAN mixture models, COVARIANCE matrices, MISSING data (Statistics), ERROR rates
- Abstract
Summary: Semi‐supervised learning is being extensively applied to estimate classifiers from training data in which not all the labels of the feature vectors are available. We present gmmsslm, an R package for estimating the Bayes' classifier from such partially classified data in the case where the feature vector has a multivariate Gaussian (normal) distribution in each of the pre‐defined classes. Our package implements a recently proposed Gaussian mixture modelling framework that incorporates a missingness mechanism for the missing labels in which the probability of a missing label is represented via a logistic model with covariates that depend on the entropy of the feature vector. Under this framework, it has been shown that the accuracy of the Bayes' classifier formed from the Gaussian mixture model fitted to the partially classified training data can have an even lower error rate than if it were estimated from the completely classified sample. This result was established in the particular case of two Gaussian classes with a common covariance matrix. Here we focus on the effective implementation of an algorithm for multiple Gaussian classes with arbitrary covariance matrices. A strategy for initialising the algorithm is discussed and illustrated. The new package is demonstrated on some real data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
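The entry above centres on the Bayes' classifier for a Gaussian mixture. The Python sketch below illustrates only that classification rule in a generic way; it is not the gmmsslm API (which is an R package), and the class weights, means and covariance matrices are made-up values.

```python
# Generic sketch of a Bayes' classifier for a Gaussian mixture (not the gmmsslm
# API): assign x to the class k maximising the posterior computed by Bayes'
# rule from class weights pi_k and class densities N(x; mu_k, Sigma_k).
import numpy as np
from scipy.stats import multivariate_normal

def bayes_classify(x, weights, means, covs):
    """Return (argmax class index, posterior probabilities) for feature vector x."""
    joint = np.array([
        w * multivariate_normal.pdf(x, mean=m, cov=c)
        for w, m, c in zip(weights, means, covs)
    ])
    posterior = joint / joint.sum()          # Bayes' rule
    return int(np.argmax(posterior)), posterior

# Two hypothetical classes with arbitrary covariance matrices (made-up values).
weights = [0.6, 0.4]
means = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
covs = [np.eye(2), np.array([[1.5, 0.3], [0.3, 0.8]])]
print(bayes_classify(np.array([1.0, 0.5]), weights, means, covs))
```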
4. Bayesian Selective Median Filtering for Reduction of Impulse Noise in Digital Color Images.
- Author
Chukka, Demudu Naidu, Meka, James Stephen, Setty, S. Pallam, and Choppala, Praveen Babu
- Subjects
BURST noise, CHOICE (Psychology), PROBABILITY measures, PEERS, PIXELS, KALMAN filtering
- Abstract
The focus of this paper is impulse noise reduction in digital color images. The most popular noise reduction schemes are the vector median filter and its many variants that operate by minimizing the aggregate distance from one pixel to every other pixel in a chosen window. This minimizing operation determines the most confirmative pixel based on its similarity to the chosen window and replaces the central pixel of the window with the determined one. The peer group filters, unlike the vector median filters, determine a set of pixels that are most confirmative to the window and then perform filtering over the determined set. Using a set of pixels in the filtering process rather than one pixel is more helpful as it takes into account the full information of all the pixels that seemingly contribute to the signal. Hence, the peer group filters are found to be more robust to noise. However, the peer group for each pixel is computed deterministically using thresholding schemes. A wrong choice of the threshold will easily impair the filtering performance. In this paper, we propose a peer group filtering approach using principles of Bayesian probability theory and clustering. Here, we present a method to compute the probability that a pixel value is clean (not corrupted by impulse noise) and then apply clustering on the probability measure to determine the peer group. The key benefit of this proposal is that the need for thresholding in peer group filtering is completely avoided. Simulation results show that the proposed method performs better than the conventional vector median and peer group filtering methods in terms of noise reduction and structural similarity, thus validating the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Artificial neural network for safety information dissemination in vehicle-to-internet networks
- Author
Ramesh B. Koti, Mahabaleshwar S. Kakkasageri, and Rajani S. Pujar
- Subjects
bayes' rule, dynamic clustering, levenberg–marquardt algorithm, safety information, software agent, vehicular ad hoc network, Telecommunication, TK5101-6720, Electronics, TK7800-8360
- Abstract
In vehicular networks, diverse safety information can be shared among vehicles through internet connections. In vehicle-to-internet communications, vehicles on the road are wirelessly connected to different cloud networks, thereby accelerating safety information exchange. Onboard sensors acquire traffic-related information, and reliable intermediate nodes and network services, such as navigational facilities, allow safety information to be transmitted to distant target vehicles and stations. Using vehicle-to-network communications, we minimize delays and achieve high accuracy through consistent connectivity links. Our proposed approach uses intermediate nodes with two-hop separation to forward information. Target vehicle detection and routing of safety information are performed using machine learning algorithms. Compared with existing vehicle-to-internet solutions, our approach provides substantial improvements by reducing latency, packet drop, and overhead.
- Published
- 2023
- Full Text
- View/download PDF
6. Artificial neural network for safety information dissemination in vehicle‐to‐internet networks.
- Author
Koti, Ramesh B., Kakkasageri, Mahabaleshwar S., and Pujar, Rajani S.
- Subjects
VEHICULAR ad hoc networks, INFORMATION networks, MACHINE learning, INFORMATION dissemination, INTERNET access
- Abstract
In vehicular networks, diverse safety information can be shared among vehicles through internet connections. In vehicle‐to‐internet communications, vehicles on the road are wirelessly connected to different cloud networks, thereby accelerating safety information exchange. Onboard sensors acquire traffic‐related information, and reliable intermediate nodes and network services, such as navigational facilities, allow safety information to be transmitted to distant target vehicles and stations. Using vehicle‐to‐network communications, we minimize delays and achieve high accuracy through consistent connectivity links. Our proposed approach uses intermediate nodes with two‐hop separation to forward information. Target vehicle detection and routing of safety information are performed using machine learning algorithms. Compared with existing vehicle‐to‐internet solutions, our approach provides substantial improvements by reducing latency, packet drop, and overhead. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Research on the synergistic development and operation mechanism of vocational education and innovative development concepts in universities based on a separate equilibrium game model.
- Author
Jiang Wen
- Subjects
separating equilibrium games, six-tuple sets, bayes' rule, discrete problems, linear complementarity, 97d60, Mathematics, QA1-939
- Abstract
Using a separating equilibrium game model, this paper defines the level of school-enterprise cooperation as the ratio of cooperators to total individuals. We use Bayes' rule to modify the strategy type and solve the vocational education discrete problem by describing the six-tuple set of the separating equilibrium game model under incomplete information. By transforming the vectors in the symmetric positive definite matrix into complementary linear vectors, we find that the level of cooperation in the separating equilibrium game model increases by about 45%. Reorganizing course teaching content in higher education according to the separating equilibrium game model can effectively mobilize the learning enthusiasm of most students and provide new ideas for the innovative development of the higher education business.
- Published
- 2024
- Full Text
- View/download PDF
8. A probabilistic formalisation of contextual bias: From forensic analysis to systemic bias in the criminal justice system.
- Author
Cuellar, Maria, Mauro, Jacqueline, and Luby, Amanda
- Subjects
CRIMINAL justice system, CRIMINAL trials, FORENSIC sciences
- Abstract
Researchers have found evidence of contextual bias in forensic science, but the discussion of contextual bias is currently qualitative. We formalise existing empirical research and show quantitatively how biases can be propagated throughout the legal system, all the way up to the final determination of guilt in a criminal trial. We provide a probabilistic framework for describing how information is updated in a forensic analysis setting by using the ratio form of Bayes' rule. We analyse results from empirical studies using this framework and employ simulations to demonstrate how bias can be compounded where experiments do not exist. We find that even minor biases in the earlier stages of forensic analysis can lead to large, compounded biases in the final determination of guilt in a criminal trial. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
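The entry above works with the ratio (odds) form of Bayes' rule. As a reminder of that form, here is a generic statement in LaTeX; the notation (evidence E, hypotheses H_p and H_d) is assumed for illustration and is not necessarily the paper's.

```latex
% Ratio (odds) form of Bayes' rule for evidence E and competing hypotheses
% H_p and H_d (generic notation, assumed for illustration):
%   posterior odds = likelihood ratio x prior odds.
\[
  \frac{P(H_p \mid E)}{P(H_d \mid E)}
  \;=\;
  \frac{P(E \mid H_p)}{P(E \mid H_d)}
  \times
  \frac{P(H_p)}{P(H_d)} .
\]
% Across successive stages of analysis the likelihood ratios multiply, which is
% how a modest bias introduced early can compound into a large shift in the
% final posterior odds.
```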
9. Acoustic Classification of Mosquitoes using Convolutional Neural Networks Combined with Activity Circadian Rhythm Information
- Author
Jaehoon Kim, Jeongkyu Oh, and Tae-Young Heo
- Subjects
artificial intelligence, convolutional neural network (cnn), mosquitoes classification, a priori probability, bayes' rule, Technology
- Abstract
Many researchers have used sound sensors to record audio data from insects, and used these data as inputs of machine learning algorithms to classify insect species. In image classification, the convolutional neural network (CNN), a well-known deep learning algorithm, achieves better performance than any other machine learning algorithm. This performance is affected by the characteristics of the convolution filter (ConvFilter) learned inside the network. Furthermore, CNN performs well in sound classification. Unlike image classification, however, there is little research on suitable ConvFilters for sound classification. Therefore, we compare the performances of three convolution filters, 1D-ConvFilter, 3×1 2D-ConvFilter, and 3×3 2D-ConvFilter, in two different network configurations, when classifying mosquitoes using audio data. In insect sound classification, most machine learning researchers use only audio data as input. However, a classification model, which combines other information such as activity circadian rhythm, should intuitively yield improved classification results. To utilize such relevant additional information, we propose a method that defines this information as a priori probabilities and combines them with CNN outputs. Of the networks, VGG13 with 3×3 2D-ConvFilter showed the best performance in classifying mosquito species, with an accuracy of 80.8%. Moreover, adding activity circadian rhythm information to the networks showed an average performance improvement of 5.5%. The VGG13 network with 1D-ConvFilter achieved the highest accuracy of 85.7% with the additional activity circadian rhythm information.
- Published
- 2021
- Full Text
- View/download PDF
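The entry above combines CNN outputs with activity circadian rhythm information treated as a priori probabilities. A minimal Python sketch of one such Bayes'-rule combination follows; the class scores, prior values and the exact weighting used by the authors are assumptions for illustration.

```python
# Hedged sketch of combining CNN class scores with circadian-activity priors
# via Bayes' rule; the three "species", the scores and the priors are made-up.
import numpy as np

cnn_scores = np.array([0.50, 0.30, 0.20])        # softmax output for one audio clip
circadian_prior = np.array([0.10, 0.60, 0.30])   # P(species is active) at recording time

unnormalised = circadian_prior * cnn_scores      # prior x likelihood-like score
posterior = unnormalised / unnormalised.sum()    # normalise (Bayes' rule)
print(posterior.round(3))                        # -> [0.172 0.621 0.207]
```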
10. A Formal Framework for Knowledge Acquisition: Going beyond Machine Learning.
- Author
Hössjer, Ola, Díaz-Pachón, Daniel Andrés, and Rao, J. Sunil
- Subjects
MACHINE learning, KNOWLEDGE acquisition (Expert systems), CAUSAL inference, STATISTICAL models
- Abstract
Philosophers frequently define knowledge as justified, true belief. We built a mathematical framework that makes it possible to define learning (increasing number of true beliefs) and knowledge of an agent in precise ways, by phrasing belief in terms of epistemic probabilities, defined from Bayes' rule. The degree of true belief is quantified by means of active information I+: a comparison between the degree of belief of the agent and a completely ignorant person. Learning has occurred when either the agent's strength of belief in a true proposition has increased in comparison with the ignorant person (I+ > 0), or the strength of belief in a false proposition has decreased (I+ < 0). Knowledge additionally requires that learning occurs for the right reason, and in this context we introduce a framework of parallel worlds that correspond to parameters of a statistical model. This makes it possible to interpret learning as a hypothesis test for such a model, whereas knowledge acquisition additionally requires estimation of a true world parameter. Our framework of learning and knowledge acquisition is a hybrid between frequentism and Bayesianism. It can be generalized to a sequential setting, where information and data are updated over time. The theory is illustrated using examples of coin tossing, historical and future events, replication of studies, and causal inference. It can also be used to pinpoint shortcomings of machine learning, where typically learning rather than knowledge acquisition is in focus. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
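The entry above quantifies belief via active information I+. One formalisation consistent with that description, stated here as an assumption rather than taken from the paper, is the log-ratio below.

```latex
% One reading of the abstract's active information I^+ (notation assumed, not
% taken from the paper): a log-ratio of the agent's probability for a
% proposition A against that of a completely ignorant person.
\[
  I^{+}(A) \;=\; \log \frac{P_{\mathrm{agent}}(A)}{P_{\mathrm{ignorant}}(A)} ,
\]
% so I^+ > 0 when the agent's belief in A exceeds the ignorant benchmark and
% I^+ < 0 when it falls below it.
```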
11. Building insights on true positives vs. false positives: Bayes' rule.
- Author
Robinson, Alexander, Keller, L. Robin, and del Campo, Cristina
- Subjects
GROUP problem solving, FALSE positive error, ANALYTICAL skills, COVID-19 pandemic, TEACHING methods, DECISION making
- Abstract
COVID‐19 pandemic policies requiring disease testing provide a rich context to build insights on true positives versus false positives. Our main contribution to the pedagogy of data analytics and statistics is to propose a method for teaching updating of probabilities using Bayes' rule reasoning to build understanding that true positives and false positives depend on the prior probability. Our instructional approach has three parts. First, we show how to construct and interpret raw frequency data tables, instead of using probabilities. Second, we use dynamic visual displays to develop insights and help overcome calculation avoidance or errors. Third, we look at graphs of positive predictive values and negative predictive values for different priors. The learning activities we use include lectures, in‐class discussions and exercises, breakout group problem solving sessions, and homework. Our research offers teaching methods to help students understand that the veracity of test results depends on the prior probability as well as helps students develop fundamental skills in understanding probabilistic uncertainty alongside higher‐level analytical and evaluative skills. Beyond learning to update the probability of having the disease given a positive test result, our material covers naïve estimates of the positive predictive value, the common mistake of ignoring the disease's base rate, debating the relative harm from a false positive versus a false negative, and creating a new disease test. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
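The entry above teaches that true positives and false positives depend on the prior. A minimal Python sketch of that dependence follows; the sensitivity and specificity values are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of the Bayes'-rule update the entry above teaches: the positive
# and negative predictive values of the same test at two different priors.
# The sensitivity and specificity values are illustrative assumptions.
def ppv_npv(prior, sensitivity, specificity):
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    ppv = sensitivity * prior / p_pos              # P(disease | positive test)
    npv = specificity * (1 - prior) / (1 - p_pos)  # P(no disease | negative test)
    return ppv, npv

for prior in (0.01, 0.20):
    ppv, npv = ppv_npv(prior, sensitivity=0.95, specificity=0.90)
    print(f"prior={prior:.2f}  PPV={ppv:.3f}  NPV={npv:.3f}")
# prior=0.01  PPV=0.088  NPV=0.999
# prior=0.20  PPV=0.704  NPV=0.986
```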
12. How to Train Novices in Bayesian Reasoning.
- Author
Büchter, Theresa, Eichler, Andreas, Steib, Nicole, Binder, Karin, Böcherer-Linder, Katharina, Krauss, Stefan, and Vogel, Markus
- Subjects
MEDICAL laws, CONDITIONAL probability, FORMATIVE evaluation, PRIMARY audience, APPLIED sciences
- Abstract
Bayesian Reasoning is both a fundamental idea of probability and a key model in applied sciences for evaluating situations of uncertainty. Bayesian Reasoning may be defined as the dealing with, and understanding of, Bayesian situations. This includes various aspects such as calculating a conditional probability (performance), assessing the effects of changes to the parameters of a formula on the result (covariation) and adequately interpreting and explaining the results of a formula (communication). Bayesian Reasoning is crucial in several non-mathematical disciplines such as medicine and law. However, even experts from these domains struggle to reason in a Bayesian manner. Therefore, it is desirable to develop a training course for this specific audience regarding the different aspects of Bayesian Reasoning. In this paper, we present an evidence-based development of such training courses by considering relevant prior research on successful strategies for Bayesian Reasoning (e.g., natural frequencies and adequate visualizations) and on the 4C/ID model as a promising instructional approach. The results of a formative evaluation are described, which show that students from the target audience (i.e., medicine or law) increased their Bayesian Reasoning skills and found taking part in the training courses to be relevant and fruitful for their professional expertise. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
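The entry above cites natural frequencies as a successful strategy for Bayesian Reasoning. A hedged example of that representation, with made-up numbers, is sketched below.

```latex
% A natural-frequency rendering of a Bayesian situation of the kind such
% training uses (numbers are made up, not taken from the paper): of 1000
% people, 10 have the condition; 8 of those 10 test positive, and 99 of the
% 990 without the condition also test positive, so
\[
  P(\text{condition} \mid \text{positive}) \;=\; \frac{8}{8 + 99} \;\approx\; 0.075 .
\]
```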
13. Bernoulli's golden theorem in retrospect: error probabilities and trustworthy evidence.
- Author
Spanos, Aris
- Subjects
FREQUENTIST statistics, RANDOM variables, LAW of large numbers, STATISTICAL models, INVERSE problems, PROBABILITY theory
- Abstract
Bernoulli's 1713 golden theorem is viewed retrospectively in the context of modern model-based frequentist inference that revolves around the concept of a prespecified statistical model M_θ(x), defining the inductive premises of inference. It is argued that several widely-accepted claims relating to the golden theorem and frequentist inference are either misleading or erroneous: (a) Bernoulli solved the problem of inference 'from probability to frequency', and thus (b) the golden theorem cannot justify an approximate Confidence Interval (CI) for the unknown parameter θ, (c) Bernoulli identified the probability P(A) with the relative frequency (1/n)∑_{k=1}^{n} x_k of event A as a result of conflating f(x_0 | θ) with f(θ | x_0), where x_0 denotes the observed data, and (d) the same 'swindle' is currently perpetrated by the p-value testers. In interrogating the claims (a)–(d), the paper raises several foundational issues that are particularly relevant for statistical induction as it relates to the current discussions on the replication crises and the trustworthiness of empirical evidence, arguing that: [i] The alleged Bernoulli swindle is grounded in the unwarranted claim θ̂_n(x_0) ≃ θ*, for a large enough n, where θ̂_n(X) is an optimal estimator of the true value θ* of θ. [ii] Frequentist error probabilities are not conditional on hypotheses (H_0 and H_1) framed in terms of an unknown parameter θ, since θ is neither a random variable nor an event. [iii] The direct versus inverse inference problem is a contrived and misplaced charge, since neither conditional distribution f(x_0 | θ) nor f(θ | x_0) exists (formally or logically) in model-based (M_θ(x)) frequentist inference. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. Edge detection with mixed noise based on maximum a posteriori approach.
- Author
Shi, Yuying, Liu, Zijin, Wang, Xiaoying, and Zhang, Jinping
- Subjects
EDGE detection (Image processing), NOISE, IMAGE processing, EDGES (Geometry), RANDOM noise theory
- Abstract
Edge detection is an important problem in image processing, especially for mixed noise. In this work, we propose a variational edge detection model with mixed noise by using Maximum A-Posteriori (MAP) approach. The novel model is formed with the regularization terms and the data fidelity terms that feature different mixed noise. Furthermore, we adopt the alternating direction method of multipliers (ADMM) to solve the proposed model. Numerical experiments on a variety of gray and color images demonstrate the efficiency of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
15. Beta distribution-based knock probability map learning and spark timing control for SI engines.
- Author
Zhao, Kai and Shen, Tielong
- Subjects
SPARK ignition engines, BETA distribution, PROBABILITY theory, UNCERTAINTY
- Abstract
In gasoline engines, the spark timing is often advanced to increase fuel economy under certain heavy load engine operating conditions. As a compromise between the risk of knock and the power output, spark timing is regulated at the boundary where a low knock probability is tolerated. Due to the stochasticity of binary knock events, it is necessary to have a large number of engine cycles for probability estimations, which can slow down the response speed of a controller to operating condition changes. To speed up the spark timing regulation and to reduce the spark timing variance, in this article, a knock probability feedforward map learning method and a spark timing control method are proposed under a unified framework. A learning method that applies the beta distribution is the key contribution of this work. The beta distribution in the map learning part is used to describe knock probabilities with uncertainties and to determine the next engine operating condition for sampling and map learning. In the spark timing method, the beta distribution is applied in the conventional control method to adjust the control gains. The proposed methods are experimentally validated on a test bench equipped with a production Toyota 1.8 L, 4-cylinder SI engine. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
16. Beta-Distribution-Based Knock Probability Estimation, Control Scheme, and Experimental Validation for SI Engines.
- Author
Zhao, Kai, Wu, Yuhu, and Shen, Tielong
- Subjects
SPARK ignition engines, BETA distribution, PROBABILITY theory, DISTRIBUTION (Probability theory), ENERGY consumption
- Abstract
The fuel efficiency and power output of spark ignition (SI) engines are closely related to the spark timing. Advancing the spark timing is usually used as an approach to increase the efficiency. However, under some operating conditions, advanced spark timing can trigger abnormal combustion, which causes knocking. To avoid cylinder damage and to increase the engine efficiency, feedback control, which addresses the knocking phenomenon as a stochastic process, is required. In this brief, a Bayesian estimate of knock probability is used to replace the maximum likelihood estimate in a likelihood-ratio-based knock control strategy. The beta distribution is used to represent the distribution of the knock probability estimate based on the independent and identically distributed property of knock events. The proposed control algorithm is validated on a full-scale test bench with a production SI engine and is compared with the conventional spark advance control approach and the maximum-likelihood-based approach. The results show that the proposed approach is able to control and maintain a knock probability close to the target and introduce a low dispersion of spark timing after convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
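The entry above replaces a maximum likelihood estimate of the knock probability with a Bayesian estimate represented by a beta distribution. A minimal Python sketch of that conjugate bookkeeping follows; the prior, the simulated knock events and the target probability are illustrative assumptions, not the paper's controller.

```python
# Hedged sketch of beta-distribution bookkeeping for a knock probability: each
# cycle gives a binary knock event and the Beta(a, b) posterior is updated by
# conjugacy. Prior, simulated events and target are made-up values.
from scipy.stats import beta

a, b = 1.0, 1.0                                   # uniform Beta(1, 1) prior
target = 0.05                                     # tolerated knock probability

for knocked in [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]:    # simulated knock events
    a, b = a + knocked, b + (1 - knocked)         # conjugate Bayesian update

posterior_mean = a / (a + b)
p_above_target = 1 - beta.cdf(target, a, b)       # P(knock probability > target)
print(f"mean={posterior_mean:.3f}, P(p > {target})={p_above_target:.3f}")
# A controller could retard the spark while p_above_target is large and advance
# it again once the posterior concentrates below the target.
```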
17. Belief updating: does the 'good-news, bad-news' asymmetry extend to purely financial domains?
- Author
Barron, Kai
- Abstract
Bayes' statistical rule remains the status quo for modeling belief updating in both normative and descriptive models of behavior under uncertainty. Some recent research has questioned the use of Bayes' rule in descriptive models of behavior, presenting evidence that people overweight 'good news' relative to 'bad news' when updating ego-relevant beliefs. In this paper, we present experimental evidence testing whether this 'good-news, bad-news' effect is present in a financial decision making context (i.e. a domain that is important for understanding much economic decision making). We find no evidence of asymmetric updating in this domain. In contrast, in our experiment, belief updating is close to the Bayesian benchmark on average. However, we show that this average behavior masks substantial heterogeneity in individual updating behavior. We find no evidence in support of a sizeable subgroup of asymmetric updators. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. Statistical Learning Model of the Sense of Agency
- Author
Shiro Yano, Yoshikatsu Hayashi, Yuki Murata, Hiroshi Imamizu, Takaki Maeda, and Toshiyuki Kondo
- Subjects
sense of agency, statistical learning, online learning, Bayes' rule, stochastic gradient descent, Psychology, BF1-990
- Abstract
A sense of agency (SoA) is the experience of subjective awareness regarding the control of one's actions. Humans have a natural tendency to generate prediction models of the environment and adapt their models according to changes in the environment. The SoA is associated with the degree of the adaptation of the prediction models, e.g., insufficient adaptation causes low predictability and lowers the SoA over the environment. Thus, identifying the mechanisms behind the adaptation process of a prediction model related to the SoA is essential for understanding the generative process of the SoA. In the first half of the current study, we constructed a mathematical model in which the SoA represents a likelihood value for a given observation (sensory feedback) in a prediction model of the environment and in which the prediction model is updated according to the likelihood value. From our mathematical model, we theoretically derived a testable hypothesis that the prediction model is updated according to a Bayesian rule or a stochastic gradient. In the second half of our study, we focused on the experimental examination of this hypothesis. In our experiment, human subjects were repeatedly asked to observe a moving square on a computer screen and press a button after a beep sound. The button press resulted in an abrupt jump of the moving square on the screen. Experiencing the various stochastic time intervals between the action execution (button-press) and the consequent event (square jumping) caused gradual changes in the subjects' degree of their SoA. By comparing the above theoretical hypothesis with the experimental results, we concluded that the update (adaptation) rule of the prediction model based on the SoA is better described by a Bayesian update than by a stochastic gradient descent.
- Published
- 2020
- Full Text
- View/download PDF
19. All tests are imperfect: Accounting for false positives and false negatives using Bayesian statistics
- Author
Song S. Qian, Jeanine M. Refsnider, Jennifer A. Moore, Gunnar R. Kramer, and Henry M. Streby
- Subjects
Conditional probability, False negative, Uncertainty, False positive, Bayes' rule, Statistics, Science (General), Q1-390, Social sciences (General), H1-99
- Abstract
Tests with binary outcomes (e.g., positive versus negative) to indicate a binary state of nature (e.g., disease agent present versus absent) are common. These tests are rarely perfect: chances of a false positive and a false negative always exist. Imperfect results cannot be directly used to infer the true state of the nature; information about the method's uncertainty (i.e., the two error rates and our knowledge of the subject) must be properly accounted for before an imperfect result can be made informative. We discuss statistical methods for incorporating the uncertain information under two scenarios, based on the purpose of conducting a test: inference about the subject under test and inference about the population represented by test subjects. The results are applicable to almost all tests. The importance of properly interpreting results from imperfect tests is universal, although how to handle the uncertainty is inevitably case-specific. The statistical considerations not only will change the way we interpret test results, but also how we plan and carry out tests that are known to be imperfect. Using a numerical example, we illustrate the post-test steps necessary for making the imperfect test results meaningful.
- Published
- 2020
- Full Text
- View/download PDF
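The entry above distinguishes inference about the tested subject from inference about the population the subjects represent. A hedged Python sketch of both calculations follows, with assumed error rates; the prevalence correction shown is the standard Rogan–Gladen estimator, not necessarily the authors' exact formulation.

```python
# Hedged sketch of the two inference scenarios described above, with assumed
# error rates: (1) the probability a tested subject is truly positive, and
# (2) correcting the apparent prevalence in a sample for test errors
# (the standard Rogan-Gladen correction, used here for illustration).
sens, spec = 0.92, 0.97                  # 1 - false-negative rate, 1 - false-positive rate

# (1) Inference about the subject under test
prior = 0.10                             # prior probability the agent is present
p_pos = sens * prior + (1 - spec) * (1 - prior)
posterior = sens * prior / p_pos         # Bayes' rule
print(f"P(present | positive) = {posterior:.3f}")

# (2) Inference about the population represented by the test subjects
apparent = 0.15                          # fraction of samples testing positive
true_prev = (apparent + spec - 1) / (sens + spec - 1)
print(f"corrected prevalence = {min(1.0, max(0.0, true_prev)):.3f}")
```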
20. On applicability of mathematical scaling and normalization in applied problem solving
- Author
A. I. Dolgov and D. V. Marshakov
- Subjects
scaling, normalization, data analysis, applicability of formulas, artificial neural network, bayes' rule, Materials of engineering and construction. Mechanics of materials, TA401-492
- Abstract
Introduction. The applicability of mathematical scaling and normalization in solving various applied problems is analyzed. The best-known formulas often used in theoretical and experimental studies are considered. The purpose of this work is to identify the properties of mathematical scaling and normalization. Materials and Methods. The errors obtained when using the mathematical scaling and normalization formulas are considered via specific computational examples. Based on a comparative evaluation of the ratio of the degree of magnitude of the initial and resulting values (as well as the ratio of the degree of difference of these values), the correctness of the results, which significantly affects the final values, is estimated. Research Results. The analysis leads to the conclusion that some known mathematical scaling and normalization formulas possess properties that are ignored in theory and practice. Discussion and Conclusions. The results obtained allow avoiding erroneous decisions caused by the use of invalid scaling and normalization formulas when solving problems in the theory and practice of economics, administrative management, medicine, and many other fields.
- Published
- 2018
- Full Text
- View/download PDF
21. Bayesian geomorphology.
- Author
Korup, Oliver
- Subjects
GEOMORPHOLOGY, SURFACE of the earth, EARTH scientists, GEOMORPHOLOGISTS, LANDFORMS, HYDROLOGY
- Abstract
Summary: The rapidly growing amount and diversity of data are confronting us more than ever with the need to make informed predictions under uncertainty. The adverse impacts of climate change and natural hazards also motivate our search for reliable predictions. The range of statistical techniques that geomorphologists use to tackle this challenge has been growing, but rarely involves Bayesian methods. Instead, many geomorphic models rely on estimated averages that largely miss out on the variability of form and process. Yet seemingly fixed estimates of channel heads, sediment rating curves or glacier equilibrium lines, for example, are all prone to uncertainties. Neighbouring scientific disciplines such as physics, hydrology or ecology have readily embraced Bayesian methods to fully capture and better explain such uncertainties, as the necessary computational tools have advanced greatly. The aim of this article is to introduce the Bayesian toolkit to scientists concerned with Earth surface processes and landforms, and to show how geomorphic models might benefit from probabilistic concepts. I briefly review the use of Bayesian reasoning in geomorphology, and outline the corresponding variants of regression and classification in several worked examples. © 2020 The Authors. Earth Surface Processes and Landforms published by John Wiley & Sons Ltd [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
22. Perfect regular equilibrium.
- Author
Jung, Hanjoon Michael
- Subjects
EQUILIBRIUM, NASH equilibrium, CONDITIONAL probability, DEFINITIONS, PROBLEM solving
- Abstract
We extend the solution concept of perfect Bayesian equilibrium to general games that allow a continuum of types and strategies. In finite games, a perfect Bayesian equilibrium is weakly consistent and a subgame perfect Nash equilibrium. In general games, however, it might not satisfy these criteria. To solve this problem, we revise the definition of perfect Bayesian equilibrium by replacing Bayes' rule with regular conditional probability. The revised solution concept is referred to as perfect regular equilibrium. We present the conditions that ensure the existence of this equilibrium. Then we show that every perfect regular equilibrium is always weakly consistent and a subgame perfect Nash equilibrium, and is equivalent to a simple version of perfect Bayesian equilibrium in a finite game. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
23. Statistical Learning Model of the Sense of Agency.
- Author
Yano, Shiro, Hayashi, Yoshikatsu, Murata, Yuki, Imamizu, Hiroshi, Maeda, Takaki, and Kondo, Toshiyuki
- Subjects
STATISTICAL learning, STATISTICAL models, PREDICTION models, MATHEMATICAL models
- Abstract
A sense of agency (SoA) is the experience of subjective awareness regarding the control of one's actions. Humans have a natural tendency to generate prediction models of the environment and adapt their models according to changes in the environment. The SoA is associated with the degree of the adaptation of the prediction models, e.g., insufficient adaptation causes low predictability and lowers the SoA over the environment. Thus, identifying the mechanisms behind the adaptation process of a prediction model related to the SoA is essential for understanding the generative process of the SoA. In the first half of the current study, we constructed a mathematical model in which the SoA represents a likelihood value for a given observation (sensory feedback) in a prediction model of the environment and in which the prediction model is updated according to the likelihood value. From our mathematical model, we theoretically derived a testable hypothesis that the prediction model is updated according to a Bayesian rule or a stochastic gradient. In the second half of our study, we focused on the experimental examination of this hypothesis. In our experiment, human subjects were repeatedly asked to observe a moving square on a computer screen and press a button after a beep sound. The button press resulted in an abrupt jump of the moving square on the screen. Experiencing the various stochastic time intervals between the action execution (button-press) and the consequent event (square jumping) caused gradual changes in the subjects' degree of their SoA. By comparing the above theoretical hypothesis with the experimental results, we concluded that the update (adaptation) rule of the prediction model based on the SoA is better described by a Bayesian update than by a stochastic gradient descent. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
24. Trip chain based usage patterns analysis of the round-trip carsharing system: A case study in Beijing.
- Author
Feng, Xiaoyan, Sun, Huijun, Wu, Jianjun, Liu, Zhiyuan, and Lv, Ying
- Subjects
CAR sharing, CASE studies, PRICE increases, TIME travel, RENTAL trucks
- Abstract
• This study explores the multi-dimensional features of the round-trip carsharing usage pattern. • Stop time thresholds are determined by considering different rental time. • The impact of price incentives on carsharing usage is discussed. • Users' usage of carsharing and parking spaces has obvious peak hours. • Hotspots in the spatial distribution of activities are identified. In recent years, the concept of carsharing is rapidly gaining popularity in China, and the round-trip carsharing has become a common mode. However, few studies have revealed the role of round-trip carsharing in users' travel. In this study, the round-trip GPS data provided by a carsharing company in Beijing, China is used to analyze the users' usage patterns based on their trip chains. Through the extraction and analysis of trip information, all trip chains are grouped into three clusters, each of which has a different usage pattern. Then the consumption features and the shared car pick-up and return time of these three patterns are discussed. Further, the Bayes' rule is used to predict the activity purpose, and the proportion and spatial distribution of different purposes are analyzed. Results reveal that the carsharing program presents multiple usage patterns to meet the different travel needs of users. Price incentives like coupons, discounts, and packages can attract more shared car trips. Users' demand for price incentives increases with longer travel distance and time. Also, users' usage of vehicles and parking spaces has obvious peak hours. The spatial distribution of user activities has distinctly different hotspots. This paper can be beneficial for operators to set a reasonable pricing plan and provide better services. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
25. Normal-gamma distribution–based stochastic knock probability control scheme for spark-ignition engines.
- Author
Zhao, Kai and Shen, Tielong
- Subjects
SPARK ignition engines, PROBABILITY theory, ENGINES, GAUSSIAN distribution, PARAMETER estimation, STOCHASTIC control theory
- Abstract
Spark timing, one of the essential parameters to control combustion in spark-ignition gasoline engines, is often advanced to optimize the power output and fuel economy. An overly advanced spark timing, or equivalently a large spark advance, however, can lead to severe knocking under heavy load engine operating conditions. In a trade-off between engine damage avoidance and power enhancement, the knock probability has to be regulated at a low percentage. Based on the observation that the logarithm of the knock intensity under steady operating conditions follows a normal distribution, in this research, a Bayesian knock probability estimation method is proposed using the normal-gamma distribution and the observed knock intensity. Based on the estimation, a spark advance control algorithm is also developed. The proposed knock probability control algorithm is validated on a full-scale test bench with a production spark-ignition engine. The results show that the proposed method is capable of regulating the knock probability to be close to the target percentage. With different parameter settings, the controller can further be configured to behave more aggressively or conservatively in knock probability estimation and regulation. In comparison with the conventional controller and the maximum likelihood–based controller, and in the tip-in/tip-out test, the proposed method also presents a quick response to transient engine operating conditions and a low spark advance dispersion after the spark advance converges close to the borderline. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
26. Measuring and enhancing the transferability of hidden Markov models for dynamic travel behavioral analysis.
- Author
Xiong, Chenfeng, Yang, Di, Ma, Jiaqi, Chen, Xiqun, and Zhang, Lei
- Subjects
BEHAVIORAL assessment, DYNAMIC models, HIDDEN Markov models, DEMAND forecasting, MARKOV processes, CHOICE of transportation
- Abstract
As an emerging dynamic modeling method that incorporates time-dependent heterogeneity, hidden Markov models (HMM) are receiving increased research attention with regards to travel behavior modeling and travel demand forecasting. This paper focuses on the model transferability of HMM. Based on a series of transferability and goodness-of-fit measures, it finds that HMMs have a superior performance in predicting future transportation mode choice, compared to conventional choice models. Aimed at further enhancing its transferability, this paper proposes a Bayesian conditional recalibration approach that maps the model prediction directly to the context data. Compared to traditional model transferring methods, the proposed approach does not assume fixed parameterization and recalibrates the utilities and the prediction directly. A comparison between the proposed approach and the traditional transfer-scaling favors our approach, with higher goodness-of-fit. This paper fills the gap in understanding the transferability of HMM and proposes a practical method that enables potential applications of HMM. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
27. Let the Data Speak? On the Importance of Theory‐Based Instrumental Variable Estimations.
- Author
Grossmann, Volker and Osikominu, Aderonke
- Subjects
LEAST squares
- Abstract
In the absence of randomized‐controlled experiments, identification is often sought via instrumental variable (IV) strategies, typically two‐stage least squares estimations. According to Bayes' rule, however, under a low ex ante probability that a hypothesis is true (e.g. that an excluded instrument is partially correlated with an endogenous regressor), the interpretation of the estimation results may be fundamentally flawed. This paper argues that rigorous theoretical reasoning is key to designing credible identification strategies, first and foremost finding candidates for valid instruments. We discuss prominent IV analyses from the macro‐development literature to illustrate the potential benefit of structurally derived IV approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
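The entry above invokes Bayes' rule to warn about low ex ante probabilities. A worked example with illustrative numbers (not taken from the paper) is given below in LaTeX.

```latex
% Worked example with illustrative numbers (not from the paper). Let \pi be the
% ex ante probability that the excluded instrument is truly relevant, 1-\beta
% the power of the first-stage test and \alpha its size. Bayes' rule gives
\[
  P(\text{instrument relevant} \mid \text{test rejects})
  \;=\;
  \frac{\pi\,(1-\beta)}{\pi\,(1-\beta) + (1-\pi)\,\alpha} ,
\]
% so with \pi = 0.1, 1-\beta = 0.8 and \alpha = 0.05 the posterior is
% 0.08 / (0.08 + 0.045) = 0.64: a "significant" first stage is far from a
% guarantee when the ex ante probability is low.
```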
28. Limits of the Application of Bayesian Modeling to Perception.
- Author
Luccio, Riccardo
- Abstract
The general lines of Bayesian modeling (BM) in the study of perception are outlined here. The main thesis argued here is that BM works well only in the so-called secondary processes of perception, and in particular in cases of imperfect discriminability between stimuli, or when a judgment is required, or in cases of multistability. In cases of "primary processes," on the other hand, it is often arbitrary and anyway superfluous, as with the laws of Gestalt. However, it is pointed out that in these latter cases, simpler and more well-established methodologies already exist, such as signal detection theory and individual choice theory. The frequent recourse to arbitrary values of a priori probabilities is also open to question. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
29. Multiview gait-based gender classification through pose-based voting.
- Author
Isaac, Ebenezer R.H.P., Elias, Susan, Rajagopalan, Srinivasan, and Easwarakumar, K.S.
- Subjects
FISHER discriminant analysis, GAIT in humans, SUPPORT vector machines, MULTIMODAL user interfaces, ARTIFICIAL legs, GENDER, VOTING
- Abstract
• A close-to-ideal performance is achieved for gait-based gender classification. • The robust design of the proposed method allows it to work with occluded frames. • Elliptic Fourier descriptors are explored for an alternative feature set. • LDA with Bayes' rule is taken as an alternative to SVM for classification. • The CASIA-B and TUM-GAID datasets are considered for experimental evaluation. The incorporation of soft biometrics can significantly increase the performance of hard biometric systems. Gender is found to be the most popular soft biometric that can be derived from human gait. The state of the art produces accuracies up to 98% but are constrained by the necessity of requiring a complete gait cycle to function properly. We propose to remove this requirement using pose-based voting (PBV) – a method which treats every frame as a labeled instance. Linear discriminant analysis (LDA) is used in conjunction with the Bayes' rule for classification as an alternative to the popular support vector machine (SVM). The robust design of this technique facilitates the system to cope with partially occluded gait cycles with minimal loss in classification accuracy. Furthermore, when multiple cycles are taken to account, the error becomes negligibly small. We also investigate the applicability of elliptic Fourier descriptors and depth gait histograms for the gait-based gender classification problem. The efficiency of our approach is evaluated using all view angles of the CASIA-B gait database and the TUM-GAID database under prescribed test conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
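The entry above uses LDA combined with Bayes' rule, rather than an SVM, so that each frame yields class posteriors that can be pooled by pose-based voting. A hedged Python sketch of that idea follows; the random features, class priors and pooling rule are stand-ins, not the authors' pipeline.

```python
# Hedged sketch of per-frame classification with LDA plus Bayes' rule followed
# by pose-based voting; the random "gait features", priors and pooling rule are
# stand-ins, not the authors' pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 8)),    # frames labelled class 0
               rng.normal(0.8, 1.0, (50, 8))])   # frames labelled class 1
y = np.array([0] * 50 + [1] * 50)

lda = LinearDiscriminantAnalysis(priors=[0.5, 0.5]).fit(X, y)
frame_posteriors = lda.predict_proba(X[:5])      # per-frame P(class | features)

# Pose-based voting: pool the per-frame posteriors over a sequence and take the
# arg-max class, so occluded or missing frames simply contribute nothing.
print(frame_posteriors.mean(axis=0).argmax())
```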
30. Good news and bad news are still news: experimental evidence on belief updating.
- Author
Coutts, Alexander
- Subjects
BELIEF & doubt, BAYESIAN analysis, INFORMATION asymmetry, CONFIRMATION bias, CONSERVATISM
- Abstract
Bayesian updating remains the benchmark for dynamic modeling under uncertainty within economics. Recent theory and evidence suggest individuals may process information asymmetrically when it relates to personal characteristics or future life outcomes, with good news receiving more weight than bad news. I examine information processing across a broad set of contexts: (1) ego relevant, (2) financially relevant, and (3) non value relevant. In the first two cases, information about outcomes is valenced, containing either good or bad news. In the third case, information is value neutral. In contrast to a number of previous studies I do not find differences in belief updating across valenced and value neutral settings. Updating across all contexts is asymmetric and conservative: the former is influenced by sequences of signals received, a new variation of confirmation bias, while the latter is driven by non-updates. Despite this, posteriors are well approximated by those calculated using Bayes' rule. Most importantly these patterns are present across all contexts, cautioning against the interpretation of asymmetric updating or other deviations from Bayes' rule as being motivated by psychological biases. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
31. Acoustic Classification of Mosquitoes using Convolutional Neural Networks Combined with Activity Circadian Rhythm Information
- Author
Tae-Young Heo, Jeongkyu Oh, and Jaehoon Kim
- Subjects
Statistics and Probability, convolutional neural network (CNN), Technology, Computer Networks and Communications, Computer science, Speech recognition, bayes' rule, IJIMAI, artificial intelligence, Computer Science Applications, a priori probability, mosquitoes classification, Signal Processing, Computer Vision and Pattern Recognition, Circadian rhythm
- Abstract
Many researchers have used sound sensors to record audio data from insects, and used these data as inputs of machine learning algorithms to classify insect species. In image classification, the convolutional neural network (CNN), a well-known deep learning algorithm, achieves better performance than any other machine learning algorithm. This performance is affected by the characteristics of the convolution filter (ConvFilter) learned inside the network. Furthermore, CNN performs well in sound classification. Unlike image classification, however, there is little research on suitable ConvFilters for sound classification. Therefore, we compare the performances of three convolution filters, 1D-ConvFilter, 3×1 2D-ConvFilter, and 3×3 2D-ConvFilter, in two different network configurations, when classifying mosquitoes using audio data. In insect sound classification, most machine learning researchers use only audio data as input. However, a classification model, which combines other information such as activity circadian rhythm, should intuitively yield improved classification results. To utilize such relevant additional information, we propose a method that defines this information as a priori probabilities and combines them with CNN outputs. Of the networks, VGG13 with 3×3 2D-ConvFilter showed the best performance in classifying mosquito species, with an accuracy of 80.8%. Moreover, adding activity circadian rhythm information to the networks showed an average performance improvement of 5.5%. The VGG13 network with 1D-ConvFilter achieved the highest accuracy of 85.7% with the additional activity circadian rhythm information.
- Published
- 2021
32. Gait Verification System Through Multiperson Signature Matching for Unobtrusive Biometric Authentication.
- Author
Isaac, Ebenezer R. H. P., Elias, Susan, Rajagopalan, Srinivasan, and Easwarakumar, K. S.
- Abstract
The unobtrusive nature of gait facilitates the development of optimal biometric authentication systems. Recent approaches on video-analytic gait authentication show excellent results but their implementations are threshold-based which trade off a set amount of FAR (false acceptance rate) to produce an acceptable FRR (false rejection rate). The proposed multiperson signature mapping (MSM) approach overcomes this drawback with a design that substantially decreases the FAR of the authentication system without having to increase the FRR. This technique removes the need of an empirically adjusted threshold. The state-of-the-art algorithms mostly prefer the nearest neighbor (NN) classifier where the Euclidean distance calculated from the extracted feature hyperplane is taken as the similarity measure. Our study proves that the Bayes' rule applied over the extracted feature set provides a much better performance compared to the conventional NN approach. The MSM is applied on top of template-based gait recognition algorithms to produce an efficient gait authentication system. The method is evaluated on four different gait templates including the popular Gait Energy Image (GEI) and its variation with the genetic template segmentation (GTS). The study analyzes the performance across different clothing and carrying conditions. The deployment of the gait authentication system for practical application is explained in detail. Experimental results with the CASIA-B gait database depict the potential of our proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
33. Bayesian virtual sensing in structural dynamics.
- Author
Kullaa, J.
- Subjects
BAYESIAN analysis, SENSOR networks, STRUCTURAL dynamics, STRUCTURAL health monitoring, VIBRATION measurements, ALGORITHMS
- Abstract
Structural monitoring and control utilize vibration measurements acquired by a sensor network. Combined empirical and analytical virtual sensing is introduced to estimate full-field dynamic response of a structure using a limited number of sensors. Bayesian empirical virtual sensing technique is developed to obtain less noisy estimates of sensor data. Then, analytical virtual sensing utilizes the expansion algorithm to compute the full-field response. If the sensor noise is known, virtual sensors are more accurate than the corresponding physical measurements with any number of sensors in the network. Often, the measurement error has to be estimated. The upper bound of the sensor noise variance is derived, and the effect of the noise estimation error on the accuracy of virtual sensors is studied. Numerical simulations are performed for a structure subject to unknown random excitation in order to validate the proposed virtual sensing algorithms. Displacement and strain sensor networks with different numbers of sensors and different noise models are studied. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
34. Sampling for disease absence—deriving informed monitoring from epidemic traits.
- Author
Bourhis, Yoann, Gottwald, Timothy R., Lopez-Ruiz, Francisco J., Patarapuwadol, Sujin, and van den Bosch, Frank
- Subjects
EPIDEMICS, SAMPLING (Process), PATHOGENIC microorganisms, CITRUS canker, PROBABILITY density function
- Abstract
Highlights • We show what incidence/prevalence a disease can have when all monitoring rounds return only negative samples. • Samples repeated in time are properly accounted for using a simple epidemic model. • An approximation of the sampling model is provided for practical use. • Its use is exemplified by deriving appropriate monitoring programs for three epidemics. Abstract Monitoring for disease requires subsets of the host population to be sampled and tested for the pathogen. If all the samples return healthy, what are the chances the disease was present but missed? In this paper, we developed a statistical approach to solve this problem considering the fundamental property of infectious diseases: their growing incidence in the host population. The model gives an estimate of the incidence probability density as a function of the sampling effort, and can be reversed to derive adequate monitoring patterns ensuring a given maximum incidence in the population. We then present an approximation of this model, providing a simple rule of thumb for practitioners. The approximation is shown to be accurate for a sample size larger than 20, and we demonstrate its use by applying it to three plant pathogens: citrus canker, bacterial blight and grey mould. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
35. Impact of initial ensembles on posterior distribution of ensemble-based assimilation methods.
- Author
Jahanbakhshi, Saman, Pishvaie, Mahmoud Reza, and Boozarjomehry, Ramin Bozorgmehry
- Subjects
RESERVOIRS, PERMEABILITY, PARAMETERS (Statistics), STATISTICAL hypothesis testing, HYPOTHESIS
- Abstract
Abstract In this study, the impact of initial ensembles on the posterior distribution of ensemble-based assimilation methods is statistically analyzed. In addition, the sampling performance as well as the uncertainty quantification of these methods are compared in terms of their ability to accurately and consistently evaluate unknown reservoir model parameters and reservoir future performance. For this purpose, a synthetic test problem, which is a small but highly nonlinear reservoir model under two-phase flow, is utilized. Subsequently, different initial ensemble sets are considered and are updated through the assimilation process performed on the test problem using ensemble-based assimilation methods. Afterwards, the consistency of the updated ensemble sets of the permeability field as well as the consistency of the predicted ensemble sets of cumulative water production has been assessed to reveal the dependence of the assimilation methods on the initial ensembles. Results of multi-sample hypothesis tests clearly disclose the inconsistencies between different updated ensemble sets of the permeability field, and also between different predicted ensemble sets of cumulative water production. This is probably due to spurious correlations among the updated ensemble members within each set. Furthermore, the effects of ensemble size and type of the observation data on the associated inconsistency are investigated. Highlights • Performance of the ensemble-based assimilation methods is compared. • Dependence of assimilation methods on the initial ensemble is examined. • Hypothesis tests are used to disclose inconsistency between different ensemble sets. • Effects of ensemble size and observation type on the inconsistency are analyzed. • A synthetic 1D reservoir model is considered for assessment purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
36. Biased Learning Creates Overconfidence.
- Author
Ni, Xuanming, Wu, Chen, and Zhao, Huimin
- Abstract
The aim of this paper is to develop a multi-period economic model to explain how people become overconfident through biased learning, in which people tend to attribute successes to their own abilities and failures to other factors. The authors suppose that the informed trader does not know the distribution of the precision of his private signal and updates his belief on the distribution of the precision of his knowledge by Bayes' rule. The informed trader can eventually recognize the value of the precision of his knowledge after a sufficiently long period of biased learning, but the value is overestimated, which leads him to be overconfident. Furthermore, based on the definition of the luckier trader as one who succeeds the same number of times but has the larger variance of knowledge, the authors find that the luckier the informed trader is, the more overconfident he will be; the smaller the biased learning factor is, the more overconfident the informed trader is. The authors also obtain a linear equilibrium which can explain some anomalies in financial markets, such as the high observed trading volume and excess volatility. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
37. Simulation of Bayes' rule by means of Monte Carlo method.
- Author
-
Gangur, Mikuláš and Svoboda, Milan
- Subjects
- *
SIMULATION methods & models , *MONTE Carlo method , *BAYESIAN analysis , *RANDOM numbers - Abstract
Summary: This contribution shows a simple implementation of the Monte Carlo simulation method for presenting Bayes' rule. The implementation is carried out in the Microsoft Excel spreadsheet environment by means of a random number generator. The empirical results obtained by simulation serve to confirm the correctness of the chosen procedures in calculations using Bayes' formula. [ABSTRACT FROM AUTHOR]
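The same check can be sketched in a few lines of Python rather than a spreadsheet (the prevalence and test probabilities below are hypothetical, not the authors' example): draw many (hypothesis, data) pairs with a random number generator, estimate P(H|D) by counting, and compare with the value given directly by Bayes' formula.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

p_h = 0.01            # P(H): prior probability of the hypothesis (e.g. disease)
p_d_h = 0.95          # P(D|H): probability of a positive result if H is true
p_d_not_h = 0.05      # P(D|not H): false-positive probability

h = rng.random(N) < p_h
d = np.where(h, rng.random(N) < p_d_h, rng.random(N) < p_d_not_h)

mc_estimate = h[d].mean()                     # empirical P(H|D): fraction of positive cases with H true
bayes = p_d_h * p_h / (p_d_h * p_h + p_d_not_h * (1 - p_h))
print(f"Monte Carlo: {mc_estimate:.4f}  Bayes' formula: {bayes:.4f}")
```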
- Published
- 2018
- Full Text
- View/download PDF
38. Bayes' Formula: Cognitive-Psychological Foundations and Empirical Studies on Determining Subset-Base-Set Relationships.
- Author
-
Böcherer-Linder, Katharina, Eichler, Andreas, and Vogel, Markus
- Abstract
Copyright of JMD: Journal für Mathematik-Didaktik is the property of Springer Nature and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2018
- Full Text
- View/download PDF
39. Exposing image resampling forgery by using linear parametric model.
- Author
-
Qiao, Tong, Zhu, Aichun, and Retraint, Florent
- Subjects
RESAMPLING (Statistics) ,BAYES' estimation ,STATISTICAL hypothesis testing ,PROBABILITY theory ,LIKELIHOOD ratio tests - Abstract
Resampling forgery generally refers to the technique of using an interpolation algorithm to maliciously geometrically transform a digital image or a portion of an image. This paper investigates the problem of image resampling detection based on the linear parametric model. First, we expose the periodic artifact of a one-dimensional (1-D) resampled signal. After dealing with the nuisance parameters, together with Bayes' rule, the detector is designed based on the probability of the residual noise extracted from the resampled signal using the linear parametric model. Subsequently, we mainly study the characteristics of a resampled image. Meanwhile, it is proposed to estimate the probability of the pixels' noise and establish a practical Likelihood Ratio Test (LRT). In comparison with state-of-the-art tests, numerical experiments show the relevance of our proposed algorithm for detecting uncompressed/compressed resampled images. [ABSTRACT FROM AUTHOR]
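The paper's detector is built on its specific linear parametric model; purely as a generic illustration of the final likelihood-ratio-test step, the sketch below decides between two Gaussian hypotheses for extracted residual noise. The noise levels, the threshold, and the assumption that resampling reduces residual variance are made-up for illustration.

```python
import numpy as np

def log_likelihood_ratio(residuals, sigma0, sigma1):
    """Log LR of H1 (resampled, std sigma1) vs H0 (original, std sigma0)
    for zero-mean Gaussian residual noise."""
    n = residuals.size
    ll0 = -n * np.log(sigma0) - np.sum(residuals**2) / (2 * sigma0**2)
    ll1 = -n * np.log(sigma1) - np.sum(residuals**2) / (2 * sigma1**2)
    return ll1 - ll0

rng = np.random.default_rng(0)
sigma0, sigma1, tau = 1.0, 0.6, 0.0           # illustrative noise levels and decision threshold

original = rng.normal(0, sigma0, 10_000)      # residuals from an untouched image
resampled = rng.normal(0, sigma1, 10_000)     # residuals from an interpolated image

for name, r in [("original", original), ("resampled", resampled)]:
    decision = "resampled" if log_likelihood_ratio(r, sigma0, sigma1) > tau else "original"
    print(name, "->", decision)
```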
- Published
- 2018
- Full Text
- View/download PDF
40. Remarks on kernel Bayes' rule.
- Author
-
Johno, Hisashi, Nakamoto, Kazunori, Saigo, Tatsuhiko, and Shiraishi, Hiroshi
- Subjects
- *
BAYES' theorem , *HILBERT space , *KERNEL functions , *NONPARAMETRIC statistics , *MATHEMATICAL analysis - Abstract
The kernel Bayes' rule has been proposed as a nonparametric kernel-based method to realize Bayesian inference in reproducing kernel Hilbert spaces. However, we demonstrate both theoretically and experimentally that the way of incorporating the prior in the kernel Bayes' rule is unnatural. In particular, we show that under some reasonable conditions, the posterior in the kernel Bayes' rule is completely unaffected by the prior, which seems to be irrelevant in the context of Bayesian inference. We consider that this phenomenon is in part due to the fact that the assumptions in the kernel Bayes' rule do not hold in general. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
41. Bayesian virtual sensing for full-field dynamic response estimation.
- Author
-
Kullaa, Jyrki
- Subjects
VIBRATION measurements ,SENSOR networks ,BAYESIAN analysis ,RESPONSE rates ,NOISE measurement ,STRUCTURAL dynamics - Abstract
Structural monitoring and control utilize vibration measurements acquired by a sensor network. Combined empirical and analytical virtual sensing is introduced to estimate full-field dynamic response of a structure using a limited number of sensors. First, empirical virtual sensing techniques are applied to obtain less noisy estimates of sensor data. Then, analytical virtual sensing utilizes the finite element model and the empirical estimates to compute the full-field response. The effect of the noise estimation error on the accuracy of virtual sensors is studied and the upper bound of the noise variance is derived. Numerical simulations are performed for a structure subject to unknown random excitation in order to validate and compare different virtual sensing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
42. Development of virtual sensors to increase the sensitivity to damage.
- Author
-
Kullaa, Jyrki
- Subjects
SENSOR networks ,STRUCTURAL health monitoring ,DAMAGE models ,DAMPING capacity ,NOISE control - Abstract
Structural health monitoring is based on output-only vibration measurements acquired by a sensor network. In order to get an early warning of structural failure, high accuracy of measurement data is required. Empirical virtual sensing techniques can be applied to reduce the measurement error. The signal of each sensor can be estimated using the current sensor network data. Due to hardware redundancy, the estimate is more accurate than the actual measurement. Two different estimates are studied: (1) the minimum mean square error (MMSE) estimate, or the prior mean, and (2) the Bayesian estimate, or the posterior mean. The measurement data are replaced with the estimated data in a damage detection algorithm. Numerical simulations were performed for a structure subject to unknown random excitation. Damage detection was based on the data from virtual sensors. Both algorithms outperformed the reference method with no virtual sensing. Algorithm 2 gave better results than algorithm 1, but it requires that the measurement error be known, whereas algorithm 1 needs no additional information. [ABSTRACT FROM AUTHOR]
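As a toy illustration of the difference between the two estimates (a scalar Gaussian sketch under assumed numbers, not the paper's formulation): the prior/MMSE estimate is the prediction of a sensor's clean value from the rest of the network, while the Bayesian posterior mean additionally blends in the sensor's own noisy measurement using the known measurement-error variance.

```python
import numpy as np

def posterior_mean(prior_mean, prior_var, measurement, noise_var):
    """Posterior mean of a Gaussian variable given a noisy Gaussian measurement."""
    w = prior_var / (prior_var + noise_var)        # weight on the local measurement
    return prior_mean + w * (measurement - prior_mean)

rng = np.random.default_rng(3)
true_signal = 2.0
prior_mean, prior_var = 1.8, 0.2**2   # estimate 1: prediction from the other sensors
noise_var = 0.3**2                    # assumed known measurement-error variance
measurement = true_signal + rng.normal(0, np.sqrt(noise_var))

est1 = prior_mean                                                      # MMSE / prior mean
est2 = posterior_mean(prior_mean, prior_var, measurement, noise_var)   # Bayesian / posterior mean
print(f"measurement {measurement:.3f}, estimate 1 {est1:.3f}, estimate 2 {est2:.3f}")
```

Note that estimate 2 needs the noise variance as an input, which matches the abstract's remark that algorithm 2 requires the measurement error to be known.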
- Published
- 2017
- Full Text
- View/download PDF
43. The Evidential Basis of Decision Making in Plant Disease Management.
- Author
-
Hughes, Gareth
- Abstract
The evidential basis for disease management decision making is provided by data relating to risk factors. The decision process involves an assessment of the evidence leading to taking (or refraining from) action on the basis of a prediction. The primary objective of the decision process is to identify, at the time the decision is made, the control action that provides the best predicted end-of-season outcome, calculated in terms of revenue or another appropriate metric. Data relating to disease risk factors may take a variety of forms (e.g., continuous, discrete, categorical) on measurement scales in a variety of units. Log10-likelihood ratios provide a principled basis for the accumulation of evidence based on such data and allow predictions to be made via Bayesian updating of prior probabilities. [ABSTRACT FROM AUTHOR]
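As a worked sketch of this evidence-accumulation scheme (the risk factors and log10-likelihood ratios below are invented for illustration, not taken from the article): convert the prior probability to log10 odds, add the log10-likelihood ratio contributed by each observed risk factor, and convert back to a posterior probability before comparing predicted outcomes.

```python
import math

def posterior_prob(prior, log10_lrs):
    """Bayesian updating on the log10-odds scale."""
    log_odds = math.log10(prior / (1 - prior)) + sum(log10_lrs)
    odds = 10 ** log_odds
    return odds / (1 + odds)

prior_risk = 0.10                       # hypothetical prior probability of a damaging epidemic
evidence = {                            # hypothetical risk factors and their log10 likelihood ratios
    "susceptible cultivar": 0.40,
    "wet spring": 0.55,
    "low inoculum in autumn": -0.30,
}
p = posterior_prob(prior_risk, evidence.values())
print(f"posterior probability that treatment is needed: {p:.2f}")
```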
- Published
- 2017
- Full Text
- View/download PDF
44. Evaluation of water quality with application of Bayes' rule and entropy weight method.
- Author
-
Sahoo, Mrunmayee M., Patra, K.C., Swain, J.B., and Khatua, K.K.
- Subjects
- *
WATER quality monitoring , *POPULATION , *LIVESTOCK , *BAYES' theorem , *ENTROPY - Abstract
The quality of the river water has a significant impact on the human population and livestock in the basin area. Six water quality indicators are monitored at the gauging stations of the Brahmani River to assess changing trends in the quality of water. The conventional Aggregative Index Evaluation method is applied to determine the overall water quality index for its intended use. Further, in this study, Bayes' rule is applied for a comprehensive assessment. The likelihood estimates are obtained from the normal distribution and are then used for posterior probability calculation through Bayes' rule. Finally, the indicator weights are estimated by Shannon's entropy weight method. The analysis indicates that the indicator CODMn affects the quality of water most at Panposh downstream. Biological oxygen demand, TA as CaCO3, NH4–N and Nitrate-N are closely related to domestic pollution and agricultural non-point source pollution. The overall quality of water is better during dry seasons than during wet seasons due to the dilution of pollutants. Comprehensive evaluation indicates that the water is acceptable for second grade surface source protection zones for centralised drinking water. [ABSTRACT FROM PUBLISHER]
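A rough sketch of the Shannon entropy weight step mentioned above (the station data, indicator names and normalisation details are invented and may differ from the paper): indicators whose values vary more across monitoring stations have lower entropy and therefore receive larger weights.

```python
import numpy as np

def entropy_weights(X):
    """Shannon entropy weights for an (n_stations, n_indicators) data matrix."""
    P = X / X.sum(axis=0)                              # column-normalised proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    H = -plogp.sum(axis=0) / np.log(n)                 # entropy of each indicator, scaled to [0, 1]
    d = 1 - H                                          # degree of diversification
    return d / d.sum()

# Hypothetical measurements at 4 stations for 3 indicators (BOD, CODMn, NH4-N)
X = np.array([[2.1, 4.0, 0.30],
              [2.3, 6.5, 0.32],
              [2.2, 9.8, 0.31],
              [2.4, 3.1, 0.29]])
print(entropy_weights(X).round(3))   # CODMn varies most across stations, so it gets the largest weight
```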
- Published
- 2017
- Full Text
- View/download PDF
45. Modeling carrying capacity and management for monuments condition in tourism regions. Case study: Kangavar’s Anahita Temple
- Author
-
Faranak Seyf-o-dini and Mahmud Shurche
- Subjects
tourism carrying capacities ,adaptive ecosystem management ,bayes’ rule ,multiple attribute decision-making ,anahita temple ,Management. Industrial management ,HD28-70 ,Management of special enterprises ,HD62.2-62.8 - Abstract
Tourism locations are required to develop a general management plan that is consistent with visitor carrying capacities. This paper describes a carrying capacity modeling system that allows a region's tourism managers to quantitatively determine whether the current state of the region's tourism condition complies with those capacity standards. The modeling system uses an ex-post adaptive monuments management (AMM) model to determine whether the current state of a tourism region complies with the physical and social carrying capacities. The manager also requires knowledge of the tourism carrying capacity, which in this article is determined by the use of physical, real and effective carrying capacities. The multiple attribute scoring test of capacity (MASTEC) identifies the best management action for achieving compliance. The AMM model addresses potential errors that can occur when inferring a monument state from resource/social conditions, and it minimizes the likelihood of such decision errors by using Bayes' rule to determine the state of tourism regions. The MASTEC method allows a manager to identify the best management action for bringing an incompliant monument into compliance with carrying capacities. Combined with Limits of Acceptable Change and Visitor Impact Management, multiple attribute decision making maximizes the manager's expected utility function subject to stochastic carrying capacity. Anahita Temple was selected to showcase the usage of these models. Several factors limit the ability of tourism regions to implement the carrying capacity modeling system. Using a spatial decision support tool to implement the modeling system lessens some of these limitations.
- Published
- 2011
- Full Text
- View/download PDF
46. Bayesian probability estimates are not necessary to make choices satisfying Bayes’ rule in elementary situations
- Author
-
Artur Domurat, Olga Kowalczuk, Katarzyna Idzikowska, Zuzanna Borzymowska, and Marta Nowak-Przygodzka
- Subjects
Bayesian inference ,Heuristics ,choices ,ecological rationality ,Bayes’ rule ,Binary hypothesis ,Psychology ,BF1-990 - Abstract
This paper has two aims. First, we investigate how often people make choices conforming to Bayes' rule when natural sampling is applied. Second, we show that using Bayes' rule is not necessary to make choices satisfying Bayes' rule. Simpler methods, even fallacious heuristics, might prescribe correct choices reasonably often under specific circumstances. We considered elementary situations with binary sets of hypotheses and data. We adopted an ecological approach and prepared two-stage computer tasks resembling natural sampling. Probabilistic relations were to be inferred from a set of pictures, followed by a choice between the data, made to maximize the chance of a preferred outcome. Use of Bayes' rule was deduced indirectly from choices. Study 1 (N=60) followed a 2 (gender: female vs. male) x 2 (education: humanities vs. pure sciences) between-subjects factorial design with balanced cells, with the number of correct choices as the dependent variable. Choices satisfying Bayes' rule were dominant. To investigate ways of making choices more directly, we replicated Study 1, adding a task with a verbal report. In Study 2 (N=76), choices conforming to Bayes' rule dominated again. However, the verbal reports revealed use of a new, non-inverse rule, which always renders correct choices but is easier than Bayes' rule to apply. It does not require inverting conditions (transforming P(H) and P(D|H) into P(H|D)) when computing chances. Study 3 examined the efficiency of three fallacious heuristics (pre-Bayesian, representativeness, and evidence-only) in producing choices concordant with Bayes' rule. Computer-simulated scenarios revealed that the heuristics produce correct choices reasonably often under specific base rates and likelihood ratios. Summing up, we conclude that natural sampling leads to most choices conforming to Bayes' rule. However, people tend to replace Bayes' rule with simpler methods, and even the use of fallacious heuristics may be satisfactorily efficient.
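The paper defines its heuristics precisely; the sketch below only illustrates the general point with one assumed "evidence-only"-style rule (pick the datum more likely under the preferred hypothesis), which is a simplification adopted here rather than the authors' definition. Across random binary scenarios it agrees with the Bayes-rule choice often, but not always.

```python
import numpy as np

rng = np.random.default_rng(7)

def bayes_choice(p_h, p_d1_h, p_d1_not_h):
    """Pick the datum (1 or 2) with the higher posterior P(H | D)."""
    post_d1 = p_d1_h * p_h / (p_d1_h * p_h + p_d1_not_h * (1 - p_h))
    p_d2_h, p_d2_not_h = 1 - p_d1_h, 1 - p_d1_not_h
    post_d2 = p_d2_h * p_h / (p_d2_h * p_h + p_d2_not_h * (1 - p_h))
    return 1 if post_d1 >= post_d2 else 2

def evidence_only_choice(p_d1_h):
    """Assumed heuristic: pick the datum more likely under the preferred hypothesis."""
    return 1 if p_d1_h >= 0.5 else 2

trials, agree = 100_000, 0
for _ in range(trials):
    p_h, p_d1_h, p_d1_not_h = rng.random(3)   # random base rate and likelihoods
    agree += bayes_choice(p_h, p_d1_h, p_d1_not_h) == evidence_only_choice(p_d1_h)
print(f"heuristic agrees with the Bayes-rule choice in {agree / trials:.1%} of random scenarios")
```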
- Published
- 2015
- Full Text
- View/download PDF
47. The Fermi paradox, Bayes’ rule, and existential risk management.
- Author
-
Miller, James D. and Felton, D.
- Subjects
FERMI'S paradox ,STRATEGIC planning ,ARTIFICIAL intelligence ,FLOOD risk ,BAYESIAN analysis - Abstract
How should the Fermi paradox affect an estimate of humankind’s likelihood and best means of long-term survival? A significant probability that many other civilizations have been in our situation but failed to become spacefaring increases the probability that our optimal existential risk strategies are costly, likely to fail, likely to leave traces if they do fail, and might require talents that mankind has but that other scientifically advanced species lack. The Fermi paradox implies that we should seek scientific data based on astronomical observations not accessible to civilizations that lived in the distant past, and that we should create machines to flood our galaxy with radio signals conditional on our civilization’s collapse. Our ability to use Bayesian updating on the Fermi paradox reduces the chance that aliens exist but are hiding from us because of their desire to not interfere in our development: giving us a false understanding of the fate of intelligent life in the universe would cloud our understanding of existential risks. The paradox also provides clues as to types of trap that might destroy us. The possibility that our universe is fine-tuned not only for life but also for the Fermi paradox magnifies these results. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
48. PROVING HYPOTHESES USING THE BAYES FACTOR: EXAMPLES OF USE IN EMPIRICAL RESEARCH.
- Author
-
Domurat, Artur and Białek, Michał
- Subjects
BAYES' theorem ,NULL hypothesis ,INFERENTIAL statistics - Abstract
Copyright of Decyzje is the property of Decyzje and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2016
- Full Text
- View/download PDF
49. A Probability Distribution over Latent Causes, in the Orbitofrontal Cortex.
- Author
-
Chan, Stephanie C. Y., Niv, Yael, and Norman, Kenneth A.
- Subjects
- *
BAYES' theorem , *PREFRONTAL cortex , *SCHEMAS (Psychology) , *REINFORCEMENT learning , *EPISODIC memory - Abstract
The orbitofrontal cortex (OFC) has been implicated in both the representation of "state," in studies of reinforcement learning and decision making, and also in the representation of "schemas," in studies of episodic memory. Both of these cognitive constructs require a similar inference about the underlying situation or "latent cause" that generates our observations at any given time. The statistically optimal solution to this inference problem is to use Bayes' rule to compute a posterior probability distribution over latent causes. To test whether such a posterior probability distribution is represented in the OFC, we tasked human participants with inferring a probability distribution over four possible latent causes, based on their observations. Using fMRI pattern similarity analyses, we found that BOLD activity in the OFC is best explained as representing the (log-transformed) posterior distribution over latent causes. Furthermore, this pattern explained OFC activity better than other task-relevant alternatives, such as the most probable latent cause, the most recent observation, or the uncertainty over latent causes. [ABSTRACT FROM AUTHOR]
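A worked toy version of the inference problem described above (the numbers are illustrative, not taken from the study): with four latent causes, Bayes' rule turns a prior over causes and the likelihood of the current observation under each cause into the posterior distribution whose (log-transformed) pattern the authors compare with OFC activity.

```python
import numpy as np

prior = np.array([0.25, 0.25, 0.25, 0.25])        # four equally likely latent causes
likelihood = np.array([0.60, 0.20, 0.15, 0.05])   # hypothetical P(observation | cause)

posterior = prior * likelihood
posterior /= posterior.sum()                      # Bayes' rule: normalise prior x likelihood
log_posterior = np.log(posterior)                 # the (log-transformed) quantity compared to BOLD patterns

print(posterior.round(3), log_posterior.round(2))
```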
- Published
- 2016
- Full Text
- View/download PDF
50. Assessing forensic evidence by computing belief functions.
- Author
-
KERKVLIET, TIMBER and MEESTER, RONALD W. J.
- Subjects
- *
PROBABILITY theory , *DEMPSTER-Shafer theory , *FORENSIC sciences , *BAYES' theorem , *AXIOMS , *PRACTICE of law , *JURISPRUDENCE - Abstract
We first discuss certain problems with the classical probabilistic approach for assessing forensic evidence, in particular its inability to distinguish between lack of belief and disbelief, and its inability to model complete ignorance within a given population. We then discuss Shafer belief functions, a generalization of probability distributions, which can deal with both these objections. We use a calculus of belief functions which does not use the much criticized Dempster rule of combination, but only the very natural Dempster-Shafer conditioning. We then apply this calculus to some classical forensic problems like the various island problems and the problem of parental identification. If we impose no prior knowledge apart from assuming that the culprit or parent belongs to a given population (something which is possible in our setting), then our answers differ from the classical ones when uniform or other priors are imposed. We can actually retrieve the classical answers by imposing the relevant priors, so our set-up can and should be interpreted as a generalization of the classical methodology, allowing more flexibility. We show how our calculus can be used to develop an analogue of Bayes' rule, with belief functions instead of classical probabilities. We also discuss consequences of our theory for legal practice. [ABSTRACT FROM AUTHOR]
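The authors' calculus is not reproduced here; the sketch below only shows the basic objects on a small frame, assuming a standard mass-function representation: belief and plausibility derived from a mass assignment, and conditioning on an event via Dempster's rule of conditioning (equivalently Pl(A|B) = Pl(A and B) / Pl(B)), which is one common reading of Dempster-Shafer conditioning and may differ in detail from the paper's.

```python
def belief(m, A):
    """bel(A) = sum of masses of non-empty subsets of A."""
    return sum(v for B, v in m.items() if B and B <= A)

def plausibility(m, A):
    """pl(A) = sum of masses of sets intersecting A."""
    return sum(v for B, v in m.items() if B & A)

def condition(m, B):
    """Dempster's rule of conditioning: combine m with certainty that B occurred."""
    new = {}
    for C, v in m.items():
        A = C & B
        if A:
            new[A] = new.get(A, 0.0) + v
    z = sum(new.values())                      # equals plausibility(m, B)
    return {A: v / z for A, v in new.items()}

# Toy frame: three possible culprits; some mass stays on the whole frame (ignorance)
frame = frozenset({"s1", "s2", "s3"})
m = {frozenset({"s1"}): 0.3, frozenset({"s2", "s3"}): 0.2, frame: 0.5}

A = frozenset({"s1"})
print(belief(m, A), plausibility(m, A))        # 0.3 and 0.8: lack of belief is not disbelief
m_cond = condition(m, frozenset({"s1", "s2"})) # learn that the culprit is s1 or s2
print(belief(m_cond, A), plausibility(m_cond, A))
```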
- Published
- 2016
- Full Text
- View/download PDF