285 results
Search Results
2. A proposal for a Riemannian face space and application to atypical vs. typical face similarities.
- Author
  Townsend, James T., Fu, Hao-Lun, Hsieh, Cheng-Ju, and Yang, Cheng-Ta
- Subjects
  NON-Euclidean geometry, RIEMANNIAN manifolds, FACE perception, RIEMANNIAN geometry, VECTOR spaces
- Abstract
• We introduce Riemannian Face Manifolds (RFM) for face geometry. • RFM shifts focus to non-Euclidean geometries in face study. • We assert that RFM is a powerful tool for studying face perception and recognition. Two intriguing papers of the late 1990s and early 2000s by J. Tanaka and colleagues put forth the hypothesis that a repository of face memories can be viewed as a vector space where points in the space represent faces and each of these is surrounded by an attractor field. This hypothesis broadens the thesis of T. Valentine that face space is constituted of feature vectors in a finite-dimensional vector space (e.g., Valentine, 2001). The attractor fields in the atypical part of face space are broader and stronger than those in typical face regions. This notion makes the substantiated prediction that a morphed midway face between a typical and an atypical parent will be perceptually more similar to the atypical face. We propose an alternative interpretation that takes a more standard geometrical approach but also departs from the popular types of metrics assumed in almost all multidimensional scaling studies. Rather, we propose a theoretical structure based on our earlier investigations of non-Euclidean and, especially, Riemannian face manifolds (e.g., Townsend, Solomon, & Spencer-Smith, 2001). We assert that this approach avoids some of the issues involved in the gradient theme by working directly with the type of metric inherently associated with the face space. Our approach marks a shift towards a greater emphasis on non-Euclidean geometries, especially Riemannian manifolds, integrating these geometric concepts with processing-oriented modeling. We note that while fields like probability theory, stochastic process theory, and mathematical statistics are commonly studied in mathematical psychology, there is less focus on areas like topology, non-Euclidean geometry, and functional analysis. Therefore, both to elevate comprehension and to promote the latter topics as critical for our present and future enterprises, our exposition moves forward in a highly tutorial fashion, and we embed the material in its proper historical context. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Lexicographic Majority.
- Author
  Petri, Henrik
- Subjects
  BOUNDED rationality, HEURISTIC
- Abstract
This paper explores a relationship between lexicographic and majority preferences as a novel explanation of preference cycles in choice. Already May (1954) noted that, among subjects in his experiment who did not display a (majority) preference cycle, a vast majority ordered alternatives according to an attribute that they found overridingly important, suggesting that a lexicographic heuristic was used. Our model, Lexicographic Majority, reconciles these findings by providing a unified framework for lexicographic and simple majority preferences. We justify lexicographic majority preferences by providing an axiomatization in terms of behavioral properties. • A model called Lexicographic Majority (LM) is proposed and axiomatized. • Preference cycles arise endogenously within the LM framework. • We characterize LM as a subclass of weighted majority preferences. [ABSTRACT FROM AUTHOR]
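For readers unfamiliar with majority cycles, a minimal illustration (hypothetical voters, not data from the paper): three voters ranking alternatives a, b, c by pairwise majority produce a strict preference that cycles.

```python
# Three voters' rankings (best to worst) over alternatives a, b, c.
rankings = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

for x, y in [("a", "b"), ("b", "c"), ("c", "a")]:
    print(x, ">", y, ":", majority_prefers(x, y))
# All three print True: a > b > c > a, a majority preference cycle.
```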
- Published
- 2024
- Full Text
- View/download PDF
4. On the (non-) reliance on algorithms—A decision-theoretic account.
- Author
  Sinclair-Desgagné, Bernard
- Subjects
  AMBIGUITY, RECOMMENDER systems, CONTROL (Psychology), DECISION making, ALGORITHMS, INFORMATION resources management, AVERSION
- Abstract
A wealth of empirical evidence shows that people display opposite behaviors when deciding whether to rely on an algorithm, even if it is inexpensive to do so and using the algorithm should enhance their own performance. This paper develops a formal theory to explain some of these conflicting facts and submit new testable predictions. Drawing from decision analysis, I invoke two key notions: the 'value of information' and the 'value of control'. The value of information matters to users of algorithms like recommender systems and prediction machines, which essentially provide information. I find that ambiguity aversion or a subjective cost of employing an algorithm will tend to decrease the value of algorithmic information, while repeated exposure to an algorithm might not always increase this value. The value of control matters to users who may delegate decision making to an algorithm. I model how, under partial delegation, imperfect understanding of what the algorithm actually does (so the algorithm is in fact a black box) can cause algorithm aversion. Some possible remedies are formulated and discussed. • This paper initiates a formal decision-theoretic approach to make sense of the empirical evidence concerning people's attitudes towards algorithms. • This approach exploits two fundamental notions: the value of information and the value of control. • Ambiguity aversion will tend to decrease the value of algorithmic information; repeated exposure to algorithms may not increase it. • A first model of 'black box' algorithms is developed to analyze the value of keeping versus delegating control. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Two peas in a pod: Discounting models as a special case of the VARMAX.
- Author
  Vanhasbroeck, Niels, Loossens, Tim, and Tuerlinckx, Francis
- Subjects
  AUTOREGRESSIVE models, DYNAMICAL systems, TIME series analysis, FEASIBILITY studies, RESEARCH personnel
- Abstract
In this paper, we establish a formal connection between two dynamic modeling approaches that are often taken to study affect dynamics. More specifically, we show that the exponential discounting model can be rewritten as a special case of the VARMAX, thereby shedding light on the underlying similarities and assumptions of the two models. This derivation has some important consequences for research. First, it allows researchers who use discounting models in their studies to use the tools established within the broader time series literature to evaluate the applicability of their models. Second, it lays bare some of the implicit restrictions discounting models put on their parameters and, therefore, provides a foundation for empirical testing and validation of these models. One of these restrictions concerns the exponential shape of the discounting function that is often assumed in the affect dynamics literature. As an alternative, we briefly introduce the quasi-hyperbolic discounting function. • When investigating dynamical systems, it is important to understand how different models relate to each other. • We analytically demonstrate that discounting models are nested within autoregressive models. • This demonstration exposes some assumptions of the discounting model regarding the dynamic structure of the data. • These assumptions should be tested in empirical studies to assess the viability of discounting models. • One assumption to test is exponential decay, for which we propose a quasi-hyperbolic alternative. [ABSTRACT FROM AUTHOR]
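A minimal numerical sketch of the nesting claim, under the assumption that the discounting model is the standard exponentially weighted sum of past inputs (the paper's exact formulation may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)   # external inputs (e.g., affect-relevant events)
w = 0.3                    # discounting weight; hypothetical value

def discounted(x, w):
    """Exponential discounting: a weighted sum of past inputs with
    weights decaying exponentially with lag."""
    a = np.zeros_like(x)
    for t in range(len(x)):
        lags = np.arange(t + 1)
        a[t] = np.sum(w * (1 - w) ** lags * x[t - lags])
    return a

def arx(x, w):
    """The same series written as a first-order autoregression with
    exogenous input (an ARX(1), i.e., a special case of the VARMAX)."""
    a = np.zeros_like(x)
    for t in range(len(x)):
        a[t] = (1 - w) * (a[t - 1] if t > 0 else 0.0) + w * x[t]
    return a

print(np.allclose(discounted(x, w), arx(x, w)))  # True
```

The two functions produce identical series, which is the sense in which exponential discounting is a parameter-restricted case of an autoregressive model with exogenous input.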
- Published
- 2024
- Full Text
- View/download PDF
6. How averaging individual curves transforms their shape: Mathematical analyses with application to learning and forgetting curves.
- Author
  Murre, Jaap M.J.
- Subjects
  MATHEMATICAL analysis, DISTRIBUTION (Probability theory), LOGARITHMIC functions, EXPONENTIAL functions, CURVES
- Abstract
This paper demonstrates how averaging over individual learning and forgetting curves gives rise to transformed averaged curves. In an earlier paper (Murre and Chessa, 2011), we already showed that averaging over exponential functions tends to give a power function. The present paper expands on the analyses with exponential functions. Also, it is shown that averaging over power functions tends to give a log power function. Moreover, a general proof is given of how averaging over logarithmic functions retains that shape in a specific manner. The analyses assume that the learning rate has a specific statistical distribution, such as a beta, gamma, uniform, or half-normal distribution. Shifting these distributions to the right, so that there are no low learning rates (censoring), is analyzed as well and some general results are given. Finally, geometric averaging is analyzed, and its limits are discussed in remedying averaging artefacts. • Averaging over individual learning (or forgetting, etc.) curves changes their shape, where exponential functions may change into power functions and power functions into log power functions. • Averaging over logarithmic functions retains the shape. • For exponential and power functions, geometric averaging retains the shape. • Geometric averaging over logarithmic functions does not retain that shape. [ABSTRACT FROM AUTHOR]
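The exponential-to-power transformation can be checked directly: if individual forgetting curves are exp(−at) with gamma-distributed rates a, the exact average is the power-like function (1 + θt)^(−k), the Laplace transform of the gamma. A small simulation, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
k, theta = 2.0, 0.5                      # gamma shape and scale (illustrative)
a = rng.gamma(k, theta, size=100_000)    # individual learning/forgetting rates

# Average many individual exponential curves exp(-a*t).
avg_curve = np.exp(-np.outer(a, t)).mean(axis=0)

# The exact mixture is a power-type function of t.
power_curve = (1 + theta * t) ** (-k)

print(np.max(np.abs(avg_curve - power_curve)))  # ~1e-3: close agreement
```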
- Published
- 2023
- Full Text
- View/download PDF
7. True contextuality in a psychophysical experiment.
- Author
  Cervantes, Víctor H. and Dzhafarov, Ehtibar N.
- Subjects
  HUMAN behavior, VISUAL perception, STATISTICAL reliability, RANDOM variables, QUANTUM mechanics, HIGH-dimensional model representation
- Abstract
Recent crowdsourcing experiments have shown that true contextuality of the kind found in quantum mechanics can also be present in human behavior. In these experiments, simple human choices were aggregated over large numbers of respondents, with each respondent dealing with a single context (set of questions asked). In this paper, we present experimental evidence of contextuality in individual human behavior, in a psychophysical experiment with repeated presentations of visual stimuli in randomly varying contexts (arrangements of stimuli). The analysis is based on the Contextuality-by-Default (CbD) theory whose relevant aspects are reviewed in the paper. CbD allows one to detect contextuality in the presence of direct influences, i.e., when responses to the same stimuli have different distributions in different contexts. The experiment presented is also the first one in which contextuality is demonstrated for responses that are not dichotomous, with five options to choose among. CbD requires that random variables representing such responses be dichotomized before they are subjected to contextuality analysis. A theorem says that a system consisting of all possible dichotomizations of responses has to be contextual if these responses violate a certain condition, called nominal dominance. In our experiment, nominal dominance was violated in all data sets, with very high statistical reliability established by bootstrapping. • Contextuality is established in an experiment with a within-subject design. • The choices made in this experiment were 5-optional. • The analysis is based on the Contextuality-by-Default theory. • A necessary condition for noncontextuality of such systems is nominal dominance. • This condition was violated for all three participants. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
8. Assessment structures in psychological testing.
- Author
  Heller, Jürgen
- Subjects
  PSYCHOLOGICAL tests, DATA structures, DYNAMIC testing, SPACE (Architecture)
- Abstract
Based on the notion of an assessment structure introduced by Falmagne (2015), the present paper characterizes the so-called quasi ordinal assessment spaces. It establishes a one-to-one correspondence with certain binary relations on an extended item set, containing a positive instance and a negative instance for each of the items. This result allows for building quasi ordinal assessment spaces from data through a generalized version of a well-established procedure known as item tree analysis. Analyzing data from psychological testing constitutes a prototypical application that calls for this generalization of knowledge spaces, as it allows for describing dependencies between positive and negative answers to the items in a questionnaire. Empirical application of the approach is illustrated for a published data set on the Woodworth Psychoneurotic Inventory. • Assessment structures generalize knowledge structures. • They form structures appropriate for application in psychological diagnostics. • The paper provides a theoretical basis for building these structures from data. • The development focuses on quasi ordinal assessment structures. • The main result generalizes the well-known Birkhoff Theorem. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
9. A geometric framework for modeling dynamic decisions among arbitrarily many alternatives.
- Author
  Kvam, Peter D.
- Subjects
  GEOMETRIC modeling, GEOMETRIC approach, DYNAMIC models, DECISION making, RANDOM walks
- Abstract
Multiple-choice and continuous-response tasks pose a number of challenges for models of the decision process, from empirical challenges like context effects to computational demands imposed by choice sets with a large number of outcomes. This paper develops a general framework for constructing models of the cognitive processes underlying both inferential and preferential choice among any arbitrarily large number of alternatives. This geometric approach represents the alternatives in a choice set along with a decision maker's beliefs or preferences in a "decision space," simultaneously capturing the support for different alternatives along with the similarity relations between the alternatives in the choice set. Support for the alternatives (represented as vectors) shifts over time according to the dynamics of the belief/preference state (represented as a point) until a stopping rule is met (state crosses a hyperplane) and the corresponding selection is made. This paper presents stopping rules that guarantee optimality in multi-alternative inferential choice, minimizing response time for a desired level of accuracy, as well as methods for constructing the decision space, which can result in context effects when applied to preferential choice. [ABSTRACT FROM AUTHOR]
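A toy sketch of the framework's core loop, with hypothetical alternative directions and a hypothetical threshold (not the paper's construction of the decision space): alternatives are vectors, the belief/preference state is a point that drifts noisily, and a choice is made when the state crosses an alternative's hyperplane.

```python
import numpy as np

rng = np.random.default_rng(7)
# Alternatives as unit vectors in a 2D "decision space" (hypothetical layout).
alts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
threshold = 2.0
state = np.zeros(2)

# Accumulate noisy evidence; stop when the state's projection onto some
# alternative's direction crosses the hyperplane dot(a, state) = threshold.
while np.max(alts @ state) < threshold:
    state += np.array([0.06, 0.02]) + 0.3 * rng.normal(size=2)
print("choice:", int(np.argmax(alts @ state)), "state:", state)
```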
- Published
- 2019
- Full Text
- View/download PDF
10. Safeguarding against bad luck when attempting to discredit a state-trace model.
- Author
  Bamber, Donald
- Subjects
  FORTUNE, INDEPENDENT variables
- Abstract
A state-trace model for a natural phenomenon proposes that there is a causal "bottleneck" that makes the joint causal effects of two independent variables look like the effect of a single independent variable. Imagine that a state-trace model has been formulated, but the model seems implausible and we wish to find empirical evidence that it is wrong. How can we do that? Given reasonable monotonicity assumptions, a state-trace model is wrong if a point on one state trace is delta-discordant with a point on another state trace. [We say that two points (x, y) and (x′, y′) in the plane are delta-discordant if Δx = x′ − x and Δy = y′ − y are nonzero and have opposite signs.] So, to show the model wrong, we need to find a pair of delta-discordant points lying on different state traces. One method for finding two such points is to observe multiple points on each of two state traces with the hope that, among those points, there will be a delta-discordant pair. But this approach can have bad luck and fail to find delta-discordant pairs even though they exist. A new approach to "locating" a delta-discordant pair of points, an approach that is much less dependent on luck, is formulated in this paper. In addition, the paper describes a statistical test for evaluating whether the "located" points are really delta-discordant. • Imagine that we want to discredit a particular state-trace model. • We can do that by finding delta-discordant points on different state traces. • But, even if such points exist, we may fail to find them. • We need a systematic method for finding delta-discordant points. • This paper describes such a method. [ABSTRACT FROM AUTHOR]
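The bracketed definition translates directly into code; a minimal checker:

```python
def delta_discordant(p, q):
    """p and q are points (x, y); True if the coordinate differences
    dx and dy are nonzero and have opposite signs."""
    dx = q[0] - p[0]
    dy = q[1] - p[1]
    return dx * dy < 0  # negative product <=> nonzero, opposite signs

print(delta_discordant((1.0, 2.0), (2.0, 1.5)))  # True: dx > 0, dy < 0
print(delta_discordant((1.0, 2.0), (2.0, 3.0)))  # False: both increase
```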
- Published
- 2019
- Full Text
- View/download PDF
11. Mathematical self-determination theory II: Affine space representation.
- Author
  Ünlü, Ali
- Subjects
  SELF-determination theory, REPRESENTATION theory, ISOMORPHISM (Mathematics), AFFINE algebraic groups
- Abstract
Self-determination theory is a well-established theory of motivation. This theory provides fundamental concepts related to human motivation, including self-determination. The mathematization of this theory has been envisaged in a series of two papers by the author. The first paper, entitled "Mathematical self-determination theory I: Real representation," addressed the representation of the theory in reals. This second paper continues it. The representation of the first part allows us to abstract the results in more general mathematical structures, namely, affine spaces. The simpler real representation is reobtained as a special instance. We take convexity as the pivotal starting point to generalize the whole exposition and represent self-determination theory in abstract affine spaces. This includes the affine space analogs of the notions of internal locus, external locus, and impersonal locus, of regulated and graded motivation, and of self-determination. We also introduce polar coordinates in Euclidean affine motivation spaces to study self-determination on radial and angular line segments. We prove the distributivity of the lattice of general self-determination in the affine space formulation. The representation in an affine space is free in the choice of primitives. However, the different representations, in reals or affine, are shown to be unique up to canonical isomorphism. The aim of this paper is to extend the results obtained in the first paper, thereby further laying the mathematical foundations of self-determination motivation theory. • Mathematization of self-determination theory. • Combinatorial definitions and characterizations of self-determination. • Algebraic self-determination. • Self-determination theory in reals. • Self-determination theory in affine space. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Mathematical self-determination theory I: Real representation.
- Author
  Ünlü, Ali
- Subjects
  SELF-determination theory, EXTRINSIC motivation, INTRINSIC motivation
- Abstract
In two parts, MSDT1 (this paper) and MSDT2 (the follow-up paper), we treat the topic of mathematical self-determination theory. MSDT1 considers the real representation, MSDT2 the affine space representation. The aim of the two papers is to lay the mathematical foundations of self-determination motivation theory. Self-determination theory, proposed by Deci and Ryan, is a popular theory of motivation. The fundamental concepts are extrinsic and intrinsic motivation, amotivation, their type of regulation, locus of causality, and, especially, self-determination. First, we give a geometric description of its concepts for the regulated case (no amotivation), as the unit 1-simplex. Thereby, we derive a symmetric definition of self-determination. Second, we extend the geometric description to the regulated and unregulated case, based on a more general ternary model, in internal motivation, external motivation, and amotivation. We define gradations of amotivation (and motivation), as 1-simplexes parallel to the unit 1-simplex. The ternary representation implies the types of strong, weak, and general self-determination, as partial orders on the motivation space. Third, we study the order, lattice, and algebraic properties of self-determination. In a version of polar coordinates, strong self-determination turns out to be a complete lattice on angular line segments, weak self-determination is a complete lattice on radial line segments, and general self-determination entails a complete lattice on the entire motivation space. In addition, the modified polar coordinates are employed to obtain necessary and sufficient conditions for strong, weak, and general self-determination. We propose measures for the strength of an ordinal dependency in self-determination, which are partial metrics on the motivation space. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. A new type of polytomous surmise system.
- Author
  Wang, Bo, Li, Jinjin, Chen, Zhuoheng, Xu, Bochi, and Xie, Xiaoxian
- Subjects
  COLLECTIONS
- Abstract
Doignon and Falmagne (1985) introduced a surmise system, which generalized the precedence relation, allowing multiple possible learning paths for an item. Heller (2021) took into account precedence relations on an extended set of (virtual) items and further generalized quasi-ordinal knowledge spaces to polytomous items. Wang et al. (2022) proposed the CD-polytomous knowledge space and provided its corresponding polytomous surmise system. Following these developments and drawing upon the so-called extended polytomous knowledge structure, this paper presents two concepts: the weak polytomous structure and the extended surmise system. By setting up a Galois connection, a one-to-one correspondence is established between the collection of all extended surmise functions and the collection of certain weak polytomous structures. This paper also comprehensively discusses the relationships among the precedence relations, the polytomous surmise systems, and the extended surmise systems. • A new type of polytomous surmise system is proposed. • The concept of weak polytomous knowledge structure is provided. • The Galois connection is established. • The concepts associated with the extended surmise system are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Exploring well-gradedness in polytomous knowledge structures.
- Author
  Wang, Bo and Li, Jinjin
- Subjects
  STRUCTURAL analysis (Engineering), PROGRESSIVE collapse, THEORY of knowledge, FACTORIAL experiment designs
- Abstract
Well-gradedness plays a crucial role in knowledge structure theory, establishing a systematic and progressive knowledge system that enhances learning effectiveness and comprehension. Extensive research has been conducted in this domain, resulting in significant findings. This paper explores the properties of well-gradedness in polytomous knowledge structures, shedding light on both classical confirmations and exceptional cases. A key characteristic of well-gradedness is the presence of adjacent elements within a non-empty family that exhibit a distance of 1. The study investigates various manifestations of well-gradedness, including its discriminative properties and its manifestation in discriminative factorial polytomous structures. Furthermore, intriguing deviations from classical standards in minimal polytomous states are uncovered, revealing unexpected behaviors. • Provide a characterization of well-gradedness for general families. • Introduce the concept of strong discriminative property. • Manifestation of well-gradedness in polytomous knowledge structures. • Demonstrate that factorial polytomous structures are well-graded. [ABSTRACT FROM AUTHOR]
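As a point of reference, the classical (dichotomous) notion that the paper generalizes can be sketched as follows: a family of states is well-graded if any two states are joined by a chain inside the family whose successive members differ by exactly one item (a tight path). Illustrative code for the dichotomous case only, not the polytomous construction itself:

```python
from itertools import combinations

def dist(a, b):
    """Symmetric-difference distance between two states (sets of items)."""
    return len(a ^ b)

def tight_path(k, l, family):
    """True if a chain k = K0, ..., Kn = l exists in the family with
    successive distances 1 and total length n = dist(k, l)."""
    if k == l:
        return True
    return any(dist(k, n) == 1 and dist(n, l) == dist(k, l) - 1
               and tight_path(n, l, family) for n in family)

def well_graded(family):
    family = [frozenset(s) for s in family]
    return all(tight_path(k, l, family) for k, l in combinations(family, 2))

print(well_graded([frozenset(), frozenset({1}), frozenset({1, 2})]))  # True
print(well_graded([frozenset(), frozenset({1, 2})]))  # False: gap of size 2
```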
- Published
- 2024
- Full Text
- View/download PDF
15. Quantum like modeling of decision making: Quantifying uncertainty with the aid of Heisenberg–Robertson inequality.
- Author
  Bagarello, Fabio, Basieva, Irina, Pothos, Emmanuel M., and Khrennikov, Andrei
- Subjects
  QUANTUM theory, MATHEMATICAL models of decision making, HEISENBERG uncertainty principle, MATHEMATICAL inequalities, HERMITIAN operators
- Abstract
This paper contributes to quantum-like modeling of decision making (DM) under uncertainty through application of Heisenberg’s uncertainty principle (in the form of the Robertson inequality). In this paper we apply this instrument to quantify uncertainty in DM performed by quantum-like agents. As an example, we apply the Heisenberg uncertainty principle to the determination of the mutual interrelation of uncertainties for “incompatible questions” of the kind asked in political opinion polls. We also consider the problem of representation of decision problems, e.g., in the form of questions, by Hermitian operators, commuting and noncommuting, corresponding to compatible and incompatible questions respectively. Our construction unifies the two different situations (compatible versus incompatible mental observables) by means of a single Hilbert space and of a deformation parameter which can be tuned to describe these opposite cases. One of the main foundational consequences of this paper for cognitive psychology is the formalization of the mutual uncertainty about incompatible questions with the aid of Heisenberg’s uncertainty principle, implying the mental state dependence of (in)compatibility of questions. [ABSTRACT FROM AUTHOR]
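The Robertson inequality σ_A σ_B ≥ |⟨[A, B]⟩|/2 that the paper builds on can be verified numerically for any pair of Hermitian observables and any state; a small check with Pauli matrices (generic quantum mechanics, not the paper's opinion-poll operators):

```python
import numpy as np

# Two noncommuting Hermitian "question" observables and a pure state.
X = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli Z
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)

def expval(A):
    return np.real(np.conj(psi) @ A @ psi)

def std(A):
    return np.sqrt(expval(A @ A) - expval(A) ** 2)

comm = X @ Z - Z @ X
lhs = std(X) * std(Z)
rhs = 0.5 * abs(np.conj(psi) @ comm @ psi)
print(lhs, rhs, lhs >= rhs - 1e-12)  # the Robertson bound holds
```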
- Published
- 2018
- Full Text
- View/download PDF
16. Quantum effect logic in cognition.
- Author
  Jacobs, Bart
- Subjects
  COGNITION, QUANTUM logic, SENSORY perception, PSYCHOLOGY, MATHEMATICAL reformulation
- Abstract
This paper illustrates applications of a new, modern version of quantum logic in quantum cognition. The new logic uses ‘effects’ as predicates, instead of the more restricted interpretation of predicates as projections, which has been used so far in this area. Effect logic involves states and predicates, validity and conditioning, and also state and predicate transformation via channels. The main aim of this paper is to demonstrate the usefulness of this effect logic in quantum cognition, via many high-level reformulations of standard examples. The usefulness of the logic is greatly increased by its implementation in the programming language Python. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
17. A variation of the cube model for best–worst choice.
- Author
  Mallahi-Karai, Keivan and Diederich, Adele
- Subjects
  WIENER processes, CUBES, GEOMETRIC modeling, AXIOMS
- Abstract
In this paper, we propose a dynamical model for the best–worst choice task. The proposed model is a modification of the so-called multi-episode Cube model proposed and studied in Mallahi-Karai and Diederich (2019, 2021). This model postulates that the best–worst choice (or, more generally, ranking) task is the outcome of sequential choices made in a number of episodes. The underlying model is a multivariate Wiener process with drift issued from a point in the unit cube, where episodes are defined in terms of a sequence of stopping times. This model can also be extended to an attention-switching framework in a standard way. • Dynamical model for best–worst scaling. • Partial and complete rankings. • Variation of the Cube model for multiple alternatives. • Multivariate Wiener process with drift in higher dimensions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. On delineating forward- and backward-graded knowledge structures from fuzzy skill maps.
- Author
  Xu, Bochi, Li, Jinjin, Sun, Wen, and Wang, Bo
- Subjects
  THEORY of knowledge
- Abstract
Forward-graded and backward-graded structures of knowledge are two important classes of knowledge structures. Spoto and Stefanutti (2020) establish necessary and sufficient conditions for skill maps to delineate these structures. We introduce fuzzy skills to describe varying levels of proficiency in skills and extend the theoretical results of Spoto and Stefanutti (2020) for delineating forward- and backward-graded knowledge structures using fuzzy skill maps. The paper establishes necessary and sufficient conditions for fuzzy skill maps to delineate a backward-graded simple closure space, a forward-graded knowledge space, and a forward-graded simple closure space. Furthermore, the competence-based local independence model (CBLIM) with fuzzy skills is introduced and its unidentifiability is discussed. • Generalize the results of Spoto and Stefanutti (2020) to the field of fuzzy skills. • Find the necessary and sufficient conditions for BG and FG closure spaces. • Find the necessary and sufficient conditions for FG knowledge spaces. • Discuss the unidentifiability of CBLIM with fuzzy skills. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. Structure of single-peaked preferences.
- Author
  Karpov, Alexander
- Subjects
  STATISTICAL sampling, CIRCLE, OPTICAL disks
- Abstract
The paper studies a variety of domains of preference orders that are closely related to single-peaked preferences. We develop recursive formulas for the number of single-peaked preference profiles and the number of preference profiles that are single-peaked on a circle. The number of Arrow's single-peaked preference profiles is found for three, four, and five alternatives. Random sampling applications are discussed. For restricted tier preference profiles, a forbidden subprofiles characterization and an exact enumeration formula are obtained. It is also shown that each of Fishburn's preference profiles is single-peaked on a circle, and that Fishburn's preference profiles cannot be characterized by forbidden subprofiles. • The number of single-peaked, single-peaked on a circle, and Arrow's single-peaked preference profiles is calculated. • Restricted tier preference profiles are characterized via forbidden subprofiles. • It is shown that Fishburn's domain is a subset of the single-peaked on a circle domain. [ABSTRACT FROM AUTHOR]
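For intuition on the counting, here is a small brute-force check of the standard fact that the number of linear orders single-peaked on a fixed axis of m alternatives is 2^(m−1); the paper's recursive formulas count profiles, i.e., tuples of such orders:

```python
from itertools import permutations

axis = (1, 2, 3, 4)  # a fixed left-to-right arrangement of alternatives

def single_peaked(order, axis):
    """True if, for every k, the top-k alternatives of `order`
    occupy consecutive positions on `axis` (form an interval)."""
    pos = {a: i for i, a in enumerate(axis)}
    for k in range(1, len(order) + 1):
        idx = sorted(pos[a] for a in order[:k])
        if idx[-1] - idx[0] != k - 1:
            return False
    return True

count = sum(single_peaked(o, axis) for o in permutations(axis))
print(count)  # 8 = 2**(4-1): the classic count of single-peaked orders
```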
- Published
- 2023
- Full Text
- View/download PDF
20. A necessary and sufficient condition for unique skill assessment.
- Author
  Heller, Jürgen, Anselmi, Pasquale, Stefanutti, Luca, and Robusto, Egidio
- Subjects
  KNOWLEDGE base, CORE competencies, PROBLEM solving, PROBLEM sets (Education), PROBLEM solving ability testing
- Abstract
The skill-based extension of the theory of knowledge structures forms the framework for addressing the problem of whether it is possible to uniquely assess the skills underlying the solution behavior exhibited on some set of items. Technically speaking, the paper strives for characterizing the so-called conjunctive skill functions, assigning to each item a subset of skills sufficient for solving it, that allow for singling out a unique state of a given competence structure. While previously proposed properties turn out to be either sufficient, or necessary within this setting, the paper provides a necessary and sufficient condition. Possible extensions of this characterization to more general skill functions are discussed. The conclusions cover suggestions on how to extend a test so that it allows for unique skill assessment, as well as implications for the identifiability of probabilistic models defined on top of the deterministic framework. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
21. The use of action functionals within the quantum-like paradigm.
- Author
  Haven, Emmanuel and Khrennikov, Andrei
- Subjects
  FUNCTIONALS, QUANTUM theory, FINANCIAL planning, DECISION making, ARBITRAGE
- Abstract
Arbitrage is a key concept in the theory of asset pricing and it plays a crucial role in financial decision making. The concept of the curvature of so-called ‘fibre bundles’ can be used to define arbitrage. The concept of ‘action’ can play an important role in the definition of arbitrage. In this paper, we connect the probabilities emerging from a (non)zero linear action with so-called risk-neutral probabilities. The paper also shows how arbitrage/non-arbitrage can be well defined within a quantum-like paradigm. We also briefly discuss the behavioural dimension of arbitrage. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
22. A step-by-step tutorial on using the cognitive architecture ACT-R in combination with fMRI data.
- Author
  Borst, Jelmer P. and Anderson, John R.
- Subjects
  COGNITIVE ability, FUNCTIONAL magnetic resonance imaging, MAGNETIC resonance imaging of the brain, STATISTICAL correlation, SOURCE code
- Abstract
The cognitive architecture ACT-R is at the same time a psychological theory and a modeling framework for constructing cognitive models that adhere to the principles of the theory. ACT-R can be used in combination with fMRI data in two different ways: (1) fMRI data can be used to evaluate and constrain models in ACT-R by means of predefined Region-of-Interest (ROI) analysis, and (2) predictions from ACT-R models can be used to locate neural correlates of model processes and representations by means of model-based fMRI analysis. In this paper we provide a step-by-step tutorial on both approaches. Note that this tutorial teaches neither the ACT-R theory in any detail nor fMRI analysis; instead, it explains how ACT-R can be used in combination with fMRI data. To this end, we provide all data and computer code necessary to run the ACT-R model, carry out the analyses, and recreate the figures in the paper. As an example dataset we use a relatively simple algebra task. In the first section, we develop an ACT-R model of this task and fit it to behavioral data. In the second section, we apply a predefined ROI-analysis to evaluate the model using fMRI data. In the third section, we use model-based fMRI analysis to locate the following processes in the brain: retrieval of mathematical facts from memory, working memory updates, motor responses, and visually encoding the problems. After working through this tutorial, the reader will have learned what can be achieved with the two different analysis methods and how they are conducted; the example code can then be adapted to a new dataset. [ABSTRACT FROM AUTHOR]
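As a flavor of the model-based approach, a minimal sketch with hypothetical numbers (not the tutorial's own code): a model-predicted demand function is convolved with a hemodynamic response function to form a regressor, which is then correlated with voxel time courses.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, shape=6.0, scale=1.0):
    """A simple gamma-shaped hemodynamic response function (illustrative)."""
    return gamma.pdf(t, shape, scale=scale)

dt = 0.1
t = np.arange(0, 30, dt)
demand = np.zeros_like(t)
demand[(t > 2) & (t < 3)] = 1.0  # hypothetical: retrieval module active 2-3 s
regressor = np.convolve(demand, hrf(t), mode="full")[: t.size] * dt

rng = np.random.default_rng(0)
bold = 0.8 * regressor + 0.1 * rng.normal(size=t.size)  # synthetic voxel signal
print(np.corrcoef(regressor, bold)[0, 1])  # a high correlation flags the voxel
```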
- Published
- 2017
- Full Text
- View/download PDF
23. On the correspondence between granular polytomous spaces and polytomous surmising functions.
- Author
  Ge, Xun
- Abstract
By modifying the concept of polytomous surmise functions, this paper introduces polytomous surmising functions. Then, it is shown that there is a one-to-one correspondence f between granular polytomous spaces and polytomous surmising functions, where polytomous surmising functions cannot be replaced with polytomous surmise functions. This result gives a correction for a correspondence between granular polytomous spaces and polytomous surmise functions. As an application of the correspondence f, this paper demonstrates that the pair (f, f⁻¹) of mappings forms a Galois connection where all granular polytomous spaces and all polytomous surmising functions are closed elements of this Galois connection. • The polytomous surmising function is introduced. • Set up a one-to-one correspondence defined on granular polytomous spaces. • Set up a Galois connection around granular polytomous spaces. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Meaningfulness as a “Principle of Theory Construction”.
- Author
  Falmagne, Jean-Claude and Doble, Christopher
- Subjects
  PYTHAGOREAN theorem, PSYCHOPHYSICS, BEER-Lambert law, MATHEMATICAL symmetry, FUNCTIONAL equations
- Abstract
In 1959, Duncan Luce published the famous paper entitled “On the possible psychophysical laws.” The results presented here were inspired by that paper and by the ensuing controversy centered on the invariance concept of “meaningfulness.” We give a formal definition of this concept. As we define it, the condition of meaningfulness is quite potent. In its context, relatively weak additional conditions may suffice for the derivation of precise scientific or geometric laws. We give several examples of such derivations, including the Pythagorean Theorem and Beer’s Law. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
25. What are we estimating when we fit Stevens’ power law?
- Author
  Bernasconi, Michele and Seri, Raffaello
- Subjects
  POWER law (Mathematics), MATHEMATICAL functions, NONLINEAR analysis, PERTURBATION theory, PSYCHOPHYSICS, REPRESENTATION (Psychoanalysis)
- Abstract
Estimates of Stevens’ power law model are often based on averaging over individuals of experiments conducted at the individual level. In this paper we suppose that each individual generates responses to stimuli on the basis of a model proposed by Luce and Narens, sometimes called the separable representation model, featuring two distinct perturbations, called the psychophysical and the subjective weighting function, that may differ across individuals. Exploiting the form of the estimator of the exponent of Stevens’ power law, we obtain an expression for this parameter as a function of the original two functions. The results presented in the paper help clarify several well-known paradoxes arising with Stevens’ power laws, including the range effect, i.e., the fact that the estimated exponent seems to depend on the range of the stimuli; the location effect, i.e., the fact that it depends on the position of the standard within the range; and the averaging effect, i.e., the fact that power laws seem to fit better data aggregated over individuals. Theoretical results are illustrated using data from papers of R. Duncan Luce. [ABSTRACT FROM AUTHOR]
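For context, the estimator in question is typically the slope of a log-log regression, since ψ = kφ^β is linear in logs; a minimal sketch with illustrative values (the paper asks what this slope actually estimates when responses follow a separable representation):

```python
import numpy as np

# Stevens' power law: psi = k * phi**beta. Taking logs makes it linear,
# so the exponent is usually estimated as the slope of a log-log regression.
rng = np.random.default_rng(2)
phi = np.array([10, 20, 40, 80, 160], dtype=float)  # stimulus intensities
beta_true, k_true = 0.6, 1.5                         # illustrative values
psi = k_true * phi ** beta_true * rng.lognormal(0, 0.05, size=phi.size)

slope, intercept = np.polyfit(np.log(phi), np.log(psi), 1)
print(slope)  # estimated exponent, close to 0.6
```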
- Published
- 2016
- Full Text
- View/download PDF
26. QTest 2.1: Quantitative testing of theories of binary choice using Bayesian inference.
- Author
  Zwilling, Christopher E., Cavagnaro, Daniel R., Regenwetter, Michel, Lim, Shiau Hong, Fields, Bryanna, and Zhang, Yixin
- Subjects
  GRAPHICAL user interfaces, DECISION theory, FREEWARE (Computer software), SOURCE code, PARALLEL programming
- Abstract
This stand-alone tutorial gives an introduction to the QTest 2.1 public domain software package for the specification and statistical analysis of certain order-constrained probabilistic choice models. Like its predecessors, QTest 2.1 allows a user to specify a variety of probabilistic models of binary responses and to carry out state-of-the-art frequentist order-constrained hypothesis tests within a Graphical User Interface (GUI). QTest 2.1 automatizes the mathematical characterization of so-called "random preference models", adds some parallel computing capabilities, and, most importantly, adds tools for Bayesian inference and model selection. In this tutorial, we provide an in-depth introduction to the Bayesian features: We review order-constrained Bayesian p-values, DIC, and Bayes factors, building on the data, models, and prior QTest-based frequentist data analyses of an earlier (frequentist) tutorial by Regenwetter et al. (2014). • This paper provides a tutorial for the NSF-funded open-access public-domain QTEST 2.1 software. • QTEST 2.1 provides frequentist and Bayesian order-constrained inference for models of binary responses. • Relative to QTEST 1.0, it adds Bayesian p-values, DIC, and Bayes factors for model fitting and model selection. • Version 2.1 automates the mathematical characterization of "random preference models." • QTEST 2.1 has stand-alone Graphical User Interface (GUI) and Matlab source code versions, with the latter offering more features. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
27. On universality of classical probability with contextually labeled random variables: Response to A. Khrennikov.
- Author
  Dzhafarov, Ehtibar N. and Kon, Maria
- Subjects
  RANDOM variables, PROBABILITY theory
- Abstract
In his constructive and well-informed commentary, Andrei Khrennikov acknowledges a privileged status of classical probability theory with respect to statistical analysis. He also sees advantages offered by the Contextuality-by-Default theory, notably, that it "demystifies quantum mechanics by highlighting the role of contextuality," and that it can detect and measure contextuality in inconsistently connected systems. He argues, however, that classical probability theory may have difficulties in describing empirical phenomena if they are described entirely in terms of observable events. We disagree: contexts in which random variables are recorded are as observable as the variables' values. Khrennikov also argues that the Contextuality-by-Default theory suffers the problem of non-uniqueness of couplings. We disagree that this is a problem: couplings are all possible ways of imposing counterfactual joint distributions on random variables that de facto are not jointly distributed. The uniqueness of modeling experiments by means of quantum formalisms brought up by Khrennikov is achieved for the price of additional, substantive assumptions. This is consistent with our view of quantum theory as a special-purpose generator of classical probabilities. Khrennikov raises the issue of "mental signaling," by which he means inconsistent connectedness in behavioral systems. Our position is that it is as inherent to behavioral systems as their stochasticity. • This paper is a reply to Andrei Khrennikov's critical commentary. • Khrennikov's commentary is constructive and well-informed. • Our responses serve to further elucidate the Contextuality-by-Default theory. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
28. Shrinkage priors for Bayesian penalized regression.
- Author
  van Erp, Sara, Oberski, Daniel L., and Mulder, Joris
- Subjects
  PARAMETER estimation, UNCERTAINTY (Information theory), DISTRIBUTION (Probability theory), REGRESSION analysis
- Abstract
In linear regression problems with many predictors, penalized regression techniques are often used to guard against overfitting and to select variables relevant for predicting an outcome variable. Recently, Bayesian penalization is becoming increasingly popular in which the prior distribution performs a function similar to that of the penalty term in classical penalization. Specifically, the so-called shrinkage priors in Bayesian penalization aim to shrink small effects to zero while maintaining true large effects. Compared to classical penalization techniques, Bayesian penalization techniques perform similarly or sometimes even better, and they offer additional advantages such as readily available uncertainty estimates, automatic estimation of the penalty parameter, and more flexibility in terms of penalties that can be considered. However, many different shrinkage priors exist and the available, often quite technical, literature primarily focuses on presenting one shrinkage prior and often provides comparisons with only one or two other shrinkage priors. This can make it difficult for researchers to navigate through the many prior options and choose a shrinkage prior for the problem at hand. Therefore, the aim of this paper is to provide a comprehensive overview of the literature on Bayesian penalization. We provide a theoretical and conceptual comparison of nine different shrinkage priors and parametrize the priors, if possible, in terms of scale mixture of normal distributions to facilitate comparisons. We illustrate different characteristics and behaviors of the shrinkage priors and compare their performance in terms of prediction and variable selection in a simulation study. Additionally, we provide two empirical examples to illustrate the application of Bayesian penalization. Finally, an R package bayesreg is available online (https://github.com/sara-vanerp/bayesreg) which allows researchers to perform Bayesian penalized regression with novel shrinkage priors in an easy manner. • Various shrinkage priors have distinctive theoretical characteristics. • Most priors have a similar prediction accuracy unless p > n. • Different shrinkage priors vary in variable selection accuracy. [ABSTRACT FROM AUTHOR]
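To illustrate the scale-mixture-of-normals parametrization the comparison relies on, here is a sketch of one well-known shrinkage prior, the horseshoe, drawn as a scale mixture and contrasted with a fixed-scale normal (ridge-type) prior; the paper itself covers nine priors, and the R package above is the authors' actual tool:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Ridge-type penalty <-> normal prior with a fixed scale.
ridge = rng.normal(0.0, 1.0, size=n)

# Horseshoe prior as a scale mixture of normals:
# lambda_j ~ Half-Cauchy(0, 1), beta_j | lambda_j ~ N(0, tau**2 * lambda_j**2).
tau = 1.0
lam = np.abs(rng.standard_cauchy(size=n))
horseshoe = rng.normal(0.0, tau * lam)

# The horseshoe has a taller spike at zero and much heavier tails, which is
# what lets it shrink noise hard while leaving large effects nearly untouched.
for q in (0.5, 0.99):
    print(q, np.quantile(np.abs(ridge), q), np.quantile(np.abs(horseshoe), q))
```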
- Published
- 2019
- Full Text
- View/download PDF
29. Meta-inductive prediction based on Attractivity Weighting: Mathematical and empirical performance evaluation.
- Author
  Thorn, Paul D. and Schurz, Gerhard
- Subjects
  STATISTICAL weighting, PERFORMANCE evaluation
- Abstract
In the present paper, we present mathematical and empirical results concerning the performance of a meta-inductive prediction method known as Attractivity Weighting. The mathematical results show that Attractivity Weighting is endowed with important guarantees concerning its worst-case short run and long run performance. In addition to these guarantees which hold for all logically possible environments, simulations applied to data describing real-world environments suggest that the short run performance of carefully selected forms of Attractivity Weighting is generally very good, both in environments in which one-reason prediction methods are optimal and in environments in which weighting methods are optimal. In both sorts of environment, Attractivity Weighting approximates the performance of that available prediction method that is optimal in that environment. • Presents new mathematical and simulation-based results for paired comparison tasks. • Argues for the virtues of Attractivity Weighting (AW). • AW is guaranteed to match the success of the best available cue, in the long run. • Other well known methods lack such a guarantee. • The simulation studies suggest that the short run performance of AW is excellent. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
30. Bidirectional constraint satisfaction in rational strategic decision making.
- Author
  Bhatia, Sudeep and Golman, Russell
- Subjects
  CONSTRAINT satisfaction, DECISION making, GAME theory, BIDIRECTIONAL associative memories (Computer science), NEURAL circuitry, NASH equilibrium
- Abstract
We describe the properties of a constraint satisfaction network that is able to reason and decide rationally in strategic games. We use the structure of Bidirectional Associative Memory (BAM), a minimal two-layer recurrent neural network, and assume that network layers represent self and other strategies, whereas connection weights encode best responses. We apply BAM to finite-strategy two-player games, and show that network activation in the long run is restricted to the set of rationalizable strategies. The network is not guaranteed to reach a stable activation state, but any pure strategy profile that constitutes a stable state in the network must be a pure strategy Nash equilibrium. We illustrate the properties of the network using the traveler's dilemma, the rock–paper–scissors game, and coordination games. The network's behavior also depends on starting activation states, and we show how biases in these starting states can resolve equilibrium selection problems. Strategic decision making is a key part of complex social behavior, and our results illustrate how bidirectional constraint satisfaction networks can perform rational computations in this domain. • Bidirectional associative memory is used to study game theoretic decision making. • The network makes decisions through constraint satisfaction. • Long run activation is restricted to the set of rationalizable strategies. • Any pure strategy profile in a stable state must be a pure strategy Nash equilibrium. [ABSTRACT FROM AUTHOR]
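A toy sketch of the idea (hypothetical encoding, not the paper's exact network): for a 2x2 coordination game, let the weights carry the best-response pattern and iterate winner-take-all updates between the two layers; the dynamics settle into a mutual best response, i.e., a pure strategy Nash equilibrium.

```python
import numpy as np

# Two-layer "BAM" sketch for a 2x2 coordination game: layer x holds the row
# player's strategy activation, layer y the column player's. The weight
# matrix encodes best responses (here W = I: matching the opponent is best).
W = np.eye(2)

def winner_take_all(v):
    out = np.zeros_like(v)
    out[np.argmax(v)] = 1.0
    return out

x = np.array([0.9, 0.1])   # initial bias toward strategy 0
for _ in range(10):
    y = winner_take_all(W.T @ x)   # column player's best response to x
    x = winner_take_all(W @ y)     # row player's best response to y
print(x, y)  # settles on (strategy 0, strategy 0), a pure Nash equilibrium
```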
- Published
- 2019
- Full Text
- View/download PDF
31. Rotational-symmetry in a 3D scene and its 2D image.
- Author
  Sawada, Tadamasa and Zaidi, Qasim
- Subjects
  THREE-dimensional imaging, ROTATIONAL motion, IMAGING systems, MATHEMATICAL symmetry, GEOMETRY
- Abstract
A 3D shape of an object is N-fold rotational-symmetric if the shape is invariant under 360/N degree rotations about an axis. Human observers are sensitive to the 2D rotational-symmetry of a retinal image, but they are less sensitive than they are to 2D mirror-symmetry, which involves invariance to reflection across an axis. Note that perception of the mirror-symmetry of a 2D image and a 3D shape has been well studied, where it has been shown that observers are sensitive to the mirror-symmetry of a 3D shape, and that 3D mirror-symmetry plays a critical role in the veridical perception of a 3D shape from its 2D image. On the other hand, the perception of rotational-symmetry, especially 3D rotational-symmetry, has received very little study. In this paper, we derive the geometrical properties of 2D and 3D rotational-symmetry and compare them to the geometrical properties of mirror-symmetry. Then, we discuss perceptual differences between mirror- and rotational-symmetry based on this comparison. We found that rotational-symmetry has many geometrical properties that are similar to the geometrical properties of mirror-symmetry, but note that the 2D projection of a 3D rotational-symmetrical shape is more complex computationally than the 2D projection of a 3D mirror-symmetrical shape. This computational difficulty could make the human visual system less sensitive to the rotational-symmetry of a 3D shape than its mirror-symmetry. • Geometrical properties of 3D rotational-symmetry are analyzed. • 3D rotational- and mirror-symmetries have many common geometrical properties. • A 2D projection of a 3D rotational-symmetrical shape is computationally complex. [ABSTRACT FROM AUTHOR]
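The definition in the first sentence is easy to operationalize for a 2D point set; a minimal checker for N-fold rotational symmetry (a rotation by 360/N degrees must map the set onto itself):

```python
import numpy as np

def is_nfold_symmetric(points, n, tol=1e-9):
    """True if a centered 2D point set is invariant under rotation
    by 360/n degrees about the origin."""
    theta = 2 * np.pi / n
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    rotated = points @ rot.T
    # Invariance as a set: every rotated point must match some original point.
    return all(np.min(np.linalg.norm(points - p, axis=1)) < tol for p in rotated)

square = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
print(is_nfold_symmetric(square, 4))  # True: 4-fold symmetric
print(is_nfold_symmetric(square, 3))  # False
```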
- Published
- 2018
- Full Text
- View/download PDF
32. Characterization of geodesics in the small in multidimensional psychological spaces.
- Author
  López-González, Olivia, Sánchez-Larios, Hérica, and Guillén-Burguete, Servio
- Subjects
  GEODESICS, DIMENSION reduction (Statistics), PROBABILITY theory, PSYCHOMETRICS, MANIFOLDS (Mathematics), VECTORS (Calculus)
- Abstract
Dzhafarov and Colonius (1999) proposed a theory of subjective Fechnerian distances in a continuous stimulus space of arbitrary dimensionality, where each stimulus is associated with a psychometric function that determines probabilities with which it is discriminated from its infinitesimally close neighboring stimuli. In their theory, the Finslerian metric function F(x, v) plays a central role, where x is a point of a manifold M and v ∈ T_x M ∖ {0} is a nonzero vector in the tangent space at x. Dzhafarov and Colonius (2001) proved that if the Finslerian metric function F(x, v) is not convex in the direction of a tangent vector v at x, then there exist polygonal arcs from x to x + vs, with s > 0 sufficiently small, called Fechnerian geodesic arcs in the small for v at x, whose psychometric length is strictly less than that of the straight line segment from x to x + vs. In their paper, the authors pointed out that “it is important to investigate the problem of Fechnerian geodesics in the small, that is, the existence and properties of an allowable path connecting x to y = x + vs, whose psychometric length tends to the Fechnerian distance G(x, x + vs) as s → 0+.” Consequently, the principal aim of our paper is to characterize the Fechnerian geodesic arcs in the small. We prove that the Fechnerian geodesic arcs in the small for v at x can be obtained from sets H of tangent vectors at x, provided that: (a) the sum of the vectors in H is equal to v, (b) the rays in the directions of the vectors of H pass through extreme points of only one face C_x(v) of the convex closure of the indicatrix of F at x, and (c) the ray in the direction of v intersects the relative interior of C_x(v). Also, we prove that the Fechnerian geodesic arcs in the small for v at x totally determine their corresponding face C_x(v). [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
33. Bayesian stopping.
- Author
  Douven, Igor
- Subjects
  COMPUTER simulation, ACQUISITION of data, STATISTICIANS, THEORY of knowledge
- Abstract
Stopping rules are criteria for determining when data collection can or should be terminated, allowing for inferences to be made. While stopping rules are traditionally discussed in the context of classical statistics, Bayesian statisticians have also begun exploring them. Kruschke proposed a Bayesian stopping rule utilizing the concept of the Highest Density Interval, where data collection can cease once enough probability mass (or density) accumulates in a sufficiently small region of parameter space. This paper presents an alternative to Kruschke's approach, introducing the novel concept of the Relative Importance Interval and considering the distribution of probability mass within parameter space. Using computer simulations, we compare these proposals to each other and to the widely used Bayes factor-based stopping method. Our results do not indicate a single superior proposal but instead suggest that different stopping rules may be appropriate under different circumstances. • Shows how an acceptance rule developed in formal epistemology inspires a new Bayesian stopping rule. • Compares the new rule with previously proposed ones using computer simulations. • Shows that the different rules make different speed–accuracy trade-offs and therefore may be called for under different circumstances. [ABSTRACT FROM AUTHOR]
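A sketch of the Kruschke-style HDI stopping rule that the paper takes as its point of departure (Bernoulli data, conjugate Beta posterior, illustrative target width; the paper's Relative Importance Interval alternative is not reproduced here):

```python
import numpy as np

def hdi_width(samples, mass=0.95):
    """Width of the narrowest interval containing `mass` of the samples."""
    s = np.sort(samples)
    m = int(np.ceil(mass * len(s)))
    return np.min(s[m - 1:] - s[:len(s) - m + 1])

# Precision-based stopping: keep collecting Bernoulli observations until
# the 95% HDI of the posterior on the rate is narrower than a target.
rng = np.random.default_rng(4)
target, a, b, n, heads = 0.10, 1, 1, 0, 0   # Beta(1,1) prior; target width 0.10
while True:
    heads += rng.random() < 0.7              # true rate 0.7, unknown to the analyst
    n += 1
    posterior = rng.beta(a + heads, b + n - heads, size=20_000)
    if hdi_width(posterior) < target:
        break
print(n, heads / n)  # sample size at stopping, observed rate
```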
- Published
- 2023
- Full Text
- View/download PDF
34. On Galois connections between polytomous knowledge structures and polytomous attributions.
- Author
  Ge, Xun
- Subjects
  ATTRIBUTION (Social psychology), STRUCTURAL analysis (Engineering), THEORY of knowledge
- Abstract
Polytomous knowledge structure theory (abbr. polytomous KST) was introduced by Stefanutti et al. (2020), and further results on polytomous KST were obtained by Heller (2021). Continuing this line of work, this paper discusses Galois connections in polytomous KST. In this paper, two derivations between polytomous knowledge structures and polytomous attributions are presented. In addition, this paper gives an explicit characterization to introduce the completeness of polytomous attributions and defines the concept of a complete polytomous knowledge structure by the property that such a polytomous knowledge structure is derived from a complete polytomous attribution. This paper establishes a Galois connection between the collection K of all polytomous knowledge structures and the collection F of all polytomous attributions, where the closed elements are, respectively, in K the complete polytomous knowledge structures, and in F the complete polytomous attributions. Furthermore, this Galois connection induces a one-to-one correspondence between the two sets of closed elements. Moreover, this Galois connection can also induce a Galois connection between the collection of all granular polytomous knowledge structures and the collection of all granular polytomous attributions. • Present derivations between polytomous structures and polytomous attributions. • Set up Galois connections between polytomous structures and polytomous attributions. • Set up Galois connections around granular polytomous structures (attributions). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. On mimicry among sequential sampling models.
- Author
  Khodadadi, Arash and Townsend, James T.
- Subjects
  STATISTICAL sampling, EMPIRICAL research, DATA analysis, DECISION making, PROBABILITY theory, TIME-varying systems
- Abstract
Sequential sampling models are widely used in modeling the empirical data obtained from different decision making experiments. Since the 1960s, several instantiations of these models have been proposed. A common assumption among these models is that the subject accumulates noisy information during the time course of a decision. The decision is made when the accumulated information favoring one of the responses reaches a decision boundary. Different models, however, make different assumptions about the information accumulation process and the implementation of the decision boundaries. Comparison among these models has proven to be challenging. In this paper we investigate the relationship between several of these models using a theoretical framework called the inverse first passage time problem. This framework has been used in the literature of applied probability theory in investigating the range of the first passage time distributions that can be produced by a stochastic process. In this paper, we use this framework to prove that any Wiener process model with two time-constant boundaries can be mimicked by an independent race model with time-varying boundaries. We also examine the numerical computation of the mimicking boundaries. We show that the mimicking boundaries of the race model are not symmetric. We then propose an equivalent race model in which the boundaries are symmetric and time-constant but the drift coefficients are time-varying. [ABSTRACT FROM AUTHOR]
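The baseline object in the mimicry result is a Wiener process with drift between two constant boundaries; a minimal first-passage simulation (illustrative parameters), whose choice probabilities and response times the paper shows a race model with suitably chosen time-varying boundaries can reproduce:

```python
import numpy as np

def first_passage(drift, upper, lower, dt=1e-3, sigma=1.0, rng=None):
    """Simulate a Wiener process with drift until it crosses one of two
    constant boundaries; return (choice, time)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while lower < x < upper:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x >= upper else 0), t

rng = np.random.default_rng(5)
sims = [first_passage(0.5, 1.0, -1.0, rng=rng) for _ in range(2000)]
choices, times = zip(*sims)
print(np.mean(choices), np.mean(times))  # P(upper boundary) and mean RT
```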
- Published
- 2015
- Full Text
- View/download PDF
36. Understanding the influence of distractors on workload capacity.
- Author
-
Little, Daniel R., Eidels, Ami, Fific, Mario, and Wang, Tony
- Subjects
- *
INFORMATION processing , *COEFFICIENTS (Statistics) , *REACTION time , *COMPARATIVE studies , *PREDICTION theory - Abstract
In this paper, we analyze the workload capacity of information processing of multidimensional perceptual stimuli. Capacity, which describes how the processing rate of the system changes as the number of stimulus dimensions or attributes is increased, is an important property of information processing systems. Inferences based on one measure of capacity, the capacity coefficient (Townsend and Nozawa, 1995), are typically computed by comparing the processing of single targets, which provide a measure of the baseline processing time of the system, to the processing of a double target. The single targets are typically assumed to be presented alone without any irrelevant distracting information. In this paper, we derive new capacity predictions for situations when distractor information is present. This extension reveals that, with distractors, the value of the capacity coefficient no longer provides unique diagnostic information about the underlying processing system. We further show how to rectify this situation by contrasting distractors of different discriminability. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
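The capacity coefficient cited in the entry above (Townsend and Nozawa, 1995) can be estimated directly from RT samples. A minimal sketch for the OR design, using the empirical cumulative hazard and illustrative gamma-distributed RTs; a proper analysis would use a Nelson–Aalen estimator and real data.

```python
import numpy as np

def cum_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -log(1 - F(t)), clipped away from log(0)."""
    F = np.mean(np.asarray(rts)[:, None] <= t[None, :], axis=0)
    return -np.log(np.clip(1.0 - F, 1e-6, 1.0))

def capacity_or(rt_double, rt_a, rt_b, t):
    """OR-task capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t))."""
    return cum_hazard(rt_double, t) / (cum_hazard(rt_a, t) + cum_hazard(rt_b, t))

rng = np.random.default_rng(0)
t = np.linspace(0.3, 1.2, 10)
rt_a = rng.gamma(4.0, 0.15, 2000)        # single-target RTs (illustrative)
rt_b = rng.gamma(4.0, 0.15, 2000)
# double target simulated as an independent parallel race (min of two channels)
rt_ab = np.minimum(rng.gamma(4.0, 0.15, 2000), rng.gamma(4.0, 0.15, 2000))
print(np.round(capacity_or(rt_ab, rt_a, rt_b, t), 2))  # ≈ 1: unlimited capacity
```

For an independent parallel race the coefficient hovers around 1, the benchmark against which limited (< 1) and super (> 1) capacity are diagnosed.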
37. From Uniform Expected Utility to Uniform Rank-Dependent Utility: An experimental study.
- Author
-
Vrijdags, A. and Marchant, T.
- Subjects
- *
RANKING (Statistics) , *UNCERTAINTY , *AXIOMS , *SET theory , *MATHEMATICAL analysis - Abstract
This paper presents an experimental investigation of the Uniform Expected Utility (UEU) criterion, a model for ranking sets of uncertain outcomes. We verified whether the two behavioral axioms characterizing UEU, i.e., Averaging and Restricted Independence, are satisfied in a pairwise choice experiment with monetary gains. Our results show that neither of these axioms holds in general; Averaging, in particular, appears to be violated on a large scale. On the basis of the current study and a previous one, we conclude that none of the set-ranking models that have been axiomatically characterized so far can model observed choices between sets of possible outcomes in a satisfactory fashion. In this paper we therefore lay the foundations for a new descriptive model for set ranking: the Uniform Rank-Dependent Utility (URDU) criterion. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
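As I read the entry above, the UEU criterion ranks a set by the uniform average of its outcomes' utilities, and Averaging then implies that the union of two disjoint sets is ranked between the two sets. A toy sketch under that reading, with an identity utility and illustrative numbers:

```python
# Uniform Expected Utility (as read from the abstract): a set's value is the
# plain average of its outcomes' utilities. Averaging then implies the union
# of two disjoint sets is ranked between the two sets themselves.
def ueu(s, u=lambda x: x):          # identity utility for illustration
    return sum(map(u, s)) / len(s)

A, B = {10, 20}, {40}
assert min(ueu(A), ueu(B)) <= ueu(A | B) <= max(ueu(A), ueu(B))
print(ueu(A), ueu(A | B), ueu(B))   # 15.0  23.33...  40.0
```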
38. Stochastic transitivity: Axioms and models.
- Author
-
Oliveira, I.F.D., Zehavi, S., and Davidov, O.
- Subjects
- *
AXIOMS , *MATHEMATICAL models of psychology , *MATHEMATICAL models of decision making , *PSYCHOMETRICS , *PSYCHOLOGICAL techniques - Abstract
Transitivity relations play an important role in specifying models of paired comparisons. While models of paired comparisons have historical origins in psychological models of choice, today they find applications in fields as diverse as economics, computer science and statistics. Typically, transitivity relations are formalized by describing the relationship among the choice probabilities of any three items. In this paper we show that stochastic transitivity relations can be expressed globally by means of comparison functions. In particular, we show that if p_ij is the probability that item i is chosen over item j, then we may write p_ij = F(μ_i − μ_j), where μ_i and μ_j are the merits of items i and j, respectively, and F is a comparison function. For example, when F is the distribution function of a symmetric random variable, the well-known linear stochastic order is obtained. Weaker forms of transitivity also admit this formulation, with weaker constraints on the class to which F belongs. The functional characterizations provide a common mathematical structure and language for studying transitivity relations. They reveal new connections among transitivity models and enable various generalizations of known results. Finally, the functional characterizations provide a foundation for the future development of new classes of statistical models. • Weak models of choice probability have a global structure described by a comparison function. • Models of choice probability and utility theory are intimately connected. • Stochastic models of transitivity can be obtained from simple axioms. • Equivalent transitivity models have an equivalent comparison function up to an affine transformation. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
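The comparison-function form p_ij = F(μ_i − μ_j) from the entry above is concrete enough to compute. A small sketch with a logistic F (the Bradley–Terry-style case); the merit values are illustrative. Because F is increasing with F(0) = 1/2, strong stochastic transitivity holds, which the asserts check.

```python
import numpy as np

mu = {"A": 1.5, "B": 1.0, "C": 0.2}     # illustrative merits

def F(d):                                # logistic comparison function
    return 1.0 / (1.0 + np.exp(-d))

def p(i, j):                             # p_ij = F(mu_i - mu_j)
    return F(mu[i] - mu[j])

# Strong stochastic transitivity: p(i,j) >= 1/2 and p(j,k) >= 1/2 imply
# p(i,k) >= max(p(i,j), p(j,k)). This holds for any increasing F.
assert p("A", "B") >= 0.5 and p("B", "C") >= 0.5
assert p("A", "C") >= max(p("A", "B"), p("B", "C"))
print(round(p("A", "B"), 3), round(p("B", "C"), 3), round(p("A", "C"), 3))
```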
39. Signed difference analysis: Testing for structure under monotonicity.
- Author
-
Dunn, John C. and Anderson, Laura
- Subjects
- *
MATHEMATICAL models of psychology , *FACTOR analysis , *MATHEMATICAL models of decision making , *PSYCHOMETRICS , *PSYCHOLOGICAL techniques - Abstract
Signed difference analysis (SDA), introduced by Dunn and James (2003), is used to derive testable consequences from a psychological model in which each dependent variable is presumed to be a monotonically increasing function of a linear or nonlinear combination of latent variables. SDA is based on geometric properties of the combination of latent variables that are preserved under arbitrary monotonic transformation and requires estimation neither of these variables nor of the monotonic functions. The aim of the present paper is to connect SDA to the mathematical theory of oriented matroids. This serves to situate SDA within an existing formalism, to clarify its conceptual foundation, and to solve outstanding conjectures. We describe the theory of oriented matroids as it applies to SDA and derive tests for both linear and nonlinear models. In addition, we show that state-trace analysis is a special case of SDA, which we extend to models such as additive conjoint measurement where each dependent variable is the same unspecified monotonic function of a linear combination of latent variables. Lastly, we show how measurement error can be accommodated based on the model-fitting approach developed by Kalish et al. (2016). • Signed difference analysis is re-framed using the theory of oriented matroids. • State-trace analysis is shown to be a special case of signed difference analysis. • Additive conjoint measurement is shown to be a special case of signed difference analysis. • A method to fit models in the presence of measurement error is described. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
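The state-trace special case mentioned in the entry above is the simplest instance of a signed-difference test: if two dependent variables are both monotone functions of a single latent variable, their differences across conditions can never trade off in sign. A minimal sketch of that check (illustrative data, not the paper's oriented-matroid machinery):

```python
import numpy as np

def monotone_consistent(dv1, dv2):
    """True if no pair of conditions has dv1 increasing while dv2 decreases,
    i.e., the signed differences never disagree in sign."""
    dv1, dv2 = np.asarray(dv1), np.asarray(dv2)
    d1 = np.sign(dv1[:, None] - dv1[None, :])
    d2 = np.sign(dv2[:, None] - dv2[None, :])
    return not np.any(d1 * d2 < 0)

print(monotone_consistent([0.2, 0.5, 0.7], [0.3, 0.6, 0.9]))  # True
print(monotone_consistent([0.2, 0.5, 0.7], [0.3, 0.9, 0.6]))  # False: sign reversal
```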
40. Strict scalability of choice probabilities.
- Author
-
Ryan, Matthew
- Subjects
- *
SCALABILITY , *MULTINOMIAL distribution , *PROBABILITY theory , *EXTENSION (Logic) , *MONOTONIC functions - Abstract
This paper introduces the concept of strict scalability, which lies between the classical notions of simple scalability (Krantz, 1964; Tversky and Russo, 1969) and monotone scalability (Fishburn, 1973). For binary choices, strict scalability is precisely characterised by the well-known axiom of weak substitutability (at least for countable domains). We also introduce a multinomial extension of weak substitutability that characterises strict scalability for multinomial choice. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
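Weak substitutability, the axiom named in the entry above, is mechanical to test on a binary choice matrix. The formulation below is my reading, not quoted from the paper: if x is chosen over y at least half the time, then x is chosen over any third alternative z at least as often as y is.

```python
# Checks one common reading of weak substitutability on a binary choice
# probability dictionary p[(x, y)] = P(x chosen over y). The exact axiom
# statement in the paper may differ; this is an illustrative sketch.
def weakly_substitutable(p, items):
    return all(
        p[(x, z)] >= p[(y, z)]
        for x in items for y in items if x != y and p[(x, y)] >= 0.5
        for z in items if z not in (x, y)
    )

items = ["a", "b", "c"]
p = {("a", "b"): 0.6, ("b", "a"): 0.4,
     ("a", "c"): 0.8, ("c", "a"): 0.2,
     ("b", "c"): 0.7, ("c", "b"): 0.3}
print(weakly_substitutable(p, items))   # True for this choice matrix
```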
41. Minimum message length inference of the Poisson and geometric models using heavy-tailed prior distributions.
- Author
-
Wong, Chi Kuen, Makalic, Enes, and Schmidt, Daniel F.
- Subjects
- *
MINIMUM message length (Information theory) , *PREDICTION models , *BAYESIAN analysis , *LIKELIHOOD ratio tests , *POISSON processes - Abstract
Minimum message length (MML) is a general Bayesian principle for model selection and parameter estimation that is based on information theory. This paper applies the minimum message length principle to a small-sample model selection problem involving Poisson and geometric data models. Since MML is a Bayesian principle, it requires prior distributions for all model parameters. We introduce three candidate prior distributions for the model parameters, with both light and heavy tails. The performance of the MML methods is compared with objective Bayesian inference and minimum description length techniques based on the normalized maximum likelihood code. Simulations show that our MML approach with a heavy-tailed prior distribution performs well in all tests. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
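For the Poisson case in the entry above, an MML87-style message length is short to write down. A hedged sketch: additive constants that do not affect the minimizing parameter are dropped, the Fisher information for n i.i.d. Poisson observations is n/λ, and the half-Cauchy heavy-tailed prior is my choice for illustration, not necessarily one of the paper's three candidates.

```python
import numpy as np
from scipy.stats import poisson, halfcauchy

def message_length(lam, x, prior_pdf):
    """MML87-style length: -log prior + 0.5*log Fisher - log likelihood
    (lattice constants dropped; they do not move the minimum)."""
    n = len(x)
    neg_log_like = -np.sum(poisson.logpmf(x, lam))
    return -np.log(prior_pdf(lam)) + 0.5 * np.log(n / lam) + neg_log_like

rng = np.random.default_rng(3)
x = rng.poisson(4.0, size=10)                    # small sample, true lambda = 4
grid = np.linspace(0.5, 12.0, 500)
prior = halfcauchy(scale=1.0).pdf                # illustrative heavy-tailed prior
mml = [message_length(l, x, prior) for l in grid]
print("MML estimate:", round(grid[int(np.argmin(mml))], 2), "sample mean:", x.mean())
```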
42. Modeling an enactivist multiple-trace memory. ATHENA: A fractal model of human memory.
- Author
-
Briglia, J., Servajean, P., Michalland, A.-H., Brunel, L., and Brouillet, D.
- Subjects
- *
EXPLICIT memory , *COLLECTIVE memory , *MEMORIZATION , *SENSORY conflict , *EPISODIC memory - Abstract
Global-matching models of memory argue that knowledge emerges from the interaction between presented cues and traces of past experiences. But these models generally rely on independent episodic traces and are thus unable to account for global interactions between learned situations (see Versace et al., 2009). Enactivism (Varela, 1993) could in principle exploit inter-dependent processing of traces to account for abstraction using only sensorimotor covariances (Hutto & Myin, 2012), but no mathematical formalization of an enactivist memory has yet been proposed. In this paper, we propose the ATHENA model as an enactivist mathematical formalization of Act-In theories (Versace et al., 2014) built on MINERVA2 (Hintzman, 1986) non-specific traces: ATHENA is a fractal model which keeps track of the former processes that led to the emergence of knowledge, and is therefore able to handle contextual processing (abstraction manipulation). We present three simulations designed to test ATHENA's ability to construct, learn, and manipulate emergent abstractions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
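The MINERVA2 base model named in the entry above is compact enough to sketch: a probe activates every stored trace by its similarity cubed, and the activation-weighted sum of traces is the "echo". The sketch below shows only that global-matching baseline (with a simplified normalization), not ATHENA's fractal extension.

```python
import numpy as np

rng = np.random.default_rng(7)
traces = rng.choice([-1, 0, 1], size=(50, 20))   # 50 stored traces, 20 features
probe = traces[0]                                # probe memory with a studied item

def echo(probe, traces):
    # MINERVA2-style: activation = similarity cubed (preserves sign, sharpens
    # matches); normalization here uses the probe's nonzero features, a
    # simplification of Hintzman's shared-nonzero-feature count.
    sim = (traces @ probe) / np.count_nonzero(probe)
    act = sim ** 3
    return act.sum(), act @ traces               # (echo intensity, echo content)

intensity, content = echo(probe, traces)
print(f"echo intensity = {intensity:.2f}")       # high: probe matches a stored trace
```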
43. Quantum field inspired model of decision making: Asymptotic stabilization of belief state via interaction with surrounding mental environment.
- Author
-
Bagarello, Fabio, Basieva, Irina, and Khrennikov, Andrei
- Subjects
- *
QUANTUM entanglement , *BOUNDED rationality , *MENTAL models theory (Communication) , *QUANTUM mechanics , *HAMILTON'S equations - Abstract
This paper is devoted to a justification of quantum-like models of the process of decision making based on the theory of open quantum systems, i.e., decision making is considered as decoherence. This process is modeled as interaction of a decision maker, Alice, with a mental (information) environment R surrounding her. Such an interaction generates "dissipation of uncertainty" from Alice's belief-state ρ(t) into R and asymptotic stabilization of ρ(t) to a steady belief-state. The latter is treated as the decision state. Mathematically the problem under study is about finding constraints on R guaranteeing such stabilization. We found a partial solution of this problem (in the form of sufficient conditions). We present the corresponding decision making analysis for one class of mental environments, the so-called "almost homogeneous environments", with the illustrative examples: (a) behavior of electorate interacting with the mass-media "reservoir"; (b) consumers' persuasion. We also comment on other classes of mental environments. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
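The stabilization described in the entry above can be caricatured with the standard open-systems workhorse, a Lindblad master equation. The sketch below is generic, not the paper's model: a toy Hamiltonian and a dephasing operator drive the off-diagonal terms of a 2x2 belief state ρ(t) to zero, leaving a steady "decision state".

```python
import numpy as np

H = np.array([[1.0, 0.0], [0.0, -1.0]])          # toy mental Hamiltonian
L = np.array([[1.0, 0.0], [0.0, -1.0]])          # dephasing (environment coupling)
gamma, dt = 0.5, 0.01                             # illustrative rate and step

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # maximal uncertainty

def lindblad_step(rho):
    """One Euler step of d(rho)/dt = -i[H,rho] + gamma*(L rho L† - {L†L, rho}/2)."""
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return rho + dt * (comm + diss)

for _ in range(2000):                             # integrate to t = 20
    rho = lindblad_step(rho)
print(np.round(rho.real, 3))                      # off-diagonals ≈ 0: stabilized state
```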
44. Strict (m, 1)-Ferrers properties.
- Author
-
Giarlotta, Alfio and Watson, Stephen
- Subjects
- *
OPTICAL fiber networks , *COMBINATORICS , *MATHEMATICAL analysis , *OPTICAL communications , *OPTICAL fibers - Abstract
The transitivity of a preference relation is a traditional tenet of rationality in economic theory. However, several weakenings of transitivity have proven to be extremely useful in applications, giving rise to the notions of interval orders and semiorders among others. Strict (m, 1)-Ferrers properties go in this direction, classifying asymmetric preferences on the basis of their degree of transitivity, which becomes generally weaker as m gets larger. We show that strict (m, 1)-Ferrers properties can be arranged into a poset contained in the reverse ordering of the natural numbers. Our main result completely describes this poset. Although this paper has a combinatorial flavor, the topic of Ferrers properties is suited to applications in economics and psychology, for instance in relation to money-pump phenomena. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
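The strict (m, 1)-Ferrers hierarchy in the entry above generalizes classical Ferrers conditions. The sketch below checks only the textbook (2, 2)-Ferrers condition, which characterizes interval orders; the paper's strict variants are not reproduced.

```python
from itertools import product

def is_ferrers_22(P):
    """Classical (2,2)-Ferrers condition: whenever x P y and z P w,
    we must have x P w or z P y. Holds iff P is an interval order."""
    return all(
        ((x, w) in P) or ((z, y) in P)
        for (x, y), (z, w) in product(P, P)
    )

P1 = {(1, 3), (1, 4), (2, 4)}            # an interval order on {1, 2, 3, 4}
print(is_ferrers_22(P1))                 # True
print(is_ferrers_22({(1, 2), (3, 4)}))   # False: 1P2 and 3P4, but not 1P4 or 3P2
```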
45. An examination of parallel versus coactive processing accounts of redundant-target audiovisual signal processing.
- Author
-
Yang, Cheng-Ta, Altieri, Nicholas, and Little, Daniel R.
- Subjects
- *
SIGNAL processing , *NEURAL circuitry , *AUDIOVISUAL materials , *MATHEMATICAL models , *COGNITIVE science - Abstract
We ask whether auditory and visual signals are processed using a consistent mental architecture across variable experimental designs. It is well known that in an auditory-visual task requiring divided attention, responses are often faster for redundant audiovisual targets than for unisensory targets. Importantly, these redundant-target effects can theoretically be explained by several different mental architectures, which are explored in this paper: independent-race models, parallel interactive models, and coactive models. Earlier results, especially redundant-target processing times faster than predicted by the race-model inequality (Miller, 1982), implicated coactivation as a necessary explanation of redundant-target processing. However, this explanation has recently been challenged by the demonstration that violations of the race-model inequality can be explained by violations of the context-invariance assumption underlying the inequality (Otto & Mamassian, 2012). We utilized Systems Factorial Technology (Townsend & Nozawa, 1995), regarded as a standard diagnostic tool for inferences about mental architecture, to study redundant-target audiovisual processing. Three experiments were carried out: a discrimination task (Experiment 1), a simultaneous perceptual matching task (Experiment 2), and a delayed matching task (Experiment 3). The results provide a key set of benchmarks to which we apply several simulations that are consistent with the context-invariance explanation not only of the race-model inequality but also of capacity and architecture. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
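The race-model inequality (Miller, 1982) cited in the entry above, F_AV(t) ≤ F_A(t) + F_V(t), is straightforward to test on empirical distribution functions. A minimal sketch with illustrative normal RTs engineered to produce a violation:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of the RT sample evaluated at each time point in t."""
    return np.mean(np.asarray(rts)[:, None] <= t[None, :], axis=0)

rng = np.random.default_rng(2)
t = np.linspace(0.2, 0.8, 13)
rt_a = rng.normal(0.50, 0.08, 3000)      # auditory-only RTs (illustrative)
rt_v = rng.normal(0.52, 0.08, 3000)      # visual-only RTs
rt_av = rng.normal(0.38, 0.06, 3000)     # redundant RTs, fast enough to violate

# Miller's race-model inequality: F_AV(t) <= F_A(t) + F_V(t)
violation = ecdf(rt_av, t) > ecdf(rt_a, t) + ecdf(rt_v, t)
print("RMI violated at t =", np.round(t[violation], 2))
```

As the abstract notes, such violations are the classic signature used to argue for coactivation, and the context-invariance critique is precisely about what else can produce them.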
46. A generalized extensive structure that is equipped with a right action and its representation.
- Author
-
Matsushita, Yutaka
- Subjects
- *
INTERTEMPORAL choice , *PATIENCE , *COGNITION , *SENSORY perception , *PSYCHOLOGY - Abstract
In intertemporal choice, it has been found that if the receipt time is closer to the present, then people tend to grow increasingly or decreasingly impatient. This paper develops an axiom system to construct a weighted additive model reflecting nonconstant impatience. By presupposing that an increment in duration is subjectively assessed according to the periods at which advancement occurs, we denote the one-period advanced receipt of outcomes by multiplying the outcomes by the increment on the right. By this right multiplication, we can regard the effect of advance as the decomposition into two factors, i.e., the factor of step-by-step advance accompanied by subdivided durations and the factor of advance based on the total duration. First, the conditions for enabling right multiplication are proposed for the Cartesian product of the underlying set of a generalized extensive structure and a set of durations. Second, the properties derived under these conditions yield a right action on the generalized extensive structure. Finally, the weighted additive model is obtained as a representation of the generalized extensive structure equipped with the right action. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
47. Notes on the polytomous generalization of knowledge space theory.
- Author
-
Wang, Bo, Li, Jinjin, Sun, Wen, and Luo, Daozhong
- Subjects
- *
THEORY of knowledge , *LINEAR orderings , *GENERALIZATION , *ATOMS , *FACTORIALS - Abstract
Stefanutti et al. (2020) and Heller (2021) have recently done significant work on the polytomous extensions of knowledge space theory (KST), which opens the field for systematically generalizing many KST concepts to the polytomous case. Following these developments, the paper provides a first counterexample showing that the assumptions in Heller (2021) do not guarantee component-directed joins to be defined item-wise. This leads to an incomplete characterization of the closed elements of the Galois connection in Proposition 8 of Heller (2021), an issue which is resolved in the present paper. A second counterexample in the paper shows that the equivalence between atoms and ⨆-irreducible elements of the polytomous structure stated in Stefanutti et al. (2020) may not hold in general. This paper provides theoretical results showing that the equivalence still holds if the response categories form a linear order or the structure happens to be factorial. • Two counterexamples are proposed. • Different arguments are shown for component-directed join to be defined item-wise. • The conditions for ⨆-irreducible polytomous states to be atoms are provided. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
48. Interval timing: Modelling the break-run-break pattern using start/stop threshold-less drift–diffusion model.
- Author
-
Zwicker, Jason and Rivest, Francois
- Subjects
- *
EXPECTANCY theories , *REWARD (Psychology) - Abstract
Animal interval timing is often studied through the peak interval (PI) procedure. In this procedure, the animal is rewarded for the first response after a fixed delay from the stimulus onset, but on some trials the stimulus remains on and no reward is given. The standard methods and models analyse the response pattern as break-run-break: a period of low-rate responding, followed by rapid responding, followed by a return to low-rate responding. Studies of this pattern have found correlations between the start, stop, and duration of the run period that hold across species and experiments. It is commonly assumed that a pacemaker-accumulator model can reproduce these statistics only if it has start and stop thresholds. In this paper, we develop a new model in which the response rate varies with the likelihood of event occurrence, rather than changing at a threshold. The new model reproduced the start and stop statistics observed in 14 different PI experiments from 3 different papers. The developed model is also compared to the two-threshold Time-adaptive Drift–diffusion Model (TDDM) and to the latest accumulator model subsuming scalar expectancy theory (SET) on all 14 datasets. The results show that explicit start and stop thresholds, or an internal equivalent of break-run-break states, are unnecessary to reproduce the individual-trial statistics, the average behaviour, and the break-run-break analysis results. The new model also produces more realistic individual trials than TDDM. • Pacemaker-accumulator model without start and stop thresholds. • Analysis of the peak-interval procedure including: individual trials, start and stop statistics, and the average response curve. • Linearly increasing response rate until the expected time of reward. • Drift–diffusion model with a probabilistic response rule for the peak-interval procedure. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
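The entry above replaces start/stop thresholds with a response rate tied to the likelihood of reward timing. That idea can be caricatured with a scalar-timing Gaussian around the target time; the sketch below is an illustration of threshold-less probabilistic responding, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(5)
T, cv, peak_rate, dt = 20.0, 0.15, 2.0, 0.05   # target 20 s, Weber fraction 0.15

# Scalar-timing likelihood that reward occurs "now": Gaussian around T with
# SD proportional to T. Response rate follows the likelihood; no thresholds.
t = np.arange(0.0, 40.0, dt)
likelihood = np.exp(-0.5 * ((t - T) / (cv * T)) ** 2)
responses = t[rng.random(t.size) < peak_rate * likelihood * dt]

print(f"{responses.size} responses, median at {np.median(responses):.1f}s")
```

Averaged over trials this yields the familiar smooth peak around the target time, while individual trials still show dense responding near T without any internal break-run-break states.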
49. Subjective expected utility with signed threshold.
- Author
-
Nakamura, Yutaka
- Subjects
- *
EXPECTED utility , *REGRET - Abstract
This paper generalizes subjective expected utility by incorporating signed threshold, whose positive (respectively, negative) value enhances (respectively, reduces) subjective expected utility of chosen alternative against unchosen one. It can be interpreted, for example, that positivity of the signed threshold reflects domination of rejoicing feeling against regret feeling. Since the signed threshold representation is a special case of skew-symmetric additive (SSA) representation, we prove that in addition to SSA axiomatization, restriction of probabilistic sophistication to pairs of acts which are regret-free separates subjective expected utility and signed threshold. It is assumed that regret-freeness is measured by monetary differences or ex post strength of preferences. • Subjective expected utility (SEU) is generalized to incorporate signed threshold. • The probabilistic sophistication with regret-freeness separates SEU and signed threshold. • A simple regret model by Bell, Loomes, and Sugden is a special case of our model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. Nondecomposable Item Response Theory models: Fundamental measurement in psychometrics.
- Author
-
Franco, Vithor Rosa, Laros, Jacob Arie, and Wiberg, Marie
- Subjects
- *
ITEM response theory , *MODEL theory , *PSYCHOMETRICS , *RASCH models , *STATISTICAL reliability - Abstract
The main aim of the current paper is to propose Item Response Theory (IRT) models derived from the nondecomposable measurement theories presented in Fishburn (1974). More specifically, we aim to: (i) present the theoretical basis of the Rasch model and its relations to psychophysics' models of utility; (ii) give a brief exposition on the measurement theories presented in Fishburn (1974, 1975), some of which do not require an additive structure; and (iii) derive IRT models from these measurement theories, as well as Bayesian implementations of these models. We also present two empirical examples to compare how well these IRT models fit to real data. In addition to deriving new IRT models, we also discuss theoretical interpretations regarding the models' capability of generating fundamental measures of the true scores of the respondents. The manuscript ends with prospects for future studies and practical implications. • We discuss the relation between utility and item response theory models. • We propose nondecomposable item response theory models based on Fishburn (1974). • We show how to fit these models using Bayesian methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
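The decomposable baseline that the entry above starts from is the Rasch model, P(correct) = logistic(θ − b). A minimal sketch of that baseline follows; the nondecomposable variants the paper derives from Fishburn (1974) are not reproduced here.

```python
import numpy as np

def rasch(theta, b):
    """Rasch item response function: P(X = 1 | theta, b) = logistic(theta - b)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta = np.array([-1.0, 0.0, 1.5])        # person abilities (illustrative)
b = np.array([-0.5, 0.5, 1.0])            # item difficulties (illustrative)
P = rasch(theta[:, None], b[None, :])     # persons x items probability matrix
print(np.round(P, 2))
```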