19 results
Search Results
2. Improving mathematics assessment readability: Do large language models help?
- Author
- Patel, Nirmal, Nagpal, Pooja, Shah, Tirth, Sharma, Aditya, Malvi, Shrey, and Lomas, Derek
- Subjects
- PROBLEM solving, READABILITY (Literary style), CONFIDENCE intervals, NATURAL language processing, USER interfaces, MATHEMATICS, EDUCATIONAL tests & measurements, VOCABULARY, DESCRIPTIVE statistics, STATISTICAL sampling, SCHOOL children, ALGORITHMS
- Abstract
Background: Readability metrics provide us with an objective and efficient way to assess the quality of educational texts. We can use readability measures to find assessment items that are difficult to read for a given grade level. Hard‐to‐read math word problems can put some students at a disadvantage if they are behind in their literacy learning. Despite their math abilities, these students can perform poorly on difficult‐to‐read word problems because of their poor reading skills. Less readable math tests can create equity issues for students who are relatively new to the language of assessment. Less readable test items can also affect the assessment's construct validity by partially measuring reading comprehension. Objectives: This study shows how large language models can help improve the readability of math assessment items. Methods: We analysed 250 test items from grades 3 to 5 of EngageNY, an open‐source curriculum. We used the GPT‐3 AI system to simplify the text of these math word problems, using text prompts and the few‐shot learning method for the simplification task. Results and Conclusions: On average, GPT‐3 produced output passages with improved readability metrics, but the outputs contained a large amount of noise and were often unrelated to the input. We used thresholds over text similarity metrics and changes in readability measures to filter out the noise. We found meaningful simplifications that can be given to item authors as suggestions for improvement. Takeaways: GPT‐3 is capable of simplifying hard‐to‐read math word problems, but it generates noisy simplifications under both text prompts and few‐shot learning. The noise can be filtered using text similarity and readability measures. The meaningful simplifications the AI produces are sound but not ready to be used as a direct replacement for the original items. To improve test quality, simplifications can be suggested to item authors at the time of digital question authoring. Lay Description: What is known about the subject: Difficult‐to‐read math assessment items cause measurement and equity issues. The readability of math word problems is negatively correlated with outcomes. What our paper adds: The GPT‐3 AI system is capable of simplifying math word problems. Prompt‐based and few‐shot learning‐based approaches can create meaningful simplifications, but with a very low accuracy rate. Text similarity and readability measures can be used to filter out noisy outputs and discover interesting simplifications. What are the implications for practitioners: GPT‐3 is capable of generating relevant math word problem simplifications. Simplified text can be offered as suggestions to question authors during the digital item authoring process. [ABSTRACT FROM AUTHOR]
- Published
- 2023
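A minimal sketch of the filtering step described in entry 2, under stated assumptions: keep a model's rewrite only if it stays textually close to the original item and lowers its estimated grade level. The naive syllable counter, the use of difflib.SequenceMatcher as the similarity metric, and both thresholds are illustrative stand-ins for the paper's unspecified metrics.

```python
import re
from difflib import SequenceMatcher

def syllables(word: str) -> int:
    """Very rough syllable count: runs of vowel letters."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a passage."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syl = sum(syllables(w) for w in words)
    return 0.39 * n / sentences + 11.8 * syl / n - 15.59

def keep_simplification(original: str, candidate: str,
                        min_sim: float = 0.5, min_gain: float = 1.0) -> bool:
    """Keep a candidate only if it is similar enough to the original
    (filters unrelated noise) and lowers the grade level by min_gain."""
    sim = SequenceMatcher(None, original, candidate).ratio()
    gain = fk_grade(original) - fk_grade(candidate)
    return sim >= min_sim and gain >= min_gain

item = ("A baker sold 7 dozen cupcakes in the morning and an additional "
        "5 dozen cupcakes in the afternoon. How many cupcakes were sold in total?")
cand = "A baker sold 7 dozen cupcakes, then 5 dozen more. How many in all?"
print(keep_simplification(item, cand))
```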
3. Homogeneous and heterogeneous multiple representations in equation‐solving problems: An eye‐tracking study.
- Author
- Malone, Sarah, Altmeyer, Kristin, Vogel, Markus, and Brünken, Roland
- Subjects
- ALGORITHMS, ANALYSIS of variance, GRAPHIC arts, LEARNING strategies, MATHEMATICAL models, MATHEMATICS, MULTIMEDIA systems, MULTIVARIATE analysis, PROBLEM solving, RESEARCH, STATISTICAL sampling, SCALE analysis (Psychology), STATISTICAL hypothesis testing, STATISTICS, T-test (Statistics), THEORY, STATISTICAL power analysis, DATA analysis, DESCRIPTIVE statistics
- Abstract
Multiple external representations (MERs) play an important role in the learning of mathematics. Whereas the cognitive theory of multimedia learning and the integrative text and picture comprehension model assume that a heterogeneous combination of symbolic and analogous representations fosters learning, the design, functions, and tasks framework holds that learning benefits depend on the specific functions of MERs. The current paper describes a conceptual replication of one of the few studies comparing single representations, heterogeneous MERs, and homogeneous MERs in the context of mathematics learning. In a balanced incomplete block design, participants were provided single representations (a graphic, text, or formula), a heterogeneous combination (e.g., text + graphic), or a homogeneous combination (text + formula) to solve systems of linear equations. In accordance with previous research, performance was superior in conditions providing MERs compared with single‐representation conditions. Moreover, heterogeneous MERs led to time savings over homogeneous MERs, which triggered an increase in cognitive load. Contrary to previous research, text was the least fixated representation, whereas the graphical representation proved most beneficial. With regard to practical implications, experts should be fostered through more challenging homogeneous MERs, whereas novices should be supported through the accessible graphic contained in heterogeneous MERs. Lay Description: What is already known about this topic: Providing multiple instead of single representations can foster mathematics learning. However, some combinations of representations hamper the performance of novice learners. The classical multimedia view explains the positive effect of combining text with picture. Recent research suggests the combination of multiple symbolic representations (text and formula). What this paper adds: The study replicated the effect of multiple symbolic representations for equation solving. In contrast to former research, each representation (text, formula, and graphic) was functionally equivalent. Tasks with differently coded representations were solved faster and with less mental effort. Gaze behavior indicated that students mostly used the graphical representation. Implications for practice and/or policy: Beginners should be supported with a graphic when solving equations. Experts should be challenged with a combination of symbolic representations to benefit the most. [ABSTRACT FROM AUTHOR]
- Published
- 2020
4. A greedy non‐hierarchical grey wolf optimizer for real‐world optimization.
- Author
- Akbari, Ebrahim, Rahimnejad, Abolfazl, and Gadsden, Stephen Andrew
- Subjects
- ALGORITHMS, OPTIMAL control theory, CALCULUS of variations, MATHEMATICAL analysis, MATHEMATICS
- Abstract
The grey wolf optimization (GWO) algorithm is an emerging algorithm based on the social hierarchy of grey wolves as well as their hunting and cooperation strategies. Introduced in 2014, it has been adopted by a large number of researchers and designers, and citations to the original paper exceed those of many other algorithms. A recent study by Niu et al. identified one of the main drawbacks of this algorithm for optimizing real‐world problems: GWO's performance degrades as the optimal solution of the problem diverges from 0. In this paper, by introducing a straightforward modification to the original GWO algorithm, that is, neglecting its social hierarchy, the authors were able to largely eliminate this defect and open a new perspective for future use of this algorithm. The efficiency of the proposed method was validated by applying it to benchmark and real‐world engineering problems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
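Entry 4 modifies the classic grey wolf update. The sketch below shows the baseline GWO step (Mirjalili et al., 2014) on a sphere function whose optimum sits away from the origin, the failure mode Niu et al. identified, with a greedy keep-if-better rule added to hint at the paper's direction; the paper's actual non-hierarchical mechanism is not reproduced here.

```python
import numpy as np

def gwo_step(wolves, fitness, a):
    # Classic GWO update: each wolf moves toward perturbed copies of the
    # three current best (alpha, beta, delta). A greedy keep-if-better
    # rule is added as an illustrative twist; the paper's variant differs.
    order = np.argsort([fitness(w) for w in wolves])
    leaders = [wolves[i] for i in order[:3]]
    new_wolves = []
    for x in wolves:
        estimates = []
        for leader in leaders:
            r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
            A, C = 2 * a * r1 - a, 2 * r2
            estimates.append(leader - A * np.abs(C * leader - x))
        candidate = np.mean(estimates, axis=0)
        new_wolves.append(candidate if fitness(candidate) < fitness(x) else x)
    return new_wolves

# Toy run: minimize a sphere whose optimum is far from the origin.
np.random.seed(0)
f = lambda x: float(np.sum((x - 50.0) ** 2))
wolves = [np.random.uniform(-10.0, 10.0, 2) for _ in range(20)]
for t in range(300):
    wolves = gwo_step(wolves, f, a=2.0 * (1.0 - t / 300.0))
print(min(f(w) for w in wolves))
```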
5. Beyond jam sandwiches and cups of tea: An exploration of primary pupils' algorithm‐evaluation strategies.
- Author
- Benton, L., Kalas, I., Saunders, P., Hoyles, C., and Noss, R.
- Subjects
- CURRICULUM, ALGORITHMS, PROGRAMMING languages, SCHOOL children, TASK performance, DATA analysis software
- Abstract
The long‐standing debate over the potential benefit of developing mathematical thinking skills through learning to program has been reignited by the widespread introduction of programming in schools across many countries, including England, where it is a statutory requirement for all pupils to be taught programming from 5 years old. The concept of an algorithm is introduced early in the English computing curriculum, yet there is limited knowledge of how young pupils view this concept. This paper explores pupils' (aged 10–11) understandings of algorithm following their engagement with 1 year of ScratchMaths, a curriculum designed to develop computational and mathematical thinking skills through learning to program. A total of 181 pupils from 6 schools undertook a set of written tasks to assess their interpretations and evaluations of different algorithms that solve the same problem, with a subset of these pupils subsequently interviewed to probe their understandings in greater depth. We discuss the different approaches identified, the evaluation criteria pupils used, and the aspects of the concept that pupils found intuitive or challenging, such as simplification and abstraction. The paper ends with reflections on the implications of the research, concluding with a set of recommendations for pedagogy in developing primary pupils' algorithmic thinking. [ABSTRACT FROM AUTHOR]
- Published
- 2018
6. A Response Time Process Model for Not‐Reached and Omitted Items.
- Author
- Lu, Jing and Wang, Chun
- Subjects
- MISSING data (Statistics), ALGORITHMS, MATHEMATICS, HUMAN behavior models, STANDARDIZED tests
- Abstract
Item nonresponses are prevalent in standardized testing. They happen either when students fail to reach the end of a test due to a time limit or quitting, or when students choose to omit some items strategically. Oftentimes, item nonresponses are nonrandom, and hence the missing data mechanism needs to be properly modeled. In this paper, we propose an innovative item response time model as a cohesive missing data model that accounts for the two most common item nonresponses: not‐reached items and omitted items. In particular, the new model builds on a behavior process interpretation: a person chooses to skip an item if the required effort exceeds the implicit time the person allocates to the item (Lee & Ying, 2015; Wolf, Smith, & Birnbaum, 1995), whereas a person fails to reach the end of the test due to lack of time. This assumption was verified by analyzing the 2015 PISA computer‐based mathematics data. Simulation studies were conducted to further evaluate the performance of the proposed Bayesian estimation algorithm for the new model and to compare the new model with a recently proposed "speed‐accuracy + omission" model (Ulitzsch, von Davier, & Pohl, 2019). Results revealed that all model parameters could be recovered properly, and that inadequately accounting for missing data biased item and person parameter estimates. [ABSTRACT FROM AUTHOR]
- Published
- 2020
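The behavior-process interpretation in entry 6 is easy to simulate. In this sketch, an item is omitted when its required time exceeds the time the person implicitly allocates to it, and remaining items are not reached once the test clock runs out. The lognormal times, the budget rule, and the time limit are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_nonresponse(n_items=30, time_limit=40.0):
    """Simulate answered / omitted / not-reached outcomes per item."""
    required = rng.lognormal(mean=0.5, sigma=0.4, size=n_items)   # effort needed
    allocated = rng.lognormal(mean=0.6, sigma=0.4, size=n_items)  # implicit budget
    status, clock = [], 0.0
    for req, alloc in zip(required, allocated):
        if clock >= time_limit:
            status.append("not_reached")      # ran out of total test time
        elif req > alloc:
            status.append("omitted")          # effort exceeds implicit budget
            clock += alloc                    # time spent before skipping (assumed)
        else:
            status.append("answered")
            clock += req
    return status

s = simulate_nonresponse()
print({k: s.count(k) for k in ("answered", "omitted", "not_reached")})
```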
7. Distributed average computation for multiple time-varying signals with output measurements.
- Author
- Zhao, Yu, Liu, Yongfang, Duan, Zhisheng, and Wen, Guanghui
- Subjects
- QUANTUM computing, SIGNALS & signaling, ALGORITHMS, MATHEMATICS, GEOMETRY
- Abstract
This paper studies the distributed average computation problem for multiple time-varying signals with bounded inputs. Based only on relative output measurements, a pair of continuous algorithms with static and adaptive coupling strengths, respectively, is designed. Using the boundary layer approach, the proposed continuous algorithm with static coupling strengths can asymptotically obtain the average value of the multiple reference signals without the chattering phenomenon. Furthermore, for the algorithms with adaptive coupling strengths, the calculation errors are uniformly ultimately bounded and exponentially converge to a small adjustable bounded set. Finally, a simulation example is presented to show the validity of the theoretical results. Copyright © 2015 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2016
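A toy discrete-time version of the dynamic average tracking problem in entry 7: each agent sees only relative errors to its ring neighbours plus the derivative of its own reference, and the estimates track the average of the time-varying signals. The gains, topology, and Euler discretization are assumptions; the paper's continuous algorithms with boundary-layer smoothing and adaptive coupling are more refined.

```python
import numpy as np

n, dt, k = 6, 1e-3, 20.0
A = np.zeros((n, n))
for i in range(n):                           # ring topology
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0

r  = lambda t: np.array([np.sin(t + i) for i in range(n)])   # reference signals
rd = lambda t: np.array([np.cos(t + i) for i in range(n)])   # their derivatives

x = r(0.0).copy()                            # init on own reference
for step in range(int(5.0 / dt)):
    t = step * dt
    coupling = A @ x - A.sum(axis=1) * x     # sum_j a_ij * (x_j - x_i)
    x = x + dt * (rd(t) + k * coupling)      # consensus on the running average

print(np.max(np.abs(x - r(5.0).mean())))     # tracking error at t = 5
```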
8. Decomposable super‐simple NRBIBDs with block size 4 and index 6.
- Author
- Yu, Huangsheng, Sun, Xianwei, Wu, Dianhua, and Abel, R. Julian R.
- Subjects
- BLOCK designs, COMBINATORIAL designs & configurations, MATHEMATICS, MATHEMATICAL analysis, ALGORITHMS
- Abstract
Necessary conditions for the existence of a super‐simple, decomposable, near‐resolvable (v,4,6)‐balanced incomplete block design (BIBD) whose 2‐component subdesigns are both near‐resolvable (v,4,3)‐BIBDs are v≡1 (mod 4) and v≥17. In this paper, we show that these necessary conditions are sufficient. Using these designs, we also establish that the necessary conditions for the existence of a super‐simple near‐resolvable (v,4,3)‐RBIBD, namely v≡1 (mod 4) and v≥9, are sufficient. A few new pairwise balanced designs are also given. [ABSTRACT FROM AUTHOR]
- Published
- 2019
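For readers unfamiliar with near-resolvable designs, the congruence condition quoted in entry 8 falls out of a counting argument (a standard fact, sketched here in LaTeX):

```latex
% In a near-resolvable $(v,4,\lambda)$-design, each near-parallel class
% misses exactly one point and partitions the remaining $v-1$ points
% into disjoint blocks of size $4$, so $4 \mid (v-1)$:
\[
  v - 1 = 4m \quad\Longrightarrow\quad v \equiv 1 \pmod{4}.
\]
```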
9. Input-output triggered control using Lp stability over finite horizons.
- Author
- Tolić, Domagoj, Sanfelice, Ricardo G., and Fierro, Rafael
- Subjects
- ALGORITHMS, MATHEMATICS theorems, MATHEMATICS, BANACH-Tarski paradox, BAYES' theorem
- Abstract
This paper investigates stability of nonlinear control systems under intermittent information. Following recent results in the literature, we replace the traditional periodic paradigm, where up-to-date information is transmitted and control laws are executed in a periodic fashion, with the event-triggered paradigm. Building on the small gain theorem, we develop input-output triggered control algorithms yielding stable closed-loop systems. In other words, based on the currently available (but outdated) measurements of the outputs and external inputs of a plant, a mechanism triggering when to obtain new measurements and update the control inputs is provided. Depending on the noise in the environment, the developed algorithm yields stable, asymptotically stable, and Lp-stable closed-loop systems. [ABSTRACT FROM AUTHOR]
- Published
- 2015
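The intermittent-information idea in entry 9 can be illustrated with the simplest event-triggered loop: hold the last transmitted measurement and sample again only when it becomes too stale. The scalar plant, gains, and the classic state-feedback trigger below are assumptions; the paper derives its triggers from input-output measurements via the small-gain theorem.

```python
# Event-triggered control of a scalar plant x' = a*x + b*u. The controller
# holds the last transmitted measurement xh; a new sample is transmitted
# only when the staleness error |x - xh| exceeds sigma*|x|.
a, b, K, sigma, dt = 1.0, 1.0, 3.0, 0.3, 1e-3
x, xh, transmissions = 1.0, 1.0, 0
for step in range(int(10.0 / dt)):
    if abs(x - xh) > sigma * abs(x):   # event: measurement is too stale
        xh = x
        transmissions += 1
    u = -K * xh                        # control uses last transmitted value
    x += dt * (a * x + b * u)          # Euler step of the plant
print(f"final |x| = {abs(x):.2e}, transmissions = {transmissions}")
```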
10. A proximal point method for the sum of maximal monotone operators.
- Author
- Xiao, Hongying and Zeng, Xueying
- Subjects
- MONOTONE operators, OPERATOR theory, IMAGE processing, MATHEMATICS, ALGORITHMS
- Abstract
In this paper, we concentrate on the monotone inclusion problem of locating the zeros of the sum of maximal monotone operators in the framework of the proximal point method. Such problems arise widely in applied mathematical fields such as signal and image processing. We define two new maximal monotone operators and characterize the solutions of the considered problem via the zeros of the new operators. The maximal monotonicity of both defined operators is proved and their resolvents are calculated. The traditional proximal point algorithm can therefore be applied to the considered inclusion problem, and convergence is ensured. Furthermore, by exploring the relationship between the proposed method and the generalized forward-backward splitting algorithm, we point out that the latter is essentially the proximal point algorithm when the operator corresponding to the forward step is the zero operator. Copyright © 2013 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2014
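A toy one-dimensional instance of the operator-splitting setting in entry 10, assuming the standard forward-backward scheme: the nonsmooth operator is handled through its resolvent (here, soft-thresholding) and the smooth one through a forward gradient step. When the forward operator is zero, the iteration is exactly the proximal point algorithm, which is the relationship the paper exploits.

```python
import numpy as np

def prox_l1(x, lam):
    """Resolvent of lam * (subdifferential of |.|): soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Solve 0 in A(x) + B(x) with A = d|.| (maximal monotone, backward step)
# and B(x) = x - c (gradient of a smooth quadratic, forward step).
c, lam = np.array([3.0, 0.4, -2.0]), 1.0
x = np.zeros_like(c)
for _ in range(50):
    x = prox_l1(x - lam * (x - c), lam)   # backward(resolvent) after forward
print(x)   # soft-threshold of c at level 1: [ 2.  0. -1.]
```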
11. Articulated-Motion-Aware Sparse Localized Decomposition.
- Author
- Wang, Yupan, Li, Guiqing, Zeng, Zhichao, and He, Huayun
- Subjects
- GEOMETRY, ANGLES, EDGES (Geometry), ALGORITHMS, MATHEMATICS
- Abstract
Compactly representing time-varying geometries is an important issue in dynamic geometry processing. This paper proposes a framework of sparse localized decomposition for animated meshes based on analyzing the variation of the edge lengths and dihedral angles (LAs) of the meshes. It first computes the length and dihedral angle of each edge for every pose and then evaluates the differences (residuals) between the LAs of an arbitrary pose and their counterparts in a reference pose. Performing sparse localized decomposition on the residuals yields a set of components that can perfectly capture local motion of articulations. The decomposition supports intuitive articulated-motion editing through manipulating the blending coefficients of these components. To robustly reconstruct poses from altered LAs, we devise a connection-map-based algorithm consisting of two steps of linear optimization. A variety of experiments show that our decomposition is truly localized with respect to rotational motions and outperforms state-of-the-art approaches in precisely capturing local articulated motion. [ABSTRACT FROM AUTHOR]
- Published
- 2017
12. Fast and Exact Root Parity for Continuous Collision Detection.
- Author
- Wang, Bolun, Ferguson, Zachary, Jiang, Xin, Attene, Marco, Panozzo, Daniele, and Schneider, Teseo
- Subjects
- POLYNOMIALS, ALGORITHMS, MATHEMATICS, COLLECTIONS
- Abstract
We introduce the first exact root parity counter for continuous collision detection (CCD). That is, our algorithm computes the parity (even or odd) of the number of roots of the cubic polynomial arising from a CCD query. We note that the parity is unable to differentiate between zero (no collisions) and the rare case of two roots (collisions). Our method does not have numerical parameters to tune, has a performance comparable to efficient approximate algorithms, and is exact. We test our approach on a large collection of synthetic tests and real simulations, and we demonstrate that it can be easily integrated into existing simulators. [ABSTRACT FROM AUTHOR]
- Published
- 2022
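The kernel of the parity idea in entry 12 is that a sign change between interval endpoints forces an odd number of roots. The float sketch below shows that observation only; the paper's contribution is making the test exact under floating-point arithmetic for the cubics arising in CCD queries.

```python
# Root parity of a polynomial f on [0, 1]: opposite signs at the endpoints
# imply an odd number of roots inside; matching signs imply an even number.
def cubic(a, b, c, d):
    return lambda t: ((a * t + b) * t + c) * t + d

def root_parity(f) -> str:
    return "odd" if f(0.0) * f(1.0) < 0.0 else "even"

f = cubic(1.0, -3.0, 0.5, 0.5)   # f(0) = 0.5 > 0, f(1) = -1.0 < 0
print(root_parity(f))            # -> odd, so a collision must occur
```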
13. Maximizing learning without sacrificing the fun: Stealth assessment, adaptivity and learning supports in educational games.
- Author
- Shute, Valerie, Rahimi, Seyedahmad, Smith, Ginny, Ke, Fengfeng, Almond, Russell, Dai, Chih‐Pu, Kuba, Renata, Liu, Zhichun, Yang, Xiaotong, and Sun, Chen
- Subjects
- ADAPTABILITY (Personality), ALGORITHMS, ANALYSIS of covariance, ANALYSIS of variance, OUTCOME-based education, CONFIDENCE intervals, STATISTICAL correlation, ENGINEERING, HIGH school students, LEARNING strategies, MATHEMATICS, PSYCHOMETRICS, QUESTIONNAIRES, REGRESSION analysis, RESEARCH evaluation, RESEARCH funding, STATISTICAL sampling, SCALE analysis (Psychology), SCIENCE, STUDENT attitudes, SURVEYS, T-test (Statistics), TECHNOLOGY, VIDEO games, RANDOMIZED controlled trials, RELATIVE medical risk, PRE-tests & post-tests, DESCRIPTIVE statistics
- Abstract
In this study, we investigated the validity of a stealth assessment of physics understanding in an educational game, as well as the effectiveness of different game‐level delivery methods and various in‐game supports on learning. Using a game called Physics Playground, we randomly assigned 263 ninth‐ to eleventh‐grade students into four groups: adaptive, linear, free choice and no‐treatment control. Each condition had access to the same in‐game learning supports during gameplay. Results showed that: (a) the stealth assessment estimates of physics understanding were valid—significantly correlating with the external physics test scores; (b) there was no significant effect of game‐level delivery method on students' learning; and (c) physics animations were the most effective (among eight supports tested) in predicting both learning outcome and in‐game performance (e.g. number of game levels solved). We included student enjoyment, gender and ethnicity in our analyses as moderators to further investigate the research questions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
14. Investigating the association between students' strategy use and mathematics achievement.
- Author
- Sahin, Nesrin, Dixon, Juli K., and Schoen, Robert C.
- Subjects
- STUDENT organizations, ADDITION (Mathematics), MATHEMATICS, WORD problems (Mathematics), ALGORITHMS, SUBTRACTION (Mathematics), ACHIEVEMENT
- Abstract
This observational study used data from 270 second‐grade students to investigate the association between students' strategy use for multidigit addition and subtraction and their mathematics achievement. Based on strategies they used during a mathematics interview, students were classified into the following strategy groups: (a) standard algorithm, (b) invented, (c) mixed, and (d) unclassified. We used two‐level hierarchical linear regression to investigate the association between students' strategy use and their performance on a standardized test in mathematics. Results indicated that students in the mixed strategy groups had significantly higher mathematics achievement than those in the standard algorithm and the unclassified groups. [ABSTRACT FROM AUTHOR]
- Published
- 2020
15. The Development of Size Sequencing Skills: An Empirical and Computational Analysis.
- Author
- McGonigle‐Chalmers, Maggie and Kusel, Iain
- Subjects
- ABILITY, ALGORITHMS, CHILD development, CHILD behavior, COGNITION, COMPUTER input-output equipment, COMPUTER simulation, LEARNING strategies, LOGIC, MATHEMATICS, MEMORY, SENSORY perception, THOUGHT & thinking, TRAINING, EMPIRICAL research, TASK performance
- Abstract
We explore a long‐observed phenomenon in children's cognitive development known as size seriation. It is not until children are around 7 years of age that they spontaneously use a strict ascending or descending order of magnitude to organize sets of objects differing in size. Incomplete and inaccurate ordering shown by younger children has been thought to be related to their incomplete grasp of the mathematical concept of a unit. Piaget first brought attention to children's difficulties in solving ordering and size‐matching tests, but his tasks and explanations have been progressively neglected due to major theoretical shifts in scholarship on developmental cognition. A cogent alternative to his account has never emerged, leaving size seriation and related abilities as an unexplained case of discontinuity in mental growth. In this monograph, we use a new training methodology, together with computational modeling of the data to offer a new explanation of size seriation development and the emergence of related skills. We describe a connected set of touchscreen tasks that measure the abilities of 5‐ and 7‐year‐old children to (a) learn a linear size sequence of five or seven items and (b) identify unique (unit) values within those same sets, such as second biggest and middle‐sized. Older children required little or no training to succeed in the sequencing tasks, whereas younger children evinced trial‐and‐error performance. Marked age differences were found on ordinal identification tasks using matching‐to‐sample and other methods. Confirming Piaget's findings, these tasks generated learning data with which to develop a computational model of the change. Using variables to represent working and long‐term memory (WM and LTM), the computational model represents the information processing of the younger child in terms of a perception‐action feedback loop, resulting in a heuristic for achieving a correct sequence. To explain why older children do not require training on the size task, it was hypothesized that an increase in WM to a certain threshold level provides the information‐processing capacity to allow the participant to start to detect a minimum interval between each item in the selection. The probabilistic heuristic is thus thought to be replaced during a transitional stage by a serial algorithm that guarantees success. The minimum interval discovery has the effect of controlling search for the next item in a principled monotonic direction. Through a minor additional processing step, this algorithm permits relatively easy identification of ordinal values. The model was tested by simulating the perceptual learning and action selection processes thought to be taking place during trial‐and‐error sequencing. Error distributions were generated across each item in the sequence and these were found to correspond to the error patterns shown by 5‐year‐olds. The algorithm that is thought to emerge from successful learning was also tested. It simulated high levels of success on seriation and also on ordinal identification tasks, as shown by 7‐year‐olds. An unexpected finding from the empirical studies was that, unlike adults, the 7‐year‐old children showed marked difficulty when they had to compute ordinal size values in tasks that did not permit the use of the serial algorithm. 
For example, when required to learn a non‐monotonic sequence where the ordinal values were in a fixed random order such as "second biggest, middle‐sized, smallest, second smallest, biggest," each item has to be found without reference to the "smallest difference" rule used by the algorithm. The difficulty evinced by 7‐year‐olds was consistent with the idea that the information in LTM is integrally tied to the search procedure itself as a search‐and‐stop based on a cumulative tally, as distinct from being accessed from a more permanent and atemporal store of stand‐alone ordinal values in LTM. The implications of this possible constraint in understanding are discussed in terms of further developmental changes. We conclude that the seriation behavior shown by children at around 7 years represents a qualitative shift in their understanding, but not in the sense that Piaget first proposed. We see the emergent algorithm as an information‐reducing device, representing a default strategy for how humans come to deal with potentially complex sets of relations. We argue this with regard to counting behaviors in children and also with regard to how linear monotonic devices for resolving certain logical tasks endure into adulthood. Insofar as the monograph reprises any aspect of the Piagetian account, it is in his highlighting of an important cognitive discontinuity in logicomathematical understanding at around the age of 7, and his quest for understanding the transactions with the physical world that lead to it. In this monograph, Maggie McGonigle‐Chalmers and Iain Kusel report an investigation into a phenomenon called size seriation. At around the age of seven years, children suddenly become capable of systematically organizing objects in order of size. Using touchscreen tasks, the authors explore the differences between children of five and seven years when learning seriation tasks and when trying to identify size relations such as middle‐sized. A computer model simulates the findings and shows how the act of size sequencing itself, together with an increase in memory capacity, creates a new solution for the older child that is not available to the younger child. Taken together, the findings and the model reveal changes in mental functioning that explain spontaneous seriation and how the concept of a "unit" emerges during development. [ABSTRACT FROM AUTHOR]
- Published
- 2019
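Two illustrative reconstructions of the sequencing strategies contrasted in entry 15: a perception-action trial-and-error loop (younger children) and the serial minimum-interval algorithm (older children), which extends the sequence monotonically. Both are guesses at the spirit of the monograph's model, not the authors' code.

```python
import random

def serial_algorithm(sizes):
    """Start from the smallest item, then repeatedly pick the remaining
    item closest in size to the last one placed (the 'minimum interval'
    rule): the search proceeds in a principled monotonic direction."""
    remaining = list(sizes)
    out = [min(remaining)]
    remaining.remove(out[-1])
    while remaining:
        nxt = min(remaining, key=lambda s: abs(s - out[-1]))
        out.append(nxt)
        remaining.remove(nxt)
    return out

def trial_and_error(sizes, rng):
    """Perception-action loop: start from a random arrangement and keep
    swapping neighbours that look out of order until no error remains."""
    seq = list(sizes)
    rng.shuffle(seq)
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(seq) - 1):
            if seq[i] > seq[i + 1]:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
                swapped = True
    return seq

sizes = [3, 9, 5, 1, 7]
print(serial_algorithm(sizes))                    # [1, 3, 5, 7, 9]
print(trial_and_error(sizes, random.Random(0)))   # same result, more steps
```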
16. Complexities in spatial center derivation.
- Author
- Murray, Alan T.
- Subjects
- MATHEMATICAL analysis, ALGORITHMS, NUMERICAL analysis, FIXED point theory, MATHEMATICS
- Abstract
The center of a spatial object or set of objects is a rather straightforward and well understood descriptor. Beyond providing a statistically oriented interpretation of the middle of something, the center often has much significance in terms of locating services and activities, but may also represent a place of political/historical importance, a site that explains the occurrence of events, or a position that avoids conflict. Even so, precise definition and specification of the spatial center is difficult, as many alternatives exist for such a descriptor in practice. This is because different interpretations of a spatial center are inevitable depending on context, theory, and operational nuances. As a result, a range of approaches may reflect the necessary descriptive flexibility, depending on the type of evaluation being carried out. While legacy mathematical and statistical thinking may point to particular alternatives, spatial significance and bias remain questionable for some approaches as they are not generally well understood. Further, a host of other considerations, such as direct and indirect summary along with homogeneous and heterogeneous attribute distribution, contribute additional levels of detail to choices for best characterizing a center. Definitions and approaches used to specify the center of a spatial object, or set of objects, are detailed to demonstrate the significance of spatial context, and are used to show that expanded interpretations are possible. [ABSTRACT FROM AUTHOR]
- Published
- 2018
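Two of the competing center definitions surveyed in entry 16, sketched for a point set: the mean center (centroid) and the geometric median, computed with the classic Weiszfeld iteration. The toy coordinates and tolerances are assumptions; the point is that the two "centers" can disagree sharply when outliers are present.

```python
import numpy as np

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])

centroid = pts.mean(axis=0)                # mean center: pulled by outliers

y = centroid.copy()
for _ in range(100):                       # Weiszfeld iteration
    d = np.linalg.norm(pts - y, axis=1)
    d = np.where(d < 1e-12, 1e-12, d)      # guard against division by zero
    y_new = (pts / d[:, None]).sum(axis=0) / (1.0 / d).sum()
    if np.linalg.norm(y_new - y) < 1e-9:
        break
    y = y_new

print("mean center:     ", centroid)       # dragged toward (10, 10)
print("geometric median:", y)              # stays near the main cluster
```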
17. 'Waiting for Carnot': Information and complexity.
- Author
- Bawden, David and Robinson, Lyn
- Subjects
- ALGORITHMS, CONCEPTUAL structures, INFORMATION science, MATHEMATICS, SYSTEMS theory
- Abstract
The relationship between information and complexity is analyzed using a detailed literature analysis. Complexity is a multifaceted concept, with no single agreed definition. There are numerous approaches to defining and measuring complexity and organization, all involving the idea of information. Conceptions of complexity, order, organization, and 'interesting order' are inextricably intertwined with those of information. Shannon's formalism captures information's unpredictable creative contributions to organized complexity; a full understanding of information's relation to structure and order is still lacking. Conceptual investigations of this topic should enrich the theoretical basis of the information science discipline, and create fruitful links with other disciplines that study the concepts of information and complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2015
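Shannon's measure, which entry 17 identifies as the formal thread connecting information to complexity, in its simplest empirical form: the entropy in bits of a symbol distribution. The sample strings are illustrative.

```python
import math
from collections import Counter

def entropy_bits(s: str) -> float:
    """Shannon entropy (bits per symbol) of the empirical distribution."""
    counts = Counter(s)
    n = len(s)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(entropy_bits("aaaaaaaa"))   # 0.0 : perfect order
print(entropy_bits("abababab"))   # 1.0 : simple alternation
print(entropy_bits("abcdefgh"))   # 3.0 : all symbols distinct
```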
18. Prediction with missing data via Bayesian Additive Regression Trees.
- Author
- Kapelner, Adam and Bleich, Justin
- Subjects
- MISSING data (Statistics), STATISTICAL research, BAYESIAN analysis, ALGORITHMS, MATHEMATICS
- Published
- 2015
19. Effect of CYP2C9-VKORC1 Interaction on Warfarin Stable Dosage and Its Predictive Algorithm.
- Author
- Li, Xi, Liu, Rong, Yan, Han, Tang, Jie, Yin, Ji‐Ye, Mao, Xiao‐Yuan, Yang, Fang, Luo, Zhi‐Yin, Tan, Sheng‐Lan, He, Hui, Chen, Xiao‐Ping, Liu, Zhao‐Qian, Li, Zhi, Zhou, Hong‐Hao, and Zhang, Wei
- Subjects
- ALGORITHMS, CHI-squared test, CHINESE people, HISPANIC Americans, MATHEMATICS, GENETIC mutation, PHARMACOGENOMICS, RACE, REGRESSION analysis, RESEARCH funding, WARFARIN, WHITE people, DATA analysis software, DESCRIPTIVE statistics, GENOTYPES
- Abstract
This study aimed to identify the effect of the CYP2C9-VKORC1 interaction on warfarin dosage requirements and its predictive algorithm by investigating four populations. A generalized linear model was used to evaluate the relationship between the interaction and warfarin stable dosage (WSD), whereas multiple linear regression analysis was applied to construct the WSD predictive algorithm. To evaluate the effect of the CYP2C9-VKORC1 interaction on the predictive algorithms, we compared the algorithms with and without the interaction. The interaction was significantly associated with WSD in the Chinese and White cohorts (P values < 0.05). In the algorithms that considered the interaction, the predictive success rates improved by only 0.12% in the Chinese patients and by a maximum of 0.02% in the White patients under four different CYP2C9 classifications. Thus, the CYP2C9-VKORC1 interaction can affect WSD. However, the discrepancy between the predictive results obtained using the algorithms with and without the interaction was negligible and can therefore be disregarded. [ABSTRACT FROM AUTHOR]
- Published
- 2015
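A sketch of the modeling comparison in entry 19 on synthetic data: regress stable dose on the two genotypes with and without their interaction term and compare the fits. Column names, genotype frequencies, and effect sizes are invented for illustration; only the overall recipe follows the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "cyp2c9": rng.choice(["*1/*1", "*1/*3"], size=n, p=[0.85, 0.15]),
    "vkorc1": rng.choice(["GG", "GA", "AA"], size=n, p=[0.1, 0.4, 0.5]),
})
# Synthetic stable dose (mg/day) with additive genotype effects only.
base = 5.0 - 1.5 * (df.cyp2c9 == "*1/*3") - 1.0 * (df.vkorc1 == "AA")
df["dose"] = base + rng.normal(0, 0.8, size=n)

main  = smf.ols("dose ~ C(cyp2c9) + C(vkorc1)", data=df).fit()
inter = smf.ols("dose ~ C(cyp2c9) * C(vkorc1)", data=df).fit()

# With no true interaction in the synthetic data, the gain from the
# interaction term should be negligible, mirroring the paper's finding.
print(f"R^2 main effects:     {main.rsquared:.4f}")
print(f"R^2 with interaction: {inter.rsquared:.4f}")
```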