303 results for "Model-based analysis"
Search Results
2. Representing the dynamics of natural marmoset vocal behaviors in frontal cortex.
- Author
-
Li, Jingwen, Aoi, Mikio C., and Miller, Cory T.
- Abstract
Here, we tested the respective contributions of primate premotor and prefrontal cortex to support vocal behavior. We applied a model-based generalized linear model (GLM) analysis that better accounts for the inherent variance in natural, continuous behaviors to characterize the activity of neurons throughout the frontal cortex as freely moving marmosets engaged in conversational exchanges. While analyses revealed functional clusters of neural activity related to the different processes involved in the vocal behavior, these clusters did not map to subfields of prefrontal or premotor cortex, as has been observed in more conventional task-based paradigms. Our results suggest a distributed functional organization for the myriad neural mechanisms underlying natural social interactions and have implications for our concepts of the role that frontal cortex plays in governing ethological behaviors in primates. • Neurons in marmoset monkey PFC and PMC recorded during natural conversations • GLM and PSTH were applied to quantify neural activity in continuous behavior • Model-based approach robustly outperformed more traditional analyses • Neurons in behavior-related functional clusters were distributed throughout PFC/PMC Li and colleagues applied model-based and traditional analyses to characterize single-neuron responses in the frontal cortex while marmosets engaged in their natural conversational exchanges. Results showed that the population supported nearly all facets of this ethological behavior through an anatomically distributed—but functionally modular—pattern of neural activity. [ABSTRACT FROM AUTHOR]
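The model-based GLM analysis mentioned in this abstract can be illustrated, in highly simplified form, as a Poisson regression of binned spike counts on behavioral covariates. The sketch below is generic: the covariate names, simulated data, and plain gradient-ascent fitter are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def fit_poisson_glm(X, y, lr=0.05, n_iter=2000):
    """Fit a Poisson GLM (log link) by gradient ascent on the log-likelihood.

    X: (n_bins, n_features) design matrix of behavioral covariates
       (e.g., an indicator that the animal is vocalizing); y: spike counts.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        rate = np.exp(X @ w)         # predicted firing rate per bin
        grad = X.T @ (y - rate)      # gradient of the Poisson log-likelihood
        w += lr * grad / len(y)
    return w

rng = np.random.default_rng(0)
# Simulated session: intercept plus a hypothetical "vocalizing" indicator.
X = np.column_stack([np.ones(500), rng.integers(0, 2, 500)])
true_w = np.array([0.2, 1.0])
y = rng.poisson(np.exp(X @ true_w))
w_hat = fit_poisson_glm(X, y)
```

In the actual study, a much richer design matrix (call timing, conversational context, movement) and formal comparison against PSTH-based analyses would be needed; this only shows the core regression idea.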
- Published
- 2024
- Full Text
- View/download PDF
3. Model‐based population pharmacokinetic and exposure response analyses for safety and efficacy of nivolumab as adjuvant treatment in subjects with resected oesophageal or gastroesophageal junction cancer.
- Author
-
Zhao, Yue, Tsujimoto, Akihide, Ide, Takafumi, Zhang, Jenny, Feng, Yan, Gao, Ling, Bello, Akintunde, and Roy, Amit
- Subjects
ESOPHAGOGASTRIC junction, NIVOLUMAB, CLINICAL pharmacology, CONFIDENCE intervals, PHARMACOKINETICS
- Abstract
Aims: Nivolumab is approved as adjuvant treatment in subjects with resected oesophageal or gastroesophageal junction cancer (EC/GEJC) based on results from the pivotal CheckMate 577 trial. We present a model‐based clinical pharmacology profiling and benefit–risk assessment of nivolumab as adjuvant treatment in subjects with resected EC/GEJC supporting a less frequent dosing regimen. Methods: Population pharmacokinetic (popPK) analysis was conducted to characterize nivolumab pharmacokinetics (PK) using clinical data from 1493 subjects from seven monotherapy clinical studies across multiple solid tumours. The exposure‐response (E‐R) analyses included data from 756 patients from CheckMate 577. E‐R relationships for efficacy and safety were characterized by evaluating the relationship between nivolumab exposure and disease‐free survival (DFS) for efficacy, and time to first occurrence of Grade ≥2 immune‐mediated adverse events (Gr2+ IMAEs) for safety. Results: Nivolumab exposure was found to be associated with both DFS and risk of Gr2+ IMAEs. However, the hazard ratios (HRs) (95% confidence interval [CI]) at the 5th and 95th percentiles of nivolumab exposure were similar for DFS and Gr2+ IMAEs, indicating flat E‐R relationships within the exposure range produced by the studied regimen. Model‐predicted probabilities of DFS and Gr2+ IMAEs were similar between the two regimens: 240 mg every 2 weeks or 480 mg every 4 weeks for 16 weeks, followed by 480 mg Q4W up to 1 year. Conclusions: The analyses demonstrated a flat E‐R relationship over the range of exposures produced by the studied regimen and supported the approval of an alternative dosing regimen with less frequent dosing in patients with adjuvant EC/GEJC. [ABSTRACT FROM AUTHOR]
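The "flat exposure-response" conclusion in this abstract can be illustrated with a toy log-linear hazard model: when the exposure coefficient is small, the hazard ratio between the 5th and 95th exposure percentiles stays near 1. All numbers and the model form below are invented for illustration; they are not the trial's fitted values.

```python
def hazard_ratio(beta, e_lo, e_hi):
    """HR between two exposure levels under a toy log-linear Cox-type model:
    h(E) = h0 * exp(beta * log E)  =>  HR = (e_hi / e_lo) ** beta.
    """
    return (e_hi / e_lo) ** beta

# Hypothetical 5th/95th percentile exposures and a small coefficient:
e5, e95 = 20.0, 60.0   # assumed exposure percentiles (ug/mL)
beta_dfs = -0.05       # assumed coefficient; near zero => flat E-R curve
hr = hazard_ratio(beta_dfs, e5, e95)
```

An HR close to 1 across the observed exposure range is what justifies dosing regimens that shift exposure within that range without changing predicted benefit or risk.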
- Published
- 2024
- Full Text
- View/download PDF
4. Integrated multi-view modeling for reliable machine learning-intensive software engineering.
- Author
-
Husen, Jati H., Washizaki, Hironori, Runpakprakun, Jomphon, Yoshioka, Nobukazu, Tun, Hnin Thandar, Fukazawa, Yoshiaki, and Takeuchi, Hironori
- Subjects
MACHINE learning, SOFTWARE engineering, MACHINERY
- Abstract
Development of machine learning (ML) systems differs from traditional approaches. The probabilistic nature of ML leads to a more experimental development approach, which often results in a disparity between the quality of ML models and other aspects such as business, safety, and the overall system architecture. Herein the Multi-view Modeling Framework for ML Systems (M3S) is proposed as a solution to this problem. M3S provides an analysis framework that integrates different views. It is supported by an integrated metamodel to ensure the connection and consistency between different models. To facilitate the experimental nature of ML training, M3S provides an integrated platform between the modeling environment and the ML training pipeline. M3S is validated through a case study and a controlled experiment. M3S shows promise, but future research needs to confirm its generality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Augmenting ESM-based Mental Health Assessment using Affective Ising Model
- Author
-
Tongco-Rosario, Gina Rose N., Soriano, Jaymar, Li, Kan, Editor-in-Chief, Li, Qingyong, Associate Editor, Fournier-Viger, Philippe, Series Editor, Hong, Wei-Chiang, Series Editor, Liang, Xun, Series Editor, Wang, Long, Series Editor, Xu, Xuesong, Series Editor, Caro, Jaime, editor, Hagihara, Shigeki, editor, Nishizaki, Shin-ya, editor, Numao, Masayuki, editor, and Suarez, Merlin, editor
- Published
- 2024
- Full Text
- View/download PDF
6. Development of Dual Intake Port Technology in ORC-Based Power Unit Driven by Solar-Assisted Reservoir.
- Author
-
Fatigati, Fabio and Cipollone, Roberto
- Subjects
MECHANICAL efficiency, HOT water, DOMESTIC architecture, WORKING fluids, SOLAR system, SOLAR energy
- Abstract
ORC-based micro-cogeneration systems that exploit a solar source to generate electricity and domestic hot water (DHW) simultaneously are a promising solution for reducing CO2 emissions in the residential sector. In recent years, considerable attention has focused on technological solutions that improve the performance of solar ORC-based systems, which frequently work under off-design conditions due to the intermittent availability of the solar source and the variability of domestic hot water demand. Optimization efforts concentrate on improving component technology and plant architecture. The expander is regarded as the key component of such micro-cogeneration units. Generally, volumetric machines are adopted thanks to their better capability to handle severe off-design conditions. Among volumetric expanders, scroll machines are among the best candidates thanks to their reliability and their flexibility in managing two-phase working fluid; their good efficiency adds further interest. Nevertheless, as with other volumetric expanders, additional research effort is needed to improve efficiency. The fixed built-in volume ratio can produce under- or over-expansion during vane filling and emptying, mainly when the operating conditions depart from the design point. To overcome this phenomenon, a dual intake port (DIP) technology was introduced for the scroll expander. This technology widens the angular extent of the intake phase, adapting the ratio between intake and exhaust volume (the so-called built-in volume ratio) to the operating condition. Moreover, DIP technology increases the permeability of the machine, ensuring a higher mass flow rate for a given pressure difference across the expander. 
On the other hand, for a given mass flow rate, the expander intake pressure diminishes, with a positive benefit on scroll efficiency. DIP benefits were already proven experimentally and theoretically in previous works by the authors for Sliding Rotary Vane Expanders (SVRE). In the present paper, the impact of DIP technology is assessed in a solar-assisted ORC-based micro-cogeneration system operating with scroll expanders and characterized by reduced power (hundreds of W). The DIP scroll was found to process a 32% higher mass flow rate for a given pressure difference between intake and exhaust sides for the application at hand. This leads to an average power increase of 10% and an improvement of up to 5% in expander mechanical efficiency. Such results are particularly interesting for solar-assisted micro-cogeneration ORC-based units: the high variability of the hot source and of DHW demand makes the DIP expander operate over a wide range of conditions. The experimental activity confirms the suitability of the DIP expander to exploit as much as possible the thermal power available from the hot source, even at variable temperature during operation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Model-Based Assessment of the Reference Values of CAVI in Healthy Russian Population and Benchmarking With CAVI0.
- Author
-
Safronova, Tatiana, Kravtsova, Anna, Vavilov, Sergei, Leon, Cristina, Bragina, Anna, Milyagin, Victor, Makiev, Ruslan, Sumin, Alexei, Peskov, Kirill, Sokolov, Victor, and Podzolkov, Valery
- Subjects
REFERENCE values, RUSSIANS, PEARSON correlation (Statistics), BLOOD pressure, ARTERIAL diseases
- Abstract
BACKGROUND Cardio-ankle vascular index (CAVI) and its modified version (CAVI0) are promising non-invasive markers of arterial stiffness, evaluated extensively primarily in the Japanese population. In this work, we performed a model-based analysis of the association between different population characteristics and CAVI or CAVI0 values in healthy Russian subjects and propose a tool for calculating the range of reference values for both types of indices. METHODS The analysis was based on data from 742 healthy volunteers (mean age 30.4 years; 73.45% men) collected from a multicenter observational study. Basic statistical analysis [analysis of variance, Pearson's correlation (r), significance tests] and multivariable linear regression were performed in R software (version 4.0.2). Tested covariates included age, sex, BMI, blood pressure, and heart rate (HR). RESULTS No statistically significant differences between healthy men and women were observed for CAVI and CAVI0. In contrast, both indices were positively associated with age (r = 0.49 and r = 0.43, P < 0.001), though with no clear distinction between subjects 20–30 and 30–40 years old. Heart rate and blood pressure were also identified as statistically significant predictors in multiple linear regression modeling, but with marginal clinical significance. Finally, an algorithm was proposed for calculating the expected range of CAVI in a healthy population for given age category, HR, and pulse pressure (PP) values. CONCLUSIONS We have evaluated the quantitative association between various population characteristics, CAVI, and CAVI0 values and established a method for estimating subject-level reference CAVI and CAVI0 measurements. [ABSTRACT FROM AUTHOR]
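The reference-range algorithm described above can be sketched as a prediction interval from a multivariable linear model. The coefficients and residual standard deviation below are hypothetical placeholders (the fitted values are not given in the abstract); the structure — point prediction plus a z-based interval — is the general idea.

```python
# Hypothetical coefficients for illustration only; the paper's fitted
# multivariable model is not reproduced here.
B0, B_AGE, B_HR, B_PP = 4.0, 0.05, -0.01, 0.005
RESID_SD = 0.6  # assumed residual standard deviation

def cavi_reference_range(age, hr, pp, z=1.96):
    """Point prediction and ~95% reference interval for CAVI from a
    linear model CAVI = b0 + b1*age + b2*HR + b3*PP + eps."""
    mu = B0 + B_AGE * age + B_HR * hr + B_PP * pp
    return mu - z * RESID_SD, mu + z * RESID_SD

lo, hi = cavi_reference_range(age=30, hr=70, pp=40)
```

A measured CAVI outside the subject-specific interval would then flag a value atypical for that age/HR/PP combination.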
- Published
- 2024
- Full Text
- View/download PDF
8. Understanding the Dynamics of Endangered Species with System Dynamics Approach
- Author
-
Qudrat-Ullah, Hassan, Abarbanel, Henry D. I., Series Editor, Braha, Dan, Series Editor, Érdi, Péter, Series Editor, Friston, Karl J., Series Editor, Grillner, Sten, Series Editor, Haken, Hermann, Series Editor, Jirsa, Viktor, Series Editor, Kacprzyk, Janusz, Series Editor, Kaneko, Kunihiko, Series Editor, Kelso, Scott, Founding Editor, Kirkilionis, Markus, Series Editor, Kurths, Jürgen, Series Editor, Menezes, Ronaldo, Series Editor, Nowak, Andrzej, Series Editor, Qudrat-Ullah, Hassan, Series Editor, Reichl, Linda, Series Editor, Schuster, Peter, Series Editor, Schweitzer, Frank, Series Editor, Sornette, Didier, Series Editor, and Thurner, Stefan, Series Editor
- Published
- 2023
- Full Text
- View/download PDF
9. Research on the Model-Based Analysis of eVTOL UML-2 Functional Requirements and Performance Requirements
- Author
-
Huang, Yingshan, Zhang, Shuguang, Chu, Nana, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Yan, Liang, editor, and Deng, Yimin, editor
- Published
- 2023
- Full Text
- View/download PDF
10. Experimental and theoretical analysis of a micro-cogenerative solar ORC-based unit equipped with a variable speed sliding rotary vane expander
- Author
-
Fabio Fatigati, Diego Vittorini, Marco Di Bartolomeo, and Roberto Cipollone
- Subjects
Solar ORC-based power unit, Sliding rotary vane expander, ORC control, Domestic micro-cogenerative application, Model-based analysis, Engineering (General). Civil engineering (General), TA1-2040
- Abstract
A promising solution for combined heat and power (CHP) micro production is represented by Organic Rankine Cycle (ORC)-based power units. In domestic appliances with unit electrical power below 1 kW, the reduced dimensions of the components are a critical aspect, as is the need to guarantee high reliability. When the hot source is solar energy, optimizing electricity production while keeping thermal energy available calls for proper management of the unit. Solar-based ORC recovery units frequently work in off-design conditions due to the variability of the hot source and of the domestic hot water (DHW) requirements. For this reason, the design and selection of the components should be performed carefully. The expander is commonly regarded as the key component of the unit, being the one that most affects its behaviour. For the mentioned power range, the volumetric expander is the best technological option and, among those available, Sliding Rotary Vane Expanders (SVRE) are gaining considerable interest. At off-design conditions, according to permeability theory, the expander intake pressure varies linearly with the mass flow rate of the working fluid (WF), which is the most suitable and easiest parameter to change. This modifies the performance of the unit from both a thermodynamic and a technological point of view. In this paper, the speed variation of the expander is considered as the control parameter to restore the design expander intake pressure. To assess a strategy for this speed variation, a comprehensive model of the SVRE is presented for operation in a solar-driven ORC-based unit. The model is physically based and recovers and extends the permeability theory developed by the authors in previous works. 
An experimental ORC-based unit was fully instrumented and operated, coupled with a reservoir, as usually present when flat-plate solar collectors are used, which stores the thermal energy that fulfils thermal energy requests and feeds the generating unit. The model was widely validated with experimental data conceived for the purpose. In the unit the expander speed was varied and, thanks to the permeability theory, the relationships between WF flow-rate variations, expander inlet pressure, and expander speed were investigated. The potential of a control strategy based on the expander revolution speed was established, together with a deeper understanding of SVRE behaviour and of the relationships between operating variables. In particular, it was observed that varying the speed from 1000 RPM up to 2000 RPM optimized the expander behaviour, ensuring proper working conditions over a 30–100 g/s flow-rate range.
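The control idea above — intake pressure varies linearly with mass flow rate, so expander speed can be adjusted to restore the design pressure — can be sketched with a toy permeability-style relation. The coefficient and pressure values below are assumed for illustration, not the authors' calibrated model.

```python
# Illustrative linear "permeability" relation (not the authors' full model):
# intake pressure rises with mass flow and falls with expander speed,
#   p_in = p_out + m_dot / (c * N)
# so the speed restoring the design intake pressure is
#   N = m_dot / (c * (p_design - p_out)).
P_OUT = 2.0e5   # exhaust pressure [Pa], assumed
C = 5.0e-11     # permeability coefficient [kg/(s*Pa*RPM)], assumed

def speed_for_design_pressure(m_dot, p_design):
    """Expander speed (RPM) that restores the design intake pressure
    for a given working-fluid mass flow rate (kg/s)."""
    return m_dot / (C * (p_design - P_OUT))

n1 = speed_for_design_pressure(0.030, 8.0e5)  # 30 g/s
n2 = speed_for_design_pressure(0.060, 8.0e5)  # doubled flow => doubled speed
```

Under this linear relation the required speed scales directly with flow rate, which is consistent with the abstract's observation that a 1000–2000 RPM span covers a 30–100 g/s flow-rate range.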
- Published
- 2023
- Full Text
- View/download PDF
11. Standards for model-based early bactericidal activity analysis and sample size determination in tuberculosis drug development.
- Author
-
Mockeliunas, Laurynas, Faraj, Alan, van Wijk, Rob C., Upton, Caryn M., van den Hoogen, Gerben, Diacon, Andreas H., and Simonsson, Ulrika S. H.
- Subjects
DRUG development, SAMPLE size (Statistics), TUBERCULOSIS, DRUG efficacy, PHARMACOGENOMICS, PHARMACOKINETICS
- Abstract
Background: A critical step in tuberculosis (TB) drug development is the Phase 2a early bactericidal activity (EBA) study, which informs whether a new drug or treatment has short-term activity in humans. The aim of this work was to present a standardized pharmacometric model-based early bactericidal activity analysis workflow and to determine the sample sizes needed to detect early bactericidal activity or a difference between treatment arms. Methods: Seven steps were identified and developed for a standardized pharmacometric model-based early bactericidal activity analysis approach. Nonlinear mixed effects modeling was applied and different scenarios were explored for the sample size calculations. The sample sizes needed to detect early bactericidal activity given different TTP slopes and associated variability were assessed. In addition, the sample sizes needed to detect effect differences between two treatments were evaluated, given the impact of different TTP slopes, variability in TTP slope, and effect differences. Results: The presented early bactericidal activity analysis approach incorporates an estimate of early bactericidal activity with uncertainty through the model-based estimate of TTP slope, variability in TTP slope, and the impact of covariates and pharmacokinetics on drug efficacy. Further, it allows for treatment comparison or dose optimization in Phase 2a. To detect early bactericidal activity with 80% power at a 5% significance level, 13 and 8 participants/arm were required for a treatment with a TTP-EBA0-14 as low as 11 h when accounting for variability in pharmacokinetics and when variability in TTP slope was 104% [coefficient of variation (CV)] and 22%, respectively. Higher sample sizes are required for smaller early bactericidal activity and when pharmacokinetics is not accounted for. 
Based on sample size determinations to detect a difference between two groups, TTP slope, variability in TTP slope, and the effect difference between the two treatment arms need to be considered. Conclusion: A robust standardized pharmacometric model-based EBA analysis approach was established in close collaboration between microbiologists, clinicians, and pharmacometricians. The work illustrates the importance of accounting for covariates and drug exposure in EBA analysis in order to increase the power of detecting early bactericidal activity for a single treatment arm, as well as differences in EBA between treatment arms, in Phase 2a trials of TB drug development. [ABSTRACT FROM AUTHOR]
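The dependence of sample size on TTP-slope variability described above can be illustrated with a Monte Carlo power calculation. This is a deliberate simplification (a z-test on the arm-mean slope rather than the paper's nonlinear mixed-effects workflow), and the numbers are illustrative, not the published results.

```python
import numpy as np

def power_single_arm(n, slope=0.05, cv=1.04, n_sim=4000, seed=1):
    """Monte Carlo power to detect a positive mean TTP slope in one arm,
    using a simple one-sample test (a simplification of the model-based
    workflow; slope units and values are illustrative)."""
    rng = np.random.default_rng(seed)
    sd = slope * cv                       # between-subject SD of the slope
    slopes = rng.normal(slope, sd, size=(n_sim, n))
    t = slopes.mean(axis=1) / (slopes.std(axis=1, ddof=1) / np.sqrt(n))
    return float(np.mean(t > 1.96))       # one-sided test at ~2.5%

# With the same n, lower between-subject variability yields higher power --
# the reason fewer participants/arm suffice when CV is 22% versus 104%.
p_low_var = power_single_arm(8, cv=0.22)
p_high_var = power_single_arm(8, cv=1.04)
```

The full workflow additionally propagates pharmacokinetic variability and covariates, which the abstract notes further changes the required sample size.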
- Published
- 2023
- Full Text
- View/download PDF
12. Neural Basis of Social Influence of Observing Other's Perception in Dot-Number Estimation.
- Author
-
Ogawa, Akitoshi, Kameda, Tatsuya, and Nakatani, Hironori
- Subjects
FUNCTIONAL magnetic resonance imaging, PARIETAL lobe, SOCIAL influence, ATTENTIONAL bias, IMAGE analysis, BEHAVIORAL assessment
- Abstract
• Participants' perceptual decisions were modulated by observing others' decisions. • The computational model could deduce the participants' decisions. • Model-based analysis revealed that MPFC was associated with self-other discrepancy. • Parietal region showed altered activity patterns after observing others' decisions. Our perceptions and decisions are often implicitly influenced by observing another's actions. However, it is unclear how observing other people's perceptual decisions without interacting with them can engage the processing of self-other discrepancies and change the observer's decisions. In this study, we employed functional magnetic resonance imaging and a computational model to investigate the neural basis of how unilaterally observing the other's perceptual decisions modulated one's own decisions. The experimental task was to discriminate whether the number of presented dots was higher or lower than a reference number. The participants performed the task alone while unilaterally observing the performance of another "participant," who produced overestimations and underestimations in the same task in separate sessions. Results of the behavioral analysis showed that the participants' decisions were modulated to resemble those of the other. Image analysis based on the computational model revealed that activation in the medial prefrontal cortex was associated with the discrepancy between the inferred participant's and the presented other's decisions. In addition, the number-sensitive region in the superior parietal region showed altered activation patterns after observing the other's overestimations and underestimations. The activity of the superior parietal region was not involved in assessing the observation of the other's perceptual decisions, but it was engaged in plain numerosity perception. 
These results suggest that computational modeling can capture the neuro-behavioral processing of self-other discrepancies in perception, followed by activity modulation in the number-sensitive region, in the task of dot-number estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Quantification of variations in the compressor characteristics of power generation gas turbines at partial loads using actual operation data.
- Author
-
Lee, Jae Hong, Kang, Do Won, Jeong, Ji Hun, and Kim, Tong Seop
- Subjects
GAS turbines, COMPRESSORS, ELECTRICITY markets, INFORMATION-seeking behavior, RENEWABLE energy sources, MANUFACTURING industries
- Abstract
As the proportion of renewable energy is increasing steadily in the electricity market, gas turbines (GTs) need to operate under the partial load more frequently. Thus, it is necessary to diagnose the GT performance using partial load data. The variations in the characteristic parameters of the compressor should be identified according to the angle of inlet guide vane (IGV), which plays the key role in controlling GT power, to improve the accuracy of performance diagnosis at partial load. This paper proposes a method to identify the variations using actual operation data. The method consists of two steps. The first step evaluates the effect of compressor fouling using full load operating data. The second step quantifies the effect of changes in IGV angle using partial load operating data. The method was applied to the operation data of a 170 MW class gas turbine. The changes in flow capacity, pressure ratio, and compressor efficiency according to the IGV angle were identified, and formulas to describe the variations were derived. The significance of the proposed method is that it does not require detailed information of the compressor behavior change from the manufacturer but uses the actual operating data of the GT users. [ABSTRACT FROM AUTHOR]
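The second step of the method — quantifying how compressor characteristics change with IGV angle and deriving formulas from operating data — can be sketched as a least-squares fit. The data points and the quadratic form below are invented for illustration; the paper derives its formulas from actual plant operation data.

```python
import numpy as np

# Synthetic partial-load data: IGV opening vs. relative flow capacity.
# Real values would come from the GT users' operation logs.
igv = np.array([60.0, 70.0, 80.0, 90.0, 100.0])        # IGV opening [%], assumed
flow_factor = np.array([0.62, 0.74, 0.86, 0.94, 1.00])  # flow / design flow

coeffs = np.polyfit(igv, flow_factor, 2)  # quadratic correction formula

def flow_capacity_ratio(angle):
    """Relative flow capacity predicted by the fitted formula."""
    return float(np.polyval(coeffs, angle))
```

Analogous fits for pressure ratio and compressor efficiency versus IGV angle would complete the set of correction formulas, after the first step has removed the fouling effect using full-load data.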
- Published
- 2023
- Full Text
- View/download PDF
14. Development of Dual Intake Port Technology in ORC-Based Power Unit Driven by Solar-Assisted Reservoir
- Author
-
Fabio Fatigati and Roberto Cipollone
- Subjects
dual intake port scroll expander, solar ORC-based power unit, domestic micro-cogenerative application, volumetric expander, model-based analysis, ORC-unit comprehensive model experimental validation, Technology
- Abstract
ORC-based micro-cogeneration systems that exploit a solar source to generate electricity and domestic hot water (DHW) simultaneously are a promising solution for reducing CO2 emissions in the residential sector. In recent years, considerable attention has focused on technological solutions that improve the performance of solar ORC-based systems, which frequently work under off-design conditions due to the intermittent availability of the solar source and the variability of domestic hot water demand. Optimization efforts concentrate on improving component technology and plant architecture. The expander is regarded as the key component of such micro-cogeneration units. Generally, volumetric machines are adopted thanks to their better capability to handle severe off-design conditions. Among volumetric expanders, scroll machines are among the best candidates thanks to their reliability and their flexibility in managing two-phase working fluid; their good efficiency adds further interest. Nevertheless, as with other volumetric expanders, additional research effort is needed to improve efficiency. The fixed built-in volume ratio can produce under- or over-expansion during vane filling and emptying, mainly when the operating conditions depart from the design point. To overcome this phenomenon, a dual intake port (DIP) technology was introduced for the scroll expander. This technology widens the angular extent of the intake phase, adapting the ratio between intake and exhaust volume (the so-called built-in volume ratio) to the operating condition. Moreover, DIP technology increases the permeability of the machine, ensuring a higher mass flow rate for a given pressure difference across the expander. 
On the other hand, for a given mass flow rate, the expander intake pressure diminishes, with a positive benefit on scroll efficiency. DIP benefits were already proven experimentally and theoretically in previous works by the authors for Sliding Rotary Vane Expanders (SVRE). In the present paper, the impact of DIP technology is assessed in a solar-assisted ORC-based micro-cogeneration system operating with scroll expanders and characterized by reduced power (hundreds of W). The DIP scroll was found to process a 32% higher mass flow rate for a given pressure difference between intake and exhaust sides for the application at hand. This leads to an average power increase of 10% and an improvement of up to 5% in expander mechanical efficiency. Such results are particularly interesting for solar-assisted micro-cogeneration ORC-based units: the high variability of the hot source and of DHW demand makes the DIP expander operate over a wide range of conditions. The experimental activity confirms the suitability of the DIP expander to exploit as much as possible the thermal power available from the hot source, even at variable temperature during operation.
- Published
- 2024
- Full Text
- View/download PDF
15. Standards for model-based early bactericidal activity analysis and sample size determination in tuberculosis drug development
- Author
-
Laurynas Mockeliunas, Alan Faraj, Rob C. van Wijk, Caryn M. Upton, Gerben van den Hoogen, Andreas H. Diacon, and Ulrika S. H. Simonsson
- Subjects
tuberculosis, early bactericidal activity, sample size, pharmacometrics, model-based analysis, Therapeutics. Pharmacology, RM1-950
- Abstract
Background: A critical step in tuberculosis (TB) drug development is the Phase 2a early bactericidal activity (EBA) study, which informs whether a new drug or treatment has short-term activity in humans. The aim of this work was to present a standardized pharmacometric model-based early bactericidal activity analysis workflow and to determine the sample sizes needed to detect early bactericidal activity or a difference between treatment arms. Methods: Seven steps were identified and developed for a standardized pharmacometric model-based early bactericidal activity analysis approach. Nonlinear mixed effects modeling was applied and different scenarios were explored for the sample size calculations. The sample sizes needed to detect early bactericidal activity given different TTP slopes and associated variability were assessed. In addition, the sample sizes needed to detect effect differences between two treatments were evaluated, given the impact of different TTP slopes, variability in TTP slope, and effect differences. Results: The presented early bactericidal activity analysis approach incorporates an estimate of early bactericidal activity with uncertainty through the model-based estimate of TTP slope, variability in TTP slope, and the impact of covariates and pharmacokinetics on drug efficacy. Further, it allows for treatment comparison or dose optimization in Phase 2a. To detect early bactericidal activity with 80% power at a 5% significance level, 13 and 8 participants/arm were required for a treatment with a TTP-EBA0-14 as low as 11 h when accounting for variability in pharmacokinetics and when variability in TTP slope was 104% [coefficient of variation (CV)] and 22%, respectively. Higher sample sizes are required for smaller early bactericidal activity and when pharmacokinetics is not accounted for. 
Based on sample size determinations to detect a difference between two groups, TTP slope, variability in TTP slope, and the effect difference between the two treatment arms need to be considered. Conclusion: A robust standardized pharmacometric model-based EBA analysis approach was established in close collaboration between microbiologists, clinicians, and pharmacometricians. The work illustrates the importance of accounting for covariates and drug exposure in EBA analysis in order to increase the power of detecting early bactericidal activity for a single treatment arm, as well as differences in EBA between treatment arms, in Phase 2a trials of TB drug development.
- Published
- 2023
- Full Text
- View/download PDF
16. The Impact of Speed-Accuracy Instructions on Spatial Congruency Effects.
- Author
-
Heuer, Herbert and Wühr, Peter
- Subjects
ACCURACY, ERROR rates, STATISTICAL hypothesis testing, SOFTWARE compatibility, SPEED
- Abstract
In many tasks, humans can trade speed against accuracy. This variation of strategy has different consequences for congruency effects in different conflict tasks. Recently, Mittelstädt et al. (2022) suggested that these differences are related to the dynamics of congruency effects as assessed by delta plots: with increasing delta plots in the Eriksen flanker task, congruency effects were larger under accuracy set, and with decreasing delta plots in the Simon task they were smaller. Here we tested this hypothesis within a single task, making use of the observation that for the Simon task delta plots decline when the irrelevant feature is presented first, but increase when the relevant feature leads. The differences between congruency effects under speed and accuracy instructions confirmed the hypothesized relation to the slope of delta plots. In fact, for similar delta plots in the compared speed-accuracy conditions, the relation should be a straightforward consequence of the shorter and longer reaction times under speed and accuracy set, respectively. However, when relevant and irrelevant features were presented simultaneously, congruency effects were stronger under speed set at all reaction times. For this condition, a supplementary model-based analysis with an extended leaky, competing accumulator model suggested a stronger and longer-lasting influence of the irrelevant stimulus feature. The congruency effects for reaction times were accompanied by congruency effects for error rates when delta plots were decreasing, but not when they were increasing. [ABSTRACT FROM AUTHOR]
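A minimal two-unit leaky, competing accumulator of the kind extended in this paper can be sketched as follows. All parameter values and the drive settings are illustrative assumptions, not fitted estimates; the paper's extended model adds further mechanisms for the time course of the irrelevant feature.

```python
import numpy as np

def lca_trial(drive, leak=0.2, inhib=0.3, noise=0.3, thresh=1.0,
              dt=0.01, max_t=3.0, rng=None):
    """One trial of a two-unit leaky, competing accumulator.

    drive: input to the (correct, incorrect) response units, e.g. the
    relevant feature plus an irrelevant boost in a Simon-type task.
    Returns (reaction time, index of the winning unit)."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros(2)
    for step in range(int(max_t / dt)):
        dx = (np.asarray(drive) - leak * x - inhib * x[::-1]) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal(2)
        x = np.maximum(x + dx, 0.0)       # activations stay non-negative
        if x.max() >= thresh:
            return (step + 1) * dt, int(np.argmax(x))
    return max_t, int(np.argmax(x))

rng = np.random.default_rng(0)
# Congruent trials: the irrelevant feature boosts the correct unit;
# incongruent trials: it boosts the competitor instead.
rts_con = [lca_trial((1.2, 0.4), rng=rng)[0] for _ in range(300)]
rts_inc = [lca_trial((1.0, 0.6), rng=rng)[0] for _ in range(300)]
```

Comparing the RT distributions of the two trial types at different quantiles is exactly what a delta plot summarizes, which is how such a model connects to the analyses in the abstract.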
- Published
- 2023
- Full Text
- View/download PDF
17. A model-based framework to prevent excavator induced damage in operations on natural gas pipelines.
- Author
-
Melchiorre, Matteo, Bacchi, Luca, Palmieri, Pierpaolo, Ruggeri, Andrea, Salamina, Laura, and Mauro, Stefano
- Subjects
- *
DAMAGE models , *BURIED pipes (Engineering) , *FORCE & energy , *KINETIC energy , *EXCAVATING machinery - Abstract
• Model-based method to assess pipe dent due to external interference by excavators. • Numerical scheme for calculating excavator maximum force and kinetic energy. • Excavator-pipe interaction model suitable for static and dynamic contacts. • Dent charts for selecting the excavator to be employed to avoid damaging the pipe. • Preventive action that involves limiting the excavator workspace for risk mitigation. Despite the presence of numerous safety precautions, the primary cause of damage to underground pipes is often attributed to external interference from excavators during their operations. To address this issue, this paper introduces a model-based approach that incorporates both static and dynamic contact in assessing the damage caused by excavators on pipes. The objective of this method is to offer a practical tool that can assist in determining the appropriate excavator size for buried gas pipes. By identifying the maximum safe excavator size, the aim is to minimize the potential risks of mechanical damage to the pipe, to prevent hazards for operators and to protect the integrity of the pipeline. For this purpose, a comprehensive excavator model is constructed. Following this, a suitable damage model for the pipe is selected and the interaction between the bucket tooth and the pipe is modelled. To enhance accuracy, the excavator and pipe are interconnected through the solution of damped contact equations, which take into account the stiffnesses of both the pipe and excavator, as well as the damping effect resulting from pipe deformation. Results provide valuable insight into the potential damage caused to the pipe, which can be attributed to either static or dynamic contacts, depending on which excavator is being used. Failure is assessed by the plastic dent, which can be avoided by selecting the most suitable excavator size.
Moreover, the analysis of dent depth caused by various excavators on different pipes for gas transmission, opens to the possibility of mitigating the risk of failures by limiting the excavator workspace. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. The nonlinear frequency response method for the diagnosis of PEM water electrolyzer performance.
- Author
-
Miličić, Tamara, Muthunayakage, Kasun, Vũ, Thanh Hoàng, Ritschel, Tobias K.S., Živković, Luka A., and Vidaković-Koch, Tanja
- Subjects
- *
WATER electrolysis , *POLYELECTROLYTES , *POLYMERIC membranes , *IMPEDANCE spectroscopy , *SUSTAINABLE development - Abstract
[Display omitted] • Novel diagnostic method for studying PEMWE losses. • Cathode contribution to PEMWE losses isn't negligible and increases with current. • Cathode reaction can explain the bend in the Tafel slope of PEMWE. • Mass transport losses are less significant than the kinetic losses. • The nonlinear part of the response shows greater parameter sensitivity than EIS. A better grasp of the underlying phenomena occurring in electrochemical technologies is crucial for their further development and, consequently, a much-needed step forward to a greener economy. Diagnostic methods that can reliably determine the state of health and causes of the performance shortcomings are indispensable. The ease of obtaining electrochemical data makes the analysis of current and voltage responses the preferred diagnostic approach. Traditional techniques, like steady-state polarization and electrochemical impedance spectroscopy are limited by their inability to distinguish between different processes due to the constraints of steady-state and linearity of system response, respectively. The nonlinear frequency response (NFR) method is an advanced diagnostic method that has the potential to overcome these issues. In this work, the NFR method was applied both experimentally and theoretically to study polymer electrolyte membrane water electrolysis (PEMWE). The model-based analysis provides insights into the losses in the PEMWE at different current densities. It shows that the contributions of the cathode to the overpotential losses at high current densities cannot be neglected. This has been much discussed in the literature and was often attributed only to mass transport losses. The contribution of mass transport has also been identified at higher current densities but is less pronounced than the kinetic contributions. Furthermore, we show that including the nonlinear dynamics in the analysis was crucial for identifying the appropriate parameter set. 
Overall, this work showed a considerable potential of the NFR method for the diagnosis of PEMWE due to its prospects of identifying different processes occurring within. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Weighting, informativeness and causal inference, with an application to rainfall enhancement.
- Author
-
Chambers, Ray, Ranjbar, Setareh, Salvati, Nicola, and Pacini, Barbara
- Subjects
CAUSAL inference ,TREATMENT effectiveness ,SAMPLING (Process) ,PROBABILITY theory - Abstract
Sampling is informative when probabilities of sample inclusion depend on unknown variables that are correlated with a response variable of interest. When sample inclusion probabilities are available, inverse probability weighting can be used to account for informative sampling in such a situation, although usually at the cost of less precise inference. This paper reviews two important research contributions by Chris Skinner that modify these weights to reduce their variability while at the same time retaining consistency of the weighted estimators. In some cases, however, sample inclusion probabilities are not known, and are estimated as propensity scores. This is often the situation in causal analysis, and double robust methods that protect against the resulting misspecification of the sampling process have been the focus of much recent research. In this paper we propose two model‐assisted modifications to the popular inverse propensity score weighted estimator of an average treatment effect, and then illustrate their use in a causal analysis of a rainfall enhancement experiment that was carried out in Oman between 2013 and 2018. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. The Impact of Speed-Accuracy Instructions on Spatial Congruency Effects
- Author
-
Herbert Heuer and Peter Wühr
- Subjects
accuracy ,speed ,soa ,simon effect ,model-based analysis ,congruency ,compatibility ,Consciousness. Cognition ,BF309-499 - Abstract
In many tasks humans can trade speed against accuracy. This variation of strategy has different consequences for congruency effects in different conflict tasks. Recently, Mittelstädt et al. (2022) suggested that these differences are related to the dynamics of congruency effects as assessed by delta plots. With increasing delta plots in the Eriksen flanker task congruency effects were larger under accuracy set, and with decreasing delta plots in the Simon task they were smaller. Here we tested the hypothesis for a single task, making use of the observation that for the Simon task delta plots decline when the irrelevant feature is presented first, but increase when the relevant feature leads. The differences between congruency effects under speed and accuracy instructions confirmed the hypothesized relation to the slope of delta plots. In fact, for similar delta plots in the compared speed-accuracy conditions, the relation should be a straightforward consequence of the shorter and longer reaction times with speed and accuracy set, respectively. However, when relevant and irrelevant features were presented simultaneously, congruency effects were stronger under speed set at all reaction times. For this condition, a supplementary model-based analysis with an extended leaky, competing accumulator model suggested a stronger and longer-lasting influence of the irrelevant stimulus feature. The congruency effects for reaction times were accompanied by congruency effects for error rates when delta plots were decreasing, but not when they were increasing.
- Published
- 2023
- Full Text
- View/download PDF
21. Survey design and analysis considerations when utilizing misclassified sampling strata
- Author
-
Aya A. Mitani, Nathaniel D. Mercaldo, Sebastien Haneuse, and Jonathan S. Schildcrout
- Subjects
Complex survey ,Disproportionate stratified sampling ,Stratum misclassification ,Design-based analysis ,Model-based analysis ,Medicine (General) ,R5-920 - Abstract
Abstract Background A large multi-center survey was conducted to understand patients’ perspectives on biobank study participation with particular focus on racial and ethnic minorities. In order to enrich the study sample with racial and ethnic minorities, disproportionate stratified sampling was implemented with strata defined by electronic health records (EHR) that are known to be inaccurate. We investigate the effect of sampling strata misclassification in complex survey design. Methods Under non-differential and differential misclassification in the sampling strata, we compare the validity and precision of three simple and common analysis approaches for settings in which the primary exposure is used to define the sampling strata. We also compare the precision gains/losses observed from using a disproportionate stratified sampling scheme compared to using a simple random sample under varying degrees of strata misclassification. Results Disproportionate stratified sampling can result in more efficient parameter estimates of the rare subgroups (race/ethnic minorities) in the sampling strata compared to simple random sampling. When sampling strata misclassification is non-differential with respect to the outcome, a design-agnostic analysis was preferred over model-based and design-based analyses. All methods yielded unbiased parameter estimates but standard error estimates were lowest from the design-agnostic analysis. However, when misclassification is differential, only the design-based method produced valid parameter estimates of the variables included in the sampling strata. Conclusions In complex survey design, when the interest is in making inference on rare subgroups, we recommend implementing disproportionate stratified sampling over simple random sampling even if the sampling strata are misclassified. If the misclassification is non-differential, we recommend a design-agnostic analysis. 
However, if the misclassification is differential, we recommend using design-based analyses.
- Published
- 2021
- Full Text
- View/download PDF
22. Model-based Analysis of Data Inaccuracy Awareness in Business Processes.
- Author
-
Evron, Yotam, Soffer, Pnina, and Zamansky, Anna
- Abstract
Problem definition: Data errors in business processes can be a source for exceptions and hamper business outcomes. Relevance: The paper proposes a method for analyzing data inaccuracy issues already at process design time, in order to support process designers by identifying process parts where data errors might remain unrecognized, so decisions could be taken based on inaccurate data. Methodology: The paper follows design science, developing a method as an artifact. The conceptual basis is the notion of data inaccuracy awareness – the ability to tell whether potential discrepancies between real and IS values may exist. Results: The method was implemented on top of a Petri net modeling tool and validated in a case study performed in a large manufacturing company of safety–critical systems. Managerial implications: Anticipating consequences of data inaccuracy already during process design can help avoid them at runtime. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. A comparative kinetic study between TGA & DSC techniques using model-free and model-based analyses to assess spontaneous combustion propensity of Indian coals.
- Author
-
Mandal, Somu, Mohalik, Niroj Kumar, Ray, Santosh Kumar, Khan, Asfar Mobin, Mishra, Debashish, and Pandey, Jai Krishna
- Subjects
- *
SPONTANEOUS combustion , *THERMAL analysis , *COAL combustion , *COAL , *COAL sampling , *ACTIVATION energy , *COALFIELDS - Abstract
Kinetic study of coal was carried out using the simultaneous thermal analysis (STA) technique to assess the spontaneous combustion propensity of coal samples collected from various Indian coalfields having both fiery and non-fiery seams. The kinetic parameters were estimated by using both model-free and model-based analysis for both TGA & DSC data. The model-based method comprises four different consecutive reaction steps, viz. A→B→C→D→E, for the spontaneous combustion process, and the second reaction step (B→C) was used for this investigation. Chemometric analysis was applied to determine the relation between the proximate analysis and activation energy of the samples using model-free and model-based techniques. The activation energy for the second reaction step of the model-based method for both TGA and DSC data showed a good relationship with the standard methods, i.e., crossing point temperature (XPT) and Tgign of the samples. It indicates that the activation energy values at the oxidation stage (2nd stage) play a significant role in the spontaneous combustion propensity of coal. The study also reveals that the model-based analysis provided better results in comparison to model-free analysis to assess the spontaneous combustion propensity of coal. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
24. Generalized Laminar Population Analysis (gLPA) for Interpretation of Multielectrode Data from Cortex.
- Author
-
Głąbska, Helena T, Norheim, Eivind, Devor, Anna, Dale, Anders M, Einevoll, Gaute T, and Wójcik, Daniel K
- Subjects
LFP analysis ,MUA ,computational neuroscience ,local field potential ,model-based analysis ,multi-unit activity ,signal decomposition ,thalamocortical ,Neurosciences ,Cognitive Sciences - Abstract
Laminar population analysis (LPA) is a method for analysis of electrical data recorded by linear multielectrodes passing through all laminae of cortex. Like principal components analysis (PCA) and independent components analysis (ICA), LPA offers a way to decompose the data into contributions from separate cortical populations. However, instead of using purely mathematical assumptions in the decomposition, LPA is based on physiological constraints, i.e., that the observed LFP (low-frequency part of signal) is driven by action-potential firing as observed in the MUA (multi-unit activity; high-frequency part of the signal). In the presently developed generalized laminar population analysis (gLPA) the set of basis functions accounting for the LFP data is extended compared to the original LPA, thus allowing for a better fit of the model to experimental data. This increases the risk of overfitting, however, and we therefore tested various versions of gLPA on virtual LFP data in which we knew the ground truth. These synthetic data were generated by biophysical forward-modeling of electrical signals from network activity in the comprehensive, and well-known, thalamocortical network model developed by Traub and coworkers. The results for the Traub model imply that while the laminar components extracted by the original LPA method overall are in fair agreement with the ground-truth laminar components, the results may be improved by use of the gLPA method with two (gLPA-2) or even three (gLPA-3) postsynaptic LFP kernels per laminar population.
- Published
- 2016
25. Model-based energy flexibility analysis of a dry room HVAC system in battery cell production.
- Author
-
Vogt, Marcus, Platzdasch, Aïcha, Abraham, Tim, and Herrmann, Christoph
- Published
- 2022
- Full Text
- View/download PDF
26. Increasing stimulus similarity drives nonmonotonic representational change in hippocampus
- Author
-
Jeffrey Wammes, Kenneth A Norman, and Nicholas Turk-Browne
- Subjects
plasticity ,statistical learning ,image synthesis ,deep neural networks ,model-based analysis ,Medicine ,Science ,Biology (General) ,QH301-705.5 - Abstract
Studies of hippocampal learning have obtained seemingly contradictory results, with manipulations that increase coactivation of memories sometimes leading to differentiation of these memories, but sometimes not. These results could potentially be reconciled using the nonmonotonic plasticity hypothesis, which posits that representational change (memories moving apart or together) is a U-shaped function of the coactivation of these memories during learning. Testing this hypothesis requires manipulating coactivation over a wide enough range to reveal the full U-shape. To accomplish this, we used a novel neural network image synthesis procedure to create pairs of stimuli that varied parametrically in their similarity in high-level visual regions that provide input to the hippocampus. Sequences of these pairs were shown to human participants during high-resolution fMRI. As predicted, learning changed the representations of paired images in the dentate gyrus as a U-shaped function of image similarity, with neural differentiation occurring only for moderately similar images.
- Published
- 2022
- Full Text
- View/download PDF
27. Study of Power Equivalent Continuous Approximation Based on the Recent Consensus Recommendations for Brain Tumor Imaging with Pulsed Chemical Exchange Saturation Transfer at 3T.
- Author
-
Pan SQ, Hum YC, Lai KW, Yap WS, Ong CW, and Tee YK
- Abstract
The quantitative analysis of pulsed-chemical exchange saturation transfer (CEST) using a full model-based method is computationally challenging, as it involves dealing with varying RF values in pulsed saturation. A power equivalent continuous approximation of B1 power is usually applied to accelerate the analysis. In line with recent consensus recommendations from the CEST community for pulsed-CEST at 3T, particularly recommending a high RF saturation power (B1 = 2.0 µT) for the clinical application in brain tumors, this technical note investigated the feasibility of using average power (AP) as the continuous approximation. The simulated results revealed excellent performance of the AP continuous approximation in low saturation power scenarios, but discrepancies were observed in the z-spectra for the high saturation power cases. Caution should be taken, or it may lead to inaccurate fitted parameters, and the difference can be more than 10% in the high saturation power cases.
- Published
- 2024
- Full Text
- View/download PDF
28. Quantifying mechanisms of cognition with an experiment and modeling ecosystem.
- Author
-
Weichart, Emily R., Darby, Kevin P., Fenton, Adam W., Jacques, Brandon G., Kirkpatrick, Ryan P., Turner, Brandon M., and Sederberg, Per B.
- Subjects
- *
COGNITION , *COGNITIVE ability , *STATISTICAL reliability , *TASK analysis , *EXPLICIT memory - Abstract
Although there have been major strides toward uncovering the neurobehavioral mechanisms involved in cognitive functions like memory and decision making, methods for measuring behavior and accessing latent processes through computational means remain limited. To this end, we have created SUPREME (Sensing to Understanding and Prediction Realized via an Experiment and Modeling Ecosystem): a toolbox for comprehensive cognitive assessment, provided by a combination of construct-targeted tasks and corresponding computational models. SUPREME includes four tasks, each developed symbiotically with a mechanistic model, which together provide quantified assessments of perception, cognitive control, declarative memory, reward valuation, and frustrative nonreward. In this study, we provide validation analyses for each task using two sessions of data from a cohort of cognitively normal participants (N = 65). Measures of test-retest reliability (r: 0.58–0.75), stability of individual differences (ρ: 0.56–0.70), and internal consistency (α: 0.80–0.86) support the validity of our tasks. After fitting the models to data from individual subjects, we demonstrate each model's ability to capture observed patterns of behavioral results across task conditions. Our computational approaches allow us to decompose behavior into cognitively interpretable subprocesses, which we can compare both within and between participants. We discuss potential future applications of SUPREME, including clinical assessments, longitudinal tracking of cognitive functions, and insight into compensatory mechanisms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
29. Applying Integrated Domain-Specific Modeling for Multi-concerns Development of Complex Systems
- Author
-
Pröll, Reinhard, Rumpold, Adrian, Bauer, Bernhard, Barbosa, Simone Diniz Junqueira, Series Editor, Filipe, Joaquim, Series Editor, Kotenko, Igor, Series Editor, Sivalingam, Krishna M., Series Editor, Washio, Takashi, Series Editor, Yuan, Junsong, Series Editor, Zhou, Lizhu, Series Editor, Pires, Luís Ferreira, editor, Hammoudi, Slimane, editor, and Selic, Bran, editor
- Published
- 2018
- Full Text
- View/download PDF
30. Model‐based decomposition of environmental, spatial and species‐interaction effects on the community structure of common fish species in 772 European lakes.
- Author
-
Mehner, Thomas, Argillier, Christine, Hesthagen, Trygve, Holmgren, Kerstin, Jeppesen, Erik, Kelly, Fiona, Krause, Teet, Olin, Mikko, Volta, Pietro, Winfield, Ian J., Brucet, Sandra, and Leprieur, Fabien
- Subjects
- *
FISH communities , *MARKOV chain Monte Carlo , *LAKES , *FORAGE fishes , *FRESHWATER fishes , *LATENT variables - Abstract
Aim: We tested whether there is a strong effect of species interactions on assembly of local lake fish communities, in addition to environmental filters and dispersal. Location: Seven hundred and seventy‐two European lakes and reservoirs. Time period: 1993–2012. Major taxa studied: Nineteen species of freshwater fishes. Methods: We applied a latent variable approach using Bayesian Markov chain Monte Carlo algorithms (R package "BORAL"). We compared the contributions of six environmental predictors and the spatial organization of 772 European lakes in 209 river basins on the presence/absence of the 19 most frequent fish species and on the biomass and mean mass of the six dominant species. We inspected the residual correlation matrix for positive and negative correlations between species. Results: Environmental (50%) and spatial (10%) predictors contributed to the presence/absence assembly of lake fish communities, whereas lake size and productivity contributed strongly to the biomass and mean mass structures. We found highly significant negative correlations between predator and prey fish species pairs in the presence/absence, biomass and mean mass datasets. There were more significantly positive than negative correlations between species pairs in all three datasets. In addition, unmeasured abiotic predictors might explain some of the correlations between species. Main conclusions: Strong effects of species interactions on assembly of lake fish communities are very likely. We admit that our approach is of a correlational nature and does not generate mechanistic evidence that interactions strongly shape fish community structures; however, the results fit with present knowledge about the interactions between the most frequent fish species in European lakes and they support the assumption that, in particular, the mean masses of fish species in lakes are modified by species interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
31. Model-based analysis of learning latent structures in probabilistic reversal learning task.
- Author
-
Masumi, Akira and Sato, Takashi
- Abstract
Flexibility in decision making is essential for adapting to dynamically changing scenarios. A probabilistic reversal learning task is one of the experimental paradigms used to characterize the flexibility of a subject. Recent studies hypothesized that in addition to a reward history, a subject may also utilize a "cognitive map" that represents the latent structures of the task. We conducted experiments on a probabilistic reversal learning task and performed model-based analysis using two types of reinforcement learning (RL) models, with and without state representations of the task. Based on statistical model selection, the RL model without state representations was selected for explaining the behavior of the average of all the subjects. However, the individual behaviors of approximately 20% subjects were explained using the RL model with state representation and by the probabilistic estimation of the current state. We inferred that these results possibly indicate the variations in the development of the orbitofrontal cortex of the subjects. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
32. Survey design and analysis considerations when utilizing misclassified sampling strata.
- Author
-
Mitani, Aya A., Mercaldo, Nathaniel D., Haneuse, Sebastien, and Schildcrout, Jonathan S.
- Subjects
- *
PATIENTS' attitudes , *ELECTRONIC health records , *STATISTICAL sampling , *RACIAL minorities , *MINORITIES - Abstract
Background: A large multi-center survey was conducted to understand patients' perspectives on biobank study participation with particular focus on racial and ethnic minorities. In order to enrich the study sample with racial and ethnic minorities, disproportionate stratified sampling was implemented with strata defined by electronic health records (EHR) that are known to be inaccurate. We investigate the effect of sampling strata misclassification in complex survey design.Methods: Under non-differential and differential misclassification in the sampling strata, we compare the validity and precision of three simple and common analysis approaches for settings in which the primary exposure is used to define the sampling strata. We also compare the precision gains/losses observed from using a disproportionate stratified sampling scheme compared to using a simple random sample under varying degrees of strata misclassification.Results: Disproportionate stratified sampling can result in more efficient parameter estimates of the rare subgroups (race/ethnic minorities) in the sampling strata compared to simple random sampling. When sampling strata misclassification is non-differential with respect to the outcome, a design-agnostic analysis was preferred over model-based and design-based analyses. All methods yielded unbiased parameter estimates but standard error estimates were lowest from the design-agnostic analysis. However, when misclassification is differential, only the design-based method produced valid parameter estimates of the variables included in the sampling strata.Conclusions: In complex survey design, when the interest is in making inference on rare subgroups, we recommend implementing disproportionate stratified sampling over simple random sampling even if the sampling strata are misclassified. If the misclassification is non-differential, we recommend a design-agnostic analysis. 
However, if the misclassification is differential, we recommend using design-based analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
33. ErrorSim: A Tool for Error Propagation Analysis of Simulink Models
- Author
-
Saraoğlu, Mustafa, Morozov, Andrey, Söylemez, Mehmet Turan, Janschek, Klaus, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Tonetta, Stefano, editor, Schoitsch, Erwin, editor, and Bitsch, Friedemann, editor
- Published
- 2017
- Full Text
- View/download PDF
34. Improving Process Descriptions in Research by Model-Based Analysis.
- Author
-
Shaked, Avi and Reich, Yoram
- Abstract
In research, process descriptions provide causal mechanisms and context as a basis for establishing phenomena and enabling reproducibility and generalization, and are therefore considered essential to scientific progress. Process descriptions are a principal form of describing systems. However, process descriptions are typically unstructured and complex, presenting challenges with respect to the assessment of their quality, and eventually to their usability. Inspired by modern systems engineering approaches, we propose a model-based analysis approach to analyze complex process descriptions, and consequently improve the communicability and the applicability of said processes. We apply the approach to real-life case studies, and detail its contribution. Finally, we discuss how using the model-based approach can promote the quality of research-related process descriptions, specifically for supporting evaluation of research publications as well as for describing processes by the researchers themselves. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. Synchronizing Torque-Based Transient Stability Index of a Multimachine Interconnected Power System
- Author
-
Albert Poulose and Soobae Kim
- Subjects
power system stability ,transient stability analysis ,model-based analysis ,stability margin ,synchronizing torque ,synchronizing torque coefficient ,Technology - Abstract
Newly developed tools and techniques are continuously established to analyze and monitor power systems' transient stability limits. In this paper, a model-based transient stability index for each generator is proposed from the synchronizing torque contributions of all other connected generators in a multi-machine interconnected power system. It is a new interpretation of the generator's synchronizing torque coefficient (STC) in terms of electromechanical oscillation modes to consider the synchronizing torque interactions among generators. Thus, the system operator can continuously monitor the system's available secured transient stability limit in terms of synchronizing torque more accurately, which is helpful for planning and operation studies due to the modal-based index. Furthermore, the popular transient stability indicator critical clearing time (CCT), and the traditionally determined synchronizing torque values without other generator contributions, are calculated to verify and compare the performance of the proposed transient stability index. The simulations and test result discussions are performed on the Western System Coordinating Council (WSCC) 9-bus and an extensive New England 68-bus large power test system case. The open-source power system analysis toolbox (PSAT) in the MATLAB/Simulink environment is used to develop, simulate, validate and compare the proposed transient stability index.
- Published
- 2022
- Full Text
- View/download PDF
36. A Modeling Method in Support of Strategic Analysis in the Realm of Enterprise Modeling: On the Example of Blockchain-Based Initiatives for the Electricity Sector.
- Author
-
de Kinderen, Sybren, Kaczmarek-Heß, Monika, Qin Ma, and Razo-Zapata, Iván S.
- Subjects
DIGITAL technology ,BUSINESS literature ,ELECTRICITY ,SWOT analysis ,INFORMATION technology ,INPUT-output analysis - Abstract
Organizations increasingly have to cope with the digital transformation, which is ubiquitous in today's society. Strategic analysis is an important first step towards the success of digital transformation initiatives, whereby all the elements (e.g., business processes and IT infrastructure) that are required to achieve the transformation can be aligned to the strategic goals and decisions. In this paper, we work towards a modeling method to perform model-based strategic analysis. We explicitly account for information technology (IT) infrastructure because of its key role for digital transformation. Specifically, (1) based on a conducted study of business scholar literature and existing work in conceptual modeling, a set of requirements is first identified; (2) then, we propose a modeling method that integrates, among others, goal modeling, strategic modeling, and IT infrastructure modeling. The method exploits, among others, three previously designed domain-specific modeling languages in the Multi-Perspective Enterprise Modeling (MEMO) family: GoalML, SAML and ITML; (3) we illustrate the use of the modeling method in terms of a digital transformation initiative in the electricity sector; and finally, (4) we evaluate the proposed modeling method by comparing it with the conventional SWOT analysis and reflecting upon the fulfillment of the identified requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
37. Interpretation and use of disjunction in natural language: A study about exclusivity and inclusivity.
- Author
-
López-Astorga, Miguel
- Abstract
The literature shows that people tend to interpret disjunctions as exclusive. Given that fact, this paper describes a study intended to check whether the same trend can also be observed when disjunction is used. The study was based mainly on an analysis of the semantic possibilities of the disjunctions appearing in a philosophical text, using a method of analysis akin to the one proposed by the theory of mental models to explore reasoning. The results revealed that the number of exclusive disjunctions employed in that text is exactly the same as the number of inclusive disjunctions. Hence, it is hard to claim that, in addition to what happens in interpretation processes, disjunction is also often exclusive when used. [ABSTRACT FROM AUTHOR]
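The two readings of disjunction contrasted here are easy to make concrete. A minimal sketch (the function name is ours) enumerating the semantic possibilities, in the spirit of mental-model sets, for inclusive versus exclusive "or":

```python
from itertools import product

def models(connective):
    """Enumerate the true possibilities (mental-model style) of 'A or B'."""
    rows = []
    for a, b in product([True, False], repeat=2):
        if connective == "inclusive" and (a or b):
            rows.append((a, b))
        elif connective == "exclusive" and (a != b):
            rows.append((a, b))
    return rows

# Inclusive 'or' admits the joint case (A and B) as a possibility;
# exclusive 'or' rules it out, leaving only the two one-sided cases.
```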
- Published
- 2021
38. Combining the Senses: The Role of Experience- and Task-Dependent Mechanisms in the Development of Audiovisual Simultaneity Perception.
- Author
-
Petrini, Karin, Denis, Georgina, Love, Scott A., and Nardini, Marko
- Abstract
The brain's ability to integrate information from the different senses is essential for decreasing sensory uncertainty and ultimately limiting errors. Temporal correspondence is one of the key processes that determines whether information from different senses will be integrated and is influenced by both experience- and task-dependent mechanisms in adults. Here we investigated the development of both task- and experience-dependent temporal mechanisms by testing 7–8-year-old children, 10–11-year-old children, and adults in two tasks (simultaneity judgment, temporal order judgment) using audiovisual stimuli with differing degrees of association based on prior experience (low for beep-flash vs. high for face–voice). By fitting an independent channels model to the data, we found that while the experience-dependent mechanism of audiovisual simultaneity perception is already adult-like in 10–11-year-old children, the task-dependent mechanism is still not. These results indicate that differing maturation rates of experience-dependent and task-dependent mechanisms underlie the development of multisensory integration. Understanding this development has important implications for clinical and educational interventions. Public Significance Statement: Combining our different senses to perceive the world underpins our abilities to learn, reason, and act. This study strongly suggests that adult-like abilities to combine different senses are achieved through a lifelong process of learning and development, in which the underlying processes develop at different rates. A better understanding of this development has clinical and educational implications for future approaches to targeting improvements in multisensory perception in children of different ages. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
39. Model-Based Analysis of Yo-yo Throwing Motion on Single-Link Manipulator.
- Author
-
Miyakawa, Hokuto, Nemoto, Takuma, and Iwase, Masami
- Subjects
- *
THROWING (Sports), *MOTION, *COMPUTER simulation, *PARAMETER identification, *ANGULAR velocity - Abstract
This paper presents a method for analyzing the throwing motion of a yo-yo based on an integrated model of a yo-yo and a manipulator. Our previous integrated model was developed by constraining a model of a white-painted commercial yo-yo and a model of a plain single-link manipulator with certain constraint conditions between the two models. However, that yo-yo model did not take into account the collisions between the string and the axle of the yo-yo. To avoid this problem, we estimate some of the yo-yo parameters from experiments, thereby preserving the functionality of the model. Applying the new integrated model with the identified parameters, we analyze the throwing motion of the yo-yo through numerical simulations. The results show the ranges of the release angle and the angular velocity of the manipulator joint during a successful throw. In conclusion, the proposed method is effective for analyzing the throwing motion of a manipulator. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
40. Model-based transmission acoustic analysis and improvement.
- Author
-
Cao, Zhan, Chen, Yong, and Zang, Libin
- Subjects
TRANSMISSION of sound, WORKFLOW, HOUSING, TUNED mass dampers - Abstract
The acoustic performance of a transmission has been studied with a model-based approach, investigating both the excitation source and structure-borne noise transfer. For the excitation source, transmission error is the main cause, and the most effective way to reduce it is gear micro-geometry modification. However, there is no direct mathematical model that reveals the relationship between transmission error and gear micro-geometry parameters, especially in the context of a transmission system that includes shafts, bearings and housings. Using multiple-variable linear regression, the relationship between transmission error and gear micro-geometry parameters is established. The optimised transmission error decreases from 9.34 to 5.65 μm at a typical working condition. For structure-borne noise transfer, the housing and transmission system modes have been analysed. Based on the simulation results, the housing thickness around the bearing assembly area is increased, and stiffening ribs are added at weak areas. As a result, the amplitude of the housing vibration acceleration is reduced. Finally, the transmission noise sound pressure level at a 0.3-m distance decreased by 4.63 dB on average. The presented model-based transmission acoustic analysis workflow is demonstrated to be workable and effective. [ABSTRACT FROM AUTHOR]
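The regression step described in the abstract can be sketched on synthetic data; the micro-geometry parameter names, value ranges and coefficients below are purely illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical micro-geometry parameters (columns): profile crowning,
# lead crowning, pressure-angle deviation -- names are illustrative only.
X = rng.uniform(0.0, 20.0, size=(40, 3))            # micrometres
true_coef = np.array([0.15, -0.30, 0.10])
te = 9.0 + X @ true_coef + rng.normal(0.0, 0.2, 40)  # transmission error, um

# Multiple-variable linear regression via least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, te, rcond=None)

# Predicted transmission error for a candidate modification
candidate = np.array([1.0, 5.0, 12.0, 3.0])          # leading 1 = intercept
te_pred = candidate @ coef
```

Once fitted, the linear model can be searched for the micro-geometry combination minimising predicted transmission error, which is the spirit of the optimisation reported above.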
- Published
- 2020
- Full Text
- View/download PDF
41. Model-based analysis of arterial pulse signals for tracking changes in arterial wall parameters: a pilot study.
- Author
-
Wang, Dan, Reynolds, Leryn, Alberts, Thomas, Vahala, Linda, and Hao, Zhili
- Subjects
- *
TACTILE sensors, *COOLDOWN, *RADIAL artery, *CARDIOVASCULAR system, *PILOT projects, *CAROTID artery, *BLOOD flow - Abstract
Arterial wall parameters (i.e., radius and viscoelasticity) are prognostic markers for cardiovascular diseases (CVD), but their current monitoring systems are too complex for home use. Our objective was to investigate whether model-based analysis of arterial pulse signals allows tracking changes in arterial wall parameters using a microfluidic-based tactile sensor. The sensor was used to measure an arterial pulse signal. A data-processing algorithm was utilized to process the measured pulse signal to obtain the radius waveform and its first-order and second-order derivatives, and extract their key features. A dynamic system model of the arterial wall and a hemodynamic model of the blood flow were developed to interpret the extracted key features for estimating arterial wall parameters, with no need of calibration. Changes in arterial wall parameters were introduced to healthy subjects (n = 5) by moderate exercise. The estimated values were compared between pre-exercise and post-exercise for significant difference (p < 0.05). The estimated changes in the radius, elasticity and viscosity were consistent with the findings in the literature (between pre-exercise and 1 min post-exercise: −11% ± 4%, 55% ± 38% and 28% ± 11% at the radial artery; −7% ± 3%, 36% ± 28% and 16% ± 8% at the carotid artery). The model-based analysis allows tracking changes in arterial wall parameters using a microfluidic-based tactile sensor. This study shows the potential of developing a solution to at-home monitoring of the cardiovascular system for early detection, timely intervention and treatment assessment of CVD. [ABSTRACT FROM AUTHOR]
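The data-processing step, deriving the radius waveform's first- and second-order derivatives and extracting key features, can be sketched on a toy waveform. The sinusoid standing in for a measured pulse, the sampling rate and the feature names are all our assumptions:

```python
import numpy as np

fs = 500.0                        # sampling rate, Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
# Toy radius waveform: one 1-Hz cycle, not real pulse data
r = 1.0 + 0.05 * np.sin(2.0 * np.pi * 1.0 * t)

# First- and second-order derivatives of the radius waveform
dr = np.gradient(r, t)
d2r = np.gradient(dr, t)

# Hypothetical "key features": peaks of the waveform and its derivatives
features = {
    "r_max": r.max(),
    "dr_max": dr.max(),
    "d2r_max": d2r.max(),
}
```

In the study these features feed the arterial-wall and hemodynamic models; the sketch only covers the waveform-differentiation stage.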
- Published
- 2019
- Full Text
- View/download PDF
42. Model-based analysis of bactericidal activity and a new dosing strategy for optimised-dose rifampicin
- Author
-
Susanto, Budi O., Svensson, Elin M., te Brake, Lindsey, Aarnoutse, Rob E., Boeree, Martin J., and Simonsson, Ulrika S. H.
- Abstract
Background: Higher doses of rifampicin for tuberculosis have been shown to improve early bactericidal activity (EBA) and at the same time increase intolerability due to high exposure at the beginning of treatment. To support dose optimisation of rifampicin, this study investigated new and innovative staggered dosing of rifampicin using clinical trial simulations to minimise tolerability problems while still achieving good efficacy. Methods: Rifampicin population pharmacokinetic and time-to-positivity models were applied to data from patients receiving 14 days of daily 10–50 mg/kg rifampicin to characterise the exposure-response relationship. Furthermore, clinical trial simulations of rifampicin exposure were performed for four different staggered dosing scenarios. The simulated exposure after 35 mg/kg was used as a relative comparison for efficacy. Tolerability was derived from a previous model-based analysis relating exposure at day 7 to the probability of adverse events. Results: The linear relationship between rifampicin exposure and bacterial killing rate in sputum indicated that the maximum rifampicin EBA was not reached at doses up to 50 mg/kg. Clinical trial simulations of a staggered dosing strategy starting treatment at a lower dose (20 mg/kg) for 7 days followed by a higher dose (40 mg/kg) predicted a lower initial exposure, with a lower probability of tolerability problems and better EBA, compared with a regimen of 35 mg/kg daily. Conclusions: Staggered dosing of 20 mg/kg for 7 days followed by 40 mg/kg is predicted to reduce tolerability problems while maintaining exposure levels associated with better efficacy.
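The staggered-dosing trade-off can be illustrated with a deliberately oversimplified exposure model. The sketch below assumes dose-proportional exposure, whereas rifampicin pharmacokinetics are in fact nonlinear, so it only shows the qualitative argument, not the paper's population model:

```python
# Simplified sketch: day-7 vs day-14 exposure under staggered vs flat dosing.
# Assumes exposure proportional to dose (real rifampicin PK is nonlinear and
# saturable, so this is a deliberate simplification for illustration).
auc_per_mgkg = 10.0   # hypothetical AUC (h*mg/L) per mg/kg

def exposure(dose_mgkg):
    return auc_per_mgkg * dose_mgkg

staggered = [20] * 7 + [40] * 7   # 20 mg/kg week 1, then 40 mg/kg week 2
flat = [35] * 14                  # constant 35 mg/kg

# Day-7 exposure (the tolerability driver) is lower under staggered dosing...
stag_day7, flat_day7 = exposure(staggered[6]), exposure(flat[6])
# ...while week-2 exposure (the efficacy driver) is higher.
stag_day14, flat_day14 = exposure(staggered[13]), exposure(flat[13])
```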
- Published
- 2023
- Full Text
- View/download PDF
43. Effect of the milling conditions on the decomposition kinetics of gibbsite
- Author
-
Ministerio de Ciencia e Innovación (España), Consejo Superior de Investigaciones Científicas (España), Agencia Estatal de Investigación (España), Fundação de Amparo à Pesquisa e ao Desenvolvimento Científico e Tecnológico do Maranhão, Cabral, Aluisio A., Rivas-Mercury, Jose M., Sucupira, Jose R.M., Rodríguez Barbero, Miguel Ángel, Aza Moya, Antonio H. de, Pena, Pilar, and Moukhina, Elena
- Abstract
Synthetic gibbsite (Al(OH)3) was mechanically activated by attrition milling for 24 h with various grinding ball-to-powder weight ratios (0, 5, 10, and 20) and characterized by thermal analysis (TG-DSC). We then determined the corresponding kinetic parameters from the thermogravimetric (TG) data set using both model-free and model-fitting methods, and found that the activation energies provided by the two approaches agree very well. At temperatures above 350 °C, the milled samples (GB5, GB10, and GB20) lose mass very slowly, while the unmilled sample (GB0) decomposes faster. In addition, we demonstrated that the decomposition mechanism of each sample involves multi-step reactions, and the corresponding activation energies change with increasingly energetic milling conditions.
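The model-free (isoconversional) step can be sketched with a Friedman-style fit on synthetic Arrhenius data; the activation energy and pre-exponential factor below are illustrative, not the gibbsite values, and in practice the rates at fixed conversion come from runs at several heating rates:

```python
import numpy as np

R = 8.314      # gas constant, J/(mol K)
E_true = 120e3 # illustrative activation energy, J/mol
A = 1e10       # illustrative pre-exponential factor, 1/s

# Decomposition rates at a fixed conversion alpha, at several temperatures;
# the f(alpha) term is constant at fixed conversion and absorbed into A.
T = np.array([540.0, 560.0, 580.0, 600.0, 620.0])   # K
rate = A * np.exp(-E_true / (R * T))

# Friedman isoconversional fit: ln(rate) = const - E/(R T)
slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
E_est = -slope * R
```

Repeating the fit at many conversion levels yields the activation-energy-versus-conversion profile that reveals multi-step decomposition.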
- Published
- 2023
44. Model-Based Renewable Resource Risk Assessment Analysis and Simulation SBIR Phase II Final Report
- Author
-
Broadwater, R
- Published
- 2013
45. Model-Based Analysis of Flow Separation Control in a Curved Diffuser by a Vibration Wall
- Author
-
Weiyu Lu, Xin Fu, Jinchun Wang, and Yuanchi Zou
- Subjects
unsteady flow control, flow separation, vibration wall, model-based analysis, Technology - Abstract
Vibration wall control is an important active flow control technique studied by many researchers. Although current research has shown that control performance is greatly affected by the frequency and amplitude of the vibration wall, the mechanism behind these phenomena remains unclear due to the complex interaction between the vibration wall and flow separation. To reveal the control mechanism of vibration walls, we propose a simplified model that helps explain the interaction between the forced excitation (from the vibration wall) and the self-excitation (from flow instability). The simplified model accounts for the vibration wall flow control behaviors obtained by numerical simulation, which show that control performance is optimized at a certain reduced vibration frequency or amplitude. Analysis of maximal Lyapunov exponents also shows that the vibration wall is able to change the flow field from a disordered state into an ordered one. Consistent with these phenomena, and bringing more physical insight, the simplified model implies that a tuned vibration frequency and amplitude will lock in the unsteady flow separation, promote momentum transfer from the main stream to the separation zone, and make the flow field more orderly and less chaotic, resulting in a reduction of flow loss.
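The maximal-Lyapunov-exponent diagnostic used above to distinguish disordered from ordered dynamics can be illustrated on a textbook system. A minimal sketch using the logistic map as a stand-in (the CFD flow field itself is well beyond a snippet):

```python
import numpy as np

def lyapunov(r, x0=0.3, n=50_000, burn=1_000):
    """Maximal Lyapunov exponent of the logistic map x -> r*x*(1-x),
    averaged as the mean log-derivative along the trajectory."""
    x = x0
    for _ in range(burn):               # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n

# r = 4.0 is fully chaotic (exponent ln 2 > 0); r = 3.2 is a stable
# 2-cycle (negative exponent) -- the "disordered vs ordered" contrast.
```

A positive exponent flags chaotic (disordered) dynamics; a negative one flags an ordered, locked-in state, which is the sense in which the tuned vibration wall makes the flow "less chaotic".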
- Published
- 2021
- Full Text
- View/download PDF
46. State-of-the-Art Pharmacometric Models in Osteoporosis
- Author
-
Georgieva Kondic, Anna, Cabal, Antonio, Fayad, Ghassan N., Mehta, Khamir, Kerbusch, Thomas, Post, Teun M., Crommelin, Daan J. A., Editor-in-chief, Lipper, Robert A., Editor-in-chief, Schmidt, Stephan, editor, and Derendorf, Hartmut, editor
- Published
- 2014
- Full Text
- View/download PDF
47. With an Open Mind: How to Write Good Models
- Author
-
Artho, Cyrille, Hayamizu, Koji, Ramler, Rudolf, Yamagata, Yoriyuki, Artho, Cyrille, editor, and Ölveczky, Peter Csaba, editor
- Published
- 2014
- Full Text
- View/download PDF
48. Performance Analysis of Computing Servers — A Case Study Exploiting a New GSPN Semantics
- Author
-
Katoen, Joost-Pieter, Noll, Thomas, Santen, Thomas, Seifert, Dirk, Wu, Hao, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Nierstrasz, Oscar, Editorial Board Member, Pandu Rangan, C., Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Sudan, Madhu, Editorial Board Member, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Vardi, Moshe Y., Editorial Board Member, Weikum, Gerhard, Editorial Board Member, Fischbach, Kai, editor, and Krieger, Udo R., editor
- Published
- 2014
- Full Text
- View/download PDF
49. Sampling and analysis frameworks for inference in ecology.
- Author
-
Williams, Byron K., Brown, Eleanor D., and McCrea, Rachel
- Subjects
INFERENTIAL statistics, ECOSYSTEMS, ECOLOGY, STOCHASTIC models - Abstract
Reliable statistical inference is central to ecological research, much of which seeks to estimate population attributes and their interactions. The issue of sampling design and its relationship to inference has become increasingly important due to the rapid proliferation of modelling methodology (line-transect modelling, capture–recapture, estimation of occurrence, model selection procedures, hierarchical modelling) and new sampling approaches (adaptive sampling and other specialized designs). It is important for ecologists using these advanced methods to be aware of how the linkages between sample selection and data analysis can affect inference. We examine design-based and model-based inference frameworks for ecological data collected randomly, purposively or opportunistically. We elucidate differences in the probability structures for data arising from these frameworks, clarify the assumptions that underlie them, and demonstrate their differences. Design-based inference builds on a probability structure inherited from randomized data collection, whereas model-based inference relies on an assumed stochastic model of the data. By itself, a design-based approach is of limited value for inferences about causal hypotheses. In contrast, model-based inference depends on a conditionality principle that can seldom be shown to hold for an ecological system. We describe the conditions under which one can safely ignore sampling design in model-based analysis, along with the inferential implications if these conditions are not met. The special case of opportunistic sampling is discussed. We present a combined framework that takes advantage of both approaches to inference and provides a robust methodology that can deal with the modelling of sampling problems such as non-detection and misclassification, as well as the exploration of causal hypotheses.
The combined framework can also be useful for identifying optimal sampling strategies. Each approach to inference has its strengths and weaknesses, and practitioners should be aware of these in order to tailor designs and analyses to specific questions. We use the approaches and their underlying rationales to provide guidelines for choosing designs and estimators for reliable inference. [ABSTRACT FROM AUTHOR]
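The contrast between the two inference frameworks can be made concrete with a toy finite population. The sketch below compares a design-based Horvitz–Thompson estimator under simple random sampling with a model-based regression estimator of a population total; the covariate and all numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite population: a covariate (e.g., habitat area) and site abundance
N = 1_000
area = rng.uniform(1.0, 10.0, N)
abundance = 5.0 * area + rng.normal(0.0, 2.0, N)
true_total = abundance.sum()

# Simple random sample of n sites without replacement
n = 100
idx = rng.choice(N, size=n, replace=False)

# Design-based: Horvitz-Thompson estimator under SRS (pi_i = n/N),
# relying only on the randomized selection, not on any model
ht_total = abundance[idx].sum() * N / n

# Model-based: assume abundance ~ area, fit on the sample,
# then predict over the whole population
b1, b0 = np.polyfit(area[idx], abundance[idx], 1)
model_total = (b0 + b1 * area).sum()
```

When the assumed model is correct, the model-based estimate is typically more precise; the design-based estimate remains valid even if the abundance–area relationship is misspecified, which is the trade-off the combined framework exploits.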
- Published
- 2019
- Full Text
- View/download PDF
50. Multi-factor analysis in language production: Sequential sampling models mimic and extend regression results.
- Author
-
Anders, Royce, Van Maanen, Leendert, and Alario, F.-Xavier
- Subjects
- *
LANGUAGE research, *REGRESSION analysis, *DEPENDENT variables - Abstract
For multi-factor analyses of response times, descriptive models (e.g., linear regression) arguably constitute the dominant approach in psycholinguistics. In contrast, empirical cognitive models (e.g., sequential sampling models, SSMs) may fit fewer factors simultaneously, but they decompose the data into several dependent variables (a multivariate result), offering more information to analyze. While SSMs are notably popular in the behavioural sciences, they are not significantly developed in language production research. To contribute to the development of this modelling in language, we (i) examine SSMs as a measurement modelling approach for spoken word activation dynamics, and (ii) formally compare SSMs to the default method, regression. SSMs model response activation or selection mechanisms in time, and calculate how these are affected by conditions, persons, and items. Regression procedures also model condition effects, but only with respect to the mean RT, and little work has previously been done to compare the two approaches. Through analyses of two language production experiments, we show that SSMs reproduce regression predictors and further extend these effects through a multivariate decomposition into cognitive parameters. We also examine a combined regression-SSM approach that is hierarchical Bayesian, which can jointly model more conditions than classic SSMs and, importantly, achieves by-item modelling alongside other conditions. In this analysis, we found that spoken words principally differed from one another in their activation rates and production times, but not in their thresholds to be activated. [ABSTRACT FROM AUTHOR]
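The core idea, that an SSM attributes a condition effect to a specific mechanism rather than to the mean RT alone, can be sketched with a minimal one-boundary accumulator (a simplification of the SSMs used in the paper; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_rts(drift, threshold=1.0, ndt=0.3, sigma=0.1, dt=5e-3, n=500):
    """One-boundary sequential sampling model: noisy evidence accumulates
    to a threshold; RT = non-decision time + first-passage time."""
    rts = []
    for _ in range(n):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(ndt + t)
    return np.array(rts)

# Place the 'condition effect' on the accumulation rate (drift) alone:
easy = simulate_rts(drift=2.0)   # mean RT near ndt + threshold/drift = 0.8 s
hard = simulate_rts(drift=1.0)   # mean RT near 1.3 s
```

Regression on mean RT sees only one number per condition; fitting the SSM instead locates the difference in a mechanism parameter (here the rate, not the threshold or non-decision time), which is the multivariate decomposition the abstract describes.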
- Published
- 2019
- Full Text
- View/download PDF