Search Results (343 results)
2. SIMULATION-BASED ALGORITHM FOR CONTINUOUS IMPROVEMENT OF ENTERPRISES PERFORMANCE.
- Author: Pervaz, J., Sremcev, N., Stevanov, B., and Gusel, L.
- Subjects: MANUFACTURING cells; MANUFACTURING processes; SETUP time; ALGORITHMS; LEAN management
- Abstract:
The printing company's process performance depends on its ability to provide the requested products while managing the existing constraints of fixed machine layouts and high setup times between different products. Process inefficiencies caused by these factors affect throughput, production times, and resource utilization. Changes that improve one part of the production system usually affect other parts and require additional optimization, so it is very useful to test the feasibility of proposed solutions with simulation before implementation. This paper presents a new algorithm for continuous improvement of enterprise performance, combining the lean approach with cellular manufacturing and simulation. Performance is evaluated in terms of how a given setup influences the system in its entirety, rather than a specific part of it. The results are presented through models developed within the production optimization phase, representing the various ways in which the continuous improvement algorithm can unfold. Each comes with its advantages and disadvantages, all intended to create more efficient production processes that generate less production waste. [ABSTRACT FROM AUTHOR]
- Published: 2024
3. A Differential Error-Based Self-Triggered Model Predictive Control With Adaptive Prediction Horizon for Discrete Systems.
- Author: Ning He, Shuoji Chen, Zhongxian Xu, Fuan Cheng, Ruoxia Li, and Feng Gao
- Subjects: DISCRETE systems; PREDICTION models; DIFFERENTIAL forms; FORECASTING; SAMPLING errors; ADAPTIVE control systems
- Abstract:
For discrete-time nonlinear networked control systems, a novel self-triggered adaptive model predictive control (MPC) strategy is developed. Unlike existing self-triggered MPC methods, which determine the triggering instants from the difference between the optimal and real states at a single instant, the proposed approach updates the MPC system according to the differential form of the state error over two consecutive sampling moments, effectively reducing the computation and communication burden while maintaining the desired control performance. In addition, this paper introduces a new adaptive prediction horizon mechanism to the self-triggered MPC, so that the amplitude of prediction horizon contraction is large enough to further reduce the computational burden of the MPC method. Finally, the recursive feasibility and robust stability of the proposed strategy are rigorously proved by theoretical analysis, and simulation comparisons verify the proposed framework. [ABSTRACT FROM AUTHOR]
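The differential-error triggering idea described in this abstract can be illustrated with a toy sketch (not the authors' formulation: the scalar plant, gains, disturbance, and threshold below are all made up). The controller re-solves its plan only when the *change* in prediction error between two consecutive samples exceeds a threshold, rather than whenever the error itself does:

```python
def simulate(horizon_steps=50, threshold=0.05):
    x = 1.0            # true plant state
    x_pred = 1.0       # state predicted at the last triggering instant
    prev_err = 0.0     # prediction error at the previous sample
    triggers = 0
    for k in range(horizon_steps):
        u = -0.5 * x_pred                         # control from last solved plan
        x = 0.9 * x + u + 0.01 * ((-1) ** k)      # plant with small disturbance
        x_pred = 0.9 * x_pred + u                 # nominal open-loop prediction
        err = abs(x - x_pred)
        # differential trigger: compare the error change across two
        # consecutive samples instead of the error at a single instant
        if abs(err - prev_err) > threshold:
            x_pred = x        # re-measure the state and re-solve the controller
            triggers += 1
            err = 0.0
        prev_err = err
    return triggers

print("controller re-solves:", simulate())
```

With `threshold=0.0` this degenerates to triggering at every sample; a positive threshold skips samples where the error evolves slowly, which is the computation/communication saving the abstract refers to.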
- Published: 2024
4. Geometry-based polyline routing for motion planning in object sorting (original title: "Enrutado polilineal basado en geometría para la planeación de movimiento en ordenamiento de objetos").
- Author: Montaño-Herrera, Pedro Alejandro; Sosa-Esquivel, Juan Pablo; Jinete-Gómez, Marco Antonio
- Subjects: CONFIGURATION space; RANDOM access memory; MANUFACTURING processes; ARTIFICIAL vision; ALGORITHMS
- Abstract:
This paper proposes a geometry-based polyline routing method as a solution for motion planning in object-sorting tasks in manufacturing processes. The algorithm is based on geometric properties that arise from the interaction among objects within the configuration space. During its experimental phase, the proposed method generated smooth routes with processing times of 62.5-125 ms on a computer equipped with an AMD Ryzen 7 2700X eight-core 3.70 GHz processor and 16 GB of RAM. Compared with the RRT algorithm, it achieves 38% to 48% higher efficiency, reducing the number of iterations and the response time. The proposed method therefore presents a viable solution for motion planning scenarios in object-sorting tasks. [ABSTRACT FROM AUTHOR]
- Published: 2023
5. AD-RED: A new variant of random early detection AQM algorithm.
- Author: Hassan, Samuel O.
- Subjects: COMPUTER network traffic; END-to-end delay; TRAFFIC congestion; ALGORITHMS; RESEARCH personnel; ROUTING algorithms
- Abstract:
The literature shows intensive, continuing research interest in developing enhanced variants of the long-standing Random Early Detection (RED) algorithm. A common line of research replaces the linear dropping curve used in RED with nonlinear curves, since several reports have shown that RED's single linear function is insufficient for managing rising degrees of network traffic congestion. In this paper, Amended Dropping-Random Early Detection (AD-RED), a revised version of RED, is presented. The AD-RED algorithm combines two nonlinear packet-dropping functions: a quadratic and an exponential one. Results from the ns-3 simulator show that AD-RED stabilized and reduced the average queue size and achieved a lower end-to-end delay compared with RED itself and another RED variant. AD-RED is therefore offered as a full replacement for RED's algorithm implementation in routers. [ABSTRACT FROM AUTHOR]
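The abstract does not give AD-RED's exact functions, but the general shape of a RED-style profile that combines a quadratic segment with an exponential one can be sketched as follows (thresholds, the midpoint split, and the specific curves are illustrative assumptions, not the paper's definitions):

```python
import math

def drop_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Packet-drop probability as a function of the average queue size:
    zero below min_th, quadratic up to the midpoint, then an exponential
    segment (continuous at the midpoint) reaching 1.0 at max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    mid = (min_th + max_th) / 2.0
    if avg_q <= mid:
        # quadratic ramp-up between min_th and the midpoint
        frac = (avg_q - min_th) / (mid - min_th)
        return max_p * frac ** 2
    # exponential ramp from max_p at the midpoint to 1.0 at max_th
    frac = (avg_q - mid) / (max_th - mid)
    return max_p * math.exp(frac * math.log(1.0 / max_p))

print(drop_probability(8.0), drop_probability(10.0), drop_probability(12.0))
```

The two segments meet continuously at the midpoint (both evaluate to `max_p` there), and the exponential piece reaches exactly 1.0 at `max_th`, so the profile stays monotone, which is the property an AQM dropping curve needs.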
- Published: 2024
6. Longitudinal plasmode algorithms to evaluate statistical methods in realistic scenarios: an illustration applied to occupational epidemiology.
- Author: Souli, Youssra, Trudel, Xavier, Diop, Awa, Brisson, Chantal, and Talbot, Denis
- Subjects: STATISTICAL models; STANDARD deviations; MAXIMUM likelihood statistics; ALGORITHMS; PANEL analysis
- Abstract:
Introduction: Plasmode simulations are a type of simulation that uses real data to determine the synthetic data-generating equations. Such simulations allow statistical methods to be evaluated under realistic conditions. As far as we know, no plasmode algorithm has been proposed for simulating longitudinal data. In this paper, we propose a longitudinal plasmode framework to generate realistic data with both a time-varying exposure and time-varying covariates. This work was motivated by the objective of comparing different methods for estimating the causal effect of a cumulative exposure to psychosocial stressors at work over time. Methods: We developed two longitudinal plasmode algorithms: a parametric and a nonparametric one. Data from the PROspective Québec (PROQ) Study on Work and Health were used as input to the proposed plasmode algorithms. We evaluated the performance of multiple estimators of the parameters of marginal structural models (MSMs): inverse probability of treatment weighting, g-computation, and targeted maximum likelihood estimation. These estimators were also compared with standard regression approaches adjusting either for baseline covariates only or for both baseline and time-varying covariates. Results: Standard regression methods were susceptible to yielding biased estimates with confidence intervals whose coverage probability was lower than the nominal level. With MSMs, the bias was much lower and the coverage of confidence intervals was much closer to the nominal level. Among MSM estimators, g-computation overall produced the best results with respect to bias, root mean squared error, and coverage of confidence intervals. No method produced unbiased estimates with adequate coverage for all parameters in the more realistic nonparametric plasmode simulation.
Conclusion: The proposed longitudinal plasmode algorithms can be important methodological tools for evaluating and comparing analytical methods in realistic simulation scenarios. To facilitate the use of these algorithms, we provide R functions on GitHub. We also recommend using MSMs when estimating the effect of cumulative exposure to psychosocial stressors at work. [ABSTRACT FROM AUTHOR]
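The inverse probability of treatment weighting (IPTW) estimator named above can be illustrated with a point-treatment toy (the paper's setting is longitudinal, where the weights are products over time points; the data-generating numbers below are made up): each subject is weighted by the inverse probability of the exposure actually received given the confounder, so the weighted sample mimics a randomized one.

```python
import random

random.seed(0)
n = 5000
data = []
for _ in range(n):
    L = random.random() < 0.5                        # binary confounder
    A = random.random() < (0.8 if L else 0.2)        # confounded exposure
    Y = 2.0 * A + 3.0 * L + random.gauss(0.0, 1.0)   # true effect of A is 2
    data.append((L, A, Y))

# estimate P(A=1 | L) nonparametrically from the sample
p_hat = {}
for l in (False, True):
    grp = [a for (li, a, _) in data if li == l]
    p_hat[l] = sum(grp) / len(grp)

sums = {True: [0.0, 0.0], False: [0.0, 0.0]}   # weighted sum of Y, sum of w
for L, A, Y in data:
    w = 1.0 / (p_hat[L] if A else 1.0 - p_hat[L])   # inverse-probability weight
    sums[A][0] += w * Y
    sums[A][1] += w

effect = sums[True][0] / sums[True][1] - sums[False][0] / sums[False][1]
print("IPTW effect estimate:", effect)   # close to the true effect 2.0
```

A naive difference of group means would be badly biased here (exposure is four times more likely when L is present), while the weighted contrast recovers the marginal effect.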
- Published: 2023
7. Review on Model Based Design of Advanced Control Algorithms for Cogging Torque Reduction in Power Drive Systems.
- Author: Dini, Pierpaolo and Saponara, Sergio
- Subjects: INDUSTRIAL robots; ELECTRIC drives; ALGORITHMS; ART collecting; BRUSHLESS electric motors; TORQUE control
- Abstract:
This review of the state of the art collects the descriptions and main research results in the development and validation of control algorithms aimed at solving the problem of cogging torque and the main sources of electromagnetic torque ripple. In particular, we focus on electric drives for advanced and modern mechatronic applications such as industrial automation, robotics, and automotive applications, with special emphasis on work that exploits model-based design. A key contribution of this paper is to explicitly show the operational steps required for the model-based design of optimized control algorithms for electric drives in which the electromagnetic torque oscillations due to the main sources of ripple, particularly cogging torque, must be compensated. The ultimate goal is to provide researchers approaching this problem with a comprehensive collection of the most effective solutions reported in the state of the art, together with a summary of how to apply the model-based design methodology effectively. [ABSTRACT FROM AUTHOR]
- Published: 2022
8. A γ-power stochastic Lundqvist-Korf diffusion process: Computational aspects and simulation.
- Author: El Azri, Abdenbi and Nafidi, Ahmed
- Subjects: INFERENTIAL statistics; ENERGY consumption; ALGORITHMS; NONLINEAR equations; PROBABILITY theory
- Abstract:
In this paper, we introduce a new family of stochastic Lundqvist-Korf diffusion processes, defined from a γ-power of the Lundqvist-Korf diffusion process. First, we determine the probabilistic characteristics of the process, such as its analytic expression and the transition probability density function from the corresponding Itô stochastic differential equation, and obtain the conditional and non-conditional mean functions. We then study statistical inference in this process. The parameters are estimated by the maximum likelihood method with discrete sampling, which leads to a nonlinear equation that is solved via the simulated annealing algorithm. Finally, the results of the paper are applied to simulated data. [ABSTRACT FROM AUTHOR]
- Published: 2022
9. Study on centroid type-reduction of general type-2 fuzzy logic systems with sensible beginning weighted enhanced Karnik–Mendel algorithms.
- Author: Chen, Yang
- Subjects: FUZZY logic; FUZZY systems; ALGORITHMS; NUMERICAL integration; CENTROID; COMPUTER simulation
- Abstract:
General type-2 fuzzy logic systems have received wide attention in current research, and type-reduction is their kernel module. This paper interprets the choice of the beginning point of the Karnik–Mendel (KM) algorithms. Based on a well-known numerical integration technique, weighting approaches for the enhanced Karnik–Mendel (EKM) algorithms are put forward. Then, sensible beginning weighted enhanced Karnik–Mendel (SBWEKM) algorithms are proposed to perform centroid type-reduction. Compared with the EKM, WEKM, and SBEKM algorithms, this approach improves both the absolute errors and the convergence speeds, as shown in four computer simulation experiments. [ABSTRACT FROM AUTHOR]
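The quantity the KM/EKM family computes can be shown with a didactic sketch (this is plain switch-point search, not the paper's weighted SBWEKM variant, and the sampled fuzzy set is made up): the left and right centroid endpoints of an interval type-2 fuzzy set are found by locating the "switch point" at which the weights change between upper and lower membership grades, which is exactly what the KM iterations converge to.

```python
def centroid_bounds(xs, lmf, umf):
    """Left/right centroid endpoints of an interval type-2 fuzzy set
    sampled at sorted points xs with lower (lmf) and upper (umf) grades,
    by brute-force search over all switch points."""
    n = len(xs)

    def ratio(switch, left_end):
        num = den = 0.0
        for i in range(n):
            if left_end:                      # upper grades up to the switch
                w = umf[i] if i <= switch else lmf[i]
            else:                             # lower grades up to the switch
                w = lmf[i] if i <= switch else umf[i]
            num += xs[i] * w
            den += w
        return num / den

    y_l = min(ratio(k, True) for k in range(n))
    y_r = max(ratio(k, False) for k in range(n))
    return y_l, y_r

# symmetric example, so the endpoints are symmetric about the center 3
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
lmf = [0.2, 0.4, 0.6, 0.4, 0.2]
umf = [0.4, 0.8, 1.0, 0.8, 0.4]
print(centroid_bounds(xs, lmf, umf))   # (8/3, 10/3) for this set
```

The brute-force search costs O(n²); KM, EKM, and the weighted variants discussed in the record are successively faster ways of locating the same switch points.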
- Published: 2023
10. sasa: a SimulAtor of Self-stabilizing Algorithms.
- Author: Altisen, Karine, Devismes, Stéphane, and Jahier, Erwan
- Subjects: ALGORITHMS; STOCHASTIC models
- Abstract:
In this paper, we present sasa, an open-source SimulAtor of Self-stabilizing Algorithms. Self-stabilization is the ability of a distributed algorithm to recover after transient failures. sasa is implemented as a faithful representation of the atomic-state model (also called the locally shared memory model with composite atomicity), the model most commonly used in the self-stabilizing area to prove both the correct operation of self-stabilizing algorithms and complexity bounds on them. sasa encompasses all features necessary to debug, test, and analyze self-stabilizing algorithms, and all these facilities are programmable so that users can accommodate their particular needs. For example, asynchrony is modeled by programmable stochastic daemons playing the role of input sequence generators, and properties of algorithms can be checked using formal test oracles. The sasa distribution also provides several facilities for easily running (batch-mode) simulation campaigns, and we show that its lightweight design allows huge campaigns to be performed efficiently. Following a modular approach, we have based the design of sasa as much as possible on existing tools, including ocaml, dot, and several tools developed in the Synchrone Group of the VERIMAG laboratory. [ABSTRACT FROM AUTHOR]
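The kind of execution sasa simulates can be sketched in a few lines (illustrative only; sasa itself is an OCaml tool with its own API): Dijkstra's classic K-state token ring in the atomic-state model, run under a randomized central daemon that picks one enabled node per step. From any corrupted initial configuration, the ring stabilizes to exactly one privileged node.

```python
import random

def enabled(S, i):
    """Is node i privileged in Dijkstra's K-state token ring?"""
    if i == 0:
        return S[0] == S[-1]          # root privileged when equal to its left
    return S[i] != S[i - 1]           # others privileged when different

def fire(S, i, K):
    """Atomic move of node i (locally shared memory model)."""
    if i == 0:
        S[0] = (S[0] + 1) % K
    else:
        S[i] = S[i - 1]

def run(n=7, K=8, steps=10000, seed=1):
    rng = random.Random(seed)
    S = [rng.randrange(K) for _ in range(n)]       # arbitrary faulty start
    for _ in range(steps):
        priv = [i for i in range(n) if enabled(S, i)]
        fire(S, rng.choice(priv), K)               # stochastic central daemon
    return sum(enabled(S, i) for i in range(n))    # privileges remaining

print("privileged nodes after stabilization:", run())
```

The random daemon here plays the role of sasa's programmable stochastic daemons, and the final count of privileges is the kind of property a test oracle would check.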
- Published: 2023
11. Diagnostic Tools for Evaluating and Comparing Simulation-Optimization Algorithms.
- Author: Eckman, David J., Henderson, Shane G., and Shashaani, Sara
- Subjects: ALGORITHMS
- Abstract:
Simulation optimization involves optimizing some objective function that can only be estimated via stochastic simulation. Many important problems can be profitably viewed within this framework. Whereas many solvers—implementations of simulation-optimization algorithms—exist or are in development, comparisons among solvers are not standardized and are often limited in scope. Such comparisons help advance solver development, clarify the relative performance of solvers, and identify classes of problems that defy efficient solution, among many other uses. We develop performance measures and plots, and estimators thereof, to evaluate and compare solvers and diagnose their strengths and weaknesses on a testbed of simulation-optimization problems. We explain the need for two-level simulation in this context and provide supporting convergence theory. We also describe how to use bootstrapping to obtain error estimates for the estimators. History: Accepted by Bruno Tuffin, area editor for simulation. Funding: This work was supported by the National Science Foundation [Grants CMMI-2035086, CMMI-2206972, and TRIPODS+X DMS-1839346]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplementary Information [https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.1261] or is available from the IJOC GitHub software repository (https://github.com/INFORMSJoC) at [http://dx.doi.org/10.5281/zenodo.7329235]. [ABSTRACT FROM AUTHOR]
- Published: 2023
12. A Smart 3D RT Method: Indoor Radio Wave Propagation Modelling at 28 GHz.
- Author: Hossain, Ferdous, Geok, Tan Kim, Rahman, Tharek Abd, Hindia, Mohammad Nour, Dimyati, Kaharudin, Tso, Chih P., and Kamaruddin, Mohd Nazeri
- Subjects: THREE-dimensional display systems; ELECTRONIC paper; RADIO wave propagation; LITERATURE reviews
- Abstract:
This paper describes a smart ray-tracing method based on the ray concept. From the literature review, we observed that there is still a research gap on conventional ray-tracing methods that is worthy of further investigation. The herein proposed smart 3D ray-tracing method offers an efficient and fast way to predict indoor radio propagation for supporting future generation networks. The simulation data was verified by measurements. This method is advantageous for developing new ray-tracing algorithms and simulators to improve propagation prediction accuracy and computational speed. [ABSTRACT FROM AUTHOR]
- Published: 2019
13. Simulating structural plasticity of the brain more scalable than expected.
- Author: Czappa, Fabian, Geiß, Alexander, and Wolf, Felix
- Subjects: SYNAPSES; SCALABILITY; ALGORITHMS; NEURONS
- Abstract:
Structural plasticity of the brain describes the creation of new synapses and the deletion of old ones over time. Rinke et al. (JPDC 2018) introduced a scalable algorithm that simulates structural plasticity for up to one billion neurons on current hardware using a variant of the Barnes–Hut algorithm. They demonstrate good scalability and prove a runtime complexity of O(n log² n). In this comment paper, we show that with careful consideration of the algorithm and a rigorous proof, the theoretical runtime can even be classified as O(n log n).
• Improved the serial runtime bound for the adapted Barnes–Hut algorithm to Θ(n log n).
• Improved the parallel runtime bound to Θ((n/p) log n + p).
• Mathematically justified that the given runtime bound is sharp. [ABSTRACT FROM AUTHOR]
- Published: 2023
14. SIMULATION ANALYSIS OF ROBOTIC MOBILE FULFILMENT SYSTEM BASED ON CELLULAR AUTOMATA.
- Author: Li, W., Miao, L., and Yang, P.
- Subjects: CELLULAR automata; TRAFFIC engineering; WAREHOUSES; TRAFFIC flow; ROBOTICS; RIGHT of way; ALGORITHMS
- Abstract:
This paper analyses the picking performance of a robotic mobile fulfilment system (RMFS) and proposes a simulation framework for RMFS based on cellular automata (SFRMFSCA). Many previous RMFS simulation platforms stipulate that all aisles be set up as a fixed directional road network of one-way lanes, so the warehouse robots had to travel unnecessarily long distances to perform tasks. We relax the one-way constraint on aisles and cross-aisles in the warehouse and allocate the right of way at aisle and cross-aisle intersections by transferring the ideas of traffic lights and traffic flow control to the RMFS warehouse scenario. To improve the efficiency of RMFS order picking, this paper designs a comprehensive strategy combining an adaptive traffic light update rule, a deadlock detection and recovery algorithm, and traffic control to improve the traffic flow of the system. A series of numerical experiments shows that the comprehensive strategy can effectively improve the order-picking efficiency of the RMFS and reduce the probability of large-scale deadlock. These results and strategies provide a useful reference for designers setting up an RMFS warehouse. [ABSTRACT FROM AUTHOR]
- Published: 2021
15. Developing and Pilot Testing Decision-Making Tools to Improve Nursing Care of Adults on the Autism Spectrum Using Simulation.
- Author: Giarelli, Ellen, Fisher, Kathleen, Wilson, Linda, Bonacquisti, Lisa M., Chornobroff, Maria, DiPietro, Anna Marie T., Weiss, Mary Jane, and Bannett, Gregory
- Subjects: TREATMENT of autism; NURSING; RESEARCH methodology; HEALTH occupations students; TASK performance; HUMAN services programs; NURSE-patient relationships; DESCRIPTIVE statistics; NURSING students; PATIENT-professional relations; STATISTICAL sampling; VIDEO recording; DELPHI method; ALGORITHMS; ADULTS
- Abstract:
Nurses and other health care providers face daily challenges when delivering care in acute care settings to adult patients on the autism spectrum. Owing to a lack of disability-specific training, health care providers may struggle to establish and maintain a therapeutic rapport with patients diagnosed on the autism spectrum (Bury et al., 2020). The purpose of this paper is to describe the development and pilot testing of decision-making tools that guide health care providers as they interact with patients on the autism spectrum, using a novel approach. This mixed-methods project employed simulation technology and actors portraying patients, and was conducted in two phases. During Phase 1, the decision-making tools were created using videotaped encounters between nursing students (n = 11) and standardized patients (actors) who displayed a range of core characteristics and behavioral features associated with autism spectrum disorder (ASD). During Phase 2, we piloted the tools with a convenience sample of 17 nurses. A panel of experts analyzed the 17 recorded simulations using a modified Delphi technique, a 17-item task-completion checklist, a 22-item behavioral-encounter checklist, and debriefing sessions. The decision-making tools show promise for guiding nurses' efforts to establish and maintain a therapeutic rapport with hospitalized adult patients who are also on the autism spectrum. [ABSTRACT FROM AUTHOR]
- Published: 2022
16. Framework of safety evaluation and scenarios for automatic collision avoidance algorithm.
- Author: Sawada, Ryohei, Sato, Keiji, and Minami, Makiko
- Subjects: COLLISIONS at sea; ANALYTICAL solutions; ALGORITHMS; ANGLES
- Abstract:
This paper proposes a novel method to design a scenario set for evaluating automatic collision avoidance algorithms. While many automatic collision avoidance methods have been proposed recently, they have been verified using different methods and scenarios. This study aims to construct a scenario set that covers all possible encounter situations between ships. Based on risks, rules such as the COLREGs, and the characteristics of ship encounter angles, scenario sets for one-on-one and one-on-two encounter situations are built. For the evaluation of collision avoidance maneuvers, the authors provide the analytical time-domain solution of the evaluation indices using the position and velocity vectors of two ships. The authors also present a design method for additional scenarios to evaluate automatic ship collision avoidance algorithms, for the safety verification of autonomous and unmanned ships.
• Encounter scenarios for ship collision avoidance were analyzed and classified.
• A novel scenario set for the evaluation of automatic collision avoidance algorithms was proposed.
• Analytical time-series solutions are presented for the collision avoidance evaluation indices.
• A theorem that determines the target ship's position and course for a given CPA vector is obtained. [ABSTRACT FROM AUTHOR]
- Published: 2024
17. Using Numerical Methods to Design Simulations: Revisiting the Balancing Intercept.
- Author: Robertson, Sarah E, Steingrimsson, Jon A, and Dahabreh, Issa J
- Subjects: COMPUTER simulation; EXPERIMENTAL design; STATISTICS; STATISTICAL models; LOGISTIC regression analysis; STATISTICAL correlation; DATA analysis; EPIDEMIOLOGICAL research; ALGORITHMS
- Abstract:
In this paper, we consider methods for generating draws of a binary random variable whose expectation conditional on covariates follows a logistic regression model with known covariate coefficients. We examine approximations for finding a "balancing intercept," that is, a value for the intercept of the logistic model that leads to a desired marginal expectation for the binary random variable. We show that a recently proposed analytical approximation can produce inaccurate results, especially when targeting more extreme marginal expectations or when the linear predictor of the regression model has high variance. We then formulate the balancing intercept as a solution to an integral equation, implement a numerical approximation for solving the equation based on Monte Carlo methods, and show that the approximation works well in practice. Our approach to the basic problem of the balancing intercept provides an example of a broadly applicable strategy for formulating and solving problems that arise in the design of simulation studies used to evaluate or teach epidemiologic methods. [ABSTRACT FROM AUTHOR]
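The numerical approach the abstract describes can be sketched directly (the covariate model, its coefficient, and the sample size below are made-up illustrations, not the paper's examples): draw linear predictors, then solve E[expit(b0 + lp)] = target for the intercept b0 by bisection, since the marginal expectation is monotone in b0.

```python
import math, random

def balancing_intercept(target, lp_draws, lo=-20.0, hi=20.0, tol=1e-8):
    """Intercept b0 such that the Monte Carlo estimate of
    E[expit(b0 + lp)] over lp_draws equals target."""
    def marginal(b0):
        return sum(1.0 / (1.0 + math.exp(-(b0 + lp)))
                   for lp in lp_draws) / len(lp_draws)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if marginal(mid) < target:
            lo = mid        # marginal probability increases with b0
        else:
            hi = mid
    return (lo + hi) / 2.0

rng = random.Random(42)
# hypothetical linear predictor: one standard-normal covariate, coefficient 1.5
lp = [1.5 * rng.gauss(0.0, 1.0) for _ in range(5000)]

b0 = balancing_intercept(0.10, lp)
print(b0, "vs naive logit(0.10) =", math.log(0.10 / 0.90))
```

For an extreme target like 0.10 with a high-variance linear predictor, the numerically solved intercept sits below the naive analytical guess logit(0.10), which is exactly the inaccuracy of the analytical approximation that the abstract points out.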
- Published: 2022
18. MESSAGE PRIORITY MAXDELIVERY ALGORITHM APPLIED IN EARTHQUAKE COMMUNICATION SITUATION.
- Author: NĂNĂU, Corina-Ştefania
- Subjects: ALGORITHMS; TELECOMMUNICATION systems; DELAY-tolerant networks; NATURAL disasters
- Abstract:
The study of Delay Tolerant Networks (DTNs) has grown considerably in recent years, as communication contexts have emerged with needs that go beyond what the Internet can offer. For example, in the case of a natural disaster that damages the classic communication network, a delay-tolerant network can be implemented ad hoc to face the challenges imposed by this context. A delay-tolerant network is special in that there is no permanent end-to-end path between nodes and the link characteristics are time-varying. This paper tests the performance of the DTN MaxDelivery algorithm in establishing efficient communication in the context of a post-earthquake situation that has disabled the classical communication network. The goal of the algorithm is to maximize the number of high-priority messages that reach their destination. A series of simulations is presented to determine the optimal parameters the network must satisfy in order to maximize the message transfer rate. [ABSTRACT FROM AUTHOR]
- Published: 2022
19. Evaluation of fog application placement algorithms: a survey.
- Author: Smolka, Sven and Mann, Zoltán Ádám
- Subjects: ALGORITHMS; EDGE computing; ENERGY consumption; FOG; SERVER farms (Computer network management)
- Abstract:
Recently, the concept of cloud computing has been extended towards the network edge. Devices near the network edge, called fog nodes, offer computing capabilities with low latency to nearby end devices. In the resulting fog computing paradigm (also called edge computing), application components can be deployed to a distributed infrastructure comprising both cloud data centers and fog nodes. The decision of which infrastructure nodes should host which application components has a large impact on important system parameters like performance and energy consumption. Several algorithms have been proposed to find a good placement of applications on a fog infrastructure. In most cases, the proposed algorithms were evaluated experimentally by the respective authors. In the absence of a theoretical analysis, a thorough and systematic empirical evaluation is of key importance for making sound conclusions about the suitability of the algorithms. The aim of this paper is to survey how application placement algorithms for fog computing are evaluated in the literature. In particular, we identify good and bad practices that should be adopted or avoided, respectively, when evaluating such algorithms. [ABSTRACT FROM AUTHOR]
- Published: 2022
20. ROBUSTNESS ANALYSIS OF THE DATA-SELECTIVE VOLTERRA NLMS ALGORITHM.
- Author: Sharafi, Javad and Maarefparvar, Abbas
- Subjects: ADAPTIVE filters; PARAMETER estimation; BEHAVIORAL assessment; ALGORITHMS
- Abstract:
Data-selective adaptive Volterra filters have recently been proposed; however, until now their behavior has only been studied through numerical simulations, without theoretical analysis. In this paper, we therefore analyze the robustness (in the sense of l2-stability) of the data-selective Volterra normalized least-mean-square (DS-VNLMS) algorithm. First, we study the local robustness of this algorithm at any iteration; then we propose a global bound for the error/discrepancy in the coefficient vector. We also demonstrate that the DS-VNLMS algorithm improves the parameter estimation for the majority of the iterations in which an update is implemented, and we prove that if the noise bound is known, the DS-VNLMS can be set so that it never degrades the estimate. The simulation results corroborate the validity of the analysis and demonstrate that the DS-VNLMS algorithm is robust against noise, no matter how its parameters are chosen. [ABSTRACT FROM AUTHOR]
- Published: 2022
21. Modified Group Lottery Scheduling Algorithm for Ready Queue Mean Time Estimation in Multiprocessor Environment.
- Author: Shukla, Diwakar and More, Sarla
- Subjects: ALGORITHMS; MEAN time to repair; COMPUTER simulation
- Abstract:
The problem of ready queue mean time estimation in the multiprocessor environment was discussed by Shukla et al. [5] and several others. Most existing contributions assume that all processes in the ready queue have been completed before a particular instant of time, such as a sudden failure or interrupt, occurs; the data on the time consumed by processes therefore remain available. The improvement proposed in this paper is to assume that, at the instant of breakdown, some processes are only partially completed while the rest are fully processed. Under this situation, the time computation and allocation strategies need to be redesigned, which this paper addresses with a modified scheme containing arbitrary, Type-A, and Type-B allocations of sample units to the processors. Confidence intervals for the sample mean values are calculated and simulated over many samples using cumulative probabilities. Type-A allocation was found to have the lowest variance. [ABSTRACT FROM AUTHOR]
- Published: 2020
22. Multilevel Hierarchical Estimation for Thermal Management Systems of Electrified Vehicles With Experimental Validation.
- Author: Tannous, Pamela J. and Alleyne, Andrew G.
- Subjects: MULTILEVEL models; SYSTEM dynamics; COMPUTATIONAL complexity; HEAT; VEHICLES; REDUCED-order models; PROPER orthogonal decomposition
- Abstract:
This paper presents a multilevel model-based hierarchical estimation framework for complex thermal management systems of electrified vehicles. System dynamics are represented by physics-based lumped parameter models derived from a graph-based modeling approach. The complexity of the hierarchical models is reduced by applying an aggregation-based model-order reduction technique that preserves the physical correspondence between a reduced-order model and the physical system. This paper also presents a case study in which a hierarchical observer is designed to estimate the dynamics of a candidate system. The hierarchical observer is connected to a previously developed hierarchical controller for closed-loop control, and the closed-loop performance is demonstrated through simulation and real-time experimental results. A comparison between the proposed hierarchical observer and a centralized observer shows the tradeoff between the estimation accuracy and the computational complexity of the two approaches. [ABSTRACT FROM AUTHOR]
- Published: 2020
23. SuperMICE: An Ensemble Machine Learning Approach to Multiple Imputation by Chained Equations.
- Author: Laqueur, Hannah S, Shev, Aaron B, and Kagawa, Rose M C
- Subjects: MACHINE learning; DATABASE management; PREDICTION models; ALGORITHMS
- Abstract:
Researchers often face the problem of how to address missing data. Multiple imputation is a popular approach, with multiple imputation by chained equations (MICE) being among the most common and flexible methods for execution. MICE iteratively fits a predictive model for each variable with missing values, conditional on other variables in the data. In theory, any imputation model can be used to predict the missing values. However, if the predictive models are incorrectly specified, they may produce biased estimates of the imputed data, yielding inconsistent parameter estimates and invalid inference. Given the set of modeling choices that must be made in conducting multiple imputation, in this paper we propose a data-adaptive approach to model selection. Specifically, we adapt MICE to incorporate an ensemble algorithm, Super Learner, to predict the conditional mean for each missing value, and we also incorporate a local kernel-based estimate of variance. We present a set of simulations indicating that this approach produces final parameter estimates with lower bias and better coverage than other commonly used imputation methods. These results suggest that using a flexible machine learning imputation approach can be useful in settings where data are missing at random, especially when the relationships among the variables are complex. [ABSTRACT FROM AUTHOR]
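The chained-equations loop that SuperMICE builds on can be shown in a stripped-down two-variable sketch (the per-variable model here is a plain univariate least-squares line; SuperMICE replaces it with a Super Learner ensemble plus a kernel-based variance estimate, both omitted, and the demo data are made up):

```python
def fit_line(pred, resp):
    """Ordinary least squares for resp = a + b * pred."""
    n = len(pred)
    mp, mr = sum(pred) / n, sum(resp) / n
    b = sum((p - mp) * (r - mr) for p, r in zip(pred, resp)) / \
        sum((p - mp) ** 2 for p in pred)
    return mr - b * mp, b

def mice_two_vars(x, y, iters=20):
    """Iteratively re-impute the missing (None) entries of x and y,
    fitting each variable's model only on rows where it is observed."""
    x, y = x[:], y[:]
    mis_x = [i for i, v in enumerate(x) if v is None]
    mis_y = [i for i, v in enumerate(y) if v is None]
    obs_x = [i for i in range(len(x)) if i not in mis_x]
    obs_y = [i for i in range(len(y)) if i not in mis_y]
    # crude starting values: observed means
    for i in mis_x:
        x[i] = sum(x[j] for j in obs_x) / len(obs_x)
    for i in mis_y:
        y[i] = sum(y[j] for j in obs_y) / len(obs_y)
    for _ in range(iters):
        a, b = fit_line([y[i] for i in obs_x], [x[i] for i in obs_x])
        for i in mis_x:
            x[i] = a + b * y[i]          # chained update of x from y
        a, b = fit_line([x[i] for i in obs_y], [y[i] for i in obs_y])
        for i in mis_y:
            y[i] = a + b * x[i]          # chained update of y from x
    return x, y

# noise-free demo: y = 2x + 1 with one value of each variable knocked out
xs = [float(i) for i in range(20)]
ys = [2.0 * v + 1.0 for v in xs]
xs[3], ys[7] = None, None
xi, yi = mice_two_vars(xs, ys)
print(xi[3], yi[7])   # converges toward the true values 3.0 and 15.0
```

Swapping `fit_line` for a richer, data-adaptive learner is precisely the modeling choice the abstract argues for when the relationships among variables are complex.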
- Published
- 2022
- Full Text
- View/download PDF
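The chained-equations loop that MICE builds on can be sketched in a few lines of NumPy. The snippet below is a simplified single-imputation illustration of the idea only: it uses plain least-squares conditional models, not the authors' Super Learner ensemble or kernel-based variance estimate, and the data and variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(scale=0.1, size=n)
X_full = np.column_stack([x1, x2])
miss_rows = rng.random(n) < 0.3          # ~30% of x2 goes missing
X_obs = X_full.copy()
X_obs[miss_rows, 1] = np.nan

def chained_impute(X, n_iter=10):
    """Single-imputation sketch of the MICE loop with linear conditional models."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):          # initialize missing cells at column means
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            # regress column j on all other columns (plus an intercept)
            A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta   # refill from the fitted model
    return X

X_imp = chained_impute(X_obs)
mean_abs_err = np.abs(X_imp[miss_rows, 1] - X_full[miss_rows, 1]).mean()
```

A full MICE procedure would repeat this with stochastic draws to produce several imputed datasets and pool the resulting estimates.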
24. Water cycle algorithm: an approach for improvement of navigational strategy of multiple humanoid robots.
- Author
-
Muni, Manoj Kumar, Kumar, Saroj, Parhi, Dayal R., and Pandey, Krishna Kant
- Subjects
- *
HYDROLOGIC cycle , *HUMANOID robots , *ALGORITHMS , *MATHEMATICAL optimization , *TIME travel - Abstract
This paper presents an efficient water cycle algorithm based on the processes of the water cycle, in which streams and rivers flow into the sea. The optimization algorithm is applied to obtain the optimal feasible path with minimum travel duration during motion planning of both single and multiple humanoid robots in static and dynamic cluttered environments. The technique discards the rainfall process, in which falling water droplets form streams during rain, and retains the flowing process. The flowing process searches the solution space for more accurate solutions and represents the local search. Motion planning of the humanoids is carried out in V-REP software. The proposed algorithm is also tested in an experimental scenario under laboratory conditions, where it performs well in terms of optimal path length and minimum travel time. Navigational analysis is performed on both single and multiple humanoid robots. Statistical analysis of the results obtained from both the simulation and experimental environments is carried out for single and multiple humanoids, along with a comparison against another existing optimization technique, indicating the strength and effectiveness of the proposed water cycle algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. A mathematical model to predict the properties of solid desiccant heat pump by self-refined mesh algorithm.
- Author
-
Li, Qian, Wang, Ruzhu, and Ge, T.S.
- Subjects
- *
HEAT pumps , *SILICA gel , *MATHEMATICAL models , *DRYING agents , *MASS transfer , *ALGORITHMS , *DYNAMIC models - Abstract
• An adsorption model based on the real microstructure of the desiccant coating is developed.
• A self-refined algorithm that realizes automatic mesh optimization is proposed.
• A model that can simulate the transient performance of different SDHP systems is established.
• Three SDHP systems based on silica gel, MOF, and hygroscopic salt are simulated and analyzed.
The solid desiccant heat pump (SDHP) is a promising air-handling method. Present simulation models of SDHP face two challenges. Owing to their fixed mesh structure, one model must be deeply revised to simulate another SDHP system. Besides, an adsorption dynamic model based on the real mass transfer network of the desiccant coating is still lacking. This study proposes a universal SDHP model based on a self-refined mesh algorithm and a new adsorption dynamic model (GP model). The self-refined mesh algorithm can automatically optimize the mesh structure, making it easy to simulate different SDHP systems under variable environmental conditions. The GP model, based on the real microstructure of the coating, is demonstrated to be suitable for MOF, hygroscopic salt (LiCl), and silica gel coatings in our paper. Our model simulates the performance of three SDHP systems (coated with silica gel, MIL-101Cr, and LiCl) in sub-humid and humid conditions. Compared with LiCl, MIL-101Cr is more suitable for sub-humid environments and high dehumidification quantities. For the sub-humid condition, the optimal switch-over period for the MIL-101Cr-based SDHP is 13 min, achieving the highest COP (5.6). For the humid conditions, the LiCl-based SDHP shows the highest COP (10.7) when the switch-over period is 3 min. We believe this work can enlighten other studies on SDHP simulation and mass transfer in desiccants and guide the design of SDHP systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
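The "self-refined mesh" idea, refining automatically only where the solution demands it, can be illustrated on a one-dimensional toy problem. The sketch below is not the paper's SDHP model; it simply refines a 1-D grid wherever the midpoint interpolation error of a sharp-fronted profile (a stand-in for a sorption front) exceeds a tolerance. The profile, tolerances, and grid sizes are all invented for the example.

```python
import numpy as np

f = lambda x: np.tanh(20 * (x - 0.5))     # sharp front, like a sorption wave

def self_refine(a, b, tol=1e-3, max_pass=30):
    """Refine a 1-D mesh until linear interpolation meets `tol` everywhere."""
    x = np.linspace(a, b, 5)              # coarse starting grid
    for _ in range(max_pass):
        mid = 0.5 * (x[:-1] + x[1:])
        # interpolation error of each interval, sampled at its midpoint
        err = np.abs(f(mid) - 0.5 * (f(x[:-1]) + f(x[1:])))
        bad = err > tol
        if not bad.any():
            break                          # mesh is fine enough everywhere
        x = np.sort(np.concatenate([x, mid[bad]]))   # split only bad intervals
    return x

mesh = self_refine(0.0, 1.0)
```

The resulting mesh is dense near the front at x = 0.5 and coarse in the flat regions, which is the behavior an automatic mesh optimizer is after.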
26. Foam‐like phantoms for comparing tomography algorithms.
- Author
-
Pelt, Daniël M., Hendriksen, Allard A., and Batenburg, Kees Joost
- Subjects
- *
TOMOGRAPHY , *RANDOM numbers , *ALGORITHMS - Abstract
Tomographic algorithms are often compared by evaluating them on certain benchmark datasets. For fair comparison, these datasets should ideally (i) be challenging to reconstruct, (ii) be representative of typical tomographic experiments, (iii) be flexible to allow for different acquisition modes, and (iv) include enough samples to allow for comparison of data‐driven algorithms. Current approaches often satisfy only some of these requirements, but not all. For example, real‐world datasets are typically challenging and representative of a category of experimental examples, but are restricted to the acquisition mode that was used in the experiment and are often limited in the number of samples. Mathematical phantoms are often flexible and can sometimes produce enough samples for data‐driven approaches, but can be relatively easy to reconstruct and are often not representative of typical scanned objects. In this paper, we present a family of foam‐like mathematical phantoms that aims to satisfy all four requirements simultaneously. The phantoms consist of foam‐like structures with more than 100000 features, making them challenging to reconstruct and representative of common tomography samples. Because the phantoms are computer‐generated, varying acquisition modes and experimental conditions can be simulated. An effectively unlimited number of random variations of the phantoms can be generated, making them suitable for data‐driven approaches. We give a formal mathematical definition of the foam‐like phantoms, and explain how they can be generated and used in virtual tomographic experiments in a computationally efficient way. In addition, several 4D extensions of the 3D phantoms are given, enabling comparisons of algorithms for dynamic tomography. Finally, example phantoms and tomographic datasets are given, showing that the phantoms can be effectively used to make fair and informative comparisons between tomography algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
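Computer-generated foam-like phantoms of the kind described can be produced with a few lines of NumPy. The 2-D sketch below carves random non-overlapping circular voids out of a solid disk; it is a much-simplified stand-in for the authors' 3-D phantoms (their generation procedure, feature counts, and efficiency tricks differ), with all sizes and counts chosen arbitrarily.

```python
import numpy as np

def foam_phantom(size=128, n_voids=40, seed=0):
    """Rasterize a disk with random non-overlapping circular voids."""
    rng = np.random.default_rng(seed)
    centers, radii = [], []
    while len(centers) < n_voids:
        c = rng.uniform(-0.8, 0.8, size=2)
        r = rng.uniform(0.02, 0.12)
        if np.hypot(*c) + r > 0.9:        # void must stay inside the disk
            continue
        if any(np.hypot(*(c - c2)) < r + r2 for c2, r2 in zip(centers, radii)):
            continue                       # voids must not overlap
        centers.append(c)
        radii.append(r)
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    img = (np.hypot(x, y) <= 0.9).astype(float)    # solid disk
    for (cx, cy), r in zip(centers, radii):
        img[np.hypot(x - cx, y - cy) <= r] = 0.0   # carve the voids
    return img

ph = foam_phantom()
```

Because the seed controls everything, an effectively unlimited number of random variations can be generated, which is what makes such phantoms usable for data-driven reconstruction benchmarks.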
27. ELECTRIC VEHICLE CHARGING STATION LAYOUT BASED ON PARTICLE SWARM SIMULATION.
- Author
-
Liu, J.-Y., Liu, S.-F., and Gong, D.-Q.
- Subjects
- *
ELECTRIC vehicle charging stations , *CONSTRUCTION costs , *COST functions , *ALGORITHMS - Abstract
The construction layout of charging piles restricts the rapid development of EVs. To address this problem, this paper first simulates the current layout of charging piles in Beijing and finds an unbalanced distribution problem of oversupply and idle resources. Then, based on the current situation, this study takes the total charging station construction cost as the objective function, unfolds it at three levels, and uses MIP for the conditional constraints. Finally, an improved particle swarm algorithm is employed to simulate electric vehicles by generating demand points in the region to obtain the optimum locations of charging stations. This method overcomes the limitation of using static data in traditional research, and the simulation can reflect the dynamic law of electric vehicle operation, which is more consistent with the real situation. The calculation results show that the method adopted in this study can reasonably plan the layout of charging stations, relieve the charging pressure on some charging stations, and minimize the overall service cost of new charging stations while retaining the existing layout. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
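A minimal particle swarm placement loop of the kind the abstract describes can be sketched as follows. This is a generic PSO over synthetic demand points, not the paper's improved variant, its MIP constraints, or its cost model; the coefficients, region size, and station count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
demand = rng.uniform(0, 10, size=(60, 2))   # simulated EV demand points

def cost(flat):
    """Total distance if each EV charges at its nearest station."""
    stations = flat.reshape(-1, 2)
    d = np.linalg.norm(demand[:, None, :] - stations[None, :, :], axis=2)
    return d.min(axis=1).sum()

def pso(dim, n_particles=30, iters=200, seed=2):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 10, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 0, 10)           # keep particles inside the region
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

best, best_cost = pso(dim=3 * 2)            # place 3 stations in the plane
```

A swarm of 30 particles placing three stations reliably beats the naive baseline of stacking every station at the demand centroid.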
28. Initialization of Hidden Markov and Semi‐Markov Models: A Critical Evaluation of Several Strategies.
- Author
-
Maruotti, Antonello and Punzo, Antonio
- Subjects
- *
EXPECTATION-maximization algorithms , *ALGORITHMS , *GAUSSIAN distribution , *MARKOV processes - Abstract
Summary: The expectation–maximization (EM) algorithm is a familiar tool for computing the maximum likelihood estimate of the parameters in hidden Markov and semi‐Markov models. This paper carries out a detailed study on the influence that the initial values of the parameters impose on the results produced by the algorithm. We compare random starts and partitional and model‐based strategies for choosing the initial values for the EM algorithm in the case of multivariate Gaussian emission distributions (EDs) and assess the performance of each strategy with different assessment criteria. Several data generation settings are considered with varying number of latent states, of variables as well as of the level of fuzziness in the data, and discussion on how each factor influences the obtained results is provided. Simulation results show that different initialization strategies may lead to different log‐likelihood values and, accordingly, to different estimated partitions. A clear indication of which strategies should be preferred is given. We further include two real‐data examples, widely analysed in the hidden semi‐Markov model literature. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
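The sensitivity of EM to its starting values is easy to reproduce on a toy mixture. The sketch below fits a two-component univariate Gaussian mixture by EM under two initialization strategies, a random start and a crude partition around the median; it is a deliberate simplification of the multivariate hidden (semi-)Markov setting the paper studies, with invented data.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.5, 150), rng.normal(3.0, 1.0, 150)])

def em_gmm(x, mu_init, n_iter=200):
    """EM for a two-component univariate Gaussian mixture."""
    mu = np.array(mu_init, dtype=float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    dens = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
    return mu, np.log(dens.sum(axis=1)).sum()

# strategy 1: random start; strategy 2: crude partition around the median
mu_rand, ll_rand = em_gmm(data, rng.normal(size=2))
mu_part, ll_part = em_gmm(data, [np.median(data) - 1.0, np.median(data) + 1.0])
```

Comparing the final log-likelihoods of several such runs is exactly the kind of experiment the paper scales up across initialization strategies and data-generation settings.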
29. Construction of stochastic simulation metamodels with segmented polynomials.
- Author
-
Reis dos Santos, Pedro M. and Isabel Reis dos Santos, M.
- Subjects
- *
POLYNOMIALS , *NONLINEAR regression , *ALGORITHMS , *LEAST squares , *NONLINEAR theories - Abstract
Metamodels are an important tool in simulation analysis as they can provide insight into the behavior of the simulation response. Modeling the response with low-degree polynomial segments allows the identification of different behavior zones, and the parameters still retain a relation to the physical world. The purpose of this paper is to extend the use of segmented polynomial functions to simulation metamodeling, where the segments have at most identical value and slope at the breaks. Our approach is to build segmented polynomial metamodels in which the degree and continuity requirements of splines are less stringent, allowing more flexibility in the approximation. When the breaks are known, constrained least squares is used for metamodel estimation, taking into account the linear formulation of the problem. If the breaks have to be estimated, unconstrained nonlinear regression theory is used when it can be applied. Otherwise, the estimation is performed using an iterative algorithm applied repeatedly in a cyclic manner to estimate the breaks, and jackknifing yields the confidence intervals. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
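For the known-breaks case, the constrained least-squares estimation mentioned in the abstract can be made concrete with a hinge basis: the basis [1, x, max(0, x - b)] enforces identical values at the break by construction, so ordinary least squares already yields a continuous segmented fit. The data, break location, and slopes below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
break_ = 4.0
# true response: slope changes at the break but the curve stays continuous
y_true = np.where(x < break_,
                  1.0 + 0.5 * x,
                  1.0 + 0.5 * break_ - 1.2 * (x - break_))
y = y_true + rng.normal(scale=0.1, size=x.size)

# hinge basis [1, x, max(0, x - b)]: value continuity at the break is built in,
# so plain least squares gives the continuous segmented-polynomial metamodel
A = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - break_)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
rmse = np.sqrt(np.mean((y_hat - y_true) ** 2))
```

The slope left of the break is `coef[1]` and the slope right of it is `coef[1] + coef[2]`, so the two behavior zones remain physically interpretable. Estimating the break itself would require the nonlinear or cyclic iterative machinery the abstract describes.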
30. Robust Deception Scheme for Secure Interference Exploitation Under PSK Modulations.
- Author
-
Fan, Ye, Yao, Rugui, Li, Ang, Liao, Xuewen, and Leung, Victor C. M.
- Subjects
- *
TRANSMITTERS (Communication) , *DECEPTION , *INTERSYMBOL interference , *ALGORITHMS , *MISO , *WIRETAPPING - Abstract
This paper investigates the security problem of a multi-eavesdropper multiple-input-single-output (MISO) wiretap channel, where an N-antenna transmitter communicates with a single-antenna legitimate user in the presence of multiple single-antenna smart eavesdroppers. To overcome the security risk of the traditional secure constructive interference-based (CI-based) scheme when facing smart eavesdroppers, we propose a novel deception scheme (DS) via a random transmission strategy, where the eavesdroppers are expected to decode the deception symbols correctly but are unable to distinguish the authenticity of the decoded symbols. Then, an efficient algorithm is proposed for the deception signal-to-interference-plus-noise ratio (SINR) balancing problem when perfect channel state information (CSI) is assumed. Furthermore, we consider a practical scenario where only imperfect CSI is available, and explore two different methods for the deception optimization problem, namely a convexification relaxation approach (CRA) and a Lagrangian relaxation approach (LRA). For both CSI cases, a closed-form solution to the considered CI-based deception scheme is obtained. Simulation results validate the superiority of the proposed approach over traditional secure precoding schemes, and also demonstrate the significant computation efficiency improvements of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
31. On Using a Low-Density Flash Lidar for Road Vehicle Tracking.
- Author
-
A. R., Vimal Kumar, Subramanian, Shankar C., and Rajamani, Rajesh
- Subjects
- *
SENSOR placement , *LIDAR , *HIERARCHICAL clustering (Cluster analysis) , *KALMAN filtering , *ALGORITHMS , *AIR filters - Abstract
This study uses a low-density solid-state flash lidar for estimating the trajectories of road vehicles in vehicle collision avoidance applications. Low-density flash lidars are inexpensive compared to the commonly used radars and point-cloud lidars, and have recently attracted the attention of vehicle manufacturers. However, tracking road vehicles using the sparse data provided by such sensors is challenging due to the few reflected measurement points obtained. In this paper, such challenges in the use of low-density flash lidars are identified and estimation algorithms to handle them are presented. A method to use the amplitude information provided by the sensor for better localization of targets is evaluated using both physics-based simulations and experiments. A two-step hierarchical clustering algorithm is then employed to group multiple detections from a single object into one measurement, which is then associated with the corresponding object using a Joint Integrated Probabilistic Data Association (JIPDA) algorithm. A Kalman filter is used to estimate the longitudinal and lateral motion variables, and the results show that good tracking, especially in the lateral direction, can be achieved using the proposed algorithm despite the sparse measurements provided by the sensor. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
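The Kalman filter stage of such a tracking pipeline can be sketched in isolation. The snippet below runs a standard constant-velocity Kalman filter on simulated position-only measurements; it omits the clustering and JIPDA association steps, and the noise levels and motion profile are invented rather than taken from the paper's sensor.

```python
import numpy as np

rng = np.random.default_rng(4)
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
H = np.array([[1.0, 0.0]])                # sensor measures position only
Q = 1e-3 * np.eye(2)                      # process noise covariance
R = np.array([[0.25]])                    # measurement noise covariance (0.5^2)

# simulate a vehicle moving at 2 m/s, observed through noisy position readings
truth = np.array([0.0, 2.0])
x_est = np.array([0.0, 0.0])              # filter starts with unknown velocity
P = np.eye(2)
errs = []
for _ in range(100):
    truth = F @ truth
    z = H @ truth + rng.normal(scale=0.5, size=1)
    # predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
    errs.append(abs(x_est[1] - truth[1]))

vel_err = np.mean(errs[-20:])             # steady-state velocity error
```

Even though velocity is never measured directly, the filter infers it from the position sequence, which is the property that makes Kalman filtering attractive with sparse, noisy detections.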
32. Secure Cognitive Radio Communication via Intelligent Reflecting Surface.
- Author
-
Dong, Limeng, Wang, Hui-Ming, and Xiao, Haitao
- Subjects
- *
COGNITIVE radio , *RADIO (Medium) , *RADIO transmitters & transmission , *ALGORITHMS - Abstract
In this paper, an intelligent reflecting surface (IRS) assisted spectrum-sharing underlay cognitive radio (CR) wiretap channel (WTC) is studied, and we aim at enhancing the secrecy rate of the secondary user in this channel subject to a total power constraint at the secondary transmitter (ST), an interference power constraint (IPC) at the primary receiver (PR), and a unit modulus constraint at the IRS. Because an additional IPC and an eavesdropper (Eve) are considered, all the existing solutions for enhancing the secrecy rate of the IRS-assisted non-CR WTC, as well as for enhancing the transmission rate in the IRS-assisted CR channel without an eavesdropper, fail in this work. Therefore, we propose new numerical solutions to optimize the secrecy rate of this channel under full primary and secondary users' channel state information (CSI) and three different cases of Eve's CSI: full CSI, imperfect CSI with bounded estimation error, and no CSI. To solve the difficult non-convex optimization problem, an efficient alternating optimization (AO) algorithm is proposed to jointly optimize the beamformer at the ST and the phase shift coefficients at the IRS. In particular, when optimizing the phase shift coefficients during each iteration of AO, a Dinkelbach-based solution combined with successive approximation and a penalty-based solution is proposed under full CSI, and a penalty convex-concave procedure solution is proposed under imperfect Eve's CSI. For the case of no Eve's CSI, an artificial noise (AN) aided approach is adopted to help enhance the secrecy rate. Simulation results show that our proposed solutions for the IRS-assisted design greatly enhance the secrecy performance compared with the existing numerical solutions with and without IRS under full and imperfect Eve's CSI. A positive secrecy rate can be achieved by our proposed AN-aided approach for most channel realizations under the no-CSI case, so that secure communication can also be guaranteed. All of the proposed AO algorithms are guaranteed to converge monotonically. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
33. Terahertz Massive MIMO With Holographic Reconfigurable Intelligent Surfaces.
- Author
-
Wan, Ziwei, Gao, Zhen, Gao, Feifei, Renzo, Marco Di, and Alouini, Mohamed-Slim
- Subjects
- *
CHANNEL estimation , *ELECTRONIC equipment , *MIMO systems , *ALGORITHMS , *UNIT cell , *TERAHERTZ technology - Abstract
We propose a holographic version of a reconfigurable intelligent surface (RIS) and investigate its application to terahertz (THz) massive multiple-input multiple-output systems. Capitalizing on the miniaturization of THz electronic components, RISs can be implemented by densely packing sub-wavelength unit cells, so as to realize continuous or quasi-continuous apertures and to enable holographic communications. In this paper, in particular, we derive the beam pattern of a holographic RIS. Our analysis reveals that the beam pattern of an ideal holographic RIS can be well approximated by that of an ultra-dense RIS, which has a more practical hardware architecture. In addition, we propose a closed-loop channel estimation (CE) scheme to effectively estimate the broadband channels that characterize THz massive MIMO systems aided by holographic RISs. The proposed CE scheme includes a downlink coarse CE stage and an uplink finer-grained CE stage. The uplink pilot signals are judiciously designed for obtaining good CE performance. Moreover, to reduce the pilot overhead, we introduce a compressive sensing-based CE algorithm, which exploits the dual sparsity of THz MIMO channels in both the angular domain and delay domain. Simulation results demonstrate the superiority of holographic RISs over the non-holographic ones, and the effectiveness of the proposed CE scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
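Compressive-sensing channel estimation of the kind described is often illustrated with orthogonal matching pursuit (OMP). The sketch below recovers a synthetic sparse vector from random linear measurements; it is textbook OMP, not the paper's specific dual-sparsity CE algorithm, and the dimensions, sparsity level, and measurement matrix are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, k = 40, 100, 4
Phi = rng.normal(size=(m, n)) / np.sqrt(m)      # random measurement matrix
h = np.zeros(n)
support = rng.choice(n, size=k, replace=False)  # k-sparse "channel"
h[support] = rng.uniform(1.0, 3.0, size=k) * rng.choice([-1.0, 1.0], size=k)
y = Phi @ h                                     # noiseless measurements

def omp(y, Phi, k):
    """Greedy sparse recovery: pick the most correlated atom k times."""
    residual, idx = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))  # most correlated atom
        idx.append(j)
        sub = Phi[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef                # project out selected atoms
    h_hat = np.zeros(Phi.shape[1])
    h_hat[idx] = coef
    return h_hat

h_hat = omp(y, Phi, k)
```

With far fewer measurements than unknowns (40 versus 100), the sparse vector is still recovered, which is the property that lets such CE schemes cut pilot overhead.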
34. Achievable Rate Maximization for Intelligent Reflecting Surface-Assisted Orbital Angular Momentum-Based Communication Systems.
- Author
-
Li, Yiqing, Jiang, Miao, Zhang, Guangchi, and Cui, Miao
- Subjects
- *
TELECOMMUNICATION systems , *ALGORITHMS , *MEAN square algorithms , *DISCRETE Fourier transforms , *ANGULAR momentum (Mechanics) - Abstract
The orbital angular momentum (OAM)-based communication systems may face severe transmission problems when the transmit and receive uniform circular array pairs are blocked. In this paper, a promising technique named intelligent reflecting surface (IRS) is proposed to help alleviate blockages and provide alternative line-of-sight links. To maximize the achievable rate of the IRS-assisted OAM communication systems, we optimize the transmit power allocation along with the IRS's reflecting phase shifts, and propose an alternating optimization-based algorithm to solve the resulting optimization problem with coupled variables and a non-convex structure. Specifically, the proposed algorithm obtains a closed-form solution for the transmit power allocation by applying the majorization-minimization and ℓ1-ball projection approaches, and obtains a locally optimal solution for the IRS's reflecting phase shifts by applying the weighted minimum mean square error-based fixed point iteration approach. Simulation results demonstrate the superiority of our proposed algorithm over existing baseline algorithms and also show its robust stability to oblique angle errors. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. Real-Time Planning and Nonlinear Control for Quadrupedal Locomotion With Articulated Tails.
- Author
-
Fawcett, Randall T., Pandala, Abhishek, Kim, Jeeseop, and Hamed, Kaveh Akbari
- Subjects
- *
QUADRUPEDALISM , *CENTER of mass , *QUADRATIC programming , *REACTION forces , *COUPLING schemes , *PRODUCTION planning , *LEG , *TRACKING control systems - Abstract
The primary goal of this paper is to develop a formal foundation to design nonlinear feedback control algorithms that intrinsically couple legged robots with bio-inspired tails for robust locomotion in the presence of external disturbances. We present a hierarchical control scheme in which a high-level and real-time path planner, based on an event-based model predictive control (MPC), computes the optimal motion of the center of mass (COM) and tail trajectories. The MPC framework is developed for an innovative reduced-order linear inverted pendulum (LIP) model that is augmented with the tail dynamics. At the lower level of the control scheme, a nonlinear controller is implemented through the use of quadratic programming (QP) and virtual constraints to force the full-order dynamical model to track the prescribed optimal trajectories of the COM and tail while maintaining feasible ground reaction forces at the leg ends. The potential of the analytical results is numerically verified on a full-order simulation model of a quadrupedal robot augmented with a tail with a total of 20 degrees-of-freedom. The numerical studies demonstrate that the proposed control scheme coupled with the tail dynamics can significantly reduce the effect of external disturbances during quadrupedal locomotion. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. A Two-Stage Local Positioning Method With Misalignment Calibration for Robotic Structural Monitoring of Buildings.
- Author
-
Rong Wang, Zhi Xiong, Yulu Luke Chen, Manjunatha, Preetham, and Masri, Sami F.
- Subjects
- *
STRUCTURAL health monitoring , *BUILDING failures , *MOBILE robots , *INERTIAL navigation systems , *ROBOTICS , *CALIBRATION - Abstract
In structural health monitoring (SHM) applications carried out by mobile robots, precise locating of the SHM robot is essential for accurate detection and quantification of defects. The traditional dead reckoning (DR) approach can only provide local position in the horizontal plane, which is not sufficient for three-dimensional SHM applications in large buildings. In this paper, a new robot positioning algorithm for active detection and localization of building structural defects is proposed. The two-stage robot positioning scheme consists of a self-misalignment calibration stage and a positioning stage during the SHM task, fusing absolute and relative measurements. To overcome the drawback of the DR algorithm, after a full analysis of the existing localization modes applicable to mobile robots, this paper adopts the inertial navigation system (INS) approach to measure the absolute motion information of the moving robot. On this basis, through the transformation between the absolute positioning coordinates and the local positioning coordinates of the building, the mobile robot's optimal trajectory on the building surface is designed for self-calibration of coordinate misalignments. The proposed method effectively achieves local positioning of the robot in the building coordinate frame by fusing external relative assistant measurements with absolute measurements. Using the designed strategies, the coordinate misalignment can also be self-calibrated effectively, improving local positioning accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
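The drift that motivates moving beyond pure dead reckoning is easy to demonstrate. The toy below integrates noisy odometry (dead reckoning) and periodically resets it with an absolute fix, mimicking in the simplest possible way the fusion of relative and absolute measurements described above; the noise figures, rates, and trajectory are all invented.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n = 0.1, 500
true_pos = np.zeros(2)
dr_pos = np.zeros(2)
heading, speed = 0.0, 1.0
drift = []
for i in range(n):
    heading += 0.01                       # robot slowly turns
    step = speed * dt * np.array([np.cos(heading), np.sin(heading)])
    true_pos = true_pos + step
    # dead reckoning integrates noisy odometry, so error accumulates
    dr_pos = dr_pos + step + rng.normal(scale=0.002, size=2)
    # every 50 steps an absolute fix (e.g. INS/external aid) resets the drift
    if (i + 1) % 50 == 0:
        dr_pos = true_pos + rng.normal(scale=0.001, size=2)
    drift.append(np.linalg.norm(dr_pos - true_pos))
```

Between fixes the error grows like a random walk; each absolute update snaps it back, which is the qualitative behavior any DR-plus-absolute fusion scheme exploits.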
37. Noise-Statistics Learning of Automotive-Grade Sensors Using Adaptive Marginalized Particle Filtering.
- Author
-
Berntorp, Karl and Di Cairano, Stefano
- Subjects
- *
DETECTORS , *ESTIMATION bias , *RANDOM noise theory , *ADAPTIVE filters , *DATA fusion (Statistics) , *PARTICLES - Abstract
This paper presents a method for real-time identification of sensor statistics especially aimed for low-cost automotive-grade sensors. Based on recent developments in adaptive particle filtering (PF) and under the assumption of Gaussian distributed noise, our method identifies the slowly time-varying sensor offsets and variances jointly with the vehicle state, and it extends to banked roads. While the method is primarily focused on learning the noise characteristics of the sensors, it also produces an estimate of the vehicle state. This can then be used in driver-assistance systems, either as a direct input to the control system or indirectly to aid other sensor-fusion methods. The paper contains verification against several simulation and experimental data sets. The results indicate that our method is capable of bias-free estimation of both the bias and the variance of each sensor, that the estimation results are consistent over different data sets, and that the computational load is feasible for implementation on computationally limited embedded hardware typical of automotive applications. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
38. Dynamic AP Clustering and Precoding for User-Centric Virtual Cell Networks.
- Author
-
Jianfeng Shi, Ming Chen, Wence Zhang, Zhaohui Yang, and Hao Xu
- Subjects
- *
PARTICLE swarm optimization , *FEMTOCELLS , *ALGORITHMS , *MEAN square algorithms , *MATHEMATICAL optimization - Abstract
This paper investigates the dynamic access point (AP) clustering and precoding problem in the downlink of user-centric virtual cell networks. The goal is to maximize the weighted sum spectral efficiency (SE) while satisfying the power constraints and AP clustering constraints in adjacent time slots (TSs). By adopting the random walk mobility model to describe the mobile user equipments' movement behaviors, we consider dynamic and time-varying channel conditions. Therefore, the weighted sum SE maximization takes the form of a discrete-time sequence of mixed-integer non-convex optimization problems. In this paper, we propose to solve this sequential problem in two stages. In the first stage, a dynamic AP clustering approach based on discrete particle swarm optimization is developed. This approach takes advantage of the channel correlation by exploiting the relationship between AP clustering solutions in adjacent TSs to improve the SE performance and reduce complexity. In the second stage, given the AP clustering solution obtained in the first stage, a distributed precoding algorithm is devised by applying the weighted minimum mean square error method. By combining these two stages, we propose a novel dynamic AP clustering and precoding algorithm (DAPC-Pre). The effectiveness of the proposed DAPC-Pre algorithm is verified by the simulation results. In particular, the proposed algorithm converges fast and significantly outperforms benchmark algorithms in terms of sum SE under different dynamic environments. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
39. Load Sharing and Wayside Battery Storage for Improving AC Railway Network Performance With Generic Model for Capacity Estimation—Part 2.
- Author
-
Perin, Igor, Walker, Geoffrey R., and Ledwich, Gerard
- Subjects
- *
ENERGY storage , *OPERATING costs , *ALGORITHMS , *PERFORMANCE of battery chargers , *ELECTRIC batteries , *TRACTION power supplies - Abstract
This paper investigates the use of large-scale wayside energy storage to reduce operational costs of ac electric traction power networks. This paper also introduces a method of estimating the battery capacity, based on manufacturer's charging and discharging specifications. The battery model is simple and easy to incorporate into load flow simulation algorithms. The concept and the battery model are validated through a feasibility study within Aurizon's heavy haul electric railway network. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
40. A temporal graph grammar formalism.
- Author
-
Shi, Zhan, Zeng, Xiaoqin, Zou, Yang, Huang, Song, Li, Hui, Hu, Bin, and Yao, Yi
- Subjects
- *
GRAPH grammars , *FORMALISM (Literary analysis) , *EDGES (Geometry) , *DECIDABILITY (Mathematical logic) , *ALGORITHMS - Abstract
As a useful formalism tool, graph grammars provide a rigorous but intuitive way to specify visual languages. This paper, based on the existing Edge-based Graph Grammar (EGG), proposes a new context-sensitive graph grammar formalism called the Temporal Edge-based Graph Grammar, or TEGG. TEGG introduces some temporal mechanisms to grammatical specifications, productions, operations and so on in order to tackle time-related issues. In the paper, formal definitions of TEGG are provided first. Then, a new parsing algorithm with a decidability proof is proposed to check the correctness of a given graph's structure, to analyze operations’ timing when needed, and to make the computer simulation of the temporal sequence in the graph available. Next, the complexity of the parsing algorithm is analyzed. Finally, a case study on an application with temporal requirements is provided to show how the parsing algorithm of TEGG works. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
41. Constrained Quasi-Spectral MPSP With Application to High-Precision Missile Guidance With Path Constraints.
- Author
-
Mondal, Sabyasachi and Padhi, Radhakant
- Subjects
- *
MISSILE guidance systems , *SYSTEM dynamics , *PROCESS optimization , *AUTOMATIC pilot (Airplanes) - Abstract
This paper extends the recently developed quasi-spectral model predictive static programming (QS-MPSP) to include state and control path constraints while retaining its computational efficiency. This is achieved by (i) formulating the entire problem in the control variables alone, by (a) converting the system dynamics to an equivalent algebraic constraint and (b) converting the state constraints to equivalent control constraints, both of which are done by manipulating the system dynamics; (ii) representing the control variables in quasi-spectral form, which makes the number of free variables independent of time grids; and (iii) using a computationally efficient optimization algorithm to solve this low-dimensional problem. This generic, computationally efficient technique is then utilized as a lead-angle- and lateral-acceleration-constrained optimal missile guidance law to successfully intercept incoming high-speed ballistic targets with high precision. Both of these constraints, as well as near-zero miss distance, are of high practical significance for this challenging problem. Extensive three-dimensional simulation studies show the effectiveness of the newly proposed constrained QS-MPSP guidance algorithm. Six degrees-of-freedom simulation studies have also been carried out with the autopilot in the loop to validate the results more realistically. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. Discrete-Time Nonlinear Singularly Perturbed System Identification Using Coupled Multimodel Representation.
- Author
-
Rajab, Asma Ben, Bahri, Nesrine, and Ltaief, Majda
- Subjects
- *
SYSTEM identification , *OBSERVABILITY (Control theory) , *SINGULAR perturbations - Abstract
Many control and observability theories for singularly perturbed systems require full knowledge of the system model parameters, especially if the system is considered a black box. To overcome this problem and to obtain an accurate and faithful model, this paper describes a new identification method for discrete-time nonlinear singularly perturbed systems (NLSPS) using a coupled-state multimodel representation. The Levenberg-Marquardt algorithm is used to identify not only the submodel parameters but also the perturbation parameter ε. Two cases are considered for identifying these systems. The first supposes that the perturbation parameter ε of the real system is known, and thus only the submodel parameters are identified. The second supposes that this perturbation parameter is unknown and has to be identified together with the submodel parameters. A simulation example demonstrates the effectiveness of the proposed identification method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
43. Dendrite P Systems Toolbox: Representation, Algorithms and Simulators.
- Author
-
Orellana-Martín, David, Martínez-del-Amor, Miguel Á., Valencia-Cabrera, Luis, Pérez-Hurtado, Ignacio, Riscos-Núñez, Agustín, and Pérez-Jiménez, Mario J.
- Subjects
- *
CONCEPTUAL design , *ALGORITHMS , *DENDRITES - Abstract
Dendrite P systems (DeP systems) are a recently introduced neural-like model of computation, providing an alternative to the more classical spiking neural (SN) P systems. In this paper, we present the first software simulator for DeP systems and investigate the key features of the representation of the syntax and semantics of such systems. First, the conceptual design of a simulation algorithm is discussed; this helps to shed light on the differences from simulators for SN P systems and to identify potentially parallelizable parts. Second, a novel simulator implemented within the P-Lingua simulation framework is presented. Moreover, MeCoSim, a GUI tool for the abstract representation of problems based on P system models, has been extended to support this model. An experimental validation of the simulator is also covered. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
44. Assessment of a Machine-Learnt Adaptive Wall-Function in a Compressor Cascade With Sinusoidal Leading Edge.
- Author
-
Tieghi, Lorenzo, Corsini, Alessandro, Delibra, Giovanni, and Angelini, Gino
- Abstract
Near-wall modeling is one of the most challenging aspects of computational fluid dynamics computations. Integration to the wall with a low-Reynolds approach benefits the accuracy of the results, but strongly increases the computational resources required by the simulation. A compromise between accuracy and speed to solution is usually obtained through the use of wall functions (WFs), especially in Reynolds-averaged Navier-Stokes computations, which normally require the first cell of the grid to fall inside the log-layer (50
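A conventional wall function — the baseline that adaptive, machine-learnt formulations replace — can be sketched as follows; the log-law constants and the fixed-point solver are standard textbook choices, not the paper's model.

```python
import math

# Standard log-law wall function -- a generic sketch, not the paper's
# machine-learnt adaptive formulation.  Given the velocity U at the first
# cell centre, its wall distance y and the kinematic viscosity nu, solve
#   U / u_tau = (1/kappa) * ln(E * y * u_tau / nu)
# for the friction velocity u_tau by fixed-point iteration.

KAPPA, E = 0.41, 9.8          # von Karman constant, smooth-wall constant

def friction_velocity(U, y, nu, iters=100):
    u_tau = math.sqrt(abs(U) * nu / y)        # viscous-sublayer initial guess
    for _ in range(iters):
        u_tau = KAPPA * U / math.log(E * y * u_tau / nu)
    return u_tau

u_tau = friction_velocity(U=10.0, y=1e-3, nu=1.5e-5)
y_plus = u_tau * 1e-3 / 1.5e-5    # y+ of the first cell centre
```

The wall shear stress then follows from u_tau without resolving the viscous sublayer, which is where the computational saving over integration-to-the-wall comes from.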
- Published
- 2020
- Full Text
- View/download PDF
45. Proper Orthogonal Decomposition Framework for the Explicit Solution of Discrete Systems With Softening Response.
- Author
-
Ceccato, Chiara, Xinwei Zhou, Pelessone, Daniele, and Cusatis, Gianluca
- Subjects
- *
DISCRETE systems , *ORTHOGONAL decompositions , *COMPRESSION loads - Abstract
The application of explicit dynamics to simulate quasi-static events often becomes impractical in terms of computational cost. Different solutions have been investigated in the literature to decrease the simulation time, and a family of interesting, increasingly adopted approaches are those based on the proper orthogonal decomposition (POD) as a model reduction technique. In this study, the algorithmic framework for the integration of the equations of motion through POD is proposed for discrete linear and nonlinear systems: a low-dimensional approximation of the full order system is generated by the so-called proper orthogonal modes (POMs), computed with snapshots from the full order simulation. Aiming at a predictive tool, the POMs are updated in itinere, alternating the integration in the complete system, for the snapshot collection, with the integration in the reduced system. The paper discusses details of the transition between the two systems and issues related to the application of essential and natural boundary conditions (BCs). Results show that, for one-dimensional (1D) cases, just a few modes are capable of excellent approximation of the solution, even in the case of stress-strain softening behavior, allowing the critical time-step of the simulation to be conveniently increased without significant loss in accuracy. For more general three-dimensional (3D) situations, the paper discusses the application of the developed algorithm to a discrete model called the lattice discrete particle model (LDPM), formulated to simulate quasi-brittle materials characterized by a softening response. Efficiency and accuracy of the reduced order LDPM response are discussed with reference to both tensile and compressive loading conditions. [ABSTRACT FROM AUTHOR]
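The core POD step — extracting POMs from snapshots and projecting the state onto them — can be sketched as below. Only the reduction itself is shown, on synthetic data; the in-itinere mode updates and the explicit time integration of the paper are not reproduced.

```python
import numpy as np

# Proper orthogonal modes (POMs) from snapshots of a full-order solution:
# the snapshot matrix is factored by SVD and the leading left singular
# vectors span the reduced subspace.

rng = np.random.default_rng(1)
n_dof, n_snap, n_modes = 200, 40, 3

# Synthetic snapshots dominated by 3 spatial patterns (a stand-in for
# states sampled from the full-order explicit simulation)
patterns = rng.standard_normal((n_dof, n_modes))
snapshots = patterns @ rng.standard_normal((n_modes, n_snap))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :n_modes]              # POM basis, n_dof x n_modes

# Project a full-order state to reduced coordinates and reconstruct it
x_full = snapshots[:, 0]
x_red = Phi.T @ x_full            # just 3 reduced coordinates
x_rec = Phi @ x_red
err = np.linalg.norm(x_rec - x_full) / np.linalg.norm(x_full)
```

Because the synthetic snapshots lie exactly in a 3-dimensional subspace, three modes reconstruct the state to machine precision; in a real softening simulation the singular values `s` indicate how many modes are worth keeping.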
- Published
- 2018
- Full Text
- View/download PDF
46. GENERATING AND MODELING OF BRAKING CURVE AND THE ASSESSMENT OF THE QUALITY OF AUTOMATIC STOPPING OF THE TRAIN.
- Author
-
ANUSZCZYK, Jan, GOCEK, Andrzej, PACHOLSKI, Krzysztof, and DOMINIKOWSKI, Bartosz
- Subjects
- *
RAILROAD trains , *BRAKE systems , *QUALITY factor , *ALGORITHMS , *ELECTRIC controllers - Abstract
This paper presents the theory of generating the braking curve and an analysis of the influence of the braking controller parameters on the generation of the braking curve of the train. Computed examples of braking quality developed using a generic quality factor are shown, and on the basis of the calculations, weight components of the factor and an additional criterion for assessing the quality of braking are proposed. It has been demonstrated that the developed algorithms can be used to verify the effectiveness of the braking controller and the adjustment of its terms, and that changing these algorithms affects the shape of the generated braking curve of the train. The analysis of a failure of the propulsion car revealed the existence of a safe braking area. The performed statistical analysis confirmed the normal distribution of the scatter of braking results, to which the regression model was fitted. [ABSTRACT FROM AUTHOR]
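A canonical constant-deceleration braking curve gives the flavor of what such a controller tracks — a generic textbook sketch, not the controller model analysed in the paper, with a hypothetical deceleration value.

```python
import math

# Constant-deceleration braking curve for automatic stopping: for a
# service deceleration a, the permitted speed at distance-to-go d is
#   v(d) = sqrt(2 * a * d),
# which reaches v = 0 exactly at the stopping point.

def braking_curve(d, a=0.7):
    """Permitted speed [m/s] at distance-to-go d [m]; a is in m/s^2."""
    return math.sqrt(2.0 * a * max(d, 0.0))

v_200 = braking_curve(200.0)   # permitted speed 200 m before the stop
v_0 = braking_curve(0.0)       # zero at the stopping point
```

Controller parameters then shape how the real train speed is regulated onto this curve, which is exactly what the quality factor in the paper assesses.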
- Published
- 2018
- Full Text
- View/download PDF
47. Sensor data fusion in the context of electric vehicles charging stations using a Network-on-Chip.
- Author
-
Stoychev, Ivan, Wehner, Philipp, Rettkowski, Jens, Kalb, Tobias, Wichert, Patrick, Göhringer, Diana, and Oehm, Jürgen
- Subjects
- *
ELECTRIC vehicles , *ELECTRIC vehicle charging stations , *DATA fusion (Statistics) , *NETWORKS on a chip , *ENERGY measurement , *ALGORITHMS - Abstract
This paper presents a platform for evaluating sensor data fusion algorithms based on a Network-on-Chip (NoC). Initially, the NoC is simulated by MPSoCSim, which contains an ARM and several MicroBlaze processors. The approach is furthermore evaluated on a real FPGA system in an automotive context. Charging stations for electric vehicles contain expensive sensors, as the energy measurement system is an important part used for billing purposes, for the safety of the charging process, and for security. The sensors usually used are specially made solutions, which are thermally stable and highly accurate. The NoC infrastructure presented in this paper enables a new method for sensor data fusion, which decreases costs by using inexpensive sensor units. These sensor units are sensitive to temperature changes. The target of the sensor data fusion approach is to replace expensive sensors with inexpensive ones without losing accuracy. The presented simulation platform allows an easy exchange of sensor models and sensor data fusion algorithms. [ABSTRACT FROM AUTHOR]
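The idea of recovering accuracy from several cheap sensors can be sketched with inverse-variance weighted fusion — a generic stand-in for whatever fusion algorithm runs on the NoC, with all numbers hypothetical.

```python
import numpy as np

# Inverse-variance weighted fusion: several inexpensive, noisier readings
# of the same quantity are combined into one estimate with lower variance
# than any single sensor.  All numbers are hypothetical.

rng = np.random.default_rng(2)
true_current = 16.0                    # [A] hypothetical charging current
sigmas = np.array([0.5, 0.8, 1.0])     # per-sensor noise std deviations

readings = true_current + sigmas * rng.standard_normal((10000, 3))

w = 1.0 / sigmas ** 2
w /= w.sum()                           # normalized inverse-variance weights
fused = readings @ w                   # one fused estimate per time step

fused_std = fused.std()
best_single_std = readings[:, 0].std() # the best individual sensor
```

The fused estimate beats even the best single sensor, which is the statistical basis for replacing one expensive sensor with several cheap ones; temperature drift would additionally require per-sensor compensation models, which this sketch omits.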
- Published
- 2018
- Full Text
- View/download PDF
48. Hyperspectral radiative transfer modeling to explore the combined retrieval of biophysical parameters and canopy fluorescence from FLEX – Sentinel-3 tandem mission multi-sensor data.
- Author
-
Verhoef, Wouter, Van Der Tol, Christiaan, and Middleton, Elizabeth M.
- Subjects
- *
LIGHT transmission in plant canopies , *RADIATIVE transfer , *ARTIFICIAL satellites , *FLUORIMETRY , *ANISOTROPY , *ALGORITHMS - Abstract
The FLuorescence EXplorer (FLEX) satellite mission, selected as ESA's 8th Earth Explorer, has been designed for the measurement of sun-induced fluorescence (F) spectra emitted by plants. This will be accomplished through a multi-sensor approach by placing it in a common orbit in tandem with the Sentinel-3 (S3) mission, which will have two optical sensors on board, OLCI (Ocean and Land Colour Instrument) and SLSTR (Sea and Land Surface Temperature Radiometer), to complement FLEX. These S3 instruments will be used in combination with the imaging spectrometers on board FLEX to provide data useful for atmospheric correction of FLEX data. However, a fully synergetic approach, i.e. exploiting the spectral and directional information from all tandem mission instruments together, is an attractive alternative which is explored in this paper. By employing all combined top-of-atmosphere (TOA) spectral radiance data, one can (i) characterize the relevant optical properties of the atmosphere, (ii) retrieve biophysical canopy properties including the associated reflectance anisotropy, and (iii) retrieve a more accurate and consistent canopy F. Regarding retrieval methods, Fraunhofer Line Depth (FLD) and Spectral Fitting (SF) are well-known techniques applied to hyperspectral data. Both methods depend on a high spectral resolution and assume a Lambertian (isotropic) canopy reflectance. However, most vegetation canopies are non-Lambertian. This implies that, in particular when ignoring the anisotropic surface reflection, substantial retrieval errors can occur due to the interaction between atmospheric absorption bands and surface reflectance anisotropy. In this paper, a novel method based on spectral radiative transfer (RT) modeling is proposed, in which coupled RT models are used to simulate TOA radiance spectra.
These are then matched with ‘measured’ spectra in order to retrieve surface fluorescence, along with a suite of biophysical parameters, by model inversion through optimization. By applying coupled RT models of the soil-leaf-canopy and the surface-atmosphere systems, TOA radiance spectra can be simulated for all optical sensors of this tandem mission. In this way, complex effects due to surface reflectance anisotropy and the spectral sampling by the various instruments, which are difficult to compensate for in the end products, are properly taken into account by their incorporation in the forward modeling. Next, by model inversion of TOA radiance data via optimization, the most accurate F retrievals can be achieved in a consistent manner, along with important canopy level biophysical parameters that may help interpret the F spectrum, such as chlorophyll content and leaf area index (LAI). The potential of this approach has been explored in a numerical experiment, and the results are presented in this paper. We find that, with the assumed well-characterized and plausible FLEX/S3 instrument performances, the simultaneous retrieval of biophysical canopy parameters and F spectra would be possible with a remarkable accuracy, provided the correct atmospheric characterization is available. [ABSTRACT FROM AUTHOR]
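The classic FLD baseline that this model-inversion approach improves upon can be written in a few lines; the formula follows from the stated Lambertian assumption, and the numbers below are made up for illustration.

```python
import math

# Classic Fraunhofer Line Depth (FLD) retrieval.  With irradiance E and
# radiance L sampled inside and outside an absorption line, and assuming
# Lambertian reflectance r and fluorescence F constant across the line,
# L = r*E/pi + F gives
#   F = (E_out*L_in - E_in*L_out) / (E_out - E_in).

def fld(E_in, E_out, L_in, L_out):
    return (E_out * L_in - E_in * L_out) / (E_out - E_in)

r, F_true = 0.4, 1.5                   # hypothetical reflectance and F
E_out, E_in = 100.0, 20.0              # irradiance outside/inside the line
L_out = r * E_out / math.pi + F_true
L_in = r * E_in / math.pi + F_true

F_hat = fld(E_in, E_out, L_in, L_out)
```

When the canopy is non-Lambertian, r effectively differs between the view and illumination geometries sampled inside and outside the line, and this two-point elimination no longer cancels exactly — which is the retrieval error the coupled-RT model inversion avoids.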
- Published
- 2018
- Full Text
- View/download PDF
49. The Impact of Model-Based Clutter Suppression on Cluttered, Aberrated Wavefronts.
- Author
-
Dei, Kazuyuki and Byram, Brett
- Subjects
- *
ULTRASONIC imaging , *IMAGE quality analysis , *WAVEFRONTS (Optics) , *COMPUTER simulation , *ALGORITHMS , *IMAGE reconstruction - Abstract
Recent studies reveal that both phase aberration and reverberation play a major role in degrading ultrasound image quality. We previously developed an algorithm for suppressing clutter, but we have not yet tested it in the context of aberrated wavefronts. In this paper, we evaluate our previously reported algorithm, called aperture domain model image reconstruction (ADMIRE), in the presence of phase aberration and in the presence of multipath scattering and phase aberration. We use simulations to investigate phase aberration corruption and correction in the presence of reverberation. As part of this paper, we observed that ADMIRE leads to suppressed levels of aberration. In order to accurately characterize aberrated signals of interest, we introduced an adaptive component to ADMIRE to account for aberration, referred to as adaptive ADMIRE. We then use ADMIRE, adaptive ADMIRE, and conventional filtering methods to characterize aberration profiles on in vivo liver data. These in vivo results suggest that adaptive ADMIRE could be used to better characterize a wider range of aberrated wavefronts. The aberration profiles' full-width at half-maximum of ADMIRE, adaptive ADMIRE, and postfiltered data with a 0.4 mm^-1 spatial cutoff frequency are 4.0 ± 0.28 mm, 2.8 ± 1.3 mm, and 2.8 ± 0.57 mm, respectively, while the average root-mean-square values in the same order are 16 ± 5.4 ns, 20 ± 6.3 ns, and 19 ± 3.9 ns, respectively. Finally, because ADMIRE suppresses aberration, we perform a limited evaluation of image quality using simulations and in vivo data to determine how ADMIRE and adaptive ADMIRE perform with and without aberration correction. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
50. Fast Simulation of Dynamic Ultrasound Images Using the GPU.
- Author
-
Storve, Sigurd and Torp, Hans
- Subjects
- *
ECHOCARDIOGRAPHY , *COMPUTER simulation , *ALGORITHMS , *IMAGE analysis , *ULTRASONIC imaging , *GRAPHICS processing units - Abstract
Simulated ultrasound data is a valuable tool for development and validation of quantitative image analysis methods in echocardiography. Unfortunately, simulation time can become prohibitive for phantoms consisting of a large number of point scatterers. The COLE algorithm by Gao et al. is a fast convolution-based simulator that trades simulation accuracy for improved speed. We present highly efficient parallelized CPU and GPU implementations of the COLE algorithm with an emphasis on dynamic simulations involving moving point scatterers. We argue that it is crucial to minimize the amount of data transfers from the CPU to achieve good performance on the GPU. We achieve this by storing the complete trajectories of the dynamic point scatterers as spline curves in the GPU memory. This leads to good efficiency when simulating sequences consisting of a large number of frames, such as B-mode and tissue Doppler data for a full cardiac cycle. In addition, we propose a phase-based subsample delay technique that efficiently eliminates flickering artifacts seen in B-mode sequences when COLE is used without enough temporal oversampling. To assess the performance, we used a laptop computer and a desktop computer, each equipped with a multicore Intel CPU and an NVIDIA GPU. Running the simulator on a high-end TITAN X GPU, we observed two orders of magnitude speedup compared to the parallel CPU version, three orders of magnitude speedup compared to simulation times reported by Gao et al. in their paper on COLE, and a speedup of 27000 times compared to the multithreaded version of Field II, using numbers reported in a paper by Jensen. We hope that by releasing the simulator as an open-source project we will encourage its use and further development. [ABSTRACT FROM AUTHOR]
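The convolution idea at the heart of COLE can be sketched in a few lines: scatterers are binned onto the fast-time axis and the amplitude train is convolved with the pulse. The GPU parallelization, spline-stored trajectories, and phase-based subsample delays described in the abstract are omitted, and all parameter values are illustrative.

```python
import numpy as np

# Convolution-based RF-line simulation in the spirit of COLE: each point
# scatterer is binned to the nearest sample of the fast-time axis and the
# resulting amplitude train is convolved with the pulse waveform.

fs, c = 100e6, 1540.0                        # sampling rate [Hz], sound speed [m/s]
depths = np.array([0.01, 0.02, 0.03])        # scatterer depths [m]
amps = np.array([1.0, 0.5, 0.8])             # scattering amplitudes

n = 4096
line = np.zeros(n)
idx = np.round(2 * depths / c * fs).astype(int)   # two-way delay -> sample index
np.add.at(line, idx, amps)                   # accumulate coincident scatterers

t = np.arange(-32, 33) / fs                  # 65-tap pulse support
pulse = np.sin(2 * np.pi * 5e6 * t) * np.hanning(len(t))  # windowed 5 MHz pulse
rf = np.convolve(line, pulse, mode='same')   # one simulated RF line
```

Rounding to the nearest sample is exactly the approximation that causes the flickering in dynamic B-mode sequences mentioned above; the paper's phase-based subsample delay replaces that rounding with a fractional delay.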
- Published
- 2017
- Full Text
- View/download PDF