16,254 results
Search Results
102. A Theoretical Framework for Stability Regions for Standing Balance of Humanoids Based on Their LIPM Treatment.
- Author
-
Kim, Jung Hoon, Lee, Jongwoo, and Oh, Yonghwan
- Subjects
MOLECULAR force constants ,CENTER of mass ,EXPONENTIAL stability ,STABILITY criterion ,PENDULUMS - Abstract
The aim of this paper is to construct a theoretical framework for stability analysis relevant to the standing balance of humanoids on top of the linear inverted pendulum model, in which the dynamics between the center of mass (CoM) and the zero moment point (ZMP) are treated. Based on the well-known sufficient condition that the contact between the ground and the support leg is stable if the corresponding ZMP always lies inside the supporting region, this paper characterizes three types of associated stability regions. More precisely, assuming no external force disturbances affect the motion of the humanoid, the stability region of the initial CoM position and velocity values can be explicitly computed by solving a finite number of linear inequalities. The stability regions for time-invariant force disturbances, such as impulsive and constant force disturbances, are also treated: the former is obtained exactly through a finite number of linear inequalities, while the latter is derived approximately by using an idea of truncation. Furthermore, time-varying force disturbances of finite energy and finite amplitude are considered, and their maximum admissible $l_{2}$ and $l_{\infty}$ norms are computed, where the former can be obtained exactly by solving a discrete-time Lyapunov equation while the latter is derived approximately through truncation. It is further shown, for both truncation ideas, that the approximate stability regions converge to the exact stability regions at an exponential rate in the truncation parameter $N$. Finally, the effectiveness of the proposed computation methods is demonstrated through simulation results. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
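The core computation in this abstract (a stability region of initial CoM states defined by finitely many linear inequalities on the ZMP) can be illustrated with a toy sketch. The feedback gains, foot half-length, and Euler discretization below are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Toy LIPM sketch: CoM dynamics xdd = w2*(x - p), w2 = g/z, with a
# hypothetical linear ZMP controller p = kp*x + kd*v (gains assumed,
# not taken from the paper). The closed loop is linear, so "ZMP stays
# inside the foot for N steps" is a finite set of linear inequalities
# in the initial state (x0, v0).
g, z, dt, N = 9.81, 0.9, 0.01, 300
w2 = g / z
kp, kd = 3.0, 0.5          # assumed feedback gains
d = 0.10                   # assumed half-length of the support foot [m]

A = np.array([[1.0, dt],
              [w2 * (1.0 - kp) * dt, 1.0 - w2 * kd * dt]])  # Euler step
C = np.array([kp, kd])     # ZMP as a linear function of the state

def in_stability_region(x0, v0):
    """True if |ZMP| <= d holds at every one of the N steps."""
    s = np.array([x0, v0])
    for _ in range(N):
        if abs(C @ s) > d:
            return False
        s = A @ s
    return True
```

Sweeping (x0, v0) over a grid with `in_stability_region` traces out the kind of region the paper computes exactly from the linear inequalities.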
103. Vanishing Flats: A Combinatorial Viewpoint on the Planarity of Functions and Their Application.
- Author
-
Li, Shuxing, Meidl, Wilfried, Polujan, Alexandr, Pott, Alexander, Riera, Constanza, and Stanica, Pantelimon
- Subjects
VECTOR spaces ,NONLINEAR functions ,UNIFORMITY ,FINITE fields ,POLYNOMIALS - Abstract
For a function $f$ from $\mathbb{F}_{2}^{n}$ to $\mathbb{F}_{2}^{n}$, the planarity of $f$ is usually measured by its differential uniformity and differential spectrum. In this paper, we propose the concept of vanishing flats, which supplies a combinatorial viewpoint on planarity. First, the number of vanishing flats of $f$ can be regarded as a measure of the distance between $f$ and the set of almost perfect nonlinear functions. In some cases, the number of vanishing flats serves as an "intermediate" concept between differential uniformity and the differential spectrum: it contains more information than the differential uniformity, but less than the differential spectrum. Second, the set of vanishing flats forms a combinatorial configuration called a partial quadruple system, since it conveys detailed structural information about $f$. We initiate this study by considering the number of vanishing flats and the partial quadruple systems associated with monomials and Dembowski-Ostrom polynomials. In addition, we present an application of vanishing flats to the partition of a vector space into disjoint equidimensional affine spaces. We conclude the paper with several further questions and challenges. [ABSTRACT FROM AUTHOR]
- Published
- 2020
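The definition is concrete enough to compute directly for tiny fields: a vanishing flat of $f$ is a four-element set $\{x_1,x_2,x_3,x_4\}$ with $x_1+x_2+x_3+x_4=0$ and $f(x_1)+f(x_2)+f(x_3)+f(x_4)=0$. A brute-force count over $\mathbb{F}_2^3$ (with GF(8) built on the irreducible polynomial $x^3+x+1$, my choice for illustration) shows the APN monomial $x^3$ has no vanishing flats, while an $\mathbb{F}_2$-linear monomial such as $x^2$ vanishes on every flat:

```python
from itertools import combinations

def gf8_mul(a, b):
    """Multiplication in GF(2^3) modulo x^3 + x + 1 (0b1011)."""
    r = 0
    for _ in range(3):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011
    return r

def count_vanishing_flats(f, n=3):
    """Count 4-sets {x1,x2,x3,x4} with XOR-sum 0 on which f also sums to 0."""
    count = 0
    for x1, x2, x3 in combinations(range(1 << n), 3):
        x4 = x1 ^ x2 ^ x3
        # x4 > x3 guarantees distinctness and counts each flat exactly once
        if x4 > x3 and f(x1) ^ f(x2) ^ f(x3) ^ f(x4) == 0:
            count += 1
    return count

cube = lambda x: gf8_mul(gf8_mul(x, x), x)   # APN monomial x^3
square = lambda x: gf8_mul(x, x)             # F_2-linear monomial x^2
```

Here `count_vanishing_flats(cube)` is 0 (so $x^3$ is APN), while `count_vanishing_flats(square)` is 14, the total number of 2-flats in $\mathbb{F}_2^3$.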
104. Deep Learning for Magnetic Field Estimation.
- Author
-
Khan, Arbaaz, Ghorbanian, Vahid, and Lowther, David
- Subjects
DEEP learning ,MAGNETIC fields ,MAXWELL equations ,PERMANENT magnet motors ,SUPERVISED learning ,ARTIFICIAL neural networks - Abstract
This paper investigates the feasibility of novel data-driven deep learning (DL) models to predict the solution of Maxwell’s equations for low-frequency electromagnetic (EM) devices. With ground truth (empirical evidence) data being generated from a finite-element analysis solver, a deep convolutional neural network is trained in a supervised manner to learn a mapping for magnetic field distribution for topologies of different complexities of geometry, material, and excitation, including a simple coil, a transformer, and a permanent magnet motor. Preliminary experiments show DL model predictions in close agreement with the ground truth. A probabilistic model is introduced to improve the accuracy and to quantify the uncertainty in the prediction, based on Monte Carlo dropout. This paper establishes a basis for a fast and generalizable data-driven model used in the analysis, design, and optimization of EM devices. [ABSTRACT FROM AUTHOR]
- Published
- 2019
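The Monte Carlo dropout idea mentioned in the abstract is easy to sketch: keep dropout active at inference and treat the spread of repeated stochastic forward passes as the uncertainty estimate. The tiny random-weight network below is only a placeholder for the trained field-prediction CNN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder two-layer network standing in for the trained CNN
# (weights are random; a real model would be trained on FEA data).
W1 = rng.normal(size=(16, 4))
W2 = rng.normal(size=(1, 16))

def mc_dropout_predict(x, T=200, p_drop=0.2):
    """T stochastic forward passes with dropout kept ON at inference.
    Returns the predictive mean and std (the uncertainty estimate)."""
    preds = []
    for _ in range(T):
        h = np.maximum(W1 @ x, 0.0)              # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop      # Bernoulli dropout mask
        h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
        preds.append((W2 @ h)[0])
    preds = np.array(preds)
    return preds.mean(), preds.std()

mean, std = mc_dropout_predict(np.array([0.5, -1.0, 0.3, 0.8]))
```

A large `std` flags inputs (e.g., unusual geometries) where the surrogate's field prediction should not be trusted.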
105. ISI/ITI Turbo Equalizer for TDMR Using Trained Local Area Influence Probabilistic Model.
- Author
-
Sun, Xueliang, Shen, Jinlu, Belzer, Benjamin J., Sivakumar, Krishnamoorthy, James, Ashish, Chan, Kheong Sann, and Wood, Roger
- Subjects
MAGNETIC recorders & recording ,INTERSYMBOL interference ,PROBABILISTIC number theory ,ERROR rates ,LINEAR systems ,DETECTORS ,ITERATIVE decoding - Abstract
In this paper, a local area influence probabilistic (LAIP) detector for estimating magnetic grain interactions with coded data bits in two-dimensional magnetic recording is combined with a 2-D Bahl–Cocke–Jelinek–Raviv (BCJR)-based detector for joint removal of intertrack interference (ITI) and intersymbol interference (ISI). The LAIP detector sends log-likelihood ratio estimates of coded bits and an estimate of the local ISI/ITI convolution mask to a BCJR-based ISI/ITI detector followed by an irregular-repeat-accumulate decoder. Simulation results on a random Voronoi grain media model with ISI and ITI show that the concatenated LAIP/BCJR system, which detects three tracks simultaneously, achieves user information bit areal densities competitive with or higher than those reported in a previous paper that employed the LAIP detector alone on a Voronoi grain channel without ISI/ITI. Simulation results on a grain-flipping-probability media model based on micromagnetic simulations show that the proposed detector achieves an 11.3% bit error rate reduction compared with a recently proposed system using a 2-D linear equalizer followed by a two-track BCJR detector with 2-D pattern-dependent noise prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2019
106. Holistic Appraisal of Modeling Installed Antennas for Aerospace Applications.
- Author
-
Vukovic, Ana, Sewell, Phillip, and Benson, Trevor M.
- Subjects
MIMO systems ,ANTENNA arrays ,ANTENNA radiation patterns ,FINITE element method ,NUMERICAL analysis - Abstract
This paper uses the unstructured transmission line modeling method to investigate near-field interactions between a broadband microwave antenna and a platform that arise as a result of antenna installation. The antenna, feed line, and platform are represented by a common meshed model and simulated using a single time-domain numerical method. This paper aims to establish guidelines on how to achieve high accuracy when modeling both the near and far fields of an antenna while prioritizing computational resources. By isolating critical features such as the feed line and selected fine details of the antenna geometry, this paper assesses how accurately these fine features need to be described in the model and how they affect the return loss and far-field pattern of the antenna. The size of the platform is varied from small to medium (up to 10 wavelengths), and its impact on the antenna performance is assessed. Finally, the conclusions of the study are applied to an example of an antenna installed in the leading edge of an aircraft wing, with and without a protective radome cover. [ABSTRACT FROM AUTHOR]
- Published
- 2019
107. Similarity Domains Machine for Scale-Invariant and Sparse Shape Modeling.
- Author
-
Ozer, Sedat
- Subjects
IMAGE processing ,KERNEL functions ,GEOMETRIC approach ,SHAPE analysis (Computational geometry) ,MACHINE learning - Abstract
We present an approach to extend the functionality and use of kernel machines in image processing applications. We introduce a novel way to design kernel machines with spatial properties and demonstrate, as a proof of concept, how these properties broaden the possibilities for using kernel machines in image processing. In this paper, we demonstrate four particular extensions: 1) how to model shapes efficiently with spatially computed kernel parameters in a geometrically scalable way; 2) how to visualize the kernel parameters precisely and intuitively on binary 2D shapes; 3) how to construct a one-class classifier from the binary classifier in a straightforward manner without re-training; and 4) how to use the computed kernel parameters for filtering. The existing literature on kernel machines mostly focuses on estimating the optimal kernel parameters via additional cost function(s). In this paper, instead of employing an additional cost function to estimate the kernel-related parameters, we investigate an analytical solution that predicts the actual kernel parameters locally, and we show how to build a spatial kernel machine with this analytical approach. Classical kernel machines do not perform well on precise shape modeling with a low number of support vectors, as demonstrated in this paper. However, we demonstrate and visualize that our analytical approach provides a natural means of relating the kernel parameters to 2D shapes for sparse shape modeling, where the shape boundary represents the decision boundary. For that, we incorporate the selected kernel function's geometric properties as an additional constraint in the classifier's optimization problem by defining an easy-to-explain and intuitive concept: similarity domains.
In our experiments, we study and demonstrate how the resulting kernel machine enhances the capabilities of classical kernel machines, with applications to shape modeling, (geometrically) scaling the nonlinear decision boundary at various scales, and precise visualization of the kernel parameters in 2D images. [ABSTRACT FROM AUTHOR]
- Published
- 2019
108. Numerical Studies of Coulomb Collisions, Relaxation, and Debye Shielding by N-Body Simulation.
- Author
-
Wang, Cheng-Pu and Nishimura, Yasutaro
- Subjects
ELASTIC scattering ,CHEMICAL relaxation ,ELECTRON distribution ,PLASMA kinetic theory ,ANISOTROPY - Abstract
The basic kinetic processes of plasmas are studied by a first-principles N-body simulation. After confirming the Coulomb collisional relaxation process, the study is extended to investigate the dynamics of Debye shielding. Some key numerical techniques are discussed, including how to avoid singularities at near encounters of two charged particles. The electron distribution function in Debye shielding is revealed, which deviates from the conventional Maxwellian distribution function. [ABSTRACT FROM AUTHOR]
- Published
- 2019
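One standard way to avoid the near-encounter singularity alluded to in the abstract is Plummer softening, which replaces $1/r^2$ by $r/(r^2+\varepsilon^2)^{3/2}$. The sketch below is a generic illustration of the idea, not the authors' scheme:

```python
import numpy as np

def coulomb_accel(pos, charge, mass, eps=1e-2, k=1.0):
    """Pairwise Coulomb acceleration with Plummer softening eps,
    which keeps the force finite during close encounters."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                      # vectors from particle i to each j
        r2 = (d * d).sum(axis=1) + eps**2     # softened squared distance
        r2[i] = np.inf                        # exclude self-interaction
        # like charges repel: the force on i points opposite to d
        acc[i] = -(k * charge[i] / mass[i]) * (
            (charge / r2**1.5)[:, None] * d).sum(axis=0)
    return acc
```

For two equal positive charges on the x-axis the accelerations point apart, and the result stays finite even for coincident particles.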
109. Bottleneck Analysis to Reduce Primary Care to Specialty Care Referral Delay.
- Author
-
Zhong, Xiang, Prakash, Aditya Mahadev, Petty, Leanne, and James, Rita A.
- Subjects
PRIMARY care ,BOTTLENECKS (Manufacturing) ,MEDICAL specialties & specialists ,WORKFLOW management ,MARKOV processes - Abstract
Reducing delays between primary care and specialty care visits is critical to improving the continuity of ambulatory care delivery. To comply with referral protocols, personnel involved in patients' care pathways process and record pertinent information to ensure appropriate care is rendered; missing necessary information might cause "dropping of the baton" during the patient transition. The objective of this paper is to analyze the information flow along patients' primary care to specialty care referral pathways and to identify system bottlenecks, in order to enhance the workflow design and workforce configuration. A semi-Markov process is introduced to describe the information transitions, and the operations of the involved personnel are modeled as capacity-constrained service queues at every stage of the referral pathway. Analytical formulas are derived to evaluate the overall referral delay, and a continuous improvement method is developed to identify the most critical factor impeding the referral process. The proposed systems approach is applied to the clinics of a large academic medical center, and the analysis stresses the importance of building a health information system that supports breaking silos and adapting providers' workflows to the information system to facilitate smart and connected care delivery. Note to Practitioners—The ambulatory care delivery system is plagued by delays that dissatisfy patients, physicians, and other medical staff, and adversely affect patient outcomes. While many extant industrial engineering/operations research (IE/OR) articles focus on reducing physician appointment delays or waiting times during visits to a medical office, this paper addresses the lag between primary care visits and specialty care visits, i.e., the wait for access to specialty care once the primary care physician has decided to refer the patient.
This paper establishes a novel analytical framework to model primary care to specialty care referral processes, with an emphasis on the information transitions along the patient pathway. The rigorous analytical model unveils insights that might not be easily identified using qualitative approaches, and it enjoys higher computational efficiency than simulation methods. What emerges from the study is the need to integrate health information technology into medical personnel's workflows and to redesign the process to allow effective communication, which will lead to improved care, efficiency, and satisfaction. [ABSTRACT FROM AUTHOR]
- Published
- 2019
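The bottleneck logic described above can be caricatured with tandem M/M/1 stages: each personnel stage is a capacity-constrained queue, the total referral delay is the sum of stage sojourn times, and the bottleneck is the stage with the largest one. The paper's semi-Markov model is richer; the rates below are invented for illustration:

```python
def referral_delay(lam, mus):
    """Expected sojourn time through tandem M/M/1 stages with arrival
    rate lam and service rates mus, plus the bottleneck stage index."""
    assert all(mu > lam for mu in mus), "every stage must be stable"
    waits = [1.0 / (mu - lam) for mu in mus]        # M/M/1: W = 1/(mu - lam)
    return sum(waits), max(range(len(waits)), key=waits.__getitem__)

# 2 referrals/day arriving; three processing stages with different capacity
total, bottleneck = referral_delay(2.0, [5.0, 3.0, 4.0])
```

Here the middle stage (index 1) dominates the delay, so the continuous-improvement step would add capacity there first.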
110. Beam-RF Simulation via Partial Decomposition of the Maxwell Equations Part I: Mathematical Framework.
- Author
-
Jackson, Robert H.
- Subjects
RADIO frequency ,ORDINARY differential equations ,MAXWELL equations ,ELECTROMAGNETIC theory ,PARTIAL differential equations - Abstract
Simulating the interaction of RF waves with charged particle beams is difficult and computationally expensive, even on present high-end computers. There is a tradeoff between the fidelity and the complexity/expense of the two primary techniques, particle-in-cell and envelope methods. This paper presents an approach based on a partial decomposition of the Maxwell equations to achieve a fully second-order set of ordinary differential equations (ODEs) describing the interaction. It has the potential for self-consistent inclusion of ac and dc space-charge effects through the solution of evanescent ODEs, effects that are handled poorly, if at all, by existing envelope methods. The resulting technique provides flexibility in adjusting simulation fidelity versus computational cost. The developments presented in this paper establish a mathematical framework for this technique and test its core aspects by demonstrating accurate solution of the ODEs using analytic sources. [ABSTRACT FROM AUTHOR]
- Published
- 2019
111. Modeling and Simulation of the Effect of Cathode Gas Flow on the Lifetime and Performance of an Annular-Geometry Ion Engine.
- Author
-
Chen, Juanjuan, Zhang, Tianping, Liu, Mingzheng, Gu, Zengjie, Yang, Wei, and Yang, Le
- Subjects
PLASMA density ,PLASMA physics ,ELECTRODES ,CATHODES ,ION acoustic waves - Abstract
Past measurements of the plasma density and potential profiles near the exit of the keeper electrode in a hollow cathode device suggested that turbulent ion-acoustic fluctuations and ionization instability in the cathode plume significantly increase the energy of the ions that flow from this region. The lifetime of the keeper electrode is limited by sputtering caused by ion bombardment of the molybdenum surface exposed to the discharge plasma. Increases in the cathode gas flow reduce the amplitude of the fluctuations and the number and energy of the energetic ions, which decreases the erosion rate of the keeper electrode. However, as the cathode gas flow is raised for a given discharge current, the performance of a 5-kW annular-geometry ion engine (AGI-Engine) declines; there is thus a strong relationship between the performance and the lifetime of the ion thruster. To validate whether the 20-A hollow cathode satisfies the North-South station-keeping requirements of China's communication satellite platform, this paper analyzed the effect of the cathode gas flow on the performance and lifetime of the AGI-Engine. Unlike previous methods, this paper first tracked the movement of energetic ions generated in the plume of the hollow cathode, predicted erosion rates, found where the ions hit the keeper electrode, and then determined the amount of material that they sputtered. A review of past experimental results is presented first. Next, based on the existing experimental data, theoretical analysis and numerical calculations were performed to determine the optimum gas flow range and the performance curve of the 20-A hollow cathode. The results showed that the primary erosion of the keeper electrode was caused by impact from charge-exchange xenon (CEX) ions, which widened the cathode orifice over time.
However, heavy doubly charged xenon ($\mathrm{Xe}^{++}$) ions, which struck the keeper electrode more severely than CEX ions, were the most crucial factor limiting the lifetime of the 20-A hollow cathode, owing to their high energy and large mass. [ABSTRACT FROM AUTHOR]
- Published
- 2019
112. Forest Learning From Data and its Universal Coding.
- Author
-
Suzuki, Joe
- Subjects
GRAPHICAL modeling (Statistics) ,CODING theory ,STRUCTURAL learning theory ,SIGNAL processing ,BIOINFORMATICS - Abstract
This paper considers structure learning from data with $n$ samples of $p$ variables, assuming that the structure is a forest, using the Chow–Liu algorithm. Specifically, for incomplete data, we construct two model selection algorithms that complete in $O(p^{2})$ steps: one obtains a forest with the maximum posterior probability given the data, and the other obtains a forest that converges to the true one as $n$ increases. We show that the two forests are generally different when some values are missing. In addition, we present estimates on benchmark data sets to demonstrate that both algorithms work in realistic situations. Moreover, we derive the conditional entropy provided that no value is missing, and we evaluate the per-sample expected redundancy for the universal coding of incomplete data in terms of the number of non-missing samples. [ABSTRACT FROM AUTHOR]
- Published
- 2018
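For complete data, the Chow–Liu step this abstract builds on is short: estimate pairwise mutual information, then run a maximum-weight spanning procedure, keeping only positive-weight edges so the result is a forest rather than a forced spanning tree. The sketch below handles complete data only; the paper's contribution is the incomplete-data case:

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Plug-in estimate of mutual information between two sample columns."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * math.log((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

def chow_liu_forest(data):
    """data: one list of samples per variable. Kruskal-style greedy
    merging on mutual-information weights; non-positive edges are
    dropped, so the output is a forest."""
    p = len(data)
    edges = sorted(((mutual_info(data[i], data[j]), i, j)
                    for i in range(p) for j in range(i + 1, p)), reverse=True)
    parent = list(range(p))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    forest = []
    for w, i, j in edges:
        if w <= 1e-12:          # keep only positive-MI edges
            break
        ri, rj = find(i), find(j)
        if ri != rj:            # no cycles
            parent[ri] = rj
            forest.append((i, j))
    return forest
```

With one variable duplicated and one independent, only the duplicated pair is joined, leaving the independent variable as its own tree.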
113. A Comparison of Automated RF Circuit Design Methodologies: Online Versus Offline Passive Component Design.
- Author
-
Passos, Fabio, Roca, Elisenda, Castro-Lopez, Rafael, and Fernandez, Francisco V.
- Subjects
RADIO frequency integrated circuits ,SIMULATION methods & models ,DIGITAL electronics ,ELECTROMAGNETIC waves ,COMPUTATIONAL complexity - Abstract
In this paper, surrogate modeling techniques are applied to passive component modeling. These techniques are exploited to develop and compare two alternative strategies for automated radio-frequency circuit design. The first is a traditional approach in which passive components are designed during the optimization stage. The second, inspired by bottom-up circuit design methodologies, builds passive component Pareto-optimal fronts (POFs) prior to any circuit optimization. Afterward, these POFs are used as an optimized library from which the passive components are selected. This paper exploits the advantages of evolutionary computation algorithms to efficiently explore the circuit design space, and the accuracy and efficiency of surrogate models to model the passive components. [ABSTRACT FROM AUTHOR]
- Published
- 2018
114. A Basic Signal Analysis Approach for Magnetic Flux Leakage Response.
- Author
-
Huang, Song Ling, Peng, Lisha, Wang, Shen, and Zhao, Wei
- Subjects
MAGNETIC flux leakage ,COMPUTER simulation ,POINT defects ,FINITE element method ,SIGNAL processing - Abstract
Predicting the magnetic flux leakage (MFL) response is important for defect estimation. This paper first proposes the concept of a basic signal, based on the discovered combination property of the MFL response. The basic signal can be used to predict the MFL response conveniently via a basic signal combination method (BSCM), which is also put forward in this paper. In this method, the basic signal is calculated in advance, and the MFL response is then obtained by several transformation and combination operations. Both simulation and experimental results demonstrate the feasibility of the BSCM. Compared with traditional methods (the magnetic dipole method and the finite-element method), the BSCM performs well in both computational speed and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
115. Minimizing Expected Cycle Time of Stochastic Customer Orders Through Bounded Multi-Fidelity Simulations.
- Author
-
Zhao, Yaping, Xu, Xiaoyun, and Li, Haidong
- Subjects
STOCHASTIC models ,MATHEMATICAL optimization ,SIMULATION methods & models ,COMPUTER simulation ,SYNCHRONIZATION - Abstract
This paper considers the scheduling of stochastic customer orders to minimize the expected cycle time. Customer orders dynamically arrive at a machine station, and each order consists of multiple product types. Random workloads are required by each product type, and the workloads are assigned to a set of unrelated parallel machines in the station to be processed. The objective is to obtain the minimal long-run expected order cycle time through proper workload assignments. In view of the difficulty of evaluating the objective function, this paper models the targeted problem as a simulation optimization problem and proposes to solve it under the multifidelity model framework. To improve the efficiency of evaluating candidate solutions, a low-fidelity model is constructed to select solutions with better performance for high-fidelity simulations. The effectiveness of this low-fidelity model is demonstrated through a series of theoretical results. A simulation optimization algorithm, named Bound-Multi-fidelity Optimization with Ordinal Transformation and Optimal Sampling (Bound-MO2TOS), is developed by taking advantage of the properties of the low-fidelity model. Numerical experiments are conducted to evaluate the performance of the proposed algorithm against three other well-known simulation optimization algorithms from the literature. Results indicate that Bound-MO2TOS outperforms all the other tested algorithms and that its performance is robust against changes in problem scale. Note to Practitioners—Customer order scheduling models have wide applications in industries where a single order may contain multiple product types and the entire order requires one shipment. In many applications, such as pigment and dye manufacturing, auto repair, and groceries, consideration of customer orders rather than individual jobs is often preferred.
Given the stochastic nature and the synchronization constraints of the problem, a multifidelity simulation algorithm named Bound-Multi-fidelity Optimization with Ordinal Transformation and Optimal Sampling (Bound-MO2TOS) is proposed. Experimental results suggest that the proposed algorithm outperforms three other popular simulation optimization algorithms under a variety of scenarios. Further improvement can be made in refining the computational resource distribution process of Bound-MO2TOS; in future research, this improvement will be elaborated more thoroughly so that practitioners can utilize it to better coordinate their production. [ABSTRACT FROM AUTHOR]
- Published
- 2018
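The screening idea (use a cheap low-fidelity model to rank candidates, then spend the expensive high-fidelity budget only on the top of the ranking) can be sketched generically. This is not the Bound-MO2TOS algorithm itself, just the ordinal-screening backbone, with invented toy objectives:

```python
def screen_then_evaluate(candidates, lo_eval, hi_eval, hi_budget):
    """Rank every candidate with the cheap low-fidelity model, then run
    expensive high-fidelity evaluations only on the hi_budget best."""
    shortlist = sorted(candidates, key=lo_eval)[:hi_budget]
    return min(shortlist, key=hi_eval)

# Toy example: the low-fidelity model is a biased version of the truth,
# but it preserves enough of the ordering to shortlist the right region.
hi = lambda x: (x - 3.0) ** 2          # expensive "true" objective
lo = lambda x: (x - 3.2) ** 2          # cheap, slightly biased surrogate
best = screen_then_evaluate(range(10), lo, hi, hi_budget=3)
```

Only 3 of the 10 candidates ever receive a high-fidelity evaluation, yet the true optimum (x = 3) is still found.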
116. Structural Fault Detection and Isolation in Hybrid Systems.
- Author
-
Khorasgani, Hamed and Biswas, Gautam
- Subjects
STRUCTURAL analysis (Engineering) ,FAULT indicators ,HYBRID systems ,DETECTORS ,MANUFACTURING processes ,POWER plants - Abstract
This paper develops a structural diagnosis approach for fault detection and isolation in hybrid systems. Hybrid systems are characterized by continuous behaviors interspersed with discrete mode changes, making the analysis of their behaviors quite complex. In this paper, we address the mode detection problem in hybrid systems as the first step in diagnoser design. The proposed method uses analytic redundancy to detect the operating mode of the system even in the presence of system faults. We define hybrid minimal structurally overdetermined (HMSO) sets for hybrid systems. For residual generation, we formulate the HMSO selection problem as a binary integer linear programming optimization problem that minimizes the number of selected HMSOs and reduces the online computational costs of the diagnosis algorithm. The proposed structural approach does not require pre-enumeration of all possible modes in the diagnoser design step. Therefore, our approach is feasible for hybrid systems with a large number of switching elements, and hence a large number of operating modes. The case study demonstrates the effectiveness of our approach. We discuss the results of our case study and present directions for future work. Note to Practitioners—Developing feasible approaches for online monitoring, fault detection, and fault isolation of complex hybrid and embedded systems, such as automobiles, aircraft, power plants, and manufacturing processes, is essential to securing their safe, reliable, and efficient operation. Frequent changes in the operational modes of these systems, caused by operator actions (such as changing gears in an automobile) or environmental changes (such as driving on a wet or icy road), make the fault detection and isolation task in these systems challenging. It is important to detect and isolate faults in all operating modes and, at the same time, not mistake a mode change for a fault in the system.
In this paper, we propose an approach that exploits the equation structure of hybrid system behavior to combine mode detection and diagnosis in nonlinear hybrid systems. The proposed algorithm is scalable and efficient. We demonstrate its effectiveness using a case study of a reverse osmosis subsystem in an advanced life support system for long-duration manned space missions. Important challenges that can affect the success of our approach include the need for sufficiently detailed hybrid models that capture nominal and faulty behavior, and for a sufficient number of sensors to make simultaneous mode detection and fault isolation possible. [ABSTRACT FROM AUTHOR]
- Published
- 2018
117. Modeling and System-Level Simulation for Nonideal Conductance Response of Synaptic Devices.
- Author
-
Gi, Sang-Gyun, Yeo, Injune, Chu, Myonglae, Moon, Kibong, Hwang, Hyunsang, and Lee, Byung-geun
- Subjects
ELECTRIC admittance ,MICROELECTROMECHANICAL systems - Abstract
This paper presents a new method for modeling the nonideal conductance response (CR) of synaptic devices. Unlike previous studies, which rely on physical device properties for modeling, this paper uses only the measured CR data. This allows the proposed modeling method to be applied easily to various types of synaptic devices without considering the unique physical properties of each device. An efficient piecewise linear approximation method, which offers a tradeoff between the computational complexity and simulation accuracy of neural networks, is also presented to generate a linear device model from nonlinear CR data. In addition, model parameters that reflect the nonideal characteristics of the CR, such as abrupt and asymmetric conductance changes, conductance variation, and a limited conductance dynamic range, are introduced to evaluate network performance in the presence of these nonidealities. By adjusting the model parameters, the desired CR satisfying the network performance requirements can be derived for device development. A three-layer neural network employing the device model has been designed and trained on the MNIST data set in order to demonstrate the application of the model to system-level simulations and verify the effectiveness of the modeling method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
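The measured-data-only modeling idea can be sketched as a piecewise linear fit of a potentiation curve: pick knots, interpolate between them, and trade segment count against accuracy. The saturating exponential below is synthetic stand-in data, not a real device measurement:

```python
import numpy as np

pulses = np.arange(100.0)
g_meas = 1.0 - np.exp(-pulses / 30.0)   # synthetic saturating CR data

def piecewise_cr_model(n_segments):
    """Approximate the measured CR with n_segments linear pieces;
    more pieces -> better simulation accuracy at higher cost."""
    knots = np.linspace(0.0, pulses[-1], n_segments + 1)
    return knots, np.interp(knots, pulses, g_meas)

def predict(n, knots, g_knots):
    return np.interp(n, knots, g_knots)

k8, g8 = piecewise_cr_model(8)
k2, g2 = piecewise_cr_model(2)
err8 = np.max(np.abs(predict(pulses, k8, g8) - g_meas))
err2 = np.max(np.abs(predict(pulses, k2, g2) - g_meas))
```

Going from 2 to 8 segments shrinks the worst-case error by roughly an order of magnitude here, which is the complexity/accuracy tradeoff the abstract describes.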
118. Optimizing Computational Mission Operation by Periodic Backups and Preventive Replacements.
- Author
-
Levitin, Gregory, Xing, Liudong, and Dai, Yuanshun
- Subjects
COMPUTATIONAL learning theory ,DATA transmission systems - Abstract
This paper models a warm standby system in which a single element is online performing a specified mission task (e.g., a computing task) and is subject to corrective replacement (CR) by an available standby element upon its failure. During the mission, preventive replacements (PRs) are also performed according to a predetermined policy to renew the aged or worn online element before its actual failure. In addition, to facilitate an effective restoration of system function in the event of a CR or PR, backups are performed periodically so that the mission task can be resumed from the last successful backup point instead of from scratch. The mission succeeds if the specified mission task is accomplished; in other words, the mission fails if no operating elements remain prior to the completion of the mission task. In this paper, we make new contributions by first proposing an event-transition-based numerical method to evaluate mission performance indices of the considered standby system subject to periodic backups, CR, and PR. The mission success probability (MSP), expected mission completion time, expected mission operation cost (EMC), and expected uncompleted work fraction are evaluated. Based on the suggested evaluation algorithm, we make another contribution by formulating and solving optimization problems that determine the optimal backup-PR policy, or the optimal combination of element activation sequencing and backup-PR policy, to maximize the MSP or minimize the EMC. The influence of element performance and reliability parameters, as well as data backup and retrieval complexity parameters, on the optimal operation policy is investigated. The findings of this paper can guide optimal decision making on policies related to element sequencing, backups, and preventive maintenance planning, contributing toward the reliable and cost-effective design and operation of standby computing systems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
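Although the paper evaluates these indices with an exact event-transition method, the system it describes is easy to mimic with a Monte Carlo sketch: a standby pool, exponential element failures, periodic backups that cost time, and rollback to the last backup on each corrective replacement. All rates and costs below are invented, and PR is omitted for brevity:

```python
import random

def mission_mc(task=100.0, backup_every=20.0, backup_cost=1.0,
               n_elements=3, fail_rate=0.01, runs=2000, seed=1):
    """Estimate mission success probability (MSP) and expected
    completion time for a standby system with periodic backups:
    after a failure, work resumes from the last backup point."""
    rng = random.Random(seed)
    successes, times = 0, []
    for _ in range(runs):
        done, t, spares = 0.0, 0.0, n_elements
        while spares > 0 and done < task:
            life = rng.expovariate(fail_rate)        # element lifetime
            work = min(life, task - done)
            # backups taken while this element was making progress
            n_bk = int((done + work) // backup_every - done // backup_every)
            t += work + n_bk * backup_cost
            if life < task - done:                   # corrective replacement
                done = (done + work) // backup_every * backup_every  # roll back
                spares -= 1
            else:
                done = task
        if done >= task:
            successes += 1
            times.append(t)
    msp = successes / runs
    return msp, (sum(times) / len(times)) if times else float("inf")

msp, ect = mission_mc()
```

Sweeping `backup_every` in this sketch exposes the tradeoff the paper optimizes exactly: frequent backups waste time on backup cost, while rare backups waste time on re-done work after each failure.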
119. An Efficient Four-Parameter Affine Motion Model for Video Coding.
- Author
-
Li, Li, Li, Houqiang, Liu, Dong, Li, Zhu, Yang, Haitao, Lin, Sixin, Chen, Huanbang, and Wu, Feng
- Subjects
MATHEMATICAL models ,MOTION ,VIDEO coding ,COMPUTATIONAL complexity ,MOTION estimation (Signal processing) ,COMPUTER algorithms - Abstract
In this paper, we study a simplified affine motion model-based coding framework to overcome the limitations of the translational motion model while maintaining low computational complexity. The proposed framework makes three key contributions. First, we propose reducing the number of affine motion parameters from six to four. The proposed four-parameter affine motion model can not only handle most of the complex motions in natural videos but also saves the bits for two parameters. Second, to encode the affine motion parameters efficiently, we propose two motion prediction modes, i.e., an advanced affine motion vector prediction scheme combined with a gradient-based fast affine motion estimation algorithm, and an affine model merge scheme, where the latter attempts to reuse the affine motion parameters (instead of the motion vectors) of neighboring blocks. Third, we propose two fast affine motion compensation algorithms. One is one-step sub-pixel interpolation, which reduces the computation of each interpolation. The other is interpolation-precision-based adaptive block size motion compensation, which performs motion compensation at the block level rather than the pixel level to reduce the number of interpolations. Our proposed techniques have been implemented on top of the state-of-the-art High Efficiency Video Coding standard, and the experimental results show that the proposed techniques altogether achieve, on average, 11.1% and 19.3% bit savings for random access and low-delay configurations, respectively, on typical video sequences with rich rotation or zooming motions. Meanwhile, the increases in computational complexity of both the encoder and the decoder are within an acceptable range. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
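The four-parameter affine model summarized above admits a compact illustration. The sketch below uses my own illustrative parameterization (not code from the paper): `a` and `b` jointly encode zoom and rotation, while `c` and `d` are the translation components.

```python
def affine_mv(x, y, a, b, c, d):
    """Motion vector at pixel (x, y) under a four-parameter affine
    (zoom + rotation + translation) model; an illustrative form,
    not the exact parameterization used in the paper."""
    mvx = (a - 1.0) * x + b * y + c
    mvy = -b * x + (a - 1.0) * y + d
    return mvx, mvy
```

With a = 1 and b = 0 the model degenerates to pure translation, which is why it subsumes the classic translational motion model.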
120. RF Transient Analysis and Stabilization of the Phase and Energy of the Proposed PIP-II LINAC.
- Author
-
Edelen, J. P. and Chase, B. E.
- Subjects
RADIO frequency ,PARTICLE accelerators ,CALIBRATION ,PROTON beams ,PARTICLE beams - Abstract
This paper describes a recent effort to develop and benchmark a simulation tool for the analysis of radio frequency (RF) transients and their compensation in an H− linear accelerator. Existing tools in this area either focus on electron linear accelerators (LINACs) or lack fundamental details about the low-level radio frequency system that are necessary to provide realistic performance estimates. In this paper, we begin with a discussion of our computational models, followed by benchmarking against existing beam-dynamics codes and measured data. We then analyze the effect of RF transients and their compensation in the Proton Improvement Plan-II LINAC, followed by an analysis of calibration errors and of how a Newton's-method-based feedback scheme can be used to regulate the beam energy to within the specified limits. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
121. A Channel Model and Simulation Technique for Reproducing Channel Realizations With Predefined Stationary or Non-Stationary PSD.
- Author
-
Parra-Michel, Ramon, Vazquez Castillo, Javier, Vela-Garcia, Luis Rene, Kontorovich, Valeri, and Pena-Campos, Fernando
- Abstract
Recent communications standards, such as vehicle-to-vehicle and fifth generation, include applications where the transmitted signal encounters rapid changes of propagation scenarios, resulting in wireless links characterized as non-stationary (NS) channels. Hence, channel models that correctly explain and represent the measured time-varying channel statistics, and their associated simulation methods for testing purposes, are all required. Although the body of works devoted to NS channel modeling is vast, due to the complexity and variety of this problem, the provided NS statistics are defined only within a limited observation time, and therefore, the generated channel realizations do not include the changes between scenarios. In light of this problem, this paper introduces a channel model that mimics the continuous change of the mobile propagation channel via a continual renewal of channel parameters, in which all stationary and NS channels are represented under a unified structure. Theoretical and simulation results provided in this paper confirm that the proposed model reproduces stationary models with high accuracy. In addition, NS channel realizations with predefined time-varying power spectral density and time-varying envelope distributions are also shown in this paper, providing a means for testing modern communications systems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
122. On Evaluating Runtime Performance of Interactive Visualizations.
- Author
-
Bruder, Valentin, Muller, Christoph, Frey, Steffen, and Ertl, Thomas
- Subjects
BIG data ,VISUALIZATION ,SCIENTIFIC visualization ,DATA visualization ,GRAPHICS processing units ,STATISTICS - Abstract
As our field matures, evaluation of visualization techniques has extended from reporting runtime performance to studying user behavior. Consequently, many methodologies and best practices for user studies have evolved. While maintaining interactivity continues to be crucial for the exploration of large data sets, no similar methodological foundation for evaluating runtime performance has been developed. Our analysis of 50 recent visualization papers on new or improved techniques for rendering volumes or particles indicates that only a very limited set of parameters, such as different data sets, camera paths, viewport sizes, and GPUs, is investigated, which makes comparison with other techniques or generalization to other parameter ranges at least questionable. To derive a deeper understanding of qualitative runtime behavior and quantitative parameter dependencies, we developed a framework for the most exhaustive performance evaluation of volume and particle visualization techniques that we are aware of, including millions of measurements on ten different GPUs. This paper reports on our insights from statistical analysis of this data, discussing independent and linear parameter behavior as well as non-obvious effects. We give recommendations for best practices when evaluating runtime performance of scientific visualization applications, which can serve as a starting point for more elaborate models of performance quantification. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
123. Probabilistic Modeling for Optimization of Resource Mix With Variable Generation and Storage.
- Author
-
Gao, Weixuan and Gorinevsky, Dimitry
- Subjects
INFORMATION theory ,MACHINE learning ,INFORMATION modeling ,STORAGE ,STATISTICAL learning - Abstract
Renewables, such as solar and wind generation, combined with storage are becoming a key part of the modern grid. This paper develops probabilistic tools for the analysis of grid reliability with such variable generation resources. The developed tools improve the speed and accuracy of reliability analysis compared to the usual Monte Carlo methods. This is achieved by using an extension of the well-known convolution method that is applicable to interdependent variables. The interdependent distributions are obtained from historical data using machine learning of quantile models. The paper presents a novel approach, related to information theory, to the analysis of the reliability contribution of storage based on these models. The developed tools are demonstrated in several example scenarios for the ISO New England service area. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
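The convolution method mentioned in the abstract above has a simple classical core: the distribution of the sum of two independent discrete resources is the convolution of their PMFs. A minimal sketch of that step (the paper's contribution is the extension to interdependent variables, which this does not capture):

```python
def convolve_pmfs(p, q):
    """Aggregate PMF of the sum of two independent discrete random
    variables: classical convolution of their probability mass
    functions, indexed from 0."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj   # P(sum = i + j) accumulates p_i * q_j
    return r
```

For example, two identical units that are each on or off with probability 0.5 combine into a 0/1/2-unit distribution of (0.25, 0.5, 0.25).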
124. Analytical Models of the Performance of IEEE 802.11p Vehicle to Vehicle Communications.
- Author
-
Sepulcre, Miguel, Gonzalez-Martin, Manuel, Gozalvez, Javier, Molina-Masegosa, Rafael, and Coll-Perales, Baldomero
- Subjects
TRAFFIC density ,DATA packeting ,POWER transmission ,ACCESS control ,COMMUNICATION models - Abstract
The critical nature of vehicular communications requires their extensive testing and evaluation. Analytical models can represent an attractive and cost-effective approach for such evaluation if they adequately capture all the underlying effects that impact the performance of vehicular communications. Several analytical models have been proposed to date for vehicular communications based on the IEEE 802.11p (or DSRC) standard. However, existing models normally model the MAC (Medium Access Control) in detail and generally simplify the propagation and interference effects, which reduces their value as an alternative for evaluating the performance of vehicular communications. This paper addresses this gap and presents new analytical models that accurately model the performance of vehicle-to-vehicle communications based on the IEEE 802.11p standard. The models jointly account for a detailed modeling of the propagation and interference effects, as well as the impact of the hidden terminal problem. The models quantify the PDR (Packet Delivery Ratio) as a function of the distance between transmitter and receiver. The paper also presents new analytical models to quantify the probability of the four different types of packet errors in IEEE 802.11p. In addition, the paper presents the first analytical model capable of accurately estimating the Channel Busy Ratio (CBR) metric even under high channel load levels. All the analytical models are validated by means of simulation for a wide range of parameters, including traffic densities, packet transmission frequencies, transmission power levels, data rates, and packet sizes. An implementation of the models is provided openly to facilitate their use by the community. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
125. Harmonic Power-Flow Study of Polyphase Grids With Converter-Interfaced Distributed Energy Resources—Part I: Modeling Framework and Algorithm.
- Author
-
Kettner, Andreas Martin, Reyes-Chamorro, Lorenzo, Maria Becker, Johanna Kristin, Zou, Zhixiang, Liserre, Marco, and Paolone, Mario
- Abstract
Power distribution systems are experiencing a large-scale integration of Converter-Interfaced Distributed Energy Resources (CIDERs). This complicates the analysis and mitigation of harmonics, whose creation and propagation are facilitated by the interactions of converters and their controllers through the grid. In this paper, a method for the calculation of the so-called Harmonic Power-Flow (HPF) in three-phase grids with CIDERs is proposed. The distinguishing feature of this HPF method is the generic and modular representation of the system components. Notably, as opposed to most of the existing approaches, the coupling between harmonics is explicitly considered. The HPF problem is formulated by combining the hybrid nodal equations of the grid with the closed-loop transfer functions of the CIDERs, and is solved using the Newton-Raphson method. The grid components are characterized by compound electrical parameters, which make it possible to represent both transposed and non-transposed lines. The CIDERs are represented by modular linear time-periodic systems, which makes it possible to treat both grid-forming and grid-following control laws. The method's accuracy and computational efficiency are confirmed via time-domain simulations of the CIGRÉ low-voltage benchmark microgrid. This paper is divided into two parts, which focus on the development (Part I) and the validation (Part II) of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
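The HPF mismatch equations above are solved with the Newton-Raphson method. A minimal scalar version of that iteration, purely illustrative (the actual problem is a large coupled system over harmonics and phases):

```python
def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Scalar Newton-Raphson: iterate x <- x - f(x)/f'(x) until the
    residual |f(x)| falls below tol.  In the HPF setting, f would be
    the vector of mismatch equations and jac its Jacobian."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / jac(x)
    return x
```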
126. Lightning Surge Analysis of HV Transmission Line: Bias AC-Voltage Effect on Multiphase Back-Flashover.
- Author
-
Yamanaka, Akifumi, Nagaoka, Naoto, and Baba, Yoshihiro
- Subjects
ELECTRIC lines ,FLASHOVER ,LIGHTNING ,TIME-domain analysis ,FINITE difference method ,NUMERICAL analysis - Abstract
This paper discusses back-flashover (BFO) phenomena in a high-voltage (HV) vertical double-circuit transmission line by means of numerical simulations. Single- and multiphase BFOs are analyzed considering 24 cases of AC-voltage angles and the nonlinear characteristics of flashovers, taking advantage of a time-domain circuit analysis method. The models used in the circuit analysis are the TEM-delay model, which can take into account the non-TEM characteristics of the tower and line, and the conventional models. Prior to the BFO analysis, the characteristics of the circuit models are compared and discussed against numerical electromagnetic analysis results. The BFO analysis clarifies that the occurrence probability of the multiphase BFO depends heavily on the bias AC-voltages. The relation between measured BFO phases and AC-voltages is well explained by the TEM-delay model with a valid lightning current magnitude. The conventional circuit models could have underestimated the occurrence of BFOs in HV transmission lines. The analysis and discussion presented in this paper can be utilized for an advanced evaluation of the lightning performance of transmission lines that considers the seriousness of lightning accidents. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
127. Coded Caching With Device Computing in Mobile Edge Computing Systems.
- Author
-
Li, Yingjiao, Chen, Zhiyong, and Tao, Meixia
- Abstract
Edge caching and computing have been regarded as an efficient approach to tackle the wireless spectrum crunch problem. In this paper, we design a general coded caching with device computing strategy for computational tasks, e.g., virtual reality (VR) rendering, to minimize the average transmission bandwidth under a quality-of-service guarantee. Because both coded data and stored data can be the data before or after computing, the proposed scheme has numerous edge computing and caching paths corresponding to different bandwidth requirements. We thus formulate a joint coded caching and computing optimization problem to decide whether a mobile device stores the data before or after computing, which tasks should be coded cached, and which tasks should be computed locally. The optimization problem is shown to be 0-1 non-convex non-smooth programming, and can be decomposed into a computation offloading subproblem and a coded caching subproblem. For the computation offloading subproblem, we propose an algorithm that applies the convergence of the alternating direction method of multipliers (ADMM) under non-convex programming and the wide applicability of the concave-convex procedure (CCCP) for difference-of-convex (DC) programming to obtain a stationary point; numerical results verify the convergence of the proposed algorithm and its suboptimality. For the coded caching subproblem, we design a low-complexity algorithm to obtain an acceptable solution. Numerical results demonstrate that the proposed scheme provides a significant bandwidth saving by taking full advantage of the caching and computing capability of mobile devices. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
128. Platform Profit Maximization on Service Provisioning in Mobile Edge Computing.
- Author
-
Huang, Xiaoyao, Zhang, Baoxian, and Li, Cheng
- Subjects
MOBILE computing ,EDGE computing ,PROFIT maximization ,LOGIC design ,INTEGER programming ,CLOUD computing - Abstract
Mobile edge computing has become an important supplement to the traditional cloud computing architecture for offering low-delay computing services to mobile users. However, it is in general impossible for edge service providers to deploy enough edge resources to satisfy the rapidly increasing and diverse user demands. In this paper, we study a mobile edge computing system consisting of a service platform, cloudlets joining the system, and mobile users. In this study, we take a profit-driven perspective in which the service platform purchases computation resources from the resource-rich cloudlets and makes a profit by processing tasks from the user side. The design objective is to maximize the platform profit subject to a budget constraint and stringent delay requirements for task processing. We formulate this problem as a mixed integer programming problem. Due to the NP-hardness of the problem, we design a logic-based Benders decomposition algorithm as the offline solution. We further study the scenario where the task arrivals from the user side and the resource availability at the cloudlets are both stochastic and unknown in advance. We accordingly propose a Multi-Armed Bandit learning based resource purchasing and greedy task scheduling algorithm for the online scenario. Simulation results show the high performance of our proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
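As a rough intuition for the online algorithm above, the following epsilon-greedy loop is the simplest Multi-Armed Bandit learner. The paper's algorithm additionally handles budgets and greedy task scheduling, which this sketch omits:

```python
import random

def eps_greedy_bandit(arms, n_rounds, eps=0.1, seed=0):
    """Minimal epsilon-greedy multi-armed bandit: pull each arm once,
    then mostly pull the empirically best arm, exploring a random arm
    with probability eps.  `arms[a]()` samples the reward of arm a."""
    rng = random.Random(seed)
    counts = [1] * len(arms)
    values = [arm() for arm in arms]          # one initial pull per arm
    for _ in range(n_rounds):
        if rng.random() < eps:
            a = rng.randrange(len(arms))      # explore
        else:
            a = values.index(max(values))     # exploit
        r = arms[a]()
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # running mean update
    return values, counts
```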
129. Mixed-Integer Convex Optimization for DC Microgrid Droop Control.
- Subjects
ELECTRICAL load ,MICROGRIDS ,MONTE Carlo method ,LINEAR programming ,CONVEXITY spaces ,VOLTAGE control - Abstract
Droop control is a viable method for the operation of islanded DC microgrids in a decentralized architecture. This paper presents a mixed-integer conic optimization formulation for the design of generator droop control, comprising the parameters of a piecewise linear droop curve. The mixed-integer formulation originates from a stochastic optimization framework that considers several operating scenarios for finding the optimal design. The convexity of the mixed-integer problem's continuous relaxation gives global optimality guarantees for the design problem. The paper presents computational results using a tight polyhedral approximation of the conic program, leading to a mixed-integer linear programming (MILP) problem that is solved using a state-of-the-art commercial solver. The results from the proposed approach are contrasted with both a classic linear droop control design and a recent piecewise linear formulation. The Monte Carlo simulation results quantify the extent to which the MILP solution is superior in reducing voltage violations and power loss, and the degree to which the loss is close to that from a conic optimal power flow solution. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
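The design variables above are the knots of a piecewise linear droop curve. A sketch of how such a curve is evaluated at runtime, with hypothetical knot values (the paper optimizes the knots via MILP):

```python
def droop_voltage(p, breakpoints):
    """Evaluate a piecewise-linear droop curve: `breakpoints` is a
    sorted list of (power, voltage) knots; the output voltage is
    linearly interpolated between knots and clamped at the ends.
    Illustrative curve shape, not the paper's optimized design."""
    if p <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (p0, v0), (p1, v1) in zip(breakpoints, breakpoints[1:]):
        if p <= p1:
            t = (p - p0) / (p1 - p0)
            return v0 + t * (v1 - v0)
    return breakpoints[-1][1]
```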
130. Prediction of Cloud Resources Demand Based on Hierarchical Pythagorean Fuzzy Deep Neural Network.
- Author
-
Chen, Dawei, Zhang, Xiaoqin, Wang, Li, and Han, Zhu
- Abstract
Having stepped into the era of information explosion, storing, processing, and analyzing vast data are sometimes quite intractable problems, and it is impossible for personal computers or devices to tackle such heavy workloads. Consequently, companies that provide cloud computing services have come into business. From the perspective of these companies, the cost of providing fog computing services is much higher than that of traditional computing services. Consequently, the price for real-time requests is higher than that for reserved services. Aiming at minimizing expenditures, the most important question is how many cloud services the customers should reserve in advance, because different amounts consumed yield different expenses, and both insufficient and excess consumption result in waste. Emerging machine learning methods provide a powerful tool to address such a prediction problem. In this paper, we propose a hierarchical Pythagorean fuzzy deep neural network (HPFDNN) to forecast the quantity of requisite cloud services. To obtain better interpretations of the original data, the neural representation is utilized as a complement to fuzzy logic. The information acquired from the fuzzy and neural perspectives is coalesced into the final transformed data to be fed into the learning system, so that the useful information concealed in the enormous contents can be effectively described. On the basis of the predictions of the deep neural network, consumers are able to decide the amount of cloud services to purchase. Numerical results based on a real data set from Carnegie Mellon University demonstrate that the proposed model yields economical predictions and outperforms prediction by a traditional deep neural network. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
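For context on the fuzzy layer above: a Pythagorean fuzzy pair relaxes the intuitionistic constraint mu + nu <= 1 to mu^2 + nu^2 <= 1, which is what gives the model its extra expressiveness. A one-line validity check (illustrative helper, not from the paper):

```python
def is_pythagorean_fuzzy(mu, nu):
    """A Pythagorean fuzzy pair (membership mu, non-membership nu)
    must satisfy mu**2 + nu**2 <= 1, a weaker requirement than the
    intuitionistic condition mu + nu <= 1."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu * mu + nu * nu <= 1.0
```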
131. On the Tradeoff Between Computation and Communication Costs for Distributed Linearly Separable Computation.
- Author
-
Wan, Kai, Sun, Hua, Ji, Mingyue, and Caire, Giuseppe
- Subjects
DISTRIBUTED computing ,LINEAR codes ,GRID computing ,SYMMETRIC matrices ,COST - Abstract
This paper studies the distributed linearly separable computation problem, which is a generalization of many existing distributed computing problems such as distributed gradient coding and distributed linear transform. A master asks $N$ distributed workers to compute a linearly separable function of $K$ datasets, which is a set of $K_c$ linear combinations of $K$ equal-length messages (each message is a function of one dataset). We assign some datasets to each worker in an uncoded manner; each worker then computes the corresponding messages and returns some function of these messages, such that from the answers of any $N_r$ out of $N$ workers the master can recover the task function with high probability. In the literature, the specific case where $K_c=1$ or where the computation cost is minimum has been considered. In this paper, we focus on the general case (i.e., general $K_c$ and general computation cost) and aim to find the minimum communication cost. We first propose a novel converse bound on the communication cost under the constraint of the popular cyclic assignment (widely considered in the literature), which assigns the datasets to the workers in a cyclic way. Motivated by the observation that existing strategies for distributed computing fall short of achieving the converse bound, we propose a novel distributed computing scheme for some system parameters. The proposed computing scheme is optimal for any assignment when $K_c$ is large and is optimal under the cyclic assignment when the numbers of workers and datasets are equal or $K_c$ is small. In addition, it is order optimal within a factor of 2 under the cyclic assignment for the remaining cases. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
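The cyclic assignment that the converse bound above assumes can be written in a few lines: worker $w$ receives a window of consecutive dataset indices starting at $w$ (an illustrative indexing convention; the paper's exact convention may differ):

```python
def cyclic_assignment(n_workers, n_datasets, per_worker):
    """Assign datasets to workers cyclically: worker w gets
    `per_worker` consecutive dataset indices starting at w,
    wrapping modulo n_datasets."""
    return [[(w + i) % n_datasets for i in range(per_worker)]
            for w in range(n_workers)]
```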
132. Modeling and Identification of Nonlinear Systems: A Review of the Multimodel Approach—Part 2.
- Author
-
El Ferik, Sami and Adeniran, Ahmed A.
- Subjects
NONLINEAR systems ,NEURAL circuitry - Abstract
The efficacy of the multimodel framework (MMF) in the modeling and identification of complex, nonlinear, and uncertain systems has been widely recognized in the literature, owing to its simplicity, transparency, and mathematical tractability, which allow the use of well-known modeling, analysis, and control design techniques. The approach has proved effective in addressing some of the shortcomings of other modeling techniques, such as those based on a single nonlinear autoregressive with exogenous inputs model or on neural networks. A great number of researchers have contributed to this active field. Given the significant number of contributions and the lack of a recent survey, a review of recent developments in this field is vital. In this two-part paper, we attempt to provide comprehensive coverage of the multimodel approach for the modeling and identification of complex systems. The paper contains a classification of different methods, the challenges encountered, as well as recent applications of the MMF in various fields. In this Part 2, a review of multimodel internal structures, parameter estimation, and validity computation methods is presented. In addition, a multimodel application and future directions are covered. In this literature survey, our main focus is on the MMF in which the final system's representation and behavior are generated through the interpolation of several possible local models, which is of prime importance to control designers. Throughout this paper, different active research areas and open problems are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
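The interpolation of local models that defines the MMF reduces to a validity-weighted average. A minimal sketch with caller-supplied validity functions (illustrative; real multimodel schemes design these weights carefully):

```python
def multimodel_output(x, local_models, validities):
    """Blend local model outputs by normalized validity weights, the
    core interpolation step of the multimodel framework.  Each entry
    of `validities` maps the operating point x to a nonnegative
    weight; each entry of `local_models` maps x to a local output."""
    w = [v(x) for v in validities]
    s = sum(w)
    return sum(wi * m(x) for wi, m in zip(w, local_models)) / s
```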
133. Decentralized Optimal Control of Distributed Interdependent Automata With Priority Structure.
- Author
-
Stursberg, Olaf and Hillmann, Christian
- Subjects
DISCRETE systems ,COMPUTER simulation ,PROCESS control systems ,SUPERVISORY control systems ,PROCESS optimization - Abstract
For distributed discrete-event systems (DESs), which are specified by a set of coupled automata, centralized synthesis for a composed plant model is often undesired due to the high computational effort and the need to subsequently split the result into local controllers. This paper proposes modeling and synthesis procedures to obtain optimal decentralized controllers in state-feedback form for distributed DESs. In particular, this paper addresses DESs with priority structures, in which subsystems with high priorities are supplied with the output of subsystems with lower priority. If the subsystem dependencies have linear or treelike structures, the synthesis of the subsystem controllers can be accomplished separately. Each local controller is computed by algebraic computations, communicates with the controllers of adjacent subsystems, and aims at transferring the corresponding subsystem into goal states with a minimal sum of transfer costs. As shown for an example, the computational effort can be significantly reduced compared with the synthesis of centralized controllers following the composition of all subsystem models. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
134. Transverse 2-D Gliding Arc Modeling.
- Author
-
Gutsol, Alexander F. and Gangoli, Shailesh P.
- Subjects
ELECTRIC arc ,COMPUTER simulation of electric discharges ,ATMOSPHERIC pressure ,PLASMA devices ,COMPUTATIONAL fluid dynamics ,PLASMA temperature - Abstract
This paper was prepared in response to the growing interest in the numerical simulation of the gliding arc (GA) discharge. Our approach is a rather simple 2-D model of the GA, in the plane that is parallel to the gas flow and perpendicular to the discharge current. We used Fluent software with a subroutine that calculates the electric conductivity of argon plasma and the local heat release due to an electric current of predetermined value. The electric conductivity of argon was calculated as a function of the reduced electric field and gas temperature. Our results show that this approach can give very useful information about the gas-discharge interaction, which is very important for capturing the discharge behavior. The presence of the discharge inside the gas flow significantly disturbs both the flow and the discharge. A gas-discharge slip velocity exists at least at the beginning of the GA development cycle, even if there is no mechanism of discharge deceleration. The original spark formation associated with the electrode surfaces alone results in the appearance of this “independent” slip. In the cases of reasonably high gas velocities and discharge currents, this initial slip does not disappear during the discharge lifetime and can result in significant cross-sectional elongation of the discharge along the gas flow. Electric field fluctuations at any particular part of the discharge channel can be very large, and this can have a major effect on the nonequilibrium ionization and chemical processes. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
135. Convolution-Based Fast Thermal Model for 3-D-ICs: Transient Experimental Validation.
- Author
-
Maggioni, Federica L. T., Cherman, Vladimir, Oprins, Herman, Beyne, Eric, De Wolf, Ingrid, and Baelmans, Martine
- Subjects
INTEGRATED circuits ,ELECTRONIC circuits ,MICROELECTRONICS ,INTEGRATED optics ,TEMPERATURE - Abstract
In this paper, the transient experimental validation of the convolution-based fast thermal model (FTM) for 3-D integrated circuits is presented. The methodology is proven to be applicable to analyzing the temperature evolution of real devices. A low-power package configuration with two different power dissipation scenarios, a hotspot (HS) in the corner and an HS in the center of the active region, has been considered for the validation. In this way, the importance of the package's thermal impact on the definition of the final temperature profiles is highlighted. The error between the measurements and the FTM results always remains below 2.3 °C/W. Moreover, the model has been validated with respect to case studies concerning duty cycles of different durations. This proves the applicability of the model in defining the allowed duration of chip activity before a maximum threshold temperature is reached. Finally, this paper presents the possibility of using a variable time-step approach in the transient FTM, together with the option of computing the temperature in individual points only, while maintaining the same accuracy as if fully resolved temperature maps were calculated. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
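The core operation of a convolution-based FTM is the discrete convolution of a power trace with a thermal impulse response. A direct-form sketch (the paper's model uses calibrated responses and fast evaluation, not this naive double loop):

```python
def temperature_rise(power, impulse_response):
    """Temperature rise over time as the discrete convolution of a
    power trace with the thermal impulse response (truncated to the
    kernel length); an illustrative kernel, not a calibrated one."""
    n = len(power)
    return [sum(power[k] * impulse_response[t - k]
                for k in range(t + 1) if t - k < len(impulse_response))
            for t in range(n)]
```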
136. On the Optimal Fronthaul Compression and Decoding Strategies for Uplink Cloud Radio Access Networks.
- Author
-
Zhou, Yuhan, Xu, Yinfei, Yu, Wei, and Chen, Jun
- Subjects
GAUSSIAN distribution ,CONTINUOUS distributions ,DISTRIBUTION (Probability theory) ,DECODING algorithms ,ITERATIVE decoding - Abstract
This paper investigates the compress-and-forward scheme for an uplink cloud radio access network (C-RAN) model, where multi-antenna base stations (BSs) are connected to a cloud-computing-based central processor (CP) via capacity-limited fronthaul links. The BSs compress the received signals with Wyner-Ziv coding and send the representation bits to the CP; the CP performs the decoding of all the users’ messages. Under this setup, this paper makes progress toward the optimal structure of the fronthaul compression and CP decoding strategies for the compress-and-forward scheme in the C-RAN. On the CP decoding strategy design, this paper shows that under a sum fronthaul capacity constraint, a generalized successive decoding strategy of the quantization and user message codewords that allows arbitrary interleaved order at the CP achieves the same rate region as the optimal joint decoding. Furthermore, it is shown that a practical strategy of successively decoding the quantization codewords first, then the user messages, achieves the same maximum sum rate as joint decoding under individual fronthaul constraints. On the joint optimization of user transmission and BS quantization strategies, this paper shows that if the input distributions are assumed to be Gaussian, then under joint decoding, the optimal quantization scheme for maximizing the achievable rate region is Gaussian. Moreover, Gaussian input and Gaussian quantization with joint decoding achieve to within a constant gap of the capacity region of the Gaussian multiple-input multiple-output (MIMO) uplink C-RAN model. Finally, this paper addresses the computational aspect of optimizing uplink MIMO C-RAN by showing that under fixed Gaussian input, the sum rate maximization problem over the Gaussian quantization noise covariance matrices can be formulated as convex optimization problems, thereby facilitating its efficient solution. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
137. The Foundational Work of Harrison-Ruzzo-Ullman Revisited.
- Author
-
Tripunitara, Mahesh V. and Li, Ninghui
- Subjects
ACCESS control ,MATRIX organization ,OPERATIONS research ,DATA reduction ,ERROR analysis in mathematics - Abstract
The work by Harrison, Ruzzo, and Ullman (the HRU paper) on safety in the context of the access matrix model is widely considered to be foundational work in access control. In this paper, we address two errors we have discovered in the HRU paper. To our knowledge, these errors have not been previously reported in the literature. The first error regards a proof that shows that safety analysis for mono-operational HRU systems is in NP. The error stems from a faulty assumption that such systems are monotonic for the purpose of safety analysis. We present a corrected proof in this paper. The second error regards a mapping from one version of the safety problem to another that is presented in the HRU paper. We demonstrate that the mapping is not a reduction, and present a reduction that enables us to infer that the second version of safety introduced in the HRU paper is also undecidable for the HRU scheme. These errors lead us to ask whether the notion of safety as defined in the HRU paper is meaningful. We introduce other notions of safety that we argue have more intuitive appeal, and present the corresponding safety analysis results for the HRU scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
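The access matrix at the heart of the HRU model can be sketched as a dict of dicts of right sets, with safety (in HRU's first sense) asking whether a right can ever appear in a cell. The helper names below are hypothetical, not from the paper:

```python
def enter_right(matrix, subject, obj, right):
    """One HRU primitive operation: enter `right` into cell
    (subject, obj) of the access matrix (a dict of dicts of sets)."""
    matrix.setdefault(subject, {}).setdefault(obj, set()).add(right)

def is_safe_wrt(matrix, right):
    """The current matrix 'leaks' `right` if some cell contains it;
    the HRU safety question asks whether any reachable matrix leaks
    it, which is what makes the problem hard in general."""
    return all(right not in rights
               for row in matrix.values() for rights in row.values())
```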
138. Longitudinal Model Identification and Velocity Control of an Autonomous Car.
- Author
-
Dias, Jullierme Emiliano Alves, Pereira, Guilherme Augusto Silva, and Palhares, Reinaldo Martinez
- Abstract
This paper presents the model identification and velocity control of an autonomous car. The control system was designed so that the car is controlled at low speeds, where the main applications for the vehicle's autonomous operation include parking and urban adaptive cruise control. A longitudinal model of the car was used in the control loop to compensate for the nonlinear behavior of its dynamics. Since the determination of the vehicle's model is a difficult step in the design of model-based controllers, the main contribution of this paper is the use of an empirically determined model to this end. In this paper, the structure of the model was conceived from the car's physics equations, but its parameters were estimated using data-based identification techniques. An important contribution of this paper is that, although the model is strictly linear, its parameters can be changed as a function of the operating point of the vehicle to represent the engine's and the transmission's nonlinear behaviors. Moreover, in this paper, we propose a way to include the changes in the longitudinal dynamics caused by automatic gear shifting. The validation of the proposed controller was conducted by computer simulations and real-world experiments. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
139. Penetration Depth Between Two Convex Polyhedra: An Efficient Stochastic Global Optimization Approach.
- Author
-
Abramson, Mark A., Kent, Griffin D., and Smith, Gavin W.
- Subjects
POLYHEDRA ,MATHEMATICAL optimization ,GEOMETRIC modeling ,COMPUTER graphics ,COMMUNITIES ,POLYTOPES ,GLOBAL optimization - Abstract
During the detailed design phase of an aerospace program, one of the most important consistency checks is to ensure that no two distinct objects occupy the same physical space. Since exact geometrical modeling is usually intractable, geometry models are discretized, which often introduces small interferences not present in the fully detailed model. In this paper, we focus on computing the depth of the interference, so that these false positive interferences can be removed, and attention can be properly focused on the actual design. Specifically, we focus on efficiently computing the penetration depth between two polyhedra, which is a well-studied problem in the computer graphics community. We formulate the problem as a constrained five-variable global optimization problem, and then derive an equivalent unconstrained, two-variable nonsmooth problem. To solve the optimization problem, we apply a popular stochastic multistart optimization algorithm in a novel way, which exploits the advantages of each problem formulation simultaneously. Numerical results for the algorithm, applied to 14 randomly generated pairs of penetrating polytopes, illustrate both the effectiveness and efficiency of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
140. Vehicular Traffic Simulation in the City of Turin From Raw Data.
- Author
-
Rapelli, Marco, Casetti, Claudio, and Gagliardi, Giandomenico
- Subjects
TELECOMMUNICATION ,SUBURBS ,CITIES & towns ,CITY traffic ,URBAN studies ,COMMUNICATION of technical information ,MOBILE computing - Abstract
The testing of vehicular communication technologies, the study of urban mobility patterns, and the evaluation of new traffic policies cannot dispense with vehicle mobility simulation. As is often the case, the larger the dataset, the better. Indeed, in recent years, many projects in the fields of mobility or vehicular communication have sought new traffic simulators with extended areas of investigation, possibly covering a whole city and its suburbs. In this spirit, we have modeled an urban traffic simulation in a 600-km² area in and around the Municipality of Turin, leveraging the SUMO tool. This paper aims at reporting in detail the methodology we followed in the creation of this dataset. Our results demonstrate that a complete modeling of such a wide area is possible at the expense of minor simplifications, reaching a very good level of approximation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
141. A New Approach to Represent the Corona Effect and Frequency-Dependent Transmission Line Models in EMT-Type Programs.
- Author
-
Pereira, Thassio Matias and Tavares, Maria Cristina
- Subjects
ELECTRIC lines ,ELECTRIC transients ,WAVE equation - Abstract
This paper presents an accurate and efficient model to represent the corona effect and frequency dependence of line parameters in electromagnetic transient simulations. The new method, named the voltage and frequency dependent line model (VFDLM), can be seen as a more general case of the well-known frequency-dependent (FD) or wideband (WB) line models, wherein the characteristic admittance and propagation function are considered voltage- and frequency-dependent parameters. The time domain traveling wave equations are solved using recursive convolutions and rational approximation through vector fitting (VF). Since the model can be represented by Norton equivalents, it is fully compatible with EMT-type programs. The model is validated through comparisons with three sets of field measurement data available in the literature, and good agreement was observed between them, showing that the VFDLM is a stable, efficient, and accurate approach to represent the corona effect in EMT-type programs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
142. LDA-Reg: Knowledge Driven Regularization Using External Corpora.
- Author
-
Yang, Kai, Luo, Zhaojing, Gao, Jinyang, Zhao, Junfeng, Ooi, Beng Chin, and Xie, Bing
- Subjects
CORPORA ,NEURAL development ,DATA mining ,ARTIFICIAL neural networks - Abstract
While recent developments of neural network (NN) models have led to a series of record-breaking achievements in many applications, the lack of sufficiently good datasets remains a problem for some applications. For such a problem, we can, however, exploit a large number of unstructured text corpora as external knowledge to complement the training data, and most prevailing neural network solutions employ word embedding methods for this purpose. In this paper, we propose LDA-Reg, a novel knowledge-driven regularization framework based on Latent Dirichlet Allocation (LDA) as an alternative to word embedding methods, to adaptively utilize abundant external knowledge and to interpret the NN model. For the joint learning of the parameters, we propose EM-SGD, an effective update method which incorporates Expectation Maximization (EM) and Stochastic Gradient Descent (SGD) to update parameters iteratively. Moreover, we also devise a lazy update method and a sparse update method for high-dimensional inputs and sparse inputs, respectively. We validate the effectiveness of our regularization framework through an extensive experimental study over real-world and standard benchmark datasets. The results show that our proposed framework not only achieves significant improvement over state-of-the-art word embedding methods but also learns interpretable and significant topics for various tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
143. Knowledge Graph Embedding by Double Limit Scoring Loss.
- Author
-
Zhou, Xiaofei, Niu, Lingfeng, Zhu, Qiannan, Zhu, Xingquan, Liu, Ping, Tan, Jianlong, and Guo, Li
- Subjects
KNOWLEDGE graphs ,SPARSE matrices - Abstract
Knowledge graph embedding is an effective way to represent a knowledge graph, and it greatly enhances performance on knowledge graph completion tasks, e.g., entity or relation prediction. For knowledge graph embedding models, designing a powerful loss framework is crucial to the discrimination between correct and incorrect triplets. Margin-based ranking loss is a commonly used negative sampling framework that enforces a suitable margin between the scores of positive and negative triplets. However, this loss cannot ensure ideal low scores for the positive triplets and high scores for the negative triplets, which is not beneficial for knowledge completion tasks. In this paper, we present a double limit scoring loss that separately sets an upper bound for correct triplets and a lower bound for incorrect triplets, which provides more effective and flexible optimization for knowledge graph embedding. Upon the presented loss framework, we present several knowledge graph embedding models including TransE-SS, TransH-SS, TransD-SS, ProjE-SS and ComplEx-SS. The experimental results on link prediction and triplet classification show that our proposed models achieve significant improvement over state-of-the-art baselines. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
144. Faster Domain Adaptation Networks.
- Author
-
Li, Jingjing, Jing, Mengmeng, Su, Hongzu, Lu, Ke, Zhu, Lei, and Shen, Heng Tao
- Subjects
DEEP learning ,OPTIMAL stopping (Mathematical statistics) ,MACHINE learning ,EDGE computing ,KNOWLEDGE transfer ,COMMUNITIES - Abstract
It is widely acknowledged that the success of deep learning is built upon large-scale training data and tremendous computing power. However, the data and computing power are not always available for many real-world applications. In this paper, we address the machine learning problem in which training data are scarce and computing power is limited. Specifically, we investigate domain adaptation, which is able to transfer knowledge from a labeled source domain to an unlabeled target domain, so that we do not need much training data from the target domain. At the same time, we consider the situation where the running environment is confined, e.g., in edge computing the end device has very limited running resources. Technically, we present the Faster Domain Adaptation (FDA) protocol and further report two paradigms of FDA: early stopping and amid skipping. The former accelerates domain adaptation through multiple early exit points. The latter speeds up adaptation by wisely skipping several intermediate neural network blocks. Extensive experiments on standard benchmarks verify that our method is able to achieve comparable and even better accuracy while employing much less computing resources. To the best of our knowledge, very few works in the community have investigated accelerating knowledge adaptation. This work is expected to inspire more discussion on the topic. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
145. Simplifying VGG-16 for Plant Species Identification.
- Author
-
Campos-Leal, Juan Augusto, Yee-Rendon, Arturo, and Vega-Lopez, Ines Fernando
- Abstract
Plant species identification represents an extraordinary challenge for machine learning due to visual interspecies similarities and large intraspecies variations. Furthermore, the research literature reports that plant species identification usually lacks sufficiently large datasets for training classification models. In this paper, we address this problem with a model that simplifies the VGG-16 architecture, the N-VGG model. The idea behind N-VGG is to reduce experimentally observed overfitting on VGG-16 by using as few trainable parameters as possible. To do this, we substitute the flattening layer of the VGG architecture with a global average pooling layer. This reduces the size of the feature vector. In addition, we eliminate one of the two fully-connected layers and use a new hyper-parameter, N, to indicate the number of nodes on the remaining layer. To show the robustness of the N-VGG model, we conducted extensive experimentation. We trained N-VGG on five datasets for plant species identification. Four of these datasets are publicly available and have been widely used as benchmarks for plant identification models. For all datasets, we compare the accuracy of N-VGG to that of the VGG-16, Inception-v4, and EfficientNet-B3 models. The experimental results show that the N-VGG model achieved the best classification performance for all but one dataset, whereas all the models showed remarkable performance on the remaining dataset. This evidence supports our initial idea that, for plant species classification, some accuracy might be lost due to overfitting and that having fewer trainable parameters helps in producing a more robust model. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
146. BEHAVE: Behavior-Aware, Intelligent and Fair Resource Management for Heterogeneous Edge-IoT Systems.
- Author
-
AlQerm, Ismail, Wang, Jianyu, Pan, Jianli, and Liu, Yuanni
- Subjects
REINFORCEMENT learning ,RESOURCE management ,ARTIFICIAL intelligence ,RESOURCE allocation ,EDGE computing - Abstract
Edge computing is an emerging solution to support future Internet of Things (IoT) applications that are delay-sensitive, processing-intensive, or that require intelligence closer to the data. Machine intelligence and data-driven approaches are envisioned to build future Edge-IoT systems that satisfy IoT devices’ demands for edge resources. However, significant challenges and technical barriers exist that complicate the resource management for such Edge-IoT systems. IoT devices running various applications can demonstrate a wide range of behaviors in their resource demand that are extremely difficult to manage. In addition, managing multidimensional resources fairly and efficiently at the edge in such a setting is a challenging task. In this paper, we develop a novel data-driven resource management framework named BEHAVE that intelligently and fairly allocates edge resources to heterogeneous IoT devices with consideration of their behavior of resource demand (BRD). BEHAVE aims to holistically address the management technical barriers by: 1) building an efficient scheme for modeling and assessment of the BRD of IoT devices based on their resource requests and resource usage; 2) expanding a new Rational, Fair, and Truthful Resource Allocation (RFTA) model that binds the devices’ BRD and resource allocation to achieve fair allocation and encourage truthfulness in resource demand; and 3) developing an enhanced deep reinforcement learning (EDRL) scheme to achieve the RFTA goals. The evaluation results demonstrate BEHAVE's capability to analyze the IoT devices’ BRD and adjust its resource management policy accordingly. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
147. Hierarchical Bayesian LSTM for Head Trajectory Prediction on Omnidirectional Images.
- Author
-
Yang, Li, Xu, Mai, Guo, Yichen, Deng, Xin, Gao, Fangyuan, and Guan, Zhenyu
- Subjects
BAYESIAN field theory ,GAUSSIAN distribution ,HEAD ,FORECASTING ,MAGNETIC recording heads - Abstract
When viewing omnidirectional images (ODIs), viewers can access different viewports via head movement (HM), which sequentially forms head trajectories in spatial-temporal domain. Thus, head trajectories play a key role in modeling human attention on ODIs. In this paper, we establish a large-scale dataset collecting 21,600 head trajectories on 1,080 ODIs. By mining our dataset, we find two important factors influencing head trajectories, i.e., temporal dependency and subject-specific variance. Accordingly, we propose a novel approach integrating hierarchical Bayesian inference into long short-term memory (LSTM) network for head trajectory prediction on ODIs, which is called HiBayes-LSTM. In HiBayes-LSTM, we develop a mechanism of Future Intention Estimation (FIE), which captures the temporal correlations from previous, current and estimated future information, for predicting viewport transition. Additionally, a training scheme called Hierarchical Bayesian inference (HBI) is developed for modeling inter-subject uncertainty in HiBayes-LSTM. For HBI, we introduce a joint Gaussian distribution in a hierarchy, to approximate the posterior distribution over network weights. By sampling subject-specific weights from the approximated posterior distribution, our HiBayes-LSTM approach can yield diverse viewport transition among different subjects and obtain multiple head trajectories. Extensive experiments validate that our HiBayes-LSTM approach significantly outperforms 9 state-of-the-art approaches for trajectory prediction on ODIs, and then it is successfully applied to predict saliency on ODIs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
148. Mining Data Impressions From Deep Models as Substitute for the Unavailable Training Data.
- Author
-
Nayak, Gaurav Kumar, Mopuri, Konda Reddy, Jain, Saksham, and Chakraborty, Anirban
- Subjects
DATA mining ,OCEAN mining ,COMPUTER vision ,DISTILLATION - Abstract
Pretrained deep models hold their learnt knowledge in the form of model parameters. These parameters act as “memory” for the trained models and help them generalize well on unseen data. However, in absence of training data, the utility of a trained model is merely limited to either inference or better initialization towards a target task. In this paper, we go further and extract synthetic data by leveraging the learnt model parameters. We dub them Data Impressions, which act as proxy to the training data and can be used to realize a variety of tasks. These are useful in scenarios where only the pretrained models are available and the training data is not shared (e.g., due to privacy or sensitivity concerns). We show the applicability of data impressions in solving several computer vision tasks such as unsupervised domain adaptation, continual learning as well as knowledge distillation. We also study the adversarial robustness of lightweight models trained via knowledge distillation using these data impressions. Further, we demonstrate the efficacy of data impressions in generating data-free Universal Adversarial Perturbations (UAPs) with better fooling rates. Extensive experiments performed on benchmark datasets demonstrate competitive performance achieved using data impressions in absence of original training data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
149. A(DP)$^2$SGD: Asynchronous Decentralized Parallel Stochastic Gradient Descent With Differential Privacy.
- Author
-
Xu, Jie, Zhang, Wei, and Wang, Fei
- Subjects
PRIVACY ,HETEROGENEOUS computing ,DEEP learning ,LEAKS (Disclosure of information) - Abstract
As deep learning models are usually massive and complex, distributed learning is essential for increasing training efficiency. Moreover, in many real-world application scenarios like healthcare, distributed learning can also keep the data local and protect privacy. Recently, the asynchronous decentralized parallel stochastic gradient descent (ADPSGD) algorithm has been proposed and demonstrated to be an efficient and practical strategy where there is no central server, so that each computing node only communicates with its neighbors. Although no raw data will be transmitted across different local nodes, there is still a risk of information leakage during the communication process, allowing malicious participants to make attacks. In this paper, we present a differentially private version of the asynchronous decentralized parallel SGD framework, or A(DP)$^2$SGD for short, which maintains the communication efficiency of ADPSGD and prevents inference by malicious participants. Specifically, Rényi differential privacy is used to provide tighter privacy analysis for our composite Gaussian mechanisms while the convergence rate is consistent with the non-private version. Theoretical analysis shows A(DP)$^2$SGD also converges at the optimal $\mathcal{O}(1/\sqrt{T})$ rate as SGD. Empirically, A(DP)$^2$SGD achieves comparable model accuracy as the differentially private version of Synchronous SGD (SSGD) but runs much faster than SSGD in heterogeneous computing environments. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
150. Recurrent Multi-Frame Deraining: Combining Physics Guidance and Adversarial Learning.
- Author
-
Yang, Wenhan, Tan, Robby T., Feng, Jiashi, Wang, Shiqi, Cheng, Bin, and Liu, Jiaying
- Subjects
IMAGE color analysis ,PHYSICS - Abstract
Existing video rain removal methods mainly focus on rain streak removal and are solely trained on synthetic data, which neglects more complex degradation factors, e.g., rain accumulation, and the prior knowledge in real rain data. Thus, in this paper, we build a more comprehensive rain model with several degradation factors and construct a novel two-stage video rain removal method that combines the power of synthetic videos and real data. Specifically, a novel two-stage progressive network is proposed: recovery guided by a physics model, and further restoration by adversarial learning. The first stage performs an inverse recovery process guided by our proposed rain model. An initially estimated background frame is obtained based on the input rain frame. The second stage employs adversarial learning to refine the result, i.e., recovering the overall color and illumination distributions of the frame and the background details that the first stage fails to recover, and removing the artifacts generated in the first stage. Furthermore, we also introduce a more comprehensive rain model that includes degradation factors, e.g., occlusion and rain accumulation, which appear in real scenes yet are ignored by existing methods. This model, which generates more realistic rain images, allows our models to be trained and evaluated more thoroughly. Extensive evaluations on synthetic and real videos show the effectiveness of our method in comparison to state-of-the-art methods. Our datasets, results and code are available at: https://github.com/flyywh/Recurrent-Multi-Frame-Deraining. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF