1,645 results
Search Results
2. A new effective algorithm for the resonant state of a Schrödinger equation
- Author
-
Wang, Zhongcheng
- Subjects
*PAPER , *ALGORITHMS , *FOUNDATIONS of arithmetic , *COMPUTER programming - Abstract
Abstract: In this paper we present a new effective algorithm for the Schrödinger equation. The new method differs from the original Numerov method in only one simple coefficient, by which we can extend the interval of periodicity from 6 to infinity and obtain an embedded correction factor that improves the accuracy. We compare the new method with the original Numerov method on the well-known problem of the Woods–Saxon potential. The numerical results show that the new method has a great advantage in accuracy over the original. In particular, for the resonant state the accuracy is improved by four orders of magnitude overall, and even by six to seven orders for the highest oscillatory solution. Surely, this method will replace the original Numerov method and be widely used in various areas. [Copyright 2005 Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
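The classical Numerov recurrence that the abstract refers to can be sketched in a few lines (a minimal illustration of the standard method only; the paper's modified coefficient is not reproduced here, and the test problem u'' = -u is a hypothetical stand-in for the Woods–Saxon potential):

```python
import math

def numerov(f, x0, h, n, u0, u1):
    """Integrate u''(x) = f(x) * u(x) with the standard Numerov recurrence:
    u[k+1] * (1 - h^2 f[k+1] / 12) =
        2 u[k] * (1 + 5 h^2 f[k] / 12) - u[k-1] * (1 - h^2 f[k-1] / 12).
    Returns the grid points and the solution values."""
    xs, us = [x0, x0 + h], [u0, u1]
    c = h * h / 12.0
    for _ in range(1, n):
        x_next = xs[-1] + h
        f_prev, f_cur, f_next = f(xs[-2]), f(xs[-1]), f(x_next)
        u_next = (2.0 * us[-1] * (1.0 + 5.0 * c * f_cur)
                  - us[-2] * (1.0 - c * f_prev)) / (1.0 - c * f_next)
        xs.append(x_next)
        us.append(u_next)
    return xs, us

# Sanity check on u'' = -u (i.e. f = -1), whose solution with u(0) = 0 is sin(x)
h = 0.01
xs, us = numerov(lambda x: -1.0, 0.0, h, 200, 0.0, math.sin(h))
```

The fourth-order global accuracy of this scheme is the baseline the paper claims to improve on for oscillatory (resonant) solutions.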
3. Frame Error Concealment Technique Using Adaptive Inter-Mode Estimation for H.264/AVC.
- Author
-
Min-Cheol Hwang, Jun-Hyung Kim, Hae-Yong Yang, Sung-Jea Ko, and Morales, Aldo W.
- Subjects
COMPRESSION (Audiology) ,ALGORITHMS ,VIDEO recording ,MAGNETIC recorders & recording ,8MM (Video format) ,ALGEBRA ,FOUNDATIONS of arithmetic ,PAPER - Abstract
Since H.264/AVC achieves a high compression ratio by reducing spatio-temporal redundancy in video sequences, the payload of a single packet can often contain a whole frame encoded by H.264/AVC. Therefore, the loss of a single packet not only causes the loss of a whole frame, but also produces error propagation into succeeding frames. To deal with this problem, in this paper, we propose a novel frame error concealment method for H.264/AVC. First, the proposed method extrapolates motion vectors from available neighboring frames onto the lost frame. Then, inter-modes of the macroblocks in the lost frame are adaptively estimated by using the extrapolated motion vectors and features of H.264/AVC. Experimental results exhibit that the proposed method outperforms the conventional methods in terms of both objective and subjective video quality. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
4. A new implicit blending technique for volumetric modelling.
- Author
-
Bogdan Lipu and Nikola Guid
- Subjects
PAPER ,ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
Current implicit blending techniques are mostly designed for use in surface modelling, where only boundaries of the object defined by the implicit primitives are important. In contrast, in volumetric implicit modelling the interior of the object is also significant, which requires different and more suitable techniques for combining implicit primitives. In this paper, we first discuss irregularities that occur using the current techniques. Then, a new technique for blending implicit primitives, especially appropriate in volumetric modelling (e.g., cloud modelling), is introduced. It overcomes these abnormalities and gives us better results than current techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2005
5. High Dynamic Range Imaging by Fusing Multiple Raw Images and Tone Reproduction.
- Author
-
Chung Kao, Wen
- Subjects
IMAGING systems ,OPTICS ,OPTOELECTRONIC devices ,REMOTE sensing equipment ,SCANNING systems ,PAPER ,ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
This paper presents an integrated color imaging system for taking images in extremely high dynamic range scenes. The system first fuses several differently exposed raw images to acquire more intensity information. The effective dynamic range of the raw image data can be extended to 256 times when five differently exposed images are fused. Then it runs edge detection iterations to extract the image details at different luminance levels. The proposed tone reproduction algorithm equalizes the histogram of the extracted fine edges, which tends to assign a larger dynamic range to highly populated regions. Finally, local contrast enhancement is performed to further refine the image details. The experimental results show that the proposed high dynamic range imaging system has good performance in both tone and color reproduction. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
6. An Improved Algorithm for Detection and Pose Estimation of Texture-Less Objects.
- Author
-
Peng, Jian and Su, Ya
- Subjects
ALGORITHMS ,TEMPLATE matching (Digital image processing) ,ALGEBRA ,FOUNDATIONS of arithmetic ,MACHINE theory - Abstract
This paper introduces an improved algorithm for texture-less object detection and pose estimation in industrial scenes. In the template training stage, a multi-scale template training method is proposed to improve the sensitivity of LineMOD to template depth. When this method performs template matching, the test image is first divided into several regions, and then training templates with similar depth are selected according to the depth of each test image region. In this way, without traversing all the templates, the depth of the template used by the algorithm during template matching is kept close to the depth of the target object, which improves the speed of the algorithm while ensuring that the accuracy of recognition will not decrease. In addition, this paper also proposes a method called coarse positioning of objects. The method avoids a lot of useless matching operations, and further improves the speed of the algorithm. The experimental results show that the improved LineMOD algorithm in this paper can effectively solve the algorithm's template depth sensitivity problem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. The Porter stemming algorithm: then and now.
- Author
-
Willett, Peter
- Subjects
INFORMATION retrieval ,ALGORITHMS ,ENGLISH language ,INFORMATION science ,FOUNDATIONS of arithmetic ,INFORMATION storage & retrieval systems - Abstract
Purpose-- In 1980, Porter presented a simple algorithm for stemming English language words. This paper summarises the main features of the algorithm, and highlights its role not just in modern information retrieval research, but also in a range of related subject domains. Design/methodology/approach-- Review of literature and research involving use of the Porter algorithm. Findings-- The algorithm has been widely adopted and extended so that it has become the standard approach to word conflation for information retrieval in a wide range of languages. Originality/value-- The 1980 paper in Program by Porter describing his algorithm has been highly cited. This paper provides a context for the original paper as well as an overview of its subsequent use. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
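To give a concrete taste of the algorithm this review covers, step 1a of Porter's 1980 stemmer (plural-suffix handling) can be sketched as follows. This is only one of the algorithm's five steps; the full stemmer adds measure-based conditions on the remaining stem:

```python
def porter_step_1a(word):
    """Step 1a of Porter's stemmer: normalize plural endings.
    Rules are tried in order; the first matching suffix wins."""
    if word.endswith("sses"):
        return word[:-2]   # caresses -> caress
    if word.endswith("ies"):
        return word[:-2]   # ponies -> poni
    if word.endswith("ss"):
        return word        # caress -> caress (unchanged)
    if word.endswith("s"):
        return word[:-1]   # cats -> cat
    return word
```

Note that the output ("poni") need not be a dictionary word: the stemmer's goal is conflation of related word forms, not linguistically valid roots.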
8. A practical guide for using statistical tests to assess randomized algorithms in software engineering.
- Author
-
Arcuri, Andrea and Briand, Lionel
- Subjects
SOFTWARE engineering ,ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic ,ENGINEERING - Abstract
Randomized algorithms have been used to successfully address many different types of software engineering problems. Algorithms of this type employ a degree of randomness as part of their logic. They are useful for difficult problems where a precise solution cannot be derived deterministically within reasonable time. However, randomized algorithms produce different results on every run when applied to the same problem instance. It is hence important to assess the effectiveness of randomized algorithms by collecting data from a large enough number of runs. The use of rigorous statistical tests is then essential to support the conclusions derived by analyzing such data. In this paper, we provide a systematic review of the use of randomized algorithms in selected software engineering venues in 2009. Its goal is not to perform a complete survey but to get a representative snapshot of current practice in software engineering research. We show that randomized algorithms are used in a significant percentage of papers but that, in most cases, randomness is not properly accounted for. This casts doubt on the validity of most empirical results assessing randomized algorithms. There are numerous statistical tests, based on different assumptions, and it is not always clear when and how to use these tests. We hence provide practical guidelines to support empirical research on randomized algorithms in software engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
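The kind of assessment these guidelines call for can be illustrated with the Vargha–Delaney A12 effect size, a non-parametric measure commonly recommended alongside rank-based tests such as Mann–Whitney U when comparing runs of two randomized algorithms (a sketch only; consult the paper for the full protocol, and note the sample data below is hypothetical):

```python
def vargha_delaney_a12(xs, ys):
    """A12 effect size: the probability that a randomly chosen run of
    algorithm X scores higher than a randomly chosen run of Y.
    0.5 means no difference; 1.0 means X always wins; ties count half."""
    greater = sum(1 for x in xs for y in ys if x > y)
    ties = sum(1 for x in xs for y in ys if x == y)
    return (greater + 0.5 * ties) / (len(xs) * len(ys))

# Hypothetical fitness scores from repeated runs of two search algorithms
runs_a = [0.91, 0.88, 0.95, 0.90]
runs_b = [0.70, 0.74, 0.69, 0.88]
effect = vargha_delaney_a12(runs_a, runs_b)
```

Reporting an effect size like this, in addition to a p-value, addresses the paper's complaint that randomness is often not properly accounted for.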
9. Better predicate testing.
- Author
-
Kaminski, Gary, Ammann, Paul, and Offutt, Jeff
- Subjects
MUTATION testing of computer software ,COMPUTER software testing ,ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
Mutation testing is widely recognized as being extremely powerful, but is considered difficult to automate enough for practical use. This paper theoretically addresses two possible reasons for this: the generation of redundant mutants and the lack of integration of mutation analysis with other test criteria. By addressing these two issues, this paper brings an important mutation operator, relational-operator-replacement (ROR), closer to practical use. First, we develop fault hierarchies for the six relational operators, each of which generates seven mutants per clause. These hierarchies show that, for any given clause, only three mutants are necessary. This theoretical result can be integrated easily into mutation analysis tools, thereby eliminating generation of 57% of the ROR mutants. Second, we show how to bring the power of the ROR operator to the widely used Multiple Condition-Decision Coverage (MCDC) test criterion. This theoretical result includes an algorithm to transform any MCDC-adequate test set into a test set that also satisfies RORG, a new version of ROR appropriate for the MCDC context. The transformation does not use traditional mutation analysis, so can easily be integrated into existing MCDC tools and processes. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
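To make the setting concrete, ROR mutant generation for a single clause can be sketched as below: each relational operator is replaced by the five other operators plus the constants true and false, giving the seven mutants per clause that the abstract mentions. The paper's fault hierarchies, which cut these seven down to three necessary mutants per operator, are not reproduced here:

```python
RELATIONAL = ["<", "<=", ">", ">=", "==", "!="]

def ror_mutants(op):
    """The seven classical ROR mutants of a clause `a OP b`:
    the five other relational operators, plus the constant
    predicates true and false."""
    if op not in RELATIONAL:
        raise ValueError("not a relational operator: " + op)
    return [m for m in RELATIONAL if m != op] + ["true", "false"]
```

A mutation tool applying the paper's result would keep only a three-element subset of this list for each operator, eliminating 57% of ROR mutants.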
10. On Gradient-Based Search for Multivariable System Estimates.
- Author
-
Wills, Adrian and Ninness, Brett
- Subjects
ALGORITHMS ,MATRICES (Mathematics) ,ALGEBRA ,FOUNDATIONS of arithmetic ,COMPUTER programming ,EXPECTATION-maximization algorithms ,POLAR forms (Mathematics) ,ABSTRACT algebra ,UNIVERSAL algebra - Abstract
This paper addresses the design of gradient-based search algorithms for multivariable system estimation. In particular, the paper here considers so-called "full parametrization" approaches, and establishes that the recently developed "data-driven local coordinate" methods can be seen as a special case within a broader class of techniques that are designed to deal with rank-deficient Jacobians. This informs the design of a new algorithm that, via a strategy of dynamic Jacobian rank determination, is illustrated to offer enhanced performance. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
11. Scalable Construction of Clock Trees With Useful Skew and High Timing Quality.
- Author
-
Ewetz, Rickard and Koh, Cheng-Kok
- Subjects
CLOCK distribution networks ,ALGORITHMS ,FOUNDATIONS of arithmetic ,EIGENFACTOR ,CLOCK & watch industry - Abstract
Clock trees can be constructed based on static arrival time constraints or dynamic implied skew constraints. Dynamic implied skew constraints allow the full timing margins to be utilized. However, the dynamic skew constraints require a high run-time complexity to be evaluated. In contrast, static arrival time constraints are more restrictive but can be evaluated in constant time. Consequently, there is a tradeoff between timing margin utilization and run-time. In this paper, a scalable clock tree synthesis (CTS) framework is proposed for the construction of low-cost useful skew trees (USTs) with high timing quality. The scalability comes from combining arrival time constraints with virtual minimum and maximum delay offsets, which makes it possible to join a pair of smaller subtrees into a larger subtree in constant time. The ability to quickly join subtrees is leveraged to perform a high degree of solution space exploration, which translates into the construction of low-cost USTs. In particular, clock trees with various routing tree topologies, buffer tree topologies, buffer sizes, and stem wire lengths are explored. Moreover, the arrival time constraints are specified with the objective of being the least restrictive, to reduce cost. Furthermore, the constraints are respecified throughout the tree construction process using a slack graph (SG) to expose additional timing margins. The high timing quality is obtained by seamlessly integrating arbitrary timing models using the SG. Finally, the proposed CTS framework is integrated with a clock tree optimization framework to demonstrate that the constructed USTs are capable of meeting timing constraints under the influence of on-chip variations. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
12. Simplified Vector Control Algorithm for Induction Motor Drives Based on Sophisticated Look-up Tables.
- Author
-
Satyanarayana, K., Amarnath, J., Kailasa Rao, A., and Bramhananda Reddy, T.
- Subjects
ROTATIONAL motion (Rigid dynamics) ,INDUCTION motors ,ALGORITHMS ,FOUNDATIONS of arithmetic ,BOOSTING algorithms ,COMPUTER algorithms ,SEARCH algorithms - Abstract
This paper presents a simple and novel vector control technique for induction motor drives based on sophisticated look-up tables. Vector control and direct torque control (DTC) are two popular approaches for high-performance drives. Though both algorithms give good dynamic torque response, the vector control algorithm requires reference frame transformations, and DTC produces large ripple in the steady-state current, torque and flux. To overcome these drawbacks, the proposed algorithm combines the principles of both vector control and DTC. In the proposed algorithm, the d- and q-axes reference currents are generated as in the conventional vector control algorithm and regulated according to a switching table as in DTC. This paper presents two switching tables. Switching table-I is based on the conventional DTC principle, which gives good performance but with increased common mode voltage variations. To reduce the common mode voltage variations, switching table-II is presented. To validate the proposed method, numerical simulations have been carried out and compared with the existing algorithms. The simulation results show the effectiveness of the proposed technique. [ABSTRACT FROM AUTHOR]
- Published
- 2010
13. An iterative solver-based long-step infeasible primal-dual path-following algorithm for convex QP based on a class of preconditioners.
- Author
-
Lu, Zhaosong, Monteiro, RenatoD.C., and O'Neal, JeromeW.
- Subjects
ALGORITHMS ,LITERATURE ,LINEAR systems ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
In this paper, we present a long-step infeasible primal-dual path-following algorithm for convex quadratic programming (CQP) whose search directions are computed by means of a preconditioned iterative linear solver. In contrast to the authors' previous paper [Z. Lu, R.D.C. Monteiro, and J.W. O'Neal. An iterative solver-based infeasible primal-dual path-following algorithm for convex quadratic programming, SIAM J. Optim. 17(1) (2006), pp. 287-310], we propose a new linear system, which we refer to as the hybrid augmented normal equation (HANE), to determine the primal-dual search directions. Since the iterative linear solver can only generate an approximate solution to the HANE, this solution does not yield a primal-dual search direction satisfying all equations of the primal-dual Newton system. We propose a recipe to compute an inexact primal-dual search direction, based on a suitable approximate solution to the HANE. The second difference between this paper and [Z. Lu, R.D.C. Monteiro, and J.W. O'Neal. An iterative solver-based infeasible primal-dual path-following algorithm for convex quadratic programming, SIAM J. Optim. 17(1)(2006), pp. 287-310] is that, instead of using the maximum weight basis (MWB) preconditioner in the aforesaid recipe for constructing the inexact search direction, this paper proposes the use of any member of a whole class of preconditioners, of which the MWB preconditioner is just a special case. The proposed recipe allows us to: (i) establish a polynomial bound on the number of iterations performed by our path-following algorithm and (ii) establish a uniform bound, depending on the quality of the preconditioner, on the number of iterations performed by the iterative solver. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
14. User Selection With Zero-Forcing Beamforming Achieves the Asymptotically Optimal Sum Rate.
- Author
-
Jianqi Wang, Love, David J., and Zoltowski, Michael D.
- Subjects
ALGORITHMS ,MATRICES (Mathematics) ,ABSTRACT algebra ,UNIVERSAL algebra ,QUANTITATIVE research ,COMPLEX matrices ,OPTIMAL designs (Statistics) ,EXPERIMENTAL design ,FOUNDATIONS of arithmetic - Abstract
In this paper, we propose a generalized greedy (G-greedy) algorithm based on zero-forcing beamforming (ZFBF) for the multiple-input multiple-output (MIMO) broadcast channel. This algorithm serves as a general mathematical framework that includes a number of existing greedy user selection methods as its realizations. As previous results only give the scaling law of the sum rate of dirty paper coding (DPC), with the help of the G-greedy structure, we are able to obtain the exact limit of the DPC sum rate for a large number of users. We also prove that the difference between the sum rates obtained by G-greedy user selection and by DPC goes to zero as the number of users increases. In addition to this, we investigate one particular greedy user selection scheme called sequential water-filling (SWF). For this algorithm, a complexity reduction is achieved by an iterative procedure based on an LQ decomposition, which converts the calculation of the Moore-Penrose matrix inverse to one vector-matrix multiplication. A sufficient condition is given to prune the search space of this algorithm, resulting in further complexity reduction. With the help of the G-greedy algorithm, we prove that SWF achieves the full DPC sum rate for a large number of users. For a moderate number of users, simulation demonstrates that, compared with other user selection algorithms, SWF achieves a higher sum rate that is close to the maximal sum rate achievable by ZFBF with the same order of complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
15. Reconstruction of high order derivatives by new mollification methods.
- Author
-
Zhen-yu Zhao and Guo-qiang He
- Subjects
ALGORITHMS ,NOISE ,ALGEBRA ,FOUNDATIONS of arithmetic ,SOUND - Abstract
In this paper, the problem of reconstructing numerical derivatives from noisy data is considered. A new framework of mollification methods based on the L generalized solution regularization methods is proposed. A specific algorithm for the first three derivatives is presented, in which a modification of TSVD, termed cTSVD, is chosen as the regularization technique. Numerical examples verify the theoretical results and show the efficiency of the new method. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
16. SBAS Algorithm Performance in the Implementation of the ASIA-PACIFIC GNSS Test Bed.
- Subjects
ALGORITHMS ,GLOBAL Positioning System ,ARTIFICIAL satellites in navigation ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
This paper discusses the preliminary performance analysis results of the Asia-Pacific Global Navigation Satellite Systems (GNSS) Test Bed. Currently, seven Asia-Pacific economies are participating in the Test Bed project, namely Australia, Chinese Taipei, Indonesia, Malaysia, the Philippines, Thailand and Vietnam. The Test Bed was commissioned in May 2006. The discussion topics in this paper include the Test Bed system architecture and a preliminary analysis of the system performance. As presented in this paper, while current Satellite-Based Augmentation System (SBAS) algorithms can improve the accuracy of GPS positioning results, they cannot fulfill the integrity performance required by the civil aviation community. This paper analyzes the limitations of the algorithm and proposes future research topics related to those limitations. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
17. MINIMAL SHAPE-PRESERVING PROJECTIONS ONTO Πn: GENERALIZATIONS AND EXTENSIONS.
- Author
-
Lewicki, G. and Prophet, M. P.
- Subjects
ALGORITHMS ,FOUNDATIONS of arithmetic ,GRAPHICAL projection ,SPACES of measures ,MATHEMATICS - Abstract
The goal of this paper is to further the investigation begun in Chalmers and Prophet, Numer. Funct. Anal. Optimiz. 1997; 18:507–520. With the benefit of nearly 10 years of work, we begin by indicating how several proofs from that paper can be substantially improved. We show that the problem of preserving k-convexity onto Πn is one part of a larger shape-preserving problem (multiconvex preservation) relative to Πn, and we completely solve this expanded problem. Finally, we demonstrate that the multiconvex-preserving projections constructed in this paper are in fact of minimal operator norm in a large class of Banach spaces. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
18. EFFICIENT COMPUTATION OF NETWORK RELIABILITY IMPORTANCE ON K-TERMINAL RELIABILITY.
- Author
-
KOIDE, TAKESHI, SHINMORI, SHUICHI, and ISHII, HIROAKI
- Subjects
ALGORITHMS ,SYSTEMS engineering ,NETWORK PC (Computer) ,COMPUTER networks ,FOUNDATIONS of arithmetic ,COMPUTER programming - Abstract
This paper proposes an algorithm to efficiently compute marginal reliability importance for network systems with k-terminal reliability. Marginal reliability importance is an appropriate quantitative measure of a system component's effect on system reliability, and it contributes to the design of reliable systems. Computing marginal reliability importance in network systems is time-consuming due to its NP-hardness. This paper extends the algorithm proposed in our previous study to deal with k-terminal reliability and incorporates an extended factoring theorem to improve the algorithm. Numerical experiments compare the proposed algorithm with a traditional method and reveal the efficiency of the algorithm, which helps in constructing efficient algorithms for reliable network design problems. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
19. ML-Based Frequency Estimation and Synchronization of Frequency Hopping Signals.
- Author
-
Ko, C. C., Zhi, Wanjun, and Chin, Francois
- Subjects
ALGORITHMS ,SYNCHRONIZATION ,WAVELETS (Mathematics) ,TIME measurements ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
A maximum likelihood (ML)-based algorithm for frequency estimation and synchronization of frequency hopping signals is proposed in this paper. By using a two-hop signal model that incorporates the unknown hop transition time, the likelihood function of the received frequency hopping signal is formulated. A new iterative method is then derived to estimate the hopping frequencies and hop transition time. Without using any pilot signal or sync bit, the new algorithm is able to implement synchronization and frequency estimation at the same time. Unlike the time-frequency distribution (TFD) and the wavelet-based algorithms in papers by Barbarossa and Scaglione and by Khalil and Hippenstiel, the new ML algorithm does not require the selection of a kernel or mother wavelet function. In addition, compared with the TFD-based algorithm, it has a better performance with a lower implementation complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
20. CONGESTION AWARE MULTIPATH ROUTING: PERFORMANCE IMPROVEMENTS IN CORE NETWORKS.
- Author
-
KULTAN, Matej and MEDVECKY, Martin
- Subjects
TELECOMMUNICATION systems routing ,ALGORITHMS ,FOUNDATIONS of arithmetic ,ROUTING (Computer network management) ,PERFORMANCE - Abstract
In this paper, the improved Congestion Aware Multipath Routing algorithm is analysed in terms of core network throughput and the contribution of longer paths. The algorithm discovers unused network resources and dynamically adapts to the actual traffic load and the displacement of available resources. Several simulation scenarios have been benchmarked in order to verify the algorithm in typical Internet Service Provider cases. The simulation scenarios in this paper focus on verifying functionality in dense core networks. For this purpose, tests were performed on a TeraStream-based network, the Deutsche Telekom Group design concept. Simulation results have shown better performance and resource utilization for the proposed algorithm than for traditional Bellman-Ford-based algorithms and equal-cost multipath approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
21. Minimizing the Total Flow Time of n Jobs on a Network.
- Author
-
Simchi-Levi, David and Berman, Oded
- Subjects
ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic ,MATHEMATICS - Abstract
This paper addresses the problem of determining a sequence for processing jobs on a machine, with sequence dependent setup times, such that the total flow time is minimized. The paper includes a branch and bound algorithm to find the optimal tour sequence. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
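The objective in this abstract can be made concrete with an exhaustive baseline that a branch-and-bound algorithm would prune (an illustrative sketch only, not the paper's method; the job data below is hypothetical):

```python
from itertools import permutations

def total_flow_time(seq, proc, setup, first_setup):
    """Sum of job completion times for one sequence on a single machine,
    where setup[i][j] is the sequence-dependent setup time incurred when
    job j directly follows job i, and first_setup[j] applies to the
    first job processed."""
    t, total, prev = 0, 0, None
    for j in seq:
        t += (first_setup[j] if prev is None else setup[prev][j]) + proc[j]
        total += t
        prev = j
    return total

def best_sequence(proc, setup, first_setup):
    """Exhaustive search over all n! sequences; a branch-and-bound
    algorithm reaches the same optimum while pruning most of them."""
    jobs = range(len(proc))
    return min(permutations(jobs),
               key=lambda s: total_flow_time(s, proc, setup, first_setup))

# Tiny hypothetical instance: two jobs, a costly setup when job 1 follows job 0
proc = [1, 2]
setup = [[0, 5], [0, 0]]
first = [0, 0]
best = best_sequence(proc, setup, first)
```

Because the setup times depend on the sequence, the problem is closely related to the traveling salesman problem, which is why the abstract speaks of an optimal "tour" sequence.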
22. Systematic Construction Algorithm of Fixed Block- Length Multitrack (d, k) Codes.
- Author
-
Tanaka, Hatsukazu
- Subjects
ALGORITHMS ,ALGEBRA ,MATHEMATICS ,FOUNDATIONS of arithmetic ,FUZZY algorithms ,COMPUTER algorithms - Abstract
This paper notes that the recording code often takes the form of a multitrack code, and that in that form the k-constraint in the (d, k)-constraint is considerably relaxed, which increases the capacity. The paper then proposes a systematic construction algorithm for the fixed block-length multitrack (d, k) code and discusses its coding characteristics. A simple idea is applied in the code construction: the (d, ∞) code is applied to all tracks except one, in order to reflect the relaxation of the k-constraint in an improved coding rate. The (d, ∞) code is applied to the left track if the vector obtained by the elementwise logical sum of the code words satisfies the k-constraint and, if not, the (d, k) code is applied. The author has already proposed a systematic construction algorithm for an efficient fixed block-length (d, k) code. This paper mainly discusses the systematic construction algorithm and the coding characteristics of the fixed block-length (d, ∞) code, based on the unique ordering of binary sequences by the Schalkwijk algorithm, which serves as the core of the systematic construction algorithm for the fixed block-length multitrack (d, k) code based on the aforementioned idea. The coding rate and the coding efficiency are examined, and the results indicate that the characteristics are very close to those of the (d, ∞) code. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
23. Image denoising with patch estimation and low patch-rank regularization.
- Author
-
Li, Bo, Lin, Ge, Chen, Qiang, and Wang, Hongyi
- Subjects
ALGORITHMS ,TEXTURES ,FOUNDATIONS of arithmetic ,MATHEMATICAL programming ,ALGORITHMIC randomness - Abstract
In this paper, we propose an image denoising algorithm, based on patch estimation and low patch-rank regularization, for a special class of images that have periodic textures and are contaminated by Poisson noise. To form the data fidelity term, we take the patch-based Poisson likelihood, which effectively removes the 'blurring' effect. For the sparse prior, we use the low patch-rank as the regularization, avoiding the choice of a dictionary. Putting together the data fidelity and the prior terms, the denoising problem is formulated as the minimization of a maximum likelihood objective functional involving three terms: the data fidelity term; a sparsity prior term, in the form of the low patch-rank regularization; and a non-negativity constraint (as Poisson data are positive by definition). Experimental results show that the new method performs well for this special class of images with periodic textures, and even for images whose textures are not strictly periodic. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
24. Downlink Vertical Beamforming Designs for Active Antenna Systems.
- Author
-
Lee, Wookbong, Lee, Sang-Rim, Kong, Han-Bae, Lee, Sunho, and Lee, Inkyu
- Subjects
BEAMFORMING ,SIGNAL processing ,MATHEMATICAL optimization ,ALGORITHMS ,FOUNDATIONS of arithmetic - Abstract
In this paper, we study a vertical beamforming technique for multiple-input multiple-output downlink multi-user systems. In general, the transmit antenna gain is controlled by adjusting the boresight of antennas in directional antennas, and thus the cell average rate varies according to the angle of the boresight. First, we compute the tilting angles for directional antenna systems which maximize the cell average rate. To this end, the probability density function of a three-dimensional user distribution is derived. Based on the result, we analyze the average rate gain of active antenna systems over passive antenna systems for a single user case. Furthermore, for a multi-user active antenna system, beamforming designs to maximize the weighted sum rate are proposed by optimizing the transmit antenna gain and power allocation. Since finding joint optimal parameters requires prohibitively high computational complexity, we separate the optimization problem into two sub-problems of the vertical beamforming and the power allocation. Then a simple vertical beamforming algorithm based on a high signal-to-noise ratio assumption is presented. Also, for a multi-user passive antenna system, we provide a beamforming scheme based on a multi-sector concept. Simulation results show that the proposed beamforming schemes outperform the conventional beamforming schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
25. A New Grey Phase Plane Control Algorithm.
- Author
-
Jianmin Zhu, Zhengqiang Shen, Fucai Li, Donger Zhou, and Beichuan Qi
- Subjects
ALGORITHMS ,DECISION making ,PROBLEM solving ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
A new grey phase plane control algorithm is proposed to overcome the low control accuracy of the normal phase plane control algorithm. The grey phase plane control rules can be established from experts' control experience. Through greying, grey decision-making and whitening of the error and the error change rate of the control system, the control variables are obtained, and the grey phase plane control is thereby implemented. A pneumatic position servo experimental system, with a DGPL-type rodless cylinder as the control object, has been developed to compare the accuracy of the normal phase plane control method and the method proposed in this paper. The experimental results demonstrate that the proposed method is simple, feasible, effective and of higher control accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2014
26. Improved multilevel physical optics algorithm for fast computation of monostatic radar cross section.
- Author
-
Yuyuan An, Daoxiang Wang, and Chen, Rushan
- Subjects
RADAR ,ELECTRONIC systems ,ALGORITHMS ,FOUNDATIONS of arithmetic ,OPTICS - Abstract
This paper proposes an acceleration technique to rapidly evaluate the monostatic radar cross section (RCS) with the multilevel physical optics (MLPO) algorithm. The proposed method combines the adaptive cross approximation (ACA) algorithm with MLPO to rapidly evaluate two-dimensional monostatic RCS responses (over a range of elevation angles θ and azimuth angles φ at a fixed frequency f, or over a range of frequencies f and azimuth angles φ at a fixed elevation angle θ). Owing to the phase compensation in the MLPO, the matrix corresponding to the compensated back-scattered field of each group is highly rank-deficient, so it is compressed with ACA in a multilevel fashion, saving a great deal of central processing unit time. The rank deficiency, accuracy and computational complexity of the proposed method have been studied through a couple of numerical examples, which illustrate the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
27. The Australian Informatics Competition.
- Author
-
CLARK, David and CLAPPER, Mike
- Subjects
COMPUTER science competitions ,ALGORITHMS ,MATHEMATICAL programming ,FOUNDATIONS of arithmetic - Abstract
The Australian Informatics Competition (AIC) is the entry-level Informatics competition in Australia. It is a pen-and-paper competition, requiring no programming experience. Most of the questions focus on testing algorithmic ability, with others testing students' ability to analyse algorithms, apply rules and use logic to solve problems. The algorithmic questions include standard algorithms such as breadth first search, dynamic programming and two person games, as well as ad-hoc algorithms specific to a particular scenario. Currently about 7,000 students enter the competition, but there are plans to expand it with on-line entries. The AIC is becoming more relevant with the introduction of algorithmic thinking as a component of the Digital Technologies strand of the Australian Curriculum. [ABSTRACT FROM AUTHOR]
- Published
- 2014
28. Granular control of photovoltaic arrays by means of a multi-output Maximum Power Point Tracking algorithm.
- Author
-
Petrone, Giovanni, Ramos-Paja, Carlos Andrés, Spagnuolo, Giovanni, and Vitelli, Massimo
- Subjects
PHOTOVOLTAIC power generation ,ALGORITHMS ,ALGEBRA ,MATHEMATICAL programming ,FOUNDATIONS of arithmetic - Abstract
In this paper, a Distributed Maximum Power Point Tracking (D-MPPT) approach for photovoltaic (PV) applications is discussed. The proposed control method is suitable for granular control of the PV generator at the module level or even at the sub-module level. D-MPPT is usually implemented by means of independent converters, each running its own MPPT algorithm. Instead, the architecture proposed in this paper consists of only one digital controller, implementing a multivariable MPPT algorithm based on the Perturb and Observe approach, acting on a number of dc/dc converters, each dedicated to a single PV module. The proposed control strategy reduces the number of current sensors with respect to the classical D-MPPT architecture and tracks the maximum power evaluated at the dc/dc converters' output. PSIM simulations and experimental results confirm the validity of the approach and of the design guidelines proposed in the paper. Copyright © 2012 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
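The Perturb and Observe strategy underlying the multivariable MPPT algorithm above can be sketched for a single channel. This is the textbook P&O loop, not the paper's multi-output controller; the power curve and its 29 V maximum are hypothetical, and a real controller typically perturbs a duty cycle rather than the voltage directly.

```python
def perturb_and_observe(power_meas, v0=20.0, step=0.5, iters=60):
    """Textbook Perturb & Observe: nudge the operating voltage, keep the
    perturbation direction if power rose, reverse it if power fell."""
    v, direction = v0, 1.0
    p_prev = power_meas(v)
    for _ in range(iters):
        v += direction * step
        p = power_meas(v)
        if p < p_prev:
            direction = -direction  # power dropped: reverse perturbation
        p_prev = p
    return v

# Hypothetical single-peak PV power curve with its maximum at 29 V.
curve = lambda v: max(0.0, 100.0 - (v - 29.0) ** 2)
v_mpp = perturb_and_observe(curve)
print(round(v_mpp, 1))  # settles into oscillation within a step of 29 V
```

As the abstract notes, the paper runs one such tracking loop per dc/dc converter from a single digital controller, rather than one controller per converter.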
29. Performance Evaluation of Line Symmetry-Based Validity Indices on Clustering Algorithms.
- Author
-
Kumar, Vijay, Chhabra, Jitender Kumar, and Kumar, Dinesh
- Subjects
PERFORMANCE anxiety ,ALGORITHMS ,PERFORMANCE standards ,ALGORITHMIC randomness ,FOUNDATIONS of arithmetic - Abstract
Finding the optimal number of clusters and the appropriate partitioning of a given dataset are the two major challenges in clustering, and cluster validity indices are used for both. In this paper, seven widely used cluster validity indices, namely the DB index, PS index, I index, XB index, FS index, K index, and SV index, have been developed based on line symmetry distance measures. These indices measure the line symmetry present in a partitioning of the dataset, and can detect clusters of any shape or size in a given dataset, as long as the clusters possess the property of line symmetry. The performance of these indices is evaluated on three clustering algorithms: K-means, fuzzy C-means, and modified harmony search-based clustering (MHSC). The efficacy of symmetry-based validity indices on clustering algorithms is demonstrated on artificial and real-life datasets, six each, with the number of clusters varying from 2 to n, where n is the total number of data points in the dataset. The experimental results reveal that incorporating line symmetry-based distance improves the ability of these existing validity indices to find the appropriate number of clusters. These indices are compared with the point-symmetric and original versions of the seven validity indices. The results also demonstrate that the MHSC technique performs better than other well-known clustering techniques. For real-life datasets, an analysis of variance (ANOVA) is also performed. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
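The DB index named in the abstract above admits a compact sketch. This is the standard Euclidean Davies–Bouldin index, not the line-symmetry variant the paper develops, and the two-blob dataset is made up for illustration.

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index (Euclidean version): mean over clusters of the
    worst ratio (scatter_i + scatter_j) / centroid_distance_ij. Lower is better."""
    clusters = np.unique(labels)
    cents = np.array([X[labels == c].mean(axis=0) for c in clusters])
    scatter = np.array([np.mean(np.linalg.norm(X[labels == c] - cents[i], axis=1))
                        for i, c in enumerate(clusters)])
    k = len(clusters)
    worst = []
    for i in range(k):
        ratios = [(scatter[i] + scatter[j]) / np.linalg.norm(cents[i] - cents[j])
                  for j in range(k) if j != i]
        worst.append(max(ratios))
    return float(np.mean(worst))

# Two well-separated blobs: the correct labeling scores far lower (better)
# than an arbitrary alternating labeling of the same points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
good = np.repeat([0, 1], 50)
bad = np.tile([0, 1], 50)
print(davies_bouldin(X, good) < davies_bouldin(X, bad))  # True
```

Replacing the Euclidean distance here with a line-symmetry distance is, per the abstract, the essence of the paper's modified indices.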
30. Interference minimum network topologies for ad hoc networks.
- Author
-
Feng, Guinian, Fan, Pingyi, and Liew, Soung Chang
- Subjects
TOPOLOGY ,COMPUTER networks ,ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
This paper investigates the topology control problem with the goal of minimizing mutual interference in wireless ad hoc networks. In previous works, interference has been considered a relationship between a link and a node. In this paper, we attempt to capture the physical situation of space-division multiplexing more realistically by defining interference as a relationship between any two bidirectional links. We formulate the pair-wise interference condition between any two bidirectional links, and demonstrate that the interference condition is equivalent under the equal-power allocation strategy and the minimum-power allocation strategy. We then further study the typical interference relationship between a link and its surrounding links. To characterize the extent of this interference, a new metric, the interference coefficient, is given, and its properties are explored in detail by means of analysis and simulation. Based on the insight obtained, a centralized algorithm, BIMA, and a distributed algorithm, LIMA, are proposed to control the network interference. Our simulations indicate that BIMA can minimize the network interference while conserving energy and maintaining a good spanner property, and that LIMA has relatively good interference performance while keeping node degree low, compared with some well-known algorithms. Moreover, both BIMA and LIMA show good robustness to additive noise in terms of interference performance. Copyright © 2010 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
31. Raster cellular neural network simulator for image processing applications with numerical integration algorithms.
- Author
-
Murugesh, V.
- Subjects
EVOLUTIONARY computation ,NUMERICAL analysis ,MATHEMATICAL analysis ,EQUATIONS ,IMAGE processing ,NUMERICAL integration ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
In this paper, a universal simulator for cellular neural network (CNN) is presented. This simulator is capable of performing Raster simulation for any size of input image, and thus is a powerful tool for researchers investigating potential applications of CNN. This paper reports the latency properties of CNNs along with popular numerical integration algorithms; results and comparisons are also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
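The raster simulation described above integrates the CNN state equation dx/dt = -x + A*y + B*u + I cell by cell. A minimal forward-Euler sketch, with hypothetical edge-detection templates on a made-up binary image, might look like this (the simulator in the paper supports several integration methods and arbitrary image sizes):

```python
import numpy as np

def cnn_output(x):
    """Standard CNN output nonlinearity: piecewise-linear saturation to [-1, 1]."""
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def conv3x3(img, tmpl):
    """3x3 template sum with zero padding (naive raster loop)."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * tmpl)
    return out

def cnn_euler(u, A, B, I, dt=0.1, steps=100):
    """Forward-Euler integration of dx/dt = -x + A*y + B*u + I."""
    x = np.zeros_like(u)
    for _ in range(steps):
        y = cnn_output(x)
        x = x + dt * (-x + conv3x3(y, A) + conv3x3(u, B) + I)
    return cnn_output(x)

# Hypothetical edge-detection templates: self-feedback A plus a
# Laplacian-like control template B, applied to a small square.
A = np.zeros((3, 3)); A[1, 1] = 1.0
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
u = np.zeros((8, 8)); u[2:6, 2:6] = 1.0
out = cnn_euler(u, A, B, I=-1.0)
print(out)  # edge cells of the square saturate to +1, the interior to -1
```

With these templates the interior of the square receives zero net control input and settles to -1, while boundary cells receive a positive drive and saturate to +1, which is the qualitative edge-detection behavior a raster CNN simulator is expected to reproduce.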
32. New Algorithms for Designing Unimodular Sequences With Good Correlation Properties.
- Author
-
Stoica, Petre, Hao He, and Jian Li
- Subjects
ALGORITHMS ,FOUNDATIONS of arithmetic ,ELECTRONIC systems ,DETECTORS ,SENSOR networks ,AUTOCORRELATION (Statistics) ,STOCHASTIC processes - Abstract
Unimodular (i.e., constant modulus) sequences with good autocorrelation properties are useful in several areas, including communications and radar. The integrated sidelobe level (ISL) of the correlation function is often used to express the goodness of the correlation properties of a given sequence. In this paper, we present several cyclic algorithms for the local minimization of ISL-related metrics. These cyclic algorithms can be initialized with a good existing sequence such as a Golomb sequence, a Frank sequence, or even a (pseudo)random sequence. To illustrate the performance of the proposed algorithms, we present a number of examples, including the design of sequences that have virtually zero autocorrelation sidelobes in a specified lag interval and of long sequences that could hardly be handled by means of other algorithms previously suggested in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
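The ISL metric used in the abstract above has a compact definition that can be sketched directly. This is the textbook formulation, not the paper's cyclic minimization code, and the quadratic-phase form of the Golomb initializer is an assumption.

```python
import numpy as np

def isl(x):
    """Integrated sidelobe level: sum of |r(k)|^2 over nonzero lags,
    where r(k) is the aperiodic autocorrelation of x."""
    n = len(x)
    r = np.correlate(x, x, mode="full")  # lags -(n-1) .. (n-1)
    sidelobes = np.delete(r, n - 1)      # drop the zero-lag term
    return float(np.sum(np.abs(sidelobes) ** 2))

# A quadratic-phase (Golomb-style) sequence, one of the good-correlation
# initializers the abstract mentions, versus a random-phase sequence.
N = 64
k = np.arange(N)
golomb = np.exp(1j * np.pi * k * (k - 1) / N)
rng = np.random.default_rng(0)
rand_seq = np.exp(2j * np.pi * rng.random(N))

print(isl(golomb), isl(rand_seq))  # the chirp's ISL is far lower
```

The cyclic algorithms in the paper start from such a sequence and locally reduce this ISL value (or a weighted variant) one coordinate block at a time.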
33. Improving multi-objective genetic algorithms with adaptive design of experiments and online metamodeling.
- Author
-
G. Li, M. Li, S. Azarm, S. Al Hashimi, T. Al Ameri, and N. Al Qasas
- Subjects
ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic ,COMPUTER programming - Abstract
Applications of multi-objective genetic algorithms (MOGAs) in engineering optimization problems often require numerous function calls. One way to reduce the number of function calls is to use an approximation in lieu of function calls. An approximation involves two steps: design of experiments (DOE) and metamodeling. This paper presents a new approach where both DOE and metamodeling are integrated with a MOGA. In particular, the DOE method reduces the number of generations in a MOGA, while the metamodeling reduces the number of function calls in each generation. In the present approach, the DOE locates a subset of design points that is estimated to better sample the design space, while the metamodeling assists in estimating the fitness of design points. Several numerical and engineering examples are used to demonstrate the applicability of this new approach. The results from these examples show that the proposed improved approach requires significantly fewer function calls and obtains similar solutions compared to a conventional MOGA and a recently developed metamodeling-assisted MOGA. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
34. Fast and Stable YAST Algorithm for Principal and Minor Subspace Tracking.
- Author
-
Badeau, Roland, Richard, Gaël, and David, Bertrand
- Subjects
STOCHASTIC convergence ,MATRICES (Mathematics) ,ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic ,INVARIANT subspaces ,FUNCTIONAL analysis ,HILBERT space ,CALCULUS of variations ,FUNCTIONAL equations - Abstract
This paper presents a new implementation of the YAST algorithm for principal and minor subspace tracking. YAST was initially derived from the Subspace Projection (SP) algorithm by Davila, which was known for its exceptional convergence rate compared with other classical principal subspace trackers. The novelty of the YAST algorithm was its lower computational cost (linear if the data correlation matrix satisfies a so-called shift-invariance property) and its extension to minor subspace tracking. However, the original implementation of the YAST algorithm suffered from a numerical stability problem (the subspace weighting matrix slowly loses its orthonormality). We thus propose in this paper a new implementation of YAST, whose stability is established theoretically and tested via numerical simulations. This algorithm combines all the desired properties for a subspace tracker: remarkably high convergence rate, lowest steady-state error, linear complexity, and numerical stability with respect to the orthonormality of the subspace weighting matrix. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
35. Improve the Stability and the Accuracy of Power Hardware-in-the-Loop Simulation by Selecting Appropriate Interface Algorithms.
- Author
-
Ren, Wei, Steurer, Michael, and Baldwin, Thomas L.
- Subjects
ALGORITHMS ,ELECTRIC currents ,ENGINEERING instruments ,MATHEMATICAL functions ,SCIENTIFIC apparatus & instruments ,ELECTRONIC instruments ,INDUSTRIAL electronics ,ELECTRONIC equipment ,FOUNDATIONS of arithmetic ,DIFFERENTIAL equations - Abstract
The closed-loop stability and the simulation accuracy are two paramount issues in power hardware-in-the-loop (HIL) simulation in regard to the operational safety and the experiment reliability. In this paper, the stability issue of the power HIL simulation is first introduced with a simple example. A stability analysis and accuracy estimation method based on the system's loop transfer function is later given. Five different interface algorithms are described, and their respective characteristics with respect to the system stability are compared. Through Matlab simulations and field experiments of two representative power HIL examples, it is revealed that certain interface algorithms exhibit higher stability and accuracy than the others under the given conditions. A recommendation for selecting appropriate interface algorithms is finally proposed at the end of this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
36. Effective Wire Models for X-Architecture Placement.
- Author
-
Tung-Chieh Chen, Yi-Lin Chuang, and Yao-Wen Chang
- Subjects
ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic ,RECURSIVE partitioning ,NONPARAMETRIC statistics ,REGRESSION analysis - Abstract
In this paper, we derive the X-half-perimeter wirelength (XHPWL) model for X-architecture placement and explore the effects of three different wire models on X-architecture placement: the Manhattan-half-perimeter wirelength (MHPWL) model, the XHPWL model, and the X-Steiner wirelength (XStWL) model. For min-cut partitioning placement, we apply the XHPWL and XStWL models to the generalized net-weighting method, which can exactly model the wirelength after partitioning by net weighting. For analytical placement, we smooth the XHPWL function using log-sum-exp functions to facilitate analytical placement. This paper shows that both the XHPWL and XStWL models can reduce the X wirelength effectively. In particular, our results reveal the effectiveness of the X architecture for wirelength reduction during placement and, thus, the importance of studying X-placement algorithms, in contrast to the work of Ono et al., which suggests that X-architecture placement might not improve the X-routing wirelength over Manhattan-architecture placement. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
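The log-sum-exp smoothing mentioned above for analytical placement can be illustrated on the plain Manhattan HPWL (the XHPWL adds diagonal-direction terms, omitted here for brevity). The pin coordinates are made up, and the smoothing parameter gamma is an assumed value.

```python
import numpy as np

def smooth_max(v, gamma=0.01):
    """Log-sum-exp upper bound on max(v); gamma -> 0 recovers the exact max.
    Unlike max, this is differentiable, which analytical placers need."""
    return gamma * np.log(np.sum(np.exp(v / gamma)))

def smooth_hpwl(x, y, gamma=0.01):
    """Differentiable half-perimeter wirelength of one net:
    (max x - min x) + (max y - min y), with each max/min smoothed
    (min v is obtained as -max(-v))."""
    return (smooth_max(x, gamma) + smooth_max(-x, gamma)
            + smooth_max(y, gamma) + smooth_max(-y, gamma))

# A three-pin net; its exact Manhattan HPWL is (1-0) + (0.5-0) = 1.5.
x = np.array([0.0, 0.3, 1.0])
y = np.array([0.0, 0.5, 0.5])
print(smooth_hpwl(x, y))  # slightly above the exact 1.5, as LSE overestimates
```

Because log-sum-exp always upper-bounds the true max, the smoothed wirelength slightly overestimates HPWL; gamma trades that bias against gradient smoothness.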
37. Distribution of distinguishable objects to bins: generating all distributions.
- Author
-
Adnan, MuhammadAbdullah and Saidur Rahman, Md.
- Subjects
ALGORITHMS ,DISTRIBUTION (Probability theory) ,FOUNDATIONS of arithmetic ,COMPUTER programming ,ALGEBRA - Abstract
In this paper we give an algorithm to generate all distributions of distinguishable objects to bins without repetition. Our algorithm generates each distribution in constant time. To the best of our knowledge, it is the first algorithm that generates each solution in O(1) time in the ordinary sense. As a byproduct, we obtain a new algorithm to enumerate all multiset partitions when the number of partitions is fixed and the partitions are numbered; in this case, the algorithm generates each multiset partition in constant time (in the ordinary sense). Finally, we extend the algorithm to the case when the bins have priorities associated with them. The overall space complexity of the algorithm is O(mk lg n), where there are m bins and the objects fall into k different classes. In a companion paper, the generation of all distributions of identical objects to bins is also considered. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
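For intuition, the family of objects being generated above can be enumerated naively with itertools. This sketch produces the same set of distributions (each distinguishable object independently assigned to a bin) but without the paper's constant-time-per-solution guarantee.

```python
from itertools import product

def all_distributions(objects, num_bins):
    """Yield every assignment of distinguishable objects to numbered bins.
    Each distribution is a tuple giving the bin index of each object,
    so there are num_bins ** len(objects) distributions in total."""
    yield from product(range(num_bins), repeat=len(objects))

# Three distinguishable objects into two bins: 2**3 = 8 distributions.
dists = list(all_distributions(["a", "b", "c"], 2))
print(len(dists))           # 8
print(dists[0], dists[-1])  # (0, 0, 0) (1, 1, 1)
```

The interesting part of the paper is the ordering: it visits these same assignments so that consecutive ones differ by a bounded amount of work, giving O(1) time per generated distribution.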
38. An Effective Method for Initialization of Lloyd–Max's Algorithm of Optimal Scalar Quantization for Laplacian Source.
- Author
-
Peric, Zoran and Nikolic, Jelena
- Subjects
ALGORITHMS ,LOGIC ,SCIENTIFIC method ,METHODOLOGY ,FOUNDATIONS of arithmetic - Abstract
In this paper an exact and complete analysis of the Lloyd–Max algorithm and its initialization is carried out. An effective method for initializing the Lloyd–Max algorithm of optimal scalar quantization for a Laplacian source is proposed. The proposed method is a very simple way of making an intelligent guess of the starting points for the iterative Lloyd–Max algorithm: namely, the initial values can be determined from the compandor's parameters. It is demonstrated that, by following this logic, the proposed method provides rapid convergence of the Lloyd–Max algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
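The Lloyd–Max iteration whose initialization the abstract discusses can be sketched as follows. This is the generic two-step iteration run on a sample set with a naive uniform initialization; the paper's contribution, a companding-based initial guess that speeds up convergence, is not reproduced here.

```python
import numpy as np

def lloyd_max(samples, levels, iters=50):
    """Plain Lloyd-Max iteration: alternate between nearest-level
    partitioning (midpoint thresholds) and centroid updates."""
    reps = np.linspace(samples.min(), samples.max(), levels)  # naive init
    for _ in range(iters):
        # Optimal decision thresholds are midpoints between adjacent levels.
        edges = (reps[:-1] + reps[1:]) / 2
        cells = np.digitize(samples, edges)
        # Each new representation level is the mean of its cell's samples.
        for i in range(levels):
            cell = samples[cells == i]
            if cell.size:
                reps[i] = cell.mean()
    return reps

rng = np.random.default_rng(0)
laplacian = rng.laplace(size=10000)  # unit Laplacian source, sampled
reps = lloyd_max(laplacian, 4)
print(np.round(reps, 3))  # four quantizer levels, roughly symmetric about 0
```

The iteration only converges to a local optimum, which is exactly why a good starting guess, such as the compandor-derived one proposed in the paper, matters.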
39. Nesting Algorithm for Multi-Classification Problems.
- Author
-
Bo Liu, Zhifeng Hao, and Xiaowei Yang
- Subjects
ALGORITHMS ,LEAST squares ,FOUNDATIONS of arithmetic ,ALGEBRA - Abstract
Support vector machines (SVMs) are originally designed for binary classification, and multi-class problems are usually converted into binary ones. Among conventional multi-class classification algorithms, the One-against-One algorithm is a very powerful method; however, it leaves a middle unclassifiable region. In order to overcome this drawback, a novel method called the Nesting Algorithm is presented in this paper. Our ideas are as follows: firstly, construct the optimal hyperplanes based on the One-against-One approach. Secondly, if there exist data points in the middle unclassifiable region, select them to construct optimal hyperplanes with the same hyperparameters. Thirdly, repeat the second step until there are no data points in the unclassifiable region or the region disappears. In this paper, we also prove the validity of the proposed algorithm for the unclassifiable region and give a computational complexity analysis of the method. In order to examine the training accuracy and the generalization performance of the proposed algorithm, the One-against-One algorithm, the fuzzy least squares support vector machine (FLS-SVM) and the proposed algorithm are applied to five UCI datasets. The results show that the training accuracy of the proposed algorithm is higher than that of the others, and its generalization performance is comparable with them. [ABSTRACT FROM AUTHOR]
- Published
- 2007
40. Kernel CMAC With Improved Capability.
- Author
-
Horváth, Gábor and Szabó, Tamás
- Subjects
KERNEL functions ,ALGORITHMS ,NUMERICAL analysis ,FOUNDATIONS of arithmetic ,INTERPOLATION - Abstract
The cerebellar model articulation controller (CMAC) has some attractive features, namely fast learning capability and the possibility of efficient digital hardware implementation. Although CMAC was proposed many years ago, several open questions have been left even for today. The most important ones are about its modeling and generalization capabilities. The limits of its modeling capability were addressed in the literature, and recently, certain questions of its generalization property were also investigated. This paper deals with both the modeling and the generalization properties of CMAC. First, a new interpolation model is introduced. Then, a detailed analysis of the generalization error is given, and an analytical expression of this error for some special cases is presented. It is shown that this generalization error can be rather significant, and a simple regularized training algorithm to reduce this error is proposed. The results related to the modeling capability show that there are differences between the one-dimensional (1-D) and the multidimensional versions of CMAC. This paper discusses the reasons of this difference and suggests a new kernel-based interpretation of CMAC. The kernel interpretation gives a unified framework. Applying this approach, both the 1-D and the multidimensional CMACs can be constructed with similar modeling capability. Finally, this paper shows that the regularized training algorithm can be applied for the kernel interpretations too, which results in a network with significantly improved approximation capabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
41. An On-line Multi-CBR Agent Dispatching Algorithm.
- Author
-
Yan Li, Xi-Zhao Wang, and Ming-Hu Ha
- Subjects
ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic ,CASE-based reasoning - Abstract
Case-based reasoning (CBR) is an effective and fast problem-solving methodology which solves new problems by remembering and adapting past cases. With increasing requests for useful references for all kinds of problems and from different locations, maintaining a single CBR system is no longer practical. Multi-CBR agents located in different places can quickly meet these requests. In this paper, the architecture of a multi-CBR agent system is proposed, where the CBR agents are located in different places and are assumed to have the same ability to deal with new problems independently. As requests from different places arrive one by one in a queue, we propose a new policy for deciding which agent to dispatch to serve each request. Throughout the paper, we assume that the system must solve each incoming request by considering only past requests. In this context, the performance of traditional greedy algorithms is not satisfactory. We apply a new but simple approach, a competitive algorithm for the on-line problem (called the on-line multi-CBR agent dispatching algorithm), to determine a dispatching policy with comparatively low cost. The corresponding on-line dispatching algorithm is proposed and its competitive ratio is given. Based on the competitive algorithm, the dispatching of multi-CBR agents is optimized. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
42. New Grey Modeling Method by Extending Regression Conditions.
- Author
-
Lan Chin-Wu
- Subjects
ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic ,COMPUTER programming ,MATHEMATICS - Abstract
In this paper, a novel algorithm for the grey model, termed GME(1,1), is proposed. The new grey modeling method is derived from the discrete form of the developing coefficient â of the grey model. The paper presents some numerical examples and real physical cases for model verification. It is evident that the proposed approach corrects the defects of the original grey model and reduces the effects of extreme values in monitoring data, validating the superiority of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2006
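For context, the original grey model GM(1,1) that GME(1,1) refines can be sketched in a few lines. This is the textbook GM(1,1) with its developing coefficient a estimated by least squares, not the paper's extended-regression variant, and the data series is made up.

```python
import numpy as np

def gm11(x):
    """Classic GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series x1,
    then forecast via the discrete response equation."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                       # accumulated generating operation
    z = (x1[:-1] + x1[1:]) / 2              # background (mean) values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # a: developing coefficient

    def predict(k):
        """Forecast of the original series at index k (k = 0, 1, 2, ...)."""
        x1_k = (x[0] - b / a) * np.exp(-a * k) + b / a
        if k == 0:
            return x1_k
        x1_prev = (x[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return x1_k - x1_prev               # de-accumulate

    return a, b, predict

# Made-up monitoring data; forecast one step beyond the series.
data = [2.87, 3.28, 3.34, 3.74, 3.87]
a, b, predict = gm11(data)
print(round(predict(5), 3))
```

The paper's critique is that this estimate of â is fragile when the series contains extreme values, which is what the extended regression conditions of GME(1,1) are designed to mitigate.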
43. Business process improvement using multi-objective optimisation.
- Author
-
K. Vergidis, A. Tiwari, and B. Majeed
- Subjects
COMPUTER software industry ,ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
Business process redesign and improvement has become an increasingly attractive subject in the wider area of business process intelligence. Although there have been many attempts to establish a business process redesign framework, there is little work on the actual optimisation of business processes with given objectives. Furthermore, most of the attempts to optimise a business process are manual and do not involve a formal automated methodology. This paper proposes a process improvement approach for automated multi-objective optimisation of business processes. The proposed framework uses a generic business process model that is formally defined. The formal definition of business processes is necessary to ensure that the optimisation will take place in a clearly defined, repeatable and verifiable way. Multi-objectivity is expressed in terms of process cost and duration as two key objectives for any business process. The business process model is programmed and incorporated into a software optimisation platform where a selection of multi-objective optimisation algorithms can be applied to a business process design. This paper outlines a case study of business process design that is optimised by the state-of-the-art multi-objective optimisation algorithm NSGA2. The results indicate that, although business process optimisation is a highly constrained problem with fragmented search space, a number of alternative optimised business processes that meet the optimisation criteria can be produced. The paper also provides directions for future research in this area. [ABSTRACT FROM AUTHOR]
- Published
- 2006
44. Evolutionary Testing Using an Extended Chaining Approach.
- Author
-
McMinn, P. and Holcombe, M.
- Subjects
EVOLUTIONARY computation ,ALGORITHMS ,MATHEMATICAL variables ,ARTIFICIAL neural networks ,FOUNDATIONS of arithmetic - Abstract
Fitness functions derived from certain types of white-box test goals can be inadequate for evolutionary software test data generation (Evolutionary Testing), due to a lack of search guidance to the required test data. Often this is because the fitness function does not take into account data dependencies within the program under test, and the fact that certain program statements may need to have been executed prior to the target structure in order for it to be feasible. This paper proposes a solution to this problem by hybridizing Evolutionary Testing with an extended Chaining Approach. The Chaining Approach is a method which identifies statements on which the target structure is data dependent, and incrementally develops chains of dependencies in an event sequence. By incorporating this facility into Evolutionary Testing, and by performing a test data search for each generated event sequence, the search can be directed into potentially promising, unexplored areas of the test object's input domain. Results presented in the paper show that test data can be found for a number of test goals with this hybrid approach that could not be found by using the original Evolutionary Testing approach alone. One such test goal is drawn from code found in the publicly available libpng library. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
45. Efficient solution of population balance models employing a hierarchical solution strategy based on a multi-level discretization.
- Author
-
Nanfeng Sun and Immanuel, Charles D.
- Subjects
ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic ,DIFFERENTIAL equations ,BESSEL functions ,CALCULUS - Abstract
This paper addresses the efficient solution of population balance models. Population balance equations are hyperbolic partial differential equations that lead to stiff problems with sharp moving fronts. Thus, a standard discretization algorithm such as the Method of Weighted Residuals is inadequate. The present paper is based on a hierarchical two-tier solution strategy that exploits the physics of the process. The paper examines a multi-level discretization strategy within the framework of the hierarchical algorithm with an aim to ensure efficient computation subject to acceptable accuracy levels. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
46. Multivariate Markov switching dynamic conditional correlation GARCH representations for contagion analysis.
- Author
-
Billio, Monica and Caporin, Massimiliano
- Subjects
MARKOV processes ,ALGORITHMS ,STOCHASTIC processes ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
This paper provides an extension of the Dynamic Conditional Correlation model of Engle (2002) by allowing both the unconditional correlation and the parameters to be driven by an unobservable Markov chain. We provide the estimation algorithm and perform an empirical analysis of the contagion phenomenon in which our model is compared to the traditional CCC and DCC representations. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
47. Maintaining Longest Paths Incrementally.
- Author
-
Irit Katriel, Laurent Michel, and Pascal Van Hentenryck
- Subjects
ALGORITHMS ,ACYCLIC model ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
Modeling and programming tools for neighborhood search often support invariants, i.e., data structures specified declaratively and maintained automatically and incrementally under changes. This paper considers invariants for longest paths in directed acyclic graphs, a fundamental abstraction for many applications. It presents bounded incremental algorithms for arc insertion and deletion which run in O(‖δ‖ + |δ| log |δ|) time and O(‖δ‖) time respectively, where |δ| and ‖δ‖ are measures of the change in the input and output. The paper also shows how to generalize the algorithm to various classes of multiple insertions/deletions encountered in scheduling applications. Preliminary experimental results show that the algorithms behave well in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2005
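The longest-path invariant maintained by the paper can be computed from scratch with the standard dynamic program over a topological order, sketched below. The incremental algorithms in the paper update these values after an arc change in time bounded by the size of the change, instead of re-running this whole computation.

```python
from collections import defaultdict

def longest_paths(n, arcs):
    """Longest-path length from any source to each node of a DAG.
    arcs is a list of (u, v, weight) tuples; nodes are 0..n-1."""
    # Kahn topological sort.
    succ = defaultdict(list)
    indeg = [0] * n
    for u, v, w in arcs:
        succ[u].append((v, w))
        indeg[v] += 1
    order = [u for u in range(n) if indeg[u] == 0]
    for u in order:  # list grows as nodes become ready
        for v, _ in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                order.append(v)
    # Relax arcs in topological order.
    dist = [0] * n
    for u in order:
        for v, w in succ[u]:
            dist[v] = max(dist[v], dist[u] + w)
    return dist

# A small scheduling-style DAG: arcs carry task durations.
arcs = [(0, 1, 3), (0, 2, 2), (1, 3, 4), (2, 3, 1)]
print(longest_paths(4, arcs))  # [0, 3, 2, 7]
```

In a scheduling application these values are earliest start times, which is why maintaining them incrementally under arc insertions and deletions, as the paper does, matters for neighborhood search.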
48. SAT-Based Unbounded Symbolic Model Checking.
- Author
-
Kang, Hyeong-Ju and Park, In-Cheol
- Subjects
BOOLEAN algebra ,ALGORITHMS ,COMPUTER programming ,MATHEMATICAL analysis ,MATHEMATICAL models ,FOUNDATIONS of arithmetic - Abstract
This paper describes a Boolean satisfiability (SAT)-based unbounded symbolic model-checking algorithm. The conjunctive normal form is used to represent sets of states and the transition relation, and a logical operation on state sets is implemented as an operation on conjunctive normal form formulas. A satisfy-all procedure is proposed to compute the existential quantification required in obtaining the preimage and fix point. The proposed satisfy-all procedure is implemented by modifying a SAT procedure to generate all the satisfying assignments of the input formula, based on new efficient techniques such as line justification to make an assignment cover more search space, excluding-clause management, and two-level logic minimization to compress the set of found assignments. In addition, a cache table is introduced into the satisfy-all procedure. It is difficult for a satisfy-all procedure to detect cases in which a previous result can be reused; this paper shows that such cases can be detected by comparing sets of undetermined variables and clauses. Experimental results show that the proposed algorithm can check more circuits than binary decision diagram-based and previous SAT-based model-checking algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
49. Soft-computing combining of distributed power control algorithms.
- Author
-
Z. Uykan and H. N. Koivo
- Subjects
ALGORITHMS ,ELECTRONIC data processing ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
In traditional distributed power control (DPC) algorithms, every user in the system is treated in the same way, i.e., the same power control algorithm is applied to every user. In this paper, we divide the users into different groups depending on their channel conditions and apply a different DPC algorithm to each group. Our motivation comes from the fact that each DPC algorithm has its own advantages and drawbacks, and our aim is to "combine" the advantages of different DPC algorithms using soft computing techniques. In the simulation results, we choose the Foschini and Miljanic algorithm in [3], which has relatively fast convergence but is not robust against time-varying link gain changes and CIR estimation errors, and the fixed-step algorithm of Kim [3], which is robust but converges slowly. By "combining" these two algorithms using soft computing techniques, the resulting algorithm has fast convergence and is robust. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
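The Foschini–Miljanic iteration named in the abstract above has a well-known closed form: each user scales its transmit power by the ratio of its target CIR to its currently measured CIR. A minimal sketch follows; the link-gain matrix, noise level and target CIR are made-up illustrative values.

```python
import numpy as np

def foschini_miljanic(G, p0, gamma_target, noise=1e-3, iters=100):
    """Foschini-Miljanic power control: p_i <- (gamma_target / cir_i) * p_i.
    G[i, j] is the gain from transmitter j to receiver i (G[i, i] is the
    desired-link gain)."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        interference = G @ p - np.diag(G) * p + noise  # others' power + noise
        cir = np.diag(G) * p / interference
        p = (gamma_target / cir) * p
    return p

# Two-user example with a hypothetical link-gain matrix.
G = np.array([[1.0, 0.1],
              [0.2, 1.0]])
p = foschini_miljanic(G, [1.0, 1.0], gamma_target=2.0)
interference = G @ p - np.diag(G) * p + 1e-3
print(np.round(np.diag(G) * p / interference, 3))  # both CIRs converge to 2.0
```

The iteration converges geometrically whenever the target CIR is feasible, which is the fast-but-fragile behavior the abstract contrasts with the slower, more robust fixed-step rule.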
50. On Computation of State Avoidance Control for Infinite State Systems in Assignment Program Framework.
- Author
-
Kumar, Ratnesh and Garg, Vijay K.
- Subjects
ALGORITHMS ,VARIABLE annuities ,SUPERVISION ,MANAGEMENT ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
In this paper, we study supervisory control of discrete event systems with potentially infinite state-space using state variables for representation and specification. An assignment program model consisting of state variables and a finite set of conditional assignment statements is used for representing a discrete event system, and a predicate over state variables is used for representing a state avoidance control specification. The contribution of this paper is to show how to perform supervisory control computations symbolically. In the case of a Petri net (vector addition system) with the set of forbidden states being a right-closed set, we present a finitely terminating algorithm for maximally permissive supervision. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF