33 results on '"*SUBROUTINES (Computer programs)"'
Search Results
2. A COMPUTATIONAL MODEL FOR MULTI-CRITERIA DECISION MAKING IN TRAFFIC JAM PROBLEM.
- Author
-
Naeem, Ali and Abbas, Jabbar
- Subjects
- *
MULTIPLE criteria decision making , *TRAFFIC congestion , *DECISION making , *SUBROUTINES (Computer programs) , *RADIO frequency identification systems , *COMPUTER algorithms - Abstract
In this paper, we apply a computational model for multi-criteria decision making to traffic jam problems. First, we propose a system that determines the optimal shortcut road by reading the number of cars in each street using Radio Frequency Identification (RFID). Then we process the traffic data using the Choquet integral, presenting an algorithm and a computer program as the working procedure. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
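The Choquet-integral aggregation named in the abstract above can be sketched in a few lines. This is the generic discrete Choquet integral over an illustrative two-criterion fuzzy measure; the criterion names, scores and capacities are made up for illustration and are not the authors' traffic-specific model.

```python
def choquet_integral(scores, measure):
    """Discrete Choquet integral of criterion scores w.r.t. a fuzzy measure.

    `scores` maps criterion -> value; `measure` maps a frozenset of
    criteria -> capacity in [0, 1] (monotone, full set at 1).
    """
    order = sorted(scores, key=scores.get)      # criteria by ascending score
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        coalition = frozenset(order[i:])        # criteria scoring >= scores[c]
        total += (scores[c] - prev) * measure[coalition]
        prev = scores[c]
    return total

# Illustrative capacities rewarding the interaction of the two criteria
measure = {
    frozenset({"congestion", "distance"}): 1.0,
    frozenset({"congestion"}): 0.7,
    frozenset({"distance"}): 0.5,
}
score = choquet_integral({"congestion": 0.6, "distance": 0.4}, measure)
```

With the capacities above, the result 0.4 · 1.0 + 0.2 · 0.7 = 0.54 weights the jointly-high portion of the scores by the full-coalition capacity.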
3. PinMesh--Fast and exact 3D point location queries using a uniform grid.
- Author
-
Magalhães, Salles V. G., Andrade, Marcus V. A., Franklin, W. Randolph, and Wenli Li
- Subjects
- *
MESH networks , *GRID computing , *COMPUTER algorithms , *COMPUTER programming , *COMPUTER storage devices , *QUERYING (Computer science) , *SUBROUTINES (Computer programs) - Abstract
This paper presents PinMesh, a very fast algorithm with implementation to preprocess a polyhedral mesh, also known as a multi-material mesh, in order to perform 3D point location queries. PinMesh combines several innovative components to efficiently handle the largest available meshes. Because of a 2-level uniform grid, the expected preprocessing time is linear in the input size, and the code parallelizes well on a shared memory machine. Querying time is almost independent of the dataset size. PinMesh uses exact arithmetic with rational numbers to prevent roundoff errors, and symbolic perturbation with Simulation of Simplicity (SoS) to handle geometric degeneracies or special cases. PinMesh is intended to be a subroutine in more complex algorithms. It can preprocess a dataset and perform 1 million queries up to 27 times faster than RCT (Relative Closest Triangle), the current fastest algorithm. Preprocessing a sample dataset with 50 million triangles took only 14 elapsed seconds on a 16-core Xeon processor. The mean query time was 0.6μs. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
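PinMesh's 2-level grid, exact rational arithmetic and symbolic perturbation are beyond a short sketch, but its core idea, bucketing primitives into uniform grid cells so a query only tests the candidates in its own cell, can be shown in 2D with floating point. Everything here (cell size, triangle data, function names) is illustrative, not the PinMesh implementation.

```python
def build_grid(triangles, cell):
    """Bucket triangle indices into uniform grid cells by bounding box."""
    grid = {}
    for idx, tri in enumerate(triangles):
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        for gx in range(int(min(xs) // cell), int(max(xs) // cell) + 1):
            for gy in range(int(min(ys) // cell), int(max(ys) // cell) + 1):
                grid.setdefault((gx, gy), []).append(idx)
    return grid

def cross(o, a, b):
    """Z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def locate(pt, triangles, grid, cell):
    """Index of a triangle containing pt, or None.  Only the point's own
    grid cell is searched, so query cost is nearly independent of mesh size."""
    for idx in grid.get((int(pt[0] // cell), int(pt[1] // cell)), []):
        a, b, c = triangles[idx]
        d1, d2, d3 = cross(a, b, pt), cross(b, c, pt), cross(c, a, pt)
        if (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0):
            return idx
    return None
```

The paper's 2-level refinement would subdivide overcrowded cells again; the exact arithmetic replaces the float cross products above.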
4. A New Algorithm to Design Minimal Multi-Functional Observers for Linear Systems.
- Author
-
Mohajerpoor, Reza, Abdi, Hamid, and Nahavandi, Saeid
- Subjects
COMPUTER algorithms ,LINEAR systems ,OBSERVABILITY (Control theory) ,LINEAR time invariant systems ,SUBROUTINES (Computer programs) ,DEGREES of freedom - Abstract
Designing minimum possible order (minimal) observers for multi-input multi-output (MIMO) linear systems has always been an interesting subject. In this paper, a new methodology to design minimal multi-functional observers for linear time-invariant (LTI) systems is proposed. The approach is readily applicable, and it also helps in regulating the convergence rate of the observed functions. It is assumed that the system is functional observable or functional detectable, which is less conservative than assuming the observability or detectability of the system. To satisfy the minimality of the observer, a recursive algorithm is provided that increases the order of the observer by appending the minimum required auxiliary functions to the desired functions that are to be estimated. The algorithm increases the number of functions such that the necessary and sufficient conditions for the existence of a functional observer are satisfied. Moreover, a new methodology to solve the interconnected observer design equations is elaborated. Our new algorithm has advantages over the other available methods for designing minimal-order functional observers. Specifically, it is compared with the most common schemes, which are transformation based. Using numerical examples, it is shown that under special circumstances the conventional methods have some drawbacks. The problem partly lies in the lack of sufficient numerical degrees of freedom offered by the conventional methods. It is shown that our proposed algorithm can resolve this issue. A recursive algorithm is also proposed to summarize the observer design procedure. Several numerical examples and simulation results illustrate the efficacy, superiority and different aspects of the theoretical findings. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
5. New Bucket Join Algorithm for Faster Join Query Results.
- Author
-
Gunasekaran, Hemalatha and Gowder, ThanushkodiKeppana
- Subjects
COMPUTER algorithms ,HASHING ,APPLICATION software ,SUBROUTINES (Computer programs) ,REACTION time - Abstract
Join is the most expensive and most frequent operation in databases. Significant numbers of join queries are executed in interactive applications, where the first few thousand results need to be produced without delay. The current join algorithms are mainly based on hash join or sort-merge join, which are less suitable for interactive applications because these algorithms require some pre-work before they can produce join results. The nested-loop join technique produces results without delay, but it needs more comparisons to produce them, as it carries tuples that will not yield any join results until the end of the join operation. In this paper we present a new join algorithm, called bucket join, which overcomes the limitations of hash-based and sort-based algorithms. In this new join algorithm the tuples are divided into buckets without any pre-work. The matched tuples and the tuples that will not produce join results are eliminated during each phase; thus the number of comparisons required to produce the join results is considerably low compared to the other join algorithms. The bucket join algorithm can therefore replace other early join algorithms in any situation where a fast initial response time is required, without any penalty in memory usage or I/O operations. [ABSTRACT FROM AUTHOR]
- Published
- 2015
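The abstract does not give the bucket join's exact procedure, so the sketch below is a generic partition-based equi-join, not the authors' algorithm: both inputs are hashed into buckets on the join key, and each bucket pair is joined independently, so results start flowing before all buckets are processed and tuples in other buckets are never compared.

```python
from collections import defaultdict

def bucket_join(left, right, key_left, key_right, n_buckets=8):
    """Partition both inputs into hash buckets on the join key, then join
    bucket by bucket, emitting matches as each bucket pair is processed."""
    lbuckets, rbuckets = defaultdict(list), defaultdict(list)
    for row in left:
        lbuckets[hash(key_left(row)) % n_buckets].append(row)
    for row in right:
        rbuckets[hash(key_right(row)) % n_buckets].append(row)
    for b in range(n_buckets):
        index = defaultdict(list)              # small per-bucket hash index
        for row in lbuckets.get(b, ()):
            index[key_left(row)].append(row)
        for row in rbuckets.get(b, ()):
            for match in index[key_right(row)]:
                yield match, row
```

Because the generator yields per bucket, a consumer can render the first screen of results while later buckets are still pending.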
6. Matrix Computations with Fortran and Paging.
- Author
-
Moler, Cleve B.
- Subjects
- *
MATRICES software , *SUBROUTINES (Computer programs) , *VIRTUAL storage (Computer science) , *PAGING (Computer science) , *DECOMPILERS (Computer programs) , *DYNAMIC storage allocation (Computer science) , *COMPUTER terminals , *COMPUTER operating systems , *COMPUTER algorithms - Abstract
Discusses matrix computations using the Fortran programming language and paging. Influence of the order of nested loops on the efficiency of conventional Fortran programs; effect of nested-loop modifications on large programs run under a paging-based operating system.
- Published
- 1972
- Full Text
- View/download PDF
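Moler's point is that the inner loop should traverse memory contiguously: in column-major Fortran, down a column, so consecutive accesses stay on the same page. A Python sketch can only illustrate the shape of the two loop orders (nested lists are laid out row by row, so the fast direction is reversed relative to Fortran); this is illustrative, not a paging benchmark.

```python
def make(n):
    """n x n matrix as nested lists (each row is one contiguous object)."""
    return [[0.0] * n for _ in range(n)]

def touch_row_major(a):
    """Inner loop walks along one row: consecutive elements, locality-friendly
    (the analogue of varying the FIRST subscript fastest in Fortran)."""
    s = 0.0
    for row in a:
        for x in row:
            s += x
    return s

def touch_col_major(a):
    """Inner loop walks down one column: every step jumps to a different row,
    the access pattern that thrashes pages when the loop order is wrong."""
    s = 0.0
    n = len(a)
    for j in range(n):
        for i in range(n):
            s += a[i][j]
    return s
```

Both functions compute the same sum; only the memory access order differs, which is exactly the knob the article turns.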
7. A faster algorithm for testing polynomial representability of functions over finite integer rings.
- Author
-
Guha, Ashwin and Dukkipati, Ambedkar
- Subjects
- *
POLYNOMIALS , *SUBROUTINES (Computer programs) , *COMPUTER algorithms , *RINGS of integers , *MODULES (Algebra) - Abstract
Given a function from Z_n to itself, one can determine its polynomial representability by using the Kempner function. In this paper we present an alternative characterization of polynomial functions over Z_n by constructing a generating set for the Z_n-module of polynomial functions. This characterization yields an algorithm that is faster on average in deciding polynomial representability. We also extend the characterization to functions in several variables. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
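The Kempner function mentioned in the abstract is straightforward to compute directly. The sketch below also checks the classical fact behind its role here: the falling factorial of length μ(n) is identically zero mod n (any μ(n) consecutive integers have a product divisible by μ(n)!, hence by n), which is why polynomial functions mod n never need degree μ(n) or more.

```python
def kempner(n):
    """Kempner's function: the smallest m with n | m!."""
    fact, m = 1, 1
    while fact % n:
        m += 1
        fact *= m
    return m

def falling_factorial(x, k):
    """x (x - 1) ... (x - k + 1)."""
    prod = 1
    for i in range(k):
        prod *= x - i
    return prod
```

For example μ(6) = 3 because 3! = 6, and x(x−1)(x−2) ≡ 0 (mod 6) for every integer x.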
8. EVL: A framework for multi-methods in C++.
- Author
-
Le Goc, Yannick and Donzé, Alexandre
- Subjects
- *
C++ , *RUN time systems (Computer science) , *PROGRAMMING languages , *COMPUTER algorithms , *SUBROUTINES (Computer programs) , *OBJECT-oriented methods (Computer science) - Abstract
Multi-methods are functions whose calls at runtime are resolved depending on the dynamic types of more than one argument. They are useful for common programming problems. However, while many languages provide different mechanisms to implement them in one way or another, there is still, to the best of our knowledge, no library or language feature that handles them in a general and flexible way. In this paper, we present the EVL (Extended Virtual function Library) framework, which provides a set of classes in C++ aiming at solving this problem. The EVL framework generalizes virtual function dispatch through the number of dimensions and selects the function to invoke using a so-called Function Comparison Operator (FCO). Our library provides both symmetric and asymmetric dispatch algorithms that can be refined by the programmer to include criteria other than class inheritance. For instance, the EVL framework provides multi-methods with predicate dispatch by defining a dedicated FCO based not only on the dynamic types of the arguments but also on their values. This flexibility greatly helps to resolve ambiguities without having to define new functions. Our multi-methods also unify dispatch tables and caching by introducing cache strategies whose implementation balances memory and speed. To define multi-methods in C++, we implement a non-intrusive reflection library providing fast dynamic casting and supporting dynamic class loading. Our multi-methods are policy-based class templates that support virtual but not repeated inheritance. They check the type compatibility of functions at compile time, preserve type safety and resolve function calls at runtime by invoking the cache or updating it by computing the selected function for the requested tuple of types. By default, our multi-methods handle dispatch errors at runtime by throwing exceptions, but an error-code strategy can be set up by defining a dedicated policy class. Performance of our multi-methods is comparable with that of standard virtual functions when configured with a fast cache. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
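EVL's C++ template machinery does not translate directly, but the underlying idea, selecting a function by the dynamic types of all arguments with fallback along inheritance chains, can be sketched in Python. The MRO walk below prefers the leftmost argument's type, an asymmetric dispatch order akin to the one the paper describes; all class and method names are illustrative, not EVL's API.

```python
import itertools

class MultiMethod:
    """Dispatch a call on the dynamic types of *all* arguments, walking each
    argument's MRO so subclass instances fall back to base-class entries."""
    def __init__(self, name):
        self.name, self.registry = name, {}

    def register(self, *types):
        def deco(fn):
            self.registry[types] = fn
            return fn
        return deco

    def __call__(self, *args):
        # itertools.product varies the last axis fastest, so the leftmost
        # argument's exact type is preferred: asymmetric dispatch.
        for combo in itertools.product(*(type(a).__mro__ for a in args)):
            fn = self.registry.get(combo)
            if fn is not None:
                return fn(*args)
        raise TypeError("no match for %s%r" %
                        (self.name, tuple(type(a).__name__ for a in args)))

class Shape: pass
class Circle(Shape): pass

intersect = MultiMethod("intersect")

@intersect.register(Circle, Circle)
def _(a, b):
    return "circle-circle"

@intersect.register(Shape, Shape)
def _(a, b):
    return "generic"
```

A production version would cache resolved (type, type) tuples, which is the table/cache unification the paper discusses.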
9. Improved algorithms to network p-center location problems.
- Author
-
Bhattacharya, Binay and Shi, Qiaosheng
- Subjects
- *
COMPUTER algorithms , *COMPUTER networks , *DATA analysis , *SUBROUTINES (Computer programs) , *COMPUTER simulation - Abstract
In this paper we show that a p (⩾ 2)-center location problem in general networks can be transformed to the well-known Klee's measure problem (Overmars and Yap, 1991) [15]. This results in a significantly improved algorithm for the continuous case with running time O(m^p n^{p/2} 2^{log* n} log n) for p ⩾ 3, where n is the number of vertices, m is the number of edges, and log* n denotes the iterated logarithm of n (Cormen et al., 2001) [10]. For p = 2, the running time of the improved algorithm is O(m^2 n log^2 n). The previous best result for the problem is O(m^p n^p α(n) log n), where α(n) is the inverse Ackermann function (Tamir, 1988) [17]. When the underlying network is a partial k-tree (k fixed), we exploit the geometry inherent in the problem and propose a two-level tree decomposition data structure which can be used to efficiently solve discrete p-center problems for small values of p. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
10. Algorithm 936.
- Author
-
Krogh, Fred T.
- Subjects
- *
FORTRAN , *SUBROUTINES (Computer programs) , *COMPUTER algorithms , *ERROR messages (Computer science) , *APPLICATION software research - Abstract
A code is presented which offers a simple clean way to get output that is very easy to read. Special support is given for the output of error messages which are a part of an application package or subprogram library. The code uses many of the features in Fortran 2003, and the "NEWUNIT=" in an open statement from Fortran 2008. The latter can easily be replaced with "UNIT=99". One goal here is to illustrate some of the nice features in recent incarnations of Fortran. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
11. Joint utility function-based scheduling for two-way communication services in wireless networks.
- Author
-
So, Jaewoo
- Subjects
- *
SUBROUTINES (Computer programs) , *COMPUTER scheduling , *WIRELESS communications , *COMPUTER networks , *QUALITY assurance , *COMPUTER algorithms - Abstract
Abstract: A type of joint utility function-based scheduling is proposed for two-way communication services in wireless networks. The scheduling of uplink and downlink services is done jointly so that the base station selects a user efficiently and fairly while considering the channel state of both the uplink and the downlink. Because a user generally has two communication links, an uplink and a downlink, the overall satisfaction with a communication service can be formulated as the sum of the quality of the uplink and downlink services. However, most of the previous types of scheduling for the uplink and downlink were designed separately and independently. This paper proposes a joint scheduling algorithm for integrated uplink and downlink services: a base station selects a user while simultaneously considering both the uplink channel state and the downlink channel state. An analytical model is developed for the purpose of determining the scheduling metric, the system throughput, and the level of fairness. The numerical and computer simulation results show that in comparison with conventional proportional fair scheduling the proposed joint scheduling achieves a better throughput while satisfying the fairness among users. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
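The selection step of a joint scheduler can be sketched as follows. This is a plausible shape for the metric (the sum of each link's instantaneous-to-average rate ratio, proportional-fair style), not the paper's exact utility function; all rate arrays are illustrative.

```python
def select_user(inst_ul, inst_dl, avg_ul, avg_dl):
    """Pick the user maximizing a joint uplink+downlink PF-style metric.

    Each list is indexed by user: instantaneous and long-run average rates
    for the uplink and downlink.  Summing the two ratios lets a user with
    one excellent link win a slot even if the other link is mediocre.
    """
    def metric(u):
        return inst_ul[u] / avg_ul[u] + inst_dl[u] / avg_dl[u]
    return max(range(len(inst_ul)), key=metric)
```

Separate uplink/downlink schedulers would instead run two independent max computations, which is the baseline the paper improves on.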
12. Swallow swarm optimization algorithm: a new method to optimization.
- Author
-
Neshat, Mehdi, Sepidnam, Ghodrat, and Sargolzaei, Mehdi
- Subjects
- *
PARTICLE swarm optimization , *PARTICLES , *SUBROUTINES (Computer programs) , *COMPUTATIONAL intelligence , *ARTIFICIAL intelligence , *COMPUTER algorithms - Abstract
This paper presents a new swarm intelligence-based method for optimization that models swallow swarm movement and related behaviors. There are three kinds of particles in this method: explorer particles, aimless particles, and leader particles. Each particle has its own characteristics, but all of them fly within a central colony. Each particle exhibits intelligent behavior and perpetually explores its surroundings with an adaptive radius. The positions of neighboring particles, the local leader, and the public leader are considered, and a move is then made. The swallow swarm optimization (SSO) algorithm has demonstrated high efficiency: fast movement in flat regions (regions where there is no hope of finding food and the derivative is zero), avoidance of local extremum points, high convergence speed, and intelligent participation among the different groups of particles. The SSO algorithm has been tested on 19 benchmark functions, achieving good results on multimodal, rotated and shifted functions. Its results have been compared to standard PSO, the FSO algorithm, and ten different variants of PSO. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
13. Efficient All Top-k Computation—A Unified Solution for All Top-k, Reverse Top-k and Top-m Influential Queries.
- Author
-
Ge, Shen, Hou U, Leong, Mamoulis, Nikos, and Cheung, David W.
- Subjects
- *
QUERY (Information retrieval system) , *COMPUTER algorithms , *PROGRAM transformation , *LINEAR programming , *SUBROUTINES (Computer programs) - Abstract
Given a set of objects P and a set of ranking functions F over P, an interesting problem is to compute the top ranked objects for all functions. Evaluation of multiple top-k queries finds application in systems with a heavy workload of ranking queries (e.g., online search engines and product recommendation systems). The simple solution of evaluating the top-k queries one by one does not scale well; instead, the system can exploit the fact that similar queries share common results to accelerate search. This paper is, to our knowledge, the first thorough study of this problem. We propose methods that compute all top-k queries in batch. Our first solution applies the block indexed nested loops paradigm, while our second technique is a view-based algorithm. We propose appropriate optimization techniques for the two approaches and demonstrate experimentally that the second approach is consistently the best. Our approach facilitates evaluation of other complex queries that depend on the computation of multiple top-k queries, such as reverse top-k and top-m influential queries. We show that our batch processing technique for these complex queries outperforms the state of the art by orders of magnitude. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
14. A chaotic particle swarm optimization exploiting a virtual quartic objective function based on the personal and global best solutions.
- Author
-
Tatsumi, Keiji, Ibuki, Takeru, and Tanino, Tetsuzo
- Subjects
- *
PARTICLE swarm optimization , *COMPUTER algorithms , *SUBROUTINES (Computer programs) , *MATHEMATICAL optimization , *COMPUTER systems , *APPROXIMATION theory - Abstract
Abstract: The particle swarm optimization (PSO) method is one of the population-based techniques for global optimization, in which a number of candidate solutions, called particles, simultaneously move toward the best tentative solutions found so far, called the personal and global bests, respectively. Since exploration ability is important for the PSO to find a desirable solution, various methods have been investigated to improve it. In this paper, we propose a PSO with a new chaotic system derived from the steepest descent method applied to a virtual quartic objective function with perturbations whose global minima lie at the personal and global bests; elements of each particle's position are updated by either the proposed chaotic system or the standard update formula. Thus, the proposed PSO can search for solutions around the personal and global bests intensively without being trapped at any local minimum, owing to the chaoticness. Moreover, we derive an approximate sufficient condition on the parameter values of the proposed system under which the system is chaotic. Through computational experiments, we verify the performance of the proposed PSO by applying it to some global optimization problems. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
15. Tutorial: Simulating chromatography with Microsoft Excel Macros.
- Author
-
Kadjo, Akinde and Dasgupta, Purnendu K.
- Subjects
- *
CHROMATOGRAPHIC analysis , *ANALYTICAL chemistry , *SUBROUTINES (Computer programs) , *SIMULATION methods & models , *BIODEGRADATION , *COMPUTER algorithms - Abstract
Abstract: Chromatography is one of the cornerstones of modern analytical chemistry; developing an instinctive feeling for how chromatography works will be invaluable to future generations of chromatographers. Specialized software programs exist that handle and manipulate chromatographic data; there are also some that simulate chromatograms. However, the algorithm details of such software are not transparent to a beginner. In contrast, how spreadsheet tools like Microsoft Excel™ work is well understood, and the software is nearly universally available. We show that the simple repetition of an equilibration process at each plate (a spreadsheet row), followed by discrete movement of the mobile phase down by a row, easily automated by a subroutine (a "Macro" in Excel), readily simulates chromatography. The process is readily understood by a novice. Not only does this permit simulation of isocratic and simple single-step gradient elution, linear or multistep gradients are also easily simulated. The versatility of a transparent and easily understandable computational platform further enables the simulation of complex but commonly encountered chromatographic scenarios such as the effects of nonlinear isotherms, active sites, column overloading, on-column analyte degradation, etc. These are not as easily simulated by available software. Views of the separation as it develops on the column and as it is seen by an end-column detector are both available in real time. Excel 2010™ also permits a 16-level (4-bit) color gradation of numerical values in a column/row; this permits visualization of a band migrating down the column, much as Tswett may have originally observed it, but in a numerical domain. All parameters of relevance (partition constants, elution conditions, etc.) are readily changed so their effects can be examined. Illustrative Excel spreadsheets are given in the Supporting Information; these are easily modified by the user, or the user can write his/her own routine.
[Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
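The equilibrate-then-shift loop the tutorial automates in an Excel macro is a Craig plate model, and it ports to a few lines of Python. The retention fraction, plate count and step count below are illustrative parameters, not values from the paper.

```python
def simulate_plates(n_plates, n_steps, k):
    """Craig-style plate model of a column: equilibrate every plate, then
    shift the mobile phase one plate downstream (one macro step per loop).

    k is the fraction of analyte retained by the stationary phase of a
    plate at equilibrium; 1 - k travels with the mobile phase.
    """
    stationary = [0.0] * n_plates
    mobile = [0.0] * n_plates
    stationary[0] = 1.0                      # inject a unit amount at the inlet
    for _ in range(n_steps):
        for i in range(n_plates):            # equilibration: split each plate
            total = stationary[i] + mobile[i]
            stationary[i] = k * total
            mobile[i] = (1.0 - k) * total
        mobile = [0.0] + mobile[:-1]         # mobile phase moves down one row
    return [s + m for s, m in zip(stationary, mobile)]

band = simulate_plates(n_plates=50, n_steps=20, k=0.5)
```

After N steps with retention k the band peaks near plate N(1 − k) with a binomial (near-Gaussian) profile, which is the behavior the spreadsheet makes visible.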
16. Empirical Modelling of Linear Algebra Shared-Memory Routines.
- Author
-
Cámara, Jesús, Cuenca, Javier, García, Luis-Pedro, and Giménez, Domingo
- Subjects
LINEAR algebra ,COMPUTER storage devices ,SUBROUTINES (Computer programs) ,EMPIRICAL research ,COMPUTER algorithms ,PROBLEM solving - Abstract
Abstract: In this work the behavior of the multithreaded implementation of some LAPACK routines on PLASMA and Intel MKL is analyzed. The main goal is to develop a methodology for the installation and modelling of shared-memory linear algebra routines so that decisions to reduce the execution time can be taken at run time. Typical decisions are: the number of threads to use; the block or tile size in algorithms by blocks or tiles; and the routine to use when several algorithms or implementations are available to solve the problem. Experiments carried out with PLASMA and Intel MKL show that decisions can be taken automatically and satisfactory execution times are obtained. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
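The install-time measurement loop behind such a methodology can be sketched generically: time the routine under each candidate configuration and remember the winner for use at run time. This is an illustrative harness, not the paper's modelling methodology; `run` stands in for any routine parameterized by thread count.

```python
import time

def best_threads(run, candidates):
    """Time run(t) for each candidate thread count and keep the fastest.

    run(t) is assumed to execute the routine of interest with t threads;
    the returned dict of timings could feed a more refined model.
    """
    timings = {}
    for t in candidates:
        start = time.perf_counter()
        run(t)
        timings[t] = time.perf_counter() - start
    best = min(timings, key=timings.get)
    return best, timings
```

The same loop extends to block/tile sizes or alternative implementations by enlarging the candidate space, at the cost of longer installation time.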
17. Heuristic repairing operators for 3D tetrahedral mesh generation using the advancing-front technique
- Author
-
Adamoudis, Lazaros D., Koini, Georgia, and Nikolos, Ioannis K.
- Subjects
- *
HEURISTIC programming , *NUMERICAL grid generation (Numerical analysis) , *APPLICATION software , *COMPUTER algorithms , *SUBROUTINES (Computer programs) , *INFORMATION technology - Abstract
Abstract: In various applications of the advancing front based algorithms, involving complicated three dimensional geometries, the algorithm fails to complete the mesh generation process, as a number of small regions cannot be meshed using the standard procedure. These regions, formed by faces of the current front, are usually disconnected, highly non-convex and cannot be handled with simple actions. A strategy involving different novel and already in use operators is presented in this work, to successfully discretize such regions after the completion of the standard advancing front technique (AFT), in order to improve the robustness of the mesh generation procedure for difficult and complicated geometries. Examples of generated meshes using the proposed methodology are also presented for the validation of the proposed strategy. [Copyright &y& Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
18. On the Zero-Covering Subroutine of the Hungarian Algorithm [Macar Algoritmasının Sıfırları Kapatma Alt Yordamı Üzerine].
- Author
-
Berberler, Murat Ersen, Uğurlu, Onur, and Kizilateş, Gözde
- Subjects
- *
SUBROUTINES (Computer programs) , *ASSIGNMENT problems (Programming) , *COMPUTER algorithms , *COMPUTER science literature , *PROBLEM solving , *MATCHING theory , *ZERO (The number) - Abstract
The Hungarian algorithm is one of the most well-known methods in the computer science literature. In each step of this method, the cost matrix is systematically reduced to a new matrix in order to obtain an optimal solution for the assignment problem. The subroutine of the algorithm consists of determining the minimum number of lines needed to cover all zeros in the reduced cost matrix and modifying the matrix according to that number of lines. In this paper, the methods in the literature for covering all zeros with a minimum number of lines are first examined; then a new method is proposed and computational experiments are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2012
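The minimum number of covering lines discussed above equals, by König's theorem, the size of a maximum matching between rows and columns through zero positions, so one standard way to compute it (not necessarily the paper's new method) is an augmenting-path matching:

```python
def min_zero_cover(cost):
    """Minimum number of lines (rows + columns) covering every zero of a
    cost matrix.  By König's theorem this equals the maximum matching in
    the bipartite row/column graph whose edges are the zero positions."""
    n_rows, n_cols = len(cost), len(cost[0])
    zero_cols = [[j for j in range(n_cols) if cost[i][j] == 0]
                 for i in range(n_rows)]
    match_col = [-1] * n_cols                # column -> row matched to it

    def augment(r, seen):
        # Try to match row r, re-routing previously matched rows if needed.
        for c in zero_cols[r]:
            if c not in seen:
                seen.add(c)
                if match_col[c] == -1 or augment(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False

    return sum(augment(r, set()) for r in range(n_rows))
```

The Hungarian algorithm then compares this count with the matrix dimension: if fewer lines than n suffice, the matrix is adjusted and the subroutine runs again.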
19. Determination of Sparse Representations of Multiple-Valued Logic Functions by Using Covering Codes.
- Author
-
ASTOLA, JAAKKO T. and STANKOVIĆ, RADOMIR S.
- Subjects
CODING theory ,FUNCTIONAL analysis ,SUBROUTINES (Computer programs) ,BINARY number system ,COMPUTER algorithms - Abstract
The paper points out the relationships and similarities between some problems in the theory of covering codes and the determination of sparse functional expressions for logic functions. Based on these connections, we propose a method to derive functional expressions that have an a priori specified number of product terms. The method can be applied to either binary or multiple-valued functions with different sets of values for the variables or function values by appropriately selecting the underlying covering code. The number of product terms in the related functional expression is determined by the covering radius of the code. We present algorithms to determine the coefficients in these expressions, discuss their complexities, and provide a direct construction to extend the application of this approach to binary and multiple-valued functions with a large number of variables. [ABSTRACT FROM AUTHOR]
- Published
- 2012
20. Refined typing to localize the impact of forced strictness on free theorems.
- Author
-
Seidel, Daniel and Voigtländer, Janis
- Subjects
- *
PROGRAMMING languages , *PROOF theory , *SUBROUTINES (Computer programs) , *COMPUTER algorithms , *COMPUTER programming , *FUNCTIONAL programming (Computer science) , *HASKELL (Computer program language) - Abstract
Free theorems establish interesting properties of parametrically polymorphic functions, solely from their types, and serve as a nice proof tool. For pure and lazy functional programming languages, they can be used with very few preconditions. Unfortunately, in the presence of selective strictness, as provided in languages like Haskell, their original strength is reduced. In this paper we present an approach for overcoming this weakness in specific situations. Employing a refined type system which tracks the use of enforced strict evaluation, we rule out unnecessary restrictions that otherwise emerge. Additionally, we provide (and implement) an algorithm determining all refined types for a given term. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
21. A k-means type clustering algorithm for subspace clustering of mixed numeric and categorical datasets
- Author
-
Ahmad, Amir and Dey, Lipika
- Subjects
- *
COMPUTER algorithms , *CLUSTER analysis (Statistics) , *INVARIANT subspaces , *SUBROUTINES (Computer programs) , *DATA analysis , *CATEGORIES (Mathematics) , *COMPUTER programming - Abstract
Abstract: Almost all subspace clustering algorithms proposed so far are designed for numeric datasets. In this paper, we present a k-means type clustering algorithm that finds clusters in data subspaces of mixed numeric and categorical datasets. In this method, we compute each attribute's contribution to the different clusters. We propose a new cost function for a k-means type algorithm. One advantage of this algorithm is its complexity, which is linear with respect to the number of data points. The algorithm is also useful in describing cluster formation in terms of attribute contributions to the different clusters. The algorithm is tested on various synthetic and real datasets to show its effectiveness. The clustering results are explained using the attribute weights in the clusters, and they are also compared with published results. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
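The general shape of k-means over mixed data can be sketched with the classic k-prototypes recipe: squared distance on numeric attributes plus weighted simple matching on categorical ones, with means and modes as cluster centers. This is not the paper's new cost function or its attribute-weighting scheme; the data, gamma weight and deterministic initialization are all illustrative.

```python
def mixed_distance(x, c, numeric_idx, gamma=1.0):
    """Squared Euclidean distance on numeric attributes plus gamma-weighted
    simple matching on categorical ones (k-prototypes style)."""
    d = 0.0
    for i, (a, b) in enumerate(zip(x, c)):
        d += (a - b) ** 2 if i in numeric_idx else gamma * (a != b)
    return d

def k_prototypes(data, k, numeric_idx, iters=10):
    centers = list(data[:k])               # deterministic init for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:                     # assign each point to nearest center
            nearest = min(range(k),
                          key=lambda j: mixed_distance(x, centers[j], numeric_idx))
            clusters[nearest].append(x)
        for j, pts in enumerate(clusters): # update: mean / mode per attribute
            if not pts:
                continue
            centers[j] = tuple(
                sum(p[i] for p in pts) / len(pts)
                if i in numeric_idx
                else max(set(p[i] for p in pts), key=[p[i] for p in pts].count)
                for i in range(len(data[0])))
    return centers, clusters
```

The paper's contribution replaces the fixed gamma with learned per-attribute contributions, so that each subspace cluster weights attributes differently.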
22. A Polynomial Approximation Algorithm for Real-Time Maximum-Likelihood Estimation.
- Author
-
Villien, Christophe and Ostertag, Eric P.
- Subjects
- *
APPROXIMATION theory , *COMPUTER programming , *COMPUTER algorithms , *MACHINE theory , *ESTIMATION theory software , *MATHEMATICAL statistics , *INSTRUMENTAL variables (Statistics) , *SUBROUTINES (Computer programs) , *DIFFERENTIAL equations - Abstract
Maximum-likelihood estimation subject to nonlinear measurement functions is generally performed through optimization algorithms when accuracy is required and enough processing time is available, or with recursive filters for real-time applications, but at the expense of a loss of accuracy. In this paper, we propose a new estimator for parameter estimation based on a polynomial approximation of the measurement signal. The raw dataset is replaced by n + 1 independent polynomial samples (PS) for a smoothing polynomial of order n, resulting in a reduction of the computational burden. It is shown that the PSs must be sampled at certain deterministic instants, and an approximate formula for the variance of the PSs is also provided. Moreover, it is also proved, and illustrated on three examples, that the new estimator which processes the PSs is equivalent to the standard maximum-likelihood estimator based on the raw dataset, provided that the measurement function and its first derivatives can be approximated with a polynomial of order n. Since this algorithm proceeds from a compact representation of a measurement signal, it can find applications in real-time processing, power-saving processing, or estimation based on compressed data, even if this latter field has not been investigated from a theoretical perspective. Its structure, made up of several separate tasks, is also adapted to distributed processing problems. Because the performance of the method is related to the polynomial approximation quality, the algorithm is well suited for smooth measurement functions, as in trajectory estimation applications. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
23. XML Processing and Data Integration with XQuery.
- Author
-
Robie, Jonathan
- Subjects
XML (Extensible Markup Language) ,COMPUTER algorithms ,SUBROUTINES (Computer programs) ,RELATIONAL databases ,WEB services ,SCRIPTING languages (Computer science) ,SQL ,QUERY languages (Computer science) ,JAVA programming language ,XPATH (Computer program language) - Abstract
Most Web applications exchange data as XML, but they create and process this data with languages that don't have native support for XML. With appropriate middleware, XQuery can dramatically simplify this process, treating all data sources as though they were XML. This article shows how to use XQuery for native XML processing and data integration, briefly explores other technologies used in the same space, and discusses some XQuery extensions for scripting and updates that are under way. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
24. Real-time bas-relief generation from a 3D mesh.
- Author
-
Zhang, Yu-Wei, Zhou, Yi-Qi, Zhao, Xiao-Feng, and Yu, Gang
- Subjects
REAL-time computing ,COMPUTER algorithms ,FEATURE extraction ,CONTROL theory (Engineering) ,MATHEMATICAL mappings ,IMAGE analysis ,SUBROUTINES (Computer programs) - Abstract
Abstract: Most of the existing approaches to bas-relief generation operate in image space, which is quite time-consuming in practice. This paper presents a different bas-relief generation algorithm based on geometric compression and starting from a 3D mesh input. The feature details are first extracted from the original objects using a spatial bilateral filtering technique. Then, a view-dependent coordinate mapping method is applied to build the height domain for the current view. After fitting the compression datum plane, the algorithm uses an adaptive compression function to scale and combine the Z values of the base mesh and the fine details. This approach offers control over the level of detail, making it flexible for the adjustment of the appearance of details. For a typical input mesh with 100k triangles, this algorithm computes a bas-relief in 0.214 s. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
25. An effective algorithm for calculating the Chandrasekhar function
- Author
-
Jablonski, A.
- Subjects
- *
COMPUTER algorithms , *ELECTRON transport , *COMPILERS (Computer programs) , *COMPUTER operating systems , *SUBROUTINES (Computer programs) , *MATHEMATICAL models , *FORTRAN , *LICENSE agreements - Abstract
Abstract: Numerical values of the Chandrasekhar function are needed with high accuracy in evaluations of theoretical models describing electron transport in condensed matter. An algorithm for such calculations should be both fast and accurate; an accuracy of 10 decimal digits is needed for some applications. Two of the integral representations of the Chandrasekhar function are prospective for constructing such an algorithm, but suitable transformations are needed to obtain a rapidly converging quadrature. A mixed algorithm is proposed in which the Chandrasekhar function is calculated from one of two algorithms, depending on the value of one of the arguments. Program summary: Program title: CHANDRAS Catalogue identifier: AEMC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 567 No. of bytes in distributed program, including test data, etc.: 4444 Distribution format: tar.gz Programming language: Fortran 90 Computer: Any computer with a FORTRAN 90 compiler Operating system: Linux, Windows 7, Windows XP RAM: 0.6 Mb Classification: 2.4, 7.2 Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter. Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm combines the two, selecting, for each range of the argument ω, whichever performs fastest.
Restrictions: Two input parameters for the Chandrasekhar function, x and ω (notation used in the code), are restricted to the range: and , which is sufficient in numerous applications. Unusual features: The program uses the Romberg quadrature for integration. This quadrature is applicable to integrands that satisfy several requirements (the integrand does not vary rapidly and does not change sign in the integration interval; furthermore, the integrand is finite at the endpoints). Consequently, the analyzed integrands were transformed so that these requirements were satisfied. In effect, one can conveniently control the accuracy of integration. Although the desired fractional accuracy was set at , the obtained accuracy of the Chandrasekhar function was much higher, typically 13 decimal places. Running time: Between 0.7 and 5 milliseconds for one pair of arguments of the Chandrasekhar function. [Copyright Elsevier]
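The CHANDRAS program itself is Fortran 90 and uses transformed integral representations; as a rough cross-check, the Chandrasekhar H-function for isotropic scattering can also be computed by iterating a standard stable identity, 1/H(μ) = √(1−ω) + (ω/2)∫₀¹ μ′H(μ′)/(μ+μ′) dμ′, on quadrature nodes. This sketch is not the paper's algorithm and makes no attempt at its 10-digit guarantee:

```python
import numpy as np

def chandrasekhar_H(x, omega, n=64, tol=1e-12, itmax=500):
    """Chandrasekhar H-function for isotropic scattering (a fixed-point
    sketch, not the CHANDRAS algorithm). Iterates
        1/H(mu) = sqrt(1 - omega) + (omega/2) * int_0^1 mu' H(mu')/(mu + mu') dmu'
    on Gauss-Legendre nodes, then evaluates at the requested x."""
    t, w = np.polynomial.legendre.leggauss(n)
    mu = 0.5 * (t + 1.0)          # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w
    H = np.ones(n)
    c = np.sqrt(1.0 - omega)
    for _ in range(itmax):
        # integral_i = (omega/2) * sum_j w_j mu_j H_j / (mu_i + mu_j)
        integral = (omega / 2.0) * ((w * mu * H) /
                                    (mu[None, :] + mu[:, None])).sum(axis=1)
        Hnew = 1.0 / (c + integral)
        if np.max(np.abs(Hnew - H)) < tol:
            H = Hnew
            break
        H = Hnew
    # evaluate at x using the converged nodal values
    integral_x = (omega / 2.0) * np.sum(w * mu * H / (x + mu))
    return 1.0 / (c + integral_x)
```

For ω = 0 the function is identically 1, and it grows with both μ and the albedo ω, which gives quick sanity checks on the iteration.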
- Published
- 2012
- Full Text
- View/download PDF
26. Regularization of multi-soliton form factors in sine-Gordon model
- Author
-
Pálmai, T.
- Subjects
- *
MATHEMATICAL regularization , *SOLITONS , *MATHEMATICAL models , *PROGRAMMING languages , *SUBROUTINES (Computer programs) , *COMPUTER algorithms , *OPERATOR theory , *CROSS-platform software development - Abstract
Abstract: A general and systematic regularization is developed for the exact solitonic form factors of exponential operators in the (1+1)-dimensional sine-Gordon model by analytical continuation of their integral representations. The procedure is implemented in Mathematica. Test results are shown for four- and six-soliton form factors. Program summary: Program title: SGFF Catalogue identifier: AEMG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1462 No. of bytes in distributed program, including test data, etc.: 15 488 Distribution format: tar.gz Programming language: Mathematica [1] Computer: PC Operating system: Cross-platform Classification: 7.7, 11.1, 23 Nature of problem: The multi-soliton form factors of the sine-Gordon model (relevant in two-dimensional physics) were previously given only by a highly non-trivial integral representation with a limited domain of convergence. Practical applications of the form factors, e.g. calculation of correlation functions in two-dimensional condensed matter systems, were not possible in general. Solution method: Using analytic continuation techniques an efficient algorithm is found and implemented in Mathematica, which provides a general and systematic way to calculate multi-soliton form factors in the sine-Gordon model. The package contains routines to compute the two-, four- and six-soliton form factors. Running time: Strongly dependent on the desired accuracy and the number of solitons. For physical rapidities, after an initialization of about 30 s, the calculation of the two-, four- and six-soliton form factors at a single point takes approximately 0.5 s, 2.5 s and 8 s, respectively.
Reference: [1] Wolfram Research, Inc., Mathematica Edition: Version 7.0, Wolfram Research, Inc., Champaign, Illinois, 2008. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
27. Negotiated Interfaces for Software Reuse.
- Author
-
Novak Jr., Gordon S., Hill, Fredrick N., Man-Lee Wan, and Sayrs, Brian G.
- Subjects
- *
COMPUTER software , *SUBROUTINES (Computer programs) , *COMPUTER programming , *COMPUTER algorithms , *ELECTRONIC data processing - Abstract
A significant barrier to the reuse of software is the rigid interface presented by a subroutine. For nontrivial data structures, it is unlikely that the existing form of the data of an application will match the requirements of a separately written subroutine. We describe two methods of interfacing existing data to a subroutine: generation of a program to convert the data to the form needed by the subroutine, and rewriting the subroutine, through compilation, to fit the existing data. Both methods can be invoked through easily used menu-based negotiation with the user. These methods have been implemented using the GLISP language and compiler. [ABSTRACT FROM AUTHOR]
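The paper's first method (generating a conversion program so existing data fits a subroutine's interface) can be illustrated with a toy Python analogue; the names and field mapping here are hypothetical, and the real system works via the GLISP language and compiler rather than runtime closures:

```python
def make_adapter(field_map):
    """Build a conversion function that reshapes an application record into
    the form a reusable subroutine expects -- a toy analogue of the paper's
    generated interface code (hypothetical, not from the paper)."""
    def adapt(record):
        return {target: getter(record) for target, getter in field_map.items()}
    return adapt

# A reusable subroutine expecting a mapping with 'x' and 'y' keys.
def distance_from_origin(p):
    return (p['x'] ** 2 + p['y'] ** 2) ** 0.5

# Application data stored in a different shape: a plain (x, y) tuple.
adapt = make_adapter({'x': lambda r: r[0], 'y': lambda r: r[1]})
print(distance_from_origin(adapt((3, 4))))  # 5.0
```

The paper's second method would instead specialize the subroutine itself at compile time to read the tuple directly, avoiding the conversion step.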
- Published
- 1992
- Full Text
- View/download PDF
28. Computing Exact Distributions for Several Ordered 2 x K Tables.
- Author
-
Hirji, Karim F. and Vollset, Stein E.
- Subjects
FORTRAN ,COMPUTER algorithms ,DISTRIBUTION (Probability theory) ,SUBROUTINES (Computer programs) - Abstract
Provides a Fortran code, based on an efficient algorithm, for computing the exact distribution of the sufficient statistic for the common logistic parameter in a series of ordered 2 x K tables. Two principal Fortran subroutines of algorithm; Use of double-precision arithmetic; Optimal performance of the code in terms of speed and memory requirement.
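Exact-distribution algorithms of this family typically represent each table's score distribution as a generating polynomial and multiply the polynomials together; the dictionary-based sketch below illustrates that core convolution step generically (invented data; it is not the authors' Fortran code or their specific recursion):

```python
from collections import defaultdict

def convolve(dist_a, dist_b):
    """Multiply two 'generating polynomials' stored as {exponent: coefficient}
    dicts -- the core step in polynomial-multiplication exact-distribution
    algorithms (a generic sketch)."""
    out = defaultdict(float)
    for ea, ca in dist_a.items():
        for eb, cb in dist_b.items():
            out[ea + eb] += ca * cb
    return dict(out)

# Exponent = score contribution of one table, coefficient = number of ways.
# Convolving per-table polynomials yields the exact distribution of the sum.
tables = [{0: 1, 1: 2, 2: 1}, {0: 1, 1: 1}, {0: 2, 2: 1}]
dist = {0: 1.0}
for t in tables:
    dist = convolve(dist, t)
print(dict(sorted(dist.items())))
```

The total mass of the result is the product of the per-table totals, which is a convenient correctness check when building such code.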
- Published
- 1994
- Full Text
- View/download PDF
29. Number Tally.
- Author
-
Royston, J. P. and Altman, D. G.
- Subjects
COMPUTER algorithms ,SUBROUTINES (Computer programs) ,STATISTICS - Abstract
Presents an algorithm for tallying values or tabulating data using subroutine RTALLY. Description and purpose of the algorithm; Numerical method used; Description of the integer function BRANCH; Execution time for RTALLY.
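RTALLY itself is a Fortran subroutine and is not reproduced in this record; the essential operation it performs, tallying values into a frequency table, looks like this in Python (sample data invented):

```python
from collections import Counter

# A minimal tally in the spirit of RTALLY: count occurrences of each value
# and print a frequency table in ascending value order.
values = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
tally = Counter(values)
for value, count in sorted(tally.items()):
    print(f"{value:>3} : {count}")
```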
- Published
- 1988
- Full Text
- View/download PDF
30. Exploratory Functions on Nondeterministic Strategies, up to Lower Bisimilarity.
- Author
-
Levy, Paul Blain and Weldemariam, Kidane Yemane
- Subjects
SUBROUTINES (Computer programs) ,COMPUTER algorithms ,LAMBDA calculus ,ITERATIVE methods (Mathematics) ,COMPUTER simulation ,DECIDABILITY (Mathematical logic) ,MATHEMATICAL continuum ,COMPUTATIONAL mathematics - Abstract
Abstract: We consider a typed lambda-calculus with no function types, only alternating sum and product types, so that closed terms represent strategies. We add nondeterminism and consider strategies up to lower (i.e. divergence-insensitive) bisimilarity. We investigate the question: when is a function on strategies definable by an open term (with sufficiently large nondeterminism)? The answer is: when it is “exploratory”. This is a kind of iterated continuity property, coinductively defined, that is decidable in the case of a function between finite types. In particular, any exploratory function between countably nondeterministic strategies is definable by a continuum nondeterministic term. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
31. Dynamic Computation of Derivatives.
- Author
-
Lesk, Arthur M.
- Subjects
- *
COMPILERS (Computer programs) , *SYSTEMS software , *SILICON compilers , *COMPUTER programming , *COMPUTER algorithms , *SUBROUTINES (Computer programs) - Abstract
It is shown how Wengert's procedure for computation of derivatives can be implemented conveniently by use of compiler-generated complex addition, subtraction, and linkage to complex arithmetic subroutines. Evaluation of a function and its derivative proceed in parallel, as in Wengert's procedure, but with the "imaginary" parts of variables declared complex bearing the values of the derivatives of the real parts. This technique provides a simple way to compute the derivatives of a function, without the need for deriving and programming the evaluation of explicit formulas for the derivatives. [ABSTRACT FROM AUTHOR]
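The idea of carrying a (value, derivative) pair through arithmetic, which Lesk realizes with compiler-generated complex operations, is what is now called forward-mode differentiation with dual numbers; a minimal modern sketch (not the 1967 implementation):

```python
class Dual:
    """A (value, derivative) pair propagated through arithmetic -- the role
    Lesk's 'real' and 'imaginary' parts play, written out explicitly."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(4.0, 1.0)                 # seed dx/dx = 1
y = f(x)
print(y.val, y.der)  # 57.0 26.0
```

No explicit derivative formula is ever written down: the derivative emerges from the overloaded arithmetic, exactly as in Wengert's procedure.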
- Published
- 1967
- Full Text
- View/download PDF
32. Element distinctness revisited.
- Author
-
Portugal, Renato
- Subjects
- *
COMPUTER science , *SUBROUTINES (Computer programs) , *QUERY languages (Computer science) , *COMPUTER algorithms , *PROBABILITY theory - Abstract
The element distinctness problem is the problem of determining whether the elements of a list are distinct: if x = (x1, …, xN) is a list with N elements, we ask whether the elements of x are distinct or not. The solution on a classical computer requires N queries, using sorting to check whether there are equal elements. In the quantum case, it is possible to solve the problem in O(N^(2/3)) queries. There is an extension which asks whether there are k colliding elements, known as the element k-distinctness problem. This work obtains optimal values of two critical parameters of Ambainis’ seminal quantum algorithm (SIAM J Comput 37(1):210-239, 2007). The first critical parameter is the number of repetitions of the algorithm’s main block, which inverts the phase of the marked elements and calls a subroutine. The second parameter is the number of quantum walk steps interlaced with oracle queries. We show that, when the optimal values of the parameters are used, the algorithm’s success probability is 1 − O(N^(−1/(k+1))), quickly approaching 1. The specification of the exact running time and success probability is important in practical applications of this algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
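The classical sorting-based check mentioned in the element distinctness abstract above is straightforward; each element is read once (N queries), with the comparison work dominated by the sort:

```python
def has_duplicates(xs):
    """Classical element-distinctness check: sort, then scan adjacent pairs.
    Reads each element once; comparison cost is O(N log N) from the sort."""
    s = sorted(xs)
    return any(a == b for a, b in zip(s, s[1:]))

print(has_duplicates([5, 3, 9, 3]))  # True
print(has_duplicates([5, 3, 9, 1]))  # False
```

The quantum-walk algorithm discussed in the paper beats this query count, needing only O(N^(2/3)) oracle queries.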
33. Erratum to “FaCE: a tool for three body Faddeev calculations with core excitation” [Comput. Phys. Commun. 161 (2004) 87–107]
- Author
-
Thompson, I.J., Nunes, F.M., and Danilin, B.V.
- Subjects
- *
SUBROUTINES (Computer programs) , *COMPUTER software , *COMPUTER programming , *COMPUTER algorithms - Abstract
Abstract: The subroutine FANCDEF3 in the code FaCE, published as ADTW in CPC 161 (2004) 87, is corrected. [Copyright Elsevier]
- Published
- 2005
- Full Text
- View/download PDF