64 results
Search Results
2. Software Implementation of an Algorithm for Automatic Detection of Lineaments and Their Properties in Open-Pit Dumps.
- Author
- Popov, S. E., Potapov, V. P., and Zamaraev, R. Y.
- Subjects
- CONVOLUTIONAL neural networks; OBJECT recognition (Computer vision); ALGORITHMS; IMAGE recognition (Computer vision); AERIAL photography; PYTHON programming language; SOURCE code
- Abstract
This paper presents an algorithm and a description of its software implementation for the detection of lineaments (ground erosions or cracks) in aerial images of open pits. The proposed approach is based on the apparatus of convolutional neural networks for semantic classification of binarized images of lineament objects, as well as graph theory for determining the geometric location of linearized lineament objects with subsequent calculation of their lengths and areas. As source data, three-channel RGB images of high-resolution aerial photography (10×10 cm) are used. The software module of the model is logically divided into three levels: preprocessing, detection, and post-processing. The first level implements the preprocessing of input data to form a training sample based on successive transformations of RGB images into binary images by using the OpenCV library. A neural network of the U-Net type, which includes convolutional (Encoder) and upsampling (Decoder) blocks, represents the second level of the information model. At this level, automatic detection of objects is implemented. The third level of the model is responsible for calculating the areas and lengths of detected objects; the result produced by the convolutional neural network is passed to it as input data. The lineament area is calculated by counting the total number of foreground points and multiplying by the pixel area. The lineament length is calculated by linearizing the areal object into a segmented object with node pixels and then calculating the lengths between them while taking into account the resolution of the source image. The software module can work with fragments of the source image by combining them. The module is implemented in Python and its source code is available at https://gitlab.ict.sbras.ru/popov/lineaments/-/tree/master/lineaments-cnn. [ABSTRACT FROM AUTHOR]
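The post-processing level's area and length computations reduce to simple pixel arithmetic. A minimal Python sketch of that step, assuming a binary mask given as nested lists and the 10×10 cm pixel size stated in the abstract (function names and the toy data are illustrative, not the module's actual API):

```python
import math

PIXEL = 0.10  # metres per pixel, per the 10x10 cm resolution in the abstract

def lineament_area(mask):
    """Area = number of foreground pixels times the area of one pixel."""
    return sum(v for row in mask for v in row) * PIXEL * PIXEL

def lineament_length(nodes):
    """Length of a linearized (skeletonized) lineament given its node
    pixels in order, scaled by the source-image resolution."""
    return sum(math.hypot(x2 - x1, y2 - y1) * PIXEL
               for (x1, y1), (x2, y2) in zip(nodes, nodes[1:]))

mask = [[0, 1, 1],
        [0, 0, 1]]
area = lineament_area(mask)                          # 3 pixels -> 0.03 m^2
length = lineament_length([(0, 1), (0, 2), (1, 2)])  # two unit steps -> 0.2 m
```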
- Published
- 2024
- Full Text
- View/download PDF
3. CGWO: An Improved Grey Wolf Optimization Technique for Test Case Prioritization.
- Author
- Nayak, Gayatri, Barisal, Swadhin Kumar, and Ray, Mitrabinda
- Subjects
- MATHEMATICAL optimization; WOLVES; METAHEURISTIC algorithms; ALGORITHMS
- Abstract
The convergence rate is widely accepted as a performance measure for choosing among metaheuristic algorithms. We therefore propose a novel technique to improve the convergence rate of the existing Grey Wolf Optimization (GWO) algorithm. The proposed approach also prioritizes the test cases obtained after executing the input benchmark programs. This paper makes three technical contributions. First, we generate test cases for the input benchmark programs. Second, we prioritize test cases using an improved version of the existing GWO algorithm (CGWO). Third, we analyze the obtained results and compare them with state-of-the-art metaheuristic techniques. This work is validated by running the proposed model on six benchmark programs. The obtained results show that our proposed approach achieves a 48% better APFD score for the prioritized order of test cases than for the non-prioritized order. We also achieve a better convergence rate, requiring around 4000 fewer iterations than the existing methods on the same platform. [ABSTRACT FROM AUTHOR]
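The APFD metric used in that comparison has a standard closed form: APFD = 1 − (ΣTF_i)/(n·m) + 1/(2n), where TF_i is the position of the first test revealing fault i. A small Python sketch (the test data are illustrative):

```python
def apfd(first_revealing_positions, n_tests):
    """Average Percentage of Faults Detected for a prioritized test order.
    first_revealing_positions: 1-based position of the first test case
    that reveals each fault."""
    m = len(first_revealing_positions)
    return 1 - sum(first_revealing_positions) / (n_tests * m) + 1 / (2 * n_tests)

# 5 tests; 3 faults first revealed by tests at positions 1, 2 and 3:
score = apfd([1, 2, 3], 5)  # 1 - 6/15 + 1/10 = 0.7
```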
- Published
- 2023
- Full Text
- View/download PDF
4. Application of Computer Simulation to the Anonymization of Personal Data: Synthesis-Based Anonymization Model and Algorithm.
- Author
- Borisov, A. V., Bosov, A. V., and Ivanov, A. V.
- Subjects
- PERSONALLY identifiable information; APPLICATION software; COMPUTER simulation; ALGORITHMS; ELECTRONIC data processing; STATISTICAL sampling
- Abstract
This paper describes the second part of our study devoted to automated anonymization of personal data. The overview and analysis of research prospects are supplemented with a practical result. An anonymization model is proposed, which reduces anonymization of personal data to manipulation of samples of random elements of different types. The key idea of our approach to anonymization of personal data with preservation of their usefulness is the use of the synthesis method, i.e., the complete replacement of all non-anonymized data with synthetic values. In the proposed model, a set of element types is selected, for which corresponding synthesis templates are proposed. The set of templates constitutes a synthesis-based anonymization algorithm. Technically, each template is based on a well-known statistical tool: frequency estimates of probabilities, Parzen–Rosenblatt kernel density estimates, statistical means, and covariances. The proposed approach is illustrated by a simple example from civil aviation. [ABSTRACT FROM AUTHOR]
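One of the synthesis templates mentioned, the frequency-estimate template for categorical data, can be sketched in a few lines of Python: the original column is replaced entirely with values drawn from its empirical distribution. The column name and values below are hypothetical, loosely in the spirit of the civil aviation example:

```python
import random
from collections import Counter

def synthesize_categorical(column, size, rng=random.Random(0)):
    """One template from the frequency-estimate family: draw synthetic
    values from the empirical distribution of the original column, so
    that no original record survives but the marginal statistics do."""
    counts = Counter(column)
    values = list(counts)
    weights = [counts[v] for v in values]
    return rng.choices(values, weights=weights, k=size)

# Hypothetical column: ticket class, 80% economy / 20% business
original = ["economy"] * 80 + ["business"] * 20
synthetic = synthesize_categorical(original, 1000)
```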
- Published
- 2023
- Full Text
- View/download PDF
5. Parallel Approximation of Multidimensional Tensors Using GPUs.
- Author
- Kapralov, N. S., Morozov, A. Yu., and Nikulin, S. P.
- Subjects
- PARALLEL algorithms; ALGORITHMS
- Abstract
When solving many applied research problems, it is necessary to work with multidimensional arrays (tensors). In practice, an efficient and compact representation of these objects in the form of so-called tensor trains is used. The paper considers a parallel implementation of the TT-cross algorithm, which allows one to obtain a decomposition of a multidimensional array into a tensor train, using a CUDA GPU. The main aspects and features of the parallel implementation of the algorithm are presented. The resulting parallel version of the algorithm was tested on a representative number of examples. A significant reduction in computational time is demonstrated compared to a similar sequential implementation of the algorithm, which indicates the efficiency of the proposed approaches to parallelization. [ABSTRACT FROM AUTHOR]
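A tensor train stores a d-dimensional array as a chain of 3-way cores, and reading back one element is a chain of small matrix products. A pure-Python sketch of that evaluation step (the TT-cross construction itself, and its CUDA parallelization, are well beyond a few lines; the data layout here is an assumption for illustration):

```python
def tt_element(cores, index):
    """Evaluate one element of a tensor stored as a tensor train.
    Each core is indexed as core[i], where mode index i selects an
    r_prev x r_next matrix (a list of rows); the element is the product
    of these matrices, which collapses to a scalar for a valid train."""
    vec = [1.0]  # 1 x r row vector, starts as the 1x1 identity
    for core, i in zip(cores, index):
        mat = core[i]  # r_prev x r_next matrix for this mode index
        vec = [sum(vec[a] * mat[a][b] for a in range(len(vec)))
               for b in range(len(mat[0]))]
    return vec[0]

# Rank-1 train for the 2x2 tensor A[i, j] = x[i] * y[j]:
core1 = [[[2.0]], [[3.0]]]   # x = (2, 3)
core2 = [[[5.0]], [[7.0]]]   # y = (5, 7)
value = tt_element([core1, core2], (1, 0))  # 3 * 5 = 15
```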
- Published
- 2023
- Full Text
- View/download PDF
6. Perfect Sets of Paths in the Full Graph of SDN Switches.
- Author
- Burdonov, I. B., Vinarskii, E. M., Yevtushenko, N. V., and Kossatchev, A. S.
- Subjects
- VIRTUAL networks; COMPLETE graphs; DATA modeling; ALGORITHMS; ORDERED sets
- Abstract
This paper investigates the implementation of virtual networks on the SDN data plane modeled by a graph of physical connections between network nodes. A virtual network is defined as a set of ordered host pairs (sender and receiver) and is implemented by a set of host–host paths that uniquely determine the settings of switches. A set of paths is said to be perfect if any subset of its paths can be implemented loop-free, i.e., without endless movement of packets in a loop, without duplicate paths (when the host receives the same packet several times), and without unintended paths (when the host receives a packet directed to some other host). In the case where the switch graph is a complete graph, sufficient conditions for the existence of the largest perfect set of paths connecting all pairs of different hosts are established. Algorithms for constructing this largest perfect set are proposed and their complexity is estimated. The paper also reports preliminary results of conducted computer experiments, which suggest that the proposed sufficient conditions are not necessary ones. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Algorithm for Calculating Correctly Rounded Exponential Function in Double Precision Using Double-Extended Arithmetic.
- Author
- Godunov, A. N.
- Subjects
- ALGORITHMS; TIME measurements; EXPONENTS
- Abstract
Functions that calculate correctly rounded exponents provide the best approximation and ensure cross-platform portability. This paper presents an algorithm for calculating the correctly rounded value of the exponential function when its argument and value are double-precision numbers and the calculations use extended double-precision arithmetic. The formal description of the algorithm is provided. The proposed algorithm is implemented as a C function. The results of its testing, including the measurement of its time characteristics and comparison with other algorithms, are presented. [ABSTRACT FROM AUTHOR]
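The core idea, evaluate in a wider format and round only once to the target format, can be imitated in Python with the decimal module standing in for double-extended arithmetic. This is a sketch of the principle only, not the paper's C algorithm: a full correctness argument must also bound the approximation error and rule out the rare hard-to-round cases near rounding boundaries.

```python
from decimal import Decimal, getcontext

def exp_correctly_rounded(x):
    """Evaluate exp(x) with many guard digits, then round once to a
    double. The decimal module plays the role of the double-extended
    arithmetic used in the paper's C implementation."""
    getcontext().prec = 40          # far more digits than a double carries
    return float(Decimal(x).exp())  # single final rounding to double

result = exp_correctly_rounded(1.0)  # the double nearest to e
```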
- Published
- 2022
- Full Text
- View/download PDF
8. Binarization of the Swallow Swarm Optimization for Feature Selection.
- Author
- Slezkin, A. O., Hodashinsky, I. A., and Shelupanov, A. A.
- Subjects
- FEATURE selection; TWO-way analysis of variance; ALGORITHMS; TRANSFER functions; DEGLUTITION
- Abstract
In this paper, we propose six methods for binarization of the swallow swarm optimization (SSO) algorithm to solve the feature selection problem. The relevance of the selected feature subsets is estimated by two classifiers: a fuzzy rule-based classifier and a classifier based on k-nearest neighbors. To find an optimal subset of features, we take into account the number of features and classification accuracy. The developed algorithms are tested on datasets from the KEEL repository. For the statistical evaluation of the binarization methods, we use Friedman's two-way analysis of variance by ranks for related samples. The best feature selection result is shown by a hybrid method based on modified algebraic operations and MERGE operation introduced by the authors of this paper. The best classification accuracy is achieved with a V-shaped transfer function. [ABSTRACT FROM AUTHOR]
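Transfer-function binarization maps a continuous swarm position onto a binary feature mask: each coordinate passes through a transfer function that yields a bit probability. A minimal sketch of the V-shaped variant mentioned above (the specific function |tanh(x)| and the sampling scheme are common choices in the literature, assumed here for illustration rather than taken from the paper):

```python
import math
import random

def v_transfer(x):
    """A V-shaped transfer function: maps a continuous position
    component to a bit probability in [0, 1)."""
    return abs(math.tanh(x))

def binarize(position, rng=random.Random(42)):
    """Turn a continuous swallow position into a binary feature mask:
    bit i is set with probability v_transfer(position[i])."""
    return [1 if rng.random() < v_transfer(x) else 0 for x in position]

mask = binarize([0.0, 5.0, -5.0, 0.1])  # bit 0 is always 0: v_transfer(0) == 0
```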
- Published
- 2021
- Full Text
- View/download PDF
9. Algorithm for Solving the Cauchy Problem for a Two-Dimensional Difference Equation with Initial Data Defined in a "Strip".
- Author
- Apanovich, M. S., Lyapin, A. P., and Shadrin, K. V.
- Subjects
- DIFFERENCE equations; PROBLEM solving; ALGORITHMS; CAUCHY problem; ALGEBRA; POLYNOMIALS
- Abstract
In this paper, we propose an algorithm for solving the Cauchy problem for a two-dimensional difference equation with constant coefficients at a point, based on its coefficients and the initial data for the Cauchy problem, which are defined in a "strip." For this purpose, computer algebra methods are employed. To automate the process of computing the solution, the algorithm is implemented in MATLAB, with the input data being the matrix of coefficients of the two-dimensional polynomial difference equation, the coordinates of the initial data matrix (which defines the structure of the difference equation), and the coordinates of the point that determines the dimension of the initial data matrix. The output of the algorithm is a solution of the Cauchy problem (with the initial data defined in a "strip") for the two-dimensional difference equation, which is a function value at the desired point. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
10. On the Possibility of Secure Program Obfuscation in Some Model of Cloud Computing.
- Author
- Shokurov, A. V., Abramova, I. V., Varnovsky, N. P., and Zakharov, V. A.
- Subjects
- CLOUD computing; DISTRIBUTED computing; INFORMATION technology security; ALGORITHMS; COMPUTER systems; IMAGE encryption
- Abstract
In this paper, we study the possibility of using a certain cloud computing model supplied with cryptoservers to obfuscate software programs. Earlier, we proposed this cloud computing model in our study of some information security problems for multi-client distributed computing over encrypted data. Based on this model, we proposed a new approach involving the use of threshold homomorphic cryptosystems for program obfuscation. The main result of this paper is a new definition of the resistance of obfuscation of programs in the cloud computing model and a theorem proving the cryptographic strength of the proposed program obfuscation algorithm under the assumption of the existence of cryptographically strong threshold homomorphic encryption systems. The paper is organized as follows. In the introductory section, we discuss the main aspects of the information security problems for cloud computing systems. Section 2 describes the objectives of program obfuscation and gives a brief overview of the main achievements in its study. The next section provides general information about homomorphic cryptosystems. Section 4 describes a more specialized class of homomorphic cryptosystems: threshold homomorphic encryption systems. Section 5 introduces the cloud computing model, which is used as a framework for our program obfuscation techniques. For this computing environment, Section 6 formulates the definition of the cryptographic strength of program obfuscation, describes a new method of program obfuscation using threshold homomorphic cryptosystems, and proves the cryptographic strength of the proposed obfuscation algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
11. Algorithm for Constructing Modular Projections for Correcting Multiple Errors Based on a Redundant Residue Number System Using Maximum Likelihood Decoding.
- Author
- Babenko, M., Nazarov, A., Tchernykh, A., Pulido-Gaytan, B., Cortés-Mendoza, J. M., and Vashchenko, I.
- Subjects
- NUMBER systems; FAULT tolerance (Engineering); DATA warehousing; ALGORITHMS; PESTICIDE residues in food
- Abstract
One of the most important applications of the Redundant Residue Number System (RRNS) is to improve the fault tolerance of data storage, processing, and transmission. Correcting multiple errors is a challenging computational task, mainly because of the numerous combinations of erroneous residuals at the error localization stage. In this paper, we propose an approach for constructing modular projections to correct any number of errors. The algorithm uses Maximum Likelihood Decoding (MLD) and the Approximate Rank (AR) to reduce the number of projections and the processing time. The AR-RRNS with MLD algorithm provides a number of modular projections close to the theoretical lower bound of the most efficient state-of-the-art algorithm. [ABSTRACT FROM AUTHOR]
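Error localization in an RRNS builds on ordinary residue reconstruction via the Chinese Remainder Theorem; redundancy means more moduli than needed, so subsets of residues can be cross-checked. A minimal sketch of that underlying reconstruction step (the paper's projection-selection and maximum likelihood machinery is not reproduced here; the moduli are illustrative):

```python
from math import prod

def crt_reconstruct(residues, moduli):
    """Recover the integer from its residues (Chinese Remainder Theorem),
    the basis of any residue number system, redundant or not."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi modulo m
    return x % M

moduli = [3, 5, 7, 11]     # the last modulus is redundant for values < 105
value = 42
residues = [value % m for m in moduli]
assert crt_reconstruct(residues, moduli) == value
# A redundant check: the non-redundant subset must agree
assert crt_reconstruct(residues[:3], moduli[:3]) == value
```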
- Published
- 2021
- Full Text
- View/download PDF
12. Two-Stage Method for Grouping News with Similar Topics.
- Author
- Skorniakov, K. A., Laskina, A. S., and Turdakov, D. Yu.
- Subjects
- NEWS agencies; ALGORITHMS; EVALUATION methodology
- Abstract
This paper is devoted to event detection as part of news stream analysis. An event is regarded as a group of news texts dedicated to one real-world fact or situation. We analyze news from Russian news agencies. A two-stage clustering method is proposed, which combines a rough clustering algorithm used at the first stage and a refinement classifier used at the second stage. In addition, we present an open labeled dataset for news event detection, which is based on the Yandex News service. Empirical evaluation of the proposed method on this dataset proves its effectiveness for event detection in news texts. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. Effective Implementations of Topic Modeling Algorithms.
- Author
- Apishev, M. A.
- Subjects
- ALGORITHMS; FAULT-tolerant computing; MACHINE learning; DATA warehousing; PARALLEL programming; BATCH processing
- Abstract
In this paper, we provide an overview of effective EM-like learning algorithms for latent Dirichlet allocation (LDA) models and additively regularized topic models (ARTMs). First, we review 11 techniques for efficient topic modeling based on synchronous and asynchronous parallel computing, distributed data storage, streaming, batch processing, RAM optimization, and fault tolerance improvement. Second, we review 14 effective implementations of topic modeling algorithms proposed in the literature over the past 10 years that use different combinations of the techniques mentioned above. Their comparative analysis is carried out. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. A Fast Search Algorithm for SqueeSAR Distributed Scatterers in the Problem of Calculating Displacement Velocities.
- Author
- Popov, S. E. and Potapov, V. P.
- Subjects
- SEARCH algorithms; ALGORITHMS; PARALLEL programming; BIVECTORS; COMPUTER systems; PARALLEL processing; PYTHON programming language
- Abstract
This paper describes a software implementation of a fast distributed scatterer search algorithm for the problem of displacement velocity calculation based on the Apache Spark platform. A complete scheme for calculating displacement velocities by the persistent scatterer method is considered. The proposed algorithm is integrated into the scheme after the stage of subpixel-accuracy alignment of a stack of time-series images. The search for distributed scatterers is carried out independently in shift windows over the entire area of the image. The presence of distributed scatterers is determined based on the assumption that pairs of samples in the window, which are composed of vectors of complex pixel values in each of the N images, are homogeneous. This assumption stems from the fulfillment of the Kolmogorov–Smirnov criterion for each pair. To estimate the phases of homogeneous pixels, a maximization problem is solved. It is shown that the proposed algorithm is not iterative and can be implemented in the framework of the parallel computing paradigm. To enable distributed in-memory processing of radar data arrays (from 60 images) across many physical nodes in a network environment, we use the Apache Spark parallel processing platform. In this case, the time it takes to find distributed scatterers is reduced by a factor of 10 on average as compared to a single-processor implementation of the algorithm. The comparative results of testing the computing system on a demo cluster are presented. The algorithm is implemented in Python with a detailed description of the objects and methods of the algorithm. [ABSTRACT FROM AUTHOR]
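The per-pair homogeneity decision rests on the two-sample Kolmogorov–Smirnov statistic, the largest gap between the empirical CDFs of the two samples. A pure-Python sketch with a hypothetical threshold (a real test compares the statistic against a critical value at a chosen significance level, which depends on the sample sizes):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical gap
    between the empirical CDFs of the two samples."""
    def cdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    points = sorted(set(a) | set(b))
    return max(abs(cdf(a, x) - cdf(b, x)) for x in points)

def homogeneous(a, b, threshold=0.5):
    """Declare a pixel pair homogeneous when the statistic stays below a
    critical threshold (0.5 here is purely illustrative)."""
    return ks_statistic(a, b) < threshold

same = [1, 2, 3, 4, 5]
shifted = [11, 12, 13, 14, 15]
assert homogeneous(same, same)          # D = 0
assert not homogeneous(same, shifted)   # D = 1
```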
- Published
- 2021
- Full Text
- View/download PDF
15. Model of Pseudo-Random Sequences Generated by Encryption and Compression Algorithms.
- Author
- Kozachok, A. V. and Spirin, A. A.
- Subjects
- SHIFT registers; EMAIL security; PROBLEM solving; MATHEMATICAL statistics; DATA encryption; DIGITAL signatures; EMAIL systems; INFORMATION technology security; ALGORITHMS
- Abstract
Classification of high-entropy data sources is one of the key problems in the field of information security. Currently, there are many methods for classifying encrypted and compressed sequences; however, they mostly use digital signatures or service information found in the headers of the containers used to store or transfer data. This paper analyzes the state of research in the field of classification of encrypted and compressed data and develops a model of encrypted and compressed sequences. Our experiments demonstrate a high accuracy of the proposed approach, which allows us to conclude that the methods for classifying encrypted and compressed data used in our study have been improved. The approach can be implemented in data leak prevention systems or corporate email systems to analyze the attachments sent outside the controlled perimeter of a government agency or enterprise. Purpose of the research: to develop a model of pseudo-random sequences generated by data encryption and compression algorithms that most accurately reflects the statistical properties of these sequences. Methods of the research: statistical data analysis, mathematical statistics, and machine learning. Results of the research: an analysis of the studies aimed at solving the problem of classifying encrypted and compressed sequences in the field of information security is carried out. A model of pseudo-random sequences generated by encryption and compression algorithms is developed taking into account their statistical features: the distribution of bytes and the distribution of subsequences of limited length, which constitute a new probabilistic space. The choice of the statistical features used in the pseudo-random sequence model is justified. Experiments for determining the hyperparameters of the classifier are carried out on a dataset generated from encrypted and compressed files without taking their headers into account. The constraints used in the pseudo-random sequence model, namely, the length of pseudo-random sequences (approximately 600 Kb), are defined. Experiments for determining the effect of the statistical features used in the model on classification accuracy are conducted. The proposed approach allows encrypted and compressed data to be classified with an accuracy of 0.97. [ABSTRACT FROM AUTHOR]
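The byte-distribution feature at the heart of such models is easy to illustrate: a normalized byte histogram and its Shannon entropy, with high-entropy (encrypted or compressed) sources clustering near 8 bits per byte. A Python sketch, with zlib standing in as the compressor and an illustrative text sample:

```python
import math
import zlib
from collections import Counter

def byte_features(data):
    """Statistical features of a byte sequence: the normalized byte
    histogram and its Shannon entropy (bits per byte)."""
    counts = Counter(data)
    n = len(data)
    probs = [counts[b] / n for b in range(256)]
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return probs, entropy

plain = b"the quick brown fox jumps over the lazy dog " * 200
_, h_plain = byte_features(plain)
_, h_packed = byte_features(zlib.compress(plain))
# The compressed stream looks far more uniform than English text
```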
- Published
- 2021
- Full Text
- View/download PDF
16. Constructing Onboard Switched Networks of Minimum Complexity.
- Author
- Kostenko, V. A. and Morkvin, A. A.
- Subjects
- MAXIMA & minima; ALGORITHMS; MULTICASTING (Computer networks); SWITCHING systems (Telecommunication)
- Abstract
In this paper, we formulate the problem of constructing an onboard switched network of minimum complexity required for real-time transmission of periodic messages, as well as propose algorithms for its solution (generating the network structure and system of virtual links). Results of the experimental evaluation of the proposed algorithms for constructing onboard AFDX networks are also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
17. Modification of the Marching Cubes Algorithm to Obtain a 3D Representation of a Planar Image.
- Author
- Hernández Farías, Delia Irazú, Cabrera, Rafael Guzmán, Fraga, Teodoro Cordova, Luna, José Zacarías Huamaní, and Gomez Aguilar, Jose Francisco
- Subjects
- IMAGE representation; ALGORITHMS; ROTATIONAL motion
- Abstract
The registration of a 3D model over an image can be seen as the alignment of visual correspondences extracted from these two data sources. This is a challenging task, and it is even more complex when the two images have different modalities. This paper introduces an approach that allows matching features detected in two different modalities, photographs and 3D models, by using a common 2D representation. Our approach is based on a modification of the Marching Cubes algorithm that aims to remove ambiguous cases without adding further calculations in each cube. We build on the idea that it is crucial to split the equivalence cases into two classes. Indeed, considering all possible inside/outside states at the four corners of a cube face, only four non-trivial cases remain after eliminating equivalences under rotation. The obtained results allow us to validate the feasibility of the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. Regular Expressions for Web Advertising Detection Based on an Automatic Sliding Algorithm.
- Author
- Riaño, D., Piñon, R., Molero-Castillo, G., Bárcenas, E., and Velázquez-Mena, A.
- Subjects
- INTERNET advertising; OPTICAL character recognition; ALGORITHMS; INTERNET marketing; AUTOMATION software; MARKETING models
- Abstract
This paper presents the automation of a Web advertising recognition algorithm using regular expressions. Currently, the use of regular expressions, optical character recognition, databases, and automated tests is critical for many software implementations. The tests were carried out in three Web browsers. As a result, the algorithm detects advertisements in Spanish that distract the user's attention and, above all, extract information from users. The main feature of the algorithm is that its automatic and versatile execution does not require access to the code of the page in question, and in the future it can become an application operating in the background. Support from optical character recognition gives acceptable efficiency in detecting advertising. Thanks to this identification, it may be possible to build different applications, both in favor of the user and of the brands, always with the aim of improving current online marketing models. [ABSTRACT FROM AUTHOR]
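The combination of OCR output and regular expressions can be sketched in a few lines: the text extracted from a page region is matched against advertising patterns. The Spanish patterns below are hypothetical examples, not the paper's actual rule set:

```python
import re

# Hypothetical patterns for Spanish-language ad snippets extracted by OCR
AD_PATTERNS = [
    r"\bcompra ahora\b",
    r"\boferta\b",
    r"\bdescuento\s+del?\s+\d{1,2}\s*%",
]
AD_RE = re.compile("|".join(AD_PATTERNS), re.IGNORECASE)

def looks_like_ad(ocr_text):
    """True if the OCR-extracted text of a page region matches any
    advertising pattern."""
    return AD_RE.search(ocr_text) is not None

assert looks_like_ad("¡Oferta! Descuento del 50 % solo hoy")
assert not looks_like_ad("Noticias de última hora sobre economía")
```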
- Published
- 2020
- Full Text
- View/download PDF
19. Occlusion Culling Algorithm Based on Software Visibility Checks.
- Author
- Gonakhchyan, V. I.
- Subjects
- ALGORITHMS; VISIBILITY; COMPUTER software; OBJECT tracking (Computer vision); RENDERING (Computer graphics); HARDWARE
- Abstract
Rendering of 3D scenes with a large number of objects is computationally intensive. Occlusion culling methods are used to improve rendering performance by reducing the number of objects to be processed on the GPU. In this paper, we consider occlusion culling methods that employ spatial and temporal coherence of visibility. Some of the most efficient methods are based on hardware occlusion queries. However, recording occlusion queries and receiving their results for a large number of objects can take considerable time. We propose an algorithm that removes occlusion query transmission overhead by performing occlusion checks on the CPU against a depth buffer downloaded from the GPU. The proposed algorithm is compared with a widespread algorithm based on hardware occlusion queries. It improves the rendering performance of 3D scenes with a large number of objects by reducing the number of transmitted rendering commands. [ABSTRACT FROM AUTHOR]
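A CPU-side visibility check against a downloaded depth buffer boils down to a conservative test: an object can be skipped if every depth-buffer texel covered by its screen-space bounding rectangle is closer to the camera than the nearest point of its bounding volume. A minimal sketch under that assumption (buffer layout and depth convention, smaller = closer, are illustrative):

```python
def box_occluded(depth_buffer, rect, box_depth):
    """Software visibility check: the object's screen-space bounding
    rectangle (x0, y0, x1, y1) is occluded if every covered texel of the
    depth buffer is closer to the camera than the nearest point of the
    object's bounding box (box_depth)."""
    x0, y0, x1, y1 = rect
    return all(depth_buffer[y][x] < box_depth
               for y in range(y0, y1)
               for x in range(x0, x1))

# 4x4 downsampled depth buffer: a wall at depth 0.2 fills the view
depth = [[0.2] * 4 for _ in range(4)]
assert box_occluded(depth, (0, 0, 4, 4), 0.9)      # box behind the wall: cull
assert not box_occluded(depth, (0, 0, 4, 4), 0.1)  # box in front: draw
```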
- Published
- 2020
- Full Text
- View/download PDF
20. Video-Based Content Recognition of Bank Cards with Mobile Devices.
- Author
- Chen, H., Ye, S., Kurilovich, A., Bohush, R., and Ablameyko, S.
- Subjects
- MOBILE banking industry; ALGORITHMS; OBJECT recognition (Computer vision); VIDEOS
- Abstract
In this paper, we propose an algorithm for the detection and recognition of all information fields on the front side of a bank card based on video sequences. The algorithm is intended for use on mobile devices. It consists of the following basic steps: detecting the card boundaries in a frame, segmenting the information fields, improving the quality of segments, localizing the boundaries of symbols, and recognizing blocks of symbols. We also conduct a series of experiments. Experimental results show that our algorithm achieves detection rates of 88% for all information fields and 92.5% for the bank card number and expiration date. The processing time per frame at different resolutions for each step on an iPhone 7 is presented. The experimental results confirm the efficiency of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
21. Adaptation Algorithm for Application Menus.
- Author
- Chujkova, E. N., Aidinyan, A. R., and Tsvetkova, O. L.
- Subjects
- ALGORITHMS; MENUS; APPLICATION software; INDIVIDUAL needs; USER interfaces; HIERARCHIES
- Abstract
A software application's menu is an important part of its user interface. User satisfaction and application usage efficiency depend on how well the menu is designed. Application menus designed by the developer are intended for the average user and do not take into account the needs and characteristics of individual users. A solution to this problem is the creation of user interfaces capable of adapting to the needs of an individual user. The approach to adapting the user menu of an application program proposed in this paper is based on a user model generated by monitoring user actions in the process of working with the program. A mathematical model of the application menu and a model of the application user are constructed, and an algorithm for adaptive modification of the application menu is proposed. The proposed algorithm for creating menu item links at higher hierarchy levels and hiding unused menu items makes it possible to reduce the activation time of menu items, thus increasing the user's productivity. [ABSTRACT FROM AUTHOR]
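The adaptation idea, monitor activations, promote frequently used items toward the top level, and hide unused ones, can be sketched with a simple usage counter. The class, threshold, and menu items below are hypothetical, not the paper's model:

```python
from collections import Counter

class AdaptiveMenu:
    """Sketch of menu adaptation: count activations per item, promote
    frequently used items to the top level, hide never-used items."""
    def __init__(self, items, promote_after=3):
        self.items = list(items)
        self.uses = Counter()
        self.promote_after = promote_after

    def activate(self, item):
        self.uses[item] += 1

    def top_level(self):
        promoted = [i for i in self.items
                    if self.uses[i] >= self.promote_after]
        rest = [i for i in self.items
                if 0 < self.uses[i] < self.promote_after]
        return promoted + rest  # never-used items are hidden

menu = AdaptiveMenu(["Open", "Export", "Settings"])
for _ in range(3):
    menu.activate("Export")
menu.activate("Open")
assert menu.top_level() == ["Export", "Open"]  # "Settings" is hidden
```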
- Published
- 2020
- Full Text
- View/download PDF
22. Functional Programming Library for C++.
- Author
- Krasnov, M. M.
- Subjects
- C++; PROGRAMMING languages; ALGORITHMS
- Abstract
Modern functional programming languages, such as Haskell, Scala, ML, and F#, have properties that make it possible to implement logically complicated algorithms relatively easily. Among such properties is the composition of functions, currying, metafunctions (functions over functions), and some others. This makes it possible to obtain complex functions by combining simpler functions. An example of a complicated algorithm is a parser. In Haskell, there is a library called Parsec, which is a set of elementary parsers; by combining these parsers, one can create more complicated parsers. This fact makes the library relatively simple, yet powerful. The majority of these tools are not explicitly included in C++. However, C++ is sufficiently powerful for implementing many properties of functional programming languages. In this paper, an attempt is made to develop such a library. [ABSTRACT FROM AUTHOR]
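The two properties named first, composition and currying, are easy to state concretely; a short Python sketch of what the C++ library emulates (Python here only illustrates the concepts, not the library's interface):

```python
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

def curry2(f):
    """Curry a two-argument function: curry2(f)(a)(b) == f(a, b)."""
    return lambda a: lambda b: f(a, b)

add = curry2(lambda a, b: a + b)
inc_then_double = compose(lambda x: 2 * x, add(1))
assert inc_then_double(5) == 12  # (5 + 1) * 2
```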
- Published
- 2020
- Full Text
- View/download PDF
23. Algorithm for Constructing an Analogue of the Binet Formula.
- Author
- Kuzovatov, V. I., Kytmanov, A. A., and Kuzovatova, O. I.
- Subjects
- COMPUTER systems; ALGORITHMS
- Abstract
In this paper, we describe an algorithm for constructing an analogue of the Binet formula, which is essential in deriving a functional relation to the classical Riemann zeta-function. The algorithm is implemented in the Maple computer algebra system. An example that illustrates the operation of the algorithm is presented. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
24. Minimal Basis of the Syzygy Module of Leading Terms.
- Author
- Shokurov, A. V.
- Subjects
- GROBNER bases; ALGORITHMS; ADMISSIBLE sets; ALGEBRAIC equations; COMPUTATIONAL complexity; GENERATORS of groups; DIOPHANTINE equations
- Abstract
Systems of polynomial equations are one of the most universal mathematical objects. Almost all problems of cryptographic analysis can be reduced to solving systems of polynomial equations, and the corresponding direction of research is called algebraic cryptanalysis. In terms of computational complexity, systems of polynomial equations cover the entire range of possible variants, from the algorithmic insolubility of Diophantine equations to well-known efficient methods for solving linear systems. Buchberger's method [5] transforms a system of algebraic equations into a system of a special type, the Gröbner basis of the original system, which enables the elimination of dependent variables. The Gröbner basis is determined based on an admissible ordering on a set of terms; the set of admissible orderings on the set of terms is infinite and even continual. The most time-consuming step in finding the Gröbner basis by Buchberger's algorithm is to prove that the S-polynomials form a system of generators of the K[X]-module of S-polynomials. Thus, a natural problem of finding a minimal system of generators arises; the existence of such a system follows from Nakayama's lemma. In this paper, we propose an algorithm for constructing this basis for any ordering. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
25. Parallelization of Implementations of Purely Sequential Algorithms.
- Author
- Bugerya, A. B., Kim, E. S., and Solovev, M. A.
- Subjects
- BINARY codes; ALGORITHMS; PARALLEL algorithms; PARALLEL programming
- Abstract
The work is dedicated to the parallelization of programs in especially difficult cases when the used algorithm is purely sequential, there are no parallel alternatives to this algorithm, and its execution time is unacceptably high. Various parallelization methods for software implementations of such algorithms and resulting computational load balancing are considered that make it possible to considerably accelerate the execution of application programs using purely sequential algorithms. The proposed methods are illustrated with examples of their application to two algorithms used in a dynamic binary code analysis toolset. The main goal of this paper is to show that the use of a purely sequential algorithm in a software implementation does not necessarily imply that its execution is inevitably sequential. The proposed methods of parallelizing implementations of such algorithms and balancing the resulting computational load can help develop efficient parallel programs that fully utilize the hardware capabilities of modern computers. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
26. Partial algorithms for satellite unknowns determination.
- Author
-
Panferov, A.
- Subjects
- *
ALGORITHMS , *DIFFERENTIAL equations , *COMPUTATIONAL complexity , *ALGEBRA software , *PICARD groups - Abstract
The concept of satellite unknowns with respect to a set of selected unknowns in linear homogeneous differential systems was introduced earlier by the author of this paper, and an algorithm for satellite unknowns testing was proposed. At one of the stages of this algorithm, it is required to compare Picard-Vessiot extensions of two differential systems constructed in the course of the algorithm's operation. The commonly accepted method of solving this problem, which is based on Hrushovski's algorithm, has rather high algorithmic complexity, which hampers its use in practice. In the paper, some partial algorithms for satellite unknowns testing are described. These algorithms are not always applicable but have relatively low computational complexity and can be implemented in computer algebra systems. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
27. Algorithms and Software Increasing the Efficiency of Processing Multidimensional Heterogeneous Data.
- Author
-
Zakharova, A. A., Nebaba, S. G., and Zavyalov, D. A.
- Subjects
- *
ALGORITHMS , *HUMAN facial recognition software , *INTERPOLATION algorithms , *COMPUTER software , *DATA - Abstract
An algorithm for selecting methods for processing multidimensional heterogeneous data, based on the general properties of the data used and of the methods under review, is proposed in this paper. The algorithm is implemented in software, and a group of interpolation algorithms is compared using the example of constructing a 3D human face model for a person recognition system. It is shown that the proposed algorithm for selecting data processing methods works successfully for a group of data interpolation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
28. Optimization Method for Cell Image Registration.
- Author
-
Guryanov, F. A. and Krylov, A. S.
- Subjects
- *
IMAGE registration , *MICROSCOPY , *CELL imaging , *AFFINE transformations , *ALGORITHMS , *COEFFICIENTS (Statistics) - Abstract
The image registration problem often arises in microscopy when analyzing cell images. The most popular registration methods are rigid methods that use affine transformations. These methods are good enough for different types of images and image modalities, but they are very slow. This makes speed optimization techniques for these methods of particular importance. In this paper, we propose an algorithm for finding the optimal image downsampling coefficient to speed up image registration methods. The algorithm is tested for different rigid registration methods on HeLa cell images. [ABSTRACT FROM AUTHOR]
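The speedup mechanism can be illustrated with a coarse-to-fine sketch (a toy 1-D version of my own, not the paper's 2-D method): the shift is estimated on downsampled signals, then refined at full resolution in a small window around the coarse estimate:

```python
# Coarse-to-fine translation estimation: register downsampled signals
# first, then refine the shift at full resolution in a small window.

def downsample(signal, k):
    """Average consecutive blocks of k samples."""
    return [sum(signal[i:i + k]) / k for i in range(0, len(signal) - k + 1, k)]

def best_shift(a, b, candidates):
    """Shift s of b minimizing the mean squared difference against a."""
    def msd(s):
        pairs = [(a[i], b[i + s]) for i in range(len(a)) if 0 <= i + s < len(b)]
        return sum((x - y) ** 2 for x, y in pairs) / len(pairs)
    return min(candidates, key=msd)

# A synthetic signal and a copy shifted by 6 samples.
base = [0.0] * 40 + [1.0] * 8 + [0.0] * 40
moved = base[6:] + [0.0] * 6

k = 2  # downsampling coefficient
coarse = k * best_shift(downsample(base, k), downsample(moved, k), range(-10, 11))
fine = best_shift(base, moved, range(coarse - k, coarse + k + 1))
assert fine == -6  # recovered shift matches the ground truth
```

The full-resolution search shrinks from 21 candidates to 5, which is where the downsampling coefficient trades accuracy of the coarse pass against total running time.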
- Published
- 2018
- Full Text
- View/download PDF
29. Refinement of the Coherent Point Drift Registration Results by the Example of Cephalometry Problems.
- Author
-
Lachinov, D. A., Getmanskaya, A. A., and Turlapov, V. E.
- Subjects
- *
CEPHALOMETRY , *ALGORITHMS , *IMAGE registration , *IMAGE reconstruction , *TIKHONOV regularization , *REGRESSION analysis - Abstract
Personalizing the three-dimensional atlas model (template) of an organ under analysis based on the data of the patient's three-dimensional examination is an important task for modern digital medicine. The template should be represented by a set of points Y and marked with the key points for the model (landmarks). The template is personalized by the non-rigid registration of the set Y with the set of points X that represents the patient's tomogram. Presently, coherent point drift (CPD) is the most popular method for solving the problem of non-rigid alignment. In this paper, we propose and explore an approach that substantially improves the CPD result in the problem of automatic registration of cephalometric points (CPs). The proposed algorithm remains robust in the presence of significant skull deformations. Traditionally, 3D cephalometry uses the geometric descriptor of a CP, which refines the position of the point on the bone surface. However, the result of applying the descriptor depends significantly on the accuracy of its anatomical basis reconstruction. The proposed approach solves this problem by clarifying the basis of geometric descriptors, the supporting elements of which are orbital planes and the Frankfurt (orbital-ear) horizontal. For this purpose, additionally marked points of the orbitals (YO) are included into the template Y. Once Y and X are aligned by the CPD method, the plane positions of the orbitals are refined by solving a linear regression problem on the subsets of YO for the left and right orbitals. Cases of refinement with and without Tikhonov regularization (ridge regression) are analyzed. The effect of the increase in the cardinality of the set X relative to Y on the registration accuracy is investigated. It is found that the condition |X| < |Y| has a negative effect on the accuracy, which increases when the cardinality of X relative to Y decreases.
The refinement of CPs by the geometric descriptor is carried out on tomogram data in the region around CPs that is found by the CPD method. The dimensions of this region along three coordinates are determined by the anatomical domain of a particular CP descriptor. The quality of the algorithm is measured by the Euclidean distance between hand-marked and automatically found points. The template Y for the algorithm is built on a trauma-deformed skull. The algorithm is quantitatively verified by registering the CPs of orbitals and cheekbones from the data of four tomograms for the deformed skull. A key feature of the approach, and the source of its high accuracy, is the use of linear regression with Tikhonov regularization to refine the descriptor basis. As a result, 81.25% of the points found fall within a radius of 2 mm from the hand-marked points, and 100% of the points fall within a radius of 4 mm. [ABSTRACT FROM AUTHOR]
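The Tikhonov-regularized (ridge) regression step used to refine the plane positions can be sketched as follows; this is a generic plane fit on toy data (my own example, not the paper's datasets), solving the normal equations (A^T A + lam*I) w = A^T z:

```python
# Ridge regression for fitting a plane z = a*x + b*y + c to 3-D points,
# solving the regularized normal equations (A^T A + lam*I) w = A^T z.

def solve3(M, v):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def ridge_plane(points, lam):
    """Fit z = a*x + b*y + c with Tikhonov penalty lam on (a, b, c)."""
    A = [[x, y, 1.0] for x, y, _ in points]
    z = [p[2] for p in points]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) + (lam if i == j else 0.0)
            for j in range(3)] for i in range(3)]
    Atz = [sum(A[k][i] * z[k] for k in range(len(A))) for i in range(3)]
    return solve3(AtA, Atz)

# Points lying exactly on the plane z = 2x - y + 3.
pts = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6)]
a, b, c = ridge_plane(pts, lam=1e-8)
assert abs(a - 2) < 1e-3 and abs(b + 1) < 1e-3 and abs(c - 3) < 1e-3
```

With noisy or sparse orbital points, a larger lam damps unstable coefficients at the cost of a small bias, which is exactly the trade-off the abstract's with/without-regularization comparison examines.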
- Published
- 2018
- Full Text
- View/download PDF
30. Revealing matrices of linear differential systems of arbitrary order.
- Author
-
Abramov, S., Ryabenko, A., and Khmelnov, D.
- Subjects
- *
EXTERIOR differential systems , *ARBITRARY constants , *MATRICES (Mathematics) , *ALGORITHMS , *RANK correlation (Statistics) - Abstract
If the leading matrix of a linear differential system is nonsingular, then its determinant is known to bear useful information about solutions of the system. Of interest is also the frontal matrix. However, each of these matrices (we call them revealing matrices) may turn out to be singular. In the paper, a brief survey of algorithms for transforming a system of full rank into a system with a nonsingular revealing matrix of a desired type is given. The same transformations can be used to check whether the rank of the original system is full. A Maple implementation of these algorithms (package EGRR) is discussed, and results of comparing estimates of their complexity with actual running times on a number of examples are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
31. Algorithms for the analysis and visualization of high dynamic range images based on human perception.
- Author
-
Zipa, K. and Ignatenko, A.
- Subjects
- *
ALGORITHMS , *VISUALIZATION , *HIGH dynamic range imaging , *GLOBAL optimization , *CARTOGRAPHY - Abstract
High dynamic range images are used to store and transfer an extended range of intensities for rendering on a display. To reproduce such images on displays with a lower range, tone mapping algorithms are used. The tone mapping algorithm described in this paper is a modification of the globally optimized linear windowed tone mapping algorithm. This modification is based on the human vision system model; it makes it possible to improve the results produced by the algorithm and replaces the nonintuitive parameters with a number of intuitively clear ones whose variation over a wide range does not visually distort the image. The high quality of the results produced by the algorithm is confirmed by the high TMQI index and the low value of the DRIM metric. [ABSTRACT FROM AUTHOR]
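The basic dynamic-range compression problem is easy to show with a far simpler global operator than the paper's windowed method (a generic log mapping, included only to illustrate what tone mapping does):

```python
import math

# A minimal global tone-mapping operator: compress HDR luminance with a
# logarithm and normalize to [0, 1] for an LDR display.

def tone_map(luminance):
    l_max = max(luminance)
    return [math.log1p(v) / math.log1p(l_max) for v in luminance]

hdr = [0.01, 0.5, 10.0, 500.0, 10000.0]  # six orders of magnitude
ldr = tone_map(hdr)

assert ldr[-1] == 1.0                     # brightest value maps to 1
assert all(0.0 <= v <= 1.0 for v in ldr)  # displayable range
assert ldr == sorted(ldr)                 # monotone: ordering preserved
```

A global operator like this applies one curve everywhere; the windowed approach the abstract describes instead optimizes local linear mappings, which preserves local contrast that a single global curve flattens.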
- Published
- 2016
- Full Text
- View/download PDF
32. On deriving test suites for nondeterministic finite state machines with time-outs.
- Author
-
Shabaldina, N. and Galimullin, R.
- Subjects
- *
ALGORITHMS , *FINITE state machines , *COMPUTER simulation , *COMPUTER science , *SCIENCE , *ALGEBRA - Abstract
In the paper, the non-separability relation for finite state machines with time-outs is studied. A specific feature of such machines is integer-valued delays, or time-outs, which determine how long the finite state machine will stay in one or another state if there are no input actions. If the time-out is over and no input symbol has been applied, then the TFSM state is changed according to the transition under time delay. In the paper, an algorithm for constructing a separating sequence for such finite state machines is presented. Here, the separating sequence is a timed input sequence for which the sets of output reactions of the TFSMs do not intersect; hence, it is sufficient to apply the separating sequence once in order to distinguish the TFSMs by their output reactions. This algorithm underlies the algorithm for constructing test suites with respect to the non-separability relation in the case where the fault domain is specified by means of a mutation machine. Test suite derivation with respect to the non-separability relation by way of 'TFSM to FSM' transformation is also discussed. [ABSTRACT FROM AUTHOR]
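A separating-sequence search can be sketched for untimed nondeterministic FSMs (a simplification of my own; the paper's machines additionally carry time-outs, omitted here). The search state is a set of state-set pairs reachable under a common output prefix; an input under which no pair has a common output completes a separating sequence:

```python
from collections import deque

# Breadth-first search for a separating sequence of two nondeterministic
# FSMs. A machine maps (state, input) -> set of (output, next_state).

def separating_sequence(m1, s1, m2, s2, inputs):
    start = frozenset([(frozenset([s1]), frozenset([s2]))])
    queue, seen = deque([(start, [])]), {start}
    while queue:
        pairs, word = queue.popleft()
        for x in inputs:
            nxt = set()
            for a, b in pairs:
                out_a = {o for s in a for o, _ in m1[s, x]}
                out_b = {o for s in b for o, _ in m2[s, x]}
                for o in out_a & out_b:  # common outputs keep ambiguity
                    na = frozenset(t for s in a for oo, t in m1[s, x] if oo == o)
                    nb = frozenset(t for s in b for oo, t in m2[s, x] if oo == o)
                    nxt.add((na, nb))
            if not nxt:
                return word + [x]        # all output sets disjoint: separated
            nxt = frozenset(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + [x]))
    return None                          # machines are non-separable

# m1 answers 0 then 1 forever; m2 always answers 0.
m1 = {('s0', 'a'): {('0', 's1')}, ('s1', 'a'): {('1', 's1')}}
m2 = {('t0', 'a'): {('0', 't0')}}
assert separating_sequence(m1, 's0', m2, 't0', ['a']) == ['a', 'a']
```

Because the found word makes the two machines' output-reaction sets disjoint, a single application of it distinguishes them, matching the abstract's characterization of separability.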
- Published
- 2012
- Full Text
- View/download PDF
33. A new double sorting-based node splitting algorithm for R-tree.
- Author
-
Korotkov, A.
- Subjects
- *
ALGORITHMS , *SPATIAL data structures , *ONE-dimensional flow , *ALGEBRA , *FLUID dynamics , *MATHEMATICS - Abstract
Storing spatial data and processing spatial queries are important tasks for modern databases. The execution efficiency of a spatial query depends on the underlying index structure. The R-tree is a well-known spatial index structure. Currently, various versions of the R-tree exist, and one of the most common points of variation between them is the node splitting algorithm. The problem of node splitting in a one-dimensional R-tree may seem too trivial to be considered separately: one-dimensional intervals can be split on the basis of their sorting. Some node splitting algorithms for R-trees with two or more dimensions comprise a one-dimensional split as their part. However, under detailed consideration, existing algorithms for one-dimensional splitting do not perform ideally in some complicated cases. This paper introduces a novel one-dimensional node splitting algorithm based on two sortings that handles such complicated cases better. The paper also introduces a node splitting algorithm for R-trees with two or more dimensions that is based on the one-dimensional algorithm mentioned above. Tests show significantly better behavior of the proposed algorithms in the case of highly overlapping data. [ABSTRACT FROM AUTHOR]
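The sorting-based splitting idea can be illustrated with a simplified sketch (not the paper's exact algorithm): candidate splits are generated from two sortings, by lower and by upper bound, and the candidate whose group bounding intervals overlap least wins, subject to a minimum fill per node:

```python
# Sorting-based one-dimensional node split: enumerate prefix/suffix
# splits of both sortings and pick the least-overlapping candidate.

def bounds(group):
    return min(lo for lo, _ in group), max(hi for _, hi in group)

def overlap(g1, g2):
    (l1, h1), (l2, h2) = bounds(g1), bounds(g2)
    return max(0.0, min(h1, h2) - max(l1, l2))

def split(intervals, m):
    """Split intervals into two groups of at least m entries each."""
    candidates = []
    for key in (lambda iv: iv[0], lambda iv: iv[1]):  # the two sortings
        order = sorted(intervals, key=key)
        for k in range(m, len(order) - m + 1):
            candidates.append((order[:k], order[k:]))
    return min(candidates, key=lambda c: overlap(*c))

ivs = [(0, 1), (1, 2), (5, 6), (6, 7), (0.5, 1.5)]
g1, g2 = split(ivs, m=2)
assert overlap(g1, g2) == 0.0  # the two clusters land in separate nodes
```

Minimizing the overlap of the two groups' bounding intervals is what keeps later point queries from having to descend into both children.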
- Published
- 2012
- Full Text
- View/download PDF
34. BIBasis, a package for REDUCE and Macaulay2 computer algebra systems to compute Boolean involutive and Gröbner bases.
- Author
-
Zinin, M.
- Subjects
- *
ALGEBRA , *MATHEMATICAL analysis , *DIFFERENTIAL equations , *COMPUTERS , *MATHEMATICS , *ALGORITHMS , *USER interfaces - Abstract
In this paper, we describe the BIBasis package designed for REDUCE and Macaulay2 computer algebra systems, which allows one to compute Boolean involutive bases and Gröbner bases. The implementations and user interfaces of the package for both systems are described in the respective sections of the paper. Also, we present results of comparisons of BIBasis with other packages and algorithms for constructing Boolean Gröbner bases available in the computer algebra systems. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
35. The parametric solution of underdetermined linear ODEs.
- Author
-
Wolf, T.
- Subjects
- *
DIFFERENTIAL equations , *LINEAR differential equations , *LINEAR systems , *EUCLIDEAN algorithm , *GROBNER bases , *PARTIAL differential equations , *MATHEMATICAL symmetry , *ALGORITHMS , *FRACTIONAL calculus - Abstract
The purpose of this paper is twofold. An immediate practical use of the presented algorithm is its applicability to the parametric solution of underdetermined linear ordinary differential equations (ODEs) with coefficients that are arbitrary analytic functions in the independent variable. A second, conceptual aim is to present an algorithm that is in some sense dual to the fundamental Euclid's algorithm, and thus an alternative to the special case of a Gröbner basis algorithm as it is used for solving linear ODE systems. In the paper, Euclid's algorithm and the new 'dual version' are compared, and their complementary strengths are analysed on the task of solving underdetermined ODEs. An implementation of the described algorithm is interactively accessible under [7]. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
36. On complementary principles of object-oriented constraint programming.
- Author
-
Semenov, V., Dragalov, K., Ilyin, D., Morozov, S., and Sidyaka, O.
- Subjects
- *
CONSTRAINT programming , *OBJECT-oriented methods (Computer science) , *DATA modeling , *PROGRAMMING languages , *INDUSTRIAL management , *SYSTEMS design , *PARADIGM (Theory of knowledge) , *INNOVATIONS in business , *ALGORITHMS - Abstract
The paper is devoted to the implementation of the paradigm of object-oriented constraint programming (OOCP), which combines complementary ideas and principles of object-oriented programming (OOP) and constraint logic programming (CLP). Although the idea looks attractive and there have been attempts to implement it with the use of logical and functional languages, its future outline is still not clear. In the paper, a survey of the existing technologies of constraint programming is given, and a new systematic approach to the implementation of object-oriented constraint programming based on the use of declarative data modeling languages is discussed. The advantages of the approach related to the expressiveness and generality of the constraint problem declarations are demonstrated on the example of the classical queens problem. A general algorithmic strategy for solving constraint problems is also discussed. [ABSTRACT FROM AUTHOR]
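The queens problem used as the demonstration example is stated by constraints alone: no two queens may share a column or diagonal. The paper advocates declarative modeling languages; the following is just a plain backtracking sketch of the same constraint problem, to make the constraints concrete:

```python
# The n-queens constraint problem: place n queens, one per row, so that
# no two share a column or a diagonal. Solved by backtracking search.

def n_queens(n):
    solutions = []

    def place(cols):
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))
            return
        for c in range(n):
            # Constraints: distinct columns, distinct diagonals.
            if all(c != pc and abs(c - pc) != row - pr
                   for pr, pc in enumerate(cols)):
                place(cols + [c])

    place([])
    return solutions

assert len(n_queens(4)) == 2
assert len(n_queens(6)) == 4
```

In a declarative OOCP formulation, only the constraints inside the `if` would be written down; the search strategy (here hand-coded backtracking) is supplied by the solver.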
- Published
- 2010
- Full Text
- View/download PDF
37. Generation of test data for verification of caching mechanisms and address translation in microprocessors.
- Author
-
Kornykhin, E. V.
- Subjects
- *
MICROPROCESSORS , *CACHE memory , *ALGORITHMS , *COMPUTER storage devices , *COMPUTER programming - Abstract
This paper considers the problem of test data generation for the core-level verification of microprocessors; namely, the problem of constructing a test program on the basis of its abstract form (test template). To solve this problem, we propose an algorithm reducing it to a problem of resolving constraints. This paper addresses the verification of memory-handling instructions (taking into account such microprocessor features as caching and address translation). [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
38. Software pipelining of loops by the method of modulo scheduling.
- Author
-
V’yukova, N., Galatenko, V., and Samborskii, S.
- Subjects
- *
DATA pipelining , *ELECTRONICS , *ALGORITHMS , *COMPUTER science , *COMPUTER software - Abstract
Software pipelining is an efficient method of loop optimization that allows for parallelism of operations related to different loop iterations. Currently, most commercial compilers use loop pipelining methods based on modulo scheduling algorithms. This paper reviews these algorithms and considers algorithmic solutions designed for overcoming the negative effects of software pipelining of loops (a significant growth in code size and an increase in register pressure), as well as methods making it possible to use some hardware features of a target architecture. The paper considers global-scheduling mechanisms allowing one to pipeline loops containing several basic blocks and loops in which the number of iterations is not known before execution. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
39. Algorithmic issues of AND-decomposition of boolean formulas.
- Author
-
Emelyanov, P. and Ponomaryov, D.
- Subjects
- *
BOOLEAN functions , *LOGIC circuits , *MATHEMATICAL decomposition , *ALGORITHMS , *FACTORIZATION , *POLYNOMIALS - Abstract
AND-decomposition of a boolean formula means finding two (or several) formulas such that their conjunction is equivalent to the given one. Decomposition is called disjoint if the component formulas do not have variables in common. In the paper, we show that deciding AND-decomposability is intractable for boolean formulas given in CNF or DNF and prove tractability of computing disjoint AND-decomposition components of boolean formulas given in positive DNF, Full DNF, and ANF. The latter result follows from tractability of multilinear polynomial factorization over the finite field of order 2, for which we provide a polytime factorization algorithm based on identity testing for partial derivatives of multilinear polynomials. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
40. Compact representation of polynomials for algorithms for computing Gröbner and involutive bases.
- Author
-
Yanovich, D.
- Subjects
- *
POLYNOMIALS , *ALGORITHMS , *GROBNER bases , *COMMUTATIVE algebra , *MATHEMATICS theorems , *BOOLEAN functions - Abstract
In the computation of involutive and Gröbner bases with rational coefficients, the major part of the memory is occupied by arbitrary-precision numbers; however, in the case of modular operations (especially in the computation of Gröbner bases), of most importance is the problem of compact representation of the monomials composing the polynomials of the system. For this purpose, for example, ZDD diagrams and other structures are used, which make the execution of typical operations (multiplication by a monomial and reduction of polynomials) more complicated. In this paper, an attempt is made to develop a convenient (in the sense of computation of bases) and compact representation of polynomials that is based on hash tables. Results of test runs are presented. [ABSTRACT FROM AUTHOR]
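A hash-based representation in the spirit of the abstract can be sketched as follows (class and method names are my own): each distinct monomial, stored as an exponent vector, is interned once in a hash table, and polynomials hold small integer indices into it, so shared monomials cost no extra memory:

```python
# Hash-table interning of monomials: polynomials reference monomials by
# id, and multiplication by a monomial only touches the table.

class MonomialTable:
    def __init__(self):
        self.index = {}   # exponent tuple -> id
        self.monos = []   # id -> exponent tuple

    def intern(self, exps):
        exps = tuple(exps)
        if exps not in self.index:
            self.index[exps] = len(self.monos)
            self.monos.append(exps)
        return self.index[exps]

    def mul(self, mid, exps):
        """Multiply the stored monomial mid by the monomial exps."""
        return self.intern(tuple(a + b for a, b in zip(self.monos[mid], exps)))

table = MonomialTable()
# p = x^2*y + x*y over variables (x, y): a list of (coeff, monomial id).
p = [(1, table.intern((2, 1))), (1, table.intern((1, 1)))]
# q = y * p: the coefficients are untouched, only ids are remapped.
q = [(c, table.mul(m, (0, 1))) for c, m in p]

assert [table.monos[m] for _, m in q] == [(2, 2), (1, 2)]
assert len(table.monos) == 4  # each distinct monomial stored exactly once
```

Unlike ZDD-style structures, the table keeps multiplication and lookup constant-time on average, which is the convenience-versus-compactness trade the abstract refers to.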
- Published
- 2015
- Full Text
- View/download PDF
41. Termination of the F5 algorithm.
- Author
-
Galkin, V.
- Subjects
- *
ALGORITHMS , *CRYPTOGRAPHY , *DATA analysis , *MATHEMATICAL sequences , *POLYNOMIALS , *MATHEMATICAL proofs - Abstract
The F5 algorithm, which calculates the Gröbner basis of an ideal generated by homogeneous polynomials, was proposed by Faugère in 2002; simultaneously, the correctness of this algorithm was proved under the condition of termination. However, termination itself was demonstrated only for a regular sequence of polynomials. In this paper, it is proved that the algorithm terminates for any input data. First, it is shown that if the algorithm does not terminate, it eventually generates two polynomials where the first is a reductor for the second. However, it is not argued that such a reduction is permitted by all the criteria introduced in F5. Next, it is shown that if such a pair exists, then there exists another pair for which the reduction is permitted by all the criteria, which is impossible. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
42. On the Goursat classification problem.
- Author
-
Kaptsov, O.
- Subjects
- *
GOURSAT problem , *ALGORITHMS , *ALGEBRA , *DIFFERENTIAL equations , *HYPERBOLIC differential equations , *MATHEMATICAL analysis , *MATHEMATICS - Abstract
In the paper, the Goursat problem, that is, the classification of nonlinear hyperbolic differential equations possessing two characteristic invariants, is considered. An algorithm for finding the characteristic invariants is described. On the basis of the algorithm implemented in REDUCE, the characteristic invariants of two of Laine's equations are verified. One of them is shown to have invariants of the second and third orders. This equation shows that the Goursat problem, apparently, is still open. Computer calculations show that the characteristic invariants of the second Laine equation given in his paper are incorrect. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
43. On laplace and Dini transformations for multidimensional equations with a decomposable principal symbol.
- Author
-
Ganzha, E.
- Subjects
- *
ALGORITHMS , *DIFFERENTIAL equations , *HARMONIC functions , *ALGEBRA software , *MATHEMATICAL analysis , *BESSEL functions - Abstract
Algorithms for solving linear PDEs implemented in modern computer algebra systems are usually limited to equations with two independent variables. In this paper, we propose a generalization of the theory of Laplace transformations to second-order partial differential operators in ℝ³ (and, generally, ℝⁿ) with a principal symbol decomposable into the product of two linear (with respect to derivatives) factors. We consider two algorithms of generalized Laplace transformations and describe classes of operators in ℝ³ to which these algorithms are applicable. We correct a mistake in [8] and show that Dini-type transformations are in fact generalized Laplace transformations for operators with coefficients in a skew (noncommutative) Ore field. Keywords: computer algebra, partial differential equations, algorithms for solution. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
44. Laplace transformations as the only degenerate Darboux transformations of first order.
- Author
-
Shemyakova, Ekaterina
- Subjects
- *
MATHEMATICAL transformations , *ALGORITHMS , *DIFFERENTIAL geometry , *DIFFERENTIAL equations , *BESSEL functions , *DARBOUX transformations , *PARTIAL differential equations - Abstract
The paper is devoted to the Darboux transformations, an effective algorithm for finding analytical solutions of partial differential equations. It is proved that the Wronskian-like formulas suggested by G. Darboux for second-order linear operators on the plane describe all possible differential transformations with M of the form D_x + m(x, y) and D_y + m(x, y), except for the Laplace transformations. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
45. Programming for modular reconfigurable robots.
- Author
-
Gorbenko, A. and Popov, V.
- Subjects
- *
ROBOTICS , *ROBOT control systems , *ALGORITHMS , *COMPUTER programming , *COMPUTER software , *AUTOMATION - Abstract
Composed of multiple modular robotic units, self-reconfigurable modular robots are metamorphic systems that can autonomously rearrange the modules and form different configurations depending on dynamic environments and tasks. The goal of self-reconfiguration is to determine how to change connectivity of modules to transform the robot from the current configuration to the goal configuration subject to restrictions of physical implementation. The existing reconfiguration algorithms use different methods, such as divide-and-conquer, graph matching, and the like, to reduce the reconfiguration cost. However, an optimal solution with a minimal number of reconfiguration steps has not been found yet. The optimal reconfiguration planning problem consists in finding the least number of reconfiguration steps transforming the robot from one configuration to another. This is an NP-complete problem. In this paper, we describe an approach to solve this problem. The approach is based on constructing logical models of the problem under study. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
46. An algorithm of automatic workflow optimization.
- Author
-
Kalenkova, A.
- Subjects
- *
WORKFLOW , *ELECTRONIC data processing , *ALGORITHMS , *BOOLEAN functions , *COMPUTER programming , *APPLICATION software , *INDUSTRIAL management - Abstract
This paper considers an algorithm of automatic workflow optimization that, unlike well-known redesign algorithms for workflows [1, 2], can analyze arbitrary structures containing conditional branches and cycles. This algorithm operates with workflows without structural conflicts and, in the course of operation, uses execution conditions obtained as a result of application of the Boolean verification algorithm (BVA) proposed earlier in [3]. A modified BVA is proposed and its computational complexity is estimated. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
47. Reconstruction of structure and texture of city building facades.
- Author
-
Yakubenko, A., Kononov, V., Mizin, I., Konushin, V., and Konushin, A.
- Subjects
- *
FACADES , *THREE-dimensional imaging in architecture , *TEXTURE in architecture , *ALGORITHMS , *PIXELS , *IMAGE databases - Abstract
An important problem in creating photorealistic 3D city maps is to interpret building facades and clear them of foreground objects. Modern image recovery algorithms either yield poor quality and require much time or need substantial interaction with the user. In this paper, a new algorithm for recovering the texture of building facades, based on information about the regularity of their structure, is presented. The structure is described by a set of nonintersecting lattices of similar rectangular fragments. An algorithm for finding such structures is proposed. The algorithms are compared with existing ones on available image databases. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
48. A survey of methods for constructing covering arrays.
- Author
-
Kuliamin, V. V. and Petukhov, A. A.
- Subjects
- *
INFORMATION technology , *COMPUTER programming , *TECHNOLOGICAL complexity , *ALGORITHMS , *COMPUTER interfaces , *FERROMAGNETIC materials , *MAGNETIC domain , *MATHEMATICAL optimization , *RECURSIVE functions , *COMPUTER storage devices - Abstract
The paper presents a survey of methods for constructing covering arrays used in the generation of tests for interfaces with a great number of parameters. The application domains of these methods and the algorithms used in them are analyzed. Specific characteristics of the methods, including time complexity and estimates of the required memory, are presented. Various algorithms used for constructing covering arrays are presented: direct, recursive, optimization-based, genetic, and backtracking. Heuristics are presented that allow one to reduce arrays without loss of completeness, and the application domains of these heuristics are outlined. [ABSTRACT FROM AUTHOR]
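One family the survey covers, the greedy (AETG-style) constructions, can be sketched minimally: repeatedly add the row that covers the most still-uncovered parameter-value pairs until every pair of every pair of parameters is covered:

```python
# Greedy construction of a pairwise (strength-2) covering array.
from itertools import combinations, product

def pairwise_cover(domains):
    uncovered = {((i, a), (j, b))
                 for i, j in combinations(range(len(domains)), 2)
                 for a in domains[i] for b in domains[j]}
    rows = []
    while uncovered:
        # Exhaustive candidate scan: fine only for tiny inputs.
        best = max(product(*domains),
                   key=lambda row: sum(((i, row[i]), (j, row[j])) in uncovered
                                       for i, j in combinations(range(len(row)), 2)))
        rows.append(best)
        uncovered -= {((i, best[i]), (j, best[j]))
                      for i, j in combinations(range(len(best)), 2)}
    return rows

domains = [[0, 1], [0, 1], [0, 1]]  # three binary parameters
rows = pairwise_cover(domains)

# Every value pair of every parameter pair is covered...
covered = {((i, r[i]), (j, r[j])) for r in rows
           for i, j in combinations(range(len(r)), 2)}
assert all(((i, a), (j, b)) in covered
           for i, j in combinations(range(3), 2)
           for a in domains[i] for b in domains[j])
# ...with fewer rows than the full 2^3 Cartesian product.
assert len(rows) < 8
```

The exhaustive candidate scan makes this a sketch only; the practical greedy algorithms in the survey generate candidate rows heuristically instead of enumerating the whole Cartesian product.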
- Published
- 2011
- Full Text
- View/download PDF
49. Partial differential equations with finite-dimensional solution manifolds.
- Author
-
Kaptsov, O.
- Subjects
- *
PARTIAL differential equations , *FINITE differences , *MATHEMATICAL variables , *BEZOUT'S identity , *DIFFERENTIAL equations , *LINEAR differential equations , *ALGORITHMS , *WOLFRAM language (Computer program language) , *MATRICES (Mathematics) - Abstract
In the paper, overdetermined systems of nonlinear partial differential equations with two independent variables and an arbitrary number of unknown functions are considered. An efficient criterion of finite dimensionality of the solution manifold for a certain class of such systems is obtained. An upper bound of Bezout type on the dimensionality of the solution manifold is presented, and algebraic conditions ensuring that this bound is achieved are found. Based on this, a method for constructing the matrix Darboux transform of systems of linear second-order equations is suggested. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
50. Specification completion for IOCO.
- Author
-
Bourdonov, I. and Kossatchev, A.
- Subjects
- *
ERRORS & omissions insurance , *TECHNICAL specifications , *ALGORITHMS , *FALLIBILITY , *ERRORS , *REQUIREMENTS engineering - Abstract
The paper is devoted to the ioco relation, which determines conformance of an implementation to the specification. Problems related to the nonreflexivity of the ioco relation, the presence of nonconformal traces (which are lacking in any conformal implementation) in the specification, the lack of ioco preservation under composition (a composition of implementations conformal to their specifications may not be conformal to the composition of these specifications), and 'false' errors when testing in a context are considered. To solve these problems, an algorithm of specification completion that preserves ioco (the class of conformal implementations is preserved) is proposed. These problems do not arise in the class of completed specifications. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF