171 results for "Exponential complexity"
Search Results
2. A New Approach for Optimal Selection of Features for Classification Based on Rough Sets, Evolution and Neural Networks
- Author
-
Torres-Constante, Eddy, Ibarra-Fiallo, Julio, Intriago-Pazmiño, Monserrate, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, and Arai, Kohei, editor
- Published
- 2023
- Full Text
- View/download PDF
3. Efficiency Analysis of OLAP-data Hypercube Decomposition for Exponential Computational Complexity Methods
- Author
-
A. P. Nosov, A. A. Akhrem, and V. Z. Rakhmankulov
- Subjects
analytical olap-systems ,decomposition ,computational performance ,olap-data hypercube ,exponential complexity ,Mathematics ,QA1-939 - Abstract
The paper studies problems of reduction (decomposition) of OLAP-hypercube multidimensional data models. When decomposing large hypercubes of multidimensional data into sub-cube components, the goal is to increase the computational performance of analytical OLAP systems, which amounts to decreasing the computational complexity of reduction methods for solving OLAP-data analysis problems relative to the computational complexity of non-reduction methods applied directly to the whole hypercube. The paper formalizes the concepts of reduction and non-reduction methods and gives a definition of the upper bound for the change in the computational complexity of reduction methods in the decomposition of the problem of analyzing multidimensional OLAP data, in comparison with non-reduction methods, in the class of exponential computational complexity. The exact values of the upper bound for the change in computational complexity are obtained for the hypercube decomposition into two sub-cubes on sets consisting of an even and an odd number of sub-cube structures, and its main properties, which are used to determine the decomposition efficiency, are given. A formula for the efficiency of decomposition into two sub-cube structures for the reduction of OLAP-data analysis problems is obtained, and it is shown that as the dimension "n" of the lattice specifying the number of sub-cubes in the hypercube data structure grows, the efficiency of such a decomposition obeys an exponential law with exponent "n/2", regardless of the parity of "n". The examples show how the found values of the upper bound for the change in computational complexity can be used to establish effectiveness criteria for reduction methods and the expediency of decomposition in specific cases. The results can be used in the processing and analysis of information arrays of hypercube structures of analytical OLAP systems belonging to Big Data or super-large computer systems of multidimensional data.
- Published
- 2021
- Full Text
- View/download PDF
4. Aggregation with dependencies: Capacities and fuzzy integrals
- Author
-
Dmitriy Divakov and Gleb Beliakov
- Subjects
Exponential complexity ,0209 industrial biotechnology ,Mathematical optimization ,Logic ,Scale (descriptive set theory) ,02 engineering and technology ,Fuzzy logic ,020901 industrial engineering & automation ,Choquet integral ,Artificial Intelligence ,Obstacle ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,State (computer science) ,Mathematics - Abstract
We outline recent trends in capacity-based aggregation in large universes. Capacities (fuzzy measures) model dependencies among the inputs, and aggregation by the discrete Choquet, Sugeno and other fuzzy integrals accounts for synergies and redundancies. For a large number of inputs, the exponential complexity of all interactions is a major obstacle. We exemplify the need for aggregation of a large number of dependent inputs on several applications and discuss the challenges and approaches to reducing the complexity of capacity-based aggregation. We also state which mathematical and computational tools are required for large-scale capacity modelling. (A minimal sketch of the discrete Choquet integral follows this record.)
- Published
- 2022
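Note (added for illustration, not from the paper): the discrete Choquet integral mentioned in the preceding record can be sketched in a few lines. The function name and the capacity values below are made-up toy choices; note that storing a capacity explicitly requires 2^n values, which is exactly the exponential obstacle the authors discuss.

```python
def choquet_integral(x, capacity):
    """Discrete Choquet integral of non-negative inputs x with respect to a
    capacity (fuzzy measure) given as a dict from frozensets of indices to [0, 1]."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # indices sorted by ascending input
    total, prev = 0.0, 0.0
    for pos, i in enumerate(order):
        coalition = frozenset(order[pos:])         # inputs at least as large as x[i]
        total += (x[i] - prev) * capacity[coalition]
        prev = x[i]
    return total

# Toy 3-input capacity (hypothetical values; monotone, mu(empty)=0, mu(full)=1).
mu = {frozenset(): 0.0,
      frozenset({0}): 0.3, frozenset({1}): 0.4, frozenset({2}): 0.2,
      frozenset({0, 1}): 0.6, frozenset({0, 2}): 0.5, frozenset({1, 2}): 0.7,
      frozenset({0, 1, 2}): 1.0}
print(choquet_integral([0.2, 0.9, 0.5], mu))       # 0.57 with these toy numbers
```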
5. Nonlinear-Dynamic Cryptology Versus Steganography and Cryptografics
- Author
-
Izmailov, Igor, Poizner, Boris, Romanov, Ilia, and Smolskiy, Sergey
- Published
- 2016
- Full Text
- View/download PDF
6. Growth Properties of Power-Free Languages
- Author
-
Shur, Arseny M., Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Mauri, Giancarlo, editor, and Leporati, Alberto, editor
- Published
- 2011
- Full Text
- View/download PDF
7. On the Exact Complexity of Evaluating Quantified k-CNF
- Author
-
Calabro, Chris, Impagliazzo, Russell, Paturi, Ramamohan, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Raman, Venkatesh, editor, and Saurabh, Saket, editor
- Published
- 2010
- Full Text
- View/download PDF
8. On Moderately Exponential Time for SAT
- Author
-
Dantsin, Evgeny, Wolpert, Alexander, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Strichman, Ofer, editor, and Szeider, Stefan, editor
- Published
- 2010
- Full Text
- View/download PDF
9. k-SAT Is No Harder Than Decision-Unique-k-SAT
- Author
-
Calabro, Chris, Paturi, Ramamohan, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Frid, Anna, editor, Morozov, Andrey, editor, Rybalchenko, Andrey, editor, and Wagner, Klaus W., editor
- Published
- 2009
- Full Text
- View/download PDF
10. A Linear Complexity Algorithm for the Generation of Multiple Input Single Output Instructions of Variable Size
- Author
-
Galuzzi, Carlo, Bertels, Koen, Vassiliadis, Stamatis, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Vassiliadis, Stamatis, editor, Bereković, Mladen, editor, and Hämäläinen, Timo D., editor
- Published
- 2007
- Full Text
- View/download PDF
11. Optimization for MASK Scheme in Privacy Preserving Data Mining for Association Rules
- Author
-
Andruszkiewicz, Piotr, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, Kryszkiewicz, Marzena, editor, Peters, James F., editor, Rybinski, Henryk, editor, and Skowron, Andrzej, editor
- Published
- 2007
- Full Text
- View/download PDF
12. A Linear Complexity Algorithm for the Automatic Generation of Convex Multiple Input Multiple Output Instructions
- Author
-
Galuzzi, Carlo, Bertels, Koen, Vassiliadis, Stamatis, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Rangan, C. Pandu, editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Diniz, Pedro C., editor, Marques, Eduardo, editor, Bertels, Koen, editor, Fernandes, Marcio Merino, editor, and Cardoso, João M. P., editor
- Published
- 2007
- Full Text
- View/download PDF
13. Assessment of the Possibility of Applying Quantum Cryptography in Interbank Data Exchange Networks
- Author
-
Ю. И. Кобылина
- Subjects
quantum cryptography ,quantum network ,qubit ,the factorization task ,quantum alignment ,exponential complexity ,Economics as a science ,HB71-74 ,Business ,HF5001-6182 - Abstract
The article provides a brief analysis of quantum networks, as well as comparative data on determining the complexity of breaking algorithms on classical computers and on computers that use quantum computation.
- Published
- 2014
14. Preliminaries
- Author
-
Peled, Doron A.
- Published
- 2001
- Full Text
- View/download PDF
15. Explicit Pieri Inclusions
- Author
-
Markus Hunziker, John A. Miller, and Mark R. Sepanski
- Subjects
Exponential complexity ,Classical group ,Applied Mathematics ,General linear group ,Theoretical Computer Science ,Combinatorics ,Tensor product ,Computational Theory and Mathematics ,Irreducible representation ,Bounded function ,Discrete Mathematics and Combinatorics ,Geometry and Topology ,Polynomial time complexity ,Mathematics - Abstract
By the Pieri rule, the tensor product of an exterior power and a finite-dimensional irreducible representation of a general linear group has a multiplicity-free decomposition. The embeddings of the constituents are called Pieri inclusions and were first studied by Weyman in his thesis and described explicitly by Olver. More recently, these maps have appeared in the work of Eisenbud, Fløystad, and Weyman and of Sam and Weyman to compute pure free resolutions for classical groups. In this paper, we give a new closed-form, non-recursive description of Pieri inclusions. For partitions with a bounded number of distinct parts, the resulting algorithm has polynomial time complexity, whereas the previously known algorithm has exponential time complexity.
- Published
- 2021
16. ON THE COMPLEXITY OF LOCAL SEARCH IN UNCONSTRAINED QUADRATIC BINARY OPTIMIZATION.
- Author
-
PAPP, DÁVID
- Subjects
*BINARY number system , *SEARCH algorithms , *MATHEMATICAL optimization , *EXPONENTIAL functions , *MATHEMATICAL analysis - Abstract
We consider the problem of finding a local minimum of a binary quadratic function and show by an elementary construction that every descending local search algorithm takes exponential time in the worst case. [ABSTRACT FROM AUTHOR] (A minimal sketch of such a descending local search follows this record.)
- Published
- 2016
- Full Text
- View/download PDF
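Note (added for illustration, not from the paper): a minimal sketch of the kind of descending 1-flip local search analyzed in the record above, assuming the objective f(x) = xᵀQx over x in {0,1}ⁿ. The function name and the matrix below are made-up examples.

```python
import numpy as np

def descending_local_search(Q, x0=None, seed=0):
    """Repeatedly flip any single bit that strictly decreases f(x) = x^T Q x,
    until no single flip improves (a 1-flip local minimum)."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, n) if x0 is None else np.array(x0)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            y = x.copy()
            y[i] ^= 1                       # flip one bit
            if y @ Q @ y < x @ Q @ x:
                x, improved = y, True
    return x, float(x @ Q @ x)

Q = np.array([[1.0, -2.0],
              [-2.0, 1.0]])                 # made-up symmetric matrix
print(descending_local_search(Q))           # a 1-flip local minimum, not necessarily global
```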
17. Linear Complexity Entropy Regions
- Author
-
Yirui Liu and John MacLaren Walsh
- Subjects
Set (abstract data type) ,Exponential complexity ,Discrete mathematics ,Linear complexity ,Network information theory ,Decomposition (computer science) ,Entropy (information theory) ,Uncountable set ,Random variable ,Mathematics - Abstract
A great many problems in network information theory, both fundamental and applied, involve determining a minimal set of inequalities linking the Shannon entropies of a certain collection of subsets of random variables. In principle, this minimal set of inequalities could be determined from the entropy region, whose dimensions correspond to all the subsets of random variables, by projecting out the dimensions corresponding to the irrelevant subsets. As a general solution technique, however, this method is plagued both by the incompletely known nature of the entropy region and by the exponential complexity of its bounds. Even worse, for four or more random variables, it is known that the set of linear information inequalities necessary to completely describe the entropy region must be uncountably infinite. A natural question arises, then, whether there are certain nontrivial collections of subsets for which the inequalities linking only these subsets are both completely known and have descriptions that are linear in the number of random variables. This paper answers this question in the affirmative. A decomposition expressing the collection of inequalities linking a larger collection of subsets in terms of those of smaller collections of subsets is first proven. This decomposition is then used to provide systems of subsets for which it exhaustively determines the complete list of inequalities, a list that is linear in the number of random variables. (A small sketch of subset entropies follows this record.)
- Published
- 2021
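Note (added for illustration, not from the paper): the entropy region discussed in the record above lives in a space with one coordinate per nonempty subset of the random variables, so even tabulating it is exponential in n. The sketch below computes those 2^n − 1 empirical entropies from equally likely samples; the toy joint distribution is made up.

```python
import math
from itertools import combinations
from collections import Counter

def subset_entropies(samples):
    """Shannon entropy H(X_S) for every nonempty subset S of the columns of
    `samples`, treating the rows as equally likely joint outcomes."""
    n = len(samples[0])
    entropies = {}
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            counts = Counter(tuple(row[i] for i in subset) for row in samples)
            total = len(samples)
            entropies[subset] = -sum((c / total) * math.log2(c / total)
                                     for c in counts.values())
    return entropies

# Toy joint distribution: X0 a fair bit, X1 = X0, X2 an independent fair bit
samples = [(a, a, b) for a in (0, 1) for b in (0, 1)]
print(subset_entropies(samples))
```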
18. Proceedings Third Joint Workshop on Developments in Implicit Computational complExity and Foundational & Practical Aspects of Resource Analysis
- Author
-
Brian F. Redmond
- Subjects
Exponential complexity ,Theoretical computer science ,Computer science ,Resource analysis ,Hierarchical control system ,Implicit computational complexity ,Joint (audio engineering) ,Linear logic - Published
- 2019
19. Diagnosability of a class of discrete event systems based on observations
- Author
-
Reshmila S, Hindustan Institute of Technology and Science HITS, and Rajagopalan Devanathan
- Subjects
Exponential complexity ,0209 industrial biotechnology ,Class (set theory) ,Linear complexity ,Control and Optimization ,Theoretical computer science ,Computer science ,020208 electrical & electronic engineering ,Aerospace Engineering ,Computational intelligence ,Observable ,02 engineering and technology ,Power set ,020901 industrial engineering & automation ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Event (probability theory) - Abstract
The diagnosability of discrete event systems has been a topic of interest to many researchers. Diagnosability conditions for various systems have evolved based on a regularity condition imposed on faulty traces with respect to their observable continuations. Improving upon this weak but necessary condition, this paper proposes a new model of diagnosability based on sensor outputs, called observations, produced upon a command input. Necessary and sufficient conditions are derived for the proposed diagnosability model. The search performance of the proposed diagnosability condition is of linear complexity in terms of the power set of the system events and observations, compared to the exponential complexity of the search under the existing diagnosability regularity condition. Moreover, a system that is not diagnosable according to the existing diagnosability condition may be diagnosable in the proposed model, which includes observations.
- Published
- 2019
20. Robot Mission Planning using Co-evolutionary Optimization
- Author
-
Kala Rahul
- Subjects
Exponential complexity ,0209 industrial biotechnology ,Cooperative coevolution ,Computer science ,business.industry ,General Mathematics ,Evolutionary robotics ,02 engineering and technology ,Computer Science Applications ,020901 industrial engineering & automation ,Control and Systems Engineering ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Robot ,020201 artificial intelligence & image processing ,Temporal logic ,Artificial intelligence ,Motion planning ,business ,Heuristics ,Software - Abstract
Summary: Mission planning is a complex motion planning problem specified using Temporal Logic, consisting of Boolean and temporal operators, and typically solved by model verification algorithms of exponential complexity. The paper proposes co-evolutionary optimization, thus building an iterative solution to the problem. The language for mission specification is generic enough to represent everyday missions, while specific enough to design heuristics. The mission is broken into components which cooperate with each other. The experiments confirm that the robot is able to outperform the search, evolutionary and model verification techniques. The results are demonstrated using a Pioneer LX robot.
- Published
- 2019
21. Computational analysis of non-coding RNAs in Alzheimer's disease
- Author
-
Ghulam Md Ashraf, Magdah Ganash, and Alexiou Athanasios
- Subjects
0301 basic medicine ,Exponential complexity ,BACE1-AS ,Structural alignment ,Disease ,Computational biology ,Biology ,03 medical and health sciences ,0302 clinical medicine ,microRNA ,17A ,strict β-turns ,Computational analysis ,long noncoding RNAs ,hnRNP Q ,RAD18 ,NAT-Rad18 ,RNA ,structural alignment ,General Medicine ,Alzheimer's disease ,030104 developmental biology ,secondary structure prediction ,030217 neurology & neurosurgery ,Coding (social sciences) ,Research Article - Abstract
Recent studies have shown that long noncoding RNAs correspond to a crucial factor in neurodegenerative diseases and to next-generation therapeutic targets. A wide range of advanced computational methods for the analysis of noncoding RNAs mainly includes the prediction of RNA and miRNA structures. The problems involved in representing specific biological structures, such as secondary structures, are either NP-complete or of high complexity. Numerous algorithms and techniques related to the enumeration of sequential terms of biological structures, mostly of exponential complexity, have been constructed to date. While the BACE1-AS, NAT-Rad18, 17A, and hnRNP Q lncRNAs have been found to be associated with Alzheimer's disease, in this research study the significance of the best-known β-turn-forming residues among these proteins is computationally identified and discussed as a potentially crucial factor in the regulation of folding, aggregation and other intermolecular interactions.
- Published
- 2019
22. A polynomial recognition of unit forms using graph-based strategies
- Author
-
Diane Castongay, Thomas Brüstle, and Jesmmer Alves
- Subjects
Exponential complexity ,Weakly positive ,Applied Mathematics ,Graph based ,Breadth-first search ,0211 other engineering and technologies ,021107 urban & regional planning ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Polynomial algorithm ,Representation theory ,Combinatorics ,010201 computation theory & mathematics ,Discrete Mathematics and Combinatorics ,Algebraic expression ,Time complexity ,Mathematics - Abstract
Unit forms are algebraic expressions that play an important role in the representation theory of algebras. We identified that existing algorithms have exponential time complexity for the weakly nonnegative and weakly positive types. In this paper we introduce a polynomial algorithm for the recognition of weakly nonnegative unit forms. The algorithm identifies hypercritical restrictions in a given unit form by testing every subgraph on 9 vertices of the graph associated with the unit form. By adding a depth-first search approach, a similar strategy can be used in the recognition of weakly positive unit forms. We also present the most popular methods for deciding whether or not a unit form is weakly nonnegative or weakly positive, analyze their time complexity, and compare the results with our algorithms.
- Published
- 2019
23. Uncertainty Constrained Robotic Exploration: An Integrated Exploration Planner
- Author
-
Alexander Ivanov and Mark Campbell
- Subjects
Exponential complexity ,0209 industrial biotechnology ,Computer science ,Distributed computing ,Probabilistic logic ,02 engineering and technology ,Planner ,020901 industrial engineering & automation ,Robotic systems ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,020201 artificial intelligence & image processing ,Motion planning ,Electrical and Electronic Engineering ,Probabilistic framework ,computer ,Hamiltonian path problem ,computer.programming_language - Abstract
Efficient robotic exploration of unknown, sensor-limited, global-information-deficient environments poses unique challenges to path planning algorithms. In these difficult environments, no deterministic guarantees on path completion and mission success can be made in general. Integrated exploration (IE), which strives to combine localization and exploration, must be solved in order to create an autonomous robotic system capable of long-term operation in new and challenging environments. This paper formulates a probabilistic framework that allows the creation of exploration algorithms providing probabilistic guarantees of success. A novel connection is made between the Hamiltonian path problem and exploration. The guaranteed probabilistic information explorer (G-PIE) is developed for the IE problem, providing a probabilistic guarantee on path completion and asymptotic optimality of exploration. A receding-horizon probabilistic information explorer (RH-PIE) is presented, which addresses the exponential complexity present in G-PIE. Finally, the RH-PIE planner is verified via autonomous hardware-in-the-loop experiments.
- Published
- 2019
24. A Comparative Study of Two Algorithms for Computing the Shortest Reducts: MiLIT and MinReduct
- Author
-
Vladimir Rodríguez-Diez, José Fco. Martínez-Trinidad, Jesús Ariel Carrasco-Ochoa, Manuel S. Lazo-Cortés, and J. Arturo Olvera-López
- Subjects
Exponential complexity ,Range (mathematics) ,Task (computing) ,Decision system ,Computer science ,Asymptotic complexity ,Rough set ,Algorithm - Abstract
Rough set reducts are irreducible attribute subsets preserving the discernibility information of a decision system. Computing all reducts has exponential complexity with respect to the number of attributes in the decision system. Given the high computational cost of this task, computing only the reducts of minimum length (the shortest reducts) becomes relevant for a wide range of applications. Two algorithms for computing these irreducible attribute subsets of minimum length have been reported recently, almost simultaneously: MiLIT and MinReduct. MiLIT was designed on top of Testor Theory, while MinReduct comes from Rough Set Theory. Thus, in this paper, we present a comparative study of these algorithms in terms of asymptotic complexity and runtime performance.
- Published
- 2021
25. Hierarchical Learning of Dependent Concepts for Human Activity Recognition
- Author
-
Pegah Alizadeh, Aomar Osmani, and Massinissa Hamidi
- Subjects
Exponential complexity ,Hierarchy ,Process (engineering) ,business.industry ,Computer science ,02 engineering and technology ,Machine learning ,computer.software_genre ,Class (biology) ,Separable space ,Reduction (complexity) ,Activity recognition ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Natural (music) ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer - Abstract
In multi-class classification tasks, like human activity recognition, it is often assumed that classes are separable. In real applications, this assumption becomes strong and generates inconsistencies. Besides, the most commonly used approach is to learn classes one-by-one against the others. This computational simplification principle introduces strong inductive biases on the learned theories. In fact, the natural connections among some classes, and not others, deserve to be taken into account. In this paper, we show that organizing overlapping classes (multiple inheritances) into hierarchies considerably improves classification performance. This is particularly true in the case of the activity recognition tasks featured in the SHL dataset. After theoretically showing the exponential complexity of possible class hierarchies, we propose an approach based on transfer affinity among the classes to determine an optimal hierarchy for the learning process. Extensive experiments show improved performance and a reduction in the number of examples needed to learn.
- Published
- 2021
26. Finding Pythons in Unexpected Places
- Author
-
Geoff Penington, Arvin Shahbazi-Moghaddam, and Netta Engelhardt
- Subjects
Physics ,Exponential complexity ,Surface (mathematics) ,High Energy Physics - Theory ,Code (set theory) ,Quantum Physics ,Physics and Astronomy (miscellaneous) ,FOS: Physical sciences ,State (functional analysis) ,General Relativity and Quantum Cosmology (gr-qc) ,General Relativity and Quantum Cosmology ,Theoretical physics ,High Energy Physics - Theory (hep-th) ,Excited state ,State dependence ,Quantum Physics (quant-ph) ,Quantum ,Subspace topology - Abstract
We argue that novel (highly nonclassical) quantum extremal surfaces play a crucial role in reconstructing the black hole interior even for isolated, single-sided, non-evaporating black holes (i.e., with no auxiliary reservoir). Specifically, any code subspace where interior outgoing modes can be excited will have a quantum extremal surface in its maximally mixed state. We argue that, as a result, reconstruction of interior outgoing modes is always exponentially complex. Our construction provides evidence in favor of a strong Python's lunch proposal: that nonminimal quantum extremal surfaces are the exclusive source of exponential complexity in the holographic dictionary. We also comment on the relevance of these quantum extremal surfaces to the geometrization of state dependence in the typicality arguments for firewalls.
- Published
- 2021
- Full Text
- View/download PDF
27. Double exponential inseparability of Robinson subsystem Q+ from the unsatisfiable sentences in the language of addition
- Author
-
Faglia, Giovanni, Goos, Gerhard, editor, Hartmanis, Juris, editor, Gottlob, Georg, editor, Leitsch, Alexander, editor, and Mundici, Daniele, editor
- Published
- 1993
- Full Text
- View/download PDF
28. How Bad Are Bad Templates? Optimistic Design-Stage Side-Channel Security Evaluation and its Cost
- Author
-
Rinat Breuer and Itamar Levi
- Subjects
Exponential complexity ,Computer Networks and Communications ,Computer science ,side-channel analysis ,template attacks ,Cryptography ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,lcsh:Technology ,0202 electrical engineering, electronic engineering, information engineering ,Side channel attack ,Statistical distance ,Design stage ,business.industry ,lcsh:T ,Applied Mathematics ,020208 electrical & electronic engineering ,worst case security evaluation ,Adversary ,corners ,simulation ,Computer Science Applications ,Template ,Computational Theory and Mathematics ,Risk analysis (engineering) ,010201 computation theory & mathematics ,device mismatch ,statistical distance ,business ,Software - Abstract
Cryptographic designs are vulnerable to side-channel analysis attacks. Evaluating their security during design stages is of crucial importance. The latter is achieved by very expensive (slow) analog transient-noise simulations over advanced fabrication process technologies. The main challenge of such rigorous security-evaluation analysis lies in the fact that technologies are becoming more and more complex and the physical properties of manufactured devices vary significantly due to process variations. In turn, a detailed security evaluation process imposes exponential time complexity with the circuit size, the number of physical implementation corners (statistical variations) and the accuracy of the circuit simulator. Given these circumstances, what is the cost of not exhausting the entire implementation space? In terms of simulation-time complexity, the benefits would clearly be significant; however, we are interested in evaluating the security implications. This question can be formulated for many other interesting side-channel contexts, for example: how would an attack outcome vary when the adversary builds a leakage template over one device, i.e., one physical corner, and performs the evaluation (attack) phase on a device drawn from a different statistical corner? Alternatively, is it safe to assume that a typical (average) corner represents the worst case in terms of security evaluation, or would it be advisable to perform a security evaluation over another specific view? Finally, how would the outcome vary concretely? We ran in-depth experiments to answer these questions in the hope of finding a good tradeoff between simulation effort and expertise, and security-evaluation degradation. We evaluate the results utilizing methodologies such as template attacks, with a clear distinction between profiling- and attack-phase statistical views. This exemplary view of what an adversary might capture in these scenarios is followed by a more complete statistical evaluation analysis utilizing tools such as the Kullback–Leibler (KL) divergence and the Jensen–Shannon (JS) divergence to draw conclusions. (A minimal sketch of the KL and JS divergences follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
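Note (added for illustration, not from the paper): the Kullback–Leibler and Jensen–Shannon divergences used in the record above to compare leakage distributions can be sketched as below. The function names are arbitrary and the two histograms are made-up stand-ins for measurements from two process corners.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(P || Q) in bits for discrete distributions given as histograms."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log2(p / q)))

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric and bounded by 1 bit."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

corner_a = [5, 20, 40, 25, 10]   # hypothetical leakage histogram, corner A
corner_b = [10, 30, 35, 20, 5]   # hypothetical leakage histogram, corner B
print(kl_divergence(corner_a, corner_b), js_divergence(corner_a, corner_b))
```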
29. Canonical Decomposition of Basic Belief Assignment for Decision-Making Support
- Author
-
Florentin Smarandache, Jean Dezert, DTIS, ONERA, Université Paris Saclay [Palaiseau], ONERA-Université Paris-Saclay, UNIVERSITY OF NEW MEXICO GALLUP USA, Partenaires IRSTEA, and Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture (IRSTEA)-Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture (IRSTEA)
- Subjects
Exponential complexity ,[PHYS]Physics [physics] ,Canonical decomposition ,Theoretical computer science ,Computer science ,Basic belief ,media_common.quotation_subject ,PCR5 ,020206 networking & telecommunications ,02 engineering and technology ,16. Peace & justice ,[SPI]Engineering Sciences [physics] ,Belief functions ,0202 electrical engineering, electronic engineering, information engineering ,Fuse (electrical) ,Decomposition (computer science) ,Decomposition method (queueing theory) ,020201 artificial intelligence & image processing ,Quality (business) ,Support system ,[INFO]Computer Science [cs] ,[MATH]Mathematics [math] ,media_common ,Decision-making - Abstract
We present a new methodology for decision-making support based on belief functions, thanks to a new theoretical canonical decomposition of dichotomous basic belief assignments (BBAs) that has been developed recently. This decomposition, based on proportional conflict redistribution rule no. 5 (PCR5), always exists and is unique. The new PCR5-based decomposition method circumvents the exponential complexity of the direct fusion of BBAs with the PCR5 rule and allows many sources of evidence to be fused quickly. The method we propose in this paper provides both a decision and an estimation of the quality of the decision made, which is appealing for decision-making support systems.
- Published
- 2020
30. A Deterministic-Statistical Multiple-Defect Diagnosis Methodology
- Author
-
R. D. Shawn Blanton and Soumya Mittal
- Subjects
Exponential complexity ,Computer science ,business.industry ,Process development ,Process (computing) ,Hardware_PERFORMANCEANDRELIABILITY ,02 engineering and technology ,Fault injection ,Fault (power engineering) ,Machine learning ,computer.software_genre ,Chip ,020202 computer hardware & architecture ,0202 electrical engineering, electronic engineering, information engineering ,Software diagnosis ,Artificial intelligence ,business ,computer - Abstract
Software diagnosis is the process of locating and characterizing a defect in a failing chip. It is the cornerstone of failure analysis that consequently enables yield learning and monitoring. However, multiple-defect diagnosis is challenging due to error masking and unmasking effects, and the exponential complexity of the solution search process. This paper describes a three-phase, physically-aware diagnosis methodology called MD-LearnX to effectively diagnose multiple defects and, in turn, aid in accelerating design and process development. The first phase identifies a defect that resembles traditional fault models. The second and third phases utilize the X-fault model and machine learning to identify correct candidates. Results from a thorough fault injection and simulation experiment demonstrate that MD-LearnX returns an ideal diagnosis 2X more often than commercial diagnosis. Its effectiveness is further evidenced through a silicon experiment, where, on average, MD-LearnX returns 5.3 fewer candidates per diagnosis as compared to state-of-the-art commercial diagnosis without losing accuracy.
- Published
- 2020
31. Multiple Sequence Alignment using a Multiobjective Artificial Bee Colony Algorithm
- Author
-
Najwa Altwaijry, Areej Almalki, Isra Al-Turaiki, and Malak Almasoud
- Subjects
Artificial bee colony algorithm ,Exponential complexity ,Multiple sequence alignment ,Computer science ,Open problem ,Entropy (information theory) ,Multi-objective optimization ,Algorithm - Abstract
Multiple sequence alignment (MSA) is an essential task in many bioinformatics analyses. However, it is a challenging problem due to its exponential complexity. Although many approaches have been proposed for solving the MSA problem, finding the best alignment remains an open problem. In this work, we propose a new multiobjective optimization approach based on the Artificial Bee Colony (ABC) algorithm to solve the MSA problem. The ABC algorithm is inspired by the intelligent behavior of honey bee swarms. Our proposed approach optimizes two objective functions: the sum-of-pairs (SP) function and entropy. Optimizing these objective functions represents a trade-off between maximizing the SP function and minimizing the entropy to choose the most suitable alignment. Experiments are conducted on 12 datasets from BAliBASE 3.0. The experimental results show that our proposed approach achieves better alignments than Clustal in four of the tested datasets. In general, the proposed approach performs better on the RV12 datasets. (A minimal sketch of the SP and entropy objectives follows this record.)
- Published
- 2020
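Note (added for illustration, not from the paper): the two objectives named in the record above, the sum-of-pairs (SP) score and column entropy, can be sketched as follows. The scoring values and the tiny alignment are made up; real tools use substitution matrices rather than a flat match/mismatch score.

```python
import math
from itertools import combinations

def sp_score(alignment, match=1, mismatch=-1, gap=-2):
    """Sum-of-pairs score: sum pairwise scores over all rows, column by column (toy scoring)."""
    score = 0
    for column in zip(*alignment):
        for a, b in combinations(column, 2):
            if a == '-' and b == '-':
                continue                       # gap-gap pairs not penalized here
            score += gap if '-' in (a, b) else (match if a == b else mismatch)
    return score

def total_column_entropy(alignment):
    """Sum of Shannon entropies of the columns (lower means more conserved)."""
    total = 0.0
    for column in zip(*alignment):
        n = len(column)
        for c in set(column):
            p = column.count(c) / n
            total -= p * math.log2(p)
    return total

aln = ["AC-GT", "ACGGT", "AC-GA"]              # hypothetical aligned sequences
print(sp_score(aln), total_column_entropy(aln))
```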
32. Computing the Homology of Semialgebraic Sets. I: Lax Formulas
- Author
-
Felipe Cucker, Peter Bürgisser, and Josué Tonelli-Cueto
- Subjects
Exponential complexity ,Computational Geometry (cs.CG) ,FOS: Computer and information sciences ,Betti number ,010103 numerical & computational mathematics ,Homology (mathematics) ,01 natural sciences ,Mathematics - Algebraic Geometry ,14P10, 65D18, 65Y20, 68Q25 ,Polynomial inequalities ,Exponential growth ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,FOS: Mathematics ,0101 mathematics ,Algebraic Geometry (math.AG) ,Mathematics ,Discrete mathematics ,Applied Mathematics ,Numerical analysis ,010102 general mathematics ,Exponential function ,Computational Mathematics ,Computational Theory and Mathematics ,Torsion (algebra) ,Computer Science - Computational Geometry ,Analysis - Abstract
We describe and analyze an algorithm for computing the homology (Betti numbers and torsion coefficients) of closed semialgebraic sets given by Boolean formulas without negations over lax polynomial inequalities. The algorithm works in weak exponential time. This means that outside a subset of data having exponentially small measure, the cost of the algorithm is single exponential in the size of the data. All previous algorithms solving this problem have doubly exponential complexity (and this is so for almost all input data). Our algorithm thus represents an exponential acceleration over state-of-the-art algorithms for all input data outside a set that vanishes exponentially fast.
- Published
- 2020
- Full Text
- View/download PDF
33. Towards Selecting Reducts for Building Decision Rules for Rule-Based Classifiers
- Author
-
Jesús Ariel Carrasco-Ochoa, José Fco. Martínez-Trinidad, Nelva N. Almanza-Ortega, and Manuel S. Lazo-Cortés
- Subjects
Exponential complexity ,business.industry ,Computer science ,010401 analytical chemistry ,Sample (statistics) ,Rule-based system ,02 engineering and technology ,Decision rule ,Machine learning ,computer.software_genre ,01 natural sciences ,0104 chemical sciences ,Classifier (linguistics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Rough set ,Artificial intelligence ,business ,Focus (optics) ,computer - Abstract
In rule-based classifiers, calculating all possible rules of a learning sample consumes many resources due to its exponential complexity. Therefore, finding ways to reduce the number and length of the rules without affecting the efficacy of a classifier remains an interesting problem. Reducts from rough set theory have been used to build rule-based classifiers because of their conciseness and understandability. However, the accuracy of the classifiers based on these rules depends on the selected rule subset. In this work, we focus on analyzing three different options for using reducts to build decision rules for rule-based classifiers.
- Published
- 2020
34. Constant-time higher-order Boolean-to-arithmetic masking
- Author
-
Michael Hutter and Michael Tunstall
- Subjects
Exponential complexity ,Computer Networks and Communications ,business.industry ,Computer science ,Quadratic complexity ,Cryptography ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Masking (Electronic Health Record) ,020202 computer hardware & architecture ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Arithmetic ,business ,Computer communication networks ,Versa ,Software ,Order of magnitude ,Computer Science::Cryptography and Security - Abstract
Converting a Boolean mask to an arithmetic mask, and vice versa, is often required in implementing side-channel-resistant instances of cryptographic algorithms that mix Boolean and arithmetic operations. In this paper, we describe a method for converting a Boolean mask to an arithmetic mask that runs in constant time for a fixed order and has quadratic complexity as the security order increases, a significant improvement over previous work, which has exponential complexity. We propose explicit algorithms for a second-order secure Boolean-to-arithmetic mask conversion that uses 31 instructions and for a third-order secure mask conversion that uses 74 instructions. We show that our second-order secure algorithm is at least an order of magnitude faster, and our third-order secure algorithm more than twice as fast, as other algorithms in the literature. (A first-order background sketch follows this record.)
- Published
- 2018
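Note (added as background, not as the paper's algorithm): the conversion problem is easiest to see at first order. Goubin's classic first-order Boolean-to-arithmetic conversion is sketched below for 32-bit words; the constant-time higher-order constructions described in the record above are considerably more involved and are not reproduced here. The function name is arbitrary.

```python
import random

MASK = 0xFFFFFFFF  # work on 32-bit words

def goubin_boolean_to_arithmetic(x_prime, r):
    """Given Boolean shares (x_prime, r) with x = x_prime ^ r, return an
    arithmetic share A such that x = (A + r) mod 2^32, without ever
    recombining the secret x (Goubin's first-order method)."""
    gamma = random.getrandbits(32)
    t = x_prime ^ gamma
    t = (t - gamma) & MASK
    t ^= x_prime
    gamma ^= r
    a = x_prime ^ gamma
    a = (a - gamma) & MASK
    return a ^ t

# Self-check only (a real implementation never sees the unmasked secret x)
x, r = 0xDEADBEEF, random.getrandbits(32)
a = goubin_boolean_to_arithmetic(x ^ r, r)
assert (a + r) & MASK == x
```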
35. Formalization of the problem of protection against reconnaissance in conflict systems
- Author
-
T. N. Yamskikh, M. A. Styugin, and A. A. Kytmanov
- Subjects
Information awareness ,Exponential complexity ,021110 strategic, defence & security studies ,Algebra and Number Theory ,Theoretical computer science ,Optimization problem ,Steganography ,Computer science ,Applied Mathematics ,0211 other engineering and technologies ,Active systems ,02 engineering and technology ,Information security ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Analysis - Abstract
This paper attempts to identify recent trends in counteracting an intruder's penetration into a system at the reconnaissance stage, viewed as a sphere of information security. The problem of the interrelation of actions, agents' information awareness and the system's transition from one state to another is formulated. Based on the functional model, a formalization of three tasks is presented. The first task formulates the conditions under which the system cannot transition into an undesirable state for a given set of indiscernible elements. The second task formalizes the way to achieve a safe state in the system and allows one to formulate the optimization problem. The third task formalizes the conditions under which the actions of system agents may be indiscernible to each other. The scope of study for functional steganography is defined. The exponential complexity of factorizing agents' actions in a system is proved. The definition of absolutely indiscernible actions is provided.
- Published
- 2018
36. Topological entropy and geometric entropy and their application to the horizontal visibility graph for financial time series
- Author
-
Lei Rong and Pengjian Shang
- Subjects
Exponential complexity ,Applied Mathematics ,Mechanical Engineering ,Visibility graph ,Aerospace Engineering ,Ocean Engineering ,Topological entropy ,01 natural sciences ,Symbolic method ,010305 fluids & plasmas ,Correlation ,Control and Systems Engineering ,Simulated data ,0103 physical sciences ,Electrical and Electronic Engineering ,Logistic map ,Symbolic algorithm ,010301 acoustics ,Algorithm ,Mathematics - Abstract
In this paper, we introduce topological entropy (TE) based on time series, which characterizes the total exponential complexity of a quantified system with a single number. Combined with multiscale theory, we propose geometric entropy (GE), aiming to examine the correlation among different time series. In order to study the properties of TE and GE, we apply them to an original symbolic method used to measure time series irreversibility, namely the horizontal visibility algorithm. On this basis, we propose a time series irreversibility measure, i.e., a normalized index. Then, we apply TE and GE based on the horizontal visibility graph symbolic algorithm to simulated time series generated by the logistic map with different parameters. Through a comparison of the results, we find that different simulated data have the same variation tendency of TE, which means that TE is capable of reflecting the similarity among different time series. On the basis of these results, we further analyze the irreversibility of the simulated data and also obtain some interesting findings. From the comparison of the GE results, we conclude that the GE method can distinguish different time series and expose their correlation efficiently. As a further validation, we explore the effects of these methods on the analysis of different stock time series. The results show that they can reflect a large number of interrelationships and successfully quantify the changes in the complexity of different stock market data. (A minimal sketch of the horizontal visibility graph construction follows this record.)
- Published
- 2018
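Note (added for illustration, not from the paper): the horizontal visibility graph used in the record above maps a time series to a graph by a simple rule. The short logistic-map trajectory below mirrors the kind of simulated data the authors mention; the parameter values are arbitrary.

```python
def horizontal_visibility_edges(series):
    """Link i and j (i < j) iff every intermediate value is strictly below
    min(series[i], series[j]); consecutive points are always linked."""
    n = len(series)
    return {(i, j)
            for i in range(n) for j in range(i + 1, n)
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j))}

# Short trajectory of the logistic map x_{t+1} = r * x_t * (1 - x_t)
r, x, traj = 4.0, 0.3, []
for _ in range(10):
    traj.append(x)
    x = r * x * (1 - x)
print(sorted(horizontal_visibility_edges(traj)))
```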
37. Resource Allocation for Delay Differentiated Traffic in Multiuser OFDM Systems.
- Author
-
Meixia Tao, Ying-Chang Liang, and Fan Zhang
- Abstract
Most existing work on adaptive allocation of subcarriers and power in multiuser orthogonal frequency division multiplexing (OFDM) systems has focused on homogeneous traffic consisting solely of either delay-constrained data (guaranteed service) or non-delay-constrained data (best-effort service). In this paper, we investigate the resource allocation problem in a heterogeneous multiuser OFDM system with both delay-constrained (DC) and non-delay-constrained (NDC) traffic. The objective is to maximize the sum-rate of all the users with NDC traffic while maintaining guaranteed rates for the users with DC traffic under a total transmit power constraint. Through our analysis we show that the optimal power allocation over subcarriers follows a multi-level water-filling principle; moreover, the valid candidates competing for each subcarrier include only one NDC user but all DC users. By converting this combinatorial problem with exponential complexity into a convex problem, or showing that it can be solved in the dual domain, efficient iterative algorithms are proposed to find the optimal solutions. To further reduce the computational cost, a low-complexity suboptimal algorithm is also developed. Numerical studies are conducted to evaluate the performance of the proposed algorithms in terms of service outage probability, achievable transmission rate pairs for DC and NDC traffic, and multiuser diversity. [ABSTRACT FROM PUBLISHER] (A single-user water-filling sketch follows this record.)
- Published
- 2008
- Full Text
- View/download PDF
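Note (added as background, not as the paper's multi-level scheme): the water-filling principle that the record above generalizes can be sketched for a single user by bisection on the water level. The subcarrier gains below are made-up numbers.

```python
def waterfilling(gains, total_power, tol=1e-9):
    """Classic single-user water-filling: p_k = max(0, mu - 1/g_k), with the
    water level mu chosen so that the powers sum to total_power."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        lo, hi = (lo, mu) if used > total_power else (mu, hi)
    mu = (lo + hi) / 2
    return [max(0.0, mu - 1.0 / g) for g in gains]

gains = [2.0, 1.0, 0.25]        # hypothetical subcarrier gain-to-noise ratios
print(waterfilling(gains, total_power=4.0))
```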
38. SHORTEST PATH AMIDST DISC OBSTACLES IS COMPUTABLE.
- Author
-
CHANG, EE-CHIEN, CHOI, SUNG WOO, KWON, DO YONG, PARK, HYUNGJU, and YAP, CHEE K.
- Subjects
*MODEL-integrated computing , *COMPUTATIONAL complexity , *RATIONAL numbers , *ALGORITHMS , *NUMBER theory - Abstract
An open question in Exact Geometric Computation is whether there are transcendental computations that can be made "geometrically exact". Perhaps the simplest such problem in computational geometry is that of computing the shortest obstacle-avoiding path between two points p, q in the plane, where the obstacles are a collection of n discs. This problem can be solved in O(n² log n) time in the Real RAM model, but nothing was known about its computability in the standard (Turing) model of computation. We first give a direct proof of the Turing-computability of this problem, provided the radii of the discs are rationally related. We make the usual assumption that the numerical input data are real algebraic numbers. By appealing to effective bounds from transcendental number theory, we further show a single-exponential time upper bound when the input numbers are rational. Our result appears to be the first example of a non-algebraic combinatorial problem which is shown computable. It is also a rare example of transcendental number theory yielding positive computational results. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
39. Efficient computation of basic sums for random polydispersed composites
- Author
-
Wojciech Nawalaniec
- Subjects
Exponential complexity ,Polynomial ,Plane (geometry) ,Applied Mathematics ,Computation ,02 engineering and technology ,01 natural sciences ,010305 fluids & plasmas ,Computational Mathematics ,Packing problems ,020303 mechanical engineering & transports ,Complex algorithm ,0203 mechanical engineering ,0103 physical sciences ,Representative elementary volume ,Composite material ,Mathematics ,Abstraction (linguistics) - Abstract
The main goal of the paper is to develop algorithms and methods for the computation of basic sums, mathematical objects of great importance in computational materials science with applications to the description of the representative volume element (RVE) and to the effective properties of 2D composites. The previously used algorithm had exponential complexity; we propose a linearly complex algorithm. All the presented algorithms can be easily implemented in modern scientific computing packages, while maintaining both efficient calculations and a high level of abstraction. The proposed approach is applied to the derivation of a polynomial approximation of the effective conductivity formula for a 2D random material with non-overlapping circular inclusions with normally distributed radii. The obtained formulas are applied to the optimal packing problem of disks on the plane.
- Published
- 2017
40. On the macroscopic optimization of multicell wireless systems with multiuser detection and multiple antennas - uplink analysis.
- Author
-
Lau, V.K.N.
- Abstract
In this paper, we consider a system with K single-antenna client users, n_B base stations (each base station has n_R antennas), as well as a centralized controller. A client user could be associated with a single base station at any time. All the base stations operate at the same frequency and have optimal multiuser detection per base station, which cancels intracell interference only. We consider a general problem of uplink macroscopic resource management where the centralized controller dynamically determines an appropriate association mapping of the K users with respect to the n_B base stations over a macroscopic time scale. We propose a novel analytical framework for the above macroscopic scheduling problems. A simple rule is to associate a user with the strongest base station (camp-on-the-strongest-cell), and this has been widely employed in conventional cellular systems. However, based on the optimization framework, we found that this conventional approach is in fact not optimal when multiuser detection is employed at the base station. We show that the optimal macroscopic scheduling algorithm is of exponential complexity, and we propose a simple greedy algorithm as a feasible solution. [ABSTRACT FROM PUBLISHER]
- Published
- 2005
- Full Text
- View/download PDF
41. Fast retrieval of similar configurations.
- Author
-
Papadias, D., Mantzourogiannis, M., and Ahmad, I.
- Abstract
Configuration similarity is a special form of content-based image retrieval that considers relative object locations. It can be used as a standalone method, or to complement retrieval based on visual or semantic features. The corresponding queries ask for sets of objects that satisfy some spatio-temporal constraints, e.g., "find all triplets of objects (v1, v2, v3), such that v1 is northeast of v2, which is inside v3." Exhaustive processing (i.e., retrieval of the best solutions) of configuration similarity queries, in general, has exponential complexity and fast search for sub-optimal solutions is the only way to deal with the vast amounts of multimedia information in several real-time applications. In this paper we first discuss the utilization of nonsystematic search heuristics, based on genetic algorithms, simulated annealing and hill climbing approaches. An extensive experimentation with real and synthetic datasets reveals that hill climbing techniques are the best for the current problem; therefore, as a subsequent step we study the search space, and develop improved variations of hill climbing that take advantage of the special structure of the problem to enhance speed. The proposed heuristic methods significantly outperform systematic search when there is only limited time for query processing. [ABSTRACT FROM PUBLISHER]
- Published
- 2003
- Full Text
- View/download PDF
42. Accelerated Exhaustive Algorithm Implementation for Channel Assignment in 802.11 Networks
- Author
-
Mariusz Jakubowski, Iwona Dolinska, and Antoni Masiukiewicz
- Subjects
Exponential complexity ,Speedup ,Interference (communication) ,Wireless network ,Computer science ,Algorithm ,Communication channel - Abstract
The 2.4 GHz band is still heavily used for domestic wireless networks. As the band offers only three non-overlapping channels, in crowded environments users can suffer from a high interference level. To minimize the total interference, an exhaustive channel assignment algorithm, able to find the optimal assignment, can be considered. However, its exponential complexity makes it impractical for sets larger than just several access points (APs). In this paper, a few modifications are introduced which speed up the basic algorithm's execution without deteriorating its optimality. The modifications are divided into implementation-related and algorithm-related. The introduced improvements result in an over 20-fold speed-up of the algorithm's execution, making it possible to find the optimal channel assignments for sets of more than 20 APs in minutes instead of hours. (A brute-force baseline sketch follows this record.)
- Published
- 2019
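Note (added for illustration, not from the paper): the basic exhaustive assignment that the record above accelerates can be sketched as a brute-force search over the three non-overlapping channels. The coupling matrix is a made-up stand-in for measured interference between APs.

```python
from itertools import product

def total_interference(assignment, coupling):
    """Toy cost: sum of pairwise coupling between APs on the same channel
    (channels 1, 6 and 11 are treated as fully non-overlapping)."""
    n = len(assignment)
    return sum(coupling[i][j]
               for i in range(n) for j in range(i + 1, n)
               if assignment[i] == assignment[j])

def exhaustive_assignment(coupling, channels=(1, 6, 11)):
    """Try all |channels|**n assignments and keep the best -- exponential in n."""
    n = len(coupling)
    return min(product(channels, repeat=n),
               key=lambda a: total_interference(a, coupling))

coupling = [[0, 3, 1, 0],    # hypothetical pairwise coupling between 4 APs
            [3, 0, 2, 1],
            [1, 2, 0, 4],
            [0, 1, 4, 0]]
best = exhaustive_assignment(coupling)
print(best, total_interference(best, coupling))
```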
43. Calculating Response-Time Bounds for OpenMP Task Systems with Conditional Branches
- Author
-
Nan Guan, Yaoyao Chi, Jingchang Sun, and Jinghao Sun
- Subjects
Branching (version control) ,Metrical task system ,Exponential complexity ,020203 distributed computing ,Exponential growth ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,Response time ,02 engineering and technology ,Parallel computing ,020202 computer hardware & architecture ,Scheduling (computing) - Abstract
Existing DAG-based task models in real-time scheduling research assume well-nested structures recursively composed of single-source-single-sink parallel and conditional components. However, realistic OpenMP task systems in general have more flexible structures that do not comply with this assumption. In this paper, we model the behaviors of general OpenMP task systems with non-well-nested branching structures and study the problem of how to bound their worst-case response times (WCRT). A naive solution is to apply the established WCRT bound for DAG tasks without conditional branches to the exponentially many possible execution flows, which has exponential time complexity. In this paper, we develop a linear-time algorithm to efficiently calculate WCRT bounds for OpenMP task systems with non-well-nested branching structures. Experiments with both synthetic task graphs and realistic OpenMP programs are conducted to evaluate the performance of our method. (A background bound is given after this record.)
- Published
- 2019
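Note (added as background, not as the paper's result): response-time analyses of DAG tasks without conditional branches commonly start from a Graham-style bound for a work-conserving scheduler on m identical cores, where vol(G) is the total worst-case execution time of all vertices and len(G) is the length of the longest path:

```latex
R \le \mathrm{len}(G) + \frac{\mathrm{vol}(G) - \mathrm{len}(G)}{m}
```

With conditional branches, vol and len would in general have to be taken over exponentially many execution flows, which is the blow-up the record above avoids with a linear-time computation.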
44. SMARTGRAPH: AN ARTIFICIALLY INTELLIGENT GRAPH DATABASE
- Author
-
Ching-Yung Lin, Garud Iyengar, and Hal James Cooper
- Subjects
Router ,Exponential complexity ,Optimization problem ,Graph database ,Theoretical computer science ,Computer science ,Graph (abstract data type) ,Communicating sequential processes ,Data structure ,computer.software_genre ,computer ,Global optimization problem ,computer.programming_language - Abstract
Graph databases and distributed graph computing systems have traditionally abstracted the design and execution of algorithms by encouraging users to take the perspective of lone graph objects, like vertices and edges. In this paper, we introduce the SmartGraph, a graph database that instead relies upon thinking like a smarter device often found in real-life computer networks, the router. Unlike existing methodologies that work at the subgraph level, the SmartGraph is implemented as a network of artificially intelligent Communicating Sequential Processes. The primary goal of this design is to give each “router” a large degree of autonomy. We demonstrate how this design facilitates the formulation and solution of an optimization problem which we refer to as the “router representation problem”, wherein each router selects a beneficial graph data structure according to its individual requirements (including its local data structure, and the operations requested of it). We demonstrate a solution to the router representation problem wherein the combinatorial global optimization problem with exponential complexity is reduced to a series of linear problems locally solvable by each AI router.
- Published
- 2019
45. Rewriting of plain SO tgds into nested tgds
- Author
-
Rihan Hai and Christoph Quix
- Subjects
Exponential complexity ,Theoretical computer science ,Logical equivalence ,Computer science ,General Engineering ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Decidability ,Schema (genetic algorithms) ,010201 computation theory & mathematics ,020204 information systems ,Schema (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,Rewriting ,ddc:004 ,Query Rewriting - Abstract
Proceedings of the VLDB Endowment 12(11), 1526-1538 (2019). doi:10.14778/3342263.3342631, Published by Assoc. of Computing Machinery, [New York, NY]
- Published
- 2019
- Full Text
- View/download PDF
46. Efficient and Accurate Non-exhaustive Pattern-Based Change Detection in Dynamic Networks
- Author
-
Angelo Impedovo, Michelangelo Ceci, and Toon Calders
- Subjects
Change over time ,Exponential complexity ,Computer science ,020204 information systems ,Detector ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,02 engineering and technology ,Data mining ,computer.software_genre ,computer ,Change detection ,Rendering (computer graphics) - Abstract
Pattern-based change detectors (PBCDs) are non-parametric, unsupervised change detection methods that are based on observed changes in sets of frequent patterns over time. In this paper we study PBCDs for dynamic networks, that is, graphs that change over time, represented as a stream of snapshots. Accurate PBCDs rely on exhaustively mining sets of patterns on which a change detection step is performed. Exhaustive mining, however, has worst-case exponential time complexity, rendering this class of algorithms inefficient in practice. Therefore, in this paper we propose non-exhaustive PBCDs for dynamic networks. The algorithm we propose prunes the search space following a beam-search approach. The results obtained on real-world and synthetic dynamic networks show that this approach is surprisingly effective both in increasing the efficiency of the mining step and in achieving higher detection accuracy, compared with state-of-the-art approaches.
- Published
- 2019
47. The Exponential-Time Complexity of Counting (Quantum) Graph Homomorphisms
- Author
-
Hubie Chen, Holger Dell, and Radu Curticapean
- Subjects
Discrete mathematics ,Exponential complexity ,Exponential time hypothesis ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Graph ,Surjective function ,010201 computation theory & mathematics ,020204 information systems ,Quantum graph ,0202 electrical engineering, electronic engineering, information engineering ,Homomorphism ,Linear combination ,Mathematics - Abstract
Many graph parameters can be expressed as homomorphism counts to fixed target graphs; this includes the number of independent sets and the number of k-colorings for any fixed k. Dyer and Greenhill (RSA 2000) gave a sweeping complexity dichotomy for such problems, classifying which target graphs render the problem polynomial-time solvable or #P-hard. In this paper, we give a new and shorter proof of this theorem, with previously unknown tight lower bounds under the exponential-time hypothesis. We similarly strengthen complexity dichotomies by Focke, Goldberg, and Živný (SODA 2018) for counting surjective homomorphisms to fixed graphs. Both results crucially rely on our main contribution, a complexity dichotomy for evaluating linear combinations of homomorphism numbers to fixed graphs. In the terminology of Lovász (Colloquium Publications 2012), this amounts to counting homomorphisms to quantum graphs. (A brute-force homomorphism-counting sketch follows this record.)
- Published
- 2019
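Note (added for illustration, not from the paper): the objects counted in the record above are graph homomorphisms; a brute-force counter, exponential in |V(G)| and thus in line with the lower bounds discussed, is sketched below. Counting homomorphisms into the triangle K3 counts proper 3-colorings. Names and the tiny example graphs are arbitrary.

```python
from itertools import product

def count_homomorphisms(g_edges, n_g, h_edges, n_h):
    """Count maps f: V(G) -> V(H) with every edge of G sent to an edge of H."""
    h_adj = set(h_edges) | {(v, u) for (u, v) in h_edges}   # treat H as undirected
    return sum(all((f[u], f[v]) in h_adj for (u, v) in g_edges)
               for f in product(range(n_h), repeat=n_g))

path = [(0, 1), (1, 2)]                 # the path on 3 vertices
triangle = [(0, 1), (1, 2), (0, 2)]     # K3
print(count_homomorphisms(path, 3, triangle, 3))   # 12 = number of 3-colorings of the path
```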
48. Method for generating decision implication canonical basis based on true premises
- Author
-
Yanhui Zhai, Deyu Li, and Shaoxia Zhang
- Subjects
Exponential complexity ,Theoretical computer science ,05 social sciences ,Complex system ,050301 education ,Computational intelligence ,02 engineering and technology ,Artificial Intelligence ,Standard basis ,0202 electrical engineering, electronic engineering, information engineering ,Formal concept analysis ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,0503 education ,Algorithm ,Time complexity ,Software ,Mathematics ,Optimal decision ,Decision analysis - Abstract
Formal concept analysis is able to visualize and represent knowledge using concept lattices and (attribute) implications. Decision implication is the counterpart of implication in the setting of decision-making. The decision implication canonical basis is a complete, non-redundant and optimal set of decision implications. At present, the decision implication canonical basis can be generated with the help of minimal generators; however, this method is not efficient because of its exponential complexity. To solve this problem, we propose an algorithm that generates the decision implication canonical basis from true premises and analyze its time complexity. Experimental results verify the efficiency of this algorithm.
- Published
- 2016
49. Structural Properties of Nonautoreducible Sets
- Author
-
Alan L. Selman and Dung Nguyen
- Subjects
Exponential complexity ,Discrete mathematics ,NEXPTIME ,Truth table ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Theoretical Computer Science ,Set (abstract data type) ,Combinatorics ,Computational Theory and Mathematics ,Integer ,010201 computation theory & mathematics ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Constant (mathematics) ,Mathematics - Abstract
We investigate autoreducibility properties of complete sets for NEXP under different polynomial-time reductions. Specifically, we show under some polynomial-time reductions that there are complete sets for NEXP that are not autoreducible. We obtain the following main results: (1) For any positive integers s and k such that 2^s − 1 > k, there is a ≤^p_{s-T}-complete set for NEXP that is not ≤^p_{k-tt}-autoreducible. (2) For every constant c > 1, there is a ≤^p_{2-T}-complete set for NEXP that is not autoreducible under nonadaptive reductions that make no more than three queries, each of length between n^{1/c} and n^c, where n is the input size. (3) For any positive integer k, there is a ≤^p_{k-tt}-complete set for NEXP that is not autoreducible under ≤^p_{k-tt}-reductions whose truth table is not a disjunction or a negated disjunction. Finally, we show that settling the question of whether every ≤^p_{dtt}-complete set for NEXP is ≤^p_{NOR-tt}-autoreducible, either positively or negatively, would lead to major results about the exponential time complexity classes.
- Published
- 2016
50. The Relative Exponential Time Complexity of Approximate Counting Satisfying Assignments
- Author
-
Patrick Traxler
- Subjects
FOS: Computer and information sciences ,Exponential complexity ,Discrete mathematics ,General Computer Science ,Applied Mathematics ,010102 general mathematics ,Hash function ,0102 computer and information sciences ,Computational Complexity (cs.CC) ,01 natural sciences ,Satisfiability ,Computer Science Applications ,Reduction (complexity) ,Computer Science - Computational Complexity ,#SAT ,010201 computation theory & mathematics ,Theory of computation ,0101 mathematics ,Boolean function ,Mathematics - Abstract
We study the exponential time complexity of approximately counting satisfying assignments of CNFs. We reduce the problem to deciding satisfiability of a CNF. Our reduction preserves the number of variables of the input formula and thus also preserves the exponential complexity of approximate counting. Our algorithm is also similar to an algorithm which works particularly well in practice and for which, however, no approximation guarantee was known. Towards an analysis of our reduction, we provide a new inequality similar to the Bonami-Beckner hypercontractive inequality.
- Published
- 2016