Search Results (217 results)
2. Implementations and the independent set polynomial below the Shearer threshold.
- Author
- Galanis, Andreas, Goldberg, Leslie Ann, and Štefankovič, Daniel
- Subjects
- POLYNOMIALS, PARTITION functions, REAL numbers, STATISTICAL physics, COMBINATORICS, INDEPENDENT sets
- Abstract
The independent set polynomial is important in many areas of combinatorics, computer science, and statistical physics. For every integer Δ ≥ 2, the Shearer threshold is the value λ*(Δ) = (Δ − 1)^(Δ−1) / Δ^Δ. It is known that for λ < −λ*(Δ), there are graphs G with maximum degree Δ whose independent set polynomial, evaluated at λ, is at most 0. Also, there are no such graphs for any λ > −λ*(Δ). This paper is motivated by the computational problem of approximating the independent set polynomial when λ < −λ*(Δ). The key issue in complexity bounds for this problem is "implementation". Informally, an implementation of a real number λ′ is a graph whose hard-core partition function, evaluated at λ, simulates a vertex weight of λ′, in the sense that λ′ is the ratio between the contribution to the partition function from independent sets containing a certain vertex and the contribution from independent sets that do not contain that vertex. Implementations are the cornerstone of intractability results for the problem of approximately evaluating the independent set polynomial. Our main result is that, for any λ < −λ*(Δ), it is possible to implement a set of values that is dense over the reals. The result is tight in the sense that it is not possible to implement a set of values that is dense over the reals for any λ > −λ*(Δ). Our result has already been used in a paper with Bezáková (STOC 2018) to show that it is #P-hard to approximate the evaluation of the independent set polynomial on graphs of degree at most Δ at any value λ < −λ*(Δ). In the appendix, we give an additional incomparable inapproximability result (strengthening the inapproximability bound to an exponential factor, but weakening the hardness to NP-hardness). [ABSTRACT FROM AUTHOR]
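The threshold formula and the sign change it governs are easy to check numerically on paths (maximum degree 2), whose independence polynomials obey the recurrence Z_n = Z_{n−1} + λ·Z_{n−2}. The sketch below is purely illustrative and is not taken from the paper:

```python
def shearer(delta):
    """Shearer threshold: lambda*(D) = (D-1)^(D-1) / D^D."""
    return (delta - 1) ** (delta - 1) / delta ** delta

def first_nonpositive_path(lam, max_n=1000):
    """Independence polynomial Z_n of the n-vertex path satisfies
    Z_n = Z_{n-1} + lam * Z_{n-2}. Return the smallest n with Z_n <= 0,
    or None if none is found up to max_n. Paths have maximum degree 2,
    so lambda*(2) = 1/4 is the relevant threshold."""
    z_prev, z = 1.0, 1.0 + lam  # Z_0 and Z_1
    for n in range(2, max_n + 1):
        z_prev, z = z, z + lam * z_prev
        if z <= 0:
            return n
    return None
```

With λ = −0.3 < −λ*(2) = −0.25, the path polynomial first becomes non-positive at n = 6, while at λ = −0.2 > −λ*(2) it stays positive, matching the dichotomy the abstract describes.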
- Published
- 2023
3. The longest cycle problem is polynomial on interval graphs.
- Author
- Shang, Jianhui, Li, Peng, and Shi, Yi
- Subjects
- INTERSECTION graph theory, POLYNOMIALS, POLYNOMIAL time algorithms, DYNAMIC programming, HAMILTONIAN graph theory, GRAPH connectivity
- Abstract
• The longest cycle problem on interval graphs is solved by the first simple polynomial algorithm. • Properties of normal orderings of interval graphs are proposed to solve the longest cycle problem. • The longest Hamiltonian-connected subgraph problem on interval graphs may be solved by a similar approach. The longest cycle problem is the problem of finding a cycle with the maximum number of vertices in a graph. Although it is solvable in polynomial time on a few trivial graph classes, the longest cycle problem is well known to be NP-complete. A lot of effort has been devoted to the longest cycle problem; to the best of our knowledge, however, there were no polynomial algorithms solving it on any non-trivial graph class. Interval graphs, which lie in the intersection of chordal graphs and asteroidal triple-free graphs, are now known to be a non-trivial graph class admitting a polynomial algorithm for the longest cycle problem. In 2009, K. Ioannidou, G.B. Mertzios and S.D. Nikolopoulos presented a polynomial algorithm for the longest path problem on interval graphs in Ioannidou et al. (2009) [19]. Inspired by their work, we investigate the longest cycle problem on interval graphs. In this paper, we present the first polynomial algorithm for the longest cycle problem on interval graphs. The algorithm uses a dynamic programming approach and runs in O(n^8) time, where n is the number of vertices of the input graph. Using a similar approach, we design a polynomial algorithm for the longest k-thick subgraph problem on interval graphs, which will be presented in a separate work.
Owing to the interesting properties of k-thick interval graphs that we discovered (e.g., an interval graph G is traceable if and only if G is 1-thick, G is Hamiltonian if and only if G is 2-thick, G is Hamiltonian-connected if and only if G is 3-thick, and so on), the algorithm presented in this paper can be important in studying spanning connectivity on interval graphs. [ABSTRACT FROM AUTHOR]
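For intuition, the problem itself (though not the paper's O(n^8) dynamic program) can be stated as a few lines of exhaustive search over a tiny interval graph; the intervals below are made-up illustration data:

```python
from itertools import combinations, permutations

def interval_graph(intervals):
    """Adjacency sets of the intersection graph of closed intervals."""
    n = len(intervals)
    adj = [set() for _ in range(n)]
    for i, j in combinations(range(n), 2):
        (a, b), (c, d) = intervals[i], intervals[j]
        if a <= d and c <= b:  # the two intervals overlap
            adj[i].add(j)
            adj[j].add(i)
    return adj

def longest_cycle(adj):
    """Length of a longest cycle, by brute force over vertex subsets and
    orderings. Exponential time; usable only on tiny graphs, in contrast
    to the paper's polynomial algorithm."""
    n = len(adj)
    for k in range(n, 2, -1):  # try longer cycles first
        for verts in combinations(range(n), k):
            for perm in permutations(verts[1:]):
                cyc = (verts[0],) + perm
                if all(cyc[(i + 1) % k] in adj[cyc[i]] for i in range(k)):
                    return k
    return 0

# Five intervals whose intersection graph has a Hamiltonian cycle:
demo = interval_graph([(1, 3), (2, 5), (4, 7), (6, 8), (2, 9)])
```

On `demo`, the search finds a cycle through all five vertices, so `longest_cycle(demo)` returns 5.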
- Published
- 2021
4. Deterministic normal position transformation and its applications.
- Author
- M.-Alizadeh, Benyamin and Hashemi, Amir
- Subjects
- DETERMINISTIC algorithms, ALGORITHMS, LINEAR algebra, GROBNER bases, POLYNOMIALS
- Abstract
In this paper, we address the problem of transforming an ideal into normal position. We present a deterministic algorithm (based on linear algebra techniques) that finds a suitable linear change of variables to transform a given zero-dimensional (not necessarily radical) ideal into normal position. The main idea of the algorithm is to achieve this position via a sequence of elementary linear changes, i.e. changes each involving only two variables. Furthermore, we give a complete complexity analysis of all the algorithms described in this paper, based on a sharp upper bound for the maximal number of required elementary linear changes in the special case of bivariate polynomial ideals. As an application of this investigation, we conclude the paper by giving a sharper complexity bound for computing a primary decomposition of a zero-dimensional ideal. [ABSTRACT FROM AUTHOR]
- Published
- 2020
5. Placing quantified variants of 3-SAT and Not-All-Equal 3-SAT in the polynomial hierarchy.
- Author
- Döcker, Janosch, Dorn, Britta, Linz, Simone, and Semple, Charles
- Subjects
- NP-complete problems, POLYNOMIAL time algorithms, HIERARCHIES, POLYNOMIALS, COMPUTATIONAL complexity
- Abstract
The complexity of variants of 3-SAT and Not-All-Equal 3-SAT is well studied. In contrast, however, very little is known about the complexity of the problems' quantified counterparts. In the first part of this paper, we show that ∀∃ 3-SAT is Π2P-complete even if (1) each variable appears exactly twice unnegated and exactly twice negated, (2) each clause is a disjunction of exactly three distinct variables, and (3) the number of universal variables equals the number of existential variables. Furthermore, we show that the problem remains Π2P-complete if (1a) each universal variable appears exactly once unnegated and exactly once negated, (1b) each existential variable appears exactly twice unnegated and exactly twice negated, and (2) and (3) remain unchanged. On the other hand, the problem becomes NP-complete for certain variants in which each universal variable appears exactly once. In the second part of the paper, we establish Π2P-completeness for ∀∃ Not-All-Equal 3-SAT even if (1′) the Boolean formula is linear and monotone, (2′) each universal variable appears exactly once and each existential variable appears exactly three times, and (3′) each clause is a disjunction of exactly three distinct variables and contains at most one universal variable. On the positive side, we uncover variants of ∀∃ Not-All-Equal 3-SAT that are co-NP-complete or solvable in polynomial time. [ABSTRACT FROM AUTHOR]
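The quantified semantics studied here can be made concrete with a tiny brute-force evaluator (exponential time, purely illustrative; the variable-occurrence restrictions in the abstract are not enforced):

```python
from itertools import product

def forall_exists_sat(u_vars, e_vars, clauses):
    """Semantics of a forall-exists SAT instance: for EVERY assignment to
    the universal variables, SOME assignment to the existential variables
    must satisfy all clauses. Literals are +v / -v over integer names."""
    def truth(assign, lit):
        return assign[abs(lit)] != (lit < 0)

    for u_bits in product([False, True], repeat=len(u_vars)):
        assign = dict(zip(u_vars, u_bits))
        for e_bits in product([False, True], repeat=len(e_vars)):
            assign.update(zip(e_vars, e_bits))
            if all(any(truth(assign, l) for l in cl) for cl in clauses):
                break  # existential response found for this universal choice
        else:
            return False  # no response exists: formula is false
    return True
```

For example, ∀x ∃y (x ∨ y) ∧ (¬x ∨ ¬y) is true (take y = ¬x), whereas ∀x (x) is false.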
- Published
- 2020
6. Universal enzymatic numerical P systems with small number of enzymatic rules.
- Author
- Liu, Jun, Wang, Leiya, Zhang, Gexiang, Verlan, Sergey, and Zhu, Ming
- Subjects
- NUMBER systems, MOTION control devices, NONLINEAR functions, POLYNOMIALS, SPACE robotics
- Abstract
Enzymatic Numerical P Systems (ENPSs) are a model of membrane computing that is well suited to the simulation of physical processes and that has been used for the design and implementation of motion controllers for wheeled robots and flying drones. The ENPS model has been proven Turing universal, and theoretical effort has focused on minimizing various descriptional complexity parameters. In this paper, we explore the minimum number of enzymatic rules needed to achieve universality in ENPSs, focusing on the all-parallel derivation mode, where all applicable rules are applied at the same time. We show that, when production functions are restricted to be linear, universality can be obtained using 21 enzymatic rules, substantially improving previously known results. If production functions are allowed to be polynomials of degree 2, we show that a single enzymatic rule is sufficient to achieve universality. To obtain these results, a new proof method is introduced, based on the translation of ENPSs into systems of conditional recurrences. • The number of enzymatic rules with linear production functions needed for ENPSs to reach universality is reduced from 39 to 21. • Only one enzymatic rule is sufficient for ENPSs to achieve universality when degree-2 polynomials are used as production functions. • Conditional recurrences are introduced to prove the universality of ENPSs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
7. Revisiting RSA-polynomial problem and semiprime factorization.
- Author
- Zheng, Mengce
- Subjects
- PUBLIC key cryptography, FACTORIZATION, INTEGERS, POLYNOMIALS
- Abstract
This paper focuses on the RSA-polynomial problem, a recently proposed cryptographic hard problem, along with its various applications. We revisit this problem and conduct a refined analysis to address an ambiguous condition that was previously introduced in the context of RSA-polynomial based semiprime factorization. By deriving an accurate attack condition, we are able to identify weak cases of the RSA-polynomial problem and expand the vulnerable bound. To facilitate this, we propose two optimized factoring attacks that leverage improved lattice-based theorems for solving bivariate integer polynomials of a specific form. The validity and effectiveness of our proposed factoring attacks are verified through both theoretical analysis and experimental results. Additionally, we examine the RSA-polynomial based commitment scheme and identify deficiencies that compromise its reliability. To address these limitations, we propose enhancements to the commitment phase of the scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2024
8. Polynomial expressions for non-binomial structures.
- Author
- Sabeti, Rostam
- Subjects
- BINOMIAL distribution, BINOMIAL coefficients, POLYNOMIALS, COMPUTER algorithms, BINOMIAL theorem
- Abstract
Following a recent paper of the author on polynomial expressions with respect to binomial ideals, the current paper is devoted to the non-binomial case. Besides the established facts in the underlying theory, the correctness and termination of the algorithms are addressed. From the collection of examples, the main large-scale non-binomial one is of particular importance. Application of the new algorithms to moderately large binomial cases such as cyclic 16-roots and cyclic 18-roots is discussed. A combinatorial analysis shows the complexity of the proposed algorithms to be at least O(log(log n) · n² (log n)²). [ABSTRACT FROM AUTHOR]
- Published
- 2019
9. Efficient algorithms for the minmax regret path center problem with length constraint on trees.
- Author
- Wang, Biing-Feng
- Subjects
- TREES, POLYNOMIALS, TREE graphs
- Abstract
When there is no constraint on the length, efficient algorithms are known for the minmax regret path center, path median, and path centdian problems on trees. In a recent review of location problems, these problems with a length constraint were posed as open. The focus of this paper is the minmax regret path center problem with a length constraint on a tree. Efficient algorithms are presented for both the continuous and discrete models, requiring O(n lg² n) and O(n lg n) time, respectively. Our algorithms are based on an approach introduced by Averbakh and Berman for solving the minmax regret p-center problem. To apply their approach, we give a sufficient condition under which the approach works. This result is of independent interest: according to the condition, polynomial algorithms are obtained immediately for the minmax regret versions of many other center location problems for which no algorithms were known before. [ABSTRACT FROM AUTHOR]
- Published
- 2022
10. Fine-grained polynomial functional encryption.
- Author
- Zhu, Ziqi, Gong, Junqing, Wang, Yuyu, and Qian, Haifeng
- Subjects
- POLYNOMIALS, CRYPTOGRAPHY, ALGORITHMS
- Abstract
In this work, we present fine-grained secure polynomial functional encryption (PFE) for degree d ≥ 1 over a field F: a ciphertext encrypts x ∈ F^n, a key is associated with a degree-d polynomial P, and decryption recovers P(x) ∈ F. Fine-grained cryptographic primitives are secure against a resource-bounded class of adversaries and can be computed by honest users with fewer resources than the adversaries. In this paper, we construct fine-grained PFE in two settings: (1) NC¹ PFE: based on the worst-case assumption NC¹ ⊊ ⊕L/poly, we construct public-key polynomial functional encryption achieving (i) selective simulation-based security and (ii) static function-hiding against adversaries in NC¹, where all honest algorithms are computable in AC⁰[2] and ciphertext sizes are O(n). (2) AC⁰ PFE: we construct private-key polynomial functional encryption achieving unconditionally selective simulation-based security against adversaries in AC⁰, where all honest algorithms are computable in AC⁰ and ciphertext sizes are O(n). [ABSTRACT FROM AUTHOR]
- Published
- 2024
11. A new algorithm for computing the nearest polynomial to multiple given polynomials via weighted ℓ2,q-norm minimization and its complex extension.
- Author
- Hu, Wenyu, Huang, Huiying, Zhang, Rong, Huang, Jinhong, and Yi, Yun
- Subjects
- POLYNOMIALS, ALGORITHMS, PROBLEM solving, CONCAVE functions
- Abstract
A new algorithm is proposed in this paper for computing the nearest polynomial to multiple given polynomials with a given zero in the real case, where the distance between polynomials is defined by the weighted ℓ2,q norm (0 < q ≤ 2). First, the problem is formulated as a univariate constrained non-convex minimization problem, where prior information about the coefficients of the polynomial can be embedded by selecting proper weights. Then, an iteratively reweighted algorithm is designed to solve the resulting problem, and the convergence and rate of convergence are demonstrated uniformly for all q in (0, 2]. Since all existing methods for computing the nearest polynomial to multiple given polynomials with a given zero are limited to the real case, we extend the results to the complex case. Finally, two representative examples, separately computing the nearest real and complex polynomials, are presented to show the effectiveness of the proposed algorithm. • A weighted ℓ2,q-norm based minimization problem is constructed, with prior information about the polynomial embedded. • After converting the problem into a univariate one, an iteratively reweighted algorithm is proposed to solve it. • Using properties of a concave function, the convergence and rate of convergence of the proposed algorithm are established. • This is the first study of computing the nearest polynomial to multiple given polynomials with a given zero in the complex case. [ABSTRACT FROM AUTHOR]
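As background, the single-polynomial ℓ2 version of this problem has a classical closed form: projecting the coefficient vector onto the hyperplane p(z0) = 0. The sketch below covers only that special case, not the paper's weighted ℓ2,q multi-polynomial algorithm:

```python
def nearest_poly_with_zero(c, z0):
    """Coefficients (c[i] multiplies x^i) of the real polynomial closest,
    in the plain l2 coefficient norm, to the given one among polynomials
    vanishing at z0. Obtained by one orthogonal projection onto the
    hyperplane { p : p(z0) = 0 }."""
    v = [z0 ** i for i in range(len(c))]            # the constraint vector
    p_at_z0 = sum(ci * vi for ci, vi in zip(c, v))  # residual p(z0)
    nv2 = sum(w * w for w in v)
    return [ci - p_at_z0 * vi / nv2 for ci, vi in zip(c, v)]
```

For example, the ℓ2-nearest polynomial to x² + 1 with a zero at 1 is (1/3) − (2/3)x + (1/3)x².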
- Published
- 2024
12. On some FPT problems without polynomial Turing compressions.
- Author
- Luo, Weidong
- Subjects
- POLYNOMIALS, LINEAR programming, INTEGER programming, HARDNESS
- Abstract
An impressive hardness theory, which can prove compression lower bounds for a large number of FPT problems, has been established under the assumption that NP ⊈ coNP/poly. However, there are no problems in FPT for which the existence of polynomial Turing compressions has been excluded under any widely believed complexity assumption. In this paper, we provide a technique that can be used to prove that some FPT problems have no small Turing compressions, under the assumption that there exists a problem in NP that does not have small-sized circuits. These FPT problems, which include edge clique cover parameterized by the number of cliques, integer linear programming, and a-choosability parameterized by some structural parameters, have the property that they remain NP-hard under Cook reductions even if their parameter values are small. Moreover, a trade-off between the size of the Turing compression lower bound and the robustness of the complexity assumption is obtained. In particular, we demonstrate that these FPT problems have no polynomial Turing compressions unless every set in NP has quasi-polynomial-sized circuits, and no 2^o(k) Turing compressions unless every set in NP has sub-exponential-sized circuits. Additionally, Turing kernelization lower bounds for these FPT problems are provided under some weaker complexity assumptions. Lastly, compression lower bounds for the above-mentioned FPT problems are proved under complexity assumptions weaker than NP ⊈ coNP/poly; moreover, these results are proved with a method different from the previous hardness theory for compression lower bounds. [ABSTRACT FROM AUTHOR]
- Published
- 2022
13. Sampling polynomial trajectories for LTL verification.
- Author
- Selvaratnam, Daniel, Cantoni, Michael, Davoren, J.M., and Shames, Iman
- Subjects
- POLYNOMIALS, ATOMIC number, ALGEBRA
- Abstract
This paper concerns the verification of continuous-time polynomial spline trajectories against linear temporal logic specifications (LTL without 'next'). Each atomic proposition is assumed to represent a state space region described by a multivariate polynomial inequality. The proposed approach samples a trajectory strategically, to capture every one of its region transitions. This yields a discrete word called a trace, which is amenable to established formal methods for path checking. The original continuous-time trajectory is shown to satisfy the specification if and only if its trace does. General topological conditions on the sample points are derived that ensure a trace is recorded for arbitrary continuous paths, given arbitrary region descriptions. Using techniques from computer algebra, a trace generation algorithm is developed to satisfy these conditions when the path and region boundaries are defined by polynomials. The proposed PolyTrace algorithm has polynomial complexity in the number of atomic propositions, and is guaranteed to produce a trace of any polynomial path. Its performance is demonstrated via numerical examples and a case study from robotics. [ABSTRACT FROM AUTHOR]
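The notion of a trace can be illustrated with naive uniform sampling: record the Boolean region vector of the trajectory whenever it changes. Unlike the paper's PolyTrace algorithm, grid sampling can miss transitions between sample points, so this is only a sketch of the object being computed, on made-up data:

```python
def trace(path, regions, t0, t1, steps=10000):
    """Naive trace extraction: sample the trajectory path(t) on a fine
    uniform grid over [t0, t1] and append the tuple of region memberships
    (g(x) >= 0 for each region polynomial g) whenever it changes. This
    can miss fast transitions; PolyTrace chooses sample points so that
    none are missed."""
    word = []
    for k in range(steps + 1):
        t = t0 + (t1 - t0) * k / steps
        x = path(t)
        letter = tuple(g(x) >= 0 for g in regions)
        if not word or word[-1] != letter:
            word.append(letter)
    return word
```

For the scalar path x(t) = t on [0, 2] with the single region {x : x − 1 ≥ 0}, the trace is the two-letter word [(False,), (True,)].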
- Published
- 2022
14. Improved algorithms for the general exact satisfiability problem.
- Author
- Hoi, Gordon and Stephan, Frank
- Subjects
- ALGORITHMS, POLYNOMIALS, POLYNOMIAL time algorithms, ASSIGNMENT problems (Programming)
- Abstract
The Exact Satisfiability problem asks whether we can find an assignment such that exactly one literal in each clause is assigned 1, while the rest are all assigned 0. We can generalise this problem by defining that a Cj clause is satisfied iff exactly j of the literals in the clause are 1 and all others are 0. We then introduce the family of Generalised Exact Satisfiability problems, GiXSAT, as the problem of checking whether a given instance consisting of Cj clauses with j ∈ {0, 1, ..., i} has a satisfying assignment. In this paper, we present faster exact polynomial-space algorithms, using a nonstandard measure, that solve GiXSAT for i ∈ {2, 3, 4} in O(1.3674^n) time, O(1.5687^n) time, and O(1.6545^n) time, respectively, where n is the number of variables. This improves the current state of the art for polynomial-space algorithms from O(1.4203^n) time for G2XSAT by Zhou, Jiang and Yin, from O(1.6202^n) time for G3XSAT by Dahllöf, and from O(1.6844^n) time for G4XSAT, also by Dahllöf. In addition, we present faster exact algorithms solving G2XSAT, G3XSAT and G4XSAT in O(1.3188^n) time, O(1.3407^n) time and O(1.3536^n) time, respectively, at the expense of using exponential space. [ABSTRACT FROM AUTHOR]
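The clause semantics just defined is easy to pin down with a brute-force checker (exponential in n, for illustration only; the paper's algorithms are far faster):

```python
from itertools import product

def gixsat(clauses, n):
    """Brute-force GiXSAT: a clause (j, lits) is satisfied iff exactly j
    of its literals are true. Literals are DIMACS-style +v / -v over
    variables 1..n."""
    for bits in product([False, True], repeat=n):
        def val(lit):
            return bits[abs(lit) - 1] != (lit < 0)
        if all(sum(val(l) for l in lits) == j for j, lits in clauses):
            return True
    return False
```

For instance, the C1 clause "exactly one of x1, x2" is satisfiable, while requiring both "exactly two of x1, x2" and "exactly zero of x1, x2" simultaneously is not.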
- Published
- 2021
15. A novel characterization of the complexity class Θ2P based on counting and comparison.
- Author
- Lukasiewicz, Thomas and Malizia, Enrico
- Subjects
- TURING machines, POLYNOMIALS, COUNTING, COMPUTATIONAL complexity, BOOLEAN algebra
- Abstract
The complexity class Θ2P, the class of languages recognizable by deterministic Turing machines in polynomial time with at most logarithmically many calls to an NP oracle, has received extensive attention in the literature. Its complete problems can be characterized by different specific tasks, such as deciding whether the optimum solution of an NP problem is unique, or whether it is in some sense "odd" (e.g., whether its size is an odd number). In this paper, we introduce a new characterization of this class and of its generalization ΘkP to the k-th level of the polynomial hierarchy. We show that problems in ΘkP are also those whose solution involves deciding, for two given sets A and B of instances of two Σ(k−1)P-complete (or Π(k−1)P-complete) problems, whether the number of "yes"-instances in A is greater than that in B. Moreover, based on this new characterization, we provide a novel sufficient condition for ΘkP-hardness. We also define the general problem Comp-Validk, which we prove Θ(k+1)P-complete. Comp-Validk is the problem of deciding, given two sets A and B of quantified Boolean formulas with at most k alternating quantifiers, whether the number of valid formulas in A is greater than that in B. Notably, the problem Comp-Sat of deciding whether one set contains more satisfiable Boolean formulas than another, a particular case of Comp-Valid1, turns out to be a very intuitive Θ2P-complete problem. Nonetheless, to our knowledge, it had eluded formal definition to date. In fact, given its strict adherence to the count-and-compare semantics introduced here, Comp-Validk is among the most suitable tools for proving ΘkP-hardness of problems involving counting and comparing the number of "yes"-instances in two sets. We support this by showing that the Θ2P-hardness of the Max voting scheme over mCP-nets is easily obtained via the new characterization of ΘkP introduced in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2017
16. On mixed polynomials of bidegree (n,1).
- Author
- Elkadi, Mohamed and Galligo, André
- Subjects
- POLYNOMIALS, COMPLEX variables, MATHEMATICAL functions, COEFFICIENTS (Statistics), MULTIVARIATE analysis, VANDERMONDE matrices
- Abstract
Specifying the bidegrees (n, m) of mixed polynomials P(z, z̄) of a single complex variable z, with complex coefficients, allows one to investigate interesting root structures and counting, intermediate between complex and real algebra. Multivariate mixed polynomials have appeared in recent papers dealing with Milnor fibrations, but in this paper we focus on the univariate case with m = 1, which is closely related to the important subject of harmonic maps. Here we adapt two algorithms from computer algebra to this setting: Vandermonde interpolation and a bisection-exclusion method for root isolation. Implemented in Maple, they are used to explore some interesting classes of examples. [ABSTRACT FROM AUTHOR]
- Published
- 2017
17. Functional encryption for cubic polynomials and implementation.
- Author
- Zhang, Zheng and Zhang, Fangguo
- Subjects
- PUBLIC key cryptography, POLYNOMIALS, ACCESS control, CRYPTOGRAPHY
- Abstract
• A FULL-SIM secure Q-bounded functional encryption scheme for cubic polynomials based on the RLWE assumption is proposed. • A FULL-SIM secure functional encryption scheme for all circuits is constructed. • The above two schemes are implemented in C++ with the NTL library. Functional encryption (FE), which provides fine-grained access control over encrypted data, is becoming a new hot spot in the field of cryptography. Recent applications, such as outsourced computation and searchable encryption, suggest that FE has unlimited possibilities. It shows particular promise for constructing indistinguishability obfuscation and reusable garbled circuits. Furthermore, bounded-collusion functional encryption is an extension of FE that withstands more than one key query and protects the security of messages under more than one function key. In this paper, we propose a bounded-collusion FE scheme for cubic polynomials, which follows from Agrawal and Rosen's work at TCC 2017. Our construction invokes only the Regev public-key encryption scheme and a linear FE scheme, which avoids complex encodings defined recursively. Moreover, we propose an FE scheme for all circuits with FULL-SIM security. Finally, we implement these schemes and analyse their parameter sizes and time and space performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
18. (Imperfect) strategies to generate primitive polynomials over GF(2).
- Author
- Adak, Sumit and Das, Sukanta
- Subjects
- POLYNOMIALS, CELLULAR automata
- Abstract
• The paper introduces three greedy strategies for finding primitive polynomials of large degree over GF(2). • Maximal-length cellular automata (CAs) are used as tools to generate primitive polynomials. • The proposed strategies greedily synthesize (linear) CAs of different sizes, which are almost always of maximal length. • The characteristic polynomials of the synthesized cellular automata are claimed to be primitive. • We conclude this work with a conjecture that, for Sophie Germain primes, the strategies always generate primitive polynomials. We present three greedy strategies for finding primitive polynomials of large degree over GF(2). Maximal-length cellular automata (CAs) are used as tools to generate the primitive polynomials. The proposed strategies greedily synthesize (linear) CAs of different sizes, which are almost always of maximal length. Since the characteristic polynomial of a maximal-length cellular automaton is primitive, the characteristic polynomials of the synthesized CAs are claimed to be primitive. [ABSTRACT FROM AUTHOR]
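A standard (non-greedy) baseline for checking primitivity of a degree-d polynomial over GF(2) is to verify that x has multiplicative order 2^d − 1 modulo the polynomial. The sketch below shows that textbook test; it is not the paper's CA-based synthesis:

```python
def order_of_x(p, d):
    """Multiplicative order of x in GF(2)[x]/(p), where p is given as a
    coefficient bitmask of degree d (bit d set; e.g. 0b10011 = x^4+x+1)."""
    s = 2  # the field element x
    if s >> d & 1:   # reduce immediately when d == 1
        s ^= p
    k = 1
    while s != 1:
        s <<= 1          # multiply by x ...
        if s >> d & 1:
            s ^= p       # ... and reduce modulo p
        k += 1
        if k > (1 << d):
            return 0     # x is not invertible (p has no constant term)
    return k

def is_primitive(p, d):
    """p is primitive over GF(2) iff x generates the whole multiplicative
    group of the quotient field, i.e. its order is 2^d - 1."""
    return order_of_x(p, d) == 2 ** d - 1
```

For example, x⁴ + x + 1 is primitive (order 15), while x⁴ + x³ + x² + x + 1 is irreducible but not primitive, since x has order 5 modulo it.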
- Published
- 2021
19. Differentially private high dimensional sparse covariance matrix estimation.
- Author
- Wang, Di and Xu, Jinhui
- Subjects
- SPARSE matrices, POLYNOMIALS, COVARIANCE matrices, THRESHOLDING algorithms
- Abstract
• We give the first study of estimating a sparse covariance matrix under the DP model. • We propose a simple but nontrivial DP method and show that it achieves a non-trivial error bound. • The method also extends easily to the local differential privacy model. • Experiments on synthetic datasets confirm our theoretical results. In this paper, we study the problem of estimating the covariance matrix under differential privacy, where the underlying covariance matrix is assumed to be sparse and of high dimension. We propose a new method, called DP-Thresholding, that achieves a non-trivial ℓ2-norm based error bound whose dependence on the dimension drops to logarithmic instead of polynomial; this is significantly better than existing methods, which add noise directly to the empirical covariance matrix. We also extend the ℓ2-norm based error bound to a general ℓw-norm based one for any 1 ≤ w ≤ ∞, and show that they share the same upper bound asymptotically. Our approach can be easily extended to local differential privacy. Experiments on synthetic datasets show results consistent with the theoretical claims. [ABSTRACT FROM AUTHOR]
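The general noise-then-threshold idea can be sketched in a few lines. This is only an illustration of the shape of such an estimator: the sensitivity used below assumes each coordinate lies in [−1, 1], and the paper's DP-Thresholding calibration and guarantees are assumed to differ in detail:

```python
import math
import random

def dp_threshold_cov(rows, eps, tau, seed=0):
    """Sketch: perturb each entry of the empirical covariance with Laplace
    noise, symmetrise, then zero out entries below tau. The noise scale
    2/(n*eps) is a simplifying assumption (coordinates in [-1, 1]), not
    the paper's exact calibration."""
    rng = random.Random(seed)
    n, d = len(rows), len(rows[0])
    scale = 2.0 / (n * eps)

    def lap():
        # difference of two i.i.d. exponentials is Laplace(0, scale)
        return rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

    cov = [[sum(r[i] * r[j] for r in rows) / n for j in range(d)]
           for i in range(d)]
    noisy = [[cov[i][j] + lap() for j in range(d)] for i in range(d)]
    out = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(d):
            v = (noisy[i][j] + noisy[j][i]) / 2  # keep the estimate symmetric
            out[i][j] = v if abs(v) >= tau else 0.0
    return out
```

With a large privacy budget the small off-diagonal entries of a diagonal covariance are zeroed out while the diagonal survives, which is the sparsity-recovery behaviour the thresholding step aims at.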
- Published
- 2021
20. Computational complexity of solving polynomial differential equations over unbounded domains.
- Author
- Pouly, Amaury and Graça, Daniel S.
- Subjects
- COMPUTATIONAL complexity, POLYNOMIALS, DIFFERENTIAL equations, MATHEMATICAL domains, ORDINARY differential equations
- Abstract
In this paper we investigate the computational complexity of solving ordinary differential equations (ODEs) y′ = p(y) over unbounded time domains, where p is a vector of polynomials. In contrast to the bounded (compact) time case, this problem has not been well studied, apparently due to the "intuition" that it can always be reduced to the bounded case by rescaling techniques. However, as we show in this paper, rescaling techniques do not seem to provide meaningful insight into the complexity of this problem, since their use introduces a dependence on parameters which are hard to compute. We present algorithms which numerically solve these ODEs over unbounded time domains. These algorithms have guaranteed accuracy, i.e. given some arbitrarily large time t and error bound ε as input, they output a value ỹ which satisfies ‖y(t) − ỹ‖ ≤ ε. We analyze the complexity of these algorithms and show that they compute ỹ in time polynomial in several quantities, including the time t, the accuracy ε of the output, and the length of the curve y from 0 to t, assuming it exists until time t. We consider both algebraic complexity and bit complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2016
21. Algorithms solving the Matching Cut problem.
- Author
- Kratsch, Dieter and Le, Van Bang
- Subjects
- GRAPH theory, PROBLEM solving, BRANCHING processes, EXPONENTIAL functions, POLYNOMIALS
- Abstract
In a graph, a matching cut is an edge cut that is a matching. Matching Cut is the problem of deciding whether or not a given graph has a matching cut, which is known to be NP-complete. This paper provides a first branching algorithm solving Matching Cut in time O*(2^(n/2)) = O*(1.4143^n) for an n-vertex input graph, and shows that Matching Cut parameterized by the vertex cover number τ(G) can be solved by a single-exponential algorithm in time 2^τ(G) · O(n²). Moreover, the paper also gives a polynomially solvable case of Matching Cut which covers previously known results on graphs of maximum degree three, line graphs, and claw-free graphs. [ABSTRACT FROM AUTHOR]
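The decision problem itself is compactly expressed by brute force over all bipartitions: a cut is a matching exactly when every vertex meets at most one crossing edge. The checker below is exponential and purely illustrative, in contrast to the paper's O*(1.4143^n) branching algorithm:

```python
from itertools import combinations

def has_matching_cut(n, edges):
    """Try every bipartition of vertices 0..n-1 into two non-empty sides
    and accept if each vertex is incident to at most one crossing edge."""
    for k in range(1, n):
        for side_a in combinations(range(n), k):
            side = set(side_a)
            deg = [0] * n  # number of crossing edges at each vertex
            ok = True
            for u, v in edges:
                if (u in side) != (v in side):  # edge crosses the cut
                    deg[u] += 1
                    deg[v] += 1
                    if deg[u] > 1 or deg[v] > 1:
                        ok = False
                        break
            if ok:
                return True
    return False
```

For example, the 4-cycle has a matching cut (split it into two opposite edges), while K4 has none: every bipartition leaves some vertex with at least two crossing edges.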
- Published
- 2016
22. Standard Sturmian words and automata minimization algorithms.
- Author
- Castiglione, G. and Sciortino, M.
- Subjects
- MACHINE theory, COMPUTER algorithms, COMBINATORICS, POLYNOMIALS, NUMBER theory
- Abstract
The study of some close connections between the combinatorial properties of words and the performance of the automata minimization process constitutes the main focus of this paper. These relationships have, in fact, been the basis of the study of the tightness and extremal cases of Hopcroft's algorithm, which is, up to now, the most efficient minimization method for deterministic finite state automata. Recently, increasing attention has been paid to another minimization method that, unlike the approach proposed by Hopcroft, is not based on refining the set of states of the automaton but on automata operations such as determinization and reversal, and is also applicable to non-deterministic finite automata. However, even for deterministic automata, it was proved that the method almost surely incurs an explosion in the number of states. Very recently, some polynomial variants of Brzozowski's method have been introduced. In this paper, using some combinatorial properties of words, we analyze the performance of one such algorithm when applied to a particular infinite family of automata, called standard word automata, constructed from standard Sturmian words. In particular, Θ(n log n) is the worst-case time complexity when the algorithm is applied to the infinite family of word automata associated with Fibonacci words, which also represents the worst case of Hopcroft's minimization algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
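The determinize-and-reverse method discussed here is Brzozowski's classical minimization: reverse the automaton, determinize, reverse, determinize again. A compact sketch (hedged: a plain subset construction with no optimizations, and not the polynomial variant the paper analyzes) looks like this:

```python
def brzozowski_minimize(alphabet, delta, inits, finals):
    """Brzozowski's minimization: reverse, determinize, reverse, determinize.
    `delta` maps (state, letter) -> set of successor states (an NFA relation).
    Returns (states, transitions, start, accepting) of the minimal DFA,
    with transitions kept in the same relational format."""
    def rev(delta, inits, finals):
        r = {}
        for (q, a), targets in delta.items():
            for t in targets:
                r.setdefault((t, a), set()).add(q)
        return r, set(finals), set(inits)

    def det(delta, inits, finals):
        start = frozenset(inits)
        trans, seen, queue = {}, {start}, [start]
        while queue:                     # standard subset construction
            S = queue.pop()
            for a in alphabet:
                T = frozenset(t for q in S for t in delta.get((q, a), ()))
                trans[(S, a)] = {T}      # stay relational for the next rev
                if T not in seen:
                    seen.add(T)
                    queue.append(T)
        return trans, {start}, {S for S in seen if S & finals}, seen

    d, i, f = rev(delta, inits, finals)
    d, i, f, _ = det(d, i, f)
    d, i, f = rev(d, i, f)
    d, i, f, states = det(d, i, f)
    return states, d, next(iter(i)), f

# NFA for "strings over {a, b} ending in a": the minimal DFA has 2 states.
nfa = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
states, trans, start, acc = brzozowski_minimize('ab', nfa, {0}, {1})
print(len(states))  # prints 2
```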
23. Algebraic (trapdoor) one-way functions: Constructions and applications.
- Author
-
Catalano, Dario, Fiore, Dario, Gennaro, Rosario, and Vamvourellis, Konstantinos
- Subjects
- *
ALGEBRAIC functions , *HOMOMORPHISMS , *CYCLIC groups , *GROUP theory , *POLYNOMIALS - Abstract
In this paper we introduce the notion of Algebraic (Trapdoor) One Way Functions , which, roughly speaking, captures and formalizes many of the properties of number-theoretic one-way functions. Informally, a (trapdoor) one way function F : X → Y is said to be algebraic if X and Y are (finite) abelian cyclic groups, the function is homomorphic i.e. F ( x ) ⋅ F ( y ) = F ( x ⋅ y ) , and is ring-homomorphic , meaning that it is possible to compute linear operations “in the exponent” over some ring (which may be different from Z p where p is the order of the underlying group X ) without knowing the bases. Moreover, algebraic OWFs must be flexibly one-way in the sense that given y = F ( x ) , it must be infeasible to compute ( x ′ , d ) such that F ( x ′ ) = y d (for d ≠ 0 ). Interestingly, algebraic one way functions can be constructed from a variety of standard number theoretic assumptions, such as RSA, Factoring and CDH over bilinear groups. As a second contribution of this paper, we show several applications where algebraic (trapdoor) OWFs turn out to be useful. In particular: • Publicly Verifiable Secure Outsourcing of Polynomials : We present efficient solutions which work for rings of arbitrary size and characteristic. When instantiating our protocol with the RSA/Factoring based algebraic OWFs we obtain the first solution which supports small field size, is efficient and does not require bilinear maps to obtain public verifiability. • Linearly-Homomorphic Signatures : We give a direct construction of FDH-like linearly homomorphic signatures from algebraic (trapdoor) one way permutations. Our constructions support messages and homomorphic operations over arbitrary rings and in particular even small fields such as F 2 . 
While it was already known how to realize linearly homomorphic signatures over small fields (Boneh–Freeman, Eurocrypt 2011), from lattices in the random oracle model, ours are the first schemes achieving this in a very efficient way from Factoring/RSA. • Batch execution of Sigma protocols : We construct a simple and efficient Sigma protocol for any algebraic OWP and show a “batch” version of it, i.e. a protocol where many statements can be proven at a cost only slightly above that of a single execution of the original protocol. Given our RSA/Factoring instantiations of algebraic OWP, this yields, to the best of our knowledge, the first batch verifiable Sigma protocol for groups of unknown order. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
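The two algebraic properties in the definition are easy to check for textbook RSA, one of the candidate instantiations named in the abstract. A toy sketch (insecure 16-bit parameters, illustration only; the formal definition demands more, e.g. flexible one-wayness):

```python
# Toy RSA instance; the primes are laughably small and for illustration only.
p, q, e = 101, 113, 3
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # trapdoor (modular inverse, Python >= 3.8)

def F(x):
    """The RSA candidate algebraic one-way function x -> x^e mod N."""
    return pow(x, e, N)

x, y = 42, 1000
# Homomorphic: F(x) * F(y) = F(x * y) in (Z/N)^*.
assert F(x) * F(y) % N == F(x * y % N)
# Linear operations "in the exponent" without knowing the bases:
# from F(x) and F(y) alone, anyone can compute F(x^3 * y^5).
assert pow(F(x), 3, N) * pow(F(y), 5, N) % N == F(pow(x, 3, N) * pow(y, 5, N) % N)
# The trapdoor d inverts F.
assert pow(F(x), d, N) == x
print("toy RSA homomorphism checks passed")
```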
24. The directed 2-linkage problem with length constraints.
- Author
-
Bang-Jensen, J., Bellitto, T., Lochet, W., and Yeo, A.
- Subjects
- *
NP-complete problems , *POLYNOMIALS - Abstract
The weak 2-linkage problem for digraphs asks, for a given digraph D and vertices s1, s2, t1, t2, whether D contains a pair of arc-disjoint paths P1, P2 such that Pi is an (si, ti)-path. This problem is NP-complete for general digraphs but polynomially solvable for acyclic digraphs [8]. Recently it was shown [3] that if D is equipped with a weight function w on the arcs in which every arc has positive weight, then there is a polynomial algorithm for the variant of the weak 2-linkage problem in which both paths have to be shortest paths in D. In this paper we consider the unit weight case and prove that for every pair of constants k1, k2, there is a polynomial algorithm which decides whether the input digraph D has a pair of arc-disjoint paths P1, P2 such that Pi is an (si, ti)-path of length no more than d(si, ti) + ki, for i = 1, 2, where d(si, ti) denotes the length of the shortest (si, ti)-path. We prove that, unless the exponential time hypothesis (ETH) fails, there is no polynomial algorithm for deciding the existence of a solution P1, P2 to the weak 2-linkage problem where each path Pi has length at most d(si, ti) + c log^(1+ε) n for some constant c. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
25. Refined analysis to the extended tower number field sieve.
- Author
-
Zhu, Yuqing, Wen, Jiejing, Zhuang, Jincheng, Lv, Chang, and Lin, Dongdai
- Subjects
- *
SIEVES , *FINITE fields , *TOWERS , *CRYPTOSYSTEMS , *POLYNOMIALS , *LOGARITHMS - Abstract
The hardness of the discrete logarithm problem over finite fields is the security foundation of many cryptographic protocols. When the characteristic of the finite field is medium or large, the state-of-the-art algorithms for solving the corresponding problem are the number field sieve and its variants. In 2016, Kim and Barbulescu presented the extended tower number field sieve, which achieves a new complexity in the medium prime case and forces a new estimation of the security of concrete parameters in certain cryptosystems such as pairing-based cryptosystems. In this paper, a refined analysis of this algorithm is given as follows. – First, a uniform formula is given for the total complexity of the extended tower number field sieve. For a given polynomial selection method, this formula directly yields the complexity in that case. – Then, a method is proposed to improve the computation in the smoothing phase by exploiting subfield structures when the extension degree is composite. – Finally, the complexity of the descent phase is analyzed when sieving over degree-one polynomials and high-degree polynomials respectively, and it is shown to remain negligible compared to the improved smoothing phase. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
26. Type-two polynomial-time and restricted lookahead.
- Author
-
Kapron, Bruce M. and Steinberg, Florian
- Subjects
- *
TARDINESS , *TURING machines , *COMPUTATIONAL complexity , *COMPLEXITY (Philosophy) , *FUNCTIONALS , *POLYNOMIAL time algorithms , *POLYNOMIALS - Abstract
This paper provides an alternate characterization of type-two polynomial-time computability, with the goal of making second-order complexity theory more approachable. We rely on the usual oracle machines to model programs with subroutine calls. In contrast to previous results, the use of higher-order objects as running times is avoided, either explicitly or implicitly. Instead, regular polynomials are used. This is achieved by refining the notion of oracle-polynomial-time introduced by Cook. We impose a further restriction on the oracle interactions to force feasibility. Both the restriction as well as its purpose are very simple: it is well-known that Cook's model allows polynomial depth iteration of functional inputs with no restrictions on size, and thus does not guarantee that polynomial-time computability is preserved. To mend this we restrict the number of lookahead revisions, that is the number of times a query can be asked that is bigger than any of the previous queries. We prove that this leads to a class of feasible functionals and that all feasible problems can be solved within this class if one is allowed to separate a task into efficiently solvable subtasks. Formally put: the closure of our class under lambda-abstraction and application includes all feasible operations. We also revisit the very similar class of strongly polynomial-time computable operators previously introduced by Kawamura and Steinberg. We prove it to be strictly included in our class and, somewhat surprisingly, to have the same closure property. This can be attributed to properties of the limited recursion operator: It is not strongly polynomial-time computable but decomposes into two such operations and lies in our class. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
27. Computing the nearest polynomial to multiple given polynomials with a given zero via l2,q-norm minimization.
- Author
-
Hu, Wenyu, Li, Shenghao, Huang, Jinhong, Wang, Tinghua, and Yu, Gaohang
- Subjects
- *
POLYNOMIALS , *CONSTRAINED optimization , *ZERO (The number) , *MATRIX norms , *PROBLEM solving - Abstract
Given multiple univariate or multivariate real polynomials f1, f2, ..., fm and a desired real zero z, we study the problem of computing a real polynomial f~ such that f~(z) = 0 and the distance between f~ and f1, f2, ..., fm is minimal, where the distance is measured by a pair of norms (‖·‖p, ‖·‖q). As the difficulty of finding solutions to this problem depends on the choice of the norm pair (p, q), previous works discussed only some special cases, e.g., (p, q) = (2, 2), (2, ∞) and (∞, ∞), where the corresponding problems were treated from a geometric viewpoint. In this paper, we revisit this problem from an optimization view for all the cases (2, q) with q ∈ (0, 2]. That is, instead of directly finding the nearest polynomial, we first transform the problem into a constrained minimization problem based on l2,q-norm minimization. A unified iterative algorithm is then proposed to solve the constrained optimization problem for all q in (0, 2], and its convergence is demonstrated uniformly. Finally, a simple example is presented to validate the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
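In the classical single-polynomial, (p, q) = (2, 2) case mentioned above, the answer has a closed form: the nearest polynomial vanishing at z is the Euclidean projection of the coefficient vector onto the hyperplane of polynomials with f(z) = 0. A hedged sketch of that special case (the paper's l2,q algorithm handles several polynomials jointly and all q in (0, 2], which this does not):

```python
def nearest_poly_with_zero(coeffs, z):
    """Single univariate polynomial, (p, q) = (2, 2): with m = (1, z, z^2, ...)
    the nearest coefficient vector satisfying <c~, m> = 0 is the projection
        c~ = c - f(z) * m / <m, m>.
    coeffs[k] is the coefficient of z**k."""
    m = [z ** k for k in range(len(coeffs))]
    fz = sum(c * mk for c, mk in zip(coeffs, m))   # f(z)
    mm = sum(mk * mk for mk in m)                  # <m, m>
    return [c - fz * mk / mm for c, mk in zip(coeffs, m)]

# x^2 + 1 has no real zero; the nearest polynomial vanishing at z = 1:
c_t = nearest_poly_with_zero([1.0, 0.0, 1.0], 1.0)
print(c_t, sum(ck * 1.0 ** k for k, ck in enumerate(c_t)))  # new f(1) is 0
```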
28. Safe recursion revisited I: Categorical semantics for lower complexity.
- Author
-
Burrell, Mike, Cockett, Robin, and Redmond, Brian
- Subjects
- *
RECURSION theory , *CATEGORIES (Mathematics) , *SEMANTICS , *COMPUTATIONAL complexity , *MATHEMATICAL proofs , *POLYNOMIALS - Abstract
Abstract: The objective of this paper is to prove that the initial Pola setting, with both inductive and coinductive data, is sound for polynomial size (PSIZE). Explicitly this means all programs written in Pola have their output size bounded by a polynomial in their input size. The paper describes the polarized categorical semantics for Pola and establishes the result by providing categorical models for various fragments of Pola which have explicit size bound information built into the maps. To obtain PSIZE soundness for Pola with just inductive data, the semantics in sized sets suffices. Sized sets are sets equipped with a “size map” which associates to each element a size. Size is usually just a natural number but, more generally, the size could be an element of a size rig. A polarized category consists of an opponent and a player category joined by a module. Sized sets can be used to create a polarized category by letting the opponent and module maps be bounded by polynomials while the player category consists of maps bounded by a constant. This gives a Pola setting with inductive data and immediately establishes the PSIZE soundness of the initial Pola setting. The main technical difficulty of the paper is to provide a semantics which correctly models coinductive data as well. For this, “amortized” sets are introduced: these are sets in which a higher-order size function is associated to each element, which, given a size, returns a size. The name reflects that one is concerned with the asymptotic behavior of these functions. While amortized sets have coinductive data they are not affine closed: a final step, using equivalence relations, is required to obtain a model which includes this aspect of Pola structure. While PSIZE by itself is a very weak bound, it is a crucial step in establishing polynomial space (PSPACE) and polynomial time (PTIME) bounds for these settings. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
29. On testing monomials in multivariate polynomials.
- Author
-
Chen, Zhixiang, Fu, Bin, Liu, Yang, and Schweller, Robert
- Subjects
- *
MULTIVARIATE analysis , *POLYNOMIALS , *LINEAR systems , *COMPUTATIONAL complexity , *ALGEBRAIC geometry , *COMPUTER algorithms - Abstract
Abstract: This paper presents a summary of our initial work on developing a theory of testing monomials in multivariate polynomials. The central question is to ask whether a polynomial represented by a certain economically compact structure has a multilinear monomial in its sum-product expansion. The complexity aspects of this problem and its variants are investigated with two objectives. One is to understand how this problem relates to critical problems in complexity, and if so to what extent. The other is to exploit possibilities of applying algebraic properties of polynomials to the study of those problems. A series of results about ΠΣΠ and ΠΣ polynomials is obtained in this paper, laying a basis for further study along this line. Several randomized and deterministic algorithms are devised for testing multilinear monomials or q-monomials in certain respective types of polynomials, where q is prime. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
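The central question above can be stated very concretely for a product of sums of monomials. This brute-force checker (a sketch of the problem, not of the paper's algorithms: it expands the product, which is exactly the exponential blow-up the paper's randomized and deterministic algorithms avoid) tests for a multilinear monomial:

```python
from itertools import product

def has_multilinear_monomial(factors):
    """Each factor is a list of monomials; each monomial is a tuple of
    variable indices (repeats allowed). Picks one monomial per factor
    (one term of the sum-product expansion) and checks whether some
    choice uses no variable twice, i.e. is multilinear."""
    for choice in product(*factors):
        vars_used = [v for mono in choice for v in mono]
        if len(vars_used) == len(set(vars_used)):
            return True
    return False

# (x1 + x2*x3) * (x1*x2 + x3): the term x1*x3 is multilinear.
f = [[(1,), (2, 3)], [(1, 2), (3,)]]
print(has_multilinear_monomial(f))  # prints True
```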
30. Boolean nested canalizing functions: A comprehensive analysis.
- Author
-
Li, Yuan, Adeyeye, John O., Murrugarra, David, Aguilar, Boris, and Laubenbacher, Reinhard
- Subjects
- *
MATHEMATICAL functions , *SYSTEMS biology , *BOOLEAN functions , *CRYPTOGRAPHY , *POLYNOMIALS , *STABILITY theory - Abstract
Abstract: Boolean network models of molecular regulatory networks have been used successfully in computational systems biology. The Boolean functions that appear in published models tend to have special properties, in particular the property of being nested canalizing, a concept inspired by the concept of canalization in evolutionary biology. It has been shown that networks comprised of nested canalizing functions have dynamic properties that make them suitable for modeling molecular regulatory networks, namely a small number of (large) attractors, as well as relatively short limit cycles. This paper contains a detailed analysis of this class of functions, based on a novel normal form as polynomial functions over the Boolean field. The concept of layer is introduced, which stratifies the variables into different classes depending on their level of dominance. Using this layer concept, a closed-form formula is derived for the number of nested canalizing functions with a given number of variables. Additional metrics considered include Hamming weight, the activity number of any variable, and the average sensitivity of the function. It is also shown that the average sensitivity of any nested canalizing function is between 0 and 2. This provides a rationale for why nested canalizing functions are stable, since a random Boolean function in n variables has average sensitivity n/2. The paper also contains experimental evidence that the layer number is an important factor in network stability. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
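The average sensitivity used above is simple to compute by enumeration for small n: average, over all 2^n inputs, the number of single-bit flips that change the function's value. A sketch illustrating the stability contrast the abstract draws (not the paper's closed-form analysis):

```python
from itertools import product

def average_sensitivity(f, n):
    """Average over all 2^n inputs of the number of bit flips changing f."""
    total = 0
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            if f(x) != f(tuple(y)):
                total += 1
    return total / 2 ** n

# AND is nested canalizing: its average sensitivity is far below n/2.
print(average_sensitivity(lambda x: x[0] & x[1] & x[2], 3))  # prints 0.75
# XOR is not nested canalizing: every flip changes the value.
print(average_sensitivity(lambda x: x[0] ^ x[1] ^ x[2], 3))  # prints 3.0
```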
31. Pseudospectra of exponential matrix polynomials.
- Author
-
Corless, Robert M.
- Subjects
- *
EXPONENTIAL functions , *POLYNOMIALS , *MATRIX functions , *COEFFICIENTS (Statistics) , *MATHEMATICAL statistics , *ALGORITHMS - Abstract
Abstract: This paper extends a result of Graillat, which was based on Rump’s Theorem, to show that pseudospectra of matrix polynomials expressed in other bases, and also some nonlinear matrix functions, are unaffected by drawing the matrix coefficients from certain structured families. We show by example that this behaviour is not universal. This result is of interest because computation of structured pseudospectra is much more expensive in general than computation of unstructured pseudospectra, and in the cases covered by Rump’s Theorem there is no point going to the extra effort. The paper also contains some examples that may be of interest to the community, including an analytical solution of a nested exponential equation that shows why exponential polynomial decomposition algorithms may be of interest. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
32. Arithmetic circuits: The chasm at depth four gets wider
- Author
-
Koiran, Pascal
- Subjects
- *
ARITHMETIC , *POLYNOMIALS , *MATHEMATICAL bounds , *PROOF theory , *ELECTRIC circuits , *MATHEMATICAL analysis - Abstract
Abstract: In their paper on the “chasm at depth four”, Agrawal and Vinay have shown that polynomials in n variables of degree O(n) which admit arithmetic circuits of size 2^o(n) also admit arithmetic circuits of depth four and size 2^o(n). This theorem shows that for problems such as arithmetic circuit lower bounds or black-box derandomization of identity testing, the case of depth four circuits is in a certain sense the general case. In this paper we show that smaller depth four circuits can be obtained if we start from polynomial size arithmetic circuits. For instance, we show that if the permanent of n × n matrices has circuits of size polynomial in n, then it also has depth 4 circuits of size n^O(√n log n). If the original circuit uses only integer constants of polynomial size, then the same is true for the resulting depth four circuit. These results have potential applications to lower bounds and deterministic identity testing, in particular for sums of products of sparse univariate polynomials. We also use our techniques to reprove two results on: [–] the existence of nontrivial boolean circuits of constant depth for certain languages; [–] reduction to polylogarithmic depth for arithmetic circuits of polynomial size and polynomially bounded degree. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
33. Almost optimal distributed M2M multicasting in wireless mesh networks
- Author
-
Xin, Qin, Manne, Fredrik, Zhang, Yan, and Wang, Xin
- Subjects
- *
WIRELESS communications , *MESH networks , *MULTICASTING (Computer networks) , *COST effectiveness , *DETERMINISTIC algorithms , *COMPUTATIONAL complexity , *POLYNOMIALS - Abstract
Abstract: Wireless Mesh Networking (WMN) is an emerging communication paradigm to enable resilient, cost-efficient and reliable services for future-generation wireless networks. In this paper, we study the problem of multipoint-to-multipoint (M2M) multicasting in a WMN, which aims to use the minimum number of time slots to exchange messages among a group of k participants in a multi-hop WMN with n mesh nodes. We study the M2M multicasting problem in a distributed environment where each participant only knows that there are k participants and does not know who the other participants are among the n mesh nodes. It is known that the computation of an optimal M2M multicasting schedule is NP-hard. We present a fully distributed deterministic algorithm for this M2M multicasting problem and analyze its time complexity. We show that, in terms of the maximum hop distance d between any two of the k participants, the studied M2M multicasting problem can be solved with a polynomial-time computation in a number of time slots that is almost optimal with respect to the lower bound given by Chlebus et al. (2009). Our algorithm also improves on the running time of the previously best known result by Gąsieniec et al. (2006). In this paper, we also propose a distributed deterministic algorithm which accomplishes the M2M multicasting with a polynomial-time computation in unit disk graphs. This is an asymptotically optimal algorithm in the sense that there exists a WMN topology, e.g., a line, a ring, a star or a complete graph, in which the M2M multicasting cannot be completed in fewer time slots. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
34. Convergence and approximation in potential games
- Author
-
Christodoulou, George, Mirrokni, Vahab S., and Sidiropoulos, Anastasios
- Subjects
- *
APPROXIMATION theory , *GAME theory , *GRAPH theory , *STOCHASTIC convergence , *POLYNOMIALS , *NASH equilibrium , *SEARCH algorithms - Abstract
Abstract: We study the speed of convergence to approximately optimal states in two classes of potential games. We provide bounds in terms of the number of rounds, where a round consists of a sequence of movements, with each player appearing at least once in each round. We model the sequential interaction between players by a best-response walk in the state graph, where every transition in the walk corresponds to a best response of a player. Our goal is to bound the social value of the states at the end of such walks. In this paper, we focus on two classes of potential games: selfish routing games, and cut games (or party affiliation games (Fabrikant et al. 2004)). Other than bounding the price of anarchy of selfish routing games (Roughgarden and Tardos, 2002, Awerbuch et al. 2005, Christodoulou and Koutsoupias, 2005), there are many interesting problems about game dynamics in these games. It is known that exponentially long best-response walks to pure Nash equilibria may exist (Fabrikant et al. 2004), and that random best-response walks converge to solutions with good approximation guarantees after polynomially many best responses (Goemans et al. 2005). In this paper, we study the speed of convergence of deterministic best-response walks in these games and prove that starting from an arbitrary configuration, after one round of best responses of players, the resulting configuration is an approximate solution. Furthermore, we show that starting from an empty configuration, the solution after any round of best responses is a constant-factor approximation. We also provide a lower bound for the multi-round case, where we show that for any constant number of rounds, the approximation guarantee cannot be better than a certain constant. We also study cut games, which provide an illustrative example of potential games. The convergence of potential games to locally optimal solutions has been studied in the context of local search algorithms (Johnson et al. 1988, Schaffer and Yannakakis, 1991). In these games, we consider two social functions: the cut (defined as the weight of the edges in the cut), and the total happiness (defined as the weight of the edges in the cut, minus the weight of the remaining edges). For the cut social function, we prove that the expected social value after one round of a random best-response walk is at least a constant-factor approximation to the optimal social value. We also exhibit exponentially long best-response walks with poor social value. For the unweighted version of this cut game, we prove lower and upper bounds on the number of rounds of best responses needed to converge to a constant-factor cut. In addition, we suggest a way to modify the game to enforce fast convergence in any fair best-response walk. For the total happiness social function, we show that for unweighted graphs of sufficiently large girth, starting from a random configuration, greedy behavior of players in a random order converges to an approximate solution after one round. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
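The best-response dynamics studied for cut games is easy to simulate. The following sketch (an assumed unweighted variant where each vertex is a player whose payoff is the number of its incident cut edges; it illustrates the dynamics only, not the paper's bounds) runs one round of best responses in a random player order:

```python
import random

def best_response_round(n, edges, seed=0):
    """One round of best responses in an unweighted cut game: each vertex,
    once, in random order, picks the side that puts more of its incident
    edges into the cut. Returns the final side assignment and cut value."""
    rng = random.Random(seed)
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    side = {v: rng.randint(0, 1) for v in range(n)}
    order = list(range(n))
    rng.shuffle(order)
    for v in order:
        if_0 = sum(side[u] == 1 for u in adj[v])  # cut edges at v if it picks 0
        if_1 = sum(side[u] == 0 for u in adj[v])  # cut edges at v if it picks 1
        side[v] = 0 if if_0 >= if_1 else 1
    cut = sum(side[u] != side[v] for u, v in edges)
    return side, cut

# On a triangle the last mover always guarantees the maximum cut of 2.
print(best_response_round(3, [(0, 1), (1, 2), (2, 0)])[1])  # prints 2
```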
35. A large population size can be unhelpful in evolutionary algorithms
- Author
-
Chen, Tianshi, Tang, Ke, Chen, Guoliang, and Yao, Xin
- Subjects
- *
ALGORITHMS , *POLYNOMIALS , *EXPONENTIAL functions , *COMPUTER science , *POPULATION , *PERFORMANCE evaluation - Abstract
Abstract: The utilization of populations is one of the most important features of evolutionary algorithms (EAs). There have been many studies analyzing the impact of different population sizes on the performance of EAs. However, most such studies are based on computational experiments, except for a few cases. The common wisdom so far appears to be that a large population would increase the population diversity and thus help an EA. Indeed, increasing the population size has been a commonly used strategy in tuning an EA when it did not perform as well as expected for a given problem. He and Yao (2002) showed theoretically that for some problem instance classes, a population can help to reduce the runtime of an EA from exponential to polynomial time. This paper analyzes the role of population further in EAs and shows rigorously that large populations may not always be useful. Conditions under which large populations can be harmful are discussed in this paper. Although the theoretical analysis was carried out on one multimodal problem using a specific type of EAs, it has much wider implications. The analysis has revealed certain problem characteristics, present in the problem considered here but also in other problems, that lead to the disadvantages of large population sizes. The analytical approach developed in this paper can also be applied to analyzing EAs on other problems. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
36. The first polynomial self-stabilizing 1-maximal matching algorithm for general graphs.
- Author
-
Cohen, Johanne, Lefèvre, Jonas, Maâmra, Khaled, Manoussakis, George, and Pilard, Laurence
- Subjects
- *
GRAPH algorithms , *POLYNOMIALS , *MATCHING theory - Abstract
We present the first polynomial self-stabilizing algorithm for finding a 1-maximal matching in a general graph. The previous best known algorithm was presented by Manne et al. [20], and we show in this paper that it has a sub-exponential time complexity under the distributed adversarial daemon. Our new algorithm is an adaptation of the Manne et al. algorithm and works under the same daemon, but with a complexity of O(m × n^2) moves, where n is the number of nodes and m is the number of edges. This is the first self-stabilizing algorithm that solves this problem with a polynomial complexity. Moreover, our algorithm only needs one more boolean variable than the previous one. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
37. New amortized cell-probe lower bounds for dynamic problems.
- Author
-
Bhattacharya, Sayan, Henzinger, Monika, and Neumann, Stefan
- Subjects
- *
FINITE fields , *MATRIX multiplications , *DATA structures , *POLYNOMIALS - Abstract
We build upon the recent papers by Weinstein and Yu [11], Larsen [7], and Clifford et al. [3] to present a general framework that gives amortized lower bounds on the update and query times of dynamic data structures. Using our framework, we present two concrete results. 1. For the dynamic polynomial evaluation problem, where the polynomial is defined over a finite field of size n^(1 + Ω(1)) and has degree n, any dynamic data structure must either have an amortized update time of Ω((lg n / lg lg n)^2) or an amortized query time of Ω((lg n / lg lg n)^2). 2. For the dynamic online matrix vector multiplication problem, where we get an n × n matrix whose entries are drawn from a finite field of size n^Θ(1), any dynamic data structure must either have an amortized update time of Ω((lg n / lg lg n)^2) or an amortized query time of Ω(n · (lg n / lg lg n)^2). For these two problems, the previous works by Larsen [7] and Clifford et al. [3] gave the same lower bounds, but only for worst-case update and query times. Our bounds match the highest unconditional lower bounds known to date for any dynamic problem in the cell-probe model. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
38. A tighter relation between sensitivity complexity and certificate complexity.
- Author
-
He, Kun, Li, Qian, and Sun, Xiaoming
- Subjects
- *
SENSITIVITY analysis , *ALGORITHMS , *POLYNOMIALS , *BOOLEAN algebra , *MATHEMATICS - Abstract
Abstract The sensitivity conjecture proposed by Nisan and Szegedy in 1994, which asserts that for any Boolean function its sensitivity complexity is polynomially related to the block sensitivity complexity, is one of the most important and challenging problems in the study of decision tree complexity. Despite a lot of effort, the best known upper bound on block sensitivity, as well as on certificate complexity, is still exponential in terms of sensitivity. In this paper, we give a better upper bound for certificate complexity and block sensitivity: bs(f) ≤ C(f) ≤ (8/9 + o(1)) s(f) 2^(s(f)−1), where bs(f), C(f) and s(f) are the block sensitivity, certificate complexity and sensitivity, respectively. The proof is based on a deep investigation of the structure of the sensitivity graph. We also provide a tighter relationship between the 0-certificate complexity C0(f) and the 0-sensitivity s0(f) for functions with small 1-sensitivity s1(f). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
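The two complexity measures related above can be computed by brute force for tiny functions, which makes the inequality s(f) ≤ bs(f) concrete. A sketch of the definitions only (nothing here resembles the paper's combinatorial proof, and it is exponential in n):

```python
from itertools import combinations, product

def sensitivity(f, n):
    """s(f): max over inputs x of the number of single-bit flips changing f."""
    return max(
        sum(f(x) != f(tuple(b ^ (j == i) for j, b in enumerate(x)))
            for i in range(n))
        for x in product([0, 1], repeat=n))

def block_sensitivity(f, n):
    """bs(f): max over x of the number of pairwise disjoint blocks of bits
    whose joint flip changes f(x). Brute force, usable only for tiny n."""
    blocks = [frozenset(S) for r in range(1, n + 1)
              for S in combinations(range(n), r)]

    def flip(x, B):
        return tuple(b ^ (j in B) for j, b in enumerate(x))

    def most_disjoint(cands, used):
        # largest set of pairwise disjoint blocks chosen from cands
        return max((1 + most_disjoint(cands[k + 1:], used | B)
                    for k, B in enumerate(cands) if not (B & used)),
                   default=0)

    best = 0
    for x in product([0, 1], repeat=n):
        sens = [B for B in blocks if f(x) != f(flip(x, B))]
        best = max(best, most_disjoint(sens, frozenset()))
    return best

maj = lambda x: int(x[0] + x[1] + x[2] >= 2)  # majority of three bits
print(sensitivity(maj, 3), block_sensitivity(maj, 3))  # prints: 2 2
```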
39. Complex-demand scheduling problem with application in smart grid.
- Author
-
Khonji, Majid, Karapetyan, Areg, Elbassioni, Khaled, and Chau, Sid Chi-Kin
- Subjects
- *
SMART power grids , *PRODUCTION scheduling , *APPLICATION software , *DISCRETIZATION methods , *POLYNOMIALS , *ALTERNATING currents - Abstract
Abstract We consider the problem of scheduling complex-valued demands over a discretized time horizon. Given a set of users, each user is associated with a set of demands representing different power consumption preferences. A demand is represented by a complex number, a time interval, and a utility value obtained if it is satisfied. At each time slot, the magnitude of the total selected demands should not exceed a given generation capacity. This naturally captures the supply constraints in alternating current (AC) electric systems. In this paper, we consider maximizing the aggregate user utility subject to power supply limits over a time horizon. We present approximation algorithms characterized by the maximum angle ϕ between any two complex-valued demands. More precisely, a PTAS is presented for the case ϕ ∈ [0, π/2], and a bi-criteria FPTAS for ϕ ∈ [0, π − ε] for any polynomially small ε, assuming the number of time slots in the discretized time horizon is a constant. Furthermore, if the number of time slots is part of the input, we present a reduction to the real-valued unsplittable flow problem on a path with only a constant approximation ratio. Finally, we present a practical greedy algorithm for the single time slot case with an approximation ratio of (1/2) cos(ϕ/2) and a running time complexity of only O(N log N), N standing for the aggregate number of user demands, which can be implemented efficiently in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
40. Partial key exposure attacks on RSA: Achieving the Boneh–Durfee bound.
- Author
-
Takayasu, Atsushi and Kunihiro, Noboru
- Subjects
- *
RSA algorithm , *CYBERTERRORISM , *LATTICE theory , *POLYNOMIALS , *INFORMATION technology security - Abstract
Abstract Thus far, several lattice-based algorithms for partial key exposure attacks on RSA, i.e., given the most/least significant bits (MSBs/LSBs) of a secret exponent d, factoring an RSA modulus N, have been proposed, such as Blömer and May (Crypto'03), Ernst et al. (Eurocrypt'05), and Aono (PKC'09). Due to Boneh and Durfee's small secret exponent attack, partial key exposure attacks should always work for d < N^0.292 even without any partial information. However, it was a difficult task to make use of the given partial information without losing the quality of Boneh–Durfee's attack. In particular, known partial key exposure attacks fail to work for d < N^0.292 with only little partial information. This unnatural situation stems from the fact that the additional information makes the underlying modular equations more involved. In this paper, we propose improved attacks for the case when the secret exponent d is small. Our attacks are better than all known previous attacks in the sense that our attacks require less partial information. Specifically, our attack is better than all known ones for d < N^0.5625 and d < N^0.368 with the MSBs and the LSBs, respectively. Furthermore, our attacks fully cover the Boneh–Durfee bound, i.e., they always work for d < N^0.292. At a high level, we obtain the improved attacks by fully utilizing the unraveled linearization technique proposed by Herrmann and May (Asiacrypt'09). Although Herrmann and May (PKC'10) already applied the technique to Boneh–Durfee's attack, we show elegant and impressive extensions to capture partial key exposure attacks. More concretely, we construct structured triangular matrices that enable us to recover more useful algebraic structures of the underlying modular polynomials. We embed the given MSBs/LSBs into the recovered algebraic structures and construct our partial key exposure attacks. In this full version, we provide overviews and explicit proofs of the triangular matrix constructions.
We believe that the additional explanations help readers to understand our techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
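To fix notation for the abstract above: an RSA private exponent d satisfies e·d ≡ 1 (mod φ(N)), and a partial key exposure attack assumes some of d's bits (MSBs or LSBs) are known to the attacker. A minimal Python sketch with a toy, insecurely small modulus, purely to name the quantities involved (the concrete primes are hypothetical; the lattice attacks themselves are far beyond this sketch):

```python
# Toy RSA instance, far too small to be secure, just to name the quantities.
p, q, e = 1009, 1013, 17
N = p * q
phi = (p - 1) * (q - 1)          # Euler's totient of N
d = pow(e, -1, phi)              # private exponent: e*d ≡ 1 (mod phi)

# "Partial key exposure": the attacker learns, say, the top half of d's
# bits (the MSBs); the attacks in the abstract recover the rest.
half = d.bit_length() // 2
known_msbs = d >> half           # what the attacker sees
print(N, d, known_msbs)
```

The three-argument `pow` with exponent −1 (modular inverse) needs Python 3.8 or later.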
41. Connected searching of weighted trees
- Author
-
Dereniowski, Dariusz
- Subjects
- *
DATABASE searching , *GRAPH theory , *PARALLEL algorithms , *TREE graphs , *POLYNOMIALS , *ALGORITHMS - Abstract
Abstract: In this paper we consider the problem of connected edge searching of weighted trees. Barrière et al. claim in [L. Barrière, P. Flocchini, P. Fraigniaud, N. Santoro, Capture of an intruder by mobile agents, in: SPAA'02: Proceedings of the Fourteenth Annual ACM Symposium on Parallel Algorithms and Architectures, ACM, New York, NY, USA, 2002, pp. 200–209] that there exists a polynomial-time algorithm for finding an optimal search strategy, that is, a strategy that minimizes the number of searchers used. However, due to some flaws in their algorithm, the problem turns out to be open. It is proven in this paper that the considered problem is strongly NP-complete even for node-weighted trees (the weight of each edge is 1) with one vertex of degree greater than 2. It is also shown that there exists a polynomial-time algorithm for finding an optimal connected search strategy for a given bounded-degree tree with arbitrary weights on the edges and on the vertices. This is an FPT algorithm with respect to the maximum degree of the tree. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
42. Tight complexity analysis of the relocation problem with arbitrary release dates
- Author
-
Sevastyanov, Sergey V., Lin, Bertrand M.T., and Huang, Hsiao-Lan
- Subjects
- *
DYNAMIC programming , *COMPUTER algorithms , *POLYNOMIALS , *MATHEMATICAL optimization , *COMPUTER programming - Abstract
Abstract: The paper considers makespan minimization on a single machine subject to release dates in the relocation problem, which originated from a resource-constrained redevelopment project in Boston. Each job consumes a certain amount of resource from a common pool at the start of its processing and returns another amount of resource to the pool at its completion. In this sense, our resource constraints extend the well-known constraints on resumable resources, where the two amounts are equal for each job. In this paper, we undertake the first complexity analysis of this problem in the case of arbitrary release dates. We develop an algorithm based on a multi-parametric dynamic programming technique (in which the number of parameters whose values are enumerated in the DP procedure can be arbitrarily large). The algorithm runs in pseudo-polynomial time when the number of distinct release dates is bounded by a constant. This result is shown to be tight in two senses: (1) it cannot be extended to the case when the number of distinct release dates is part of the input, since the problem then becomes strongly NP-hard, and (2) it cannot be strengthened to a polynomial-time algorithm for every constant, since the problem remains NP-hard for a fixed number of distinct release dates. A polynomial-time algorithm is designed for the special case where the overall contribution of each job to the resource pool is nonnegative. As a counterpart of this result, the case where the contributions of all jobs are negative is shown to be strongly NP-hard. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
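The resource model in the abstract above is easy to state operationally: a job may start only if the pool currently holds at least its consumption amount, and the pool changes by (return − consumption) when the job completes. A minimal Python sketch of that feasibility condition, with hypothetical job data:

```python
def feasible_order(order, jobs, initial_pool):
    """Single-machine relocation feasibility: job j takes jobs[j][0]
    units from the common pool when it starts and returns jobs[j][1]
    units when it completes; jobs run one at a time in 'order'."""
    pool = initial_pool
    for j in order:
        need, give_back = jobs[j]
        if pool < need:              # not enough resource to start job j
            return False
        pool = pool - need + give_back
    return True

# Hypothetical instance: job 0 enlarges the pool overall (takes 2,
# returns 5), job 1 shrinks it (takes 4, returns 2).  Starting pool: 3.
jobs = [(2, 5), (4, 2)]
print(feasible_order([0, 1], jobs, 3))   # True:  pool 3 -> 6 -> 4
print(feasible_order([1, 0], jobs, 3))   # False: job 1 needs 4 > 3
```

The two calls show why ordering matters: the same job set is feasible in one order and infeasible in the other.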
43. The algorithmic complexity of mixed domination in graphs
- Author
-
Zhao, Yancai, Kang, Liying, and Sohn, Moo Young
- Subjects
- *
COMPUTER algorithms , *POLYNOMIALS , *SET theory , *DOMINATING set , *GRAPH theory , *COMPUTER science - Abstract
Abstract: Let G = (V, E) be a simple graph with vertex set V and edge set E. A subset D ⊆ V ∪ E is a mixed dominating set if every element of V ∪ E is either in D or adjacent or incident to an element of D. The mixed domination problem is to find a minimum mixed dominating set of G. In this paper we first prove that a connected graph is a tree if and only if its total graph is strongly chordal, and thus we obtain a polynomial-time algorithm for this problem on trees. We further design a linear-time labeling algorithm for this problem on trees. At the end of the paper, we show that the mixed domination problem is NP-complete even when restricted to split graphs, a subclass of chordal graphs. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
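The definition in the abstract above (every element of V ∪ E dominated by some element of D, which may contain both vertices and edges) can be checked directly. A small Python sketch, with vertices as integers and edges as frozensets; the path graph and the candidate sets are hypothetical examples:

```python
def is_mixed_dominating(vertices, edge_list, dom):
    """Check whether dom ⊆ V ∪ E is a mixed dominating set: every
    element of V ∪ E must be in dom, or adjacent/incident to an
    element of dom.  Vertices are ints, edges are frozensets."""
    edges = {frozenset(e) for e in edge_list}
    dom_v = {x for x in dom if not isinstance(x, frozenset)}
    dom_e = {x for x in dom if isinstance(x, frozenset)}

    def covered(x):
        if isinstance(x, frozenset):                     # x is an edge
            return (x in dom_e
                    or any(x & e for e in dom_e - {x})   # shares an endpoint
                    or any(v in x for v in dom_v))       # endpoint in dom
        return (x in dom_v                               # x is a vertex
                or any(frozenset({x, u}) in edges and u in dom_v
                       for u in vertices)                # dominated neighbour
                or any(x in e for e in dom_e))           # incident dom edge

    return (all(covered(v) for v in vertices)
            and all(covered(e) for e in edges))

# Path on three vertices: the middle vertex alone dominates everything,
# while an end vertex alone leaves the far end uncovered.
P3_V, P3_E = [1, 2, 3], [(1, 2), (2, 3)]
print(is_mixed_dominating(P3_V, P3_E, {2}))   # True
print(is_mixed_dominating(P3_V, P3_E, {1}))   # False
```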
44. On the approximability of robust spanning tree problems
- Author
-
Kasperski, Adam and Zieliński, Paweł
- Subjects
- *
SPANNING trees , *COMBINATORIAL optimization , *COMPUTATIONAL complexity , *ROBUST optimization , *UNCERTAINTY , *POLYNOMIALS , *ALGORITHMS - Abstract
Abstract: In this paper the minimum spanning tree problem with uncertain edge costs is discussed. To model the uncertainty, a discrete scenario set is specified, and a robust framework is adopted to choose a solution. The min–max, min–max regret and 2-stage min–max versions of the problem are discussed, and their complexity and approximability are explored. It is proved that the min–max and min–max regret versions with nonnegative edge costs are hard to approximate within a certain factor unless the problems in NP have quasi-polynomial time algorithms; the same holds for the 2-stage min–max problem. Randomized LP-based approximation algorithms with guaranteed performance bounds for the min–max and 2-stage min–max problems are also proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
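In the min–max version discussed above, each scenario assigns a cost to every edge and one seeks the spanning tree whose worst-case (maximum over scenarios) cost is smallest. A brute-force Python sketch on a hypothetical toy instance, only to make the objective concrete (the abstract's point is precisely that no polynomial algorithm approximates this well):

```python
from itertools import combinations

def min_max_spanning_tree(n, edges, scenarios):
    """Brute-force min-max robust spanning tree: try every (n-1)-edge
    subset, keep the ones forming a spanning tree (union-find test),
    and return the smallest worst-case cost over all scenarios.
    Exponential -- usable only on toy instances."""
    best = None
    for tree in combinations(range(len(edges)), n - 1):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        acyclic = True
        for i in tree:
            ru, rv = find(edges[i][0]), find(edges[i][1])
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if not acyclic:
            continue                       # subset has a cycle
        worst = max(sum(costs[i] for i in tree) for costs in scenarios)
        best = worst if best is None else min(best, worst)
    return best

# Triangle with two cost scenarios pulling in opposite directions: the
# robust optimum is the "balanced" tree {(0,1), (0,2)} with worst cost 4.
edges = [(0, 1), (1, 2), (0, 2)]
print(min_max_spanning_tree(3, edges, [[1, 2, 3], [3, 2, 1]]))   # 4
```

Note that neither scenario's own minimum spanning tree is robust-optimal here, which is what makes the problem hard.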
45. Exact algorithms for computing the tree edit distance between unordered trees
- Author
-
Akutsu, Tatsuya, Fukagawa, Daiji, Takasu, Atsuhiro, and Tamura, Takeyuki
- Subjects
- *
ALGORITHMS , *COMPUTER systems , *DYNAMIC programming , *SPACETIME , *DISTANCES , *POLYNOMIALS , *MATHEMATICAL constants - Abstract
Abstract: This paper presents a fixed-parameter algorithm for the tree edit distance problem for unordered trees under the unit cost model, where the parameter is an upper bound on the edit distance; the running time and space are exponential only in this parameter and polynomial in the maximum size of the input trees. The paper also presents polynomial-time algorithms for the case where the maximum degree of the largest common subtree is bounded by a constant. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
46. Degree constrained 2-partitions of semicomplete digraphs.
- Author
-
Bang-Jensen, Jørgen and Christiansen, Tilde My
- Subjects
- *
DIRECTED graphs , *GEOMETRIC vertices , *PARTITION functions , *POLYNOMIALS , *MATHEMATICAL analysis - Abstract
Abstract A 2-partition of a digraph D is a partition (V1, V2) of V(D) into two disjoint non-empty sets V1 and V2 such that V1 ∪ V2 = V(D). A semicomplete digraph is a digraph with no pair of non-adjacent vertices. We consider the complexity of deciding whether a given semicomplete digraph has a 2-partition such that each part of the partition induces a (semicomplete) digraph with some specified property. In [4] and [5], Bang-Jensen, Cohen and Havet determined the complexity of 120 such 2-partition problems for general digraphs. Several of these problems are NP-complete for general digraphs, and thus it is natural to ask whether this is still the case for well-structured classes of digraphs, such as semicomplete digraphs. This is the main topic of the paper. More specifically, we consider 2-partition problems where the specified properties are minimum out-, minimum in- or minimum semi-degree. Among other results we prove the following: • For all integers k1, k2 ≥ 1 with k1 + k2 ≥ 3, it is NP-complete to decide whether a given digraph D has a 2-partition (V1, V2) such that D⟨Vi⟩ has out-degree at least ki for i = 1, 2. • For every fixed choice of integers α, k1, k2 ≥ 1, there exists a polynomial algorithm for deciding whether a given digraph of independence number at most α has a 2-partition (V1, V2) such that D⟨Vi⟩ has out-degree at least ki for i = 1, 2. • For every fixed integer k ≥ 1, there exists a polynomial algorithm for deciding whether a given semicomplete digraph has a 2-partition (V1, V2) such that D⟨V1⟩ has out-degree at least one and D⟨V2⟩ has in-degree at least k. • It is NP-complete to decide whether a given semicomplete digraph D has a 2-partition (V1, V2) such that D⟨Vi⟩ is a strong tournament. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
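The out-degree condition in the abstract's first bullet is simple to verify once a partition is fixed, so for tiny instances the whole question can be settled by enumeration. A brute-force Python sketch for general digraphs (the instance below, two disjoint directed 3-cycles, is a hypothetical example and is not semicomplete):

```python
from itertools import combinations

def has_outdeg_2partition(arcs, vertices, k1, k2):
    """Brute force for the first bullet above: is there a 2-partition
    (V1, V2) into non-empty sets so that every vertex of the subdigraph
    induced by Vi has out-degree at least ki inside Vi?  Exponential,
    matching the problem's NP-completeness for general digraphs."""
    vertices, arcset = list(vertices), set(arcs)

    def min_outdeg(part):
        return min(sum((u, v) in arcset for v in part) for u in part)

    for r in range(1, len(vertices)):
        for v1 in combinations(vertices, r):
            v2 = set(vertices) - set(v1)
            if min_outdeg(set(v1)) >= k1 and min_outdeg(v2) >= k2:
                return True
    return False

# Two disjoint directed 3-cycles split naturally; one 3-cycle alone
# cannot be split without stranding a vertex with out-degree 0.
two_cycles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(has_outdeg_2partition(two_cycles, range(6), 1, 1))               # True
print(has_outdeg_2partition([(0, 1), (1, 2), (2, 0)], range(3), 1, 1)) # False
```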
47. An exact exponential branch-and-merge algorithm for the single machine total tardiness problem.
- Author
-
Garraffa, Michele, Shang, Lei, Della Croce, Federico, and T'kindt, Vincent
- Subjects
- *
COMPUTATIONAL complexity , *ALGORITHMS , *DYNAMIC programming , *POLYNOMIALS , *PROBLEM solving - Abstract
Abstract This paper proposes an exact exponential algorithm for the single machine total tardiness problem. It exploits the structure of a basic branch-and-reduce framework, based on the well-known Lawler decomposition property, that solves the problem with worst-case complexity O*(3^n) in time and polynomial space. The proposed algorithm, called branch-and-merge, is an improvement of the branch-and-reduce technique with the embedding of a node merging operation. Its time complexity converges to O*(2^n) while keeping the space complexity polynomial. This improves upon the best-known complexity result for this problem, provided by dynamic programming across the subsets, with O*(2^n) worst-case time and space complexity. The branch-and-merge technique is likely to be generalizable to other sequencing problems with similar decomposition properties. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
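For reference, the objective minimized above is easy to state: jobs run back to back on one machine, and each job contributes max(0, completion time − due date). A minimal Python sketch with a hypothetical 3-job instance, including the naive n!-enumeration baseline that the abstract's O*(2^n)-time methods are designed to avoid:

```python
from itertools import permutations

def total_tardiness(p, d, order):
    """Total tardiness of a single-machine sequence: jobs run back to
    back; a job finishing at time C with due date d_j contributes
    max(0, C - d_j)."""
    t = total = 0
    for j in order:
        t += p[j]
        total += max(0, t - d[j])
    return total

def brute_force_opt(p, d):
    """Exact optimum by trying all n! sequences -- the O*(2^n)-time
    methods in the abstract exist precisely to avoid this blow-up."""
    return min(total_tardiness(p, d, perm)
               for perm in permutations(range(len(p))))

# Hypothetical 3-job instance: processing times p, due dates d.
p, d = [3, 2, 4], [2, 4, 3]
print(brute_force_opt(p, d))   # 8, attained by the order (0, 1, 2)
```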
48. Vertex deletion problems on chordal graphs.
- Author
-
Cao, Yixin, Ke, Yuping, Otachi, Yota, and You, Jie
- Subjects
- *
COMPUTATIONAL complexity , *MATHEMATICAL optimization , *ALGORITHMS , *NP-complete problems , *POLYNOMIALS , *GRAPH theory - Abstract
Abstract Containing many classic optimization problems, the family of vertex deletion problems has an important position in algorithm and complexity study. The celebrated result of Lewis and Yannakakis gives a complete dichotomy of their complexity. However, it says nothing about the case when the input graph itself is restricted. This paper initiates a systematic study of vertex deletion problems from one subclass of chordal graphs to another. We give polynomial-time algorithms or proofs of NP-completeness for most of the problems. In particular, we show that the vertex deletion problem from chordal graphs to interval graphs is NP-complete. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
49. The minimal polynomial of a sequence obtained from the componentwise linear transformation of a linear recurring sequence
- Author
-
Gao, Zhi-Han and Fu, Fang-Wei
- Subjects
- *
RECURSIVE sequences (Mathematics) , *POLYNOMIALS , *MATHEMATICAL transformations , *LINEAR systems , *FACTORIZATION , *COMPUTATIONAL complexity - Abstract
Abstract: Let s be a linear recurring sequence and let σ be a linear transformation applied to s componentwise, yielding the sequence σ(s). In this paper, we first present counterexamples showing that the main result in [A.M. Youssef, G. Gong, On linear complexity of sequences over GF(2^n), Theoret. Comput. Sci. 352 (2006) 288–292] is not correct in general, since Lemma 3 in that paper is incorrect. We then determine the minimal polynomial of σ(s) when the minimal polynomial of s has no multiple roots and its canonical factorization is known, and thus present the solution to the main problem which was considered in the above paper but incorrectly solved. Additionally, as a special case, we determine the minimal polynomial of σ(s) when the minimal polynomial of s is primitive. Finally, we give an upper bound on the linear complexity of σ(s) as σ ranges over all possible linear transformations. This bound is tight in some cases. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
50. Improved upper bounds for vertex cover
- Author
-
Chen, Jianer, Kanj, Iyad A., and Xia, Ge
- Subjects
- *
COMPUTER algorithms , *BOUNDARY value problems , *POLYNOMIALS , *EXPONENTIAL functions , *GRAPH theory , *BRANCHING processes - Abstract
Abstract: This paper presents a polynomial-space algorithm for Vertex Cover that improves the previous polynomial-space upper bound by Chen, Kanj, and Jia. Most of the previous algorithms rely on exhaustive case-by-case branching rules and an underlying conservative worst-case-scenario assumption. The contribution of the paper lies in the simplicity, uniformity, and obliviousness of the algorithm presented. Several new techniques, as well as generalizations of previous techniques, are introduced, including general folding, struction, tuples, and local amortized analysis. The algorithm also improves the exponential-space upper bound for the problem by Chandran and Grandoni. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
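The case-by-case branching rules mentioned above all refine one elementary idea: for any edge (u, v), at least one endpoint must be in the cover. A minimal Python sketch of that basic O*(2^k) bounded search tree, only to show the skeleton the paper's techniques improve upon (the triangle instance is a hypothetical example):

```python
def vc_branch(edges, k):
    """Bounded search tree for Vertex Cover with parameter k: some
    endpoint of any uncovered edge (u, v) must join the cover, so
    branch on u or v and recurse with budget k - 1.  This is the
    O*(2^k) skeleton that refined branching rules improve upon."""
    if not edges:
        return True                   # every edge is covered
    if k == 0:
        return False                  # edges remain but budget is spent
    u, v = edges[0]
    return (vc_branch([e for e in edges if u not in e], k - 1)
            or vc_branch([e for e in edges if v not in e], k - 1))

# A triangle needs two vertices in any vertex cover.
triangle = [(1, 2), (2, 3), (1, 3)]
print(vc_branch(triangle, 1), vc_branch(triangle, 2))   # False True
```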