1,247 results
Search Results
2. COAP 2014 Best Paper Prize.
- Subjects
LINEAR programming, VALUE at risk, COMPUTER algorithms, RESEARCH awards, APPROXIMATION algorithms, MATHEMATICAL optimization, PRODUCTION scheduling
- Abstract
The article focuses on the paper "A primal-dual aggregation algorithm for minimizing conditional value-at-risk in linear programs," by Daniel Espinoza and Eduardo Moreno, which won the Computational Optimization and Applications (COAP) Best Paper Award. Topics discussed include the approximation method for stochastic optimization problems, conditional value-at-risk (CVaR), and production scheduling problems.
- Published
- 2015
- Full Text
- View/download PDF
3. Algorithmic study on liar’s vertex-edge domination problem.
- Author
-
Bhattacharya, Debojyoti and Paul, Subhabrata
- Abstract
Let G = (V, E) be a graph. For an edge e = xy ∈ E, the closed neighbourhood of e, denoted by N_G[e] or N_G[xy], is the set N_G[x] ∪ N_G[y]. A vertex set L ⊆ V is a liar's vertex-edge dominating set of a graph G = (V, E) if for every e_i ∈ E, |N_G[e_i] ∩ L| ≥ 2, and for every pair of distinct edges e_i and e_j, |(N_G[e_i] ∪ N_G[e_j]) ∩ L| ≥ 3. This paper introduces the notion of liar's vertex-edge domination, which arises naturally from applications in communication networks. Given a graph G, the Minimum Liar's Vertex-Edge Domination Problem (MinLVEDP) asks for a liar's vertex-edge dominating set of G of minimum cardinality. In this paper, we study this problem from an algorithmic point of view. We show that MinLVEDP can be solved in linear time for trees, whereas the decision version of this problem is NP-complete for general graphs, chordal graphs, and bipartite graphs. We further study approximation algorithms for this problem. We propose two approximation algorithms for MinLVEDP in general graphs and p-claw free graphs. On the negative side, we show that MinLVEDP cannot be approximated within (1/2)(1/8 − ϵ)ln|V| for any ϵ > 0, unless NP ⊆ DTIME(|V|^(O(log log |V|))). Finally, we prove that MinLVEDP is APX-complete for bounded-degree graphs and p-claw-free graphs for p ≥ 6. [ABSTRACT FROM AUTHOR]
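As a concrete companion to the definition in this abstract, here is a small brute-force sketch (ours, not the paper's linear-time or approximation algorithms; function names are our own) that checks the two liar's vertex-edge domination conditions and finds a minimum set on tiny graphs:

```python
from itertools import combinations

def closed_edge_nbhd(adj, e):
    """N_G[e] for an edge e = (x, y): the closed set N_G[x] ∪ N_G[y]."""
    x, y = e
    return adj[x] | adj[y] | {x, y}

def is_lved_set(vertices, edges, L):
    """Check the two liar's vertex-edge domination conditions for L."""
    adj = {v: set() for v in vertices}
    for x, y in edges:
        adj[x].add(y); adj[y].add(x)
    nbhds = {e: closed_edge_nbhd(adj, e) for e in edges}
    # every edge must see at least 2 vertices of L
    if any(len(nbhds[e] & L) < 2 for e in edges):
        return False
    # every pair of distinct edges must jointly see at least 3
    return all(len((nbhds[e] | nbhds[f]) & L) >= 3
               for e, f in combinations(edges, 2))

def min_lved(vertices, edges):
    """Brute-force MinLVEDP; exponential time, tiny instances only."""
    for k in range(1, len(vertices) + 1):
        for L in combinations(vertices, k):
            if is_lved_set(vertices, edges, set(L)):
                return set(L)
    return None
```

On the path 0–1–2–3, no 2-element set survives the pairwise condition, so the minimum liar's vertex-edge dominating set has size 3.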
- Published
- 2024
- Full Text
- View/download PDF
4. Approximate and Randomized Algorithms for Computing a Second Hamiltonian Cycle.
- Author
-
Deligkas, Argyrios, Mertzios, George B., Spirakis, Paul G., and Zamaraev, Viktor
- Subjects
HAMILTONIAN graph theory, DETERMINISTIC algorithms, ALGORITHMS, APPROXIMATION algorithms
- Abstract
In this paper we consider the following problem: given a Hamiltonian graph G and a Hamiltonian cycle C of G, can we compute a second Hamiltonian cycle C′ ≠ C of G, and if yes, how quickly? If the input graph G satisfies certain conditions (e.g. if every vertex of G has odd degree, or if the minimum degree is large enough), it is known that such a second Hamiltonian cycle always exists. Despite substantial efforts, no subexponential-time algorithm is known for this problem. In this paper we relax the problem of computing a second Hamiltonian cycle in two ways. First, we consider approximating the length of a second longest cycle on n-vertex graphs with minimum degree δ and maximum degree Δ. We provide a linear-time algorithm for computing a cycle C′ ≠ C of length at least n − 4α(√n + 2α) + 8, where α = (Δ − 2)/(δ − 2). This result provides a constructive proof of a recent result by Girão, Kittipassorn, and Narayanan in the regime Δ/δ = o(√n). Our second relaxation of the problem is probabilistic. We propose a randomized algorithm which computes a second Hamiltonian cycle with high probability, given that the input graph G has a large enough minimum degree. More specifically, we prove that for every 0 < p ≤ 0.02, if the minimum degree of G is at least (8/p) log(8n) + 4, then a second Hamiltonian cycle can be computed with probability at least 1 − 1/n^(50p⁴ + 1) in poly(n) · 2^(4pn) time. This result implies that, when the minimum degree δ is sufficiently large, we can compute with high probability a second Hamiltonian cycle faster than any known deterministic algorithm. In particular, when δ = ω(log n), our probabilistic algorithm works in 2^(o(n)) time. [ABSTRACT FROM AUTHOR]
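To make the objects in this abstract concrete, the following toy sketch (ours, not the paper's linear-time or randomized algorithms) enumerates the Hamiltonian cycles of a small graph by brute force and looks for a second one:

```python
from itertools import permutations

def hamiltonian_cycles(adj):
    """Enumerate all Hamiltonian cycles of a small graph.

    Each cycle is returned once, as a canonical vertex tuple
    (fixed start vertex, lexicographically smaller orientation).
    Exponential time: illustration only.
    """
    verts = sorted(adj)
    start, rest = verts[0], verts[1:]
    found = set()
    for perm in permutations(rest):
        cycle = (start,) + perm
        ok = all(cycle[(i + 1) % len(cycle)] in adj[cycle[i]]
                 for i in range(len(cycle)))
        if ok:
            found.add(min(cycle, (start,) + perm[::-1]))
    return sorted(found)

def second_hamiltonian_cycle(adj, c):
    """Return some Hamiltonian cycle different from c, or None."""
    canon = min(c, (c[0],) + c[1:][::-1])
    return next((h for h in hamiltonian_cycles(adj) if h != canon), None)

# K4: every vertex has odd degree (3), so a second cycle must exist.
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
```

On K4 there are exactly three distinct Hamiltonian cycles, so whichever cycle is given, a second one is always found.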
- Published
- 2024
- Full Text
- View/download PDF
5. New Algorithms for Steiner Tree Reoptimization.
- Author
-
Bilò, Davide
- Subjects
APPROXIMATION algorithms, ALGORITHMS, TREES
- Abstract
Reoptimization is a setting in which we are given a good approximate solution of an optimization problem instance and a local modification that slightly changes the instance. The main goal is to find a good approximate solution of the modified instance. We investigate one of the most studied scenarios in reoptimization, known as Steiner tree reoptimization. Steiner tree reoptimization is a collection of strongly NP-hard optimization problems that are defined on top of the classical Steiner tree problem and for which several constant-factor approximation algorithms have been designed in the last decades. In this paper we improve upon all these results by developing a novel technique that allows us to design polynomial-time approximation schemes. Remarkably, prior to this paper, no approximation algorithm better than recomputing a solution from scratch was known for the elusive scenario in which the cost of a single edge decreases. Our results are best possible since none of the problems addressed in this paper admits a fully polynomial-time approximation scheme, unless P = NP. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Approximations for Throughput Maximization.
- Author
-
Hyatt-Denesik, Dylan, Rahgoshay, Mirmahdi, and Salavatipour, Mohammad R.
- Subjects
APPROXIMATION algorithms, COMPUTER scheduling, COMPUTER science, DYNAMIC programming, OPEN-ended questions
- Abstract
In this paper we study the classical problem of throughput maximization. In this problem we have a collection J of n jobs, each having a release time r_j, deadline d_j, and processing time p_j. They have to be scheduled non-preemptively on m identical parallel machines. The goal is to find a schedule which maximizes the number of jobs scheduled entirely within their [r_j, d_j] window. This problem has been studied extensively (even for the case of m = 1), and several special cases of it remain open. Bar-Noy et al. (Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing, May 1–4, 1999, Atlanta, Georgia, USA, pp. 622–631, ACM, 1999, https://doi.org/10.1145/301250.301420) presented an algorithm with ratio 1 − 1/(1 + 1/m)^m for m machines, which approaches 1 − 1/e as m increases. For m = 1, Chuzhoy et al. (42nd Annual Symposium on Foundations of Computer Science (FOCS) 2001, 14–17 October 2001, Las Vegas, Nevada, USA, pp. 348–356, IEEE Computer Society, 2001) presented an algorithm with approximation ratio 1 − 1/e − ε (for any ε > 0). Recently, Im et al. (SIAM J Discrete Math 34(3):1649–1669, 2020) presented an algorithm with ratio 1 − 1/e + ε₀ for some absolute constant ε₀ > 0 for any fixed m. They also presented an algorithm with ratio 1 − O(log m/m) − ε for general m, which approaches 1 as m grows. The approximability of the problem for m = O(1) remains a major open question. Even the case of m = 1 with c = O(1) distinct processing times is open (Sgall, in: Algorithms - ESA 2012 - 20th Annual European Symposium, Ljubljana, Slovenia, September 10–12, 2012, Proceedings, pp. 2–11, 2012). In this paper we study the case of m = O(1) and show that if there are c distinct processing times, i.e. the p_j's come from a set of size c, then there is a randomized (1 − ε)-approximation that runs in time O(n^(mc⁷ε⁻⁶) log T), where T is the largest deadline.
Therefore, for constant m and constant c this yields a PTAS. Our algorithm is based on proving structural properties of a near-optimum solution that allow the use of dynamic programming with pruning. [ABSTRACT FROM AUTHOR]
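For intuition only, here is a minimal sketch (ours) of the special case where every window is tight, i.e. p_j = d_j − r_j, which reduces throughput maximization on one machine to classic interval scheduling; the earliest-deadline-first greedy below is optimal for this special case, not for the general problem studied in the paper:

```python
def max_throughput_tight_windows(jobs):
    """Interval scheduling: jobs are (r_j, d_j) with p_j = d_j - r_j.
    Greedily take jobs in order of deadline; each chosen job must start
    after the previously chosen one finishes."""
    scheduled, free_at = [], float("-inf")
    for r, d in sorted(jobs, key=lambda jd: jd[1]):  # by deadline
        if r >= free_at:           # fits after the last chosen job
            scheduled.append((r, d))
            free_at = d
    return scheduled
```

For jobs (0, 3), (1, 2), (2, 4), (3, 5) the greedy schedules (1, 2) and (2, 4); no three of these tight intervals are pairwise compatible.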
- Published
- 2024
- Full Text
- View/download PDF
7. Approximation algorithms for solving the trip-constrained vehicle routing cover problems.
- Author
-
Li, Jianping, Yang, Ping, Lichen, Junran, and Pan, Pengxiang
- Abstract
In this paper, we address the trip-constrained vehicle routing cover problem (the TcVRC problem). Specifically, given a metric complete graph G = (V, E; w) with a set D (⊆ V) of depots, a set J (= V \ D) of customer locations, each customer having unsplittable demand 1, and k vehicles with capacity Q, we are asked to find a set C = {C_i | i = 1, 2, …, k} of k tours for the k vehicles to service all customers. Each tour starts and ends at one depot in D and may be replenished at other depots in D before continuously servicing at most Q customers; i.e., the number of customers serviced per trip of each tour is at most Q (not counting the two end-vertices of that trip), where each trip is a path or cycle that starts at a depot and ends at a depot (possibly the same one) with no other depots in its interior. The objective is to minimize the maximum weight of the k tours in C, i.e., min_C max{w(C_i) | i = 1, 2, …, k}, where w(C_i) is the total weight of edges in tour C_i.
Depending on whether the k vehicles share a common depot or have suppliers, we consider three variations of the TcVRC problem: (1) the trip-constrained vehicle routing cover problem with multiple suppliers (the TcVRC-MS problem), which asks for a set C = {C_i | i = 1, 2, …, k} of k tours as above, minimizing the maximum weight of the k tours in C; (2) the trip-constrained vehicle routing cover problem with common depot and multiple suppliers (the TcVRC-CDMS problem), where each tour starts and ends at the same depot v in D and each vehicle has its suppliers at some depots in D (possibly including v); and (3) the trip-constrained k-traveling salesman problem with non-suppliers (the TckTS-NS problem, or simply the TckTSP-NS), where each tour starts and ends at the same depot v in D and no vehicle has suppliers. As our main contributions, we design approximation algorithms that solve these three problems in polynomial time, with approximation ratios 8 − 2/k, 7/2 − 1/k, and 5, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. An Improved Rational Approximation of Bark Scale Using Low Complexity and Low Delay Filter Banks.
- Author
-
Hareesh, V. and Bindiya, T. S.
- Subjects
FILTER banks, FINITE impulse response filters, STANDARD deviations, APPROXIMATION algorithms, APPROXIMATION error
- Abstract
This paper proposes an algorithm to obtain the sampling factors to model any frequency partitioning so that it is realized using low complexity rational decimated non-uniform filter banks (RDNUFBs). The proposed algorithm is employed to approximate the Bark scale to find a rational frequency partitioning that can be realized using RDNUFBs with less approximation error. The proposed Bark frequency partitioning is found to reduce the average deviation in centre frequency and bandwidth by 65.04% and 48.50%, respectively, and root-mean-square bandwidth deviation by 54.46% when compared to those of the existing perceptual wavelet approximation of Bark scale. In the first filter bank design approach discussed in the paper, a near-perfect reconstruction partially cosine modulated filter bank is employed to obtain the improved Bark frequency partitioning. In the second design approach, the hardware complexity of the PCM-based RDNUFB is reduced by deriving the channels with different sampling factors from the same prototype filter by the method of channel merging. There is a considerable reduction of 40.83% in the number of multipliers when merging of partially cosine modulated channels is employed. It is also found that both the proposed approaches can reduce the delay by 95.17% when compared to the existing rational tree approximation methods and hence, are suitable for real-time applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Symmetric 2-adic complexity of Tang–Gong interleaved sequences from generalized GMW sequence pair.
- Author
-
Yang, Bo, He, Kangkang, Zeng, Xiangyong, and Xiao, Zibi
- Subjects
BINARY sequences, APPROXIMATION algorithms
- Abstract
Tang–Gong interleaved sequences constructed from the generalized GMW sequence pair are a class of binary sequences with optimal autocorrelation magnitude. In this paper, the symmetric 2-adic complexity of these sequences is investigated. We first derive a lower bound on their 2-adic complexity by extending the method proposed by Hu. Then, by analysing the algebraic structure of these sequences, a lower bound on their symmetric 2-adic complexity is obtained. Our result shows that the symmetric 2-adic complexity of these sequences is large enough to resist attacks with the rational approximation algorithm. [ABSTRACT FROM AUTHOR]
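For reference, the 2-adic complexity discussed above can be computed directly from its standard definition; this sketch (ours, not from the paper) follows the usual formula log₂((2^T − 1)/gcd(S(2), 2^T − 1)) for one period of a binary sequence:

```python
from math import gcd, log2

def two_adic_complexity(bits):
    """2-adic complexity of one period of a binary sequence.

    S(2) = sum_i s_i * 2^i interprets the period as an integer;
    the complexity is log2((2^T - 1) / gcd(S(2), 2^T - 1))."""
    T = len(bits)
    S = sum(b << i for i, b in enumerate(bits))
    m = (1 << T) - 1
    return log2(m // gcd(S, m))
```

The all-ones period has complexity 0 (trivially predictable by the rational approximation algorithm), while the period 1001 already reaches log₂ 5 ≈ 2.32.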
- Published
- 2024
- Full Text
- View/download PDF
10. Approximating the probabilistic p-Center problem under pressure.
- Author
-
Demange, Marc, Haddad, Marcel A., and Murat, Cécile
- Abstract
The Probabilistic p-Center problem under Pressure (Min PpCP) is a variant of the usual Min p-Center problem that we recently introduced in the context of wildfire management. The problem is to locate p shelters minimizing the maximum distance people will have to cover to reach the closest accessible shelter in case of fire. The landscape is divided into zones and is modeled as an edge-weighted graph with vertices corresponding to zones and edges corresponding to direct connections between two adjacent zones. The risk associated with fire outbreaks is modeled using a finite set of fire scenarios. Each scenario corresponds to a fire outbreak in a single zone (i.e., at a vertex), whose main consequence is to modify evacuation paths in two ways. First, an evacuation path cannot pass through the vertex on fire. Second, the fact that someone close to the fire may not take rational decisions when selecting a direction to escape is modeled using new kinds of evacuation paths. In this paper, we characterize the set of feasible solutions of Min PpCP instances. Then, we propose some approximation results for Min PpCP. These results require approximation results for two variants of the (deterministic) Min p-Center problem, called Min MAC p-Center and Min Partial p-Center. [ABSTRACT FROM AUTHOR]
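As background for the deterministic base problem, Gonzalez's farthest-first traversal is the textbook 2-approximation for Min p-Center on a metric; this sketch (ours, not the paper's Min PpCP algorithms) illustrates it:

```python
def farthest_first_centers(dist, p):
    """Gonzalez's farthest-first traversal: a classic 2-approximation
    for the deterministic p-Center problem on a metric.
    dist: symmetric distance matrix (list of lists)."""
    n = len(dist)
    centers = [0]                       # arbitrary first center
    d = dist[0][:]                      # distance to nearest chosen center
    while len(centers) < p:
        far = max(range(n), key=lambda v: d[v])   # farthest point
        centers.append(far)
        d = [min(d[v], dist[far][v]) for v in range(n)]
    return centers, max(d)              # centers and covering radius
```

On four points 0, 1, 10, 11 on a line with p = 2, the traversal picks one point from each cluster and achieves radius 1.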
- Published
- 2024
- Full Text
- View/download PDF
11. A hybrid patch decomposition approach to compute an enclosure for multi-objective mixed-integer convex optimization problems.
- Author
-
Eichfelder, Gabriele and Warnow, Leo
- Subjects
APPROXIMATION algorithms, CONVEX functions, INTEGERS, ALGORITHMS
- Abstract
In multi-objective mixed-integer convex optimization, multiple convex objective functions need to be optimized simultaneously while some of the variables are restricted to take integer values. In this paper, we present a new algorithm to compute an enclosure of the nondominated set of such optimization problems. More precisely, we decompose the multi-objective mixed-integer convex optimization problem into several multi-objective continuous convex optimization problems, which we refer to as patches. We then dynamically compute and improve coverages of the nondominated sets of those patches to finally combine them to obtain an enclosure of the nondominated set of the multi-objective mixed-integer convex optimization problem. Additionally, we introduce a mechanism to reduce the number of patches that need to be considered in total. Our new algorithm is the first of its kind and guaranteed to return an enclosure of prescribed quality within a finite number of iterations. For selected numerical test instances we compare our new criterion-space-based approach to other algorithms from the literature and show that much larger instances can be solved with our new algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Approximating Long Cycle Above Dirac's Guarantee.
- Author
-
Fomin, Fedor V., Golovach, Petr A., Sagunov, Danil, and Simonov, Kirill
- Subjects
POLYNOMIAL time algorithms, APPROXIMATION algorithms, UNDIRECTED graphs, PATHS & cycles in graph theory
- Abstract
Parameterization above (or below) a guarantee is a successful concept in parameterized algorithms. The idea is that many computational problems admit "natural" guarantees, giving rise to the algorithmic question of whether a better solution (above the guarantee) can be obtained efficiently. For example, for every boolean CNF formula with m clauses, there is an assignment that satisfies at least m/2 clauses. How difficult is it to decide whether there is an assignment satisfying more than m/2 + k clauses? Or, if an n-vertex graph has a perfect matching, then its minimum vertex cover has size at least n/2. Is there a vertex cover of size at least n/2 + k for some k ≥ 1, and how difficult is it to find one? The above-guarantee paradigm has led to several exciting discoveries in the areas of parameterized algorithms and kernelization. We argue that this paradigm could bring fresh perspectives on well-studied problems in approximation algorithms. Our example is the longest cycle problem. One of the oldest results in extremal combinatorics is the celebrated Dirac's theorem from 1952, which provides the following guarantee on the length of a longest cycle: for every 2-connected n-vertex graph G with minimum degree δ(G) ≤ n/2, the length of a longest cycle L is at least 2δ(G). Thus the "essential" part in finding a longest cycle is approximating the "offset" k = L − 2δ(G). The main result of this paper is an above-guarantee approximation theorem for k. Informally, the theorem says that approximating the offset k is not harder than approximating the total length L of a cycle. In other words, for any (reasonably well-behaved) function f, a polynomial-time algorithm constructing a cycle of length f(L) in an undirected graph with a cycle of length L yields a polynomial-time algorithm constructing a cycle of length 2δ(G) + Ω(f(k)). [ABSTRACT FROM AUTHOR]
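The "offset" in Dirac's guarantee can be made concrete on toy instances; this brute-force sketch (ours) computes the longest cycle length L, from which the offset k = L − 2δ(G) follows:

```python
from itertools import permutations

def longest_cycle_length(adj):
    """Brute-force length of a longest cycle in a small undirected graph
    (exponential time; illustration only)."""
    verts = sorted(adj)
    for r in range(len(verts), 2, -1):        # try longest first
        for seq in permutations(verts, r):
            if seq[0] != min(seq):            # fix a canonical start
                continue
            if all(seq[(i + 1) % r] in adj[seq[i]] for i in range(r)):
                return r
    return 0                                   # acyclic graph
```

For the 5-cycle, δ(G) = 2 and L = 5, so Dirac's bound 2δ(G) = 4 is met and the offset is k = 1.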
- Published
- 2024
- Full Text
- View/download PDF
13. Path Cover Problems with Length Cost.
- Author
-
Kobayashi, Kenya, Lin, Guohui, Miyano, Eiji, Saitoh, Toshiki, Suzuki, Akira, Utashima, Tadatoshi, and Yagita, Tsuyoshi
- Subjects
BIPARTITE graphs, APPROXIMATION algorithms, COST functions, PLANAR graphs, GRAPH algorithms, COST
- Abstract
For a graph G = (V, E), a collection P of vertex-disjoint (simple) paths is called a path cover of G if every vertex v ∈ V is contained in exactly one path of P. The Path Cover problem (PC for short) is to find a minimum-cardinality path cover of G. In this paper, we introduce generalizations of PC in which each path is associated with a weight (cost or profit). Our problems, Minimum (Maximum) Weighted Path Cover [MinPC (MaxPC)], are defined as follows: let U = {0, 1, …, n − 1}; given a graph G = (V, E) and a weight function f : U → R ∪ {+∞, −∞} that assigns each path a weight based on its length, the objective of MinPC (MaxPC) is to find a path cover P of G such that the total weight of the paths in P is minimized (maximized). Let L be a subset of U, and let P_L be the set of paths whose lengths lie in L. We consider Min P_L PC with binary cost, i.e., f(ℓ) = 1 if ℓ ∈ L and f(ℓ) = 0 otherwise. We also consider Max P_L PC with f(ℓ) = ℓ + 1 if ℓ ∈ L and f(ℓ) = 0 otherwise. Many well-known graph-theoretic problems, such as Hamiltonian Path and Maximum Matching, can be modeled using Min P_L PC and Max P_L PC. In this paper, we first show that deciding whether Min P_{0,1,2} PC has a 0-weight solution is NP-complete for planar bipartite graphs of maximum degree three; consequently, (i) for any constant σ ≥ 1, there is no polynomial-time approximation algorithm with approximation ratio σ for Min P_{0,1,2} PC unless P = NP, and (ii) Max P_{3,…,n−1} PC is NP-hard for the same graph class. Next, we present a polynomial-time algorithm for Min P_{0,1,…,k} PC on graphs of bounded treewidth for fixed k. Lastly, we present a 4-approximation algorithm for Max P_{3,…,n−1} PC, which becomes a 2.5-approximation algorithm for subcubic graphs. [ABSTRACT FROM AUTHOR]
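To pin down the object being optimized, here is a small checker (ours) verifying that a candidate collection of paths is a valid path cover:

```python
def is_path_cover(vertices, edges, paths):
    """Check that `paths` (lists of vertices) are vertex-disjoint simple
    paths of the graph that together cover every vertex exactly once."""
    adj = {v: set() for v in vertices}
    for x, y in edges:
        adj[x].add(y); adj[y].add(x)
    covered = [v for p in paths for v in p]
    if sorted(covered) != sorted(vertices):   # exactly-once coverage
        return False
    # consecutive vertices on each path must be adjacent in G
    return all(b in adj[a] for p in paths for a, b in zip(p, p[1:]))
```

On the 4-cycle 0–1–2–3–0, the two paths [0, 1] and [3, 2] form a path cover of cardinality 2, whereas [0, 1, 2] alone leaves vertex 3 uncovered.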
- Published
- 2023
- Full Text
- View/download PDF
14. Guest Editorial: Special Issue on Theoretical Informatics.
- Author
-
Kohayakawa, Yoshiharu and Miyazawa, Flávio K.
- Subjects
APPROXIMATION algorithms, COMPUTATIONAL number theory, MEDICAL informatics, RANDOM access memory
- Abstract
These bounds already improve previous results. In "Exponential-time quantum algorithms for graph coloring problems", the authors present an exponential-space quantum algorithm computing the chromatic number of an n-vertex graph using quantum random access memory (QRAM). The authors also present polynomial-space quantum algorithms for the k-coloring problem without using QRAM, as well as classical algorithms that can be improved quadratically by Grover's search to obtain polynomial-space quantum algorithms. [Extracted from the article]
- Published
- 2023
- Full Text
- View/download PDF
15. Exact algorithms and approximation schemes for proportionate flow shop scheduling with step-deteriorating processing times.
- Author
-
Shabtay, Dvir and Mor, Baruch
- Subjects
FLOW shop scheduling, APPROXIMATION algorithms, FLOW shops, POLYNOMIAL approximation, POLYNOMIAL time algorithms
- Abstract
We study two scheduling problems in a proportionate flow shop environment, where job processing times are machine-independent. In contrast to classical proportionate flow shop models, we assume (in both problems) that processing times are step-deteriorating. Accordingly, each job J_j has a normal processing time, a_j, if it starts to be processed in the shop no later than its deteriorating date, δ_j. Otherwise, the job's processing time increases by b_j (the job's deterioration penalty). Our aim is to find a job schedule that minimizes either the makespan or the total load. These two problems are known to be NP-hard even in the special case of a single machine with a common deteriorating date for all jobs. In this paper, we derive several positive results for the two problems. We first show that the two problems can be represented in a unified way. We then prove that the unified problem is only ordinarily NP-hard by providing a pseudo-polynomial time algorithm for its solution. We also show that the pseudo-polynomial time algorithm can be converted into a fully polynomial-time approximation scheme (FPTAS). Finally, we analyze the parameterized complexity of the problem with respect to the number of different deteriorating dates in the instance, ν_δ. We show that although the problem is NP-hard when ν_δ = 1, it is fixed-parameter tractable (FPT) for the combined parameters (i) (ν_δ, ν_a) and (ii) (ν_δ, ν_b), where ν_a is the number of different normal processing times in the instance and ν_b is the number of different deterioration penalties. [ABSTRACT FROM AUTHOR]
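The step-deterioration rule is easy to state in code; this sketch (ours, not the paper's pseudo-polynomial algorithm) computes the makespan of a given job sequence on a single machine and shows how sequencing interacts with deteriorating dates:

```python
def single_machine_makespan(seq):
    """Makespan of a job sequence on one machine with step-deteriorating
    processing times. Each job is (a_j, b_j, delta_j): it takes a_j if it
    starts no later than its deteriorating date delta_j, and a_j + b_j
    otherwise."""
    t = 0
    for a, b, delta in seq:
        t += a if t <= delta else a + b
    return t
```

For two jobs (a, b, δ) = (4, 1, 3) and (1, 5, 0), one order yields makespan 10 and the other 5, so even on a single machine the sequence matters.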
- Published
- 2024
- Full Text
- View/download PDF
16. Theoretical Analysis of Git Bisect.
- Author
-
Courtiel, Julien, Dorbec, Paul, and Lecoq, Romain
- Subjects
APPROXIMATION algorithms, DIRECTED acyclic graphs, ACYCLIC model, GRAPH algorithms
- Abstract
In this paper, we consider the problem of finding a regression in a version control system (VCS), such as git. The set of versions is modelled by a directed acyclic graph (DAG) whose vertices represent versions of the software and whose arcs are the changes between different versions. We assume that somewhere in the DAG a bug was introduced, which persists in all of its subsequent versions. It is possible to query a vertex to check whether the corresponding version carries the bug. Given a DAG and a bugged vertex, the Regression Search Problem consists in finding the first vertex containing the bug in a minimum number of queries in the worst-case scenario. This problem is known to be NP-complete. We study the algorithm used in git to address this problem, known as git bisect. We prove that in a general setting, git bisect can use an exponentially larger number of queries than an optimal algorithm. We also consider the restriction where all vertices have indegree at most 2 (i.e. where merges are made between at most two branches at a time in the VCS), and prove that in this case, git bisect is a 1/log₂(3/2)-approximation algorithm, and that this bound is tight. We also provide a better approximation algorithm for this case. Finally, we give an alternative proof of the NP-completeness of the Regression Search Problem, via a variation with bounded indegree. [ABSTRACT FROM AUTHOR]
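For the special case of a linear history (a path rather than a general DAG), git bisect degenerates to binary search, which solves the Regression Search Problem optimally with about log₂ n queries; a minimal sketch (ours):

```python
def first_bad(n, is_bad):
    """Binary search for the first bad version in a linear history
    0..n-1, where the bug persists once introduced."""
    lo, hi = 0, n - 1           # invariant: first bad version is in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid            # bug introduced at mid or earlier
        else:
            lo = mid + 1        # versions up to mid are good
    return lo
```

The hardness results in the paper arise precisely because general DAG histories with merges do not admit such a clean halving.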
- Published
- 2024
- Full Text
- View/download PDF
17. Stochastic Variance Reduction for DR-Submodular Maximization.
- Author
-
Lian, Yuefang, Du, Donglei, Wang, Xiao, Xu, Dachuan, and Zhou, Yang
- Subjects
OPTIMIZATION algorithms, SUBMODULAR functions, STOCHASTIC approximation, APPROXIMATION algorithms, CONVEX functions
- Abstract
Stochastic optimization has experienced significant growth in recent decades, with the increasing prevalence of variance reduction techniques in stochastic optimization algorithms to enhance computational efficiency. In this paper, we introduce two projection-free stochastic approximation algorithms for maximizing diminishing return (DR) submodular functions over convex constraints, building upon the Stochastic Path Integrated Differential EstimatoR (SPIDER) and its variants. Firstly, we present a SPIDER Continuous Greedy (SPIDER-CG) algorithm for the monotone case that guarantees a (1 − e⁻¹)OPT − ε approximation after O(ε⁻¹) iterations and O(ε⁻²) stochastic gradient computations under the mean-squared smoothness assumption. For the non-monotone case, we develop a SPIDER Frank–Wolfe (SPIDER-FW) algorithm that guarantees a (1/4)(1 − min_{x∈C} ‖x‖_∞)OPT − ε approximation with O(ε⁻¹) iterations and O(ε⁻²) stochastic gradient estimates. To address the practical challenge associated with a large number of samples per iteration, we introduce a modified gradient estimator based on SPIDER, leading to a Hybrid SPIDER-FW (Hybrid SPIDER-CG) algorithm, which achieves the same approximation guarantee as the SPIDER-FW (SPIDER-CG) algorithm with only O(1) samples per iteration. Numerical experiments on both simulated and real data demonstrate the efficiency of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Combined method for the cage induction motor parameters estimation using two-stage PSO algorithm.
- Author
-
Vukašinović, Jovan, Štatkić, Saša, Milovanović, Miloš, Arsić, Nebojša, and Perović, Bojan
- Subjects
INDUCTION motors, PARTICLE swarm optimization, PARAMETER estimation, SKIN effect, APPROXIMATION algorithms
- Abstract
This paper presents a combined method for estimating the equivalent circuit parameters of cage induction motors. The method is based on dual usage of the particle swarm optimization algorithm and on approximating the rotor parameters as a function of speed, due to the influence of the skin effect. The approximation of the rotor parameters as a function of speed is not applied directly in the parameter estimation algorithm; instead, it is used to obtain the torque-speed characteristics of cage induction motors. The first stage of the optimization estimates the equivalent circuit parameters of a motor for the nominal operating mode, while the rotor parameters at motor start-up are estimated in the second stage. These parameters are obtained by minimizing the error between the calculated data and the manufacturer data. The proposed method is applied to eight two-pole induction motors with different rated powers, energy efficiency class IE3, manufactured by ABB. The results are verified by comparing the obtained torque-speed characteristics with the corresponding characteristics provided by the manufacturer (i.e., ABB) using the MotSize program. The torque-speed characteristics obtained by the proposed method are shown to be in good agreement with the characteristics given by the manufacturer, and the mean absolute relative error does not exceed 5% for all considered motors. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. The Space Complexity of Sum Labelling.
- Author
-
Fernau, Henning and Gajjar, Kshitij
- Subjects
GRAPH labelings, SPARSE graphs, POLYNOMIAL time algorithms, APPROXIMATION algorithms, PLANAR graphs, GRAPH algorithms, COMPUTATIONAL complexity
- Abstract
A graph is called a sum graph if its vertices can be labelled by distinct positive integers such that there is an edge between two vertices if and only if the sum of their labels is the label of another vertex of the graph. Most papers on sum graphs consider combinatorial questions, like the minimum number of isolated vertices that need to be added to a given graph to make it a sum graph. In this paper, we initiate the study of sum graphs from the viewpoint of computational complexity. Notice that every n-vertex sum graph can be represented by a sorted list of n positive integers, where edge queries can be answered in O(log n) time. Therefore, upper-bounding the numbers used as vertex labels also upper-bounds the space complexity of storing the graph in the database. We show that every n-vertex, m-edge, d-degenerate graph can be made a sum graph by adding at most m isolated vertices to it, such that the largest number used as a vertex label grows as O(n²d). This enables us to store the graph using O(m log n) bits of memory. For sparse graphs (graphs with O(n) edges), this matches the trivial lower bound of Ω(n log n). As planar graphs and forests have constant degeneracy, our result implies an upper bound of O(n²) on their label numbers. The previously best known upper bound on the numbers needed for labelling general graphs with the minimum number of isolated vertices was O(4ⁿ), due to Kratochvíl, Miller & Nguyen (2001). Furthermore, their proof was existential, whereas our labelling can be constructed in polynomial time. [ABSTRACT FROM AUTHOR]
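The sum-graph property itself is straightforward to verify for a given labelling; a small checker (ours):

```python
def is_sum_graph(labels, edges):
    """Check the sum-graph property for a labelling: labels must be
    distinct positive integers, and {u, v} must be an edge iff
    label(u) + label(v) is itself the label of some vertex."""
    vals = set(labels.values())
    if len(vals) != len(labels) or min(vals) <= 0:
        return False                      # labels not distinct/positive
    want = {frozenset((u, v))
            for u in labels for v in labels
            if u < v and labels[u] + labels[v] in vals}
    return want == {frozenset(e) for e in edges}
```

With labels 1, 2, 3 on vertices 1, 2, 3, the only label sum present is 1 + 2 = 3, so the graph must contain exactly the edge {1, 2}; vertex 3 plays the role of an added isolated vertex.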
- Published
- 2023
- Full Text
- View/download PDF
20. Polyak's Method Based on the Stochastic Lyapunov Function for Justifying the Consistency of Estimates Produced by a Stochastic Approximation Search Algorithm under an Unknown-but-Bounded Noise.
- Author
-
Granichin, O. N., Ivanskii, Yu. V., and Kopylova, K. D.
- Subjects
STOCHASTIC approximation, APPROXIMATION algorithms, LYAPUNOV functions, NOISE, REMOTE control, SEARCH algorithms
- Abstract
In 1976–1977, Polyak published in the journal Avtomatika i Telemekhanika (Automation and Remote Control) two remarkable papers on how to study the properties of estimates produced by iterative pseudogradient algorithms. The first paper, published in 1976, considered the general case based on the stochastic Lyapunov function, and the second considered the linear case. The assumptions formulated in these papers and the estimates obtained in them can still be considered the state of the art. In the current paper, Polyak's approach is applied to studying the properties of estimates of a (randomized) stochastic approximation search algorithm for the case of unknown-but-bounded noise in observations. The asymptotic estimates obtained here were already known earlier, while exact estimates for a finite number of observations are published for the first time. [ABSTRACT FROM AUTHOR]
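To give a flavor of the randomized stochastic approximation search algorithms whose estimates are analyzed in this line of work, here is a minimal simultaneous-perturbation sketch (ours; the paper concerns the analysis of such estimates under unknown-but-bounded noise, not this specific code):

```python
import random

def spsa(f_noisy, x0, steps=200, a=1.0, c=0.1, seed=0):
    """Minimal one-dimensional simultaneous-perturbation stochastic
    approximation: two (possibly noisy) function queries per step with a
    random Rademacher probe, and a diminishing step size a/k."""
    rng = random.Random(seed)
    x = x0
    for k in range(1, steps + 1):
        d = rng.choice((-1.0, 1.0))       # random probe direction
        g = (f_noisy(x + c * d) - f_noisy(x - c * d)) / (2 * c * d)
        x -= (a / k) * g                   # pseudogradient step
    return x
```

On the noiseless quadratic (x − 3)², the finite-difference estimate equals the exact gradient, and the iterates settle at the minimizer x = 3.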
- Published
- 2024
- Full Text
- View/download PDF
21. Risk Aversion, Reservation Utility and Bargaining Power: An Evolutionary Algorithm Approximation of Incentive Contracts.
- Author
-
Curiel-Cabral, Itza Tlaloc Quetzalcoatl, Di Giannatale, Sonia, and Labrador-Badía, Giselle
- Subjects
EVOLUTIONARY algorithms ,BARGAINING power ,APPROXIMATION algorithms ,RISK aversion ,ELECTRIC utilities - Abstract
In this paper we analyze the links between the agent's reservation utility, bargaining power, and risk aversion in terms of their simultaneous effects on the structure of optimal static contracts. We compare the following principal-agent models in the symmetric and asymmetric information environments: the standard approach, which includes a participation constraint, and a multi-objective (MO) optimization approach in which the objective function is a convex combination of the expected utilities of the principal and the agent. The MO model does not include a participation constraint, but it includes a parameter for the agent's bargaining power. We also study an Evolutionary Algorithm implementation of the static principal-agent model to support and extend our analytical results. We show that the numerical solution approximated by our implementation of an evolutionary algorithm is in line with the analytical solutions mentioned before. That is, for every admissible value of the agent's reservation utility, there is a corresponding admissible value of the agent's bargaining parameter, both in the MO approach and the EA implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Interval division and linearization algorithm for minimax linear fractional program.
- Author
-
Zhang, Bo, Gao, Yuelin, Liu, Xia, and Huang, Xiaoli
- Subjects
FRACTIONAL programming ,POLYNOMIAL time algorithms ,ALGORITHMS ,POLYNOMIAL approximation ,OUTER space ,PARALLEL algorithms ,APPROXIMATION algorithms - Abstract
This paper constructs an interval division and linearization algorithm for solving the minimax linear fractional programming (MLFP) problem. In this algorithm, MLFP is converted and decomposed into a series of linear programs by dividing the outer one-dimensional space of the equivalent problem (EP) into polynomially many intervals. To improve the computational efficiency of the algorithm, two new acceleration techniques are introduced, which delete large regions of the outer space in which no optimal solution of EP can lie. In addition, the global convergence of the proposed algorithm is established, and its computational complexity is analyzed to show that it is a fully polynomial time approximation scheme. Finally, numerical results demonstrate that the proposed algorithm is feasible and effective. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Randomized Strategies for Robust Combinatorial Optimization with Approximate Separation.
- Author
-
Kawase, Yasushi and Sumita, Hanna
- Subjects
ROBUST optimization ,COMBINATORIAL optimization ,DISTRIBUTION (Probability theory) ,LINEAR programming ,APPROXIMATION algorithms ,ARTIFICIAL intelligence - Abstract
In this paper, we study the following robust optimization problem. Given a set family representing feasibility and candidate objective functions, we choose a feasible set, and then an adversary chooses one objective function, knowing our choice. The goal is to find a randomized strategy (i.e., a probability distribution over the feasible sets) that maximizes the expected objective value in the worst case. This problem is fundamental in wide areas such as artificial intelligence, machine learning, game theory, and optimization. To solve the problem, we provide a general framework based on the dual linear programming problem. In the framework, we utilize the ellipsoid algorithm with the approximate separation algorithm. We prove that there exists an α -approximation algorithm for our robust optimization problem if there exists an α -approximation algorithm for finding a (deterministic) feasible set that maximizes a nonnegative linear combination of the candidate objective functions. Using our result, we provide approximation algorithms for the max–min fair randomized allocation problem and the maximum cardinality robustness problem with a knapsack constraint. [ABSTRACT FROM AUTHOR]
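The randomized-strategy LP behind this framework can be written down directly when the feasible sets can be enumerated; the paper's contribution is handling the general (exponential) case via the ellipsoid method with approximate separation. A small sketch of the enumerable case (the function name and the scipy-based formulation are my own, not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_mixed_strategy(values):
    """values[j][i] = objective j evaluated on feasible set i.
    Returns (p, t): a distribution p over the feasible sets that maximizes
    the worst-case expected value t. Solved as an explicit LP here; the
    paper instead handles exponentially many sets via ellipsoid +
    approximate separation."""
    values = np.asarray(values, dtype=float)
    n_obj, n_sets = values.shape
    # variables: p_1..p_m, t ; we minimize -t
    c = np.zeros(n_sets + 1)
    c[-1] = -1.0
    # t - sum_i p_i * values[j][i] <= 0 for every objective j
    A_ub = np.hstack([-values, np.ones((n_obj, 1))])
    b_ub = np.zeros(n_obj)
    # p must be a probability distribution
    A_eq = np.hstack([np.ones((1, n_sets)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n_sets + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:-1], res.x[-1]

# two sets, two adversarial objectives: mixing equally guarantees value 0.5
p, t = worst_case_mixed_strategy([[1.0, 0.0], [0.0, 1.0]])
```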
- Published
- 2024
- Full Text
- View/download PDF
24. Top-k heavy weight triangles listing on graph stream.
- Author
-
Zhang, Fan, Gou, Xiangyang, and Zou, Lei
- Subjects
TRIANGLES ,APPROXIMATION algorithms ,SOCIAL networks ,DATA structures ,BIG data ,PROBLEM solving - Abstract
Graph streams are used to express complex and highly dynamic relationships between entities, such as friendships in social networks. Storage and mining of graph streams are major areas of big data research. Triangle listing/counting is an important topic in graph mining: the triangle, as the simplest cycle and clique structure, appears in many real-world scenarios, and a large amount of related work exists on triangles over graph streams. However, existing research has focused on triangle counting on static graphs or graph streams; there is a lack of research targeting heavy-weight triangle listing. This paper formally defines the triangle weight on a graph stream. Based on this definition, it presents an approximation algorithm for the top-k heavy-weight triangle listing problem on graph streams, and proposes a series of optimized data structures, DolhaT, Filtered DolhaT and Double Filtered DolhaT (DFD), to solve this problem. Experiments on real graph stream datasets demonstrate the effectiveness of the proposed structures for the heavy-weight triangle listing problem on graph streams. [ABSTRACT FROM AUTHOR]
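For intuition, here is an exact static-graph baseline for the task the paper approximates on streams. The triangle weight is assumed here to be the sum of the three edge weights, which is an illustrative choice and may differ from the paper's definition:

```python
import heapq

def topk_heavy_triangles(edges, k):
    """Exact top-k heaviest triangles on a static weighted graph, given
    edges as (u, v, weight) triples. Triangle weight = sum of its three
    edge weights (assumed definition). A stream algorithm would replace
    the exact adjacency storage with sketch structures."""
    w = {}    # edge -> weight
    adj = {}  # vertex -> neighbor set
    for u, v, weight in edges:
        w[frozenset((u, v))] = weight
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    best = {}
    for e in w:
        u, v = tuple(e)
        for x in adj[u] & adj[v]:       # common neighbors close a triangle
            tri = tuple(sorted((u, v, x)))
            best[tri] = w[e] + w[frozenset((u, x))] + w[frozenset((v, x))]
    return heapq.nlargest(k, ((s, t) for t, s in best.items()))
```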
- Published
- 2023
- Full Text
- View/download PDF
25. Distributed primal outer approximation algorithm for sparse convex programming with separable structures.
- Author
-
Olama, Alireza, Camponogara, Eduardo, and Mendes, Paulo R. C.
- Subjects
CONVEX programming ,OPTIMIZATION algorithms ,SPARSE approximations ,MODERN architecture ,CONSTRAINED optimization ,DISTRIBUTED algorithms ,APPROXIMATION algorithms - Abstract
This paper presents the distributed primal outer approximation (DiPOA) algorithm for solving sparse convex programming (SCP) problems with separable structures efficiently and in a decentralized manner. The DiPOA algorithm is developed by embedding the recently proposed relaxed hybrid alternating direction method of multipliers (RH-ADMM) into the outer approximation (OA) algorithm. We also propose two main improvements to control the quality and the number of cutting planes that approximate nonlinear functions. In particular, the RH-ADMM algorithm acts as a distributed numerical engine inside the DiPOA algorithm, and DiPOA takes advantage of the multi-core architecture of modern processors to speed up optimization. The proposed distributed algorithm makes the solution of SCP problems arising in learning and control practical from the application side. The paper concludes with a performance analysis of DiPOA on distributed sparse logistic regression and quadratically constrained optimization problems, together with a numerical comparison against state-of-the-art optimization solvers. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Partially symmetric tensor structure preserving rank-R approximation via BFGS algorithm.
- Author
-
Chen, Ciwen, Ni, Guyan, and Yang, Bo
- Subjects
LEAST squares ,APPROXIMATION algorithms - Abstract
It is known that many tensor data have symmetric or partially symmetric structure, and structural tensors have structure preserving Candecomp/Parafac (CP) decompositions. However, the well-known alternating least squares (ALS) method cannot realize structure preserving CP decompositions of tensors. Hence, in this paper, we consider numerical problems of structure preserving rank-R approximation and structure preserving CP decomposition of partially symmetric tensors. For the problem of structure preserving rank-R approximation, we derive the gradient formula of the objective function, obtain BFGS iterative formulas, propose a BFGS algorithm for positive partially symmetric rank-R approximation, and discuss the convergence of the algorithm. For the problem of structure preserving CP decomposition, we give a necessary condition for partially symmetric tensors with even orders to have positive partially symmetric CP decompositions, and design a general partially symmetric rank-R approximation algorithm. Finally, some numerical examples are given. Through these examples, we find that if a tensor has a positive partially symmetric CP decomposition, then its partially symmetric rank CP decomposition must be a positive CP decomposition. In addition, we compare the BFGS algorithm proposed in this paper with the standard CP-ALS method. Numerical examples show that the BFGS algorithm has better stability and faster computing speed than the CP-ALS algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Practical Budgeted Submodular Maximization.
- Author
-
Feldman, Moran, Nutov, Zeev, and Shoham, Elad
- Subjects
SUBMODULAR functions ,GREEDY algorithms ,APPROXIMATION algorithms ,THRESHOLDING algorithms ,LEAD time (Supply chain management) ,BACKPACKS - Abstract
We consider the problem of maximizing a non-negative monotone submodular function subject to a knapsack constraint, also known as the Budgeted Submodular Maximization (BSM) problem. Sviridenko (Operat Res Lett 32:41–43, 2004) showed that by guessing 3 appropriate elements of an optimal solution, and then executing a greedy algorithm, one can obtain the optimal approximation ratio of α = 1 − 1/e ≈ 0.632 for BSM. However, the need to guess (by enumeration) 3 elements makes the algorithm of Sviridenko impractical, as it leads to a time complexity of roughly O(n^5) (this can be slightly improved to roughly O(n^4) using the thresholding technique of Badanidiyuru & Vondrák (in: SODA, 1497–1514, 2014)). Our main results in this paper show that fewer guesses suffice. Specifically, by making only 2 guesses (and using the thresholding technique of Badanidiyuru & Vondrák), we get the same optimal approximation ratio α with an improved time complexity of roughly O(n^3). Furthermore, by making only a single guess, we get an almost as good approximation ratio of 0.6174 > 0.9767α in roughly O(n^2) time. Prior to our work, the only approximation algorithms known to obtain an approximation ratio close to α for BSM were the algorithm of Sviridenko and an algorithm of Ene & Nguyen (in: ICALP, 53:1–53:12, 2019) that achieves (α − ε)-approximation. However, the algorithm of Ene & Nguyen requires (1/ε)^{O(1/ε^4)} n log^2 n time, and hence is of theoretical interest only, since (1/ε)^{O(1/ε^4)} is huge even for moderate values of ε. In contrast, all the algorithms we analyze are simple and parallelizable, which makes them good candidates for practical use. Recently, Tang et al. (in: Proc ACM Meas Anal Comput Syst, 5(1): 08:1–08:22, 2021) studied a simple greedy algorithm that already has a long research history, and proved that it admits an approximation ratio of at least 0.405 (without any guesses). The last part of this paper improves over this result and shows that the approximation ratio of the greedy algorithm lies in the range [0.427, 0.462]. [ABSTRACT FROM AUTHOR]
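The simple greedy analyzed in the last part of the abstract can be sketched as follows: a generic ratio-greedy with a best-single-element fallback. Names and the toy coverage objective are illustrative, not the paper's code:

```python
def greedy_bsm(elements, cost, f, budget):
    """Greedy for budgeted submodular maximization: repeatedly add the
    element with the best marginal-gain-to-cost ratio that still fits the
    budget, then compare against the best single affordable element
    (which guards against the classic bad case where one expensive
    element beats the whole ratio-greedy solution)."""
    chosen, spent = set(), 0.0
    remaining = set(elements)
    while remaining:
        best, best_ratio = None, 0.0
        for e in remaining:
            if spent + cost[e] <= budget:
                gain = f(chosen | {e}) - f(chosen)
                if gain / cost[e] > best_ratio:
                    best, best_ratio = e, gain / cost[e]
        if best is None:
            break
        chosen.add(best)
        spent += cost[best]
        remaining.remove(best)
    singles = [e for e in elements if cost[e] <= budget]
    best_single = max(singles, key=lambda e: f({e}), default=None)
    if best_single is not None and f({best_single}) > f(chosen):
        return {best_single}
    return chosen

# toy coverage instance: f(S) = number of universe items covered by S
cover = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {5}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
print(greedy_bsm(cover, {'a': 2, 'b': 1, 'c': 1}, f, budget=2))
```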
- Published
- 2023
- Full Text
- View/download PDF
28. Performance of Vector Fitting Algorithm Applied to Bandpass and Baseband Systems.
- Author
-
Vrinda, K., Murty, N. S., and Kurup, Dhanesh G.
- Subjects
VECTOR analysis ,NOISE measurement ,APPROXIMATION algorithms ,BANDPASS filters ,MICROSTRIP filters - Abstract
This article presents a performance evaluation of the Vector Fitting Algorithm (VFA) from a system identification perspective. In this paper, VFA is first applied to known baseband and bandpass systems, namely Butterworth lowpass and bandpass filters, to analyze the algorithm's pole-residue extraction ability on band-limited noisy data. The poles identified by the algorithm for different bandwidths and noise powers are compared with the actual system poles of the baseband and bandpass systems. It is concluded that the algorithm is capable of identifying the actual system poles even if the capture bandwidth is less than the 3 dB bandwidth, which is a significant observation of this paper. It is also seen that the system identification performance with noisy data is better for baseband systems than for bandpass systems. Further, a practical investigation is carried out to evaluate VFA performance for modeling a microstrip coupled line filter in the presence of noise. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
29. Preface of STACS 2016 Special Issue.
- Author
-
Dürr, Christoph and Vollmer, Heribert
- Subjects
DATA structures ,APPROXIMATION algorithms ,DISCRETE uniform distribution - Published
- 2018
- Full Text
- View/download PDF
30. A Spatial Logistic Regression Model Based on a Valid Skew-Gaussian Latent Field.
- Author
-
Tadayon, Vahid and Saber, Mohammad Mehdi
- Subjects
REGRESSION analysis ,GAUSSIAN processes ,LOGISTIC regression analysis ,EXPECTATION-maximization algorithms ,RANDOM fields ,PARAMETER estimation ,APPROXIMATION algorithms - Abstract
Logistic regression is commonly used to estimate the association of one or more independent variables with a binary dependent outcome. In many applications latent sources are both spatially dependent and non-Gaussian; thus, it is desirable to exploit both properties jointly. Spatial logistic regression is a well-established technique for including spatial dependence in logistic regression models. In this paper, we develop a spatial logistic regression model based on a valid skew-Gaussian random field. For parameter estimation, we use a Monte Carlo extension of the EM algorithm along with an approximation based on the standard logistic function. A simulation study is carried out to assess the performance of the proposed model and to compare the results with a recently introduced model of established efficiency. The identifiability of the parameters is investigated as well. As an illustration, an application to the Meuse heavy metals dataset is presented. Supplementary materials accompanying this paper appear online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. An approximation algorithm for virtual machine placement in cloud data centers.
- Author
-
Mahmoodabadi, Zahra and Nouri-Baygi, Mostafa
- Subjects
VIRTUAL machine systems ,APPROXIMATION algorithms ,SERVER farms (Computer network management) ,VARIABLE costs ,OVERHEAD costs ,ENERGY consumption - Abstract
This study addresses the energy efficiency challenge in cloud data centers by examining the Virtual Machine Placement (VMP) problem. VMP involves mapping virtual machines (VMs) to physical machines (PMs) under capacity constraints. The paper focuses on the bin packing with linear usage cost (BPLUC) variant of bin packing, in which the cost of a used bin includes a fixed and a variable component. We prove that every approximation algorithm for bin packing and vector bin packing can be used for BPLUC and VBPLUC, respectively. We propose a more power-efficient approach to VMP by applying a vector bin packing algorithm to minimize power consumption in data centers. We test the proposed algorithm on various synthetic and real workloads, and the experimental results demonstrate that it is more power-efficient than existing algorithms for VMP. The findings suggest that the proposed algorithm has significant implications for energy-efficient strategies in cloud data centers. Overall, this study contributes to the development of energy-efficient approaches to VMP that can help reduce power consumption and improve the sustainability of cloud data centers. [ABSTRACT FROM AUTHOR]
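To make the BPLUC cost model concrete, here is a first-fit-decreasing sketch under an assumed cost of a fixed charge per opened bin plus a per-unit-of-load charge; it is a baseline heuristic, not the paper's algorithm:

```python
def ffd_with_linear_cost(items, capacity, fixed_cost, unit_cost):
    """First-fit-decreasing for bin packing with linear usage cost:
    each opened bin costs fixed_cost plus unit_cost per unit of load
    (a sketch of the BPLUC cost model; in the VMP reading, bins are PMs,
    items are VM resource demands, and cost models power draw)."""
    bins = []  # remaining free capacity of each opened bin
    for size in sorted(items, reverse=True):
        for i, free in enumerate(bins):
            if size <= free:
                bins[i] = free - size
                break
        else:
            bins.append(capacity - size)  # open a new bin
    total_load = sum(items)
    cost = len(bins) * fixed_cost + total_load * unit_cost
    return len(bins), cost
```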
- Published
- 2024
- Full Text
- View/download PDF
32. Dispersing facilities on planar segment and circle amidst repulsion.
- Author
-
Singireddy, Vishwanath R. and Basappa, Manjanna
- Subjects
LOCATION problems (Programming) ,POLYNOMIAL time algorithms ,APPROXIMATION algorithms ,DYNAMIC programming ,CIRCLE - Abstract
In this paper, we study restricted variations of the following obnoxious facility location problem in the plane: locate k new obnoxious facilities amidst n ordered demand points (existing facility sites) in the plane so that none of the existing facility sites are affected. We study this problem in two restricted settings, where the obnoxious facilities are constrained to be centered on (i) a predetermined horizontal line segment pq, and (ii) the boundary arc of a predetermined disk C. For the problem in (i), we first give a (1 − ϵ)-approximation algorithm that runs in O((n + k) log(||pq||/(2(k − 1)ϵ))) time, where ϵ > 0 and ||pq|| is the length of pq. We then propose two exact polynomial-time algorithms, one based on doing binary search on all candidates and the other on the parametric search technique of Megiddo (J ACM 30:852–865, 1983); they take O((nk)^2) time and O((n + k)^2) time, respectively. Continuing further, using the improved parametric technique, we give an O(n log n)-time algorithm for k = 2. We also show that the (1 − ϵ)-approximation algorithm of (i) can be easily adapted to solve the problem (ii) with an extra multiplicative factor of n in the running time. Finally, we give an O(n^3 k)-time dynamic programming solution to the min-sum obnoxious facility location problem restricted to a line segment where the demand points are associated with weights, and the goal is to minimize the sum of the weights of the points covered. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. QR decomposition based low rank approximation for Gaussian process regression.
- Author
-
Thomas, Emil and Sarin, Vivek
- Subjects
KRIGING ,LOW-rank matrices ,APPROXIMATION algorithms ,MATRIX decomposition - Abstract
This paper presents a QR decomposition based low-rank approximation algorithm for training and prediction in Gaussian process regression. Conventional low-rank methods like FITC, Nyström, etc., rely on low-rank approximations to the full kernel matrix that are derived from a set of representative data points. Training with FITC involves the selection of these representative points and is computationally intensive when the dimension of the dataset is large. Prediction accuracy of Nyström suffers when the number of representative points is small or when the length scale is small. The algorithm proposed here uses the thin QR decomposition of the low-rank matrix used in the Nyström approximation. We show that the marginal likelihood and its derivatives needed for training can be split into the subspace belonging to that approximation and its orthogonal complement. During prediction, we further restrict the target vectors into this subspace and thus eliminate the numerical error caused by very low noise variance. Use of the QR factorization improves the training and prediction performance. Experiments on real and synthetic data show that our approach is more accurate and faster than conventional methods. Our algorithm performs well at low length scales and achieves the prediction accuracy comparable to the full kernel matrix approach even for low rank approximations, without the need to optimize the representative points. This results in a simpler and faster approximation that could be used to scale Gaussian process regression to large datasets. [ABSTRACT FROM AUTHOR]
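The core linear algebra can be sketched as follows, assuming an RBF kernel and working with the thin QR factors of the Nyström factor Φ = K_nm L^{-T} (where K_mm = L L^T); names and details are illustrative, in the spirit of the paper rather than its implementation:

```python
import numpy as np

def rbf(A, B, ls=0.2):
    # squared-exponential kernel between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def nystrom_qr_gp_mean(X, y, Z, Xs, noise=1e-2):
    """GP-regression predictive mean at Xs using the Nystrom low-rank
    approximation K ~ Phi Phi^T with Phi = K_nm L^{-T}, stabilised by a
    thin QR decomposition of Phi. Z is the set of representative points."""
    Kmm = rbf(Z, Z) + 1e-10 * np.eye(len(Z))      # jitter for Cholesky
    L = np.linalg.cholesky(Kmm)
    Phi = np.linalg.solve(L, rbf(X, Z).T).T       # n x m Nystrom factor
    Q, R = np.linalg.qr(Phi)                      # thin QR, Phi = Q R
    m = R.shape[1]
    # posterior-mean weights via Woodbury: (R^T R + noise*I) w = R^T Q^T y
    w = np.linalg.solve(R.T @ R + noise * np.eye(m), R.T @ (Q.T @ y))
    Phis = np.linalg.solve(L, rbf(Xs, Z).T).T     # factor at test points
    return Phis @ w
```

With Z equal to the full training set, the factor is full rank and the prediction coincides with the standard regularized kernel solution.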
- Published
- 2023
- Full Text
- View/download PDF
34. A Novel Regularization Paradigm for the Extreme Learning Machine.
- Author
-
Zhang, Yuao, Dai, Yunwei, and Wu, Qingbiao
- Subjects
MACHINE learning ,REGULARIZATION parameter ,APPROXIMATION algorithms ,BIG data ,ITERATIVE learning control ,GENERALIZATION - Abstract
Due to its fast training speed and powerful approximation capabilities, the extreme learning machine (ELM) has attracted a lot of attention in recent years. However, the basic ELM still has some drawbacks, such as a tendency to over-fit and susceptibility to noisy data. By adding a regularization term to the basic ELM, the regularized extreme learning machine (R-ELM) can dramatically improve its generalization and stability. In the R-ELM, choosing an appropriate regularization parameter is critical, since it regulates the fitting and generalization capabilities of the model. In this paper, we propose the regularized functional extreme learning machine (RF-ELM), which employs a regularization functional instead of a preset regularization parameter in order to choose appropriate regularization parameters adaptively. The regularization functional is defined in terms of the output weights, and the successive approximation iterative algorithm is utilized to solve for the output weights, so that their values are obtained simultaneously at each iteration step. We also develop a parallel version of RF-ELM (PRF-ELM) to deal with big data tasks. Furthermore, the analysis of convexity and convergence ensures the validity of the model training. Finally, experiments on function approximation and the UCI repository, with and without noisy data, demonstrate the superiority and competitiveness of our proposed models. [ABSTRACT FROM AUTHOR]
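For reference, the basic R-ELM that RF-ELM builds on can be sketched in a few lines: a random sigmoid hidden layer with a ridge-regularized closed-form solve for the output weights. The preset parameter `lam` is exactly what RF-ELM replaces by a functional of the output weights (names here are my own):

```python
import numpy as np

def train_relm(X, y, n_hidden=50, lam=1e-2, rng=None):
    """Basic R-ELM: random input weights and biases, sigmoid hidden
    layer, output weights beta from the ridge normal equations
    (H^T H + lam*I) beta = H^T y. Returns a prediction function."""
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden activations
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

    def predict(Xnew):
        Hn = 1.0 / (1.0 + np.exp(-(Xnew @ W + b)))
        return Hn @ beta

    return predict
```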
- Published
- 2023
- Full Text
- View/download PDF
35. Solving quantum circuit compilation problem variants through genetic algorithms.
- Author
-
Arufe, Lis, Rasconi, Riccardo, Oddi, Angelo, Varela, Ramiro, and González, Miguel Ángel
- Subjects
GENETIC algorithms ,OPTIMIZATION algorithms ,APPROXIMATION algorithms ,QUANTUM gates ,GENETIC variation ,QUANTUM computers - Abstract
The gate-based model is one of the leading quantum computing paradigms for representing quantum circuits. Within this paradigm, a quantum algorithm is expressed in terms of a set of quantum gates that are executed on the quantum hardware over time, subject to a number of constraints whose satisfaction must be guaranteed before running the circuit, to allow for feasible execution. The need to guarantee this feasibility condition gives rise to the Quantum Circuit Compilation Problem (QCCP). The QCCP has been shown to be NP-complete and can be considered a planning and scheduling problem. In this paper, we consider quantum compilation instances deriving from the general Quantum Approximation Optimization Algorithm (QAOA), applied to the MaxCut problem, devised to be executed on Noisy Intermediate Scale Quantum (NISQ) hardware architectures. More specifically, in addition to the basic QCCP version, we also tackle other variants of the same problem, such as the QCCP-X (QCCP with crosstalk constraints), the QCCP-V (QCCP with variable qubit state initialization), as well as the QCCP-VX that includes both previous variants. All problem variants are solved using genetic algorithms. We perform an experimental study across a conventional set of instances taken from the literature, and show that the proposed genetic algorithm, termed GAVX, outperforms previous approaches in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. Probabilistic Analysis of Optimization Problems on Sparse Random Shortest Path Metrics.
- Author
-
Klootwijk, Stefan and Manthey, Bodo
- Subjects
RANDOM graphs ,SPARSE graphs ,TRAVELING salesman problem ,DENSE graphs ,COMPLETE graphs ,METRIC spaces - Abstract
Simple heuristics for (combinatorial) optimization problems often show a remarkable performance in practice, which worst-case analysis often falls short of explaining. Because of this, "beyond worst-case analysis" of algorithms has recently gained a lot of attention, including probabilistic analysis of algorithms. The instances of many (combinatorial) optimization problems are essentially discrete metric spaces. Probabilistic analysis for such metric optimization problems has nevertheless mostly been conducted on instances drawn from Euclidean space, which provides a structure that is usually heavily exploited in the analysis. However, most instances from practice are not Euclidean, and little work has been done on metric instances drawn from other, more realistic distributions. Some initial results have been obtained in recent years, using random shortest path metrics generated from dense graphs (either complete graphs or Erdős–Rényi random graphs). In this paper we extend these findings to sparse graphs, with a focus on sparse graphs with fast-growing cut sizes, i.e. graphs for which |δ(U)| = Ω(|U|^ε) for some constant ε ∈ (0, 1) and all subsets U of the vertices, where δ(U) is the set of edges connecting U to the remaining vertices. A random shortest path metric is constructed by drawing independent random edge weights for each edge in the graph and setting the distance between every pair of vertices to the length of a shortest path between them with respect to the drawn weights. For such instances generated from a sparse graph with fast-growing cut sizes, we prove that the greedy heuristic for the minimum-distance maximum matching problem, and the nearest neighbor and insertion heuristics for the traveling salesman problem, all achieve a constant expected approximation ratio.
Additionally, for instances generated from an arbitrary sparse graph, we show that the 2-opt heuristic for the traveling salesman problem also achieves a constant expected approximation ratio. [ABSTRACT FROM AUTHOR]
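The random shortest path metric model, together with one of the analyzed heuristics, can be sketched directly (Exp(1) edge weights are an assumed choice of distribution for illustration):

```python
import heapq
import random

def random_shortest_path_metric(adj, rng):
    """Draw i.i.d. Exp(1) weights on the edges of the given graph
    (adjacency lists over vertices 0..n-1) and return the all-pairs
    shortest-path distances -- a random shortest path metric."""
    n = len(adj)
    w = {}
    for u in range(n):
        for v in adj[u]:
            if u < v:
                w[(u, v)] = rng.expovariate(1.0)
    dist = [[float('inf')] * n for _ in range(n)]
    for s in range(n):                      # Dijkstra from every source
        dist[s][s] = 0.0
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[s][u]:
                continue
            for v in adj[u]:
                nd = d + w[(min(u, v), max(u, v))]
                if nd < dist[s][v]:
                    dist[s][v] = nd
                    heapq.heappush(pq, (nd, v))
    return dist

def nearest_neighbor_tour(dist):
    """Nearest-neighbor TSP heuristic, one of the heuristics analyzed:
    always move to the closest unvisited vertex, then return to the start."""
    n = len(dist)
    tour, seen = [0], {0}
    while len(tour) < n:
        u = tour[-1]
        v = min((x for x in range(n) if x not in seen), key=lambda x: dist[u][x])
        tour.append(v)
        seen.add(v)
    return sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1)) + dist[tour[-1]][tour[0]]
```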
- Published
- 2023
- Full Text
- View/download PDF
37. Deterministic Dynamic Matching in Worst-Case Update Time.
- Author
-
Kiss, Peter
- Subjects
DETERMINISTIC algorithms ,APPROXIMATION algorithms ,BATCH processing - Abstract
We present deterministic algorithms for maintaining a (3/2 + ϵ)- and a (2 + ϵ)-approximate maximum matching in a fully dynamic graph with worst-case update times Ô(n) and Õ(1), respectively. The fastest known deterministic worst-case update time algorithms achieving approximation ratios (2 − δ) (for any δ > 0) and (2 + ϵ) were both shown by Roghani et al. (Beating the folklore algorithm for dynamic matching, 2021), with update times O(n^{3/4}) and O_ϵ(n), respectively. We close the gap between worst-case and amortized algorithms for the two approximation ratios, as the best deterministic amortized update times for the problem are O_ϵ(n) and Õ(1), shown in Bernstein and Stein (in: Proceedings of the twenty-seventh annual ACM-SIAM symposium on discrete algorithms, 2016) and Bhattacharya and Kiss (in: 48th international colloquium on automata, languages, and programming, ICALP 2021, 12–16 July, Glasgow, 2021), respectively. The algorithm achieving (3/2 + ϵ)-approximation builds on the EDCS concept introduced by the influential paper of Bernstein and Stein (in: International colloquium on automata, languages, and programming, Springer, Berlin, 2015). Say that H is an (α, δ)-approximate matching sparsifier if at all times H satisfies μ(H) · α + δ · n ≥ μ(G) (define (α, δ)-approximation similarly for matchings). We show how to maintain a locally damaged version of the EDCS which is a (3/2 + ϵ, δ)-approximate matching sparsifier. We further show how to reduce the maintenance of an α-approximate maximum matching to the maintenance of an (α, δ)-approximate maximum matching, based on an observation of Assadi et al. (in: Proceedings of the twenty-seventh annual ACM-SIAM symposium on discrete algorithms, SODA 2016, Arlington, VA, USA, January 10–12, 2016). Our reduction requires an update time blow-up of Ô(1) or Õ(1) and is deterministic or randomized against an adaptive adversary, respectively.
To achieve (2 + ϵ)-approximation, we improve on the update time guarantee of an algorithm of Bhattacharya and Kiss (in: 48th international colloquium on automata, languages, and programming, ICALP 2021, 12–16 July, Glasgow, 2021). In order to achieve both results, we explicitly state a method implicitly used in Nanongkai and Saranurak (in: Proceedings of the annual ACM symposium on theory of computing, 2017) and Bernstein et al. (Fully-dynamic graph sparsifiers against an adaptive adversary, 2020) that allows us to transform dynamic algorithms capable of processing the input in batches into dynamic algorithms with worst-case update time. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Scheduling on a graph with release times.
- Author
-
Yu, Wei, Golin, Mordecai, and Zhang, Guochuan
- Subjects
TRAVELING salesman problem ,APPROXIMATION algorithms ,CITIES & towns ,METRIC spaces ,SCHEDULING - Abstract
We study a generalization of the well-known traveling salesman problem in a metric space, in which each city is associated with a release time, and the salesman has to visit each city at or after its release time. There is a naive 5/2-approximation algorithm in which the salesman simply starts to route the network after all cities are released. Interestingly, this bound has not been improved for more than two decades. In this paper, we revisit the problem and achieve the following results. First, we devise an approximation algorithm with performance ratio less than 5/2 when the number of distinct release times is fixed. Then, we analyze a natural class of algorithms and show that no performance ratio better than 5/2 is possible unless the Metric TSP can be approximated within a ratio strictly less than 3/2, which is a well-known longstanding open question. Finally, we consider a special case where the graph has a heavy edge and present an approximation algorithm with performance ratio less than 5/2. [ABSTRACT FROM AUTHOR]
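The naive bound is easy to see in code: the wait costs at most OPT (the optimum must visit the last-released city at or after its release time), and following a 3/2-approximate TSP tour costs at most (3/2)·OPT, giving 5/2 in total. A sketch (names are illustrative):

```python
def wait_then_route(release, dist, tour):
    """The naive strategy from the abstract: wait at the depot until the
    last city is released, then follow a precomputed tour. Completion
    time = max release time + tour length; with a 3/2-approximate TSP
    tour this yields the 5/2 bound, since both terms are at most (a
    constant times) the optimum."""
    start = max(release)
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
    return start + length
```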
- Published
- 2023
- Full Text
- View/download PDF
39. Approximation algorithms for the individually fair k-center with outliers.
- Author
-
Han, Lu, Xu, Dachuan, Xu, Yicheng, and Yang, Ping
- Subjects
APPROXIMATION algorithms ,METRIC spaces ,FAIRNESS - Abstract
In this paper, we propose and investigate the individually fair k-center with outliers (IFkCO). In the IFkCO, we are given an n-sized vertex set in a metric space, as well as integers k and q. At most k vertices can be selected as centers and at most q vertices as outliers. The centers are selected to serve all the not-an-outlier (i.e., served) vertices. The so-called individual fairness constraint requires that every served vertex has a selected center not too far away: more precisely, there must be at least one center among the ⌈(n − q)/k⌉ closest neighbors of every served vertex, since every center serves (n − q)/k vertices on average. The objective is to select centers and outliers and assign every served vertex to some center, such that the maximum fairness ratio over all served vertices is minimized, where the fairness ratio of a vertex is defined as the ratio between its distance to the assigned center and its distance to its ⌈(n − q)/k⌉-th closest neighbor. As our main contribution, a 4-approximation algorithm is presented, based on which we develop an improved algorithm from a practical perspective. Extensive experimental results on both synthetic and real-world datasets illustrate the effectiveness of the proposed algorithms. [ABSTRACT FROM AUTHOR]
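The per-vertex fairness scale used in the constraint can be computed directly from the distance matrix (a sketch; it assumes distinct points, so the only zero in each row is the self-distance):

```python
import numpy as np

def fairness_radii(D, k, q):
    """For each vertex, the distance to its ceil((n-q)/k)-th closest
    neighbor -- the denominator of the fairness ratio in the abstract.
    D is an n x n distance matrix over distinct points."""
    n = len(D)
    t = -(-(n - q) // k)          # ceil((n - q) / k)
    sorted_rows = np.sort(D, axis=1)
    return sorted_rows[:, t]      # index t skips the 0 self-distance
```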
- Published
- 2023
- Full Text
- View/download PDF
40. Approximation algorithms for solving the heterogeneous Chinese postman problem.
- Author
-
Li, Jianping, Pan, Pengxiang, Lichen, Junran, Cai, Lijian, Wang, Wencheng, and Liu, Suding
- Abstract
In this paper, we consider the heterogeneous Chinese postman problem (the HCPP), which generalizes the k-Chinese postman problem. Specifically, given a weighted graph G = (V, E; w; r) with a length function w: E → R+ satisfying the triangle inequality, a fixed depot r ∈ V, and k vehicles with k nonuniform speeds λ1, λ2, …, λk, respectively, it is asked to find k tours in G for these k vehicles, each starting at the same depot r and collectively traversing each edge in E at least once. The objective is to minimize the maximum completion time of the vehicles, where the completion time of a vehicle is its total travel length divided by its speed. The main contribution of our paper is to show the following two results. (1) Given any small constant δ > 0, we design a 20.8765(1 + δ)-approximation algorithm to solve the HCPP, whose running time is bounded by a polynomial in the input size and 1/δ. (2) We present a (1 + Δ − 1/k)-approximation algorithm to solve the HCPP in cubic time, where Δ is the ratio of the largest vehicle speed to the smallest one. [ABSTRACT FROM AUTHOR]
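The min-max objective described above is easy to state in code. This sketch only evaluates the makespan of candidate tours under heterogeneous speeds; finding good tours is the hard part the paper addresses, and all names here are illustrative assumptions.

```python
def hcpp_objective(tours, speeds, dist):
    """Makespan of a set of tours driven at heterogeneous speeds: the
    completion time of a vehicle is its total travel length divided by its
    speed, and the objective is the maximum over all vehicles."""
    def length(tour):
        return sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
    return max(length(t) / s for t, s in zip(tours, speeds))
```

Note that edge coverage (every edge of E traversed by some tour) is a feasibility condition checked separately; the objective itself depends only on tour lengths and speeds.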
- Published
- 2023
- Full Text
- View/download PDF
41. An estimator for matching size in low arboricity graphs with two applications.
- Author
-
Jowhari, Hossein
- Abstract
In this paper, we present a new degree-based estimator for the size of a maximum matching in bounded-arboricity graphs. When the arboricity of the graph is bounded by α, the estimator gives an (α + 2)-factor approximation of the matching size. For planar graphs, we show the estimator does better and returns a 3.5-approximation of the matching size. Using this estimator, we get new results for approximating the matching size of planar graphs in the streaming and distributed models of computation. In particular, in vertex-arrival streams, we get a randomized O(√n/ε² · log n)-space algorithm for approximating the matching size of a planar graph on n vertices within a (3.5 + ε) factor. Similarly, we get a simultaneous protocol in the vertex-partition model for approximating the matching size within a (3.5 + ε) factor using O(n^{2/3}/ε² · log n) communication from each player. In comparison with previous works, the estimator in this paper does not need to know the arboricity of the input graph and improves the approximation factor in the case of planar graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. An outer-approximation guided optimization approach for constrained neural network inverse problems.
- Author
-
Cheon, Myun-Seok
- Subjects
INVERSE problems ,CONVEX domains ,CONSTRAINED optimization ,APPROXIMATION algorithms - Abstract
This paper discusses an outer-approximation guided optimization method for constrained neural network inverse problems with rectified linear units. A constrained neural network inverse problem asks for the best set of input values of a given trained neural network that produces a predefined desired output in the presence of constraints on the input values. This paper analyzes the characteristics of optimal solutions of neural network inverse problems with rectified activation units and proposes an outer-approximation algorithm that exploits these characteristics. The proposed outer-approximation guided optimization comprises primal and dual phases. The primal phase incorporates neighbor curvatures with neighbor outer-approximations to expedite the process. The dual phase identifies and utilizes the structure of local convex regions to improve convergence to a local optimal solution. Finally, computational experiments demonstrate the superiority of the proposed algorithm compared to a projected gradient method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. Joint Reflectance Field Estimation and Sparse Representation for Face Image Illumination Preprocessing and Recognition.
- Author
-
Zhang, Jian, Liu, Weihua, Bo, Liling, Zhang, Heng, Li, Hongran, and Xu, Shuai
- Subjects
IMAGE representation ,FACE ,REFLECTANCE ,LIGHTING ,APPROXIMATION algorithms ,FACE perception ,HUMAN facial recognition software - Abstract
Illumination preprocessing is an important ingredient in handling the lighting-variation challenge in face recognition. Nonetheless, existing methods are usually designed independently of the face recognition methods, and the interaction between them is not yet well explored. In this paper, we formulate face image illumination preprocessing and recognition in a unified sparse representation framework and propose a novel joint reflectance field estimation and sparse representation (JRSR) method for face recognition under extreme lighting conditions. The proposed method separates the identity factor and the interfering illumination of a query sample simultaneously via a single nonconvex sparse optimization model. We also present an efficient approximation algorithm to solve JRSR. Evaluations on several face databases and the experimental results of face recognition under illumination variation clearly demonstrate the advantages of the proposed JRSR algorithm in illumination preprocessing efficiency and recognition accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. Single-source shortest paths in the CONGEST model with improved bounds.
- Author
-
Chechik, Shiri and Mukhtar, Doron
- Subjects
DISTRIBUTED algorithms ,WEIGHTED graphs ,DISTRIBUTED computing ,DIRECTED graphs ,GRAPH algorithms ,APPROXIMATION algorithms - Abstract
Computing shortest paths from a single source is one of the central problems studied in the CONGEST model of distributed computing. After many years in which no algorithmic progress was made, Elkin [STOC '17] provided the first improvement over the distributed Bellman-Ford algorithm. Since then, several improved algorithms have been published. The state-of-the-art algorithm for weighted directed graphs (with polynomially bounded non-negative integer weights) requires Õ(min{√(nD), √n·D^{1/4} + n^{3/5} + D}) rounds [Forster and Nanongkai, FOCS '18], which is still quite far from the known lower bound of Ω̃(√n + D) rounds [Elkin, STOC '04]; here D is the diameter of the underlying network and n is the number of vertices in it. For the (1 + o(1))-approximate version of this problem and the same class of graphs, Forster and Nanongkai [FOCS '18] obtained a better upper bound of Õ(√n·D^{1/4} + D) rounds. In the same paper, they stated that achieving the same bound for the exact case remains a major open problem. In this paper we resolve the above-mentioned problem by devising a new randomized algorithm for computing shortest paths from a single source in Õ(√n·D^{1/4} + D) rounds. Our algorithm is based on a novel weight-modifying technique that allows us to compute approximate distances that preserve a certain form of the triangle inequality for the edges in the graph. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. Guest Editorial: Special Issue on Approximation and Online Algorithms.
- Author
-
Solis-Oba, Roberto and Fleischer, Rudolf
- Subjects
ONLINE algorithms ,APPROXIMATION algorithms ,BIN packing problem ,PLANAR graphs ,BIPARTITE graphs - Published
- 2019
- Full Text
- View/download PDF
46. Approximation algorithms for coupled task scheduling minimizing the sum of completion times.
- Author
-
Fischer, David and Györgyi, Péter
- Subjects
APPROXIMATION algorithms ,SCHEDULING ,WORKING hours ,PRODUCTION scheduling - Abstract
In this paper we consider the coupled task scheduling problem with exact delay times on a single machine, with the objective of minimizing the total completion time of the jobs. We provide constant-factor approximation algorithms for several variants of this problem that are known to be NP-hard, while also proving NP-hardness for two variants whose complexity was unknown before. Using these results, together with constant-factor approximations for the makespan objective from the literature, we also introduce the first results on bi-objective approximation in the coupled task setting. [ABSTRACT FROM AUTHOR]
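In the coupled-task setting, each job consists of two operations separated by an exact delay, and all operations share one machine. The sketch below only evaluates a given schedule's total completion time and checks exact-delay feasibility; the field names and layout are illustrative assumptions, not the paper's algorithms.

```python
def total_completion_time(jobs, starts):
    """Evaluate a single-machine coupled-task schedule. Job j = (a_j, L_j, b_j)
    runs its first operation of length a_j at starts[j]; exactly L_j time units
    after it finishes, the second operation of length b_j must run. Returns the
    sum of completion times after checking that no two operations overlap."""
    ops = []
    for j, (a, L, b) in enumerate(jobs):
        s = starts[j]
        ops.append((s, s + a))                     # first operation
        ops.append((s + a + L, s + a + L + b))     # second, after exact delay L
    ops.sort()
    assert all(ops[i][1] <= ops[i + 1][0] for i in range(len(ops) - 1)), "overlap"
    return sum(starts[j] + a + L + b for j, (a, L, b) in enumerate(jobs))
```

The difficulty of the problem comes precisely from the exact delays: the gap of one job must be filled with operations of other jobs, which is what the paper's approximation algorithms have to orchestrate.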
- Published
- 2023
- Full Text
- View/download PDF
47. An approximation algorithm for indefinite mixed integer quadratic programming.
- Author
-
Pia, Alberto Del
- Subjects
QUADRATIC programming ,APPROXIMATION algorithms ,INTEGER programming ,POLYNOMIAL time algorithms ,MATRIX decomposition ,SYMMETRIC matrices - Abstract
In this paper, we give an algorithm that finds an ϵ-approximate solution to a mixed integer quadratic programming (MIQP) problem. The algorithm runs in polynomial time if the rank of the quadratic function and the number of integer variables are fixed; such a dependence of the running time on these parameters is to be expected unless P = NP. In order to design this algorithm we introduce the novel concepts of spherical-form MIQP and of aligned vectors, and we provide a number of results of independent interest. In particular, we give a strongly polynomial algorithm to find a symmetric decomposition of a matrix, and show a related result on simultaneous diagonalization of matrices. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. Public Bayesian Persuasion: Being Almost Optimal and Almost Persuasive.
- Author
-
Castiglioni, Matteo, Celli, Andrea, and Gatti, Nicola
- Subjects
PERSUASION (Psychology) ,APPROXIMATION algorithms ,POLYNOMIAL time algorithms ,UTILITY functions ,COMPUTATIONAL complexity ,BILEVEL programming ,HOUGH transforms - Abstract
We study algorithmic Bayesian persuasion problems in which the principal (a.k.a. the sender) has to persuade multiple agents (a.k.a. receivers) through public communication channels. Specifically, our model follows the multi-receiver model with no inter-agent externalities introduced by Arieli and Babichenko (J Econ Theory 182:185–217, 2019). It is known that the problem of computing a sender-optimal public persuasive signaling scheme is not approximable even in simple settings. Therefore, prior works usually focus on determining restricted classes of the problem for which efficient approximation is possible. Typically, positive results in this space amount to finding bi-criteria approximation algorithms that yield an almost optimal and almost persuasive solution in polynomial time. In this paper, we take a different perspective and study the persuasion problem in the general setting where the space of the states of nature, the action space of the receivers, and the utility function of the sender can be arbitrary. We fully characterize the computational complexity of computing a bi-criteria approximation of an optimal public signaling scheme in such settings. In particular, we show that, assuming the Exponential Time Hypothesis, solving this problem requires at least a quasi-polynomial number of steps even in instances with simple utility functions and binary action spaces, such as an election with the k-voting rule. In doing so, we prove that a relaxed version of the Maximum Feasible Subsystem of Linear Inequalities problem requires at least quasi-polynomial time to be solved. Finally, we close the gap by providing a quasi-polynomial-time bi-criteria approximation algorithm for arbitrary public persuasion problems that, under mild assumptions, yields a QPTAS. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. New semidefinite relaxations for a class of complex quadratic programming problems.
- Author
-
Xu, Yingzhe, Lu, Cheng, Deng, Zhibin, and Liu, Ya-Feng
- Subjects
QUADRATIC programming ,SEMIDEFINITE programming ,NONCONVEX programming ,APPROXIMATION algorithms ,COMPUTER performance ,SIGNAL processing - Abstract
In this paper, we propose some new semidefinite relaxations for a class of nonconvex complex quadratic programming problems, which appear widely in signal processing and power systems. By deriving new valid constraints on the matrix variables in the lifted space, we obtain enhanced semidefinite relaxations of the complex quadratic programming problems. We then compare the proposed semidefinite relaxations with existing ones and show that the newly proposed relaxations can be strictly tighter than the previous ones. Moreover, the proposed semidefinite relaxations apply to more general cases of complex quadratic programming problems, whereas the previous ones were designed only for special cases. Numerical results indicate that the proposed semidefinite relaxations not only provide tighter relaxation bounds but also improve some existing approximation algorithms by finding better sub-optimal solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. Construction of node- and link-fault-tolerant virtual backbones in wireless networks.
- Author
-
Liang, Jiarong, Zeng, Weijian, and Du, Xiaojiang
- Subjects
VIRTUAL networks ,APPROXIMATION algorithms ,DOMINATING set ,FAULT tolerance (Engineering) - Abstract
In wireless sensor networks (WSNs), the virtual backbone (VB) consists of a subset of nodes that are responsible for routing tasks. Fault-tolerant VBs are desirable for overcoming the effects of node or link failure in WSNs. Usually, a homogeneous WSN is abstracted as a unit disk graph (UDG), and a VB as a connected dominating set (CDS). This paper introduces the concept of a fault-tolerant CDS in a UDG, called a ((2, 2), m)-CDS, which differs from a traditional fault-tolerant CDS ((k, m)-CDS). A ((2, 2), m)-CDS can still function even if one edge or one node fails, which implies that it possesses fault tolerance for both nodes and edges, in contrast to traditional (k, m)-CDSs, which are fault-tolerant only for nodes. We then propose a 5α-approximation algorithm for computing a ((2, 2), m)-CDS, where α is the performance ratio for computing a (2, m)-CDS, and analyze its time complexity. In simulations, we compare our algorithm with an existing algorithm for fault-tolerant CDS construction in terms of CDS size. The simulations show that our algorithm outperforms its competitor in constructing a quality VB. [ABSTRACT FROM AUTHOR]
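The node-and-edge fault tolerance that distinguishes a ((2, 2), m)-CDS from a (k, m)-CDS can be illustrated with a brute-force checker: a backbone survives if, after any single node or edge failure, the surviving set still dominates and connects the surviving graph. This is a simplified reading of the definition for illustration, not the paper's exact construction or its 5α-approximation algorithm.

```python
def is_cds(adj, dom):
    """Is dom a connected dominating set of the graph given as an adjacency
    dict mapping each vertex to its set of neighbours?"""
    if not dom:
        return False
    # domination: every vertex is in dom or adjacent to a dom vertex
    if any(v not in dom and not (adj[v] & dom) for v in adj):
        return False
    # connectivity of the subgraph induced by dom (DFS over dom only)
    seen, stack = set(), [next(iter(dom))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend((adj[u] & dom) - seen)
    return seen == dom

def survives_single_failure(adj, dom):
    """Brute-force check in the spirit of a ((2,2),m)-CDS backbone: dom keeps
    dominating and connecting the surviving graph after any one node failure
    or any one edge failure."""
    for f in adj:  # single node failures
        sub = {v: adj[v] - {f} for v in adj if v != f}
        if not is_cds(sub, dom - {f}):
            return False
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    for e in edges:  # single edge failures
        u, v = tuple(e)
        sub = {x: set(adj[x]) for x in adj}
        sub[u].discard(v); sub[v].discard(u)
        if not is_cds(sub, dom):
            return False
    return True
```

For example, the whole vertex set of a 4-cycle passes this check, while the pair {0, 1} in a complete graph K4 fails it: removing the edge between the two backbone nodes disconnects the induced backbone, even though domination is preserved.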
- Published
- 2023
- Full Text
- View/download PDF