4 results
Search Results
2. Optimizing Leader Influence in Networks Through Selection of Direct Followers.
- Author
- Mai, Van Sy and Abed, Eyad H.
- Subjects
- *GREEDY algorithms, *MATHEMATICAL optimization, *MATHEMATICAL models, *ALGORITHMS, *MATHEMATICAL analysis
- Abstract
This paper considers the problem of a leader that seeks to optimally influence the opinions of agents in a directed network through connecting with a limited number of the agents (“direct followers”), possibly in the presence of a fixed competing leader. The settings involving a single leader and two competing leaders are unified into a general combinatorial optimization problem, for which two heuristic approaches are developed. The first approach is based on a convex relaxation scheme, possibly in combination with the $\ell_1$-norm regularization technique, and the second is based on a greedy selection strategy. The main technical novelties of this work are the establishment of supermodularity of the objective function and convexity of its continuous relaxation. The greedy approach is guaranteed a lower bound on the approximation ratio sharper than $(1-1/e)$, while the convex approach can benefit from efficient (customized) numerical solvers to produce practically comparable solutions, possibly with faster computation times. The two approaches can be combined to provide improved results. In numerical examples, the approximation ratio can be made to reach 90% or higher depending on the number of direct followers. [ABSTRACT FROM AUTHOR]
- Published
- 2019
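The greedy selection strategy described in the abstract above is easy to make concrete. Below is a minimal, generic greedy subset-selection sketch of the kind such guarantees apply to; the coverage-style objective and the agent labels are hypothetical stand-ins, not the paper's actual influence model.

```python
def greedy_select(candidates, k, objective):
    """Generic greedy subset selection: repeatedly add the single element
    with the largest marginal gain. For objectives with (super/sub)modular
    structure, this simple loop carries constant-factor approximation
    guarantees such as the classical (1 - 1/e) bound."""
    chosen = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for c in candidates - chosen:
            gain = objective(chosen | {c}) - objective(chosen)  # marginal gain of adding c
            if gain > best_gain:
                best, best_gain = c, gain
        chosen.add(best)
    return chosen

# Toy usage with a hypothetical coverage-style objective: each candidate
# "direct follower" sways a set of opinions, and we pick k = 2 of them.
coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}

def score(S):
    return len(set().union(*(coverage[i] for i in S))) if S else 0

print(greedy_select(set(coverage), 2, score))  # picks 3 first, then 1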
3. Sparse Learning with Stochastic Composite Optimization.
- Author
- Zhang, Weizhong, Zhang, Lijun, Jin, Zhongming, Jin, Rong, Cai, Deng, Li, Xuelong, Liang, Ronghua, and He, Xiaofei
- Subjects
- *EDUCATION, *MATHEMATICAL programming, *ALGORITHMS, *MATHEMATICAL optimization, *FIBERS
- Abstract
In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning that aims to learn a sparse solution from a composite function. Most of the recent SCO algorithms have already reached the optimal expected convergence rate $\mathcal{O}(1/\lambda T)$ with $\delta$…
- Published
- 2017
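Although this abstract is cut off, its setting, stochastic composite optimization for sparse learning, has a textbook baseline worth showing: proximal stochastic gradient descent on a least-squares loss plus an $\ell_1$ penalty. This is a generic sketch for orientation, not the algorithm proposed in the paper; all names and the step-size schedule are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sgd_lasso(X, y, lam=0.1, epochs=5, seed=0):
    """Proximal SGD for the composite objective
        (1/2) * (x_i^T w - y_i)^2  averaged over i,  plus  lam * ||w||_1.
    Illustrative baseline only; step sizes here are a simple 1/sqrt(t) decay."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 0.1 / np.sqrt(t)               # decaying step size (illustrative)
            grad = (X[i] @ w - y[i]) * X[i]      # stochastic gradient of the smooth part
            w = soft_threshold(w - eta * grad, eta * lam)  # prox step on the l1 part
    return w

# Toy usage: recover a sparse weight vector from noisy linear measurements.
rng = np.random.default_rng(1)
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]
X = rng.normal(size=(200, 20))
y = X @ w_true + 0.01 * rng.normal(size=200)
w_hat = prox_sgd_lasso(X, y, lam=0.05, epochs=20)
print(np.nonzero(np.abs(w_hat) > 1e-3)[0])  # should be dominated by the first three indices
```

The soft-thresholding prox step is what produces exact zeros in the iterates; plain SGD on the same objective would only drive weights near zero, which is one reason composite methods matter for sparse learning.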
4. A Fast Algorithm for Nonnegative Matrix Factorization and Its Convergence.
- Author
- Li, Li-Xin, Wu, Lin, Zhang, Hui-Sheng, and Wu, Fang-Xiang
- Subjects
- *ALGORITHMS, *FACTORIZATION, *STOCHASTIC convergence, *MATHEMATICAL functions, *MATHEMATICAL optimization, *DIVERGENCE theorem
- Abstract
Nonnegative matrix factorization (NMF) has recently become a very popular unsupervised learning method because of the representational properties of its factors and the simple multiplicative update algorithms used to solve it. However, for the common NMF approach of minimizing the Euclidean distance between approximate and true values, the convergence of multiplicative update algorithms has not been well resolved. This paper first discusses the convergence of existing multiplicative update algorithms. We then propose a new multiplicative update algorithm for minimizing the Euclidean distance between approximate and true values. Based on the optimization principle and the auxiliary function method, we prove that our new algorithm not only converges to a stationary point but also does so faster than existing ones. To verify our theoretical results, experiments on three data sets were conducted, comparing our proposed algorithm with other existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2014
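The multiplicative updates this abstract refers to trace back to Lee and Seung's classical scheme for the Euclidean NMF objective. The sketch below implements that standard baseline, not the paper's new algorithm, to make the update rules concrete.

```python
import numpy as np

def nmf_multiplicative(V, r, iters=200, eps=1e-10, seed=0):
    """Classical Lee-Seung multiplicative updates minimizing
    ||V - W H||_F^2 with elementwise-nonnegative factors W (n x r)
    and H (r x m). The updates never increase the objective; whether
    they reach a stationary point is the question papers like the
    one above analyze."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

# Toy usage: factor a random nonnegative matrix and check the fit.
V = np.random.default_rng(2).random((30, 20))
W, H = nmf_multiplicative(V, r=5)
print(np.linalg.norm(V - W @ H))  # reconstruction error after the updates
```

Each update can be derived from an auxiliary function, the same proof device the abstract mentions; the subtlety the paper addresses is that monotone descent alone does not guarantee convergence to a stationary point.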