Search

Your search for author "Drusvyatskiy, Dmitriy" returned 135 results.

Search Constraints

Author: "Drusvyatskiy, Dmitriy"
Search Limiters: Available in Library Collection

Search Results

1. Gradient descent with adaptive stepsize converges (nearly) linearly under fourth-order growth

2. The radius of statistical efficiency

3. Linear Recursive Feature Machines provably recover low-rank matrices

4. Aiming towards the minimizers: fast convergence of SGD for overparametrized problems

5. The slope robustly determines convex functions

6. Asymptotic normality and optimality in nonsmooth stochastic approximation

7. Stochastic Approximation with Decision-Dependent Distributions: Asymptotic Normality and Optimality

8. Decision-Dependent Risk Minimization in Geometrically Decaying Dynamic Environments

9. Flat minima generalize for low-rank matrix recovery

10. Multiplayer Performative Prediction: Learning in Decision-Dependent Games

11. A gradient sampling method with complexity guarantees for Lipschitz functions in high and low dimensions

12. Improved Rates for Derivative Free Gradient Play in Strongly Monotone Games

13. Active manifolds, stratifications, and convergence to local minima in nonsmooth optimization

14. Stochastic Optimization under Distributional Drift

15. Escaping strict saddle points of the Moreau envelope in nonsmooth optimization

16. Conservative and semismooth derivatives are equivalent for semialgebraic maps

17. Stochastic optimization with decision-dependent distributions

18. Stochastic optimization over proximally smooth sets

19. Proximal methods avoid active strict saddles of weakly convex functions

20. Pathological subgradient dynamics

21. Iterative Linearized Control: Stable Algorithms and Complexity Guarantees

22. From low probability to high confidence in stochastic convex optimization

23. Stochastic algorithms with geometric step decay converge linearly on sharp functions

24. Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence

25. Composite optimization for robust blind deconvolution

26. Inexact alternating projections on nonconvex sets

27. Graphical Convergence of Subgradients in Nonconvex Optimization and Learning

28. Stochastic model-based minimization under high-order growth

29. Stochastic subgradient method converges on tame functions

30. Stochastic model-based minimization of weakly convex functions

31. Subgradient methods for sharp weakly convex functions

32. Complexity of finding near-stationary points of convex functions stochastically

33. Stochastic subgradient method converges at the rate $O(k^{-1/4})$ on weakly convex functions

34. Proximal Methods Avoid Active Strict Saddles of Weakly Convex Functions

35. The proximal point method revisited

36. The nonsmooth landscape of phase retrieval

37. The many faces of degeneracy in conic optimization

38. Catalyst Acceleration for Gradient-Based Non-Convex Optimization

39. Foundations of gauge and perspective duality

41. Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria

42. Efficient quadratic penalization through the partial minimization technique

43. Efficiency of minimizing compositions of convex functions and smooth maps

44. An optimal first order method based on optimal quadratic averaging

45. Error bounds, quadratic growth, and linear convergence of proximal methods

46. Level-set methods for convex optimization

47. The Euclidean Distance Degree of Orthogonally Invariant Matrix Varieties

48. Counting real critical points of the distance to orthogonally invariant matrix sets

49. Sweeping by a tame process

50. Noisy Euclidean distance realization: robust facial reduction and the Pareto frontier
