Your search for Author "Richtarik, Peter" returned 29 results.

Search Constraints

Author: "Richtarik, Peter"; Publication Year Range: This year

Search Results

1. Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization

2. Pushing the Limits of Large Language Model Quantization via the Linearity Theorem

3. Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum

4. Tighter Performance Theory of FedExProx

5. Unlocking FedNL: Self-Contained Compute-Optimized Implementation

6. Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation

7. MindFlayer: Efficient Asynchronous Parallel SGD in the Presence of Heterogeneous and Random Worker Compute Times

8. On the Convergence of FedProx with Extrapolation and Inexact Prox

9. Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity

10. Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning

11. Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning

12. SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning

13. A Simple Linear Convergence Analysis of the Point-SAGA Algorithm

14. Local Curvature Descent: Squeezing More Curvature out of Standard and Polyak Gradient Descent

15. On the Optimal Time Complexities in Decentralized Stochastic Asynchronous Optimization

16. A Unified Theory of Stochastic Proximal Point Methods without Smoothness

17. MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence

18. Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations

19. Stochastic Proximal Point Methods for Monotone Inclusions under Expected Similarity

20. PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression

21. The Power of Extrapolation in Federated Learning

22. FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity

23. FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models

24. Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction

25. LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression

26. Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants

27. Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity

28. Shadowheart SGD: Distributed Asynchronous SGD with Optimal Time Complexity Under Arbitrary Computation and Communication Heterogeneity

29. Correlated Quantization for Faster Nonconvex Distributed Optimization
