Search

Your search for "Richtarik, Peter" returned 299 results.

Search Constraints

Author: "Richtarik, Peter"
Publication Year Range: Last 10 years

Search Results

1. Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization

2. Pushing the Limits of Large Language Model Quantization via the Linearity Theorem

3. Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum

4. Tighter Performance Theory of FedExProx

5. Unlocking FedNL: Self-Contained Compute-Optimized Implementation

6. Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation

7. MindFlayer: Efficient Asynchronous Parallel SGD in the Presence of Heterogeneous and Random Worker Compute Times

8. On the Convergence of FedProx with Extrapolation and Inexact Prox

9. Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity

10. Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning

11. Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning

12. SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning

13. A Simple Linear Convergence Analysis of the Point-SAGA Algorithm

14. Local Curvature Descent: Squeezing More Curvature out of Standard and Polyak Gradient Descent

15. On the Optimal Time Complexities in Decentralized Stochastic Asynchronous Optimization

16. A Unified Theory of Stochastic Proximal Point Methods without Smoothness

17. MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence

18. Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations

19. Stochastic Proximal Point Methods for Monotone Inclusions under Expected Similarity

20. PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression

21. The Power of Extrapolation in Federated Learning

22. FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity

23. FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models

24. Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction

25. LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression

26. Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants

27. Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity

28. Shadowheart SGD: Distributed Asynchronous SGD with Optimal Time Complexity Under Arbitrary Computation and Communication Heterogeneity

29. Correlated Quantization for Faster Nonconvex Distributed Optimization

30. Kimad: Adaptive Gradient Compression with Bandwidth Awareness

31. Federated Learning is Better with Non-Homomorphic Encryption

32. Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences

33. Consensus-Based Optimization with Truncated Noise

34. Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates

35. Variance Reduced Distributed Non-Convex Optimization Using Matrix Stepsizes

36. High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise

37. Towards a Better Theoretical Understanding of Independent Subnetwork Training

38. Understanding Progressive Training Through the Framework of Randomized Coordinate Descent

39. Improving Accelerated Federated Learning with Compression and Importance Sampling

40. Clip21: Error Feedback for Gradient Clipping

41. Global-QSGD: Practical Floatless Quantization for Distributed Learning with Theoretical Guarantees

42. A Guide Through the Zoo of Biased SGD

43. Error Feedback Shines when Features are Rare

44. Momentum Provably Improves Error Feedback!

45. Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning

46. Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization

47. Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model

48. 2Direction: Theoretically Faster Distributed Training with Bidirectional Communication Compression

49. ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression

50. TAMUNA: Doubly Accelerated Distributed Optimization with Local Training, Compression, and Partial Participation
