Search

Your search for "Wu, Jingfeng" returned 25 results.

Search Constraints

You searched for: Author "Wu, Jingfeng"; Publication Type: Reports

Search Results

1. UELLM: A Unified and Efficient Approach for LLM Inference Serving

2. CloudNativeSim: a toolkit for modeling and simulation of cloud-native applications

3. Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization

4. Scaling Laws in Linear Regression: Compute, Parameters, and Data

5. Large Stepsize Gradient Descent for Logistic Loss: Non-Monotonicity of the Loss Improves Optimization Efficiency

6. In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization

7. Risk Bounds of Accelerated SGD for Overparameterized Linear Regression

8. How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression?

9. Private Federated Frequency Estimation: Adapting to the Hardness of the Instance

10. Implicit Bias of Gradient Descent for Logistic Regression at the Edge of Stability

11. Fixed Design Analysis of Regularization-Based Continual Learning

12. Finite-Sample Analysis of Learning High-Dimensional Single ReLU Neuron

13. The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift

14. Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime

15. Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression

16. Gap-Dependent Unsupervised Exploration for Reinforcement Learning

17. The Benefits of Implicit Regularization from SGD in Least Squares Problems

18. Lifelong Learning with Sketched Structural Regularization

19. Benign Overfitting of Constant-Stepsize SGD for Linear Regression

20. Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning

21. Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate

22. Obtaining Adjustable Regularization for Free via Iterate Averaging

23. On the Noisy Gradient Descent that Generalizes as SGD

24. Tangent-Normal Adversarial Regularization for Semi-supervised Learning

25. The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Sharp Minima and Regularization Effects
