Search

Your search for "Bartlett, Peter L." returned 53 results (50 are listed below).

Search Constraints

Author: "Bartlett, Peter L."
Topic: machine learning (cs.LG)

Search Results

1. The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks

2. Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from KKT Conditions for Margin Maximization

3. Trained Transformers Learn Linear Models In-Context

4. Optimal variance-reduced stochastic approximation in Banach spaces

5. Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data

6. Random Feature Amplification: Feature Learning and Generalization in Neural Networks

7. Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data

8. On the Theory of Reinforcement Learning with Once-per-Episode Feedback

9. Agnostic learning with unknown utilities

10. When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?

11. Preference learning along multiple criteria: A game-theoretic perspective

12. Optimal and instance-dependent guarantees for Markovian linear stochastic approximation

13. The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks

14. Adversarial Examples in Multi-Layer Random ReLU Networks

15. Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm

16. When does gradient descent with logistic loss find interpolating two-layer networks?

17. Optimal Mean Estimation without a Variance

18. Failures of model-dependent generalization bounds for least-norm interpolation

19. Optimal Robust Linear Regression in Nearly Linear Time

20. On Thompson Sampling with Langevin Algorithms

21. Self-Distillation Amplifies Regularization in Hilbert Space

22. On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration

23. Hebbian Synaptic Modifications in Spiking Neurons that Learn

24. High-Order Langevin Diffusion Yields an Accelerated MCMC Algorithm

25. Stochastic Gradient and Langevin Processes

26. OSOM: A simultaneously optimal algorithm for multi-armed and linear contextual bandits

27. Testing Markov Chains without Hitting

28. Quantitative Weak Convergence for Discrete Stochastic Processes

29. Large-Scale Markov Decision Problems via the Linear Programming Dual

30. Sampling for Bayesian Mixture Models: MCMC with Polynomial-Time Mixing

31. Langevin Monte Carlo without smoothness

32. Fast Mean Estimation with Sub-Gaussian Rates

33. Bayesian Robustness: A Nonasymptotic Viewpoint

34. Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems

35. Gen-Oja: A Two-time-scale approach for Streaming CCA

36. Best of many worlds: Robust model selection for online supervised learning

37. Sharp convergence rates for Langevin dynamics in the nonconvex setting

38. Online learning with kernel losses

39. Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks

40. Representing smooth functions as compositions of near-identity functions with implications for deep network optimization

41. Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks

42. Alternating minimization for dictionary learning: Local Convergence Guarantees

43. Underdamped Langevin MCMC: A non-asymptotic analysis

44. Recovery Guarantees for One-hidden-layer Neural Networks

45. RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning

46. FLAG n' FLARE: Fast Linearly-Coupled Adaptive Gradient Methods

47. Bounding Embeddings of VC Classes into Maximum Classes

48. Online Learning in Markov Decision Processes with Adversarially Chosen Transition Probability Distributions

49. Oracle inequalities for computationally adaptive model selection

50. REGAL: A Regularization based Algorithm for Reinforcement Learning in Weakly Communicating MDPs
