Search

Your search for "Bartlett, Peter L." returned 59 results.

Search Constraints

You searched for: Author "Bartlett, Peter L."; Topic: FOS: computer and information sciences

Search Results

1. The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks

2. Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from KKT Conditions for Margin Maximization

3. Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency

4. Trained Transformers Learn Linear Models In-Context

5. Off-policy estimation of linear functionals: Non-asymptotic theory for semi-parametric efficiency

6. Optimal variance-reduced stochastic approximation in Banach spaces

7. Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data

8. Random Feature Amplification: Feature Learning and Generalization in Neural Networks

9. Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data

10. On the Theory of Reinforcement Learning with Once-per-Episode Feedback

11. Agnostic learning with unknown utilities

12. When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?

13. Preference learning along multiple criteria: A game-theoretic perspective

14. Optimal and instance-dependent guarantees for Markovian linear stochastic approximation

15. The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks

16. Adversarial Examples in Multi-Layer Random ReLU Networks

17. Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm

18. When does gradient descent with logistic loss find interpolating two-layer networks?

19. Optimal Mean Estimation without a Variance

20. Failures of model-dependent generalization bounds for least-norm interpolation

21. Optimal Robust Linear Regression in Nearly Linear Time

22. On Thompson Sampling with Langevin Algorithms

23. Self-Distillation Amplifies Regularization in Hilbert Space

24. On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration

25. Hebbian Synaptic Modifications in Spiking Neurons that Learn

26. High-Order Langevin Diffusion Yields an Accelerated MCMC Algorithm

27. Improved Bounds for Discretization of Langevin Diffusions: Near-Optimal Rates without Convexity

28. Stochastic Gradient and Langevin Processes

29. OSOM: A simultaneously optimal algorithm for multi-armed and linear contextual bandits

30. Testing Markov Chains without Hitting

31. Quantitative Weak Convergence for Discrete Stochastic Processes

32. Large-Scale Markov Decision Problems via the Linear Programming Dual

33. Sampling for Bayesian Mixture Models: MCMC with Polynomial-Time Mixing

34. Langevin Monte Carlo without smoothness

35. Fast Mean Estimation with Sub-Gaussian Rates

36. Bayesian Robustness: A Nonasymptotic Viewpoint

37. Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems

38. Gen-Oja: A Two-time-scale approach for Streaming CCA

39. Best of many worlds: Robust model selection for online supervised learning

40. Sharp convergence rates for Langevin dynamics in the nonconvex setting

41. Online learning with kernel losses

42. Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks

43. Representing smooth functions as compositions of near-identity functions with implications for deep network optimization

44. Acceleration and Averaging in Stochastic Mirror Descent Dynamics

45. Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks

46. Alternating minimization for dictionary learning: Local Convergence Guarantees

47. Underdamped Langevin MCMC: A non-asymptotic analysis

48. Recovery Guarantees for One-hidden-layer Neural Networks

49. RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning

50. FLAG n' FLARE: Fast Linearly-Coupled Adaptive Gradient Methods
