Search

Your search for "Ye, Mao" returned 26 results.

Search Constraints

You searched for: Author: "Ye, Mao" · Topic: computer science - machine learning

Search Results

1. Fine-Grained Gradient Restriction: A Simple Approach for Mitigating Catastrophic Forgetting

2. MSMix: An Interpolation-Based Text Data Augmentation Method Manifold Swap Mixup

3. BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach

4. First Hitting Diffusion Models for Generating Manifold, Graph and Categorical Data

5. Future Gradient Descent for Adapting the Temporal Shifting Data Distribution in Online Recommendation Systems

6. Diffusion-based Molecule Generation with Informative Prior Bridges

7. Let us Build Bridges: Understanding and Extending Diffusion Generative Models

8. The scope for AI-augmented interpretation of building blueprints in commercial and industrial property insurance

9. Centroid Approximation for Bootstrap: Improving Particle Quality at Inference

10. Pareto Navigation Gradient Descent: a First-Order Algorithm for Optimization in Pareto Set

11. VCNet and Functional Targeted Regularization For Learning Causal Effects of Continuous Treatments

12. Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough

13. Adaptive Dense-to-Sparse Paradigm for Pruning Online Recommendation System with Non-Stationary Data

14. Go Wide, Then Narrow: Efficient Training of Deep Thin Networks

15. SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions

16. Unsupervised Feature Selection via Multi-step Markov Transition Probability

17. Learning Various Length Dependence by Dual Recurrent Neural Networks

18. Steepest Descent Neural Architecture Optimization: Escaping Local Optimum with Signed Neural Splitting

19. Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection

20. Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework

21. Stein Self-Repulsive Dynamics: Benefits From Past Samples

22. Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision

23. MaxUp: A Simple Way to Improve Generalization of Neural Network Training

24. Extended Stochastic Gradient MCMC for Large-Scale Bayesian Variable Selection

25. Stein Neural Sampler

26. Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision
