Search

Your search for author "Xu, Zhi-Qin" returned 264 results.

Search Results

1. A rationale from frequency perspective for grokking in training neural network

2. The Buffer Mechanism for Multi-Step Information Reasoning in Language Models

3. Initialization is Critical to Whether Transformers Fit Composite Functions by Inference or Memorizing

4. Loss Jump During Loss Switch in Solving PDEs with Neural Networks

5. Efficient and Flexible Method for Reducing Moderate-size Deep Neural Networks with Condensation

6. Input gradient annealing neural network for solving low-temperature Fokker-Planck equations

8. Understanding Time Series Anomaly State Detection through One-Class Classification

9. Solving multiscale dynamical systems by deep learning

11. An Unsupervised Deep Learning Approach for the Wave Equation Inverse Problem

12. Optimistic Estimate Uncovers the Potential of Nonlinear Models

13. Stochastic Modified Equations and Dynamics of Dropout Algorithm

14. Loss Spike in Training Neural Networks

15. Understanding the Initial Condensation of Convolutional Neural Networks

16. Laplace-fPINNs: Laplace-based fractional physics-informed neural networks for solving forward and inverse problems of subdiffusion

17. Phase Diagram of Initial Condensation for Two-layer Neural Networks

18. Linear Stability Hypothesis and Rank Stratification for Nonlinear Models

19. Bayesian Inversion with Neural Operator (BINO) for Modeling Subdiffusion: Forward and Inverse Problems

20. DeepFlame: A deep learning empowered open-source platform for reacting flow simulations

21. Implicit regularization of dropout

22. Embedding Principle in Depth for the Loss Landscape Analysis of Deep Neural Networks

23. An Experimental Comparison Between Temporal Difference and Residual Gradient with Neural Network Approximation

24. Empirical Phase Diagram for Three-layer Neural Networks with Infinite Width

26. Limitation of Characterizing Implicit Regularization by Data-independent Functions

27. Overview frequency principle/spectral bias in deep learning

28. A multi-scale sampling method for accurate and robust deep neural network to predict combustion chemical kinetics

29. A deep learning-based model reduction (DeePMR) method for simplifying chemical kinetics

30. Subspace Decomposition based DNN algorithm for elliptic type multi-scale PDEs

32. Embedding Principle: a hierarchical structure of loss landscape of deep neural networks

33. Dropout in Training Neural Networks: Flatness of Solution and Noise Structure

34. Data-informed Deep Optimization

35. Force-in-domain GAN inversion

36. MOD-Net: A Machine Learning Approach via Model-Operator-Data Network for Solving PDEs

38. Embedding Principle of Loss Landscape of Deep Neural Networks

39. An Upper Limit of Decaying Rate with Respect to Frequency in Deep Neural Network

40. Towards Understanding the Condensation of Neural Networks at Initial Training

41. Linear Frequency Principle Model to Understand the Absence of Overfitting in Neural Networks

42. Frequency Principle in Deep Learning Beyond Gradient-descent-based Training

43. Fourier-domain Variational Formulation and Its Well-posedness for Supervised Learning

44. On the exact computation of linear frequency principle dynamics and its generalization

45. A multi-scale DNN algorithm for nonlinear elliptic equations with multiple scales

46. A regularized deep matrix factorized model of matrix completion for image restoration

47. Deep frequency principle towards understanding why deeper learning is faster

48. Multi-scale Deep Neural Network (MscaleDNN) for Solving Poisson-Boltzmann Equation in Complex Domains

49. Phase diagram for two-layer ReLU neural networks at infinite-width limit

50. Implicit bias with Ritz-Galerkin method in understanding deep learning for solving PDEs
