Search results for author "Riekert, Adrian"
2. Learning rate adaptive stochastic gradient descent optimization methods: numerical simulations for deep learning methods for partial differential equations and convergence analyses
3. Strong overall error analysis for the training of artificial neural networks via random initializations
4. Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks
5. A proof of the corrected Sister Beiter cyclotomic coefficient conjecture inspired by Zhao and Zhang
6. Deep neural network approximation of composite functions without the curse of dimensionality
7. Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations
8. Convergence to good non-optimal critical points in the training of neural networks: Gradient descent optimization with one random initialization overcomes all bad non-global local minima with high probability
9. Normalized gradient flow optimization in the training of ReLU artificial neural networks
10. On the existence of infinitely many realization functions of non-global local minima in the training of artificial neural networks with ReLU activation
11. On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks
12. Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
13. Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation
14. A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions
15. Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
16. A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
17. A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions
18. Convergence rates for empirical measures of Markov chains in dual and Wasserstein distances