Learned Optimizers for Analytic Continuation
- Source: Phys. Rev. B 105, 075112 (2022)
- Publication Year: 2021
Abstract
- Traditional maximum entropy and sparsity-based algorithms for analytic continuation often suffer from the ill-posed kernel matrix or demand tremendous computation time for parameter tuning. Here we propose a neural-network method based on convex optimization that replaces the ill-posed inverse problem with a sequence of well-conditioned surrogate problems. After training, the learned optimizers give high-quality solutions at low time cost and achieve higher parameter efficiency than heuristic fully-connected networks. Their output can also serve as a neural default model that improves the performance of the maximum entropy method. Our approach may be readily extended to other high-dimensional inverse problems via large-scale pretraining.
- Comment: 11 pages, 7 figures, 6 tables
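
The "ill-posed kernel matrix" claim can be made concrete with a quick numerical check. Below is a minimal Python sketch, not taken from the paper, assuming the standard fermionic kernel K(τ, ω) = e^(−τω) / (1 + e^(−βω)) that maps a spectral function A(ω) to imaginary-time data G(τ). The grids, β, and the Tikhonov regularization strength λ are arbitrary illustrative choices, and the fixed-λ surrogate is only a crude stand-in for the trained, well-conditioned surrogates the abstract alludes to.

```python
# Sketch (not the paper's method): condition number of a discretized
# analytic-continuation kernel, and of a Tikhonov-regularized surrogate.
# All parameter values here are illustrative assumptions.
import numpy as np

beta = 10.0                                    # inverse temperature (assumed)
tau = np.linspace(0.0, beta, 64)               # imaginary-time grid
omega = np.linspace(-5.0, 5.0, 128)            # real-frequency grid
domega = omega[1] - omega[0]

# Discretized fermionic kernel: K[i, j] = K(tau_i, omega_j) * d_omega
K = np.exp(-np.outer(tau, omega)) / (1.0 + np.exp(-beta * omega)) * domega

s = np.linalg.svd(K, compute_uv=False)
print(f"kernel condition number:    {s[0] / s[-1]:.2e}")   # astronomically large

# Crude well-conditioned surrogate: normal equations (K^T K + lam*I) x = K^T g.
# The paper's learned optimizers construct their surrogates by training,
# not by a hand-picked lam; this only illustrates the conditioning effect.
lam = 1e-4 * s[0] ** 2
cond_surrogate = (s[0] ** 2 + lam) / (s[-1] ** 2 + lam)
print(f"surrogate condition number: {cond_surrogate:.2e}")  # roughly 1e4
```

The singular values of such kernels decay exponentially, so direct inversion amplifies statistical noise in G(τ) catastrophically; any useful method must regularize, which is what both the maximum entropy prior and the surrogate problems described above accomplish.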
Details
- Database: arXiv
- Journal: Phys. Rev. B 105, 075112 (2022)
- Publication Type: Report
- Accession number: edsarx.2107.13265
- Document Type: Working Paper
- Full Text: https://doi.org/10.1103/PhysRevB.105.075112