384 results for "LORENZ, DIRK A."
Search Results
2. A general framework for inexact splitting algorithms with relative errors and applications to Chambolle-Pock and Davis-Yin methods
- Author
- Alves, M. Marques, Lorenz, Dirk A., and Naldi, Emanuele
- Subjects
- Mathematics - Optimization and Control
- Abstract
In this work we apply the recently introduced framework of degenerate preconditioned proximal point algorithms to the hybrid proximal extragradient (HPE) method for maximal monotone inclusions. The latter is a method that allows inexact proximal (or resolvent) steps where the error is controlled by a relative-error criterion. Recently the HPE framework has been extended to the Douglas-Rachford method by Eckstein and Yao. In this paper we further extend the applicability of the HPE framework to splitting methods. To that end we use the framework of degenerate preconditioners that allows one to write a large class of splitting methods as preconditioned proximal point algorithms. In this way we modify many splitting methods such that one or more of the resolvents can be computed inexactly with an error that is controlled by an adaptive criterion. Further, we illustrate the algorithmic framework in the case of Chambolle-Pock's primal-dual hybrid gradient method and Davis-Yin's forward Douglas-Rachford method. In both cases, the inexact computation of the resolvent shows clear advantages in computing time and accuracy. Comment: 33 pages, 6 figures
- Published
- 2024
3. On inertial Levenberg-Marquardt type methods for solving nonlinear ill-posed operator equations
- Author
- Leitão, Antonio, Rabelo, Joel C., Lorenz, Dirk A., and Winkler, Maximilian
- Subjects
- Mathematics - Numerical Analysis, 65J20, 47J06
- Abstract
In these notes we propose and analyze an inertial-type method for obtaining stable approximate solutions to nonlinear ill-posed operator equations. The method is based on the Levenberg-Marquardt (LM) iteration. The main results obtained are: monotonicity and convergence for exact data, and stability and semi-convergence for noisy data. Regarding numerical experiments we consider: i) a parameter identification problem in elliptic PDEs, and ii) a parameter identification problem in machine learning; the computational efficiency of the proposed method is compared with canonical implementations of the LM method.
- Published
- 2024
4. Proximal Algorithms for a class of abstract convex functions
- Author
- Bednarczuk, Ewa, Lorenz, Dirk, and Tran, The Hung
- Subjects
- Mathematics - Optimization and Control, 49J45, 65K05, 74P10, 90C26
- Abstract
In this paper we analyze a class of nonconvex optimization problems from the viewpoint of abstract convexity. Using the respective generalizations of the subgradient, we propose an abstract notion of a proximal operator and derive a number of algorithms, namely an abstract proximal point method, an abstract forward-backward method and an abstract projected subgradient method. Global convergence results for all algorithms are discussed and numerical examples are given.
- Published
- 2024
5. Acceleration and restart for the randomized Bregman-Kaczmarz method
- Author
- Tondji, Lionel, Necoara, Ion, and Lorenz, Dirk A.
- Subjects
- Mathematics - Numerical Analysis, 65F10, 68W20, 90C25
- Abstract
Optimizing strongly convex functions subject to linear constraints is a fundamental problem with numerous applications. In this work, we propose a block (accelerated) randomized Bregman-Kaczmarz method that only uses a block of constraints in each iteration to tackle this problem. We consider a dual formulation of this problem in order to deal in an efficient way with the linear constraints. Using convex tools, we show that the corresponding dual function satisfies the Polyak-Lojasiewicz (PL) property, provided that the primal objective function is strongly convex and additionally satisfies some other mild assumptions. However, adapting the existing theory on coordinate descent methods to our dual formulation can only give us sublinear convergence results in the dual space. In order to obtain convergence results in some criterion corresponding to the primal (original) problem, we transfer our algorithm to the primal space, which combined with the PL property allows us to get linear convergence rates. More specifically, we provide a theoretical analysis of the convergence of our proposed method under different assumptions on the objective and demonstrate in the numerical experiments its superior efficiency and speed-up compared to existing methods for the same problem.
- Published
- 2023
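The accelerated method in the entry above builds on the basic randomized Kaczmarz iteration. As a point of reference, here is a minimal sketch of the plain (unaccelerated, non-Bregman) randomized Kaczmarz method for a consistent system; the squared-row-norm sampling and all problem sizes are illustrative choices, not taken from the paper:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Plain randomized Kaczmarz for a consistent system A x = b.

    Each step projects the iterate onto the hyperplane defined by one
    randomly chosen row; rows are sampled proportionally to their
    squared norm (one common, but not the only, choice).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.sum(A**2, axis=1)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true                      # consistent right-hand side
x = randomized_kaczmarz(A, b)
```

For a consistent, well-conditioned system like this one, the iterates converge linearly in expectation to the solution.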
6. Adaptive Bregman-Kaczmarz: An Approach to Solve Linear Inverse Problems with Independent Noise Exactly
- Author
- Tondji, Lionel, Tondji, Idriss, and Lorenz, Dirk A.
- Subjects
- Mathematics - Numerical Analysis, Mathematics - Optimization and Control, 65F10, 15A29, 65F20
- Abstract
We consider the block Bregman-Kaczmarz method for finite dimensional linear inverse problems. The block Bregman-Kaczmarz method uses blocks of the linear system and performs iterative steps with these blocks only. We assume a noise model that we call independent noise, i.e. each time the method performs a step for some block, one obtains a noisy sample of the respective part of the right-hand side which is contaminated with new noise that is independent of all previous steps of the method. One can view this noise model as making a fresh noisy measurement of the respective block each time it is used. In this framework, we are able to show that a well-chosen adaptive stepsize of the block Bregman-Kaczmarz method is able to converge to the exact solution of the linear inverse problem. The plain form of this adaptive stepsize relies on unknown quantities (like the Bregman distance to the solution), but we show how these quantities can be estimated purely from given data. We illustrate the findings in numerical experiments and confirm that these heuristic estimates lead to effective stepsizes.
- Published
- 2023
7. Minimal error momentum Bregman-Kaczmarz
- Author
- Lorenz, Dirk A. and Winkler, Maximilian
- Subjects
- Mathematics - Optimization and Control, Mathematics - Numerical Analysis, 65B05, 65F10, 90C06, 90C25, G.1.3, G.1.6, G.3
- Abstract
The Bregman-Kaczmarz method is an iterative method which can solve strongly convex problems with linear constraints and uses only one or a selected number of rows of the system matrix in each iteration, thereby making it amenable to large-scale systems. To speed up convergence, we investigate acceleration by heavy ball momentum in the so-called dual update. Heavy ball acceleration of the Kaczmarz method with constant parameters has turned out to be difficult to analyze; in particular, to the best of our knowledge, no accelerated convergence for the L2-error of the iterates has been proven. Here we propose a way to adaptively choose the momentum parameter by a minimal-error principle similar to a recently proposed method for the standard randomized Kaczmarz method. The momentum parameter can be chosen to exactly minimize the error in the next iterate or to minimize a relaxed version of the minimal error principle. The former choice leads to a theoretically optimal step while the latter is cheaper to compute. We prove improved convergence results compared to the non-accelerated method. Numerical experiments show that the proposed methods can accelerate convergence in practice, also for matrices which arise from applications such as computational tomography.
- Published
- 2023
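A heavy-ball Kaczmarz step with a constant momentum parameter, the setting the abstract describes as difficult to analyze, can be sketched as follows. The paper's contribution, the adaptive minimal-error choice of the momentum parameter, is not reproduced here; beta and the problem sizes are illustrative:

```python
import numpy as np

def kaczmarz_heavy_ball(A, b, beta=0.2, iters=8000, seed=0):
    """Randomized Kaczmarz with a constant heavy-ball momentum term.

    beta is a fixed momentum parameter; the paper instead selects it
    adaptively by a minimal-error principle.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.sum(A**2, axis=1)
    x_prev = np.zeros(n)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(m)                       # uniform row choice
        kaczmarz_step = (b[i] - A[i] @ x) / row_norms2[i] * A[i]
        x_new = x + kaczmarz_step + beta * (x - x_prev)
        x_prev, x = x, x_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x = kaczmarz_heavy_ball(A, b)
```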
8. On the Interplay of Subset Selection and Informed Graph Neural Networks
- Author
- Breustedt, Niklas, Climaco, Paolo, Garcke, Jochen, Hamaekers, Jan, Kutyniok, Gitta, Lorenz, Dirk A., Oerder, Rick, and Shukla, Chirag Varun
- Subjects
- Physics - Chemical Physics, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
Machine learning techniques paired with the availability of massive datasets dramatically enhance our ability to explore the chemical compound space by providing fast and accurate predictions of molecular properties. However, learning on large datasets is strongly limited by the availability of computational resources and can be infeasible in some scenarios. Moreover, the instances in the datasets may not yet be labelled and generating the labels can be costly, as in the case of quantum chemistry computations. Thus, there is a need to select small training subsets from large pools of unlabelled data points and to develop reliable ML methods that can effectively learn from small training sets. This work focuses on predicting the atomization energy of molecules in the QM9 dataset. We investigate the advantages of employing domain knowledge-based data sampling methods for an efficient training set selection combined with informed ML techniques. In particular, we show how maximizing molecular diversity in the training set selection process increases the robustness of linear and nonlinear regression techniques such as kernel methods and graph neural networks. We also check the reliability of the predictions made by the graph neural network with a model-agnostic explainer based on the rate distortion explanation framework.
- Published
- 2023
9. Linearly convergent adjoint free solution of least squares problems by random descent
- Author
- Lorenz, Dirk A., Schneppe, Felix, and Tondji, Lionel
- Subjects
- Mathematics - Numerical Analysis, Mathematics - Optimization and Control, 65F10, 68W20, 15A06
- Abstract
We consider the problem of solving linear least squares problems in a framework where only evaluations of the linear map are possible. We derive randomized methods that do not need any other matrix operations than forward evaluations; in particular, no evaluation of the adjoint map is needed. Our method is motivated by the simple observation that one can get an unbiased estimate of the application of the adjoint. We show convergence of the method and then derive a more efficient method that uses an exact linesearch. This method, called random descent, resembles known methods in other contexts and has the randomized coordinate descent method as a special case. We provide convergence analysis of the random descent method emphasizing the dependence on the underlying distribution of the random vectors. Furthermore, we investigate the applicability of the method in the context of ill-posed inverse problems and show that the method can have beneficial properties when the unknown solution is rough. We illustrate the theoretical findings in numerical examples. One particular result is that the random descent method actually outperforms established transposed-free methods (TFQMR and CGS) in examples.
- Published
- 2023
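The two ingredients of the abstract, an unbiased estimate of the adjoint applied to the residual and an exact linesearch, can be sketched as follows. This is a simplified reading with standard Gaussian search directions (so that E[u u^T] = I); the paper analyzes general distributions:

```python
import numpy as np

def random_descent(apply_A, b, n, iters=3000, seed=0):
    """Adjoint-free random descent for min_x ||A x - b||^2.

    Only forward applications of A are used: for a random direction u
    with E[u u^T] = I, the vector u * <A u, r> is an unbiased estimate
    of A^T r, and the stepsize along u comes from an exact linesearch.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    r = apply_A(x) - b                  # residual A x - b
    for _ in range(iters):
        u = rng.standard_normal(n)      # random search direction
        Au = apply_A(u)                 # one forward evaluation
        t = -(Au @ r) / (Au @ Au)       # minimizes ||A(x + t u) - b||^2
        x += t * u
        r += t * Au                     # keep residual up to date
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((40, 8))
x_true = rng.standard_normal(8)
b = M @ x_true
x = random_descent(lambda v: M @ v, b, n=8)
```

Note that the transpose M.T never appears: the method sees the matrix only through the forward callable `apply_A`.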
10. A Bregman–Kaczmarz method for nonlinear systems of equations
- Author
- Gower, Robert, Lorenz, Dirk A., and Winkler, Maximilian
- Published
- 2024
- Full Text
- View/download PDF
11. A Bregman-Kaczmarz method for nonlinear systems of equations
- Author
- Gower, Robert, Lorenz, Dirk A., and Winkler, Maximilian
- Subjects
- Mathematics - Optimization and Control, Mathematics - Numerical Analysis, 49M15, 90C53, 65Y20
- Abstract
We propose a new randomized method for solving systems of nonlinear equations, which can find sparse solutions or solutions under certain simple constraints. The scheme only takes gradients of component functions and uses Bregman projections onto the solution space of a Newton equation. In the special case of Euclidean projections, the method is known as the nonlinear Kaczmarz method. Furthermore, if the component functions are nonnegative, we are in the setting of optimization under the interpolation assumption and the method reduces to SGD with the recently proposed stochastic Polyak step size. For general Bregman projections, our method is a stochastic mirror descent with a novel adaptive step size. We prove that in the convex setting each iteration of our method results in a smaller Bregman distance to exact solutions as compared to the standard Polyak step. Our generalization to Bregman projections comes at the price that a convex one-dimensional optimization problem needs to be solved in each iteration. This can typically be done with globalized Newton iterations. Convergence is proved in two classical settings of nonlinearity: for convex nonnegative functions and locally for functions which fulfill the tangential cone condition. Finally, we show examples in which the proposed method outperforms similar methods with the same memory requirements.
- Published
- 2023
- Full Text
- View/download PDF
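The Euclidean special case named in the abstract, the nonlinear Kaczmarz method, admits a compact sketch: each step projects the iterate onto the linearization of one randomly chosen equation. The Bregman projections and the one-dimensional subproblem of the actual method are not reproduced here, and the toy system is our own illustration:

```python
import numpy as np

def nonlinear_kaczmarz(funcs, grads, x0, iters=500, seed=0):
    """Nonlinear Kaczmarz for a system f_i(x) = 0, i = 1, ..., m.

    Each step projects onto the linearization
    {x : f_i(x_k) + <grad f_i(x_k), x - x_k> = 0}
    of one randomly chosen component equation.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    m = len(funcs)
    for _ in range(iters):
        i = rng.integers(m)
        fi, gi = funcs[i](x), grads[i](x)
        x -= fi / (gi @ gi) * gi
    return x

# toy system: intersect a circle and a line in R^2, solution (1, 1)
funcs = [lambda x: x[0]**2 + x[1]**2 - 2.0,        # circle of radius sqrt(2)
         lambda x: x[0] - x[1]]                    # diagonal line
grads = [lambda x: np.array([2 * x[0], 2 * x[1]]),
         lambda x: np.array([1.0, -1.0])]
x = nonlinear_kaczmarz(funcs, grads, x0=[2.0, 0.5])
```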
12. The Degenerate Variable Metric Proximal Point Algorithm and Adaptive Stepsizes for Primal-Dual Douglas-Rachford
- Author
- Lorenz, Dirk A., Marquardt, Jannis, and Naldi, Emanuele
- Subjects
- Mathematics - Optimization and Control, 47H05, 65K05, 90C25
- Abstract
In this paper the degenerate preconditioned proximal point algorithm is combined with the idea of varying preconditioners, leading to the degenerate variable metric proximal point algorithm. The weak convergence of the resulting iteration is proven. From the perspective of the degenerate variable metric proximal point algorithm, a version of the primal-dual Douglas-Rachford method with varying preconditioners is derived, and a proof of its weak convergence, based on the previous results for the proximal point algorithm, is provided as well. After that, we derive a heuristic on how to choose those varying preconditioners in order to increase the convergence speed of the method.
- Published
- 2023
13. Learning Variational Models with Unrolling and Bilevel Optimization
- Author
- Brauer, Christoph, Breustedt, Niklas, de Wolff, Timo, and Lorenz, Dirk A.
- Subjects
- Statistics - Machine Learning, Computer Science - Machine Learning, Mathematics - Statistics Theory, 68T07, 68T05
- Abstract
In this paper we consider the problem of learning variational models in the context of supervised learning via risk minimization. Our goal is to provide a deeper understanding of the two approaches of learning of variational models via bilevel optimization and via algorithm unrolling. The former considers the variational model as a lower level optimization problem below the risk minimization problem, while the latter replaces the lower level optimization problem by an algorithm that solves said problem approximately. Both approaches are used in practice, but unrolling is much simpler from a computational point of view. To analyze and compare the two approaches, we consider a simple toy model, and compute all risks and the respective estimators explicitly. We show that unrolling can be better than the bilevel optimization approach, but also that the performance of unrolling can depend significantly on further parameters, sometimes in unexpected ways: While the stepsize of the unrolled algorithm matters a lot (and learning the stepsize gives a significant improvement), the number of unrolled iterations plays a minor role.
- Published
- 2022
14. Correction: A Bregman–Kaczmarz method for nonlinear systems of equations
- Author
- Gower, Robert, Lorenz, Dirk A., and Winkler, Maximilian
- Published
- 2024
- Full Text
- View/download PDF
15. Damage Identification in Fiber Metal Laminates using Bayesian Analysis with Model Order Reduction
- Author
- Muralidhar, Nanda Kishore Bellam, Gräßle, Carmen, Rauter, Natalie, Mikhaylenko, Andrey, Lammering, Rolf, and Lorenz, Dirk A.
- Subjects
- Statistics - Applications, Computer Science - Computational Engineering, Finance, and Science, Mathematics - Numerical Analysis
- Abstract
Fiber metal laminates (FML) are composite structures consisting of metals and fiber reinforced plastics (FRP) which have attracted increasing interest as the choice of materials in the aerospace and automobile industries. Due to the sophisticated build-up of the material, not only the design and production of such structures are challenging, but so is damage detection. This research work focuses on damage identification in FML with guided ultrasonic waves (GUW) through an inverse approach based on the Bayesian paradigm. As the Bayesian inference approach involves multiple queries of the underlying system, a parameterized reduced-order model (ROM) is used to closely approximate the solution with considerably less computational cost. The signals measured by the embedded sensors and the ROM forecasts are employed for the localization and characterization of damage in FML. In this paper, a Markov Chain Monte-Carlo (MCMC) based Metropolis-Hastings (MH) algorithm and an Ensemble Kalman filtering (EnKF) technique are deployed to identify the damage. Numerical tests illustrate the approaches and the results are compared with regard to accuracy and efficiency. It is found that both methods are successful in multivariate characterization of the damage with high accuracy and are also able to quantify the associated uncertainties. The EnKF distinguishes itself from the MCMC-MH algorithm in terms of computational efficiency: in this application of identifying the damage, the EnKF is approximately three times faster than the MCMC-MH.
- Published
- 2022
- Full Text
- View/download PDF
16. Faster Randomized Block Sparse Kaczmarz by Averaging
- Author
- Tondji, Lionel and Lorenz, Dirk A.
- Subjects
- Mathematics - Numerical Analysis, Mathematics - Optimization and Control, 65F10, 68W20, 68W10, 90C25
- Abstract
The standard randomized sparse Kaczmarz (RSK) method is an algorithm to compute sparse solutions of linear systems of equations; it uses sequential updates and thus does not take advantage of parallel computations. In this work, we introduce a parallel (mini-batch) version of RSK based on averaging several Kaczmarz steps. Naturally, this method allows for parallelization and we show that it can also leverage large over-relaxation. We prove linear expected convergence and show that, given that parallel computations can be exploited, the method provably provides faster convergence than the standard method. This method can also be viewed as a variant of the linearized Bregman algorithm, a randomized dual block coordinate descent update, a stochastic mirror descent update, or a relaxed version of RSK, and we recover the standard RSK method when the batch size is equal to one. We also provide estimates for inconsistent systems and show that the iterates converge to an error on the order of the noise level. Finally, numerical examples illustrate the benefits of the new algorithm.
- Published
- 2022
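The averaging idea can be sketched for the sparse Kaczmarz iteration as follows: several Kaczmarz steps, all computed at the current iterate, are averaged in the dual variable z, and the primal iterate is recovered by soft thresholding. Sampling, batch size and the regularization parameter are illustrative; the paper's weightings and over-relaxation are not reproduced:

```python
import numpy as np

def shrink(z, lam):
    """Soft thresholding, the proximal map of lam * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def averaged_sparse_kaczmarz(A, b, lam=0.05, batch=5, iters=20000, seed=0):
    """Mini-batch randomized sparse Kaczmarz: average a batch of
    Kaczmarz steps in the dual variable, then soft-threshold."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.sum(A**2, axis=1)
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(iters):
        rows = rng.integers(m, size=batch)
        update = np.zeros(n)
        for i in rows:                 # all steps use the same x
            update += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
        z += update / batch            # averaged dual update
        x = shrink(z, lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true                         # consistent, full column rank
x = averaged_sparse_kaczmarz(A, b)
```

In a real implementation the inner loop would run in parallel; the point of the averaged update is precisely that the batch steps are independent of each other.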
17. Acceleration and restart for the randomized Bregman-Kaczmarz method
- Author
- Tondji, Lionel, Necoara, Ion, and Lorenz, Dirk A.
- Published
- 2024
- Full Text
- View/download PDF
18. Extended Randomized Kaczmarz Method for Sparse Least Squares and Impulsive Noise Problems
- Author
- Schöpfer, Frank, Lorenz, Dirk A., Tondji, Lionel, and Winkler, Maximilian
- Subjects
- Mathematics - Numerical Analysis, Mathematics - Optimization and Control, 65F10, 68W20, 90C25
- Abstract
The Extended Randomized Kaczmarz method is a well-known iterative scheme which can find the Moore-Penrose inverse solution of a possibly inconsistent linear system and requires only one additional column of the system matrix in each iteration in comparison with the standard randomized Kaczmarz method. Also, the Sparse Randomized Kaczmarz method has been shown to converge linearly to a sparse solution of a consistent linear system. Here, we combine both ideas and propose an Extended Sparse Randomized Kaczmarz method. We show linear expected convergence to a sparse least squares solution in the sense that an extended variant of the regularized basis pursuit problem is solved. Moreover, we generalize the additional step in the method and prove convergence to a more abstract optimization problem. We demonstrate numerically that our method can find sparse least squares solutions of real and complex systems if the noise is concentrated in the complement of the range of the system matrix and that our generalization can handle impulsive noise.
- Published
- 2022
19. $L^\alpha$-Regularization of the Beckmann Problem
- Author
- Lorenz, Dirk, Mahler, Hinrich, and Meyer, Christian
- Subjects
- Mathematics - Optimization and Control
- Abstract
We investigate the problem of optimal transport in the so-called Beckmann form, i.e. given two Radon measures on a compact set, we seek an optimal flow field which is a vector valued Radon measure on the same set that describes a flow between these two measures and minimizes a certain linear cost function. We consider $L^\alpha$ regularization of the problem, which guarantees uniqueness and forces the solution to be an integrable function rather than a Radon measure. This regularization naturally gives rise to a semi-smooth Newton scheme that can be used to solve the problem numerically. Besides motivating and developing the numerical scheme, we also include approximation results for vanishing regularization in the continuous setting.
- Published
- 2022
20. Chambolle-Pock's Primal-Dual Method with Mismatched Adjoint
- Author
- Lorenz, Dirk A. and Schneppe, Felix
- Subjects
- Mathematics - Optimization and Control, Mathematics - Numerical Analysis, 49M29, 90C25, 65K10
- Abstract
The primal-dual method of Chambolle and Pock is a widely used algorithm to solve various optimization problems written as convex-concave saddle point problems. Each update step involves the application of both the forward linear operator and its adjoint. However, in practical applications like computerized tomography, it is often computationally favourable to replace the adjoint operator by a computationally more efficient approximation. This leads to an adjoint mismatch in the algorithm. In this paper, we analyze the convergence of Chambolle-Pock's primal-dual method in the presence of a mismatched adjoint in the strongly convex setting. We present an upper bound on the error of the primal solution and derive stepsizes and mild conditions under which convergence to a fixed point is still guaranteed. Furthermore, we show linear convergence similar to the result of Chambolle-Pock's primal-dual method without the adjoint mismatch. Moreover, we illustrate our results both for an academic and a real-world inspired application.
- Published
- 2022
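For a sense of where the mismatch enters, here is a sketch of the Chambolle-Pock iteration for a simple strongly convex least-squares model problem. The matrix V used in place of K^T in the primal update is the knob that introduces the adjoint mismatch; the model problem, prox computations and stepsizes are our own illustration, not taken from the paper:

```python
import numpy as np

def chambolle_pock(K, V, b, lam=1.0, iters=3000):
    """Chambolle-Pock for min_x (lam/2)||x||^2 + (1/2)||K x - b||^2.

    The dual update uses K, the primal update uses V; V = K.T is the
    exact adjoint, and any other V models an adjoint mismatch.
    """
    m, n = K.shape
    L = np.linalg.norm(K, 2)
    tau = sigma = 0.9 / L              # so that tau * sigma * ||K||^2 < 1
    x = np.zeros(n)
    xbar = np.zeros(n)
    y = np.zeros(m)
    for _ in range(iters):
        # prox of sigma * F^* for F = (1/2)||. - b||^2
        y = (y + sigma * (K @ xbar - b)) / (1 + sigma)
        # prox of tau * G for G = (lam/2)||.||^2, with V in place of K^T
        x_new = (x - tau * (V @ y)) / (1 + tau * lam)
        xbar = 2 * x_new - x
        x = x_new
    return x

rng = np.random.default_rng(0)
K = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x = chambolle_pock(K, K.T, b)                           # exact adjoint
x_star = np.linalg.solve(np.eye(10) + K.T @ K, K.T @ b) # closed form
```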
21. Nonconvex flexible sparsity regularization: theory and monotone numerical schemes
- Author
- Ghilli, Daria, Lorenz, Dirk A., and Resmerita, Elena
- Subjects
- Mathematics - Optimization and Control, Mathematics - Analysis of PDEs, Mathematics - Numerical Analysis, 49XX, 65KXX, 90CXX
- Abstract
Flexible sparsity regularization means stably approximating sparse solutions of operator equations by using coefficient-dependent penalizations. We propose and analyse a general nonconvex approach in this respect, from both theoretical and numerical perspectives. Namely, we show convergence of the regularization method and establish convergence properties of a couple of majorization approaches for the associated nonconvex problems. We also test a monotone algorithm for an academic example where the operator is an $M$-matrix, and on a time-dependent optimal control problem, pointing out the advantages of employing variable penalties over a fixed penalty.
- Published
- 2021
22. Degenerate Preconditioned Proximal Point algorithms
- Author
- Bredies, Kristian, Chenchene, Enis, Lorenz, Dirk A., and Naldi, Emanuele
- Subjects
- Mathematics - Optimization and Control, 47H05, 47H09, 47N10, 90C25
- Abstract
In this paper we describe a systematic procedure to analyze the convergence of degenerate preconditioned proximal point algorithms. We establish weak convergence results under mild assumptions that can be easily employed in the context of splitting methods for monotone inclusion and convex minimization problems. Moreover, we show that the degeneracy of the preconditioner allows for a reduction of the variables involved in the iteration updates. We show the strength of the proposed framework in the context of splitting algorithms, providing new simplified proofs of convergence and highlighting the link between existing schemes, such as Chambolle-Pock, Forward Douglas-Rachford and Peaceman-Rachford, which we study from a preconditioned proximal point perspective. The proposed framework allows one to devise new flexible schemes and provides new ways to generalize existing splitting schemes to the case of the sum of many terms. As an example, we present a new sequential generalization of Forward Douglas-Rachford along with numerical experiments that demonstrate its usefulness in the context of nonsmooth convex optimization.
- Published
- 2021
23. Faster randomized block sparse Kaczmarz by averaging
- Author
- Tondji, Lionel and Lorenz, Dirk A.
- Published
- 2023
- Full Text
- View/download PDF
24. Regularization of Inverse Problems by Filtered Diagonal Frame Decomposition
- Author
- Ebner, Andrea, Frikel, Jürgen, Lorenz, Dirk, Schwab, Johannes, and Haltmeier, Markus
- Subjects
- Mathematics - Numerical Analysis, 65J20, 65J22, 45F05
- Abstract
The characteristic feature of inverse problems is their instability with respect to data perturbations. In order to stabilize the inversion process, regularization methods have to be developed and applied. In this work we introduce and analyze the concept of filtered diagonal frame decomposition, which extends the standard filtered singular value decomposition to the frame case. Frames, as generalized singular systems, allow one to better adapt to a given class of potential solutions. In this paper, we show that the filtered diagonal frame decomposition yields a convergent regularization method. Moreover, we derive convergence rates under source type conditions and prove order optimality under the assumption that the considered frame is a Riesz basis.
- Published
- 2020
25. Orlicz space regularization of continuous optimal transport problems
- Author
- Lorenz, Dirk and Mahler, Hinrich
- Subjects
- Mathematics - Optimization and Control, 49Q20, 49J45, 90C25
- Abstract
In this work we analyze regularized optimal transport problems in the so-called Kantorovich form, i.e. given two Radon measures on two compact sets, the aim is to find a transport plan, which is another Radon measure on the product of the sets, that has these two measures as marginals and minimizes the sum of a certain linear cost function and a regularization term. We focus on regularization terms where a Young's function applied to the (density of the) transport plan is integrated against a product measure. This forces the transport plan to belong to a certain Orlicz space. The predual problem is derived and proofs for strong duality and existence of primal solutions of the regularized problem are presented. Existence of (pre-)dual solutions is shown for the special case of $L^p$ regularization for $p\geq 2$. Moreover, two results regarding $\Gamma$-convergence are stated: The first is concerned with marginals that do not lie in the appropriate Orlicz space and guarantees $\Gamma$-convergence to the original Kantorovich problem when smoothing the marginals. The second result gives convergence of a regularized and discretized problem to the unregularized, continuous problem.
- Published
- 2020
- Full Text
- View/download PDF
26. Orlicz-space regularization for optimal transport and algorithms for quadratic regularization
- Author
- Lorenz, Dirk A. and Mahler, Hinrich
- Subjects
- Mathematics - Optimization and Control
- Abstract
We investigate the continuous optimal transport problem in the so-called Kantorovich form, i.e. given two Radon measures on two compact sets, we seek an optimal transport plan which is another Radon measure on the product of the sets that has these two measures as marginals and minimizes a certain cost function. We consider regularization of the problem with so-called Young's functions, which forces the optimal transport plan to be a function in the corresponding Orlicz space rather than a Radon measure. We derive the predual problem and show strong duality and existence of primal solutions to the regularized problem. Existence of (pre-)dual solutions will be shown for the special case of $L^p$ regularization for $p\geq2$. Then we derive four algorithms to solve the dual problem of the quadratically regularized problem: a cyclic projection method, a dual gradient descent, a simple fixed point method, and Nesterov's accelerated gradient, all of which have a very low cost per iteration.
- Published
- 2019
27. Entropic regularization of continuous optimal transport problems
- Author
- Clason, Christian, Lorenz, Dirk A., Mahler, Hinrich, and Wirth, Benedikt
- Subjects
- Mathematics - Optimization and Control
- Abstract
We analyze continuous optimal transport problems in the so-called Kantorovich form, where we seek a transport plan between two marginals that are probability measures on compact subsets of Euclidean space. We consider the case of regularization with the negative entropy with respect to the Lebesgue measure, which has attracted attention because it can be solved by the very simple Sinkhorn algorithm. We first analyze the regularized problem in the context of classical Fenchel duality and derive a strong duality result for a predual problem in the space of continuous functions. However, this problem may not admit a minimizer, which prevents obtaining primal-dual optimality conditions. We then show that the primal problem is naturally analyzed in the Orlicz space of functions with finite entropy in the sense that the entropically regularized problem admits a minimizer if and only if the marginals have finite entropy. We then derive a dual problem in the corresponding dual space, for which existence can be shown by purely variational arguments and primal-dual optimality conditions can be derived. For marginals that do not have finite entropy, we finally show Gamma-convergence of the regularized problem with smoothed marginals to the original Kantorovich problem.
- Published
- 2019
- Full Text
- View/download PDF
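In the discrete setting, the "very simple Sinkhorn algorithm" mentioned in the abstract is just alternating diagonal scaling of a kernel matrix. A minimal sketch; the grid, marginals and regularization parameter are illustrative, and the continuous analysis of the paper is of course far more delicate than this finite-dimensional picture:

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.5, iters=500):
    """Sinkhorn iterations for entropically regularized discrete OT:
    find diagonal scalings u, v so that diag(u) K diag(v) has the
    prescribed marginals, where K = exp(-C / eps)."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(iters):
        v = nu / (K.T @ u)   # match column marginals
        u = mu / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]   # transport plan

# two discrete marginals on a grid in [0, 1] with quadratic cost
grid = np.linspace(0.0, 1.0, 8)
mu = np.full(8, 1 / 8)
nu = np.linspace(1.0, 2.0, 8)
nu /= nu.sum()
C = (grid[:, None] - grid[None, :])**2
P = sinkhorn(mu, nu, C)
```

After the final u-update the row marginals of P match mu exactly, while the column marginals converge linearly to nu.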
28. Quadratically regularized optimal transport
- Author
- Lorenz, Dirk A., Manns, Paul, and Meyer, Christian
- Subjects
- Mathematics - Optimization and Control, Mathematics - Numerical Analysis, 49Q20, 65D99, 90C25
- Abstract
We investigate the problem of optimal transport in the so-called Kantorovich form, i.e. given two Radon measures on two compact sets, we seek an optimal transport plan which is another Radon measure on the product of the sets that has these two measures as marginals and minimizes a certain cost function. We consider quadratic regularization of the problem, which forces the optimal transport plan to be a square integrable function rather than a Radon measure. We derive the dual problem and show strong duality and existence of primal and dual solutions to the regularized problem. Then we derive two algorithms to solve the dual problem of the regularized problem: A Gauss-Seidel method and a semismooth quasi-Newton method and investigate both methods numerically. Our experiments show that the methods perform well even for small regularization parameters. Quadratic regularization is of interest since the resulting optimal transport plans are sparse, i.e. they have a small support (which is not the case for the often used entropic regularization where the optimal transport plan always has full measure).
- Published
- 2019
29. Damage identification in fiber metal laminates using Bayesian analysis with model order reduction
- Author
- Bellam Muralidhar, Nanda Kishore, Gräßle, Carmen, Rauter, Natalie, Mikhaylenko, Andrey, Lammering, Rolf, and Lorenz, Dirk A.
- Published
- 2023
- Full Text
- View/download PDF
30. Chambolle–Pock’s Primal-Dual Method with Mismatched Adjoint
- Author
- Lorenz, Dirk A. and Schneppe, Felix
- Published
- 2023
- Full Text
- View/download PDF
31. Sarrus rules and dihedral groups
- Author
- Lorenz, Dirk A. and Wirths, Karl-Joachim
- Subjects
- Mathematics - Combinatorics, 15A15
- Abstract
This paper is devoted to the analysis of a false generalization of the rule of Sarrus and its properties that can be derived with the help of dihedral groups. Further, we discuss a Sarrus-like scheme that could be helpful for students to memorize the calculation of a $4\times 4$ determinant., Comment: To appear in "The College Mathematics Journal"
- Published
- 2018
- Full Text
- View/download PDF
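The "false generalization" discussed in this entry is easy to demonstrate numerically: the classical Sarrus scheme is exact for $3\times 3$ matrices, but the analogous wrap-around diagonal scheme for $4\times 4$ matrices produces only 8 of the 24 signed products of the determinant and therefore fails in general. A small sketch, where the specific $4\times 4$ scheme below is one plausible reading of the false rule and not necessarily the exact scheme analyzed in the paper:

```python
import numpy as np

def sarrus_3x3(A):
    """Rule of Sarrus: sum of the three down-diagonals minus the three up-diagonals."""
    return (A[0,0]*A[1,1]*A[2,2] + A[0,1]*A[1,2]*A[2,0] + A[0,2]*A[1,0]*A[2,1]
            - A[0,2]*A[1,1]*A[2,0] - A[0,0]*A[1,2]*A[2,1] - A[0,1]*A[1,0]*A[2,2])

def naive_sarrus_4x4(A):
    """A *false* 4x4 generalization: wrap-around diagonals only.

    This yields only 8 products, but a 4x4 determinant has 24 signed terms,
    so the scheme cannot be correct in general."""
    n = 4
    down = sum(np.prod([A[i, (i + k) % n] for i in range(n)]) for k in range(n))
    up = sum(np.prod([A[i, (k - i) % n] for i in range(n)]) for k in range(n))
    return down - up
```

For instance, for the circulant-like matrix with rows (1,2,0,0), (0,1,2,0), (0,0,1,2), (2,0,0,1), the true determinant is -15 while the wrap-around scheme gives 17.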
32. Primal-dual residual networks
- Author
-
Brauer, Christoph and Lorenz, Dirk
- Subjects
Statistics - Machine Learning ,Computer Science - Learning ,Mathematics - Optimization and Control - Abstract
In this work, we propose a deep neural network architecture motivated by primal-dual splitting methods from convex optimization. We show theoretically that there exists a close relation between the derived architecture and residual networks, and further investigate this connection in numerical experiments. Moreover, we demonstrate how our approach can be used to unroll optimization algorithms for certain problems with hard constraints. Using the example of speech dequantization, we show that our method can outperform classical splitting methods when both are applied to the same task.
- Published
- 2018
33. The Randomized Kaczmarz Method with Mismatched Adjoint
- Author
-
Lorenz, Dirk A., Rose, Sean, and Schöpfer, Frank
- Subjects
Mathematics - Numerical Analysis ,Computer Science - Numerical Analysis ,Mathematics - Optimization and Control ,65F10, 68W20, 15A24 - Abstract
This paper investigates the randomized version of the Kaczmarz method to solve linear systems in the case where the adjoint of the system matrix is not exact---a situation we refer to as "mismatched adjoint". We show that the method may still converge both in the over- and underdetermined consistent case under appropriate conditions, and we calculate the expected asymptotic rate of linear convergence. Moreover, we analyze the inconsistent case and obtain results for the method with mismatched adjoint analogous to those for the standard method. Finally, we derive a method to compute optimized probabilities for the row selection and illustrate our findings with numerical examples.
- Published
- 2018
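The iteration studied in this entry can be sketched in a few lines: the i-th row of a second matrix V replaces the i-th row of A as the update direction, and V = A recovers the standard randomized Kaczmarz method. This toy code (our own, not the authors' implementation; the convergence conditions and rates are the subject of the paper) samples rows with probability proportional to their squared norms:

```python
import numpy as np

def randomized_kaczmarz(A, b, V=None, iters=3000, seed=0):
    """Randomized Kaczmarz with a possibly mismatched adjoint.

    The i-th row of V is used as the update direction in place of the
    i-th row of A; with V = A (or V = None) this is the standard method.
    Rows are sampled with probability proportional to ||a_i||^2.
    """
    if V is None:
        V = A
    rng = np.random.default_rng(seed)
    probs = np.sum(A**2, axis=1)
    probs /= probs.sum()
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(len(b), p=probs)
        a, v = A[i], V[i]
        x = x + (b[i] - a @ x) / (a @ v) * v  # projection-like step along v
    return x
```

Note that any solution of Ax = b remains a fixed point of the iteration regardless of V; the mismatch affects whether, and how fast, the iterates contract toward it.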
34. Non-stationary Douglas-Rachford and alternating direction method of multipliers: adaptive stepsizes and convergence
- Author
-
Lorenz, Dirk A. and Tran-Dinh, Quoc
- Subjects
Mathematics - Optimization and Control ,Mathematics - Numerical Analysis ,Statistics - Machine Learning ,90C25, 65K05, 65J15, 47H05 - Abstract
We revisit the classical Douglas-Rachford (DR) method for finding a zero of the sum of two maximal monotone operators. Since the practical performance of the DR method crucially depends on the stepsizes, we aim at developing an adaptive stepsize rule. To that end, we take a closer look at a linear case of the problem and use our findings to develop a stepsize strategy that eliminates the need for stepsize tuning. We analyze a general non-stationary DR scheme and prove its convergence for a convergent sequence of stepsizes with summable increments. This, in turn, proves the convergence of the method with the new adaptive stepsize rule. We also derive the related non-stationary alternating direction method of multipliers (ADMM) from such a non-stationary DR method. We illustrate the efficiency of the proposed methods on several numerical examples.
- Published
- 2018
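For reference, the stationary Douglas-Rachford iteration can be sketched on a problem whose solution is known in closed form; the adaptive rule of the paper would replace the fixed stepsize t below by a convergent sequence t_k. The objective and all parameters here are our illustrative choices, not the paper's examples:

```python
import numpy as np

def soft(x, tau):
    """Soft-thresholding, the prox of tau*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def douglas_rachford(a, lam, t=1.0, iters=500):
    """Stationary Douglas-Rachford for min 0.5*||x-a||^2 + lam*||x||_1.

    prox of f(x) = 0.5*||x-a||^2 is z -> (z + t*a)/(1 + t); prox of
    g = lam*||.||_1 is soft thresholding. The minimizer is soft(a, lam),
    which makes this a convenient correctness check.
    """
    z = np.zeros_like(a)
    for _ in range(iters):
        x = (z + t * a) / (1 + t)             # prox_{t f}(z)
        z = z + soft(2 * x - z, t * lam) - x  # reflected step through prox_{t g}
    return (z + t * a) / (1 + t)
```

Since f is strongly convex, the iteration above contracts linearly for any fixed t > 0; the practical rate, however, depends strongly on t, which is the motivation for the adaptive stepsize strategy described in the abstract.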
35. Denoising of image gradients and total generalized variation denoising
- Author
-
Komander, Birgit, Lorenz, Dirk A., and Vestweber, Lena
- Subjects
Mathematics - Optimization and Control ,Computer Science - Computer Vision and Pattern Recognition ,Mathematics - Numerical Analysis - Abstract
We revisit total variation denoising and study an augmented model where we assume that an estimate of the image gradient is available. We show that this increases the image reconstruction quality and derive that the resulting model resembles the total generalized variation denoising method, thus providing a new motivation for this model. Further, we propose to use a constrained denoising model and develop a variational denoising model that is essentially parameter free, i.e. all model parameters are estimated directly from the noisy image. Moreover, we use Chambolle-Pock's primal-dual method as well as the Douglas-Rachford method for the new models. For the latter, one has to solve large discretizations of partial differential equations. We propose to do this in an inexact manner using the preconditioned conjugate gradient method and derive preconditioners for this. Numerical experiments show that the resulting method has good denoising properties and also that preconditioning does increase convergence speed significantly. Finally we analyze the duality gap of different formulations of the TGV denoising problem and derive a simple stopping criterion.
- Published
- 2017
36. A Sinkhorn-Newton method for entropic optimal transport
- Author
-
Brauer, Christoph, Clason, Christian, Lorenz, Dirk, and Wirth, Benedikt
- Subjects
Mathematics - Optimization and Control ,Mathematics - Numerical Analysis ,Statistics - Machine Learning - Abstract
We consider the entropic regularization of discretized optimal transport and propose to solve its optimality conditions via a logarithmic Newton iteration. We show a quadratic convergence rate and validate numerically that the method compares favorably with the more commonly used Sinkhorn--Knopp algorithm for small regularization strength. We further investigate numerically the robustness of the proposed method with respect to parameters such as the mesh size of the discretization.
- Published
- 2017
37. An extended Perona-Malik model based on probabilistic models
- Author
-
Mescheder, Lars M. and Lorenz, Dirk A.
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Mathematics - Numerical Analysis ,Statistics - Machine Learning - Abstract
The Perona-Malik model has been very successful at restoring images from noisy input. In this paper, we reinterpret the Perona-Malik model in the language of Gaussian scale mixtures and derive some extensions of the model. Specifically, we show that the expectation-maximization (EM) algorithm applied to Gaussian scale mixtures leads to the lagged-diffusivity algorithm for computing stationary points of the Perona-Malik diffusion equations. Moreover, we show how mean field approximations to these Gaussian scale mixtures lead to a modification of the lagged-diffusivity algorithm that better captures the uncertainties in the restoration. Since this modification can be hard to compute in practice, we propose relaxations to the mean field objective to make the algorithm computationally feasible. Our numerical experiments show that this modified lagged-diffusivity algorithm often performs better at restoring textured areas and fuzzy edges than the unmodified algorithm. As a second application of the Gaussian scale mixture framework, we show how an efficient sampling procedure can be obtained for the probabilistic model, making the computation of the conditional mean and other expectations algorithmically feasible. Again, the resulting algorithm has a strong resemblance to the lagged-diffusivity algorithm. Finally, we show that a probabilistic version of the Mumford-Shah segmentation model can be obtained in the same framework with a discrete edge-prior.
- Published
- 2016
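The lagged-diffusivity fixed point that this entry reinterprets freezes the nonlinear diffusivity at the current iterate and solves a linear elliptic problem per sweep. A 1D sketch, where the discretization and all parameter choices are ours, purely for illustration:

```python
import numpy as np

def lagged_diffusivity_1d(y, tau=1.0, kappa=1.0, outer=20):
    """Lagged-diffusivity iteration for 1D Perona-Malik-type denoising.

    Each sweep freezes the diffusivity g at the current iterate and solves
    the linear system (I + tau * D^T diag(g) D) u = y, where D is the
    forward-difference operator.
    """
    n = len(y)
    u = y.copy()
    for _ in range(outer):
        du = np.diff(u)
        g = 1.0 / (1.0 + (du / kappa) ** 2)   # frozen Perona-Malik diffusivity
        A = np.eye(n)                          # assemble I + tau * D^T diag(g) D
        for i in range(n - 1):
            A[i, i] += tau * g[i]; A[i + 1, i + 1] += tau * g[i]
            A[i, i + 1] -= tau * g[i]; A[i + 1, i] -= tau * g[i]
        u = np.linalg.solve(A, y)
    return u
```

Constant signals are exact fixed points (the discrete diffusion of a constant vanishes), while noisy signals are smoothed with edge-dependent strength.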
38. Learning variational models with unrolling and bilevel optimization
- Author
-
Brauer, Christoph, primary, Breustedt, Niklas, additional, de Wolff, Timo, additional, and Lorenz, Dirk A., additional
- Published
- 2024
- Full Text
- View/download PDF
39. The degenerate variable metric proximal point algorithm and adaptive stepsizes for primal–dual Douglas–Rachford
- Author
-
Lorenz, Dirk A., primary, Marquardt, Jannis, additional, and Naldi, Emanuele, additional
- Published
- 2024
- Full Text
- View/download PDF
40. A Primal-Dual Homotopy Algorithm for $\ell_{1}$-Minimization with $\ell_{\infty}$-Constraints
- Author
-
Brauer, Christoph, Lorenz, Dirk A., and Tillmann, Andreas M.
- Subjects
Mathematics - Optimization and Control ,Computer Science - Numerical Analysis ,Mathematics - Numerical Analysis ,90C05, 90C25, 65K05 - Abstract
In this paper we propose a primal-dual homotopy method for $\ell_1$-minimization problems with infinity norm constraints in the context of sparse reconstruction. The natural homotopy parameter is the value of the bound for the constraints and we show that there exists a piecewise linear solution path with finitely many break points for the primal problem and a respective piecewise constant path for the dual problem. We show that by solving a small linear program, one can jump to the next primal break point and then, solving another small linear program, a new optimal dual solution is calculated which enables the next such jump in the subsequent iteration. Using a theorem of the alternative, we show that the method never gets stuck and indeed calculates the whole path in a finite number of steps. Numerical experiments demonstrate the effectiveness of our algorithm. In many cases, our method significantly outperforms commercial LP solvers; this is possible since our approach employs a sequence of considerably simpler auxiliary linear programs that can be solved efficiently with specialized active-set strategies.
- Published
- 2016
41. Linear convergence of the Randomized Sparse Kaczmarz Method
- Author
-
Schöpfer, Frank and Lorenz, Dirk A.
- Subjects
Mathematics - Optimization and Control ,Computer Science - Numerical Analysis ,Mathematics - Numerical Analysis ,65F10, 68W20, 90C25 - Abstract
The randomized version of the Kaczmarz method for the solution of linear systems is known to converge linearly in expectation. In this work we extend this result and show that the recently proposed Randomized Sparse Kaczmarz method for recovery of sparse solutions, as well as many variants, also converges linearly in expectation. The result is achieved in the framework of split feasibility problems and their solution by randomized Bregman projections with respect to strongly convex functions. To obtain the expected convergence rates we prove extensions of error bounds for projections. The convergence result is shown to hold in more general settings involving smooth convex functions, piecewise linear-quadratic functions and also the regularized nuclear norm, which is used in the area of low rank matrix problems. Numerical experiments indicate that the Randomized Sparse Kaczmarz method provides advantages over both the non-randomized and the non-sparse Kaczmarz methods for the solution of over- and under-determined linear systems.
- Published
- 2016
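The randomized sparse Kaczmarz iteration analyzed in this entry keeps an auxiliary variable that is soft-thresholded in every step; with shrinkage parameter zero it reduces to the plain randomized Kaczmarz method. A toy sketch (our own illustrative code, not the authors' implementation) for consistent systems:

```python
import numpy as np

def sparse_kaczmarz(A, b, lam=1.0, iters=20000, seed=0):
    """Randomized sparse Kaczmarz: Bregman-projection steps with soft thresholding.

    For consistent systems, the iterates converge (linearly in expectation,
    under the conditions of the paper) to the solution of
        min lam*||x||_1 + 0.5*||x||_2^2   s.t.   Ax = b.
    lam = 0 recovers the standard randomized Kaczmarz method.
    """
    rng = np.random.default_rng(seed)
    probs = np.sum(A**2, axis=1)
    probs /= probs.sum()
    z = np.zeros(A.shape[1])
    x = z.copy()
    for _ in range(iters):
        i = rng.choice(len(b), p=probs)
        z = z + (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]   # dual (Bregman) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)  # soft thresholding
    return x
```

The soft-thresholding step is what produces sparse iterates; the dual variable z accumulates the row corrections exactly as in the standard method.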
42. Rank-optimal weighting or 'How to be best in the OECD Better Life Index?'
- Author
-
Lorenz, Jan, Brauer, Christoph, and Lorenz, Dirk A.
- Subjects
Mathematics - Optimization and Control ,Quantitative Finance - General Finance ,Statistics - Applications - Abstract
We present a method of rank-optimal weighting which can be used to explore the best possible position of a subject in a ranking based on a composite indicator by means of a mathematical optimization problem. As an example, we explore the dataset of the OECD Better Life Index and compute for each country a weight vector which brings it as far up in the ranking as possible with the greatest advance of the immediate rivals. The method is able to answer the question "What is the best possible rank a country can achieve with a given set of weighted indicators?" Typically, weights in composite indicators are justified normatively and not empirically. Our approach helps to give bounds on what is achievable by such normative judgments from a purely output-oriented and strongly competitive perspective. The method can serve as a basis for exact bounds in sensitivity analysis focused on ranking positions. In the OECD Better Life Index data we find that 19 out of the 36 countries in the OECD Better Life Index 2014 can be brought to the top of the ranking by specific weights. We give a table of weights for each country which brings it to its highest possible position. Many countries achieve their best rank by focusing on their strong dimensions and setting the weights of many others to zero. Although setting dimensions to zero is possible in the OECD's online tool, this contradicts the idea of better life being multidimensional in essence. We discuss modifications of the optimization problem which could take this into account, e.g. by allowing only a minimal weight of one. Methods to find rank-optimal weights can be useful for various multidimensional datasets like the ones used to rank universities or employers., Comment: 16 pages, 6 figures
- Published
- 2016
- Full Text
- View/download PDF
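The optimization described in this entry can be written as a small linear program: maximize the minimum weighted margin of the chosen subject over all rivals, subject to the weights lying in the simplex. The following is our own sketch of such a formulation, based on the abstract's description; the paper's exact model (e.g. how "the greatest advance of the immediate rivals" is encoded) may differ in detail:

```python
import numpy as np
from scipy.optimize import linprog

def best_rank_margin(X, c):
    """Rank-optimal weighting as a linear program.

    X: rows are subjects (e.g. countries), columns are normalized indicators.
    Finds nonnegative weights summing to one that maximize the minimal margin
    of subject c over all rivals; a positive optimal value means subject c
    can be ranked first by some choice of weights.
    """
    n, d = X.shape
    diffs = X[c] - np.delete(X, c, axis=0)           # margins over each rival
    cost = np.zeros(d + 1)                            # variables: [w_1..w_d, t]
    cost[-1] = -1.0                                   # maximize t == minimize -t
    A_ub = np.hstack([-diffs, np.ones((n - 1, 1))])   # t <= w . (x_c - x_r)
    b_ub = np.zeros(n - 1)
    A_eq = np.ones((1, d + 1))
    A_eq[0, -1] = 0.0                                 # sum of weights = 1
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * d + [(None, None)], method="highs")
    return res.x[:d], -res.fun
```

On a toy dataset with three subjects and two indicators, a subject that dominates one indicator can be pushed to the top (positive margin), while a subject that is second everywhere cannot (negative optimal margin).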
43. Flexible sparse regularization
- Author
-
Lorenz, Dirk A. and Resmerita, Elena
- Subjects
Mathematics - Numerical Analysis ,65J20, 46A19, 47A52 - Abstract
The seminal paper of Daubechies, Defrise, and De Mol made clear that $\ell^p$ spaces with $p\in [1,2)$ and $p$-powers of the corresponding norms are appropriate settings for dealing with reconstruction of sparse solutions of ill-posed problems by regularization. It seems that the case $p=1$ provides the best results in most of the situations compared to the cases $p\in (1,2)$. An extensive literature also gives great credit to using $\ell^p$ spaces with $p\in (0,1)$ together with the corresponding quasinorms, although one has to tackle challenging numerical problems raised by the non-convexity of the quasi-norms. In any of these settings, whether superlinear, linear, or sublinear, the question of how to choose the exponent $p$ has been not only a numerical issue, but also a philosophical one. In this work we introduce a more flexible way of sparse regularization by varying exponents. We introduce the corresponding functional analytic framework, which leaves the setting of normed spaces but works with so-called F-norms. One curious result is that there are F-norms which generate the $\ell^1$ space but are strictly convex, while the $\ell^1$-norm is merely convex.
- Published
- 2016
- Full Text
- View/download PDF
44. Quadratically Regularized Optimal Transport
- Author
-
Lorenz, Dirk A., Manns, Paul, and Meyer, Christian
- Published
- 2021
- Full Text
- View/download PDF
45. Variational Methods
- Author
-
Bredies, Kristian, Lorenz, Dirk, Benedetto, John, Series Editor, Aldroubi, Akram, Editorial Board Member, Cochran, Douglas, Editorial Board Member, Feichtinger, Hans, Editorial Board Member, Heil, Christopher, Editorial Board Member, Jaffard, Stéphane, Editorial Board Member, Kovačević, Jelena, Editorial Board Member, Kutyniok, Gitta, Editorial Board Member, Maggioni, Mauro, Editorial Board Member, Shen, Zuowei, Editorial Board Member, Strohmer, Thomas, Editorial Board Member, Wang, Yang, Editorial Board Member, Bredies, Kristian, and Lorenz, Dirk
- Published
- 2018
- Full Text
- View/download PDF
46. Partial Differential Equations in Image Processing
- Author
-
Bredies, Kristian, Lorenz, Dirk, Benedetto, John, Series Editor, Aldroubi, Akram, Editorial Board Member, Cochran, Douglas, Editorial Board Member, Feichtinger, Hans, Editorial Board Member, Heil, Christopher, Editorial Board Member, Jaffard, Stéphane, Editorial Board Member, Kovačević, Jelena, Editorial Board Member, Kutyniok, Gitta, Editorial Board Member, Maggioni, Mauro, Editorial Board Member, Shen, Zuowei, Editorial Board Member, Strohmer, Thomas, Editorial Board Member, Wang, Yang, Editorial Board Member, Bredies, Kristian, and Lorenz, Dirk
- Published
- 2018
- Full Text
- View/download PDF
47. Frequency and Multiscale Methods
- Author
-
Bredies, Kristian, Lorenz, Dirk, Benedetto, John, Series Editor, Aldroubi, Akram, Editorial Board Member, Cochran, Douglas, Editorial Board Member, Feichtinger, Hans, Editorial Board Member, Heil, Christopher, Editorial Board Member, Jaffard, Stéphane, Editorial Board Member, Kovačević, Jelena, Editorial Board Member, Kutyniok, Gitta, Editorial Board Member, Maggioni, Mauro, Editorial Board Member, Shen, Zuowei, Editorial Board Member, Strohmer, Thomas, Editorial Board Member, Wang, Yang, Editorial Board Member, Bredies, Kristian, and Lorenz, Dirk
- Published
- 2018
- Full Text
- View/download PDF
48. Basic Tools
- Author
-
Bredies, Kristian, Lorenz, Dirk, Benedetto, John, Series Editor, Aldroubi, Akram, Editorial Board Member, Cochran, Douglas, Editorial Board Member, Feichtinger, Hans, Editorial Board Member, Heil, Christopher, Editorial Board Member, Jaffard, Stéphane, Editorial Board Member, Kovačević, Jelena, Editorial Board Member, Kutyniok, Gitta, Editorial Board Member, Maggioni, Mauro, Editorial Board Member, Shen, Zuowei, Editorial Board Member, Strohmer, Thomas, Editorial Board Member, Wang, Yang, Editorial Board Member, Bredies, Kristian, and Lorenz, Dirk
- Published
- 2018
- Full Text
- View/download PDF
49. Introduction
- Author
-
Bredies, Kristian, Lorenz, Dirk, Benedetto, John, Series Editor, Aldroubi, Akram, Editorial Board Member, Cochran, Douglas, Editorial Board Member, Feichtinger, Hans, Editorial Board Member, Heil, Christopher, Editorial Board Member, Jaffard, Stéphane, Editorial Board Member, Kovačević, Jelena, Editorial Board Member, Kutyniok, Gitta, Editorial Board Member, Maggioni, Mauro, Editorial Board Member, Shen, Zuowei, Editorial Board Member, Strohmer, Thomas, Editorial Board Member, Wang, Yang, Editorial Board Member, Bredies, Kristian, and Lorenz, Dirk
- Published
- 2018
- Full Text
- View/download PDF
50. Mathematical Preliminaries
- Author
-
Bredies, Kristian, Lorenz, Dirk, Benedetto, John, Series Editor, Aldroubi, Akram, Editorial Board Member, Cochran, Douglas, Editorial Board Member, Feichtinger, Hans, Editorial Board Member, Heil, Christopher, Editorial Board Member, Jaffard, Stéphane, Editorial Board Member, Kovačević, Jelena, Editorial Board Member, Kutyniok, Gitta, Editorial Board Member, Maggioni, Mauro, Editorial Board Member, Shen, Zuowei, Editorial Board Member, Strohmer, Thomas, Editorial Board Member, Wang, Yang, Editorial Board Member, Bredies, Kristian, and Lorenz, Dirk
- Published
- 2018
- Full Text
- View/download PDF