In this thesis, global surrogate models for responses of expensive simulations are investigated. Computational fluid dynamics (CFD) has become an indispensable tool in the aircraft industry, but simulations of realistic aircraft configurations remain challenging and computationally expensive despite sustained advances in computing power. With the demand for numerous simulations to describe the behavior of an output quantity over a design space, the need for surrogate models arises. They are easy to evaluate and approximate the quantities of interest of a computer code. Only a small number of evaluations of the simulation is stored and used to determine the behavior of the response over the whole input parameter domain. The Kriging method is capable of interpolating highly nonlinear, deterministic functions based on scattered datasets. Using correlation functions, distinct sensitivities of the response with respect to the input parameters can be taken into account automatically. Kriging can be extended to incorporate not only evaluations of the simulation but also gradient information; this is called gradient-enhanced Kriging. Adaptive sampling strategies can generate more efficient surrogate models. Contrary to traditional one-stage approaches, the surrogate model is built step by step. In every stage of an adaptive process, the current surrogate is assessed in order to determine new sample locations, where the response is evaluated, and the new samples are added to the existing set of samples. In this way, the sampling strategy learns about the behavior of the response, and a problem-specific design is generated. Critical regions of the input parameter space are identified automatically and sampled more densely to reproduce the response's behavior correctly. The number of required expensive simulations is decreased considerably. All these approaches treat the response itself more or less as the unknown output of a black box.
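The Kriging interpolation described above can be sketched in a few lines of Python. The Gaussian correlation function, the 1D toy data, the correlation parameter theta, and the small nugget term are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def kriging_interpolate(X, y, x_new, theta=10.0):
    """Ordinary Kriging predictor with a Gaussian correlation function.

    X: sample locations, y: stored responses, x_new: query point.
    theta is a hypothetical correlation-length parameter.
    """
    def corr(a, b):
        return np.exp(-theta * (a - b) ** 2)

    n = len(X)
    R = corr(X[:, None], X[None, :]) + 1e-10 * np.eye(n)  # correlations + nugget
    r = corr(X, x_new)                                    # correlation to query
    ones = np.ones(n)
    Rinv = np.linalg.inv(R)
    mu = (ones @ Rinv @ y) / (ones @ Rinv @ ones)         # generalized LS mean
    return mu + r @ Rinv @ (y - mu * ones)                # BLUP prediction

# The surrogate reproduces the stored samples exactly (interpolation)
# and predicts in between without further calls to the simulation.
X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.sin(np.pi * X)
```

Because the predictor interpolates, evaluating it at a stored sample returns the stored response, while predictions between samples are weighted by the correlation structure.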
A new approach is motivated by the assumption that for a predefined problem class, the behavior of the response is not arbitrary, but rather related to other instances of the common problem class. In CFD, for example, responses of aerodynamic coefficients share structural similarities across different airfoil geometries. The goal is to identify these similarities in a database of responses via principal component analysis and to use them for a generic surrogate model. Characteristic structures of the problem class can be used to increase the approximation quality in new test cases. Traditional approaches still require a large number of response evaluations in order to achieve a globally high approximation quality. Validating the generic surrogate model on industrially relevant test cases shows that it generates efficient surrogates, which are more accurate than common interpolations. Thus practical, i.e., affordable, surrogates are already possible for moderate sample sizes. So far, interpolation problems have been regarded as separate problems. The new approach uses the structural similarities of a common problem class innovatively for surrogate modeling. Concepts from response surface methods, variable-fidelity modeling, design of experiments, image registration, and statistical shape analysis are connected in an interdisciplinary way. Generic surrogate modeling is not restricted to aerodynamic simulation. It can be applied whenever expensive simulations can be assigned to a larger problem class in which structural similarities are expected.
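The principal-component step can be sketched as follows. The "database" of bump-shaped responses on a shared grid is a hypothetical stand-in for stored responses of a problem class, and keeping two modes is an arbitrary illustrative truncation:

```python
import numpy as np

# Hypothetical database: each row samples one response of the problem class
# on a shared grid (shifted bumps standing in for, e.g., pressure responses).
grid = np.linspace(0.0, 1.0, 50)
database = np.array([np.exp(-40.0 * (grid - c) ** 2) for c in (0.3, 0.4, 0.5, 0.6)])

mean = database.mean(axis=0)
U, S, Vt = np.linalg.svd(database - mean, full_matrices=False)  # PCA via SVD
modes = Vt[:2]   # two dominant characteristic structures of the class

def generic_surrogate(new_response):
    """Represent a new case in the basis of shared modes (orthogonal projection)."""
    coeff = modes @ (new_response - mean)
    return mean + coeff @ modes
```

Projecting onto the dominant modes captures the structure shared across the class, so a new case can be approximated from a few coefficients instead of many fresh response evaluations.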
Large-scale non-parametric applied shape optimization for computational fluid dynamics is considered. Treating a shape optimization problem as a standard optimal control problem by means of a parameterization, the Lagrangian usually requires knowledge of the partial derivative of the shape parameterization and deformation chain with respect to the input parameters. For a variety of reasons, this mesh sensitivity Jacobian is usually quite problematic. For a sufficiently smooth boundary, the Hadamard theorem provides a gradient expression that exists on the surface alone, completely bypassing the mesh sensitivity Jacobian. Building upon this, the gradient computation becomes independent of the number of design parameters, and all surface mesh nodes are used as design unknowns in this work, effectively allowing a free morphing of shapes during optimization. Contrary to a parameterized shape optimization problem, where a smooth surface is usually created independently of the input parameters by construction, regularity is not preserved automatically in the non-parametric case. As part of this work, the shape Hessian is used in an approximate Newton method, also known as the Sobolev method or gradient smoothing, to ensure a certain regularity of the updates; thus a smooth shape is preserved while at the same time the one-shot optimization method is accelerated considerably. For PDE-constrained shape optimization, the Hessian is usually a pseudo-differential operator. Fourier analysis is used to identify the operator symbol both analytically and discretely. Preconditioning the one-shot optimization by an appropriate Hessian symbol is shown to greatly accelerate the optimization. As the correct discretization of the Hadamard form usually requires evaluating certain surface quantities such as the tangential divergence and the curvature, special attention is also given to discrete differential geometry on triangulated surfaces for evaluating shape gradients and Hessians.
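The gradient-smoothing idea can be illustrated on a periodic 1D "surface": applying (I - eps*Laplacian)^(-1) to the discrete gradient damps high-frequency components while leaving smooth descent directions nearly untouched. The mesh, the operator, and eps are illustrative assumptions:

```python
import numpy as np

def sobolev_smooth(grad, eps=1.0):
    """Sobolev (gradient) smoothing: solve (I - eps * L) g_s = grad,
    where L is the periodic 1D discrete Laplacian, i.e. reproject the
    L^2 gradient into a smoother space before updating the shape."""
    n = len(grad)
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = 1.0            # periodic closure of the "surface"
    return np.linalg.solve(np.eye(n) - eps * L, grad)

# A smooth descent direction polluted by mesh-frequency oscillations:
n = 64
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
noisy = np.sin(t) + 0.5 * np.cos((n // 2) * t)   # smooth part + Nyquist noise
smooth = sobolev_smooth(noisy, eps=1.0)
```

The Fourier symbol of the smoother is 1/(1 + eps*(2 - 2 cos(2 pi k / n))): the Nyquist mode is damped by a factor of 5 here while the lowest mode is attenuated by less than one percent, which is exactly the regularity-preserving effect described above.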
The Hadamard formula and the Hessian approximations are applied to a variety of flow situations. In addition to shape optimization of internal and external flows, a major focus lies on aerodynamic design, such as optimizing two-dimensional airfoils and three-dimensional wings. Shock waves form when the local speed of sound is reached, and the gradient must be evaluated correctly at discontinuous states. To ensure proper shock resolution, an adaptive multi-level optimization of the Onera M6 wing is conducted using more than 36,000 shape unknowns on a standard office workstation, demonstrating the applicability of the shape one-shot method to industry-size problems.
Shape optimization is of interest in many fields of application. In particular, shape optimization problems arise frequently in technological processes which are modelled by partial differential equations (PDEs). In many practical circumstances, the shape under investigation is parametrized by a finite number of parameters, which, on the one hand, allows the application of standard optimization approaches but, on the other hand, unnecessarily limits the space of reachable shapes. Shape calculus presents a way to circumvent this dilemma. However, shape optimization based on shape calculus has so far mainly been performed using gradient descent methods. One reason for this is the lack of symmetry of second-order shape derivatives, or shape Hessians. A major difference between shape optimization and the standard PDE-constrained optimization framework is the lack of a linear space structure on shape spaces. If one cannot use a linear space structure, then the next best structure is that of a Riemannian manifold, in which one works with Riemannian shape Hessians. They possess the often-sought property of symmetry, characterize well-posedness of optimization problems, and define sufficient optimality conditions. In general, shape Hessians are used to accelerate gradient-based shape optimization methods. This thesis deals with shape optimization problems constrained by PDEs and embeds these problems in the framework of optimization on Riemannian manifolds in order to provide efficient techniques for PDE-constrained shape optimization problems on shape spaces. A Lagrange-Newton and a quasi-Newton technique in shape spaces for PDE-constrained shape optimization problems are formulated. These techniques are based on the Hadamard form of shape derivatives, i.e., on the form of integrals over the surface of the shape under investigation. It is often a very tedious, not to say painful, process to derive such surface expressions.
Along the way, volume formulations in the form of integrals over the entire domain appear as an intermediate step. This thesis couples volume integral formulations of shape derivatives with optimization strategies on shape spaces in order to establish efficient shape algorithms that reduce both the analytical effort and the programming work. In this context, a novel shape space is proposed.
Extension of inexact Kleinman-Newton methods to a general monotonicity preserving convergence theory
(2011)
The thesis at hand considers inexact Newton methods in combination with algebraic Riccati equations. A monotone convergence behaviour is proven, which enables non-local convergence. This relation is transferred to a general convergence theory for inexact Newton methods, securing the monotonicity of the iterates for convex or concave mappings. Several applications demonstrate the practical benefits of the newly developed theory.
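The Kleinman-Newton iteration for algebraic Riccati equations can be sketched as follows: each Newton step is a Lyapunov equation with the current closed-loop matrix, and from a stabilizing initial guess the iterates converge monotonically. The double-integrator example, its weights, and the Kronecker-based Lyapunov solver are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def solve_lyapunov(Acl, W):
    """Solve Acl^T X + X Acl + W = 0 via the Kronecker-product formulation."""
    n = Acl.shape[0]
    K = np.kron(np.eye(n), Acl.T) + np.kron(Acl.T, np.eye(n))
    return np.linalg.solve(K, -W.flatten(order="F")).reshape(n, n, order="F")

def kleinman_newton(A, B, Q, X0, iters=20):
    """Kleinman-Newton for A^T X + X A - X B B^T X + Q = 0: every iterate
    solves the Lyapunov equation of the linearized (closed-loop) problem."""
    X = X0
    for _ in range(iters):
        Acl = A - B @ B.T @ X
        X = solve_lyapunov(Acl, Q + X @ B @ B.T @ X)
    return X

# Hypothetical example: double integrator with unit weights. The stabilizing
# solution is known in closed form: X = [[sqrt(3), 1], [1, sqrt(3)]].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
X = kleinman_newton(A, B, np.eye(2), np.array([[1.0, 1.0], [1.0, 2.0]]))
```

The inexact variants studied in the thesis relax the inner Lyapunov solves; the sketch above uses exact solves only to keep the example short.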
Variational inequality problems constitute a common basis to investigate the theory and algorithms for many problems in mathematical physics, in economics, as well as in the natural and technical sciences. They appear in a variety of mathematical applications like convex programming, game theory, and economic equilibrium problems, but also in fluid mechanics, the physics of solid bodies, and others. Many variational inequalities arising from applications are ill-posed. This means, for example, that the solution is not unique, or that small deviations in the data can cause large deviations in the solution. In such a situation, standard solution methods converge very slowly or even fail. In this case, so-called regularization methods are the methods of choice. They have the advantage that an ill-posed original problem is replaced by a sequence of well-posed auxiliary problems, which have better properties (like, e.g., a unique solution and better conditioning). Moreover, a suitable choice of the regularization term can lead to unconstrained auxiliary problems that are even equivalent to optimization problems. The development and improvement of such methods are a focus of current research, in which we take part with this thesis. We suggest and investigate a logarithmic-quadratic proximal auxiliary problem (LQPAP) method that combines the advantages of the well-known proximal-point algorithm and the so-called auxiliary problem principle. Its exploration and convergence analysis constitute one of the main results of this work. The LQPAP method continues the recent developments of regularization methods. It includes different techniques presented in the literature to improve numerical stability: The logarithmic-quadratic distance function induces an interior-point effect which allows the auxiliary problems to be treated as unconstrained ones. Furthermore, outer operator approximations are considered.
This simplifies the numerical solution of variational inequalities with multi-valued operators since, for example, bundle techniques can be applied. With respect to numerical practicability, inexact solutions of the auxiliary problems are allowed using a summable-error criterion that is easy to implement. As a further advantage of the logarithmic-quadratic distance, we verify that it is self-concordant (in the sense of Nesterov/Nemirovskii). This motivates applying Newton's method to the solution of the auxiliary problems. In the numerical part of the thesis, the LQPAP method is applied to linearly constrained, differentiable and nondifferentiable convex optimization problems, as well as to nonsymmetric variational inequalities with co-coercive operators. It can often be observed that the sequence of iterates reaches the boundary of the feasible set before being close to an optimal solution. Against this background, we present the strategy of under-relaxation, which robustifies the LQPAP method. Furthermore, we compare the results with an appropriate method based on Bregman distances (BrPAP method). For differentiable, convex optimization problems, we describe the implementation of Newton's method to solve the auxiliary problems and carry out different numerical experiments. For example, an adaptive choice of the initial regularization parameter and a combination of an Armijo and a self-concordance step size are evaluated. Test examples for nonsymmetric variational inequalities are scarce in the literature. Therefore, we present a geometric and an analytic approach to generate test examples with known solution(s). To solve the auxiliary problems in the case of nondifferentiable, convex optimization problems, we apply the well-known bundle technique. The implementation is described in detail, and the involved functions and sequences of parameters are discussed. As far as possible, our analysis is substantiated by new theoretical results.
Furthermore, it is explained in detail how the bundle auxiliary problems are solved with a primal-dual interior point method. Such investigations have so far only been published for Bregman distances. The LQPAP bundle method is again applied to several test examples from the literature. Thus, this thesis builds a bridge between theoretical and numerical investigations of solution methods for variational inequalities.
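The regularizing effect of proximal auxiliary problems can be illustrated with a plain quadratic proximal distance (a simplification; the logarithmic-quadratic distance of the thesis is not reproduced here): an ill-posed problem with a whole line of minimizers is replaced by a sequence of strongly convex auxiliary problems, each with a unique solution:

```python
import numpy as np

# Ill-posed model problem: f(x) = (x1 + x2 - 1)^2 has a whole line of
# minimizers (non-unique solution). Each proximal auxiliary problem
#     min_x f(x) + (1/(2c)) ||x - x_k||^2
# is strongly convex; for this quadratic f it can be solved exactly.
A = np.array([[1.0, 1.0]])

def prox_step(xk, c=1.0):
    H = 2.0 * A.T @ A + np.eye(2) / c        # Hessian of the auxiliary problem
    rhs = 2.0 * A.T @ np.array([1.0]) + xk / c
    return np.linalg.solve(H, rhs)

x = np.array([2.0, 0.0])
for _ in range(200):
    x = prox_step(x)
# The iterates converge to the minimizer closest to the starting point,
# i.e. the projection of (2, 0) onto the solution line x1 + x2 = 1.
```

The component of the iterate along the solution set is left unchanged by every auxiliary problem, while the orthogonal component contracts, which is the well-posedness gain the abstract describes.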
Given a compact set K in R^d, the theory of extension operators examines the question under which conditions on K the linear and continuous restriction operators r_n: E^n(R^d) → E^n(K), f ↦ (∂^α f|_K)_{|α|≤n}, n in N_0, and r: E(R^d) → E(K), f ↦ (∂^α f|_K)_{α in N_0^d}, have a linear and continuous right inverse. This inverse is called an extension operator, and this problem is known as Whitney's extension problem, named after Hassler Whitney. In this context, E^n(K) and E(K) denote the spaces of Whitney jets of order n and of infinite order, respectively. By E^n(R^d) and E(R^d) we denote the spaces of n-times respectively infinitely often continuously partially differentiable functions on R^d. Whitney already solved the question for finite order completely. He showed that it is always possible to construct a linear and continuous right inverse E_n for r_n. This work is concerned with the question of how the existence of a linear and continuous right inverse of r, fulfilling certain continuity estimates, can be characterized by properties of K. On E(K), we introduce a full real scale of generalized Whitney seminorms (|·|_{s,K})_{s≥0}, where |·|_{s,K} coincides with the classical Whitney seminorms for s in N_0. We also equip E(R^d) with a family (|·|_{s,L})_{s≥0} of those seminorms, where L is a compact set with K contained in the interior of L. This family of seminorms on E(R^d) suffices to characterize the continuity properties of an extension operator E, since we can assume without loss of generality that E(E(K)) in D^s(L).
In Chapter 2, we introduce basic concepts and summarize the classical results of Whitney and Stein.
In Chapter 3, we modify the classical construction of Whitney's operators E_n and show that |E_n(·)|_{s,L} ≤ C|·|_{s,K} for s in [n, n+1).
In Chapter 4, we generalize a result of Frerick, Jordá and Wengenroth and show that LMI(1) for K implies the existence of an extension operator E without loss of derivatives, i.e., it fulfils |E(·)|_{s,L} ≤ C|·|_{s,K} for all s ≥ 0. We show that a large class of self-similar sets, which includes the Cantor set and the Sierpinski triangle, admits an extension operator without loss of derivatives.
In Chapter 5, we generalize a result of Frerick, Jordá and Wengenroth and show that WLMI(r) for r ≥ 1 implies the existence of a tame linear extension operator E having a homogeneous loss of derivatives, such that |E(·)|_{s,L} ≤ C|·|_{(r+ε)s,K} for all s ≥ 0 and all ε > 0.
In the last chapter, we characterize the existence of an extension operator having an arbitrary loss of derivatives by the existence of measures on K.
This thesis introduces a calibration problem for financial market models based on a Monte Carlo approximation of the option payoff and a discretization of the underlying stochastic differential equation (SDE). It is desirable to benefit from fast deterministic optimization methods to solve this problem. To achieve this goal, possible non-differentiabilities are smoothed out with an appropriately chosen, twice continuously differentiable polynomial. On the basis of the calibration problem derived in this way, this work is essentially concerned with two issues. First, the question arises whether a computed solution of the approximating problem, derived by applying Monte Carlo, discretizing the SDE, and preserving differentiability, is an approximation of a solution of the true problem. Unfortunately, this does not hold in general but is linked to certain assumptions. It will turn out that uniform convergence of the approximated objective function and its gradient to the true objective and gradient can be shown under typical assumptions, for instance the Lipschitz continuity of the SDE coefficients. This uniform convergence then allows convergence of the solutions to be shown in the sense of a first-order critical point. Furthermore, an order of this convergence in relation to the number of simulations, the step size of the SDE discretization, and the parameter controlling the smooth approximation of the non-differentiabilities will be established. Additionally, the uniqueness of a solution of the stochastic differential equation will be analyzed in detail. Secondly, the Monte Carlo method converges only very slowly. The numerical results in this thesis will show that the Monte Carlo based calibration is indeed feasible as far as the computed solution is concerned, but the required computation time is too long for practical applications. Thus, techniques to speed up the calibration are strongly desired.
As already mentioned above, the gradient of the objective is a starting point for improving efficiency. Due to its simplicity, finite differencing is a frequently chosen method to calculate the required derivatives. However, finite differences are well known to be very slow, and it will furthermore turn out that severe instabilities may occur during optimization, which can lead to a breakdown of the algorithm before convergence has been reached. In this respect, a sensitivity equation is certainly an improvement but unfortunately suffers from the same computational effort as the finite difference method. Thus, an adjoint-based gradient calculation will be the method of choice, as it combines the exactness of the derivative with a reduced computational effort. Furthermore, several other techniques that enhance the efficiency of the calibration algorithm will be introduced throughout this thesis. A multi-layer method will be very effective in case the chosen initial value is not already close to the solution. Variance reduction techniques help to increase the accuracy of the Monte Carlo estimator and thus allow for fewer simulations. Storing instead of regenerating the random numbers required for the Brownian increments in the SDE will be efficient, as deterministic optimization methods require the identical random sequence in each function evaluation anyway. Finally, Monte Carlo is very well suited for parallelization, which will be done on several central processing units (CPUs).
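The interplay of payoff smoothing and stored random numbers can be sketched for a call option under a Black-Scholes-type SDE. The model parameters, the particular C^2 smoothing polynomial, and the bisection calibration are illustrative assumptions standing in for the derivative-based methods of the thesis:

```python
import numpy as np

# Store the Brownian increments once: a deterministic optimization method
# must see the identical random sequence in every function evaluation.
rng = np.random.default_rng(0)
n_paths, n_steps, T = 20000, 50, 1.0
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

def smooth_max(x, eps=1e-2):
    """Twice continuously differentiable smoothing of max(x, 0): a quartic
    on [-eps, eps] matching value, slope, and curvature at both ends."""
    p = -x**4 / (16 * eps**3) + 3 * x**2 / (8 * eps) + x / 2 + 3 * eps / 16
    return np.where(x <= -eps, 0.0, np.where(x >= eps, x, p))

def mc_call_price(sigma, S0=100.0, K=100.0, r=0.05):
    """Euler-Maruyama discretization of dS = r S dt + sigma S dW and
    Monte Carlo average of the smoothed, discounted call payoff."""
    S = np.full(n_paths, S0)
    for k in range(n_steps):
        S = S + r * S * dt + sigma * S * dW[:, k]
    return np.exp(-r * T) * smooth_max(S - K).mean()

# Calibrate sigma to a synthetic market price. With fixed dW the objective
# is smooth and deterministic, so a deterministic solver is applicable;
# simple bisection stands in for the gradient-based methods of the thesis.
target = mc_call_price(0.2)
lo, hi = 0.05, 0.5
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mc_call_price(mid) < target else (lo, mid)
sigma_cal = 0.5 * (lo + hi)
```

Because the increments are stored, repeated evaluations of the objective return identical values, which is exactly what makes deterministic optimization of a Monte Carlo objective possible.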
In this thesis, we mainly investigate geometric properties of optimal codebooks for random elements $X$ in a separable Banach space $E$. Here, for a natural number $N$ and a random element $X$, an $N$-optimal codebook is an $N$-subset of the underlying Banach space $E$ which gives a best approximation to $X$ in an average sense. We focus on two types of geometric properties: the global growth behaviour (growing in $N$) of a sequence of $N$-optimal codebooks is described by the maximal (quantization) radius and a so-called quantization ball. For many distributions, such as centrally symmetric distributions on $R^d$ as well as Gaussian distributions on general Banach spaces, we are able to estimate the asymptotics of the quantization radius as well as the quantization ball. Furthermore, we investigate local properties of optimal codebooks, in particular the local quantization error and the weights of the Voronoi cells induced by an optimal codebook. In the finite-dimensional setting, we are able to prove, for many interesting distributions, classical conjectures on the asymptotic behaviour of these properties. Finally, we propose a method to construct sequences of asymptotically optimal codebooks for random elements in infinite-dimensional Banach spaces and apply this method to construct codebooks for stochastic processes such as fractional Brownian motions.
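A classical way to compute approximately optimal codebooks in the finite-dimensional setting is the Lloyd iteration, which alternates between Voronoi-cell assignment and centroid updates. The Gaussian sample and N = 2 are illustrative assumptions; for the standard normal distribution, the 2-optimal codebook is known to be ±sqrt(2/π) ≈ ±0.798:

```python
import numpy as np

def lloyd_codebook(samples, N, iters=100):
    """Lloyd iteration for an (approximately) N-optimal codebook in R^1:
    alternate nearest-codeword assignment (Voronoi cells) and replacing
    each codeword by the mean of its cell (the cell's centroid)."""
    codebook = np.quantile(samples, (np.arange(N) + 0.5) / N)   # spread start
    for _ in range(iters):
        cells = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        for i in range(N):
            pts = samples[cells == i]
            if pts.size:
                codebook[i] = pts.mean()
    return np.sort(codebook)

rng = np.random.default_rng(1)
samples = rng.normal(size=20000)
cb = lloyd_codebook(samples, 2)   # empirical 2-optimal codebook for N(0, 1)
```

The cell means returned in the final iteration are also empirical versions of the Voronoi-cell weights and local quantization errors studied in the thesis.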
The Hadamard product of two holomorphic functions, which is defined via a convolution integral, constitutes a generalization of the Hadamard product of two power series, which is obtained by pointwise multiplying their coefficients. Based on the integral representation mentioned above, an associative law for this convolution is shown. The main purpose of this thesis is the examination of linear and continuous Hadamard convolution operators. These operators map between spaces of holomorphic functions and send - with a fixed function phi - a function f to the convolution of phi and f. The transposed operator is computed and turns out to be a Hadamard convolution operator, too, mapping between spaces of germs of holomorphic functions. The kernel of Hadamard convolution operators is investigated, and necessary and sufficient conditions for these operators to be injective or to have dense range are given. In case the domain of holomorphy of the function phi allows a Mellin transform of phi, certain (generalized) monomials are identified as eigenfunctions of the corresponding operator. By means of this result and results from the theory of growth of entire functions, further propositions concerning the injectivity, the denseness of the range, or the surjectivity of Hadamard convolution operators are shown. The relationship between Hadamard convolution operators, operators which are defined via the convolution with an analytic functional, and differential operators of infinite order is investigated, and the results obtained in the thesis are put into the research context. The thesis ends with an application of the results to the approximation of holomorphic functions by lacunary polynomials. On the one hand, the question under which conditions lacunary polynomials are dense in the space of all holomorphic functions is investigated; on the other hand, the rate of approximation is considered.
In this context, a result corresponding to the Bernstein-Walsh theorem is formulated.
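The equivalence of the two definitions of the Hadamard product can be checked numerically for polynomials: the convolution integral over a circle, discretized by the trapezoidal rule (which is exact here), reproduces the coefficientwise product. The coefficient vectors, the radius, and the node count are illustrative assumptions:

```python
import numpy as np

# Hypothetical coefficient sequences of f and g (ascending powers).
a = np.array([1.0, 2.0, 0.5, -1.0])
b = np.array([3.0, -1.0, 4.0, 2.0])

def f(z):
    return np.polyval(a[::-1], z)     # polyval expects highest degree first

def g(z):
    return np.polyval(b[::-1], z)

def hadamard_convolution(z, n_nodes=64, radius=0.5):
    """(f * g)(z) = (1/(2 pi i)) \oint f(zeta) g(z/zeta) dzeta/zeta: with
    zeta = r e^{i theta} the integral becomes a plain average over the circle."""
    zeta = radius * np.exp(2j * np.pi * np.arange(n_nodes) / n_nodes)
    return np.mean(f(zeta) * g(z / zeta))

def hadamard_series(z):
    """The power-series definition: multiply the coefficients pointwise."""
    return np.polyval((a * b)[::-1], z)
```

Expanding f(zeta) g(z/zeta) shows why this works: averaging zeta^(m-n) over equidistant points on the circle kills every term except m = n, leaving exactly the coefficientwise product sum a_n b_n z^n.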
The subject of this thesis is hypercyclic, mixing, and chaotic C0-semigroups on Banach spaces. After introducing the relevant notions and giving some examples, the so-called hypercyclicity criterion and its relation to weak mixing is treated. Some new equivalent formulations of the criterion are given, which are used to derive a very short proof of the well-known fact that a C0-semigroup is weakly mixing if and only if each of its operators is. Moreover, it is proved that under some "regularity conditions" each hypercyclic C0-semigroup is weakly mixing. Furthermore, it is shown that for a hypercyclic C0-semigroup there is always a dense set of hypercyclic vectors having infinitely differentiable trajectories. Chaotic C0-semigroups are also considered. It is proved that they are always weakly mixing and that in certain cases chaoticity is already implied by the existence of a single periodic point. Moreover, it is shown that strongly elliptic differential operators on bounded C^1-domains never generate chaotic C0-semigroups. A thorough investigation of transitivity, weak mixing, and mixing of weighted composition operators follows, and complete characterisations of these properties are derived. These results are then used to completely characterise hypercyclicity, weak mixing, and mixing of C0-semigroups generated by first-order partial differential operators. Moreover, a characterisation of chaos for these C0-semigroups is attained. All these results are achieved on spaces of p-integrable functions as well as on spaces of continuous functions, and they are illustrated by various concrete examples.