The main topic of this treatise is the solution of two problems from the general theory of linear partial differential equations with constant coefficients. While surjectivity criteria for linear partial differential operators in the space of smooth functions over an open subset of Euclidean space and in the space of distributions were proved by B. Malgrange in 1955 and by L. Hörmander in 1962, respectively, the concrete evaluation of these criteria is still a highly non-trivial task. In particular, it is well known that surjectivity in the space of smooth functions over an open subset of Euclidean space does not automatically imply surjectivity in the space of distributions. However, all known examples of this phenomenon live in dimension three or higher. In 1966, F. Trèves conjectured that in the two-dimensional setting surjectivity of a linear partial differential operator on the smooth functions does imply surjectivity on the space of distributions. An affirmative solution to this problem is presented in this treatise. The second main result solves the so-called problem of (distributional) parameter dependence for solutions of linear partial differential equations with constant coefficients, posed by J. Bonet and P. Domanski in 2006. It is shown that, in dimension three or higher, this problem in general has a negative solution even for hypoelliptic operators. Moreover, it is proved that the two-dimensional case is again an exception, because in this setting the problem of parameter dependence always has a positive solution.
In a 1996 paper, the British mathematician Graham R. Allan posed the question of whether the product of two stable elements is again stable. Here stability describes the solvability of a certain infinite system of equations. Using a method from the theory of homological algebra, it is proved that in the case of topological algebras with multiplicative webs, and thus in all common locally convex topological algebras occurring in standard analysis, the answer to Allan's question is affirmative.
In splitting theory of locally convex spaces we investigate evaluable characterizations of the pairs (E, X) of locally convex spaces such that each exact sequence 0 -> X -> G -> E -> 0 of locally convex spaces splits, i.e. the map X -> G has a continuous linear left inverse or, equivalently, G -> E has a continuous linear right inverse. In the thesis at hand we deal with splitting of short exact sequences of so-called PLH spaces, which are defined as projective limits of strongly reduced spectra of strong duals of Fréchet-Hilbert spaces. This class of locally convex spaces contains most of the spaces of interest for applications in the theory of partial differential operators, such as the space of Schwartz distributions, the space of real analytic functions and various spaces of ultradifferentiable functions and ultradistributions. It also contains non-Schwartz spaces such as B(2,k,loc)(Ω) and spaces of smooth and square integrable functions that are not covered by the current theory for PLS spaces. We prove a complete characterization of the above problem in the case of X being a PLH space and E either being a Fréchet-Hilbert space or the strong dual of one, by conditions of type (T). To this end, we establish the full homological toolbox of Yoneda Ext functors in exact categories for the category of PLH spaces, including the long exact sequence, which in particular involves a thorough discussion of the proper concept of exactness. Furthermore, we exhibit the connection to the parameter dependence problem via the Hilbert tensor product for hilbertizable locally convex spaces. We show that the Hilbert tensor product of two PLH spaces is again a PLH space, which in particular proves the positive answer to Grothendieck's problème des topologies. In addition, we give a complete characterization of the vanishing of the first derivative of the functor proj for tensorized PLH spectra if one of the PLH spaces E and X meets some nuclearity assumptions. To apply our results to concrete cases, we establish sufficient conditions of (DN)-(Ω) type and apply them to the parameter dependence problem for partial differential operators with constant coefficients on B(2,k,loc)(Ω) spaces as well as to the smooth and square integrable parameter dependence problem. In conclusion, we give a complete solution of all the problems under consideration for PLH spaces of Köthe type.
This dissertation is concerned with the constructive and generic derivation of universal functions. By a universal function we mean a holomorphic function which, in a certain sense, contains entire classes of functions. The constructive method consists in the explicit construction of a universal function via a limit process, for instance as a polynomial series. The generic method first defines, purely abstractly, the desired class of universal functions; with the help of Baire's category theorem it is then shown that the class of these functions is not only non-empty but even a dense G_delta set in the function space under consideration. Both methods make use of the approximation theorems of Runge and Mergelyan. The main results are the following: (1) We constructively prove the existence of universal Laurent series on multiply connected domains; in addition, we show that the set of such universal Laurent series is dense in the space of functions holomorphic on the domain under consideration. (2) The existence of universal Faber series on certain domains is proved both constructively and generically. (3) On the one hand, we constructively show that there exist so-called entire T-universal functions with prescribed approximation paths, where the paths are prescribed by a sufficiently flexible functional form; the set of such functions is a dense G_delta set in the space of entire functions. On the other hand, we generically prove the existence of T-universal functions on a bounded domain with respect to certain prescribed approximation paths, which are again sufficiently general.
This work investigates the industrial applicability of graphics and stream processors in the field of fluid simulations. For this purpose, an explicit Runge-Kutta discontinuous Galerkin method of arbitrarily high order is implemented completely for the hardware architecture of GPUs. The same functionality is simultaneously realized for CPUs and compared to the GPU version. Explicit time-stepping schemes as well as established implicit methods are considered for the CPU. This work aims at the simulation of inviscid, transonic flows over the ONERA M6 wing. The discontinuities which typically arise in hyperbolic equations are treated with an artificial viscosity approach. It is further investigated how this approach fits into the explicit time stepping and works together with the special architecture of the GPU. Since the treatment of artificial viscosity is close to the simulation of the Navier-Stokes equations, it is reviewed how GPU-accelerated methods could be applied for computing viscous flows. This work is based on a nodal discontinuous Galerkin approach for linear hyperbolic problems. Here, it is extended to non-linear problems, which makes the application of numerical quadrature obligatory. Moreover, the representation of complex geometries is realized using isoparametric mappings. Higher-order methods are typically very sensitive with respect to boundaries which are not properly resolved. For this purpose, an approach is presented which fits straight-sided DG meshes to curved geometries described by NURBS surfaces. The mesh is modeled as an elastic body and deformed according to the solution of closest point problems in order to minimize the gap to the original spline surface. The sensitivity with respect to geometry representations is reviewed at the end of this work in the context of shape optimization. Here, the aerodynamic drag of the ONERA M6 wing is minimized according to the shape gradient, which is implicitly smoothed within the mesh deformation approach. In this context a comparison to the classical Laplace-Beltrami operator is made in a Stokes flow situation.
The Hadamard product of two holomorphic functions, defined via a convolution integral, constitutes a generalization of the Hadamard product of two power series, which is obtained by pointwise multiplying their coefficients. Based on the integral representation mentioned above, an associative law for this convolution is shown. The main purpose of this thesis is the examination of linear and continuous Hadamard convolution operators. These operators map between spaces of holomorphic functions and send, with a fixed function phi, a function f to the convolution of phi and f. The transposed operator is computed and turns out to be a Hadamard convolution operator, too, mapping between spaces of germs of holomorphic functions. The kernel of Hadamard convolution operators is investigated, and necessary and sufficient conditions for those operators to be injective or to have dense range are given. In case the domain of holomorphy of the function phi admits a Mellin transform of phi, certain (generalized) monomials are identified as eigenfunctions of the corresponding operator. By means of this result and some elements of the theory of growth of entire functions, further propositions concerning the injectivity, the denseness of the range or the surjectivity of Hadamard convolution operators are shown. The relationship between Hadamard convolution operators, operators defined via convolution with an analytic functional, and differential operators of infinite order is investigated, and the results obtained in the thesis are put into the research context. The thesis ends with an application of the results to the approximation of holomorphic functions by lacunary polynomials. On the one hand, the question under which conditions lacunary polynomials are dense in the space of all holomorphic functions is investigated; on the other hand, the rate of approximation is considered. In this context, a result corresponding to the Bernstein-Walsh theorem is formulated.
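For orientation, the convolution integral behind this Hadamard product can be sketched as follows; this is the classical representation for power series f(z) = Σ a_n z^n and g(z) = Σ b_n z^n, while the choice of integration curves for functions not holomorphic at the origin is precisely what the thesis develops.

```latex
(f \ast g)(z) \;=\; \sum_{n=0}^{\infty} a_n b_n z^n
             \;=\; \frac{1}{2\pi i} \oint_{\gamma}
                   f(\zeta)\, g\!\left(\frac{z}{\zeta}\right) \frac{\mathrm{d}\zeta}{\zeta},
```

where γ is a suitable closed integration curve separating the singularities of the two factors.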
Copositive programming is concerned with the problem of optimizing a linear function over the copositive cone, or its dual, the completely positive cone. It is an active field of research and has received a growing amount of attention in recent years. This is because many combinatorial as well as quadratic problems can be formulated as copositive optimization problems. The complexity of these problems is then moved entirely to the cone constraint, showing that general copositive programs are hard to solve. A better understanding of the copositive and the completely positive cone can therefore help in solving (certain classes of) quadratic problems. In this thesis, several aspects of copositive programming are considered. We start by studying the problem of computing the projection of a given matrix onto the copositive and the completely positive cone. These projections can be used to compute factorizations of completely positive matrices. As a second application, we use them to construct cutting planes to separate a matrix from the completely positive cone. Besides the cuts based on copositive projections, we will study another approach to separate a triangle-free doubly nonnegative matrix from the completely positive cone. A special focus is on copositive and completely positive programs that arise as reformulations of quadratic optimization problems. Among those we start by studying the standard quadratic optimization problem. We will show that for several classes of objective functions, the relaxation resulting from replacing the copositive or the completely positive cone in the conic reformulation by a tractable cone is exact. Based on these results, we develop two algorithms for solving standard quadratic optimization problems and discuss numerical results. The methods presented cannot immediately be adapted to general quadratic optimization problems. This is illustrated with examples.
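To make the kind of reformulation concrete: the standard quadratic optimization problem mentioned above admits the following classical completely positive representation (a sketch of the well-known correspondence going back to Bomze et al., stated here for orientation; the thesis' own results concern when tractable relaxations of it are exact).

```latex
\min_{x \in \mathbb{R}^n} \; x^{T} Q x
\quad \text{s.t.} \quad e^{T} x = 1,\; x \ge 0
\qquad \longleftrightarrow \qquad
\min_{X} \; \langle Q, X \rangle
\quad \text{s.t.} \quad \langle E, X \rangle = 1,\; X \in \mathcal{C}^{\ast},
```

Here e is the all-ones vector, E = ee^T, and C* denotes the completely positive cone; replacing C* by a tractable cone, such as the doubly nonnegative matrices, yields relaxations of the kind discussed above.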
Design and structural optimization has become a very important field in industrial applications in recent years. For economic and ecological reasons, the efficient use of material is of high industrial interest. Therefore, computational tools based on optimization theory have been developed and studied over the last decades. In this work, different structural optimization methods are considered. Special attention is paid to the applicability to three-dimensional, large-scale, multiphysics problems arising from different areas of industry. Based on the theory of PDE-constrained optimization, descent methods in structural optimization require knowledge of the (partial) derivatives with respect to shape or topology variations. Therefore, shape and topology sensitivity analysis is introduced, and the connection between both sensitivities is given by the Topological-Shape Sensitivity Method. This method leads to a systematic procedure to compute the topological derivative in terms of the shape sensitivity. Because structural optimization involves moving boundaries, different interface tracking techniques are presented. If the topology of the domain is preserved during the optimization process, explicit interface tracking techniques, combined with mesh deformation, are used to capture the interface. These techniques fit the requirements of classical shape optimization very well. Otherwise, an implicit representation of the interface is advantageous if the optimal topology is unknown. In this case, the level set method is combined with the concept of the topological derivative to deal with topological perturbations. The resulting methods are applied to different industrial problems. On the one hand, interface shape optimization for solid bodies subject to a transient heat-up phase governed by both linear elasticity and thermal stresses is considered. To this end, the shape calculus is applied to coupled heat and elasticity problems, and a generalized compliance objective function is studied. The resulting thermo-elastic shape optimization scheme is used for compliance reduction of realistic hotplates. On the other hand, structural optimization based on the topological derivative for three-dimensional elasticity problems is considered. In order to comply with typical volume constraints, a one-shot augmented Lagrangian method is proposed. Additionally, a multiphase optimization approach based on mesh refinement is used to reduce the computational costs, and the method is illustrated by classical minimum compliance problems. Finally, the topology optimization algorithm is applied to aero-elastic problems and numerical results are presented.
In modern survey statistics, optimization problems arise ever more frequently and must be solved. They are often high-dimensional, and simulation studies require solving these optimization problems many times over. To carry this out in reasonable time, special algorithms and solution approaches are needed, which are developed and investigated in this thesis. The optimization problems are, on the one hand, allocation problems for determining optimal subsample sizes. Here, besides continuous solution methods based on a root-finding problem, integer-valued solution methods based on the greedy idea are investigated, and the resulting optimal solutions are compared with one another. On the other hand, this thesis deals with various calibration problems. For these, an alternative solution approach to the methods practiced so far is presented, which requires solving a nonsmooth root-finding problem by means of the nonsmooth Newton method. In the context of nonsmooth optimization algorithms, step size control plays a major role; to this end, a general approach to nonmonotone step size control for Bouligand-differentiable functions is considered. Besides classical calibration, a calibration problem for coherent small area estimation under relaxed constraints and an additional bound on the variation of the design weights is considered. This problem can be transformed into a high-dimensional quadratic optimization problem, which requires the use of solvers for sparse optimization problems. The numerical problems considered in this thesis can arise, for example, in censuses. In this context, the presented approaches are finally examined in simulation studies, carried out within the census sampling research project, with regard to a possible application to the 2011 census.
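The greedy idea for the integer allocation problems can be sketched as follows; the specific variance criterion, the stratum data N and S, and all names are illustrative assumptions rather than the thesis' exact setting: one sample unit at a time is given to the stratum with the largest marginal reduction of the estimator's variance.

```python
import heapq

def greedy_allocation(N, S, n_total, n_min=2):
    """Integer allocation of n_total sample units to H strata, greedily
    minimizing sum_h N_h^2 S_h^2 / n_h (variance of the stratified
    estimator, ignoring finite population corrections)."""
    H = len(N)
    n = [n_min] * H
    # max-heap (negated for heapq) of marginal variance reductions
    gain = lambda h: (N[h] ** 2 * S[h] ** 2) * (1.0 / n[h] - 1.0 / (n[h] + 1))
    heap = [(-gain(h), h) for h in range(H)]
    heapq.heapify(heap)
    for _ in range(n_total - n_min * H):
        _, h = heapq.heappop(heap)   # stratum with the largest reduction
        n[h] += 1
        heapq.heappush(heap, (-gain(h), h))
    return n

# three strata with sizes N_h and standard deviations S_h (made-up numbers)
print(greedy_allocation(N=[5000, 2000, 800], S=[1.0, 3.0, 10.0], n_total=300))
```

For this separable convex objective the greedy exchange argument yields an exactly optimal integer allocation, which is what makes the approach attractive for repeated use in simulation studies.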
In this thesis, global surrogate models for responses of expensive simulations are investigated. Computational fluid dynamics (CFD) has become an indispensable tool in the aircraft industry. But simulations of realistic aircraft configurations remain challenging and computationally expensive despite the sustained advances in computing power. With the demand for numerous simulations to describe the behavior of an output quantity over a design space, the need for surrogate models arises. They are easy to evaluate and approximate quantities of interest of a computer code. Only a small number of evaluations of the simulation is stored to determine the behavior of the response over the whole input parameter domain. The Kriging method is capable of interpolating highly nonlinear, deterministic functions based on scattered datasets. Using correlation functions, distinct sensitivities of the response with respect to the input parameters can be considered automatically. Kriging can be extended to incorporate not only evaluations of the simulation but also gradient information, which is called gradient-enhanced Kriging. Adaptive sampling strategies can generate more efficient surrogate models. Contrary to traditional one-stage approaches, the surrogate model is built step by step. In every stage of an adaptive process, the current surrogate is assessed in order to determine new sample locations, where the response is evaluated, and the new samples are added to the existing set of samples. In this way, the sampling strategy learns about the behavior of the response and a problem-specific design is generated. Critical regions of the input parameter space are identified automatically and sampled more densely for reproducing the response's behavior correctly. The number of required expensive simulations is decreased considerably. All these approaches treat the response itself more or less as the unknown output of a black box. A new approach is motivated by the assumption that for a predefined problem class, the behavior of the response is not arbitrary, but rather related to other instances of the mutual problem class. In CFD, for example, responses of aerodynamic coefficients share structural similarities for different airfoil geometries. The goal is to identify the similarities in a database of responses via principal component analysis and to use them for a generic surrogate model. Characteristic structures of the problem class can be used for increasing the approximation quality in new test cases. Traditional approaches still require a large number of response evaluations in order to achieve a globally high approximation quality. Validating the generic surrogate model for industrially relevant test cases shows that it generates efficient surrogates, which are more accurate than common interpolations. Thus practical, i.e. affordable, surrogates are already possible for moderate sample sizes. Until now, interpolation problems have been regarded as separate problems. The new approach uses the structural similarities of a mutual problem class innovatively for surrogate modeling. Concepts from response surface methods, variable-fidelity modeling, design of experiments, image registration and statistical shape analysis are connected in an interdisciplinary way. Generic surrogate modeling is not restricted to aerodynamic simulation. It can be applied whenever expensive simulations can be assigned to a larger problem class, in which structural similarities are expected.
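A minimal sketch of the Kriging interpolation step described above, assuming a Gaussian correlation model with a fixed correlation parameter theta (in practice theta is estimated, and gradient-enhanced or adaptive variants extend this scheme); all names and the test function are illustrative.

```python
import numpy as np

def kriging_fit(X, y, theta=10.0):
    """Ordinary Kriging with Gaussian correlation R_ij = exp(-theta*|x_i-x_j|^2).
    Returns a predictor that interpolates the samples (X, y)."""
    R = np.exp(-theta * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    R += 1e-10 * np.eye(len(X))                  # nugget for numerical stability
    ones = np.ones(len(X))
    mu = ones @ np.linalg.solve(R, y) / (ones @ np.linalg.solve(R, ones))
    w = np.linalg.solve(R, y - mu * ones)        # Kriging weights
    def predict(x):
        r = np.exp(-theta * ((X - x) ** 2).sum(-1))
        return mu + r @ w
    return predict

rng = np.random.default_rng(0)
X = rng.random((20, 2))                          # 20 scattered samples in 2D
f = lambda x: np.sin(6 * x[..., 0]) * np.cos(4 * x[..., 1])
predict = kriging_fit(X, f(X))
x0 = np.array([0.3, 0.7])
print("prediction:", predict(x0), " truth:", f(x0))
```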
One of the main tasks in mathematics is to answer the question whether an equation possesses a solution or not. In the 1940s, Thom and Glaeser studied a new type of equation given by the composition of functions. They raised the following question: for which functions Ψ does the equation F(Ψ)=f always have a solution? Of course this question only makes sense if the right-hand side f satisfies some a priori conditions, like being contained in the closure of the space of all compositions with Ψ, and it is easy to answer if F and f are continuous functions. Imposing further restrictions on these functions, especially on F, complicates the search for an adequate solution considerably. For smooth functions one can already find deep results by Bierstone and Milman which answer the question in the case of a real-analytic function Ψ. This work contains further results for a different class of functions, namely those Ψ that are smooth and injective. In the case of a function Ψ of a single real variable, the question can be fully answered, and we give three conditions that are both sufficient and necessary in order for the composition equation to always have a solution. Furthermore, one can unify these three conditions and show that they are equivalent to the fact that Ψ has a locally Hölder-continuous inverse. For injective functions Ψ of several real variables we give necessary conditions for the composition equation to be solvable. For instance, Ψ should satisfy some form of local distance estimate for the partial derivatives. Under the additional assumption of Whitney-regularity of the image of Ψ, we can give sufficient conditions for flat functions f on the critical set of Ψ to admit a solution of F(Ψ)=f.
Optimal control problems are optimization problems governed by ordinary or partial differential equations (PDEs). A general formulation is given by \min_{(y,u)} J(y,u) subject to e(y,u)=0, assuming that e_y^{-1} exists, and consists of three main elements: 1. The cost functional J that models the purpose of the control on the system. 2. The definition of a control function u that represents the influence of the environment on the system. 3. The set of differential equations e(y,u) modeling the controlled system, represented by the state function y:=y(u), which depends on u. These kinds of problems are well investigated and arise in many fields of application, for example robot control, control of biological processes, test drive simulation, and shape and topology optimization. In this thesis, an academic model problem of the form \min_{(y,u)} J(y,u):=\min_{(y,u)}\frac{1}{2}\|y-y_d\|^2_{L^2(\Omega)}+\frac{\alpha}{2}\|u\|^2_{L^2(\Omega)} subject to -\mathrm{div}(A\nabla y)+cy=f+u in \Omega, y=0 on \partial\Omega and u\in U_{ad} is considered. The objective is of tracking type with a given target function y_d and a regularization term with parameter \alpha. The control function u acts on the whole domain \Omega. The underlying partial differential equation is assumed to be uniformly elliptic. This problem belongs to the class of linear-quadratic elliptic control problems with distributed control. The existence and uniqueness of an optimal solution for problems of this type is well known, and in a first step, following the paradigm 'first optimize, then discretize', the necessary and sufficient optimality conditions are derived by means of the adjoint equation, which results in a characterization of the optimal solution in the form of an optimality system. In a second step, the occurring differential operators are approximated by finite differences, and the resulting discretized optimality system is solved with a collective smoothing multigrid method (CSMG). In general, there are several optimization methods for solving the optimal control problem: an application of the implicit function theorem leads to so-called black-box approaches, where the PDE-constrained optimization problem is transformed into an unconstrained optimization problem and the reduced gradient for this reduced functional is computed via the adjoint approach. Other possibilities are quasi-Newton methods, which approximate the Hessian by a low-rank update based on gradient evaluations, Krylov-Newton methods, or (reduced) SQP methods. The use of multigrid methods for optimization purposes is motivated by their optimal computational complexity, i.e. the number of required computer iterations scales linearly with the number of unknowns, and by their rate of convergence, which is independent of the grid size. Originally, multigrid methods were developed as a class of algorithms for solving linear systems arising from the discretization of partial differential equations. The main part of this thesis is devoted to the investigation of the implementability and the efficiency of the CSMG on commodity graphics cards. GPUs (graphics processing units) are designed for highly parallelizable graphics computations and possess many cores of SIMD architecture, which are able to outperform the CPU regarding computational power and memory bandwidth. Here they are considered as a prototype for prospective multi-core computers with several hundred cores.
When using GPUs as stream processors, two major problems arise: data transfer from CPU main memory to GPU main memory can be quite slow, and the size of the GPU main memory is limited. Furthermore, a remarkable speed-up compared to a CPU is only achieved when the stream processors are used to full capacity. Therefore, new algorithms for the solution of optimal control problems are designed in this thesis. To this end, a nonoverlapping domain decomposition method is introduced which allows the exploitation of the computational power of many GPUs or CPUs in parallel. This algorithm is based on preliminary work for elliptic problems and enhanced for the application to optimal control problems. For the domain decomposition into two subdomains, the linear system for the unknowns on the interface is solved with a Schur complement method using a discrete approximation of the Steklov-Poincaré operator. For the academic optimal control problem, the arising capacitance matrix can be inverted analytically. On this basis, two different algorithms for the nonoverlapping domain decomposition in the case of many subdomains are proposed in this thesis: on the one hand a recursive approach, and on the other hand a simultaneous approach. Numerical tests compare the performance of the CSMG for the one-domain case and of the two approaches for the multi-domain case on a GPU and a CPU for different variants.
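For the unconstrained variant of the academic model problem, the 'first optimize, then discretize' route can be sketched in a few lines; this is a minimal sketch with a direct solve in place of the CSMG, assuming A = identity (so the operator is -Δ), c = 0, U_ad unconstrained, and made-up data.

```python
import numpy as np

# 1D model problem on (0,1):  min 1/2||y - y_d||^2 + alpha/2 ||u||^2
# s.t. -y'' = f + u, y(0) = y(1) = 0.  Eliminating u = p/alpha from the
# optimality conditions gives the coupled optimality system
#   -y'' - p/alpha = f        (state equation)
#   -p'' + y       = y_d      (adjoint equation)
n, alpha = 199, 1e-4
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2          # FD Laplacian
I = np.eye(n)
y_d = np.sin(np.pi * x)                              # target state
f = np.zeros(n)
K = np.block([[A, -I / alpha], [I, A]])              # discretized optimality system
sol = np.linalg.solve(K, np.concatenate([f, y_d]))
y, p = sol[:n], sol[n:]
u = p / alpha                                        # optimal control
print("tracking error:", np.sqrt(h) * np.linalg.norm(y - y_d))
```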
In the pricing of financial derivatives, so-called jump-diffusion models with local volatility offer many advantages. From a mathematical point of view, however, they are quite demanding, since the corresponding model prices are computed via a partial integro-differential equation (PIDE). We are concerned with calibrating the parameters of such a model. In a least-squares approach, market prices of standard European options are compared with the model prices, which leads to an optimal control problem. A substantial part of this work deals with the solution of the PIDE from a theoretical and, above all, a numerical point of view. The dense linear systems arising from an implicit time discretization are solved with a preconditioned GMRES method, which leads to nearly linear complexity with respect to the space and time discretization. Despite this efficient solution method, evaluations of the least-squares objective function remain expensive, so that in the main part of the work reduced-order models based on proper orthogonal decomposition are employed. Local a priori error estimates for the reduced differential equation and for the reduced objective function, combined with a trust-region approach for globalization, yield an efficient algorithm that considerably shortens the computing time. The main result of the thesis is a convergence proof for this algorithm for a wide class of optimization problems, which includes the calibration problem under consideration.
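The order-reduction step described in the abstract rests on proper orthogonal decomposition; a minimal sketch of computing a POD basis from solution snapshots via the SVD follows. The snapshot matrix here is a random placeholder, and `pod_basis` is an illustrative name, not the thesis' implementation.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """POD: leading left singular vectors of the snapshot matrix that
    capture a fraction (1 - tol) of the total snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1   # number of modes kept
    return U[:, :r]

# each column: one PIDE solution (e.g., one time step of one option price)
Y = np.random.default_rng(0).random((500, 60))        # placeholder snapshots
V = pod_basis(Y)
print("reduced dimension:", V.shape[1])
# the reduced-order model then uses Galerkin projections such as V.T @ A @ V
```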
Krylov subspace methods are often used to solve large-scale linear equations arising from optimization problems involving partial differential equations (PDEs). Appropriate preconditioning is vital for designing efficient iterative solvers of this type. This research consists of two parts. In the first part, we compare two different kinds of preconditioners for a conjugate gradient (CG) solver applied to a partial integro-differential equation (PIDE) in finance, both theoretically and numerically. An analysis of mesh independence and of the rate of convergence of the CG solver is included. The knowledge gained from preconditioning the PIDE is then applied to a related optimization problem. The second part aims at developing a new preconditioning technique by embedding reduced-order models of nonlinear PDEs, which are generated by proper orthogonal decomposition (POD), into deflated Krylov subspace algorithms for solving the corresponding optimization problems. Numerical results are reported for a series of test problems.
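A minimal sketch of how preconditioners are compared for a Krylov solver, using SciPy's CG on a 1D Laplacian as a stand-in for the discretized PIDE; the matrix, the ILU/Jacobi choices and the tolerances are illustrative assumptions, not the preconditioners analyzed in the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000                                             # 1D Laplacian stand-in
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * (n + 1)**2
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-5)                   # incomplete LU factorization
preconditioners = {
    "ILU":    spla.LinearOperator((n, n), ilu.solve),
    "Jacobi": spla.LinearOperator((n, n), lambda x: x / A.diagonal()),
    "none":   None,
}
for name, M in preconditioners.items():
    iters = []
    x, info = spla.cg(A, b, M=M, maxiter=5000, callback=lambda xk: iters.append(1))
    print(f"{name:6s} converged: {info == 0}  iterations: {len(iters)}")
```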
In this thesis, we mainly investigate geometric properties of optimal codebooks for random elements X in a separable Banach space E. Here, for a natural number N and a random element X, an N-optimal codebook is an N-subset of the underlying Banach space E which gives a best approximation to X in an average sense. We focus on two types of geometric properties: the global growth behaviour (growing in N) of a sequence of N-optimal codebooks is described by the maximal (quantization) radius and a so-called quantization ball. For many distributions, such as centrally symmetric distributions on R^d as well as Gaussian distributions on general Banach spaces, we are able to estimate the asymptotics of the quantization radius as well as the quantization ball. Furthermore, we investigate local properties of optimal codebooks, in particular the local quantization error and the weights of the Voronoi cells induced by an optimal codebook. In the finite-dimensional setting, we are able to prove, for many interesting distributions, classical conjectures on the asymptotic behaviour of these properties. Finally, we propose a method to construct sequences of asymptotically optimal codebooks for random elements in infinite-dimensional Banach spaces and apply this method to construct codebooks for stochastic processes, such as fractional Brownian motions.
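Empirically, near-optimal codebooks in the finite-dimensional L2 setting can be approximated with Lloyd's fixed-point iteration on a large i.i.d. sample; the following sketch for N(0, I_2) only illustrates the notion of an N-optimal codebook and its Voronoi cells, and is not a method from the thesis.

```python
import numpy as np

def lloyd_codebook(samples, N, iters=60, seed=0):
    """Approximate N-optimal codebook (L2 quantization) via Lloyd's method."""
    rng = np.random.default_rng(seed)
    C = samples[rng.choice(len(samples), N, replace=False)].copy()
    for _ in range(iters):
        # nearest-codepoint assignment defines the Voronoi cells ...
        idx = ((samples[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        # ... and each codepoint moves to the centroid of its cell
        for j in range(N):
            cell = samples[idx == j]
            if len(cell) > 0:
                C[j] = cell.mean(0)
    d2 = ((samples[:, None, :] - C[None, :, :]) ** 2).sum(-1).min(1)
    return C, np.sqrt(d2.mean())     # codebook and empirical quantization error

X = np.random.default_rng(1).standard_normal((20000, 2))   # sample from N(0, I_2)
C, err = lloyd_codebook(X, N=16)
print("16-point quantization error for N(0, I_2):", err)
```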
Variational inequality problems constitute a common basis for investigating the theory and algorithms of many problems in mathematical physics, in economics, and in the natural and engineering sciences. They appear in a variety of mathematical applications like convex programming, game theory and economic equilibrium problems, but also in fluid mechanics, physics of solid bodies and others. Many variational inequalities arising from applications are ill-posed. This means, for example, that the solution is not unique, or that small deviations in the data can cause large deviations in the solution. In such a situation, standard solution methods converge very slowly or even fail. In this case, so-called regularization methods are the methods of choice. They have the advantage that an ill-posed original problem is replaced by a sequence of well-posed auxiliary problems which have better properties (such as a unique solution and better conditioning). Moreover, a suitable choice of the regularization term can lead to unconstrained auxiliary problems that are even equivalent to optimization problems. The development and improvement of such methods are a focus of current research, to which this thesis contributes. We suggest and investigate a logarithmic-quadratic proximal auxiliary problem (LQPAP) method that combines the advantages of the well-known proximal-point algorithm and the so-called auxiliary problem principle. Its investigation and convergence analysis are among the main results of this work. The LQPAP method continues the recent developments of regularization methods. It includes different techniques presented in the literature to improve numerical stability: the logarithmic-quadratic distance function constitutes an interior-point effect which allows the auxiliary problems to be treated as unconstrained ones. Furthermore, outer operator approximations are considered. This simplifies the numerical solution of variational inequalities with multi-valued operators since, for example, bundle techniques can be applied. With respect to numerical practicability, inexact solutions of the auxiliary problems are allowed, using a summable-error criterion that is easy to implement. As a further advantage of the logarithmic-quadratic distance we verify that it is self-concordant (in the sense of Nesterov/Nemirovskii). This motivates the application of Newton's method for the solution of the auxiliary problems. In the numerical part of the thesis the LQPAP method is applied to linearly constrained, differentiable and nondifferentiable convex optimization problems, as well as to nonsymmetric variational inequalities with co-coercive operators. It can often be observed that the sequence of iterates reaches the boundary of the feasible set before being close to an optimal solution. Against this background, we present the strategy of under-relaxation, which robustifies the LQPAP method. Furthermore, we compare the results with an appropriate method based on Bregman distances (BrPAP method). For differentiable, convex optimization problems we describe the implementation of Newton's method to solve the auxiliary problems and carry out different numerical experiments. For example, an adaptive choice of the initial regularization parameter and a combination of an Armijo and a self-concordance step size are evaluated. Test examples for nonsymmetric variational inequalities are hardly available in the literature.
Therefore, we present a geometric and an analytic approach to generate test examples with known solution(s). To solve the auxiliary problems in the case of nondifferentiable, convex optimization problems, we apply the well-known bundle technique. The implementation is described in detail, and the involved functions and sequences of parameters are discussed. As far as possible, our analysis is substantiated by new theoretical results. Furthermore, it is explained in detail how the bundle auxiliary problems are solved with a primal-dual interior-point method. So far, such investigations have only been published for Bregman distances. The LQPAP bundle method is again applied to several test examples from the literature. Thus, this thesis builds a bridge between theoretical and numerical investigations of solution methods for variational inequalities.
The first example of a so-called universal holomorphic function goes back to Birkhoff, who in 1929 proved the existence of an entire function that can, in a certain sense, approximate every entire function via suitable translations. Since then, "universal approximation" has developed into an independent area within complex approximation theory, and there is a multitude of results on universal functions. However, these investigations have been restricted almost exclusively to the study of holomorphic and entire functions; in particular, the class of meromorphic functions has hardly been examined with regard to the phenomenon of universality. The present thesis is concerned with universal meromorphic approximation and pursues the question of whether meromorphic functions with certain universality properties exist, and whether the classical results of universal holomorphic approximation can be extended to the meromorphic case. First, a distinction is made between translation universality and dilation universality, and it is proved that in both cases a residual set of universal functions exists in the space of meromorphic functions. The properties of these functions are then studied in detail. Subsequently, meromorphic functions are examined with regard to universality of derivatives. On the one hand it is shown that, in general, no positive results are possible, while on the other hand a special class of meromorphic functions is considered for which universal behavior of the successive derivatives can be established.
Recently, optimization has become an integral part of the aerodynamic design process chain. However, because of uncertainties with respect to the flight conditions and geometrical uncertainties, a design optimized by a traditional design optimization method seeking only optimality may not achieve its expected performance. Robust optimization deals with optimal designs which are robust with respect to small (or even large) perturbations of the optimization setpoint conditions. The resulting optimization tasks become much more complex than the usual single-setpoint case, so that efficient and fast algorithms need to be developed in order to identify, quantify and include the uncertainties in the overall optimization procedure. In this thesis, a novel approach towards stochastic distributed aleatory uncertainties for the specific application of optimal aerodynamic design under uncertainties is presented. In order to include the uncertainties in the optimization, robust formulations of the general aerodynamic design optimization problem based on probabilistic models of the uncertainties are discussed. Three classes of formulations of the aerodynamic shape optimization problem are identified: the worst-case, the chance-constrained and the semi-infinite formulation. Since the worst-case formulation may lead to overly conservative designs, the focus of this thesis is on the chance-constrained and semi-infinite formulations. A key issue is then to propagate the input uncertainties through the system to obtain statistics of quantities of interest, which are used as a measure of robustness in both robust counterparts of the deterministic optimization problem. Due to the highly nonlinear underlying design problem, uncertainty quantification methods are used in order to approximate and consequently simplify the problem to a solvable optimization task. Computationally demanding evaluations of high-dimensional integrals arise, resulting from the direct approximation of statistics as well as from uncertainty quantification approximations. To overcome the curse of dimensionality, sparse grid methods in combination with adaptive refinement strategies are applied. The reduction of the number of discretization points is an important issue in the context of robust design, since the computational effort of the numerical quadrature comes up in every iteration of the optimization algorithm. In order to efficiently solve the resulting optimization problems, algorithmic approaches based on multiple-setpoint ideas in combination with one-shot methods are presented. A parallelization approach is provided to overcome the additional computational effort involved in multiple-setpoint optimization problems. Finally, the developed methods are applied to 2D and 3D Euler and Navier-Stokes test cases verifying their industrial usability and reliability. Numerical results of robust aerodynamic shape optimization under uncertain flight conditions as well as geometrical uncertainties are presented. Further, uncertainty quantification methods are used to investigate the influence of geometrical uncertainties on quantities of interest in a 3D test case. The results demonstrate the significant effect of uncertainties in the context of aerodynamic design and thus the need for robust design to ensure good performance in real-life conditions.
The thesis proposes a general framework for robust aerodynamic design attacking the additional computational complexity of the treatment of uncertainties, thus making robust design in this sense possible.
Extension of inexact Kleinman-Newton methods to a general monotonicity preserving convergence theory
(2011)
The thesis at hand considers inexact Newton methods in combination with algebraic Riccati equations. A monotone convergence behaviour is proven, which enables non-local convergence. This relation is then transferred to a general convergence theory for inexact Newton methods that secures the monotonicity of the iterates for convex or concave mappings. Several applications demonstrate the practical benefits of the newly developed theory.
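For orientation, the classical (exact) Kleinman-Newton iteration for the continuous-time algebraic Riccati equation can be sketched as follows; the monotone decrease of the iterates from a stabilizing initial guess is precisely the behaviour the thesis extends to the inexact setting. The test data are made up, and exact Lyapunov solves stand in for the inexact inner iterations analyzed in the thesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def kleinman_newton(A, B, Q, R, X0, steps=15):
    """Kleinman-Newton for the CARE  A'X + XA - X B R^{-1} B' X + Q = 0.
    Each Newton step is one Lyapunov solve; from a stabilizing initial
    guess the iterates decrease monotonically to the stabilizing solution."""
    X = X0
    for _ in range(steps):
        K = np.linalg.solve(R, B.T @ X)        # feedback gain R^{-1} B' X
        Acl = A - B @ K                        # closed-loop matrix
        X = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    return X

rng = np.random.default_rng(0)
n, m = 6, 2
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)   # shifted to be stable,
B = rng.standard_normal((n, m))                     # so X0 = 0 is stabilizing
Q, R = np.eye(n), np.eye(m)
X = kleinman_newton(A, B, Q, R, X0=np.zeros((n, n)))
print("deviation from scipy's direct CARE solver:",
      np.linalg.norm(X - solve_continuous_are(A, B, Q, R)))
```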
The ménage hit polynomials arise naturally from the ménage numbers of combinatorics. A connection with a certain class of hypergeometric polynomials leads to the study of special sequences of polynomials of type 3F1. Using a modification of the complex Laplace method for the uniform asymptotic evaluation of parameter integrals, together with some tools from potential theory in the complex plane, strong and weak asymptotics are derived for the polynomial sequences in question.
This thesis deals with (frequently) universal functions with respect to differential operators and weighted shift operators. A characteristic of functions of exponential type is investigated that has not previously been considered in the context of universality: the conjugate indicator diagram. This is a compact, convex set associated with a function of exponential type which permits certain conclusions about its growth and the possible distribution of its zeros. By means of a special transformation, (frequently) universal functions of exponential type for various differential operators are transformed into one another. This makes it possible to locate precisely the conjugate indicator diagrams of possible (frequently) universal functions for these operators. By conjugating differentiation with weighted shift operators via the Hadamard product, a localization of the possible conjugate indicator diagrams of their (frequently) universal functions is obtained for these operators as well.
This thesis introduces a calibration problem for financial market models based on a Monte Carlo approximation of the option payoff and a discretization of the underlying stochastic differential equation. It is desirable to benefit from fast deterministic optimization methods for solving this problem. To achieve this goal, possible non-differentiabilities are smoothed out with an appropriately chosen twice continuously differentiable polynomial. On the basis of the calibration problem derived in this way, this work is essentially concerned with two issues. First, the question arises whether a computed solution of the approximating problem (derived by applying Monte Carlo, discretizing the SDE and preserving differentiability) approximates a solution of the true problem. Unfortunately, this does not hold in general but is linked to certain assumptions. It turns out that uniform convergence of the approximated objective function and its gradient to the true objective and gradient can be shown under typical assumptions, for instance the Lipschitz continuity of the SDE coefficients. This uniform convergence then allows one to show convergence of the solutions in the sense of first-order critical points. Furthermore, an order of this convergence in relation to the number of simulations, the step size of the SDE discretization and the parameter controlling the smooth approximation of non-differentiabilities is established. Additionally, the uniqueness of a solution of the stochastic differential equation is analyzed in detail. Secondly, the Monte Carlo method provides only very slow convergence. The numerical results in this thesis show that Monte Carlo based calibration is indeed feasible as far as the computed solution is concerned, but the required computation time is too long for practical applications. Thus, techniques to speed up the calibration are strongly desired. As already mentioned above, the gradient of the objective is a starting point for improving efficiency. Due to its simplicity, finite differences is a frequently chosen method to calculate the required derivatives. However, finite differences are well known to be very slow, and it turns out that severe instabilities may also occur during optimization, which can lead to a breakdown of the algorithm before convergence is reached. A sensitivity equation is certainly an improvement but unfortunately suffers from the same computational effort as the finite difference method. Thus, an adjoint-based gradient calculation is the method of choice, as it combines the exactness of the derivative with a reduced computational effort. Furthermore, several other techniques are introduced throughout this thesis that enhance the efficiency of the calibration algorithm. A multi-layer method is very effective in the case that the chosen initial value is not already close to the solution. Variance reduction techniques are helpful to increase the accuracy of the Monte Carlo estimator and thus allow for fewer simulations. Storing instead of regenerating the random numbers required for the Brownian increments in the SDE is efficient, as deterministic optimization methods require the identical random sequence in each function evaluation anyway. Finally, Monte Carlo is very well suited for parallelization, which is done on several central processing units (CPUs).
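A minimal sketch of the basic construction (Euler-Maruyama paths, a C^2 polynomial smoothing of the call payoff's kink, and stored random numbers so that the objective is deterministic); the Black-Scholes dynamics, one-parameter calibration and all constants are illustrative assumptions, far simpler than the models treated in the thesis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def smooth_plus(x, d=0.05):
    """C^2 polynomial smoothing of max(x, 0) on the interval [-d, d]."""
    mid = -x**4 / (16 * d**3) + 3 * x**2 / (8 * d) + x / 2 + 3 * d / 16
    return np.where(x <= -d, 0.0, np.where(x >= d, x, mid))

def mc_price(sigma, Z, S0=100.0, K=100.0, r=0.03, T=1.0):
    """Euler-Maruyama Monte Carlo price of a European call with smoothed
    payoff; Z holds fixed Brownian increments (common random numbers)."""
    n, M = Z.shape
    dt = T / M
    S = np.full(n, S0)
    for k in range(M):
        S = S + r * S * dt + sigma * S * np.sqrt(dt) * Z[:, k]
    return np.exp(-r * T) * smooth_plus(S - K).mean()

Z = np.random.default_rng(0).standard_normal((20000, 40))
target = mc_price(0.25, Z)                       # synthetic "market" price
obj = lambda s: (mc_price(s, Z) - target)**2     # least-squares calibration
res = minimize_scalar(obj, bounds=(0.05, 0.8), method="bounded")
print("calibrated sigma:", res.x)                # recovers sigma = 0.25
```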
The subject of this thesis is a homological approach to the splitting theory of PLS-spaces, i.e. to the question for which topologically exact short sequences 0->X->Y->Z->0 of PLS-spaces X, Y, Z the right-hand map admits a right inverse. We show that the category (PLS) of PLS-spaces and continuous linear maps is an additive category in which every morphism admits a kernel and a cokernel, i.e. it is pre-abelian. However, we also show that it is neither quasi-abelian nor semi-abelian. As a foundation for our homological constructions, we show the more general result that every pre-abelian category admits a largest exact structure in the sense of Quillen. In the pre-abelian category (PLS) this exact structure consists precisely of the topologically exact short sequences of PLS-spaces. Using a construction of Ext-functors due to Yoneda, we show that one can define for each PLS-space A and every natural number k the k-th abelian-group-valued covariant and contravariant Ext-functors acting on the category (PLS) of PLS-spaces, which induce for every topologically exact short sequence of PLS-spaces a long exact sequence of abelian groups and group morphisms. These functors are studied in detail and we establish a connection between the Ext-functors of PLS-spaces and the Ext-functors for LS-spaces. Through this connection we arrive at an analogue of a result for Fréchet spaces which connects the first derived functor of the projective limit with the first Ext-functor and also gives sufficient conditions for the vanishing of the higher Ext-functors. Finally, we show that Ext^k(E,F) = 0 for all k greater than or equal to 1 whenever E is a closed subspace and F is a Hausdorff quotient of the space of distributions, which generalizes a result of Wengenroth that is itself a generalization of results due to Domanski and Vogt.
Large-scale non-parametric applied shape optimization for computational fluid dynamics is considered. Treating a shape optimization problem as a standard optimal control problem by means of a parameterization, the Lagrangian usually requires knowledge of the partial derivative of the shape parameterization and deformation chain with respect to input parameters. For a variety of reasons, this mesh sensitivity Jacobian is usually quite problematic. For a sufficiently smooth boundary, the Hadamard theorem provides a gradient expression that exists on the surface alone, completely bypassing the mesh sensitivity Jacobian. Building upon this, the gradient computation becomes independent of the number of design parameters, and all surface mesh nodes are used as design unknowns in this work, effectively allowing a free morphing of shapes during optimization. Contrary to a parameterized shape optimization problem, where a smooth surface is usually created by construction independently of the input parameters, regularity is not preserved automatically in the non-parametric case. As part of this work, the shape Hessian is used in an approximative Newton method, also known as the Sobolev method or gradient smoothing, to ensure a certain regularity of the updates; thus a smooth shape is preserved while at the same time the one-shot optimization method is accelerated considerably. For PDE-constrained shape optimization, the Hessian usually is a pseudo-differential operator. Fourier analysis is used to identify the operator symbol both analytically and discretely. Preconditioning the one-shot optimization by an appropriate Hessian symbol is shown to greatly accelerate the optimization. As the correct discretization of the Hadamard form usually requires evaluating certain surface quantities such as tangential divergence and curvature, special attention is also given to discrete differential geometry on triangulated surfaces for evaluating shape gradients and Hessians. The Hadamard formula and Hessian approximations are applied to a variety of flow situations. In addition to shape optimization of internal and external flows, major focus lies on aerodynamic design such as optimizing two-dimensional airfoils and three-dimensional wings. Shock waves form when the local speed of sound is reached, and the gradient must be evaluated correctly at discontinuous states. To ensure proper shock resolution, an adaptive multi-level optimization of the ONERA M6 wing is conducted using more than 36,000 shape unknowns on a standard office workstation, demonstrating the applicability of the shape one-shot method to industry-size problems.
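The gradient-smoothing step can be illustrated by a 1D periodic toy model: the raw (noisy) surface gradient is replaced by the solution of (I - eps*Laplacian) applied to it, which damps the high-frequency components that would destroy shape regularity. This is a sketch under simplifying assumptions, not the operator-symbol preconditioner developed in the thesis.

```python
import numpy as np

def sobolev_smooth(grad, eps=1e-4):
    """Sobolev/gradient smoothing on a closed curve: solve
    (I - eps * Laplacian) g_smooth = grad with a periodic FD Laplacian."""
    n = len(grad)
    h = 1.0 / n
    L = (np.eye(n, k=-1) - 2 * np.eye(n) + np.eye(n, k=1)) / h**2
    L[0, -1] = L[-1, 0] = 1.0 / h**2        # periodic closure of the stencil
    return np.linalg.solve(np.eye(n) - eps * L, grad)

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
noise = 0.3 * np.random.default_rng(2).standard_normal(256)
raw = np.sin(theta) + noise                  # noisy "Hadamard gradient"
smooth = sobolev_smooth(raw)
print("max deviation after smoothing:", np.abs(smooth - np.sin(theta)).max())
```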
The thesis studies the question of how universal behavior is inherited by the Hadamard product. The type of universality considered here is universality by overconvergence; a definition will be given in chapter five. The situation can be described as follows: let f be a universal function, and let g be a given function. Is the Hadamard product of f and g universal again? This question will be studied in chapter six. Starting with the Hadamard product for power series, a definition for a more general context must be provided. For plane open sets both containing the origin this has already been done. But in order to answer the above question, it becomes necessary to have a Hadamard product for functions that are not holomorphic at the origin. The elaboration of such a Hadamard product and its properties form the second central part of this thesis; chapter three will be concerned with them. The idea of the definition of such a Hadamard product follows the case already known: the Hadamard product is defined by a parameter integral. Crucial for this definition is the choice of appropriate integration curves; these will be introduced in chapter two. By means of the properties of the Hadamard product it is possible to prove the Hadamard multiplication theorem and the Borel-Okada theorem. A generalization of these theorems will be presented in chapter four.
Given a null sequence h_n whose terms are all different from zero, a theorem of Marcinkiewicz asserts the existence of continuous functions f from the interval [0,1] to the real line that are maximally non-differentiable in the sense that for every measurable function g there exists a subsequence n_k such that (f(x+h_{n_k})-f(x))/h_{n_k} converges to g almost everywhere. In the first part of this thesis we prove extensions of this theorem in higher dimensions and analogues for functions in the complex plane. The second part of this thesis is concerned with operators closely related to Korovkin's theorem on positive linear operators. We show that there exist operators L_n, each failing one of the properties in Korovkin's theorem, such that at the same time a residual set of functions f exists for which L_n f not only fails to converge to f but is even dense in the space of all continuous functions on the interval [0,1]. Similar phenomena are investigated for polynomial interpolation.
In this thesis, we investigate the quantization problem of Gaussian measures on Banach spaces by means of constructive methods. That is, for a random variable X and a natural number N, we are searching for those N elements in the underlying Banach space which give the best approximation to X in the average sense. We particularly focus on centered Gaussians on the space of continuous functions on [0,1] equipped with the supremum norm, since in that case all known methods failed to achieve the optimal quantization rate for important Gaussian processes. In fact, by means of spline approximations and a scheme based on best approximations in the sense of the Kolmogorov n-width, we are able to attain the optimal rate of convergence to zero for these quantization problems. Moreover, we establish a new upper bound for the quantization error, which is based on a very simple criterion, the modulus of smoothness of the covariance function. Finally, we explicitly construct these quantizers numerically.
Considering the numerical simulation of mathematical models, it is necessary to have efficient methods for computing special functions. We focus our considerations in particular on the classes of Mittag-Leffler and confluent hypergeometric functions. The thesis can be structured in three parts. In the first part, entire functions are considered. If we look at the partial sums of the Taylor series with respect to the origin, we find that they typically only provide a reasonable approximation of the function in a small neighborhood of the origin. The main disadvantage of these partial sums are the cancellation errors which occur when computing in fixed-precision arithmetic outside this neighborhood. Therefore, our aim is to quantify and then to reduce this cancellation effect. In the next part we consider the Mittag-Leffler and the confluent hypergeometric functions in detail. Using the method developed in the first part, we can reduce the cancellation problems by "modifying" the functions for several parts of the complex plane. Finally, in the last part, two other approaches to computing Mittag-Leffler type and confluent hypergeometric functions are discussed. If we want to evaluate such functions on unbounded intervals or sectors in the complex plane, we have to consider methods like asymptotic expansions or continued fractions for arguments z that are large in modulus.
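The cancellation effect is easy to reproduce in the simplest Mittag-Leffler function, E_1(z) = e^z: summing the alternating Taylor series of exp at a negative argument in double precision loses essentially all significant digits, while taking the reciprocal of the series at the positive argument is stable. The following sketch assumes nothing beyond standard floating-point arithmetic.

```python
import numpy as np

def exp_taylor(x, terms=200):
    """Partial sum of the Taylor series of exp about the origin."""
    s, t = 1.0, 1.0
    for n in range(1, terms):
        t *= x / n
        s += t
    return s

x = 30.0
naive = exp_taylor(-x)        # alternating terms up to ~8e11 cancel catastrophically
stable = 1.0 / exp_taylor(x)  # all terms positive: no cancellation
print(naive, stable, np.exp(-x))
```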
The dissertation "Cross-Border-Leasing als Instrument der Kommunalfinanzierung - Eine finanzwirtschaftliche Analyse unter besonderer Berücksichtigung der Risiken" (cross-border leasing as an instrument of municipal financing: a financial analysis with particular attention to the risks) uses the example of primarily tax-induced cross-border leasing (CBL) to examine an innovative, structured financing instrument that sits in the field of tension between the rule of law and the private-sector management of public actors. In such transactions, assets that are already financed and in operation are placed into variations of long-term leasing contracts. By skillfully exploiting tax attribution criteria across several jurisdictions, opportunities for profit shifting and tax optimization are created, and the additional income generated is divided among the participants. The investigation is guided by a comprehensive catalogue of research questions that examines the complex aspects of CBL in a multilayered and interdisciplinary manner, both theoretically and by means of a practical case study. First, CBL is embedded in its municipal context. This is followed by a description of the object of investigation with regard to its basic structure, payment flows, contracting parties and their bilateral entanglements, together with an analysis of the public-law implications of CBL and of the regulatory requirements of municipal supervision. In the central empirical part of the dissertation, a prototypical CBL transaction of a large German city is analyzed as a case study: in a first scholarly analysis of an original transaction documentation, the structural parameters are examined in order to determine the financing advantage of the transaction. The risks are classified into those that lie directly within the municipality's sphere of influence, and can therefore be minimized or avoided by its own active conduct, and those that are external from its point of view. The risk analysis is rounded off by an estimate of the maximum risk position in the form of the damages that the municipality must pay in contractually specified cases. The author determines the break-even point of the transaction and employs scenarios and mathematical models in order to weigh the inherent risks, in terms of their cost consequences, carefully against the short-term advantage received. The investigation uses the well-established mathematical-statistical Value-at-Risk (VaR) approach, which quantifies market price risk by means of probability distributions. To arrive at valid results, the two best-known (non-parametric) tools of the VaR approach are applied in order to estimate the potential performance fluctuations of the deposit value under given probabilities: historical simulation and the mathematically more demanding Monte Carlo simulation. As an extension of the VaR model, the conditional VaR is also computed, which allows statements about the magnitude of the expected losses. From these results, the maximum financial risk position of the municipality with respect to the capital deposit is derived.
In addition, the CBL transaction is assessed as a whole within a mathematical model, by setting the financing advantage received against the default risks weighted by their probabilities of occurrence and taking the respective time of occurrence into account. This procedure combines the financing advantage with the risk measures VaR, expected shortfall, and expected loss. The financial risk measures obtained lead to surprising results that clearly refute the proclaimed absence of risk and the supposedly attractive return potential of such transactions. From these findings, the author derives practical recommendations and hedging options for municipal decision-makers. The effects of the change in US tax law of February 2005 on existing transactions as well as on new business are outlined in the concluding outlook.
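To make the quoted risk measures concrete, the following sketch (with placeholder return data, not the figures of the case study) computes a Value-at-Risk by historical simulation and by a simple normal Monte Carlo model, together with the Conditional VaR (expected shortfall) as the mean loss beyond the VaR threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical daily returns of the capital deposit (placeholder data)
returns = rng.normal(loc=0.0002, scale=0.01, size=2500)

def var_historical(returns, level=0.99):
    """Historical-simulation VaR: empirical quantile of the losses."""
    return np.quantile(-returns, level)

def cvar(returns, level=0.99):
    """Conditional VaR (expected shortfall): mean loss beyond the VaR."""
    losses = -returns
    v = np.quantile(losses, level)
    return losses[losses >= v].mean()

def var_monte_carlo(mu, sigma, level=0.99, n=100_000):
    """Monte-Carlo VaR under a simple normal model for daily returns."""
    simulated = rng.normal(mu, sigma, size=n)
    return np.quantile(-simulated, level)

print(var_historical(returns), cvar(returns),
      var_monte_carlo(returns.mean(), returns.std()))
```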
The subject of this thesis is hypercyclic, mixing, and chaotic C0-semigroups on Banach spaces. After introducing the relevant notions and giving some examples, the so-called hypercyclicity criterion and its relation to weak mixing is treated. Some new equivalent formulations of the criterion are given, which are used to derive a very short proof of the well-known fact that a C0-semigroup is weakly mixing if and only if each of its operators is. Moreover, it is proved that under some "regularity conditions" each hypercyclic C0-semigroup is weakly mixing. Furthermore, it is shown that for a hypercyclic C0-semigroup there is always a dense set of hypercyclic vectors having infinitely differentiable trajectories. Chaotic C0-semigroups are also considered. It is proved that they are always weakly mixing and that in certain cases chaoticity is already implied by the existence of a single periodic point. Moreover, it is shown that strongly elliptic differential operators on bounded C^1-domains never generate chaotic C0-semigroups. A thorough investigation of transitivity, weak mixing, and mixing of weighted composition operators follows, and complete characterisations of these properties are derived. These results are then used to completely characterise hypercyclicity, weak mixing, and mixing of C0-semigroups generated by first-order partial differential operators. Moreover, a characterisation of chaos for these C0-semigroups is attained. All these results are obtained on spaces of p-integrable functions as well as on spaces of continuous functions, and are illustrated by various concrete examples.
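For the reader's convenience, the standard definitions behind these notions (not restated in the abstract) can be summarized as follows, for a C0-semigroup (T_t)_{t≥0} on a separable Banach space X.

```latex
\begin{itemize}
  \item hypercyclic: there exists $x \in X$ with
        $\overline{\{T_t x : t \ge 0\}} = X$;
  \item (topologically) mixing: for all nonempty open $U, V \subseteq X$
        there is $t_0 \ge 0$ such that $T_t(U) \cap V \neq \emptyset$
        for all $t \ge t_0$;
  \item weakly mixing: $(T_t \oplus T_t)_{t \ge 0}$ is topologically
        transitive on $X \oplus X$;
  \item chaotic: hypercyclic and the set of periodic points
        $\{x : T_{t_0} x = x \text{ for some } t_0 > 0\}$ is dense in $X$.
\end{itemize}
```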
An entire function φ is called T-universal with respect to a given sequence b = (b_n)_{n∈ℕ} of complex numbers with b_n → ∞ if a suitable sequence (φ(z + b_{n_k})) of additive translates of φ converges locally uniformly in ℂ to every prescribed entire function. Furthermore, we call an entire function φ derivative-universal if a suitable sequence (φ^{(n_k)}) of its derivatives converges locally uniformly in ℂ to every prescribed entire function. The existence of such functions is guaranteed by results of Birkhoff (1929) and MacLane (1952), respectively, and by generalizations of their theorems. In this thesis, the construction of such functions which are additionally bounded on every line, or which have zeros at certain prescribed points, is studied. In particular, it turns out that the set of all functions that are T-universal with respect to a given sequence b satisfying a certain condition and, in addition, bounded on every line is dense, but not residual, in the space of all entire functions endowed with the topology of locally uniform convergence. Equally surprising is the construction of T-universal functions that possess a "regular asymptotic distribution of zeros".
The existence of a power series with radius of convergence 1 is proved such that the A-transforms formed with a doubly infinite matrix A (whose complex entries have to satisfy three conditions) overconverge outside the (simply connected) domain of holomorphy of the power series. The main result of the thesis is a theorem on the existence of a universal power series with radius of convergence 1 whose A-transforms approximate continuous functions on compact sets and holomorphic functions on open sets (in both cases the sets lie in the complement of the simply connected domain of holomorphy of the power series) and are, in addition, suitable for the almost-everywhere approximation of measurable functions on measurable sets (again contained in the complement of the domain of holomorphy). As an important consequence of this main result, in the case where the domain of holomorphy of the power series is the unit disc, one obtains the existence of a universal trigonometric series whose A-transforms approximate continuous functions on the boundary of the unit disc and, in addition, approximate measurable functions almost everywhere on [0, 2π].
The optimal control of fluid flows described by the Navier-Stokes equations requires massive computational resources, which has led researchers to develop reduced-order models, such as those derived from proper orthogonal decomposition (POD), to reduce the computational complexity of the solution process. The objective of this thesis is the acceleration of such reduced-order models through the combination of POD reduced-order methods with finite element methods at various discretization levels. The special stabilization methods required for the high-order solution of flow problems with dominant convection on coarse meshes produce numerical data that are incompatible with standard POD methods for reduced-order modeling. We adapt the POD method to such problems by introducing the streamline diffusion POD method (SDPOD). Using the novel SDPOD method, we experiment with multilevel recursive optimization at Reynolds numbers of Re=400 and Re=10,000.
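In its basic form, the snapshot POD construction underlying this approach reduces to a singular value decomposition of a (centered) snapshot matrix; the sketch below, with placeholder data instead of actual flow snapshots, shows the basic mechanics. The streamline-diffusion modification itself is not reproduced here.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Snapshot POD: leading r left singular vectors of the snapshot matrix.

    snapshots: (n_dof, n_snap) array whose columns are states sampled
    at different times; r: dimension of the reduced basis.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)   # captured "energy" per rank
    return U[:, :r], mean, energy[r - 1]

# toy snapshot set: 1000 dofs, 50 time samples (placeholder data)
X = np.random.default_rng(1).standard_normal((1000, 50))
basis, mean, captured = pod_basis(X, r=10)
coeffs = basis.T @ (X - mean)            # reduced-order coordinates
reconstruction = mean + basis @ coeffs   # low-rank approximation of X
```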
In this dissertation we are concerned with the constructive and generic existence of universal functions. By a universal function we mean a holomorphic function that, in a certain sense, contains whole classes of functions. The constructive method consists in the explicit construction of a universal function via a limit process, for instance as a series of polynomials. The generic method first defines the desired class of universal functions purely abstractly; using Baire's category theorem it is then shown that this class is not only nonempty but even a dense G_δ subset of the function space under consideration. Both methods rely on the approximation theorems of Runge and Mergelyan. The main results are the following: (1) We constructively prove the existence of universal Laurent series on multiply connected domains and show, in addition, that the set of such universal Laurent series is dense in the space of functions holomorphic on the domain under consideration. (2) The existence of universal Faber series on certain domains is proved both constructively and generically. (3) On the one hand, we show constructively that there exist so-called entire T-universal functions with prescribed approximation paths, where the paths are prescribed by a sufficiently flexible functional form; the set of such functions is a dense G_δ set in the space of entire functions. On the other hand, we prove generically the existence of functions on a bounded domain that are T-universal with respect to certain prescribed approximation paths; here, too, the paths are sufficiently general.
The problems concerning the existence of universal functions and the universal approximation of functions are of a classical nature and play a central role in this field. The following topics are investigated in this thesis: universal functions represented by lacunary series, so-called restricted universalities, multiple universalities, and the universal approximation of measurable functions. A final chapter studies integer-order Cesàro means. It turns out that all results of this thesis on universal approximation in the complement of the closed unit disc by partial sums of a power series with radius of convergence 1 carry over to the corresponding integer-order Cesàro transforms of those partial sums.
In this thesis we focus on the development and investigation of methods for the computation of confluent hypergeometric functions. We point out the relations between these functions and parabolic boundary value problems and demonstrate applications to models of heat transfer and fluid dynamics. For the computation of confluent hypergeometric functions on compact (real or complex) intervals we consider a series expansion based on the Hadamard product of power series. It turns out that the partial sums of this expansion are easily computable and provide a better rate of convergence than the partial sums of the Taylor series; regarding computational accuracy, the problem of cancellation errors is reduced considerably. Another important tool for the computation of confluent hypergeometric functions is the use of recurrence formulae. Although easy to implement, such recurrence relations are numerically unstable, e.g. due to rounding errors. In order to circumvent these problems, a method for evaluating recurrence relations in the backward direction is applied. Furthermore, asymptotic expansions for arguments of large modulus are considered; from the numerical point of view, the determination of the number of terms used for the approximation is a crucial point. As an application we consider initial-boundary value problems with partial differential equations of parabolic type, where we use the method of eigenfunction expansion to determine an explicit form of the solution. The arising eigenfunctions depend directly on the geometry of the considered domain, and for certain domains with special geometry they are of confluent hypergeometric type. Both a conductive heat transfer model and an application in fluid dynamics are considered. Finally, the application of several heat transfer models to certain sterilization processes in the food industry is discussed.
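The backward-recurrence idea mentioned above can be illustrated by Miller's classical algorithm. The sketch below applies it to the modified Bessel functions I_n, which belong to the confluent hypergeometric family, using the normalizing identity e^x = I_0(x) + 2·Σ_{k≥1} I_k(x); it is a generic illustration of the technique, not code from the thesis.

```python
import math

def bessel_I_miller(x, n_max, pad=20):
    """Miller-type backward recurrence for I_0(x), ..., I_{n_max}(x).

    The forward recurrence I_{n+1} = I_{n-1} - (2n/x) I_n is unstable
    because I_n is the minimal solution; running it backward from an
    arbitrary trial start and normalizing afterwards is the standard cure.
    """
    N = n_max + pad                        # start well above n_max
    vals = [0.0] * (N + 2)
    vals[N + 1], vals[N] = 0.0, 1e-30      # arbitrary trial values
    for n in range(N, 0, -1):
        vals[n - 1] = vals[n + 1] + (2.0 * n / x) * vals[n]
    norm = vals[0] + 2.0 * sum(vals[1:N + 1])
    scale = math.exp(x) / norm             # enforce the normalizing identity
    return [v * scale for v in vals[:n_max + 1]]

print(bessel_I_miller(1.0, 5))  # compare with scipy.special.iv(n, 1.0)
```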
In this thesis, we study the convergence behavior of an efficient optimization method used for the identification of parameters of underdetermined systems. The research is motivated by optimization problems arising from the estimation of parameters in neural networks as well as in option pricing models. In the first application, we are concerned with neural networks used to forecast stock market indices. Since neural networks are able to describe extremely complex nonlinear structures, they are used to improve the modelling of the nonlinear dependencies occurring in financial markets. Applying neural networks to the forecasting of economic indicators, we are confronted with a nonlinear least squares problem of large dimension. Furthermore, in this application the number of parameters of the neural network to be determined is usually much larger than the number of patterns available for their determination; hence, the residual function of our least squares problem is underdetermined. In option pricing, an important but usually unknown parameter is the volatility of the underlying asset of the option. Assuming that the underlying asset follows a one-factor continuous diffusion model with nonconstant drift and volatility terms, the value of a European call option satisfies a parabolic initial value problem in which the volatility function appears in one of the coefficients of the parabolic differential equation. Using this system equation, the estimation of the volatility function is formulated as a nonlinear least squares problem. Since the adaptation of the volatility function is based on only a small number of observed market data, these problems are naturally ill-posed. For the solution of these large-scale underdetermined nonlinear least squares problems we use a fully iterative inexact Gauss-Newton algorithm. We show how the structure of a neural network as well as that of the European call price model can be exploited using iterative methods. Moreover, we present theoretical statements on the convergence of the inexact Gauss-Newton algorithm applied to the less examined case of underdetermined nonlinear least squares problems. Finally, we present numerical results for the application of neural networks to the forecasting of stock market indices as well as for the construction of the volatility function in European option pricing models. In the latter application, we discretize the parabolic differential equation using a finite difference scheme and elucidate convergence problems of the discrete scheme when the initial condition is not everywhere differentiable.
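A minimal sketch of the kind of inexact Gauss-Newton iteration discussed above, with a toy residual rather than the network or option model of the thesis: each outer step solves the underdetermined linearized problem J(x)s = -F(x) only approximately with the iterative solver LSQR, which returns the minimum-norm least squares solution of the subproblem.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def inexact_gauss_newton(F, J, x0, outer_iters=20, inner_tol=1e-3):
    """Inexact Gauss-Newton for min ||F(x)||^2 with an underdetermined
    residual: the linearized step is computed only approximately."""
    x = np.asarray(x0, float)
    for _ in range(outer_iters):
        r = F(x)
        if np.linalg.norm(r) < 1e-10:
            break
        s = lsqr(J(x), -r, atol=inner_tol, btol=inner_tol)[0]
        x = x + s
    return x

# toy underdetermined example: 2 residuals, 3 unknowns (placeholder)
F = lambda x: np.array([x[0] * x[2] - 1.0, x[1] + x[2] - 2.0])
J = lambda x: np.array([[x[2], 0.0, x[0]], [0.0, 1.0, 1.0]])
print(inexact_gauss_newton(F, J, np.array([0.5, 0.5, 0.5])))
```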
This work is concerned with arbitrage bounds for prices of contingent claims under transaction costs, disregarding other conceivable market frictions. Assumptions on the underlying market are kept as weak as possible while still allowing meaningful results of good economic sense. In discrete time we also allow for underlying price processes with uncountable state space; in continuous time the underlying price process is modeled by a semimartingale, and for the most part we could avoid any stronger assumptions. The main problems dealt with in this work are the modelling of (proportional) transaction costs, Fundamental Theorems of Asset Pricing under transaction costs, dual characterizations of arbitrage bounds under transaction costs, quantile hedging under transaction costs, and alternatives to the Black-Scholes model in continuous time (under transaction costs). The results apply to stock and currency markets.
The concept of proximal multi-step regularization (MSR) on sequences of grids for the solution of ill-posed variational inequalities was proposed and theoretically motivated by Kaplan and Tichatschke in their 1997 paper "Prox-regularization and solution of illposed elliptic variational inequalities". The same article considers a general problem of partial regularization on a closed subspace. Ill-posed optimal control problems are natural candidates for such a regularization, the subspace of the full process space being formed by the control variables. In the first chapter of this dissertation we consider an abstract linear-quadratic control problem in general Hilbert spaces and discuss assumptions and conditions under which the control problem becomes ill-posed. Two general numerical schemes of partial multi-step regularization are then formulated. In the first scheme, following the corresponding publications of Kaplan and Tichatschke, the state equation is embedded into a quadratic penalty term; in the second, the auxiliary problems of the MSR method are set up with the state equation satisfied exactly. At the heart of all investigations is the convergence of the approximate solutions of the auxiliary problems of the MSR method to an element of the optimal set of the original problem. The question arises in which of the two cases weaker convergence conditions on the inner approximations can be given. To clarify this question we study two ill-posed control problems with elliptic state equations and distributed control. The first problem can be reduced to the well-known Fuller problem, which admits an analytic solution exhibiting a so-called "chattering regime" and serves as a basic example for our purposes. For the solution of the Fuller problem we formulate an MSR algorithm in which errors of the penalty method and of the FEM approximations have to be taken into account. As the main result we obtain a convergence criterion that determines the asymptotic behaviour of the regularization, discretization, and penalty parameters of the MSR algorithm. In the last chapter we formulate another ill-posed optimal control problem with distributed control over a polygonal domain, the state equation now being given by a Poisson problem with mixed boundary conditions. This setting provides a natural extension of the Fuller problem, which is based on an ordinary differential equation, to control problems with partial differential equations. We again formulate the MSR method, this time taking a computational error into account in addition to the discretization error, but we dispense with penalty techniques and set up the auxiliary problems with the state equation satisfied exactly. With this alternative approach and Falk's proof techniques we obtain a weaker, and hence better, convergence criterion for the MSR method. Finally, we present results of numerical tests carried out with the MSR method for a concrete optimal control problem whose solution exhibits a two-dimensional chattering regime.
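For reference, the Fuller problem mentioned above reads, in one common normalization (details such as the horizon and the initial data vary across the literature); its optimal control switches between +1 and -1 infinitely often on a finite interval, which is the "chattering regime" referred to in the abstract.

```latex
\begin{equation*}
  \min_{\,|u(t)| \le 1}\; \int_0^T x_1(t)^2 \, dt
  \qquad \text{subject to} \qquad
  \dot{x}_1 = x_2, \quad \dot{x}_2 = u, \quad x(0) = x_0 .
\end{equation*}
```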
The discretization of optimal control problems governed by partial differential equations typically leads to large-scale optimization problems. We consider flow control involving the time-dependent Navier-Stokes equations as the state equation, which exhibits exactly this property. In order to avoid the difficulties of dealing with large-scale (discretized) state equations during the optimization process, the number of state variables can be reduced by employing a reduced-order modelling technique. Using the snapshot proper orthogonal decomposition (POD) method, one obtains a low-dimensional model for the computation of an approximate solution to the state equation; in fact, a small number of POD basis functions often suffices to obtain a satisfactory level of accuracy in the reduced-order solution. However, the small number of degrees of freedom in a POD-based reduced-order model also constitutes its main weakness for optimal control purposes. Since a single reduced-order model is based on the solution of the Navier-Stokes equations for one specified control, it may become inadequate when the control (and consequently the corresponding flow behaviour) is altered, so the range of validity of a reduced-order model is, in general, limited. Thus, one is likely to encounter unreliable reduced-order solutions when the control problem is solved on the basis of one single reduced-order model. To escape this dilemma, we propose a trust-region proper orthogonal decomposition (TRPOD) approach. By embedding the POD-based reduced-order modelling technique into a trust-region framework with general model functions, we obtain a mechanism for updating the reduced-order models during the optimization process, enabling them to represent the flow dynamics as altered by the control. A rigorous convergence theory for the TRPOD method is obtained, which justifies this procedure also from a theoretical point of view. Benefiting from the trust-region philosophy, the TRPOD method saves a substantial amount of computational work during the solution of the control problem, since the original state equation has to be solved only when we intend to update the model function in the trust-region framework; the optimization process itself is based entirely on reduced-order information.
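The acceptance mechanism of the TRPOD approach follows the generic trust-region pattern. The sketch below is a stand-in with a cheap finite-difference quadratic surrogate instead of an actual POD reduced-order model, and a toy objective instead of a Navier-Stokes solve; it shows how the measured ratio of actual to predicted reduction drives both the acceptance of a step and the update of the surrogate.

```python
import numpy as np

def build_surrogate(f, c, h=1e-5):
    """Cheap local quadratic model around c (stand-in for a POD ROM)."""
    c = np.asarray(c, float)
    g = np.array([(f(c + h * e) - f(c - h * e)) / (2 * h)
                  for e in np.eye(c.size)])
    return lambda v: f(c) + g @ (np.asarray(v) - c) \
        + 0.5 * np.sum((np.asarray(v) - c) ** 2)

def trpod_like(J_full, u0, delta=1.0, eta1=0.1, eta2=0.75, iters=40):
    """Trust-region loop with an updatable surrogate, in the spirit of
    TRPOD: the expensive objective J_full is evaluated only to judge
    (and possibly rebuild) the surrogate, never inside the subproblem."""
    rng = np.random.default_rng(0)
    u = np.asarray(u0, float)
    model = build_surrogate(J_full, u)
    for _ in range(iters):
        # crude subproblem solve: sample the trust region, take the best
        cand = u + delta * rng.uniform(-1.0, 1.0, size=(200, u.size))
        u_trial = min(cand, key=model)
        ared = J_full(u) - J_full(u_trial)     # actual reduction
        pred = model(u) - model(u_trial)       # predicted reduction
        rho = ared / pred if pred > 0 else -1.0
        if rho >= eta1:                        # surrogate good enough
            u = u_trial
            model = build_surrogate(J_full, u) # "new snapshots" -> new model
            if rho >= eta2:
                delta *= 2.0                   # very accurate: widen region
        else:
            delta *= 0.5                       # inaccurate: shrink and retry
    return u

J = lambda u: (u[0] - 1.0) ** 2 + 10.0 * (u[1] + 0.5) ** 2
print(trpod_like(J, [3.0, 2.0]))               # should approach (1, -0.5)
```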
This work is concerned with the numerical solution of optimization problems that arise in the context of ground water modeling. Both ground water hydraulic and quality management problems are considered. The problems under consideration are discretized optimal control problems governed by discretized partial differential equations. Of special interest in this work are inaccurate function evaluations and their numerical treatment within an optimization algorithm; methods for noisy functions turn out to be appropriate for the practical application at hand. In addition, block preconditioners are constructed and analyzed that exploit the structure of the underlying linear systems. Specifically, KKT systems are considered, and the preconditioners are tested for use within Krylov subspace methods. The project was financed by the foundation Stiftung Rheinland-Pfalz für Innovation and carried out jointly with TGU GmbH, a company of consulting engineers for ground water and water resources.
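As a generic illustration of this preconditioning strategy (not the actual ground-water matrices), the following sketch builds a symmetric saddle-point (KKT) system with a diagonal (1,1) block and applies a block-diagonal preconditioner diag(A, S), with S the Schur complement, inside the Krylov solver MINRES; in practice S would itself be replaced by a cheap approximation.

```python
import numpy as np
from scipy.sparse import bmat, diags
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(2)
n, m = 200, 40
a = 2.0 + rng.random(n)                 # SPD diagonal (1,1) block A
B = rng.random((m, n))                  # constraint block
K = bmat([[diags(a), B.T], [B, None]]).tocsr()   # KKT saddle-point matrix
rhs = rng.random(n + m)

S = B @ (B.T / a[:, None])              # Schur complement B A^{-1} B^T

def apply_prec(v):
    """Apply diag(A, S)^{-1}; with diagonal A both solves are cheap."""
    return np.concatenate([v[:n] / a, np.linalg.solve(S, v[n:])])

M = LinearOperator((n + m, n + m), matvec=apply_prec)
z, info = minres(K, rhs, M=M)
print("MINRES converged" if info == 0 else f"info = {info}")
```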
The goal of this thesis is to transfer the logarithmic barrier approach, which in recent years has led to very efficient interior-point methods for convex optimization problems, to convex semi-infinite programming problems. Based on a reformulation of the constraints into a nondifferentiable form, this can be done directly for convex semi-infinite programming problems with nonempty compact sets of optimal solutions. However, because of the max-term involved, this reformulation leads to nondifferentiable barrier problems, which can be solved with an extension of a bundle method of Kiwiel. This extension allows us to deal with the inexact objective values and inexact subgradient information that occur due to the inexact evaluation of the maxima. Nevertheless, we are able to prove convergence results similar to those of the logarithmic barrier approach in finite optimization. In the further course of the thesis, the logarithmic barrier approach is coupled with the proximal point regularization technique in order to solve ill-posed convex semi-infinite programming problems as well. Moreover, this coupled algorithm generates sequences converging to an optimal solution of the given semi-infinite problem, whereas the pure logarithmic barrier method only produces sequences whose accumulation points are such optimal solutions. If certain additional conditions are fulfilled, we are further able to prove convergence rate results up to linear convergence of the iterates. Finally, besides hints for the implementation of the methods, we present numerous numerical results for model examples as well as for applications in finance and digital filter design.
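The following sketch illustrates the overall shape of such a barrier scheme on a toy semi-infinite program. The infinitely many constraints are folded into the max-term G(x) = max_t g(x, t), evaluated inexactly on a finite grid, and the nonsmooth barrier subproblems are treated here with plain subgradient steps, where the thesis uses an extension of Kiwiel's bundle method instead; all names and parameters are illustrative.

```python
import numpy as np

def log_barrier_sip(grad_f, g, grad_g, T_grid, x0, mu0=1.0,
                    shrink=0.5, outer=15, inner=200, step=1e-2):
    """Sketch of a logarithmic-barrier scheme for a convex SIP:
    min f(x)  s.t.  g(x, t) <= 0 for all t in T, via the barrier
    subproblems  min f(x) - mu * log(-G(x))  with G(x) = max_t g(x, t)."""
    x, mu = np.asarray(x0, float), mu0
    for _ in range(outer):
        for _ in range(inner):
            vals = np.array([g(x, t) for t in T_grid])
            k = int(np.argmax(vals))
            G = vals[k]                   # inexact evaluation of the max-term
            if G >= 0:                    # left the barrier's domain:
                break                     # a careful code would backtrack here
            d = grad_f(x) + (mu / -G) * grad_g(x, T_grid[k])   # subgradient
            x = x - step * d
        mu *= shrink                      # drive the barrier parameter down
    return x

# toy SIP: maximize x1 + x2 subject to x1*t + x2 <= t^2 for all t in [0, 1]
grad_f = lambda x: np.array([-1.0, -1.0])   # minimizing f(x) = -(x1 + x2)
g = lambda x, t: x[0] * t + x[1] - t * t
grad_g = lambda x, t: np.array([t, 1.0])
print(log_barrier_sip(grad_f, g, grad_g, np.linspace(0.0, 1.0, 101),
                      x0=np.array([0.0, -1.0])))
```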