
In this thesis we study the optimization problem of optimally orienting orthotropic materials in the skin of three-dimensional shell structures. The goal of the optimization is to minimize the overall compliance of the structure, which corresponds to searching for a design that is as stiff as possible. Both the mathematical and the mechanical foundations are compiled in compact form, and on this basis new extensions of pointwise formulated optimization methods, both gradient-based and grounded in mechanical principles, are developed and implemented. The presented methods are tested and compared on the model of an aircraft wing with a problem size relevant to practice. Finally, the investigated methods are studied in their coupling with a topology optimization scheme based on the topological gradient.

Optimal Control of Partial Integro-Differential Equations and Analysis of the Gaussian Kernel
(2018)

An important field of applied mathematics is the simulation of complex financial, mechanical, chemical, physical or medical processes with mathematical models. In addition to the pure modeling of the processes, the simultaneous optimization of an objective function by changing the model parameters is often the actual goal. Models in fields such as finance, biology or medicine benefit from this optimization step.
While many processes can be modeled using an ordinary differential equation (ODE), partial differential equations (PDEs) are needed to optimize heat conduction and flow characteristics, the spreading of tumor cells in tissue, as well as option prices. A partial integro-differential equation (PIDE) is a partial differential equation involving an integral operator, e.g., the convolution of the unknown function with a given kernel function. PIDEs occur for example in models that simulate adhesive forces between cells or option prices with jumps.
In each of the two parts of this thesis, a certain PIDE is the main object of interest. In the first part, we study a semilinear PIDE-constrained optimal control problem with the aim to derive necessary optimality conditions. In the second, we analyze a linear PIDE that includes the convolution of the unknown function with the Gaussian kernel.
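The convolution of an unknown function with a Gaussian kernel, the integral term central to the second part, can be illustrated numerically. The following sketch is not taken from the thesis; the grid, kernel width, and test function are illustrative choices. It discretizes the convolution with a simple Riemann sum and checks it against the known fact that convolving two standard normal densities yields an N(0,2) density:

```python
import numpy as np

def gaussian_convolution(u, x, sigma):
    """Approximate (G_sigma * u)(x) on a uniform grid by a Riemann sum,
    where G_sigma is the Gaussian kernel with standard deviation sigma."""
    h = x[1] - x[0]
    kernel = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
    kernel /= np.sqrt(2 * np.pi) * sigma
    return h * kernel @ u

x = np.linspace(-10, 10, 401)
u = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
v = gaussian_convolution(u, x, sigma=1.0)

# Convolving N(0,1) with N(0,1) gives N(0,2); check at x = 0 (index 200):
expected = 1 / np.sqrt(2 * np.pi * 2)
print(abs(v[200] - expected) < 1e-4)  # True
```

For rapidly decaying smooth integrands such as these, the plain Riemann sum is already extremely accurate, which is why the coarse check above suffices.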

Krylov subspace methods are often used to solve large-scale linear equations arising from optimization problems involving partial differential equations (PDEs). Appropriate preconditioning is vital for designing efficient iterative solvers of this type. This research consists of two parts. In the first part, we compare two different kinds of preconditioners for a conjugate gradient (CG) solver applied to a partial integro-differential equation (PIDE) in finance, both theoretically and numerically. An analysis of mesh independence and of the rate of convergence of the CG solver is included. The insights gained from preconditioning the PIDE are then applied to a relevant optimization problem. The second part aims at developing a new preconditioning technique by embedding reduced order models of nonlinear PDEs, which are generated by proper orthogonal decomposition (POD), into deflated Krylov subspace algorithms for solving corresponding optimization problems. Numerical results are reported for a series of test problems.
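To illustrate why preconditioning matters for CG-type solvers, here is a small stand-in experiment: a 1D Laplacian instead of the thesis's PIDE, and an incomplete LU factorization as a generic preconditioner choice, using SciPy:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, spilu, LinearOperator

n = 400
# 1D Laplacian as a stand-in for the discretized operator (the thesis treats a PIDE)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

counts = []
def make_cb():
    c = [0]
    counts.append(c)
    def cb(xk):  # called once per CG iteration
        c[0] += 1
    return cb

x_plain, info_plain = cg(A, b, callback=make_cb())

ilu = spilu(A)  # incomplete LU factorization, applied as preconditioner M ~ A^{-1}
M = LinearOperator((n, n), matvec=ilu.solve)
x_prec, info_prec = cg(A, b, M=M, callback=make_cb())

print(info_plain == 0 and info_prec == 0)  # both runs converged
print(counts[1][0] < counts[0][0])         # preconditioning saves iterations
```

On this tridiagonal matrix the ILU factorization is essentially exact, so the preconditioned solver converges in a handful of iterations while plain CG needs on the order of the mesh size; the PIDE setting of the thesis is of course more subtle.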

Copositive programming is concerned with the problem of optimizing a linear function over the copositive cone, or its dual, the completely positive cone. It is an active field of research and has received a growing amount of attention in recent years. This is because many combinatorial as well as quadratic problems can be formulated as copositive optimization problems. The complexity of these problems is then moved entirely to the cone constraint, showing that general copositive programs are hard to solve. A better understanding of the copositive and the completely positive cone can therefore help in solving (certain classes of) quadratic problems. In this thesis, several aspects of copositive programming are considered. We start by studying the problem of computing the projection of a given matrix onto the copositive and the completely positive cone. These projections can be used to compute factorizations of completely positive matrices. As a second application, we use them to construct cutting planes to separate a matrix from the completely positive cone. Besides the cuts based on copositive projections, we will study another approach to separate a triangle-free doubly nonnegative matrix from the completely positive cone. A special focus is on copositive and completely positive programs that arise as reformulations of quadratic optimization problems. Among those we start by studying the standard quadratic optimization problem. We will show that for several classes of objective functions, the relaxation resulting from replacing the copositive or the completely positive cone in the conic reformulation by a tractable cone is exact. Based on these results, we develop two algorithms for solving standard quadratic optimization problems and discuss numerical results. The methods presented cannot immediately be adapted to general quadratic optimization problems. This is illustrated with examples.
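A standard quadratic optimization problem, as referenced above, minimizes a quadratic form over the simplex. The following sketch solves tiny instances by multi-start local optimization (a heuristic, not the conic reformulation approach of the thesis) and checks the result against the Motzkin-Straus identity, by which the minimum of x^T(E-A)x over the simplex equals 1/omega(G) when A is the adjacency matrix of a graph G with clique number omega and E is the all-ones matrix:

```python
import numpy as np
from scipy.optimize import minimize

def stqp_min(Q, starts=20, seed=0):
    """Heuristically minimize x^T Q x over the standard simplex
    (multi-start local optimization; a sketch, not a conic method)."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    cons = [{"type": "eq", "fun": lambda x: x.sum() - 1}]
    best = np.inf
    for _ in range(starts):
        x0 = rng.dirichlet(np.ones(n))  # random interior starting point
        res = minimize(lambda x: x @ Q @ x, x0, method="SLSQP",
                       bounds=[(0, 1)] * n, constraints=cons)
        best = min(best, res.fun)
    return best

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path graph, omega = 2
Q = np.ones((3, 3)) - A
print(abs(stqp_min(Q) - 0.5) < 1e-4)  # True: minimum is 1/omega = 1/2
```

The multi-start heuristic can of course get stuck in local minima on harder instances, which is exactly why the conic (copositive / completely positive) reformulations studied in the thesis are of interest.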

In this thesis, we investigate the quantization problem of Gaussian measures on Banach spaces by means of constructive methods. That is, for a random variable X and a natural number N, we search for those N elements of the underlying Banach space which give the best approximation to X in the average sense. We particularly focus on centered Gaussians on the space of continuous functions on [0,1] equipped with the supremum norm, since in that case all known methods failed to achieve the optimal quantization rate for important Gaussian processes. In fact, by means of spline approximations and a scheme based on best approximations in the sense of the Kolmogorov n-width, we attain the optimal rate of convergence to zero for these quantization problems. Moreover, we establish a new upper bound for the quantization error, which is based on a very simple criterion, the modulus of smoothness of the covariance function. Finally, we construct these quantizers explicitly by numerical means.
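An empirical impression of the quantization problem can be obtained with Lloyd's algorithm on sampled paths. The sketch below quantizes discretized Brownian paths in the discretized L2 norm, which is a simplification in two respects: the thesis works with the supremum norm and with constructive rather than statistical methods. All sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, N = 2000, 50, 8   # number of paths, time steps, codebook size

# Sample Brownian paths on [0,1] as cumulative sums of Gaussian increments
paths = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / d), (m, d)), axis=1)

# Lloyd's algorithm: alternate nearest-codeword assignment and centroid update
centers = paths[rng.choice(m, N, replace=False)]
for _ in range(30):
    dist = ((paths[:, None, :] - centers[None]) ** 2).sum(-1)
    labels = dist.argmin(1)
    for k in range(N):
        if (labels == k).any():
            centers[k] = paths[labels == k].mean(0)

# Empirical mean quantization error (root mean squared, per time step)
err = np.sqrt(dist.min(1).mean() / d)
print(err < np.sqrt((paths ** 2).mean()))  # True: beats the trivial zero codebook
```

The eight learned codewords already capture the paths far better than the single zero path, but such empirical quantizers say nothing about the convergence rates that are the actual subject of the thesis.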

Shape optimization is of interest in many fields of application. In particular, shape optimization problems arise frequently in technological processes which are modelled by partial differential equations (PDEs). In a lot of practical circumstances, the shape under investigation is parametrized by a finite number of parameters, which, on the one hand, allows the application of standard optimization approaches, but, on the other hand, unnecessarily limits the space of reachable shapes. Shape calculus presents a way to circumvent this dilemma. However, so far shape optimization based on shape calculus is mainly performed using gradient descent methods. One reason for this is the lack of symmetry of second order shape derivatives or shape Hessians. A major difference between shape optimization and the standard PDE constrained optimization framework is the lack of a linear space structure on shape spaces. If one cannot use a linear space structure, then the next best structure is a Riemannian manifold structure, in which one works with Riemannian shape Hessians. They possess the often sought property of symmetry, characterize well-posedness of optimization problems and define sufficient optimality conditions. In general, shape Hessians are used to accelerate gradient-based shape optimization methods. This thesis deals with shape optimization problems constrained by PDEs and embeds these problems in the framework of optimization on Riemannian manifolds to provide efficient techniques for PDE constrained shape optimization problems on shape spaces. A Lagrange-Newton and a quasi-Newton technique in shape spaces for PDE constrained shape optimization problems are formulated. These techniques are based on the Hadamard-form of shape derivatives, i.e., on the form of integrals over the surface of the shape under investigation. It is often a very tedious, not to say painful, process to derive such surface expressions. 
Along the way, volume formulations in the form of integrals over the entire domain appear as an intermediate step. This thesis couples volume integral formulations of shape derivatives with optimization strategies on shape spaces in order to establish efficient shape algorithms reducing analytical effort and programming work. In this context, a novel shape space is proposed.

In modern survey statistics, optimization problems arise ever more frequently and must be solved. These problems are often high-dimensional, and simulation studies require solving them repeatedly. Doing so in reasonable time calls for special algorithms and solution approaches, which are developed and investigated in this thesis. One class of optimization problems consists of allocation problems for determining optimal subsample sizes. Here, besides continuous solution methods based on a root-finding problem, integer solution methods based on the greedy idea are investigated and the resulting optimal solutions are compared. Furthermore, this thesis deals with various calibration problems. An alternative solution approach to the methods used so far is presented, which requires solving a nonsmooth root-finding problem by means of the nonsmooth Newton method. In connection with nonsmooth optimization algorithms, step-size control plays a major role; a general approach to nonmonotone step-size control for Bouligand-differentiable functions is considered. Besides classical calibration, a calibration problem for coherent small area estimation under relaxed constraints and an additional bound on the variation of the design weights is considered. This problem can be transformed into a high-dimensional quadratic optimization problem, which requires solvers for sparse optimization problems. The numerical problems considered in this thesis can arise, for example, in censuses.
In this context, the presented approaches are finally examined in simulation studies, carried out within the census sampling research project, with regard to a possible application to the 2011 German census.
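The greedy idea for integer allocation mentioned above can be sketched as follows: each additional sample unit is assigned to the stratum where it reduces the variance of the stratified estimator the most. This is an illustrative implementation, not the thesis's method; the variance model is the textbook stratified-sampling formula without finite population correction:

```python
import heapq

def greedy_allocation(N, S, n_total, n_min=1):
    """Integer allocation of n_total sample units to strata, greedily
    assigning each unit where it reduces the stratified variance
    sum_h N_h^2 S_h^2 / n_h the most (a sketch of the greedy idea)."""
    H = len(N)
    n = [n_min] * H
    def gain(h):  # variance reduction from one more unit in stratum h
        return (N[h] * S[h]) ** 2 * (1 / n[h] - 1 / (n[h] + 1))
    heap = [(-gain(h), h) for h in range(H)]
    heapq.heapify(heap)
    for _ in range(n_total - n_min * H):
        _, h = heapq.heappop(heap)
        n[h] += 1
        heapq.heappush(heap, (-gain(h), h))
    return n

# Two strata, one with twice the size-times-spread: the continuous (Neyman)
# allocation is a 2:1 split, and the greedy integer solution matches it here.
print(greedy_allocation(N=[200, 100], S=[1.0, 1.0], n_total=30))  # [20, 10]
```

Because the objective is separable and convex in the integer allocation, this greedy scheme is exact for this simple variance model; the thesis treats richer allocation problems where such shortcuts no longer apply directly.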

Optimal control problems are optimization problems governed by ordinary or partial differential equations (PDEs). A general formulation is given by \min_{(y,u)} J(y,u) subject to e(y,u)=0, assuming that e_y^{-1} exists, and consists of three main elements: 1. The cost functional J that models the purpose of the control on the system. 2. The definition of a control function u that represents the influence of the environment on the system. 3. The set of differential equations e(y,u) modeling the controlled system, represented by the state function y:=y(u), which depends on u. These kinds of problems are well investigated and arise in many fields of application, for example robot control, control of biological processes, test drive simulation, and shape and topology optimization. In this thesis, an academic model problem of the form \min_{(y,u)} J(y,u):=\min_{(y,u)}\frac{1}{2}\|y-y_d\|^2_{L^2(\Omega)}+\frac{\alpha}{2}\|u\|^2_{L^2(\Omega)} subject to -\mathrm{div}(A\nabla y)+cy=f+u in \Omega, y=0 on \partial\Omega and u\in U_{ad} is considered. The objective is of tracking type with a given target function y_d and a regularization term with parameter \alpha. The control function u acts on the whole domain \Omega. The underlying partial differential equation is assumed to be uniformly elliptic. This problem belongs to the class of linear-quadratic elliptic control problems with distributed control. The existence and uniqueness of an optimal solution for problems of this type is well known, and in a first step, following the paradigm 'first optimize, then discretize', the necessary and sufficient optimality conditions are derived by means of the adjoint equation, which results in a characterization of the optimal solution in the form of an optimality system. In a second step, the occurring differential operators are approximated by finite differences and the resulting discretized optimality system is solved with a collective smoothing multigrid method (CSMG).
In general, there are several optimization methods for solving the optimal control problem: an application of the implicit function theorem leads to so-called black-box approaches, where the PDE-constrained optimization problem is transformed into an unconstrained optimization problem and the reduced gradient for this reduced functional is computed via the adjoint approach. Other possibilities are quasi-Newton methods, which approximate the Hessian by a low-rank update based on gradient evaluations, Krylov-Newton methods, or (reduced) SQP methods. The use of multigrid methods for optimization purposes is motivated by their optimal computational complexity, i.e. the computational cost scales linearly with the number of unknowns, and by their rate of convergence, which is independent of the grid size. Originally, multigrid methods were developed as a class of algorithms for solving linear systems arising from the discretization of partial differential equations. The main part of this thesis is devoted to investigating the implementability and the efficiency of the CSMG on commodity graphics cards. GPUs (graphics processing units) are designed for highly parallelizable graphics computations and possess many cores of SIMD architecture, which are able to outperform the CPU in terms of computational power and memory bandwidth. Here they are considered as a prototype for prospective multi-core computers with several hundred cores. When using GPUs as stream processors, two major problems arise: data have to be transferred from the CPU main memory to the GPU main memory, which can be quite slow, and the size of the GPU main memory is limited. Furthermore, a remarkable speed-up compared to a CPU is only achieved when the stream processors are used to full capacity. Therefore, new algorithms for the solution of optimal control problems are designed in this thesis.
To this end, a nonoverlapping domain decomposition method is introduced which allows the exploitation of the computational power of many GPUs or CPUs in parallel. This algorithm is based on preliminary work for elliptic problems and is enhanced for the application to optimal control problems. For the domain decomposition into two subdomains, the linear system for the unknowns on the interface is solved with a Schur complement method by using a discrete approximation of the Steklov-Poincaré operator. For the academic optimal control problem, the arising capacitance matrix can be inverted analytically. On this basis, two different algorithms for the nonoverlapping domain decomposition in the case of many subdomains are proposed in this thesis: on the one hand a recursive approach, and on the other hand a simultaneous approach. Numerical tests compare the performance of the CSMG for the one-domain case and of the two approaches for the multi-domain case on a GPU and CPU in different variants.
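For the unconstrained case (U_{ad} the whole space), the discretized optimality system described above can be assembled and solved directly in a small 1D analogue. The following sketch uses an illustrative discretization and parameter values and dense linear algebra instead of the CSMG; it eliminates the control via u = p/alpha and solves the coupled state-adjoint system:

```python
import numpy as np

# 1D analogue of the optimality system (A = -d^2/dx^2, homogeneous Dirichlet
# conditions, c = 0, no control constraints), after eliminating u = p/alpha:
#   A y - p/alpha = f      (state equation with u = p/alpha inserted)
#   y + A p       = y_d    (adjoint equation)
n, alpha = 99, 1e-4
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2

y_d = np.sin(np.pi * x)   # target state
f = np.zeros(n)           # source term

K = np.block([[A, -np.eye(n) / alpha], [np.eye(n), A]])
sol = np.linalg.solve(K, np.concatenate([f, y_d]))
y, p = sol[:n], sol[n:]
u = p / alpha

print(np.allclose(A @ y, f + u, atol=1e-6))                 # state equation holds
print(np.linalg.norm(y - y_d) < 0.1 * np.linalg.norm(y_d))  # y tracks y_d
```

With the small regularization parameter the optimal state nearly reaches the target; a direct dense solve is only feasible because the 1D problem is tiny, which is precisely the motivation for the multigrid and domain decomposition machinery described above.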

By a theorem of Marcinkiewicz, given a null sequence h_n of nonzero reals, there exist continuous functions f from the interval [0,1] to the real line that are maximally non-differentiable in the sense that for every measurable function g there is a subsequence n_k such that (f(x+h_{n_k})-f(x))/h_{n_k} converges to g almost everywhere. In the first part of this thesis we prove extensions of this theorem to higher dimensions and analogues for functions in the complex plane. The second part of this thesis deals with operators that are closely related to Korovkin's theorem on positive linear operators. We show that there exist operators L_n, each of which fails one of the assumptions of Korovkin's theorem, while at the same time a residual set of functions f exists such that the sequence L_n f not only fails to converge to f but is even dense in the space of all continuous functions on the interval [0,1]. Similar phenomena are investigated for polynomial interpolation.

Matching problems with additional resource constraints are generalizations of the classical matching problem. The focus of this work is on matching problems with two types of additional resource constraints: the couple constrained matching problem and the level constrained matching problem. The first is a matching problem on which a set of additional equality constraints is imposed. Each constraint demands that, for a given pair of edges, either both edges are in the matching or neither of them is. The second is a matching problem on which a single equality constraint is imposed. This constraint demands that an exact number of edges in the matching are so-called on-level edges. In a bipartite graph with fixed indices of the nodes, these are the edges whose end-nodes have the same index. As a central result concerning the couple constrained matching problem, we prove that this problem is NP-hard, even on bipartite cycle graphs. Concerning the complexity of the level constrained perfect matching problem, we show that it is polynomially equivalent to three other combinatorial optimization problems from the literature. For different combinations of fixed and variable parameters of one of these problems, the restricted perfect matching problem, we investigate their effect on the complexity of the problem. Further, the complexity of the assignment problem with an additional equality constraint is investigated. In a central part of this work we bring couple constraints into connection with a level constraint. We introduce the couple and level constrained matching problem with on-level couples, which is a matching problem with a special case of couple constraints together with a level constraint imposed on it. We prove that the decision version of this problem is NP-complete. This shows that the level constraint can suffice to make a polynomially solvable problem NP-hard when imposed on that problem.
This work also deals with the polyhedral structure of resource constrained matching problems. For the polytope corresponding to the relaxation of the level constrained perfect matching problem we develop a characterization of its non-integral vertices. We prove that for any given non-integral vertex of the polytope a corresponding inequality which separates this vertex from the convex hull of integral points can be found in polynomial time. Regarding the calculation of solutions of resource constrained matching problems, two new algorithms are presented. We develop a polynomial approximation algorithm for the level constrained matching problem on level graphs, which returns solutions whose size is at most one less than the size of an optimal solution. We then describe the Objective Branching Algorithm, a new algorithm for exactly solving the perfect matching problem with an additional equality constraint. The algorithm makes use of the fact that the weighted perfect matching problem without an additional side constraint is polynomially solvable. In the Appendix, experimental results of an implementation of the Objective Branching Algorithm are listed.
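To make the couple constraints concrete, here is a tiny brute-force search for a perfect matching under such constraints. This is a toy illustration only; since the problem is NP-hard, as discussed above, exhaustive search is feasible only for very small instances:

```python
from itertools import permutations

def couple_constrained_pm(n, edges, couples):
    """Find a perfect matching in a bipartite graph on n+n nodes that uses
    only `edges` and satisfies couple constraints: for each pair (e, f),
    either both edges are in the matching or neither is. Brute force over
    all permutations, so only suitable for tiny instances."""
    edges = set(edges)
    for perm in permutations(range(n)):
        m = {(i, perm[i]) for i in range(n)}
        if m <= edges and all((e in m) == (f in m) for e, f in couples):
            return sorted(m)
    return None

edges = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]
# Couple: (0,0) and (1,1) must be used together (or not at all) -> feasible
print(couple_constrained_pm(3, edges, [((0, 0), (1, 1))]))  # [(0, 0), (1, 1), (2, 2)]
# Coupling (0,0) with (1,0) is infeasible: they share the right node 0
print(couple_constrained_pm(3, edges, [((0, 0), (1, 0))]))  # None
```

The second call shows how a single couple constraint can wipe out all perfect matchings of an otherwise feasible instance, hinting at the combinatorial difficulty established in the thesis.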

We will consider discrete dynamical systems (X,T) which consist of a state space X and a linear operator T acting on X. Given a state x in X at time zero, its state at time n is determined by the n-th iteration T^n(x). We are interested in the long-term behaviour of this system, that means we want to know how the sequence (T^n (x))_(n in N) behaves for increasing n and x in X. In the first chapter, we will sum up the relevant definitions and results of linear dynamics. In particular, in topological dynamics the notions of hypercyclic, frequently hypercyclic and mixing operators will be presented. In the setting of measurable dynamics, the most important definitions will be those of weakly and strongly mixing operators. If U is an open set in the (extended) complex plane containing 0, we can define the Taylor shift operator on the space H(U) of functions f holomorphic in U as Tf(z) = (f(z)- f(0))/z if z is not equal to 0 and otherwise Tf(0) = f'(0). In the second chapter, we will start examining the Taylor shift on H(U) endowed with the topology of locally uniform convergence. Depending on the choice of U, we will study whether or not the Taylor shift is weakly or strongly mixing in the Gaussian sense. Next, we will consider Banach spaces of functions holomorphic on the unit disc D. The first section of this chapter will sum up the basic properties of Bergman and Hardy spaces in order to analyse the dynamical behaviour of the Taylor shift on these Banach spaces in the next part. In the third section, we study the space of Cauchy transforms of complex Borel measures on the unit circle first endowed with the quotient norm of the total variation and then with a weak-* topology. While the Taylor shift is not even hypercyclic in the first case, we show that it is mixing for the latter case. 
In Chapter 4, we will first introduce Bergman spaces A^p(U) for general open sets and provide approximation results which will be needed in the next chapter, where we examine the dynamical properties of the Taylor shift on these spaces. In particular, for 1<=p<2 we will find sufficient conditions for the Taylor shift to be weakly mixing or strongly mixing in the Gaussian sense. For p>=2, we consider specific Cauchy transforms in order to determine open sets U such that the Taylor shift is mixing on A^p(U). In both sections, we will illustrate the results with appropriate examples. Finally, we apply our results to universal Taylor series. The results of Chapter 5 about the Taylor shift allow us to consider the behaviour of the partial sums of the Taylor expansion of functions in general Bergman spaces outside the disc of convergence.
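On the level of Taylor coefficients, the Taylor shift acts as the backward shift, which makes small experiments easy. A minimal sketch, with a geometric series as an illustrative test function:

```python
def taylor_shift(coeffs):
    """Taylor shift on coefficient sequences: T f (z) = (f(z) - f(0))/z
    corresponds to the backward shift (a_0, a_1, a_2, ...) -> (a_1, a_2, ...)."""
    return coeffs[1:]

# f(z) = 1/(1 - z/2) has Taylor coefficients (1/2)^n; applying T halves f,
# so the orbit (T^n f) tends to zero coefficient-wise.
coeffs = [0.5 ** n for n in range(10)]
orbit1 = taylor_shift(coeffs)
print(orbit1[:3])  # [0.5, 0.25, 0.125]
print(all(b == 0.5 * a for a, b in zip(coeffs, orbit1)))  # True: T f = f/2
```

For a function whose coefficients grow, such as a geometric series with radius of convergence less than one, the orbit blows up instead; the interplay between such growth and the chosen function space is what drives the mixing phenomena studied above.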

This thesis splits into the two areas named in its title. The first part investigates the proximity, i.e., a certain measure of closeness, of binomial and Poisson distributions. Specifically, the uniform structure of the total variation distance on the closed set of all binomial and Poisson distributions is characterized in terms of the associated means and variances, which uniquely determine the distributions. In particular, an upper bound on the total variation distance on the set of binomial and Poisson distributions is given by a corresponding function of the associated means and variances. The second part of the thesis is devoted to confidence intervals for averages of success probabilities. One of the first and best-known works on confidence intervals for success probabilities is that of Clopper and Pearson (1934). In the binomial model, for known sample size and confidence level, confidence intervals for the unknown success probability are developed there. If, for fixed sample size, one considers instead of a binomial distribution (the image measure of a homogeneous Bernoulli chain under the summation map) the corresponding image measure of an inhomogeneous Bernoulli chain, one obtains a Bernoulli convolution with the corresponding success probabilities. For estimating the average success probability in the larger Bernoulli convolution model, the one-sided Clopper-Pearson intervals, for example, are in general not valid. Optimal one-sided and valid two-sided confidence intervals for the average success probability in the Bernoulli convolution model are developed here. The one-sided Clopper-Pearson intervals are in general also not valid for estimating the success probability in the hypergeometric model, which is a submodel of the Bernoulli convolution model.
For the hypergeometric model with fixed sample size and known urn size, the optimal one-sided confidence intervals are known. For fixed sample size and unknown urn size, optimal confidence intervals for the hypergeometric model are derived from the confidence intervals that are optimal in the Bernoulli convolution model. In addition, the case where an upper bound on the unknown urn size is given is considered.
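The classical two-sided Clopper-Pearson interval for the plain binomial model, the starting point of the discussion above, can be computed from beta quantiles. A small sketch using SciPy (the intervals for the Bernoulli convolution and hypergeometric models developed in the thesis are not reproduced here):

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Two-sided Clopper-Pearson confidence interval for a binomial
    success probability, via the beta-quantile representation."""
    a = (1 - conf) / 2
    lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - a, k + 1, n - k) if k < n else 1.0
    return lo, hi

lo, hi = clopper_pearson(7, 20)
print(0.0 <= lo < 7 / 20 < hi <= 1.0)  # True: interval brackets the point estimate
```

The interval is conservative by construction, guaranteeing at least the nominal coverage for every true success probability, which is the validity notion the thesis extends beyond the binomial model.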

Design and structural optimization has become a very important field in industrial applications over the last years. Due to economical and ecological reasons, the efficient use of material is of high industrial interest. Therefore, computational tools based on optimization theory have been developed and studied in the last decades. In this work, different structural optimization methods are considered. Special attention lies on the applicability to three-dimensional, large-scale, multiphysics problems, which arise from different areas of industry. Based on the theory of PDE-constrained optimization, descent methods in structural optimization require knowledge of the (partial) derivatives with respect to shape or topology variations. Therefore, shape and topology sensitivity analysis is introduced, and the connection between both sensitivities is given by the Topological-Shape Sensitivity Method. This method leads to a systematic procedure to compute the topological derivative in terms of the shape sensitivity. Due to the framework of moving boundaries in structural optimization, different interface tracking techniques are presented. If the topology of the domain is preserved during the optimization process, explicit interface tracking techniques, combined with mesh deformation, are used to capture the interface. These techniques fit the requirements of classical shape optimization very well. Otherwise, an implicit representation of the interface is advantageous if the optimal topology is unknown. In this case, the level set method is combined with the concept of the topological derivative to deal with topological perturbations. The resulting methods are applied to different industrial problems. On the one hand, interface shape optimization for solid bodies subject to a transient heat-up phase governed by both linear elasticity and thermal stresses is considered.
To this end, the shape calculus is applied to coupled heat and elasticity problems and a generalized compliance objective function is studied. The resulting thermo-elastic shape optimization scheme is used for compliance reduction of realistic hotplates. On the other hand, structural optimization based on the topological derivative for three-dimensional elasticity problems is considered. In order to comply with typical volume constraints, a one-shot augmented Lagrangian method is proposed. Additionally, a multiphase optimization approach based on mesh refinement is used to reduce the computational costs, and the method is illustrated by classical minimum compliance problems. Finally, the topology optimization algorithm is applied to aero-elastic problems and numerical results are presented.

The subject of this thesis is a homological approach to the splitting theory of PLS-spaces, i.e. to the question for which topologically exact short sequences 0->X->Y->Z->0 of PLS-spaces X,Y,Z the right-hand map admits a right inverse. We show that the category (PLS) of PLS-spaces and continuous linear maps is an additive category in which every morphism admits a kernel and a cokernel, i.e. it is pre-abelian. However, we also show that it is neither quasi-abelian nor semi-abelian. As a foundation for our homological constructions, we show the more general result that every pre-abelian category admits a largest exact structure in the sense of Quillen. In the pre-abelian category (PLS) this exact structure consists precisely of the topologically exact short sequences of PLS-spaces. Using a construction of Ext-functors due to Yoneda, we show that one can define, for each PLS-space A and every natural number k, the k-th abelian-group valued covariant and contravariant Ext-functors acting on the category (PLS) of PLS-spaces, which induce for every topologically exact short sequence of PLS-spaces a long exact sequence of abelian groups and group morphisms. These functors are studied in detail and we establish a connection between the Ext-functors of PLS-spaces and the Ext-functors for LS-spaces. Through this connection we arrive at an analogue of a result for Fréchet spaces which connects the first derived functor of the projective limit with the first Ext-functor and also gives sufficient conditions for the vanishing of the higher Ext-functors. Finally, we show that Ext^k(E,F) = 0 for all k >= 1 whenever E is a closed subspace and F is a Hausdorff quotient of the space of distributions, which generalizes a result of Wengenroth that is itself a generalization of results due to Domanski and Vogt.

This work investigates the industrial applicability of graphics and stream processors in the field of fluid simulations. For this purpose, an explicit Runge-Kutta discontinuous Galerkin method of arbitrarily high order is implemented completely for the hardware architecture of GPUs. The same functionality is simultaneously realized for CPUs and compared to GPUs. Explicit time stepping as well as established implicit methods are considered for the CPU. This work aims at the simulation of inviscid, transonic flows over the ONERA M6 wing. The discontinuities which typically arise in hyperbolic equations are treated with an artificial viscosity approach. It is further investigated how this approach fits into the explicit time stepping and works together with the special architecture of the GPU. Since the treatment of artificial viscosity is close to the simulation of the Navier-Stokes equations, it is reviewed how GPU-accelerated methods could be applied for computing viscous flows. This work is based on a nodal discontinuous Galerkin approach for linear hyperbolic problems. Here, it is extended to non-linear problems, which makes the application of numerical quadrature obligatory. Moreover, the representation of complex geometries is realized using isoparametric mappings. Higher order methods are typically very sensitive with respect to boundaries which are not properly resolved. For this purpose, an approach is presented which fits straight-sided DG meshes to curved geometries described by NURBS surfaces. The mesh is modeled as an elastic body and deformed according to the solution of closest point problems in order to minimize the gap to the original spline surface. The sensitivity with respect to geometry representations is reviewed at the end of this work in the context of shape optimization.
Here, the aerodynamic drag of the ONERA M6 wing is minimized according to the shape gradient which is implicitly smoothed within the mesh deformation approach. In this context a comparison to the classical Laplace-Beltrami operator is made in a Stokes flow situation.

In this thesis we focus on the development and investigation of methods for the computation of confluent hypergeometric functions. We point out the relations between these functions and parabolic boundary value problems and demonstrate applications to models of heat transfer and fluid dynamics. For the computation of confluent hypergeometric functions on compact (real or complex) intervals we consider a series expansion based on the Hadamard product of power series. It turns out that the partial sums of this expansion are easily computable and provide a better rate of convergence than the partial sums of the Taylor series. Regarding computational accuracy, the problem of cancellation errors is reduced considerably. Another important tool for the computation of confluent hypergeometric functions are recurrence formulae. Although easy to implement, such recurrence relations are numerically unstable, e.g. due to rounding errors. In order to circumvent these problems, a method for computing recurrence relations in the backward direction is applied. Furthermore, asymptotic expansions for arguments of large modulus are considered. From the numerical point of view, the determination of the number of terms used for the approximation is a crucial point. As an application we consider initial-boundary value problems with partial differential equations of parabolic type, where we use the method of eigenfunction expansion in order to determine an explicit form of the solution. In this case the arising eigenfunctions depend directly on the geometry of the considered domain. For certain domains with special geometry the eigenfunctions are of confluent hypergeometric type. Both a conductive heat transfer model and an application in fluid dynamics are considered. Finally, the application of several heat transfer models to certain sterilization processes in the food industry is discussed.
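The series computation of a confluent hypergeometric function can be sketched as follows. This uses the plain Taylor series of Kummer's function M(a,b,z) (not the Hadamard-product expansion studied in the thesis) and compares against SciPy; the negative-argument example is where the Taylor series suffers the cancellation errors mentioned above:

```python
import numpy as np
from scipy.special import hyp1f1

def kummer_series(a, b, z, terms=60):
    """Partial sums of the defining series of Kummer's confluent
    hypergeometric function M(a, b, z) = sum_n (a)_n / (b)_n * z^n / n!."""
    s, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) / (b + n) * z / (n + 1)  # ratio of consecutive terms
        s += term
    return s

# M(1, 1, z) = e^z; compare the plain series with scipy's implementation
print(abs(kummer_series(1.0, 1.0, 2.5) - np.exp(2.5)) < 1e-10)            # True
print(abs(kummer_series(0.5, 1.5, -4.0) - hyp1f1(0.5, 1.5, -4.0)) < 1e-8)  # True
```

For moderately negative arguments the alternating series still works in double precision, but the intermediate terms exceed the final value, so accuracy degrades as |z| grows; this is the cancellation problem that motivates the alternative expansions of the thesis.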

In this thesis, we study the convergence behavior of an efficient optimization method used for the identification of parameters of underdetermined systems. The research is motivated by optimization problems arising from the estimation of parameters in neural networks as well as in option pricing models. In the first application, we are concerned with neural networks used to forecast stock market indices. Since neural networks are able to describe extremely complex nonlinear structures, they are used to improve the modelling of the nonlinear dependencies occurring in financial markets. Applying neural networks to the forecasting of economic indicators, we are confronted with a nonlinear least squares problem of large dimension. Furthermore, in this application the number of parameters of the neural network to be determined is usually much larger than the number of patterns available for the determination of the unknowns. Hence, the residual function of our least squares problem is underdetermined. In option pricing, an important but usually unknown parameter is the volatility of the underlying asset of the option. Assuming that the underlying asset follows a one-factor continuous diffusion model with nonconstant drift and volatility terms, the value of a European call option satisfies a parabolic initial value problem with the volatility function appearing in one of the coefficients of the parabolic differential equation. Using this system equation, the estimation of the volatility function is described by a nonlinear least squares problem. Since the adaption of the volatility function is based on only a small number of observed market data, these problems are naturally ill-posed. For the solution of these large-scale underdetermined nonlinear least squares problems we use a fully iterative inexact Gauss-Newton algorithm.
We show how the structure of a neural network as well as that of the European call price model can be exploited using iterative methods. Moreover, we present theoretical statements for the convergence of the inexact Gauss-Newton algorithm applied to the less examined case of underdetermined nonlinear least squares problems. Finally, we present numerical results for the application of neural networks to the forecasting of stock market indices as well as for the construction of the volatility function in European option pricing models. In the case of the latter application, we discretize the parabolic differential equation using a finite difference scheme and elucidate convergence problems of the discrete scheme when the initial condition is not everywhere differentiable.
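The kind of iteration discussed in this abstract can be sketched in its simplest form: a Gauss-Newton method that takes minimum-norm steps for an underdetermined residual. The sketch below assumes a small dense Jacobian and uses `numpy.linalg.lstsq` for the minimum-norm linearized step, whereas the thesis employs a fully iterative inexact variant for large-scale problems; all names and tolerances here are illustrative.

```python
import numpy as np

def gauss_newton_underdetermined(r, J, x0, iters=20, tol=1e-10):
    """Gauss-Newton for r(x) = 0 with r: R^n -> R^m, m < n.
    lstsq returns the minimum-norm solution of the underdetermined
    linearized system J(x) dx = -r(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        rx = r(x)
        if np.linalg.norm(rx) < tol:
            break
        dx, *_ = np.linalg.lstsq(J(x), -rx, rcond=None)
        x = x + dx
    return x
```

For a single linear equation in two unknowns, one step from the origin lands on the minimum-norm solution, which makes the behavior easy to verify by hand.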

The present work considers the normal approximation of the binomial distribution and yields estimates of the supremum distance between the distribution functions of the binomial and the corresponding standardized normal distribution. The type of these estimates corresponds to the classical Berry-Esseen theorem, in the special case that all random variables are identically Bernoulli distributed. In this case we state the optimal constant for the Berry-Esseen theorem. In the proof of these estimates, several inequalities regarding the density as well as the distribution function of the binomial distribution are presented. Furthermore, in the estimates mentioned above the distribution function is replaced by the probability of arbitrary, not only unbounded, intervals, and in this new situation we also present an upper bound.

In the pricing of financial derivatives, so-called jump-diffusion models with local volatility offer many advantages. From a mathematical point of view, however, they are very demanding, since the corresponding model prices are computed by means of a partial integro-differential equation (PIDE). We deal with the calibration of the parameters of such a model. In a least-squares approach, market prices of standard European options are compared with the model prices, which leads to an optimal control problem. A substantial part of this thesis deals with the solution of the PIDE from a theoretical and, above all, a numerical point of view. The dense linear systems arising from an implicit time discretization scheme are solved with a preconditioned GMRES method, which results in almost linear effort with respect to the spatial and temporal discretization. Despite this efficient solution method, evaluations of the least-squares objective function are still expensive, so that in the main part of the thesis reduced-order models based on proper orthogonal decomposition are employed. Local a priori error estimates for the reduced differential equation as well as for the reduced objective function, combined with a trust-region approach for globalization, yield an efficient algorithm that considerably reduces the computing time. The main result of the thesis is a convergence proof for this algorithm for a broad class of optimization problems, which also covers the calibration problem under consideration.
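The core of a proper orthogonal decomposition reduced-order model can be sketched generically: a POD basis is formed from the leading left singular vectors of a snapshot matrix, truncated by an energy criterion. This is a textbook-style illustration, not the calibration-specific implementation of the thesis; the function name and the energy threshold are our own assumptions.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD basis from a snapshot matrix (one solution vector per column).
    Keeps the smallest number of left singular vectors whose squared
    singular values capture the requested fraction of snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]
```

Projecting the full system onto this basis yields a small dense system whose dimension r is typically orders of magnitude below the number of grid points, which is what makes the repeated objective evaluations in a calibration loop affordable.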

Large scale non-parametric applied shape optimization for computational fluid dynamics is considered. Treating a shape optimization problem as a standard optimal control problem by means of a parameterization, the Lagrangian usually requires knowledge of the partial derivative of the shape parameterization and deformation chain with respect to input parameters. For a variety of reasons, this mesh sensitivity Jacobian is usually quite problematic. For a sufficiently smooth boundary, the Hadamard theorem provides a gradient expression that exists on the surface alone, completely bypassing the mesh sensitivity Jacobian. Building upon this, the gradient computation becomes independent of the number of design parameters and all surface mesh nodes are used as design unknowns in this work, effectively allowing a free morphing of shapes during optimization. Contrary to a parameterized shape optimization problem, where a smooth surface is usually created independently of the input parameters by construction, regularity is not preserved automatically in the non-parametric case. As part of this work, the shape Hessian is used in an approximative Newton method, also known as Sobolev method or gradient smoothing, to ensure a certain regularity of the updates; thus a smooth shape is preserved while at the same time the one-shot optimization method is also accelerated considerably. For PDE constrained shape optimization, the Hessian usually is a pseudo-differential operator. Fourier analysis is used to identify the operator symbol both analytically and discretely. Preconditioning the one-shot optimization by an appropriate Hessian symbol is shown to greatly accelerate the optimization.
As the correct discretization of the Hadamard form usually requires evaluating certain surface quantities such as tangential divergence and curvature, special attention is also given to discrete differential geometry on triangulated surfaces for evaluating shape gradients and Hessians. The Hadamard formula and Hessian approximations are applied to a variety of flow situations. In addition to shape optimization of internal and external flows, major focus lies on aerodynamic design such as optimizing two dimensional airfoils and three dimensional wings. Shock waves form when the local speed of sound is reached, and the gradient must be evaluated correctly at discontinuous states. To ensure proper shock resolution, an adaptive multi-level optimization of the ONERA M6 wing is conducted using more than 36,000 shape unknowns on a standard office workstation, demonstrating the applicability of the shape one-shot method to industry size problems.
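The gradient smoothing (Sobolev method) described in this abstract can be illustrated in one dimension: the raw shape gradient g is replaced by the solution of (I - eps * Laplacian) g_s = g, which damps high-frequency components and so preserves a smooth shape during the updates. The sketch below uses a periodic finite-difference Laplacian on a uniform grid; the operator, grid, and smoothing parameter eps are illustrative assumptions, not the surface operators of the thesis.

```python
import numpy as np

def sobolev_smooth(grad, eps):
    """Solve (I - eps * Laplacian) g_s = g on a periodic 1D grid.
    The matrix row for node i is: (1 + 2*eps) on the diagonal and
    -eps on the two periodic neighbors."""
    n = len(grad)
    A = np.eye(n)
    for i in range(n):
        A[i, i] += 2 * eps
        A[i, (i - 1) % n] -= eps
        A[i, (i + 1) % n] -= eps
    return np.linalg.solve(A, grad)
```

A constant gradient passes through unchanged (the Laplacian of a constant vanishes), while the highest-frequency alternating mode is damped by the factor 1/(1 + 4*eps), which is exactly the regularizing effect exploited by the Sobolev method.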