This paper mainly studies two topics: linear complementarity problems for modeling electricity market equilibria and optimization under uncertainty. We consider both perfectly competitive and Nash–Cournot models of electricity markets and study their robustifications using strict robustness and the Γ-approach. For three out of the four combinations of economic competition and robustification, we derive algorithmically tractable convex optimization counterparts that have a clear-cut economic interpretation. In the case of perfect competition, this result corresponds to the two classic welfare theorems, which also apply in both considered robust cases that again yield convex robustified problems. Using the mentioned counterparts, we can also prove the existence and, in some cases, uniqueness of robust equilibria. Surprisingly, it turns out that there is no such economically sensible counterpart for the case of Γ-robustifications of Nash–Cournot models. Thus, an analog of the welfare theorems does not hold in this case. Finally, we provide a computational case study that illustrates the different effects of the combination of economic competition and uncertainty modeling.
In this thesis we study structure-preserving model reduction methods for the efficient and reliable approximation of dynamical systems. A major focus is the approximation of a nonlinear flow problem on networks, which can, e.g., be used to describe gas network systems. Our proposed approximation framework guarantees so-called port-Hamiltonian structure and is general enough to be realizable by projection-based model order reduction combined with complexity reduction. We divide the discussion of the flow problem into two parts, one concerned with the linear damped wave equation and the other one with the general nonlinear flow problem on networks.
The study around the linear damped wave equation relies on a Galerkin framework, which allows for convenient network generalizations. Notable contributions of this part are the profound analysis of the algebraic setting after space-discretization in relation to the infinite dimensional setting and its implications for model reduction. In particular, this includes the discussion of differential-algebraic structures associated to the network-character of our problem and the derivation of compatibility conditions related to fundamental physical properties. Amongst the different model reduction techniques, we consider the moment matching method to be a particularly well-suited choice in our framework.
The Galerkin framework is then appropriately extended to our general nonlinear flow problem. Crucial supplementary concepts are required for the analysis, such as the partial Legendre transform and a more careful discussion of the underlying energy-based modeling. The preservation of the port-Hamiltonian structure after the model order reduction and complexity reduction steps represents a major focus of this work. Similarly to the analysis of the model order reduction, compatibility conditions play a crucial role in the analysis of our complexity reduction, which relies on a quadrature-type ansatz. Furthermore, energy-stable time-discretization schemes are derived for our port-Hamiltonian approximations, as structure-preserving methods from the literature are not applicable due to our rather unconventional parametrization of the solution.
Apart from the port-Hamiltonian approximation of the flow problem, another topic of this thesis is the derivation of a new extension of moment matching methods from linear systems to quadratic-bilinear systems. Most system-theoretic reduction methods for nonlinear systems rely on multivariate frequency representations. Our approach instead uses univariate frequency representations tailored towards user-defined families of inputs. Then moment matching corresponds to a one-dimensional interpolation problem rather than to a multi-dimensional interpolation as for the multivariate approaches, i.e., it involves fewer interpolation frequencies to be chosen. The notion of signal-generator-driven systems, variational expansions of the resulting autonomous systems as well as the derivation of convenient tensor-structured approximation conditions are the main ingredients of this part. Notably, our approach allows for the incorporation of general input relations in the state equations, not only affine-linear ones as in existing system-theoretic methods.
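As background for the moment matching ideas used and extended above, the following is a minimal sketch of one-sided (Galerkin) moment matching for a linear state-space system. It is not the structure-preserving port-Hamiltonian variant developed in the thesis, and the expansion point s0 and the toy data are assumptions for illustration only.

```python
import numpy as np

def moment_matching_basis(A, B, s0, r):
    """Build an orthonormal basis V of the Krylov space
    span{(s0*I - A)^{-1} B, (s0*I - A)^{-2} B, ...} with r vectors.
    The Galerkin-reduced model V^T A V, V^T B, C V then matches the first r
    transfer-function moments at the expansion point s0."""
    n = A.shape[0]
    K = np.linalg.solve(s0 * np.eye(n) - A, B)
    vectors = [K]
    for _ in range(r - 1):
        vectors.append(np.linalg.solve(s0 * np.eye(n) - A, vectors[-1]))
    V, _ = np.linalg.qr(np.hstack(vectors))
    return V

# toy single-input single-output example (illustrative data, not from the thesis)
rng = np.random.default_rng(0)
n, r = 50, 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

V = moment_matching_basis(A, B, s0=1.0, r=r)
Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V

# compare transfer functions H(s) = C (sI - A)^{-1} B at a test frequency
s = 1.5
H = C @ np.linalg.solve(s * np.eye(n) - A, B)
Hr = Cr @ np.linalg.solve(s * np.eye(r) - Ar, Br)
print(H.item(), Hr.item())
```

By construction, the reduced model interpolates the first r moments of the transfer function at s0; the thesis additionally enforces port-Hamiltonian structure on the reduced matrices.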
In this thesis, we aim to study the sampling allocation problem of survey statistics under uncertainty. The stratum-specific variances are generally not known precisely, and we have no information about the distribution of the uncertainty. The cost of interviewing each person in a stratum is also a highly uncertain parameter, as people are sometimes unavailable for the interview. We propose robust allocations to deal with the uncertainty in both stratum-specific variances and costs. However, in real-life situations we may face cases in which only the variances or only the costs are uncertain. We therefore propose three different robust formulations representing these different cases. To the best of our knowledge, robust allocation for the sampling allocation problem has not been considered in the literature so far.
The first robust formulation for linear problems was proposed by Soyster (1973). Bertsimas and Sim (2004) proposed a less conservative robust formulation for linear problems. We study these formulations and extend them to the nonlinear sampling allocation problem. It is unlikely that all of the stratum-specific variances and costs are uncertain at the same time, so the robust formulations are designed in such a way that we can select how many strata are treated as uncertain, which we refer to as the level of uncertainty. We prove that an upper bound on the probability of violation of the nonlinear constraints can be calculated before solving the robust optimization problem. We consider various kinds of datasets and compute robust allocations. We perform multiple experiments to check the quality of the robust allocations and compare them with existing allocation techniques.
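As background for the linear case that is extended here (a standard recap, not the thesis's nonlinear formulation): for a constraint with coefficients a_j varying in intervals [\bar a_j - \hat a_j, \bar a_j + \hat a_j], Soyster's formulation protects against all deviations simultaneously, while the Bertsimas–Sim counterpart with budget Γ protects against at most Γ simultaneous deviations:

```latex
\text{Soyster:}\quad \sum_j \bar a_j x_j + \sum_j \hat a_j |x_j| \le b,
\qquad
\text{Bertsimas--Sim:}\quad \sum_j \bar a_j x_j
  + \max_{S\subseteq J,\; |S|\le \Gamma} \sum_{j\in S} \hat a_j |x_j| \le b .
```

The inner maximization can be dualized to obtain a tractable reformulation; the budget Γ controls the trade-off between protection and conservatism, and the "level of uncertainty" mentioned above plays the analogous role for the strata.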
Optimal Control of Partial Integro-Differential Equations and Analysis of the Gaussian Kernel
(2018)
An important field of applied mathematics is the simulation of complex financial, mechanical, chemical, physical or medical processes with mathematical models. In addition to the pure modeling of the processes, the simultaneous optimization of an objective function by changing the model parameters is often the actual goal. Models in fields such as finance, biology or medicine benefit from this optimization step.
While many processes can be modeled using an ordinary differential equation (ODE), partial differential equations (PDEs) are needed to optimize heat conduction and flow characteristics, spreading of tumor cells in tissue as well as option prices. A partial integro-differential equation (PIDE) is a partial differential equation involving an integral operator, e.g., the convolution of the unknown function with a given kernel function. PIDEs occur for example in models that simulate adhesive forces between cells or option prices with jumps.
In each of the two parts of this thesis, a certain PIDE is the main object of interest. In the first part, we study a semilinear PIDE-constrained optimal control problem with the aim to derive necessary optimality conditions. In the second, we analyze a linear PIDE that includes the convolution of the unknown function with the Gaussian kernel.
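As an illustration of the second type of equation (a model instance written down for orientation only; the precise form studied is the one given in the thesis), a linear PIDE with a Gaussian convolution term reads

```latex
\partial_t u(x,t) \;-\; \Delta u(x,t)
\;-\; \int_{\mathbb{R}^d} G_\sigma(x-y)\, u(y,t)\, \mathrm{d}y \;=\; f(x,t),
\qquad
G_\sigma(x) \;=\; (2\pi\sigma^2)^{-d/2} \exp\!\Bigl(-\tfrac{|x|^2}{2\sigma^2}\Bigr),
```

where G_σ denotes the Gaussian kernel.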
A matrix A is called completely positive if there exists an entrywise nonnegative matrix B such that A = BB^T. These matrices can be used to obtain convex reformulations of, for example, nonconvex quadratic or combinatorial problems. One of the main problems with completely positive matrices is checking whether a given matrix is completely positive. This is known to be NP-hard in general. For a given completely positive matrix A, it is nontrivial to find a cp-factorization A=BB^T with nonnegative B, since this factorization would provide a certificate for the matrix to be completely positive. But this factorization is not only important for membership in the completely positive cone, it can also be used to recover the solution of the underlying quadratic or combinatorial problem. In addition, it is not a priori known how many columns are necessary to generate a cp-factorization for the given matrix. The minimal possible number of columns is called the cp-rank of A, and so far it is still an open question how to derive the cp-rank for a given matrix. Some facts on completely positive matrices and the cp-rank will be given in Chapter 2. Moreover, in Chapter 6, we will see a factorization algorithm which, for a given completely positive matrix A and a suitable starting point, computes the nonnegative factorization A=BB^T. The algorithm therefore returns a certificate for the matrix to be completely positive. As introduced in Chapter 3, the fundamental idea of the factorization algorithm is to start from an initial square factorization which is not necessarily entrywise nonnegative, and extend this factorization to a matrix for which the number of columns is greater than or equal to the cp-rank of A. Then it is the goal to transform this generated factorization into a cp-factorization. This problem can be formulated as a nonconvex feasibility problem, as shown in Section 4.1, and solved by a method which is based on alternating projections, as proven in Chapter 6. On the topic of alternating projections, a survey will be given in Chapter 5. Here we will see how to apply this technique to several types of sets like subspaces, convex sets, manifolds and semialgebraic sets. Furthermore, we will see some known facts on the convergence rate for alternating projections between these types of sets. Considering more than two sets yields the so-called cyclic projections approach. Here some known facts for subspaces and convex sets will be shown. Moreover, we will see a new convergence result on cyclic projections among a sequence of manifolds in Section 5.4. In the context of cp-factorizations, a local convergence result for the introduced algorithm will be given. This result is based on the known convergence for alternating projections between semialgebraic sets. To obtain cp-factorizations with this first method, it is necessary to solve a second-order cone problem in every projection step, which is very costly. Therefore, in Section 6.2, we will see an additional heuristic extension which improves the numerical performance of the algorithm. Extensive numerical tests in Chapter 7 will show that the factorization method is very fast in most instances. In addition, we will see how to derive a certificate for the matrix to be an element of the interior of the completely positive cone. As a further application, this method can be extended to find a symmetric nonnegative matrix factorization, where we consider an additional low-rank constraint.
Here again, the method to derive factorizations for completely positive matrices can be used, albeit with some further adjustments, introduced in Section 8.1. Moreover, we will see that even for the general case of deriving a nonnegative matrix factorization for a given rectangular matrix A, the key aspects of the completely positive factorization approach can be used. To this end, it becomes necessary to extend the idea of finding a completely positive factorization such that it can be used for rectangular matrices. This yields an applicable algorithm for nonnegative matrix factorization in Section 8.2. Numerical results for this approach will suggest that the presented algorithms and techniques to obtain completely positive matrix factorizations can be extended to general nonnegative factorization problems.
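For illustration, here is a minimal sketch of the alternating-projection idea described above, assuming a chosen number of columns r and using an orthogonal Procrustes step in place of the second-order cone projection used in the thesis; convergence is not guaranteed for every instance.

```python
import numpy as np

def cp_factorize(A, r, iters=5000, tol=1e-10, seed=0):
    """Try to find an entrywise nonnegative B with A = B B^T by alternating
    projections between the set {B : B B^T = A} (reachable as B0 @ Q with Q
    orthogonal, from any initial factor B0) and the nonnegative matrices."""
    n = A.shape[0]
    # initial (not necessarily nonnegative) factorization A = B0 B0^T
    w, U = np.linalg.eigh(A)
    B0 = U @ np.diag(np.sqrt(np.clip(w, 0, None)))          # n x n factor
    B0 = np.hstack([B0, np.zeros((n, r - n))])               # pad to n x r
    rng = np.random.default_rng(seed)
    B = B0 @ np.linalg.qr(rng.standard_normal((r, r)))[0]    # random start with B B^T = A
    for _ in range(iters):
        N = np.clip(B, 0, None)                  # project onto nonnegative matrices
        U_, _, Vt = np.linalg.svd(B0.T @ N)      # orthogonal Procrustes step:
        B = B0 @ (U_ @ Vt)                       # nearest B with B B^T = A
        if B.min() > -tol:                       # numerically nonnegative factor found
            return B
    return None

# small example: A is completely positive by construction (illustrative only)
C = np.array([[1.0, 0.0, 2.0], [0.5, 1.5, 0.0], [1.0, 1.0, 1.0]])
A = C @ C.T
B = cp_factorize(A, r=6)
if B is not None:
    print(np.allclose(B @ B.T, A), B.min() >= -1e-10)
```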
We will consider discrete dynamical systems (X,T) which consist of a state space X and a linear operator T acting on X. Given a state x in X at time zero, its state at time n is determined by the n-th iteration T^n(x). We are interested in the long-term behaviour of this system, that means we want to know how the sequence (T^n (x))_(n in N) behaves for increasing n and x in X. In the first chapter, we will sum up the relevant definitions and results of linear dynamics. In particular, in topological dynamics the notions of hypercyclic, frequently hypercyclic and mixing operators will be presented. In the setting of measurable dynamics, the most important definitions will be those of weakly and strongly mixing operators. If U is an open set in the (extended) complex plane containing 0, we can define the Taylor shift operator on the space H(U) of functions f holomorphic in U as Tf(z) = (f(z)- f(0))/z if z is not equal to 0 and otherwise Tf(0) = f'(0). In the second chapter, we will start examining the Taylor shift on H(U) endowed with the topology of locally uniform convergence. Depending on the choice of U, we will study whether or not the Taylor shift is weakly or strongly mixing in the Gaussian sense. Next, we will consider Banach spaces of functions holomorphic on the unit disc D. The first section of this chapter will sum up the basic properties of Bergman and Hardy spaces in order to analyse the dynamical behaviour of the Taylor shift on these Banach spaces in the next part. In the third section, we study the space of Cauchy transforms of complex Borel measures on the unit circle, first endowed with the quotient norm of the total variation and then with a weak-* topology. While the Taylor shift is not even hypercyclic in the first case, we show that it is mixing in the latter case. In Chapter 4, we will first introduce Bergman spaces A^p(U) for general open sets and provide approximation results which will be needed in the next chapter, where we examine the dynamical properties of the Taylor shift on these spaces. In particular, for 1<=p<2 we will find sufficient conditions for the Taylor shift to be weakly mixing or strongly mixing in the Gaussian sense. For p>=2, we consider specific Cauchy transforms in order to determine open sets U such that the Taylor shift is mixing on A^p(U). In both sections, we will illustrate the results with appropriate examples. Finally, we apply our results to universal Taylor series. The results of Chapter 5 about the Taylor shift allow us to consider the behaviour of the partial sums of the Taylor expansion of functions in general Bergman spaces outside the disc of convergence.
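In coefficient form, the Taylor shift defined above acts as the backward shift on Taylor coefficients: for f holomorphic near 0,

```latex
f(z) = \sum_{n\ge 0} a_n z^n
\quad\Longrightarrow\quad
(Tf)(z) = \frac{f(z)-f(0)}{z} = \sum_{n\ge 0} a_{n+1} z^n .
```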
Given a compact set K in R^d, the theory of extension operators examines the question under which conditions on K the linear and continuous restriction operators r_n:E^n(R^d)→E^n(K), f↦(∂^α f|_K)_{|α|≤n}, n in N_0, and r:E(R^d)→E(K), f↦(∂^α f|_K)_{α in N_0^d}, have a linear and continuous right inverse. This right inverse is called an extension operator, and this problem is known as Whitney's extension problem, named after Hassler Whitney. In this context, E^n(K) and E(K) denote spaces of Whitney jets of order n and of infinite order, respectively. With E^n(R^d) and E(R^d), we denote the spaces of n-times and infinitely often continuously partially differentiable functions on R^d, respectively. Whitney already solved the question for finite order completely. He showed that it is always possible to construct a linear and continuous right inverse E_n for r_n. This work is concerned with the question of how the existence of a linear and continuous right inverse of r, fulfilling certain continuity estimates, can be characterized by properties of K. On E(K), we introduce a full real scale of generalized Whitney seminorms (|·|_{s,K})_{s≥0}, where |·|_{s,K} coincides with the classical Whitney seminorms for s in N_0. We also equip E(R^d) with a family (|·|_{s,L})_{s≥0} of those seminorms, where L is a compact set with K contained in the interior of L. This family of seminorms on E(R^d) suffices to characterize the continuity properties of an extension operator E, since we can assume without loss of generality that E(E(K)) is contained in D^s(L).
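Restating the definitions above in one line: an extension operator is a right inverse of the restriction map, i.e.

```latex
r_n \circ E_n = \mathrm{id}_{E^n(K)},
\qquad\text{i.e.}\qquad
\partial^{\alpha}\bigl(E_n F\bigr)\big|_K = F_\alpha
\quad\text{for all } |\alpha|\le n \text{ and every jet } F=(F_\alpha)_{|\alpha|\le n}\in E^n(K),
```

and analogously r ∘ E = id_{E(K)} in the infinite-order case.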
In Chapter 2, we introduce basic concepts and summarize the classical results of Whitney and Stein.
In Chapter 3, we modify the classical construction of Whitney's operators E_n and show that |E_n(·)|_{s,L}≤C|·|_{s,K} for s in [n,n+1).
In Chapter 4, we generalize a result of Frerick, Jordá and Wengenroth and show that LMI(1) for K implies the existence of an extension operator E without loss of derivatives, i.e. it fulfils |E(·)|_{s,L}≤C|·|_{s,K} for all s≥0. We show that a large class of self-similar sets, which includes the Cantor set and the Sierpinski triangle, admits an extension operator without loss of derivatives.
In Chapter 5 we generalize a result of Frerick, Jordá and Wengenroth and show that WLMI(r) for r≥1 implies the existence of a tame linear extension operator E having a homogeneous loss of derivatives, such that |E(·)|_{s,L}≤C|·|_{(r+ε)s,K} for all s≥0 and all ε>0.
In the last chapter we characterize the existence of an extension operator having an arbitrary loss of derivatives by the existence of measures on K.
Industrial companies mainly aim for increasing their profit. That is why they intend to reduce production costs without sacrificing the quality. Furthermore, in the context of the 2020 energy targets, energy efficiency plays a crucial role. Mathematical modeling, simulation and optimization tools can contribute to the achievement of these industrial and environmental goals. For the process of white wine fermentation, there exists a huge potential for saving energy. In this thesis mathematical modeling, simulation and optimization tools are customized to the needs of this biochemical process and applied to it. Two different models are derived that represent the process as it can be observed in real experiments. One model takes the growth, division and death behavior of the single yeast cell into account. This is modeled by a partial integro-differential equation and additional multiple ordinary integro-differential equations showing the development of the other substrates involved. The other model, described by ordinary differential equations, represents the growth and death behavior of the yeast concentration and development of the other substrates involved. The more detailed model is investigated analytically and numerically. Thereby existence and uniqueness of solutions are studied and the process is simulated. These investigations initiate a discussion regarding the value of the additional benefit of this model compared to the simpler one. For optimization, the process is described by the less detailed model. The process is identified by a parameter and state estimation problem. The energy and quality targets are formulated in the objective function of an optimal control or model predictive control problem controlling the fermentation temperature. This means that cooling during the process of wine fermentation is controlled. Parameter and state estimation with nonlinear economic model predictive control is applied in two experiments. For the first experiment, the optimization problems are solved by multiple shooting with a backward differentiation formula method for the discretization of the problem and a sequential quadratic programming method with a line search strategy and a Broyden-Fletcher-Goldfarb-Shanno update for the solution of the constrained nonlinear optimization problems. Different rounding strategies are applied to the resulting post-fermentation control profile. Furthermore, a quality assurance test is performed. The outcomes of this experiment are remarkable energy savings and tasty wine. For the next experiment, some modifications are made, and the optimization problems are solved by using direct transcription via orthogonal collocation on finite elements for the discretization and an interior-point filter line-search method for the solution of the constrained nonlinear optimization problems. The second experiment verifies the results of the first experiment. This means that by the use of this novel control strategy energy conservation is ensured and production costs are reduced. From now on tasty white wine can be produced at a lower price and with a clearer conscience at the same time.
Quadratic optimization problems (QP) have a broad range of applications, for example combinatorial problems including the maximum clique problem. Motzkin and Straus [25] showed the equivalence between the maximum clique problem and the standard quadratic problem. Mathematical statistics is another field of application of (QP), and a variety of economic models are based on (QP), e.g. the quadratic knapsack problem. In [5], Bomze et al. reformulated the standard quadratic optimization problem (StQP) as a copositive problem. Subsequently, algorithms for solving this copositive problem were developed by Bomze and de Klerk in [6] and by Dür and Bundfuss in [9]. While implementations of these algorithms produced some promising numerical results, the authors could only solve the copositive reformulation of the (StQP). In [11], Burer presented a completely positive reformulation for general (QP)s, even with binary constraints. Unfortunately, he could neither present a method for solving such a completely positive problem, nor was a copositive formulation proposed to which the algorithms mentioned above could be adapted and applied in order to solve it. This thesis establishes a new finite algorithm for solving a standard quadratic optimization problem. Furthermore, copositive representations for inequality-constrained as well as equality-constrained quadratic optimization problems are presented in this thesis. For the first approach, a completely positive reformulation of the (QP) is developed. The copositive reformulation is obtained by considering the dual of the completely positive problem. A more direct approach is taken by considering the Lagrangian dual of an equivalent quadratic optimization problem constrained by a semidefinite quadratic constraint. In this context, conditions for strong duality are proposed.
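For orientation, the reformulation of Bomze et al. [5] referred to above can be stated as follows: the standard quadratic optimization problem over the simplex is equivalent to a linear program over the completely positive cone C*, whose dual is a copositive program (E = ee^T denotes the all-ones matrix, C the copositive cone):

```latex
\min_{x\ge 0,\; e^\top x = 1} x^\top Q x
\;=\; \min_{X \in \mathcal{C}^*} \bigl\{\, \langle Q, X\rangle : \langle E, X\rangle = 1 \,\bigr\}
\;=\; \max_{\lambda \in \mathbb{R}} \bigl\{\, \lambda : Q - \lambda E \in \mathcal{C} \,\bigr\}.
```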
This thesis is divided into three main parts: the description of the calibration problem, the numerical solution of this problem and the connection to optimal stochastic control problems. Fitting model prices to given market prices leads to an abstract least squares formulation of the calibration problem. The corresponding option price can be computed by solving a stochastic differential equation via the Monte-Carlo method, which seems to be preferred by most practitioners. Since the Monte-Carlo method is expensive in terms of computational effort and memory, more sophisticated stochastic predictor-corrector schemes are established in this thesis. The numerical advantage of these predictor-corrector schemes is presented and discussed. The adjoint method is applied to the calibration. The theoretical advantage of the adjoint method is discussed in detail. It is shown that the computational effort of gradient calculation via the adjoint method is independent of the number of calibration parameters. Numerical results confirm the theoretical results and summarize the computational advantage of the adjoint method. Furthermore, the connection to optimal stochastic control problems is established in this thesis.
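As a small illustration of the kind of scheme meant here, the sketch below applies one simple predictor-corrector variant (explicit Euler predictor, trapezoidal averaging of the drift, diffusion kept explicit) to a toy Monte-Carlo pricing under Black-Scholes dynamics; the model, parameters and the particular variant are assumptions for illustration, not the schemes established in the thesis.

```python
import numpy as np

def predictor_corrector_paths(x0, a, b, T, steps, n_paths, rng):
    """Simulate dX = a(X) dt + b(X) dW with a simple predictor-corrector step:
    predictor = explicit Euler, corrector = trapezoidal average of the drift
    (diffusion kept explicit)."""
    dt = T / steps
    X = np.full(n_paths, x0, dtype=float)
    for _ in range(steps):
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)
        pred = X + a(X) * dt + b(X) * dW                  # predictor (Euler)
        X = X + 0.5 * (a(X) + a(pred)) * dt + b(X) * dW   # corrector
    return X

# toy example: Monte-Carlo price of a European call under Black-Scholes dynamics
r, sigma, S0, K, T = 0.03, 0.2, 100.0, 100.0, 1.0
rng = np.random.default_rng(1)
ST = predictor_corrector_paths(S0, lambda x: r * x, lambda x: sigma * x,
                               T, steps=100, n_paths=100_000, rng=rng)
price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))
print(round(price, 3))
```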
In this thesis, we present a new approach for estimating the effects of wind turbines on a local bat population. We build an individual based model (IBM) which simulates the movement behaviour of every single bat of the population with its own preferences, foraging behaviour and other species characteristics. This behaviour is normalized by a Monte-Carlo simulation which gives us the average behaviour of the population. The result is an occurrence map of the considered habitat which tells us how often the bats, and therefore the considered bat population, frequent every region of this habitat. Hence, it is possible to estimate the crossing rate at the position of an existing or potential wind turbine. We compare this individual based approach with a partial differential equation based method. This second approach requires a lower computational effort but, unfortunately, we lose information about the movement trajectories at the same time. Additionally, the PDE based model only gives us a density profile. Hence, we lose the information on how often each bat crosses particular points in the habitat in one night. In a next step we predict the average number of fatalities for each wind turbine in the habitat, depending on the type of the wind turbine and the behaviour of the considered bat species. This gives us the extra mortality caused by the wind turbines for the local population. This value is used in a population model, and finally we can calculate whether the population still grows or whether there already is a decline in population size which leads to the extinction of the population. Using the combination of all these models, we are able to evaluate the conflict between wind turbines and bats and to predict the outcome of this conflict. Furthermore, it is possible to find better positions for wind turbines such that the local bat population has a better chance to survive. Since bats tend to move in swarm formations under certain circumstances, we introduce swarm simulation using partial integro-differential equations. Thereby, we have a closer look at existence and uniqueness properties of solutions.
In this thesis, we study the optimization problem of optimally orienting orthotropic materials in the shell of three-dimensional shell structures. The goal of the optimization is to minimize the overall compliance of the structure, which corresponds to the search for a design that is as stiff as possible. Both the mathematical and the mechanical foundations are compiled in compact form, and based on these, new extensions of pointwise formulated optimization methods, both gradient-based and based on mechanical principles, are developed and implemented. The presented methods are tested and compared on the example of an aircraft wing model of practically relevant problem size. Finally, the investigated methods are examined in their coupling with a topology optimization method based on the topological gradient.
The main achievement of this thesis is an analysis of the accuracy of computations with Loader's algorithm for the binomial density. In later work, this analysis could be used for a theorem about the numerical accuracy of algorithms that compute rectangle probabilities for scan statistics of a multinomially distributed random variable. An example that shall illustrate the practical use of probabilities for scan statistics is the following, which arises in epidemiology: Let n patients arrive at a clinic in d = 365 days, each of the patients with probability 1/d at each of these d days and all patients independently from each other. The knowledge of the probability that there exist 3 adjacent days in which together more than k patients arrive helps deciding, after observing data, whether there is a cluster which we would not suspect to have occurred randomly but for which we suspect there must be a reason. Formally, this epidemiological example can be described by a multinomial model. As multinomially distributed random variables are examples of Markov increments, which is a fact already used implicitly by Corrado (2011) to compute the distribution function of the multinomial maximum, we can use a generalized version of Corrado's algorithm to compute the probability described in our example. To compute its result, the algorithm for rectangle probabilities for Markov increments always uses transition probabilities of the corresponding Markov chain. In the multinomial case, the transition probabilities of the corresponding Markov chain are binomial probabilities. Therefore, we analyze the accuracy of Loader's algorithm for the binomial density, which is used, for example, by the statistical software R. With the help of accuracy bounds for the binomial density, we would be able to derive accuracy bounds for the computation of rectangle probabilities for scan statistics of multinomially distributed random variables. To figure out how sharp the derived accuracy bounds are, they can be compared in examples to rigorous upper and lower bounds obtained by interval-arithmetical computations.
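For intuition, the probability in the epidemiological example can be approximated by straightforward Monte-Carlo simulation. This is a hedged sketch for illustration only; the thesis is concerned with exact computation via Corrado's algorithm and its numerical accuracy, and the parameter values below are assumptions.

```python
import numpy as np

def scan_exceedance_prob(n, d, window, k, n_sim=20_000, seed=0):
    """Monte-Carlo estimate of the probability that some `window` adjacent
    days together receive more than k of the n patients, where each patient
    independently picks one of the d days uniformly at random."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        counts = rng.multinomial(n, np.full(d, 1.0 / d))
        # sums over all windows of `window` adjacent days
        window_sums = np.convolve(counts, np.ones(window, dtype=int), mode="valid")
        if window_sums.max() > k:
            hits += 1
    return hits / n_sim

# example in the spirit of the text: n = 100 patients, d = 365 days, 3 adjacent days
print(scan_exceedance_prob(n=100, d=365, window=3, k=5))
```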
Shape optimization is of interest in many fields of application. In particular, shape optimization problems arise frequently in technological processes which are modelled by partial differential equations (PDEs). In a lot of practical circumstances, the shape under investigation is parametrized by a finite number of parameters, which, on the one hand, allows the application of standard optimization approaches, but, on the other hand, unnecessarily limits the space of reachable shapes. Shape calculus presents a way to circumvent this dilemma. However, so far shape optimization based on shape calculus is mainly performed using gradient descent methods. One reason for this is the lack of symmetry of second order shape derivatives or shape Hessians. A major difference between shape optimization and the standard PDE constrained optimization framework is the lack of a linear space structure on shape spaces. If one cannot use a linear space structure, then the next best structure is a Riemannian manifold structure, in which one works with Riemannian shape Hessians. They possess the often sought property of symmetry, characterize well-posedness of optimization problems and define sufficient optimality conditions. In general, shape Hessians are used to accelerate gradient-based shape optimization methods. This thesis deals with shape optimization problems constrained by PDEs and embeds these problems in the framework of optimization on Riemannian manifolds to provide efficient techniques for PDE constrained shape optimization problems on shape spaces. A Lagrange-Newton and a quasi-Newton technique in shape spaces for PDE constrained shape optimization problems are formulated. These techniques are based on the Hadamard-form of shape derivatives, i.e., on the form of integrals over the surface of the shape under investigation. It is often a very tedious, not to say painful, process to derive such surface expressions. Along the way, volume formulations in the form of integrals over the entire domain appear as an intermediate step. This thesis couples volume integral formulations of shape derivatives with optimization strategies on shape spaces in order to establish efficient shape algorithms reducing analytical effort and programming work. In this context, a novel shape space is proposed.
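For orientation, the Hadamard (surface) form and the intermediate volume form of the shape derivative mentioned above have, under standard regularity assumptions, the generic shapes

```latex
dJ(\Omega)[V] \;=\; \int_{\partial\Omega} g\, \langle V, n\rangle \,\mathrm{d}s
\qquad\text{(Hadamard/surface form)},
\qquad
dJ(\Omega)[V] \;=\; \int_{\Omega} F(x, V, DV)\,\mathrm{d}x
\qquad\text{(volume form)},
```

where V is the perturbation vector field, n the outer normal, and g a scalar density on the boundary; deriving g is the tedious step that the coupling of volume formulations with optimization on shape spaces described above is designed to avoid.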
The present work considers the normal approximation of the binomial distribution and yields estimates of the supremum distance between the distribution functions of the binomial and the corresponding standardized normal distribution. The type of the estimates corresponds to the classical Berry-Esseen theorem in the special case that all random variables are identically Bernoulli distributed. In this case we state the optimal constant for the Berry-Esseen theorem. In the proof of these estimates, several inequalities regarding the density as well as the distribution function of the binomial distribution are presented. Furthermore, in the estimates mentioned above, the distribution function is replaced by the probability of arbitrary, not only unbounded, intervals, and in this new situation we also present an upper bound.
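For reference, the classical Berry-Esseen bound specialized to S_n ~ Bin(n,p), i.e. to sums of identically Bernoulli(p)-distributed random variables, reads (with an absolute constant C, whose optimal value in this special case is the subject of the thesis):

```latex
\sup_{x\in\mathbb{R}}\;
\Bigl|\,\mathbb{P}\Bigl(\tfrac{S_n - np}{\sqrt{n p (1-p)}} \le x\Bigr) - \Phi(x)\Bigr|
\;\le\; C\,\frac{p^2 + (1-p)^2}{\sqrt{n\,p\,(1-p)}},
```

since E|X_1 - p|^3 = p(1-p)(p^2+(1-p)^2) and σ^3 = (p(1-p))^{3/2} for Bernoulli(p) summands.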
Matching problems with additional resource constraints are generalizations of the classical matching problem. The focus of this work is on matching problems with two types of additional resource constraints: The couple constrained matching problem and the level constrained matching problem. The first one is a matching problem which has imposed a set of additional equality constraints. Each constraint demands that for a given pair of edges either both edges are in the matching or none of them is in the matching. The second one is a matching problem which has imposed a single equality constraint. This constraint demands that an exact number of edges in the matching are so-called on-level edges. In a bipartite graph with fixed indices of the nodes, these are the edges with end-nodes that have the same index. As a central result concerning the couple constrained matching problem we prove that this problem is NP-hard, even on bipartite cycle graphs. Concerning the complexity of the level constrained perfect matching problem we show that it is polynomially equivalent to three other combinatorial optimization problems from the literature. For different combinations of fixed and variable parameters of one of these problems, the restricted perfect matching problem, we investigate their effect on the complexity of the problem. Further, the complexity of the assignment problem with an additional equality constraint is investigated. In a central part of this work we bring couple constraints into connection with a level constraint. We introduce the couple and level constrained matching problem with on-level couples, which is a matching problem with a special case of couple constraints together with a level constraint imposed on it. We prove that the decision version of this problem is NP-complete. This shows that the level constraint can be sufficient for making a polynomially solvable problem NP-hard when being imposed on that problem. This work also deals with the polyhedral structure of resource constrained matching problems. For the polytope corresponding to the relaxation of the level constrained perfect matching problem we develop a characterization of its non-integral vertices. We prove that for any given non-integral vertex of the polytope a corresponding inequality which separates this vertex from the convex hull of integral points can be found in polynomial time. Regarding the calculation of solutions of resource constrained matching problems, two new algorithms are presented. We develop a polynomial approximation algorithm for the level constrained matching problem on level graphs, which returns solutions whose size is at most one less than the size of an optimal solution. We then describe the Objective Branching Algorithm, a new algorithm for exactly solving the perfect matching problem with an additional equality constraint. The algorithm makes use of the fact that the weighted perfect matching problem without an additional side constraint is polynomially solvable. In the Appendix, experimental results of an implementation of the Objective Branching Algorithm are listed.
In the first part of this work we generalize a method of building optimal confidence bounds provided in Buehler (1957) by specializing an exhaustive class of confidence regions inspired by Sterne (1954). The resulting confidence regions, also called Buehlerizations, are valid in general models and depend on a "designated statistic" that can be chosen according to some desired monotonicity behaviour of the confidence region. For a fixed designated statistic, the thus obtained family of confidence regions indexed by their confidence level is nested. Buehlerizations have furthermore the optimality property of being the smallest (w.r.t. set inclusion) confidence regions that are increasing in their designated statistic. The theory is eventually applied to normal, binomial, and exponential samples. The second part deals with the statistical comparison of pairs of diagnostic tests and establishes relations 1. between the sets of lower confidence bounds, 2. between the sets of pairs of comparable lower confidence bounds, and 3. between the sets of admissible lower confidence bounds in various models for diverse parameters of interest.
In recent years, the study of dynamical systems has developed into a central research area in mathematics. Actually, in combination with keywords such as "chaos" or "butterfly effect", parts of this theory have been incorporated in other scientific fields, e.g. in physics, biology, meteorology and economics. In general, a discrete dynamical system is given by a set X and a self-map f of X. The set X can be interpreted as the state space of the system and the function f describes the temporal development of the system. If the system is in state x ∈ X at time zero, its state at time n ∈ N is denoted by f^n(x), where f^n stands for the n-th iterate of the map f. Typically, one is interested in the long-time behaviour of the dynamical system, i.e. in the behaviour of the sequence (f^n(x)) for an arbitrary initial state x ∈ X as the time n increases. On the one hand, it is possible that there exist certain states x ∈ X such that the system behaves stably, which means that f^n(x) approaches a state of equilibrium for n→∞. On the other hand, it might be the case that the system runs unstably for some initial states x ∈ X so that the sequence (f^n(x)) somehow shows chaotic behaviour. In case of a non-linear entire function f, the complex plane always decomposes into two disjoint parts, the Fatou set F_f of f and the Julia set J_f of f. These two sets are defined in such a way that the sequence of iterates (f^n) behaves quite "wildly" or "chaotically" on J_f whereas, on the other hand, the behaviour of (f^n) on F_f is rather "nice" and well-understood. However, this nice behaviour of the iterates on the Fatou set can "change dramatically" if we compose the iterates from the left with just one other suitable holomorphic function, i.e. if we consider sequences of the form (g∘f^n) on D, where D is an open subset of F_f with f(D)⊂ D and g is holomorphic on D. The general aim of this work is to study the long-time behaviour of such modified sequences. In particular, we will prove the existence of holomorphic functions g on D having the property that the behaviour of the sequence of compositions (g∘f^n) on the set D becomes quite similarly chaotic as the behaviour of the sequence (f^n) on the Julia set of f. With this approach, we immerse ourselves into the theory of universal families and hypercyclic operators, which itself has developed into an own branch of research. In general, for topological spaces X, Y and a family {T_i: i ∈ I} of continuous functions T_i:X→Y, an element x ∈ X is called universal for the family {T_i: i ∈ I} if the set {T_i(x): i ∈ I} is dense in Y. In case that X is a topological vector space and T is a continuous linear operator on X, a vector x ∈ X is called hypercyclic for T if it is universal for the family {T^n: n ∈ N}. Thus, roughly speaking, universality and hypercyclicity can be described via the following two aspects: There exists a single object which allows us, via simple analytical operations, to approximate every element of a whole class of objects. In the above situation, i.e. for a non-linear entire function f and an open subset D of F_f with f(D)⊂ D, we endow the space H(D) of holomorphic functions on D with the topology of locally uniform convergence and we consider the map C_f:H(D)→H(D), C_f(g):=g∘f|_D, which is called the composition operator with symbol f. The transform C_f is a continuous linear operator on the Fréchet space H(D). 
In order to show that the above-mentioned "nice" behaviour of the sequence of iterates (f^n) on the set D ⊂ F_f can "change dramatically" if we compose the iterates from the left with another suitable holomorphic function, our aim consists in finding functions g ∈ H(D) which are hypercyclic for C_f. Indeed, for each hypercyclic function g for C_f, the set of compositions {g∘f^n|_D: n ∈ N} is dense in H(D) so that the sequence of compositions (g∘f^n|_D) is kind of "maximally divergent", meaning that each function in H(D) can be approximated locally uniformly on D via subsequences of (g∘f^n|_D). This kind of behaviour stands in sharp contrast to the fact that the sequence of iterates (f^n) itself converges, behaves like a rotation or shows some "wandering behaviour" on each component of F_f. To put it in a nutshell, this work combines the theory of non-linear complex dynamics in the complex plane with the theory of dynamics of continuous linear operators on spaces of holomorphic functions. As far as the author knows, this approach has not been investigated before.
The present work is divided into the two subject areas named in its title. The first part investigates the proximity, i.e. a certain way of measuring closeness, of binomial and Poisson distributions. Specifically, the uniform structure of the total variation distance on the closed set of all binomial and Poisson distributions is characterized by means of the associated expectations and variances, which uniquely determine these distributions. In particular, an upper bound of the total variation distance on the set of binomial and Poisson distributions is given in terms of a corresponding function of the associated expectations and variances. The second part of the work is devoted to confidence intervals for averages of success probabilities. One of the first and best-known works on confidence intervals for success probabilities is that of Clopper and Pearson (1934). In the binomial model, confidence intervals for the unknown success probability are developed there for known sample size and confidence level. If, for fixed sample size, one considers, instead of a binomial distribution, i.e. the image measure of a homogeneous Bernoulli chain under the summation map, the corresponding image measure of an inhomogeneous Bernoulli chain, one obtains a Bernoulli convolution with the corresponding success probabilities. For estimating the average success probability in the larger Bernoulli convolution model, the one-sided Clopper-Pearson intervals, for example, are in general not valid. Optimal one-sided and valid two-sided confidence intervals for the average success probability in the Bernoulli convolution model are developed here. The one-sided Clopper-Pearson intervals are in general also not valid for estimating the success probability in the hypergeometric model, which is a submodel of the Bernoulli convolution model. For the hypergeometric model with fixed sample size and known urn size, the optimal one-sided confidence intervals are known. For fixed sample size and unknown urn size, optimal confidence intervals for the hypergeometric model are developed from the confidence intervals that are optimal in the Bernoulli convolution model. In addition, the case is considered in which an upper bound for the unknown urn size is given.
The central F- and t-distributions are among the classical distributions of mathematical statistics. The present work investigates generalizations of these distributions, the so-called doubly noncentral F- and t-distributions, which are of importance in statistical test theory. The fact that the associated probability densities are only given in the form of parameter integral or double series representations poses a major challenge for the study of their analytical properties. Using techniques from the theory of sign-regular functions, it is possible to prove the strictly unimodal shape of the density function for a large class of doubly noncentral distributions, which had previously only been conjectured and derived from approximations. This result permits the study of the uniquely determined mode as a function of certain noncentrality parameters. Here the theory of sign-regular functions again proves to be an important tool for establishing monotone dependencies.
The main topic of this treatise is the solution of two problems from the general theory of linear partial differential equations with constant coefficients. While surjectivity criteria for linear partial differential operators in spaces of smooth functions over an open subset of euclidean space and in spaces of distributions were proved by B. Malgrange and L. Hörmander in 1955 and 1962, respectively, concrete evaluation of these criteria is still a highly non-trivial task. In particular, it is well-known that surjectivity in the space of smooth functions over an open subset of euclidean space does not automatically imply surjectivity in the space of distributions. However, all examples exhibiting this fact live in three or higher dimensions. In 1966, F. Trèves conjectured that in the two-dimensional setting surjectivity of a linear partial differential operator on the smooth functions indeed implies surjectivity on the space of distributions. An affirmative solution to this problem is presented in this treatise. The second main result solves the so-called problem of (distributional) parameter dependence for solutions of linear partial differential equations with constant coefficients posed by J. Bonet and P. Domanski in 2006. It is shown that, in dimensions three or higher, this problem in general has a negative solution even for hypoelliptic operators. Moreover, it is proved that the two-dimensional case is again an exception, because in this setting the problem of parameter dependence always has a positive solution.
In a paper of 1996, the British mathematician Graham R. Allan posed the question whether the product of two stable elements is again stable. Here stability describes the solvability of a certain infinite system of equations. Using a method from the theory of homological algebra, it is proved that in the case of topological algebras with multiplicative webs, and thus in all common locally convex topological algebras that occur in standard analysis, the answer to Allan's question is affirmative.
In splitting theory of locally convex spaces we investigate evaluable characterizations of the pairs (E, X) of locally convex spaces such that each exact sequence 0 -> X -> G -> E -> 0 of locally convex spaces splits, i.e. either X -> G has a continuous linear left inverse or G -> E has a continuous linear right inverse. In the thesis at hand we deal with splitting of short exact sequences of so-called PLH spaces, which are defined as projective limits of strongly reduced spectra of strong duals of Fréchet-Hilbert spaces. This class of locally convex spaces contains most of the spaces of interest for applications in the theory of partial differential operators, such as the space of Schwartz distributions, the space of real analytic functions and various spaces of ultradifferentiable functions and ultradistributions. It also contains non-Schwartz spaces such as B(2,k,loc)(Ω) and spaces of smooth and square integrable functions that are not covered by the current theory for PLS spaces. We prove a complete characterization of the above problem in the case of X being a PLH space and E either being a Fréchet-Hilbert space or a strong dual of one by conditions of type (T). To this end, we establish the full homological toolbox of Yoneda Ext functors in exact categories for the category of PLH spaces including the long exact sequence, which in particular involves a thorough discussion of the proper concept of exactness. Furthermore, we exhibit the connection to the parameter dependence problem via the Hilbert tensor product for hilbertizable locally convex spaces. We show that the Hilbert tensor product of two PLH spaces is again a PLH space, which in particular proves the positive answer to Grothendieck's problème des topologies. In addition to that, we give a complete characterization of the vanishing of the first derivative of the functor proj for tensorized PLH spectra if one of the PLH spaces E and X meets some nuclearity assumptions. To apply our results to concrete cases, we establish sufficient conditions of (DN)-(Ω) type and apply them to the parameter dependence problem for partial differential operators with constant coefficients on B(2,k,loc)(Ω) spaces as well as to the smooth and square integrable parameter dependence problem. Concluding, we give a complete solution of all the problems under consideration for PLH spaces of Köthe type.
In this dissertation we are concerned with the constructive and generic derivation of universal functions. By a universal function we mean a holomorphic function that, in a certain sense, contains entire classes of functions. The constructive method involves the explicit construction of a universal function via a limit process, for instance as a polynomial series. The generic method first defines, purely abstractly, the desired class of universal functions. Using the Baire category theorem, it is then shown that the class of these functions is not only nonempty but even a dense G_delta subset of the function space under consideration. Both methods make use of the approximation theorems of Runge and of Mergelyan. The main results are the following: (1) We have constructively proved the existence of universal Laurent series on multiply connected domains. In addition, we have shown that the set of such universal Laurent series is dense in the space of functions holomorphic on the domain under consideration. (2) The existence of universal Faber series on certain domains has been proved both constructively and generically. (3) On the one hand, we have constructively shown that there exist so-called entire T-universal functions with prescribed approximation paths. The approximation paths are prescribed by a sufficiently flexible functional form. The set of such functions is a dense G_delta subset of the space of entire functions. On the other hand, we have generically proved the existence of T-universal functions on a bounded domain with respect to certain prescribed approximation paths. Here, too, the approximation paths are sufficiently general.
This work investigates the industrial applicability of graphics and stream processors in the field of fluid simulations. For this purpose, an explicit Runge-Kutta discontinuous Galerkin method of arbitrarily high order is implemented completely for the hardware architecture of GPUs. The same functionality is simultaneously realized for CPUs and compared to GPUs. Explicit time-stepping schemes as well as established implicit methods are considered for the CPU. This work aims at the simulation of inviscid, transonic flows over the ONERA M6 wing. The discontinuities which typically arise in hyperbolic equations are treated with an artificial viscosity approach. It is further investigated how this approach fits into the explicit time stepping and works together with the special architecture of the GPU. Since the treatment of artificial viscosity is close to the simulation of the Navier-Stokes equations, it is reviewed how GPU-accelerated methods could be applied for computing viscous flows. This work is based on a nodal discontinuous Galerkin approach for linear hyperbolic problems. Here, it is extended to non-linear problems, which makes the application of numerical quadrature obligatory. Moreover, the representation of complex geometries is realized using isoparametric mappings. Higher order methods are typically very sensitive with respect to boundaries which are not properly resolved. For this purpose, an approach is presented which fits straight-sided DG meshes to curved geometries described by NURBS surfaces. The mesh is modeled as an elastic body and deformed according to the solution of closest point problems in order to minimize the gap to the original spline surface. The sensitivity with respect to geometry representations is reviewed at the end of this work in the context of shape optimization. Here, the aerodynamic drag of the ONERA M6 wing is minimized according to the shape gradient, which is implicitly smoothed within the mesh deformation approach. In this context a comparison to the classical Laplace-Beltrami operator is made in a Stokes flow situation.
The Hadamard product of two holomorphic functions which is defined via a convolution integral constitutes a generalization of the Hadamard product of two power series which is obtained by pointwise multiplying their coefficients. Based on the integral representation mentioned above, an associative law for this convolution is shown. The main purpose of this thesis is the examination of the linear and continuous Hadamard convolution operators. These operators map between spaces of holomorphic functions and send - with a fixed function phi - a function f to the convolution of phi and f. The transposed operator is computed and turns out to be a Hadamard convolution operator, too, mapping between spaces of germs of holomorphic functions. The kernel of Hadamard convolution operators is investigated and necessary and sufficient conditions for those operators to be injective or to have dense range are given. In case that the domain of holomorphy of the function phi allows a Mellin transform of phi, certain (generalized) monomials are identified as eigenfunctions of the corresponding operator. By means of this result and some extract of the theory of growth of entire functions, further propositions concerning the injectivity, the denseness of the range or the surjectivity of Hadamard convolution operators are shown. The relationship between Hadamard convolution operators, operators which are defined via the convolution with an analytic functional and differential operators of infinite order is investigated and the results which are obtained in the thesis are put into the research context. The thesis ends with an application of the results to the approximation of holomorphic functions by lacunary polynomials. On the one hand, the question under which conditions lacunary polynomials are dense in the space of all holomorphic functions is investigated and on the other hand, the rate of approximation is considered. In this context, a result corresponding to the Bernstein-Walsh theorem is formulated.
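For orientation, the two descriptions of the Hadamard product mentioned above can be written as (a sketch; the precise domains and radii of validity are as specified in the thesis)

```latex
(f * g)(z) \;=\; \sum_{n\ge 0} a_n b_n z^n
\;=\; \frac{1}{2\pi i}\int_{|w|=r} f(w)\, g\!\left(\frac{z}{w}\right) \frac{\mathrm{d}w}{w},
\qquad f(z)=\sum_{n\ge 0} a_n z^n,\;\; g(z)=\sum_{n\ge 0} b_n z^n,
```

for suitable radii r and admissible z.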
Copositive programming is concerned with the problem of optimizing a linear function over the copositive cone, or its dual, the completely positive cone. It is an active field of research and has received a growing amount of attention in recent years. This is because many combinatorial as well as quadratic problems can be formulated as copositive optimization problems. The complexity of these problems is then moved entirely to the cone constraint, showing that general copositive programs are hard to solve. A better understanding of the copositive and the completely positive cone can therefore help in solving (certain classes of) quadratic problems. In this thesis, several aspects of copositive programming are considered. We start by studying the problem of computing the projection of a given matrix onto the copositive and the completely positive cone. These projections can be used to compute factorizations of completely positive matrices. As a second application, we use them to construct cutting planes to separate a matrix from the completely positive cone. Besides the cuts based on copositive projections, we will study another approach to separate a triangle-free doubly nonnegative matrix from the completely positive cone. A special focus is on copositive and completely positive programs that arise as reformulations of quadratic optimization problems. Among those we start by studying the standard quadratic optimization problem. We will show that for several classes of objective functions, the relaxation resulting from replacing the copositive or the completely positive cone in the conic reformulation by a tractable cone is exact. Based on these results, we develop two algorithms for solving standard quadratic optimization problems and discuss numerical results. The methods presented cannot immediately be adapted to general quadratic optimization problems. This is illustrated with examples.
Design and structural optimization has become a very important field in industrial applications over the last years. For economic and ecological reasons, the efficient use of material is of high industrial interest. Therefore, computational tools based on optimization theory have been developed and studied in the last decades. In this work, different structural optimization methods are considered. Special attention is paid to the applicability to three-dimensional, large-scale, multiphysics problems, which arise from different areas of industry. Based on the theory of PDE-constrained optimization, descent methods in structural optimization require knowledge of the (partial) derivatives with respect to shape or topology variations. Therefore, shape and topology sensitivity analysis is introduced, and the connection between both sensitivities is given by the Topological-Shape Sensitivity Method. This method leads to a systematic procedure to compute the topological derivative in terms of the shape sensitivity. Due to the framework of moving boundaries in structural optimization, different interface tracking techniques are presented. If the topology of the domain is preserved during the optimization process, explicit interface tracking techniques, combined with mesh deformation, are used to capture the interface. These techniques fit the requirements of classical shape optimization very well. Otherwise, an implicit representation of the interface is advantageous if the optimal topology is unknown. In this case, the level set method is combined with the concept of the topological derivative to deal with topological perturbations. The resulting methods are applied to different industrial problems. On the one hand, interface shape optimization for solid bodies subject to a transient heat-up phase governed by both linear elasticity and thermal stresses is considered. For this purpose, the shape calculus is applied to coupled heat and elasticity problems and a generalized compliance objective function is studied. The resulting thermo-elastic shape optimization scheme is used for compliance reduction of realistic hotplates. On the other hand, structural optimization based on the topological derivative for three-dimensional elasticity problems is considered. In order to comply with typical volume constraints, a one-shot augmented Lagrangian method is proposed. Additionally, a multiphase optimization approach based on mesh refinement is used to reduce the computational costs, and the method is illustrated by classical minimum compliance problems. Finally, the topology optimization algorithm is applied to aero-elastic problems and numerical results are presented.
In modern survey statistics, optimization problems arise more and more frequently and need to be solved. These problems are often high-dimensional, and simulation studies require solving them repeatedly. In order to do this in reasonable time, special algorithms and solution approaches are required, which are developed and investigated in this thesis. On the one hand, the optimization problems are allocation problems for determining optimal subsample sizes. Here, besides continuous solution methods based on a root-finding problem, integer-valued solution methods based on the greedy idea are investigated, and the resulting optimal solutions are compared with each other. On the other hand, this thesis deals with various calibration problems. For these, an alternative solution approach to the methods used so far is presented. It requires the solution of a nonsmooth root-finding problem, which is accomplished by means of a nonsmooth Newton method. In the context of nonsmooth optimization algorithms, step size control plays an important role; to this end, a general approach to nonmonotone step size control for Bouligand-differentiable functions is considered. Besides classical calibration, a calibration problem for coherent small area estimation under relaxed constraints and an additional restriction on the variation of the design weights is considered. This problem can be transformed into a high-dimensional quadratic optimization problem, which requires the use of solvers for sparse optimization problems. The numerical problems considered in this thesis can arise, for example, in censuses. In this context, the presented approaches are finally examined in simulation studies, carried out within the census sampling research project, with regard to a possible application to the German Census 2011.
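The greedy idea for integer allocation mentioned in this abstract can be sketched in a few lines. The following Python fragment is an illustration under simplifying assumptions (a stratified-sampling variance objective of the form sum_h N_h^2 S_h^2 / n_h, no finite-population correction, hypothetical function and variable names), not the implementation studied in the thesis: the total sample size is distributed one unit at a time to the stratum with the largest marginal variance reduction.

import heapq

def greedy_allocation(N, S, n_total, n_min=2):
    """Greedy integer allocation of n_total sample units to strata.

    N[h]: stratum population size, S[h]: stratum standard deviation.
    Assumed objective (to be minimized): sum_h N[h]**2 * S[h]**2 / n[h].
    Each step adds one unit to the stratum with the largest marginal
    decrease of the objective.
    """
    H = len(N)
    n = [n_min] * H                      # start from the minimum allocation
    remaining = n_total - n_min * H
    if remaining < 0:
        raise ValueError("n_total too small for the minimum allocation")

    def gain(h, nh):
        # decrease of N_h^2 S_h^2 / n_h when going from nh to nh + 1
        return N[h] ** 2 * S[h] ** 2 * (1.0 / nh - 1.0 / (nh + 1))

    # max-heap via negated gains
    heap = [(-gain(h, n[h]), h) for h in range(H)]
    heapq.heapify(heap)
    for _ in range(remaining):
        _, h = heapq.heappop(heap)
        n[h] += 1
        heapq.heappush(heap, (-gain(h, n[h]), h))
    return n

# toy example with three strata (illustrative numbers only)
print(greedy_allocation(N=[5000, 3000, 2000], S=[10.0, 4.0, 1.0], n_total=100))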
In this thesis, global surrogate models for responses of expensive simulations are investigated. Computational fluid dynamics (CFD) has become an indispensable tool in the aircraft industry, but simulations of realistic aircraft configurations remain challenging and computationally expensive despite the sustained advances in computing power. With the demand for numerous simulations to describe the behavior of an output quantity over a design space, the need for surrogate models arises. They are cheap to evaluate and approximate quantities of interest of a computer code: only a small number of evaluations of the simulation is needed to determine the behavior of the response over the whole input parameter domain. The Kriging method is capable of interpolating highly nonlinear, deterministic functions based on scattered datasets. Using correlation functions, distinct sensitivities of the response with respect to the input parameters can be taken into account automatically. Kriging can be extended to incorporate not only evaluations of the simulation but also gradient information, which is called gradient-enhanced Kriging. Adaptive sampling strategies can generate more efficient surrogate models. Contrary to traditional one-stage approaches, the surrogate model is built step by step: in every stage of an adaptive process, the current surrogate is assessed in order to determine new sample locations, the response is evaluated there, and the new samples are added to the existing set of samples. In this way, the sampling strategy learns about the behavior of the response and a problem-specific design is generated. Critical regions of the input parameter space are identified automatically and sampled more densely in order to reproduce the response's behavior correctly, and the number of required expensive simulations is decreased considerably. All these approaches treat the response itself more or less as the unknown output of a black box. A new approach is motivated by the assumption that, for a predefined problem class, the behavior of the response is not arbitrary but rather related to other instances of the mutual problem class. In CFD, for example, responses of aerodynamic coefficients share structural similarities for different airfoil geometries. The goal is to identify these similarities in a database of responses via principal component analysis and to use them for a generic surrogate model. Characteristic structures of the problem class can be used to increase the approximation quality in new test cases. Traditional approaches still require a large number of response evaluations in order to achieve a globally high approximation quality. Validating the generic surrogate model for industrially relevant test cases shows that it yields efficient surrogates which are more accurate than common interpolations; thus practical, i.e. affordable, surrogates are already possible for moderate sample sizes. So far, interpolation problems were regarded as separate problems. The new approach uses the structural similarities of a mutual problem class for surrogate modeling in an innovative way, connecting concepts from response surface methods, variable-fidelity modeling, design of experiments, image registration and statistical shape analysis in an interdisciplinary manner. Generic surrogate modeling is not restricted to aerodynamic simulation: it can be applied whenever expensive simulations can be assigned to a larger problem class in which structural similarities are expected.
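To make the interpolation idea concrete, the following minimal sketch implements simple Kriging with a Gaussian correlation function on scattered samples. It is an illustration under simplifying assumptions (zero process mean, a fixed correlation parameter theta, a small nugget for numerical stability) and does not reproduce the gradient-enhanced or adaptive variants developed in the thesis.

import numpy as np

def kriging_fit(X, y, theta=10.0, nugget=1e-10):
    """Simple Kriging interpolator with Gaussian correlation.

    X: (n, d) sample locations, y: (n,) responses.
    Returns a predictor callable for new locations.
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    R = np.exp(-theta * d2) + nugget * np.eye(n)           # correlation matrix
    w = np.linalg.solve(R, y)                              # weights R^{-1} y

    def predict(Xs):
        d2s = ((Xs[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        r = np.exp(-theta * d2s)                           # cross-correlations r(x)
        return r @ w                                       # prediction r(x)^T R^{-1} y

    return predict

# toy 1D example (illustrative): interpolate a nonlinear response from 8 samples
X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(6 * X[:, 0]) + X[:, 0]
predict = kriging_fit(X, y)
print(predict(np.array([[0.25], [0.5], [0.75]])))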
One of the main tasks in mathematics is to answer the question whether an equation possesses a solution or not. In the 1940s, Thom and Glaeser studied a new type of equation given by the composition of functions. They raised the following question: for which functions Ψ does the equation F(Ψ)=f always have a solution? Of course, this question only makes sense if the right-hand side f satisfies some a priori conditions, like being contained in the closure of the space of all compositions with Ψ, and it is easy to answer if F and f are continuous functions. Imposing further restrictions on these functions, especially on F, complicates the search for an adequate solution considerably. For smooth functions one can already find deep results by Bierstone and Milman which answer the question in the case of a real-analytic function Ψ. This work contains further results for a different class of functions, namely those Ψ that are smooth and injective. In the case of a function Ψ of a single real variable, the question can be fully answered and we give three conditions that are both sufficient and necessary in order for the composition equation to always have a solution. Furthermore, one can unify these three conditions and show that they are equivalent to the fact that Ψ has a locally Hölder-continuous inverse. For injective functions Ψ of several real variables we give necessary conditions for the composition equation to be solvable; for instance, Ψ should satisfy some form of local distance estimate for the partial derivatives. Under the additional assumption of the Whitney regularity of the image of Ψ, we can give sufficient conditions for flat functions f on the critical set of Ψ to possess a solution F(Ψ)=f.
Optimal control problems are optimization problems governed by ordinary or partial differential equations (PDEs). A general formulation is given by \min_{(y,u)} J(y,u) subject to e(y,u)=0, assuming that e_y^{-1} exists, and consists of three main elements: 1. The cost functional J that models the purpose of the control on the system. 2. The definition of a control function u that represents the influence of the environment on the system. 3. The set of differential equations e(y,u)=0 modeling the controlled system, represented by the state function y:=y(u), which depends on u. These kinds of problems are well investigated and arise in many fields of application, for example robot control, control of biological processes, test drive simulation, and shape and topology optimization. In this thesis, an academic model problem of the form \min_{(y,u)} J(y,u):=\min_{(y,u)}\frac{1}{2}\|y-y_d\|^2_{L^2(\Omega)}+\frac{\alpha}{2}\|u\|^2_{L^2(\Omega)} subject to -\mathrm{div}(A\nabla y)+cy=f+u in \Omega, y=0 on \partial\Omega and u\in U_{ad} is considered. The objective is of tracking type with a given target function y_d and a regularization term with parameter \alpha. The control function u acts on the whole domain \Omega. The underlying partial differential equation is assumed to be uniformly elliptic, so the problem belongs to the class of linear-quadratic elliptic control problems with distributed control. The existence and uniqueness of an optimal solution for problems of this type is well known, and in a first step, following the paradigm 'first optimize, then discretize', the necessary and sufficient optimality conditions are derived by means of the adjoint equation, which results in a characterization of the optimal solution in the form of an optimality system (sketched after this abstract). In a second step, the occurring differential operators are approximated by finite differences and the resulting discretized optimality system is solved with a collective smoothing multigrid method (CSMG). In general, there are several optimization methods for solving the optimal control problem: an application of the implicit function theorem leads to so-called black-box approaches, where the PDE-constrained optimization problem is transformed into an unconstrained optimization problem and the reduced gradient of the reduced functional is computed via the adjoint approach. Other possibilities are quasi-Newton methods, which approximate the Hessian by a low-rank update based on gradient evaluations, Krylov-Newton methods, or (reduced) SQP methods. The use of multigrid methods for optimization purposes is motivated by their optimal computational complexity, i.e. the computational effort scales linearly with the number of unknowns, and by their rate of convergence, which is independent of the grid size. Originally, multigrid methods were developed as a class of algorithms for solving linear systems arising from the discretization of partial differential equations. The main part of this thesis is devoted to the investigation of the implementability and the efficiency of the CSMG on commodity graphics cards. GPUs (graphics processing units) are designed for highly parallelizable graphics computations and possess many cores with SIMD architecture, which are able to outperform the CPU in terms of computational power and memory bandwidth. Here they are considered as a prototype for prospective multi-core computers with several hundred cores.
When using GPUs as stream processors, two major problems arise: data have to be transferred from the CPU main memory to the GPU main memory, which can be quite slow, and the size of the GPU main memory is limited. Furthermore, a remarkable speed-up compared to a CPU is only achieved when the stream processors are used to full capacity. Therefore, new algorithms for the solution of optimal control problems are designed in this thesis. To this end, a nonoverlapping domain decomposition method is introduced which allows the computational power of many GPUs or CPUs to be exploited in parallel. This algorithm is based on preliminary work for elliptic problems and is enhanced for the application to optimal control problems. For the domain decomposition into two subdomains, the linear system for the unknowns on the interface is solved with a Schur complement method using a discrete approximation of the Steklov-Poincaré operator. For the academic optimal control problem, the arising capacitance matrix can be inverted analytically. On this basis, two different algorithms for the nonoverlapping domain decomposition in the case of many subdomains are proposed in this thesis: on the one hand a recursive approach and on the other hand a simultaneous approach. Numerical tests compare the performance of the CSMG for the one-domain case and of the two approaches for the multi-domain case on a GPU and a CPU for different variants.
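For the academic model problem above, the optimality system obtained via the adjoint approach can be sketched as follows (a standard derivation, assuming A is symmetric and writing P_{U_{ad}} for the pointwise projection onto the admissible set): the state equation -\mathrm{div}(A\nabla y) + cy = f + u in \Omega, y = 0 on \partial\Omega; the adjoint equation -\mathrm{div}(A\nabla p) + cp = y - y_d in \Omega, p = 0 on \partial\Omega; and the projection condition u = P_{U_{ad}}(-p/\alpha), which is equivalent to the variational inequality (\alpha u + p, v - u)_{L^2(\Omega)} \geq 0 for all v \in U_{ad}. It is this coupled system that is discretized by finite differences and solved with the CSMG method, respectively decomposed by the Schur complement approach described above.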
When pricing financial derivatives, so-called jump-diffusion models with local volatility offer many advantages. From a mathematical point of view, however, they are computationally demanding, since the corresponding model prices are computed by means of a partial integro-differential equation (PIDE). We deal with the calibration of the parameters of such a model. In a least-squares approach, market prices of plain-vanilla European options are compared with the model prices, which leads to an optimal control problem. A substantial part of this thesis is devoted to the solution of the PIDE from a theoretical and, above all, a numerical point of view. The dense linear systems arising from an implicit time-discretization scheme are solved with a preconditioned GMRES method, which results in almost linear complexity with respect to the space and time discretization. Despite this efficient solution method, evaluations of the least-squares objective function are still expensive, so that the main part of the thesis applies reduced-order models based on proper orthogonal decomposition. Local a priori error estimates for the reduced differential equation as well as for the reduced objective function, combined with a trust-region approach for globalization, yield an efficient algorithm that reduces the computation time considerably. The main result of the thesis is a convergence proof for this algorithm for a wide class of optimization problems, which includes the calibration problem under consideration.
Krylov subspace methods are often used to solve large-scale linear systems arising from optimization problems involving partial differential equations (PDEs). Appropriate preconditioning is vital for designing efficient iterative solvers of this type. This research consists of two parts. In the first part, we compare two different kinds of preconditioners for a conjugate gradient (CG) solver applied to a partial integro-differential equation (PIDE) from finance, both theoretically and numerically; an analysis of mesh independence and of the rate of convergence of the CG solver is included. The insights gained from preconditioning the PIDE are then applied to a related optimization problem. The second part aims at developing a new preconditioning technique by embedding reduced-order models of nonlinear PDEs, generated by proper orthogonal decomposition (POD), into deflated Krylov subspace algorithms for solving the corresponding optimization problems. Numerical results are reported for a series of test problems.
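As a minimal illustration of the role of a preconditioner in such Krylov solvers, the following sketch implements a preconditioned conjugate gradient iteration with a simple Jacobi (diagonal) preconditioner applied to a toy symmetric positive definite system. It is a generic textbook routine stated here only for orientation; it is not the PIDE-specific or POD-based deflated preconditioning investigated in this work.

import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient for a symmetric positive definite A.

    M_inv: callable applying the inverse of the preconditioner to a vector.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# toy SPD system (illustrative): 1D Laplacian with Jacobi preconditioner
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x, iters = pcg(A, b, M_inv=lambda r: r / d)
print(iters, np.linalg.norm(A @ x - b))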
In this thesis, we mainly investigate geometric properties of optimal codebooks for random elements $X$ in a separable Banach space $E$. Here, for a natural number $N$ and a random element $X$, an $N$-optimal codebook is an $N$-subset of the underlying Banach space $E$ which gives a best approximation to $X$ in an average sense. We focus on two types of geometric properties: the global growth behaviour (growing in $N$) of a sequence of $N$-optimal codebooks is described by the maximal (quantization) radius and a so-called quantization ball. For many distributions, such as centrally symmetric distributions on $R^d$ as well as Gaussian distributions on general Banach spaces, we are able to estimate the asymptotics of the quantization radius as well as of the quantization ball. Furthermore, we investigate local properties of optimal codebooks, in particular the local quantization error and the weights of the Voronoi cells induced by an optimal codebook. In the finite-dimensional setting, we are able to prove classical conjectures on the asymptotic behaviour of these properties for many interesting distributions. Finally, we propose a method to construct sequences of asymptotically optimal codebooks for random elements in infinite-dimensional Banach spaces and apply this method to construct codebooks for stochastic processes such as fractional Brownian motions.
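In finite dimension, a standard constructive route to (locally) optimal codebooks is the Lloyd iteration, which alternates between the Voronoi partition induced by the current codebook and the conditional means of its cells. The following sketch applies it to samples of a two-dimensional standard Gaussian; it is only meant to illustrate the notions of Voronoi cells and quantization error used above, not the constructions of the thesis.

import numpy as np

rng = np.random.default_rng(0)

def lloyd(samples, N, iters=50):
    """Lloyd iteration for an N-point codebook minimizing the empirical
    mean squared quantization error of the given samples."""
    codebook = samples[rng.choice(len(samples), N, replace=False)].copy()
    for _ in range(iters):
        # Voronoi step: assign every sample to its nearest codepoint
        d = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(axis=1)
        # centroid step: move every codepoint to the conditional mean of its cell
        for j in range(N):
            cell = samples[idx == j]
            if len(cell) > 0:
                codebook[j] = cell.mean(axis=0)
    d = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook, d.min(axis=1).mean()

# empirical stand-in for a two-dimensional standard Gaussian (illustrative)
samples = rng.standard_normal((20000, 2))
codebook, mse = lloyd(samples, N=8)
print(codebook, mse)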
Variational inequality problems constitute a common basis for investigating the theory and algorithms of many problems in mathematical physics, in economics, and in the natural and technical sciences. They appear in a variety of mathematical applications like convex programming, game theory and economic equilibrium problems, but also in fluid mechanics, the mechanics of solids and others. Many variational inequalities arising from applications are ill-posed. This means, for example, that the solution is not unique, or that small deviations in the data can cause large deviations in the solution. In such a situation, standard solution methods converge very slowly or even fail. In this case, so-called regularization methods are the methods of choice. They have the advantage that an ill-posed original problem is replaced by a sequence of well-posed auxiliary problems which have better properties (like, e.g., a unique solution and better conditioning). Moreover, a suitable choice of the regularization term can lead to unconstrained auxiliary problems that are even equivalent to optimization problems. The development and improvement of such methods are a focus of current research, to which this thesis contributes. We suggest and investigate a logarithmic-quadratic proximal auxiliary problem (LQPAP) method that combines the advantages of the well-known proximal-point algorithm and the so-called auxiliary problem principle. Its investigation and convergence analysis constitute one of the main results of this work. The LQPAP method continues the recent developments of regularization methods and includes different techniques presented in the literature to improve numerical stability: the logarithmic-quadratic distance function provides an interior-point effect which allows the auxiliary problems to be treated as unconstrained. Furthermore, outer operator approximations are considered; this simplifies the numerical solution of variational inequalities with multi-valued operators since, for example, bundle techniques can be applied. With respect to numerical practicability, inexact solutions of the auxiliary problems are allowed, using a summable-error criterion that is easy to implement. As a further advantage of the logarithmic-quadratic distance we verify that it is self-concordant (in the sense of Nesterov/Nemirovskii). This motivates the application of Newton's method for the solution of the auxiliary problems. In the numerical part of the thesis, the LQPAP method is applied to linearly constrained differentiable and nondifferentiable convex optimization problems, as well as to nonsymmetric variational inequalities with co-coercive operators. It can often be observed that the sequence of iterates reaches the boundary of the feasible set before being close to an optimal solution. Against this background, we present the strategy of under-relaxation, which robustifies the LQPAP method. Furthermore, we compare the results with an appropriate method based on Bregman distances (BrPAP method). For differentiable convex optimization problems we describe the implementation of Newton's method to solve the auxiliary problems and carry out different numerical experiments; for example, an adaptive choice of the initial regularization parameter and a combination of an Armijo and a self-concordance step size are evaluated. Test examples for nonsymmetric variational inequalities are hardly available in the literature.
Therefore, we present a geometric and an analytic approach to generate test examples with known solution(s). To solve the auxiliary problems in the case of nondifferentiable convex optimization problems we apply the well-known bundle technique. The implementation is described in detail and the involved functions and sequences of parameters are discussed. As far as possible, our analysis is substantiated by new theoretical results. Furthermore, it is explained in detail how the bundle auxiliary problems are solved with a primal-dual interior point method; such investigations have so far only been published for Bregman distances. The LQPAP bundle method is again applied to several test examples from the literature. Thus, this thesis builds a bridge between theoretical and numerical investigations of solution methods for variational inequalities.
The first example of a so-called universal holomorphic function goes back to Birkhoff, who in 1929 proved the existence of an entire function that, in a certain sense, can approximate every entire function by suitable translations. Since then, the area of "universal approximation" has developed into a field of its own within complex approximation theory, and there is a large number of results on universal functions. These results, however, are almost exclusively concerned with holomorphic and entire functions; in particular, the class of meromorphic functions has hardly been examined for the phenomenon of universality so far. The present thesis deals with universal meromorphic approximation and investigates whether meromorphic functions with certain universality properties exist and whether the classical results from universal holomorphic approximation can be extended to the meromorphic case. First, a distinction is made between universality with respect to translations and with respect to dilations, and it is proved that in both cases a residual set of universal functions exists in the space of meromorphic functions. The properties of these functions are then studied in detail. Subsequently, meromorphic functions are examined for universality of derivatives. On the one hand, it is shown that in general no positive results are possible; on the other hand, a special class of meromorphic functions is considered for which universal behavior of the successive derivatives can be established.
Recently, optimization has become an integral part of the aerodynamic design process chain. However, because of uncertainties with respect to the flight conditions and geometrical uncertainties, a design optimized by a traditional design optimization method seeking only optimality may not achieve its expected performance. Robust optimization deals with optimal designs that are robust with respect to small (or even large) perturbations of the optimization setpoint conditions. The resulting optimization tasks become much more complex than the usual single-setpoint case, so that efficient and fast algorithms need to be developed in order to identify, quantify and include the uncertainties in the overall optimization procedure. In this thesis, a novel approach towards stochastically distributed aleatory uncertainties for the specific application of optimal aerodynamic design under uncertainties is presented. In order to include the uncertainties in the optimization, robust formulations of the general aerodynamic design optimization problem based on probabilistic models of the uncertainties are discussed. Three classes of formulations of the aerodynamic shape optimization problem are identified: the worst-case, the chance-constrained and the semi-infinite formulation (sketched after this abstract). Since the worst-case formulation may lead to overly conservative designs, the focus of this thesis is on the chance-constrained and semi-infinite formulations. A key issue is then to propagate the input uncertainties through the system to obtain statistics of quantities of interest, which are used as a measure of robustness in both robust counterparts of the deterministic optimization problem. Due to the highly nonlinear underlying design problem, uncertainty quantification methods are used in order to approximate and consequently simplify the problem to a solvable optimization task. This leads to computationally demanding evaluations of high-dimensional integrals, resulting from the direct approximation of statistics as well as from uncertainty quantification approximations. To overcome the curse of dimensionality, sparse grid methods in combination with adaptive refinement strategies are applied. The reduction of the number of discretization points is an important issue in the context of robust design, since the computational effort of the numerical quadrature is incurred in every iteration of the optimization algorithm. In order to efficiently solve the resulting optimization problems, algorithmic approaches based on multiple-setpoint ideas in combination with one-shot methods are presented. A parallelization approach is provided to handle the additional computational effort involved in multiple-setpoint optimization problems. Finally, the developed methods are applied to 2D and 3D Euler and Navier-Stokes test cases, verifying their industrial usability and reliability. Numerical results of robust aerodynamic shape optimization under uncertain flight conditions as well as geometrical uncertainties are presented. Further, uncertainty quantification methods are used to investigate the influence of geometrical uncertainties on quantities of interest in a 3D test case. The results demonstrate the significant effect of uncertainties in the context of aerodynamic design and thus the need for robust design to ensure good performance under real-life conditions.
The thesis proposes a general framework for robust aerodynamic design attacking the additional computational complexity of the treatment of uncertainties, thus making robust design in this sense possible.
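To make the three classes of robust formulations referred to in this abstract explicit, commonly used prototypes read as follows (a sketch with assumed notation, not necessarily the exact formulations of the thesis; y denotes the design variables, p the uncertain parameters ranging over a set U, f the objective and c the constraints): the worst-case formulation \min_y \max_{p \in U} f(y,p) subject to c(y,p) \geq 0 for all p \in U; the chance-constrained formulation \min_y E[f(y,p)] subject to P(c(y,p) \geq 0) \geq 1 - \varepsilon; and the semi-infinite formulation \min_y E[f(y,p)] subject to c(y,p) \geq 0 for all p \in U. The expectations and probabilities appearing here are the high-dimensional integrals that are approximated by the adaptive sparse-grid quadrature mentioned above.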
Extension of inexact Kleinman-Newton methods to a general monotonicity preserving convergence theory
(2011)
The thesis at hand considers inexact Newton methods in combination with the algebraic Riccati equation. A monotone convergence behaviour is proven, which enables non-local convergence. This relation is then transferred to a general convergence theory for inexact Newton methods that secures the monotonicity of the iterates for convex or concave mappings. Several applications demonstrate the practical benefits of the newly developed theory.
The ménage hit polynomials arise in a natural way from the ménage numbers that occur in combinatorics. A connection to a certain class of hypergeometric polynomials leads to the study of special sequences of polynomials of type 3F1. Using a modification of the complex Laplace method for the uniform asymptotic evaluation of parameter integrals, together with some tools from potential theory in the complex plane, strong and weak asymptotics for the polynomial sequences in question are derived.
This thesis deals with (frequently) universal functions with respect to differential operators and weighted shift operators. A characteristic of functions of exponential type is investigated that has not been considered in the context of universality before: the conjugate indicator diagram. This is a compact and convex set associated with a function of exponential type that allows certain conclusions about the growth and the possible distribution of zeros. By means of a special transformation, (frequently) universal functions of exponential type with respect to different differential operators are mapped onto each other. This makes it possible to localize precisely the conjugate indicator diagrams of possible (frequently) universal functions for these operators. By conjugating differentiation with weighted shift operators via the Hadamard product, a localization of the possible conjugate indicator diagrams of their (frequently) universal functions is also obtained for these operators.
This thesis introduces a calibration problem for financial market models based on a Monte Carlo approximation of the option payoff and a discretization of the underlying stochastic differential equation (SDE). It is desirable to benefit from fast deterministic optimization methods to solve this problem. To achieve this goal, possible non-differentiabilities are smoothed out with an appropriately chosen, twice continuously differentiable polynomial. On the basis of the calibration problem derived in this way, this work is essentially concerned with two issues. First, the question arises whether a computed solution of the approximating problem, obtained by applying Monte Carlo, discretizing the SDE and preserving differentiability, is an approximation of a solution of the true problem. Unfortunately, this does not hold in general but is linked to certain assumptions. It turns out that uniform convergence of the approximated objective function and its gradient to the true objective and gradient can be shown under typical assumptions, for instance Lipschitz continuity of the SDE coefficients. This uniform convergence then allows convergence of the solutions to be shown in the sense of a first-order critical point. Furthermore, an order of this convergence in relation to the number of simulations, the step size of the SDE discretization and the parameter controlling the smooth approximation of non-differentiabilities is derived. Additionally, the uniqueness of a solution of the stochastic differential equation is analyzed in detail. Secondly, the Monte Carlo method converges only very slowly. The numerical results in this thesis show that the Monte Carlo based calibration is indeed feasible as far as the computed solution is concerned, but the required computation time is too long for practical applications; thus, techniques to speed up the calibration are strongly desired. As already mentioned above, the gradient of the objective is a starting point to improve efficiency. Due to its simplicity, finite differencing is a frequently chosen method to calculate the required derivatives. However, finite differences are well known to be slow and, furthermore, severe instabilities may occur during optimization which can lead to a breakdown of the algorithm before convergence has been reached. A sensitivity equation is certainly an improvement in this respect but unfortunately requires the same computational effort as the finite difference method. Thus, an adjoint-based gradient calculation is the method of choice, as it combines the exactness of the derivative with a reduced computational effort. Furthermore, several other techniques that enhance the efficiency of the calibration algorithm are introduced throughout this thesis. A multi-layer method is very effective in case the chosen initial value is not already close to the solution. Variance reduction techniques are helpful to increase the accuracy of the Monte Carlo estimator and thus allow for fewer simulations. Storing instead of regenerating the random numbers required for the Brownian increments in the SDE is efficient, as deterministic optimization methods require the identical random sequence in each function evaluation anyway. Finally, Monte Carlo is very well suited for parallelization, which is carried out on several central processing units (CPUs).
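The structure of such a Monte Carlo based calibration objective can be sketched as follows: an Euler-Maruyama discretization of the SDE with a fixed, stored set of Brownian increments, a smoothed call payoff, and a least-squares comparison with market prices. Everything in this sketch is a simplifying assumption for illustration only (a Black-Scholes-type drift and diffusion, a piecewise-quadratic C^1 stand-in for the C^2 polynomial smoothing used in the thesis, dummy market data and hypothetical names).

import numpy as np

rng = np.random.default_rng(42)
M, n_steps, T, S0, r = 20000, 50, 1.0, 100.0, 0.01
dt = T / n_steps
dW = rng.standard_normal((M, n_steps)) * np.sqrt(dt)   # stored Brownian increments

strikes = np.array([90.0, 100.0, 110.0])
market_prices = np.array([13.5, 7.9, 4.1])              # dummy market data (illustrative)

def smooth_max(x, eps=1e-2):
    # piecewise-quadratic C^1 approximation of max(x, 0); stands in for the
    # C^2 polynomial smoothing used in the thesis
    return np.where(x <= 0, 0.0,
           np.where(x >= eps, x - eps / 2, x ** 2 / (2 * eps)))

def calib_objective(sigma):
    """Least-squares misfit between Monte Carlo model prices and market prices."""
    S = np.full(M, S0)
    for k in range(n_steps):                             # Euler-Maruyama scheme
        S = S + r * S * dt + sigma * S * dW[:, k]
    payoffs = smooth_max(S[:, None] - strikes[None, :])
    model_prices = np.exp(-r * T) * payoffs.mean(axis=0)
    return 0.5 * np.sum((model_prices - market_prices) ** 2)

print(calib_objective(0.2), calib_objective(0.3))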
The subject of this thesis is a homological approach to the splitting theory of PLS-spaces, i.e. to the question for which topologically exact short sequences 0->X->Y->Z->0 of PLS-spaces X, Y, Z the right-hand map admits a right inverse. We show that the category (PLS) of PLS-spaces and continuous linear maps is an additive category in which every morphism admits a kernel and a cokernel, i.e. it is pre-abelian. However, we also show that it is neither quasi-abelian nor semi-abelian. As a foundation for our homological constructions we prove the more general result that every pre-abelian category admits a largest exact structure in the sense of Quillen. In the pre-abelian category (PLS) this exact structure consists precisely of the topologically exact short sequences of PLS-spaces. Using a construction of Ext-functors due to Yoneda, we show that one can define, for each PLS-space A and every natural number k, the k-th abelian-group valued covariant and contravariant Ext-functors acting on the category (PLS) of PLS-spaces, which induce for every topologically exact short sequence of PLS-spaces a long exact sequence of abelian groups and group morphisms. These functors are studied in detail and we establish a connection between the Ext-functors of PLS-spaces and the Ext-functors for LS-spaces. Through this connection we arrive at an analogue of a result for Fréchet spaces which connects the first derived functor of the projective limit with the first Ext-functor and also gives sufficient conditions for the vanishing of the higher Ext-functors. Finally, we show that Ext^k(E,F) = 0 for every k greater than or equal to 1 whenever E is a closed subspace and F is a Hausdorff quotient of the space of distributions, which generalizes a result of Wengenroth that is itself a generalization of results due to Domanski and Vogt.
Large-scale non-parametric applied shape optimization for computational fluid dynamics is considered. Treating a shape optimization problem as a standard optimal control problem by means of a parameterization, the Lagrangian usually requires knowledge of the partial derivative of the shape parameterization and deformation chain with respect to the input parameters. For a variety of reasons, this mesh sensitivity Jacobian is usually quite problematic. For a sufficiently smooth boundary, the Hadamard theorem provides a gradient expression that exists on the surface alone, completely bypassing the mesh sensitivity Jacobian. Building upon this, the gradient computation becomes independent of the number of design parameters, and all surface mesh nodes are used as design unknowns in this work, effectively allowing a free morphing of shapes during optimization. Contrary to a parameterized shape optimization problem, where a smooth surface is usually created independently of the input parameters by construction, regularity is not preserved automatically in the non-parametric case. As part of this work, the shape Hessian is used in an approximate Newton method, also known as the Sobolev method or gradient smoothing, to ensure a certain regularity of the updates; thus a smooth shape is preserved while at the same time the one-shot optimization method is accelerated considerably. For PDE-constrained shape optimization, the Hessian usually is a pseudo-differential operator. Fourier analysis is used to identify the operator symbol both analytically and discretely. Preconditioning the one-shot optimization by an appropriate Hessian symbol is shown to greatly accelerate the optimization. As the correct discretization of the Hadamard form usually requires evaluating certain surface quantities such as tangential divergence and curvature, special attention is also given to discrete differential geometry on triangulated surfaces for evaluating shape gradients and Hessians. The Hadamard formula and Hessian approximations are applied to a variety of flow situations. In addition to shape optimization of internal and external flows, major focus lies on aerodynamic design such as optimizing two-dimensional airfoils and three-dimensional wings. Shock waves form when the local speed of sound is reached, and the gradient must be evaluated correctly at discontinuous states. To ensure proper shock resolution, an adaptive multi-level optimization of the Onera M6 wing is conducted using more than 36,000 shape unknowns on a standard office workstation, demonstrating the applicability of the shape one-shot method to industry-size problems.
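The gradient smoothing (approximate Newton or Sobolev) step mentioned above can be illustrated in its simplest form: given a raw shape gradient sampled on a closed curve with n nodes, one solves a system of the form (I - \epsilon\Delta) g_smooth = g_raw with a periodic discrete Laplacian. The following one-dimensional toy version is an assumption-laden sketch and does not reproduce the discrete differential geometry or the Hessian symbol preconditioning of the thesis.

import numpy as np

def sobolev_smooth(g_raw, eps=5.0):
    """Solve (I - eps * Laplacian) g_smooth = g_raw on a closed (periodic) curve."""
    n = len(g_raw)
    L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = 1.0                 # periodic closure of the curve
    return np.linalg.solve(np.eye(n) - eps * L, g_raw)

# noisy raw gradient on 200 surface nodes (illustrative)
n = 200
s = np.linspace(0, 2 * np.pi, n, endpoint=False)
g_raw = np.sin(2 * s) + 0.3 * np.random.default_rng(1).standard_normal(n)
g_smooth = sobolev_smooth(g_raw)
print(np.linalg.norm(g_raw - g_smooth))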
The thesis studies the question of how universal behavior is inherited by the Hadamard product. The type of universality considered here is universality by overconvergence; a definition will be given in chapter five. The situation can be described as follows: let f be a universal function and let g be a given function. Is the Hadamard product of f and g universal again? This question will be studied in chapter six. Starting with the Hadamard product for power series, a definition for a more general context must be provided. For plane open sets both containing the origin this has already been done, but in order to answer the above question it becomes necessary to have a Hadamard product for functions that are not holomorphic at the origin. The elaboration of such a Hadamard product and its properties is the second central part of this thesis; chapter three is concerned with them. The definition of such a Hadamard product follows the case already known: the Hadamard product is defined by a parameter integral. Crucial for this definition is the choice of appropriate integration curves; these are introduced in chapter two. By means of the properties of the Hadamard product it is possible to prove the Hadamard multiplication theorem and the Borel-Okada theorem. A generalization of these theorems will be presented in chapter four.
Given a null sequence h_n whose terms are all different from zero, a theorem of Marcinkiewicz states that there exist continuous functions f from the interval [0,1] to the real line that are maximally non-differentiable in the sense that for every measurable function g there is a subsequence n_k such that (f(x+h_{n_k})-f(x))/h_{n_k} converges to g almost surely. In the first part of this thesis we prove extensions of this theorem in several dimensions and analogues for functions in the complex plane. The second part of this thesis deals with operators that are closely related to the Korovkin theorem on positive linear operators. We show that there are operators L_n that each fail one of the properties in Korovkin's theorem and for which, at the same time, a residual set of functions f exists such that L_n f not only fails to converge to f but is even dense in the space of all continuous functions on the interval [0,1]. Similar phenomena are investigated for polynomial interpolation.
In this thesis, we investigate the quantization problem for Gaussian measures on Banach spaces by means of constructive methods. That is, for a random variable X and a natural number N, we are searching for those N elements in the underlying Banach space which give the best approximation to X in the average sense. We particularly focus on centered Gaussians on the space of continuous functions on [0,1] equipped with the supremum norm, since in that case all known methods failed to achieve the optimal quantization rate for important Gaussian processes. In fact, by means of spline approximations and a scheme based on best approximations in the sense of the Kolmogorov n-width, we are able to attain the optimal rate of convergence to zero for these quantization problems. Moreover, we establish a new upper bound for the quantization error, which is based on a very simple criterion, the modulus of smoothness of the covariance function. Finally, we explicitly construct such quantizers numerically.
Considering the numerical simulation of mathematical models, it is necessary to have efficient methods for computing special functions. We focus our considerations in particular on the classes of Mittag-Leffler and confluent hypergeometric functions. The thesis is structured in three parts. In the first part, entire functions are considered. If we look at the partial sums of the Taylor series with respect to the origin, we find that they typically only provide a reasonable approximation of the function in a small neighborhood of the origin. The main disadvantage of these partial sums are the cancellation errors which occur when computing in fixed-precision arithmetic outside this neighborhood. Therefore, our aim is to quantify and then to reduce this cancellation effect. In the next part we consider the Mittag-Leffler and the confluent hypergeometric functions in detail. Using the method developed in the first part, we can reduce the cancellation problems by "modifying" the functions on several parts of the complex plane. Finally, in the last part, two other approaches to compute Mittag-Leffler type and confluent hypergeometric functions are discussed. If we want to evaluate such functions on unbounded intervals or sectors in the complex plane, we have to consider methods like asymptotic expansions or continued fractions for arguments z that are large in modulus.
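The cancellation effect described for the Taylor partial sums can be reproduced with a few lines: the series E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha k + 1) is summed in double precision for a negative real argument, where the terms are large and alternate in sign. This is only an illustration of the problem; the remedies developed in the thesis are not reproduced here.

import numpy as np
from scipy.special import gammaln

def mittag_leffler_partial_sum(alpha, z, K=200):
    """Partial sum of E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1) in double precision."""
    k = np.arange(K + 1)
    # form the terms via log-Gamma, keeping track of the sign of z^k
    terms = np.sign(z) ** k * np.exp(k * np.log(abs(z)) - gammaln(alpha * k + 1))
    return terms.sum(), np.abs(terms).max()

# For alpha = 1, E_1(z) = exp(z); at z = -40 the true value is about 4.2e-18,
# but the largest term is of order 1e16, so almost all digits cancel.
value, largest_term = mittag_leffler_partial_sum(1.0, -40.0)
print(value, np.exp(-40.0), largest_term)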
The dissertation "Cross-Border-Leasing als Instrument der Kommunalfinanzierung - Eine finanzwirtschaftliche Analyse unter besonderer Berücksichtigung der Risiken" uses the example of primarily tax-induced cross-border leasing (CBL) to analyze an innovative, structured financing instrument that sits in the tension between the rule of law and the private-sector management of public actors. In such transactions, assets that are already financed and in operation are placed into variations of long-term lease contracts. By cleverly exploiting tax attribution criteria across several jurisdictions, opportunities for profit shifting and tax optimization are created, and the additional returns generated are shared among the actors. The study follows a comprehensive set of guiding research questions that examine the complex aspects of CBL, in a multi-layered and interdisciplinary fashion, both theoretically and practically by means of a case study. First, CBL is embedded in its municipal context. This is followed by a presentation of the subject of investigation with respect to its basic structure, cash flows, contracting parties and their bilateral interdependencies. In addition, the public-law implications of CBL and the regulatory requirements of municipal supervision are analyzed. In the central empirical part of the dissertation, an idealized CBL transaction of a major German city is analyzed as a case study: in a first scholarly analysis of an original transaction documentation, the structural parameters are examined in order to determine the financial advantage of the transaction. The risks are classified into those that lie directly within the municipality's sphere of influence and can thus be minimized or avoided by its own active measures, and those that are external from its point of view. The risk analysis is rounded off by an estimate of the maximum risk position in the form of the compensation payments the municipality has to make in contractually specified cases. The author determines the break-even point of the transaction and uses scenarios and mathematical models to weigh the inherent risks, in terms of their cost consequences, carefully against the short-term advantage received. The study employs the established mathematical-statistical value-at-risk (VaR) approach, which quantifies market price risk using probability distributions. To obtain valid results, the two best-known (non-parametric) VaR tools are applied to estimate the potential fluctuations of the deposit value under given probabilities: historical simulation and the mathematically more demanding Monte Carlo simulation. As an extension of the VaR model, the conditional VaR is also computed, which provides information about the magnitude of the expected losses. From these results, the maximum financial risk position of the municipality, with respect to the capital deposit, is derived.
In addition, the CBL is assessed as a whole within a mathematical model by comparing the financial advantage received with the default risks weighted by their probabilities of occurrence, taking into account the respective time of occurrence. This procedure combines the financial advantage with the risk measures VaR, expected shortfall and expected loss. The resulting financial risk measures lead to surprising results that clearly refute the claimed absence of risk and the supposedly attractive return potential of such transactions. From these findings, the author derives practical recommendations for action and hedging possibilities for municipal decision-makers. Finally, the outlook describes the consequences of the change in US tax law of February 2005 for existing transactions as well as for new business.
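The two non-parametric VaR tools mentioned above (historical simulation and Monte Carlo simulation) and the conditional VaR reduce, for a given sample of portfolio value changes, to a quantile and a tail mean. The following sketch illustrates this on normally distributed Monte Carlo profit-and-loss values; the distribution, all parameter values and the file name in the comment are illustrative assumptions only.

import numpy as np

def var_cvar(pnl, level=0.99):
    """Value-at-Risk and conditional VaR (expected shortfall) of a P&L sample.

    pnl: simulated or historical profit-and-loss values (losses are negative).
    """
    losses = -np.asarray(pnl)
    var = np.quantile(losses, level)            # loss exceeded with probability 1 - level
    cvar = losses[losses >= var].mean()         # average loss beyond the VaR
    return var, cvar

# Monte Carlo simulation: normally distributed one-period value changes (illustrative)
rng = np.random.default_rng(7)
pnl_mc = rng.normal(loc=0.0, scale=1.0e6, size=100000)
print(var_cvar(pnl_mc))

# historical simulation would apply var_cvar to observed value changes instead, e.g.
# pnl_hist = np.loadtxt("portfolio_value_changes.csv")   # hypothetical data file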
The subject of this thesis is hypercyclic, mixing, and chaotic C0-semigroups on Banach spaces. After introducing the relevant notions and giving some examples, the so-called hypercyclicity criterion and its relation with weak mixing is treated. Some new equivalent formulations of the criterion are given, which are used to derive a very short proof of the well-known fact that a C0-semigroup is weakly mixing if and only if each of its operators is. Moreover, it is proved that under some "regularity conditions" each hypercyclic C0-semigroup is weakly mixing. Furthermore, it is shown that for a hypercyclic C0-semigroup there is always a dense set of hypercyclic vectors having infinitely differentiable trajectories. Chaotic C0-semigroups are also considered. It is proved that they are always weakly mixing and that in certain cases chaoticity is already implied by the existence of a single periodic point. Moreover, it is shown that strongly elliptic differential operators on bounded C^1-domains never generate chaotic C0-semigroups. A thorough investigation of transitivity, weak mixing, and mixing of weighted composition operators follows, and complete characterisations of these properties are derived. These results are then used to completely characterise hypercyclicity, weak mixing, and mixing of C0-semigroups generated by first-order partial differential operators. Moreover, a characterisation of chaos for these C0-semigroups is attained. All these results are achieved on spaces of p-integrable functions as well as on spaces of continuous functions, and they are illustrated by various concrete examples.