### Refine

#### Document Type

- Doctoral Thesis (59)
- Habilitation (2)

#### Keywords

- Optimierung (7)
- Approximationstheorie (6)
- Funktionentheorie (6)
- Partielle Differentialgleichung (6)
- Universalität (6)
- Approximation (5)
- Funktionalanalysis (5)
- universal functions (5)
- Numerische Strömungssimulation (4)
- Optimale Kontrolle (4)

#### Institute

- Mathematik (61)

Optimal Control of Partial Integro-Differential Equations and Analysis of the Gaussian Kernel (2018)

An important field of applied mathematics is the simulation of complex financial, mechanical, chemical, physical or medical processes with mathematical models. In addition to the pure modeling of the processes, the simultaneous optimization of an objective function by changing the model parameters is often the actual goal. Models in fields such as finance, biology or medicine benefit from this optimization step.
While many processes can be modeled using an ordinary differential equation (ODE), partial differential equations (PDEs) are needed to optimize heat conduction and flow characteristics, the spreading of tumor cells in tissue as well as option prices. A partial integro-differential equation (PIDE) is a partial differential equation involving an integral operator, e.g., the convolution of the unknown function with a given kernel function. PIDEs occur, for example, in models that simulate adhesive forces between cells or option prices with jumps.
In each of the two parts of this thesis, a certain PIDE is the main object of interest. In the first part, we study a semilinear PIDE-constrained optimal control problem with the aim to derive necessary optimality conditions. In the second, we analyze a linear PIDE that includes the convolution of the unknown function with the Gaussian kernel.
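The convolution with a Gaussian kernel that appears in such PIDEs can be approximated numerically on a grid. The sketch below (hypothetical grid, kernel width and input data, chosen only for illustration, not taken from the thesis) discretizes the convolution integral with a rectangle rule:

```python
import math

def gaussian_kernel(x, sigma=1.0):
    """Gaussian kernel G_sigma(x) = exp(-x^2 / (2 sigma^2)) / (sigma sqrt(2 pi))."""
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def convolve(u, kernel, h):
    """Rectangle-rule approximation of (G * u)(x_i) = integral of G(x_i - y) u(y) dy
    on a uniform grid with spacing h."""
    n = len(u)
    return [h * sum(kernel((i - j) * h) * u[j] for j in range(n)) for i in range(n)]

# toy data: convolve a hat function on [-5, 5] with a Gaussian kernel
h = 0.1
grid = [i * h for i in range(-50, 51)]
u = [max(0.0, 1.0 - abs(x)) for x in grid]
v = convolve(u, lambda x: gaussian_kernel(x, sigma=0.5), h)
```

The convolution approximately preserves the integral of `u` while smoothing out its peak, which is the qualitative behaviour the integral term contributes to such equations.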

A matrix A is called completely positive if there exists an entrywise nonnegative matrix B such that A = BB^T. These matrices can be used to obtain convex reformulations of, for example, nonconvex quadratic or combinatorial problems. One of the main problems concerning completely positive matrices is checking whether a given matrix is completely positive; this is known to be NP-hard in general.

For a given completely positive matrix A, it is nontrivial to find a cp-factorization A = BB^T with nonnegative B, since this factorization would provide a certificate for the matrix being completely positive. This factorization is not only important for membership in the completely positive cone; it can also be used to recover the solution of the underlying quadratic or combinatorial problem.

In addition, it is not known a priori how many columns are necessary to generate a cp-factorization for a given matrix. The minimal possible number of columns is called the cp-rank of A, and so far it is still an open question how to derive the cp-rank for a given matrix. Some facts on completely positive matrices and the cp-rank will be given in Chapter 2.

Moreover, in Chapter 6, we will see a factorization algorithm which, for a given completely positive matrix A and a suitable starting point, computes the nonnegative factorization A = BB^T. The algorithm therefore returns a certificate for the matrix being completely positive. As introduced in Chapter 3, the fundamental idea of the factorization algorithm is to start from an initial square factorization, which is not necessarily entrywise nonnegative, and to extend this factorization to a matrix whose number of columns is greater than or equal to the cp-rank of A. The goal is then to transform this generated factorization into a cp-factorization.

This problem can be formulated as a nonconvex feasibility problem, as shown in Section 4.1, and solved by a method based on alternating projections, as proven in Chapter 6.

On the topic of alternating projections, a survey will be given in Chapter 5. Here we will see how to apply this technique to several types of sets, such as subspaces, convex sets, manifolds and semialgebraic sets. Furthermore, we will see some known facts on the convergence rate of alternating projections between these types of sets. Considering more than two sets yields the so-called cyclic projections approach, for which some known facts on subspaces and convex sets will be shown. Moreover, we will see a new convergence result on cyclic projections among a sequence of manifolds in Section 5.4.

In the context of cp-factorizations, a local convergence result for the introduced algorithm will be given. This result is based on the known convergence of alternating projections between semialgebraic sets.

To obtain cp-factorizations with this first method, it is necessary to solve a second-order cone problem in every projection step, which is very costly. Therefore, in Section 6.2, we will see an additional heuristic extension which improves the numerical performance of the algorithm. Extensive numerical tests in Chapter 7 will show that the factorization method is very fast in most instances. In addition, we will see how to derive a certificate for the matrix being an element of the interior of the completely positive cone.

As a further application, this method can be extended to find a symmetric nonnegative matrix factorization, where we consider an additional low-rank constraint. Here again, the method to derive factorizations for completely positive matrices can be used, albeit with some further adjustments, introduced in Section 8.1. Moreover, we will see that even for the general case of deriving a nonnegative matrix factorization for a given rectangular matrix A, the key aspects of the completely positive factorization approach can be used. To this end, it becomes necessary to extend the idea of finding a completely positive factorization such that it can be used for rectangular matrices. This yields an applicable algorithm for nonnegative matrix factorization in Section 8.2. Numerical results for this approach will suggest that the presented algorithms and techniques to obtain completely positive matrix factorizations can be extended to general nonnegative factorization problems.
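The alternating projections mechanism underlying the factorization method can be illustrated on a deliberately simple convex toy problem: projecting alternately onto a hyperplane and onto the nonnegative orthant until a point in their intersection is found. The sets used in the thesis (factorizations of a fixed matrix, nonnegative matrices) are nonconvex and more involved, so this is only a sketch of the projection mechanism, not of the cp-factorization algorithm itself:

```python
def project_affine(x, a, b):
    """Orthogonal projection of x onto the hyperplane {y : <a, y> = b}."""
    dot = sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    t = (dot - b) / norm2
    return [xi - t * ai for xi, ai in zip(x, a)]

def project_nonneg(x):
    """Orthogonal projection of x onto the nonnegative orthant."""
    return [max(0.0, xi) for xi in x]

def alternating_projections(x, a, b, iters=200):
    """Alternate between the two projections; for convex sets with nonempty
    intersection this converges to a point in the intersection."""
    for _ in range(iters):
        x = project_nonneg(project_affine(x, a, b))
    return x

# find a nonnegative point on the hyperplane x1 + x2 + x3 = 1
x = alternating_projections([-1.0, 2.0, -3.0], a=[1.0, 1.0, 1.0], b=1.0)
```

For nonconvex sets, as in the cp-factorization setting, only local convergence results of the kind proven in the thesis are available.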

We will consider discrete dynamical systems (X,T) which consist of a state space X and a linear operator T acting on X. Given a state x in X at time zero, its state at time n is determined by the n-th iterate T^n(x). We are interested in the long-term behaviour of this system, that is, we want to know how the sequence (T^n (x))_(n in N) behaves for increasing n and x in X. In the first chapter, we will sum up the relevant definitions and results of linear dynamics. In particular, in topological dynamics the notions of hypercyclic, frequently hypercyclic and mixing operators will be presented. In the setting of measurable dynamics, the most important definitions will be those of weakly and strongly mixing operators. If U is an open set in the (extended) complex plane containing 0, we can define the Taylor shift operator on the space H(U) of functions f holomorphic in U as Tf(z) = (f(z) - f(0))/z if z is not equal to 0 and Tf(0) = f'(0) otherwise. In the second chapter, we will start examining the Taylor shift on H(U) endowed with the topology of locally uniform convergence. Depending on the choice of U, we will study whether or not the Taylor shift is weakly or strongly mixing in the Gaussian sense. Next, we will consider Banach spaces of functions holomorphic on the unit disc D. The first section of this chapter will sum up the basic properties of Bergman and Hardy spaces in order to analyse the dynamical behaviour of the Taylor shift on these Banach spaces in the next part. In the third section, we study the space of Cauchy transforms of complex Borel measures on the unit circle, first endowed with the quotient norm of the total variation and then with a weak-* topology. While the Taylor shift is not even hypercyclic in the first case, we show that it is mixing in the latter case.
In Chapter 4, we will first introduce Bergman spaces A^p(U) for general open sets and provide approximation results which will be needed in the next chapter, where we examine the dynamical properties of the Taylor shift on these spaces. In particular, for 1<=p<2 we will find sufficient conditions for the Taylor shift to be weakly mixing or strongly mixing in the Gaussian sense. For p>=2, we consider specific Cauchy transforms in order to determine open sets U such that the Taylor shift is mixing on A^p(U). In both sections, we will illustrate the results with appropriate examples. Finally, we apply our results to universal Taylor series. The results of Chapter 5 about the Taylor shift allow us to consider the behaviour of the partial sums of the Taylor expansion of functions in general Bergman spaces outside the disc of convergence.
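On the level of Taylor coefficients, the Taylor shift acts as the backward shift on the coefficient sequence, which is easy to verify numerically. A minimal sketch, working with finite truncations only (not with the operator on H(U) itself):

```python
def taylor_shift(coeffs):
    """Taylor shift on coefficient sequences: if f(z) = sum_k a_k z^k, then
    (Tf)(z) = (f(z) - f(0))/z has the shifted coefficients (a_1, a_2, ...)."""
    return coeffs[1:]

def evaluate(coeffs, z):
    """Evaluate the (truncated) Taylor series at z via the Horner scheme."""
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * z + a
    return acc

# f(z) = 1 + 2z + 3z^2, so Tf(z) = (f(z) - f(0))/z = 2 + 3z
f = [1.0, 2.0, 3.0]
lhs = (evaluate(f, 0.5) - f[0]) / 0.5   # (f(z) - f(0))/z at z = 0.5
rhs = evaluate(taylor_shift(f), 0.5)    # (Tf)(0.5)
```

Both evaluations agree, confirming that the analytic definition and the coefficient shift describe the same operator on polynomials.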

Given a compact set K in R^d, the theory of extension operators examines the question under which conditions on K the linear and continuous restriction operators r_n:E^n(R^d)→E^n(K), f↦(∂^α f|_K)_{|α|≤n}, n in N_0, and r:E(R^d)→E(K), f↦(∂^α f|_K)_{α in N_0^d}, have a linear and continuous right inverse. This inverse is called an extension operator, and this problem is known as Whitney's extension problem, named after Hassler Whitney. In this context, E^n(K) and E(K) denote spaces of Whitney jets of order n and of infinite order, respectively. By E^n(R^d) and E(R^d), we denote the spaces of n-times and infinitely often continuously partially differentiable functions on R^d, respectively. Whitney completely solved the question for finite order. He showed that it is always possible to construct a linear and continuous right inverse E_n for r_n. This work is concerned with the question of how the existence of a linear and continuous right inverse of r, fulfilling certain continuity estimates, can be characterized by properties of K. On E(K), we introduce a full real scale of generalized Whitney seminorms (|·|_{s,K})_{s≥0}, where |·|_{s,K} coincides with the classical Whitney seminorms for s in N_0. We also equip E(R^d) with a family (|·|_{s,L})_{s≥0} of such seminorms, where L is a compact set whose interior contains K. This family of seminorms on E(R^d) suffices to characterize the continuity properties of an extension operator E, since we can assume without loss of generality that E(E(K)) in D^s(L).
In Chapter 2, we introduce basic concepts and summarize the classical results of Whitney and Stein.
In Chapter 3, we modify the classical construction of Whitney's operators E_n and show that |E_n(·)|_{s,L}≤C|·|_{s,K} for s in[n,n+1).
In Chapter 4, we generalize a result of Frerick, Jordá and Wengenroth and show that LMI(1) for K implies the existence of an extension operator E without loss of derivatives, i.e. it fulfils |E(·)|_{s,L}≤C|·|_{s,K} for all s≥0. We show that a large class of self-similar sets, which includes the Cantor set and the Sierpinski triangle, admits an extension operator without loss of derivatives.
In Chapter 5, we generalize a result of Frerick, Jordá and Wengenroth and show that WLMI(r) for r≥1 implies the existence of a tame linear extension operator E having a homogeneous loss of derivatives, such that |E(·)|_{s,L}≤C|·|_{(r+ε)s,K} for all s≥0 and all ε>0.
In the last chapter we characterize the existence of an extension operator having an arbitrary loss of derivatives by the existence of measures on K.

Industrial companies mainly aim to increase their profit. That is why they intend to reduce production costs without sacrificing quality. Furthermore, in the context of the 2020 energy targets, energy efficiency plays a crucial role. Mathematical modeling, simulation and optimization tools can contribute to the achievement of these industrial and environmental goals. The process of white wine fermentation holds a huge potential for saving energy. In this thesis, mathematical modeling, simulation and optimization tools are customized to the needs of this biochemical process and applied to it. Two different models are derived that represent the process as it can be observed in real experiments. One model takes the growth, division and death behavior of the single yeast cell into account. This is modeled by a partial integro-differential equation together with several ordinary integro-differential equations describing the development of the other substrates involved. The other model, described by ordinary differential equations, represents the growth and death behavior of the yeast concentration and the development of the other substrates involved. The more detailed model is investigated analytically and numerically: existence and uniqueness of solutions are studied and the process is simulated. These investigations initiate a discussion regarding the additional benefit of this model compared to the simpler one. For optimization, the process is described by the less detailed model. The process is identified by a parameter and state estimation problem. The energy and quality targets are formulated in the objective function of an optimal control or model predictive control problem controlling the fermentation temperature. This means that cooling during the process of wine fermentation is controlled. Parameter and state estimation with nonlinear economic model predictive control is applied in two experiments.
For the first experiment, the optimization problems are solved by multiple shooting with a backward differentiation formula method for the discretization of the problem and a sequential quadratic programming method with a line search strategy and a Broyden-Fletcher-Goldfarb-Shanno update for the solution of the constrained nonlinear optimization problems. Different rounding strategies are applied to the resulting post-fermentation control profile. Furthermore, a quality assurance test is performed. The outcomes of this experiment are remarkable energy savings and tasty wine. For the next experiment, some modifications are made, and the optimization problems are solved by using direct transcription via orthogonal collocation on finite elements for the discretization and an interior-point filter line-search method for the solution of the constrained nonlinear optimization problems. The second experiment verifies the results of the first experiment. This means that by the use of this novel control strategy energy conservation is ensured and production costs are reduced. From now on tasty white wine can be produced at a lower price and with a clearer conscience at the same time.
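A minimal sketch of the less detailed model class (ordinary differential equations with a temperature-dependent growth rate, integrated by forward Euler) illustrates why the fermentation temperature is a natural control variable. All equations and parameter values below are hypothetical stand-ins for illustration, not the model of the thesis:

```python
import math

def simulate(T_control, days=10.0, dt=0.01):
    """Forward-Euler simulation of a toy fermentation model: yeast X grows on
    sugar S with Monod kinetics and produces ethanol E; the growth rate depends
    on the controlled temperature T via an Arrhenius-like factor.
    All parameter values are hypothetical."""
    X, S, E = 0.1, 200.0, 0.0                  # g/L biomass, sugar, ethanol
    mu_max, K_S, Y_XS, Y_ES = 0.5, 10.0, 0.05, 0.45
    t = 0.0
    for _ in range(int(days / dt)):
        T = T_control(t)
        rate_factor = math.exp(0.06 * (T - 20.0))   # toy temperature dependence
        mu = mu_max * rate_factor * S / (K_S + S)   # Monod growth rate
        dX = mu * X
        dS = -dX / Y_XS                             # sugar consumed by growth
        dE = -Y_ES * dS                             # ethanol produced from sugar
        X, S, E = X + dt * dX, max(0.0, S + dt * dS), E + dt * dE
        t += dt
    return X, S, E

# cooler fermentation proceeds more slowly, leaving more residual sugar
X_warm, S_warm, E_warm = simulate(lambda t: 25.0)
X_cool, S_cool, E_cool = simulate(lambda t: 15.0)
```

An optimal control problem of the kind described above would choose the temperature profile `T_control` to trade off cooling energy against fermentation targets.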

This thesis is divided into three main parts: the description of the calibration problem, the numerical solution of this problem and the connection to optimal stochastic control problems. Fitting model prices to given market prices leads to an abstract least squares formulation as the calibration problem. The corresponding option price can be computed by solving a stochastic differential equation via the Monte-Carlo method, which seems to be preferred by most practitioners. Because the Monte-Carlo method is expensive in terms of computational effort and memory, more sophisticated stochastic predictor-corrector schemes are established in this thesis. The numerical advantage of these predictor-corrector schemes is presented and discussed. The adjoint method is applied to the calibration, and its theoretical advantage is discussed in detail. It is shown that the computational effort of gradient calculation via the adjoint method is independent of the number of calibration parameters. Numerical results confirm the theoretical results and summarize the computational advantage of the adjoint method. Furthermore, the connection to optimal stochastic control problems is proven in this thesis.
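A stochastic predictor-corrector scheme of the kind discussed here can be sketched for geometric Brownian motion: an Euler predictor followed by a trapezoidal correction of the drift term. The scheme and parameters below are a generic textbook variant chosen for illustration, not necessarily the schemes established in the thesis:

```python
import math, random

def predictor_corrector_gbm(s0, r, sigma, T, n_steps, rng):
    """One path of dS = r S dt + sigma S dW: Euler predictor, then a corrector
    that averages the drift at the old and the predicted state (diffusion
    term kept explicit)."""
    dt = T / n_steps
    s = s0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        s_pred = s + r * s * dt + sigma * s * dw                   # predictor
        s = s + 0.5 * (r * s + r * s_pred) * dt + sigma * s * dw   # corrector
    return s

rng = random.Random(0)
paths = [predictor_corrector_gbm(1.0, 0.05, 0.2, 1.0, 50, rng) for _ in range(10000)]
mc_mean = sum(paths) / len(paths)
# for geometric Brownian motion, E[S_T] = S_0 * exp(r T)
```

The Monte-Carlo mean of the terminal value should reproduce `exp(r*T)` up to statistical error, which is a quick sanity check for the weak accuracy of such a scheme.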

In this thesis, we present a new approach for estimating the effects of wind turbines on a local bat population. We build an individual-based model (IBM) which simulates the movement behaviour of every single bat of the population with its own preferences, foraging behaviour and other species characteristics. This behaviour is averaged by a Monte-Carlo simulation, which gives us the mean behaviour of the population. The result is an occurrence map of the considered habitat which tells us how often the bats, and therefore the considered bat population, frequent each region of this habitat. Hence, it is possible to estimate the crossing rate at the position of an existing or potential wind turbine. We compare this individual-based approach with a method based on partial differential equations. This second approach requires less computational effort but, unfortunately, we lose information about the movement trajectories at the same time. Additionally, the PDE-based model only gives us a density profile; hence, we lose the information on how often each bat crosses particular points in the habitat in one night.

In a next step, we predict the average number of fatalities for each wind turbine in the habitat, depending on the type of the wind turbine and the behaviour of the considered bat species. This gives us the extra mortality caused by the wind turbines for the local population. This value is used in a population model, and finally we can calculate whether the population still grows or whether there already is a decline in population size which leads to the extinction of the population.

Using the combination of all these models, we are able to evaluate the conflict between wind turbines and bats and to predict its outcome. Furthermore, it is possible to find better positions for wind turbines such that the local bat population has a better chance to survive.

Since bats tend to move in swarm formations under certain circumstances, we introduce swarm simulation using partial integro-differential equations. Thereby, we take a closer look at existence and uniqueness properties of solutions.
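The individual-based idea (simulate many bats, count cell visits, read off an occurrence map and a crossing rate at a turbine position) can be sketched with a plain lattice random walk. Grid size, roost position and turbine cell below are hypothetical, and real IBMs use far richer movement rules:

```python
import random

def occurrence_map(width, height, n_bats, n_steps, roost, rng):
    """Toy individual-based model: each bat performs a lattice random walk
    starting from its roost; the map counts how often each cell is visited."""
    counts = [[0] * width for _ in range(height)]
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_bats):
        x, y = roost
        for _ in range(n_steps):
            dx, dy = rng.choice(moves)
            x = min(max(x + dx, 0), width - 1)   # clamp to the habitat
            y = min(max(y + dy, 0), height - 1)
            counts[y][x] += 1
    return counts

rng = random.Random(1)
counts = occurrence_map(21, 21, n_bats=200, n_steps=300, roost=(10, 10), rng=rng)

# crossing rate at a hypothetical turbine cell, in visits per bat-night
turbine = (10, 12)
crossing_rate = counts[turbine[1]][turbine[0]] / 200
```

The PDE-based alternative mentioned above would instead evolve a density on the same grid, which is cheaper but discards the per-bat trajectory counts computed here.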

In this thesis, we investigate the optimization problem of the optimal material orientation of orthotropic materials in the outer shell of three-dimensional shell structures. The goal of the optimization is to minimize the overall compliance of the structure, which corresponds to the search for a design that is as stiff as possible. Both the mathematical and the mechanical foundations are compiled in compact form and, based on these, new extensions of pointwise formulated optimization methods are developed and implemented, both gradient-based ones and ones resting on mechanical principles. The presented methods are tested and compared on the model of an aircraft wing with a problem size relevant in practice. Finally, the investigated methods are examined in their coupling with a topology optimization method based on the topological gradient.

The main achievement of this thesis is an analysis of the accuracy of computations with Loader's algorithm for the binomial density. This analysis could later be used for a theorem about the numerical accuracy of algorithms that compute rectangle probabilities for scan statistics of a multinomially distributed random variable. An example that illustrates the practical use of probabilities for scan statistics, arising in epidemiology, is the following: let n patients arrive at a clinic in d = 365 days, each patient with probability 1/d on each of these d days and all patients independently of each other. Knowing the probability that there exist 3 adjacent days in which more than k patients arrive in total helps to decide, after observing data, whether there is a cluster that we would not expect to have occurred randomly and for which we therefore suspect a reason. Formally, this epidemiological example can be described by a multinomial model. As multinomially distributed random variables are examples of Markov increments, a fact already used implicitly by Corrado (2011) to compute the distribution function of the multinomial maximum, we can use a generalized version of Corrado's algorithm to compute the probability described in our example. To compute its result, the algorithm for rectangle probabilities for Markov increments always uses transition probabilities of the corresponding Markov chain. In the multinomial case, these transition probabilities are binomial probabilities. Therefore, we analyze the accuracy of Loader's algorithm for the binomial density, which is used, for example, by the statistical software R. With the help of accuracy bounds for the binomial density, we would be able to derive accuracy bounds for the computation of rectangle probabilities for scan statistics of multinomially distributed random variables.
To assess how sharp the derived accuracy bounds are, they can be compared in examples with rigorous upper and lower bounds obtained by interval-arithmetic computations.
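The scan-statistic probability from the clinic example can be approximated by Monte-Carlo simulation, which is a useful reference point for the exact algorithms discussed above. The patient number, window length and threshold below are chosen arbitrarily for illustration:

```python
import random

def scan_exceeds(counts, window, k):
    """True if some run of `window` adjacent days contains more than k patients."""
    s = sum(counts[:window])
    if s > k:
        return True
    for i in range(window, len(counts)):
        s += counts[i] - counts[i - window]   # slide the window by one day
        if s > k:
            return True
    return False

def mc_scan_probability(n, d, window, k, reps, rng):
    """Monte-Carlo estimate of P(some `window` adjacent days receive more than
    k patients) when n patients arrive uniformly and independently over d days."""
    hits = 0
    for _ in range(reps):
        counts = [0] * d
        for _ in range(n):
            counts[rng.randrange(d)] += 1     # multinomial arrival days
        if scan_exceeds(counts, window, k):
            hits += 1
    return hits / reps

rng = random.Random(0)
p = mc_scan_probability(n=100, d=365, window=3, k=5, reps=2000, rng=rng)
```

Unlike the exact Markov-increment algorithm, the Monte-Carlo estimate carries statistical error, but it makes the quantity in the example concrete.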

Shape optimization is of interest in many fields of application. In particular, shape optimization problems arise frequently in technological processes which are modelled by partial differential equations (PDEs). In a lot of practical circumstances, the shape under investigation is parametrized by a finite number of parameters, which, on the one hand, allows the application of standard optimization approaches, but, on the other hand, unnecessarily limits the space of reachable shapes. Shape calculus presents a way to circumvent this dilemma. However, so far shape optimization based on shape calculus is mainly performed using gradient descent methods. One reason for this is the lack of symmetry of second order shape derivatives or shape Hessians. A major difference between shape optimization and the standard PDE constrained optimization framework is the lack of a linear space structure on shape spaces. If one cannot use a linear space structure, then the next best structure is a Riemannian manifold structure, in which one works with Riemannian shape Hessians. They possess the often sought property of symmetry, characterize well-posedness of optimization problems and define sufficient optimality conditions. In general, shape Hessians are used to accelerate gradient-based shape optimization methods. This thesis deals with shape optimization problems constrained by PDEs and embeds these problems in the framework of optimization on Riemannian manifolds to provide efficient techniques for PDE constrained shape optimization problems on shape spaces. A Lagrange-Newton and a quasi-Newton technique in shape spaces for PDE constrained shape optimization problems are formulated. These techniques are based on the Hadamard-form of shape derivatives, i.e., on the form of integrals over the surface of the shape under investigation. It is often a very tedious, not to say painful, process to derive such surface expressions. 
Along the way, volume formulations in the form of integrals over the entire domain appear as an intermediate step. This thesis couples volume integral formulations of shape derivatives with optimization strategies on shape spaces in order to establish efficient shape algorithms reducing analytical effort and programming work. In this context, a novel shape space is proposed.

The present work considers the normal approximation of the binomial distribution and yields estimates of the supremum distance between the distribution functions of the binomial and the corresponding standardized normal distribution. The estimates are of the type of the classical Berry-Esseen theorem, in the special case that all random variables are identically Bernoulli distributed. In this case, we state the optimal constant for the Berry-Esseen theorem. In the proof of these estimates, several inequalities regarding the density as well as the distribution function of the binomial distribution are presented. Furthermore, in the estimates mentioned above, the distribution function is replaced by the probability of arbitrary intervals, not only unbounded ones, and for this new situation we also present an upper bound.
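The supremum distance studied here can be computed exactly for given n and p by evaluating the binomial distribution function at its jump points. A short sketch with parameters chosen only for illustration:

```python
import math

def binom_pmf(n, k, p):
    """Binomial probability P(S_n = k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sup_distance(n, p):
    """sup_x |P(S_n <= x) - Phi((x - np)/sqrt(np(1-p)))| for S_n ~ Bin(n, p).
    Since the binomial CDF is a step function, the supremum is attained at the
    jump points x = k, approached from the left or from the right."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    cdf, d = 0.0, 0.0
    for k in range(n + 1):
        g = phi((k - mu) / sigma)
        d = max(d, abs(cdf - g))        # left limit at the jump
        cdf += binom_pmf(n, k, p)
        d = max(d, abs(cdf - g))        # value at the jump
    return d

d25, d400 = sup_distance(25, 0.3), sup_distance(400, 0.3)
```

The distance shrinks as n grows, consistent with the O(1/sqrt(n)) rate of the Berry-Esseen theorem.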

Matching problems with additional resource constraints are generalizations of the classical matching problem. The focus of this work is on matching problems with two types of additional resource constraints: the couple constrained matching problem and the level constrained matching problem. The first is a matching problem with an additional set of equality constraints: each constraint demands that, for a given pair of edges, either both edges are in the matching or neither of them is. The second is a matching problem with a single additional equality constraint, which demands that an exact number of edges in the matching are so-called on-level edges. In a bipartite graph with fixed indices of the nodes, these are the edges whose end-nodes have the same index. As a central result concerning the couple constrained matching problem, we prove that this problem is NP-hard, even on bipartite cycle graphs. Concerning the complexity of the level constrained perfect matching problem, we show that it is polynomially equivalent to three other combinatorial optimization problems from the literature. For one of these problems, the restricted perfect matching problem, we investigate the effect of different combinations of fixed and variable parameters on its complexity. Further, the complexity of the assignment problem with an additional equality constraint is investigated. In a central part of this work, we bring couple constraints into connection with a level constraint. We introduce the couple and level constrained matching problem with on-level couples, which is a matching problem with a special case of couple constraints together with a level constraint imposed on it. We prove that the decision version of this problem is NP-complete. This shows that imposing the level constraint on a polynomially solvable problem can suffice to make it NP-hard.
This work also deals with the polyhedral structure of resource constrained matching problems. For the polytope corresponding to the relaxation of the level constrained perfect matching problem we develop a characterization of its non-integral vertices. We prove that for any given non-integral vertex of the polytope a corresponding inequality which separates this vertex from the convex hull of integral points can be found in polynomial time. Regarding the calculation of solutions of resource constrained matching problems, two new algorithms are presented. We develop a polynomial approximation algorithm for the level constrained matching problem on level graphs, which returns solutions whose size is at most one less than the size of an optimal solution. We then describe the Objective Branching Algorithm, a new algorithm for exactly solving the perfect matching problem with an additional equality constraint. The algorithm makes use of the fact that the weighted perfect matching problem without an additional side constraint is polynomially solvable. In the Appendix, experimental results of an implementation of the Objective Branching Algorithm are listed.
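For very small instances, the level constrained perfect matching problem can be solved by brute force over all perfect matchings of a complete bipartite graph, which makes the effect of the level constraint visible. The weight matrix below is hypothetical:

```python
from itertools import permutations

def max_weight_matching_with_level(weights, q):
    """Brute force over perfect matchings of a complete bipartite graph
    (row i matched to column pi(i)): maximise the total weight subject to the
    level constraint that exactly q edges are on-level, i.e. pi(i) == i."""
    n = len(weights)
    best, best_pi = None, None
    for pi in permutations(range(n)):
        on_level = sum(1 for i in range(n) if pi[i] == i)
        if on_level != q:
            continue
        w = sum(weights[i][pi[i]] for i in range(n))
        if best is None or w > best:
            best, best_pi = w, pi
    return best, best_pi

weights = [
    [5, 1, 2],
    [1, 4, 1],
    [3, 2, 1],
]
# the unconstrained optimum is the diagonal (3 on-level edges, weight 10);
# demanding exactly q = 1 on-level edge forces a different matching
best, pi = max_weight_matching_with_level(weights, q=1)
```

This enumeration is exponential in n; the point of the algorithms in this work is precisely to avoid it for the tractable cases.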

In the first part of this work, we generalize a method of building optimal confidence bounds provided in Buehler (1957) by specializing an exhaustive class of confidence regions inspired by Sterne (1954). The resulting confidence regions, also called Buehlerizations, are valid in general models and depend on a designated statistic that can be chosen according to some desired monotonicity behaviour of the confidence region. For a fixed designated statistic, the family of confidence regions obtained in this way, indexed by their confidence level, is nested. Buehlerizations furthermore have the optimality property of being the smallest (w.r.t. set inclusion) confidence regions that are increasing in their designated statistic. The theory is eventually applied to normal, binomial, and exponential samples. The second part deals with the statistical comparison of pairs of diagnostic tests and establishes relations 1. between the sets of lower confidence bounds, 2. between the sets of pairs of comparable lower confidence bounds, and 3. between the sets of admissible lower confidence bounds in various models for diverse parameters of interest.

In recent years, the study of dynamical systems has developed into a central research area in mathematics. Actually, in combination with keywords such as "chaos" or "butterfly effect", parts of this theory have been incorporated in other scientific fields, e.g. in physics, biology, meteorology and economics. In general, a discrete dynamical system is given by a set X and a self-map f of X. The set X can be interpreted as the state space of the system and the function f describes the temporal development of the system. If the system is in state x âˆˆ X at time zero, its state at time n âˆˆ N is denoted by f^n(x), where f^n stands for the n-th iterate of the map f. Typically, one is interested in the long-time behaviour of the dynamical system, i.e. in the behaviour of the sequence (f^n(x)) for an arbitrary initial state x âˆˆ X as the time n increases. On the one hand, it is possible that there exist certain states x âˆˆ X such that the system behaves stably, which means that f^n(x) approaches a state of equilibrium for nâ†’âˆž. On the other hand, it might be the case that the system runs unstably for some initial states x âˆˆ X so that the sequence (f^n(x)) somehow shows chaotic behaviour. In case of a non-linear entire function f, the complex plane always decomposes into two disjoint parts, the Fatou set F_f of f and the Julia set J_f of f. These two sets are defined in such a way that the sequence of iterates (f^n) behaves quite "wildly" or "chaotically" on J_f whereas, on the other hand, the behaviour of (f^n) on F_f is rather "nice" and well-understood. However, this nice behaviour of the iterates on the Fatou set can "change dramatically" if we compose the iterates from the left with just one other suitable holomorphic function, i.e. if we consider sequences of the form (gâˆ˜f^n) on D, where D is an open subset of F_f with f(D)âŠ‚ D and g is holomorphic on D. The general aim of this work is to study the long-time behaviour of such modified sequences. 
In particular, we will prove the existence of holomorphic functions g on D having the property that the behaviour of the sequence of compositions (gâˆ˜f^n) on the set D becomes quite similarly chaotic as the behaviour of the sequence (f^n) on the Julia set of f. With this approach, we immerse ourselves into the theory of universal families and hypercyclic operators, which itself has developed into an own branch of research. In general, for topological spaces X, Y and a family {T_i: i âˆˆ I} of continuous functions T_i:Xâ†’Y, an element x âˆˆ X is called universal for the family {T_i: i âˆˆ I} if the set {T_i(x): i âˆˆ I} is dense in Y. In case that X is a topological vector space and T is a continuous linear operator on X, a vector x âˆˆ X is called hypercyclic for T if it is universal for the family {T^n: n âˆˆ N}. Thus, roughly speaking, universality and hypercyclicity can be described via the following two aspects: There exists a single object which allows us, via simple analytical operations, to approximate every element of a whole class of objects. In the above situation, i.e. for a non-linear entire function f and an open subset D of F_f with f(D)âŠ‚ D, we endow the space H(D) of holomorphic functions on D with the topology of locally uniform convergence and we consider the map C_f:H(D)â†’H(D), C_f(g):=gâˆ˜f|_D, which is called the composition operator with symbol f. The transform C_f is a continuous linear operator on the Fréchet space H(D). In order to show that the above-mentioned "nice" behaviour of the sequence of iterates (f^n) on the set D âŠ‚ F_f can "change dramatically" if we compose the iterates from the left with another suitable holomorphic function, our aim consists in finding functions g âˆˆ H(D) which are hypercyclic for C_f. 
Indeed, for each function g that is hypercyclic for C_f, the set of compositions {g∘f^n|_D : n ∈ N} is dense in H(D), so that the sequence of compositions (g∘f^n|_D) is in a sense "maximally divergent", meaning that each function in H(D) can be approximated locally uniformly on D via subsequences of (g∘f^n|_D). This behaviour stands in sharp contrast to the fact that the sequence of iterates (f^n) itself converges, behaves like a rotation or shows some "wandering behaviour" on each component of F_f. In a nutshell, this work combines the theory of non-linear complex dynamics in the complex plane with the theory of dynamics of continuous linear operators on spaces of holomorphic functions. As far as the author knows, this approach has not been investigated before.
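Purely formally, the composition operator and its powers are easy to write down: C_f sends g to g∘f, and C_f^n sends g to g∘f^n. The following toy sketch (our own illustration, with the simple symbol f(z) = z^2 and an arbitrary g; neither is taken from the thesis) only demonstrates this formal action, not hypercyclicity itself:

```python
# Toy sketch (not from the thesis): the composition operator C_f(g) = g ∘ f
# and its iterates C_f^n(g) = g ∘ f^n, here for the symbol f(z) = z^2.

def compose(g, f):
    """Return the composition g ∘ f."""
    return lambda z: g(f(z))

def C_f_power(g, f, n):
    """Return C_f^n(g) = g ∘ f^n."""
    h = g
    for _ in range(n):
        h = compose(h, f)
    return h

f = lambda z: z * z
g = lambda z: z + 1            # an arbitrary observable g

h = C_f_power(g, f, 3)         # h(z) = g(f^3(z)) = z^8 + 1
```

Evaluating, e.g., `h(2)` gives 2^8 + 1 = 257; the point of the thesis is that for suitable g the family of all such compositions g∘f^n is dense in H(D).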

This thesis divides into the two subject areas named in its title. The first part investigates the proximity, i.e. a certain measure of closeness, of binomial and Poisson distributions. Specifically, the uniform structure of the total variation distance on the closed set of all binomial and Poisson distributions is characterized in terms of the associated expectations and variances, which uniquely determine these distributions. In particular, an upper bound for the total variation distance on the set of binomial and Poisson distributions is given as a suitable function of the associated expectations and variances. The second part of the thesis is devoted to confidence intervals for averages of success probabilities. One of the first and best-known works on confidence intervals for success probabilities is that of Clopper and Pearson (1934). There, in the binomial model with known sample size and confidence level, confidence intervals for the unknown success probability are developed. If, for a fixed sample size, one considers the image measure of an inhomogeneous Bernoulli chain under the summation map instead of a binomial distribution (the corresponding image measure of a homogeneous Bernoulli chain), one obtains a Bernoulli convolution with the corresponding success probabilities. For estimating the average success probability in the larger Bernoulli convolution model, the one-sided Clopper-Pearson intervals, for instance, are in general not valid. Here, optimal one-sided and valid two-sided confidence intervals for the average success probability in the Bernoulli convolution model are developed. The one-sided Clopper-Pearson intervals are in general also not valid for estimating the success probability in the hypergeometric model, which is a submodel of the Bernoulli convolution model.
For the hypergeometric model with fixed sample size and known urn size, the optimal one-sided confidence intervals are known. For a fixed sample size and unknown urn size, optimal confidence intervals for the hypergeometric model are derived from the confidence intervals that are optimal in the Bernoulli convolution model. In addition, the case in which an upper bound on the unknown urn size is given is considered.
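The classical Clopper-Pearson construction mentioned above can be sketched numerically: in the binomial model, the upper endpoint of the two-sided interval at level 1 - α is the p solving P_p(X ≤ k) = α/2, and the lower endpoint the p solving P_p(X ≥ k) = α/2. A minimal pure-Python sketch of this (our own illustration, using bisection on the binomial CDF rather than Beta quantiles):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Two-sided Clopper-Pearson interval for the binomial success
    probability, found by bisection (illustration only)."""
    def solve(target, lo=0.0, hi=1.0, iters=100):
        # target is decreasing in p; bisect for its root in [0, 1]
        for _ in range(iters):
            mid = (lo + hi) / 2
            if target(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # lower endpoint: p with P_p(X >= k) = alpha/2  (0 if k == 0)
    lower = 0.0 if k == 0 else solve(lambda p: alpha / 2 - (1 - binom_cdf(k - 1, n, p)))
    # upper endpoint: p with P_p(X <= k) = alpha/2  (1 if k == n)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) - alpha / 2)
    return lower, upper
```

For k = 5 successes in n = 10 trials this yields the familiar interval of roughly (0.19, 0.81). The thesis's point is that these intervals are in general no longer valid in the larger Bernoulli convolution model.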

The central F- and t-distributions are among the classical distributions of mathematical statistics. This thesis investigates generalizations of these distributions, the so-called doubly noncentral F- and t-distributions, which are of importance in statistical test theory. The fact that the associated probability densities are available only in the form of parameter-integral or double-series representations poses a major challenge for the study of their analytic properties. Using techniques from the theory of sign-regular functions, the strictly unimodal shape of the density function, previously conjectured but derived only from approximations, is established for a large class of doubly noncentral distributions. This result permits the study of the uniquely determined mode as a function of certain noncentrality parameters. Here, too, the theory of sign-regular functions proves to be an important tool for establishing monotone dependencies.

The main topic of this treatise is the solution of two problems from the general theory of linear partial differential equations with constant coefficients. While surjectivity criteria for linear partial differential operators in spaces of smooth functions over an open subset of Euclidean space and in spaces of distributions were proved by B. Malgrange and L. Hörmander in 1955 and 1962, respectively, the concrete evaluation of these criteria is still a highly non-trivial task. In particular, it is well known that surjectivity in the space of smooth functions over an open subset of Euclidean space does not automatically imply surjectivity in the space of distributions. However, all known examples for this fact live in dimension three or higher. In 1966, F. Trèves conjectured that in the two-dimensional setting surjectivity of a linear partial differential operator on the smooth functions does imply surjectivity on the space of distributions. An affirmative solution to this problem is presented in this treatise. The second main result solves the so-called problem of (distributional) parameter dependence for solutions of linear partial differential equations with constant coefficients, posed by J. Bonet and P. Domanski in 2006. It is shown that, in dimension three or higher, this problem in general has a negative solution, even for hypoelliptic operators. Moreover, it is proved that the two-dimensional case is again an exception, because in this setting the problem of parameter dependence always has a positive solution.

In a paper of 1996, the British mathematician Graham R. Allan posed the question whether the product of two stable elements is again stable. Here, stability describes the solvability of a certain infinite system of equations. Using a method from homological algebra, it is proved that in the case of topological algebras with multiplicative webs, and thus in all common locally convex topological algebras that occur in standard analysis, the answer to Allan's question is affirmative.

In the splitting theory of locally convex spaces we investigate evaluable characterizations of the pairs (E, X) of locally convex spaces such that each exact sequence 0 -> X -> G -> E -> 0 of locally convex spaces splits, i.e. such that X -> G has a continuous linear left inverse or, equivalently, G -> E has a continuous linear right inverse. In the thesis at hand we deal with the splitting of short exact sequences of so-called PLH spaces, which are defined as projective limits of strongly reduced spectra of strong duals of Fréchet-Hilbert spaces. This class of locally convex spaces contains most of the spaces of interest for applications in the theory of partial differential operators, such as the space of Schwartz distributions, the space of real analytic functions and various spaces of ultradifferentiable functions and ultradistributions. It also contains non-Schwartz spaces such as B(2,k,loc)(Ω) and spaces of smooth and square integrable functions that are not covered by the current theory for PLS spaces. We prove a complete characterization of the above problem in the case of X being a PLH space and E either being a Fréchet-Hilbert space or the strong dual of one, by conditions of type (T). To this end, we establish the full homological toolbox of Yoneda Ext functors in exact categories for the category of PLH spaces, including the long exact sequence, which in particular involves a thorough discussion of the proper concept of exactness. Furthermore, we exhibit the connection to the parameter dependence problem via the Hilbert tensor product for hilbertizable locally convex spaces. We show that the Hilbert tensor product of two PLH spaces is again a PLH space, which in particular gives a positive answer to Grothendieck's problème des topologies. In addition, we give a complete characterization of the vanishing of the first derived functor of proj for tensorized PLH spectra if one of the PLH spaces E and X satisfies some nuclearity assumptions.
To apply our results to concrete cases, we establish sufficient conditions of (DN)-(Ω) type and apply them to the parameter dependence problem for partial differential operators with constant coefficients on B(2,k,loc)(Ω) spaces as well as to the smooth and square integrable parameter dependence problem. In conclusion, we give a complete solution of all the problems under consideration for PLH spaces of Köthe type.

This dissertation deals with the constructive and the generic derivation of universal functions. By a universal function we mean a holomorphic function that, in a certain sense, contains whole classes of functions. The constructive method involves the explicit construction of a universal function via a limit process, for instance as a series of polynomials. The generic method first defines, in a purely abstract way, the desired class of universal functions. Using the Baire category theorem, it is then shown that this class of functions is not only nonempty but in fact a dense G_delta subset of the function space under consideration. Both methods rely on the approximation theorems of Runge and of Mergelyan. The main results are the following: (1) We have constructively proved the existence of universal Laurent series on multiply connected domains. In addition, we have shown that the set of such universal Laurent series is dense in the space of functions holomorphic on the domain under consideration. (2) The existence of universal Faber series on certain domains was proved both constructively and generically. (3) On the one hand, we have shown constructively that there exist so-called T-universal entire functions with prescribed approximation paths; the approximation paths are prescribed by a sufficiently flexible functional form. The set of such functions is a dense G_delta subset of the space of entire functions. On the other hand, we have generically proved the existence of functions on a bounded domain that are T-universal with respect to certain prescribed approximation paths; here, too, the approximation paths are sufficiently general.