The Hadamard product of two holomorphic functions, which is defined via a convolution integral, constitutes a generalization of the Hadamard product of two power series, which is obtained by pointwise multiplying their coefficients. Based on the integral representation mentioned above, an associative law for this convolution is shown. The main purpose of this thesis is the examination of linear and continuous Hadamard convolution operators. These operators map between spaces of holomorphic functions and send - with a fixed function phi - a function f to the convolution of phi and f. The transposed operator is computed and turns out to be a Hadamard convolution operator, too, mapping between spaces of germs of holomorphic functions. The kernel of Hadamard convolution operators is investigated, and necessary and sufficient conditions for those operators to be injective or to have dense range are given. In the case that the domain of holomorphy of the function phi allows a Mellin transform of phi, certain (generalized) monomials are identified as eigenfunctions of the corresponding operator. By means of this result and parts of the theory of growth of entire functions, further propositions concerning the injectivity, the denseness of the range, or the surjectivity of Hadamard convolution operators are shown. The relationship between Hadamard convolution operators, operators defined via convolution with an analytic functional, and differential operators of infinite order is investigated, and the results obtained in the thesis are put into the research context. The thesis ends with an application of the results to the approximation of holomorphic functions by lacunary polynomials. On the one hand, the question under which conditions lacunary polynomials are dense in the space of all holomorphic functions is investigated; on the other hand, the rate of approximation is considered.
In this context, a result corresponding to the Bernstein-Walsh theorem is formulated.
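The two descriptions of the Hadamard product can be sketched side by side; here f(z) = Σ a_n z^n and g(z) = Σ b_n z^n, and γ is a suitable integration cycle (the precise admissible domains and cycles are part of the thesis's subject matter, so this is only the schematic form):

```latex
(f \ast g)(z) \;=\; \sum_{n=0}^{\infty} a_n b_n z^n
\;=\; \frac{1}{2\pi i} \int_{\gamma} f(\zeta)\, g\!\left(\frac{z}{\zeta}\right) \frac{\mathrm{d}\zeta}{\zeta}.
```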
In this thesis we study structure-preserving model reduction methods for the efficient and reliable approximation of dynamical systems. A major focus is the approximation of a nonlinear flow problem on networks, which can, e.g., be used to describe gas network systems. Our proposed approximation framework guarantees so-called port-Hamiltonian structure and is general enough to be realizable by projection-based model order reduction combined with complexity reduction. We divide the discussion of the flow problem into two parts, one concerned with the linear damped wave equation and the other one with the general nonlinear flow problem on networks.
The study around the linear damped wave equation relies on a Galerkin framework, which allows for convenient network generalizations. Notable contributions of this part are the thorough analysis of the algebraic setting after space discretization in relation to the infinite-dimensional setting and its implications for model reduction. In particular, this includes the discussion of differential-algebraic structures associated with the network character of our problem and the derivation of compatibility conditions related to fundamental physical properties. Among the different model reduction techniques, we consider the moment matching method to be a particularly well-suited choice in our framework.
The Galerkin framework is then appropriately extended to our general nonlinear flow problem. Crucial supplementary concepts are required for the analysis, such as the partial Legendre transform and a more careful discussion of the underlying energy-based modeling. The preservation of the port-Hamiltonian structure after the model order reduction and complexity reduction steps represents a major focus of this work. As in the analysis of the model order reduction, compatibility conditions play a crucial role in the analysis of our complexity reduction, which relies on a quadrature-type ansatz. Furthermore, energy-stable time-discretization schemes are derived for our port-Hamiltonian approximations, as structure-preserving methods from the literature are not applicable due to our rather unconventional parametrization of the solution.
Apart from the port-Hamiltonian approximation of the flow problem, another topic of this thesis is the derivation of a new extension of moment matching methods from linear systems to quadratic-bilinear systems. Most system-theoretic reduction methods for nonlinear systems rely on multivariate frequency representations. Our approach instead uses univariate frequency representations tailored towards user-defined families of inputs. Moment matching then corresponds to a one-dimensional interpolation problem rather than a multi-dimensional one as in the multivariate approaches, i.e., fewer interpolation frequencies need to be chosen. The notion of signal-generator-driven systems, variational expansions of the resulting autonomous systems, as well as the derivation of convenient tensor-structured approximation conditions are the main ingredients of this part. Notably, our approach allows for the incorporation of general input relations in the state equations, not only affine-linear ones as in existing system-theoretic methods.
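The structure-preservation argument for projection-based reduction can be illustrated in a few lines: for a linear port-Hamiltonian system with skew-symmetric interconnection matrix J and positive semidefinite dissipation matrix R, a Galerkin projection with an orthonormal basis V acts by congruence and therefore preserves both properties. The toy sketch below (random matrices, not the network models of the thesis) demonstrates only this algebraic point:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 10, 3

M = rng.standard_normal((n, n))
J = M - M.T                     # skew-symmetric interconnection matrix
L = rng.standard_normal((n, n))
R = L @ L.T                     # symmetric positive semidefinite dissipation

# Orthonormal Galerkin basis (in practice from POD, moment matching, etc.)
V, _ = np.linalg.qr(rng.standard_normal((n, r)))

Jr = V.T @ J @ V                # congruence keeps skew-symmetry
Rr = V.T @ R @ V                # congruence keeps positive semidefiniteness

assert np.allclose(Jr, -Jr.T)
assert np.min(np.linalg.eigvalsh(Rr)) > -1e-10
```

Because both properties survive the projection, the reduced system inherits the port-Hamiltonian energy balance by construction.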
This thesis introduces a calibration problem for financial market models based on a Monte Carlo approximation of the option payoff and a discretization of the underlying stochastic differential equation. It is desirable to benefit from fast deterministic optimization methods to solve this problem. To achieve this goal, possible non-differentiabilities are smoothed out with an appropriately chosen, twice continuously differentiable polynomial. On the basis of the calibration problem derived in this way, this work is essentially concerned with two issues. First, the question arises whether a computed solution of the approximating problem, derived by applying Monte Carlo, discretizing the SDE, and preserving differentiability, is an approximation of a solution of the true problem. Unfortunately, this does not hold in general but is linked to certain assumptions. It will turn out that uniform convergence of the approximated objective function and its gradient to the true objective and gradient can be shown under typical assumptions, for instance the Lipschitz continuity of the SDE coefficients. This uniform convergence then allows us to show convergence of the solutions in the sense of first-order critical points. Furthermore, an order of this convergence in relation to the number of simulations, the step size of the SDE discretization, and the parameter controlling the smooth approximation of non-differentiabilities will be shown. Additionally, the uniqueness of a solution of the stochastic differential equation will be analyzed in detail. Secondly, the Monte Carlo method provides only very slow convergence. The numerical results in this thesis will show that the Monte Carlo based calibration is indeed feasible as far as the computed solution is concerned, but the required computation time is too long for practical applications. Thus, techniques to speed up the calibration are strongly desired.
As already mentioned above, the gradient of the objective is a starting point for improving efficiency. Due to its simplicity, the finite difference method is a frequently chosen way to calculate the required derivatives. However, finite differences are well known to be very slow, and furthermore it will turn out that severe instabilities may occur during optimization, which can lead to a breakdown of the algorithm before convergence has been reached. In this respect a sensitivity equation is certainly an improvement, but it unfortunately suffers from the same computational effort as the finite difference method. Thus, an adjoint-based gradient calculation will be the method of choice, as it combines the exactness of the derivative with a reduced computational effort. Furthermore, several other techniques that enhance the efficiency of the calibration algorithm will be introduced throughout this thesis. A multi-layer method will be very effective in the case that the chosen initial value is not already close to the solution. Variance reduction techniques help to increase the accuracy of the Monte Carlo estimator and thus allow for fewer simulations. Storing, instead of regenerating, the random numbers required for the Brownian increments in the SDE will be efficient, as deterministic optimization methods require the identical random sequence in each function evaluation anyway. Finally, Monte Carlo is very well suited for parallelization, which will be done on several central processing units (CPUs).
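As an illustration of the smoothing step, a kink such as the call payoff max(x, 0) can be replaced near the origin by a polynomial matched to the value and first two derivatives at ±ε, yielding a twice continuously differentiable approximation. The particular quartic below is one convenient choice and not necessarily the polynomial used in the thesis:

```python
import numpy as np

def smooth_max(x, eps=1e-2):
    """C^2 smoothing of max(x, 0): exact outside [-eps, eps], and inside
    a quartic in u = x/eps matched so that value, first, and second
    derivative agree with max(x, 0) at x = -eps and x = +eps."""
    x = np.asarray(x, dtype=float)
    u = x / eps
    inner = eps * (u / 2 + 3 / 16 + (3 / 8) * u**2 - (1 / 16) * u**4)
    return np.where(x <= -eps, 0.0, np.where(x >= eps, x, inner))

# Away from the kink the payoff is untouched; at 0 the error is 3*eps/16.
print(float(smooth_max(0.5)), float(smooth_max(-0.5)), float(smooth_max(0.0)))
```

One can check directly that the quartic takes the values 0 and ε with slopes 0 and 1 and vanishing curvature at the matching points, so the glued function is C^2 and the approximation error is confined to the ε-neighbourhood of the kink.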
This paper mainly studies two topics: linear complementarity problems for modeling electricity market equilibria and optimization under uncertainty. We consider both perfectly competitive and Nash–Cournot models of electricity markets and study their robustifications using strict robustness and the Γ-approach. For three out of the four combinations of economic competition and robustification, we derive algorithmically tractable convex optimization counterparts that have a clear-cut economic interpretation. In the case of perfect competition, this result corresponds to the two classic welfare theorems, which also apply in both considered robust cases that again yield convex robustified problems. Using the mentioned counterparts, we can also prove the existence and, in some cases, uniqueness of robust equilibria. Surprisingly, it turns out that there is no such economically sensible counterpart for the case of Γ-robustifications of Nash–Cournot models. Thus, an analog of the welfare theorems does not hold in this case. Finally, we provide a computational case study that illustrates the different effects of the combination of economic competition and uncertainty modeling.
The optimal control of fluid flows described by the Navier-Stokes equations requires massive computational resources, which has led researchers to develop reduced-order models, such as those derived from proper orthogonal decomposition (POD), to reduce the computational complexity of the solution process. The objective of the thesis is the acceleration of such reduced-order models through the combination of POD reduced-order methods with finite element methods at various discretization levels. Special stabilization methods required for the high-order solution of flow problems with dominant convection on coarse meshes lead to numerical data that is incompatible with standard POD methods for reduced-order modeling. We successfully adapt the POD method for such problems by introducing the streamline diffusion POD method (SDPOD). Using the novel SDPOD method, we experiment with multilevel recursive optimization at Reynolds numbers of Re=400 and Re=10,000.
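The basic POD step (before the streamline diffusion modification introduced as SDPOD) can be sketched as follows: collect solution snapshots as columns of a matrix, take the leading left singular vectors as the reduced basis, and project onto their span. This is a toy sketch with synthetic low-rank data, assuming nothing about the flow solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 200, 50, 5

# Toy snapshot matrix of rank r: columns play the role of flow states
# sampled at m time steps on an n-dimensional spatial grid.
S = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))

U, sv, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :r]                  # POD basis: leading left singular vectors
S_hat = V @ (V.T @ S)         # orthogonal (Galerkin) projection onto span(V)

rel_err = np.linalg.norm(S - S_hat) / np.linalg.norm(S)
print(rel_err)
```

Since the synthetic snapshots have exact rank r, the r-dimensional POD basis reproduces them to machine precision; for real flow data the singular value decay dictates how many modes are needed.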
One of the main tasks in mathematics is to answer the question whether an equation possesses a solution or not. In the 1940s, Thom and Glaeser studied a new type of equation given by the composition of functions. They raised the following question: For which functions Ψ does the equation F(Ψ)=f always have a solution? Of course this question only makes sense if the right-hand side f satisfies some a priori conditions, like being contained in the closure of the space of all compositions with Ψ, and it is easy to answer if F and f are continuous functions. Imposing further restrictions on these functions, especially on F, considerably complicates the search for an adequate solution. For smooth functions one can already find deep results by Bierstone and Milman which answer the question in the case of a real-analytic function Ψ. This work contains further results for a different class of functions, namely those Ψ that are smooth and injective. In the case of a function Ψ of a single real variable, the question can be fully answered, and we give three conditions that are both sufficient and necessary in order for the composition equation to always have a solution. Furthermore, one can unify these three conditions and show that they are equivalent to the fact that Ψ has a locally Hölder-continuous inverse. For injective functions Ψ of several real variables we give necessary conditions for the composition equation to be solvable. For instance, Ψ should satisfy some form of local distance estimate for the partial derivatives. Under the additional assumption of Whitney-regularity of the image of Ψ, we can give sufficient conditions for flat functions f on the critical set of Ψ to admit a solution of F(Ψ)=f.
The main topic of this treatise is the solution of two problems from the general theory of linear partial differential equations with constant coefficients. While surjectivity criteria for linear partial differential operators in spaces of smooth functions over an open subset of Euclidean space and in spaces of distributions were proved by B. Malgrange and L. Hörmander in 1955 and 1962, respectively, the concrete evaluation of these criteria is still a highly non-trivial task. In particular, it is well known that surjectivity in the space of smooth functions over an open subset of Euclidean space does not automatically imply surjectivity in the space of distributions. However, all known examples of this phenomenon live in three or more dimensions. In 1966, F. Trèves conjectured that in the two-dimensional setting surjectivity of a linear partial differential operator on the smooth functions indeed implies surjectivity on the space of distributions. An affirmative solution to this problem is presented in this treatise. The second main result solves the so-called problem of (distributional) parameter dependence for solutions of linear partial differential equations with constant coefficients, posed by J. Bonet and P. Domanski in 2006. It is shown that, in dimensions three or higher, this problem in general has a negative solution, even for hypoelliptic operators. Moreover, it is proved that the two-dimensional case is again an exception, because in this setting the problem of parameter dependence always has a positive solution.
The subject of this thesis is hypercyclic, mixing, and chaotic C0-semigroups on Banach spaces. After introducing the relevant notions and giving some examples, the so-called hypercyclicity criterion and its relation to weak mixing is treated. Some new equivalent formulations of the criterion are given, which are used to derive a very short proof of the well-known fact that a C0-semigroup is weakly mixing if and only if each of its operators is. Moreover, it is proved that under some "regularity conditions" each hypercyclic C0-semigroup is weakly mixing. Furthermore, it is shown that for a hypercyclic C0-semigroup there is always a dense set of hypercyclic vectors having infinitely differentiable trajectories. Chaotic C0-semigroups are also considered. It is proved that they are always weakly mixing and that in certain cases chaoticity is already implied by the existence of a single periodic point. Moreover, it is shown that strongly elliptic differential operators on bounded C^1-domains never generate chaotic C0-semigroups. A thorough investigation of transitivity, weak mixing, and mixing of weighted composition operators follows, and complete characterisations of these properties are derived. These results are then used to completely characterise hypercyclicity, weak mixing, and mixing of C0-semigroups generated by first-order partial differential operators. Moreover, a characterisation of chaos for these C0-semigroups is attained. All these results are achieved on spaces of p-integrable functions as well as on spaces of continuous functions, and they are illustrated by various concrete examples.
In this thesis, we mainly investigate geometric properties of optimal codebooks for random elements $X$ in a separable Banach space $E$. Here, for a natural number $N$ and a random element $X$, an $N$-optimal codebook is an $N$-subset of the underlying Banach space $E$ which gives a best approximation to $X$ in an average sense. We focus on two types of geometric properties: The global growth behaviour (growing in $N$) of a sequence of $N$-optimal codebooks is described by the maximal (quantization) radius and a so-called quantization ball. For many distributions, such as centrally symmetric distributions on $R^d$ as well as Gaussian distributions on general Banach spaces, we are able to estimate the asymptotics of the quantization radius as well as the quantization ball. Furthermore, we investigate local properties of optimal codebooks, in particular the local quantization error and the weights of the Voronoi cells induced by an optimal codebook. In the finite-dimensional setting, we are able to prove classical conjectures on the asymptotic behaviour of those properties for many interesting distributions. Finally, we propose a method to construct sequences of asymptotically optimal codebooks for random elements in infinite-dimensional Banach spaces and apply this method to construct codebooks for stochastic processes, such as fractional Brownian motions.
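In symbols, the "best approximation in an average sense" is the $N$-th quantization error; writing $r$ for the order of the underlying moment (an assumption here, since the abstract does not fix it):

```latex
e_{N,r}(X) \;=\; \inf_{\substack{\alpha \subset E \\ |\alpha| \le N}}
\Big( \mathbb{E}\, \min_{a \in \alpha} \lVert X - a \rVert^{r} \Big)^{1/r},
```

and an $N$-optimal codebook is a set $\alpha$ with $|\alpha| \le N$ attaining this infimum.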
In recent years, the study of dynamical systems has developed into a central research area in mathematics. Indeed, in combination with keywords such as "chaos" or "butterfly effect", parts of this theory have been incorporated in other scientific fields, e.g. in physics, biology, meteorology and economics. In general, a discrete dynamical system is given by a set X and a self-map f of X. The set X can be interpreted as the state space of the system and the function f describes the temporal development of the system. If the system is in state x ∈ X at time zero, its state at time n ∈ N is denoted by f^n(x), where f^n stands for the n-th iterate of the map f. Typically, one is interested in the long-time behaviour of the dynamical system, i.e. in the behaviour of the sequence (f^n(x)) for an arbitrary initial state x ∈ X as the time n increases. On the one hand, it is possible that there exist certain states x ∈ X such that the system behaves stably, which means that f^n(x) approaches a state of equilibrium for n→∞. On the other hand, it might be the case that the system runs unstably for some initial states x ∈ X so that the sequence (f^n(x)) somehow shows chaotic behaviour. In the case of a non-linear entire function f, the complex plane always decomposes into two disjoint parts, the Fatou set F_f of f and the Julia set J_f of f. These two sets are defined in such a way that the sequence of iterates (f^n) behaves quite "wildly" or "chaotically" on J_f whereas, on the other hand, the behaviour of (f^n) on F_f is rather "nice" and well-understood. However, this nice behaviour of the iterates on the Fatou set can "change dramatically" if we compose the iterates from the left with just one other suitable holomorphic function, i.e. if we consider sequences of the form (g∘f^n) on D, where D is an open subset of F_f with f(D)⊂ D and g is holomorphic on D. The general aim of this work is to study the long-time behaviour of such modified sequences.
In particular, we will prove the existence of holomorphic functions g on D having the property that the behaviour of the sequence of compositions (g∘f^n) on the set D becomes similarly chaotic to the behaviour of the sequence (f^n) on the Julia set of f. With this approach, we immerse ourselves in the theory of universal families and hypercyclic operators, which has itself developed into a branch of research in its own right. In general, for topological spaces X, Y and a family {T_i: i ∈ I} of continuous functions T_i:X→Y, an element x ∈ X is called universal for the family {T_i: i ∈ I} if the set {T_i(x): i ∈ I} is dense in Y. In the case that X is a topological vector space and T is a continuous linear operator on X, a vector x ∈ X is called hypercyclic for T if it is universal for the family {T^n: n ∈ N}. Thus, roughly speaking, universality and hypercyclicity can be described as follows: There exists a single object which allows us, via simple analytical operations, to approximate every element of a whole class of objects. In the above situation, i.e. for a non-linear entire function f and an open subset D of F_f with f(D)⊂ D, we endow the space H(D) of holomorphic functions on D with the topology of locally uniform convergence and we consider the map C_f:H(D)→H(D), C_f(g):=g∘f|_D, which is called the composition operator with symbol f. The transform C_f is a continuous linear operator on the Fréchet space H(D). In order to show that the above-mentioned "nice" behaviour of the sequence of iterates (f^n) on the set D ⊂ F_f can "change dramatically" if we compose the iterates from the left with another suitable holomorphic function, our aim consists in finding functions g ∈ H(D) which are hypercyclic for C_f.
Indeed, for each hypercyclic function g for C_f, the set of compositions {g∘f^n|_D: n ∈ N} is dense in H(D), so that the sequence of compositions (g∘f^n|_D) is "maximally divergent" in the sense that each function in H(D) can be approximated locally uniformly on D via subsequences of (g∘f^n|_D). This kind of behaviour stands in sharp contrast to the fact that the sequence of iterates (f^n) itself converges, behaves like a rotation, or shows some "wandering behaviour" on each component of F_f. In a nutshell, this work combines the theory of non-linear complex dynamics in the complex plane with the theory of dynamics of continuous linear operators on spaces of holomorphic functions. As far as the author knows, this approach has not been investigated before.
Given a compact set K in R^d, the theory of extension operators examines the question under which conditions on K the linear and continuous restriction operators r_n:E^n(R^d)→E^n(K), f↦(∂^α f|_K)_{|α|≤n}, n in N_0, and r:E(R^d)→E(K), f↦(∂^α f|_K)_{α in N_0^d}, have a linear and continuous right inverse. This inverse is called an extension operator, and the problem is known as Whitney's extension problem, named after Hassler Whitney. In this context, E^n(K) and E(K) denote the spaces of Whitney jets of order n and of infinite order, respectively. By E^n(R^d) and E(R^d), we denote the spaces of n-times respectively infinitely often continuously partially differentiable functions on R^d. Whitney already solved the question for finite order completely. He showed that it is always possible to construct a linear and continuous right inverse E_n for r_n. This work is concerned with the question of how the existence of a linear and continuous right inverse of r, fulfilling certain continuity estimates, can be characterized by properties of K. On E(K), we introduce a full real scale of generalized Whitney seminorms (|·|_{s,K})_{s≥0}, where |·|_{s,K} coincides with the classical Whitney seminorms for s in N_0. We also equip E(R^d) with a family (|·|_{s,L})_{s≥0} of such seminorms, where L is a compact set whose interior contains K. This family of seminorms on E(R^d) suffices to characterize the continuity properties of an extension operator E, since we can assume without loss of generality that E(E(K)) in D^s(L).
In Chapter 2, we introduce basic concepts and summarize the classical results of Whitney and Stein.
In Chapter 3, we modify the classical construction of Whitney's operators E_n and show that |E_n(·)|_{s,L}≤C|·|_{s,K} for s in[n,n+1).
In Chapter 4, we generalize a result of Frerick, Jordá and Wengenroth and show that LMI(1) for K implies the existence of an extension operator E without loss of derivatives, i.e. it fulfils |E(·)|_{s,L}≤C|·|_{s,K} for all s≥0. We show that a large class of self-similar sets, which includes the Cantor set and the Sierpinski triangle, admits an extension operator without loss of derivatives.
In Chapter 5 we generalize a result of Frerick, Jordá and Wengenroth and show that WLMI(r) for r≥1 implies the existence of a tame linear extension operator E having a homogeneous loss of derivatives, such that |E(·)|_{s,L}≤C|·|_{(r+ε)s,K} for all s≥0 and all ε>0.
In the last chapter we characterize the existence of an extension operator having an arbitrary loss of derivatives by the existence of measures on K.
Variational inequality problems constitute a common basis for investigating the theory and algorithms of many problems in mathematical physics, in economics, as well as in the natural and technical sciences. They appear in a variety of mathematical applications like convex programming, game theory, and economic equilibrium problems, but also in fluid mechanics, the physics of solid bodies, and others. Many variational inequalities arising from applications are ill-posed. This means, for example, that the solution is not unique, or that small deviations in the data can cause large deviations in the solution. In such a situation, standard solution methods converge very slowly or even fail. In this case, so-called regularization methods are the methods of choice. They have the advantage that an ill-posed original problem is replaced by a sequence of well-posed auxiliary problems with better properties (such as a unique solution and better conditioning). Moreover, a suitable choice of the regularization term can lead to unconstrained auxiliary problems that are even equivalent to optimization problems. The development and improvement of such methods is a focus of current research, in which we take part with this thesis. We suggest and investigate a logarithmic-quadratic proximal auxiliary problem (LQPAP) method that combines the advantages of the well-known proximal-point algorithm and the so-called auxiliary problem principle. Its exploration and convergence analysis is one of the main results of this work. The LQPAP method continues the recent developments of regularization methods. It includes different techniques presented in the literature to improve numerical stability: The logarithmic-quadratic distance function induces an interior-point effect which allows the auxiliary problems to be treated as unconstrained ones. Furthermore, outer operator approximations are considered.
This simplifies the numerical solution of variational inequalities with multi-valued operators since, for example, bundle techniques can be applied. With respect to numerical practicability, inexact solutions of the auxiliary problems are allowed, using a summable-error criterion that is easy to implement. As a further advantage of the logarithmic-quadratic distance, we verify that it is self-concordant (in the sense of Nesterov/Nemirovskii). This motivates applying Newton's method to the solution of the auxiliary problems. In the numerical part of the thesis the LQPAP method is applied to linearly constrained, differentiable and nondifferentiable convex optimization problems, as well as to nonsymmetric variational inequalities with co-coercive operators. It can often be observed that the sequence of iterates reaches the boundary of the feasible set before being close to an optimal solution. Against this background, we present the strategy of under-relaxation, which robustifies the LQPAP method. Furthermore, we compare the results with an appropriate method based on Bregman distances (BrPAP method). For differentiable, convex optimization problems we describe the implementation of Newton's method to solve the auxiliary problems and carry out different numerical experiments. For example, an adaptive choice of the initial regularization parameter and a combination of an Armijo and a self-concordance step size are evaluated. Test examples for nonsymmetric variational inequalities are hardly available in the literature. Therefore, we present a geometric and an analytic approach to generating test examples with known solution(s). To solve the auxiliary problems in the case of nondifferentiable, convex optimization problems we apply the well-known bundle technique. The implementation is described in detail, and the involved functions and sequences of parameters are discussed. As far as possible, our analysis is substantiated by new theoretical results.
Furthermore, it is explained in detail how the bundle auxiliary problems are solved with a primal-dual interior-point method. Such investigations have so far only been published for Bregman distances. The LQPAP bundle method is again applied to several test examples from the literature. Thus, this thesis builds a bridge between theoretical and numerical investigations of solution methods for variational inequalities.
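The proximal-point idea underlying the LQPAP method can be illustrated on the simplest nondifferentiable convex problem, min |x|: each auxiliary problem adds a quadratic regularization, and its solution is the classical soft-thresholding map. This sketch shows only the plain proximal-point algorithm, not the logarithmic-quadratic distance or the interior-point effect of the LQPAP method itself:

```python
def prox_abs(v, lam):
    """Proximal map of lam*|x|, i.e. argmin_x lam*|x| + 0.5*(x - v)**2.
    Available in closed form as soft-thresholding."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

# Proximal-point iteration for min |x|: each well-posed auxiliary problem
# is strongly convex even though the original objective is nonsmooth.
x = 5.0
for _ in range(20):
    x = prox_abs(x, 0.5)
print(x)  # the iterates reach the minimizer 0
```

Each auxiliary problem here has a unique solution regardless of how flat or nonsmooth the original objective is, which is exactly the regularization benefit described above.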
Extension of inexact Kleinman-Newton methods to a general monotonicity preserving convergence theory
(2011)
The thesis at hand considers inexact Newton methods in combination with the algebraic Riccati equation. A monotone convergence behaviour is proven, which enables non-local convergence. This relation is transferred to a general convergence theory for inexact Newton methods that secures the monotonicity of the iterates for convex or concave mappings. Several applications demonstrate the practical benefits of the newly developed theory.
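For context, the (exact) Kleinman-Newton iteration for the continuous-time algebraic Riccati equation A^T X + X A - X B B^T X + Q = 0 replaces each Newton step by a Lyapunov solve with the current closed-loop matrix; the iterates then approach the stabilizing solution monotonically. Below is a minimal dense-algebra sketch with toy matrices; the inexact solves and the general monotonicity theory of the thesis are not reproduced here:

```python
import numpy as np

def lyap(Ak, W):
    """Solve Ak^T X + X Ak + W = 0 by Kronecker vectorization (small n only)."""
    n = Ak.shape[0]
    I = np.eye(n)
    M = np.kron(I, Ak.T) + np.kron(Ak.T, I)   # vec(Ak^T X + X Ak) = M vec(X)
    x = np.linalg.solve(M, -W.reshape(-1, order="F"))
    return x.reshape(n, n, order="F")

def kleinman_newton(A, B, Q, X0, iters=20):
    """Kleinman-Newton iteration for A^T X + X A - X B B^T X + Q = 0."""
    X = X0
    for _ in range(iters):
        Ak = A - B @ B.T @ X                    # closed-loop matrix
        X = lyap(Ak, Q + X @ B @ B.T @ X)       # Newton step = Lyapunov solve
    return X

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # stable, so X0 = 0 is stabilizing
B = np.array([[1.0], [0.0]])
Q = np.eye(2)
X = kleinman_newton(A, B, Q, np.zeros((2, 2)))
res = A.T @ X + X @ A - X @ B @ B.T @ X + Q   # Riccati residual, near zero
```

The Kronecker-based Lyapunov solver is only practical for small dimensions; for large-scale Riccati equations one would use dedicated (and typically inexact) Lyapunov solvers, which is precisely where the inexact-Newton viewpoint becomes relevant.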
Although universality has fascinated mathematicians over the last decades, there are still numerous open questions in this field that require further investigation. In this work, we will mainly focus on classes of functions whose Fourier series are universal in the sense that they allow us to approximate uniformly any continuous function defined on a suitable subset of the unit circle.
The structure of this thesis is as follows. In the first chapter, we will initially introduce the most important notation which is needed for our following discussion. Subsequently, after recalling the notion of universality in a general context, we will revisit significant results concerning universality of Taylor series. The focus here is particularly on universality with respect to uniform convergence and convergence in measure. By a result of Menshov, we will transition to universality of Fourier series which is the central object of study in this work.
In the second chapter, we recall spaces of holomorphic functions which are characterized by the growth of their coefficients. In this context, we will derive a relationship to functions on the unit circle via an application of the Fourier transform.
In the second part of the chapter, our attention is devoted to the $\mathcal{D}_{\textup{harm}}^p$ spaces, which can be viewed as the set of harmonic functions contained in the Sobolev spaces $W^{1,p}(\mathbb{D})$. In this context, we will also recall the Bergman projection. Thanks to the intensive study of the latter in relation to Sobolev spaces, we can derive a decomposition of the $\mathcal{D}_{\textup{harm}}^p$ spaces which may be seen as analogous to the Riesz projection for $L^p$ spaces. Owing to this result, we are able to provide a link between the $\mathcal{D}_{\textup{harm}}^p$ spaces and spaces of holomorphic functions on $\mathbb{C}_\infty \setminus \mathbb{T}$, which turns out to be a crucial step in determining the dual of the $\mathcal{D}_{\textup{harm}}^p$ spaces.
The last section of this chapter deals with the Cauchy dual which has a close connection to the Fantappié transform. As an application, we will determine the Cauchy dual of the spaces $D_\alpha$ and $D_{\textup{harm}}^p$, two results that will prove to be very helpful later on. Finally, we will provide a useful criterion that establishes a connection between the density of a set in the direct sum $X \oplus Y$ and the Cauchy dual of the intersection of the respective spaces.
The subsequent chapter will delve into the theory of capacities and, consequently, potential theory, which will prove to be essential in formulating our universality results. In addition to introducing further necessary terminology, we will define capacities in the first section following [16], though in the framework of separable metric spaces, and revisit the most important results about them.
Simultaneously, we make preparations that allow us to define the $\mathrm{Li}_\alpha$-capacity which will turn out to be equivalent to the classical Riesz $\alpha$-capacity. The $\mathrm{Li}_\alpha$-capacity proves to be better adapted to the $D_\alpha$ spaces. It becomes apparent in the course of our discussion that the $\mathrm{Li}_\alpha$-capacity is essential to prove uniqueness results for the class $D_\alpha$. This leads to the centerpiece of this chapter: the energy formula for the $\mathrm{Li}_\alpha$-capacity on the unit circle. More precisely, this identity establishes a connection between the energy of a measure and its corresponding Fourier coefficients. We will briefly deal with the complement-equivalence of capacities before we revisit the concept of Bessel and Riesz capacities, this time, however, in a much more general context, where we will mainly rely on [1]. Since we defined capacities on separable metric spaces in the first section, we can draw a connection between Bessel capacities and $\mathrm{Li}_\alpha$-capacities. To conclude this chapter, we take a closer look at the geometric meaning of capacities. Here, we will point out a connection between the Hausdorff dimension and the polarity of a set, and transfer it to the $\mathrm{Li}_\alpha$-capacity. Another aspect will be the comparison of Bessel capacities across different dimensions, in which the theory of Wolff potentials emerges as a crucial auxiliary tool.
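For orientation, the classical counterpart of such an energy formula can be stated as follows (the well-known identity for the Riesz kernel, given here only as background and not as the $\mathrm{Li}_\alpha$-result of the thesis): for a finite positive measure $\mu$ on $\s$ and $0 < \alpha < 1$, the $\alpha$-energy satisfies
\[
I_\alpha(\mu) = \int_{\s}\int_{\s} \frac{d\mu(z)\, d\mu(w)}{|z-w|^{\alpha}} \;\asymp\; \sum_{n \in \mathbb{Z}} \frac{|\hat{\mu}(n)|^2}{(1+|n|)^{1-\alpha}},
\]
so that finiteness of the energy is equivalent to a summability condition on the Fourier coefficients of $\mu$.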
In the fourth chapter of this thesis, we will turn our focus to the theory of sets of uniqueness, a subject within the broader field of harmonic analysis. This theory has a close relationship with sets of universality, a connection that will be further elucidated in the upcoming chapter.
The initial section of this chapter will be dedicated to the notion of sets of uniqueness that is specifically adapted to our current context. Building on this concept, we will recall some of the fundamental results of this theory.
In the subsequent section, we will primarily rely on techniques from previous chapters to determine the closed sets of uniqueness for the class $\mathcal{D}_{\alpha}$. The proofs we will discuss are largely influenced by [16, p.\ 178] and [9, pp.\ 82].
Once more, it will become evident that the introduction of the $\mathrm{Li}_\alpha$-capacity in the third chapter and the closely associated energy formula on the unit circle are the pivotal tools that enable us to carry out these proofs.
In the final chapter of our discourse, we will present our results on universality. To begin, we will recall a version of the universality criterion which traces back to the work of Grosse-Erdmann (see [26]). Coupled with an outcome from the second chapter, we will prove a result that allows us to obtain the universality of a class using the technique of simultaneous approximation. This tool will play a key role in the proof of our universality results which will follow hereafter.
Our attention will first be directed toward the class $D_\alpha$ with $\alpha$ in the interval $(0,1]$. Here, we show that universality with respect to uniform convergence occurs on closed and $\alpha$-polar sets $E \subset \s$. Thanks to results of Carleson and further considerations, which particularly rely on the favorable behavior of the $\mathrm{Li}_\alpha$-kernel, we also find that this result is sharp. In particular, it may be seen as a generalization of the universality result for the harmonic Dirichlet space.
Following this, we will investigate the same class, however, this time for $\alpha \in [-1,0)$. In this case, it turns out that universality with respect to uniform convergence occurs on closed and $(-\alpha)$-complement-polar sets $E \subset \s$. In particular, these sets of universality can have positive arc measure. In the final section, we will focus on the class $D_{\textup{harm}}^p$. Here, we manage to prove that universality occurs on closed and $(1,p)$-polar sets $E \subset \s$. Through results of Twomey [68] combined with an observation by Girela and Pélaez [23], as well as the decomposition of $D_{\textup{harm}}^p$, we can deduce that the closed sets of universality with respect to uniform convergence of the class $D_{\textup{harm}}^p$ are characterized by $(1,p)$-polarity. We conclude our work with an application of the latter result to the class $D^p$. We will show that the closed sets of divergence for the class $D^p$ are given by the $(1,p)$-polar sets.
This thesis is divided into three main parts: the description of the calibration problem, the numerical solution of this problem, and the connection to optimal stochastic control problems. Fitting model prices to given market prices leads to an abstract least squares formulation as calibration problem. The corresponding option price can be computed by solving a stochastic differential equation via the Monte-Carlo method, which seems to be preferred by most practitioners. Due to the fact that the Monte-Carlo method is computationally expensive and memory-intensive, more sophisticated stochastic predictor-corrector schemes are established in this thesis. The numerical advantage of these predictor-corrector schemes is presented and discussed. The adjoint method is applied to the calibration. The theoretical advantage of the adjoint method is discussed in detail. It is shown that the computational effort of gradient calculation via the adjoint method is independent of the number of calibration parameters. Numerical results confirm the theoretical results and summarize the computational advantage of the adjoint method. Furthermore, the connection to optimal stochastic control problems is established in this thesis.
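The cost statement above can be illustrated with a toy calibration. The following is a minimal sketch (a hypothetical linear model with illustrative names, not the schemes of the thesis): the gradient of a least-squares functional subject to an explicit Euler recursion is obtained by a single adjoint (backward) sweep, so its cost does not grow with the number of parameters.

```python
import numpy as np

def forward(theta, x0=1.0, h=0.01, n=100):
    """Explicit Euler for dx/dt = -theta[0]*x + theta[1]; returns the trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] + h * (-theta[0] * xs[-1] + theta[1]))
    return np.array(xs)

def loss_and_adjoint_grad(theta, target, x0=1.0, h=0.01, n=100):
    """Least-squares loss J = (x_N - target)^2 and its gradient via the adjoint method."""
    xs = forward(theta, x0, h, n)
    J = (xs[-1] - target) ** 2
    lam = 2.0 * (xs[-1] - target)      # terminal adjoint: dJ/dx_N
    g = np.zeros(2)
    # One backward sweep accumulates all parameter sensitivities at once.
    for k in range(n - 1, -1, -1):
        g[0] += lam * (-h * xs[k])     # dx_{k+1}/dtheta[0] = -h * x_k
        g[1] += lam * h                # dx_{k+1}/dtheta[1] = h
        lam *= (1.0 - h * theta[0])    # dx_{k+1}/dx_k propagates the adjoint
    return J, g
```

A finite-difference check of the gradient costs one extra forward solve per parameter, which is precisely the dependence on the number of calibration parameters that the adjoint method removes.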
A matrix A is called completely positive if there exists an entrywise nonnegative matrix B such that A = BB^T. These matrices can be used to obtain convex reformulations of, for example, nonconvex quadratic or combinatorial problems. One of the main problems with completely positive matrices is checking whether a given matrix is completely positive. This is known to be NP-hard in general. For a given completely positive matrix A, it is nontrivial to find a cp-factorization A=BB^T with nonnegative B, since this factorization would provide a certificate for the matrix to be completely positive. But this factorization is not only important for the membership in the completely positive cone, it can also be used to recover the solution of the underlying quadratic or combinatorial problem. In addition, it is not a priori known how many columns are necessary to generate a cp-factorization for the given matrix. The minimal possible number of columns is called the cp-rank of A, and so far it is still an open question how to derive the cp-rank for a given matrix. Some facts on completely positive matrices and the cp-rank will be given in Chapter 2. Moreover, in Chapter 6, we will see a factorization algorithm which, for a given completely positive matrix A and a suitable starting point, computes the nonnegative factorization A=BB^T. The algorithm therefore returns a certificate for the matrix to be completely positive. As introduced in Chapter 3, the fundamental idea of the factorization algorithm is to start from an initial square factorization which is not necessarily entrywise nonnegative, and extend this factorization to a matrix for which the number of columns is greater than or equal to the cp-rank of A. Then it is the goal to transform this generated factorization into a cp-factorization.
This problem can be formulated as a nonconvex feasibility problem, as shown in Section 4.1, and solved by a method which is based on alternating projections, as proven in Chapter 6. On the topic of alternating projections, a survey will be given in Chapter 5. Here we will see how to apply this technique to several types of sets like subspaces, convex sets, manifolds and semialgebraic sets. Furthermore, we will see some known facts on the convergence rate for alternating projections between these types of sets. Considering more than two sets yields the so-called cyclic projections approach. Here some known facts for subspaces and convex sets will be shown. Moreover, we will see a new convergence result on cyclic projections among a sequence of manifolds in Section 5.4. In the context of cp-factorizations, a local convergence result for the introduced algorithm will be given. This result is based on the known convergence for alternating projections between semialgebraic sets. To obtain cp-factorizations with this first method, it is necessary to solve a second order cone problem in every projection step, which is very costly. Therefore, in Section 6.2, we will see an additional heuristic extension, which improves the numerical performance of the algorithm. Extensive numerical tests in Chapter 7 will show that the factorization method is very fast in most instances. In addition, we will see how to derive a certificate for the matrix to be an element of the interior of the completely positive cone. As a further application, this method can be extended to find a symmetric nonnegative matrix factorization, where we consider an additional low-rank constraint. Here again, the method to derive factorizations for completely positive matrices can be used, albeit with some further adjustments, introduced in Section 8.1.
Moreover, we will see that even for the general case of deriving a nonnegative matrix factorization for a given rectangular matrix A, the key aspects of the completely positive factorization approach can be used. To this end, it becomes necessary to extend the idea of finding a completely positive factorization such that it can be used for rectangular matrices. This yields an applicable algorithm for nonnegative matrix factorization in Section 8.2. Numerical results for this approach will suggest that the presented algorithms and techniques to obtain completely positive matrix factorizations can be extended to general nonnegative factorization problems.
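The projection idea behind such factorization methods can be sketched in a few lines. The following is a simplified, heuristic variant (illustrative names and parameters, not the thesis algorithm itself): it alternates between the orbit of one fixed square root of A under orthogonal transformations, projected via the orthogonal Procrustes problem, and the nonnegative orthant, projected by clipping.

```python
import numpy as np

def cp_factorization(A, r, iters=500, seed=0):
    """Alternating-projections sketch for A = B B^T with B >= 0.

    Alternates between the set {B0 Q : Q orthogonal} of exact square roots
    of A and the nonnegative orthant. Returns the last pair (B, C):
    B always satisfies B B^T = A, C is always entrywise nonnegative, and
    the gap ||B - C|| measures how close we are to a cp-factorization.
    """
    n = A.shape[0]
    # Initial square root of A (not necessarily nonnegative), padded to r >= n columns.
    w, V = np.linalg.eigh(A)
    B0 = V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))
    B0 = np.hstack([B0, np.zeros((n, r - n))])
    rng = np.random.default_rng(seed)
    C = np.clip(B0 + 0.01 * rng.standard_normal(B0.shape), 0.0, None)
    for _ in range(iters):
        # Project C onto the orbit: minimize ||B0 Q - C||_F over orthogonal Q
        # (orthogonal Procrustes: Q = U V^T from the SVD of B0^T C).
        U, _, Vt = np.linalg.svd(B0.T @ C)
        B = B0 @ U @ Vt
        # Project B onto the nonnegative orthant.
        C = np.clip(B, 0.0, None)
    return B, C
```

If the iteration converges, B and C coincide and either one is a certificate of complete positivity; in general only local convergence can be expected, in line with the semialgebraic convergence theory mentioned above.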
The dissertation "Cross-Border Leasing as an Instrument of Municipal Financing - A Financial Analysis with Particular Consideration of the Risks" deals, using the example of primarily tax-induced cross-border leasing (CBL), with an innovative, structured financing instrument that sits in the field of tension between the rule of law and the private-sector management of public actors. In such transactions, assets that are already financed and in operation are placed into variations of long-term leasing contracts. Through the skilful exploitation of tax attribution criteria, and by involving several jurisdictions, opportunities for profit shifting and tax optimization are created, with the additional income generated being shared among the parties. The study is guided by a comprehensive catalogue of research questions that examines the complex aspects of CBL in a multi-layered and interdisciplinary manner, both theoretically and practically by means of a case study. First, CBL is embedded in its municipal context. This is followed by a presentation of the object of study with regard to its basic structure, cash flows, contracting parties and their bilateral interconnections. In addition, the public-law implications of CBL as well as the regulatory requirements of municipal supervisory law are analysed. In the central empirical part of the dissertation, an idealized CBL transaction of a major German city is analysed as a case study: in a first scholarly analysis of an original transaction documentation, the structural parameters are examined in order to then determine the financial advantage of the transaction. The risks are classified into those that lie directly within the municipality's sphere of influence and can thus be minimized or avoided through its own active conduct, and those that are external from its point of view. The risk analysis is rounded off by an estimate of the maximum risk position in the form of the damage payments that the municipality must make in contractually agreed cases. The author determines the break-even point of the transaction and employs scenarios as well as mathematical models in order to weigh the inherent risks, in view of their cost consequences, carefully against the short-term advantage received. The study makes use of the well-established mathematical-statistical value-at-risk (VaR) approach, which quantifies market price risk by means of probability distributions. To arrive at valid results, the two best-known (non-parametric) VaR techniques are applied in order to estimate the potential performance fluctuations of the deposit value under given probabilities: historical simulation and the mathematically demanding Monte Carlo simulation. As a further development of the VaR model, the conditional VaR is also computed, which permits statements about the magnitude of the expected losses. From these results, the maximum financial risk position of the municipality with respect to the capital deposit is derived. Moreover, the CBL transaction is assessed as a whole within a mathematical model by comparing the financial advantage received with the default risks weighted by their probabilities of occurrence, taking into account the respective time of occurrence. This approach leads to a synthesis of the financial advantage and the risk measures VaR, expected shortfall and expected loss. The financial risk measures obtained lead to surprising results that clearly refute the proclaimed absence of risk and the supposedly attractive return potential of such transactions. From the insights gained, the author derives practical recommendations for action and hedging options for municipal decision-makers. The outlook presents the effects of the change in US tax law of February 2005 on existing transactions as well as on new business.
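The two risk measures named above can be illustrated by a minimal historical-simulation sketch (generic code, not the deposit data or models of the dissertation): the VaR is read off as an empirical quantile of the loss distribution, and the conditional VaR (expected shortfall) as the mean loss beyond it.

```python
import numpy as np

def historical_var_cvar(returns, alpha=0.99):
    """Historical-simulation VaR and conditional VaR at confidence level alpha.

    Losses are the negated returns; VaR_alpha is the empirical alpha-quantile
    of the losses (smallest loss whose cumulative probability reaches alpha),
    and CVaR_alpha is the mean of the losses at or beyond the VaR.
    """
    losses = np.sort(-np.asarray(returns, dtype=float))
    k = int(np.ceil(alpha * len(losses))) - 1   # 0-based index of the quantile
    var = losses[k]
    cvar = losses[k:].mean()
    return var, cvar
```

The Monte Carlo variant differs only in where the scenarios come from: simulated rather than historical returns are fed into the same quantile computation.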
The central F- and t-distributions are among the classical distributions of mathematical statistics. This thesis investigates generalizations of these distributions, the so-called doubly noncentral F- and t-distributions, which are of importance in statistical test theory. The fact that the associated probability densities are available only in the form of parameter-integral and double-series representations poses a major challenge for the study of their analytic properties. Using techniques from the theory of sign-regular functions, the strictly unimodal shape of the density function, previously conjectured but derived only from approximations, is established for a large class of doubly noncentral distributions. This result permits the study of the uniquely determined mode as a function of certain noncentrality parameters. Here, the theory of sign-regular functions again proves to be an important tool for establishing monotone dependencies.
The discretization of optimal control problems governed by partial differential equations typically leads to large-scale optimization problems. We consider flow control involving the time-dependent Navier-Stokes equations as state equation, which exhibits exactly this property. In order to avoid the difficulties of dealing with large-scale (discretized) state equations during the optimization process, a reduction of the number of state variables can be achieved by employing a reduced order modelling technique. Using the snapshot proper orthogonal decomposition (POD) method, one obtains a low-dimensional model for the computation of an approximate solution to the state equation. In fact, often a small number of POD basis functions suffices to obtain a satisfactory level of accuracy in the reduced order solution. However, the small number of degrees of freedom in a POD based reduced order model also constitutes its main weakness for optimal control purposes. Since a single reduced order model is based on the solution of the Navier-Stokes equations for a specified control, it might be an inadequate model when the control (and consequently also the actual corresponding flow behaviour) is altered, implying that the range of validity of a reduced order model, in general, is limited. Thus, one is likely to encounter unreliable reduced order solutions when a control problem is solved on the basis of one single reduced order model. To resolve this dilemma, we propose to use a trust-region proper orthogonal decomposition (TRPOD) approach. By embedding the POD based reduced order modelling technique into a trust-region framework with general model functions, we obtain a mechanism for updating the reduced order models during the optimization process, enabling the reduced order models to represent the flow dynamics as altered by the control.
In fact, a rigorous convergence theory for the TRPOD method is obtained which justifies this procedure also from a theoretical point of view. Benefiting from the trust-region philosophy, the TRPOD method saves a substantial amount of computational work during the control problem solution, since the original state equation only has to be solved when the model function in the trust-region framework is updated. The optimization process itself is based entirely on reduced order information.
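The snapshot POD construction described above can be sketched in a few lines (generic code with illustrative names; the thesis works with Navier-Stokes snapshots, while here an arbitrary snapshot matrix stands in): the POD basis consists of the leading left singular vectors of the snapshot matrix, truncated once a prescribed fraction of the snapshot energy is captured, and a reduced operator is obtained by Galerkin projection onto this basis.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Snapshot POD: orthonormal basis capturing a prescribed energy fraction.

    snapshots: (n_dof, n_snap) array whose columns are state snapshots.
    Returns (Phi, s): basis Phi of r leading left singular vectors, where r
    is the smallest rank capturing the given energy fraction, and all
    singular values s.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s

def reduce_operator(A, Phi):
    """Galerkin projection of a linear operator A onto the POD subspace."""
    return Phi.T @ A @ Phi
```

For nonlinear state equations such as Navier-Stokes, the projected operator is only one ingredient of the reduced order model; the point of the sketch is merely that the reduced dimension r is typically far smaller than the number of degrees of freedom, which is what makes repeated solves inside the trust-region loop cheap.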