510 Mathematics
Differential equations are based on local interactions, and their solutions necessarily possess a certain amount of regularity. However, various natural phenomena are not well described by local models. An important class of models describing long-range interactions are the so-called nonlocal models, which are the subject of this work.
The nonlocal operators considered here are integral operators with a finite range of interaction and the resulting models can be applied to anomalous diffusion, mechanics and multiscale problems.
While the range of applications is vast, the practical use of nonlocal models faces obstacles such as the high computational and algorithmic complexity of fundamental tasks. One of these is the assembly of finite element discretizations of truncated nonlocal operators.
The first contribution of this thesis is therefore an openly accessible, documented Python code which allows one to compute finite element approximations for nonlocal convection-diffusion problems with truncated interaction horizon.
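For illustration only, the following minimal sketch (ours, not the thesis' published code) assembles and solves a truncated nonlocal diffusion problem on a 1D grid with piecewise-constant elements and homogeneous volume constraints; the constant kernel, its scaling, and the mesh width are assumptions:

```python
import numpy as np

# Sketch: assemble a truncated nonlocal diffusion operator on (0, 1) with
# piecewise-constant elements and homogeneous volume constraints on a collar
# of width delta. The constant kernel with scaling 3/delta^3 (matching local
# diffusion in 1D) is an illustrative assumption.
delta = 0.1                                  # interaction horizon (assumed)
h = 0.01                                     # mesh width (assumed)
x = np.arange(-delta + h / 2, 1 + delta, h)  # cell midpoints, incl. collar
n = len(x)
gamma = 3.0 / delta**3                       # constant kernel on |x - y| <= delta

A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j and abs(x[i] - x[j]) <= delta:
            A[i, j] -= gamma * h             # interaction with cell j
            A[i, i] += gamma * h             # diagonal from (u(x_i) - u(x_j))

inside = (x > 0) & (x < 1)                   # interior dofs; the rest is collar
f = np.ones(inside.sum())                    # constant source term
u = np.linalg.solve(A[np.ix_(inside, inside)], f)  # u = 0 on the collar
```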
Another difficulty in the solution of nonlocal problems is that the discrete systems may be ill-conditioned which complicates the application of iterative solvers. Thus, the second contribution of this work is the construction and study of a domain decomposition type solver that is inspired by substructuring methods for differential equations. The numerical results are based on the abstract framework of nonlocal subdivisions which is introduced here and which can serve as a guideline for general nonlocal domain decomposition methods.
The publication of statistical databases is subject to legal regulations; e.g., national statistical offices are only allowed to publish data if the data cannot be attributed to individuals. Achieving this privacy standard requires anonymizing the data prior to publication. However, data anonymization inevitably leads to a loss of information, which should be kept minimal. In this thesis, we analyze the anonymization method SAFE used in the 2011 German census and propose a novel integer-programming-based anonymization method for nominal data.
In the first part of this thesis, we prove that a fundamental variant of the underlying SAFE optimization problem is NP-hard. This justifies the use of heuristic approaches for large data sets. In the second part, we propose a new anonymization method belonging to the class of microaggregation methods and specifically designed for nominal data. This microaggregation method replaces rows in a microdata set with representative values to achieve k-anonymity, ensuring that each data row is identical to at least k − 1 other rows. In addition to the overall dissimilarities of the data rows, the method accounts for errors in the resulting frequency tables, which are of high practical interest for nominal data. The method employs a typical two-step structure: it initially partitions the data set into clusters and subsequently replaces all cluster elements with representative values to achieve k-anonymity. For the partitioning step, we propose a column generation scheme followed by a heuristic, based on the dual information, to obtain an integer solution. For the aggregation step, we present a mixed-integer problem formulation to find cluster representatives, taking errors in a subset of frequency tables into account. Furthermore, we show a reformulation of the problem as a minimum edge-weighted maximal clique problem in a multipartite graph, which allows for a different perspective on the problem. Moreover, we formulate a mixed-integer program which combines the partitioning and the aggregation step and aims to minimize the sum of chi-squared errors in frequency tables.
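As a point of reference for the notion of k-anonymity used above, the following toy check (our illustration, with made-up data) verifies that every row pattern of a nominal microdata set occurs at least k times:

```python
from collections import Counter

# Minimal sketch: a microdata set is k-anonymous if every row is identical
# to at least k - 1 other rows, i.e. every row pattern occurs >= k times.
def is_k_anonymous(rows, k):
    counts = Counter(tuple(r) for r in rows)
    return all(c >= k for c in counts.values())

microdata = [                       # toy nominal data (hypothetical)
    ("female", "A", "urban"),
    ("female", "A", "urban"),
    ("male",   "B", "rural"),
    ("male",   "B", "rural"),
]
print(is_k_anonymous(microdata, k=2))   # True: each row has a partner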
Finally, an experimental study comparing the methods covered or developed in this work shows particularly strong results for the proposed method with respect to relative criteria, while SAFE shows its strength with respect to the maximum absolute error in frequency tables. We conclude that the inclusion of integer programming in the context of data anonymization is a promising direction to reduce the inevitable information loss inherent in anonymization, particularly for nominal data.
This dissertation is concerned with a novel type of branch-and-bound algorithm that differs from classical branch-and-bound algorithms in that branching is performed by adding non-negative penalty terms to the objective function rather than by adding further constraints. The thesis establishes the theoretical correctness of this algorithmic principle for several general classes of problems and evaluates the method on various concrete problem classes. For these problem classes, namely monotone and non-monotone mixed-integer linear complementarity problems and mixed-integer linear problems, the thesis presents several problem-specific improvements and evaluates them numerically. Furthermore, the thesis compares the new method with several benchmark methods, with largely good results, and gives an outlook on further areas of application and open research questions.
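To make the branching idea concrete, here is a small, self-contained sketch (our illustration for a tiny integer LP, not the thesis' algorithm or test set) in which each branching decision adds a non-negative penalty term rho*max(0, x_j - floor(v)) or rho*max(0, ceil(v) - x_j) to the objective instead of a bound constraint; each max-term is modeled by one LP slack variable:

```python
import math
import numpy as np
from scipy.optimize import linprog

# Toy problem: maximize x1 + x2 s.t. 2x1 + x2 <= 4, x1 + 3x2 <= 5, x >= 0,
# x integer. Branching via penalties in the objective, not via constraints.
c = np.array([-1.0, -1.0])
A = np.array([[2.0, 1.0], [1.0, 3.0]]); b = np.array([4.0, 5.0])
rho, best, best_x = 100.0, math.inf, None   # penalty weight (assumed large)

def solve_node(penalties):
    n, m = len(c), len(penalties)
    cc = np.concatenate([c, rho * np.ones(m)])      # penalize slacks s_i
    rows = [np.concatenate([A, np.zeros((len(b), m))], axis=1)]
    rhs = [b]
    for i, (j, split, sense) in enumerate(penalties):
        row = np.zeros(n + m)
        row[n + i] = -1.0                           # -s_i
        row[j] = 1.0 if sense == "down" else -1.0   # s_i >= +/-(x_j - split)
        rows.append(row[None, :])
        rhs.append(np.array([split if sense == "down" else -split]))
    return linprog(cc, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
                   bounds=[(0, None)] * (n + m))

stack = [[]]
while stack:
    pen = stack.pop()
    res = solve_node(pen)
    if not res.success or res.fun >= best - 1e-9:
        continue                                    # infeasible or dominated
    x = res.x[:len(c)]
    frac = [j for j in range(len(c)) if abs(x[j] - round(x[j])) > 1e-6]
    if not frac:
        best, best_x = res.fun, x.round()
        continue
    j = frac[0]; v = x[j]
    stack.append(pen + [(j, math.floor(v), "down")])  # penalize x_j > floor(v)
    stack.append(pen + [(j, math.ceil(v), "up")])     # penalize x_j < ceil(v)

print(best_x, -best)                                  # e.g. [2. 0.] with value 2
```

Since integer points respecting the branching decisions incur zero penalty, the penalized LP value is a valid bound for pruning; a full method must of course handle penalty growth and convergence with more care than this toy does.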
Let K be a compact subset of the complex plane. Then the family of polynomials P is dense in A(K), the space of all continuous functions on K that are holomorphic on the interior of K, endowed with the uniform norm, if and only if the complement of K is connected. This is the statement of Mergelyan's celebrated theorem.
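In symbols, the theorem just stated reads:

```latex
% Mergelyan's theorem, as stated above: the polynomials P are dense in
% (A(K), ||.||_K) if and only if the complement of K is connected.
\overline{\mathcal{P}}^{\,\|\cdot\|_{K}} = A(K)
\quad\Longleftrightarrow\quad
\mathbb{C}\setminus K \text{ is connected.}
```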
There are, however, situations where not all polynomials are required to approximate every f ∈ A(K) but where there are strict subspaces of P that are still dense in A(K). If, for example, K is a singleton, then the subspace of all constant polynomials is dense in A(K). On the other hand, if 0 is an interior point of K, then no strict subspace of P can be dense in A(K).
In between these extreme cases, the situation is much more complicated. It turns out that it is mostly determined by the geometry of K and its location in the complex plane which subspaces of P are dense in A(K). In Chapter 1, we give an overview of the known results.
Our first main theorem, which we will give in Chapter 3, deals with the case where the origin is not an interior point of K. We will show that if K is a compact set with connected complement and if 0 is not an interior point of K, then any subspace Q ⊂ P which contains the constant functions and all but finitely many monomials is dense in A(K).
There is a close connection between lacunary approximation and the theory of universality. At the end of Chapter 3, we will illustrate this connection by applying the above result to prove the existence of certain universal power series. To be specific, if K is a compact set with connected complement, if 0 is a boundary point of K and if A_0(K) denotes the subspace of A(K) of those functions that satisfy f(0) = 0, then there exists an A_0(K)-universal formal power series s, where A_0(K)-universal means that the family of partial sums of s forms a dense subset of A_0(K).
In addition, we will show that no formal power series is simultaneously universal for all such K.
The condition on the subspace Q in the main result of Chapter 3 is quite restrictive, but this should not be too surprising: The result applies to the largest possible class of compact sets.
In Chapter 4, we impose a further restriction on the compact sets under consideration, and this will allow us to weaken the condition on the subspace Q. The result that we are going to give is similar to one of those presented in the first chapter, namely the one due to Anderson. In his article “Müntz-Szasz type approximation and the angular growth of lacunary integral functions”, he gives a criterion for a subspace Q of P to be dense in A(K) where K is entirely contained in some closed sector with vertex at the origin.
We will consider compact sets with connected complement that are -- with the possible exception of the origin -- entirely contained in some open sector with vertex at the origin. What we are going to show is that if K \ {0} is contained in an open sector of opening angle 2α and if Λ is some subset of the nonnegative integers, then the span of {z ↦ z^λ : λ ∈ Λ} is dense in A(K) whenever 0 ∈ Λ and some Müntz-type condition is satisfied.
Conversely, we will show that if a similar condition is not satisfied, then we can always find a compact set K with connected complement such that K \ {0} is contained in some open sector of opening angle 2α and such that the span of {z ↦ z^λ : λ ∈ Λ} fails to be dense in A(K).
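For orientation, the classical Müntz theorem on an interval, which is the model for the "Müntz-type condition" mentioned above (the thesis' exact condition for sectors is not reproduced here), reads:

```latex
% Classical Müntz theorem on [0,1], stated for exponent sets containing 0:
0 = \lambda_0 < \lambda_1 < \lambda_2 < \cdots:\qquad
\overline{\operatorname{span}}\,\{x \mapsto x^{\lambda_k} : k \geq 0\} = C([0,1])
\iff \sum_{k \geq 1} \frac{1}{\lambda_k} = \infty.
```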
In common shape optimization routines, deformations of the computational mesh usually suffer from a decrease in mesh quality or even destruction of the mesh. To mitigate this, we propose a theoretical framework using so-called pre-shape spaces. This opens the door to a unified theory of shape optimization and of problems related to parameterization and mesh quality. With this, we stay in the free-form approach of shape optimization, in contrast to parameterized approaches that limit the possible shapes. The concept of pre-shape derivatives is defined, and corresponding structure and calculus theorems are derived, which generalize classical shape optimization and its calculus. Pre-shape derivatives feature tangential as well as normal directions, in contrast to classical shape derivatives, which feature only normal directions on shapes. Techniques from classical shape optimization and calculus are shown to carry over to this framework, and are collected in generality for future reference.
A pre-shape parameterization tracking problem class for mesh quality is introduced, which can be solved using pre-shape derivatives. This class allows for non-uniform, user-prescribed adaptations of the shape and hold-all domain meshes, and it acts as a regularizer for classical shape objectives. Existence of regularized solutions is guaranteed, and corresponding optimal pre-shapes are shown to correspond to optimal shapes of the original problem while additionally achieving the user-prescribed parameterization.
We present shape gradient system modifications which allow simultaneous numerical shape optimization and mesh quality improvement. Further, consistency of the modified pre-shape gradient systems is established. The computational burden of our approach is limited, since no additional solution of possibly larger (non-)linear systems for regularized shape gradients is necessary. We implement and compare these pre-shape gradient regularization approaches for a 2D problem which is prone to mesh degeneration. As our approach does not depend on the choice of forms to represent shape gradients, we employ and compare weak linear elasticity and weak quasilinear p-Laplacian pre-shape gradient representations.
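As an illustration of the weak linear elasticity representation, here is a sketch in legacy FEniCS (our illustration, not the thesis' implementation; the Lamé parameters, boundary fixing, and the shape-derivative density f are assumptions):

```python
from dolfin import *

# Sketch: represent a shape derivative given as a boundary form
# dJ(W) = \int f . W ds by the solution V of the linear elasticity
# system a(V, W) = dJ(W) for all test fields W.
mesh = UnitSquareMesh(32, 32)
Vh = VectorFunctionSpace(mesh, "CG", 1)
u, w = TrialFunction(Vh), TestFunction(Vh)
mu, lmbda = Constant(1.0), Constant(0.1)        # Lamé parameters (assumed)

def eps(v):
    return sym(grad(v))

a = 2*mu*inner(eps(u), eps(w))*dx + lmbda*div(u)*div(w)*dx
f = Expression(("0.0", "-1.0"), degree=1)       # hypothetical dJ density
dJ = inner(f, w)*ds                             # shape derivative as boundary form
bc = DirichletBC(Vh, Constant((0.0, 0.0)),
                 "on_boundary && near(x[1], 0.0)")  # pin bottom edge (assumed)
V = Function(Vh)
solve(a == dJ, V, bc)                           # V: elasticity gradient representation
```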
We also introduce a Quasi-Newton-ADM-inspired algorithm for mesh quality, which guarantees sufficient adaptation of meshes to user specifications during the routines. It is applicable in addition to the simultaneous mesh regularization techniques.
Unrelated to mesh regularization techniques, we consider shape optimization problems constrained by elliptic variational inequalities of the first kind, so-called obstacle-type problems. In general, standard necessary optimality conditions cannot be formulated in a straightforward manner for such semi-smooth shape optimization problems. Under appropriate assumptions, we prove existence and convergence of adjoints for smooth regularizations of the VI constraint. Moreover, we derive shape derivatives for the regularized problem and prove convergence to a limit object. Based on this analysis, an efficient optimization algorithm is devised and tested numerically.
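To illustrate the regularization idea in the simplest setting (a 1D obstacle problem with an assumed load and obstacle; this is our toy, not the thesis' scheme), one can smooth the constraint by a penalty and apply Newton's method:

```python
import numpy as np

# Toy 1D obstacle problem -u'' >= f, u >= psi, u(0) = u(1) = 0, treated by a
# smooth penalty (1/eps) * m_dlt(psi - u), where m_dlt is a softplus-smoothed
# max(0, .); solved with Newton's method. All parameters are illustrative.
n, eps, dlt = 99, 1e-4, 1e-3
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1)
     - np.diag(np.ones(n-1), -1)) / h**2        # 1D Dirichlet Laplacian
f = -30.0 * np.ones(n)                          # downward load (assumed)
psi = -0.05 - 0.4*(x - 0.5)**2                  # hypothetical obstacle

u = np.zeros(n)
for _ in range(100):
    g = (psi - u) / dlt
    m = dlt * np.logaddexp(0.0, g)              # smooth max(0, psi - u)
    dm = np.exp(g - np.logaddexp(0.0, g))       # its derivative (a sigmoid)
    F = A @ u - f - m / eps                     # residual of penalized problem
    J = A + np.diag(dm / eps)
    du = np.linalg.solve(J, -F)
    u += du
    if np.linalg.norm(du) < 1e-12:
        break
# where the unconstrained solution would dip below psi, the penalty
# pushes u up onto the obstacle
```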
All of the previous pre-shape regularization techniques are applied to a variational inequality constrained shape optimization problem, where we also create customized targets for increased mesh adaptation of the changing embedded shapes and of the active set boundaries of the constraining variational inequality.
Hybrid modelling, in general, describes the combination of at least two different methods to solve one specific task. In this work, hybrid models combine sophisticated, well-studied mathematical methods with deep neural networks to solve parameter estimation tasks. To combine the two, we exploit the structure of artificially generated acceleration data of an approximate vehicle model, the Quarter-Car-Model. The acceleration of individual components within a coupled dynamical system can be described by a second-order ordinary differential equation involving the velocities and displacements of the coupled states, scaled by the spring and damping coefficients of the system. An appropriate numerical integration scheme can then be used to simulate discrete acceleration profiles of the Quarter-Car-Model with random variations of the system parameters. Given explicit knowledge of the data structure, one can then investigate under which conditions it is possible to estimate the parameters of the dynamical system for a set of randomly generated data samples. We test whether neural networks are capable of solving parameter estimation problems in general, or whether they can be used to solve several sub-tasks that support a state-of-the-art parameter estimation method. We present hybrid models for parameter estimation under uncertainties, including, for instance, measurement noise or incomplete measurements, which combine knowledge about the data structure with several neural networks for robust parameter estimation within a dynamical system.
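The data-generation step described here can be sketched as follows (parameter ranges, road excitation, and sample counts are illustrative assumptions, not the thesis' setup):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: simulate a quarter-car model (sprung mass ms on suspension ks, cs;
# unsprung mass mu on tire stiffness kt) and record acceleration profiles
# for randomly drawn parameters.
rng = np.random.default_rng(0)

def quarter_car(t, y, ms, mu, ks, cs, kt, road):
    xs, vs, xu, vu = y                        # positions and velocities
    fs = ks*(xu - xs) + cs*(vu - vs)          # suspension force on sprung mass
    return [vs, fs/ms, vu, (-fs + kt*(road(t) - xu))/mu]

road = lambda t: 0.05*np.sin(2*np.pi*2.0*t)   # hypothetical road profile [m]
t_eval = np.linspace(0.0, 5.0, 1000)

samples = []
for _ in range(10):                           # random parameter variation
    ms, mu = rng.uniform(250, 350), rng.uniform(30, 60)            # [kg]
    ks = rng.uniform(1e4, 3e4)                                     # [N/m]
    cs, kt = rng.uniform(1e3, 3e3), rng.uniform(1.5e5, 2.5e5)
    sol = solve_ivp(quarter_car, (0.0, 5.0), [0, 0, 0, 0],
                    t_eval=t_eval, args=(ms, mu, ks, cs, kt, road))
    xs, vs, xu, vu = sol.y
    acc_s = (ks*(xu - xs) + cs*(vu - vs)) / ms   # sprung-mass acceleration
    samples.append((acc_s, (ms, mu, ks, cs, kt)))  # (data, ground truth)
```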
This paper mainly studies two topics: linear complementarity problems for modeling electricity market equilibria and optimization under uncertainty. We consider both perfectly competitive and Nash–Cournot models of electricity markets and study their robustifications using strict robustness and the Γ-approach. For three out of the four combinations of economic competition and robustification, we derive algorithmically tractable convex optimization counterparts that have a clear-cut economic interpretation. In the case of perfect competition, this result corresponds to the two classic welfare theorems, which also apply in both considered robust cases that again yield convex robustified problems. Using the mentioned counterparts, we can also prove the existence and, in some cases, uniqueness of robust equilibria. Surprisingly, it turns out that there is no such economically sensible counterpart for the case of Γ-robustifications of Nash–Cournot models. Thus, an analog of the welfare theorems does not hold in this case. Finally, we provide a computational case study that illustrates the different effects of the combination of economic competition and uncertainty modeling.
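The correspondence with the welfare theorems can be checked numerically in a stylized single-good market (our illustration with made-up numbers, not the paper's model):

```python
import numpy as np

# Stylized check of the first welfare theorem: with linear inverse demand
# p(q) = a - b*q and constant marginal cost c, the perfectly competitive
# equilibrium (price = marginal cost) maximizes total welfare, while the
# Nash-Cournot (here: monopoly) output falls short of it.
a, b, c = 10.0, 0.5, 2.0                  # illustrative numbers

q_comp = (a - c) / b                      # competitive equilibrium quantity
q_cournot = (a - c) / (2 * b)             # single-firm Cournot quantity

q = np.linspace(0.0, 20.0, 200001)
welfare = a*q - 0.5*b*q**2 - c*q          # consumer + producer surplus
q_welfare = q[np.argmax(welfare)]

print(q_comp, q_welfare)                  # both 16.0: equilibrium = welfare optimum
print(q_cournot)                          # 8.0: Cournot output is inefficient
```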
In order to classify smooth foliated manifolds, i.e., smooth manifolds equipped with a smooth foliation, we introduce the de Rham cohomologies of smooth foliated manifolds. These cohomologies are built in a similar way to the de Rham cohomologies of smooth manifolds. We develop some tools to compute them. For example, we prove a Mayer-Vietoris theorem for foliated de Rham cohomology and show that these cohomologies are invariant under integrable homotopy. A generalization of a known Künneth formula, which relates the cohomologies of a product foliation to those of its factors, is discussed. In particular, this involves a splitting theory for sequences of Fréchet spaces and a theory of projective spectra. We also prove that the foliated de Rham cohomology is isomorphic to the Čech-de Rham cohomology and to the Čech cohomology of leafwise constant functions of an underlying so-called good cover.
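In analogy with the classical de Rham case, the Mayer-Vietoris theorem referred to above yields a long exact sequence of the following shape (for an open cover M = U ∪ V of the foliated manifold; this is the classical template, not the thesis' precise statement):

```latex
\cdots \longrightarrow H^{k}(M) \longrightarrow H^{k}(U) \oplus H^{k}(V)
\longrightarrow H^{k}(U \cap V) \longrightarrow H^{k+1}(M) \longrightarrow \cdots
```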
This work studies typical mathematical challenges occurring in the modeling and simulation of manufacturing processes of paper or industrial textiles. In particular, we consider three topics: approximate models for the motion of small inertial particles in an incompressible Newtonian fluid, effective macroscopic approximations for a dilute particle suspension contained in a bounded domain accounting for a non-uniform particle distribution and particle inertia, and possibilities for a reduction of computational cost in the simulations of slender elastic fibers moving in a turbulent fluid flow.
We consider the full particle-fluid interface problem given in terms of the Navier-Stokes equations coupled to momentum equations of a small rigid body. By choosing an appropriate asymptotic scaling for the particle-fluid density ratio and using an asymptotic expansion for the solution components, we derive approximations of the original interface problem. The approximate systems differ according to the chosen scaling of the density ratio in their physical behavior allowing the characterization of different inertial regimes.
We extend the asymptotic approach to the case of many particles suspended in a Newtonian fluid. Under specific assumptions on the combination of particle size and particle number, we derive asymptotic approximations of this system. The approximate systems describe the particle motion, which allows us to use a mean field approach in order to formulate the continuity equation for the particle probability density function. The coupling of the latter with the approximation of the fluid momentum equation then reveals a macroscopic suspension description which accounts for non-uniform particle distributions in space and for small particle inertia.
A slender fiber in a turbulent air flow can be modeled as a stochastic inextensible one-dimensionally parametrized Kirchhoff beam, i.e., by a stochastic partial differential algebraic equation. Its simulations involve the solution of large non-linear systems of equations by Newton's method. In order to decrease the computational time, we explore different methods for the estimation of the solution. Additionally, we apply smoothing techniques to the Wiener Process in order to regularize the stochastic force driving the fiber, exploring their respective impact on the solution and performance. We also explore the applicability of the Wiener chaos expansion as a solution technique for the simulation of the fiber dynamics.
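A minimal version of the smoothing step mentioned above looks as follows (the kernel and discretization are illustrative choices, not the thesis' technique in detail):

```python
import numpy as np

# Sketch: smooth a sampled Wiener process by convolution with a short
# averaging kernel before using it as a driving force.
rng = np.random.default_rng(1)
n, dt = 10_000, 1e-4
w = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))    # Wiener path on [0, 1]

width = 51                                        # smoothing window (assumed)
kernel = np.ones(width) / width
w_smooth = np.convolve(w, kernel, mode="same")    # moving-average smoothing
force = np.gradient(w_smooth, dt)                 # milder driving force
```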
This thesis addresses three different topics from the fields of mathematical finance, applied probability and stochastic optimal control. Correspondingly, it is subdivided into three independent main chapters each of which approaches a mathematical problem with a suitable notion of a stochastic particle system.
In Chapter 1, we extend the branching diffusion Monte Carlo method of Henry-Labordère et al. (2019) to the case of parabolic PDEs with mixed local-nonlocal analytic nonlinearities. We investigate branching diffusion representations of classical solutions, and we provide sufficient conditions under which the branching diffusion representation solves the PDE in the viscosity sense. Our theoretical setup directly leads to a Monte Carlo algorithm, whose applicability is showcased in two stylized high-dimensional examples. As our main application, we demonstrate how our methodology can be used to value financial positions with defaultable, systemically important counterparties.
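For intuition, here is a sketch of the representation in its classical special case (McKean's branching Brownian motion for the KPP nonlinearity), not the mixed local-nonlocal setting or the algorithm of the chapter:

```python
import numpy as np

# Classical special case: u_t = 0.5*u_xx + beta*(u^2 - u), u(0,.) = g, has
# the representation u(t, x) = E[ prod_i g(x + X_i(t)) ], where the X_i are
# the particles of a binary branching Brownian motion with rate beta.
rng = np.random.default_rng(42)
beta, t_final = 1.0, 0.5
g = lambda y: 0.5 * (1.0 + np.tanh(y))        # initial datum (illustrative)

def one_sample(x):
    particles = [(0.0, x)]                    # (birth time, position)
    prod = 1.0
    while particles:
        s, pos = particles.pop()
        tau = rng.exponential(1.0 / beta)     # particle lifetime
        if s + tau >= t_final:                # survives to the horizon
            pos += rng.normal(0.0, np.sqrt(t_final - s))
            prod *= g(pos)
        else:                                 # dies and branches into two
            pos += rng.normal(0.0, np.sqrt(tau))
            particles += [(s + tau, pos), (s + tau, pos)]
    return prod

u_est = np.mean([one_sample(0.0) for _ in range(20_000)])
print(u_est)                                  # Monte Carlo estimate of u(t_final, 0)
```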
In Chapter 2, we formulate and analyze a mathematical framework for continuous-time mean field games with finitely many states and common noise, including a rigorous probabilistic construction of the state process. The key insight is that we can circumvent the master equation and reduce the mean field equilibrium to a system of forward-backward systems of (random) ordinary differential equations by conditioning on common noise events. We state and prove a corresponding existence theorem, and we illustrate our results in three stylized application examples. In the absence of common noise, our setup reduces to that of Gomes, Mohr and Souza (2013) and Cecchin and Fischer (2020).
In Chapter 3, we present a heuristic approach to tackle stochastic impulse control problems in discrete time. Based on the work of Bensoussan (2008), we reformulate the classical Bellman equation of stochastic optimal control in terms of a discrete-time quasi-variational inequality (QVI), and we prove a corresponding verification theorem. Taking the resulting optimal impulse control as a starting point, we devise a self-learning algorithm that estimates the continuation and intervention region of such a problem. Its key features are that it explores the state space of the underlying problem by itself and successively learns the behavior of the optimally controlled state process. For illustration, we apply our algorithm to a classical example problem, and we give an outlook on open questions to be addressed in future research.