510 Mathematik
Refine
Year of publication
Document Type
- Doctoral Thesis (75)
- Habilitation (2)
- Article (1)
Keywords
- Optimierung (12)
- Approximationstheorie (7)
- Approximation (6)
- Funktionentheorie (6)
- Partielle Differentialgleichung (6)
- Universalität (6)
- Funktionalanalysis (5)
- universal functions (5)
- Analysis (4)
- Numerische Mathematik (4)
- Numerische Strömungssimulation (4)
- Optimale Kontrolle (4)
- Quadratische Optimierung (4)
- Shape Optimization (4)
- Gestaltoptimierung (3)
- Hadamard product (3)
- Kompositionsoperator (3)
- Navier-Stokes-Gleichung (3)
- Nichtlineare Optimierung (3)
- Operatortheorie (3)
- Parameterschätzung (3)
- Sequentielle quadratische Optimierung (3)
- Trust-Region-Algorithmus (3)
- Universelle Funktionen (3)
- binomial (3)
- proper orthogonal decomposition (3)
- Adjungierte Differentialgleichung (2)
- Aerodynamic Design (2)
- Approximation im Komplexen (2)
- Baire's theorem (2)
- Binomial (2)
- Binomialverteilung (2)
- Dichtesatz (2)
- Faber series (2)
- Faberreihen (2)
- GPU (2)
- Hadamard, Jacques (2)
- Hadamardprodukt (2)
- Homologische Algebra (2)
- Hyperzyklizität (2)
- Konvexe Optimierung (2)
- Laurentreihen (2)
- Maschinelles Lernen (2)
- Mathematik (2)
- Monte-Carlo-Simulation (2)
- Navier-Stokes equations (2)
- Neuronales Netz (2)
- Numerical Optimization (2)
- One-Shot (2)
- POD-Methode (2)
- Parameteridentifikation (2)
- Regularisierung (2)
- Shape Spaces (2)
- Simulation (2)
- Statistik (2)
- Stochastischer Prozess (2)
- Strömungsmechanik (2)
- convergence (2)
- functional analysis (2)
- lacunary approximation (2)
- laurent series (2)
- optimal control (2)
- partial integro-differential equations (2)
- prescribed approximation curves (2)
- shape optimization (2)
- universality (2)
- universelle Funktionen (2)
- vorgegebene Approximationswege (2)
- Überkonvergenz (2)
- Adjoint (1)
- Adjoint Equation (1)
- Adjoint Method (1)
- Alternierende Projektionen (1)
- Amtliche Statistik (1)
- Analytisches Funktional (1)
- Arbitrage-Pricing-Theorie (1)
- Asymptotik (1)
- Ausdehnungsoperator (1)
- Auslöschung (1)
- Banach Algebras (1)
- Banach space (1)
- Banach-Algebra (1)
- Banach-Raum (1)
- Berechnungskomplexität (1)
- Berry-Esseen (1)
- Birkhoff functions (1)
- Birkhoff-Funktionen (1)
- Borel transform (1)
- Branch-and-Bound-Methode (1)
- Branching Diffusion (1)
- Bregman distance (1)
- Bregman-Distanz (1)
- Brownian Motion (1)
- Brownsche Bewegung (1)
- Buehler, Robert J. (1)
- Bündel-Methode (1)
- Calibration (1)
- Cancellation (1)
- Cech cohomology of leafwise constant functions (1)
- Cech-de Rham cohomology (1)
- Cesàro-Mittel (1)
- Chaotisches System (1)
- Codebuch (1)
- Combinatorial Optimization (1)
- Common Noise (1)
- Composition algebra (1)
- Composition operator (1)
- Computational Fluid Dynamics (1)
- Computational complexity (1)
- Convergence (1)
- Copositive, Infinite Dimension (1)
- Copositive und Vollständig positive Optimierung (1)
- Couple constraints (1)
- Césaro-Mittel (1)
- Decomposition (1)
- Dekomposition (1)
- Derivat <Wertpapier> (1)
- Dichte <Stochastik> (1)
- Differentialgeometrie (1)
- Direkte numerische Simulation (1)
- Discontinuous Galerkin (1)
- Discrete Optimization, Linear Programming, Integer Programming, Extended Formulation, Graph Theory, Branch & Bound (1)
- Discrete optimization (1)
- Discrete-Time Impulse Control (1)
- Diskontinuierliche Galerkin-Methode (1)
- Diskretisierung (1)
- Distribution (1)
- Distribution <Funktionalanalysis> (1)
- Doppelt nichtzentrale F-Verteilung (1)
- Doppelt nichtzentrale t-Verteilung (1)
- Doubly noncentral F-distribution (1)
- Doubly noncentral t-distribution (1)
- Dualitätstheorie (1)
- Elastizität (1)
- Electricity market equilibrium models (1)
- Entire Function (1)
- Error Estimates (1)
- Error function (1)
- Ersatzmodellierung (1)
- Extended sign regular (1)
- Extensionsoperatoren (1)
- Faltungsoperator (1)
- Fehlerabschätzung (1)
- Fehleranalyse (1)
- Fehlerfunktion (1)
- Finanzmathematik (1)
- Fledermäuse (1)
- Formenräume (1)
- Formoptimierung (1)
- Fréchet-Algebra (1)
- Functor (1)
- Funktor (1)
- Gaussian measures (1)
- Gauß-Maß (1)
- Gebietszerlegung (1)
- Gemischt-ganzzahlige Optimierung (1)
- Gittererzeugung (1)
- Globale Konvergenz (1)
- Globale Optimierung (1)
- Graphentheorie (1)
- Graphikprozessor (1)
- Grundwasserstrom (1)
- Gärung (1)
- HPC (1)
- Hadamard cycle (1)
- Hadamardzyklus (1)
- Hassler Whitney (1)
- Hauptkomponentenanalyse (1)
- Hybrid Modelling (1)
- Hypercyclicity (1)
- Hypergeometric 3-F-1 Polynomials (1)
- Hypergeometrische 3-F-1 Polynome (1)
- Hypergeometrische Funktionen (1)
- Hypoelliptischer Operator (1)
- Individuenbasiertes Modell (1)
- Induktiver Limes (1)
- Inkorrekt gestelltes Problem (1)
- Innere-Punkte-Methode (1)
- Integer Optimization (1)
- Integrodifferentialgleichung (1)
- Intervallalgebra (1)
- Kegel (1)
- Kleinman (1)
- Kombinatorische Optimierung (1)
- Komplexe Approximation (1)
- Kompositionsalgebra (1)
- Konfidenzbereich (1)
- Konfidenzintervall (1)
- Konfidenzintervalle (1)
- Konfluente hypergeometrische Funktion (1)
- Kontrolltheorie (1)
- Konvektions-Diffusionsgleichung (1)
- Konvergenz (1)
- Konvergenztheorie (1)
- Korovkin-Satz (1)
- Kriging (1)
- Krylov subspace methods (1)
- Krylov-Verfahren (1)
- LB-Algebra (1)
- Laplace Method (1)
- Laplace Methode (1)
- Laplace-Differentialgleichung (1)
- Level Set Methode (1)
- Level constraints (1)
- Linear complementarity problems (1)
- Lineare Dynamik (1)
- Lineare Funktionalanalysis (1)
- Linearer partieller Differentialoperator (1)
- Lückenapproximation (1)
- Lückenreihe (1)
- Markov Inkrement (1)
- Markov-Kette (1)
- Matching (1)
- Matching polytope (1)
- Matrixcone (1)
- Matrixzerlegung (1)
- Mean Field Games (1)
- Mehrgitterverfahren (1)
- Mellin transformation (1)
- Mellin-Transformierte (1)
- Menage (1)
- Mergelyan (1)
- Mesh Generation (1)
- Mesh Quality (1)
- Methode der kleinsten Quadrate (1)
- Methode der logarithmischen Barriere (1)
- Mischung (1)
- Mittag-Leffler Funktion (1)
- Mittag-Leffler function (1)
- Mixed Local-Nonlocal PDE (1)
- Mixed-integer optimization (1)
- Modellierung (1)
- Modellprädiktive Regelung (1)
- Modified Bessel function (1)
- Modifizierte Besselfunktion (1)
- Monte Carlo Simulation (1)
- Monte-Carlo Methods (1)
- Multilineare Algebra (1)
- Multinomial (1)
- Multiplikationssatz (1)
- Ménage Polynome (1)
- Ménage Polynomials (1)
- Nash–Cournot competition (1)
- Nebenbedingung (1)
- Newton (1)
- Newton-Verfahren (1)
- Nichtfortsetzbare Potenzreihe (1)
- Nichtglatte Optimierung (1)
- Nichtkonvexe Optimierung (1)
- Nonlinear Optimization (1)
- Nonlocal Diffusion (1)
- Nonlocal convection-diffusion (1)
- Normalverteilung (1)
- Nullstellen (1)
- Numerisches Verfahren (1)
- Operations Research (1)
- Optimal Control on Unbounded Space Domains (1)
- Optimierung bei nichtlinearen partiellen Differentialgleichungen (1)
- Optimierung unter Unsicherheiten (1)
- Optimization under Uncertainty (1)
- Optionspreis (1)
- Orthogonale Zerlegung (1)
- Overconvergence (1)
- Overconvergent power series and matrix-transforms (1)
- P-Konvexität für Träger (1)
- P-Konvexität für singuläre Träger (1)
- P-convexity for singular supports (1)
- P-convexity for supports (1)
- PDE Beschränkungen (1)
- PDE Constraints (1)
- PDE-constrained optimization (1)
- PIDE constrained Optimal Control (1)
- Parameter dependence of solutions of linear partial differential equations (1)
- Parameterabhängige Lösungen linearer partieller Differentialgeichungen (1)
- Parameterabhängigkeit (1)
- Parametrische Optimierung (1)
- Penalty-Methode (1)
- Perfect competition (1)
- Poisson (1)
- Polyeder (1)
- Polynom (1)
- Polynom-Interpolationsverfahren (1)
- Populationsmodellierung (1)
- Potenzialtheorie (1)
- Prediction (1)
- Projective Limit (1)
- Projektiver Limes (1)
- Proper Orthogonal Decomposition (1)
- Proximal-Punkt-Verfahren (1)
- Quantisierung (1)
- Quantisierungkugel (1)
- Quantisierungsradius (1)
- Quantization (1)
- Randverhalten (1)
- Rechteckwahrscheinlichkeit (1)
- Regression (1)
- Regressionsanalyse (1)
- Regressionsmodell (1)
- Regularisierungsverfahren (1)
- Robust optimization (1)
- Robuste Statistik (1)
- Robustheit (1)
- Rundungsfehler (1)
- Scan Statistik (1)
- Schalenkonstruktionen (1)
- Schnittebenen (1)
- Schätzfunktion (1)
- Schätztheorie (1)
- Selbst-Concordanz (1)
- Semiinfinite Optimierung (1)
- Shape Calculus (1)
- Shape Kalkül (1)
- Shape Optimization (1)
- Shape SQP Methods (1)
- Small area estimation (1)
- Spatial Ramsey Model (1)
- Spektrum <Mathematik> (1)
- Spezielle Funktionen (1)
- Splitting (1)
- Stark stetige Halbgruppe (1)
- Stichprobe (1)
- Stochastic Differential Equation (1)
- Stochastische Approximation (1)
- Stochastische Differentialgleichungen (1)
- Stochastische Konvergenz (1)
- Stochastische Quantisierung (1)
- Stochastische optimale Kontrolle (1)
- Strukturoptimierung (1)
- Subset Selection (1)
- Survey Statistics (1)
- Survey-Statistik (1)
- Taylor Shift Operator (1)
- Taylor shift operator (1)
- Topological Algebra (1)
- Topologieoptimierung (1)
- Topologische Algebra (1)
- Topologische Algebra mit Gewebe (1)
- Topologische Sensitivität (1)
- Transaktionskosten (1)
- Transitivität (1)
- Trust Region (1)
- Ueberkonvergenz (1)
- Ultradistribution (1)
- Unimodality (1)
- Unimodalität (1)
- Universal approximation (1)
- Universal functions (1)
- Universal overconvergence (1)
- Universal power series (1)
- Universalitäten (1)
- Universelle Approximation (1)
- Universelle Funktion (1)
- Universelle Potenzreihen (1)
- Universelle trigonometrische Reihe (1)
- Universelle ueberkonvergente Potenzreihen und Matrix-Transformierte (1)
- Universelle Überkonvergenz (1)
- Variationsungleichung (1)
- Versuchsplanung (1)
- Verteilungsapproximation (1)
- Volkszählung (1)
- Vorkonditionierung (1)
- Vorzeichenreguläre Funktionen (1)
- Wahrscheinlichkeitsverteilung (1)
- Webbed Spaces (1)
- Weingärung (1)
- Wertpapier (1)
- Whitney jets (1)
- Whitney's extension problem (1)
- Whitneys Extensionsproblem (1)
- Windkraftwerk (1)
- Zwillingsformel (1)
- alternating projections (1)
- amarts (1)
- analytic functional (1)
- approximation (1)
- approximation in the complex plane (1)
- asymptotic analysis (1)
- asymptotically optimal codebooks (1)
- asymptotisch optimale Codebücher (1)
- auxiliary problem principle (1)
- binary (1)
- boundary behavior (1)
- branch-and-bound (1)
- bundle-method (1)
- combinatorial optimization (1)
- completely positive (1)
- completely positive cone (1)
- completely positive modelling and optimization (1)
- complex analysis (1)
- complex approximation (1)
- complex dynamics (1)
- complexity reduction (1)
- complementarity (1)
- composition operator (1)
- computational fluid dynamics (1)
- confidence intervals (1)
- confidence region (1)
- confluent hypergeometric function (1)
- convergence theory (1)
- convolution operator (1)
- copositive cone (1)
- copositive optimization (1)
- cutting planes (1)
- de Rham cohomology (1)
- design of experiments (1)
- dilute particle suspension (1)
- domain decomposition (1)
- eigenfunction expansion (1)
- exponential type (1)
- extension operator (1)
- final set (1)
- financial derivatives (1)
- finite element method (1)
- flow control (1)
- foliated manifolds (1)
- fractional Poisson equation (1)
- frequently hypercyclic operator (1)
- ganze Funktion (1)
- gap power series (1)
- gewöhnliche Differentialgleichungen (1)
- growth (1)
- homological algebra (1)
- homological methods (1)
- homologische Methoden (1)
- hypercyclic operator (1)
- hypercyclicity (1)
- hypergeometric functions (1)
- incompressible Newtonian fluid (1)
- individual based model (1)
- inexact (1)
- inexact Gauss-Newton methods (1)
- kombinatorische Optimierung (1)
- komplexe Dynamik (1)
- konvexe Reformulierungen (1)
- kopositiver Kegel (1)
- large scale problems (1)
- linear dynamics (1)
- linear elasticity (1)
- lineare Elastizität (1)
- local limit (1)
- local quantization error (1)
- logarithmic-quadratic distance function (1)
- logarithmisch-quadratische Distanzfunktion (1)
- lokaler Quantisierungsfehler (1)
- markov increment (1)
- mean field approximation (1)
- meromorphic functions (1)
- minimal compliance (1)
- minimale Nachgiebigkeit (1)
- mixing (1)
- model order reduction (1)
- model predictive control (1)
- monotone (1)
- multigrid (1)
- multilevel Toeplitz (1)
- multilinear algebra (1)
- multinomial (1)
- n.a. (1)
- nichtnegativ (1)
- non-convex (1)
- nonlinear optimization (1)
- nonnegative (1)
- normal approximation (1)
- numerical analysis (1)
- optimal continuity estimates (1)
- optimal quantization (1)
- optimale Quantisierung (1)
- optimale Stetigkeitsabschätzungen (1)
- optimization (1)
- ordinary differential equations (1)
- orthotrope Materialien (1)
- orthotropic material (1)
- parameter dependence (1)
- parameter estimation (1)
- parameter identification (1)
- partial differential equations (1)
- partial differential operators of first order as generators of C0-semigroups (1)
- partial integro-differential equation (1)
- partielle Differentialgleichungen (1)
- partielle Differentialoperatoren erster Ordnung als Erzeuger von C0-Halbgruppen (1)
- partielle Integro Differentialgleichung (1)
- partielle Integro-Differentialgleichungen (1)
- partielle Integrodifferentialgleichungen (1)
- penalty (1)
- population modelling (1)
- port-Hamiltonian (1)
- preconditioning (1)
- pricing (1)
- principal component analysis (1)
- quantization ball (1)
- quantization radius (1)
- rationale und meromorphe Approximation (1)
- rectangular probabilities (1)
- reduced order modelling (1)
- reduced-order modelling (1)
- robustness (1)
- scan statistics (1)
- second order cone (1)
- self-concordance (1)
- series expansion (1)
- shape calculus (1)
- shell construction (1)
- special functions (1)
- splitting (1)
- starke und schwache Asymptotiken (1)
- statistics (1)
- stochastic Predictor-Corrector-Scheme (1)
- stochastic partial differential algebraic equation (1)
- stochastic processes (1)
- strong and weak asymptotics (1)
- structural optimization (1)
- structure-preserving (1)
- sukzessive Ableitungen (1)
- surrogate modeling (1)
- tensor methods (1)
- topological derivative (1)
- topology optimization (1)
- transaction costs (1)
- transitivity (1)
- trust-region method (1)
- trust-region methods (1)
- underdetermined nonlinear least squares problem (1)
- universal (1)
- universal power series (1)
- universal trigonometric series (1)
- universalities (1)
- vollständig positiv (1)
- vollständig positiver Kegel (1)
- wine fermentation (1)
- zeros (1)
Institute
- Mathematik (63)
- Fachbereich 4 (15)
This dissertation deals with a novel type of branch-and-bound algorithm, which differs from classical branch-and-bound algorithms in that branching is performed by adding non-negative penalty terms to the objective function instead of adding further constraints. The thesis establishes the theoretical correctness of the algorithmic principle for several general classes of problems and evaluates the method on several concrete problem classes. For these problem classes, namely monotone and non-monotone mixed-integer linear complementarity problems as well as mixed-integer linear programs, the thesis presents various problem-specific improvements and evaluates them numerically. Furthermore, the thesis compares the new method with several benchmark methods, with mostly good results, and gives an outlook on further fields of application and open research questions.
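The penalty-based branching principle can be sketched on a toy binary program. The following is a minimal illustration under assumptions of our own (hypothetical knapsack data, scipy's linprog as LP solver, a fixed penalty weight rho, no bound-based pruning), not the algorithm as developed in the thesis: each child node adds a penalty term rho*x_i or rho*(1 - x_i) to the objective rather than the constraint x_i = 0 or x_i = 1.

```python
import numpy as np
from scipy.optimize import linprog

def penalty_bb(c, A, b, rho=100.0, tol=1e-6):
    """Toy branch-and-bound for min c@x s.t. A@x <= b, x in {0,1}^n.
    Branching adds a penalty rho*x_i (pushes x_i -> 0) or rho*(1 - x_i)
    (pushes x_i -> 1) to the objective instead of the constraint
    x_i = 0 / x_i = 1; the constant rho is dropped, as it does not
    change the LP minimizer."""
    n = len(c)
    best_val, best_x = np.inf, None
    stack = [(np.zeros(n), frozenset())]   # (penalty coefficients, branched set)
    while stack:
        p, branched = stack.pop()
        res = linprog(c + p, A_ub=A, b_ub=b, bounds=[(0, 1)] * n)
        if not res.success:
            continue
        x = res.x
        frac = [i for i in range(n) if min(x[i], 1 - x[i]) > tol]
        if not frac:                       # integral: evaluate unpenalized objective
            val = float(c @ np.round(x))
            if val < best_val:
                best_val, best_x = val, np.round(x)
        elif all(i in branched for i in frac):
            # a penalized variable could not reach its pushed bound; this toy
            # version discards such nodes (a full method prunes them by bounds)
            continue
        else:
            i = next(j for j in frac if j not in branched)
            down, up = p.copy(), p.copy()
            down[i] += rho                 # + rho * x_i
            up[i] -= rho                   # + rho * (1 - x_i), constant dropped
            stack += [(down, branched | {i}), (up, branched | {i})]
    return best_val, best_x

# toy knapsack: max 5x1 + 4x2 + 3x3  s.t.  3x1 + 2x2 + 2x3 <= 4, x binary
c = np.array([-5.0, -4.0, -3.0])
A = np.array([[3.0, 2.0, 2.0]])
b = np.array([4.0])
val, x = penalty_bb(c, A, b)
```

For this instance the sketch recovers the integer optimum x = (0, 1, 1) with value 7 (val = -7 in minimization form).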
Let K be a compact subset of the complex plane. Then the family of polynomials P is dense in A(K), the space of all continuous functions on K that are holomorphic on the interior of K, endowed with the uniform norm, if and only if the complement of K is connected. This is the statement of Mergelyan's celebrated theorem.
There are, however, situations where not all polynomials are required to approximate every f ∈ A(K) but where there are strict subspaces of P that are still dense in A(K). If, for example, K is a singleton, then the subspace of all constant polynomials is dense in A(K). On the other hand, if 0 is an interior point of K, then no strict subspace of P can be dense in A(K).
In between these extreme cases, the situation is much more complicated. It turns out that it is mostly determined by the geometry of K and its location in the complex plane which subspaces of P are dense in A(K). In Chapter 1, we give an overview of the known results.
Our first main theorem, which we will give in Chapter 3, deals with the case where the origin is not an interior point of K. We will show that if K is a compact set with connected complement and if 0 is not an interior point of K, then any subspace Q ⊂ P which contains the constant functions and all but finitely many monomials is dense in A(K).
There is a close connection between lacunary approximation and the theory of universality. At the end of Chapter 3, we will illustrate this connection by applying the above result to prove the existence of certain universal power series. To be specific, if K is a compact set with connected complement, if 0 is a boundary point of K and if A_0(K) denotes the subspace of A(K) of those functions that satisfy f(0) = 0, then there exists an A_0(K)-universal formal power series s, where A_0(K)-universal means that the family of partial sums of s forms a dense subset of A_0(K).
In addition, we will show that no formal power series is simultaneously universal for all such K.
The condition on the subspace Q in the main result of Chapter 3 is quite restrictive, but this should not be too surprising: The result applies to the largest possible class of compact sets.
In Chapter 4, we impose a further restriction on the compact sets under consideration, and this will allow us to weaken the condition on the subspace Q. The result that we are going to give is similar to one of those presented in the first chapter, namely the one due to Anderson. In his article “Müntz-Szasz type approximation and the angular growth of lacunary integral functions”, he gives a criterion for a subspace Q of P to be dense in A(K) where K is entirely contained in some closed sector with vertex at the origin.
We will consider compact sets with connected complement that are -- with the possible exception of the origin -- entirely contained in some open sector with vertex at the origin. What we are going to show is that if K\{0} is contained in an open sector of opening angle 2α and if Λ is some subset of the nonnegative integers, then the span of {z ↦ z^λ : λ ∈ Λ} is dense in A(K) whenever 0 ∈ Λ and some Müntz-type condition is satisfied.
Conversely, we will show that if a similar condition is not satisfied, then we can always find a compact set K with connected complement such that K\{0} is contained in some open sector of opening angle 2α and such that the span of {z ↦ z^λ : λ ∈ Λ} fails to be dense in A(K).
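For orientation, the prototype of such density criteria is the classical Müntz theorem on [0, 1] (the angular condition of Chapter 4 refines this; its exact form is not reproduced here): for exponents 0 = λ_0 < λ_1 < λ_2 < ⋯, the span of {x^{λ_n} : n ≥ 0} is dense in C([0, 1]) if and only if

```latex
\sum_{n \ge 1} \frac{1}{\lambda_n} = \infty .
```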
In common shape optimization routines, deformations of the computational mesh usually suffer from a decrease in mesh quality or even destruction of the mesh. To mitigate this, we propose a theoretical framework using so-called pre-shape spaces. This opens up the possibility of a unified theory of shape optimization and of problems related to parameterization and mesh quality. With this, we stay within the free-form approach of shape optimization, in contrast to parameterized approaches that limit the possible shapes. The concept of pre-shape derivatives is defined, and corresponding structure and calculus theorems are derived, which generalize classical shape optimization and its calculus. Pre-shape derivatives feature both tangential and normal directions, in contrast to classical shape derivatives, which feature only normal directions on shapes. Techniques from classical shape optimization and calculus are shown to carry over to this framework and are collected in generality for future reference.
A pre-shape parameterization tracking problem class for mesh quality is introduced, which is solvable by use of pre-shape derivatives. This class allows for non-uniform, user-prescribed adaptations of the shape and hold-all domain meshes. It acts as a regularizer for classical shape objectives. Existence of regularized solutions is guaranteed, and the corresponding optimal pre-shapes are shown to correspond to optimal shapes of the original problem, which additionally achieve the user-prescribed parameterization.
We present shape gradient system modifications that allow simultaneous numerical shape optimization with mesh quality improvement. Further, consistency of the modified pre-shape gradient systems is established. The computational burden of our approach is limited, since no additional solution of possibly larger (non-)linear systems for regularized shape gradients is necessary. We implement and compare these pre-shape gradient regularization approaches for a 2D problem that is prone to mesh degeneration. As our approach does not depend on the choice of forms to represent shape gradients, we employ and compare weak linear elasticity and weak quasilinear p-Laplacian pre-shape gradient representations.
We also introduce a Quasi-Newton-ADM inspired algorithm for mesh quality, which guarantees sufficient adaptation of meshes to user specification during the routines. It is applicable in addition to simultaneous mesh regularization techniques.
Unrelated to mesh regularization techniques, we consider shape optimization problems constrained by elliptic variational inequalities of the first kind, so-called obstacle-type problems. In general, standard necessary optimality conditions cannot be formulated in a straightforward manner for such semi-smooth shape optimization problems. Under appropriate assumptions, we prove existence and convergence of adjoints for smooth regularizations of the VI constraint. Moreover, we derive shape derivatives for the regularized problem and prove convergence to a limit object. Based on this analysis, an efficient optimization algorithm is devised and tested numerically.
All of the previous pre-shape regularization techniques are applied to a variational inequality constrained shape optimization problem, where we also create customized targets for increased mesh adaptation of changing embedded shapes and of the active set boundaries of the constraining variational inequality.
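The mesh degeneration that motivates the framework is easy to observe numerically. The following minimal sketch (a hypothetical deformation on a structured triangulation, not one of the thesis' test cases) measures a standard triangle quality indicator, twice the inradius-to-circumradius ratio, before and after deforming the mesh:

```python
import numpy as np

def tri_quality(p0, p1, p2):
    """Quality q = 2r/R in (0, 1]: 1 for equilateral, -> 0 for degenerate."""
    a = np.linalg.norm(p1 - p2)
    b = np.linalg.norm(p0 - p2)
    c = np.linalg.norm(p0 - p1)
    s = 0.5 * (a + b + c)
    area = max(s * (s - a) * (s - b) * (s - c), 0.0) ** 0.5   # Heron's formula
    if area == 0.0:
        return 0.0
    r = area / s                   # inradius
    R = a * b * c / (4.0 * area)   # circumradius
    return 2.0 * r / R

# structured triangulation of the unit square
k = 10
xs, ys = np.meshgrid(np.linspace(0, 1, k + 1), np.linspace(0, 1, k + 1))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
tris = []
for i in range(k):
    for j in range(k):
        v = i * (k + 1) + j
        tris += [(v, v + 1, v + k + 1), (v + 1, v + k + 2, v + k + 1)]

def min_quality(points):
    return min(tri_quality(points[a], points[b], points[c]) for a, b, c in tris)

q_before = min_quality(pts)
deformed = pts.copy()              # a shear that stretches cells near x = 1
deformed[:, 0] = pts[:, 0] + 0.8 * pts[:, 0] ** 2 * pts[:, 1]
q_after = min_quality(deformed)    # worst-cell quality drops under deformation
```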
Hybrid Modelling, in general, describes the combination of at least two different methods to solve one specific task. As far as this work is concerned, Hybrid Models describe an approach that combines sophisticated, well-studied mathematical methods with Deep Neural Networks to solve parameter estimation tasks. To combine these two methods, the data structure of artificially generated acceleration data of an approximate vehicle model, the Quarter-Car-Model, is exploited. The acceleration of individual components within a coupled dynamical system can be described by a second-order ordinary differential equation involving the velocities and displacements of the coupled states, scaled by the spring and damping coefficients of the system. An appropriate numerical integration scheme can then be used to simulate discrete acceleration profiles of the Quarter-Car-Model with a random variation of the parameters of the system. Given explicit knowledge about the data structure, one can then investigate under which conditions it is possible to estimate the parameters of the dynamical system for a set of randomly generated data samples. We test whether Neural Networks are capable of solving parameter estimation problems in general, or whether they can be used to solve several sub-tasks that support a state-of-the-art parameter estimation method. Hybrid Models are presented for parameter estimation under uncertainties, such as measurement noise or incomplete measurements, which combine knowledge about the data structure and several Neural Networks for robust parameter estimation within a dynamical system.
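The data-generation step described above can be sketched as follows. Parameter values, the road profile, and the specific quarter-car formulation (sprung and unsprung mass coupled by spring and damper, tire stiffness driven by the road) are our own illustrative assumptions, not the setup of the thesis:

```python
import numpy as np
from scipy.integrate import solve_ivp

def quarter_car_acc(ms, mu, ks, cs, kt, road, t_eval):
    """Sprung-mass acceleration of a quarter-car model: sprung mass ms and
    unsprung mass mu coupled by spring ks and damper cs, with tire
    stiffness kt excited by the road profile road(t)."""
    def rhs(t, y):
        zs, vs, zu, vu = y                 # displacements and velocities
        f_susp = ks * (zs - zu) + cs * (vs - vu)
        a_s = -f_susp / ms
        a_u = (f_susp - kt * (zu - road(t))) / mu
        return [vs, a_s, vu, a_u]
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [0.0, 0.0, 0.0, 0.0],
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    zs, vs, zu, vu = sol.y
    return -(ks * (zs - zu) + cs * (vs - vu)) / ms

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 201)
road = lambda s: 0.05 * np.sin(2.0 * np.pi * s)   # 1 Hz sinusoidal road input
ks = rng.uniform(15e3, 25e3)                      # randomly varied parameters
cs = rng.uniform(1e3, 2e3)
acc = quarter_car_acc(ms=300.0, mu=40.0, ks=ks, cs=cs, kt=180e3,
                      road=road, t_eval=t)
```

Repeating this with freshly drawn ks and cs yields the kind of labeled (acceleration profile, parameter) training pairs the abstract refers to.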
This paper mainly studies two topics: linear complementarity problems for modeling electricity market equilibria and optimization under uncertainty. We consider both perfectly competitive and Nash–Cournot models of electricity markets and study their robustifications using strict robustness and the Γ-approach. For three out of the four combinations of economic competition and robustification, we derive algorithmically tractable convex optimization counterparts that have a clear-cut economic interpretation. In the case of perfect competition, this result corresponds to the two classic welfare theorems, which also apply in both considered robust cases, again yielding convex robustified problems. Using the mentioned counterparts, we can also prove the existence and, in some cases, uniqueness of robust equilibria. Surprisingly, it turns out that there is no such economically sensible counterpart for the case of Γ-robustifications of Nash–Cournot models. Thus, an analog of the welfare theorems does not hold in this case. Finally, we provide a computational case study that illustrates the different effects of the combination of economic competition and uncertainty modeling.
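A linear complementarity problem LCP(q, M) asks for z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0. For the monotone case, a projected Gauss-Seidel iteration is one classical solution approach; a minimal sketch with hypothetical data (the market models of the paper lead to much more structured LCPs, and their robustified variants require convex reformulation rather than this simple iteration):

```python
import numpy as np

def lcp_pgs(M, q, iters=500):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with
    w = M z + q >= 0 and z.T @ w = 0 (M positive (semi)definite)."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # residual excluding the diagonal term, then project onto z_i >= 0
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite
q = np.array([-5.0, -6.0])
z = lcp_pgs(M, q)
w = M @ z + q                             # complementarity: z >= 0, w = 0 here
```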
In order to classify smooth foliated manifolds, i.e. smooth manifolds equipped with a smooth foliation, we introduce the de Rham cohomologies of smooth foliated manifolds. These cohomologies are built in a similar way to the de Rham cohomologies of smooth manifolds. We develop some tools to compute these cohomologies. For example, we prove a Mayer-Vietoris theorem for foliated de Rham cohomology and show that these cohomologies are invariant under integrable homotopy. A generalization of a known Künneth formula, which relates the cohomologies of a product foliation to those of its factors, is discussed. In particular, this involves a splitting theory of sequences between Fréchet spaces and a theory of projective spectra. We also prove that the foliated de Rham cohomology is isomorphic to the Čech-de Rham cohomology and to the Čech cohomology of leafwise constant functions of an underlying so-called good cover.
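Schematically, the Mayer-Vietoris theorem mentioned above yields, for suitable open subsets U and V of the foliated manifold (hypotheses as in the thesis) and writing H^k_F for the foliated de Rham cohomology (notation assumed here), a long exact sequence of the familiar form

```latex
\cdots \longrightarrow H^{k}_{F}(U \cup V) \longrightarrow H^{k}_{F}(U) \oplus H^{k}_{F}(V)
\longrightarrow H^{k}_{F}(U \cap V) \longrightarrow H^{k+1}_{F}(U \cup V) \longrightarrow \cdots
```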
This work studies typical mathematical challenges occurring in the modeling and simulation of manufacturing processes of paper or industrial textiles. In particular, we consider three topics: approximate models for the motion of small inertial particles in an incompressible Newtonian fluid, effective macroscopic approximations for a dilute particle suspension contained in a bounded domain accounting for a non-uniform particle distribution and particle inertia, and possibilities for a reduction of computational cost in the simulations of slender elastic fibers moving in a turbulent fluid flow.
We consider the full particle-fluid interface problem given in terms of the Navier-Stokes equations coupled to momentum equations of a small rigid body. By choosing an appropriate asymptotic scaling for the particle-fluid density ratio and using an asymptotic expansion for the solution components, we derive approximations of the original interface problem. The approximate systems differ according to the chosen scaling of the density ratio in their physical behavior allowing the characterization of different inertial regimes.
We extend the asymptotic approach to the case of many particles suspended in a Newtonian fluid. Under specific assumptions on the combination of particle size and particle number, we derive asymptotic approximations of this system. The approximate systems describe the particle motion, which allows us to use a mean field approach in order to formulate the continuity equation for the particle probability density function. Coupling the latter with the approximation of the fluid momentum equation then reveals a macroscopic suspension description that accounts for non-uniform particle distributions in space and for small particle inertia.
A slender fiber in a turbulent air flow can be modeled as a stochastic inextensible one-dimensionally parametrized Kirchhoff beam, i.e., by a stochastic partial differential algebraic equation. Its simulation involves the solution of large nonlinear systems of equations by Newton's method. In order to decrease the computational time, we explore different methods for the estimation of the solution. Additionally, we apply smoothing techniques to the Wiener process in order to regularize the stochastic force driving the fiber, exploring their respective impact on the solution and performance. We also explore the applicability of the Wiener chaos expansion as a solution technique for the simulation of the fiber dynamics.
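Newton's method, the workhorse of the fiber simulations mentioned above, is shown here in a generic small-scale sketch (illustrative test system of our own; the actual simulations solve large systems arising from the discretized beam, where good initial estimates are exactly what reduces the iteration count):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0 with Jacobian J(x):
    repeatedly solve J(x) d = F(x) and update x <- x - d."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x -= np.linalg.solve(J(x), fx)
    return x

# small nonlinear test system: x0^2 + x1^2 = 4 and x0 * x1 = 1
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [x[1], x[0]]])
root = newton(F, J, [2.0, 0.3])
```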
This thesis addresses three different topics from the fields of mathematical finance, applied probability and stochastic optimal control. Correspondingly, it is subdivided into three independent main chapters each of which approaches a mathematical problem with a suitable notion of a stochastic particle system.
In Chapter 1, we extend the branching diffusion Monte Carlo method of Henry-Labordère et al. (2019) to the case of parabolic PDEs with mixed local-nonlocal analytic nonlinearities. We investigate branching diffusion representations of classical solutions, and we provide sufficient conditions under which the branching diffusion representation solves the PDE in the viscosity sense. Our theoretical setup directly leads to a Monte Carlo algorithm, whose applicability is showcased in two stylized high-dimensional examples. As our main application, we demonstrate how our methodology can be used to value financial positions with defaultable, systemically important counterparties.
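The branching-diffusion idea can be illustrated in its classical, purely local special case. The sketch below, with hypothetical parameters, treats the KPP-type equation u_t = ½u_xx + β(u² − u) via McKean's representation u(t, x) = E[∏_i g(X_i(t))] over a binary branching Brownian motion with rate β; the mixed local-nonlocal setting and the default-valuation application of the chapter are not reproduced here.

```python
import numpy as np

def branching_mc(g, t, x, beta=1.0, n_samples=50_000, seed=2):
    """Branching-diffusion Monte Carlo for the KPP-type PDE
    u_t = 0.5*u_xx + beta*(u**2 - u), u(0,.) = g, via McKean's
    representation u(t, x) = E[ prod_i g(X_i(t)) ]: particles follow
    Brownian motions and split into two at rate beta."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        particles = [(x, t)]        # (position, remaining time to horizon)
        prod = 1.0
        while particles:
            pos, rem = particles.pop()
            tau = rng.exponential(1.0 / beta)
            if tau >= rem:          # particle survives to the time horizon
                prod *= g(pos + rng.normal(0.0, np.sqrt(rem)))
            else:                   # branch into two independent offspring
                mid = pos + rng.normal(0.0, np.sqrt(tau))
                particles += [(mid, rem - tau), (mid, rem - tau)]
        total += prod
    return total / n_samples

# sanity check with constant initial datum g = 0.5: the PDE reduces to the
# logistic ODE u' = u**2 - u, whose solution at t = 1 is
# 0.5*exp(-1) / (1 - 0.5*(1 - exp(-1))) ~ 0.2689
u = branching_mc(lambda y: 0.5, t=1.0, x=0.0)
```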
In Chapter 2, we formulate and analyze a mathematical framework for continuous-time mean field games with finitely many states and common noise, including a rigorous probabilistic construction of the state process. The key insight is that we can circumvent the master equation and reduce the mean field equilibrium to a system of forward-backward systems of (random) ordinary differential equations by conditioning on common noise events. We state and prove a corresponding existence theorem, and we illustrate our results in three stylized application examples. In the absence of common noise, our setup reduces to that of Gomes, Mohr and Souza (2013) and Cecchin and Fischer (2020).
In Chapter 3, we present a heuristic approach to tackle stochastic impulse control problems in discrete time. Based on the work of Bensoussan (2008), we reformulate the classical Bellman equation of stochastic optimal control in terms of a discrete-time QVI, and we prove a corresponding verification theorem. Taking the resulting optimal impulse control as a starting point, we devise a self-learning algorithm that estimates the continuation and intervention region of such a problem. Its key features are that it explores the state space of the underlying problem by itself and successively learns the behavior of the optimally controlled state process. For illustration, we apply our algorithm to a classical example problem, and we give an outlook on open questions to be addressed in future research.
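The discrete-time QVI structure, in which the value function is the pointwise maximum of a continuation value and a best intervention value net of a fixed cost, can be made concrete on a small finite-state example. The sketch below uses hypothetical data (intervention modeled as: pay cost K, jump to any state, then continue) and plain value iteration rather than the self-learning algorithm of the chapter:

```python
import numpy as np

n, gamma, K = 5, 0.9, 1.5
r = np.array([0.0, 1.0, 4.0, 1.0, 0.0])     # running reward per state
P = np.zeros((n, n))                         # reflected random walk dynamics
for s in range(n):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5

V = np.zeros(n)
for _ in range(500):                         # value iteration on the QVI
    cont = r + gamma * P @ V                 # continuation: wait one period
    V = np.maximum(cont, cont.max() - K)     # or jump to the best state, paying K
intervene = cont.max() - K > cont + 1e-9     # estimated intervention region
```

For this data the controller intervenes in every state except the high-reward state 2, illustrating the continuation/intervention split the chapter's algorithm learns by simulation.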
Traditionell werden Zufallsstichprobenerhebungen so geplant, dass nationale Statistiken zuverlässig mit einer adäquaten Präzision geschätzt werden können. Hierbei kommen vorrangig designbasierte, Modell-unterstützte (engl. model assisted) Schätzmethoden zur Anwendung, die überwiegend auf asymptotischen Eigenschaften beruhen. Für kleinere Stichprobenumfänge, wie man sie für Small Areas (Domains bzw. Subpopulationen) antrifft, eignen sich diese Schätzmethoden eher nicht, weswegen für diese Anwendung spezielle modellbasierte Small Area-Schätzverfahren entwickelt wurden. Letztere können zwar Verzerrungen aufweisen, besitzen jedoch häufig einen kleineren mittleren quadratischen Fehler der Schätzung als dies für designbasierte Schätzer der Fall ist. Den Modell-unterstützten und modellbasierten Methoden ist gemeinsam, dass sie auf statistischen Modellen beruhen; allerdings in unterschiedlichem Ausmass. Modell-unterstützte Verfahren sind in der Regel so konstruiert, dass der Beitrag des Modells bei sehr grossen Stichprobenumfängen gering ist (bei einer Grenzwertbetrachtung sogar wegfällt). Bei modellbasierten Methoden nimmt das Modell immer eine tragende Rolle ein, unabhängig vom Stichprobenumfang. Diese Überlegungen veranschaulichen, dass das unterstellte Modell, präziser formuliert, die Güte der Modellierung für die Qualität der Small Area-Statistik von massgeblicher Bedeutung ist. Wenn es nicht gelingt, die empirischen Daten durch ein passendes Modell zu beschreiben und mit den entsprechenden Methoden zu schätzen, dann können massive Verzerrungen und / oder ineffiziente Schätzungen resultieren.
This thesis addresses the central question of the robustness of small area estimation methods. Statistical methods are called robust if they have a bounded influence function and as high a breakdown point as possible. Put simply, robust methods are only marginally affected by outliers and other anomalies in the data. The robustness study focuses on the following models and estimation methods:
i) model-based estimators for the Fay-Herriot model (Fay and Herriot, 1979, J. Amer. Statist. Assoc.) and the basic unit-level model (cf. Battese et al., 1988, J. Amer. Statist. Assoc.);
ii) direct, model-assisted estimators under the assumption of a linear regression model.
The unit-level model for mean estimation is based on a Gaussian mixed linear model (MLM) with block-diagonal covariance matrix. In contrast to, say, a multiple linear regression model, MLMs possess no noteworthy invariance properties, so that contamination of the dependent variable inevitably leads to biased parameter estimates. For maximum likelihood estimation, the resulting bias can become almost arbitrarily large. For this reason, Richardson and Welsh (1995, Biometrics) developed the robust estimation methods RML 1 and RML 2, which exhibit only a small bias under contaminated data and are considerably more efficient than maximum likelihood. A variant of RML 2 was proposed by Sinha and Rao (2009, Canad. J. Statist.) for the robust estimation of unit-level models. However, the numerical procedures commonly used to compute the RML 2 estimator (this also applies to the proposal of Sinha and Rao) prove notoriously unreliable. This thesis first discusses the convergence problems of the existing procedures and then proposes a numerical method with substantially better numerical properties. Finally, the proposed estimation method is examined in a simulation study and illustrated with an empirical application: the estimation of above-ground biomass in Norwegian municipalities.
The Fay-Herriot model can be viewed as a special case of an MLM with block-diagonal covariance matrix, although the variances of the area-level random effects need not be estimated but are treated as known quantities. One can exploit this property to transfer the robustification proposed by Sinha and Rao (2009) for the unit-level model directly to the Fay-Herriot model. This thesis, however, develops an alternative proposal that starts from the following observation: Fay and Herriot (1979) motivated their model as a generalization of the James-Stein estimator using an empirical Bayes approach. We take up this motivation and formulate an analogous robust Bayesian procedure. If one chooses the least favorable distribution of Huber (1964, Ann. Math. Statist.) as the prior for the small area location parameters in the robust Bayesian formulation, the resulting Bayes estimator [i.e., the estimator with the smallest Bayes risk] is the limited translation rule (LTR) of Efron and Morris (1971, J. Amer. Statist. Assoc.). In frequentist statistics, the limited translation rule cannot be used directly because, as a Bayes estimator, it depends on unknown parameters. Following the empirical Bayes approach, these unknown parameters can, however, be estimated from the marginal distribution of the dependent variable. Here it must be noted (and this has been neglected in the literature) that the marginal distribution under the least favorable prior is not a normal distribution but is itself described by Huber's (1964) least favorable distribution.
It is then not surprising that the maximum likelihood estimators of the regression coefficients and the model variance under this marginal distribution are M-estimators with Huber's psi-function.
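As a minimal illustration of the connection noted above, the limited translation rule can be written directly in terms of Huber's psi-function. The shrinkage factor B and the translation bound c are assumed given here; in the empirical Bayes setting they would be estimated from the marginal distribution:

```python
import numpy as np

def huber_psi(u, c=1.345):
    # Huber's psi function: identity on [-c, c], clipped outside
    return np.clip(u, -c, c)

def limited_translation(y, mu, B, c):
    # Limited translation rule: shrink the direct estimate y towards the
    # synthetic/prior mean mu by the factor B, but never move y by more
    # than c. Written via Huber's psi: y - psi_c(B * (y - mu)).
    return y - huber_psi(B * (y - mu), c)
```

For example, with B = 0.5 and c = 2, a direct estimate y = 10 with mu = 0 is moved only to 8 (the full shrinkage of 5 is capped at 2), while y = 1 is shrunk all the way to 0.5.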
Our theory-driven derivation of robust estimators for the Fay-Herriot model shows that under contaminated data the estimated LTR (with parameters estimated by the M-estimation methodology) is optimal, and that the LTR is an integral part of the estimation methodology (rather than an ``add-on'' or the like, as it is treated elsewhere). The proposed M-estimators are robust in the presence of atypical small areas (outliers), as the simulation and case studies confirm. To achieve robustness also against influential observations in the independent variables, generalized M-estimators for the Fay-Herriot model were developed.
Many NP-hard optimization problems originating from classical graph theory, such as the maximum stable set problem and the maximum clique problem, have been studied extensively over the past decades and involve the choice of a subset of edges or vertices. There usually exist combinatorial methods that solve them directly in the graph.
The simplest such method is to enumerate all feasible solutions and select the best one. Unsurprisingly, this method is often very slow, so the task is to cleverly discard fruitless parts of the search space during the search. An alternative is to formulate the graph problem as an integer linear program whose solution yields an optimal solution to the original optimization problem. To solve an integer linear program, one can start by relaxing the integrality constraints and then look for inequalities that cut off fractional extreme points. In the best case, one could describe the convex hull of the feasible region of the integer linear program by a set of inequalities. In general, a complete description of this convex hull is out of reach, even if it has a polynomial number of facets. One therefore tries to strengthen the (weak) relaxation of the integer linear program as much as possible with strong inequalities that are valid for the convex hull of the feasible integer points.
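The plain enumeration baseline can be sketched in a few lines. The helper below is hypothetical (not from the thesis); it returns a maximum stable set of a graph on n vertices by trying subsets from largest to smallest:

```python
from itertools import combinations

def max_stable_set(n, edges):
    # Enumerate vertex subsets from largest to smallest and return the first
    # one that contains no edge. Exponential in n, hence only usable on
    # tiny graphs; the point of cutting-plane methods is to avoid this.
    E = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            if all(frozenset(p) not in E for p in combinations(S, 2)):
                return set(S)
    return set()
```

On the 5-cycle C5, for instance, every 3-vertex subset contains an edge, so the routine returns a stable set of size 2.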
Many classes of valid inequalities are of exponential size. For instance, a graph can in general have exponentially many odd cycles, so the number of odd cycle inequalities for the maximum stable set problem is exponential. Nevertheless, it is sometimes possible to check in polynomial time whether a given point violates any of the exponentially many inequalities; this is indeed the case for the odd cycle inequalities of the maximum stable set problem. If a polynomial-time separation algorithm is known, there exists a formulation of polynomial size that contains a given point if and only if it violates none of the (potentially exponentially many) inequalities. This thesis can be divided into two parts. The first part is the main part and contains various new results: we present new extended formulations for several optimization problems, namely the maximum stable set problem, the nonconvex quadratic program with box constraints, and the p-median problem. In the second part, we modify a very fast algorithm for finding a maximum clique in very large sparse graphs. We propose three alternative versions of this algorithm and compare their strengths and weaknesses against the original version.
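As an illustration of such an exponential class, an odd cycle inequality for the stable set polytope, and a check of whether a fractional point violates it, can be sketched as follows. This checks a single given cycle only; the actual polynomial-time separation over all odd cycles uses a shortest-path construction not shown here:

```python
def violates_odd_cycle(x, cycle, tol=1e-9):
    # Odd cycle inequality for the stable set polytope:
    #     sum_{v in C} x_v <= (|C| - 1) / 2   for every odd cycle C.
    # Returns True if the point x (vertex -> fractional value) violates it.
    assert len(cycle) % 2 == 1, "cycle must have odd length"
    return sum(x[v] for v in cycle) > (len(cycle) - 1) / 2 + tol
```

The fractional point assigning 1/2 to every vertex of a 5-cycle sums to 2.5 > 2 and is thus cut off, while any integer stable set satisfies the inequality.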