Hybrid modelling, in general, describes the combination of at least two different methods to solve one specific task. In this work, hybrid models describe an approach that combines sophisticated, well-studied mathematical methods with deep neural networks to solve parameter estimation tasks. To combine these two methods, the data structure of artificially generated acceleration data of an approximate vehicle model, the quarter-car model, is exploited. The acceleration of individual components within a coupled dynamical system can be described by a second-order ordinary differential equation involving the velocity and displacement of the coupled states, scaled by the spring and damping coefficients of the system. An appropriate numerical integration scheme can then be used to simulate discrete acceleration profiles of the quarter-car model with random variation of the system parameters. Given explicit knowledge about the data structure, one can then investigate under which conditions it is possible to estimate the parameters of the dynamical system for a set of randomly generated data samples. We test whether neural networks are capable of solving parameter estimation problems in general, or whether they can be used to solve several sub-tasks that support a state-of-the-art parameter estimation method. Hybrid models are presented for parameter estimation under uncertainties, including, for instance, measurement noise or incompleteness of measurements; they combine knowledge about the data structure with several neural networks for robust parameter estimation within a dynamical system.
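As an illustration of the data-generation step described above, the following sketch simulates discrete quarter-car acceleration profiles by numerically integrating the coupled second-order ODEs with randomly varied parameters. All masses, stiffness and damping ranges, and the road excitation are hypothetical placeholder choices, not the setup used in the thesis.

```python
# Minimal sketch: simulate acceleration profiles of a quarter-car model
# (sprung mass m_s, unsprung mass m_u, suspension spring k_s, damper c_s,
# tire stiffness k_t) under a sinusoidal road excitation. All parameter
# ranges are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def quarter_car_rhs(t, y, m_s, m_u, k_s, c_s, k_t, road):
    x_s, v_s, x_u, v_u = y
    r = road(t)
    a_s = (-k_s * (x_s - x_u) - c_s * (v_s - v_u)) / m_s
    a_u = (k_s * (x_s - x_u) + c_s * (v_s - v_u) - k_t * (x_u - r)) / m_u
    return [v_s, a_s, v_u, a_u]

def simulate_sample(rng, t_grid):
    # Randomly vary the parameters that are to be estimated later.
    m_s, m_u = 300.0, 50.0                       # masses [kg], fixed here
    k_s = rng.uniform(1e4, 5e4)                  # suspension stiffness [N/m]
    c_s = rng.uniform(5e2, 5e3)                  # damping coefficient [N s/m]
    k_t = rng.uniform(1e5, 3e5)                  # tire stiffness [N/m]
    road = lambda t: 0.05 * np.sin(2 * np.pi * 2.0 * t)  # road profile [m]
    sol = solve_ivp(quarter_car_rhs, (t_grid[0], t_grid[-1]),
                    y0=[0.0, 0.0, 0.0, 0.0], t_eval=t_grid,
                    args=(m_s, m_u, k_s, c_s, k_t, road), rtol=1e-8)
    x_s, v_s, x_u, v_u = sol.y
    # Discrete acceleration profile of the sprung mass (the "measurement").
    a_s = (-k_s * (x_s - x_u) - c_s * (v_s - v_u)) / m_s
    return a_s, (k_s, c_s, k_t)

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 5.0, 1001)
acceleration, true_params = simulate_sample(rng, t_grid)
```

A training set for the networks can then be built by calling `simulate_sample` repeatedly and pairing each acceleration profile with its generating parameters.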
In common shape optimization routines, deformations of the computational mesh usually suffer from decreased mesh quality or even destruction of the mesh. To mitigate this, we propose a theoretical framework using so-called pre-shape spaces. This opens an opportunity for a unified theory of shape optimization and of problems related to parameterization and mesh quality. With this, we stay within the free-form approach of shape optimization, in contrast to parameterized approaches that limit the possible shapes. The concept of pre-shape derivatives is defined, and corresponding structure and calculus theorems are derived, which generalize classical shape optimization and its calculus. Pre-shape derivatives feature tangential and normal directions, in contrast to classical shape derivatives, which feature only normal directions on shapes. Techniques from classical shape optimization and calculus are shown to carry over to this framework and are collected in generality for future reference.
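For orientation, here is a minimal sketch of the classical notions that the pre-shape calculus generalizes; this is the standard textbook formulation, not the thesis's pre-shape definition:

```latex
% Classical shape derivative of a functional J at a shape \Omega in
% direction of a vector field V (perturbation of identity):
dJ(\Omega)[V] \;=\; \lim_{t \to 0^+}
  \frac{J\big((\mathrm{id} + tV)(\Omega)\big) - J(\Omega)}{t} .
% Hadamard structure theorem: for sufficiently smooth data there exists
% a scalar g on the boundary such that only the normal component of V
% enters,
dJ(\Omega)[V] \;=\; \int_{\partial\Omega} g \,\langle V, n\rangle \,\mathrm{d}s .
```

The restriction to normal components in the second identity is precisely what the tangential directions of pre-shape derivatives relax.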
A pre-shape parameterization tracking problem class for mesh quality is introduced, which is solvable by use of pre-shape derivatives. This class allows for non-uniform, user-prescribed adaptations of the shape and hold-all domain meshes. It acts as a regularizer for classical shape objectives. Existence of regularized solutions is guaranteed, and corresponding optimal pre-shapes are shown to correspond to optimal shapes of the original problem, which additionally achieve the user-prescribed parameterization.
We present shape gradient system modifications which allow simultaneous numerical shape optimization with mesh quality improvement. Further, consistency of modified pre-shape gradient systems is established. The computational burden of our approach is limited, since no additional solution of possibly larger (non-)linear systems is necessary for regularized shape gradients. We implement and compare these pre-shape gradient regularization approaches for a 2D problem which is prone to mesh degeneration. As our approach does not depend on the choice of forms to represent shape gradients, we employ and compare weak linear elasticity and weak quasilinear p-Laplacian pre-shape gradient representations.
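The gradient representations compared above follow the usual pattern of identifying the (pre-)shape derivative with a vector field through a bilinear or quasilinear form; a schematic version, with spaces and the regularization constant chosen for illustration only:

```latex
% Linear elasticity representation: find a field U with
\int_{\Omega} \sigma(U) : \epsilon(V)\,\mathrm{d}x \;=\; dJ(\Omega)[V]
  \quad \text{for all test fields } V,
% where \epsilon(U) = \tfrac{1}{2}(\nabla U + \nabla U^{\top}) and
% \sigma(U) = 2\mu\,\epsilon(U) + \lambda\,\mathrm{tr}(\epsilon(U))\,I.
% Quasilinear p-Laplacian representation (\varepsilon > 0 a small
% regularization to avoid degeneracy):
\int_{\Omega} \big(\varepsilon + |\nabla U|\big)^{p-2}\,
  \nabla U : \nabla V \,\mathrm{d}x \;=\; dJ(\Omega)[V] .
```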
We also introduce a Quasi-Newton-ADM-inspired algorithm for mesh quality, which guarantees sufficient adaptation of meshes to user specifications during the routines. It is applicable in addition to simultaneous mesh regularization techniques.
Unrelated to mesh regularization techniques, we consider shape optimization problems constrained by elliptic variational inequalities of the first kind, so-called obstacle-type problems. In general, standard necessary optimality conditions cannot be formulated in a straightforward manner for such semi-smooth shape optimization problems. Under appropriate assumptions, we prove existence and convergence of adjoints for smooth regularizations of the VI constraint. Moreover, we derive shape derivatives for the regularized problem and prove convergence to a limit object. Based on this analysis, an efficient optimization algorithm is devised and tested numerically.
All previous pre-shape regularization techniques are applied to a variational inequality constrained shape optimization problem, where we also create customized targets for increased mesh adaptation of changing embedded shapes and active set boundaries of the constraining variational inequality.
For decades, academics and practitioners have aimed to understand whether and how (economic) events affect firm value. Ideally, these events occur exogenously, i.e., suddenly and unexpectedly, so that an accurate evaluation of their effects on firm value can be conducted. However, recent studies show that even the evaluation of exogenous events is often prone to many challenges that can lead to diverse interpretations, resulting in heated debates. Recently, there have been intense debates in particular on the impact of takeover defenses and of Covid-19 on firm value. The announcements of takeover defenses and the propagation of Covid-19 are exogenous events that occur worldwide and are economically important, but they have been insufficiently examined. By answering open research questions, this dissertation aims to provide a greater understanding of the heterogeneous effects that exogenous events such as the announcements of takeover defenses and the propagation of Covid-19 have on firm value. In addition, this dissertation analyzes the influence of certain firm characteristics on the effects of these two exogenous events and identifies influencing factors that explain contradictory results in the existing literature and can thus reconcile different views.
Broadcast media such as television have spread rapidly worldwide in the last century. They provide viewers with access to new information and also represent a source of entertainment that unconsciously exposes them to different social norms and moral values. Although the potential impact of exposure to television content has been studied intensively in economic research in recent years, studies examining the long-term causal effects of media exposure are still rare. Therefore, Chapters 2 to 4 of this thesis contribute to a better understanding of the long-term effects of television exposure.
Chapter 2 empirically investigates whether access to reliable environmental information through television can influence individuals' environmental awareness and pro-environmental behavior. Analyzing exogenous variation in Western television reception in the German Democratic Republic shows that access to objective reporting on environmental pollution can enhance concerns regarding pollution and affect the likelihood of being active in environmental interest groups.
Chapter 3 utilizes the same natural experiment and explores the relationship between exposure to foreign mass media content and xenophobia. In contrast to the state television broadcaster in the German Democratic Republic, West German television regularly confronted its viewers with foreign (non-German) broadcasts. By applying multiple measures for xenophobic attitudes, our findings indicate a persistent mitigating impact of foreign media content on xenophobia.
Chapter 4 deals with another unique feature of West German television. In contrast to East German media, Western television programs regularly exposed their audience to unmarried and childless characters. The results suggest that exposure to different gender stereotypes contained in television programs can affect marriage, divorce, and birth rates. However, our findings indicate that mainly women were affected by the exposure to unmarried and childless characters.
Chapter 5 examines the influence of social media marketing on crowd participation in equity crowdfunding. By analyzing 26,883 investment decisions on three German equity crowdfunding platforms, our results show that startups can influence the success of their equity crowdfunding campaign through social media posts on Facebook and Twitter.
In Chapter 6, we incorporate the concept of habit formation into the theoretical literature on trade unions and contribute to a better understanding of how internal habit preferences influence trade union behavior. The results reveal that such internal reference points lead trade unions to raise wages over time, which in turn reduces employment. A numerical example illustrates that the wage effects and the decline in employment can be substantial.
Let K be a compact subset of the complex plane. Then the family of polynomials P is dense in A(K), the space of all continuous functions on K that are holomorphic on the interior of K, endowed with the uniform norm, if and only if the complement of K is connected. This is the statement of Mergelyan's celebrated theorem.
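Stated compactly in the notation used above:

```latex
% Mergelyan's theorem: for a compact set K \subseteq \mathbb{C},
\overline{\mathcal{P}}^{\,\|\cdot\|_{\infty,K}} = A(K)
\quad\Longleftrightarrow\quad
\mathbb{C}\setminus K \ \text{is connected.}
```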
There are, however, situations where not all polynomials are required to approximate every f ∈ A(K) but where there are strict subspaces of P that are still dense in A(K). If, for example, K is a singleton, then the subspace of all constant polynomials is dense in A(K). On the other hand, if 0 is an interior point of K, then no strict subspace of P can be dense in A(K).
In between these extreme cases, the situation is much more complicated. It turns out that it is mostly determined by the geometry of K and its location in the complex plane which subspaces of P are dense in A(K). In Chapter 1, we give an overview of the known results.
Our first main theorem, which we will give in Chapter 3, deals with the case where the origin is not an interior point of K. We will show that if K is a compact set with connected complement and if 0 is not an interior point of K, then any subspace Q ⊂ P which contains the constant functions and all but finitely many monomials is dense in A(K).
There is a close connection between lacunary approximation and the theory of universality. At the end of Chapter 3, we will illustrate this connection by applying the above result to prove the existence of certain universal power series. To be specific, if K is a compact set with connected complement, if 0 is a boundary point of K and if A_0(K) denotes the subspace of A(K) of those functions that satisfy f(0) = 0, then there exists an A_0(K)-universal formal power series s, where A_0(K)-universal means that the family of partial sums of s forms a dense subset of A_0(K).
In addition, we will show that no formal power series is simultaneously universal for all such K.
The condition on the subspace Q in the main result of Chapter 3 is quite restrictive, but this should not be too surprising: The result applies to the largest possible class of compact sets.
In Chapter 4, we impose a further restriction on the compact sets under consideration, and this will allow us to weaken the condition on the subspace Q. The result that we are going to give is similar to one of those presented in the first chapter, namely the one due to Anderson. In his article “Müntz-Szasz type approximation and the angular growth of lacunary integral functions”, he gives a criterion for a subspace Q of P to be dense in A(K) where K is entirely contained in some closed sector with vertex at the origin.
We will consider compact sets with connected complement that are -- with the possible exception of the origin -- entirely contained in some open sector with vertex at the origin. What we are going to show is that if K\{0} is contained in an open sector of opening angle 2α and if Λ is some subset of the nonnegative integers, then the span of {z → z^λ : λ ∈ Λ} is dense in A(K) whenever 0 ∈ Λ and some Müntz-type condition is satisfied.
Conversely, we will show that if a similar condition is not satisfied, then we can always find a compact set K with connected complement such that K\{0} is contained in some open sector of opening angle 2α and such that the span of {z → z^λ : λ ∈ Λ} fails to be dense in A(K).
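The thesis's exact Müntz-type condition is not reproduced here; for orientation, the classical Müntz-Szász theorem on an interval, which conditions of this kind generalize, reads as follows:

```latex
% Classical M\"untz--Sz\'asz theorem on [0,1]: for an exponent set
% \Lambda \subseteq \mathbb{N}_0 with 0 \in \Lambda,
\overline{\mathrm{span}}\,\{\,x \mapsto x^{\lambda} : \lambda \in \Lambda\,\}
  = C([0,1])
\quad\Longleftrightarrow\quad
\sum_{\lambda \in \Lambda \setminus \{0\}} \frac{1}{\lambda} = \infty .
```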
Issues in Price Measurement
(2022)
This thesis focuses on issues in price measurement and consists of three chapters. Due to outdated weighting information, a Laspeyres-based consumer price index (CPI) is prone to accumulating upward bias. Therefore, Chapter 1 introduces and examines simple and transparent revision approaches that retrospectively address the source of the bias. They provide a consistent long-run time series of the CPI and require no additional information. Furthermore, a coherent decomposition of the bias into the contributions of individual product groups is developed. In a case study, the approaches are applied to a Laspeyres-based CPI. The empirical results confirm the theoretical predictions. The proposed revision approaches are applicable not only to most national CPIs but also to other price-level measures such as the producer price index or the import and export price indices.
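For reference, the Laspeyres price index that underlies this discussion, whose fixed base-period quantities are the source of the accumulating bias:

```latex
% Laspeyres price index comparing period t with base period 0, with
% prices p and fixed base-period quantities q^{0}:
P_{L}^{0,t} \;=\; \frac{\sum_{i} p_{i}^{t}\, q_{i}^{0}}
                       {\sum_{i} p_{i}^{0}\, q_{i}^{0}} .
% Because the weights q^{0} are not updated, consumers' substitution
% away from products with above-average price increases is ignored,
% which accumulates into upward bias over time.
```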
Chapter 2 is dedicated to the measurement of import and export price indices. Such indices are complicated by the impact of exchange rates. They are usually also compiled with some Laspeyres-type index, so substitution bias is an issue, and the terms of trade (the ratio of the export and import price index) are therefore also likely to be distorted. The underlying substitution bias accumulates over time. The chapter applies a simple and transparent retroactive correction approach that addresses the source of the substitution bias and produces meaningful long-run time series of import and export price levels and, therefore, of the terms of trade. Furthermore, an empirical case study demonstrates the efficacy and versatility of the correction approach.
Chapter 3 leaves the field of index revision and studies another issue in price measurement, namely the economic evaluation, in monetary terms, of digital products that have zero market prices. The chapter explores different methods of economic valuation and pricing of free digital products and proposes an alternative way to calculate the economic value and a shadow price of free digital products: the Usage Cost Model (UCM). The goal of the chapter is, first of all, to formulate a theoretical framework and to incorporate an alternative measure of the value of free digital products. An empirical application is also presented to demonstrate how the theoretical model works. Some conclusions on applicability are drawn at the end of the chapter.
In order to classify smooth foliated manifolds, which are smooth manifolds equipped with a smooth foliation, we introduce the de Rham cohomologies of smooth foliated manifolds. These cohomologies are built in a similar way to the de Rham cohomologies of smooth manifolds. We develop some tools to compute these cohomologies. For example, we prove a Mayer-Vietoris theorem for foliated de Rham cohomology and show that these cohomologies are invariant under integrable homotopy. A generalization of a known Künneth formula, which relates the cohomologies of a product foliation to those of its factors, is discussed. In particular, this involves a splitting theory of sequences between Fréchet spaces and a theory of projective spectra. We also prove that the foliated de Rham cohomology is isomorphic to the Čech-de Rham cohomology and the Čech cohomology of leafwise constant functions of an underlying so-called good cover.
In many industries, and especially in large companies, the support of business processes by workflow management systems is part of everyday practice. The focus there is on controlling control-flow-oriented procedures, while processes centred on data, information, and knowledge are usually left out. Such knowledge-intensive processes (KiPs) are the subject of many current studies, which form an active field of research.
At the heart of such KiPs is the knowledge contributed by the participating persons, which influences process execution to a substantial degree and thereby makes it possible to handle complex and often highly volatile processes. These are mostly decision-intensive processes, knowledge-acquisition processes, or processes that can lead to a large number of different process flows.
This thesis develops and presents an approach dedicated to the modelling, visualization, and execution of knowledge-intensive processes using semantic technologies. Flexibility, adaptivity, and goal orientation are defined as the central requirements for executing KiPs. Building on this, three central principles of process modelling are identified, which are taken up in the first research question: "Can the three principles be combined in a unified data-centric, declarative, semantic approach (referred to as ODD-BP), and can this approach fulfil the central requirements of KiPs?"
The basis of ODD-BP is a metamodel that serves as a language construct and allows the intended process models to be defined. Building on this, a procedure based on inference rules is developed that makes it possible to infer process states and thus renders a classical workflow engine superfluous. In addition, a methodology is introduced that provides each person involved in a process with a tailored, adaptive process visualization, so that, besides the freedom of flexibility, well-founded process support can be offered during the execution of KiPs. All of this takes place within a unified knowledge base, which forms the foundation for complete semantic process modelling and also opens up the possibility of integrating expert knowledge. This expert knowledge can make an explicit contribution to the execution of knowledge-intensive processes and thus enable the collaboration of humans and machines through symbolic AI technologies. The second research question addresses this aspect: "Can ontological knowledge be integrated into the ODD-BP approach in such a way that it contributes to process execution?"
The metamodel and the developed methods and procedures are realized in a prototypical, generic system that is in principle suitable for all application domains with KiPs. To validate the ODD-BP approach, it is applied to the use case of emergency call queries in dispatch centres. The evaluation shows how this knowledge-intensive procedure benefits from flexible, adaptive, and goal-oriented process execution. Furthermore, medical expert knowledge is integrated into the process flow, and it is demonstrated how this contributes to improved process outcomes.
Knowledge-intensive processes currently pose major challenges for companies and organizations across all industries and use cases, and science and research are devoted to finding solutions that are suitable for practice. With ODD-BP, this thesis presents a promising approach that uses the possibilities of semantic technologies to enable close collaboration between humans and machines in the execution of KiPs. The emergency call query in dispatch centres, on which the evaluation focuses, is moreover a highly relevant use case, since in an acute emergency decisions must be made within a very short time in order to avert far-reaching damage and save lives. By taking comprehensive amounts of data into account and exploiting available expert knowledge, a rapid assessment of the situation can be achieved with machine support, helping people to make the right decisions.
Modelling and implementing methods for the energy-efficient use of container technologies
(2021)
The use of cloud software and scaled web apps and web services has increased dramatically in recent years, leading to a rise in the number of high-performance cloud data centres. Besides improving the services, this is also reflected in the worldwide electricity consumption of data centres, which currently amounts to slightly more than 1% (about 200 TWh). Forecasts predict a massive increase in the electricity consumption of cloud data centres in the coming years. This development is driven by the acceleration of administration and development that results, among other things, from the use of containers. As the basis for millions of web apps and services, containers speed up the scaling, deployment, and updating of cloud services.
This thesis shows that, in addition to their many technical advantages, containers offer opportunities to reduce the energy consumption of cloud data centres that results from the inefficient configuration of containers and container runtime environments. Based on a survey and an evaluation of relevant literature, probable problems in the use of containers are first identified. Furthermore, the awareness of administrators and developers regarding the energy consumption of container software is assessed. Building on the results of the survey and the literature evaluation, the components of the de facto standard Docker are examined on the basis of standard scenarios in the container environment. Subsequently, a model is described, consisting of a measurement methodology, recommendations for an efficient configuration of containers, and tools. The measurement methodology is designed to be easy to apply and to support common technologies in data centres. In addition, the recommendations for action enable both developers and administrators to decide which Docker components should be used for energy-efficient operation, depending on the container deployment scenario, and which could be omitted. The resulting containers can be used energy-efficiently on servers as well as on PCs and embedded systems (as part of IoT and edge cloud), and can thus counteract more than just the cloud problem described above.
The thesis also deals with the behaviour of scaled web applications. Common orchestration tools define static scaling points for applications, in most cases based on CPU utilization. It is shown that neither the actual availability nor the power consumption of the applications is taken into account. The autoscaler of the open-source container orchestration tool Kubernetes is examined and extended with a newly developed tool. It becomes clear that a dynamic adjustment of the scaling points can be achieved through a prior evaluation of common usage scenarios together with information about their power consumption and their availability under increasing load.
Finally, the generated model is examined empirically in three simulations intended to demonstrate the effects on the energy consumption of cloud data centres.
Institutional and cultural determinants of speed of government responses during COVID-19 pandemic
(2021)
This article examines institutional and cultural determinants of the speed of government responses during the COVID-19 pandemic. We define speed as the marginal rate of change of the stringency index. Based on cross-country data, we find that collectivism is associated with a higher speed of government response. We also find a moderating role of trust in government, i.e., the association between individualism-collectivism and speed is stronger in countries with higher levels of trust in government. We do not find significant predictive power of democracy, media freedom, or power distance on the speed of government responses.
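A plausible formalization of the speed measure described above; the exact operationalization in the article may differ, and the daily discretization is an assumption:

```latex
% Speed of the government response in country i at time t, as the
% marginal rate of change of the stringency index S_i (hypothetical
% daily discretization of the definition in the text):
\mathrm{speed}_{i}(t) \;=\; \frac{\Delta S_{i}(t)}{\Delta t}
  \;=\; S_{i}(t) - S_{i}(t-1), \qquad \Delta t = 1 \text{ day}.
```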
The Eurosystem's Household Finance and Consumption Survey (HFCS) collects micro data on private households' balance sheets, income and consumption. It is a stylised fact that wealth is unequally distributed and that the wealthiest own a large share of total wealth. For sample surveys which aim at measuring wealth and its distribution, this is a considerable problem. To overcome it, some of the country surveys under the HFCS umbrella try to sample a disproportionately large share of households that are likely to be wealthy, a technique referred to as oversampling. Ignoring such types of complex survey designs in the estimation of regression models can lead to severe problems. This thesis first illustrates such problems using data from the first wave of the HFCS and canonical regression models from the field of household finance and gives a first guideline for HFCS data users regarding the use of replicate weight sets for variance estimation using a variant of the bootstrap. A further investigation of the issue necessitates a design-based Monte Carlo simulation study. To this end, the already existing large close-to-reality synthetic simulation population AMELIA is extended with synthetic wealth data. We discuss different approaches to the generation of synthetic micro data in the context of the extension of a synthetic simulation population that was originally based on a different data source. We propose an additional approach that is suitable for the generation of highly skewed synthetic micro data in such a setting using a multiply-imputed survey data set. After a description of the survey designs employed in the first wave of the HFCS, we then construct new survey designs for AMELIA that share core features of the HFCS survey designs. A design-based Monte Carlo simulation study shows that while more conservative approaches to oversampling do not pose problems for the estimation of regression models if sampling weights are properly accounted for, the same does not necessarily hold for more extreme oversampling approaches. This issue should be further analysed in future research.
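As a sketch of the replicate-weight variance estimation mentioned above: a survey-weighted estimator is recomputed once per replicate weight set, and the variability of the replicate estimates around the full-sample estimate estimates the sampling variance. The data set, column names, and scaling convention below are hypothetical placeholders; the HFCS's actual replicate weights follow the documentation shipped with the data.

```python
# Minimal sketch: variance estimation for a survey-weighted mean using
# bootstrap replicate weights. Column names ("net_wealth", "weight",
# "rw_0" ... "rw_99") are hypothetical, not actual HFCS names.
import numpy as np
import pandas as pd

def weighted_mean(y, w):
    return np.sum(w * y) / np.sum(w)

def replicate_variance(df, outcome, main_weight, replicate_cols):
    theta_hat = weighted_mean(df[outcome], df[main_weight])
    thetas = np.array([weighted_mean(df[outcome], df[rw])
                       for rw in replicate_cols])
    # Simple bootstrap variance: mean squared deviation of replicate
    # estimates around the full-sample estimate (scaling constants may
    # differ depending on how the replicate weights were constructed).
    return theta_hat, np.mean((thetas - theta_hat) ** 2)

rng = np.random.default_rng(1)
n, R = 500, 100
df = pd.DataFrame({"net_wealth": rng.lognormal(10, 1.5, n),
                   "weight": rng.uniform(0.5, 2.0, n)})
for r in range(R):  # toy replicate weights, for demonstration only
    df[f"rw_{r}"] = df["weight"] * rng.poisson(1.0, n)

est, var = replicate_variance(df, "net_wealth", "weight",
                              [f"rw_{r}" for r in range(R)])
print(f"estimate {est:.1f}, std. error {var ** 0.5:.1f}")
```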
Estimation, and therefore prediction -- both in traditional statistics and machine learning -- often encounters problems when performed on survey data, i.e., on data gathered from a random subset of a finite population. In addition to the stochastic generation of the data in the finite population (based on a superpopulation model), the subsetting represents a second randomization process and adds further noise to the estimation. The character and impact of this additional noise on the estimation procedure depend on the specific probability law for subsetting, i.e., the survey design. Especially when the design is complex or the population data are not generated by a Gaussian distribution, established methods must be rethought. Both phenomena occur in business surveys, and their combined occurrence poses challenges for estimation.
This work introduces selected topics linked to relevant use cases of business surveys and discusses the role of survey design therein. First, consider micro-econometrics using business surveys. Regression analysis under the peculiarities of non-normal data and complex survey designs is discussed. The focus lies on mixed models, which are able to capture unobserved heterogeneity, e.g. between economic sectors, when the dependent variable is not conditionally normally distributed. An algorithm for survey-weighted model estimation in this setting is provided and applied to business data.
Second, in official statistics, the classical sampling randomization and estimators for finite population totals are relevant. The variance estimation of estimators for (finite) population totals plays a major role in this framework in order to decide on the reliability of survey data. When the survey design is complex and the number of variables for which an estimated total is required is large, generalized variance functions are popular for variance estimation. They make it possible to circumvent cumbersome theoretical design-based variance formulae or computer-intensive resampling. A synthesis of the superpopulation-based motivation and the survey framework is elaborated. To the author's knowledge, such a synthesis is studied for the first time both theoretically and empirically.
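Generalized variance functions model the (rel)variance of an estimated total as a smooth function of the total itself; one common textbook specification, given here for orientation only and not necessarily the one used in the thesis, is:

```latex
% A common GVF specification: model the relative variance of an
% estimated total \hat{T} as a function of the total, with constants
% a, b fitted jointly across many survey variables:
\frac{\widehat{\mathrm{Var}}(\hat{T})}{\hat{T}^{2}} \;=\; a + \frac{b}{\hat{T}} .
```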
Third, the self-organizing map -- an unsupervised machine learning algorithm for data visualization, clustering, and even probability estimation -- is introduced. A link to Markov random fields is outlined, which to the author's knowledge has not yet been established, and a density estimator is derived. The latter is evaluated in a Monte Carlo simulation and then applied to real-world business data.
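For readers unfamiliar with the algorithm, here is a minimal sketch of the classical online self-organizing map update; grid size, learning rate, and neighborhood schedule are arbitrary illustrative choices, and the thesis's link to Markov random fields and its density estimator go beyond this basic form.

```python
# Minimal online self-organizing map (Kohonen) sketch: a 2D grid of
# codebook vectors is pulled toward the data, with updates weighted by
# a shrinking neighborhood around the best-matching unit (BMU).
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    # Grid coordinates of each unit, used for the neighborhood kernel.
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)
    weights = rng.normal(size=(n_units, data.shape[1]))
    t, t_max = 0, epochs * len(data)
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1.0 - t / t_max)               # decaying learning rate
            sigma = sigma0 * (1.0 - t / t_max) + 0.5   # decaying radius
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))       # neighborhood kernel
            weights += lr * h[:, None] * (x - weights)
            t += 1
    return weights

data = np.random.default_rng(42).normal(size=(1000, 3))
codebook = train_som(data)
```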
This thesis offers a critique of the performativity-of-economics debate, which is argued to suffer from theoretical problems, in particular from deficits regarding an action-theoretic framing and explanation of its subject matter.
To overcome this problem, a combination with the mechanism approach of analytical sociology is proposed, which, first, offers an explicitly action-theoretic access; second, allows social dynamics and processes to be decoded by identifying the underlying social mechanisms; and, third, can translate different manifestations of the phenomenon under investigation (the performativity of economic theories) into middle-range theories. The connection is established via the mechanism of the self-fulfilling theory as a specific form of the self-fulfilling prophecy, which in the further course of the argument is used as an explanatory instrument of the mechanism approach and is critically reflected upon in the process.
The action-based explanation of a specific type of performativity of economic theories is finally demonstrated empirically with a case study: the rise and diffusion of the shareholder value approach and the underlying agency theory. It is shown that mechanism-based explanations can contribute to a general theoretical upgrading of the debate in question. The mechanism of the self-fulfilling theory in particular offers various advantages and disadvantages for explaining the phenomenon under investigation, but as a theoretical bridge it can also make a fruitful contribution, not least by allowing a differentiated view of the relationship between strong forms of performativity and self-fulfilling prophecies.
Structured Eurobonds - Optimal Construction, Impact on the Euro and the Influence of Interest Rates
(2020)
Structured Eurobonds are a prominent topic in the discussion of how to complete the monetary and fiscal union. This work sheds light on several issues that go hand in hand with the introduction of common bonds. A first crucial question concerns the optimal construction, e.g., what is the optimal common liability? Other questions arise for the time after the introduction. The impact on several exchange rates is examined in this work. Finally, an approximation bias in forward-looking DSGE models is quantified, which would lead to an adjustment of central bank interest rates and therefore has an impact on the other two topics.
This work deals with the current support landscape for social entrepreneurship (SE) in the DACH region. It provides answers to the questions of which actors support SE, how and why they do so, and which social ventures are supported. In addition, there is a focus on the motives for supporting SE as well as on the decision-making process when selecting social ventures. In both cases, it is examined whether certain characteristics of the decision-maker and of the organization influence the weighting of motives and decision-making criteria. More precisely, the gender of the decision-maker as well as the kind of support provided by the organization is analyzed. The concrete examples of foundations and venture philanthropy organizations (VPOs) give a deeper look at SE support motives and decision-making behavior. In a quantitative empirical data collection by means of an online survey, decision-makers from SE-supporting organizations in the DACH region were asked to participate in a conjoint experiment and to fill in a questionnaire. The results illustrate a positive development of the SE support landscape in the German-speaking area as well as the heterogeneity of the organizational types, of the financial and non-financial support instruments, and of the supported social ventures. Regarding the motives for SE support, a general endeavor to create change and impact has proven to be particularly important at both the organizational and the individual level. At the individual level, female and male decision-makers show subtle differences in their motives to promote SE; robustness checks on certain subsamples provide information about this. Individuals from foundations and VPOs, on the other hand, hardly differ from each other, even though individuals with a rather social background face individuals with a business background here. At the organizational level, crucial differences in motives can be identified depending on the nature of the organization's support, and again when comparing foundations with VPOs. Especially for the motives 'financial interests', 'reputation' and 'employee development', there are big differences between the considered groups. Eventually, by means of cluster analysis and still with respect to the support motives, two types of decision-makers could be identified at both the individual and the organizational level.
In terms of decision-making behavior, and the weighting of certain decision-making criteria respectively, it has emerged that a closer look is worthwhile: the 'importance of the social problem' and the 'authenticity of the start-up team' are consistently the two most important criteria when it comes to selecting social ventures for support. However, when comparing male and female decision-makers, foundations and VPOs, as well as the two groups of financially and non-financially supporting organizations, there are certain specifics that are highly relevant for SE practice. Here as well, a cluster analysis uncovered patterns of criteria weighting by identifying three different types of decision-makers.
The formerly communist countries in Central and Eastern Europe (transitional economies in Europe and the former Soviet Union -- for example, East Germany, the Czech Republic, Hungary, Lithuania, Poland, and Russia) and transitional economies in Asia -- for example, China and Vietnam -- had centrally planned economies, which did not allow entrepreneurial activity. Despite the political and socioeconomic transformations in transitional economies around 1989, they retain an institutional heritage that affects individuals' values and attitudes, which, in turn, influence intentions, behaviors, and actions, including entrepreneurship.
While prior studies on the long-lasting effects of the socialist legacy on entrepreneurship have focused on limited geographical regions (e.g., East-West Germany and East-West Europe), this dissertation focuses on the Vietnamese context, which offers a unique quasi-experimental setting. In 1954, Vietnam was divided into the socialist North and the non-socialist South, and it was then reunified under socialist rule in 1975. Thus, the duration of the difference in socialist treatment between North and South Vietnam (about 21 years) is much shorter than that between East and West Germany (about 40 years) and between East and West Europe (about 70 years when considering former Soviet Union countries).
To assess the relationship between socialist history and entrepreneurship in this unique setting, we survey more than 3,000 Vietnamese individuals. This thesis finds that individuals from North Vietnam have lower entrepreneurship intentions, are less likely to enroll in entrepreneurship education programs, and are less likely to take over an existing business, compared to those from the South of Vietnam. The long-lasting effect of formerly socialist institutions on entrepreneurship is apparently deeper than previously discovered in the prominent cases of East-West Germany and East-West Europe.
In the second empirical investigation, this dissertation focuses on how succession intentions differ from other entrepreneurial intentions (e.g., founding and employee intentions) regarding career choice motivation, and on the effect of the three main elements of the theory of planned behavior (entrepreneurial attitude, subjective norms, and perceived behavioral control) in a transition economy, the Vietnamese context. The findings of this thesis suggest that the intentional founder is characterized by innovation motives, the intentional successor by role motives, and the intentional employee by a social mission. Additionally, this thesis reveals that entrepreneurial attitude and perceived behavioral control are positively associated with founding intentions, whereas there is no difference in this effect between succession and employee intentions.
Traditionally, random sample surveys are planned so that national statistics can be estimated reliably with adequate precision. For this purpose, primarily design-based, model-assisted estimation methods are used, which rely largely on asymptotic properties. For smaller sample sizes, as encountered in small areas (domains or subpopulations), these estimation methods are not well suited, which is why special model-based small area estimation methods have been developed for this application. The latter may be biased, but they often have a smaller mean squared error than design-based estimators. Model-assisted and model-based methods have in common that they rely on statistical models, although to different degrees. Model-assisted methods are usually constructed so that the contribution of the model is small for very large sample sizes (and vanishes in the limit). In model-based methods, the model always plays a central role, regardless of the sample size. These considerations illustrate that the assumed model -- more precisely, the quality of the modelling -- is of decisive importance for the quality of small area statistics. If the empirical data cannot be described by a suitable model and estimated with the corresponding methods, massive biases and/or inefficient estimates can result.
This thesis addresses the central question of the robustness of small area estimation methods. Statistical methods are called robust if they have a bounded influence function and as high a breakdown point as possible. Put simply, robust methods are characterized by being only marginally affected by outliers and other anomalies in the data. The robustness investigation concentrates on the following models and estimation methods:
i) model-based estimators for the Fay-Herriot model (Fay and Herriot, 1979, J. Amer. Statist. Assoc.) and the basic unit-level model (cf. Battese et al., 1988, J. Amer. Statist. Assoc.);
ii) direct, model-assisted estimators under the assumption of a linear regression model.
The unit-level model for mean estimation is based on a Gaussian mixed linear model (MLM) with block-diagonal covariance matrix. In contrast to, for instance, a multiple linear regression model, MLMs have no invariance properties worth mentioning, so a contamination of the dependent variable inevitably leads to biased parameter estimates. For the maximum likelihood method, the resulting bias can become almost arbitrarily large. For this reason, Richardson and Welsh (1995, Biometrics) developed the robust estimation methods RML 1 and RML 2, which show only a small bias under contaminated data and are considerably more efficient than the maximum likelihood method. A variant of RML 2 was proposed by Sinha and Rao (2009, Canad. J. Statist.) for the robust estimation of unit-level models. However, the common numerical procedures for computing the RML 2 method (this also applies to the proposal of Sinha and Rao) prove to be notoriously unreliable. This thesis first discusses the convergence problems of the existing procedures and then proposes a numerical method with considerably better numerical properties. Finally, the proposed estimation method is examined in a simulation study and illustrated with an empirical example on the estimation of above-ground biomass in Norwegian municipalities.
The Fay-Herriot model can be regarded as a special case of an MLM with block-diagonal covariance matrix, although the variances of the random effects for the small areas need not be estimated but are treated as known quantities. This property can be exploited to transfer the robustification of the unit-level model proposed by Sinha and Rao (2009) directly to the Fay-Herriot model. This thesis, however, develops an alternative proposal that starts from the following observation: Fay and Herriot (1979) motivated their model as a generalization of the James-Stein estimator, making use of an empirical Bayes approach. We take up this motivation of the problem and formulate an analogous robust Bayesian procedure. If one chooses the least favorable distribution of Huber (1964, Ann. Math. Statist.) as the prior distribution for the location values of the small areas in the robust Bayesian problem formulation, the resulting Bayes estimator [i.e., the estimator with the smallest Bayes risk] is the limited translation rule (LTR) of Efron and Morris (1971, J. Amer. Statist. Assoc.). In the context of frequentist statistics, the limited translation rule cannot be used because, as a Bayes estimator, it depends on unknown parameters. Following the empirical Bayes approach, however, the unknown parameters can be estimated from the marginal distribution of the dependent variable. It must be borne in mind (and this has been neglected in the literature) that under the least favorable prior the marginal distribution is not a normal distribution but is itself described by the least favorable distribution of Huber (1964). It is then not surprising that the maximum likelihood estimators of the regression coefficients and the model variance under this marginal distribution are M-estimators with the Huber psi function.
Our theory-driven derivation of robust estimators for the Fay-Herriot model shows that, under contaminated data, the estimated LTR (with parameters estimated by the M-estimation methodology) is optimal, and that the LTR is an integral part of the estimation methodology (and is not to be regarded as an "add-on" or the like, as is done elsewhere). The proposed M-estimators are robust in the presence of atypical small areas (outliers), as the simulation and case studies show. To achieve robustness also in the presence of influential observations in the independent variables, generalized M-estimators were developed for the Fay-Herriot model.
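For reference, the Huber psi function mentioned above, which caps the influence of large residuals at a tuning constant c; the limited translation rule correspondingly caps the shrinkage applied to each small area (the exact LTR form used in the thesis is not reproduced here):

```latex
% Huber's psi function with tuning constant c > 0:
\psi_{c}(u) \;=\;
\begin{cases}
u, & |u| \le c,\\[2pt]
c\,\operatorname{sign}(u), & |u| > c.
\end{cases}
```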
Data used for machine learning are often erroneous. In this thesis, p-quasinorms (p < 1) are employed as loss functions in order to increase the robustness of training algorithms for artificial neural networks. Numerical issues arising from these loss functions are addressed via enhanced optimization algorithms (proximal point methods; Frank-Wolfe methods) based on the (non-monotone) Armijo rule. Numerical experiments comprising 1100 test problems confirm the effectiveness of the approach. Depending on the parametrization, an average reduction of the absolute residuals of up to 64.6% is achieved (aggregated over 100 test problems).
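A sketch of the basic idea for a linear model: minimize a smoothed p-quasinorm of the residuals (p < 1) by gradient descent with a plain monotone Armijo backtracking rule. The smoothing constant eps and all hyperparameters are illustrative assumptions, and the thesis's proximal point, Frank-Wolfe, and non-monotone variants are not shown.

```python
# Minimal sketch: robust fitting with a smoothed p-quasinorm loss
# sum_i (r_i^2 + eps)^(p/2), p < 1, minimized by gradient descent with
# Armijo backtracking. A linear model stands in for a neural network.
import numpy as np

def loss_and_grad(w, X, y, p=0.5, eps=1e-6):
    r = X @ w - y
    s = r ** 2 + eps
    loss = np.sum(s ** (p / 2))
    grad = X.T @ (p * r * s ** (p / 2 - 1))
    return loss, grad

def armijo_descent(X, y, p=0.5, iters=200, beta=0.5, c=1e-4):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        f, g = loss_and_grad(w, X, y, p)
        d = -g                          # steepest descent direction
        t = 1.0
        # Armijo rule: shrink the step until sufficient decrease holds.
        while loss_and_grad(w + t * d, X, y, p)[0] > f + c * t * (g @ d):
            t *= beta
            if t < 1e-12:
                return w
        w = w + t * d
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)
y[:20] += 20.0                          # gross outliers in the labels
w_robust = armijo_descent(X, y)
```

Because p < 1 damps the contribution of large residuals, the fit is pulled far less by the outlying labels than a least-squares fit would be.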
This thesis addresses three different topics from the fields of mathematical finance, applied probability and stochastic optimal control. Correspondingly, it is subdivided into three independent main chapters each of which approaches a mathematical problem with a suitable notion of a stochastic particle system.
In Chapter 1, we extend the branching diffusion Monte Carlo method of Henry-Labordère et al. (2019) to the case of parabolic PDEs with mixed local-nonlocal analytic nonlinearities. We investigate branching diffusion representations of classical solutions, and we provide sufficient conditions under which the branching diffusion representation solves the PDE in the viscosity sense. Our theoretical setup directly leads to a Monte Carlo algorithm, whose applicability is showcased in two stylized high-dimensional examples. As our main application, we demonstrate how our methodology can be used to value financial positions with defaultable, systemically important counterparties.
In Chapter 2, we formulate and analyze a mathematical framework for continuous-time mean field games with finitely many states and common noise, including a rigorous probabilistic construction of the state process. The key insight is that we can circumvent the master equation and reduce the mean field equilibrium to a system of forward-backward systems of (random) ordinary differential equations by conditioning on common noise events. We state and prove a corresponding existence theorem, and we illustrate our results in three stylized application examples. In the absence of common noise, our setup reduces to that of Gomes, Mohr and Souza (2013) and Cecchin and Fischer (2020).
In Chapter 3, we present a heuristic approach to tackle stochastic impulse control problems in discrete time. Based on the work of Bensoussan (2008), we reformulate the classical Bellman equation of stochastic optimal control in terms of a discrete-time quasi-variational inequality (QVI), and we prove a corresponding verification theorem. Taking the resulting optimal impulse control as a starting point, we devise a self-learning algorithm that estimates the continuation and intervention regions of such a problem. Its key features are that it explores the state space of the underlying problem by itself and successively learns the behavior of the optimally controlled state process. For illustration, we apply our algorithm to a classical example problem, and we give an outlook on open questions to be addressed in future research.
Many NP-hard optimization problems that originate from classical graph theory, such as the maximum stable set problem and the maximum clique problem, have been extensively studied over the past decades and involve the choice of a subset of edges or vertices. There usually exist combinatorial methods that can be applied to solve them directly in the graph.
The simplest method is to enumerate feasible solutions and select the best. Unsurprisingly, this method is often very slow, so the task is to cleverly discard fruitless parts of the search space during the search. An alternative method for solving graph problems is to formulate integer linear programs such that their solution yields an optimal solution to the original optimization problem in the graph. In order to solve integer linear programs, one can start by relaxing the integrality constraints and then try to find inequalities that cut off fractional extreme points. In the best case, it would be possible to describe the convex hull of the feasible region of the integer linear program with a set of inequalities. In general, giving a complete description of this convex hull is out of reach, even if it has a polynomial number of facets. Thus, one tries to strengthen the (weak) relaxation of the integer linear program as well as possible via strong inequalities that are valid for the convex hull of feasible integer points.
Many classes of valid inequalities are of exponential size. For instance, a graph can in general have exponentially many odd cycles, and therefore the number of odd cycle inequalities for the maximum stable set problem is exponential. It is sometimes possible to check in polynomial time whether a given point violates any of the exponentially many inequalities. This is indeed the case for the odd cycle inequalities for the maximum stable set problem. If a polynomial-time separation algorithm is known, there exists a formulation of polynomial size that contains a given point if and only if it does not violate one of the (potentially exponentially many) inequalities. This thesis can be divided into two parts. The first part is the main part, and it contains various new results. We present new extended formulations for several optimization problems, i.e., the maximum stable set problem, the nonconvex quadratic program with box constraints, and the p-median problem. In the second part, we modify a very fast algorithm for finding a maximum clique in very large sparse graphs. We suggest three alternative versions of this algorithm and compare their strengths and weaknesses against the original version.
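For concreteness, here are the standard integer linear program for the maximum stable set problem and the odd cycle inequalities discussed above; this is the well-known textbook formulation, stated for orientation:

```latex
% Maximum stable set problem on a graph G = (V, E):
\max \sum_{v \in V} x_{v}
\quad \text{s.t.} \quad
x_{u} + x_{v} \le 1 \;\; \forall \{u,v\} \in E,
\qquad x_{v} \in \{0,1\} \;\; \forall v \in V.
% Odd cycle inequalities, valid for every odd cycle C in G: at most
% every other vertex of the cycle can belong to a stable set,
\sum_{v \in C} x_{v} \;\le\; \frac{|C| - 1}{2} .
```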