External capital plays an important role in financing entrepreneurial ventures because internal capital sources are limited. Important external capital providers for entrepreneurial ventures are venture capitalists (VCs). VCs worldwide are often confronted with thousands of proposals from entrepreneurial ventures per year and must choose in which of these companies to invest. VCs finance companies not only at their early stages but also at later stages, once companies have secured their first market success. This dissertation therefore focuses on the decision-making behavior of VCs when investing in later-stage ventures. It uses both qualitative and quantitative research methods to answer the question of how the decision-making behavior of VCs that invest in later-stage ventures can be described.
Based on qualitative interviews with 19 investment professionals, the first insight gained is that different decision criteria are applied at different stages of venture development. This is attributed to the different risks and goals of ventures at different stages, as well as to the different types of information available. The decision criteria in the later-stage context contrast with results from studies that focus on early-stage ventures. Later-stage ventures possess meaningful information on financials (revenue growth and profitability), the established business model, and existing external investors that is not available for early-stage ventures; this information therefore constitutes new decision criteria for this specific context.
Following this identification of the most relevant decision criteria for investors in the context of later-stage ventures, a conjoint study with 749 participants was carried out to understand the relative importance of these criteria. The results showed that investors attribute the highest importance to (1) revenue growth, (2) the value-added of products/services for customers, and (3) the management team's track record, revealing differences from decision-making studies in the context of early-stage ventures.
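The relative importance of conjoint attributes is commonly derived from the range of their part-worth utilities. A minimal sketch of this standard computation, with made-up attribute levels and utilities rather than the study's actual estimates:

```python
# Attribute importance from conjoint part-worth utilities.
# The attributes mirror the three top-ranked criteria; all numbers are invented.
part_worths = {
    "revenue growth":    {"low": -0.8, "medium": 0.1, "high": 0.7},
    "value-added":       {"low": -0.5, "high": 0.5},
    "team track record": {"weak": -0.4, "strong": 0.4},
}

# An attribute's importance is its utility range relative to the sum of ranges.
ranges = {a: max(u.values()) - min(u.values()) for a, u in part_worths.items()}
total = sum(ranges.values())
importance = {a: round(100 * r / total, 1) for a, r in ranges.items()}
# Here "revenue growth" receives the largest share (45.5 of 100).
```

The larger the spread an attribute induces in respondents' utilities, the more weight it carries in the choice decision.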
Not only do the characteristics of a venture influence the decision to invest; additional indirect factors, such as individual characteristics or characteristics of the investment firm, can also influence individual decisions. Relying on cognitive theory, this study investigated the influence of various individual characteristics on screening decisions and found that both investment experience and entrepreneurial experience influence individual decision-making behavior. This study also examined whether the goals, incentive structures, resources, and governance of the investment firm influence decision making in the context of later-stage ventures. It particularly investigated two distinct types of investment firms, family offices and corporate venture capital funds (CVCs), which have unique structures, goals, and incentive systems. Additional quantitative analysis showed that family offices put less focus on high-growth firms and on whether reputable investors are present; they tend to focus more on the profitability of a later-stage venture in the initial screening. The analysis also showed that CVCs place greater importance on product and business model characteristics than other investors. CVCs also favor later-stage ventures with lower revenue growth rates, indicating a preference for less risky investments. The results provide various insights for theory and practice.
In the modeling context, non-linearities and uncertainty go hand in hand. In fact, the utility function's curvature determines the degree of risk aversion. This concept is exploited in the first article of this thesis, which incorporates uncertainty into a small-scale DSGE model. More specifically, this is done by a second-order approximation, with the derivation carried out in great detail and the more formal aspects carefully discussed. Moreover, the consequences of this method for calibrating the equilibrium condition are discussed. The second article considers the essential model part of the first paper and focuses on the (forward-looking) data needed to meet the model's requirements. A large number of uncertainty measures are utilized to explain a possible approximation bias. The last article keeps to the same topic but uses statistical distributions instead of actual data. In addition, theoretical (model) and calibrated (data) parameters are used to produce more general statements. In this way, several relationships are revealed with regard to a biased interpretation of this class of models. In this dissertation, the respective approaches are explained in full detail, as is the way they build on each other.
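Schematically, the perturbation approach behind such a second-order approximation expands the policy function g around the deterministic steady state; this generic form follows the standard perturbation literature and is only a sketch, not the article's specific derivation:

```latex
g(x,\sigma) \;\approx\; g(\bar{x},0) \;+\; g_x\,(x-\bar{x}) \;+\; \tfrac{1}{2}\,g_{xx}\,(x-\bar{x})^2 \;+\; \tfrac{1}{2}\,g_{\sigma\sigma}\,\sigma^2 ,
```

where \sigma scales the shocks, the first-order terms satisfy g_\sigma = g_{x\sigma} = 0 (certainty equivalence), and the constant correction \tfrac{1}{2} g_{\sigma\sigma}\sigma^2 is the channel through which uncertainty enters the equilibrium conditions.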
In summary, the question remains whether the exact interpretation of model equations should play a role in macroeconomics. If we answer this in the affirmative, this work shows the extent to which the practical use of these models can lead to biased results.
Economic growth theory analyzes which factors affect economic growth and how growth can be sustained. A popular neoclassical growth model is the Ramsey-Cass-Koopmans model, which aims to determine how much of its income a nation or an economy should save in order to maximize its welfare. In this thesis, we present and analyze an extended capital accumulation equation of a spatial version of the Ramsey model, balancing diffusive and agglomerative effects. We model capital mobility in space via a nonlocal diffusion operator which allows for jumps of the capital stock from one location to another. Moreover, this operator smooths out heterogeneities in the factor distributions more slowly, which generates a more realistic behavior of capital flows. In addition, we introduce an endogenous productivity-production operator which depends on time and on the capital distribution in space; this operator models the technological progress of the economy. The resulting mathematical model is an optimal control problem under a semilinear parabolic integro-differential equation with initial and volume constraints, which are a nonlocal analogue of local boundary conditions, and box constraints on the state and the control variables. In this thesis, we consider this problem on a bounded and on an unbounded spatial domain, in both cases with a finite time horizon. We derive existence results for weak solutions of the capital accumulation equations in both settings, and we prove the existence of a Ramsey equilibrium in the unbounded case. Moreover, we solve the optimal control problem numerically and discuss the results in the economic context.
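To fix ideas, a nonlocal diffusion operator of the kind described above, and the schematic shape of the resulting capital accumulation equation, can be written as follows; this is an illustrative generic form, and the kernel, production function, and constraints of the thesis may differ:

```latex
(\mathcal{L}u)(x) \;=\; \int_{\Omega} \big(u(y)-u(x)\big)\,k(x,y)\,\mathrm{d}y,
\qquad
\partial_t K(t,x) \;=\; (\mathcal{L}K)(t,x) + F\big(A(t,x),K(t,x)\big) - \delta K(t,x) - c(t,x),
```

with capital stock K, production F, endogenous productivity A, depreciation rate \delta, and consumption c as the control; the integral term moves capital between locations x and y and thus allows for the jumps of the capital stock mentioned above.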
The harmonic Faber operator
(2018)
P. K. Suetin points out at the beginning of his monograph "Faber Polynomials and Faber Series" that Faber polynomials play an important role in the modern approximation theory of a complex variable, as they are used in representing analytic functions in simply connected domains, and many theorems on the approximation of analytic functions are proved with their help [50]. The Faber polynomials were first introduced by G. Faber in 1903. Faber's aim was to generalise the Taylor series of holomorphic functions in the open unit disc D in the following way. Any holomorphic function in D has a Taylor series representation f(z)=\sum_{\nu=0}^{\infty}a_{\nu}z^{\nu} (z\in\D) converging locally uniformly inside D; for a simply connected domain G, Faber therefore wanted to determine a system of polynomials (Q_n) such that each function f holomorphic in G can be expanded into a series
f=\sum_{\nu=0}^{\infty}b_{\nu}Q_{\nu} converging locally uniformly inside G. With this goal in mind, Faber considered simply connected domains bounded by an analytic Jordan curve and constructed a system of polynomials (F_n) with this property. These polynomials F_n were later named Faber polynomials after him. In the preface of [50], a detailed summary is given of results concerning Faber polynomials and of results obtained with their aid. An important application of Faber polynomials is, e.g., the transfer of known assertions concerning the polynomial approximation of functions belonging to the disc algebra to results on the approximation of functions that are continuous on a compact continuum K, which contains at least two points and has a connected complement, and holomorphic in the interior of K. In this field, the Faber operator, denoted by T, turns out to be a powerful tool (for an introduction, see e.g. D. Gaier's monograph). It
maps a polynomial of degree at most n given in the monomial basis, \sum_{\nu=0}^{n}a_{\nu}z^{\nu}, to a polynomial of degree at most n given in the basis of Faber polynomials, \sum_{\nu=0}^{n}a_{\nu}F_{\nu}. If the Faber operator is continuous with respect to the uniform norms, it has a unique continuous extension to an operator mapping the disc algebra onto the space of functions that are continuous on the whole compact continuum and holomorphic in its interior. For every f in the disc algebra and every polynomial P, the obvious estimate for the uniform norms ||T(f)-T(P)|| <= ||T|| ||f-P|| shows that the original task of approximating F=T(f) by polynomials is reduced to the polynomial approximation of the function f. Therefore, the question arises under which conditions the Faber operator is continuous and surjective. A fundamental result in this regard was established by J. M. Anderson and J. Clunie, who showed that if the compact continuum is bounded by a rectifiable Jordan curve with bounded boundary rotation and free from cusps, then the Faber operator with respect to the uniform norms is a topological isomorphism. Now, let f be a harmonic function in D. Similarly to the above, we find that f has a uniquely determined representation f=\sum_{\nu=-\infty}^{\infty}a_{\nu}p_{\nu}
converging locally uniformly inside D, where p_{n}(z)=z^{n} for n\in\N_{0} and p_{-n}(z)=\overline{z}^{n} for n\in\N. One may ask whether there is an analogue for harmonic functions on simply connected domains G. Indeed, for a domain G bounded by an analytic Jordan curve, the conjecture holds true that each function f harmonic in G has a uniquely determined representation f=\sum_{\nu=-\infty}^{\infty}b_{\nu}F_{\nu}, where F_{-n}(z)=\overline{F_{n}(z)} for n\in\N, converging locally uniformly inside G. Let now K be a compact continuum containing at least two points and having a connected complement. A main component of this thesis will be the examination of the harmonic Faber operator, mapping a harmonic polynomial given in the basis of the harmonic monomials, \sum_{\nu=-n}^{n}a_{\nu}p_{\nu}, to the harmonic polynomial \sum_{\nu=-n}^{n}a_{\nu}F_{\nu}.
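In compact notation, the two operators introduced above act as follows (restating the definitions from the text, with \widetilde{T} denoting the harmonic Faber operator):

```latex
T\Big(\sum_{\nu=0}^{n} a_\nu z^\nu\Big) \;=\; \sum_{\nu=0}^{n} a_\nu F_\nu,
\qquad
\widetilde{T}\Big(\sum_{\nu=-n}^{n} a_\nu p_\nu\Big) \;=\; \sum_{\nu=-n}^{n} a_\nu F_\nu,
```

and continuity with respect to the uniform norms yields the estimate \|T(f)-T(P)\| \le \|T\|\,\|f-P\| used to reduce the approximation of T(f) to the polynomial approximation of f.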
If this operator, which is based on an idea of J. Müller, is continuous with respect to the uniform norms, it has a unique continuous extension to an operator mapping the functions being continuous on \partial\D onto the continuous functions on K being
harmonic in the interior of K. Harmonic Faber polynomials and the harmonic Faber operator will be the objects accompanying us throughout
our whole discussion. After giving, in the first chapter, an overview of the notation and of certain tools used in our considerations, we begin our studies with an introduction to the Faber operator and the harmonic Faber operator. We start modestly and consider domains bounded by an analytic Jordan curve. In Section 2, as a first result, we will show that, for such a domain G, the harmonic Faber operator has a unique continuous extension to an operator mapping the space of the harmonic functions in D onto the space
of the harmonic functions in G, and moreover, the harmonic Faber
operator is an isomorphism with respect to the topologies of locally
uniform convergence. In the further sections of this chapter, we illuminate the behaviour of the (harmonic) Faber operator on certain function spaces. In the third chapter, we leave the situation of compact continua bounded by an analytic Jordan curve. Instead, we consider closures of domains bounded by Jordan curves with Dini-continuous curvature. With the aid of the concept of compact operators and the Fredholm alternative, we are able to show that the harmonic Faber operator is a topological isomorphism. Since, in particular, the main result of the third chapter holds true for closures K of domains bounded by analytic Jordan curves, we can make use of it to obtain new results concerning the approximation of functions that are continuous on K and harmonic in the interior of K by harmonic polynomials. To do so, we develop techniques applied by L. Frerick and J. Müller in [11] and adjust them to our setting. In this way, we can transfer results about the classic Faber operator to the harmonic Faber operator. In the last chapter, we will use the theory of harmonic Faber polynomials
to solve certain Dirichlet problems in the complex plane. We pursue
two different approaches: First, with a similar philosophy as in [50],
we develop a procedure to compute the coefficients of a series \sum_{\nu=-\infty}^{\infty}c_{\nu}F_{\nu} converging uniformly to the solution of a given Dirichlet problem. Later, we point out how semi-infinite programming with harmonic Faber polynomials as ansatz functions can be used to obtain an approximate solution of a given Dirichlet problem. We cover both approaches first from a theoretical point of view before focusing on the numerical implementation of concrete examples. As an application of the numerical computations, we obtain visualisations of the Dirichlet problems concerned, rounding out our discussion of the harmonic Faber polynomials and the harmonic Faber operator.
Striving for sustainable development by combating climate change and creating a more social world is one of the most pressing issues of our time. Growing legal requirements and customer expectations also require Mittelstand firms to address sustainability issues such as climate change. This dissertation contributes to a better understanding of sustainability in the Mittelstand context by examining different Mittelstand actors and the three dimensions of sustainability - social, economic, and environmental - in four quantitative studies. The first two studies focus on the social relevance and economic performance of hidden champions, a niche-market-leading subgroup of Mittelstand firms. At the regional level, the impact of 1,645 hidden champions located in Germany on various dimensions of regional development is examined. A higher concentration of hidden champions has a positive effect on regional employment, median income, and patents. At the firm level, analyses of a panel dataset of 4,677 German manufacturing firms, including 617 hidden champions, show that the latter have a higher return on assets than other Mittelstand firms. The following two chapters deal with environmental strategies and thus contribute to the exploration of the environmental dimension of sustainability. First, the consideration of climate aspects in investment decisions is compared using survey data from 468 European venture capital and private equity investors. While private equity firms respond to external stakeholders and portfolio performance and pursue an active ownership strategy, venture capital firms are motivated by product differentiation and make impact investments. Finally, based on survey data from 443 medium-sized manufacturing firms in Germany, 54% of which are family-owned, the impact of stakeholder pressures on their decarbonization strategies is analyzed.
A distinction is made between symbolic decarbonization strategies (compensation of CO₂ emissions) and substantive decarbonization strategies (reduction of CO₂ emissions). Stakeholder pressures lead to a proactive pursuit of decarbonization strategies, with internal and external stakeholders varying in their influence on symbolic and substantive decarbonization strategies, and with the relationship influenced by family ownership.
Structured Eurobonds - Optimal Construction, Impact on the Euro and the Influence of Interest Rates
(2020)
Structured Eurobonds are a prominent topic in the discussions on how to complete the monetary and fiscal union. This work sheds light on several issues that go hand in hand with the introduction of common bonds. A first crucial question concerns the optimal construction, e.g. the optimal degree of common liability. Other questions arise for the time after the introduction: the impact on several exchange rates is examined in this work. Finally, an approximation bias in forward-looking DSGE models is quantified; this bias would lead to an adjustment of central bank interest rates and therefore has an impact on the other two topics.
This thesis addresses three different topics from the fields of mathematical finance, applied probability and stochastic optimal control. Correspondingly, it is subdivided into three independent main chapters each of which approaches a mathematical problem with a suitable notion of a stochastic particle system.
In Chapter 1, we extend the branching diffusion Monte Carlo method of Henry-Labordère et al. (2019) to the case of parabolic PDEs with mixed local-nonlocal analytic nonlinearities. We investigate branching diffusion representations of classical solutions, and we provide sufficient conditions under which the branching diffusion representation solves the PDE in the viscosity sense. Our theoretical setup directly leads to a Monte Carlo algorithm, whose applicability is showcased in two stylized high-dimensional examples. As our main application, we demonstrate how our methodology can be used to value financial positions with defaultable, systemically important counterparties.
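A classical special case of such a branching representation is McKean's formula for the KPP equation u_t = 1/2 u_xx + u^2 - u, where u(t,x) is the expectation of the product of g evaluated at the positions of a binary branching Brownian motion at the horizon. A minimal Monte Carlo sketch of this textbook case (not the mixed local-nonlocal algorithm of the chapter):

```python
import math
import random

# Branching-diffusion Monte Carlo for the KPP equation
#   u_t = 1/2 u_xx + u^2 - u,  u(0, x) = g(x),
# via McKean's representation u(t, x) = E[ prod_i g(x + X_i(t)) ]:
# particles follow Brownian motions, live Exp(1) lifetimes, and
# branch into two offspring when their lifetime ends before the horizon.
def u_estimate(t, x, g, n_paths=2000, rng=random.Random(1)):
    total = 0.0
    for _ in range(n_paths):
        particles = [(x, t)]  # stack of (position, remaining time)
        prod = 1.0
        while particles:
            pos, rem = particles.pop()
            tau = rng.expovariate(1.0)            # particle lifetime
            dt = min(tau, rem)
            pos += rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
            if tau >= rem:
                prod *= g(pos)                    # reached the horizon
            else:                                 # branch into two offspring
                particles.append((pos, rem - dt))
                particles.append((pos, rem - dt))
        total += prod
    return total / n_paths

g = lambda y: 1.0 / (1.0 + math.exp(-y))  # initial datum with values in (0, 1)
estimate = u_estimate(0.5, 0.0, g)        # Monte Carlo estimate of u(0.5, 0)
```

Since g takes values in (0, 1), every path's product, and hence the estimate, stays in (0, 1); the defaultable-counterparty application in the chapter replaces this toy nonlinearity with the mixed local-nonlocal one.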
In Chapter 2, we formulate and analyze a mathematical framework for continuous-time mean field games with finitely many states and common noise, including a rigorous probabilistic construction of the state process. The key insight is that we can circumvent the master equation and reduce the mean field equilibrium to a system of forward-backward (random) ordinary differential equations by conditioning on common noise events. We state and prove a corresponding existence theorem, and we illustrate our results in three stylized application examples. In the absence of common noise, our setup reduces to that of Gomes, Mohr and Souza (2013) and Cecchin and Fischer (2020).
In Chapter 3, we present a heuristic approach to tackle stochastic impulse control problems in discrete time. Based on the work of Bensoussan (2008), we reformulate the classical Bellman equation of stochastic optimal control in terms of a discrete-time quasi-variational inequality (QVI), and we prove a corresponding verification theorem. Taking the resulting optimal impulse control as a starting point, we devise a self-learning algorithm that estimates the continuation and intervention region of such a problem. Its key features are that it explores the state space of the underlying problem by itself and successively learns the behavior of the optimally controlled state process. For illustration, we apply our algorithm to a classical example problem, and we give an outlook on open questions to be addressed in future research.
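For intuition, the continuation/intervention structure described here can be illustrated with plain value iteration on a toy discrete-time impulse control problem; the model below (quadratic holding cost, fixed impulse cost K that resets the state to 0) is invented for illustration and is unrelated to the thesis's self-learning algorithm:

```python
import numpy as np

# Toy impulse control: each period either wait (state drifts randomly,
# paying a holding cost x^2) or pay a fixed cost K to reset the state to 0.
# The Bellman/QVI recursion is V(x) = max(wait-value, impulse-value).
grid = np.linspace(-4.0, 4.0, 161)       # state grid (includes 0)
shocks = np.array([-1.0, 0.0, 1.0])      # discretized noise
probs = np.array([0.25, 0.5, 0.25])
beta, K = 0.9, 0.5                       # discount factor, impulse cost

V = np.zeros_like(grid)
for _ in range(500):                     # value iteration to convergence
    # expected continuation values via linear interpolation on the grid
    EV = sum(p * np.interp(grid + e, grid, V) for e, p in zip(shocks, probs))
    EV0 = sum(p * np.interp(0.0 + e, grid, V) for e, p in zip(shocks, probs))
    wait = -grid**2 + beta * EV          # do nothing this period
    jump = -K + beta * EV0               # impulse: pay K, restart at 0
    V = np.maximum(wait, jump)

intervene = jump > wait                  # intervention region: impulse optimal
```

After convergence, the intervention region consists of the states far from 0, while the continuation region is an interval around 0, mirroring the continuation/intervention split that the chapter's algorithm learns for the general problem.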
The formerly communist countries in Central and Eastern Europe (transitional economies in Europe and the former Soviet Union, for example East Germany, the Czech Republic, Hungary, Lithuania, Poland, and Russia) and the transitional economies in Asia (for example, China and Vietnam) had centrally planned economies, which did not allow entrepreneurial activities. Despite the political and socioeconomic transformations in transitional economies around 1989, these countries retain an institutional heritage that affects individuals' values and attitudes, which, in turn, influence intentions, behaviors, and actions, including entrepreneurship.
While prior studies on the long-lasting effects of socialist legacy on entrepreneurship have focused on limited geographical regions (e.g., East-West Germany and East-West Europe), this dissertation focuses on the Vietnamese context, which offers a unique quasi-experimental setting. In 1954, Vietnam was divided into the socialist North and the non-socialist South, and it was then reunified under socialist rule in 1975. The duration of differential socialist treatment in North-South Vietnam (about 21 years) is thus much shorter than in East-West Germany (about 40 years) and East-West Europe (about 70 years when considering former Soviet Union countries).
To assess the relationship between socialist history and entrepreneurship in this unique setting, we survey more than 3,000 Vietnamese individuals. This thesis finds that individuals from North Vietnam have lower entrepreneurship intentions, are less likely to enroll in entrepreneurship education programs, and are less likely to take over an existing business than those from the South of Vietnam. The long-lasting effect of formerly socialist institutions on entrepreneurship thus appears to run even deeper than previously documented in the prominent cases of East-West Germany and East-West Europe.
In its second empirical investigation, this dissertation focuses on how succession intentions differ from other intentions (e.g., founding and employee intentions) with regard to career choice motivation, and on the effect of the three main elements of the theory of planned behavior (entrepreneurial attitude, subjective norms, and perceived behavioral control) in the context of a transition economy, Vietnam. The findings suggest that intentional founders are characterized by innovation motives, intentional successors by role motives, and intentional employees by a social mission. Additionally, this thesis reveals that entrepreneurial attitude and perceived behavioral control are positively associated with founding intentions, whereas there is no difference in this effect between succession and employee intentions.
Alongside steadily growing societal challenges, social enterprises have gained considerably in importance over the past decade. Social enterprises pursue the goal of solving societal problems by entrepreneurial means. Since the focus of social enterprises is not primarily on maximizing their own profits, they often have difficulty obtaining suitable corporate financing and realizing their growth potential.
To gain a deeper understanding of the phenomenon of social enterprises, the first part of this dissertation examines, in two experiment-based studies, the decision-making behavior of investors in social enterprises. Chapter 2 considers the decision-making behavior of impact investors. The investment approach pursued by these investors, "impact investing", goes beyond a pure orientation toward returns. Based on an experiment with 179 impact investors who made a total of 4,296 investment decisions, a conjoint study identifies their most important decision criteria when selecting social enterprises. Chapter 3 analyzes social incubators as a further specific group of supporters of social enterprises. Building on the experiment, this chapter illustrates the incubators' motives and decision criteria when selecting social enterprises, as well as the forms of non-financial support they offer. Among other things, the results show that the motives of social incubators in supporting social enterprises are societal, financial, or reputational in nature.
Based on two quantitative empirical studies, the second part discusses the extent to which trademark registrations are suitable for measuring social innovation and are related to the financial and social growth of social start-ups. Chapter 4 discusses the extent to which trademark registrations can serve to measure social innovation. Based on a text analysis of the websites of 925 social enterprises (more than 35,000 subpages), four dimensions of social innovation (innovation, impact, financial, and scalability dimensions) are identified in a first step. Building on this, the chapter examines how various trademark characteristics relate to the dimensions of social innovation. The results show that, in particular, the number of registered trademarks serves as an indicator of social innovation (across all dimensions). Furthermore, the geographical scope of the registered trademarks plays an important role. Building on the results of Chapter 4, Chapter 5 examines the influence of early-stage trademark registrations on the further development of the hybrid outcomes of social start-ups. In detail, Chapter 5 argues that both the registration of trademarks as such and their various characteristics relate differently to the social and economic outcomes of social start-ups. Using a data set of 485 social enterprises, the analyses in Chapter 5 show that social start-ups with a registered trademark exhibit comparatively higher employee growth and make a greater societal contribution.
The results of this dissertation extend research in the field of social entrepreneurship and offer numerous implications for practice. While Chapters 2 and 3 enhance the understanding of the characteristics of non-financial and financial support organizations for social enterprises, Chapters 4 and 5 create a greater understanding of the importance of trademark registrations for social enterprises.
Coastal erosion describes the displacement of land caused by destructive sea waves, currents or tides. Due to global climate change and associated phenomena such as melting polar ice caps and changing ocean current patterns, which result in rising sea levels or increased current velocities, the need for countermeasures is continuously increasing. Today, major efforts are being made to mitigate these effects using groins, breakwaters and various other structures.
This thesis develops a novel approach to this problem by applying shape optimization to the obstacles. For this reason, the results of this thesis always comprise the following three distinct aspects:
The selected wave propagation model, i.e. the modeling of wave propagation towards the coastline, using various wave formulations ranging from steady to unsteady descriptions, described from the Lagrangian or Eulerian viewpoint with all its specialties. More precisely, in the Eulerian setting, first a steady Helmholtz equation in the form of a scattering problem is investigated, followed by the shallow water equations in classical form, equipped with porosity, sediment transportability and further subtleties. Secondly, in a Lagrangian framework, the Lagrangian shallow water equations form the center of interest.
The chosen discretization, i.e. depending on the nature and peculiarities of the constraining partial differential equation, we choose between finite elements in conjunction with continuous Galerkin and discontinuous Galerkin methods for investigations in the Eulerian description. In addition, the Lagrangian viewpoint lends itself to mesh-free, particle-based discretizations, where smoothed particle hydrodynamics is used.
The method for shape optimization w.r.t. the obstacle's shape over an appropriate cost function, constrained by the solution of the selected wave propagation model. In this sense, we rely on a differentiate-then-discretize approach for free-form shape optimization in the Eulerian setup, and reverse the order in Lagrangian computations.
Semantic-Aware Coordinated Multiple Views for the Interactive Analysis of Neural Activity Data
(2024)
Visualizing brain simulation data is in many respects a challenging task. First, the data used in brain simulations and the resulting datasets are heterogeneous, and insight is derived by relating all their different kinds. Second, the analysis process changes rapidly while hypotheses about the results are being created. Third, the scale of data entities in these heterogeneous datasets is manifold, reaching from single neurons to brain areas interconnecting millions of them. Fourth, the heterogeneous data consists of a variety of modalities, e.g. from time series data to connectivity data; from single parameters to sets of parameters spanning parameter spaces with multiple possible and biologically meaningful solutions; from geometrical data to hierarchies and textual descriptions, all on mostly different scales. Fifth, visualization includes finding suitable representations and providing real-time interaction while supporting varying analysis workflows. To this end, this thesis presents a scalable and flexible software architecture for visualizing, integrating and interacting with brain simulation data. The scalability and flexibility are achieved by interconnected services forming a series of Coordinated Multiple View (CMV) systems. Multiple use cases are presented, introducing views that leverage this architecture and extend its ecosystem, resulting in a Problem Solving Environment (PSE) from which custom-tailored CMV systems can be built. The construction of such CMV systems is assisted by semantic reasoning, hence the term semantic-aware CMVs.
The present work provides a critique of the performativity-of-economics debate, which is alleged to suffer from theoretical problems. This concerns, in particular, deficits with respect to an action-theoretical exploration and explanation of its subject matter.
To overcome this problem, a combination with the mechanism approach of analytical sociology is proposed, which, first, offers an explicitly action-theoretical access; second, allows social dynamics and processes to be deciphered by identifying the underlying social mechanisms; and third, can translate various manifestations of the phenomenon under investigation (the performativity of economic theories) into middle-range theories. The connection is established through the mechanism of the self-fulfilling theory as a specific form of the self-fulfilling prophecy, which in the further course of the argument is used as an explanatory instrument of the mechanism approach and is critically reflected upon in the process.
The action-based explanation of a specific type of performativity of economic theories is finally demonstrated empirically using a case study: the rise and diffusion of the shareholder value approach and the underlying agency theory. It can be shown that mechanism-based explanations can contribute to the general theoretical upgrading of the debate in question. The mechanism of the self-fulfilling theory in particular offers various advantages and disadvantages for explaining the phenomenon under investigation, but as a theoretical bridge it can also make a fruitful contribution, not least by allowing a differentiated view of the relationship between strong forms of performativity and self-fulfilling prophecies.
Data used for the purpose of machine learning are often erroneous. In this thesis, p-quasinorms (p<1) are employed as loss functions in order to increase the robustness of training algorithms for artificial neural networks. Numerical issues arising from these loss functions are addressed via enhanced optimization algorithms (proximal point methods; Frank-Wolfe methods) based on the (non-monotonic) Armijo-rule. Numerical experiments comprising 1100 test problems confirm the effectiveness of the approach. Depending on the parametrization, an average reduction of the absolute residuals of up to 64.6% is achieved (aggregated over 100 test problems).
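As a minimal illustration of why a p-quasinorm loss with p < 1 is more robust than the squared L2 loss (hypothetical residual values; the thesis' actual training algorithms additionally involve proximal point and Frank-Wolfe methods):

```python
import numpy as np

def p_quasinorm_loss(residuals, p=0.5):
    """Sum of |r_i|^p with p < 1: grows sublinearly in |r_i|,
    so a single large outlier residual cannot dominate the loss."""
    return float(np.sum(np.abs(residuals) ** p))

# Hypothetical residuals of a network fit; the last entry is an outlier.
r = np.array([0.1, -0.2, 0.15, 10.0])

l2_loss = float(np.sum(r ** 2))        # outlier contributes 100 of ~100.07
lp_loss = p_quasinorm_loss(r, p=0.5)   # outlier contributes ~3.16 of ~4.31
```

Under the L2 loss, training effort concentrates on fitting the outlier; under the p-quasinorm, the small residuals retain comparable influence, which is the source of the robustness. The non-differentiability at zero is what necessitates the enhanced optimization algorithms mentioned above.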
This thesis sheds light on the heterogeneous hedging behavior of airlines. The focus lies on financial hedging, operational hedging and selective hedging. The unbalanced panel data set includes 74 airlines from 39 countries. The period of analysis is 2005 to 2014, resulting in 621 firm years. The random effects probit and fixed effects OLS models provide strong evidence of a convex relation between derivative usage and a firm's leverage, contradicting existing financial distress theory. Airlines with lower leverage had higher hedge ratios. In addition, the results show that airlines with interest rate and currency derivatives were more likely to engage in fuel price hedging. Moreover, the study results support the argument that operational hedging is a complement to financial hedging. Airlines with more heterogeneous fleet structures exhibited higher hedge ratios.
Also, airlines which were members of a strategic alliance were more likely to be hedging airlines. As alliance airlines are rather financially sound airlines, the positive relation between alliance membership and hedging reflects the negative results on the leverage
ratio. Lastly, the study presents determinants of an airline's selective hedging behavior. Airlines with prior-period derivative losses, recognized in income, changed their hedge portfolios more frequently. Moreover, the sample airlines acted in accordance with herd behavior theory: changes in the regional hedge portfolios influenced the hedge portfolio of the individual airline in the same direction.
Even though considerable research on Cauchy transforms has been done, there are still many open questions. For example, in the case of representation theorems, i.e. the question of when a function can be represented as a Cauchy transform, there is 'still no completely satisfactory answer' ([9], p. 84). There are characterizations for measures on the circle, as presented in the monograph [7], and for general compactly supported measures on the complex plane, as presented in [27]. However, there seems to be no systematic treatment of the Cauchy transform as an operator on $L_p$ spaces and weighted $L_p$ spaces on the real axis.
This is where this thesis comes in: we are interested in developing several characterizations of the representability of a function by Cauchy transforms of $L_p$ functions. Moreover, we attack the issue of integrability of Cauchy transforms of functions and measures, a topic which is only partly explored (see [43]). We develop different approaches involving Fourier transforms and potential theory and investigate sufficient conditions and characterizations.
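For orientation, recall the classical definition (standard notation, not a formula specific to this thesis): for a finite complex measure $\mu$ on the real axis, or a function $f \in L_p(\R)$ via $d\mu = f\,dt,$ the Cauchy transform is

\[
  (\mathcal{C}\mu)(z) \;=\; \int_{\mathbb{R}} \frac{d\mu(t)}{t-z},
  \qquad z \in \mathbb{C} \setminus \operatorname{supp}\mu .
\]

Off the support of $\mu$ this defines a holomorphic function, and the central questions below concern its boundary behavior, its image on $L_p$ classes, and its integrability.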
For our purposes, we shall need some notation and the concept of Hardy spaces, which will be part of the preliminary Chapter 1. Moreover, we introduce Fourier transforms and their complex analogue, namely Fourier-Laplace transforms. These will be of extraordinary use due to the close connection between Cauchy and Fourier(-Laplace) transforms.
In the second chapter we begin our research with a discussion of the Cauchy transformation on the classical (unweighted) $L_p$ spaces. We start with the boundary behavior of Cauchy transforms, including an adapted version of the Sokhotski-Plemelj formula. This result will turn out to be helpful for determining the image of $L_p(\R)$ under the Cauchy transformation for $p\in(1,\infty).$ The cases $p=1$ and $p=\infty$ play special roles here, which justifies treating them in separate sections. For $p=1$ we involve the real Hardy space $H_{1}(\R),$ whereas the case $p=\infty$ is attacked by an approach incorporating intersections of Hardy spaces and certain subspaces of $L_{\infty}(\R).$
The third chapter prepares us for the study of the Cauchy transformation on subspaces of $L_{p}(\R).$ We give a short overview of the basic facts about Cauchy transforms of measures and then proceed to Cauchy transforms of functions with support in a closed set $X\subset\R.$ Our goal is to build up the main theory on which we can fall back in the subsequent chapters.
The fourth chapter deals with Cauchy transforms of functions and measures supported on an unbounded interval which is not the entire real axis. For convenience we restrict ourselves to the interval $[0,\infty).$ Bringing the Fourier-Laplace transform into play once again, we deduce complex characterizations of the Cauchy transforms of functions in $L_{2}(0,\infty).$ Moreover, we analyze the behavior of Cauchy transforms on several half-planes and use these results for a fairly general geometric characterization. In the second section of this chapter, we focus on Cauchy transforms of measures with support in $[0,\infty).$ In this context, we derive a reconstruction formula for these Cauchy transforms, which holds under fairly general conditions, as well as results on the behavior on the left half-plane. We close this chapter with rather technical real-type conditions and characterizations for Cauchy transforms of functions in $L_p(0,\infty),$ based on an approach in [82].
The most common case of Cauchy transforms, those of compactly supported functions or measures, is the subject of Chapter 5. After complex and geometric characterizations originating from ideas similar to those of the fourth chapter, we adapt a functional-analytic approach from [27] to special measures, namely those with densities with respect to a given complex measure $\mu.$ The chapter closes with a study of the Cauchy transformation on weighted $L_p$ spaces. Here, we choose an ansatz via the finite Hilbert transform on $(-1,1).$
The sixth chapter is devoted to the issue of integrability of Cauchy transforms. Since this topic has not yet received a comprehensive treatment in the literature, we start with an introduction to weighted Bergman spaces and general results on the interaction of the Cauchy transformation with these spaces. Afterwards, we combine the theory of Zen spaces with Cauchy transforms by using once again their connection with Fourier transforms. Here, we encounter general Paley-Wiener theorems of the recent past. Lastly, we attack the issue of integrability of Cauchy transforms by means of potential theory. To this end, we derive a Fourier integral formula for the logarithmic energy in one and multiple dimensions and give applications to Fourier and hence Cauchy transforms.
Two appendices are annexed to this thesis. The first one covers important definitions and results from measure theory with a special focus on complex measures. The second appendix contains Cauchy transforms of frequently used measures and functions with detailed calculations.
The present dissertation is entitled Regularization Methods for Statistical Modelling in Small Area Estimation. It studies the use of regularized regression techniques for estimating aggregate-specific indicators at high geographical or contextual resolution based on small samples. The latter is commonly discussed in the literature under the term small area estimation. The core of the thesis is the analysis of the effects of regularized parameter estimation in regression models commonly used for small area estimation. The analysis is primarily theoretical, in that the statistical properties of these estimation procedures are mathematically characterized and proven. In addition, the results are illustrated by numerical simulations and critically contextualized against the background of empirical applications. The dissertation is divided into three parts. Each part addresses an individual methodological problem in the context of small area estimation that can be solved by using regularized estimation procedures. In the following, each problem is briefly introduced and the benefit of regularization is explained.
The first problem is small area estimation in the presence of unobserved measurement errors. In regression models, endogenous variables are typically described on the basis of statistically related exogenous variables. For such a description, a functional relationship between the variables is postulated, characterized by a set of model parameters. This set must be estimated from observed realizations of the respective variables. If the observations are distorted by measurement errors, however, the estimation process yields biased results. If small area estimation is subsequently performed, the estimated indicators are not reliable. Methodological adjustments for this exist in the literature, but they usually require restrictive assumptions about the measurement error distribution. In this dissertation, it is proven that regularization in this context is equivalent to estimation that is robust against measurement errors, regardless of the measurement error distribution. This equivalence is then used to derive robust variants of well-known small area models. For each model, an algorithm for robust parameter estimation is constructed. In addition, a new approach is developed that quantifies the uncertainty of small area estimates in the presence of unobserved measurement errors. It is further shown that this form of robust estimation has the desirable property of statistical consistency.
The second problem is small area estimation based on data sets containing auxiliary variables at different resolutions. Regression models for small area estimation are usually specified either for person-level observations (unit level) or for aggregate-level observations (area level). Against the background of steadily growing data availability, however, there are more and more situations in which data are available at both levels. This holds great potential for small area estimation, as new multi-level models with high explanatory power can thus be constructed. However, combining the levels is methodologically complicated. Central steps of statistical inference, such as variable selection and parameter estimation, must be carried out at both levels simultaneously. Hardly any generally applicable methods exist for this in the literature. The dissertation shows that the use of level-specific regularization terms in the modeling solves these problems. A new stochastic gradient descent algorithm for parameter estimation is developed that efficiently uses the information from all levels under adaptive regularization. In addition, parametric procedures for assessing the uncertainty of estimates produced by this method are presented. Building on this, it is proven that the developed approach is consistent in both estimation and variable selection, given an adequate regularization term.
The third problem is small area estimation of proportions under strong distributional dependencies within the covariates. Such dependencies exist when one exogenous variable can be represented as a linear transformation of another exogenous variable (multicollinearity). In the literature, the term also covers situations in which several covariates are strongly correlated (quasi-multicollinearity). If a regression model is specified on such a data basis, the individual contributions of the exogenous variables to the functional description of the endogenous variable cannot be identified. Parameter estimation is therefore subject to great uncertainty, and the resulting small area estimates are inaccurate. The effect is particularly strong when the quantity to be modeled is non-linear, such as a proportion. This is because the underlying likelihood function no longer has a closed form and must be approximated. This dissertation shows that the use of L2 regularization significantly stabilizes the estimation process in this context. Using two non-linear small area models as examples, a new algorithm is developed that extends and improves the well-known quasi-likelihood approach (based on the Laplace approximation) through regularization. In addition, parametric procedures for uncertainty measurement for estimates obtained in this way are described.
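The stabilizing effect of L2 regularization under (quasi-)multicollinearity can be sketched in a few lines, here for a plain linear model with synthetic data (the thesis treats the harder non-linear, quasi-likelihood case; all numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)        # nearly collinear covariate
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=n)

# Ordinary least squares: X'X is nearly singular, so the individual
# coefficients are poorly identified and can be wildly inflated.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# L2 (ridge) regularization: X'X + lam*I is well conditioned.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
```

The regularization shrinks the poorly identified difference between the two coefficients toward zero while leaving their well-identified sum almost untouched, which is exactly the stabilization exploited above.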
Against the background of the theoretical and numerical results, the dissertation demonstrates that regularization methods are a valuable addition to the small area estimation literature. The procedures developed here are robust and versatile, which makes them helpful tools for empirical data analysis.
The Eurosystem's Household Finance and Consumption Survey (HFCS) collects micro data on private households' balance sheets, income and consumption. It is a stylised fact that wealth is unequally distributed and that the wealthiest own a large share of total wealth. For sample surveys which aim at measuring wealth and its distribution, this is a considerable problem. To overcome it, some of the country surveys under the HFCS umbrella try to sample a disproportionately large share of households that are likely to be wealthy, a technique referred to as oversampling. Ignoring such types of complex survey designs in the estimation of regression models can lead to severe problems. This thesis first illustrates such problems using data from the first wave of the HFCS and canonical regression models from the field of household finance and gives a first guideline for HFCS data users regarding the use of replicate weight sets for variance estimation using a variant of the bootstrap. A further investigation of the issue necessitates a design-based Monte Carlo simulation study. To this end, the already existing large close-to-reality synthetic simulation population AMELIA is extended with synthetic wealth data. We discuss different approaches to the generation of synthetic micro data in the context of the extension of a synthetic simulation population that was originally based on a different data source. We propose an additional approach that is suitable for the generation of highly skewed synthetic micro data in such a setting using a multiply-imputed survey data set. After a description of the survey designs employed in the first wave of the HFCS, we then construct new survey designs for AMELIA that share core features of the HFCS survey designs. 
A design-based Monte Carlo simulation study shows that while more conservative approaches to oversampling do not pose problems for the estimation of regression models if sampling weights are properly accounted for, the same does not necessarily hold for more extreme oversampling approaches. This issue should be further analysed in future research.
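The core problem can be illustrated with a small synthetic simulation (all numbers are hypothetical; the AMELIA-based study in the thesis is far more elaborate): when inclusion probabilities depend on the outcome, an unweighted regression on the sample is biased, while weighting by the inverse inclusion probabilities corrects this.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
x = rng.normal(size=N)
y = 2.0 + 1.0 * x + rng.normal(size=N)     # population model

# Informative design: "wealthier" units (large y) are oversampled.
pi = 0.01 + 0.09 / (1.0 + np.exp(-(y - 2.0)))
sampled = rng.random(N) < pi
xs, ys, w = x[sampled], y[sampled], 1.0 / pi[sampled]

def weighted_ols(x, y, w):
    """Weighted least squares of y on (1, x)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

beta_naive = weighted_ols(xs, ys, np.ones_like(ys))  # ignores the design
beta_design = weighted_ols(xs, ys, w)                # uses sampling weights
```

The naive fit overstates the intercept because high-outcome units are overrepresented; the design-weighted fit recovers the population coefficients, mirroring the conclusion above that sampling weights must be properly accounted for.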
This thesis deals with REITs, their capital structure and the effects that regulatory requirements might have on leverage. The data used result from a combination of Thomson Reuters data with hand-collected data regarding REIT status, regulatory information and law variables. Overall, leverage is analysed across 20 countries in the years 2007 to 2018. Country-specific data, manually extracted from yearly EPRA reportings, are merged with company data in order to analyse the influence of different REIT restrictions on a firm's leverage.
Observing statistically significant differences in means between NON-REITs and REITs motivates further investigation. My results show that variables beyond traditional capital structure determinants impact the leverage of REITs. I find that explicit restrictions on leverage and on the distribution of profits have a significant effect on leverage decisions. This supports the notion that the restrictions from EPRA reportings are mandatory. I test various combinations of regulatory variables that show significant effects on leverage both in isolation and in combination.
My main result is the following: firms that operate under regulation specifying a maximum leverage ratio, in addition to mandatory high dividend distributions, have on average lower leverage ratios. Furthermore, the existence of sanctions has a negative effect on REITs' leverage ratios, indicating that regulation is binding. The analysis clearly shows that traditional capital structure determinants are of second-order relevance. This relationship highlights the impact of regulation on leverage and financing decisions. These effects are supported by further analysis. Results based on an event study show that REITs have statistically lower leverage ratios compared to NON-REITs. Based on a structural break model, the following effect becomes apparent: REITs increase their leverage ratios in the years prior to obtaining REIT status. As a consequence, the ex ante time frame is characterised by a bunkering and adaptation process, followed by the transformation at the event. Using an event study and a structural break model, the analysis highlights the dominance of country-specific regulation.
In common shape optimization routines, deformations of the computational mesh usually suffer from a decrease in mesh quality or even destruction of the mesh. To mitigate this, we propose a theoretical framework using so-called pre-shape spaces. This gives an opportunity for a unified theory of shape optimization and of problems related to parameterization and mesh quality. With this, we stay in the free-form approach of shape optimization, in contrast to parameterized approaches that limit possible shapes. The concept of pre-shape derivatives is defined, and corresponding structure and calculus theorems are derived, which generalize classical shape optimization and its calculus. Tangential and normal directions are featured in pre-shape derivatives, in contrast to classical shape derivatives, which feature only normal directions on shapes. Techniques from classical shape optimization and calculus are shown to carry over to this framework, and are collected in generality for future reference.
A pre-shape parameterization tracking problem class for mesh quality is introduced, which is solvable by use of pre-shape derivatives. This class allows for non-uniform, user-prescribed adaptations of the shape and hold-all domain meshes. It acts as a regularizer for classical shape objectives. Existence of regularized solutions is guaranteed, and corresponding optimal pre-shapes are shown to correspond to optimal shapes of the original problem, which additionally achieve the user-prescribed parameterization.
We present shape gradient system modifications which allow simultaneous numerical shape optimization with mesh quality improvement. Further, consistency of modified pre-shape gradient systems is established. The computational burden of our approach is limited, since an additional solution of possibly larger (non-)linear systems for regularized shape gradients is not necessary. We implement and compare these pre-shape gradient regularization approaches for a 2D problem which is prone to mesh degeneration. As our approach does not depend on the choice of forms to represent shape gradients, we employ and compare weak linear elasticity and weak quasilinear p-Laplacian pre-shape gradient representations.
We also introduce a Quasi-Newton-ADM inspired algorithm for mesh quality, which guarantees sufficient adaptation of meshes to user specifications during the routines. It is applicable in addition to simultaneous mesh regularization techniques.
Unrelated to mesh regularization techniques, we consider shape optimization
problems constrained by elliptic variational inequalities of the first kind, so-called
obstacle-type problems. In general, standard necessary optimality conditions cannot
be formulated in a straightforward manner for such semi-smooth shape optimization
problems. Under appropriate assumptions, we prove existence and convergence of
adjoints for smooth regularizations of the VI-constraint. Moreover, we derive shape
derivatives for the regularized problem and prove convergence to a limit object.
Based on this analysis, an efficient optimization algorithm is devised and tested
numerically.
All previous pre-shape regularization techniques are applied to a variational
inequality constrained shape optimization problem, where we also create customized
targets for increased mesh adaptation of changing embedded shapes and active set
boundaries of the constraining variational inequality.
Sample surveys are a widely used and cost-effective tool to gain information about a population under consideration. Nowadays, there is an increasing demand for information not only at the population level but also at the level of subpopulations. For some of these subpopulations of interest, however, very small subsample sizes may occur, such that the application of traditional estimation methods is not expedient. In order to provide reliable information also for those so-called small areas, small area estimation (SAE) methods combine auxiliary information and the sample data via a statistical model.
The present thesis deals, among other aspects, with the development of highly flexible and close-to-reality small area models. For this purpose, the penalized spline method is suitably modified, which allows the model parameters to be determined by solving an unconstrained optimization problem. Due to this optimization framework, shape constraints are incorporated into the modeling process as additional linear inequality constraints on the optimization problem. This results in small area estimators that allow both for the penalized spline method as a highly flexible modeling technique and for the incorporation of arbitrary shape constraints on the underlying P-spline function.
In order to incorporate multiple covariates, a tensor product approach is employed to extend the penalized spline method to multiple input variables. This leads to high-dimensional optimization problems for which naive solution algorithms entail unjustifiable complexity in terms of both runtime and memory requirements. By exploiting the underlying tensor structure, the present thesis provides computationally efficient solution algorithms for the considered optimization problems and the related memory-efficient, i.e. matrix-free, implementations. The crucial point thereby is the (repeated) application of a matrix-free conjugate gradient method, whose runtime is drastically reduced by a matrix-free multigrid preconditioner.
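As a much-simplified sketch of the matrix-free idea (the thesis combines it with tensor-product structure and a multigrid preconditioner, both omitted here), a conjugate gradient solver only ever needs a routine applying the operator, never the assembled matrix:

```python
import numpy as np

def cg_matrix_free(apply_A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient requiring only the action v -> A v of a
    symmetric positive definite operator A, never A itself."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example operator: 1D discrete Laplacian with Dirichlet ends,
# i.e. tridiag(-1, 2, -1), applied without ever forming the matrix.
def apply_laplacian(v):
    out = 2 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(50)
x = cg_matrix_free(apply_laplacian, b)
```

Storage stays O(n) per vector regardless of the (here tridiagonal, in the thesis tensor-structured) operator, which is precisely what makes the high-dimensional P-spline systems tractable.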