The aim of dynamic microsimulations is to simulate the development of systems through the behaviour of their individual components in order to enable comprehensive scenario-based analyses. In economics and the social sciences, the focus is usually placed on populations consisting of persons and households. Since political and economic decisions are mostly taken at the local level, small-area information is additionally needed to derive targeted policy recommendations. This, in turn, confronts researchers with major challenges in building regionalised simulation models. The process ranges from generating suitable base data sets, through capturing and implementing the dynamic components, to evaluating the results and quantifying uncertainties. This thesis describes and systematically analyses selected components that are of particular relevance for regionalised microsimulations.
Chapter 2 first presents theoretical and methodological aspects of microsimulation to give a comprehensive overview of the different types of dynamic modelling and the ways they can be implemented. The focus lies on the fundamentals of capturing and simulating states and state transitions as well as the associated structural aspects of the simulation process.
Both for simulating state transitions and for extending the data base, logistic regression models are primarily used to capture and subsequently predict population structures at the micro level on a probability basis. The estimation relies chiefly on survey data which, in addition to a limited sample size, typically allow no or only insufficient regional differentiation. As a consequence, the predicted probabilities may deviate substantially from known totals. To harmonise them with these totals, methods for adjusting probabilities, so-called alignment methods, can be applied. Although the literature describes various approaches, little is known about their effect on model quality. To assess different techniques, Chapter 3 implements them in extensive simulation studies under various scenarios. It is shown that incorporating additional information into the modelling process yields clear improvements both in parameter estimation and in the prediction of probabilities. Moreover, small-area probabilities can be generated even when the modelling data lack regional identifiers. In particular, maximising the likelihood of the underlying regression model subject to the constraint that the known totals are met performs very well in all simulation studies.
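To make the alignment idea concrete, the following minimal sketch (synthetic data, a hypothetical benchmark total, and scipy's SLSQP solver rather than the thesis's implementation) maximises a logistic log-likelihood subject to the constraint that the predicted probabilities sum to a known total:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([-0.5, 1.0, -0.8])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

known_total = 230.0  # hypothetical benchmark for the number of "successes"

def neg_loglik(beta):
    eta = X @ beta
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

def total_constraint(beta):
    # predicted probabilities must add up to the known benchmark total
    p_hat = 1 / (1 + np.exp(-(X @ beta)))
    return p_hat.sum() - known_total

res = minimize(neg_loglik, x0=np.zeros(p), method="SLSQP",
               constraints=[{"type": "eq", "fun": total_constraint}])
p_aligned = 1 / (1 + np.exp(-(X @ res.x)))
print(res.x, p_aligned.sum())  # aligned probabilities now sum to the benchmark
```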
Regional mobility turns out to be one of the most influential components in regionalised microsimulations. At the same time, migration receives little or no attention in many microsimulation models. Because of its direct impact on the entire population structure, ignoring it leads to severe biases even over a short simulation horizon. While integrating migration across national borders is sufficient for global models, regionalised models must also reproduce internal migration as comprehensively as possible. To this end, Chapter 4 develops concepts for migration modules that allow, on the one hand, independent simulation on regional subpopulations and, on the other hand, a comprehensive reproduction of migration flows within the entire population. To take household structures into account and to ensure the plausibility of the data, an algorithm for calibrating household probabilities is proposed that enforces benchmarks at the individual level. A retrospective evaluation of the simulated migration flows demonstrates the functionality of the migration concepts. In addition, by projecting the population into future periods, divergent developments of population counts under different migration concepts are analysed.
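As a rough illustration of how household-level weights can be adjusted to hit individual-level benchmarks, the sketch below uses generic linear (GREG-type) calibration on synthetic households; the variable names and benchmark totals are hypothetical, and the thesis's actual algorithm for household probabilities may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
H = 200                                  # households
persons = rng.integers(1, 6, H)          # persons per household
# hypothetical person-level counts per household (total persons, children, 65+)
A = np.column_stack([persons,
                     rng.binomial(persons, 0.3),
                     rng.binomial(persons, 0.2)])
d = np.ones(H)                           # starting household weights
t = np.array([620.0, 190.0, 120.0])      # assumed individual-level benchmarks

# linear calibration: smallest change to d subject to A' w = t
lam = np.linalg.solve(A.T @ A, t - A.T @ d)
w = d + A @ lam
print(A.T @ w)   # weighted person counts now match the benchmarks
```

Note that linear calibration can produce negative weights in unfavourable settings; the thesis's probability-calibration algorithm addresses the problem differently.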
Capturing uncertainty is a particular challenge in dynamic microsimulations. Owing to the complexity of the overall structure and the heterogeneity of the components, classical methods for measuring uncertainty are often no longer applicable. To quantify the various sources of influence, Chapter 5 proposes variance-based sensitivity analyses, whose great flexibility also allows direct comparisons between very different components. Sensitivity analyses prove highly suitable not only for capturing uncertainty but also for the direct analysis of different scenarios, in particular for evaluating joint effects. Simulation studies illustrate their application in the concrete context of dynamic models. They show, on the one hand, that large differences arise across target variables and simulation periods and, on the other hand, that the degree of regional differentiation must always be taken into account.
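A minimal example of a variance-based sensitivity analysis: first-order Sobol indices for a toy model, estimated with the standard pick-freeze (Saltelli-type) Monte Carlo estimator. This only illustrates the general technique, not the dynamic microsimulation setting of Chapter 5:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    # toy stand-in for a simulation output, e.g. a regional population count
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

N, k = 100_000, 3
A = rng.uniform(size=(N, k))
B = rng.uniform(size=(N, k))
fA, fB = model(A), model(B)
var_y = fA.var()

# first-order index S_i = Var(E[Y|X_i]) / Var(Y), pick-freeze estimator
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # freeze all inputs except X_i
    S_i = np.mean(fB * (model(ABi) - fA)) / var_y
    print(f"S_{i + 1} ~ {S_i:.3f}")
```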
Chapter 6 summarises the findings of this thesis and outlines directions for future research.
Non-probability sampling is a topic of growing relevance, especially due to its occurrence in the context of newly emerging data sources such as web surveys and Big Data.
This thesis addresses statistical challenges arising from non-probability samples, where unknown or uncontrolled sampling mechanisms raise concerns in terms of data quality and representativity.
Various methods to quantify and reduce the potential selectivity and biases of non-probability samples in estimation and inference are discussed. The thesis introduces new forms of prediction and weighting methods, namely
a) semi-parametric artificial neural networks (ANNs) that integrate B-spline layers with optimal knot positioning in the general structure and fitting procedure of artificial neural networks, and
b) calibrated semi-parametric ANNs that determine weights for non-probability samples by integrating an ANN as response model with calibration constraints for totals, covariances and correlations.
Custom-made computational implementations are developed for fitting (calibrated) semi-parametric ANNs by means of stochastic gradient descent, BFGS and sequential quadratic programming algorithms.
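The following sketch illustrates the general idea of combining a spline expansion with a feed-forward network, using scikit-learn's SplineTransformer with fixed, equidistant knots instead of the optimal knot positioning and custom fitting routines described above; it is an illustration of the concept on synthetic data, not the thesis's implementation:

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(4)
n = 2000
X = rng.uniform(-2, 2, size=(n, 3))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.2, n)

# B-spline expansion of the first covariate (fixed knots here), fed together
# with the remaining covariates into a small feed-forward network
model = make_pipeline(
    ColumnTransformer([("bspline", SplineTransformer(n_knots=8, degree=3), [0])],
                      remainder="passthrough"),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```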
The performance of all the discussed methods is evaluated and compared across a range of non-probability sampling scenarios in a Monte Carlo simulation study as well as in an application to a real non-probability sample, the WageIndicator web survey.
Potentials and limitations of the different methods for dealing with the challenges of non-probability sampling under various circumstances are highlighted. It is shown that the best strategy for using non-probability samples heavily depends on the particular selection mechanism, research interest and available auxiliary information.
Nevertheless, the findings show that existing as well as newly proposed methods can be used to ease or even fully counterbalance the issues of non-probability samples and highlight the conditions under which this is possible.
The Belt and Road Initiative (BRI) has had a significant impact on China in political, economic, and cultural terms. This study focuses on the cultural domain, especially on scholarship students from the countries that signed bilateral cooperation agreements with China under the BRI. Using an integrated approach combining the difference-in-differences method and the gravity model, we explore the correlation between the BRI and the increasing number of international scholarship students funded by the Chinese government, as well as the determinants of students' decisions to study in China. The panel data from 2010 to 2018 show that the launch of the BRI has had a positive impact on the number of scholarship students from BRI countries. The number of scholarship recipients from non-BRI countries also increased, but at a much slower rate than that from BRI countries. The sole exception is the United States, where numbers have trended downward for both state-funded and self-funded students.
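A stylised sketch of such a difference-in-differences specification with gravity-type controls, estimated on a purely synthetic panel with hypothetical column names (not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
countries, years = 40, range(2010, 2019)
df = pd.DataFrame([{"country": c, "year": t,
                    "bri": int(c < 20), "post": int(t >= 2014),
                    "log_gdp": rng.normal(10, 1),
                    "log_distance": rng.normal(8, 0.5)}
                   for c in range(countries) for t in years])
df["students"] = np.exp(0.3 * df["bri"] * df["post"] + 0.2 * df["log_gdp"]
                        - 0.1 * df["log_distance"] + rng.normal(0, 0.2, len(df)))

# gravity-style outcome with a difference-in-differences interaction term
fit = smf.ols("np.log(students) ~ bri * post + log_gdp + log_distance + C(year)",
              data=df).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(fit.params["bri:post"])  # the DiD coefficient of interest
```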
The outbreak of the COVID-19 pandemic has also led to many conspiracy theories. While the origin of the pandemic in China led some, including former US president Donald Trump, to dub the pathogen “Chinese virus” and to support anti-Chinese conspiracy narratives, it caused Chinese state officials to openly support anti-US conspiracy theories about the “true” origin of the virus. In this article, we study whether nationalism, or more precisely uncritical patriotism, is related to belief in conspiracy theories among normal people. We hypothesize based on group identity theory and motivated reasoning that for the particular case of conspiracy theories related to the origin of COVID-19, such a relation should be stronger for Chinese than for Germans. To test this hypothesis, we use survey data from Germany and China, including data from the Chinese community in Germany. We also look at relations to other factors, in particular media consumption and xenophobia.
We study planned changes in protective routines after the COVID-19 pandemic: in a survey in Germany among >650 respondents, we find that the majority plans to use face masks in certain situations even after the end of the pandemic. We observe that this willingness is strongly related to the perception that there is something to be learned from East Asians’ handling of pandemics, even when controlling for perceived protection by wearing masks. Given strong empirical evidence that face masks help prevent the spread of respiratory diseases and given the considerable estimated health and economic costs of such diseases even pre-Corona, this would be a very positive side effect of the current crisis.
Retirement, fertility and sexuality are three key life stage events that are embedded in the framework of population economics in this dissertation. Each topic has economic relevance. As retirement entry shifts the labour supply of experienced workers to zero, this issue is particularly relevant for employers, retirees themselves and policymakers in charge of designing the pension system. Giving birth has comprehensive economic relevance for women: parental leave and subsequent part-time work lead to a direct loss of income, while lower levels of employment, work experience, training and career opportunities result in indirect income losses. Sexuality has a decisive influence on the quality of partnerships, subjective well-being and happiness. Well-being and happiness, in turn, are key determinants not only in private life but also in the work domain, for example for job performance. Furthermore, partnership quality determines the duration of a partnership, and partnerships in general enable the pooling of (financial) resources compared with being single. The contribution of this dissertation emerges from the integration of social and psychological concepts into economic analysis as well as the application of economic theory to non-standard economic research topics. The results of the three chapters show that the multidisciplinary approach yields better predictions of human behaviour than the single disciplines on their own. The results of the first chapter show that both interpersonal conflict with superiors and the individual's health status play a significant role in retirement decisions. The chapter further contributes to the existing literature by showing the moderating role of health in retirement decision-making: all employees are more likely to retire when they have conflicts with their superior, but among healthy employees the same conflict raises retirement intentions even more. Good health is thus a necessary, but not a sufficient, condition for continued working. The key findings of the second chapter reveal a significant influence of religion on contraceptive and fertility-related decisions. A large part of the research on religion and fertility originates from US evidence; this chapter contrasts it with evidence from Germany. In addition, the chapter integrates miscarriages and abortions rather than limiting the analysis to births, and it benefits from rich prospective data on the fertility biographies of women. The third chapter provides theoretical insights on how to incorporate psychological variables into an economic framework for analysing sexual well-being. According to this theory, personality may play a dual role by shaping a person's preferences for sex as well as the person's behaviour in a sexual relationship. The results of the econometric analysis reveal detrimental effects of neuroticism on sexual well-being, while conscientiousness seems to create a win-win situation for a couple. Extraversion and openness have ambiguous effects on romantic relationships, enhancing sexual well-being on the one hand but raising commitment problems on the other. Agreeable persons seem to gain sexual satisfaction even if they perform worse in sexual communication.
Cross-sectional surveys make it possible to estimate population parameters at a given point in time. Often, however, the change in population parameters is of primary interest. Evaluating political targets, for instance, requires tracking the change of indicators such as poverty measures over time. To test whether a measured change differs significantly from zero, a variance estimate for changes between cross-sections is needed. Two problems typically arise in this context: first, the relevant statistics are usually non-linear, and second, the cross-sectional surveys are based on samples that were not drawn independently of each other. The aim of this dissertation is to provide a theoretical framework for deriving and estimating the variance of an estimated change of non-linear statistics. To this end, the properties of sampling designs used to coordinate sample selections over time are developed. In particular, selection algorithms for coordinating samples are presented and their properties described. The problem of cross-sectional variance estimation for non-linear estimators under complex sampling designs is also addressed. Finally, a general approach to estimating changes is presented, and variance estimators for the change of cross-sectional estimators based on coordinated cross-sectional samples are examined. Particular attention is paid to the case of a population that changes over time, since this is the rule in applications.
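The following toy simulation illustrates why sample coordination matters for the variance of an estimated change: it repeatedly draws two partially overlapping cross-sectional samples and compares the variance of the change in a non-linear statistic (an at-risk-of-poverty rate) with the naive sum of the two cross-sectional variances. It is a conceptual illustration only, not one of the estimators developed in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(6)
N, n, overlap = 10_000, 500, 250
y1 = rng.lognormal(3.0, 0.8, N)          # wave-1 income
y2 = y1 * rng.lognormal(0.05, 0.3, N)    # wave-2 income, correlated with wave 1

def poverty_rate(y):
    return np.mean(y < 0.6 * np.median(y))   # non-linear statistic

est1, est2 = [], []
for _ in range(2000):
    s1 = rng.choice(N, n, replace=False)
    keep = rng.choice(s1, overlap, replace=False)   # coordinated (retained) part
    fresh = rng.choice(np.setdiff1d(np.arange(N), keep), n - overlap, replace=False)
    s2 = np.concatenate([keep, fresh])
    est1.append(poverty_rate(y1[s1]))
    est2.append(poverty_rate(y2[s2]))

est1, est2 = np.array(est1), np.array(est2)
print("Var(change):", round(np.var(est2 - est1), 6),
      "| sum of variances (ignores coordination):",
      round(np.var(est1) + np.var(est2), 6))
```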
Surveys are commonly tailored to produce estimates of aggregate statistics with a desired level of precision. This may lead to very small sample sizes for subpopulations of interest, defined geographically or by content, which are not incorporated into the survey design. We refer to subpopulations where the sample size is too small to provide direct estimates with adequate precision as small areas or small domains. Despite the small sample sizes, reliable small area estimates are needed for economic and political decision making. Hence, model-based estimation techniques are used which increase the effective sample size by borrowing strength from other areas to provide accurate information for small areas. The paragraph above introduced small area estimation as a field of survey statistics where two conflicting philosophies of statistical inference meet: the design-based and the model-based approach. While the first approach is well suited for the precise estimation of aggregate statistics, the latter approach furnishes reliable small area estimates. In most applications, estimates for both large and small domains based on the same sample are needed. This poses a challenge to the survey planner, as the sampling design has to reflect different and potentially conflicting requirements simultaneously. In order to enable efficient design-based estimates for large domains, the sampling design should incorporate information related to the variables of interest. This may be achieved using stratification or sampling with unequal probabilities. Many model-based small area techniques require an ignorable sampling design such that after conditioning on the covariates the variable of interest does not contain further information about the sample membership. If this condition is not fulfilled, biased model-based estimates may result, as the model which holds for the sample is different from the one valid for the population. Hence, an optimisation of the sampling design without investigating the implications for model-based approaches will not be sufficient. Analogously, disregarding the design altogether and focussing only on the model is prone to failure as well. Instead, a profound knowledge of the interplay between the sample design and statistical modelling is a prerequisite for implementing an effective small area estimation strategy. In this work, we concentrate on two approaches to address this conflict. Our first approach takes the sampling design as given and can be used after the sample has been collected. It amounts to incorporating the survey design into the small area model to avoid biases stemming from informative sampling. Thus, once a model is validated for the sample, we know that it holds for the population as well. We derive such a procedure under a lognormal mixed model, which is a popular choice when the support of the dependent variable is limited to positive values. Besides, we propose a three-pillar strategy to select the additional variable accounting for the design, based on a graphical examination of the relationship, a comparison of the predictive accuracy of the choices and a check regarding the normality assumptions. Our second approach to deal with the conflict is based on the notion that the design should allow applying a wide variety of analyses using the sample data. Thus, if the use of model-based estimation strategies can be anticipated before the sample is drawn, this should be reflected in the design.
The same applies for the estimation of national statistics using design-based approaches. Therefore, we propose to construct the design such that the sampling mechanism is non-informative but allows for precise design-based estimates at an aggregate level.
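As a generic illustration of how model-based small area estimation borrows strength across areas, the following sketch computes area-level EBLUPs under a simple Fay-Herriot-type model with a crude moment estimator of the model variance; it is not the lognormal mixed model or the design-adjusted procedure proposed in this work:

```python
import numpy as np

rng = np.random.default_rng(7)
m = 40                                          # small areas
x = rng.uniform(0, 1, m)                        # area-level covariate
theta = 2.0 + 3.0 * x + rng.normal(0, 0.5, m)   # true area means
D = rng.uniform(0.3, 1.0, m)                    # known sampling variances
y = theta + rng.normal(0, np.sqrt(D))           # direct (design-based) estimates

X = np.column_stack([np.ones(m), x])
# crude moment estimator of the model variance sigma_u^2
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols
sigma_u2 = max(0.0, (resid @ resid - D.sum()) / (m - X.shape[1]))

# EBLUP: shrink each direct estimate towards the synthetic regression prediction
V = sigma_u2 + D
W = X.T / V
beta = np.linalg.solve(W @ X, W @ y)
gamma = sigma_u2 / V
eblup = gamma * y + (1 - gamma) * (X @ beta)
print(eblup[:5])
```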
A phenomenon of recent decades is that digital marketplaces on the Internet are establishing themselves for a wide variety of products and services. Recently, it has become possible for private individuals to invest in young and innovative companies (so-called "start-ups"). Via Internet portals, potential investors can examine various start-ups and then directly invest in their chosen start-up. In return, investors receive a share in the firm's profit, while companies can use the raised capital to finance their projects. This new way of financing is called "Equity Crowdfunding" (ECF) or "Crowdinvesting". The aim of this dissertation is to provide empirical findings about the characteristics of ECF. In particular, the question of whether ECF is able to overcome geographic barriers, the interdependence of ECF and capital structure, and the risk of failure for funded start-ups and their chances of receiving follow-up funding by venture capitalists or business angels will be analyzed. The results of the first part of this dissertation show that investors in ECF prefer local companies. In particular, investors who invest larger amounts have a stronger tendency to invest in local start-ups. The second part of the dissertation provides first indications of the interdependencies between capital structure and ECF. The analysis makes clear that the capital structure is not a determinant for undertaking an ECF campaign. The third part of the dissertation analyzes the success of companies financed by ECF in a country comparison. The results show that after a successful ECF campaign German companies have a higher chance of receiving follow-up funding by venture capitalists compared to British companies. The probability of survival, however, is slightly lower for German companies. The results provide relevant implications for theory and practice. The existing literature in the area of entrepreneurial finance will be extended by insights into investor behavior, additions to the capital structure theory and a country comparison in ECF. In addition, implications are provided for various actors in practice.
Entrepreneurship is a process of discovering and exploiting opportunities, during which two crucial milestones emerge: in the very beginning when entrepreneurs start their businesses, and in the end when they determine the future of the business. This dissertation examines the establishment and exit of newly created as well as of acquired firms, in particular the behavior and performance of entrepreneurs at these two important stages of entrepreneurship. The first part of the dissertation investigates the impact of characteristics at the individual and at the firm level on an entrepreneur's selection of entry modes across new venture start-up and business takeover. The second part of the dissertation compares firm performance across different entrepreneurship entry modes and then examines management succession issues that family firm owners have to confront. This study has four main findings. First, previous work experience in small firms, same sector experience, and management experience affect an entrepreneur's choice of entry modes. Second, the choice of entry mode for hybrid entrepreneurs is associated with their characteristics, such as occupational experience, level of education, and gender, as well as with the characteristics of their firms, such as location. Third, business takeovers survive longer than new venture start-ups, and both entry modes have different survival determinants. Fourth, the family firm's decision of recruiting a family or a nonfamily manager is not only determined by a manager's abilities, but also by the relationship between the firm's economic and non-economic goals and the measurability of these goals. The findings of this study extend our knowledge on entrepreneurship entry modes by showing that new venture start-ups and business takeovers are two distinct entrepreneurship entry modes in terms of their founders' profiles, their survival rates and survival determinants. Moreover, this study contributes to the literature on top management hiring in family firms: it establishes family firms' non-economic goals as another factor that impacts the family firm's hiring decision between a family and a nonfamily manager.
Why do some people become entrepreneurs while others stay in paid employment? Searching for a distinctive set of entrepreneurial skills that matches the profile of the entrepreneurial task, Lazear introduced a theoretical model featuring skill variety for entrepreneurs. He argues that because entrepreneurs perform many different tasks, they should be multi-skilled in various areas. First, this dissertation provides the reader with an overview of previous relevant research on skill variety with regard to entrepreneurship. The majority of the studies discussed focus on the effects of skill variety. Most studies conclude that skill variety mainly affects the decision to become self-employed. Skill variety also favors entrepreneurial intentions. Less clear are the results regarding the influence of skill variety on entrepreneurial success: measured by income and firm survival, a negative or U-shaped relationship is found. The empirical part of this dissertation tackles three research goals. First, it investigates whether a variety of early interests and activities in adolescence predicts subsequent variety in skills and knowledge. Second, the determinants of skill variety and of the variety of early interests and activities are investigated. Third, skill variety is tested as a mediator of the gender gap in entrepreneurial intentions. The dissertation employs structural equation modeling (SEM) using longitudinal data collected over ten years from Finnish secondary school students aged 16 to 26. As an indicator of skill variety, the number of functional areas in which the participant had prior educational or work experience is used. The results suggest that a variety of early interests and activities leads to skill variety, which in turn leads to entrepreneurial intentions. Furthermore, the study shows that early variety is predicted by openness and an entrepreneurial personality profile, and that skill variety is also encouraged by an entrepreneurial personality profile. From a gender perspective, there is indeed a gap in entrepreneurial intentions. While a positive correlation is found between the early variety of subjects and being female, there are negative correlations between the other two variables, education- and work-related skill variety, and being female; the negative effect of work-related skill variety is the strongest. The results of this dissertation are relevant for research, politics, educational institutions and special entrepreneurship education programs. They are also important for self-employed parents who plan the succession of the family business. Educational programs promoting entrepreneurship can be optimized on the basis of these results by making the transmission of a variety of skills a central goal; success could be further increased by focusing on teenagers and by preselecting participants based on their personality profiles. Regarding the gender gap, state policies should aim to provide women with more incentives to acquire skill variety. For this purpose, education programs can be tailored specifically to women, and self-employment can be presented as an attractive alternative to dependent employment.
Earnings functions are an important tool in labor economics, as they allow researchers to test a variety of labor market theories. Most empirical research on earnings functions focuses on testing hypotheses about the sign and magnitude of the variables of interest. In contrast, little attention is paid to the explanatory power of the econometric models employed. Measures of explanatory power are of interest, however, for assessing how successful econometric models are in explaining the real world. Are researchers able to draw a complete picture of the determination of earnings, or is there room for further theories leading to alternative econometric models? This article seeks to answer this question with a large microeconometric data set from Germany. Using linear regressions estimated by OLS with R2 and adjusted R2 as measures of explanatory power, the results show that up to 60 percent of wage variation can be explained using only observable variables.
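A minimal sketch of such an earnings (Mincer-type) regression and its R2, on purely synthetic data with illustrative variable names rather than the German microdata used in the article:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 5000
df = pd.DataFrame({"schooling": rng.integers(8, 19, n),
                   "experience": rng.uniform(0, 40, n),
                   "female": rng.integers(0, 2, n)})
df["log_wage"] = (1.2 + 0.08 * df["schooling"] + 0.03 * df["experience"]
                  - 0.0005 * df["experience"] ** 2 - 0.15 * df["female"]
                  + rng.normal(0, 0.4, n))

# standard Mincer specification with a quadratic experience term
fit = smf.ols("log_wage ~ schooling + experience + I(experience**2) + female",
              data=df).fit()
print(fit.rsquared, fit.rsquared_adj)   # share of wage variation explained
```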
Flexibility and spatial mobility of labour are central characteristics of modern societies which contribute not only to higher overall economic growth but also to a reduction of interregional employment disparities. For these reasons, there is the political will in many countries to expand labour market areas, resulting especially in an overall increase in commuting. The picture of the various unintended long-term consequences of commuting for individuals is, however, relatively unclear. Therefore, in recent years the journey to work has gained considerable attention, especially in the study of health and well-being. Empirical analyses of how commuting may affect health and well-being that are based on longitudinal or European data are nevertheless rare. The principal aim of this thesis is thus to address this question for Germany using data from the Socio-Economic Panel. Chapter 2 empirically investigates the causal impact of commuting on absence from work due to sickness-related reasons. Whereas an exogenous change in commuting distance does not affect the number of absence days of individuals who commute short distances to work, it increases the number of absence days of employees who commute medium (25 to 49 kilometres) or long distances (50 kilometres and more). Moreover, our results highlight that commuting may deteriorate an individual's health. However, this effect is not sufficient to explain the observed impact of commuting on absence from work. Chapter 3 explores the relationship between commuting distance and height-adjusted weight and sheds some light on the mechanisms through which commuting might affect individual body weight. We find no evidence that commuting leads to excess weight. Compensating health behaviour of commuters, especially healthy dietary habits, could explain the absence of a relationship between commuting and height-adjusted weight. In Chapter 4, a multivariate probit approach is used to estimate recursive systems of equations for commuting and health-related behaviours. Controlling for the potential endogeneity of commuting, the results show that long-distance commutes significantly decrease the propensity to engage in health-related activities. Furthermore, unobservable individual heterogeneity can influence both the decision to commute and healthy lifestyle choices. Chapter 5 investigates the relationship between commuting and several cognitive and affective components of subjective well-being. The results suggest that commuting is related to lower levels of satisfaction with family life and leisure time, which can largely be ascribed to changes in daily time use patterns influenced by the work commute.
Monetary Policy During Times of Crisis - Frictions and Non-Linearities in the Transmission Mechanism
(2017)
For a long time it was believed that monetary policy would be able to maintain price stability and foster economic growth during all phases of the business cycle. The era of the Great Moderation, often also called the Volcker-Greenspan period, beginning in the mid-1980s, was characterized by a decline in the volatility of output growth and inflation among the industrialized countries. The term itself was first used by Stock and Watson (2003). Economists have long studied what triggered the decline in volatility and pointed out several main factors. An important research strand points to structural changes in the economy, such as a decline of volatility in the goods-producing sector through better inventory controls and developments in the financial sector and government spending (McConnell 2000; Blanchard 2001; Stock 2003; Kim 2004; Davis 2008). While many believed that monetary policy was only 'lucky' in terms of its reaction towards inflation and exogenous shocks (Stock 2003; Primiceri 2005; Sims 2006; Gambetti 2008), others reveal a more complex picture. Rule-based monetary policy (Taylor 1993) that incorporates inflation targeting (Svensson 1999) has been identified as a major source of inflation stabilization by increasing transparency (Clarida 2000; Davis 2008; Benati 2009; Coibion 2011). Apart from that, the mechanics of monetary policy transmission have changed. Giannone et al. (2008) compare the pre-Great Moderation era with the Great Moderation and find that the economy's reaction to monetary shocks has decreased. This finding is supported by Boivin et al. (2011). Similarly, Herrera and Pesavento (2009) show that monetary policy during the Volcker-Greenspan period was very effective in dampening the effects of exogenous oil price shocks on the economy, while this cannot be found for the period thereafter. Yet the subprime crisis unexpectedly hit economies worldwide and ended the era of the Great Moderation. Financial deregulation and innovation gave banks opportunities for excessive risk taking, weakened financial stability (Crotty 2009; Calomiris 2009) and led to the build-up of credit-driven asset price bubbles (Schularick and Taylor 2012). The Federal Reserve (Fed), which was thought to be the omnipotent conductor of price stability and economic growth during the Great Moderation, failed to prevent a harsh crisis. Even more, it intensified the bubble with low interest rates following the dotcom crisis of the early 2000s and misjudged the impact of its interventions (Taylor 2009; Obstfeld 2009). New results give a more detailed answer to the question of the latitude for monetary policy raised by Bernanke and suggest the existence of non-linearities in the transmission of monetary policy. Weise (1999), Garcia and Schaller (2002), Lo and Piger (2005), Mishkin (2009), Neuenkirch (2013) and Jannsen et al. (2015) find that monetary policy is more potent during times of financial distress and recessions, while its effectiveness during 'normal times' is much weaker or even insignificant. This prompts the question of whether these non-linearities limit central banks' ability to lean against bubbles and financial imbalances (White 2009; Walsh 2009; Boivin 2010; Mishkin 2011).
This dissertation looked at both design-based and model-based estimation for rare and clustered populations using the idea of the ACS design. The ACS design (Thompson, 2012, p. 319) starts with an initial sample that is selected by a probability sampling method. If any of the selected units meets a pre-specified condition, its neighboring units are added to the sample and observed. If any of the added units meets the pre-specified condition, its neighboring units are further added to the sample and observed. The procedure continues until there are no more units that meet the pre-specified condition. In this dissertation, the pre-specified condition is the detection of at least one animal in a selected unit. In the design-based estimation, three estimators were proposed under three specific design settings. The first design was a stratified strip ACS design that is suitable for aerial or ship surveys. This was a case study in estimating population totals of African elephants, in which units/quadrats were observed only once during an aerial survey. The Des Raj estimator (Raj, 1956) was modified to obtain an unbiased estimate of the population total. The design was evaluated using simulated data with different levels of rarity and clusteredness, as well as on real data on African elephants obtained from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013. In this study, the order in which the samples were observed was maintained. Re-ordering the samples by making use of Murthy's estimator (Murthy, 1957) can produce more efficient estimates and is hence a possible extension of this study; the computational cost resulting from the n! permutations in Murthy's estimator, however, needs to be taken into consideration. The second setting was one in which there exists an auxiliary variable that is negatively correlated with the study variable. Murthy's estimator (Murthy, 1964) was modified, and situations in which the modified estimator is preferable were given both in theory and in simulations using simulated data and two real data sets. The study variables for the real data sets were the distribution and counts of oryx and wildebeest, obtained from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013. Temperature was the auxiliary variable for both study variables; temperature data were obtained from the R package raster. The modified estimator provided more efficient estimates with lower bias compared to the original Murthy's estimator (Murthy, 1964). It was also more efficient than the modified HH and the modified HT estimators of Thompson (2012, p. 319). In this study, one auxiliary variable is considered. A fruitful area for future research would be to incorporate multi-auxiliary information at the estimation phase of an ACS design. This could, in principle, be done by using, for instance, a multivariate extension of the product estimator (Singh, 1967) or the generalized regression estimator (Särndal et al., 1992). The third case under design-based estimation studied the conjoint use of the stopping rule (Gattone and Di Battista, 2011) and of sampling without replacement of clusters (Dryver and Thompson, 2007). Each of these two methods was proposed to reduce the sampling cost, although the use of the stopping rule results in biased estimates. Despite this bias, the new estimator yielded higher efficiency gains in comparison to the without-replacement-of-clusters design.
It was also more efficient than the stratified design, which is known to reduce the final sample size when networks are truncated at stratum boundaries. The above evaluation was based on simulated and real data; the real data were the distribution and counts of hartebeest, elephants and oryx obtained in the same census as above. The bias introduced by the stopping rule has not been evaluated analytically. This may not be straightforward, since the truncated network formed depends on the initial unit sampled (Gattone et al., 2016a). This, and the order of the bias, deserves further investigation, as it may help in understanding the effect of an increase in the initial sample size, together with the population characteristics, on the efficiency of the proposed estimator. Chapter four modeled data that were obtained using the stratified strip ACS design (as described in sub-section 3.1). This was an extension of the model of Rapley and Welsh (2008) through modeling data obtained from a different design, the introduction of an auxiliary variable and the use of the without-replacement-of-clusters mechanism. Ideally, model-based estimation does not depend on the design, or rather on how the sample was obtained. This is, however, not the case if the design is informative, such as the ACS design; in this case, the procedure that was used to obtain the sample was incorporated in the model. Both model-based and design-based simulations were conducted using artificial and real data. The study and the auxiliary variable for the real data were the distribution and counts of elephants collected during aerial censuses in parts of Kenya and Tanzania in October (dry season) and April (wet season) 2013, respectively. Areas of possible future research include predicting the population total of African elephants in all parks in Kenya, which can be achieved in an economical and reliable way by using the theory of SAE. Chapter five compared the different proposed strategies using the elephant data; again the study variable was the elephant data from October (dry season) 2013 and the auxiliary variable was the elephant data from April (wet season) 2013. The results show that the choice of a particular strategy depends on the characteristics of the population under study and on the level and direction of the correlation between the study and the auxiliary variable (if present). One general area of the ACS design that still lags behind is the implementation of the design in the field, especially for animal populations. This is partly attributable to the challenges associated with field implementation, some of which were discussed in section 2.3. Green et al. (2010), however, provide new insights into undertaking the ACS design during an aerial survey, such as how the aircraft should turn while surveying neighboring units. A key point throughout the dissertation is the reduction of cost during a survey, which can be seen in the reduction of the number of units in the final sample (through the use of the stopping rule, the use of stratification and the truncation of networks at stratum boundaries) and in ensuring that units are observed only once (by using the without-replacement-of-clusters sampling technique). The cost of surveying edge units is assumed to be low, in which case the efficiency of the ACS design relative to the non-adaptive design is achieved (Thompson and Collins, 2002). This is, however, not the case in aerial surveys, as the aircraft flies at constant speed and height (Norton-Griffiths, 1978).
Hence the cost of surveying an edge unit is the same as the cost of surveying a unit that meets the condition of interest, and the without-replacement-of-clusters technique plays a greater role in reducing the cost of sampling in such surveys. Other key points that motivated the sections of the dissertation include gains in efficiency (in all sections) and the practicability of the designs in their specific settings. Even though the dissertation focused on animal populations, the methods can just as well be applied to any population that is rare and clustered, such as in the study of forestry, plants, pollution, minerals and so on.
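For readers unfamiliar with ACS, the following sketch simulates a rare, clustered population on a grid and computes the standard modified Hansen-Hurwitz (HH) estimator under an initial simple random sample; it illustrates the basic design only, not the stratified strip design, Des Raj/Murthy modifications or stopping rules studied in the dissertation:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(9)
G = 30
grid = np.zeros((G, G))
# a rare, clustered population: a few dense patches of animals
for _ in range(6):
    r, c = rng.integers(3, G - 3, 2)
    grid[r - 1:r + 2, c - 1:c + 2] += rng.poisson(15, (3, 3))

# networks: 4-connected components of units meeting the condition (count > 0);
# units with count 0 form singleton networks
labels, _ = ndimage.label(grid > 0)
network_mean = np.zeros_like(grid, dtype=float)
for lab in np.unique(labels):
    mask = labels == lab
    network_mean[mask] = grid[mask] if lab == 0 else grid[mask].mean()

# modified Hansen-Hurwitz estimator under an initial SRSWOR of n units:
# average the network means of the initially selected units, scale by N
N, n = G * G, 60
idx = rng.choice(N, n, replace=False)
tau_hat = N * network_mean.ravel()[idx].mean()
print("true total:", grid.sum(), " HH estimate:", round(tau_hat, 1))
```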
Firms in goods-producing industries and sectors are increasingly discovering the differentiation and revenue potential of offering complementary services to gain strategic competitive advantages. In many industries, such services have already become a necessary part of manufacturers' portfolios in order to position themselves and remain competitive. A particularly striking example is the automotive industry, which years ago began to integrate so-called product-related services (such as financing services) around the core product, the automobile, into its business model in order to differentiate itself from competitors by increasing customer value. Against the background that marketing constructs such as brand, reputation and customer loyalty, but also technical specifications such as engine, equipment and accessories, influence vehicle choice, the author asks to what extent an additional offer of purely product-related services influences brand and vehicle choice when buying a car. In this context, one research objective of the present study is to design an industry-independent value chain for product-related services in order to identify their strategic differentiation potential. The frame of reference is constructed from the perspective of the end consumer in the car purchase decision, so that perceptions of existing product-related service offers can be assigned to the individual phases of the purchase decision. This forms the methodological foundation of this empirical study, which addresses the following research question: "Do product-related services influence the purchase probability in the consumer purchase decision process for automobiles in the private passenger car segment?" Causal analysis is chosen as the research strategy: two consecutive primary surveys (quantitative data collection via an online questionnaire) examine potential car buyers with respect to their knowledge and perception of the product-related services offered by the individual automotive brands. The results of the data analysis suggest that product-related services do have a positive influence on the purchase decision of potential car buyers, but that there is considerable room for improvement on the part of automotive manufacturers and dealers regarding the communication of such value-added services. This dissertation was written at the Chair of Organisation and Strategic Service Management and submitted to Department IV of Universität Trier.
On the Influence of Transformations of Skewed Distributions on the Analysis of Imputed Data
(2015)
The correct treatment of missing data in empirical studies plays an increasingly important role in applied quantitative research. Rubin (1987) developed multiple imputation as the central, flexible instrument, which under regular conditions enables valid inference for the analysis of interest. A number of imputation methods rely essentially on the normality assumption. In empirical practice, this assumption of normally distributed data is increasingly criticised; variables with very skewed distributions, in particular, prove problematic for imputation. This thesis focuses on the correct treatment of missing values with the aim of valid inference for the analysis of interest. One instrument is the transformation of skewed distributions so that imputations can be carried out under regular conditions using the transformed, approximately normally distributed data. A multivariate approach is introduced. Several Monte Carlo simulation studies then show that the new approach dominates existing methods and that the transformation has a positive effect on analyses with imputed data.
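A toy illustration of the transform-impute-backtransform idea for a skewed variable (single imputation under MCAR, drawn from a fitted normal; not the multivariate multiple-imputation approach developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 5000
y = rng.lognormal(mean=3.0, sigma=1.0, size=n)   # strongly right-skewed
miss = rng.uniform(size=n) < 0.3                 # 30% missing completely at random
y_obs = y.copy()
y_obs[miss] = np.nan

def impute_normal(v):
    # draw imputations from a normal fitted to the observed values
    obs = v[~np.isnan(v)]
    out = v.copy()
    out[np.isnan(v)] = rng.normal(obs.mean(), obs.std(), np.isnan(v).sum())
    return out

naive = impute_normal(y_obs)                      # normality assumed on the raw scale
log_based = np.exp(impute_normal(np.log(y_obs)))  # transform, impute, back-transform

print("implausible negative imputations, raw scale:", int((naive[miss] < 0).sum()),
      "| log scale:", int((log_based[miss] < 0).sum()))
print("median  true:", round(np.median(y), 1),
      " raw-scale:", round(np.median(naive), 1),
      " log-scale:", round(np.median(log_based), 1))
```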
Missing values and their compensation through imputation pose a major challenge for estimating the variance of a point estimator, also in official statistics. To ensure unbiased variance estimation, all components of the variance must be taken into account. For this purpose, the total variance is often decomposed in order to obtain detailed information about its components and to capture them completely. This thesis focuses on resampling methods. An approach is developed for transferring newer resampling methods, which take all elements of the original sample into account, to settings with imputation. A Monte Carlo simulation study is conducted to compare different variance estimators. In addition, a Monte Carlo simulation is used to decompose the total variance under different parameter constellations.
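The following toy contrast shows why imputation has to be repeated within each resampling replicate if the imputation variance is to be captured; it is a deliberately simplified single-imputation bootstrap, not one of the specific resampling methods examined in the thesis:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1000
y = rng.normal(50, 10, n)
miss = rng.uniform(size=n) < 0.25
y_obs = y.copy()
y_obs[miss] = np.nan

def impute(v, r):
    obs = v[~np.isnan(v)]
    out = v.copy()
    out[np.isnan(v)] = r.normal(obs.mean(), obs.std(), np.isnan(v).sum())
    return out

B = 2000
means_fixed, means_reimputed = [], []
y_imp_once = impute(y_obs, rng)                       # impute once, then resample
for _ in range(B):
    idx = rng.integers(0, n, n)                       # bootstrap resample of units
    means_fixed.append(y_imp_once[idx].mean())        # treats imputations as observed
    means_reimputed.append(impute(y_obs[idx], rng).mean())  # re-impute per replicate

print("variance, imputations treated as data:", round(np.var(means_fixed), 4))
print("variance, re-imputation per replicate:", round(np.var(means_reimputed), 4))
```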
This meta-analysis clearly shows that family firms run by family members perform worse than firms led by managers who do not belong to the owning family. Based on univariate and multivariate analyses of 270 scientific publications from 42 countries, the performance of family firms was compared with that of non-family firms. The first robust result clearly shows that family firms outperform non-family firms. This finding is in line with most primary studies and previous meta-analyses. The second result of this work belongs to the finance strand of research and is based on the distinction between market-based and accounting-based performance measures. Market-based measures, which are computed by analysts, show that family firms underperform non-family firms. This contrasts with the accounting-based measures, which the family firms themselves publish in their audited financial statements. The third research question examines in detail whether the composition of the data sets in primary studies biases the overall result in a particular direction. The result is not driven by data sets containing firms that are publicly listed, active in manufacturing, or technology-driven. Small and medium-sized enterprises (SMEs) report smaller performance figures and thus reduce the magnitude of the dependent variable. The fourth result provides an overview of the ways in which the family is involved in the supervision or the operating business of the firm. It clearly shows that managers drawn from the family have a significantly negative influence on firm performance. This may be attributable to the preservation of the family members' wealth, so that financial indicators do not play a primary role. The last research question examines whether the performance of family firms relative to non-family firms is also influenced by institutional factors. In Europe, family firms show lower performance figures than in North America, which can be attributed to European firms being undervalued compared with North American ones (Caldwell, 7 June 2014). Furthermore, family firms outperform non-family firms in more masculine cultures; masculinity, following Hofstede, is characterised by a stronger competitive orientation, assertiveness, striving for wealth, and clearly differentiated gender roles. Legal regimes (common or civil law), by contrast, play no role in the performance of family firms. The enforceability of laws, however, has a significantly positive influence on the performance of family firms relative to non-family firms, which can be explained by lower borrowing costs for family firms in countries with very good law enforcement.
The equity premium (Mehra and Prescott, 1985) is still a puzzle in the sense that there are no convincing explanations for its size. In this dissertation, we study this long-standing puzzle and several possible behavioral explanations. First, we apply the IRR methodology proposed by Fama and French (1999) to obtain firm-level data on equity premia for N = 28,256 companies in 54 countries around the world. Second, using preference data from the INTRA study (Rieger et al., 2014), we test the relevant risk factors together with time cognition to explain the equity premium. We document the failure of the myopic loss aversion hypothesis of Benartzi and Thaler (1995) but provide rigorous empirical evidence supporting ambiguity aversion as a behavioral explanation for the equity premium. The observations shed some light on a new approach that integrates risk and ambiguity (together with time preferences) into a more general model of uncertainty, in which both the risk premium and the ambiguity premium play roles in asset pricing models.
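A small sketch of the IRR idea on hypothetical firm-level cash flows (initial value as outlay, intermediate payouts, terminal value), solved by root-finding on the net present value; the numbers and the risk-free benchmark are made up for illustration:

```python
import numpy as np
from scipy.optimize import brentq

# hypothetical cash-flow stream of a firm: buy at market value, receive payouts,
# sell at terminal market value
cash_flows = np.array([-100.0, 5.0, 6.0, 6.5, 7.0, 130.0])

def npv(rate):
    periods = np.arange(len(cash_flows))
    return np.sum(cash_flows / (1.0 + rate) ** periods)

irr = brentq(npv, -0.9, 1.0)      # internal rate of return: NPV(irr) = 0
risk_free = 0.03                  # assumed risk-free benchmark
print("IRR:", round(irr, 4), " implied equity premium:", round(irr - risk_free, 4))
```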