Forest inventories provide essential monitoring information on forest health, biodiversity, and resilience against disturbance, as well as on biomass and timber harvesting potential. For this purpose, modern inventories increasingly exploit the advantages of airborne laser scanning (ALS) and terrestrial laser scanning (TLS).
Although tree crown detection and delineation using ALS can be considered a mature discipline, the identification of individual stems is a rarely addressed task. In particular, little is known about the informative value of the stem attributes, especially their inclination characteristics. In addition, there is a lack of tools for the processing and fusion of forest-related data sources. This thesis addresses these research gaps in four peer-reviewed papers, with a focus on the suitability of ALS data for the detection and analysis of tree stems.
In addition to providing a novel post-processing strategy for geo-referencing forest inventory plots, the thesis shows that ALS-based stem detections are very reliable and their positions are accurate. In particular, the detected stems proved suitable for studying prevailing trunk inclination angles and orientations, revealing a species-specific down-slope inclination of the tree stems and a leeward orientation of conifers.
Agricultural monitoring is necessary. Since the beginning of the Holocene, human agricultural practices have been shaping the face of the earth, and today around one third of the ice-free land mass consists of cropland and pastures. While agriculture is necessary for our survival, its intensity has caused many negative externalities, such as enormous freshwater consumption, the loss of forests and biodiversity, greenhouse gas emissions, as well as soil erosion and degradation.
Some of these externalities can potentially be ameliorated by careful allocation of crops and
cropping practices, while at the same time the state of these crops has to be monitored in order
to assess food security. Modern satellite-based earth observation can be an adequate tool to quantify the abundance of crop types, i.e., to produce spatially explicit crop type maps. The resources to do so, in terms of input data, reference data, and classification algorithms, have been constantly improving over the past 60 years, and we now live in a time where fully operational satellites produce freely available imagery, often with less than monthly revisit times at high spatial resolution. At the same time, classification models have constantly evolved from distribution-based statistical algorithms through machine learning to the now ubiquitous deep learning.
In this environment, we used an explorative approach to advance the state of the art of crop
classification. We conducted regional case studies focusing on the study region of the Eifelkreis Bitburg-Prüm, aiming to develop validated crop classification toolchains. Because of their unique role in the regional agricultural system and because of their specific phenological characteristics, we focused solely on maize fields.
In the first case study, we generated reference data for the years 2009 and 2016 in the study
region by drawing polygons based on high-resolution aerial imagery, and used these in conjunction with RapidEye imagery to produce high-resolution maize maps with a random forest classifier and a Gaussian blur filter. We were able to highlight the importance of careful residual analysis, especially in terms of autocorrelation. As an end result, we were able to show that, despite the severe limitations introduced by the restricted acquisition windows due to cloud coverage, high-quality maps could be produced for both years, and the regional development of maize cultivation could be quantified.
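To illustrate the smoothing step only: a Gaussian blur applied to a per-pixel class-probability raster before thresholding might look as follows. This is a hedged pure-Python sketch with invented helper names and a tiny toy raster, not the actual toolchain used in the study.

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized 2D Gaussian kernel."""
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-half, half + 1)] for y in range(-half, half + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def blur(raster, sigma=1.0):
    """Smooth a 2D raster with a 3x3 Gaussian kernel (borders are clamped)."""
    k = gaussian_kernel(3, sigma)
    h, w = len(raster), len(raster[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii = min(max(i + di, 0), h - 1)  # clamp at raster edges
                    jj = min(max(j + dj, 0), w - 1)
                    acc += k[di + 1][dj + 1] * raster[ii][jj]
            out[i][j] = acc
    return out

# Toy per-pixel maize probabilities: smooth, then threshold at 0.5.
probs = [[1.0, 0.0, 1.0],
         [1.0, 1.0, 1.0],
         [1.0, 0.0, 1.0]]
smoothed = blur(probs)
maize_mask = [[p >= 0.5 for p in row] for row in smoothed]
```

The blur removes isolated misclassified pixels, so the thresholded mask becomes spatially more coherent than the raw per-pixel decision.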
In the second case study, we used these spatially explicit datasets to link the expansion of biogas
producing units with the extended maize cultivation in the area. In a next step, we overlaid the
maize maps with soil and slope rasters in order to assess spatially explicit risks of soil compaction
and erosion. Thus, we were able to highlight the potential role of remote sensing-based crop type
classification in environmental protection, by producing maps of potential soil hazards, which can
be used by local stakeholders to reallocate certain crop types to locations with less associated
risk.
In our third case study, we used Sentinel-1 data as input imagery, and official statistical records
as maize reference data, and were able to produce consistent modeling input data for four
consecutive years. Using these datasets, we could train and validate different models in spatially
and temporally independent random subsets, with the goal of assessing model transferability. We
were able to show that state-of-the-art deep learning models such as UNET performed significantly better than conventional models like random forests when the model was validated in a different year or a different regional subset. We highlighted and discussed the implications for
modeling robustness, and the potential usefulness of deep learning models in building fully
operational global crop classification models.
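The spatially and temporally independent validation described above can be sketched as a grouped holdout split. This is a simplified stand-in (the field names and years are invented), not the study's actual data pipeline:

```python
def holdout_split(samples, key, holdout):
    """Group-wise holdout: all samples whose `key` equals `holdout`
    form the validation set, so the model is evaluated on a year
    (or region) it has never seen during training."""
    train = [s for s in samples if s[key] != holdout]
    validation = [s for s in samples if s[key] == holdout]
    return train, validation

# Toy sample records: one entry per (year, region) combination.
samples = [{"year": y, "region": r}
           for y in (2017, 2018, 2019, 2020)
           for r in ("north", "south")]

# Temporal transferability: validate on an entirely unseen year.
train_t, val_t = holdout_split(samples, "year", 2020)
# Spatial transferability: validate on an entirely unseen region.
train_s, val_s = holdout_split(samples, "region", "south")
```

A model that only looks good under a random within-year split but degrades under these grouped splits has memorized year- or region-specific conditions rather than transferable crop signatures.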
We were able to conclude that the first major barrier for global classification models is the
reference data. Since most research in this area is still conducted with local field surveys, and only
few countries have access to official agricultural records, more global cooperation is necessary to
build harmonized and regionally stratified datasets. The second major barrier is the classification
algorithm. While a lot of progress has been made in this area, the current trend of many new types of deep learning models appearing shows great promise but has not yet consolidated. Much research is still necessary to determine which models perform best and most robustly while remaining transparent and usable by non-experts, such that they can be applied effortlessly by local and global stakeholders.
This thesis is concerned with two classes of optimization problems which stem
mainly from statistics: clustering problems and cardinality-constrained optimization problems. We are particularly interested in the development of computational techniques to exactly or heuristically solve instances of these two classes
of optimization problems.
The minimum sum-of-squares clustering (MSSC) problem is widely used
to find clusters within a set of data points. The problem is also known as
the $k$-means problem, since the most prominent heuristic to compute a feasible
point of this optimization problem is the $k$-means method. In many modern
applications, however, the clustering suffers from uncertain input data due to,
e.g., unstructured measurement errors. As a consequence, the clustering result represents a clustering of the erroneous measurements instead of retrieving the true underlying clustering structure. We address this issue by
applying robust optimization techniques: we derive the strictly and $\Gamma$-robust
counterparts of the MSSC problem, which are as challenging to solve as the
original model. Moreover, we develop alternating direction methods to quickly
compute feasible points of good quality. Our experiments reveal that the more
conservative strictly robust model consistently provides better clustering solutions
than the nominal and the less conservative $\Gamma$-robust models.
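The $k$-means heuristic mentioned above can be sketched in a few lines. This is the plain (nominal, non-robust) Lloyd iteration on a toy data set, not the robust counterparts or the alternating direction methods developed in the thesis:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's k-means heuristic for the MSSC problem:
    alternate an assignment step and a centroid-update step."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster.
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centers.append(tuple(sum(coord) / len(cl) for coord in zip(*cl)))
            else:
                new_centers.append(centers[i])  # keep an emptied center in place
        if new_centers == centers:
            break  # converged to a local optimum of the MSSC objective
        centers = new_centers
    return centers, clusters

points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
          (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centers, clusters = kmeans(points, k=2)
```

Each iteration weakly decreases the sum-of-squares objective, but the method only computes a feasible point, not a global optimum, which is exactly the gap the exact branch-and-cut techniques address.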
In the context of clustering problems, however, using only a heuristic solution
comes with severe disadvantages regarding the interpretation of the clustering.
This motivates us to study globally optimal algorithms for the MSSC problem.
We note that although some algorithms have already been proposed for this
problem, it is still far from being “practically solved”. Therefore, we propose
mixed-integer programming techniques, which are mainly based on geometric
ideas and which can be incorporated in a
branch-and-cut based algorithm tailored
to the MSSC problem. Our numerical experiments show that these techniques
significantly improve the solution process of a
state-of-the-art MINLP solver
when applied to the problem.
We then turn to the study of cardinality-constrained optimization problems.
We consider two famous problem instances of this class: sparse portfolio optimization and sparse regression problems. In many modern applications, it is common
to consider problems with thousands of variables. Therefore, globally optimal
algorithms are not always computationally viable and the study of sophisticated
heuristics is very desirable. Since these problems have a discrete-continuous
structure, decomposition methods are particularly well suited. We apply a penalty alternating direction method that exploits this structure and provides very good feasible points in a reasonable amount of time. Our computational study shows that our methods are competitive with state-of-the-art solvers and heuristics.
Even though in most cases time is a good metric to measure the cost of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data that is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load the data from main memory. The topic of this thesis is a new metric to measure the cost of algorithms, called memory distance, which can be seen as an abstraction of the aspect just mentioned. We show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Furthermore, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory distance complexity classes.
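The effect can be made visible even from a high-level language. The following sketch (pure Python, illustrative only, since the interpreter adds overhead of its own) sums the same one million elements twice, once in sequential order and once in a randomly shuffled order; the operation counts are identical, so any running-time gap stems from the memory access pattern alone:

```python
import random
import time

def sum_in_order(data, order):
    """Visit exactly the same elements, but in the given index order."""
    total = 0
    for i in order:
        total += data[i]
    return total

n = 1_000_000
data = list(range(n))
sequential = list(range(n))            # neighbouring indices: cache-friendly
shuffled = sequential[:]
random.Random(42).shuffle(shuffled)    # scattered indices: cache-hostile

t0 = time.perf_counter()
s_seq = sum_in_order(data, sequential)
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
s_rnd = sum_in_order(data, shuffled)
t_rnd = time.perf_counter() - t0
# Same operation count and same result, yet t_rnd is typically noticeably
# larger than t_seq purely because of data locality.
```

In the memory distance view, both runs perform the same number of operations, but the shuffled traversal accumulates far greater distance between consecutive accesses.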
The Second Language Acquisition of English Non-Finite Complement Clauses – A Usage-Based Perspective
(2022)
One of the most essential hypotheses of usage-based theories and many constructionist approaches to language is that language entails the piecemeal learning of constructions on the basis of general cognitive mechanisms and exposure to the target language in use (Ellis 2002; Tomasello 2003). However, there is still a considerable lack of empirical research on the emergence and mental representation of constructions in second language (L2) acquisition. One crucial question that arises, for instance, is whether L2 learners’ knowledge of a construction corresponds to a native-like mapping of form and meaning and, if so, to what extent this representation is shaped by usage. For instance, it is unclear how learners ‘build’ constructional knowledge, i.e. which pieces of frequency-, form- and meaning-related information become relevant for the entrenchment and schematisation of an L2 construction.
To address these issues, the English catenative verb construction was used as a testbed phenomenon. This idiosyncratic complex construction is comprised of a catenative verb and a non-finite complement clause (see Huddleston & Pullum 2002), which is prototypically a gerund-participial (henceforth referred to as ‘target-ing’ construction) or a to-infinitival complement (‘target-to’ construction):
(1) She refused to do her homework.
(2) Laura kept reading love stories.
(3) *He avoids to listen to loud music.
This construction is particularly interesting because learners often show choices of a complement type different from those of native speakers (e.g. Gries & Wulff 2009; Martinez‐Garcia & Wulff 2012) as illustrated in (3) and is commonly claimed to be difficult to be taught by explicit rules (see e.g. Petrovitz 2001).
By triangulating different types of usage data (corpus and elicited production data) and analysing these by multivariate statistical tests, the effects of different usage-related factors (e.g. frequency, proficiency level of the learner, semantic class of verb, etc.) on the representation and development of the catenative verb construction and its subschemas (i.e. target-to and target-ing construction) were examined. In particular, it was assessed whether they can predict a native-like form-meaning pairing of a catenative verb and non-finite complement.
First, all studies were able to show a robust effect of frequency on the complement choice. Frequency does not only lead to the entrenchment of high-frequency exemplars of the construction but is also found to motivate a taxonomic generalisation across related exemplars and the representation of a more abstract schema. Second, the results indicate that the target-to construction, due to its higher type and token frequency, has a higher degree of schematicity and productivity than the target-ing construction for the learners, which allows for analogical comparisons and pattern extension with less entrenched exemplars. This schema is likely to be overgeneralised to (less frequent) target-ing verbs because the learners perceive formal and semantic compatibility between the unknown/infrequent verb and this pattern.
Furthermore, the findings present evidence that less advanced learners (A2-B2) make more coarse-grained generalisations, which are centred around high-frequency and prototypical exemplars/low-scope patterns. In the case of high-proficiency learners (C1-C2), not only does the number of native-like complement choices increase, but relational information, such as the semantic subclasses of the verb, form-function contingency and other factors, also becomes relevant for a target-like choice. Thus, the results suggest that with increasing usage experience learners gradually develop a more fine-grained, interconnected representation of the catenative verb construction, which increasingly resembles the form-meaning mappings of native speakers.
Taken together, these insights highlight the importance for language learning and teaching environments to acknowledge that L2 knowledge is represented in the form of highly interconnected form-meaning pairings, i.e. constructions, that can be found on different levels of abstraction and complexity.
Due to the transition towards climate neutrality, energy markets are rapidly evolving. New technologies are developed that allow electricity from renewable energy sources to be stored or to be converted into other energy commodities. As a consequence, new players enter the markets and existing players gain more importance. Market equilibrium problems are capable of capturing these changes and therefore enable us to answer contemporary research questions with regard to energy market design and climate policy.
This cumulative dissertation is devoted to the study of different market equilibrium problems that address such emerging aspects in liberalized energy markets. In the first part, we review a well-studied competitive equilibrium model for energy commodity markets and extend this model by sector coupling, by temporal coupling, and by a more detailed representation of physical laws and technical requirements. Moreover, we summarize our main contributions of the last years with respect to analyzing the market equilibria of the resulting equilibrium problems.
For the extension regarding sector coupling, we derive sufficient conditions for ensuring uniqueness of the short-run equilibrium a priori and for verifying uniqueness of the long-run equilibrium a posteriori. Furthermore, we present illustrative examples showing that each of the derived conditions is indeed necessary to guarantee uniqueness in general.
For the extension regarding temporal coupling, we provide sufficient conditions for ensuring uniqueness of demand and production a priori. These conditions also imply uniqueness of the short-run equilibrium in the case of a single storage operator. However, in the case of multiple storage operators, examples illustrate that charging and discharging decisions are not unique in general. We conclude the equilibrium analysis with an a posteriori criterion for verifying uniqueness of a given short-run equilibrium. Since the computation of equilibria is much more challenging due to the temporal coupling, we briefly review why a tailored parallel and distributed alternating direction method of multipliers makes it possible to efficiently compute market equilibria.
For the extension regarding physical laws and technical requirements, we show that, in nonconvex settings, existence of an equilibrium is not guaranteed and that the fundamental welfare theorems therefore fail to hold. In addition, we argue that the welfare theorems can be re-established in a market design in which the system operator is committed to a welfare objective. For the case of a profit-maximizing system operator, we propose an algorithm that indicates existence of an equilibrium and that computes an equilibrium in the case of existence. Based on well-known instances from the literature on the gas and electricity sector, we demonstrate the broad applicability of our algorithm. Our computational results suggest that an equilibrium often exists for an application involving nonconvex but continuous stationary gas physics. In turn, integrality constraints introduced by the switchability of DC lines in DC electricity networks lead to many instances without an equilibrium. Finally, we state sufficient conditions under which the gas application has a unique equilibrium and the line switching application has finitely many.
In the second part, all preprints belonging to this cumulative dissertation are provided. These preprints, as well as two journal articles to which the author of this thesis contributed, are referenced within the extended summary in the first part and contain more details.
This socio-pragmatic study investigates organisational conflict talk between superiors and subordinates in three medical dramas from China, Germany and the United States. It explores what types of sociolinguistic realities the medical dramas construct by ascribing linguistic behaviour to different status groups. The study adopts an enhanced analytical framework based on John Gumperz’ discourse strategies and Spencer-Oatey’s rapport management theory. This framework detaches directness from politeness, defines directness based on preference and polarity and explains the use of direct and indirect opposition strategies in context.
The findings reveal that the three hospital series draw on 21 opposition strategies which can be categorised into mitigating, intermediate and intensifying strategies. While the status identity of superiors is commonly characterised by a higher frequency of direct strategies than that of subordinates, both status groups manage conflict in a primarily direct manner across all three hospital shows. The high percentage of direct conflict management is related to the medical context, which is characterised by a focus on transactional goals, complex role obligations and potentially severe consequences of medical mistakes and delays. While the results reveal unexpected similarities between the three series with regard to the linguistic directness level, cross-cultural differences between the Chinese and the two Western series are obvious from particular sociopragmatic conventions. These conventions particularly include the use of humour, imperatives, vulgar language and incorporated verbal and para-verbal/multimodal opposition. Noteworthy differences also appear in the underlying patterns of strategy use. They show that the Chinese series promotes a greater tolerance of hierarchical structures and a partially closer social distance in asymmetrical professional relationships. These disparities are related to different perceptions of power distance, role relationships, face and harmony.
The findings challenge existing stereotypes of Chinese, US American and German conflict management styles and emphasise the context-specific nature of verbal conflict management in every culture. Although cinematic aspects affect the conflict management in the fictional data, the results largely comply with recent research on conflict talk in real-life workplaces. As such, the study contributes to intercultural training in medical contexts and provides an enhanced analytical framework for further cross-cultural studies on linguistic strategies.
The main focus of this work is to study the computational complexity of generalizations of the synchronization problem for deterministic finite automata (DFA). This problem asks, for a given DFA, whether there exists a word w that maps all states of the automaton to a single common state. We call such a word w a synchronizing word. A synchronizing word brings a system from an unknown configuration into a well-defined configuration and thereby resets the system.
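For small DFAs, a shortest synchronizing word can be found by breadth-first search over subsets of states. The following is an illustrative sketch only (the thesis studies the complexity of generalizations, not this textbook procedure), applied to the classical Černý automaton C_4:

```python
from collections import deque

def synchronizing_word(states, alphabet, delta):
    """Find a shortest synchronizing word by breadth-first search over
    subsets of states, or return None if the DFA is not synchronizing.
    delta maps (state, letter) -> state. The subset graph is exponential
    in |states|, so this is only meant for small examples."""
    start = frozenset(states)
    queue = deque([(start, "")])
    seen = {start}
    while queue:
        subset, word = queue.popleft()
        if len(subset) == 1:
            return word  # every state is mapped to the same state
        for a in alphabet:
            successor = frozenset(delta[(q, a)] for q in subset)
            if successor not in seen:
                seen.add(successor)
                queue.append((successor, word + a))
    return None

# Cerny automaton C_4: 'a' rotates the states cyclically,
# 'b' merges state 0 into state 1 and fixes the rest.
# Its shortest synchronizing word famously has length (4 - 1)^2 = 9.
states = [0, 1, 2, 3]
delta = {}
for q in states:
    delta[(q, "a")] = (q + 1) % 4
    delta[(q, "b")] = 1 if q == 0 else q
word = synchronizing_word(states, "ab", delta)
```

Whatever configuration the automaton starts in, reading the returned word leaves it in the same single state, which is precisely the reset behaviour described above.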
We generalize this problem in four different ways.
First, we restrict the set of potential synchronizing words to a fixed regular language associated with the synchronization under regular constraint problem.
The motivation here is to control the structure of a synchronizing word so that, for instance, it first brings the system from an operate mode to a reset mode and then finally again into the operate mode.
The next generalization concerns the order of states in which a synchronizing word transitions the automaton. Here, a DFA A and a partial order R are given as input, and the question is whether there exists a word that synchronizes A and for which the induced state order is consistent with R. We thereby study different ways in which a word can induce an order on the state set.
Then, we shift our focus from DFAs to push-down automata and generalize the synchronization problem to push-down automata and, in the following work, to visibly push-down automata. Here, a synchronizing word still needs to map each state of the automaton to one state, but it further needs to fulfill certain constraints on the stack. We study three different types of stack constraints where, after reading the synchronizing word, the stacks associated with each run in the automaton must be (1) empty, (2) identical, or (3) arbitrary.
We observe that the synchronization problem for general push-down automata is undecidable and study restricted sub-classes of push-down automata where the problem becomes decidable. For visibly push-down automata we even obtain efficient algorithms for some settings.
The second part of this work studies the intersection non-emptiness problem for DFAs. This problem is related to the question of whether a given DFA A can be synchronized into a state q, since the set of words synchronizing A into q is the intersection of the languages accepted by the automata obtained by copying A with different initial states and with q as their final state.
For the intersection non-emptiness problem, we first study the complexity of this in general PSPACE-complete problem restricted to subclasses of DFAs associated with the well-known Straubing-Thérien and Cohen-Brzozowski dot-depth hierarchies.
Finally, we study the problem whether a given minimal DFA A can be represented as the intersection of a finite set of smaller DFAs such that the language L(A) accepted by A is equal to the intersection of the languages accepted by the smaller DFAs. There, we focus on the subclass of permutation and commutative permutation DFAs and improve known complexity bounds.
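The intersection non-emptiness problem itself admits a simple (exponential in the number of automata) decision procedure: breadth-first search on the product automaton. The sketch below is a small illustration of that construction, not the restricted-subclass results of the thesis:

```python
from collections import deque

def intersection_nonempty(dfas):
    """Decide whether several DFAs accept a common word via breadth-first
    search on the product automaton. Each DFA is a triple
    (initial state, set of final states, delta), where delta maps
    (state, letter) -> state; all DFAs share the same alphabet."""
    alphabet = {letter for (_, letter) in dfas[0][2]}
    start = tuple(initial for (initial, _, _) in dfas)
    queue, seen = deque([start]), {start}
    while queue:
        current = queue.popleft()
        if all(q in finals for q, (_, finals, _) in zip(current, dfas)):
            return True  # a word leading here is accepted by every DFA
        for letter in alphabet:
            successor = tuple(delta[(q, letter)]
                              for q, (_, _, delta) in zip(current, dfas))
            if successor not in seen:
                seen.add(successor)
                queue.append(successor)
    return False

# Two toy DFAs over {a, b} tracking the parity of the letter 'a'.
parity = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}
even_a = (0, {0}, parity)  # accepts words with an even number of a's
odd_a = (0, {1}, parity)   # accepts words with an odd number of a's
```

Since no word has both an even and an odd number of a's, the intersection of `even_a` and `odd_a` is empty, while any automaton trivially intersects itself. The product state space grows exponentially with the number of DFAs, which is the source of the PSPACE-hardness discussed above.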
Surveys play a major role in studying social and behavioral phenomena that are difficult to
observe. Survey data provide insights into the determinants and consequences of human
behavior and social interactions. Many domains rely on high quality survey data for decision
making and policy implementation including politics, health, business, and the social
sciences. Given a certain research question in a specific context, finding the most appropriate
survey design to ensure data quality and keep fieldwork costs low at the same time is a
difficult task. The aim of examining survey research methodology is to provide the best
evidence to estimate the costs and errors of different survey design options. The goal of this
thesis is to support and optimize the accumulation and sustainable use of evidence in survey
methodology in four steps:
(1) Identifying the gaps in meta-analytic evidence in survey methodology by a systematic
review of the existing evidence along the dimensions of a central framework in the
field
(2) Filling in these gaps with two meta-analyses in the field of survey methodology, one
on response rates in psychological online surveys, the other on panel conditioning
effects for sensitive items
(3) Assessing the robustness and sufficiency of the results of the two meta-analyses
(4) Proposing a publication format for the accumulation and dissemination of meta-analytic evidence
The Eurosystem's Household Finance and Consumption Survey (HFCS) collects micro data on private households' balance sheets, income and consumption. It is a stylised fact that wealth is unequally distributed and that the wealthiest own a large share of total wealth. For sample surveys which aim at measuring wealth and its distribution, this is a considerable problem. To overcome it, some of the country surveys under the HFCS umbrella try to sample a disproportionately large share of households that are likely to be wealthy, a technique referred to as oversampling. Ignoring such types of complex survey designs in the estimation of regression models can lead to severe problems. This thesis first illustrates such problems using data from the first wave of the HFCS and canonical regression models from the field of household finance and gives a first guideline for HFCS data users regarding the use of replicate weight sets for variance estimation using a variant of the bootstrap. A further investigation of the issue necessitates a design-based Monte Carlo simulation study. To this end, the already existing large close-to-reality synthetic simulation population AMELIA is extended with synthetic wealth data. We discuss different approaches to the generation of synthetic micro data in the context of the extension of a synthetic simulation population that was originally based on a different data source. We propose an additional approach that is suitable for the generation of highly skewed synthetic micro data in such a setting using a multiply-imputed survey data set. After a description of the survey designs employed in the first wave of the HFCS, we then construct new survey designs for AMELIA that share core features of the HFCS survey designs. 
A design-based Monte Carlo simulation study shows that while more conservative approaches to oversampling do not pose problems for the estimation of regression models if sampling weights are properly accounted for, the same does not necessarily hold for more extreme oversampling approaches. This issue should be further analysed in future research.
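A toy illustration (with entirely made-up numbers, not HFCS data) of why design weights matter under oversampling: if wealthy households are sampled with certainty but ordinary ones only at rate 1/10, the unweighted sample mean is badly biased, while weighting each observation by the inverse of its inclusion probability recovers the population mean exactly.

```python
# Hypothetical population: 90 "ordinary" households with wealth 10
# and 10 "wealthy" households with wealth 1000 (arbitrary units).
population = [10.0] * 90 + [1000.0] * 10
pop_mean = sum(population) / len(population)

# Oversampling design: include every wealthy household (inclusion
# probability 1.0), but only every tenth ordinary one (probability 0.1).
sample = [(10.0, 0.1)] * 9 + [(1000.0, 1.0)] * 10  # (value, inclusion prob.)

# Naive estimate: ignores the design, wealthy households are overrepresented.
unweighted = sum(value for value, _ in sample) / len(sample)

# Design-based estimate: weight each observation by 1 / inclusion probability.
weights = [1.0 / prob for _, prob in sample]
weighted = (sum(w * value for (value, _), w in zip(sample, weights))
            / sum(weights))
```

Regression estimation under such designs follows the same logic: sampling weights must enter the estimator, which is why ignoring the HFCS design can distort model results.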
The daily dose of health information: A psychological view on the health information seeking process
(2021)
The search for health information is becoming increasingly important in everyday life, and it is both socially and scientifically relevant. Previous studies have mainly focused on the design and communication of information. However, the view of the seeker, as well as individual differences in skills and abilities, has been a neglected topic so far. A psychological perspective on the process of searching for health information would provide important starting points for promoting the general dissemination of relevant information and thus improving health behaviour and health status. Within the present dissertation, the process of seeking health information was thus divided into sequential stages to identify relevant personality traits and skills. Accordingly, three studies are presented, each focusing on one stage of the process and empirically testing potentially crucial traits and skills: Study I investigates possible determinants of an intention for a comprehensive search for health information. Building an intention is considered the basic step of the search process.
Motivational dispositions and self-regulatory skills were related to each other in a structural equation model and empirically tested on the basis of theoretical considerations. The model showed an overall good fit, and specific direct and indirect effects of approach and avoidance motivation on the intention to seek comprehensively were found, which supports the theoretical assumptions. The results show that, as early as the formation of an intention, the psychological perspective reveals influential personality traits and skills. Study II deals with the subsequent step, the selection of information sources. The preference for basic characteristics of information sources (i.e., accessibility, expertise, and interaction) is related to health information literacy as a collective term for relevant skills and to intelligence as a personality trait. Furthermore, the study considers the influence of possible over- or underestimation of these characteristics. The results show not only a differing predictive contribution of health literacy and intelligence, but also the relevance of subjective and objective measurement.
Finally, Study III deals with the selection and evaluation of the health information previously found. The phenomenon of selective exposure is analysed, as it can be considered problematic in the health context. For this purpose, an experimental design was implemented in which a varying health threat was suggested to the participants. Relevant information was presented, and the selective choice of this information was assessed. Health literacy was tested as a moderator of the effect of the induced threat and perceived vulnerability, which trigger defence motives, on the degree of bias. The findings show the importance of considering these defence motives, which can cause a bias in the form of selective exposure. Furthermore, health literacy even seems to amplify this effect.
Results of the three studies are synthesized, discussed and general conclusions are drawn and implications for further research are determined.
The ability to acquire knowledge helps humans to cope with the demands of the environment. Supporting knowledge acquisition processes is among the main goals of education. Empirical research in educational psychology has identified several processes through which prior knowledge affects learning. However, the majority of studies investigated cognitive mechanisms mediating between prior knowledge and learning and neglected that motivational processes might also mediate this influence. In addition, the impact of successful knowledge acquisition on patients’ health has not been comprehensively studied. This dissertation aims at closing knowledge gaps on these topics in three studies. The first study is a meta-analysis that examined motivation as a mediator of individual differences in knowledge before and after learning. The second study investigated in greater detail the extent to which motivation mediated the influence of prior knowledge on knowledge gains in a sample of university students. The third study is a second-order meta-analysis synthesizing the results of previous meta-analyses on the effects of patient education on several health outcomes. The findings of this dissertation show that (a) motivation mediates individual differences in knowledge before and after learning; (b) interest and academic self-concept stabilize individual differences in knowledge more than academic self-efficacy, intrinsic motivation, and extrinsic motivation; (c) test-oriented instruction closes knowledge gaps between students; (d) students’ motivation can be independent of prior knowledge in high-aptitude students; (e) knowledge acquisition affects motivational and health-related outcomes; and (f) evidence on prior knowledge and motivation can help develop effective interventions in patient education. The results of the dissertation provide insights into prerequisites, processes, and outcomes of knowledge acquisition.
Future research should address covariates of learning and environmental impacts for a better understanding of knowledge acquisition processes.
Teamwork is ubiquitous in the modern workplace. However, it is still unclear whether various behavioral economic factors decrease or increase team performance. Therefore, Chapters 2 to 4 of this thesis aim to shed light on three research questions that address different determinants of team performance.
Chapter 2 investigates the idea of an honest workplace environment as a positive determinant of performance. In a work group, two out of three co-workers can obtain a bonus in a dice game. Since the die roll is secret, misreporting it makes cheating without exposure possible. Contrary to claims on the importance of honesty at work, we do not observe a reduction in the performance of the third co-worker, who is an uninvolved bystander when cheating takes place.
Chapter 3 analyzes the effect of team size on performance in a workplace environment in which either two or three individuals perform a real-effort task. Our main result shows that the difference in team size is not harmful to task performance on average. In our discussion of potential mechanisms, we provide evidence of ongoing peer effects. It appears that peers are able to alleviate the potential free-rider problem that emerges from working in a larger team.
In Chapter 4, the role of perceived co-worker attractiveness for performance is analyzed. The results show that task performance is lower, the higher the perceived attractiveness of co-workers, but only in opposite-sex constellations.
The following Chapter 5 analyzes the effect of offering an additional payment option in a fundraising context. Chapter 6 focuses on privacy concerns of research participants.
In Chapter 5, we conduct a field experiment in which participants have the opportunity to donate for the continuation of an art exhibition either in cash or in cash plus an additional cashless payment option (CPO). The treatment manipulation is completed by framing the act of giving either as a donation or as a pay-what-you-want contribution. Our results show that donors shy away from using the CPO in all treatment conditions. Despite that, there is no negative effect of the CPO on the frequency of financial support or its magnitude.
In Chapter 6, I conduct an experiment to test whether increased transparency of data processing affects data disclosure and whether the results change if it is indicated that the implementation of the GDPR happened involuntarily. I find that increased transparency raises the number of participants who do not disclose personal data by 21 percent. However, this is not the case in the involuntary-signal treatment, where the share of non-disclosures is relatively high in both conditions.
This thesis contributes to the economic literature on India and specifically focuses on investment project (IP) location choice. I study three topics that naturally arise in sequence: geographic concentration of investment projects, the determinants of the location choices, and the impact these choices have on project success.
In Chapter 2, I analyze the geographic concentration of IPs. I find that investments were concentrated over the period of observation (1996–2015), although the degree of concentration was decreasing. Additionally, I analyze different subsamples of the data set by ownership (Indian private, Indian public, and foreign) and project status (completed or dropped). Foreign projects in all industries are more concentrated than private and public ones, while for the latter categories I identify only minor differences in concentration levels. Additionally, I find that the location patterns of completed and dropped investments are similar to the overall distribution and to the distributions of their respective industries, with completed IPs being somewhat more concentrated.
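Geographic concentration of this kind is usually summarized with a concentration index. As a hedged sketch (the locational Gini coefficient below is one common choice, not necessarily the measure used in the thesis; the district-level counts are hypothetical):

```python
import numpy as np

def locational_gini(counts):
    """Gini coefficient over district-level project counts:
    0 = perfectly even spread across districts,
    values near 1 = strong geographic concentration."""
    s = np.sort(np.asarray(counts, dtype=float))
    n = s.size
    lorenz = np.cumsum(s) / s.sum()  # Lorenz curve ordinates
    # Discrete Gini from the trapezoid area under the Lorenz curve.
    return 1.0 - (2.0 * lorenz.sum() - 1.0) / n

# Hypothetical district-level counts for two ownership subsamples.
foreign = np.array([50, 5, 3, 2, 1, 1])       # highly concentrated
domestic = np.array([12, 10, 11, 9, 10, 10])  # fairly even
```

Comparing `locational_gini(foreign)` with `locational_gini(domestic)` reproduces the qualitative pattern reported above: the concentrated subsample yields the markedly higher index.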
In Chapter 3, I study the determinants of project location choices with the focus on an important highway upgrade, the Golden Quadrilateral (GQ). In line with the existing literature, the GQ construction is connected to higher levels of investment in the affected non-nodal GQ districts in 2002–2016. I also provide suggestive evidence on changes in firm behavior after the GQ construction: Firms located in the non-nodal GQ districts became less likely to invest in their neighbor districts after the GQ completion compared to firms located in districts unaffected by the GQ construction.
Finally, in Chapter 4, I investigate the characteristics of IPs that may contribute to discontinuation of their implementation by comparing completed investments to dropped ones, defined as abandoned, shelved, and stalled investments as identified on the date of the data download. Controlling for local and business cycle conditions, as well as various investor and project characteristics, I show that projects located in close proximity to the investor offices (i.e., in the same district) are more likely to achieve the completion stage than more remote projects.
While women's evolving contribution to entrepreneurship is irrefutable, in almost all nations gender disparity is an existing reality of entrepreneurship. Social and economic outcomes make women's entrepreneurship an important area for scholars and governments. In attempts to find reasons for this gender disparity, academic scholars evaluated various factors and recognised perceptual variables as having outstanding explanatory value in understanding women's entrepreneurship. To advance our knowledge of gender disparity in entrepreneurship, the present study explores the influence of entrepreneurial perceptual variables on women's entrepreneurship and considers the critical role of country-level institutional contexts in women's entrepreneurial propensity. Therefore, this study examines the impact of perceptual variables in different nations. It also offers connections between entrepreneurial perceptions, women entrepreneurship, and institutional contexts as a critical topic for future studies.
Drawing on the importance of perceptual factors, this dissertation investigates whether and how individuals' perception of entrepreneurial networks influences their decision to initiate a new venture. Prior scholars considered exposure to entrepreneurial role models one of the most influential factors in women's inclination towards entrepreneurship; thus, a systemized analysis makes it possible to identify existing research gaps related to this perception. Hence, to draw a clear picture of the relationship between entrepreneurial role models and entrepreneurship, this dissertation provides a systemized overview of prior studies. Subsequently, Chapter 2 structures the existing literature on entrepreneurial role models and reveals that past literature has focused on the different types of role models, the stage of life at which the exposure to role models occurs, and the context of the exposure. Current discourse argues that women's lower access to entrepreneurial role models negatively influences their inclination towards entrepreneurship.
Additionally, although the research on women entrepreneurship has proliferated in recent years, little is known about how entrepreneurial perceptual variables form women's propensity towards entrepreneurship in various institutional contexts. The work of Koellinger et al. (2013), hereafter KMS, is one of the most influential papers that investigated the influence of perceptual variables, and it showed that a lower rate of women entrepreneurship is associated with a lower level of their entrepreneurial network, perceived entrepreneurial capability, and opportunity evaluation and with a higher fear of entrepreneurial failure. Thus, this dissertation replicates the work of KMS. Chapter 3 explicitly investigates the influence of the above perceptions on women's entrepreneurial propensity. This research has drawn data from the Global Entrepreneurship Monitor, a cross-national individual-level data set (2001-2006) covering 236,556 individuals across 17 countries. The results of this chapter suggest that gender disparities in entrepreneurial propensity are conditioned by differences in entrepreneurial perceptual variables. Women's lower levels of perceived entrepreneurial capability, entrepreneurial role models and opportunity evaluation and their higher fear of failure lead to lower entrepreneurial propensity.
To extend and generalise the relationship between perceptions and women's entrepreneurial propensity, in Chapter 4, two studies are conducted based on the replicated research. Extension 1 generalises the results of KMS by applying the same analysis to more recent data. Accordingly, this research implemented the same analysis on 372,069 individuals across the same countries (2011-2016). The recent data show that although gender disparity became significantly weaker, the gender gap is still in men's favour. However, similarly to the replicated study, this research revealed that perceptual factors explain a large part of the gender disparity. To strengthen prior empirical evidence, extension 2 applied the same measures and analysis in a more global setting, utilising a sample of 1,029,863 individuals from 71 countries (2011-2016). When developing countries are included, gender disparity in entrepreneurial propensity decreases significantly. The study revealed that the relative significance of the influence of perceptions differs significantly across nations; however, perceptions have a worldwide effect. Moreover, this research found that the ratio of nascent women entrepreneurs in less developed countries to those in more developed nations is 2. More precisely, a higher level of economic development negatively influences the impact of perceptions on women's entrepreneurial propensity.
Whereas prior scholars increasingly underlined the importance of perceptions in explaining a large part of gender disparities in entrepreneurship, most of the prior investigations focused on nascent (early-stage) entrepreneurship, and evidence on the relationship between perceptions and other types of self-employment, such as innovative entrepreneurship, is scant. Innovation is a confirmed key driver of a firm's sustainability, higher competitive capability, and growth. Therefore, Chapter 5 investigates the influence of perceptions on women's innovative entrepreneurship. The chapter points out that entrepreneurial perceptions are the main determinants of the women's decision to offer a new product or service. This chapter also finds that women's innovative entrepreneurship is associated with the country's specific economic setting.
Overall, by underlining the critical role of institutional contexts, this dissertation provides considerable insights into the interaction between perceptions and women entrepreneurship, and its results have implications for policymakers and practitioners, who may find it helpful to consider the systemic challenges facing women entrepreneurship. Formal and informal barriers affect women's entrepreneurial perceptions and can differ from one country to another. In this sense, it is crucial to design operational plans to mitigate formal and stereotypical challenges so that more women will be able to start a business, particularly in developing countries, in which women comprise a significantly smaller portion of the labour markets. This type of policy could write the "rules of the game" such that these rules enhance women's propensity towards entrepreneurship.
The present work explores how theories of motivation can be used to enhance video game research. Currently, Flow-Theory and Self-Determination Theory are the most common approaches in the field of Human-Computer Interaction. The dissertation provides an in-depth look into Motive Disposition Theory and how to utilize it to explain interindividual differences in motivation. Different players have different preferences and make different choices when playing games, and not every player experiences the same outcomes when playing the same game. I provide a short overview of the current state of the research on motivation to play video games. Next, Motive Disposition Theory is applied in the context of digital games in four different research papers, featuring seven studies, totaling 1197 participants. The constructs of explicit and implicit motives are explained in detail while focusing on the two social motives (i.e., affiliation and power). As dependent variables, behaviour, preferences, choices, and experiences are used in different game environments (i.e., Minecraft, League of Legends, and Pokémon). The four papers are followed by a general discussion about the seven studies and Motive Disposition Theory in general. Finally, a short overview is provided about other theories of motivation and how they could be used to further our understanding of the motivation to play digital games in the future. 
This thesis proposes that 1) Motive Disposition Theory represents a valuable approach to understand individual motivations within the context of digital games; 2) there is a variety of motivational theories that can and should be utilized by researchers in the field of Human-Computer Interaction to broaden the currently one-sided perspective on human motivation; 3) researchers should aim to align their choice of motivational theory with their research goals by choosing the theory that best describes the phenomenon in question and by carefully adjusting each study design to the theoretical assumptions of that theory.
In her poems, Tawada constructs liminal speaking subjects – voices from the in-between – which disrupt entrenched binary thought processes. Synthesising relevant concepts from theories of such diverse fields as lyricology, performance studies, border studies, cultural and postcolonial studies, I develop ‘voice’ and ‘in-between space’ as the frameworks to approach Tawada’s multifaceted poetic output, from which I have chosen 29 poems and two verse novels for analysis. Based on the body speaking/writing, sensuality is central to Tawada’s use of voice, whereas the in-between space of cultures and languages serves as the basis for the liminal ‘exophonic’ voices in her work. In the context of cultural alterity, Tawada focuses on the function of language, both its effect on the body and its role in subject construction, while her feminist poetry follows the general development of feminist academia from emancipation to embodiment to queer representation. Her response to and transformation of écriture féminine in her verse novels transcends the concept of the body as the basis of identity, moving to literary and linguistic, plural self-construction instead. While few poems are overtly political, the speaker’s personal and contextual involvement in issues of social conflict reveal the poems’ potential to speak of, and to, the multiply identified citizens of a globalised world, who constantly negotiate physical as well as psychological borders.
In order to classify smooth foliated manifolds, which are smooth manifolds equipped with a smooth foliation, we introduce the de Rham cohomologies of smooth foliated manifolds. These cohomologies are built in a similar way to the de Rham cohomologies of smooth manifolds. We develop some tools to compute these cohomologies. For example, we prove a Mayer-Vietoris theorem for foliated de Rham cohomology and show that these cohomologies are invariant under integrable homotopy. A generalization of a known Künneth formula, which relates the cohomologies of a product foliation to those of its factors, is discussed. In particular, this involves a splitting theory of sequences between Fréchet spaces and a theory of projective spectra. We also prove that the foliated de Rham cohomology is isomorphic to the Čech-de Rham cohomology and to the Čech cohomology of leafwise constant functions of an underlying so-called good cover.
Soils in forest ecosystems bear a high potential as carbon (C) sinks in the mitigation of climate change. The amount and characteristics of soil organic matter (SOM) are driven by inputs, transformation, degradation and stabilization of organic substances. While tree species fuel the C cycle by producing aboveground and belowground litter, soil microorganisms are crucial for litter degradation as well as the formation and stabilization of SOM. Nonetheless, our knowledge about the tree species effect on the SOM status is limited, inconsistent and blurred. The investigation of tree species effects on SOM is challenging because in long-established forest ecosystems the spatial distribution of tree species is a result of the interplay of environmental factors including climate, geomorphology and soil chemistry. Moreover, tree distribution can further vary with forest successional stage and silvicultural management. Since these factors also directly affect the soil C status, it is difficult to identify a pure “tree species effect” on the SOM status at regular forested sites. It therefore remains unclear to what extent tree species-specific litter of different quality influences the microbially driven turnover and formation of SOM.
Tree species effects on SOM and related soil microbial properties were investigated by examining soil profiles (comprising organic forest floor horizons and mineral soil layers) in different forest stands at the recultivated spoil heap ‘Sophienhöhe’ located at the lignite open-cast mine Hambach near Jülich, Germany. The afforested sites comprised monocultural stands of Douglas fir (Pseudotsuga menziesii), black pine (Pinus nigra), European beech (Fagus sylvatica) and red oak (Quercus rubra) as well as a mixed deciduous stand site planted mainly with hornbeam (Carpinus betulus), lime (Tilia cordata) and common oak (Quercus robur) that were grown for 35 years under identical soil and geomorphological conditions. Because the parent material used for site recultivation was free from organic matter or coal material, the SOM accumulation is entirely the result of in situ soil development due to the impact of tree species.
The first study revealed that tree species had a significant effect on soil organic carbon (SOC) stocks, stoichiometric patterns of C, nitrogen (N), sulfur (S), hydrogen (H) and oxygen (O) as well as the microbial biomass carbon (MBC) content in the forest floor and the top mineral soil layers (0-5 cm, 5-10 cm, 10-30 cm). In general, forest floor SOC stocks were significantly higher at coniferous forest stands compared to deciduous tree species, whereas in mineral soil layers the differences were smaller. Thus, the impact of tree species decreased with increasing soil depth. By investigating the linkage of the natural abundance of 13C and 15N in the soil depth gradients with C:N and O:C stoichiometry, the second study showed that differences in SOC stocks and SOM quality resulted from a tree species-dependent turnover of SOM. The significantly higher turnover of organic matter in soils under deciduous tree species was explained to 46 % by the quality of litterfall and root inputs (N content, C:N, O:C ratio) and by the initial isotopic signatures of litterfall. Hence, SOM composition and turnover also depend on additional – presumably microbially driven – factors. The subsequent results of the third study revealed that differences in SOM composition and related soil microbial properties were linked to different microbial communities. Phospholipid fatty acid (PLFA) patterns in the soil profiles indicated that the supply and availability of C- and nutrient-rich substrates drive the distribution of fungi, Gram-positive (G+) bacteria and Gram-negative (G−) bacteria between tree species and along the soil depth gradients. The fourth study investigated the molecular composition of extractable soil microbial biomass-derived (SMB) and SOM-derived compounds by electrospray ionization Fourier transform ion cyclotron resonance mass spectrometry (ESI-FT-ICR-MS). This was complemented by the analysis of nine monosaccharides representing microbial or plant origin.
Microbially derived compounds substantially contributed to SOM and the contribution increased with soil depth. The supply of tree species-specific substrates resulted in different chemical composition of SMB with largest differences between deciduous and coniferous stands. At the same time, microorganisms contributed to SOM resulting in a strong similarity in the composition of SOM and SMB.
Overall, the complex interplay of tree species-specific litter inputs and the ability, activity and efficiency of the associated soil fauna and microbial community in metabolizing the organic substrates leads to significant differences in the amount, distribution, quality and, consequently, the stability of SOM. These findings are useful for a targeted cultivation of tree species to optimize soil C sequestration and other forest ecosystem services.
Estimation and therefore prediction -- both in traditional statistics and machine learning -- often encounter problems when performed on survey data, i.e. on data gathered from a random subset of a finite population. In addition to the stochastic generation of the data in the finite population (based on a superpopulation model), the subsetting represents a second randomization process and adds further noise to the estimation. The character and impact of the additional noise on the estimation procedure depend on the specific probability law for subsetting, i.e. the survey design. Especially when the design is complex or the population data are not generated by a Gaussian distribution, established methods must be re-thought. Both phenomena can be found in business surveys, and their combined occurrence poses challenges to the estimation.
This work introduces selected topics linked to relevant use cases of business surveys and discusses the role of survey design therein: First, consider micro-econometrics using business surveys. Regression analysis under the peculiarities of non-normal data and complex survey design is discussed. The focus lies on mixed models, which are able to capture unobserved heterogeneity e.g. between economic sectors, when the dependent variable is not conditionally normally distributed. An algorithm for survey-weighted model estimation in this setting is provided and applied to business data.
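The survey-weighted estimation idea can be sketched in its simplest, linear special case (a pseudo-maximum-likelihood sketch on synthetic data with made-up design weights; the thesis treats the considerably harder mixed-model setting):

```python
import numpy as np

def survey_weighted_ols(X, y, w):
    """Pseudo-maximum-likelihood estimator for a linear model:
    solves (X'WX) beta = X'Wy, where the diagonal of W carries the
    sampling weights (inverse inclusion probabilities)."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

# Synthetic superpopulation model y = 1 + 2x + noise; the weights
# below are hypothetical design weights, not from any real survey.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
w = rng.uniform(1.0, 3.0, size=n)
beta = survey_weighted_ols(X, y, w)
```

With weights unrelated to the error term, the weighted estimator recovers the superpopulation coefficients; in a real design the weights additionally protect against informative sampling.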
Second, in official statistics, the classical sampling randomization and estimators for finite population totals are relevant. The variance estimation of estimators for (finite) population totals plays a major role in this framework in order to decide on the reliability of survey data. When the survey design is complex and an estimated total is required for a large number of variables, generalized variance functions are popular for variance estimation. They make it possible to circumvent cumbersome theoretical design-based variance formulae or computer-intensive resampling. A synthesis of the superpopulation-based motivation and the survey framework is elaborated. To the author's knowledge, such a synthesis is studied for the first time both theoretically and empirically.
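A generalized variance function can be sketched as follows, assuming the classic two-parameter model relvar(t) = a + b/t for the relvariance (squared CV) of an estimated total t. The data are synthetic and exactly model-consistent for clarity; the thesis's model and fitting procedure may differ:

```python
import numpy as np

# Synthetic estimated totals and their (here exactly model-consistent)
# relvariances for a handful of survey variables.
totals = np.array([1.0e3, 5.0e3, 2.0e4, 1.0e5, 5.0e5])
relvar = 0.001 + 50.0 / totals

# Fit the GVF model  relvar = a + b / total  by least squares.
A = np.column_stack([np.ones_like(totals), 1.0 / totals])
a, b = np.linalg.lstsq(A, relvar, rcond=None)[0]

def gvf_relvar(total):
    """Predict the relvariance of a new estimated total from the fitted GVF."""
    return a + b / total
```

Once fitted on a modest set of direct variance estimates, the function predicts variances for all remaining totals without design-based formulae or resampling.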
Third, the self-organizing map -- an unsupervised machine learning algorithm for data visualization, clustering and even probability estimation -- is introduced. A link to Markov random fields is outlined, which to the author's knowledge has not yet been established, and a density estimator is derived. The latter is evaluated in a Monte Carlo simulation and then applied to real-world business data.
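A minimal sketch of the self-organizing map's training loop on synthetic data (the grid size, learning-rate schedule and Gaussian neighbourhood below are common textbook choices, not the dissertation's configuration):

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal rectangular self-organizing map trained online: for each
    sample, find the best-matching unit (BMU) and pull all nodes towards
    the sample with a shrinking Gaussian neighbourhood kernel."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    # Grid coordinates of each node, used by the neighbourhood kernel.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = 1.0 - t / n_steps            # linear decay of rate and radius
            lr = lr0 * frac
            sigma = sigma0 * frac + 1e-3
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            kernel = np.exp(-d2 / (2.0 * sigma ** 2))
            weights += lr * kernel[:, None] * (x - weights)
            t += 1
    return weights

# Two well-separated synthetic clusters; after training the node weights
# should lie close to the data rather than at their random initialisation.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(-3.0, 0.3, (100, 2)),
                  rng.normal(3.0, 0.3, (100, 2))])
som_weights = train_som(data)
```

The trained weights form a topology-preserving codebook; density estimation on top of such a map (as in the thesis) additionally requires normalizing the nodes' local responsibilities.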
Retirement, fertility and sexuality are three key life stage events that are embedded in the framework of population economics in this dissertation. Each topic implies economic relevance. As retirement entry shifts labour supply of experienced workers to zero, this issue is particularly relevant for employers, retirees themselves as well as policymakers who are in charge of the design of the pension system. Giving birth has comprehensive economic relevance for women. Parental leave and subsequent part-time work lead to a direct loss of income. Lower levels of employment, work experience, training and career opportunities result in indirect income losses. Sexuality has decisive influence on the quality of partnerships, subjective well-being and happiness. Well-being and happiness, in turn, are significant key determinants not only in private life but also in the work domain, for example in the area of job performance. Furthermore, partnership quality determines the duration of a partnership. And in general, partnerships enable the pooling of (financial) resources - compared to being single. The contribution of this dissertation emerges from the integration of social and psychological concepts into economic analysis as well as the application of economic theory in non-standard economic research topics. The results of the three chapters show that the multidisciplinary approach yields better prediction of human behaviour than the single disciplines on their own. The results in the first chapter show that both interpersonal conflict with superiors and the individual’s health status play a significant role in retirement decisions. The chapter further contributes to existing literature by showing the moderating role of health within the retirement decision-making: On the one hand, all employees are more likely to retire when they are having conflicts with their superior. On the other hand, among healthy employees, the same conflict raises retirement intentions even more. 
That means good health is a necessary but not a sufficient condition for continued working. It may be that conflicts with superiors raise retirement intentions more if the worker is healthy. The key findings of the second chapter reveal a significant influence of religion on contraceptive and fertility-related decisions. A large part of the research on religion and fertility originates from US evidence. This chapter contrasts it with evidence from Germany. Additionally, the chapter contributes by integrating miscarriages and abortions rather than limiting the analysis to births, and it benefits from rich prospective data on the fertility biographies of women. The third chapter provides theoretical insights on how to incorporate psychological variables into an economic framework that aims to analyse sexual well-being. According to this theory, personality may play a dual role by shaping a person’s preferences for sex as well as the person’s behaviour in a sexual relationship. Results of the econometric analysis reveal detrimental effects of neuroticism on sexual well-being, while conscientiousness seems to create a win-win situation for a couple. Extraversion and openness have ambiguous effects on romantic relationships, enhancing sexual well-being on the one hand but raising commitment problems on the other. Agreeable persons seem to gain sexual satisfaction even if they perform worse in sexual communication.
Our goal is to approximate energy forms on suitable fractals by discrete graph energies and certain metric measure spaces, using the notion of quasi-unitary equivalence. Quasi-unitary equivalence generalises the two concepts of unitary equivalence and norm resolvent convergence to the case of operators and energy forms defined in varying Hilbert spaces.
More precisely, we prove that the canonical sequence of discrete graph energies (associated with the fractal energy form) converges to the energy form (induced by a resistance form) on a finitely ramified fractal in the sense of quasi-unitary equivalence. Moreover, we allow a perturbation by magnetic potentials and we specify the corresponding errors.
The aforementioned approach is an approximation of the fractal from within (by an increasing sequence of finitely many points). The natural next step is the question of whether one can also approximate fractals from outside, i.e., by a suitable sequence of shrinking supersets. We partly answer this question by restricting ourselves to a very specific structure of the approximating sets, namely so-called graph-like manifolds that respect the structure of the fractals and of the underlying discrete graphs, respectively. Again, we show that the canonical (properly rescaled) energy forms on such a sequence of graph-like manifolds converge to the fractal energy form (in the sense of quasi-unitary equivalence).
From the quasi-unitary equivalence of energy forms, we conclude the convergence of the associated linear operators, convergence of the spectra and convergence of functions of the operators – thus essentially the same as in the case of the usual norm resolvent convergence.
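The resolvent estimate behind these conclusions can be indicated schematically (a generic formulation from the literature on quasi-unitary equivalence; the precise norms and constants are those specified in the thesis). One asks for bounded identification operators $J\colon \mathcal H \to \tilde{\mathcal H}$ and $J'\colon \tilde{\mathcal H} \to \mathcal H$ that are quasi-inverse to each other and almost intertwine the energy forms $\mathfrak a$ and $\tilde{\mathfrak a}$:

```latex
\|f - J'Jf\|_{\mathcal H} \le \delta \,\|f\|_{\mathfrak a}, \qquad
\|u - JJ'u\|_{\tilde{\mathcal H}} \le \delta \,\|u\|_{\tilde{\mathfrak a}}, \qquad
|\tilde{\mathfrak a}(Jf,u) - \mathfrak a(f,J'u)| \le \delta \,\|f\|_{\mathfrak a}\,\|u\|_{\tilde{\mathfrak a}},
```

where $\|\cdot\|_{\mathfrak a}$ denotes the form norm. Such a pair of operators yields a norm estimate on the resolvent difference of the associated operators $A$ and $\tilde A$,

```latex
\bigl\| (A+1)^{-1} - J'(\tilde A + 1)^{-1} J \bigr\|_{\mathcal H \to \mathcal H} \le C\,\delta,
```

which for $\mathcal H = \tilde{\mathcal H}$ and $J = J' = \mathrm{id}$ reduces to ordinary norm resolvent convergence; the convergence of spectra and of functions of the operators then follows as stated above.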
The polar regions are characterized by harsh environmental conditions with extremely cold temperatures and winds. Particularly during the polar night, temperatures down to -89.2 °C have been observed on the Antarctic Plateau. As a consequence of the strong cooling, the ocean water begins to freeze and ice production sets in. The Antarctic Ocean exhibits pronounced interannual and intra-annual variability, with the ice cover varying between 2.07 × 10^6 km^2 in summer and 20.14 × 10^6 km^2 in winter. Ice production and ice melt influence the atmospheric and oceanic circulation. Dynamic processes lead to the formation of cracks in the ice and ultimately to the emergence of leads. Leads are elongated fractures that are at least several meters wide and can be hundreds of meters to hundreds of kilometers long. In these openings, the warm ocean water is in contact with the cold atmosphere, which strongly enhances the exchange rates of sensible and latent heat, moisture, and gases. Leads contribute to ice production in the polar regions and serve as habitat for numerous animals. Leads, the central subject of the presented study, have so far been insufficiently studied and observed in the Southern Ocean. The aim is therefore to develop an algorithm that automatically identifies leads in remote sensing data. Thermal-infrared satellite data of the Moderate-Resolution Imaging Spectroradiometer (MODIS) are used, which is mounted on the two satellites Aqua and Terra and has provided satellite imagery since 2000 (Terra) and 2002 (Aqua), respectively. The individual satellite images contain the ice surface temperature of the MOD/MYD 29 product, which is processed in a two-stage algorithm for the period April to September, 2003 to 2019.
In the first step, potential leads are identified based on a local positive temperature anomaly. Because of artifacts, further temperature- and texture-based parameters are derived and merged into daily composites. These are used in the second processing stage to separate cloud artifacts from true lead observations. Here, fuzzy logic is employed and an Antarctic-specific configuration is defined, in which selected input data from the first processing level are used to compute a final proxy, the lead score (LS). The LS is finally converted into an uncertainty by means of manual quality control. The artifacts identified in this way can additionally be used alongside the MODIS cloud mask.
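The positive-anomaly step can be sketched as follows (the window size, the 1.5 K threshold, and the separable box filter are illustrative assumptions, not the thesis's configuration):

```python
import numpy as np

def local_mean(a, size=51):
    """Separable box filter (moving average over rows, then columns).
    np.convolve zero-pads, so values near the image border are biased;
    this sketch ignores those edge artifacts."""
    k = np.ones(size) / size
    sm = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, a)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, sm)

def lead_candidates(ist, size=51, threshold=1.5):
    """Flag pixels whose ice-surface temperature (IST, in kelvin)
    exceeds the local background by more than `threshold` K."""
    anomaly = ist - local_mean(ist, size)
    return anomaly > threshold

# Synthetic scene: cold pack ice at 245 K crossed by a warm lead at 255 K.
ist = np.full((200, 200), 245.0)
ist[100, :] = 255.0
mask = lead_candidates(ist)
```

On this toy scene the warm row stands out against the smoothed background; the second processing stage described above would then be needed to reject warm cloud artifacts that produce the same signature.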
Based on the lead observations, a climatological reference data set is compiled that shows the representative lead distribution in the Antarctic Ocean for the winter months April to September, 2003 to 2019. It reveals that leads occur more systematically in some regions than in others, above all along the coastline, the continental shelf break, and some rises and channels in the deep sea. The elevated frequencies along the shelf break are of particular interest, and the role of atmospheric and oceanic influences is examined. A regional ice-ocean model is used to relate oceanic influences to elevated lead frequencies.
The present study also provides a comprehensive overview of the large-scale variability of Antarctic sea ice. Daily sea ice concentration data derived from passive microwave observations for the period 1979 to 2018 are used for the classification. The k-means algorithm is applied to identify ten representative ice classes. The geographical distribution of these classes is presented as a map in which the typical annual ice cycle of each class is visible.
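The classification idea — grouping grid cells by the shape of their annual ice concentration cycle — can be illustrated with a plain Lloyd's k-means on synthetic cycles. The two toy classes (perennial vs. seasonal ice cover) and all parameters here are invented for illustration:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm clustering the rows of X into k classes,
    with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

# toy "grid cells": 12-month sea ice concentration cycles (%),
# 20 cells perennially ice-covered, 20 with a strong seasonal cycle
rng = np.random.default_rng(1)
perennial = 90 + rng.normal(0, 2, (20, 12))
seasonal = np.clip(50 + 40 * np.sin(np.linspace(0, 2 * np.pi, 12))
                   + rng.normal(0, 2, (20, 12)), 0, 100)
X = np.vstack([perennial, seasonal])
labels, _ = kmeans(X, k=2)
```

Applied to the 1979–2018 passive microwave record, the same principle yields the ten representative ice classes described above.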
Changes in the spatial occurrence of the ice classes are identified and interpreted qualitatively. Positive deviations toward higher ice classes are found in the Weddell and Ross Seas and in some regions of East Antarctica. Negative deviations occur in the Amundsen-Bellingshausen Sea. The newly developed Climatological Sea Ice Anomaly Index is used to identify class deviations in the time series. On this basis, three years (1986, 2007, 2014) are selected for a case study and examined in relation to atmospheric data from ERA-Interim and to ice drift data. For the years 1986 and 2007, specific atmospheric circulation patterns can be identified that influenced the respective ice classification. For 2014, no particularly pronounced atmospheric anomalies can be detected.
In the future, the ice class data set can be used to complement existing studies and to validate sea ice models. Applications in connection with the lead data set are particularly promising.
At the political level, the financing of micro, small, and medium-sized enterprises (SMEs) has gained great importance as a result of the European financial and economic crisis, since more than 99% of all European firms belong to this category. In response to the often difficult financing situation of SMEs, which can substantially endanger the innovative capacity and development of the European economy, special government programs were launched. Despite the increased political and academic interest in SME financing, there is little empirical evidence at the European level. In five empirical studies, this dissertation therefore addresses current research gaps regarding the financing of micro, small, and medium-sized enterprises in Europe and new financing instruments for innovative firms and start-ups.
First, based on two empirical investigations (Chapters 2 and 3), the status quo of SME financing in Europe is presented. SME financing in Europe is very heterogeneous. On the one hand, SMEs are not a homogeneous group: micro (< 10 employees), small (10–49 employees), and medium-sized (50–249 employees) enterprises differ not only in their characteristics but also in their financing options and needs. On the other hand, there are country differences in SME financing within Europe. The results of these two studies (Chapters 2 and 3), which are based on a survey by the European Central Bank and the European Commission (the SAFE survey), illustrate this: SMEs in Europe use different financing patterns and employ them in complementary or substitutive ways. These financing patterns are in turn shaped by firm-, product-, and country-specific characteristics, but also by macroeconomic variables (e.g., inflation rates).
Chapter 3 of the dissertation specifically examines how the financing of micro enterprises differs from that of small and medium-sized enterprises. While small and medium-sized enterprises use a large number of different financing instruments in parallel (e.g., subsidized bank loans alongside bank loans, overdrafts, and trade credit), micro enterprises rely on few instruments at a time (in particular short-term debt). Consequently, micro enterprises finance themselves either internally or through overdrafts. The results of the dissertation thus show that SME financing is not homogeneous. Micro enterprises in particular should be treated as a distinct group within the SME category with characteristic financing patterns.
Innovative firms and start-ups are considered an important engine of regional economic development. They, too, are frequently associated in the academic literature with financing difficulties that hamper their growth and survival. The second part of the dissertation therefore comprises two empirical studies on this topic. First, Chapter 4 examines the regional and firm-specific factors that increase intellectual property output. Regional factors in particular have been insufficiently studied so far, although they are especially relevant for policy makers. The results of this study show that receiving venture capital, alongside firm size, has a significant influence on the level of intellectual property output. While technical universities play no role with respect to this output, a significantly positive effect of the student rate on intellectual property output emerges. Building on these results, a second study focuses specifically on venture capital as a financing instrument and distinguishes between different VC types: governmental, independent, and corporate venture capital firms. The results show that regions offering qualified human capital in particular attract governmental venture capital investments. Furthermore, corporate and governmental venture capital firms in particular increasingly invest in rural regions.
In recent years, the initial coin offering (ICO) has emerged as a new financing instrument for particularly innovative entrepreneurs, which Chapter 5 examines in detail. Using a time series analysis, market cycles of ICO campaigns and of Bitcoin and Ether prices are analyzed. The results of this study show that past ICOs positively influence subsequent ICOs. Moreover, ICOs have a negative influence on the cryptocurrencies Bitcoin and Ether, whereas the price of Bitcoin has a positive effect on the price of Ether.
This work studies typical mathematical challenges occurring in the modeling and simulation of manufacturing processes of paper or industrial textiles. In particular, we consider three topics: approximate models for the motion of small inertial particles in an incompressible Newtonian fluid, effective macroscopic approximations for a dilute particle suspension contained in a bounded domain accounting for a non-uniform particle distribution and particle inertia, and possibilities for a reduction of computational cost in the simulations of slender elastic fibers moving in a turbulent fluid flow.
We consider the full particle-fluid interface problem given in terms of the Navier-Stokes equations coupled to momentum equations of a small rigid body. By choosing an appropriate asymptotic scaling for the particle-fluid density ratio and using an asymptotic expansion for the solution components, we derive approximations of the original interface problem. The approximate systems differ according to the chosen scaling of the density ratio in their physical behavior allowing the characterization of different inertial regimes.
We extend the asymptotic approach to the case of many particles suspended in a Newtonian fluid. Under specific assumptions on the combination of particle size and particle number, we derive asymptotic approximations of this system. The approximate systems describe the particle motion, which allows us to use a mean field approach in order to formulate the continuity equation for the particle probability density function. The coupling of the latter with the approximation for the fluid momentum equation then reveals a macroscopic suspension description which accounts for non-uniform particle distributions in space and for small particle inertia.
A slender fiber in a turbulent air flow can be modeled as a stochastic inextensible one-dimensionally parametrized Kirchhoff beam, i.e., by a stochastic partial differential algebraic equation. Its simulation involves the solution of large non-linear systems of equations by Newton's method. In order to decrease the computational time, we explore different methods for the estimation of the solution. Additionally, we apply smoothing techniques to the Wiener process in order to regularize the stochastic force driving the fiber, exploring their respective impact on the solution and performance. We also explore the applicability of the Wiener chaos expansion as a solution technique for the simulation of the fiber dynamics.
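One simple smoothing technique of the kind mentioned above is a moving average applied to the increments of the driving Wiener process; the window length here is an illustrative choice, not the one used in the thesis:

```python
import numpy as np

def smoothed_wiener(n_steps, dt, window, seed=0):
    """Sample a Wiener path and regularize it by a moving average of its
    increments (one simple smoothing choice; `window` is illustrative)."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)
    kernel = np.ones(window) / window
    dW_smooth = np.convolve(dW, kernel, mode="same")
    return np.cumsum(dW), np.cumsum(dW_smooth)

def quadratic_variation(path):
    """Sum of squared increments: a measure of the path's roughness."""
    return float(np.sum(np.diff(path) ** 2))

W, W_s = smoothed_wiener(n_steps=1000, dt=1e-3, window=21)
```

Smoothing damps the quadratic variation of the driving noise, which regularizes the stochastic force acting on the fiber at the price of a modeling error.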
This thesis addresses three different topics from the fields of mathematical finance, applied probability and stochastic optimal control. Correspondingly, it is subdivided into three independent main chapters each of which approaches a mathematical problem with a suitable notion of a stochastic particle system.
In Chapter 1, we extend the branching diffusion Monte Carlo method of Henry-Labordère et al. (2019) to the case of parabolic PDEs with mixed local-nonlocal analytic nonlinearities. We investigate branching diffusion representations of classical solutions, and we provide sufficient conditions under which the branching diffusion representation solves the PDE in the viscosity sense. Our theoretical setup directly leads to a Monte Carlo algorithm, whose applicability is showcased in two stylized high-dimensional examples. As our main application, we demonstrate how our methodology can be used to value financial positions with defaultable, systemically important counterparties.
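The flavor of a branching diffusion representation can be conveyed with the classical FKPP equation u_t = ½ u_xx + u² − u, u(0, ·) = g, whose solution equals the expected product of g evaluated at the leaves of a rate-1 binary branching Brownian motion. This is a stylized scalar example, not the mixed local-nonlocal setting of the chapter:

```python
import math
import random

def branching_mc(g, t, x, n_samples=20000, seed=7):
    """Monte Carlo branching diffusion representation for
        u_t = 0.5 * u_xx + u^2 - u,   u(0, x) = g(x):
    u(t, x) = E[ prod_i g(X_i(t)) ], where particles move as Brownian
    motions and branch into two at rate 1 (McKean's representation)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        particles = [(x, t)]  # (position, remaining time)
        prod = 1.0
        while particles:
            pos, rem = particles.pop()
            tau = rng.expovariate(1.0)      # time until the next branching
            if tau >= rem:                  # no branching: evaluate g at the leaf
                prod *= g(pos + rng.gauss(0.0, math.sqrt(rem)))
            else:                           # branch into two descendants
                new_pos = pos + rng.gauss(0.0, math.sqrt(tau))
                particles.append((new_pos, rem - tau))
                particles.append((new_pos, rem - tau))
        total += prod
    return total / n_samples

# sanity check: for constant g = c the PDE reduces to the ODE u' = u^2 - u
c, t = 0.5, 1.0
exact = c * math.exp(-t) / (1 - c * (1 - math.exp(-t)))
approx = branching_mc(lambda _: c, t, 0.0)
```

For a constant initial condition, the Monte Carlo value can be checked against the closed-form logistic ODE solution, which is what the last two lines do.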
In Chapter 2, we formulate and analyze a mathematical framework for continuous-time mean field games with finitely many states and common noise, including a rigorous probabilistic construction of the state process. The key insight is that we can circumvent the master equation and reduce the mean field equilibrium to systems of forward-backward (random) ordinary differential equations by conditioning on common noise events. We state and prove a corresponding existence theorem, and we illustrate our results in three stylized application examples. In the absence of common noise, our setup reduces to that of Gomes, Mohr and Souza (2013) and Cecchin and Fischer (2020).
In Chapter 3, we present a heuristic approach to tackle stochastic impulse control problems in discrete time. Based on the work of Bensoussan (2008), we reformulate the classical Bellman equation of stochastic optimal control in terms of a discrete-time quasi-variational inequality (QVI), and we prove a corresponding verification theorem. Taking the resulting optimal impulse control as a starting point, we devise a self-learning algorithm that estimates the continuation and intervention regions of such a problem. Its key features are that it explores the state space of the underlying problem by itself and successively learns the behavior of the optimally controlled state process. For illustration, we apply our algorithm to a classical example problem, and we give an outlook on open questions to be addressed in future research.
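The discrete-time QVI structure can be made concrete with a toy problem: a symmetric random walk with running cost |x|, where an impulse resets the state to 0 at a fixed cost K. Plain value iteration then recovers the continuation and intervention regions. All parameters are illustrative; the chapter's self-learning algorithm estimates these regions without such a full state-space sweep:

```python
import numpy as np

def impulse_value_iteration(xmax=10, K=5.0, beta=0.9, iters=500):
    """Value iteration for a toy discrete-time impulse control QVI:
    the state follows a +/-1 random walk on {-xmax, ..., xmax} (reflected
    at the boundary), running cost |x|, discount beta, and an impulse
    resets the state to 0 at fixed cost K.  The QVI reads
        V(x) = min( |x| + beta * E[V(x + xi)],  K + V(0) )."""
    xs = np.arange(-xmax, xmax + 1)
    idx = np.arange(len(xs))
    V = np.zeros(len(xs))
    for _ in range(iters):
        left = V[np.clip(idx - 1, 0, len(xs) - 1)]    # reflect at boundary
        right = V[np.clip(idx + 1, 0, len(xs) - 1)]
        cont = np.abs(xs) + beta * 0.5 * (left + right)
        interv = K + V[xmax]                          # index xmax <-> state 0
        V = np.minimum(cont, interv)
    region = xs[cont <= interv]                       # continuation region
    return V, region

V, region = impulse_value_iteration()
```

As expected, the continuation region is a symmetric interval around 0, and impulses are only triggered for sufficiently large |x|.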
Traditionally, random sample surveys are designed so that national statistics can be estimated reliably with adequate precision. For this purpose, primarily design-based, model-assisted estimation methods are used, which rely largely on asymptotic properties. For smaller sample sizes, as encountered in small areas (domains or subpopulations), these estimation methods are rather unsuitable, which is why special model-based small area estimation methods have been developed for this application. The latter may be biased, but they often have a smaller mean squared error than design-based estimators. Model-assisted and model-based methods have in common that they rely on statistical models, albeit to different degrees. Model-assisted methods are usually constructed so that the contribution of the model is small for very large sample sizes (and vanishes in the limit). In model-based methods, the model always plays a central role, regardless of the sample size. These considerations illustrate that the assumed model, or more precisely the quality of the modeling, is of crucial importance for the quality of small area statistics. If the empirical data cannot be described by a suitable model and estimated with the corresponding methods, severe biases and/or inefficient estimates may result.
The present work addresses the central question of the robustness of small area estimation methods. Statistical methods are called robust if they have a bounded influence function and the highest possible breakdown point. Put simply, robust methods are only marginally affected by outliers and other anomalies in the data. The robustness investigation focuses on the following models and estimation methods:
i) model-based estimators for the Fay-Herriot model (Fay and Herriot, 1979, J. Amer. Statist. Assoc.) and the basic unit-level model (cf. Battese et al., 1988, J. Amer. Statist. Assoc.);
ii) direct, model-assisted estimators under the assumption of a linear regression model.
The unit-level model for mean estimation is based on a Gaussian mixed linear model (MLM) with block-diagonal covariance matrix. In contrast to, e.g., a multiple linear regression model, MLMs possess no noteworthy invariance properties, so that contamination of the dependent variable inevitably leads to biased parameter estimates. For the maximum likelihood method, the resulting bias can become almost arbitrarily large. For this reason, Richardson and Welsh (1995, Biometrics) developed the robust estimation methods RML 1 and RML 2, which exhibit only a small bias under contaminated data and are considerably more efficient than the maximum likelihood method. A variant of RML 2 was proposed by Sinha and Rao (2009, Canad. J. Statist.) for the robust estimation of unit-level models. However, the numerical procedures commonly used to compute the RML 2 method (this also applies to the proposal by Sinha and Rao) prove to be notoriously unreliable. In this work, the convergence problems of the existing procedures are first discussed, and then a numerical method with considerably better numerical properties is proposed. Finally, the proposed estimation method is examined in a simulation study and illustrated with an empirical example on the estimation of above-ground biomass in Norwegian municipalities.
The Fay-Herriot model can be regarded as a special case of an MLM with block-diagonal covariance matrix, although the variances of the random effects for the small areas need not be estimated but are treated as known quantities. This property can be exploited to transfer the robustification of the unit-level model proposed by Sinha and Rao (2009) directly to the Fay-Herriot model. In the present work, however, an alternative proposal is developed, starting from the following observation: Fay and Herriot (1979) motivated their model as a generalization of the James-Stein estimator, making use of an empirical Bayes approach. We take up this motivation and formulate an analogous robust Bayesian procedure. If one chooses the least favorable distribution of Huber (1964, Ann. Math. Statist.) as the prior distribution for the location values of the small areas in the robust Bayesian formulation, the resulting Bayes estimator [i.e., the estimator with the smallest Bayes risk] is the limited translation rule (LTR) of Efron and Morris (1971, J. Amer. Statist. Assoc.). In the context of frequentist statistics, the limited translation rule cannot be used because, as a Bayes estimator, it depends on unknown parameters. These unknown parameters can, however, be estimated from the marginal distribution of the dependent variable following the empirical Bayes approach. It should be noted here (and this has been neglected in the literature) that under the least favorable prior the marginal distribution is not a normal distribution but is itself described by Huber's (1964) least favorable distribution.
It is then not surprising that the maximum likelihood estimators of the regression coefficients and the model variance under this marginal distribution are M-estimators with the Huber psi-function.
Our theory-driven derivation of robust estimators for the Fay-Herriot model shows that, under contaminated data, the estimated LTR (with parameter estimates obtained by the M-estimation methodology) is optimal, and that the LTR is an integral part of the estimation methodology (rather than an "add-on" or the like, as it is treated elsewhere). The proposed M-estimators are robust in the presence of atypical small areas (outliers), as the simulation and case studies also show. To achieve robustness also in the presence of influential observations in the independent variables, generalized M-estimators were developed for the Fay-Herriot model.
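The two building blocks named above, Huber's psi-function and the limited translation rule, are easy to state. The following sketch (with invented numbers and an illustrative cap c) shows how the LTR bounds the amount by which a direct small-area estimate is shrunk toward the synthetic component:

```python
import numpy as np

def huber_psi(z, c):
    """Huber's psi-function: the identity on [-c, c], clipped outside."""
    return np.clip(z, -c, c)

def limited_translation(y, mu, B, c):
    """Limited translation rule (sketch): shrink the direct estimate y
    toward the synthetic estimate mu by B * (y - mu), but never move it
    by more than c.  With c = inf this is plain linear shrinkage."""
    return y - huber_psi(B * (y - mu), c)

y = np.array([1.0, 2.0, 10.0])   # direct small-area estimates; 10.0 is atypical
mu, B, c = 1.5, 0.6, 1.0         # illustrative synthetic mean, shrinkage, cap
est = limited_translation(y, mu, B, c)
```

For ordinary areas the rule coincides with linear shrinkage; for the outlying area the translation is capped at c, so the direct estimate is largely retained, which is exactly what makes the rule robust.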
Human behavior in regard to financial issues has long been explained in the light of the efficient market hypothesis. Following the strict interpretation of this theory, investors in the financial markets take into account that all relevant information is already included in the market price of an asset. Accordingly, information from the past does not affect future prices, as all information is instantly incorporated. However, focusing on the actual behavior of humans, our empirical results indicate that the existing market conditions influence the behavior of stock market investors.
In the introductory chapter, we describe the difficulties of the efficient markets hypothesis in explaining the behavior of investors within a strictly rational frame. In the second chapter, we show that investors do consider the previous market development for their upcoming investment decisions. First, stock market patterns with predominantly positive days trigger significantly more trades than patterns with negative days. And second, after recent upward movements, investors sell proportionally more stocks than they buy. In the third chapter, we expound a theoretical framework that connects investment-related triggers of arousal, such as the performance of own stocks and the general market environment, with investors’ risk appetite in the decision-making processes. Our model predicts that aroused investors accept higher risks by holding stocks longer in comparison to their less aroused peers. In the fourth chapter, we show how two extreme market environments, the bull and the bear market, affect the disposition effect and especially learning to avoid this behavioral bias. Investors are subject to the bias in each market phase but with a far stronger propensity during the bear market. However, we show that investors also make the greatest progress in avoiding the disposition effect during this period.
These results suggest that future studies about investors’ behavior in the financial markets should consider the market environment as an important determinant.
This thesis sheds light on the heterogeneous hedging behavior of airlines. The focus lies on financial hedging, operational hedging and selective hedging. The unbalanced panel data set includes 74 airlines from 39 countries. The period of analysis is 2005 to 2014, resulting in 621 firm years. The random effects probit and fixed effects OLS models provide strong evidence of a convex relation between derivative usage and a firm’s leverage, opposing the existing financial distress theory. Airlines with lower leverage had higher hedge ratios. In addition, the results show that airlines with interest rate and currency derivatives were more likely to engage in fuel price hedging. Moreover, the study results support the argument that operational hedging is a complement to financial hedging. Airlines with more heterogeneous fleet structures exhibited higher hedge ratios.
Also, airlines which were members of a strategic alliance were more likely to be hedging airlines. As alliance airlines tend to be financially sound, the positive relation between alliance membership and hedging mirrors the negative result on the leverage ratio. Lastly, the study presents determinants of an airline’s selective hedging behavior. Airlines with prior-period derivative losses, recognized in income, changed their hedge portfolios more frequently. Moreover, the sample airlines acted in accordance with herd behavior theory: changes in the regional hedge portfolios influenced the hedge portfolio of the individual airline in the same direction.
The formerly communist countries in Central and Eastern Europe (for example, East Germany, the Czech Republic, Hungary, Lithuania, Poland, and Russia) and the transitional economies in Asia (for example, China and Vietnam) had centrally planned economies, which did not allow entrepreneurial activity. Despite the political and socioeconomic transformations in the transitional economies around 1989, they still carry an institutional heritage that affects individuals’ values and attitudes, which, in turn, influence intentions, behaviors, and actions, including entrepreneurship.
While prior studies on the long-lasting effects of socialist legacy on entrepreneurship have focused on limited geographical regions (e.g., East-West Germany and East-West Europe), this dissertation focuses on the Vietnamese context, which offers a unique quasi-experimental setting. In 1954, Vietnam was divided into the socialist North and the non-socialist South, and it was then reunified under socialist rule in 1975. Thus, the duration of the difference in socialist treatment between North and South Vietnam (about 21 years) is much shorter than in East-West Germany (about 40 years) and East-West Europe (about 70 years when former Soviet Union countries are included).
To assess the relationship between socialist history and entrepreneurship in this unique setting, we survey more than 3,000 Vietnamese individuals. This thesis finds that individuals from North Vietnam have lower entrepreneurship intentions, are less likely to enroll in entrepreneurship education programs, and are less likely to take over an existing business, compared to those from the South of Vietnam. The long-lasting effect of formerly socialist institutions on entrepreneurship thus appears to run even deeper than previously documented for the prominent cases of East-West Germany and East-West Europe.
In the second empirical investigation, this dissertation focuses on how succession intentions differ from other career intentions (e.g., founding and employee intentions) with regard to career choice motivation, and on the effect of the three main elements of the theory of planned behavior (entrepreneurial attitude, subjective norms, and perceived behavioral control) in the context of a transition economy, Vietnam. The findings of this thesis suggest that intentional founders are characterized by innovation motives, intentional successors by role motives, and intentional employees by a social mission. Additionally, this thesis reveals that entrepreneurial attitude and perceived behavioral control are positively associated with founding intentions, whereas this effect does not differ between succession and employee intentions.
Many NP-hard optimization problems that originate from classical graph theory, such as the maximum stable set problem and the maximum clique problem, have been extensively studied over the past decades and involve the choice of a subset of edges or vertices. There usually exist combinatorial methods that can be applied to solve them directly in the graph.
The simplest method is to enumerate all feasible solutions and select the best. Unsurprisingly, this method is often very slow, so the task is to cleverly discard fruitless parts of the search space during the search. An alternative method to solve graph problems is to formulate integer linear programs, such that their solution yields an optimal solution to the original optimization problem in the graph. In order to solve integer linear programs, one can start by relaxing the integrality constraints and then try to find inequalities that cut off fractional extreme points. In the best case, it would be possible to describe the convex hull of the feasible region of the integer linear program with a set of inequalities. In general, giving a complete description of this convex hull is out of reach, even if it has a polynomial number of facets. Thus, one tries to strengthen the (weak) relaxation of the integer linear program as much as possible via strong inequalities that are valid for the convex hull of feasible integer points.
Many classes of valid inequalities are of exponential size. For instance, a graph can have exponentially many odd cycles in general, and therefore the number of odd cycle inequalities for the maximum stable set problem is exponential. It is sometimes possible to check in polynomial time whether a given point violates any of the exponentially many inequalities; this is indeed the case for the odd cycle inequalities for the maximum stable set problem. If a polynomial time separation algorithm is known, there exists a formulation of polynomial size that contains a given point if and only if it does not violate one of the (potentially exponentially many) inequalities. This thesis can be divided into two parts. The first part is the main part and contains various new results. We present new extended formulations for several optimization problems, i.e. the maximum stable set problem, the nonconvex quadratic program with box constraints, and the p-median problem. In the second part we modify a very fast algorithm for finding a maximum clique in very large sparse graphs. We suggest three alternative versions of this algorithm and compare their strengths and weaknesses with those of the original version.
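As an illustration of the exponential inequality classes discussed above, checking a single odd cycle inequality for the maximum stable set problem is straightforward. (The polynomial-time separation routine itself, not shown here, reduces to shortest-path computations in an auxiliary graph.)

```python
def violates_odd_cycle(x, cycle):
    """Check whether the fractional point x violates the odd cycle
    inequality sum_{v in C} x_v <= (|C| - 1) / 2 for the odd cycle C."""
    assert len(cycle) % 2 == 1, "the inequality is only valid for odd cycles"
    return sum(x[v] for v in cycle) > (len(cycle) - 1) / 2

# the LP relaxation of the 5-cycle admits the fractional point
# x = (1/2, ..., 1/2), which the odd cycle inequality on C5 cuts off
x = {v: 0.5 for v in range(5)}
cut_off = violates_odd_cycle(x, cycle=[0, 1, 2, 3, 4])
```

Here the fractional point has total weight 2.5 on the 5-cycle, exceeding the bound (5 − 1)/2 = 2, so the inequality separates it from the stable set polytope.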
Evidence points to autonomy as having a place next to affiliation, achievement, and power as one of the basic implicit motives; however, there is still some research that needs to be conducted to support this notion.
The research in this dissertation aimed to address this issue. I have specifically focused on two issues that help solidify the foundation of work that has already been conducted on the implicit autonomy motive, and will also be a foundation for future studies. The first issue is measurement. Implicit motives should be measured using causally valid instruments (McClelland, 1980). The second issue addresses the function of motives. Implicit motives orient, select, and energize behavior (McClelland, 1980). If autonomy is an implicit motive, then we need a valid instrument to measure it and we also need to show that it orients, selects, and energizes behavior.
In the following dissertation, I address these two issues in a series of ten studies. Firstly, I present studies that examine the causal validity of the Operant Motive Test (OMT; Kuhl, 2013) for the implicit affiliation and power motives using established methods. Secondly, I developed and empirically tested pictures to specifically assess the implicit autonomy motive and examined their causal validity. Thereafter, I present two studies that investigated the orienting and energizing effects of the implicit autonomy motive. The results of the studies solidified the foundation of the OMT and how it measures nAutonomy. Furthermore, this dissertation demonstrates that nAutonomy fulfills the criteria for two of the main functions of implicit motives. Taken together, the findings of this dissertation provide further support for autonomy as an implicit motive and a foundation for intriguing future studies.
Imagery-based techniques have received increasing interest in psychotherapy research. Whereas their effectiveness has been shown for various psychological disorders, their underlying mechanisms remain unclear. Current research predominantly investigates intrapersonal processes, while interpersonal processes have received no attention to date. The aim of the current dissertation was to fill this lacuna. The three interrelated studies comprising this dissertation were the first to examine the effectiveness of imagery-based techniques in the treatment of test anxiety, relate physiological arousal to emotional processing, and investigate the association between physiological synchrony and multiple process measures.
Study I investigated the feasibility of a newly developed protocol, which integrates imagery-based and cognitive-behavioral components, to treat test anxiety in a sample of 31 students. The results indicated the protocol as acceptable, feasible, and effective in the treatment of test anxiety. Additionally, the imagery-based component was positively associated with therapeutic bond, session evaluation, and emotional experience.
Study II shifted the focus from the effectiveness of imagery-based techniques to client-therapist physiological synchrony as a putative mechanism of change in the same sample. The results suggested that physiological synchrony was greater than chance during both imagery-based and cognitive-behavioral components. Variability of physiological synchrony at the session level during the imagery-based components, and at both levels (session and dyad) during the cognitive-behavioral components, was demonstrated. Furthermore, physiological synchrony of the imagery-based segments was positively associated with therapeutic bond. No association was found for the cognitive-behavioral components.
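Physiological synchrony of the kind analyzed here is often quantified as a windowed correlation of the two partners' signals, compared against pairings that share no common signal. The window length and surrogate construction below are purely illustrative, not the study's actual analysis pipeline:

```python
import numpy as np

def windowed_synchrony(a, b, window):
    """Mean absolute Pearson correlation of two signals over
    non-overlapping windows of the given length."""
    rs = []
    for s in range(0, len(a) - window + 1, window):
        wa, wb = a[s:s + window], b[s:s + window]
        if wa.std() > 0 and wb.std() > 0:
            rs.append(abs(np.corrcoef(wa, wb)[0, 1]))
    return float(np.mean(rs))

rng = np.random.default_rng(0)
shared = np.cumsum(rng.normal(size=300))           # common arousal component
client = shared + rng.normal(scale=0.5, size=300)
therapist = shared + rng.normal(scale=0.5, size=300)
unrelated = np.cumsum(rng.normal(size=300))        # surrogate "partner"

sync_real = windowed_synchrony(client, therapist, window=30)
sync_surrogate = windowed_synchrony(client, unrelated, window=30)
```

Comparing the observed synchrony with such surrogate pairings is one common way to establish that synchrony is "greater than chance."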
Study III examined both intrapersonal (i.e., clients’ electrodermal activity) and interpersonal (i.e., client-therapist electrodermal activity synchrony) processes and their associations with emotional processing in a sample of 49 client-therapist-dyads. The results suggested that higher client physiological arousal and a moderate level of physiological synchrony were associated with deeper emotional processing.
Taken together, the results highlight the effectiveness of imagery-based techniques in the treatment of test anxiety. Furthermore, the results of Studies II and III support the idea of physiological synchrony as a mechanism of change in imagery with and without rescripting. The current dissertation takes an important step towards optimizing process research within psychotherapy and contributes to a better understanding of the potency and mechanisms of change of imagery-based techniques. We hope that these studies’ implications will support everyday clinical practice.
Data used for the purpose of machine learning are often erroneous. In this thesis, p-quasinorms (p<1) are employed as loss functions in order to increase the robustness of training algorithms for artificial neural networks. Numerical issues arising from these loss functions are addressed via enhanced optimization algorithms (proximal point methods; Frank-Wolfe methods) based on the (non-monotonic) Armijo-rule. Numerical experiments comprising 1100 test problems confirm the effectiveness of the approach. Depending on the parametrization, an average reduction of the absolute residuals of up to 64.6% is achieved (aggregated over 100 test problems).
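The robustness argument can be seen directly from the loss values: under a p-quasinorm with p < 1, a gross outlier inflates the loss far less, relative to the clean residuals, than under the squared loss. The residual values below are invented for illustration:

```python
import numpy as np

def p_quasinorm_loss(residuals, p):
    """Sum of |r_i|^p.  For p < 1, large residuals (outliers) are
    penalized far less than under the squared loss, which makes
    training less sensitive to erroneous data."""
    return float(np.sum(np.abs(residuals) ** p))

r_clean = np.array([0.1, -0.2, 0.15])
r_outlier = np.array([0.1, -0.2, 10.0])   # one gross label error

# relative increase in loss caused by the outlier, per loss function
inc_sq = (np.sum(r_outlier ** 2) - np.sum(r_clean ** 2)) / np.sum(r_clean ** 2)
inc_p = ((p_quasinorm_loss(r_outlier, 0.5) - p_quasinorm_loss(r_clean, 0.5))
         / p_quasinorm_loss(r_clean, 0.5))
```

The price of this robustness is that |r|^p with p < 1 is non-convex and non-smooth at 0, which is why the thesis pairs these losses with specialized proximal point and Frank-Wolfe methods.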
In this thesis we study structure-preserving model reduction methods for the efficient and reliable approximation of dynamical systems. A major focus is the approximation of a nonlinear flow problem on networks, which can, e.g., be used to describe gas network systems. Our proposed approximation framework guarantees so-called port-Hamiltonian structure and is general enough to be realizable by projection-based model order reduction combined with complexity reduction. We divide the discussion of the flow problem into two parts, one concerned with the linear damped wave equation and the other one with the general nonlinear flow problem on networks.
The study around the linear damped wave equation relies on a Galerkin framework, which allows for convenient network generalizations. Notable contributions of this part are the profound analysis of the algebraic setting after space-discretization in relation to the infinite dimensional setting and its implications for model reduction. In particular, this includes the discussion of differential-algebraic structures associated to the network-character of our problem and the derivation of compatibility conditions related to fundamental physical properties. Amongst the different model reduction techniques, we consider the moment matching method to be a particularly well-suited choice in our framework.
The Galerkin framework is then appropriately extended to our general nonlinear flow problem. Crucial supplementary concepts are required for the analysis, such as the partial Legendre transform and a more careful discussion of the underlying energy-based modeling. The preservation of the port-Hamiltonian structure after the model-order- and complexity-reduction steps represents a major focus of this work. As in the analysis of the model order reduction, compatibility conditions play a crucial role in the analysis of our complexity reduction, which relies on a quadrature-type ansatz. Furthermore, energy-stable time-discretization schemes are derived for our port-Hamiltonian approximations, as structure-preserving methods from the literature are not applicable due to our rather unconventional parametrization of the solution.
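For orientation, the standard input-state-output form of a port-Hamiltonian system, to which the structure-preservation claims above refer (general textbook background, not the thesis's specific network formulation), can be written as:

```latex
\dot{x}(t) = (J - R)\,\nabla H\bigl(x(t)\bigr) + B\,u(t), \qquad
y(t) = B^{\top}\,\nabla H\bigl(x(t)\bigr),
```

with $J = -J^{\top}$ (energy-conserving interconnection), $R = R^{\top} \succeq 0$ (dissipation), and the Hamiltonian $H$ acting as an energy storage function. This structure implies the dissipation inequality $\tfrac{\mathrm{d}}{\mathrm{d}t} H(x(t)) \le y(t)^{\top} u(t)$, which is precisely the physical property that a structure-preserving reduction aims to keep.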
Apart from the port-Hamiltonian approximation of the flow problem, another topic of this thesis is the derivation of a new extension of moment matching methods from linear systems to quadratic-bilinear systems. Most system-theoretic reduction methods for nonlinear systems rely on multivariate frequency representations. Our approach instead uses univariate frequency representations tailored towards user-defined families of inputs. Then moment matching corresponds to a one-dimensional interpolation problem rather than to a multi-dimensional interpolation as for the multivariate approaches, i.e., it involves fewer interpolation frequencies to be chosen. The notion of signal-generator-driven systems, variational expansions of the resulting autonomous systems as well as the derivation of convenient tensor-structured approximation conditions are the main ingredients of this part. Notably, our approach allows for the incorporation of general input relations in the state equations, not only affine-linear ones as in existing system-theoretic methods.
Structured Eurobonds - Optimal Construction, Impact on the Euro and the Influence of Interest Rates
(2020)
Structured Eurobonds are a prominent topic in the discussion on how to complete the monetary and fiscal union. This work sheds light on several issues that go hand in hand with the introduction of common bonds. A first crucial question concerns the optimal construction, e.g., the optimal degree of common liability. Other questions arise for the time after the introduction: the impact on several exchange rates is examined in this work. Finally, an approximation bias in forward-looking DSGE models is quantified, which would lead to an adjustment of central bank interest rates and therefore has an impact on the other two topics.
The present thesis is devoted to a construction which defies generalisations about the prototypical English noun phrase (NP) to such an extent that it has been termed the Big Mess Construction (Berman 1974). As illustrated by the examples in (1) and (2), the NPs under study involve premodifying adjective phrases (APs) which precede the determiner (always realised in the form of the indefinite article a(n)) rather than following it.
(1) NoS had not been hijacked – that was too strong a word. (BNC: CHU 1766)
(2) He was prepared for a battle if the porter turned out to be as difficult a customer as his wife. (BNC: CJX 1755)
Previous research on the construction is largely limited to contributions from the realms of theoretical syntax and a number of cursory accounts in reference grammars. No comprehensive investigation of its realisations and uses has as yet been conducted. My thesis fills this gap by means of an exhaustive analysis of the construction on the basis of authentic language data retrieved from the British National Corpus (BNC). The corpus-based approach allows me to examine not only the possible but also the most typical uses of the construction. Moreover, while previous work has almost exclusively focused on the formal realisations of the construction, I investigate both its forms and functions.
It is demonstrated that, while the construction is remarkably flexible as concerns its possible realisations, its use is governed by probabilistic constraints. For example, some items occur much more frequently inside the degree item slot than others (as, too and so stand out for their particularly high frequency). Contrary to what is assumed in most previous descriptions, the slot is not restricted in its realisation to a fixed number of items. Rather than representing a specialised structure, the construction is furthermore shown to be distributed over a wide range of possible text types and syntactic functions. On the other hand, it is found to be much less typical of spontaneous conversation than of written language; Big Mess NPs further display a strong preference for the function of subject complement. Investigations of the internal structural complexity of the construction indicate that its obligatory components can be enriched by a remarkably wide range of optional (if infrequent) elements. In an additional analysis of the realisations of the obligatory but lexically variable slots (head noun and head of AP), the construction is shown to represent a productive pattern. With the help of the methods of Collexeme Analysis (Stefanowitsch and Gries 2003) and Co-varying Collexeme Analysis (Gries and Stefanowitsch 2004b, Stefanowitsch and Gries 2005), the two slots are, however, revealed to be strongly associated with general nouns and ‘evaluative’ and ‘dimension’ adjectives, respectively. On the basis of an inspection of the most typical adjective-noun combinations, I identify the prototypical semantics of the Big Mess Construction.
The analyses of the constructional functions centre on two distinct functional areas. First, I investigate Bolinger’s (1972) hypothesis that the construction fulfils functions in line with the Principle of Rhythmic Alternation (e.g. Selkirk 1984: 11, Schlüter 2005). It is established that rhythmic preferences co-determine the use of the construction to some extent, but that they clearly do not suffice to explain the phenomenon under study. In a next step, the discourse-pragmatic functions of the construction are scrutinised. Big Mess NPs are demonstrated to perform distinct information-structural functions in that the non-canonical position of the AP serves to highlight focal information (compare De Mönnink 2000: 134-35). Additionally, the construction is shown to place emphasis on acts of evaluation. I conclude that the construction represents a contrastive focus construction.
My investigations of the formal and functional characteristics of Big Mess NPs each include analyses which compare individual versions of the construction to one another (e.g. the As Big a Mess, Too Big a Mess and So Big a Mess Constructions). It is revealed that the versions are united by a shared core of properties while differing from one another at more abstract levels of description. The question of the status of the constructional versions as separate constructions further receives special emphasis as part of a discussion in which I integrate my results into the framework of usage-based Construction Grammar (e.g. Goldberg 1995, 2006).
The dissertation includes three published articles on which the development of a theoretical model of motivational and self-regulatory determinants of the intention to comprehensively search for health information is based. The first article focuses on building a solid theoretical foundation as to the nature of a comprehensive search for health information and enabling its integration into a broader conceptual framework. Based on subjective source perceptions, a taxonomy of health information sources was developed. The aim of this taxonomy was to identify the most fundamental source characteristics to provide a point of reference when it comes to relating to the target objects of a comprehensive search. Three basic source characteristics were identified: expertise, interaction and accessibility. The second article reports on the development and evaluation of an instrument measuring the goals individuals have when seeking health information: the ‘Goals Associated with Health Information Seeking’ (GAINS) questionnaire. Two goal categories (coping focus and regulatory focus) were theoretically derived, based on which four goals (understanding, action planning, hope and reassurance) were classified. The final version of the questionnaire comprised four scales representing the goals, with four items per scale (sixteen items in total). The psychometric properties of the GAINS were analyzed in three independent samples, and the questionnaire was found to be reliable and sufficiently valid as well as suitable for a patient sample. It was concluded that the GAINS makes it possible to evaluate goals of health information seeking (HIS) which are likely to inform how individuals intend to organize their search for health information. The third article describes the final development and a first empirical evaluation of a model of motivational and self-regulatory determinants of an intentionally comprehensive search for health information.
Based on the insights and implications of the previous two articles and an additional rigorous theoretical investigation, the model included approach and avoidance motivation, emotion regulation, HIS self-efficacy, problem and emotion focused coping goals and the intention to seek comprehensively (as outcome variable). The model was analyzed via structural equation modeling in a sample of university students. Model fit was good and hypotheses with regard to specific direct and indirect effects were confirmed. Last, the findings of all three articles are synthesized, the final model is presented and discussed with regard to its strengths and weaknesses, and implications for further research are determined.
Accounting for two-thirds to three-quarters of all companies, family firms are the most common firm type worldwide and employ around 60 percent of all employees, making them of considerable importance for almost all economies. Despite this high practical relevance, academic research took notice of family firms as intriguing research subjects comparatively late. However, the field of family business research has grown eminently over the past two decades and has established itself as a mature research field with a broad thematic scope. In addition to questions relating to corporate governance, family firm succession and the consideration of entrepreneurial families themselves, researchers have mainly focused on the impact of family involvement on firms' financial performance and strategy. This dissertation examines the financial performance and capital structure of family firms in various meta-analytical studies. Meta-analysis is a suitable method for summarizing the existing empirical findings of a research field as well as for identifying relevant moderators of a relationship of interest.
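As background on the meta-analytic method, the following is a minimal sketch of one standard random-effects pooling procedure, the DerSimonian-Laird estimator. The effect sizes below are made up for illustration and are not taken from the dissertation:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study effect sizes under a random-effects model, estimating
    the between-study variance tau^2 with the DerSimonian-Laird method."""
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)    # fixed-effect pooled estimate
    q = np.sum(w * (effects - theta_fe) ** 2)     # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance (truncated at 0)
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))              # standard error of pooled estimate
    return theta_re, se, tau2

# Hypothetical effect sizes (e.g. Fisher-z correlations) from five primary studies:
theta, se, tau2 = dersimonian_laird(
    effects=[0.30, 0.05, 0.45, -0.10, 0.20],
    variances=[0.01, 0.02, 0.015, 0.01, 0.03],
)
```

A nonzero tau^2, as in this example, signals heterogeneity across studies, which is what motivates the moderator analyses (firm size, listing status, country culture) described below.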
First, the dissertation examines the question whether family firms show better financial performance than non-family firms. A replication and extension of the study by O’Boyle et al. (2012) based on 1,095 primary studies reveals a slightly better performance of family firms compared to non-family firms. Investigating the moderating impact of methodological choices in primary studies, the results show that outperformance holds mainly for large and publicly listed firms and with regard to accounting-based performance measures. Concerning country culture, family firms show better performance in individualistic countries and countries with a low power distance.
Furthermore, this dissertation investigates the sensitivity of family firm performance with regard to business cycle fluctuations. Family firms show a pro-cyclical performance pattern, i.e. their relative financial performance compared to non-family firms is better in economically good times. This effect is particularly pronounced in Anglo-American countries and emerging markets.
In the next step, a meta-analytic structural equation model (MASEM) is used to examine the market valuation of public family firms. In this model, profitability and firm strategic choices are used as mediators. On the one hand, family firm status itself does not have an impact on firms' market value. On the other hand, this study finds a positive indirect effect via higher profitability levels and a negative indirect effect via lower R&D intensity. A split consideration of family ownership and management shows that these two effects are mainly driven by family ownership, while family management results in less diversification and internationalization.
Finally, the dissertation examines the capital structure of public family firms. Univariate meta-analyses indicate on average lower leverage ratios in family firms compared to non-family firms. However, there is significant heterogeneity in mean effect sizes across the 45 countries included in the study. The results of a meta-regression reveal that family firms use leverage strategically to secure their controlling position in the firm. While strong creditor protection leads to lower leverage ratios in family firms, strong shareholder protection has the opposite effect.
The dissertation presented here is entitled Regularization Methods for Statistical Modelling in Small Area Estimation. It studies the use of regularized regression techniques for estimating aggregate-specific indicators at high geographic or contextual resolution on the basis of small samples, a task commonly referred to in the literature as small area estimation. The core of the thesis is an analysis of the effects of regularized parameter estimation in regression models that are commonly used for small area estimation. The analysis is primarily theoretical: the statistical properties of these estimation procedures are characterized and proven mathematically. In addition, the results are illustrated by numerical simulations and critically assessed against the background of empirical applications. The dissertation is structured in three parts. Each part addresses an individual methodological problem in the context of small area estimation that can be solved by using regularized estimation procedures. In the following, each problem is briefly introduced and the benefit of regularization is explained.
The first problem is small area estimation in the presence of unobserved measurement errors. Regression models typically describe endogenous variables on the basis of statistically related exogenous variables. For such a description, a functional relationship between the variables is postulated, characterized by a set of model parameters. This set must be estimated from observed realizations of the respective variables. If the observations are distorted by measurement errors, however, the estimation process yields biased results, and any subsequent small area estimates are unreliable. The literature offers methodological adjustments for this situation, but they usually require restrictive assumptions about the measurement error distribution. In this dissertation it is proven that regularization in this context is equivalent to an estimation that is robust against measurement errors, regardless of the measurement error distribution. This equivalence is then used to derive robust variants of well-known small area models. For each model, an algorithm for robust parameter estimation is constructed. Furthermore, a new approach is developed that quantifies the uncertainty of small area estimates in the presence of unobserved measurement errors. It is additionally shown that this form of robust estimation has the desirable property of statistical consistency.
The second problem is small area estimation based on data sets containing auxiliary variables at different levels of resolution. Regression models for small area estimation are normally specified either for person-level observations (unit level) or for aggregate-level observations (area level). Given the steadily growing availability of data, however, situations in which data are available at both levels are becoming increasingly common. This holds great potential for small area estimation, as new multi-level models with high explanatory power can be constructed. From a methodological point of view, however, connecting the levels is complicated: central steps of statistical inference, such as variable selection and parameter estimation, must be carried out at both levels simultaneously, and the literature offers hardly any generally applicable methods for this. The dissertation shows that using level-specific regularization terms in the modelling solves these problems. A new stochastic gradient descent algorithm for parameter estimation is developed that efficiently uses the information from all levels under adaptive regularization. In addition, parametric procedures are presented for assessing the uncertainty of estimates produced by this method. Building on this, it is proven that, given an adequate regularization term, the developed approach is consistent in both estimation and variable selection.
The third problem is small area estimation of proportions under strong distributional dependencies within the covariates. Such dependencies are present when one exogenous variable can be represented as a linear transformation of another exogenous variable (multicollinearity); the literature also subsumes under this term situations in which several covariates are strongly correlated (quasi-multicollinearity). If a regression model is specified on such a data basis, the individual contributions of the exogenous variables to the functional description of the endogenous variable cannot be identified. Parameter estimation is therefore subject to great uncertainty, and the resulting small area estimates are inaccurate. The effect is particularly strong when the quantity to be modelled is non-linear, such as a proportion, because the underlying likelihood function then no longer has a closed form and must be approximated. In this dissertation it is shown that using an L2 regularization significantly stabilizes the estimation process in this context. Using two non-linear small area models as examples, a new algorithm is developed that extends and improves the well-known quasi-likelihood approach (based on the Laplace approximation) through regularization. In addition, parametric procedures for uncertainty measurement for estimates obtained in this way are described.
Against the background of the theoretical and numerical results, the dissertation demonstrates that regularization methods are a valuable addition to the small area estimation literature. The procedures developed here are robust and versatile, which makes them helpful tools for empirical data analysis.
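To illustrate the stabilizing role of L2 regularization under quasi-multicollinearity (a generic ridge-regression sketch with simulated data, not the dissertation's small area models or algorithms):

```python
import numpy as np

# Two nearly collinear covariates (quasi-multicollinearity); hypothetical data.
rng = np.random.default_rng(1)
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)        # almost an exact copy of x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.standard_normal(n)     # true coefficients are (1, 1)

def fit(X, y, lam):
    """L2-regularized (ridge) least squares: solve (X'X + lam*I) beta = X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_ols = fit(X, y, lam=0.0)    # individual coefficients poorly identified
beta_ridge = fit(X, y, lam=1.0)  # stabilized: coefficients pulled toward (1, 1)
```

The sum of the coefficients is well identified in both fits, but only the regularized fit pins down the individual contributions; this is the same mechanism, in its simplest linear form, that the thesis exploits for its non-linear small area models.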
Hypothalamic-pituitary-adrenal (HPA) axis-related genetic variants influence the stress response
(2019)
The physiological stress system includes the hypothalamic-pituitary-adrenal (HPA) axis and the sympathetic-adrenal-medullary system (SAM). Parameters representing these systems such as cortisol, blood pressure or heart rate define the physiological reaction in response to a stressor. The main objective of the studies described in this thesis was to understand the role of the HPA-related genetic factors in these two systems. Genetic factors represent one of the components causing individual variations in physiological stress parameters. Five genes involved in the functioning of the HPA axis regarding stress responses are examined in this thesis. They are: corticotropin-releasing hormone (CRH), the glucocorticoid receptor (GR), the mineralocorticoid receptor (MR), the 5-hydroxytryptamine-transporter-linked polymorphic region (5-HTTLPR) in the serotonin transporter (5-HTT) and the brain-derived neurotrophic factor (BDNF) gene. Two hundred thirty-two healthy participants were genotyped. The influence of genetic factors on physiological parameters, such as post-awakening cortisol and blood pressure was assessed, as well as the influence of genetic factors on stress reactivity in response to a socially evaluated cold pressor test (SeCPT). Three studies tested the HPA-related genes each on three different levels. The first study examined the influences of genotypes and haplotypes of these five genes on physiological as well as psychological stress indicators (Chapter 2). The second study examined the effects of GR variants (genotypes and haplotypes) and promoter methylation level on both the SAM system and the HPA axis stress reactivity (Chapter 3). The third study comprised the characterization of CRH promoter haplotypes in an in-vitro study and the association of the CRH promoter with stress indicators in vivo (Chapter 4).
In order to investigate the psychobiological consequences of acute stress under laboratory conditions, a wide range of methods for socially evaluative stress induction have been developed. The present dissertation is concerned with evaluating a virtual reality (VR)-based adaptation of one of the most widely used of those methods, the Trier Social Stress Test (TSST). In the three empirical studies collected in this dissertation, we aimed to examine the efficacy and possible areas of application of the adaptation of this well-established psychosocial stressor in a virtual environment. We found that the TSST-VR reliably elicits activation of the major stress effector systems in the human body, albeit in a slightly less pronounced way than the original paradigm. Moreover, the experience of presence is discussed as one potential factor of influence in the origin of the psychophysiological stress response. Lastly, we present a use scenario for the TSST-VR in which we employed the method to investigate the effects of acute stress on emotion recognition performance. We conclude that, due to its advantages concerning versatility, standardization and economic administration, the paradigm harbors enormous potential not only for psychobiological research, but other applications such as clinical practice as well. Future studies should further explore the underlying effect mechanisms of stress in the virtual realm and the implementation of VR-based paradigms in different fields of application.
Entrepreneurship has become an essential phenomenon all over the world because it is a major driving force behind the economic growth and development of a country. It is widely accepted that entrepreneurship development in a country creates new jobs, promotes healthy competition through innovation, and benefits the social well-being of individuals and societies. Policymakers in both developed and developing countries focus on entrepreneurship because it helps to alleviate impediments to economic development and social welfare. Therefore, policymakers and academic researchers consider the promotion of entrepreneurship as essential for the economy, and research-based support is needed for the further development of entrepreneurship activities.
The impact of entrepreneurial activities on economic and social development varies from country to country, because the level of entrepreneurship activity itself varies from one region or country to another. To understand these variations, the determinants of entrepreneurship have been investigated at different levels, such as the individual, industry, and country levels. Moreover, entrepreneurship behavior is influenced by various personal and environmental factors. However, these personal-level factors cannot be separated from the surrounding environment.
The link between religion and entrepreneurship is well established and can be traced back to Weber (1930). Researchers have analyzed the relationship between religion and entrepreneurship from various perspectives, and the research related to religion and entrepreneurship is diversified and scattered across disciplines. This dissertation tries to explain the link between religion and entrepreneurship, specifically the Islamic religion and entrepreneurship. Structurally, this dissertation comprises three parts. The first part consists of two chapters that discuss the definition and theories of entrepreneurship (Chapter 2) and the theoretical relationship between religion and entrepreneurship (Chapter 3).
The second part of this dissertation (Chapter 4) provides an overview of the field with the purpose of gaining a better understanding of its current state of knowledge and bridging the different views and perspectives. To this end, a systematic literature search was conducted, leading to a descriptive overview of the field based on 270 articles published in 163 journals. Subsequently, bibliometric methods are used to identify thematic clusters, the most influential authors and articles, and how they are connected.
The third part of this dissertation (Chapter 5) empirically evaluates the influence of Islamic values and Islamic religious practices on entrepreneurship intentions within the Islamic community. Using the theory of planned behavior as a theoretical lens, we also take into account that the relationship between religion and entrepreneurial intentions can be mediated by individuals' attitude towards entrepreneurship. A self-administered questionnaire was used to collect the responses from a sample of 1895 Pakistani university students. Structural equation modeling was adopted to perform a nuanced assessment of the relationship between Islamic values and practices and entrepreneurship intentions and to account for the mediating effect of attitude towards entrepreneurship.
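The mediation structure described above can be sketched with a simple regression-based decomposition of total, direct, and indirect effects. The data, variable names, and effect sizes below are simulated for illustration and are not the study's results:

```python
import numpy as np

# Simulated illustration: a predictor x (e.g. religious values) influences an
# outcome y (e.g. entrepreneurial intention) only through a mediator m (attitude).
rng = np.random.default_rng(7)
n = 1000
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)   # a-path: x -> m
y = 0.6 * m + rng.standard_normal(n)   # b-path: m -> y, with no direct x -> y effect

def ols(X_cols, y):
    """OLS coefficients with an intercept prepended (intercept is index 0)."""
    X = np.column_stack([np.ones(len(y)), *X_cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([x], m)[1]                 # estimated a-path
b, c_prime = ols([m, x], y)[1:3]   # b-path and direct effect, estimated jointly
indirect = a * b                   # mediated (indirect) effect of x on y
```

Full mediation, as reported in the dissertation for religious practices, corresponds to the case where the direct effect (`c_prime`) is negligible while the indirect effect (`a * b`) is substantial; full SEM estimation additionally models latent constructs and measurement error, which this sketch omits.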
Research on religion and entrepreneurship has increased sharply in recent years and is scattered across various academic disciplines and fields. The analysis identifies and characterizes the most important publications, journals, and authors in the area and maps the analyzed religions and regions. This comprehensive overview of previous studies allows us to identify research gaps and derive avenues for future research in a substantiated way. Moreover, this dissertation helps research scholars to understand the field in its entirety, identify relevant articles, and uncover parallels and differences across religions and regions. Besides, the study reveals a lack of empirical research related to specific religions and specific regions. Scholars can therefore take these regions and religions into consideration when conducting empirical research.
Furthermore, the empirical analysis of the influence of Islamic religious values and Islamic religious practices shows that Islamic values served as a guiding principle in shaping people's attitudes towards entrepreneurship in an Islamic community; they had an indirect influence on entrepreneurship intention through attitude. Similarly, the relationship between Islamic religious practices and the entrepreneurship intentions of students was fully mediated by the attitude towards entrepreneurship. This dissertation thereby contributes to prior research on entrepreneurship in Islamic communities by applying a more fine-grained approach to capture the link between religion and entrepreneurship. Moreover, it contributes to the literature on entrepreneurship intentions by showing that the influence of religion on entrepreneurship intentions is mainly due to religious values and practices, which shape the attitude towards entrepreneurship and thereby influence entrepreneurship intentions in religious communities. Entrepreneurship research has placed a growing emphasis on assessing the influence of a diverse set of contextual factors; this dissertation introduces Islamic values and Islamic religious practices as critical contextual factors that shape entrepreneurship in countries characterized by the Islamic religion.
Why they rebel peacefully: On the violence-reducing effects of a positive attitude towards democracy
Under the impression of Europe’s drift into Nazism and Stalinism in the first half of the 20th century, social psychological research has focused strongly on the dangers inherent in people’s attachment to a political system. The dissertation at hand contributes to a more differentiated perspective by examining violence-reducing aspects of political system attachment in four consecutive steps: First, it highlights attachment to a social group as a resource for violence prevention on an intergroup level. The results suggest that group attachment fosters self-control, a well-known protective factor against violence. Second, it demonstrates violence-reducing influences of attachment on a societal level. The findings indicate that attachment to a democracy facilitates peaceful and prevents violent protest tendencies. Third, it introduces the concept of political loyalty, defined as a positive attitude towards democracy, in order to clarify the different approaches to political system attachment. A set of three studies shows the reliability and validity of a newly developed political loyalty questionnaire that distinguishes between affective and cognitive aspects. Finally, the dissertation differentiates the former findings with regard to protest tendencies using the concept of political loyalty. A set of two experiments shows that affective rather than cognitive aspects of political loyalty instigate peaceful protest tendencies and prevent violent ones. Implications of this dissertation for political engagement and peacebuilding as well as avenues for future research are discussed.
This dissertation details how Zeami (ca. 1363 - ca.1443) understood his adoption of the heavenly woman dance within the historical conditions of the Muromachi period. He adopted the dance based on performances by the Ōmi troupe player Inuō in order to expand his own troupe’s repertoire to include a divinely powerful, feminine character. In the first chapter, I show how Zeami, informed by his success as a sexualized child in the service of the political elite (chigo), understood the relationship between performer and audience in gendered terms. In his treatises, he describes how a player must create a complementary relationship between patron and performer (feminine/masculine or yin/yang) that escalates to an ecstasy of successful communication between the two poles, resembling sexual union. Next, I look at how Zeami perceived Inuō’s relationships with patrons, the daimyo Sasaki Dōyo in chapter two and shogun Ashikaga Yoshimitsu in chapter three. Inuō was influenced by Dōyo’s masculine penchant for powerful, awe-inspiring art, but Zeami also recognized that Inuō was able to complement Dōyo’s masculinity with feminine elegance (kakari and yūgen). In his relationship with Yoshimitsu, Inuō used the performance of subversion, both in his public persona and in the aesthetic of his performances, to maintain a rebellious reputation appropriate within the climate of conflict among the martial elite. His play “Aoi no ue” draws on the aristocratic literary tradition of the Genji monogatari, giving Yoshimitsu the role of Prince Genji and confronting him with the consequences of betrayal in the form of a demonic, because jilted, Lady Rokujō. This performance challenged Zeami’s early notion that the extreme masculinity of demons and elegant femininity as exemplified by the aristocracy must be kept separate in character creation. 
In the fourth chapter, I show how Zeami also combined dominance (masculinity) and submission (femininity) in the corporeal capacity of a single player when he adopted the heavenly woman dance. The heavenly woman dance thus complemented not only the masculinity of his male patrons with femininity, but also the political power of his patrons with another dominant power, one that plays featuring the heavenly woman dance label divine rather than masculine.
Theoretical and empirical research assumes a negative development of student achievement motivation over the course of students' school careers (i.e., mean-level declines in achievement motivation). However, the exact magnitude of this motivational change remains elusive, and it is unclear whether different motivational constructs show similar developmental trends. Furthermore, it is unknown whether motivational declines are tied to a particular school stage (i.e., elementary, middle, or high school) or to the school transition, and which additional changes are associated with motivational decreases (e.g., changes in student achievement). Finally, previous research has remained inconsistent on the question of whether ability grouping of students helps prevent motivational declines or results in additional motivational “costs” for students.
This dissertation presents three articles that were designed to address these research questions. In Article 1, a meta-analysis based on 107 independent longitudinal studies investigated student mean-level changes in self-esteem, academic self-concept, academic self-efficacy, intrinsic motivation, and achievement goals from first to 13th grade. Article 2 comprised two longitudinal studies with German adolescents (Study 1: n = 745 students assessed in four waves in grades 5-7; Study 2: n = 1420 students assessed in four waves in grades 5-8). Both longitudinal studies investigated the separate and the joint development of achievement goals, interest, and achievement in math. In Article 3, a longitudinal study (n = 296 high-ability students assessed in four waves in grades 5-7) investigated the effects of full-time ability grouping on student development of academic self-concept and achievement in math.
The meta-analysis revealed significant decreases in math and language academic self-concept, intrinsic motivation, and mastery and performance-approach goals, whereas no significant changes were found in self-esteem, general academic self-concept, academic self-efficacy, and performance-avoidance goals. Interestingly, motivational declines were not related to school stage or school transition. In Article 2, both longitudinal studies indicated decreases in interest and in mastery, performance-approach, and performance-avoidance goals. Development of mastery and performance-approach goals was positively related or unrelated to development of interest and achievement, whereas development of performance-avoidance goals was negatively related or unrelated to development of interest and achievement. Finally, the longitudinal study in Article 3 revealed no significant change in student academic self-concept in math over time. Ability grouping showed no positive or negative effects on student academic self-concept. However, high-ability students who were grouped together demonstrated greater gains in achievement than high-ability students in regular classes.
This dissertation investigates corporate acquisition decisions, which represent important corporate development activities for family and non-family firms. The main research objective of this dissertation is to generate insights into the subjective decision-making behavior of corporate decision-makers from family and non-family firms and their weighting of M&A decision criteria during the early pre-acquisition target screening and selection process. The main methodology chosen for the investigation of M&A decision-making preferences and the weighting of M&A decision criteria is a choice-based conjoint analysis. The overall sample of this dissertation consists of 304 decision-makers from 264 private and public family and non-family firms, mainly from Germany and the DACH region. In the first empirical part of the dissertation, the relative importance of strategic, organizational, and financial M&A decision criteria for corporate acquirers in acquisition target screening is investigated. In addition, the author uses a cluster analysis to explore whether distinct decision-making patterns exist in acquisition target screening. In the second empirical part, the dissertation explores whether there are differences in investment preferences in acquisition target screening between family and non-family firms and within the group of family firms. With regard to the heterogeneity of family firms, the dissertation generated insights into how family-firm-specific characteristics such as family management, the generational stage of the firm, and non-economic goals such as transgenerational control intention influence the weighting of different M&A decision criteria in acquisition target screening. The dissertation contributes to strategic management research, specifically to the M&A literature, and to family business research.
The results of this dissertation generate insights into the weighting of M&A decision-making criteria and facilitate a better understanding of corporate M&A decisions in family and non-family firms. The findings show that decision-making preferences (hence the weighting of M&A decision criteria) are influenced by characteristics of the individual decision-maker, the firm and the environment in which the firm operates.
In the modeling context, non-linearities and uncertainty go hand in hand. In fact, the utility function's curvature determines the degree of risk aversion. This concept is exploited in the first article of this thesis, which incorporates uncertainty into a small-scale DSGE model. More specifically, this is done via a second-order approximation, with the derivation carried out in great detail and the more formal aspects carefully discussed. Moreover, the consequences of this method are discussed when calibrating the equilibrium condition. The second article of the thesis considers the essential model part of the first paper and focuses on the (forward-looking) data needed to meet the model's requirements. A large number of uncertainty measures are utilized to explain a possible approximation bias. The last article keeps to the same topic but uses statistical distributions instead of actual data. In addition, theoretical (model) and calibrated (data) parameters are used to produce more general statements. In this way, several relationships are revealed with regard to a biased interpretation of this class of models. This dissertation explains the respective approaches in full detail, as well as how they build on each other.
In summary, the question remains whether the exact interpretation of model equations should play a role in macroeconomics. If the answer is yes, this work shows to what extent their practical use can lead to biased results.
Internet interventions have gained popularity and the idea is to use them to increase the availability of psychological treatment. Research suggests that internet interventions are effective for a number of psychological disorders with effect sizes comparable to those found in face-to-face treatment. However, when provided as an add-on to treatment as usual, internet interventions do not seem to provide additional benefit. Furthermore, adherence and dropout rates vary greatly between studies, limiting the generalizability of the findings. This underlines the need to further investigate differences between internet interventions, participating patients, and their usage of interventions. A stronger focus on the processes of change seems necessary to better understand the varying findings regarding outcome, adherence and dropout in internet interventions. Thus, the aim of this dissertation was to investigate change processes in internet interventions and the factors that impact treatment response. This could help to identify important variables that should be considered in research on internet interventions as well as in clinical settings that make use of internet interventions.
Study I (Chapter 5) investigated early change patterns in participants of an internet intervention targeting depression. Data from 409 participants were analyzed using Growth Mixture Modeling. Specifically, a piecewise model was applied to model change from screening to registration (pretreatment) and early change (registration to week four of treatment). Three early change patterns were identified: two were characterized by improvement and one by deterioration. The patterns were predictive of treatment outcome. The results therefore indicated that early change should be closely monitored in internet interventions, as early change may be an important indicator of treatment outcome.
Study II (Chapter 6) picked up on the idea of analyzing change patterns in internet interventions and extended it by using the Muthen-Roy model to identify change-dropout patterns. A slightly larger sample from the same dataset as Study I was analyzed (N = 483). Four change-dropout patterns emerged; high risk of dropout was associated with both rapid improvement and deterioration. These findings indicate that clinicians should consider how dropout may depend on patient characteristics as well as symptom change, as dropout is associated with both deterioration and a good enough dosage of treatment.
Study III (Chapter 7) compared adherence and outcome in different participant groups and investigated the impact of adherence to treatment components on treatment outcome in an internet intervention targeting anxiety symptoms. 50 outpatient participants waiting for face-to-face treatment and 37 self-referred participants were compared regarding adherence to treatment components and outcome. In addition, outpatient participants were compared to a matched sample of outpatients who had no access to the internet intervention during the waiting period. Adherence to treatment components was investigated as a predictor of treatment outcome. Results suggested that adherence in particular may vary depending on participant group. Moreover, using specific measures of adherence, such as adherence to treatment components, may be crucial for detecting change mechanisms in internet interventions. Fostering adherence to treatment components in participants may increase the effectiveness of internet interventions.
Results of the three studies are discussed and general conclusions are drawn.
Implications for future research as well as their utility for clinical practice and decision-making are presented.