
Academic achievement is a central outcome in educational research, both in and outside higher education; it has direct effects on individuals' professional and financial prospects and yields a high individual and public return on investment. Theories comprise cognitive as well as non-cognitive influences on achievement. Two examples frequently investigated in empirical research are knowledge (as a cognitive determinant) and stress (as a non-cognitive determinant) of achievement. However, knowledge and stress are not stable, which raises the question of how temporal dynamics in knowledge on the one hand and stress on the other contribute to achievement. To study these contributions in the present doctoral dissertation, I used meta-analysis, latent profile transition analysis, and latent state-trait analysis. The results support the idea of knowledge acquisition as a cumulative, long-term process that forms the basis for academic achievement, and of conceptual change as an important mechanism for the acquisition of knowledge in higher education. Moreover, the findings suggest that students' stress experiences in higher education are subject to stable, trait-like influences as well as situational and/or interactional, state-like influences, which are differentially related to achievement and health. The results imply that investigating the causal networks between knowledge, stress, and academic achievement is a promising strategy for better understanding academic achievement in higher education. For this purpose, future studies should use longitudinal designs, randomized controlled trials, and meta-analytical techniques. Potential practical applications include taking account of students' prior knowledge in higher education teaching and decreasing stress among higher education students.

With the advent of high-throughput sequencing (HTS), profiling immunoglobulin (IG) repertoires has become an essential part of immunological research. The dissection of IG repertoires promises to transform our understanding of adaptive immune system dynamics. Advances in sequencing technology now also allow the use of the Ion Torrent Personal Genome Machine (PGM) to cover the full length of IG mRNA transcripts. The applications of this benchtop-scale HTS platform range from the identification of new therapeutic antibodies to the deconvolution of malignant B cell tumors. In the context of this thesis, the usability of the PGM is assessed for investigating the IG heavy chain (IGH) repertoires of animal models. First, an innovative bioinformatics approach is presented to identify antigen-driven IGH sequences from bulk-sequenced bone marrow samples of transgenic humanized rats expressing a human IG repertoire (OmniRatTM). We show that these rats mount a convergent IGH CDR3 response towards measles virus hemagglutinin protein and tetanus toxoid, with high similarity to human counterparts. In the future, databases could contain all IGH CDR3 sequences with known specificity, allowing IG repertoire datasets to be mined for past antigen exposures and ultimately reconstructing the immunological history of an individual. Second, a unique molecular identifier (UID)-based HTS approach and network property analysis are used to characterize the CLL-like CD5+ B cell expansion of A20BKO mice overexpressing a natural short splice variant of the CYLD gene (A20BKOsCYLDBOE). We determined that in these mice, overexpression of sCYLD leads to an unmutated subvariant of CLL (U-CLL). Furthermore, we found that this short splice variant also occurs in human CLL patients, highlighting it as an important target for future investigations.
Third, the UID-based HTS approach is improved by adapting it to the PGM sequencing technology and applying a custom-made data processing pipeline that includes error detection against the ImMunoGeneTics (IMGT) database. In this way, we were able to obtain correct IGH sequences with over 99.5% confidence and correct CDR3 sequences with over 99.9% confidence. Taken together, the results, protocols, and sample processing strategies described in this thesis will improve the usability of animal models and the Ion Torrent PGM HTS platform in the field of IG repertoire research.
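The UID principle — reads sharing the same random molecular tag must originate from the same transcript, so independent sequencing errors can be outvoted within each tag group — can be sketched as follows. This is a hypothetical minimal illustration, not the pipeline used in the thesis (which additionally applies IMGT-based error detection):

```python
from collections import Counter, defaultdict

def consensus(reads):
    """Position-wise majority vote over equal-length reads sharing one UID."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*reads))

def collapse_by_uid(tagged_reads, min_reads=3):
    """tagged_reads: iterable of (uid, sequence) pairs.
    Groups reads by UID and keeps a consensus only for UIDs with enough
    reads, so a random per-read error is outvoted."""
    groups = defaultdict(list)
    for uid, seq in tagged_reads:
        groups[uid].append(seq)
    return {uid: consensus(seqs)
            for uid, seqs in groups.items() if len(seqs) >= min_reads}

reads = [("AAGT", "ACGTACGT"), ("AAGT", "ACGTACGT"),
         ("AAGT", "ACGAACGT"),  # one read carries an error at position 4
         ("CCTA", "TTGCATGC")]  # singleton UID: discarded
collapsed = collapse_by_uid(reads)  # {'AAGT': 'ACGTACGT'}
```

Real UMI pipelines additionally handle reads of unequal length, UID collisions, and sequencing errors inside the tag itself, none of which this sketch attempts.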

External capital plays an important role in financing entrepreneurial ventures because internal capital sources are limited. Venture capitalists (VCs) are among the most important external capital providers for entrepreneurial ventures. VCs worldwide are often confronted with thousands of proposals from entrepreneurial ventures per year and must choose in which of these companies to invest. VCs finance companies not only at their early stages but also at later stages, when companies have secured their first market success. That is why this dissertation focuses on the decision-making behavior of VCs when investing in later-stage ventures. It uses both qualitative and quantitative research methods to answer how the decision-making behavior of VCs that invest in later-stage ventures can be described.
Based on qualitative interviews with 19 investment professionals, the first insight gained is that different decision criteria are applied at different stages of venture development. This is attributed to the different risks and goals of ventures at different stages, as well as to the different types of information available. The decision criteria in the context of later-stage ventures contrast with results from studies that focus on early-stage ventures: later-stage ventures offer meaningful information on financials (revenue growth and profitability), the established business model, and existing external investors that is not available for early-stage ventures and therefore constitutes a set of new decision criteria for this specific context.
Following this identification of the most relevant decision criteria for investors in the context of later-stage ventures, a conjoint study with 749 participants was carried out to understand the relative importance of these decision criteria. The results showed that investors attribute the highest importance to (1) revenue growth, (2) the value-added of products/services for customers, and (3) the management team's track record, demonstrating differences compared to decision-making studies in the context of early-stage ventures.
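In conjoint studies, the relative importance of a decision criterion is commonly derived from estimated part-worth utilities: each attribute's importance is the range of its level utilities divided by the sum of all attributes' ranges. A minimal sketch — the attribute names echo the criteria above, but the part-worth numbers are purely illustrative, not results from the dissertation:

```python
def relative_importance(partworths):
    """partworths: dict mapping attribute -> {level: utility}.
    Importance of an attribute = its utility range / sum of all ranges."""
    ranges = {a: max(u.values()) - min(u.values()) for a, u in partworths.items()}
    total = sum(ranges.values())
    return {a: r / total for a, r in ranges.items()}

# Hypothetical part-worths for three decision criteria
pw = {
    "revenue growth":    {"low": -1.0, "high": 1.0},    # range 2.0
    "value-added":       {"low": -0.6, "high": 0.6},    # range 1.2
    "team track record": {"weak": -0.4, "strong": 0.4}, # range 0.8
}
imp = relative_importance(pw)  # revenue growth: 0.5, value-added: 0.3, team track record: 0.2
```

The importances sum to one by construction, which is what allows a ranking such as the one reported above.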
Not only do the characteristics of a venture influence the decision to invest; additional indirect factors, such as individual characteristics or characteristics of the investment firm, can also influence individual decisions. Relying on cognitive theory, this study investigated the influence of various individual characteristics on screening decisions and found that both investment experience and entrepreneurial experience influence individual decision-making behavior. This study also examined whether the goals, incentive structures, resources, and governance of the investment firm influence decision making in the context of later-stage ventures. It particularly investigated two distinct types of investment firms, family offices and corporate venture capital funds (CVCs), which have unique structures, goals, and incentive systems. Additional quantitative analysis showed that family offices put less focus on high-growth firms and on whether reputable investors are present; they tend to focus more on the profitability of a later-stage venture in the initial screening. The analysis also showed that CVCs place greater importance on product and business model characteristics than other investors, and that CVCs favor later-stage ventures with lower revenue growth rates, indicating a preference for less risky investments. The results provide various insights for theory and practice.

Many combinatorial optimization problems on finite graphs can be formulated as conic convex programs, e.g. the stable set problem, the maximum clique problem, or the maximum cut problem. In particular, NP-hard problems can be written as copositive programs; in this case the complexity is moved entirely into the copositivity constraint.
Copositive programming is a relatively new topic in optimization. It deals with optimization over the so-called copositive cone, a superset of the positive semidefinite cone, in which the quadratic form x^T Ax is required to be nonnegative only for nonnegative vectors x. Its dual cone is the cone of completely positive matrices, which comprises all matrices that can be decomposed as a sum of outer products x x^T of nonnegative vectors x.
The related optimization problems are linear programs with matrix variables and cone constraints.
However, some optimization problems can be formulated as combinatorial problems on infinite graphs. For example, the kissing number problem can be formulated as a stable set problem on a circle.
In this thesis we discuss how the theory of copositive optimization can be lifted to infinite dimension. For some special cases we give applications in combinatorial optimization.
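In symbols, with $\mathcal{S}^{n}$ denoting the symmetric $n \times n$ matrices, the cones involved and the resulting conic program can be stated as follows (a standard textbook formulation, included here for convenience):

```latex
% Copositive cone and its dual, the completely positive cone:
\mathcal{C}^{n} \;=\; \bigl\{\, A \in \mathcal{S}^{n} \;:\; x^{\top} A x \ge 0
  \ \text{for all } x \ge 0 \,\bigr\},
\qquad
(\mathcal{C}^{n})^{*} \;=\; \Bigl\{\, \textstyle\sum_{k} x_{k} x_{k}^{\top}
  \;:\; x_{k} \ge 0 \,\Bigr\}.

% A copositive program is a linear conic program over this cone:
\min_{X} \ \langle C, X \rangle
\quad \text{s.t.} \quad
\langle A_{i}, X \rangle = b_{i} \ (i = 1, \dots, m),
\qquad X \in \mathcal{C}^{n}.
```

For instance, by a result of de Klerk and Pasechnik, the stability number of a graph $G$ with adjacency matrix $A_G$ satisfies $\alpha(G) = \min\{\lambda : \lambda(A_G + I) - J \in \mathcal{C}^{n}\}$ (with $J$ the all-ones matrix), illustrating how the NP-hardness is absorbed entirely by the copositivity constraint.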

This doctoral thesis examines intergenerational knowledge transfer, its antecedents, and how participation in intergenerational knowledge transfer is related to the performance evaluation of employees. To answer these questions, the thesis builds on a literature review and quantitative research methods. A systematic literature study shows that empirical evidence on intergenerational knowledge transfer is limited. Building on prior literature, effects of various antecedents at the interpersonal and organizational level on intergenerational and intragenerational knowledge transfer are postulated. By surveying 444 trainees and trainers, this thesis also demonstrates that interpersonal antecedents affect how trainees participate in intergenerational knowledge transfer with their trainers. The results of this study thereby support the relevance of interpersonal antecedents for intergenerational knowledge transfer, yet also emphasize the implications attached to the assigned roles in knowledge transfer (i.e., whether one is a trainee or a trainer). Moreover, the results of an experimental vignette study reveal that participation in intergenerational knowledge transfer is linked to the performance evaluation of employees, yet depends on whether the employee is sharing or seeking knowledge. Overall, this doctoral thesis provides insights into the topic by covering a multitude of antecedents of intergenerational knowledge transfer, as well as how participation in it may be associated with the performance evaluation of employees.

The spatial development of cities and regions is influenced by trends such as climate change, demographic change, and structural change, which do not stop at administrative borders but shape the development of large-scale areas. Moreover, border regions often exhibit functional and thematic interdependencies that extend across national borders, entailing regular exchange and mutual dependencies between border regions and their inhabitants. Coordinating cross-border spatial development is therefore crucial for future-oriented and sustainable spatial development. Owing to its high relevance, this topic is examined from various perspectives by European scholars in the first issue of the thematic series Borders in Perspective.

Salivary alpha-amylase (sAA) influences the perception of taste and texture, features relevant both in acquiring food liking and, over time, food preference. However, no studies have yet investigated the relationship between basal sAA activity levels and food preference. We collected saliva from 57 volunteers (63% women) whom we assessed in terms of their preference for different food items. These items were grouped into four categories according to their nutritional properties: high in starch, high in sugar, high glycaemic index, and high glycaemic load. Anthropometric markers of cardiovascular risk were also calculated. Our findings suggest that sAA influences food preference and body composition in women. Regression analysis showed that basal sAA activity is inversely associated with subjective, but not self-reported behavioural, preference for foods high in sugar. Additionally, sAA and subjective preference are associated with anthropometric markers of cardiovascular risk. We believe that this pilot study points to this enzyme as an interesting candidate among the physiological factors that modulate eating behaviour.

The trophic niche is a life trait that identifies a consumer's position in a local food web. Several factors, such as ontogeny, competitive ability, and resource availability, contribute to shaping species' trophic niches. To date, information on the diet of European Hydromantes salamanders is only available for a limited number of species, no dietary studies have involved more than one species of the genus at a time, and there is limited evidence on how multiple factors interact in determining diet variation. In this study we examined the diet of multiple populations of six of the eight European cave salamanders, providing the first data on the diet for five of them. In addition, we assessed whether these closely related generalist species show similar diets and, for each species, we tested whether season, age class, or sex influences the number and type of prey consumed. Stomach condition (empty/full) and the number of prey consumed were strongly related to seasonality and to the activity level of individuals. Empty stomachs were more frequent in autumn, in individuals far from the cave entrance, and in juveniles. Diet composition differed significantly among species. Hydromantes imperialis and H. supramontis were the most generalist species; H. flavus and H. sarrabusensis fed mostly on Hymenoptera and Coleoptera Staphylinidae, while H. genei and H. ambrosii mostly consumed Arachnida and Endopterygota larvae. Furthermore, we detected seasonal shifts of diet in the majority of the species examined. Conversely, within each species, we found no diet differences between females, males, and juveniles. Although assumed to have very similar dietary habits, the Hydromantes species studied here showed high divergence in diet composition and in the stomach condition of individuals.

Sample surveys are a widely used and cost-effective tool to gain information about a population under consideration. Nowadays, there is an increasing demand for information not only at the population level but also at the level of subpopulations. For some of these subpopulations of interest, however, very small subsample sizes might occur such that the application of traditional estimation methods is not expedient. In order to provide reliable information also for these so-called small areas, small area estimation (SAE) methods combine auxiliary information and the sample data via a statistical model.
The present thesis deals, among other aspects, with the development of highly flexible and realistic small area models. For this purpose, the penalized spline method is suitably modified, which allows the model parameters to be determined by solving an unconstrained optimization problem. Owing to this optimization framework, shape constraints can be incorporated into the modeling process in the form of additional linear inequality constraints on the optimization problem. This results in small area estimators that combine the penalized spline method as a highly flexible modeling technique with the incorporation of arbitrary shape constraints on the underlying P-spline function.
In order to incorporate multiple covariates, a tensor product approach is employed to extend the penalized spline method to multiple input variables. This leads to high-dimensional optimization problems for which naive solution algorithms incur an unjustifiable complexity in terms of runtime and memory requirements. By exploiting the underlying tensor structure, the present thesis provides computationally efficient solution algorithms for the considered optimization problems and the related memory-efficient, i.e. matrix-free, implementations. The crucial point is the (repeated) application of a matrix-free conjugate gradient method, whose runtime is drastically reduced by a matrix-free multigrid preconditioner.
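The idea of a matrix-free conjugate gradient solver is that the system matrix is never assembled; only a routine for the matrix-vector product is needed. A minimal sketch — the operator below is an illustrative 1-D second-difference stencil, not the tensor-product P-spline system from the thesis, and no preconditioner is applied:

```python
import numpy as np

def cg_matrix_free(apply_A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for an SPD operator given only the product v -> A @ v."""
    x = np.zeros_like(b)
    r = b - apply_A(x)          # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative matrix-free operator: SPD tridiagonal stencil (2 on the diagonal,
# -1 off-diagonal), applied without ever forming the matrix
def apply_A(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(50)
x = cg_matrix_free(apply_A, b)   # satisfies apply_A(x) ≈ b
```

In the thesis setting, `apply_A` would presumably exploit the Kronecker structure of the tensor-product spline system, and the matrix-free multigrid preconditioner would be wrapped around the same iteration.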

This study examines to what extent a banking crisis and the ensuing potential liquidity shortage affect corporate cash holdings. Specifically, how do firms adjust their liquidity management prior to and during a banking crisis when they are restricted in their financing options? These restrictions might not only result from firm-specific characteristics but can also reflect the effects of certain regulatory requirements. I analyse the real effects of indicators of a potential crisis and of the occurrence of a crisis event on corporate cash holdings for both unregulated and regulated firms from 31 different countries. In contrast to existing studies, I perform this analysis on the basis of a long observation period (1997 to 2014 and 2003 to 2014, respectively), using multiple crisis indicators (early warning signals) and multiple crisis events. For regulated firms, this study makes use of a unique sample of country-specific regulatory information, collected by hand for 15 countries and converted into an ordinal scale based on the severity of the regulation. Regulated firms are selected from a single industry: Real Estate Investment Trusts (REITs). These firms invest in real estate properties and lease them to third parties. REITs that comply with the aforementioned regulations are exempt from income taxation and are penalized for a breach, which makes this industry particularly interesting for the analysis of capital structure decisions.
The results for regulated and unregulated firms are mostly inconclusive. I find no convincing evidence that the degree of regulation affects the level of cash holdings of regulated firms before and during a banking crisis. For unregulated firms, I find strong evidence that financially constrained firms hold more cash than unconstrained firms. Further, there is no real evidence that either financially constrained or unconstrained firms increase their cash holdings upon observing an early warning signal. In the case of a banking crisis, the results differ between univariate tests and panel regressions. In the univariate setting, I find evidence that both types of firms hold more cash during a banking crisis. In panel regressions, the effect is only evident for financially unconstrained firms from the US, and when controlling for financial stress, it is also apparent for financially constrained US firms. For firms from Europe, the results are predominantly inconclusive. For banking crises preceded by an early warning signal, there is evidence of an increase in cash holdings only for unconstrained US firms when controlling for financial stress.

In the present study, a non-motion-stabilized scanning Doppler lidar was operated on board RV Polarstern in the Arctic (June 2014) and Antarctic (December 2015 to January 2016). This is the first time that such a system measured on an icebreaker in the Antarctic. A method for motion correction of the data in post-processing is presented.
The wind calculation is based on vertical azimuth display (VAD) scans with eight directions that pass a quality control. Additionally, a method for an empirical signal-to-noise ratio (SNR) threshold is presented, which can be calculated for individual measurement set-ups. Lidar wind profiles are compared to a total of about 120 radiosonde profiles and also to wind measurements of the ship.
In comparison with the radio soundings, the lidar measurements generally show a small root mean square deviation (bias) of around 1 m/s (0.1 m/s) for wind speed and around 10° (1°) for wind direction. The post-processing of the non-motion-stabilized data reaches a quality comparable to studies with motion-stabilized systems.
Two case studies show that a flexible change of the SNR threshold can be beneficial in special situations. Furthermore, the studies reveal that short-lived low-level jets in the atmospheric boundary layer can be captured by lidar measurements with high temporal resolution, in contrast to routine radio soundings. The present study shows that a non-motion-stabilized Doppler lidar can be operated successfully on an icebreaker. It presents a processing chain including quality control tests and error quantification, which is useful for further measurement campaigns.
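The VAD retrieval fits a sinusoid to the radial velocities measured at the eight azimuth directions; with the beam elevation known, the wind components follow from a small least-squares problem. A minimal sketch — the beam geometry and the noiseless synthetic data are illustrative, not the campaign settings or the quality-controlled processing chain of the study:

```python
import numpy as np

def vad_wind(azimuth_deg, radial_vel, elevation_deg):
    """Least-squares fit of (u, v, w) from VAD radial velocities.
    Beam model: v_r = u*sin(az)*cos(el) + v*cos(az)*cos(el) + w*sin(el)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    G = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.full_like(az, np.sin(el))])
    (u, v, w), *_ = np.linalg.lstsq(G, radial_vel, rcond=None)
    speed = np.hypot(u, v)
    direction = np.degrees(np.arctan2(-u, -v)) % 360.0   # meteorological convention
    return u, v, w, speed, direction

# Synthetic check: eight beams, 75 deg elevation, known wind (u, v, w) = (3, -4, 0.2)
az = np.arange(0.0, 360.0, 45.0)
el = 75.0
vr = (3.0 * np.sin(np.radians(az)) * np.cos(np.radians(el))
      + (-4.0) * np.cos(np.radians(az)) * np.cos(np.radians(el))
      + 0.2 * np.sin(np.radians(el)))
u, v, w, speed, direction = vad_wind(az, vr, el)   # speed ≈ 5.0 m/s
```

With eight azimuths and three unknowns the system is overdetermined, which is what makes per-scan quality control of the residuals possible.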

In the context of accelerated global socio-environmental change, the Water-Energy-Food Nexus has received increasing attention within science and international politics by promoting integrated resource governance. This study explores the scientific nexus debates from a discourse analytical perspective to reveal knowledge and power relations as well as geographical settings of nexus research. We also investigate approaches to socio-nature relations that influence nexus research and subsequent political implications. Our findings suggest that the leading nexus discourse is dominated by natural scientific perspectives and a neo-Malthusian framing of environmental challenges. Accordingly, the promoted cross-sectoral nexus approach to resource governance emphasizes efficiency, security, future sustainability, and poverty reduction. Water, energy, and food are conceived as global trade goods that require close monitoring, management, and control, to be achieved via quantitative assessments and technological interventions. Within the less visible discourse, social scientific perspectives engage with the social, political, and normative elements of the Water-Energy-Food Nexus. These perspectives criticize the dominant nexus representation for its managerial, neoliberal, and utilitarian approach to resource governance. The managerial framing is critiqued for masking power relations and social inequalities, while alternative framings acknowledge the political nature of resource governance and socio-nature relations. The spatial dimensions of the nexus debate are also discussed. Notably, the nexus is largely shaped by western knowledge, yet applied mainly in specific regions of the Global South. In order for the nexus to achieve integrative solutions for sustainability, the debate needs to overcome its current discursive and spatial separations.
To this end, we need to engage more closely with alternative nexus discourses, embrace epistemic pluralism and encourage multi-perspective debates about the socio-nature relations we actually intend to promote.

A basic assumption of standard small area models is that the statistic of interest can be modelled through a linear mixed model with common model parameters for all areas in the study. The model can then be used to stabilize estimation. In some applications, however, there may be different subgroups of areas with specific relationships between the response variable and the auxiliary information. In this case, using a distinct model for each subgroup would be more appropriate than employing one model for all observations. If no suitable natural clustering variable exists, finite mixture regression models may represent a solution that "lets the data decide" how to partition areas into subgroups. In this framework, a set of two or more different models is specified, and the estimation of subgroup-specific model parameters is performed simultaneously with the estimation of subgroup identity, or the probability of subgroup identity, for each area. Finite mixture models thus offer a flexible approach to accounting for unobserved heterogeneity. Therefore, in this thesis, finite mixtures of small area models are proposed to account for the existence of latent subgroups of areas in small area estimation. More specifically, it is assumed that the statistic of interest is appropriately modelled by a mixture of K linear mixed models. Both mixtures of standard unit-level and standard area-level models are considered as special cases. The estimation of mixing proportions, area-specific probabilities of subgroup identity, and the K sets of model parameters via the EM algorithm for mixtures of mixed models is described. Finally, a finite mixture small area estimator is formulated as a weighted mean of the predictions from models 1 to K, with weights given by the area-specific probabilities of subgroup identity.
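The EM iteration described above alternates between computing observation-specific probabilities of subgroup identity (E-step) and weighted estimation of the K sets of model parameters (M-step). A minimal sketch for K = 2 simple linear regressions with known unit noise variance and without random effects — a deliberate simplification of the mixed-model case treated in the thesis; starting values are placed near the truth for a stable illustration, whereas real applications need multiple random starts:

```python
import numpy as np

def em_mixture_of_lines(x, y, beta, n_iter=100):
    """EM for a K=2 mixture of simple linear regressions y = a_k + b_k*x + noise,
    assuming unit noise variance. beta: two (intercept, slope) starting values."""
    beta = [np.asarray(b, dtype=float) for b in beta]
    pi = 0.5                                   # mixing proportion of component 0
    X = np.column_stack([np.ones_like(x), x])
    for _ in range(n_iter):
        # E-step: responsibilities from Gaussian residual likelihoods
        d0, d1 = y - X @ beta[0], y - X @ beta[1]
        m = np.minimum(d0**2, d1**2)           # subtract the minimum to avoid underflow
        l0 = pi * np.exp(-0.5 * (d0**2 - m))
        l1 = (1.0 - pi) * np.exp(-0.5 * (d1**2 - m))
        resp = l0 / (l0 + l1)
        # M-step: weighted least squares per component, update mixing proportion
        for k, r in enumerate((resp, 1.0 - resp)):
            beta[k] = np.linalg.solve(X.T @ (r[:, None] * X), X.T @ (r * y))
        pi = resp.mean()
    return beta, pi, resp

# Noiseless toy data: half the points on y = 1 + 2x, half on y = 8 - x
xs = np.tile(np.arange(10.0), 2)
ys = np.concatenate([1.0 + 2.0 * np.arange(10.0), 8.0 - np.arange(10.0)])
betas, pi, resp = em_mixture_of_lines(xs, ys, beta=[(1.1, 1.9), (7.9, -0.9)])
# betas[0] converges near (1, 2), betas[1] near (8, -1)
```

In the spirit of the thesis, the final small area prediction for each unit would then be the responsibility-weighted mean of the two component predictions.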

Acute social and physical stress interact to influence social behavior: the role of social anxiety
(2018)

Stress has been proven to have detrimental effects on physical and mental health. Owing to different tasks and study designs, the direct consequences of acute stress have been found to be wide-reaching: while some studies report prosocial effects, others report increases in antisocial behavior, and still others report no effect. To control for specific effects of different stressors and to consider the role of social anxiety in stress-related social behavior, we investigated the effects of social versus physical stress on the behavior of male participants with different levels of social anxiety. In a randomized, controlled two-by-two design we investigated the impact of social and physical stress on behavior in healthy young men. Physical and social stress each significantly increased various subjective stress measures, with no interaction effect. Cortisol was significantly increased by physical stress, and heart rate was modulated by physical and social stress as well as by their combination. Social anxiety modulated the subjective stress response but not the cortisol or heart rate response. With respect to behavior, our results show that social and physical stress interacted to modulate trust, trustworthiness, and sharing. While social stress and physical stress alone reduced prosocial behavior, a combination of the two stressor modalities could restore prosociality. Social stress alone reduced nonsocial risk behavior regardless of physical stress. Social anxiety was associated with higher subjective stress responses and higher levels of trust. Consequently, future studies will need to investigate further stressors and clarify their effects on social behavior in health and in social anxiety disorders.

The forward effect of testing refers to the finding that retrieval practice of previously studied information increases retention of subsequently studied other information. It has recently been hypothesized that the forward effect (partly) reflects the result of a reset-of-encoding (ROE) process. The proposal is that encoding efficacy decreases as study material accumulates, but testing of previously studied information resets the encoding process and makes the encoding of subsequently studied information as effective as the encoding of the previously studied information. The goal of the present study was to test the ROE hypothesis at the item level. An experiment is reported that examined the effects of testing, in comparison to restudy, on items' serial position curves. Participants studied three lists of items in each condition. In the testing condition, participants were tested immediately on non-target lists 1 and 2, whereas in the restudy condition, they restudied lists 1 and 2. In both conditions, participants were tested immediately on target list 3. Influences of condition and of items' serial learning position on list 3 recall were analyzed. The results showed the forward effect of testing and, furthermore, that this effect varies with items' serial list position: early target list items at list primacy positions showed a larger enhancement effect than middle and late target list items at non-primacy positions. The results are consistent with the ROE hypothesis at the item level. The generalizability of the ROE hypothesis across different experimental tasks, such as the list-method directed-forgetting task, is discussed.

Background: The growing production and use of engineered silver nanoparticles (AgNP) in industry and private households make increasing concentrations of AgNP in the environment unavoidable. Although the harmful effects of AgNP on pivotal bacteria-driven soil functions are already known, information about the impact of AgNP on soil bacterial community structure is scarce. Hence, the aim of this study was to reveal the long-term effects of AgNP on major soil bacterial phyla in a loamy soil. The study was conducted as a laboratory incubation experiment over a period of one year using a loamy soil and AgNP concentrations ranging from 0.01 to 1 mg AgNP/kg soil. Effects were quantified using taxon-specific 16S rRNA qPCR.
Results: Short-term exposure to AgNP at the environmentally relevant concentration of 0.01 mg AgNP/kg caused significant positive effects on Acidobacteria (44.0%), Actinobacteria (21.1%), and Bacteroidetes (14.6%), whereas the beta-Proteobacteria population was reduced by 14.2% relative to the control (p ≤ 0.05). After one year, exposure to 0.01 mg AgNP/kg diminished Acidobacteria (p = 0.007), Bacteroidetes (p = 0.005), and beta-Proteobacteria (p < 0.001) by 14.5, 10.1, and 13.9%, respectively. Actino- and alpha-Proteobacteria were statistically unaffected by AgNP treatments after one year of exposure. Furthermore, a statistically significant regression and correlation analysis between silver toxicity and exposure time confirmed loamy soils as a sink for silver nanoparticles and their concomitant silver ions.
Conclusions: Even very low concentrations of AgNP may impair autotrophic ammonia oxidation (nitrification), organic carbon transformation, and chitin degradation in soils by exerting harmful effects on the responsible bacterial phyla.

Species can show strong variation in local abundance across their ranges. Recent analyses suggested that variation in abundance can be related to environmental suitability, as the highest abundances are often observed in populations living in the most suitable areas. However, there is limited information on the mechanisms through which variation in environmental suitability determines abundance. We analysed populations of the microendemic salamander Hydromantes flavus and tested several hypotheses on potential relationships linking environmental suitability to population parameters. For multiple populations across the whole species range, we assessed suitability using species distribution models, and measured density, activity level, food intake, and body condition index. In high-suitability sites, the density of salamanders was up to 30 times higher than in the least suitable ones. Variation in activity levels and population performance can explain such variation in abundance. In high-suitability sites, salamanders were active close to the surface and showed a low frequency of empty stomachs. Furthermore, when taking seasonal variation into account, body condition was better in the most suitable sites. Our results show that the strong relationship between environmental suitability and population abundance can be mediated by the variation of parameters strongly linked to individual performance and fitness.

Foundation-owned enterprises are companies that are wholly or partly owned by a charitable or private foundation. The number of foundation-owned enterprises in Germany has risen considerably in recent years. Well-known German companies such as Aldi, Bosch, Bertelsmann, LIDL and Würth are owned by foundations. Some of them, such as Fresenius, ZF Friedrichshafen and Zeiss, are even listed on the stock exchange. Most foundation-owned enterprises come into being when company founders or entrepreneurial families transfer their company to a foundation instead of bequeathing or selling it.
The motives for this are manifold and may be family-related (e.g. childlessness, avoidance of family disputes), company-related (e.g. the possibility of long-term planning thanks to a stable ownership structure) or tax-related (avoidance or reduction of inheritance tax), or they may be motivated by the person of the founder (the possibility of shaping the company through the foundation even after one's own death). Because foundation-owned enterprises usually emerge from family businesses, research often does not differentiate between family businesses and foundation-owned enterprises. For this reason, this dissertation first uses the three-circle model of family business to set out the differences between foundation-owned and family enterprises. The results show that only a very small number of foundation-owned enterprises closely resemble classic family businesses; most of them differ from family businesses quite substantially. These findings make clear that foundation-owned enterprises should be treated as a separate field of research.
Since the group of foundation-owned enterprises is itself highly heterogeneous, performance differences within this group are examined next. For this purpose, data on 142 German foundation-owned enterprises for the years 2006-2016 were collected and analysed by means of linear regression. The results show significant differences between the various types: companies held by a charitable foundation perform significantly worse than companies with a private foundation as shareholder.
The next step examines the group of listed foundation-owned enterprises. An event study tests how a foundation as owner of a listed company affects shareholder value. The results show that a reduction of a foundation's stake has a positive effect on shareholder value; the capital market accordingly assesses foundations negatively. Owing to the diverging goals of foundation and company, the link between the two harbours potential conflicts and challenges for the people involved. Using a qualitative, explorative approach based on interviews, a model is developed that illustrates the potential conflicts in foundation-owned enterprises using the example of the dual foundation (Doppelstiftung).
In a final step, recommendations for action are developed in the form of a draft corporate governance code, intended to help (potential) founders either avoid possible conflicts or resolve existing problems.
The results of this dissertation are relevant for theory and practice. From a theoretical perspective, the value of these studies lies in enabling researchers to distinguish more clearly between foundation-owned and family enterprises in the future, and the work advances the current state of research on foundation-owned enterprises. Moreover, this dissertation offers potential founders in particular an overview of the various structuring options and of the advantages and disadvantages these constructions entail. The recommendations enable founders to identify potential risks in advance and to avoid them.

The changing views on the evolutionary relationships of extant Salamandridae (Amphibia: Urodela)
(2018)

The phylogenetic relationships among members of the family Salamandridae have been repeatedly investigated over the last 90 years, with changing character and taxon sampling. We review the changing composition and the phylogenetic position of salamandrid genera and species groups and add a new phylogeny based exclusively on sequences of nuclear genes. Salamandrina often changed its position depending on the characters used. It was included several times in a clade together with the primitive newts (Echinotriton, Pleurodeles, Tylototriton) due to their seemingly ancestral morphology. The latter were often inferred as a monophyletic clade. Respective monophyly was almost consistently established in all molecular studies for true salamanders (Chioglossa, Lyciasalamandra, Mertensiella, Salamandra), modern Asian newts (Cynops, Laotriton, Pachytriton, Paramesotriton) and modern New World newts (Notophthalmus, Taricha). Reciprocal non-monophyly has been established through molecular studies for the European mountain newts (Calotriton, Euproctus) and the modern European newts (Ichthyosaura, Lissotriton, Neurergus, Ommatotriton, Triturus), since Calotriton was identified as the sister lineage of Triturus. In pre-molecular studies, their respective monophyly had almost always been assumed, mainly because of a complex courtship behaviour shared by their respective members. Our nuclear tree is nearly identical to a mito-genomic tree, with all but one node being highly supported. The major difference concerns the position of Calotriton, which is no longer nested within the modern European newts. This has implications for the evolution of courtship behaviour of European newts. Within modern European newts, Ichthyosaura and Lissotriton changed their position compared to the mito-genomic tree.
Previous molecular trees based on seemingly large nuclear data sets, but analysed together with mitochondrial data, did not reveal monophyly of modern European newts, since taxon sampling and nuclear gene coverage were too poor to obtain conclusive results. We therefore conclude that mitochondrial and nuclear data should be analysed on their own.

Reptiles belong to a taxonomic group characterized by increasing worldwide population declines. However, only in comparatively recent years has public interest in these taxa increased, and conservation measures are starting to show results. While many factors contribute to these declines, environmental pollution, especially in the form of pesticides, has seen a strong increase in the last few decades and is nowadays considered a main driver of reptile diversity loss. In light of the above, and given that reptiles are extremely underrepresented in ecotoxicological studies regarding the effects of plant protection products, this thesis aims at studying the impacts of pesticide exposure in reptiles, using the Common wall lizard (Podarcis muralis) as a model species. In a first approach, I evaluated the risk of pesticide exposure for reptile species within the European Union, as a means to detect species with above-average exposure probabilities and to identify especially sensitive reptile orders. While helpful to detect species at risk, a risk evaluation is only the first step towards addressing this problem. It is thus indispensable to identify effects of pesticide exposure in wildlife. For this, the use of enzymatic biomarkers has become a popular method to study sub-individual responses and to gain information regarding the mode of action of chemicals. However, current methodologies are very invasive. Thus, in a second step, I explored the use of buccal swabs as a minimally invasive method to detect changes in enzymatic biomarker activity in reptiles, as an indicator for pesticide uptake and effects at the sub-individual level. Finally, the last part of this thesis focuses on field data regarding pesticide exposure and its effects on reptile wildlife. Here, a method to determine pesticide residues in food items of the Common wall lizard was established, as a means to generate data for future dietary risk assessments.
Subsequently, a field study was conducted with the aim to describe actual effects of pesticide exposure on reptile populations at different levels.

The Harmonic Faber Operator
(2018)

P. K. Suetin points out at the beginning of his monograph "Faber
Polynomials and Faber Series" that Faber polynomials play an important
role in modern approximation theory of a complex variable, as they
are used in representing analytic functions in simply connected domains,
and many theorems on approximation of analytic functions are proved
with their help [50].
The Faber polynomials were first discovered by G. Faber in 1903. It was Faber's aim to find a generalisation of the Taylor
series of holomorphic functions in the open unit disc D
in the following way. As any holomorphic function in D
has a Taylor series representation
f(z)=\sum_{\nu=0}^{\infty}a_{\nu}z^{\nu} (z\in D)
converging locally uniformly inside D, for a simply connected
domain G, Faber wanted to determine a system of polynomials (Q_n)
such that each function f being holomorphic in G can be expanded
into a series
f=\sum_{\nu=0}^{\infty}b_{\nu}Q_{\nu}
converging locally uniformly inside G. Having this goal in mind,
Faber considered simply connected domains bounded by an analytic Jordan
curve. He constructed a system of polynomials (F_n)
with this property. These polynomials F_n were named after him
as Faber polynomials. In the preface of [50],
a detailed summary of results concerning Faber polynomials and results
obtained by the aid of them is given.
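For orientation, the classical construction can be summarised as follows (a standard formulation from the literature, not a claim about the notation used in the thesis): if \Psi maps the exterior of the closed unit disc conformally onto the exterior of K, with expansion \Psi(w)=cw+c_{0}+c_{1}w^{-1}+\dots and c>0, the Faber polynomials F_{n} of K are generated by

```latex
\frac{\Psi'(w)}{\Psi(w)-z}
  \;=\;\sum_{n=0}^{\infty}\frac{F_{n}(z)}{w^{n+1}},
  \qquad z\in K,\ |w| \text{ sufficiently large},
```

where F_{n} is a polynomial of degree exactly n.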
An important application of Faber polynomials is e.g. the transfer
of known assertions concerning polynomial approximation of functions
belonging to the disc algebra to results of the approximation of functions
being continuous on a compact continuum K which contains at least
two points and has a connected complement and being holomorphic in
the interior of K. In this field, the Faber operator
denoted by T turns out to be a powerful tool (for
an introduction, see e.g. D. Gaier's monograph). It
assigns a polynomial of degree at most n given in the monomial
basis \sum_{\nu=0}^{n}a_{\nu}z^{\nu} with a polynomial of degree
at most n given in the basis of Faber polynomials \sum_{\nu=0}^{n}a_{\nu}F_{\nu}.
If the Faber operator is continuous with respect to the uniform norms,
it has a unique continuous extension to an operator mapping the disc
algebra onto the space of functions being continuous on the whole
compact continuum and holomorphic in its interior. For all f being
element of the disc algebra and all polynomials P, via the obvious
estimate for the uniform norms
||T(f)-T(P)||<= ||T|| ||f-P||,
it can be seen that the original task of approximating F=T(f)
by polynomials is reduced to the polynomial approximation of the function
f. Therefore, the question arises under which conditions the Faber
operator is continuous and surjective. A fundamental result in this
regard was established by J. M. Anderson and J. Clunie who showed
that if the compact continuum is bounded by a rectifiable Jordan curve
with bounded boundary rotation and free from cusps, then the Faber
operator with respect to the uniform norms is a topological isomorphism.
Now, let f be a harmonic function in D.
Similarly to the above, we find that f has a uniquely determined representation
f=\sum_{\nu=-\infty}^{\infty}a_{\nu}p_{\nu}
converging locally uniformly inside D, where p_{n}(z)=z^{n}
for n\in\N_{0} and p_{-n}(z)=\overline{z}^{n}
for n\in\N. One may ask whether there is an analogue for
harmonic functions on simply connected domains G. Indeed, for a
domain G bounded by an analytic Jordan curve, the conjecture that
each function f being harmonic in G has a uniquely determined
representation
f=\sum_{\nu=-\infty}^{\infty}b_{\nu}F_{\nu}
where F_{-n}(z)=\overline{F_{n}(z)} for n\in\N,
converging locally uniformly inside G, holds true.
Let now K be a compact continuum containing at least two points
and having a connected complement. A main component of this thesis
will be the examination of the harmonic Faber operator mapping a harmonic
polynomial given in the basis of the harmonic monomials \sum_{\nu=-n}^{n}a_{\nu}p_{\nu}
to a harmonic polynomial given as \sum_{\nu=-n}^{n}a_{\nu}F_{\nu}.
If this operator, which is based on an idea of J. Müller,
is continuous with respect to the uniform norms, it has a unique continuous
extension to an operator mapping the functions being continuous on
\partial D onto the continuous functions on K being
harmonic in the interior of K. Harmonic Faber polynomials and the
harmonic Faber operator will be the objects accompanying us throughout
our whole discussion.
After giving an overview of the notation and certain tools we
will use in our considerations in the first chapter, we begin our studies
with an introduction to the Faber operator and the harmonic Faber
operator. We start modestly and consider domains bounded by an analytic
Jordan curve. In Section 2, as a first
result, we will show that, for such a domain G, the harmonic Faber
operator has a unique continuous extension to an operator mapping
the space of the harmonic functions in D onto the space
of the harmonic functions in G, and moreover, the harmonic Faber
operator is an isomorphism with respect to the topologies of locally
uniform convergence. In the further sections of this chapter, we examine
the behaviour of the (harmonic) Faber operator on certain function
spaces.
In the third chapter, we leave the situation of compact continua bounded
by an analytic Jordan curve. Instead we consider closures of domains
bounded by Jordan curves having a Dini continuous curvature. With
the aid of the concept of compact operators and the Fredholm alternative,
we are able to show that the harmonic Faber operator is a topological
isomorphism.
Since, in particular, the main result of the third chapter holds true
for closures K of domains bounded by analytic Jordan curves, we
can make use of it to obtain new results concerning the approximation
of functions being continuous on K and harmonic in the interior
of K by harmonic polynomials. To do so, we develop techniques applied
by L. Frerick and J. Müller in [11] and adjust them to
our setting. In this way, we can transfer results about the classic Faber operator
to the harmonic Faber operator.
In the last chapter, we will use the theory of harmonic Faber polynomials
to solve certain Dirichlet problems in the complex plane. We pursue
two different approaches: First, with a similar philosophy as in [50],
we develop a procedure to compute the coefficients of a series \sum_{\nu=-\infty}^{\infty}c_{\nu}F_{\nu}
converging uniformly to the solution of a given Dirichlet problem.
Later, we will point out how semi-infinite programming with harmonic
Faber polynomials as ansatz functions can be used to get an approximate
solution of a given Dirichlet problem. We cover both approaches first
from a theoretical point of view before we have a focus on the numerical
implementation of concrete examples. As an application of the numerical
computations, we obtain visualisations of the Dirichlet problems
concerned, rounding out our discussion of the harmonic
Faber polynomials and the harmonic Faber operator.

Optimal Control of Partial Integro-Differential Equations and Analysis of the Gaussian Kernel
(2018)

An important field of applied mathematics is the simulation of complex financial, mechanical, chemical, physical or medical processes with mathematical models. In addition to the pure modeling of the processes, the simultaneous optimization of an objective function by changing the model parameters is often the actual goal. Models in fields such as finance, biology or medicine benefit from this optimization step.
While many processes can be modeled using an ordinary differential equation (ODE), partial differential equations (PDEs) are needed to optimize heat conduction and flow characteristics, the spreading of tumor cells in tissue, as well as option prices. A partial integro-differential equation (PIDE) is a partial differential equation involving an integral operator, e.g., the convolution of the unknown function with a given kernel function. PIDEs occur, for example, in models that simulate adhesive forces between cells or option prices with jumps.
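Schematically, such a PIDE couples local derivatives with a convolution term. A generic example (purely illustrative; the concrete equations and coefficients studied in the thesis may differ) is

```latex
\partial_{t}u(x,t)-\Delta u(x,t)
  \;=\;\int_{\mathbb{R}^{d}}G_{\sigma}(x-y)\,u(y,t)\,dy,
\qquad
G_{\sigma}(x)=(2\pi\sigma^{2})^{-d/2}\,e^{-|x|^{2}/(2\sigma^{2})},
```

where the convolution with the Gaussian kernel G_{\sigma} makes the equation nonlocal: the evolution at a point x depends on the values of u everywhere.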
In each of the two parts of this thesis, a certain PIDE is the main object of interest. In the first part, we study a semilinear PIDE-constrained optimal control problem with the aim to derive necessary optimality conditions. In the second, we analyze a linear PIDE that includes the convolution of the unknown function with the Gaussian kernel.

Early life adversity (ELA) poses a high risk for developing major health problems in adulthood including cardiovascular and infectious diseases and mental illness. However, the fact that ELA-associated disorders first become manifest many years after exposure raises questions about the mechanisms underlying their etiology. This thesis focuses on the impact of ELA on startle reflexivity, physiological stress reactivity and immunology in adulthood.
The first experiment investigated the impact of parental divorce on affective processing. A special block design of the affective startle modulation paradigm revealed blunted startle responsiveness during the presentation of aversive stimuli in participants with experience of parental divorce. Nurture context potentiated startle in these participants, suggesting that visual cues of childhood-related content activate protective behavioral responses. The findings provide evidence for the view that parental divorce leads to altered processing of affective context information in early adulthood.
A second investigation was conducted to examine the link between aging of the immune system and long-term consequences of ELA. In a cohort of healthy young adults, who were institutionalized early in life and subsequently adopted, higher levels of T cell senescence were observed compared to parent-reared controls. Furthermore, the results suggest that ELA increases the risk of cytomegalovirus infection in early childhood, thereby mediating the effect of ELA on T cell-specific immunosenescence.
The third study addresses the effect of ELA on stress reactivity. An extended version of the Cold Pressor Test combined with a cognitive challenging task revealed blunted endocrine response in adults with a history of adoption while cardiovascular stress reactivity was similar to control participants. This pattern of response separation may best be explained by selective enhancement of central feedback-sensitivity to glucocorticoids resulting from ELA, in spite of preserved cardiovascular/autonomic stress reactivity.

The dissertation deals with methods to improve design-based and model-assisted estimation techniques for surveys in a finite population framework. The focus is on the development of the statistical methodology as well as their implementation by means of tailor-made numerical optimization strategies. In that regard, the developed methods aim at computing statistics for several potentially conflicting variables of interest at aggregated and disaggregated levels of the population on the basis of one single survey. The work can be divided into two main research questions, which are briefly explained in the following sections.
First, an optimal multivariate allocation method is developed taking into account several stratification levels. This approach results in a multi-objective optimization problem due to the simultaneous consideration of several variables of interest. In preparation for the numerical solution, several scalarization and standardization techniques are presented, which represent the different preferences of potential users. In addition, it is shown that by solving the problem scalarized with a weighted sum for all combinations of weights, the entire Pareto frontier of the original problem can be generated. By exploiting the special structure of the problem, the scalarized problems can be efficiently solved by a semismooth Newton method. In order to apply this numerical method to other scalarization techniques as well, an alternative approach is suggested, which traces the problem back to the weighted sum case. To address regional estimation quality requirements at multiple stratification levels, the potential use of upper bounds for regional variances is integrated into the method. In addition to restrictions on regional estimates, the method enables the consideration of box-constraints for the stratum-specific sample sizes, allowing minimum and maximum stratum-specific sampling fractions to be defined.
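To illustrate the weighted-sum scalarization idea only: the sketch below combines per-variable Neyman scores into a single compromise allocation and enforces box constraints by clipping and rescaling. This naive heuristic merely stands in for the semismooth Newton approach developed in the thesis; all names and numbers are invented for illustration.

```python
import numpy as np

def compromise_allocation(N, S, weights, n, n_min, n_max):
    """Weighted-sum compromise of Neyman allocations across several variables,
    with box constraints enforced by clipping and renormalisation.

    N: (H,) stratum sizes; S: (H, V) stratum standard deviations per variable;
    weights: (V,) scalarization weights; n: total sample size."""
    # Combined Neyman-type score per stratum: N_h * sqrt(sum_v w_v * S_hv^2)
    score = N * np.sqrt(S ** 2 @ weights)
    n_h = n * score / score.sum()
    for _ in range(100):                     # iteratively repair box violations
        n_h = np.clip(n_h, n_min, n_max)
        free = (n_h > n_min) & (n_h < n_max)
        if not free.any():
            break
        rest = n - n_h[~free].sum()          # budget left for unconstrained strata
        n_h[free] *= rest / n_h[free].sum()
    return n_h
```

The clip-and-rescale loop is far cruder than solving the scalarized optimality conditions exactly, but it shows how the weighted sum turns a multivariate allocation into a single-objective one.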
In addition to the allocation method, a generalized calibration method is developed, which is supposed to achieve coherent and efficient estimates at different stratification levels. The developed calibration method takes into account a very large number of benchmarks at different stratification levels, which may be obtained from different sources such as registers, paradata or other surveys using different estimation techniques. In order to incorporate the heterogeneous quality and the multitude of benchmarks, a relaxation of selected benchmarks is proposed. In that regard, predefined tolerances are assigned to problematic benchmarks at low aggregation levels in order to avoid an exact fulfillment. In addition, the generalized calibration method allows the use of box-constraints for the correction weights in order to avoid an extremely high variation of the weights. Furthermore, a variance estimation by means of a rescaling bootstrap is presented.
Both developed methods are analyzed and compared with existing methods in extensive simulation studies on the basis of a realistic synthetic data set of all households in Germany. Due to the similar requirements and objectives, both methods can be successively applied to a single survey in order to combine their efficiency advantages. In addition, both methods can be solved in a time-efficient manner using very comparable optimization approaches. These are based on transformations of the optimality conditions. The dimension of the resulting system of equations is ultimately independent of the dimension of the original problem, which enables the application even for very large problem instances.

Economic growth theory analyses which factors affect economic growth
and how growth can be sustained. A popular neoclassical growth model
is the Ramsey-Cass-Koopmans model, which aims to determine how much
of its income a nation or an economy should save in order to maximize its
welfare.
In this thesis, we present and analyze an extended capital accumulation equation of a spatial version of the Ramsey model, balancing diffusive and agglomerative effects. We model the capital mobility in space via a nonlocal
diffusion operator which allows for jumps of the capital stock from one
location to another. Moreover, this operator smooths out heterogeneities in
the factor distributions more slowly, which generates a more realistic behavior of
capital flows. In addition to that, we introduce an endogenous productivity-production
operator which depends on time and on the capital distribution
in space. This operator models the technological progress of the economy.
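Such nonlocal diffusion of capital is commonly written with a jump kernel; a generic textbook form (given only for illustration, the thesis's operator may differ in detail) is

```latex
(\mathcal{L}u)(x,t)
  \;=\;\int_{\Omega} J(x-y)\,\bigl(u(y,t)-u(x,t)\bigr)\,dy,
```

where J \ge 0 is a symmetric kernel: capital jumps from location y to location x at rate J(x-y), so mass is redistributed without a local flux, in contrast to the Laplacian of classical diffusion.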
The resulting mathematical model is an optimal control problem under a
semilinear parabolic integro-differential equation with initial and volume constraints, which are a nonlocal analog to local boundary conditions, and box-constraints on the state and the control variables. In this thesis, we consider
this problem on a bounded and unbounded spatial domain, in both cases with
a finite time horizon. We derive existence results of weak solutions for the
capital accumulation equations in both settings, and we prove the existence
of a Ramsey equilibrium in the unbounded case. Moreover, we solve the
optimal control problem numerically and discuss the results in the economic
context.

This dissertation is dedicated to the analysis of the stability of portfolio risk and the impact of European regulation introducing risk-based classifications for investment funds.
The first paper examines the relationship between portfolio size and the stability of mutual fund risk measures, presenting evidence for economies of scale in risk management. In a unique sample of 338 fund portfolios we find that the volatility of risk numbers decreases for larger funds. This finding holds for dispersion as well as tail risk measures. Further analyses across asset classes provide evidence for the robustness of the effect for balanced and fixed income portfolios. However, a size effect did not emerge for equity funds, suggesting that equity fund managers simply scale their strategy up as they grow. Analyses conducted on the differences in risk stability between tail risk measures and volatilities reveal that smaller funds show higher discrepancies in that respect. In contrast to the majority of prior studies on the basis of ex-post time series risk numbers, this study contributes to the literature by using ex-ante risk numbers based on the actual assets and de facto portfolio data.
The second paper examines the influence of European legislation regarding the risk classification of mutual funds. We conduct analyses on a set of worldwide equity indices and find that a strategy based on the long-term volatility, as imposed by the Synthetic Risk Reward Indicator (SRRI), would lead to substantial variations in exposures, ranging from short phases of very high leverage to long periods of under-investment that would be required to keep the risk classes. In some cases, funds will be forced to migrate to higher risk classes due to limited means to reduce volatilities after crisis events. In other cases they might have to migrate to lower risk classes or increase their leverage to extreme amounts. Overall, we find that if the SRRI creates a binding mechanism for fund managers, it will create substantial interference with the core investment strategy and may incur substantial deviations from it. Furthermore, due to the forced migrations, the SRRI degenerates to a passive indicator.
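The SRRI maps a fund's annualised return volatility to one of seven risk classes via fixed volatility bands. A minimal sketch of that mapping follows; the band limits are the CESR guideline values as I recall them, so verify against the official table before relying on them.

```python
# SRRI volatility bands (upper bounds of annualised volatility, in percent),
# as stated in the CESR guidelines on the SRRI; quoted from memory here.
SRRI_UPPER_BOUNDS = [0.5, 2.0, 5.0, 10.0, 15.0, 25.0]

def srri_class(annual_vol_pct):
    """Map an annualised return volatility (in percent) to the SRRI class 1..7."""
    for cls, upper in enumerate(SRRI_UPPER_BOUNDS, start=1):
        if annual_vol_pct < upper:
            return cls
    return 7   # anything at or above the last bound falls into class 7
```

A fund whose volatility drifts across one of these fixed bounds must migrate to another class, which is exactly the mechanism the paper argues can force exposure adjustments.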
The third paper examines the impact of this volatility-based fund classification on portfolio performance. Using historical data on equity indices, we find initially that a strategy based on long-term portfolio volatility, as imposed by the Synthetic Risk Reward Indicator (SRRI), yields better Sharpe Ratios (SRs) and Buy-and-Hold Returns (BHRs) for the investment strategies matching the risk classes. Accounting for the Fama-French factors reveals no significant alphas for the vast majority of the strategies. In our simulation study, where volatility was modelled through a GJR(1,1) model, we find no significant difference in mean returns, but significantly lower SRs for the volatility-based strategies. These results were confirmed in robustness checks using alternative models and timeframes. Overall, we present evidence which suggests that neither the higher leverage induced by the SRRI nor the potential protection in downside markets pays off on a risk-adjusted basis.
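In a GJR(1,1) model, negative shocks raise next-period variance more than positive shocks of the same size (the leverage effect). A minimal simulation sketch follows; the parameter values are arbitrary illustrations, not those used in the study.

```python
import numpy as np

def simulate_gjr11(n, omega=1e-6, alpha=0.05, gamma=0.08, beta=0.90, seed=0):
    """Simulate n returns from a GJR(1,1) process:
    var_t = omega + (alpha + gamma * 1{r_{t-1} < 0}) * r_{t-1}^2 + beta * var_{t-1}."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    # start at the unconditional variance (requires alpha + gamma/2 + beta < 1)
    var = omega / (1.0 - alpha - gamma / 2.0 - beta)
    for t in range(n):
        r[t] = np.sqrt(var) * rng.standard_normal()
        leverage = gamma if r[t] < 0 else 0.0     # extra weight on negative shocks
        var = omega + (alpha + leverage) * r[t] ** 2 + beta * var
    return r
```

Feeding such simulated paths into a volatility-band rule like the SRRI is one way to reproduce the kind of class-migration experiments the paper describes.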

The implicit power motive is one of the most researched motives in motivational
psychology—at least in adults. Children have rarely been subject to investigation and there
are virtually no results on behavioral and affective correlates of the implicit power motive in
children. As behavior and affect are important components of conceptual validation, the
empirical data in this dissertation focused on identifying three correlates, namely resource
control behavior (study 1), power stress (study 2), and persuasive behavior (study 3). In each
study, the implicit power motive was measured via the Picture Story Exercise, using an
adapted version for children. Children across samples were between 4 and 11 years old.
Results from study 1 and 2 showed that children’s power-related behavior corresponded with
evidence from adult samples: children with a high implicit power motive secure attractive
resources and show negative reactions to a thwarted attempt to exert influence. Study 3
contradicted existing evidence with adults in that children’s persuasive behavior was not
associated with nonverbal, but with verbal strategies of persuasion. Despite this inconsistency,
these results are, together with the validation of a child-friendly Picture Story Exercise
version, an important step toward further investigating and confirming the concept of the implicit
power motive and how to measure it in children.

A matrix A is called completely positive if there exists an entrywise nonnegative matrix B such that A = BB^T. These matrices can be used to obtain convex reformulations of for example nonconvex quadratic or combinatorial problems. One of the main problems with completely positive matrices is checking whether a given matrix is completely positive. This is known to be NP-hard in general. rnrnFor a given matrix completely positive matrix A, it is nontrivial to find a cp-factorization A=BB^T with nonnegative B since this factorization would provide a certificate for the matrix to be completely positive. But this factorization is not only important for the membership to the completely positive cone, it can also be used to recover the solution of the underlying quadratic or combinatorial problem.rnrnIn addition, it is not a priori known how many columns are necessary to generate a cp-factorization for the given matrix. The minimal possible number of columns is called the cp-rank of A and so far it is still an open question how to derive the cp-rank for a given matrix. Some facts on completely positive matrices and the cp-rank will be given in Chapter 2.rnrnMoreover, in Chapter 6, we will see a factorization algorithm, which, for a given completely positive matrix A and a suitable starting point, computes the nonnegative factorization A=BB^T. The algorithm therefore returns a certificate for the matrix to be completely positive. As introduced in Chapter 3, the fundamental idea of the factorization algorithm is to start from an initial square factorization which is not necessarily entrywise nonnegative, and extend this factorization to a matrix for which the number of columns is greater than or equal to the cp-rank of A. 
Then it is the goal to transform this generated factorization into a cp-factorization.rnrnThis problem can be formulated as a nonconvex feasibility problem, as shown in Section 4.1, and solved by a method which is based on alternating projections, as proven in Chapter 6.rnrnOn the topic of alternating projections, a survey will be given in Chapter 5. Here we will see how to apply this technique to several types of sets like subspaces, convex sets, manifolds and semialgebraic sets. Furthermore, we will see some known facts on the convergence rate for alternating projections between these types of sets. Considering more than two sets yields the so called cyclic projections approach. Here some known facts for subspaces and convex sets will be shown. Moreover, we will see a new convergence result on cyclic projections among a sequence of manifolds in Section 5.4.rnrnIn the context of cp-factorizations, a local convergence result for the introduced algorithm will be given. This result is based on the known convergence for alternating projections between semialgebraic sets.rnrnTo obtain cp-facrorizations with this first method, it is necessary to solve a second order cone problem in every projection step, which is very costly. Therefore, in Section 6.2, we will see an additional heuristic extension, which improves the numerical performance of the algorithm. Extensive numerical tests in Chapter 7 will show that the factorization method is very fast in most instances. In addition, we will see how to derive a certificate for the matrix to be an element of the interior of the completely positive cone.rnrnAs a further application, this method can be extended to find a symmetric nonnegative matrix factorization, where we consider an additional low-rank constraint. Here again, the method to derive factorizations for completely positive matrices can be used, albeit with some further adjustments, introduced in Section 8.1. 
Moreover, we will see that even for the general case of deriving a nonnegative matrix factorization for a given rectangular matrix A, the key aspects of the completely positive factorization approach can be used. To this end, it becomes necessary to extend the idea of finding a completely positive factorization so that it can be used for rectangular matrices. This yields an applicable algorithm for nonnegative matrix factorization in Section 8.2.

Numerical results for this approach suggest that the presented algorithms and techniques to obtain completely positive matrix factorizations can be extended to general nonnegative factorization problems.
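As a minimal illustration of the definition above (not of the thesis's alternating-projections algorithm), the following Python sketch checks a candidate certificate: given A and an entrywise nonnegative B, it verifies that A = BB^T. The toy matrices are hypothetical examples chosen here.

```python
# Illustration of the definition only (not the factorization algorithm of
# this thesis): A is completely positive if A = B B^T for some entrywise
# nonnegative B; exhibiting such a B certifies membership.

def mat_mul_T(B):
    """Compute B B^T for a matrix given as a list of rows."""
    n, k = len(B), len(B[0])
    return [[sum(B[i][t] * B[j][t] for t in range(k)) for j in range(n)]
            for i in range(n)]

def is_cp_certificate(A, B, tol=1e-9):
    """Check that B is entrywise nonnegative and that B B^T reproduces A."""
    if any(x < -tol for row in B for x in row):
        return False
    BBt = mat_mul_T(B)
    return all(abs(A[i][j] - BBt[i][j]) <= tol
               for i in range(len(A)) for j in range(len(A)))

# Toy example: B has 2 nonnegative columns, so A = B B^T is completely
# positive with cp-rank at most 2.
B = [[1.0, 1.0],
     [0.0, 2.0]]
A = mat_mul_T(B)                # A = [[2, 2], [2, 4]]
print(is_cp_certificate(A, B))  # True
```

Note that the hard direction is the converse: given only A, producing such a B is exactly the nontrivial factorization problem the thesis addresses.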

We will consider discrete dynamical systems (X,T) which consist of a state space X and a linear operator T acting on X. Given a state x in X at time zero, its state at time n is determined by the n-th iterate T^n(x). We are interested in the long-term behaviour of this system; that is, we want to know how the sequence (T^n(x))_(n in N) behaves for increasing n and x in X. In the first chapter, we will sum up the relevant definitions and results of linear dynamics. In particular, in topological dynamics the notions of hypercyclic, frequently hypercyclic and mixing operators will be presented. In the setting of measurable dynamics, the most important definitions will be those of weakly and strongly mixing operators. If U is an open set in the (extended) complex plane containing 0, we can define the Taylor shift operator on the space H(U) of functions f holomorphic in U as Tf(z) = (f(z) - f(0))/z if z is not equal to 0, and Tf(0) = f'(0) otherwise. In the second chapter, we will start examining the Taylor shift on H(U) endowed with the topology of locally uniform convergence. Depending on the choice of U, we will study whether or not the Taylor shift is weakly or strongly mixing in the Gaussian sense. Next, we will consider Banach spaces of functions holomorphic on the unit disc D. The first section of this chapter will sum up the basic properties of Bergman and Hardy spaces in order to analyse the dynamical behaviour of the Taylor shift on these Banach spaces in the next part. In the third section, we study the space of Cauchy transforms of complex Borel measures on the unit circle, first endowed with the quotient norm of the total variation and then with a weak-* topology. While the Taylor shift is not even hypercyclic in the first case, we show that it is mixing in the latter case.
In Chapter 4, we will first introduce Bergman spaces A^p(U) for general open sets and provide approximation results which will be needed in the next chapter, where we examine the dynamical properties of the Taylor shift on these spaces. In particular, for 1<=p<2 we will find sufficient conditions for the Taylor shift to be weakly mixing or strongly mixing in the Gaussian sense. For p>=2, we consider specific Cauchy transforms in order to determine open sets U such that the Taylor shift is mixing on A^p(U). In both sections, we will illustrate the results with appropriate examples. Finally, we apply our results to universal Taylor series. The results of Chapter 5 about the Taylor shift allow us to consider the behaviour of the partial sums of the Taylor expansion of functions in general Bergman spaces outside their disc of convergence.
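Viewed through Taylor coefficients, the operator defined above is simply the backward shift: if f has coefficients (a_0, a_1, a_2, ...), then Tf = (f - f(0))/z has coefficients (a_1, a_2, a_3, ...), with Tf(0) = f'(0) = a_1. A small Python sketch on truncated coefficient sequences (an illustration only, not part of the thesis) makes this concrete:

```python
# The Taylor shift acts on Taylor coefficients as the backward shift:
# f(z) = a_0 + a_1 z + a_2 z^2 + ...  =>  Tf(z) = a_1 + a_2 z + a_3 z^2 + ...
# Sketch on truncated (finite) coefficient sequences.

def taylor_shift(coeffs):
    """Backward shift on a (truncated) Taylor coefficient sequence."""
    return coeffs[1:]

def evaluate(coeffs, z):
    """Evaluate the polynomial with the given Taylor coefficients at z."""
    return sum(a * z**n for n, a in enumerate(coeffs))

# f(z) = 1 + 2z + 3z^2  =>  Tf(z) = (f(z) - 1)/z = 2 + 3z
f = [1.0, 2.0, 3.0]
Tf = taylor_shift(f)
z = 0.5
assert abs(evaluate(Tf, z) - (evaluate(f, z) - f[0]) / z) < 1e-12
print(Tf)  # [2.0, 3.0]
```

The dynamical questions of the thesis then concern how iterating this shift behaves on the various function spaces H(U), A^p(U), and spaces of Cauchy transforms.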

Given a compact set K in R^d, the theory of extension operators examines the question under which conditions on K the linear and continuous restriction operators r_n:E^n(R^d)→E^n(K), f↦(∂^α f|_K)_{|α|≤n}, n in N_0, and r:E(R^d)→E(K), f↦(∂^α f|_K)_{α in N_0^d} have a linear and continuous right inverse. This inverse is called an extension operator, and the problem is known as Whitney's extension problem, named after Hassler Whitney. In this context, E^n(K) and E(K) denote spaces of Whitney jets of order n and of infinite order, respectively. By E^n(R^d) and E(R^d), we denote the spaces of n-times and infinitely often continuously partially differentiable functions on R^d, respectively. Whitney already solved the question for finite order completely: he showed that it is always possible to construct a linear and continuous right inverse E_n for r_n. This work is concerned with the question of how the existence of a linear and continuous right inverse of r, fulfilling certain continuity estimates, can be characterized by properties of K. On E(K), we introduce a full real scale of generalized Whitney seminorms (|·|_{s,K})_{s≥0}, where |·|_{s,K} coincides with the classical Whitney seminorms for s in N_0. We also equip E(R^d) with a family (|·|_{s,L})_{s≥0} of such seminorms, where L is a compact set with K contained in the interior of L. This family of seminorms on E(R^d) suffices to characterize the continuity properties of an extension operator E, since we can assume without loss of generality that E(E(K)) is contained in D^s(L).
In Chapter 2, we introduce basic concepts and summarize the classical results of Whitney and Stein.
In Chapter 3, we modify the classical construction of Whitney's operators E_n and show that |E_n(·)|_{s,L}≤C|·|_{s,K} for s in [n, n+1).
In Chapter 4, we generalize a result of Frerick, Jordá and Wengenroth and show that LMI(1) for K implies the existence of an extension operator E without loss of derivatives, i.e. it fulfils |E(·)|_{s,L}≤C|·|_{s,K} for all s≥0. We show that a large class of self-similar sets, which includes the Cantor set and the Sierpinski triangle, admits an extension operator without loss of derivatives.
In Chapter 5, we generalize a result of Frerick, Jordá and Wengenroth and show that WLMI(r) for r≥1 implies the existence of a tame linear extension operator E with a homogeneous loss of derivatives, such that |E(·)|_{s,L}≤C|·|_{(r+ε)s,K} for all s≥0 and all ε>0.
In the last chapter we characterize the existence of an extension operator having an arbitrary loss of derivatives by the existence of measures on K.

Industrial companies mainly aim to increase their profit, and therefore seek to reduce production costs without sacrificing quality. Furthermore, in the context of the 2020 energy targets, energy efficiency plays a crucial role. Mathematical modeling, simulation and optimization tools can contribute to the achievement of these industrial and environmental goals. The process of white wine fermentation holds great potential for saving energy. In this thesis, mathematical modeling, simulation and optimization tools are customized to the needs of this biochemical process and applied to it. Two different models are derived that represent the process as it can be observed in real experiments. One model takes the growth, division and death behavior of the single yeast cell into account; it consists of a partial integro-differential equation together with several ordinary integro-differential equations describing the development of the other substrates involved. The other model, described by ordinary differential equations, represents the growth and death behavior of the yeast concentration and the development of the other substrates involved. The more detailed model is investigated analytically and numerically: existence and uniqueness of solutions are studied, and the process is simulated. These investigations initiate a discussion regarding the additional benefit of this model compared to the simpler one. For optimization, the process is described by the less detailed model. The process is identified by a parameter and state estimation problem. The energy and quality targets are formulated in the objective function of an optimal control or model predictive control problem controlling the fermentation temperature. This means that cooling during the process of wine fermentation is controlled. Parameter and state estimation with nonlinear economic model predictive control is applied in two experiments.
For the first experiment, the optimization problems are solved by multiple shooting with a backward differentiation formula method for the discretization of the problem, and a sequential quadratic programming method with a line search strategy and a Broyden-Fletcher-Goldfarb-Shanno update for the solution of the constrained nonlinear optimization problems. Different rounding strategies are applied to the resulting post-fermentation control profile. Furthermore, a quality assurance test is performed. The outcomes of this experiment are remarkable energy savings and tasty wine. For the second experiment, some modifications are made, and the optimization problems are solved using direct transcription via orthogonal collocation on finite elements for the discretization and an interior-point filter line-search method for the solution of the constrained nonlinear optimization problems. The second experiment verifies the results of the first: the use of this novel control strategy ensures energy conservation and reduces production costs. From now on, tasty white wine can be produced at a lower price and with a clearer conscience at the same time.
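To give a flavour of the simpler ODE viewpoint, the following forward-Euler simulation sketches Monod-type yeast growth on sugar with a crude, hypothetical temperature dependence. All parameter values are illustrative assumptions, not the thesis's calibrated fermentation model; the sketch only shows why cooling slows sugar consumption, which is the lever the temperature control exploits.

```python
# A generic sketch (not this thesis's calibrated model): yeast concentration X
# grows on sugar S with Monod kinetics, scaled by a hypothetical linear
# temperature factor; forward Euler integrates the ODEs.

def simulate(T_celsius, X0=0.1, S0=200.0, mu_max=0.1, Ks=10.0,
             yield_XS=0.05, dt=0.1, t_end=50.0):
    """Integrate dX/dt = mu(S,T)*X and dS/dt = -mu(S,T)*X/yield_XS."""
    X, S = X0, S0
    for _ in range(int(t_end / dt)):
        # Monod term S/(Ks+S) times a crude temperature scaling (assumption).
        mu = mu_max * (T_celsius / 20.0) * S / (Ks + S)
        dX = mu * X
        dS = -dX / yield_XS
        X += dt * dX
        S = max(S + dt * dS, 0.0)   # sugar cannot become negative
    return X, S

X_warm, S_warm = simulate(T_celsius=20.0)
X_cool, S_cool = simulate(T_celsius=12.0)
# Cooler fermentation proceeds more slowly: more sugar remains at the end.
assert S_cool > S_warm and X_warm > X_cool
```

A real controller, as in the thesis, would optimize the temperature profile over time against energy and quality objectives rather than compare two fixed temperatures.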

Fostering positive and realistic self-concepts of individuals is a major goal in education worldwide (Trautwein & Möller, 2016). Individuals spend most of their childhood and adolescence in school. Thus, schools are important contexts for individuals to develop positive self-perceptions such as self-concepts. In order to enhance positive self-concepts in educational settings and in general, it is indispensable to have comprehensive knowledge about the development and structure of self-concepts and their determinants. To date, extensive empirical and theoretical work on antecedents and change processes of self-concept has been conducted. However, several research gaps remain, and a number of these are the focus of the present dissertation. Specifically, these research gaps encompass (a) the development of multiple self-concepts from multiple perspectives regarding stability and change, (b) the direction of the longitudinal interplay between self-concept facets over the entire time period from childhood to late adolescence, (c) the evidence that a recently developed structural model of academic self-concept (nested Marsh/Shavelson model [Brunner et al., 2010]) fits the data of elementary school students, (d) the investigation of structural changes in academic self-concept profile formation within this model, (e) the investigation of dimensional comparison processes as determinants of academic self-concept profile formation in elementary school students within the internal/external frame of reference model (I/E model; Marsh, 1986), (f) the test of moderating variables for dimensional comparison processes in elementary school, (g) the test of the key assumption of the I/E model that effects of dimensional comparisons depend to a large degree on the existence of achievement differences between subjects, and (h) the generalizability of the findings regarding the I/E model over different statistical analytic methods.
Thus, the aim of the present dissertation is to contribute to closing these gaps with three studies. Data from German students enrolled in elementary to secondary school were gathered in three projects spanning the developmental period from childhood to adolescence (ages 6 to 20). Three vital self-concept areas in childhood and adolescence were investigated: general self-concept (i.e., self-esteem), academic self-concepts (general, math, reading, writing, native language), and social self-concepts (of acceptance and assertion). In all studies, data were analyzed within a latent variable framework. Findings are discussed with respect to the research aims of acquiring more comprehensive knowledge on the structure and development of significant self-concepts in childhood and adolescence and their determinants. In addition, theoretical and practical implications derived from the findings of the present studies are outlined. Strengths and limitations of the present dissertation are discussed. Finally, an outlook for future research on self-concepts is given.

This thesis is focused on improving our knowledge of a group of threatened species, the European cave salamanders (genus Hydromantes). It comprises three main sections gathering studies on different topics: ecology (first part), life traits (second part) and monitoring methodologies (third part). The first part starts with a study of the response of Hydromantes to the variation of climatic conditions, analysing 15 different localities throughout a full year (CHAPTER I; published in PEERJ in August 2015). After that, the focus moves on to identifying the operative temperature that these salamanders experience, including how their bodies respond to variation in environmental temperature. This study was conducted using one of the most advanced tools available, an infrared thermal camera, which gave the opportunity to perform detailed observations of the salamanders' bodies (CHAPTER II; published in JOURNAL OF THERMAL BIOLOGY in June 2016). In the next chapter, we use the previous results to analyse the ecological niche of all eight Hydromantes species. The study mostly underlines the mismatch between macro- and microscale analyses of the ecological niche, showing a weak conservatism of ecological niches within the evolution of the species (CHAPTER III; unpublished manuscript). We then focus on hybrids, which occur within the natural distribution of the mainland species. Here, we analyse whether the ecological niche of hybrids diverges from those of the parental species, thus evaluating the adaptive potential of hybrids (CHAPTER IV; unpublished manuscript). Considering that hybrids may represent a potential threat to the parental species (in terms of genetic erosion and competition), we produced the first ecological study on an allochthonous mixed population of Hydromantes, analysing population structure, ecological requirements and diet.
The interest in this particular population mostly stems from the fact that its members come from all three mainland Hydromantes species, so it may represent a potential source of new hybrids (CHAPTER V; accepted in AMPHIBIA-REPTILIA in October 2017). The focus then moves to how bioclimatic parameters affect species within their distributional range. Using the microendemic H. flavus as a model species, we analyse the relationship between environmental suitability and local abundance of the species, also focusing on all intermediate dynamics which provide useful information on the spatial variation of individual fitness (CHAPTER VI; submitted to SCIENTIFIC REPORTS in November 2017). The first part ends with an analysis of the interaction between Hydromantes and Batracobdella algira leeches, the only known ectoparasite of European cave salamanders. Considering that the effect of leeches on their hosts is potentially detrimental, we investigated whether these ectoparasites may represent a further threat to Hydromantes (CHAPTER VII; submitted to INTERNATIONAL JOURNAL FOR PARASITOLOGY: PARASITES AND WILDLIFE in November 2017). The second part is related to the reproduction of Hydromantes. In the first study, we analyse the breeding behaviour of several females belonging to a single population, identifying differences and similarities occurring in cohorting females (CHAPTER VIII; published in NORTH-WESTERN JOURNAL OF ZOOLOGY in December 2015). In the second study, we gather information from all Hydromantes species, analysing the size and development of breeding females and identifying a relationship between breeding time and climatic conditions (CHAPTER IX; submitted to SALAMANDRA in June 2017). In the last part of this thesis, we analyse two potential methods for monitoring Hydromantes populations. In the first study, we evaluate the efficiency of the marking method involving Alpha tags (CHAPTER X; published in SALAMANDRA in October 2017).
In the second study we focus on evaluating N-mixtures models as a methodology for estimating abundance in wild populations (CHAPTER XI; submitted to BIODIVERSITY & CONSERVATION in October 2017).

The availability of data on the feeding habits of species of conservation value may be of great importance for analyses serving both scientific and management purposes. Stomach flushing is a harmless technique that allowed us to collect extensive data on the feeding habits of six Hydromantes species. Here, we present two datasets originating from a three-year study performed in multiple seasons (spring and autumn) on 19 different populations of cave salamanders. The first dataset contains data on the stomach contents of 1,250 salamanders, in which 6,010 items were recognized; the second reports the size of the intact prey items found in the stomachs. These datasets considerably extend the data already available on the diet of the European plethodontid salamanders and are also of potential use for large-scale meta-analyses of amphibian diet.

Leeches can parasitize many vertebrate taxa. In amphibians, leech parasitism often has potentially detrimental effects, including population decline. Most studies on host-parasite interactions involving leeches and amphibians focus on freshwater environments, while such studies are very scarce for terrestrial amphibians. In this work, we studied the relationship between the leech Batracobdella algira and the European terrestrial salamanders of the genus Hydromantes, identifying environmental features related to the presence of the leeches and their possible effects on the hosts. We performed observations throughout Sardinia (Italy), covering the distribution area of all Hydromantes species endemic to this island. From September 2015 to May 2017, we conducted >150 surveys in 26 underground environments, collecting data on 2629 salamanders and 131 leeches. Water hardness was the only environmental feature correlated with the presence of B. algira, linking this leech to active karstic systems. Leeches more frequently parasitized salamanders with large body size. The Body Condition Index was not significantly different between parasitized and non-parasitized salamanders. Our study shows the importance of abiotic environmental features for host-parasite interactions and poses new questions on the complex interspecific interactions between this ectoparasite and amphibians.

Dry tropical forests are facing massive conversion and degradation processes and are the most endangered forest type worldwide. One of the largest dry forest types is the Miombo forest, which stretches across the Southern African subcontinent, with the proportionally largest part found in Angola. The study site of this thesis is located in south-central Angola. The country still suffers from the consequences of 27 years of civil war (1975-2002), which provides a unique socio-economic setting. The natural characteristics form a representative cross-section which proved ideal for studying underlying drivers as well as current and retrospective land use change dynamics. The major land change dynamic of the study area is the conversion of Miombo forests to cultivation areas, as well as the modification of forest areas, i.e. degradation, due to the extraction of natural resources. With future predictions of population growth, climate change and large-scale investments, land pressure is expected to increase further. To fully understand the impacts of these dynamics, both conversion and modification of forest areas were assessed. Using the conceptual framework of ecosystem services, the predominant trade-off between food and timber in the study area was analyzed, including retrospective dynamics and impacts. This approach accounts for products that contribute directly or indirectly to human well-being. For this purpose, data from the Landsat archive from 1989 until 2013 were applied in different approaches adapted to the study area. The objectives of these approaches were (I) to detect underlying drivers and the temporal and spatial extent of their impact, (II) to describe modification and conversion processes reaching from times of armed conflict over the ceasefire to the post-war period, and (III) to provide an assessment of drivers and impacts in a comparative setting.
It could be shown that major underlying drivers of the conversion processes are resettlement dynamics as well as the location and quality of streets and settlements. Furthermore, forests that are selectively used for resource extraction have a higher chance of being converted to fields. Drivers of forest degradation are, on the one hand, also strongly connected to settlements and infrastructure, but on the other hand, to a large extent, to fire dynamics that occur mostly in more remote and presumably undisturbed forest areas. The loss of woody biomass, as well as its slow recovery after the abandonment of fields, could be quantified and stands in stark contrast to the amount of cultivated food that is needed. The results of the thesis support the fundamental understanding of drivers and impacts in the study area and can thus contribute to a sustainable resource management.

This thesis considers the general task of computing a partition of a set of given objects such that each set of the partition has a cardinality of at least a fixed number k. Among such partitions, which we call k-clusters, the objective is to find the k-cluster which minimises a certain cost derived from a given pairwise difference between objects which end up in the same set. As a first step, this thesis introduces a general problem, denoted by (||.||,f)-k-cluster, which models the task of finding a k-cluster of minimum cost given by an objective function computed with respect to specific choices for the cost functions f and ||.||. In particular, this thesis considers three different choices for f and also three different choices for ||.||, resulting in a total of nine variants of the general problem. Especially with the idea of using the concept of parameterised approximation, we first investigate the role of the lower bound on the cluster cardinalities and find that k is not a suitable parameter, due to remaining NP-hardness even for the restriction to the constant 3. The reductions presented to show this hardness yield the even stronger result that polynomial-time approximations with some constant performance ratio for any of the nine variants of (||.||,f)-k-cluster require a restriction to instances for which the pairwise distance on the objects satisfies the triangle inequality. For this restriction to what we informally refer to as metric instances, constant-factor approximation algorithms for eight of the nine variants of (||.||,f)-k-cluster are presented. While two of these algorithms yield the provably best approximation ratio (assuming P!=NP), others can only guarantee a performance which depends on the lower bound k. With the positive effect of the triangle inequality and applications to facility location in mind, we discuss the further restriction to the setting where the given objects are points in Euclidean metric space.
Considering the computational hardness caused by high dimensionality of the input for other related problems (the curse of dimensionality), we check whether this is also the source of intractability for (||.||,f)-k-cluster. However, NP-hardness remains even when restricted to small constant dimensionality, which disproves this theory. We then use parameterisation to develop approximation algorithms for (||.||,f)-k-cluster without restriction to metric instances. In particular, we discuss structural parameters which reflect how much the given input differs from a metric. This idea results in parameterised approximation algorithms with parameters such as the number of conflicts (our name for pairs of objects for which the triangle inequality is violated) or the number of conflict vertices (objects involved in a conflict). The performance ratios of these parameterised approximations are in most cases identical to those of the approximations for metric instances. This shows that for most variants of (||.||,f)-k-cluster, efficient and reasonable solutions are also possible for non-metric instances.
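The problem statement can be made concrete with a brute-force sketch for one specific, hypothetical choice of the cost functions: f as the sum of pairwise distances within a cluster and ||.|| as the sum over clusters. This is only an illustration of the task; the thesis's approximation algorithms are not reproduced here.

```python
# Brute-force k-cluster for tiny instances: enumerate all set partitions,
# keep those whose blocks all have cardinality >= k, and minimise the sum
# over blocks of the pairwise distances within each block.
from itertools import combinations

def partitions(items):
    """Yield all set partitions of a list (exponentially many; tiny inputs only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i, block in enumerate(part):
            yield part[:i] + [[first] + block] + part[i + 1:]
        yield [[first]] + part

def best_k_cluster(points, k, dist):
    """Minimum-cost partition whose blocks all have cardinality >= k."""
    best, best_cost = None, float("inf")
    for part in partitions(points):
        if any(len(block) < k for block in part):
            continue
        cost = sum(dist(a, b) for block in part
                   for a, b in combinations(block, 2))
        if cost < best_cost:
            best, best_cost = part, cost
    return best, best_cost

# Four points on a line, k = 2: the optimum pairs up the two close pairs.
pts = [0.0, 1.0, 10.0, 11.0]
dist = lambda a, b: abs(a - b)
part, cost = best_k_cluster(pts, 2, dist)
print(sorted(map(sorted, part)), cost)  # [[0.0, 1.0], [10.0, 11.0]] 2.0
```

Since the number of set partitions grows like the Bell numbers, such enumeration is hopeless beyond toy sizes, which is precisely why the thesis turns to approximation and parameterisation.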

At any given moment, our senses are assaulted with a flood of information from the environment around us. We need to pick our way through all this information in order to respond effectively to what is relevant to us. In most cases, we are able to select the information relevant to our intentions from what is not relevant. However, what happens to the information that is not relevant to us? Is this irrelevant information completely ignored, so that it does not affect our actions? The literature suggests that even though we may ignore an irrelevant stimulus, it may still interfere with our actions. One of the ways in which irrelevant stimuli can affect actions is by retrieving a response with which they were associated. An irrelevant stimulus that is presented in close temporal contiguity with a relevant stimulus can become associated with the response made to the relevant stimulus, an observation termed distractor-response binding (Rothermund, Wentura, & De Houwer, 2005). The studies presented in this work take a closer look at such distractor-response bindings and the circumstances in which they occur. Specifically, the study reported in chapter 6 examined whether only an exact repetition of the distractor can retrieve the response with which it was associated, or whether even similar distractors may cause retrieval. The results suggested that even repeating a similar distractor caused retrieval, albeit less than an exact repetition. In chapter 7, the existence of bindings between a distractor and a response was tested beyond the perceptual level, to see whether such bindings exist at an (abstract) conceptual level. Similar to perceptual repetition, distractor-based retrieval of the response was observed for the repetition of concepts. The study reported in chapter 8 of this work examined the influence of attention on the feature-response binding of irrelevant features.
The results pointed towards a stronger binding effect when attention was directed towards the irrelevant feature compared to when it was not. The study in chapter 9 looked at the processes underlying distractor-based retrieval and distractor inhibition. The data suggest that motor processes underlie distractor-based retrieval, while cognitive processes underlie distractor inhibition. Finally, the findings of all four studies are discussed in the context of learning.

Water-deficit stress, usually shortened to water stress or drought stress, is one of the most critical abiotic stressors limiting plant growth, crop yield and quality in food production. Today, agriculture consumes about 80-90 % of the global freshwater used by humans, and about two thirds of this is used for crop irrigation. An increasing world population and a predicted rise of 1.0-2.5 °C in the annual mean global temperature as a result of climate change will further increase the demand for water in agriculture. Therefore, one of the most challenging tasks of our generation is to reduce the amount of water used per unit yield in order to satisfy the second UN Sustainable Development Goal and to ensure global food security. Precision agriculture offers new farming methods with the goal of improving the efficiency of crop production through a sustainable use of resources. Plant responses to water stress are complex and co-occur with other environmental stresses under natural conditions. In general, water stress causes physiological and biochemical changes in plants that depend on the severity and duration of the actual plant water deficit. Stomatal closure is one of the first responses to plant water stress, causing a decrease in plant transpiration and thus an increase in plant temperature. Prolonged or severe water stress leads to irreversible damage to the photosynthetic machinery and is associated with decreasing chlorophyll content and structural changes of the leaves (e.g., leaf rolling). Since a crop can already be irreversibly damaged by even mild water deficit, a pre-visual detection of water stress symptoms is essential to avoid yield loss. Remote sensing offers a non-destructive and spatio-temporal method for measuring numerous physiological, biochemical and structural crop characteristics at different scales and is thus one of the key technologies used in precision agriculture.
With respect to the detection of plant responses to water stress, the current state-of-the-art hyperspectral remote sensing imaging techniques are based on measurements of thermal infrared emission (TIR; 8-14 µm), visible, near- and shortwave infrared reflectance (VNIR/SWIR; 0.4-2.5 µm), and sun-induced fluorescence (SIF; 0.69 and 0.76 µm). It is, however, still unclear how sensitive these techniques are with respect to water stress detection. Therefore, the overall aim of this dissertation was to provide a comparative assessment of remotely sensed measures from the TIR, SIF, and VNIR/SWIR domains regarding their ability to detect plant responses to water stress at ground and airborne level. The main findings of this thesis are: (i) temperature-based indices (e.g., CWSI) were the most sensitive for the detection of plant water stress in comparison to reflectance-based VNIR/SWIR indices (e.g., PRI) and SIF at both ground and airborne level; (ii) for the first time, spectral emissivity as measured by the new hyperspectral TIR instrument could be used to detect plant water stress at ground level. Based on these findings, it can be stated that hyperspectral TIR remote sensing offers great potential for the detection of plant responses to water stress at ground and airborne level based on both TIR key variables, surface temperature and spectral emissivity. However, the large-scale application of water stress detection based on hyperspectral TIR measures in precision agriculture will be challenged by several problems: (i) missing thresholds of temperature-based indices (e.g., CWSI) for application in irrigation scheduling, (ii) the lack of current TIR satellite missions with suitable spectral and spatial resolution, and (iii) the lack of appropriate data processing schemes (including atmospheric correction and temperature-emissivity separation) for hyperspectral TIR remote sensing at airborne and satellite level.
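As an illustration of the temperature-based indices mentioned above, the Crop Water Stress Index (CWSI) in its simplified empirical form normalises canopy temperature between a wet (unstressed) and a dry (non-transpiring) reference: CWSI = (Tc - Twet) / (Tdry - Twet). The numeric values below are illustrative assumptions, not measurements from this work.

```python
# CWSI in its simplified empirical form: 0 means the canopy is as cool as a
# fully transpiring (wet) reference, 1 means as warm as a non-transpiring
# (dry) reference. Reference temperatures here are illustrative assumptions.

def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index from canopy and reference temperatures (degC)."""
    if t_dry <= t_wet:
        raise ValueError("dry reference must be warmer than wet reference")
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Stomatal closure reduces transpirational cooling, so a stressed canopy
# runs warmer and its CWSI approaches 1.
print(cwsi(t_canopy=24.0, t_wet=22.0, t_dry=30.0))  # 0.25
print(cwsi(t_canopy=28.0, t_wet=22.0, t_dry=30.0))  # 0.75
```

The open problem noted in the abstract is precisely where to set the decision threshold on this index for irrigation scheduling.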

Educational researchers have intensively investigated students' academic self-concept (ASC) and self-efficacy (SE). Both constructs are part of the competence-related self-perceptions of students and are considered to support students' academic success and their career development in a positive manner (e.g., Abele-Brehm & Stief, 2004; Richardson, Abraham, & Bond, 2012; Schneider & Preckel, 2017). However, there is a lack of basic research on ASC and SE in higher education in general, and in undergraduate psychology courses in particular. Therefore, following the within-network and between-network approaches of construct validation (Byrne, 1984), the present dissertation comprises three empirical studies examining the structure (research question 1), measurement (research question 2), correlates (research question 3), and differentiation (research question 4) of ASC and SE in a total sample of N = 1243 psychology students. Concerning research question 1, results of confirmatory factor analyses (CFAs) implied that students' ASC and SE are domain-specific in the sense of multidimensionality, but they are also hierarchically structured, with a general factor at the apex, according to the nested Marsh/Shavelson model (NMS model; Brunner et al., 2010). Additionally, psychology students' SE to master specific psychological tasks in different areas of psychological application could be described by a 2-dimensional model with six factors according to the multitrait-multimethod (MTMM) approach (Campbell & Fiske, 1959). With regard to research question 2, results revealed that the internal structure of ASC and SE could be validly assessed. However, the assessment of psychology students' SE should follow a task-specific measurement strategy.
Results concerning research question 3 further showed that both constructs of psychology students' competence-related self-perceptions were positively correlated with achievement in undergraduate psychology courses when the measurement specificity of the predictors (ASC, SE) corresponded to that of the criterion (achievement). Overall, ASC showed substantially stronger relations to achievement than SE. Moreover, there was evidence for negative paths (contrast effects) from achievement in one psychological domain on the ASC of another psychological domain, as postulated by the internal/external frame of reference (I/E) model (Marsh, 1986). Finally, building on research questions 1 to 3 (structure, measurement, and correlates of ASC and SE), psychology students' ASC and SE could be differentiated on an empirical level (research question 4). Implications for future research practices are discussed. Furthermore, practical implications for enhancing ASC and SE in higher education are proposed to support the academic achievement and career development of psychology students.

Digital libraries have become a central aspect of our lives. They provide us with immediate access to an amount of data that would have been unthinkable in the past. Computer support and the ability to aggregate data from different libraries enable even small projects to maintain large digital collections on various topics. A central aspect of digital libraries is the metadata -- the information that describes the objects in the collection. Metadata are digital and can be processed and studied automatically. In recent years, several studies have considered different aspects of metadata. Many focus on finding defects in the data; in particular, locating errors related to the handling of personal names has drawn attention. In most cases, these studies concentrate on the most recent metadata of a collection; for example, they look for errors in the collection as it exists on day X. This is a reasonable approach for many applications. However, to answer questions such as when errors were added to the collection, we need to consider the history of the metadata itself. In this work, we study how the history of metadata can be used to improve the understanding of a digital library. To this end, we consider how digital libraries handle and store their metadata. Based on this information, we develop a taxonomy to describe the available historical data, i.e., data on how the metadata records changed over time. We develop a system that identifies changes to metadata over time and groups them into semantically related blocks. We found that historical metadata is often unavailable. However, we were able to apply our system to a set of large real-world collections. A central part of this work is the identification and analysis of changes to metadata that corrected a defect in the collection. These corrections represent the accumulated effort to ensure the data quality of a digital library.
In this work, we present a system that automatically extracts corrections of defects from the set of all modifications. We present test collections containing more than 100,000 test cases, which we created by extracting defects and their corrections from DBLP. These collections can be used to evaluate automatic approaches for error detection. Furthermore, we use them to study the properties of defects, concentrating on defects related to the person name problem. We show that many defects occur in situations where very little context information is available, which has major implications for automatic defect detection. We also show that the properties of defects depend on the digital library in which they occur, and we briefly discuss how corrected defects can be used to detect hidden or future defects. Besides the study of defects, we show that historical metadata can be used to study the development of a digital library over time, and we present several studies as examples of how this can be done. First, we describe the development of the DBLP collection over a period of 15 years. Specifically, we study how the coverage of different computer science subfields changed over time, and we show that DBLP evolved from a specialized project into a collection that encompasses most parts of computer science. In another study, we analyze the impact of user emails on defect corrections in DBLP. We show that these emails trigger a significant amount of error corrections. Based on these data, we can draw conclusions about why users report a defective entry in DBLP.
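The first step of any such pipeline, identifying which fields of a metadata record changed between two versions, can be sketched as follows. This is a generic illustration only; the record fields and values are hypothetical and the thesis's actual system additionally groups changes into semantically related blocks:

```python
def record_changes(old, new):
    """Return the field-level differences between two versions of a
    metadata record, as (field, value_before, value_after) triples.
    A value of None means the field is absent in that version."""
    changes = []
    for field in sorted(set(old) | set(new)):
        before, after = old.get(field), new.get(field)
        if before != after:
            changes.append((field, before, after))
    return changes

# Hypothetical example: a person-name correction between two versions.
v1 = {"author": "J. Smth", "title": "On Metadata", "year": "2010"}
v2 = {"author": "J. Smith", "title": "On Metadata", "year": "2010"}
# record_changes(v1, v2) reports only the corrected author field.
```

Classifying such a change as a *correction* (rather than, say, an update reflecting a real-world change) is the harder problem the abstract describes.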

The search for relevant determinants of knowledge acquisition has a long tradition in educational research, with systematic analyses having started over a century ago. To date, a variety of relevant environmental and learner-related characteristics have been identified, providing a wide body of empirical evidence. However, there are still some gaps in the literature, which are highlighted in the current dissertation. The dissertation includes two meta-analyses summarizing the evidence on the effectiveness of electrical brain stimulation and on the effects of prior knowledge on later learning outcomes, and one empirical study employing latent profile transition analysis to investigate changes in conceptual knowledge over time. The results from the three studies demonstrate how learning outcomes can be advanced by input from the environment and that they are highly related to the students' level of prior knowledge. It is concluded that environmental and learner-related variables affect both the biological and the cognitive processes underlying knowledge acquisition. Based on the findings from the three studies, methodological and practical implications are provided, followed by an outline of four recommendations for future research on knowledge acquisition.

This thesis is divided into three main parts: the description of the calibration problem, the numerical solution of this problem, and the connection to optimal stochastic control problems. Fitting model prices to given market prices leads to an abstract least squares formulation as the calibration problem. The corresponding option price can be computed by solving a stochastic differential equation via the Monte-Carlo method, which seems to be preferred by most practitioners. Because the Monte-Carlo method is expensive in terms of computational effort and memory, more sophisticated stochastic predictor-corrector schemes are established in this thesis, and their numerical advantage is presented and discussed. The adjoint method is applied to the calibration, and its theoretical advantage is discussed in detail. It is shown that the computational effort of gradient calculation via the adjoint method is independent of the number of calibration parameters. Numerical results confirm the theoretical results and summarize the computational advantage of the adjoint method. Furthermore, the connection to optimal stochastic control problems is established in this thesis.
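The Monte-Carlo pricing step described above can be illustrated with a minimal sketch. This is not the thesis's model or scheme: it prices a European call under geometric Brownian motion with a simple Heun-type predictor-corrector applied to the drift term, purely to show the structure of such a simulation (all parameter values are hypothetical):

```python
import math
import random

def pc_step(s, mu, sigma, dt, z):
    """One predictor-corrector time step for dS = mu*S dt + sigma*S dW:
    an Euler predictor followed by a trapezoidal correction of the drift."""
    dw = z * math.sqrt(dt)
    pred = s + mu * s * dt + sigma * s * dw                       # predictor
    return s + 0.5 * (mu * s + mu * pred) * dt + sigma * s * dw   # corrector

def mc_call_price(s0, strike, r, sigma, t, n_paths=20000, n_steps=50, seed=1):
    """Discounted Monte-Carlo average of the call payoff max(S_T - K, 0)."""
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        s = s0
        for _ in range(n_steps):
            s = pc_step(s, r, sigma, dt, rng.gauss(0.0, 1.0))
        total += max(s - strike, 0.0)
    return math.exp(-r * t) * total / n_paths

# Hypothetical parameters; the estimate should land near the
# Black-Scholes value (about 10.45) up to Monte-Carlo noise.
price = mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0)
```

The adjoint method's advantage mentioned in the abstract concerns the calibration layer on top of such a pricer: one adjoint pass yields the full gradient regardless of how many model parameters are being fitted, whereas finite differences require one re-pricing per parameter.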

Surveys are commonly tailored to produce estimates of aggregate statistics with a desired level of precision. This may lead to very small sample sizes for subpopulations of interest, defined geographically or by content, which are not incorporated into the survey design. We refer to subpopulations where the sample size is too small to provide direct estimates with adequate precision as small areas or small domains. Despite the small sample sizes, reliable small area estimates are needed for economic and political decision making. Hence, model-based estimation techniques are used which increase the effective sample size by borrowing strength from other areas to provide accurate information for small areas. The paragraph above introduced small area estimation as a field of survey statistics where two conflicting philosophies of statistical inference meet: the design-based and the model-based approach. While the first approach is well suited for the precise estimation of aggregate statistics, the latter approach furnishes reliable small area estimates. In most applications, estimates for both large and small domains based on the same sample are needed. This poses a challenge to the survey planner, as the sampling design has to reflect different and potentially conflicting requirements simultaneously. In order to enable efficient design-based estimates for large domains, the sampling design should incorporate information related to the variables of interest. This may be achieved using stratification or sampling with unequal probabilities. Many model-based small area techniques require an ignorable sampling design such that after conditioning on the covariates the variable of interest does not contain further information about the sample membership. If this condition is not fulfilled, biased model-based estimates may result, as the model which holds for the sample is different from the one valid for the population. 
Hence, optimising the sampling design without investigating the implications for model-based approaches will not be sufficient. Analogously, disregarding the design altogether and focussing only on the model is prone to failure as well. Instead, a profound knowledge of the interplay between the sampling design and statistical modelling is a prerequisite for implementing an effective small area estimation strategy. In this work, we concentrate on two approaches to address this conflict. Our first approach takes the sampling design as given and can be used after the sample has been collected. It amounts to incorporating the survey design into the small area model to avoid biases stemming from informative sampling. Thus, once a model is validated for the sample, we know that it holds for the population as well. We derive such a procedure under a lognormal mixed model, which is a popular choice when the support of the dependent variable is limited to positive values. Besides, we propose a three-pillar strategy to select the additional variable accounting for the design, based on a graphical examination of the relationship, a comparison of the predictive accuracy of the choices, and a check of the normality assumptions. Our second approach to deal with the conflict is based on the notion that the design should allow a wide variety of analyses of the sample data. Thus, if the use of model-based estimation strategies can be anticipated before the sample is drawn, this should be reflected in the design. The same applies to the estimation of national statistics using design-based approaches. Therefore, we propose to construct the design such that the sampling mechanism is non-informative but still allows for precise design-based estimates at an aggregate level.
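The "borrowing strength" idea underlying model-based small area estimation can be illustrated with the classical area-level composite (Fay-Herriot-type) estimator, which shrinks a noisy direct estimate toward a model-based synthetic estimate. This is a generic textbook sketch, not the lognormal mixed model derived in this thesis, and all numbers are hypothetical:

```python
def composite_estimate(direct, synthetic, var_direct, var_model):
    """Area-level composite estimator: a variance-weighted compromise
    between the direct survey estimate and the model-based synthetic
    estimate. The smaller the area's sample (large var_direct), the
    more weight goes to the model."""
    gamma = var_model / (var_model + var_direct)   # weight on the direct part
    return gamma * direct + (1.0 - gamma) * synthetic

# Hypothetical small area: a direct mean of 12.0 with large sampling
# variance 4.0 is pulled toward the synthetic value 10.0 (model
# variance 1.0), giving 0.2 * 12.0 + 0.8 * 10.0 = 10.4.
shrunk = composite_estimate(12.0, 10.0, var_direct=4.0, var_model=1.0)
```

Informative sampling, the central concern of the abstract, breaks this logic because the synthetic component is then fitted to a sample whose model differs from the population model, which is why the thesis incorporates the design into the model itself.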

In this thesis, we present a new approach for estimating the effects of wind turbines on a local bat population. We build an individual-based model (IBM) which simulates the movement behaviour of every single bat of the population with its own preferences, foraging behaviour, and other species characteristics. This behaviour is averaged via a Monte-Carlo simulation, which gives us the mean behaviour of the population. The result is an occurrence map of the considered habitat, which tells us how often the bats, and therefore the considered bat population, frequent each region of this habitat. Hence, it is possible to estimate the crossing rate at the position of an existing or potential wind turbine. We compare this individual-based approach with a method based on partial differential equations. This second approach requires less computational effort but, unfortunately, loses the information about the movement trajectories. Additionally, the PDE-based model only gives us a density profile; hence, we lose the information on how often each bat crosses particular points in the habitat in one night. In a next step, we predict the average number of fatalities for each wind turbine in the habitat, depending on the type of the wind turbine and the behaviour of the considered bat species. This gives us the extra mortality caused by the wind turbines for the local population. This value is used in a population model, and finally we can calculate whether the population still grows or whether there is already a decline in population size that leads to the extinction of the population. Using the combination of all these models, we are able to evaluate the conflict between wind turbines and bats and to predict its outcome.
Furthermore, it is possible to find better positions for wind turbines such that the local bat population has a better chance of survival. Since bats tend to move in swarm formations under certain circumstances, we also introduce a swarm simulation using partial integro-differential equations and take a closer look at the existence and uniqueness properties of its solutions.
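The core of such a Monte-Carlo occurrence map can be sketched with a deliberately simplified movement rule. The thesis's IBM uses species-specific preferences and foraging behaviour; the sketch below substitutes a plain random walk on a grid purely to show how averaging many simulated individuals yields per-cell visit frequencies (grid size, roost position, and step counts are hypothetical):

```python
import random

def occurrence_map(width, height, n_bats, n_steps, seed=0):
    """Simulate n_bats random walks of n_steps on a bounded grid and
    count how often each cell is visited. In the IBM described in the
    abstract, the movement rule would encode species behaviour instead
    of a uniform random step."""
    rng = random.Random(seed)
    visits = [[0] * width for _ in range(height)]
    for _ in range(n_bats):
        x, y = width // 2, height // 2          # all bats start at the roost
        for _ in range(n_steps):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 0), width - 1)  # stay inside the habitat
            y = min(max(y + dy, 0), height - 1)
            visits[y][x] += 1
    return visits

grid = occurrence_map(width=11, height=11, n_bats=100, n_steps=20)
```

Reading off the visit count at a candidate turbine cell then estimates the crossing rate that feeds the fatality and population models described in the abstract.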

Interaction between the Hypothalamic-Pituitary-Adrenal Axis and the Circadian Clock System in Humans
(2017)

The rotation of the Earth creates day and night cycles of 24 h. Endogenous circadian clocks sense these light/dark rhythms, and the master pacemaker, situated in the suprachiasmatic nucleus of the hypothalamus, entrains physiological activities according to this information. The circadian machinery is built from transcriptional/translational feedback loops generating oscillations in all nucleated cells of the body. In addition, unexpected environmental changes, called stressors, also challenge living systems. A response to these stimuli is provided immediately via the autonomic nervous system and more slowly via the hypothalamus–pituitary–adrenal (HPA) axis. When the HPA axis is activated, circulating glucocorticoids are elevated and regulate organ activities in order to maintain the survival of the organism. Both the clock and the stress systems are essential for survival and interact with each other to maintain internal homeostasis. The physiological interactions between the HPA axis and the circadian clock system have mainly been addressed in animal studies, which focus on the effects of stress and circadian disturbances on cardiovascular, psychiatric, and metabolic disorders. Although such studies offer the opportunity to test in the whole organism, to apply invasive techniques, and to control and manipulate parameters to a high degree, the generalization of their results to humans is still debated. On the other hand, studies conducted with cell lines cannot really reflect the conditions occurring in a living organism. Thus, human studies are absolutely necessary to investigate the mechanisms involved in stress and circadian responses. The studies presented in this thesis were intended to determine the effects of cortisol, as an end product of the HPA axis, on PERIOD (PER1, PER2 and PER3) transcripts as circadian clock genes in healthy humans. The expression levels of the PERIOD genes were measured in whole blood under baseline conditions and after stress.
The results presented here provide a better understanding of the transcriptional programming regulated by pulsatile cortisol under standard conditions and of the short-term effects of a cortisol increase on circadian clocks after acute stress. These findings also draw attention to inter-individual variation in the stress response as well as to non-circadian functions of the PERIOD genes in the periphery, which need to be examined in detail in the future.

Automata theory is the study of abstract machines. It is a theory in theoretical computer science and discrete mathematics (a subject of study in both mathematics and computer science). The word automata (the plural of automaton) comes from a Greek word meaning "self-acting". Automata theory is closely related to formal language theory [99, 101]; the theory of formal languages constitutes the backbone of the field now generally known as theoretical computer science. This thesis aims to introduce a few types of automata and to study the classes of languages recognized by them. Chapter 1 is the road map, with introduction and preliminaries. In Chapter 2 we consider a few formal languages associated with graphs that have Eulerian trails. We place in the Chomsky hierarchy a few languages that combine other properties with the Eulerian property. In Chapter 3 we consider jumping finite automata, i.e., finite automata in which the input head, after reading and consuming a symbol, can jump to an arbitrary position of the remaining input. We characterize the class of languages described by jumping finite automata in terms of special shuffle expressions and survey other equivalent notions from the existing literature. We are also able to characterize some superclasses of this language class. In Chapter 4 we introduce boustrophedon finite automata, i.e., finite automata working on rectangular arrays (i.e., pictures) in a boustrophedon mode, and we also introduce returning finite automata, which read the input line after line but do not alternate direction like boustrophedon finite automata, i.e., they always read from left to right, line after line. We establish close relationships with the well-established class of regular matrix (array) languages. We sketch possible applications to character recognition and kolam patterns. Chapter 5 deals with general boustrophedon finite automata and general returning finite automata, which read with different scanning strategies.
We show that all 32 different variants describe only two different classes of array languages. We also introduce Mealy machines working on pictures and show how these can be used in a modular design of picture processing devices. In Chapter 6 we compare three different types of regular grammars for array languages introduced in the literature, namely regular matrix grammars, (regular : regular) array grammars, and isometric regular array grammars, together with variants thereof, focusing on hierarchical questions. We also refine the presentation of (regular : regular) array grammars in order to clarify the interrelations. In Chapter 7 we outline further directions of research arising from the studies carried out in the preceding chapters.
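The jumping mode from Chapter 3 can be made concrete with a brute-force sketch. Since a jumping finite automaton consumes input symbols in an arbitrary order, one common way to think about it is that it accepts a word exactly when its underlying finite automaton accepts some permutation of that word; the sketch below uses that view (and is exponential, for tiny inputs only):

```python
from itertools import permutations

def jfa_accepts(delta, start, finals, word):
    """Brute-force acceptance check for a jumping finite automaton,
    viewed as: accept w iff the underlying NFA (delta, start, finals)
    accepts some permutation of w in ordinary sequential mode."""
    def nfa_accepts(w):
        states = {start}
        for sym in w:
            states = {q for s in states for q in delta.get((s, sym), ())}
        return bool(states & finals)
    return any(nfa_accepts(p) for p in set(permutations(word)))

# Hypothetical example: an NFA for (ab)*. Read in jumping mode, it
# accepts exactly the words with equally many a's and b's.
delta = {(0, "a"): {1}, (1, "b"): {0}}
```

For instance, "ba" is accepted (its permutation "ab" is in (ab)*), while "aab" is not, illustrating why languages of jumping automata are closed under permutation.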

The first part of this thesis offers a theoretical foundation for the analysis of Tolkien's texts. Each of the three fields of interest, nostalgia, utopia, and the pastoral tradition, is introduced in a separate chapter. Special attention is given to the interrelations of the three fields. Their history, meaning, and functions are briefly elaborated, and definitions applicable to their occurrences in fantasy texts are reached. In doing so, new categories and terms are proposed that enable a detailed analysis of the nostalgic, pastoral, and utopian properties of Tolkien's works. As nostalgia and utopia are important ingredients of pastoral writing, they are each introduced first and finally related to a definition of the pastoral. The main part of this thesis applies the definitions and insights reached in the theoretical chapters to Tolkien's The Lord of the Rings and The Hobbit. This part is divided into three main sections; again, the order of the chapters follows the line of argumentation. The first section contains the analysis of pastoral depictions in the two texts. Given the separation of the pastoral into different categories, which were outlined in the theoretical part, the chapters examine bucolic and georgic pastoral creatures and landscapes before turning to non-pastoral depictions, which are subdivided into the antipastoral and the unpastoral. A separate chapter looks at the positions and functions of the bucolic and georgic pastoral in the primary texts. This analysis is followed by a chapter on men's special position in Tolkien's mythology, as their depiction reveals their potential to be both pastoral and antipastoral. The second section of the analytical part is concerned with the role of nostalgia within pastoral culture. The focus is laid on the meaning and function of the different kinds of nostalgia, defined in the theoretical part, that are detectable in bucolic and georgic pastoral cultures.
Finally, the analysis turns to the utopian potential of Tolkien's mythology. Again, the focus lies on the pastoral and non-pastoral creatures. Their utopian and dystopian visions are presented and contrasted. In this way, different kinds of utopian vision are detected and set in relation to the overall dystopian fate of Tolkien's fictional universe. Drawing on the results of this thesis and on Terry Gifford's ecocritical work, the final chapter argues that Tolkien's texts can be defined as modern pastorals. The connection between Tolkien's work and pastoral literature made explicit in the analysis is thus cemented in generic terms. The conclusion presents a summary of the central findings of this thesis and introduces questions for further study.

A phenomenon of recent decades is that digital marketplaces on the Internet are establishing themselves for a wide variety of products and services. Recently, it has become possible for private individuals to invest in young and innovative companies (so-called "start-ups"). Via Internet portals, potential investors can examine various start-ups and then directly invest in their chosen start-up. In return, investors receive a share in the firm's profit, while companies can use the raised capital to finance their projects. This new way of financing is called "Equity Crowdfunding" (ECF) or "Crowdinvesting". The aim of this dissertation is to provide empirical findings about the characteristics of ECF. In particular, the questions of whether ECF is able to overcome geographic barriers, of the interdependence of ECF and capital structure, and of the risk of failure for funded start-ups and their chances of receiving follow-up funding from venture capitalists or business angels are analyzed. The results of the first part of this dissertation show that investors in ECF prefer local companies. In particular, investors who invest larger amounts have a stronger tendency to invest in local start-ups. The second part of the dissertation provides first indications of the interdependencies between capital structure and ECF. The analysis makes clear that the capital structure is not a determinant for undertaking an ECF campaign. The third part of the dissertation analyzes the success of companies financed by ECF in a country comparison. The results show that, after a successful ECF campaign, German companies have a higher chance of receiving follow-up funding from venture capitalists than British companies. The probability of survival, however, is slightly lower for German companies. The results provide relevant implications for theory and practice.
The existing literature in the area of entrepreneurial finance will be extended by insights into investor behavior, additions to the capital structure theory and a country comparison in ECF. In addition, implications are provided for various actors in practice.

Long-Term Memory Updating: The Reset-of-Encoding Hypothesis in List-Method Directed Forgetting
(2017)

People's memory for new information can be enhanced by cuing them to forget older information, as is shown in list-method directed forgetting (LMDF). In this task, people are cued to forget a previously studied list of items (list 1) and to learn a new list of items (list 2) instead. Such cuing typically enhances memory for the list 2 items and reduces memory for the list 1 items, which reflects effective long-term memory updating. This review focuses on the reset-of-encoding (ROE) hypothesis as a theoretical explanation of the list 2 enhancement effect in LMDF. The ROE hypothesis is based on the finding that encoding efficacy typically decreases with the number of encoded items and assumes that providing a forget cue after the study of some items (e.g., list 1) resets the encoding process and makes the encoding of subsequent items (e.g., early list 2 items) as effective as the encoding of previously studied (e.g., early list 1) items. The review provides an overview of current evidence for the ROE hypothesis. This evidence stems from recent behavioral, neuroscientific, and modeling studies that examined LMDF at both the item and the list level. The findings support the view that ROE plays a critical role in the list 2 enhancement effect in LMDF. Alternative explanations of the effect and the generalizability of ROE to other experimental tasks are discussed.