Industrial companies primarily aim to increase their profit, which is why they try to reduce production costs without sacrificing quality. Furthermore, in the context of the 2020 energy targets, energy efficiency plays a crucial role. Mathematical modeling, simulation and optimization tools can contribute to the achievement of these industrial and environmental goals. The process of white wine fermentation offers considerable potential for saving energy. In this thesis, mathematical modeling, simulation and optimization tools are customized to the needs of this biochemical process and applied to it. Two different models are derived that represent the process as it can be observed in real experiments. One model takes the growth, division and death behavior of single yeast cells into account. This is modeled by a partial integro-differential equation together with multiple ordinary integro-differential equations describing the development of the other substrates involved. The other model, described by ordinary differential equations, represents the growth and death behavior of the yeast concentration and the development of the other substrates involved. The more detailed model is investigated analytically and numerically: existence and uniqueness of solutions are studied, and the process is simulated. These investigations initiate a discussion of the additional benefit of this model compared to the simpler one. For optimization, the process is described by the less detailed model. The process is identified via a parameter and state estimation problem. The energy and quality targets are formulated in the objective function of an optimal control or model predictive control problem that controls the fermentation temperature; that is, cooling during the process of wine fermentation is controlled. Parameter and state estimation with nonlinear economic model predictive control is applied in two experiments.
For the first experiment, the optimization problems are solved by multiple shooting, with a backward differentiation formula method for the discretization of the problem and a sequential quadratic programming method with a line-search strategy and a Broyden-Fletcher-Goldfarb-Shanno update for the solution of the constrained nonlinear optimization problems. Different rounding strategies are applied to the resulting post-fermentation control profile. Furthermore, a quality assurance test is performed. The outcomes of this experiment are remarkable energy savings and tasty wine. For the second experiment, some modifications are made, and the optimization problems are solved using direct transcription via orthogonal collocation on finite elements for the discretization and an interior-point filter line-search method for the solution of the constrained nonlinear optimization problems. The second experiment confirms the results of the first: this novel control strategy ensures energy conservation and reduces production costs. Tasty white wine can now be produced at a lower price and with a clearer conscience at the same time.
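The less detailed model can be illustrated with a minimal sketch: a toy system of ordinary differential equations for yeast biomass, sugar and ethanol with a temperature-dependent growth rate, integrated with SciPy. All parameter values and the Monod/Arrhenius-style rate forms are hypothetical placeholders for illustration, not the fitted model from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (hypothetical) parameters -- not the fitted values from the thesis.
MU_MAX, K_S, K_D, Y_XS, Y_ES = 0.25, 20.0, 0.01, 0.1, 0.45

def mu(T):
    """Temperature-dependent specific growth rate (Arrhenius-style sketch)."""
    return MU_MAX * np.exp(-4000.0 * (1.0 / (T + 273.15) - 1.0 / 293.15))

def fermentation(t, y, T):
    X, S, E = y                      # biomass, sugar, ethanol concentrations
    growth = mu(T) * S / (K_S + S) * X
    return [growth - K_D * X,        # yeast grows, then slowly dies
            -growth / Y_XS,          # sugar is consumed by growth
            Y_ES * growth / Y_XS]    # ethanol is produced from consumed sugar

# Ferment for 200 hours at a constant 15 degrees C (the control variable
# that the optimal control problem would vary over time).
sol = solve_ivp(fermentation, (0, 200), [0.1, 200.0, 0.0], args=(15.0,))
X_end, S_end, E_end = sol.y[:, -1]
```

In the actual control problem, the temperature argument would become a time-dependent profile chosen by the optimizer to trade off energy use against fermentation quality.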
Fostering positive and realistic self-concepts of individuals is a major goal in education worldwide (Trautwein & Möller, 2016). Individuals spend most of their childhood and adolescence in school. Thus, schools are important contexts for individuals to develop positive self-perceptions such as self-concepts. In order to enhance positive self-concepts in educational settings and in general, it is indispensable to have comprehensive knowledge about the development and structure of self-concepts and their determinants. To date, extensive empirical and theoretical work on antecedents and change processes of self-concept has been conducted. However, several research gaps still exist, and some of these are the focus of the present dissertation. Specifically, these research gaps encompass (a) the development of multiple self-concepts from multiple perspectives regarding stability and change, (b) the direction of the longitudinal interplay between self-concept facets over the entire time period from childhood to late adolescence, (c) the evidence that a recently developed structural model of academic self-concept (nested Marsh/Shavelson model [Brunner et al., 2010]) fits the data of elementary school students, (d) the investigation of structural changes in academic self-concept profile formation within this model, (e) the investigation of dimensional comparison processes as determinants of academic self-concept profile formation in elementary school students within the internal/external frame of reference model (I/E model; Marsh, 1986), (f) the test of moderating variables for dimensional comparison processes in elementary school, (g) the test of the key assumption of the I/E model that effects of dimensional comparisons depend to a large degree on the existence of achievement differences between subjects, and (h) the generalizability of the findings regarding the I/E model across different statistical analytic methods.
Thus, the aim of the present dissertation is to contribute to closing these gaps with three studies. To this end, data from German students enrolled in elementary through secondary school were gathered in three projects covering the developmental time span from childhood to adolescence (ages 6 to 20). Three vital self-concept areas in childhood and adolescence were investigated: general self-concept (i.e., self-esteem), academic self-concepts (general, math, reading, writing, native language), and social self-concepts (of acceptance and assertion). In all studies, data were analyzed within a latent variable framework. Findings are discussed with respect to the research aims of acquiring more comprehensive knowledge on the structure and development of significant self-concepts in childhood and adolescence and their determinants. In addition, theoretical and practical implications derived from the findings of the present studies are outlined. Strengths and limitations of the present dissertation are discussed. Finally, an outlook for future research on self-concepts is given.
Background and rationale: Changing working conditions demand adaptation, resulting in higher stress levels in employees. As a consequence, decreased productivity, rising rates of sick leave, and cases of early retirement lead to higher direct, indirect, and intangible costs. Aims of the research project: The aim of the study was to test the usefulness of a novel translational diagnostic tool, Neuropattern, for early detection, prevention, and personalized treatment of stress-related disorders. The trial was designed as a pilot study with a wait-list control group. Materials and methods: In this study, 70 employees of the Forestry Department Rhineland-Palatinate, Germany, were enrolled. Subjects were block-randomized according to the functional group of their career field and underwent Neuropattern diagnostics either immediately or after a waiting period of three months. After the diagnostic assessment, their physicians received the Neuropattern Medical Report, including the diagnostic results and treatment recommendations. Participants were informed by the Neuropattern Patient Report and were eligible for an individualized Neuropattern Online Counseling account. Results: The application of Neuropattern diagnostics significantly improved mental health and health-related behavior, and reduced perceived stress, emotional exhaustion, overcommitment, and possibly presenteeism. Additionally, Neuropattern sensitively detected functional changes in stress physiology at an early stage, thus allowing timely personalized interventions to prevent and treat stress pathology. Conclusion: The present study supports the application of Neuropattern diagnostics to early intervention in non-clinical populations. However, further research is required to determine the best operating conditions.
Early life adversity (ELA) is associated with a higher risk for diseases in adulthood. Changes in the immune system have been proposed to underlie this association. Although higher levels of inflammation and immunosenescence have been reported, data on cell-specific immune effects are largely absent. In addition, stress systems and health behaviors are altered in ELA, which may contribute to the generation of the "ELA immune phenotype". In this thesis, we investigated the ELA immune phenotype on a cellular level and asked whether it is an indirect consequence of changes in behavior or stress reactivity. To address these questions, the EpiPath cohort was established, consisting of 115 young adults with or without ELA. ELA participants had experienced separation from their parents in early childhood and were subsequently adopted, a standard model for ELA, whereas control participants grew up with their biological parents. At a first visit, blood samples were taken for analysis of epigenetic markers and immune parameters. A selection of the cohort underwent a standardized laboratory stress test (SLST). Endocrine, immune, and cardiovascular parameters were assessed at several time points before and after stress. At a second visit, participants underwent structured clinical interviews and filled out psychological questionnaires. We observed a higher number of activated T cells in ELA, measured by HLA-DR and CD25 expression. Neither cortisol levels nor health-risk behaviors explained the observed group differences. Apart from a trend towards higher numbers of CCR4+CXCR3-CCR6+ CD4 T cells in ELA, relative numbers of immune cell subsets in circulation were similar between groups. No difference was observed in telomere length or in methylation levels of age-related CpGs in whole blood. However, we found a higher expression of senescence markers (CD57) on T cells in ELA. In addition, these cells had an increased cytolytic potential.
A mediation analysis demonstrated that cytomegalovirus infection (an important driving force of immunosenescence) largely accounted for the elevated CD57 expression. The psychological investigations revealed that after adoption, family conditions appeared to have been similar to those of the controls. However, ELA participants scored higher on a depression index and chronic stress, and lower on self-esteem. Psychological, endocrine, and cardiovascular parameters responded significantly to the SLST, but were largely similar between the two groups. Only in a smaller subset of groups matched for gender, BMI, and age did the cortisol response seem to be blunted in ELA participants. Although we found small differences in the methylation level of the GR promoter, GR sensitivity and GR mRNA expression levels as well as the expression of the GR target genes FKBP5 and GILZ were similar between groups. Taken together, our data suggest an elevated state of immune activation in ELA, in which particularly T cells are affected. Furthermore, we found higher levels of T cell immunosenescence in ELA. Our data suggest that ELA may increase the risk of cytomegalovirus infection in early childhood, thereby mediating the effect of ELA on T cell-specific immunosenescence. Importantly, we found no evidence of HPA axis dysregulation in participants exposed to ELA in the EpiPath cohort. Thus, the observed immune phenotype does not seem to be secondary to alterations in the stress system or health-risk behaviors, but rather a primary effect of early life programming on immune cells. Longitudinal studies will be necessary to further dissect cause from effect in the development of the ELA immune phenotype.
This thesis is focused on improving the knowledge on a group of threatened species, the European cave salamanders (genus Hydromantes). There are three main sections gathering studies on different topics: Ecology (first part), Life traits (second part) and Monitoring methodologies (third part). The first part starts with the study of the response of Hydromantes to the variation of climatic conditions, analysing 15 different localities throughout a full year (CHAPTER I; published in PEERJ in August 2015). After that, the focus moves to identifying the operative temperature that these salamanders experience, including how their bodies respond to variations of environmental temperature. This study was conducted using one of the most advanced tools, an infrared thermal camera, which gave the opportunity to perform detailed observations of the salamanders' bodies (CHAPTER II; published in JOURNAL OF THERMAL BIOLOGY in June 2016). In the next chapter we use the previous results to analyse the ecological niche of all eight Hydromantes species. The study mostly underlines the mismatch between macro- and microscale analyses of the ecological niche, showing a weak conservatism of ecological niches within the evolution of the species (CHAPTER III; unpublished manuscript). We then focus on hybrids, which occur within the natural distribution of the mainland species. Here, we analyse whether the ecological niche of hybrids diverges from those of the parental species, thus evaluating the adaptive capacity of hybrids (CHAPTER IV; unpublished manuscript). Considering that hybrids may represent a potential threat for the parental species (in terms of genetic erosion and competition), we produced the first ecological study on an allochthonous mixed population of Hydromantes, analysing population structure, ecological requirements and diet.
The interest in this particular population stems mostly from the fact that its members come from all three mainland Hydromantes species, so it may represent a potential source of new hybrids (CHAPTER V; accepted in AMPHIBIA-REPTILIA in October 2017). The focus then moves to how bioclimatic parameters affect species within their distributional range. Using the microendemic H. flavus as a model species, we analyse the relationship between environmental suitability and local abundance of the species, also focusing on all intermediate dynamics, which provide useful information on the spatial variation of individual fitness (CHAPTER VI; submitted to SCIENTIFIC REPORTS in November 2017). The first part ends with an analysis of the interaction between Hydromantes and the leech Batracobdella algira, the only known ectoparasite of European cave salamanders. Considering that the effect of leeches on their hosts is potentially detrimental, we investigated whether these ectoparasites may represent a further threat for Hydromantes (CHAPTER VII; submitted to INTERNATIONAL JOURNAL FOR PARASITOLOGY: PARASITES AND WILDLIFE in November 2017). The second part is related to the reproduction of Hydromantes. In the first study we analyse the breeding behaviour of several females belonging to a single population, identifying differences and similarities occurring in cohorting females (CHAPTER VIII; published in NORTH-WESTERN JOURNAL OF ZOOLOGY in December 2015). In the second study we gather information from all Hydromantes species, analysing the size and development of breeding females and identifying a relationship between breeding time and climatic conditions (CHAPTER IX; submitted to SALAMANDRA in June 2017). In the last part of this thesis, we analyse two potential methods for monitoring Hydromantes populations. In the first study we evaluate the efficiency of the marking method involving Alpha tags (CHAPTER X; published in SALAMANDRA in October 2017).
In the second study we focus on evaluating N-mixture models as a methodology for estimating abundance in wild populations (CHAPTER XI; submitted to BIODIVERSITY & CONSERVATION in October 2017).
There are large health, societal, and economic costs associated with attrition from psychological services. The recently emerged, innovative statistical tool of complex network analysis was used in the present proof-of-concept study to improve the prediction of attrition. Fifty-eight patients undergoing psychological treatment for mood or anxiety disorders were assessed using Ecological Momentary Assessments four times a day for two weeks before treatment (3,248 measurements). Multilevel vector autoregressive models were employed to compute dynamic symptom networks. Intake variables and network parameters (centrality measures) were used as predictors of dropout using machine-learning algorithms. Networks differed significantly between completers and dropouts. Among the intake variables, initial impairment and sex predicted dropout, explaining 6% of the variance. The network analysis identified four additional predictors: expected force of being excited, outstrength of experiencing social support, betweenness of feeling nervous, and instrength of being active. The final model with the two intake and four network variables explained 32% of the variance in dropout and correctly identified 47 out of 58 patients. The findings indicate that patients' dynamic network structures may improve the prediction of dropout. When implemented in routine care, such prediction models could identify patients at risk of attrition and inform personalized treatment recommendations.
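The centrality measures used as predictors here (e.g., instrength and outstrength) are derived from the coefficient matrix of a vector autoregressive model, in which the lag-1 coefficient from symptom j to symptom i is a directed edge of the temporal network. The sketch below illustrates the idea for a single simulated person, using plain least squares rather than the multilevel estimation of the study; the three "symptoms" and their coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth lag-1 VAR coefficients for 3 EMA items
# (say, "nervous", "excited", "active"); entry [i, j] is the effect
# of item j at time t-1 on item i at time t.
B_true = np.array([[0.5, 0.3, 0.0],
                   [0.0, 0.4, 0.2],
                   [0.0, 0.0, 0.3]])
T = 2000
y = np.zeros((T, 3))
for t in range(1, T):
    y[t] = B_true @ y[t - 1] + rng.normal(scale=0.5, size=3)

# Fit the VAR(1) by ordinary least squares: y[t] ~ B @ y[t-1].
X, Y = y[:-1], y[1:]
B_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Temporal-network centralities: outstrength of node j sums the absolute
# outgoing edges |B[i, j]| over i != j; instrength sums incoming edges.
off_diag = ~np.eye(3, dtype=bool)
abs_edges = np.where(off_diag, np.abs(B_hat), 0.0)
out_strength = abs_edges.sum(axis=0)
in_strength = abs_edges.sum(axis=1)
```

In the study, such node-level centralities per patient feed into the machine-learning model alongside the intake variables.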
The availability of data on the feeding habits of species of conservation value can be of great importance for developing analyses for both scientific and management purposes. Stomach flushing is a harmless technique that allowed us to collect extensive data on the feeding habits of six Hydromantes species. Here, we present two datasets originating from a three-year study performed in multiple seasons (spring and autumn) on 19 different populations of cave salamanders. The first dataset contains data on the stomach contents of 1,250 salamanders, in which 6,010 items were recognized; the second reports the size of the intact prey items found in the stomachs. These datasets considerably extend the data already available on the diet of the European plethodontid salamanders and are also of potential use for large-scale meta-analyses on amphibian diet.
Leeches can parasitize many vertebrate taxa. In amphibians, leech parasitism often has potentially detrimental effects, including population decline. Most studies on the host-parasite interactions involving leeches and amphibians focus on freshwater environments, whereas studies on terrestrial amphibians are very scarce. In this work, we studied the relationship between the leech Batracobdella algira and the European terrestrial salamanders of the genus Hydromantes, identifying environmental features related to the presence of the leeches and their possible effects on the hosts. We performed observations throughout Sardinia (Italy), covering the distribution area of all Hydromantes species endemic to this island. From September 2015 to May 2017, we conducted >150 surveys in 26 underground environments, collecting data on 2,629 salamanders and 131 leeches. Water hardness was the only environmental feature correlated with the presence of B. algira, linking this leech to active karstic systems. Leeches more frequently parasitized salamanders with a large body size. The Body Condition Index was not significantly different between parasitized and non-parasitized salamanders. Our study shows the importance of abiotic environmental features for host-parasite interactions and poses new questions on the complex interspecific interactions between this ectoparasite and amphibians.
Dry tropical forests are facing massive conversion and degradation processes and are the most endangered forest type worldwide. One of the largest dry forest types is the Miombo forest, which stretches across the Southern African subcontinent; the proportionally largest part of this type can be found in Angola. The study site of this thesis is located in south-central Angola. The country still suffers from the consequences of 27 years of civil war (1975-2002), which provide a unique socio-economic setting. The natural characteristics of the area form a representative cross-section that proved ideal for studying underlying drivers as well as current and retrospective land use change dynamics. The major land change dynamic of the study area is the conversion of Miombo forests to cultivation areas, as well as the modification of forest areas, i.e. degradation, due to the extraction of natural resources. With future predictions of population growth, climate change and large-scale investments, land pressure is expected to increase further. To fully understand the impacts of these dynamics, both conversion and modification of forest areas were assessed. Using the conceptual framework of ecosystem services, the predominant trade-off between food and timber in the study area was analyzed, including retrospective dynamics and impacts. This approach accounts for products that contribute directly or indirectly to human well-being. For this purpose, data from the Landsat archive from 1989 until 2013 was applied in different approaches adapted to the study area. The objectives of these approaches were (I) to detect underlying drivers and the temporal and spatial extent of their impact, (II) to describe modification and conversion processes reaching from times of armed conflict over the ceasefire to the post-war period and (III) to provide an assessment of drivers and impacts in a comparative setting.
It could be shown that the major underlying drivers of the conversion processes are resettlement dynamics as well as the location and quality of streets and settlements. Furthermore, forests that are selectively used for resource extraction have a higher chance of being converted to fields. Drivers of forest degradation are, on the one hand, also strongly connected to settlements and infrastructure, and on the other hand, to a large extent, to fire dynamics that occur mostly in more remote and presumably undisturbed forest areas. The loss of woody biomass, as well as its slow recovery after the abandonment of fields, could be quantified and stands in stark contrast to the amount of cultivated food that is needed. The results of the thesis support the fundamental understanding of drivers and impacts in the study area and can thus contribute to sustainable resource management.
This thesis considers the general task of computing a partition of a set of given objects such that each set of the partition has a cardinality of at least a fixed number k. Among such partitions, which we call k-clusters, the objective is to find the k-cluster which minimises a certain cost derived from a given pairwise difference between objects which end up in the same set. As a first step, this thesis introduces a general problem, denoted by (||.||,f)-k-cluster, which models the task of finding a k-cluster of minimum cost given by an objective function computed with respect to specific choices for the cost functions f and ||.||. In particular, this thesis considers three different choices for f and three different choices for ||.||, which results in a total of nine variants of the general problem. With the idea of using the concept of parameterised approximation, we first investigate the role of the lower bound on the cluster cardinalities and find that k is not a suitable parameter, since the problem remains NP-hard even when k is fixed to the constant 3. The reductions presented to show this hardness yield the even stronger result that polynomial-time approximations with some constant performance ratio for any of the nine variants of (||.||,f)-k-cluster require a restriction to instances for which the pairwise distance on the objects satisfies the triangle inequality. For this restriction, to what we informally refer to as metric instances, constant-factor approximation algorithms for eight of the nine variants of (||.||,f)-k-cluster are presented. While two of these algorithms yield the provably best approximation ratio (assuming P != NP), the others can only guarantee a performance which depends on the lower bound k. With the positive effect of the triangle inequality and applications to facility location in mind, we discuss the further restriction to the setting where the given objects are points in Euclidean metric space.
Considering the computational hardness caused by high dimensionality of the input for other related problems (the curse of dimensionality), we check whether this is also the source of intractability for (||.||,f)-k-cluster. However, the problem remains NP-hard even when restricted to a small constant dimensionality, which disproves this theory. We then use parameterisation to develop approximation algorithms for (||.||,f)-k-cluster without the restriction to metric instances. In particular, we discuss structural parameters which reflect how much the given input differs from a metric. This idea results in parameterised approximation algorithms with parameters such as the number of conflicts (our name for pairs of objects for which the triangle inequality is violated) or the number of conflict vertices (objects involved in a conflict). The performance ratios of these parameterised approximations are in most cases identical to those of the approximations for metric instances. This shows that for most variants of (||.||,f)-k-cluster, efficient and reasonable solutions are also possible for non-metric instances.
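The structural parameters named above can be made concrete with a small sketch that, for a given distance matrix, counts the conflicts (pairs violating the triangle inequality) and collects the conflict vertices; the 4-point instance is a made-up example, not one from the thesis.

```python
from itertools import combinations

def count_conflicts(d):
    """Return the set of pairs (i, j), i < j, violating the triangle
    inequality (d[i][j] > d[i][k] + d[k][j] for some k), together with
    the set of conflict vertices (objects involved in some conflict)."""
    n = len(d)
    conflict_pairs = set()
    for i, j in combinations(range(n), 2):
        for k in range(n):
            if k not in (i, j) and d[i][j] > d[i][k] + d[k][j]:
                conflict_pairs.add((i, j))
                break  # one witness k suffices for this pair
    conflict_vertices = {v for pair in conflict_pairs for v in pair}
    return conflict_pairs, conflict_vertices

# A non-metric 4-point instance: the pair (0, 1) violates the triangle
# inequality via the shortcut over object 2 (10 > 2 + 2).
d = [[0, 10, 2, 3],
     [10, 0, 2, 3],
     [2, 2, 0, 3],
     [3, 3, 3, 0]]
pairs, vertices = count_conflicts(d)
```

For a metric instance both sets are empty; otherwise their sizes are exactly the two parameters the parameterised approximation algorithms are measured against.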
At any given moment, our senses are assaulted with a flood of information from the environment around us. We need to pick our way through all this information in order to be able to respond effectively to what is relevant to us. In most cases we are able to separate the information relevant to our intentions from what is not relevant. However, what happens to the information that is not relevant to us? Is this irrelevant information completely ignored, so that it does not affect our actions? The literature suggests that even though we may ignore an irrelevant stimulus, it may still interfere with our actions. One of the ways in which irrelevant stimuli can affect actions is by retrieving a response with which they were associated. An irrelevant stimulus that is presented in close temporal contiguity with a relevant stimulus can be associated with the response made to the relevant stimulus, an observation termed distractor-response binding (Rothermund, Wentura, & De Houwer, 2005). The studies presented in this work take a closer look at such distractor-response bindings and the circumstances in which they occur. Specifically, the study reported in chapter 6 examined whether only an exact repetition of the distractor can retrieve the response with which it was associated, or whether even similar distractors may cause retrieval. The results suggested that even repeating a similar distractor caused retrieval, albeit less than an exact repetition. In chapter 7, the existence of bindings between a distractor and a response was tested beyond a perceptual level, to see whether they exist at an (abstract) conceptual level. Similar to perceptual repetition, distractor-based retrieval of the response was observed for the repetition of concepts. The study reported in chapter 8 of this work examined the influence of attention on the feature-response binding of irrelevant features.
The results pointed towards stronger binding effects when attention was directed towards the irrelevant feature compared to when it was not. The study in chapter 9 looked at the processes underlying distractor-based retrieval and distractor inhibition. The data suggest that motor processes underlie distractor-based retrieval and cognitive processes underlie distractor inhibition. Finally, the findings of all four studies are also discussed in the context of learning.
Water-deficit stress, usually shortened to water or drought stress, is one of the most critical abiotic stressors limiting plant growth as well as crop yield and quality in food production. Today, agriculture consumes about 80-90% of the global freshwater used by humans, and about two thirds of this is used for crop irrigation. An increasing world population and a predicted rise of 1.0 to 2.5 °C in the annual mean global temperature as a result of climate change will further increase the demand for water in agriculture. Therefore, one of the most challenging tasks of our generation is to reduce the amount of water used per unit yield in order to satisfy the second UN Sustainable Development Goal and to ensure global food security. Precision agriculture offers new farming methods with the goal of improving the efficiency of crop production through a sustainable use of resources. Plant responses to water stress are complex and co-occur with other environmental stresses under natural conditions. In general, water stress causes physiological and biochemical changes in the plant that depend on the severity and the duration of the actual plant water deficit. Stomatal closure is one of the first responses to plant water stress, causing a decrease in plant transpiration and thus an increase in plant temperature. Prolonged or severe water stress leads to irreversible damage to the photosynthetic machinery and is associated with decreasing chlorophyll content and leaf structural changes (e.g., leaf rolling). Since a crop can already be irreversibly damaged by only mild water deficit, a pre-visual detection of water stress symptoms is essential to avoid yield loss. Remote sensing offers a non-destructive and spatio-temporal method for measuring numerous physiological, biochemical and structural crop characteristics at different scales and is thus one of the key technologies used in precision agriculture.
With respect to the detection of plant responses to water stress, the current state-of-the-art hyperspectral remote sensing imaging techniques are based on measurements of thermal infrared emission (TIR; 8-14 µm), visible, near- and shortwave infrared reflectance (VNIR/SWIR; 0.4-2.5 µm), and sun-induced fluorescence (SIF; 0.69 and 0.76 µm). It is, however, still unclear how sensitive these techniques are with respect to water stress detection. Therefore, the overall aim of this dissertation was to provide a comparative assessment of remotely sensed measures from the TIR, SIF, and VNIR/SWIR domains regarding their ability to detect plant responses to water stress at ground and airborne level. The main findings of this thesis are: (i) temperature-based indices (e.g., CWSI) were the most sensitive for the detection of plant water stress in comparison to reflectance-based VNIR/SWIR indices (e.g., PRI) and SIF at both ground and airborne level; (ii) for the first time, spectral emissivity as measured by the new hyperspectral TIR instrument could be used to detect plant water stress at ground level. Based on these findings, it can be stated that hyperspectral TIR remote sensing offers great potential for the detection of plant responses to water stress at ground and airborne level based on both TIR key variables, surface temperature and spectral emissivity. However, the large-scale application of water stress detection based on hyperspectral TIR measures in precision agriculture is challenged by several problems: (i) missing thresholds of temperature-based indices (e.g., CWSI) for the application in irrigation scheduling, (ii) the lack of current TIR satellite missions with suitable spectral and spatial resolution, and (iii) the lack of appropriate data processing schemes (including atmospheric correction and temperature-emissivity separation) for hyperspectral TIR remote sensing at airborne and satellite level.
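The CWSI named in finding (i) is conventionally computed by normalizing the measured canopy temperature between a wet (fully transpiring) and a dry (non-transpiring) reference baseline; a minimal sketch of that standard formulation follows, with made-up temperatures for illustration (the thesis' own baseline estimation may differ).

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index: 0 for a fully transpiring (unstressed)
    canopy at the wet baseline, 1 for a non-transpiring (fully stressed)
    canopy at the dry baseline, under the same ambient conditions."""
    if t_dry <= t_wet:
        raise ValueError("dry baseline must be warmer than wet baseline")
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Example: canopy at 28 C between a 24 C wet and a 34 C dry baseline.
stress = cwsi(28.0, 24.0, 34.0)  # -> 0.4
```

The practical problem flagged above is precisely that no agreed threshold on this 0-1 scale exists for triggering irrigation.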
Educational researchers have intensively investigated students' academic self-concept (ASC) and self-efficacy (SE). Both constructs are part of the competence-related self-perceptions of students and are considered to support students' academic success and their career development in a positive manner (e.g., Abele-Brehm & Stief, 2004; Richardson, Abraham, & Bond, 2012; Schneider & Preckel, 2017). However, there is a lack of basic research on ASC and SE in higher education in general, and in undergraduate psychology courses in particular. Therefore, following the within-network and between-network approaches of construct validation (Byrne, 1984), the present dissertation comprises three empirical studies examining the structure (research question 1), measurement (research question 2), correlates (research question 3), and differentiation (research question 4) of ASC and SE in a total sample of N = 1243 psychology students. Concerning research question 1, results of confirmatory factor analyses (CFAs) implied that students' ASC and SE are domain-specific in the sense of multidimensionality, but are also hierarchically structured, with a general factor at the apex, according to the nested Marsh/Shavelson model (NMS model; Brunner et al., 2010). Additionally, psychology students' SE to master specific psychological tasks in different areas of psychological application could be described by a 2-dimensional model with six factors according to the multitrait-multimethod (MTMM) approach (Campbell & Fiske, 1959). With regard to research question 2, results revealed that the internal structure of ASC and SE could be validly assessed. However, the assessment of psychology students' SE should follow a task-specific measurement strategy.
Results concerning research question 3 further showed that both constructs of psychology students' competence-related self-perceptions were positively correlated with achievement in undergraduate psychology courses if the measurement specificity of the predictor (ASC, SE) corresponded to that of the criterion (achievement). Overall, ASC showed substantially stronger relations to achievement than SE. Moreover, there was evidence for negative paths (contrast effects) from achievement in one psychological domain to the ASC of another psychological domain, as postulated by the internal/external frame of reference (I/E) model (Marsh, 1986). Finally, building on research questions 1 to 3 (structure, measurement, and correlates of ASC and SE), psychology students' ASC and SE could be differentiated on an empirical level (research question 4). Implications for future research practices are discussed. Furthermore, practical implications for enhancing ASC and SE in higher education are proposed to support the academic achievement and career development of psychology students.
Digital libraries have become a central aspect of our lives. They provide us with immediate access to an amount of data that would have been unthinkable in the past. Computer support and the ability to aggregate data from different libraries enable even small projects to maintain large digital collections on various topics. A central aspect of digital libraries is the metadata -- the information that describes the objects in the collection. Metadata are digital and can be processed and studied automatically. In recent years, several studies have considered different aspects of metadata. Many focus on finding defects in the data; in particular, locating errors related to the handling of personal names has drawn attention. In most cases these studies concentrate on the most recent metadata of a collection, for example looking for errors in the collection at day X. This is a reasonable approach for many applications. However, to answer questions such as when an error was added to the collection, we need to consider the history of the metadata itself. In this work, we study how the history of metadata can be used to improve the understanding of a digital library. To this end, we consider how digital libraries handle and store their metadata. Based on this information we develop a taxonomy to describe the available historical data, that is, data on how the metadata records changed over time. We develop a system that identifies changes to metadata over time and groups them into semantically related blocks. We found that historical metadata is often unavailable; however, we were able to apply our system to a set of large real-world collections. A central part of this work is the identification and analysis of changes to metadata which corrected a defect in the collection. These corrections represent the accumulated effort to ensure the data quality of a digital library.
In this work, we present a system that automatically extracts corrections of defects from the set of all modifications. We present test collections containing more than 100,000 test cases which we created by extracting defects and their corrections from DBLP. These collections can be used to evaluate automatic approaches for error detection. Furthermore, we use these collections to study properties of defects, concentrating on defects related to the person name problem. We show that many defects occur in situations where very little context information is available, which has major implications for automatic defect detection. We also show that properties of defects depend on the digital library in which they occur, and we briefly discuss how corrected defects can be used to detect hidden or future defects. Besides the study of defects, we show that historical metadata can be used to study the development of a digital library over time, and we present several studies as examples of how it can be used. First, we describe the development of the DBLP collection over a period of 15 years. Specifically, we study how the coverage of different computer science subfields changed over time, and we show that DBLP evolved from a specialized project into a collection that encompasses most parts of computer science. In another study we analyze the impact of user emails on defect corrections in DBLP. We show that these emails trigger a significant number of error corrections. Based on these data we can draw conclusions on why users report a defective entry in DBLP.
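The change-identification step described above can be pictured with a toy sketch: compare two versions of a record field by field and report what changed. The records and field names below are hypothetical stand-ins, not the system's actual data model.

```python
def record_diff(old, new):
    """Identify field-level changes between two versions of a metadata record.
    Returns (field, old_value, new_value) tuples; a None value marks a field
    that was added or removed."""
    changes = []
    for field in sorted(set(old) | set(new)):
        if old.get(field) != new.get(field):
            changes.append((field, old.get(field), new.get(field)))
    return changes

# Hypothetical DBLP-style record before and after a name correction:
v1 = {"title": "On Automata", "author": "J. Smith"}
v2 = {"title": "On Automata", "author": "John Smith", "year": "2001"}
print(record_diff(v1, v2))
# [('author', 'J. Smith', 'John Smith'), ('year', None, '2001')]
```

Grouping such field-level changes into semantically related blocks (e.g., all name corrections) would be the next step on top of this primitive.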
The search for relevant determinants of knowledge acquisition has a long tradition in educational research, with systematic analyses having started over a century ago. To date, a variety of relevant environmental and learner-related characteristics have been identified, providing a wide body of empirical evidence. However, there are still some gaps in the literature, which are highlighted in the current dissertation. The dissertation includes two meta-analyses, summarizing the evidence on the effectiveness of electrical brain stimulation and on the effects of prior knowledge on later learning outcomes, and one empirical study employing latent profile transition analysis to investigate changes in conceptual knowledge over time. The results from the three studies demonstrate how learning outcomes can be advanced by input from the environment and that they are highly related to the students' level of prior knowledge. It is concluded that environmental and learner-related variables affect both the biological and the cognitive processes underlying knowledge acquisition. Based on the findings from the three studies, methodological and practical implications are provided, followed by an outline of four recommendations for future research on knowledge acquisition.
This thesis is divided into three main parts: the description of the calibration problem, the numerical solution of this problem, and the connection to optimal stochastic control problems. Fitting model prices to given market prices leads to an abstract least-squares formulation of the calibration problem. The corresponding option price can be computed by solving a stochastic differential equation via the Monte-Carlo method, which seems to be the approach preferred by most practitioners. Because the Monte-Carlo method is expensive in terms of computational effort and memory, more sophisticated stochastic predictor-corrector schemes are established in this thesis. The numerical advantage of these predictor-corrector schemes is presented and discussed. The adjoint method is applied to the calibration, and its theoretical advantage is discussed in detail. It is shown that the computational effort of gradient calculation via the adjoint method is independent of the number of calibration parameters. Numerical results confirm the theoretical results and summarize the computational advantage of the adjoint method. Furthermore, the connection to optimal stochastic control problems is proven in this thesis.
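For orientation, the baseline Monte-Carlo approach mentioned above can be sketched as follows: simulate many Euler-Maruyama paths of the price process and average the discounted payoffs. This is a generic illustration under Black-Scholes dynamics with made-up parameters, not the thesis's predictor-corrector scheme or its calibration setup.

```python
import math
import random

def mc_call_price(s0, strike, r, sigma, T, n_paths=20000, n_steps=50, seed=1):
    """Baseline Monte-Carlo price of a European call under geometric Brownian
    motion, using the Euler-Maruyama discretization of dS = r*S dt + sigma*S dW.
    Predictor-corrector schemes refine exactly this per-step update."""
    random.seed(seed)
    dt = T / n_steps
    payoff_sum = 0.0
    for _ in range(n_paths):
        s = s0
        for _ in range(n_steps):
            dw = random.gauss(0.0, math.sqrt(dt))
            s += r * s * dt + sigma * s * dw  # Euler-Maruyama step
        payoff_sum += max(s - strike, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

# Hypothetical parameters; the estimate should land near the
# Black-Scholes value of roughly 10.45 for these inputs.
print(round(mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0), 2))
```

The Monte-Carlo error shrinks only with the square root of the number of paths, which is why cheaper, more accurate time-stepping schemes pay off in a calibration loop that reprices many times.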
Surveys are commonly tailored to produce estimates of aggregate statistics with a desired level of precision. This may lead to very small sample sizes for subpopulations of interest, defined geographically or by content, which are not incorporated into the survey design. We refer to subpopulations where the sample size is too small to provide direct estimates with adequate precision as small areas or small domains. Despite the small sample sizes, reliable small area estimates are needed for economic and political decision making. Hence, model-based estimation techniques are used which increase the effective sample size by borrowing strength from other areas to provide accurate information for small areas. The paragraph above introduced small area estimation as a field of survey statistics where two conflicting philosophies of statistical inference meet: the design-based and the model-based approach. While the first approach is well suited for the precise estimation of aggregate statistics, the latter approach furnishes reliable small area estimates. In most applications, estimates for both large and small domains based on the same sample are needed. This poses a challenge to the survey planner, as the sampling design has to reflect different and potentially conflicting requirements simultaneously. In order to enable efficient design-based estimates for large domains, the sampling design should incorporate information related to the variables of interest. This may be achieved using stratification or sampling with unequal probabilities. Many model-based small area techniques require an ignorable sampling design such that after conditioning on the covariates the variable of interest does not contain further information about the sample membership. If this condition is not fulfilled, biased model-based estimates may result, as the model which holds for the sample is different from the one valid for the population. 
Hence, an optimisation of the sampling design without investigating the implications for model-based approaches will not be sufficient. Analogously, disregarding the design altogether and focussing only on the model is prone to failure as well. Instead, a profound knowledge of the interplay between the sample design and statistical modelling is a prerequisite for implementing an effective small area estimation strategy. In this work, we concentrate on two approaches to address this conflict. Our first approach takes the sampling design as given and can be used after the sample has been collected. It amounts to incorporating the survey design into the small area model to avoid biases stemming from informative sampling. Thus, once a model is validated for the sample, we know that it holds for the population as well. We derive such a procedure under a lognormal mixed model, which is a popular choice when the support of the dependent variable is limited to positive values. Besides, we propose a three-pillar strategy to select the additional variable accounting for the design, based on a graphical examination of the relationship, a comparison of the predictive accuracy of the choices, and a check of the normality assumptions.

Our second approach to deal with the conflict is based on the notion that the design should allow applying a wide variety of analyses using the sample data. Thus, if the use of model-based estimation strategies can be anticipated before the sample is drawn, this should be reflected in the design. The same applies to the estimation of national statistics using design-based approaches. Therefore, we propose to construct the design such that the sampling mechanism is non-informative but allows for precise design-based estimates at an aggregate level.
In this thesis, we present a new approach for estimating the effects of wind turbines on a local bat population. We build an individual-based model (IBM) which simulates the movement behaviour of every single bat of the population with its own preferences, foraging behaviour and other species characteristics. This behaviour is averaged via a Monte-Carlo simulation, which gives us the average behaviour of the population. The result is an occurrence map of the considered habitat which tells us how often the considered bat population frequents each region of this habitat. Hence, it is possible to estimate the crossing rate at the position of an existing or potential wind turbine. We compare this individual-based approach with a method based on partial differential equations. The second approach requires less computational effort but, unfortunately, we lose information about the movement trajectories at the same time. Additionally, the PDE-based model only gives us a density profile; hence, we lose the information on how often each bat crosses specific points in the habitat in one night. In a next step we predict the average number of fatalities for each wind turbine in the habitat, depending on the type of the wind turbine and the behaviour of the considered bat species. This gives us the extra mortality caused by the wind turbines for the local population. This value is used in a population model, and finally we can calculate whether the population still grows or whether there is already a decline in population size which leads to the extinction of the population. Using the combination of all these models, we are able to evaluate the conflict between wind turbines and bats and to predict its outcome. Furthermore, it is possible to find better positions for wind turbines such that the local bat population has a better chance to survive.
Since bats tend to move in swarm formations under certain circumstances, we introduce swarm simulation using partial integro-differential equations. Thereby, we have a closer look at existence and uniqueness properties of solutions.
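The individual-based Monte-Carlo idea above can be pictured with a toy sketch: let many simulated bats perform random walks on a grid and accumulate their visits into an occurrence map. The grid size, roost position and movement rule below are hypothetical simplifications; the actual IBM uses species-specific preferences and foraging behaviour.

```python
import random

def occurrence_map(width, height, n_bats=200, n_steps=300, seed=7):
    """Toy individual-based sketch: each bat performs a random walk on a grid,
    and visits are accumulated into an occurrence map.  Averaging over many
    simulated individuals approximates how often the population frequents
    each cell, e.g. the cell of a planned wind turbine."""
    random.seed(seed)
    visits = [[0] * width for _ in range(height)]
    for _ in range(n_bats):
        x, y = width // 2, height // 2   # all bats start at a common roost
        for _ in range(n_steps):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 0), width - 1)   # reflecting habitat boundary
            y = min(max(y + dy, 0), height - 1)
            visits[y][x] += 1
    return visits

grid = occurrence_map(21, 21)
# Relative crossing rate at a hypothetical turbine position:
print(grid[10][10] / (200 * 300))
```

A PDE-based alternative would evolve a density field on the same grid instead of tracking individuals, which is cheaper but discards the per-bat trajectory information the text refers to.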
Interaction between the Hypothalamic-Pituitary-Adrenal Axis and the Circadian Clock System in Humans
(2017)
The rotation of the Earth creates day and night cycles of 24 h. The endogenous circadian clocks sense these light/dark rhythms, and the master pacemaker, situated in the suprachiasmatic nucleus of the hypothalamus, entrains physiological activities according to this information. The circadian machinery is built from transcriptional/translational feedback loops generating oscillations in all nucleated cells of the body. In addition, unexpected environmental changes, called stressors, also challenge living systems. A response to these stimuli is provided immediately via the autonomic nervous system and more slowly via the hypothalamic-pituitary-adrenal (HPA) axis. When the HPA axis is activated, circulating glucocorticoids are elevated and regulate organ activities in order to maintain the survival of the organism. Both the clock and the stress systems are essential for the continuity of the organism and interact with each other to keep internal homeostasis. The physiological interactions between the HPA axis and the circadian clock system have mainly been addressed in animal studies, which focus on the effects of stress and circadian disturbances on cardiovascular, psychiatric and metabolic disorders. Although such studies offer the opportunity to test effects in the whole organism, to apply techniques that cannot be used in humans, and to control and manipulate parameters to a high degree, the generalization of their results to humans is still debated. On the other hand, studies conducted with cell lines cannot really reflect the conditions occurring in a living organism. Thus, human studies are absolutely necessary to investigate the mechanisms involved in stress and circadian responses. The studies presented in this thesis were intended to determine the effects of cortisol, as an end-product of the HPA axis, on PERIOD (PER1, PER2 and PER3) transcripts as circadian clock genes in healthy humans. The expression levels of the PERIOD genes were measured in whole blood under baseline conditions and after stress.
The results demonstrated here provide a better understanding of the transcriptional programming regulated by pulsatile cortisol under standard conditions and of the short-term effects of a cortisol increase on circadian clocks after acute stress. These findings also draw attention to inter-individual variations in the stress response as well as to non-circadian functions of the PERIOD genes in the periphery, which need to be examined in detail in the future.
Automata theory is the study of abstract machines. It is a theory in theoretical computer science and discrete mathematics (a subject of study in both mathematics and computer science). The word automata (the plural of automaton) comes from a Greek word meaning "self-acting". Automata theory is closely related to formal language theory [99, 101]; the theory of formal languages constitutes the backbone of the field now generally known as theoretical computer science. This thesis aims to introduce a few types of automata and to study the classes of languages recognized by them. Chapter 1 is the road map, with introduction and preliminaries. In Chapter 2 we consider a few formal languages associated with graphs that have Eulerian trails, and we place in the Chomsky hierarchy a few languages that combine other properties with the Eulerian property. In Chapter 3 we consider jumping finite automata, i.e., finite automata in which the input head, after reading and consuming a symbol, can jump to an arbitrary position of the remaining input. We characterize the class of languages described by jumping finite automata in terms of special shuffle expressions and survey other equivalent notions from the existing literature. We also characterize some superclasses of this language class. In Chapter 4 we introduce boustrophedon finite automata, i.e., finite automata working on rectangular-shaped arrays (i.e., pictures) in a boustrophedon mode, and we also introduce returning finite automata, which read the input line after line but, unlike boustrophedon finite automata, do not alternate direction, i.e., always read from left to right, line after line. We provide close relationships with the well-established class of regular matrix (array) languages and sketch possible applications to character recognition and kolam patterns. Chapter 5 deals with general boustrophedon finite automata and general returning finite automata, which read with different scanning strategies.
We show that all 32 different variants describe only two different classes of array languages. We also introduce Mealy machines working on pictures and show how these can be used in a modular design of picture processing devices. In Chapter 6 we compare three different types of regular grammars for array languages introduced in the literature, namely regular matrix grammars, (regular : regular) array grammars, and isometric regular array grammars, together with variants thereof, focusing on hierarchical questions. We also refine the presentation of (regular : regular) array grammars in order to clarify the interrelations. In Chapter 7 we provide further directions of research with respect to the studies carried out in each of the chapters.
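The two scanning modes compared above can be illustrated with a small sketch: a boustrophedon automaton consumes a picture row by row while alternating the reading direction, whereas a returning automaton restarts each row from the left. The function below only linearizes a picture in boustrophedon order; it is an illustrative simplification, not one of the automaton models defined in the thesis.

```python
def boustrophedon_scan(picture):
    """Read a rectangular array line by line, alternating direction
    (left-to-right, then right-to-left), as a boustrophedon finite
    automaton would; a returning automaton would instead read every
    row left-to-right."""
    word = []
    for i, row in enumerate(picture):
        word.extend(row if i % 2 == 0 else reversed(row))
    return "".join(word)

picture = ["abc",
           "def",
           "ghi"]
print(boustrophedon_scan(picture))  # abcfedghi
```

Running an ordinary finite automaton over such a linearization is one intuitive way to see how picture languages relate to string languages.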
The first part of this thesis offers a theoretical foundation for the analysis of Tolkien's texts. Each of the three fields of interest, nostalgia, utopia, and the pastoral tradition, is introduced in a separate chapter. Special attention is given to the interrelations of the three fields. Their history, meaning, and functions are briefly elaborated, and definitions applicable to their occurrences in fantasy texts are reached. In doing so, new categories and terms are proposed that enable a detailed analysis of the nostalgic, pastoral, and utopian properties of Tolkien's works. As nostalgia and utopia are important ingredients of pastoral writing, they are each introduced first and are finally related to a definition of the pastoral. The main part of this thesis applies the definitions and insights reached in the theoretical chapters to Tolkien's The Lord of the Rings and The Hobbit. This part is divided into three main sections. Again, the order of the chapters follows the line of argumentation. The first section contains the analysis of pastoral depictions in the two texts. Given the separation of the pastoral into different categories, which were outlined in the theoretical part, the chapters examine bucolic and georgic pastoral creatures and landscapes before turning to non-pastoral depictions, which are sub-divided into the antipastoral and the unpastoral. A separate chapter looks at the positions and functions of the bucolic and georgic pastoral in the primary texts. This analysis is followed by a chapter on men's special position in Tolkien's mythology, as their depiction reveals their potential to be both pastoral and antipastoral. The second section of the analytical part is concerned with the role of nostalgia within pastoral culture. The focus is laid on the meaning and function of the different kinds of nostalgia, defined in the theoretical part, that are detectable in bucolic and georgic pastoral cultures.
Finally, the analysis turns to the utopian potential of Tolkien's mythology. Again, the focus lies on the pastoral and non-pastoral creatures. Their utopian and dystopian visions are presented and contrasted. This way, different kinds of utopian vision are detected and set in relation to the overall dystopian fate of Tolkien's fictional universe. Drawing on the results of this thesis and on Terry Gifford's ecocritical work, the final chapter argues that Tolkien's texts can be defined as modern pastorals. The connection between Tolkien's work and pastoral literature made explicit in the analysis is thus cemented in generic terms. The conclusion presents a summary of the central findings of this thesis and introduces questions for further study.
A phenomenon of recent decades is that digital marketplaces on the Internet are establishing themselves for a wide variety of products and services. Recently, it has become possible for private individuals to invest in young and innovative companies (so-called "start-ups"). Via Internet portals, potential investors can examine various start-ups and then directly invest in their chosen start-up. In return, investors receive a share in the firm's profit, while companies can use the raised capital to finance their projects. This new way of financing is called "Equity Crowdfunding" (ECF) or "Crowdinvesting". The aim of this dissertation is to provide empirical findings about the characteristics of ECF. In particular, the question of whether ECF is able to overcome geographic barriers, the interdependence of ECF and capital structure, and the risk of failure for funded start-ups and their chances of receiving follow-up funding by venture capitalists or business angels will be analyzed. The results of the first part of this dissertation show that investors in ECF prefer local companies. In particular, investors who invest larger amounts have a stronger tendency to invest in local start-ups. The second part of the dissertation provides first indications of the interdependencies between capital structure and ECF. The analysis makes clear that the capital structure is not a determinant for undertaking an ECF campaign. The third part of the dissertation analyzes the success of companies financed by ECF in a country comparison. The results show that after a successful ECF campaign German companies have a higher chance of receiving follow-up funding by venture capitalists compared to British companies. The probability of survival, however, is slightly lower for German companies. The results provide relevant implications for theory and practice.
The existing literature in the area of entrepreneurial finance will be extended by insights into investor behavior, additions to the capital structure theory and a country comparison in ECF. In addition, implications are provided for various actors in practice.
Long-Term Memory Updating: The Reset-of-Encoding Hypothesis in List-Method Directed Forgetting
(2017)
People's memory for new information can be enhanced by cuing them to forget older information, as is shown in list-method directed forgetting (LMDF). In this task, people are cued to forget a previously studied list of items (list 1) and to learn a new list of items (list 2) instead. Such cuing typically enhances memory for the list 2 items and reduces memory for the list 1 items, which reflects effective long-term memory updating. This review focuses on the reset-of-encoding (ROE) hypothesis as a theoretical explanation of the list 2 enhancement effect in LMDF. The ROE hypothesis is based on the finding that encoding efficacy typically decreases with the number of encoded items and assumes that providing a forget cue after study of some items (e.g., list 1) resets the encoding process and makes encoding of subsequent items (e.g., early list 2 items) as effective as encoding of previously studied (e.g., early list 1) items. The review provides an overview of current evidence for the ROE hypothesis, arising from recent behavioral, neuroscientific, and modeling studies that examined LMDF at both the item and the list level. The findings support the view that ROE plays a critical role in the list 2 enhancement effect in LMDF. Alternative explanations of the effect and the generalizability of ROE to other experimental tasks are discussed.
Background: We evaluated depression and social isolation assessed at the time of waitlisting as predictors of survival in heart transplant (HTx) recipients. Methods and Results: Between 2005 and 2006, 318 adult HTx candidates were enrolled in the Waiting for a New Heart Study, and 164 received transplantation. Patients were followed until February 2013. Psychosocial characteristics were assessed by questionnaires. Eurotransplant provided medical data at waitlisting, transplantation dates, and donor characteristics; hospitals reported medical data at HTx and date of death after HTx. During a median follow-up of 70 months (<1-93 months post-HTx), 56 (38%) of 148 transplanted patients with complete data died. Depression scores were unrelated to social isolation, and neither correlated with disease severity. Higher depression scores increased the risk of dying (hazard ratio = 1.07; 95% confidence interval, 1.01-1.15; P=0.032), which was moderated by social isolation scores (significant interaction term; hazard ratio = 0.985; 95% confidence interval, 0.973-0.998; P=0.022). These findings were maintained in multivariate models controlling for covariates (P values 0.020-0.039). Actuarial 1-year/5-year survival was best for patients with low depression who were not socially isolated at waitlisting (86% after 1 year, 79% after 5 years). Survival of those who were either depressed, or socially isolated, or both was lower, especially 5 years post-transplant (56%, 60%, and 62%, respectively). Conclusions: Low depression in conjunction with social integration at the time of waitlisting is related to enhanced chances of survival after HTx. Both factors should be considered for inclusion in standardized assessments and interventions for HTx candidates.
Entrepreneurship is a process of discovering and exploiting opportunities, during which two crucial milestones emerge: in the very beginning when entrepreneurs start their businesses, and in the end when they determine the future of the business. This dissertation examines the establishment and exit of newly created as well as of acquired firms, in particular the behavior and performance of entrepreneurs at these two important stages of entrepreneurship. The first part of the dissertation investigates the impact of characteristics at the individual and at the firm level on an entrepreneur's selection of entry modes across new venture start-up and business takeover. The second part of the dissertation compares firm performance across different entrepreneurship entry modes and then examines management succession issues that family firm owners have to confront. This study has four main findings. First, previous work experience in small firms, same sector experience, and management experience affect an entrepreneur's choice of entry modes. Second, the choice of entry mode for hybrid entrepreneurs is associated with their characteristics, such as occupational experience, level of education, and gender, as well as with the characteristics of their firms, such as location. Third, business takeovers survive longer than new venture start-ups, and both entry modes have different survival determinants. Fourth, the family firm's decision of recruiting a family or a nonfamily manager is not only determined by a manager's abilities, but also by the relationship between the firm's economic and non-economic goals and the measurability of these goals. The findings of this study extend our knowledge on entrepreneurship entry modes by showing that new venture start-ups and business takeovers are two distinct entrepreneurship entry modes in terms of their founders' profiles, their survival rates and survival determinants.
Moreover, this study contributes to the literature on top management hiring in family firms: it establishes family firms' non-economic goals as another factor that impacts the family firm's hiring decision between a family and a nonfamily manager.
Why do some people become entrepreneurs while others stay in paid employment? Searching for a distinctive set of entrepreneurial skills that matches the profile of the entrepreneurial task, Lazear introduced a theoretical model featuring skill variety for entrepreneurs. He argues that because entrepreneurs perform many different tasks, they should be multi-skilled in various areas. First, this dissertation provides the reader with an overview of previous relevant research results on skill variety with regard to entrepreneurship. The majority of the studies discussed focus on the effects of skill variety. Most studies come to the conclusion that skill variety mainly affects the decision to become self-employed; skill variety also favors entrepreneurial intentions. Less clear are the results with regard to the influence of skill variety on entrepreneurial success: when success is measured by income and the survival of the company, a negative or U-shaped correlation is found. The empirical part of this dissertation tackles three research goals. First, it investigates whether a variety of early interests and activities in adolescence predicts subsequent variety in skills and knowledge. Second, the determinants of skill variety and of the variety of early interests and activities are investigated. Third, skill variety is tested as a mediator of the gender gap in entrepreneurial intentions. This dissertation employs structural equation modeling (SEM) using longitudinal data collected over ten years from Finnish secondary school students aged 16 to 26. As an indicator of skill variety, the number of functional areas in which a participant had prior educational or work experience is used. The results of the study suggest that a variety of early interests and activities leads to skill variety, which in turn leads to entrepreneurial intentions.
Furthermore, the study shows that an early variety is predicted by openness and an entrepreneurial personality profile. Skill variety is also encouraged by an entrepreneurial personality profile. From a gender perspective, there is indeed a gap in entrepreneurial intentions. While a positive correlation has been found between the early variety of subjects and being female, there are negative correlations between the other two variables, education- and work-related skill variety, and being female. The negative effect of work-related skill variety is the strongest. The results of this dissertation are relevant for research, politics, educational institutions and special entrepreneurship education programs. The results are also important for self-employed parents who plan the succession of the family business. Educational programs promoting entrepreneurship can be optimized on the basis of the results of this dissertation by making the transmission of a variety of skills a central goal. A focus on teenagers, as well as a preselection based on the personality profile of the participants, could also increase their success. Regarding the gender gap, state policies should aim to provide women with more incentives to acquire skill variety. For this purpose, education programs can be tailored specifically to women, and self-employment can be presented as an attractive alternative to dependent employment.
This study aims to estimate the cotton yield at the field and regional level via the APSIM/OZCOT crop model, using an optimization-based recalibration approach based on a state variable of the cotton canopy, the leaf area index (LAI), derived from atmospherically corrected Landsat-8 OLI remote sensing images in 2014. First, local and global sensitivity analyses were employed to test the sensitivity of cultivar, soil and agronomic parameters with respect to the dynamics of the LAI. After these sensitivity analyses, a set of sensitive parameters was obtained. Then, the APSIM/OZCOT crop model was calibrated against observations over a two-year span (2006-2007) at the Aksu station, combined with these sensitive cultivar parameters and the current understanding of cotton cultivar parameters. Third, the relationship between the observed in-situ LAI and synchronous perpendicular vegetation indices derived from six Landsat-8 OLI images covering the entire growth stage was modelled to generate LAI maps in time and space. Finally, Particle Swarm Optimization (PSO) and a general-purpose optimization approach (based on the Nelder-Mead algorithm) were used to recalibrate four sensitive agronomic parameters (row spacing, sowing density per row, irrigation amount and total fertilization) by minimizing the root-mean-square error (RMSE) between the LAI simulated by the APSIM/OZCOT model and the LAI retrieved from the Landsat-8 OLI remote sensing images. After this recalibration, the best agreement between simulated and observed cotton yield was obtained. The results showed that: (1) FRUDD, FLAI and DDISQ were the major cultivar parameters suitable for calibrating the cotton cultivar. (2) After the calibration, the simulated LAI performed well, with an RMSE and mean absolute error (MAE) of 0.45 and 0.33, respectively, in 2006 and 0.46 and 0.41, respectively, in 2007.
The coefficient of determination between the observed and simulated LAI was 0.83 and 0.97 in 2006 and 2007, respectively. The Pearson correlation coefficient was 0.913 and 0.988 in 2006 and 2007, respectively, indicating a significant positive correlation between the simulated and observed LAI. The difference between the observed and simulated yield was 776.72 kg/ha and 259.98 kg/ha in 2006 and 2007, respectively. (3) The cotton cultivation area in 2014 was obtained using three Landsat-8 OLI images, DOY 136 (May), DOY 168 (June) and DOY 200 (July), based on the phenological differences between cotton and other vegetation types. (4) The yield estimation after the assimilation closely approximated the field-observed values, and the coefficient of determination was as high as 0.82 after recalibration of the APSIM/OZCOT model for ten cotton fields. The difference between the observed and assimilated yields for the ten fields ranged from 18.2 to 939.7 kg/ha. The RMSE and MAE between the assimilated and observed yield were 417.5 and 303.1 kg/ha, respectively. These findings provide scientific evidence for the feasibility of coupling remote sensing with the APSIM/OZCOT model at the field level. (5) When upscaling from the field level to the regional level, the assimilation algorithm and scheme are both especially important. Although the PSO method is very effective, its computational cost is the shortcoming of this assimilation strategy on a regional scale. The PSO and general-purpose optimization methods (based on the Nelder-Mead algorithm) were compared in terms of RMSE, LAI curves and computational time. The general-purpose optimization method was then used for the regional assimilation between remote sensing and the APSIM/OZCOT model. Meanwhile, the basic unit for regional assimilation was determined to be the cotton field rather than the pixel.
Moreover, the crop growth simulation was also divided into two phases (vegetative growth and reproductive growth) for regional assimilation. (6) The regional assimilation at the vegetative growth stage between the remote sensing-derived and APSIM/OZCOT model-simulated LAI was implemented by adjusting two parameters: row spacing and sowing density per row. The results showed that the sowing density of cotton was higher in the southern part than in the northern part of the study area. The spatial pattern of cotton density was also consistent with the reclamation history from 2001 to 2013: cotton fields from early reclamation were mainly located in the southern part, while recently reclaimed fields were located in the northern part. Poor soil quality, a lack of irrigation facilities and the woodland belts of cotton fields in the northern part caused the low density of cotton there. Regarding row spacing, values in the northern part were larger than in the southern part due to the differing agronomic modes of military and private companies. (7) The irrigation and fertilization amounts were both used as key parameters to be adjusted in the regional assimilation during the reproductive growth period. The results showed that the irrigation per application ranged from 58.14 to 89.99 mm in the study area. The spatial distribution of the irrigation amount was higher in the northern part and lower in the southern part of the study area. The application of urea fertilization ranged from 500.35 to 1598.59 kg/ha. The spatial distribution of fertilization was lower in the northern part and higher in the southern part; more fertilizer was applied in the southern study area to increase boll weight and number in pursuit of higher cotton yields. The frequency of the RMSE during the second assimilation was mainly in the range of 0.4-0.6 m²/m². The estimated cotton yield ranged from 1489 to 8895 kg/ha.
The spatial distribution of the estimated yield was also higher in the southern part than in the northern part of the study area.
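The recalibration step described above reduces to minimizing an RMSE objective over a handful of agronomic parameters. The sketch below illustrates that idea only: the `toy_model` function is an invented surrogate for a full APSIM/OZCOT run, the parameter ranges are hypothetical, and a brute-force random search stands in for the PSO and Nelder-Mead optimizers actually used in the study.

```python
import math
import random

def rmse(sim, obs):
    """Root-mean-square error between simulated and observed LAI series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(sim))

def toy_model(row_spacing, density):
    """Invented stand-in for the crop model: maps two agronomic
    parameters to a 7-step LAI curve."""
    return [density * t / (row_spacing + t) for t in range(1, 8)]

# Pretend this LAI series was retrieved from Landsat-8 OLI imagery.
observed_lai = toy_model(0.8, 12.0)

# Crude random search over a hypothetical parameter box; the study used
# PSO and the Nelder-Mead simplex instead of this brute-force loop.
random.seed(0)
best = (float("inf"), None)
for _ in range(2000):
    rs = random.uniform(0.4, 1.2)   # row spacing, m (assumed range)
    d = random.uniform(5.0, 20.0)   # sowing density per row (assumed range)
    err = rmse(toy_model(rs, d), observed_lai)
    if err < best[0]:
        best = (err, (rs, d))

print(round(best[0], 3), tuple(round(p, 2) for p in best[1]))
```

The recovered parameters land close to the values that generated the "observed" curve, which is the essence of the assimilation scheme regardless of which optimizer drives the search.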
Background: Psychotherapy is successful for the majority of patients, but not for every patient. Hence, further knowledge is needed on how treatments should be adapted for those who do not profit or who deteriorate. In recent years, prediction tools as well as feedback interventions have been part of a trend toward more personalized approaches in psychotherapy. Research on psychometric prediction and feedback into ongoing treatment has the potential to enhance treatment outcomes, especially for patients with an increased risk of treatment failure or drop-out. Methods/design: The research project investigates, in a randomized controlled trial, the effectiveness as well as moderating and mediating factors of psychometric feedback to therapists. In the intended study, a total of 423 patients who applied for cognitive-behavioral therapy at the psychotherapy clinic of the University of Trier and suffer from a depressive and/or an anxiety disorder (SCID interviews) will be included. The patients will be randomly assigned to a therapist as well as to one of two groups (CG, IG2). An additional intervention group (IG1) will be generated from an existing archival data set via propensity score matching. Patients of the control group (CG; n = 85) will be monitored concerning psychological impairment, but therapists will not be provided with any feedback about the patients' assessments. In both intervention groups (IG1: n = 169; IG2: n = 169), the therapists are provided with feedback about the patients' self-evaluations in a computerized feedback portal. Therapists of the IG2 will additionally be provided with clinical support tools, which will be developed in this project on the basis of existing systems. Therapists will also be provided with a personalized treatment recommendation based on similar patients (Nearest Neighbors) at the beginning of treatment.
Besides the general effectiveness of feedback and the clinical support tools for negatively developing patients, further mediating and moderating variables on this feedback effect will be examined: treatment length, frequency of feedback use, therapist effects, therapists' experience, attitude towards feedback, as well as congruence of therapist and patient evaluations of the progress. Additional procedures will be implemented to assess treatment adherence as well as the reliability of diagnoses and to include them in the analyses. Discussion: The current trial tests a comprehensive feedback system which combines precision mental health predictions with routine outcome monitoring and feedback tools in routine outpatient psychotherapy. It adds to previous feedback research a stricter design by investigating an additional repeated-measurement CG as well as a stricter control of treatment integrity. It also includes a structured clinical interview (SCID) and controls for comorbidity (within depression and anxiety). This study also investigates moderators (attitudes towards and use of the feedback system, diagnoses) and mediators (therapists' awareness of negative change and treatment length) in one study.
This paper describes the concept of the hyperspectral Earth-observing thermal infrared (TIR) satellite mission HiTeSEM (High-resolution Temperature and Spectral Emissivity Mapping). The scientific goal is to measure specific key variables from the biosphere, hydrosphere, pedosphere, and geosphere related to two global problems of significant societal relevance: food security and human health. The key variables comprise land and sea surface radiation temperature and emissivity, surface moisture, thermal inertia, evapotranspiration, soil minerals and grain size components, soil organic carbon, plant physiological variables, and heat fluxes. The retrieval of this information requires a TIR imaging system with adequate spatial and spectral resolutions and with day–night observation capability. Another challenge is the monitoring of temporally highly dynamic features like energy fluxes, which require an adequate revisit time. The suggested solution is a sensor pointing concept that allows high revisit times for selected target regions (1–5 days at off-nadir). At the same time, global observations in the nadir direction are guaranteed with a lower temporal repeat cycle (>1 month). To account for the demand for high spatial resolution for complex targets, it is suggested to combine in one optic (1) a hyperspectral TIR system with ~75 bands at 7.2–12.5 µm (instrument NEDT 0.05–0.1 K) and a ground sampling distance (GSD) of 60 m, and (2) a panchromatic high-resolution TIR imager with two channels (8.0–10.25 µm and 10.25–12.5 µm) and a GSD of 20 m. The identified science case requires a good correlation of the instrument orbit with Sentinel-2 (maximum delay of 1–3 days) to combine data from the visible and near infrared (VNIR), the shortwave infrared (SWIR) and TIR spectral regions and to refine parameter retrieval.
Dry tropical forests undergo massive conversion and degradation processes. This also holds true for the extensive Miombo forests that cover large parts of Southern Africa. While the largest proportional area can be found in Angola, the country still struggles with food shortages, insufficient medical and educational supplies, as well as the ongoing reconstruction of infrastructure after 27 years of civil war. Especially in rural areas, the local population is therefore still heavily dependent on the consumption of natural resources, as well as subsistence agriculture. This leads, on the one hand, to large areas of Miombo forests being converted for cultivation purposes, but on the other hand, to degradation processes due to the selective use of forest resources. While forest conversion in south-central rural Angola has already been quantitatively described, information about forest degradation is not yet available. This is due to the history of conflicts and the research difficulties connected therewith, as well as the remote location of this area. We apply an annual time series approach using Landsat data in south-central Angola not only to assess the current degradation status of the Miombo forests, but also to derive past developments reaching back to times of armed conflicts. We use the Disturbance Index based on tasseled cap transformation to exclude external influences like inter-annual variation of rainfall. Based on this time series, linear regression is calculated for forest areas unaffected by conversion, but also for the pre-conversion period of those areas that were used for cultivation purposes during the observation time. Metrics derived from the linear regression are used to classify the study area according to its dominant modification processes. We compare our results to MODIS latent integral trends and to further products to derive information on underlying drivers.
Around 13% of the Miombo forests are affected by degradation processes, especially along roads, in villages, and close to existing agriculture. However, areas in presumably remote and dense forest are also affected to a significant extent. A comparison with MODIS-derived fire ignition data shows that these areas are most likely affected by recurring fires and less by selective timber extraction. We confirm that areas that are used for agriculture are more heavily disturbed by selective use beforehand than those that remain unaffected by conversion. The results can be substantiated by the MODIS latent integral trends, and we also show that, due to extent and location, the assessment of forest conversion alone is most likely not sufficient to provide good estimates of the loss of natural resources.
Numerous RCTs demonstrate that cognitive behavioral therapy (CBT) for depression is effective. However, these findings are not necessarily representative of CBT under routine care conditions. Routine care studies are not usually subjected to comparable standardizations, e.g. often therapists may not follow treatment manuals and patients are less homogeneous with regard to their diagnoses and sociodemographic variables. Results on the transferability of findings from clinical trials to routine care are sparse and point in different directions. As RCT samples are selective due to a stringent application of inclusion/exclusion criteria, comparisons between routine care and clinical trials must be based on a consistent analytic strategy. The present work demonstrates the merits of propensity score matching (PSM), which offers solutions to reduce bias by balancing two samples based on a range of pretreatment differences. The objective of this dissertation is the investigation of the transferability of findings from RCTs to routine care settings.
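The matching step behind PSM can be sketched in a few lines, assuming propensity scores have already been estimated (in practice via a logistic regression on the pretreatment covariates): each trial patient is paired with the nearest routine-care patient whose score falls within a caliper, and unmatched cases are dropped. All identifiers, scores, and the caliper value below are illustrative, not from the dissertation.

```python
def match_by_propensity(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    treated, controls: lists of (id, score) pairs.
    Returns a list of (treated_id, control_id) matches; each control
    is used at most once, and pairs farther apart than the caliper
    are rejected.
    """
    pairs = []
    available = dict(controls)  # id -> score
    for tid, t_score in treated:
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[cid] - t_score) <= caliper:
            pairs.append((tid, cid))
            del available[cid]
    return pairs

treated = [("t1", 0.31), ("t2", 0.62), ("t3", 0.90)]
controls = [("c1", 0.30), ("c2", 0.59), ("c3", 0.33), ("c4", 0.10)]
print(match_by_propensity(treated, controls))
# [('t1', 'c1'), ('t2', 'c2')] -- t3 (0.90) has no control within the caliper
```

Rejecting pairs outside the caliper is what keeps the two balanced samples comparable; it mirrors how PSM reduces bias before routine-care and RCT outcomes are contrasted.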
Avoiding aerial microfibre contamination of environmental samples is essential for reliable analyses when it comes to the detection of ubiquitous microplastics. Almost all laboratories have contamination problems, which are largely unavoidable without investments in clean-air devices. Therefore, our study supplies an approach to assess background microfibre contamination of samples in the laboratory under particle-free air conditions. We tested aerial contamination of samples indoors, in a mobile laboratory, within a laboratory fume hood, and on a clean bench with particle filtration while examining a fish. The clean bench used reduced aerial microfibre contamination in our laboratory by 96.5%. This highlights the value of suitable clean-air devices for valid microplastic pollution data. Our results indicate that pollution levels by microfibres have been overestimated and that actual pollution levels may be many times lower. Accordingly, such clean-air devices are recommended for microplastic laboratory applications in future research work to significantly lower error rates.
Global human population growth is associated with many problems, such as food and water provision, political conflicts, the spread of diseases, and environmental destruction. The mitigation of these problems is mirrored in several global conventions and programs, some of which, however, are conflicting. Here, we discuss the conflicts between biodiversity conservation and disease eradication. Numerous health programs aim at eradicating pathogens, and many focus on the eradication of vectors, such as mosquitoes or other parasites. As a case study, we focus on the "Pan African Tsetse and Trypanosomiasis Eradication Campaign," which aims at eradicating a pathogen (Trypanosoma) as well as its vector, the entire group of tsetse flies (Glossinidae). As the distribution of tsetse flies largely overlaps with the African hotspots of freshwater biodiversity, we argue for a strong consideration of environmental issues when applying vector control measures, especially the aerial application of insecticides. Furthermore, we want to stimulate discussions on the value of species and whether full eradication of a pathogen or vector is justified at all. Finally, we call for a stronger harmonization of international conventions. Proper environmental impact assessments need to be conducted before control or eradication programs are carried out to minimize negative effects on biodiversity.
Flexibility and spatial mobility of labour are central characteristics of modern societies which contribute not only to higher overall economic growth but also to a reduction of interregional employment disparities. For these reasons, there is the political will in many countries to expand labour market areas, resulting especially in an overall increase in commuting. The picture of the various, unintended long-term consequences of commuting on individuals is, however, relatively unclear. Therefore, in recent years, the journey to work has gained high attention, especially in the study of health and well-being. Empirical analyses based on longitudinal as well as European data on how commuting may affect health and well-being are nevertheless rare. The principal aim of this thesis is, thus, to address this question with regard to Germany using data from the Socio-Economic Panel. Chapter 2 empirically investigates the causal impact of commuting on absence from work due to sickness-related reasons. Whereas an exogenous change in commuting distance does not affect the number of absence days of those individuals who commute short distances to work, it increases the number of absence days of those employees who commute middle (25–49 kilometres) or long distances (50 kilometres and more). Moreover, our results highlight that commuting may deteriorate an individual's health. However, this effect is not sufficient to explain the observed impact of commuting on absence from work. Chapter 3 explores the relationship between commuting distance and height-adjusted weight and sheds some light on the mechanisms through which commuting might affect individual body weight. We find no evidence that commuting leads to excess weight. Compensating health behaviour of commuters, especially healthy dietary habits, could explain the non-relationship of commuting and height-adjusted weight.
In Chapter 4, a multivariate probit approach is used to estimate recursive systems of equations for commuting and health-related behaviours. Controlling for potential endogeneity of commuting, the results show that long distance commutes significantly decrease the propensity to engage in health-related activities. Furthermore, unobservable individual heterogeneity can influence both the decision to commute and healthy lifestyle choices. Chapter 5 investigates the relationship between commuting and several cognitive and affective components of subjective well-being. The results suggest that commuting is related to lower levels of satisfaction with family life and leisure time which can largely be ascribed to changes in daily time use patterns, influenced by the work commute.
In recent decades, the Arctic has been undergoing a wide range of fast environmental changes. The sea ice covering the Arctic Ocean not only reacts rapidly to these changes, but also influences and alters the physical properties of the atmospheric boundary layer and the underlying ocean on various scales. In that regard, polynyas, i.e. regions of open water and thin ice within the closed pack ice, play a key role as regions of enhanced atmosphere-ice-ocean interactions and extensive new ice formation during winter. A precise long-term monitoring and increased efforts to employ long-term and high-resolution satellite data are therefore of high interest for the polar scientific community. The retrieval of thin-ice thickness (TIT) fields from thermal infrared satellite data and atmospheric reanalysis, utilizing a one-dimensional energy balance model, allows for the estimation of the heat loss to the atmosphere and hence, ice-production rates. However, an extended application of this approach is inherently connected with severe challenges that originate predominantly from the disturbing influence of clouds and necessary simplifications in the model set-up, which all need to be carefully considered and compensated for. The presented thesis addresses these challenges and demonstrates the applicability of thermal infrared TIT distributions for long-term polynya monitoring, as well as an accurate estimation of ice production in Arctic polynyas at a relatively high spatial resolution. Being written in a cumulative style, the thesis is subdivided into three parts that show the consequent evolution and improvement of the TIT retrieval, based on two regional studies (Storfjorden and North Water (NOW) polynya) and a final large-scale, pan-Arctic study.
The first study on the Storfjorden polynya, situated in the Svalbard archipelago, represents the first long-term investigation on spatial and temporal polynya characteristics that is solely based on daily TIT fields derived from MODIS thermal infrared satellite data and ECMWF ERA-Interim atmospheric reanalysis data. Typical quantities such as polynya area (POLA), the TIT distribution, frequencies of polynya events as well as the total ice production are derived and compared to previous remote sensing and modeling studies. The study includes a first basic approach that aims for a compensation of cloud-induced gaps in daily TIT composites. This coverage-correction (CC) is a mathematically simple upscaling procedure that depends solely on the daily percentage of available MODIS coverage and yields daily POLA with an error-margin of 5 to 6 %. The NOW polynya in northern Baffin Bay is the main focus region of the second study, which follows two main goals. First, a new statistics-based cloud interpolation scheme (Spatial Feature Reconstruction - SFR) as well as additional cloud-screening procedures are successfully adapted and implemented in the TIT retrieval for usage in Arctic polynya regions. For a 13-yr period, results on polynya characteristics are compared to the CC approach. Furthermore, an investigation on highly variable ice-bridge dynamics in Nares Strait is presented. Second, an analysis of decadal changes of the NOW polynya is carried out, as the additional use of a suite of passive microwave sensors leads to an extended record of 37 consecutive winter seasons, thereby enabling detailed inter-sensor comparisons. In the final study, the SFR-interpolated daily TIT composites are used to infer spatial and temporal characteristics of 17 circumpolar polynya regions in the Arctic for 2002/2003 to 2014/2015. 
All polynya regions combined cover an average thin-ice area of 226.6 ± 36.1 × 10³ km² during winter (November to March) and yield an average total wintertime accumulated ice production of about 1811 ± 293 km³. Regional differences in derived ice production trends are noticeable. The Laptev Sea on the Siberian shelf is presented as a focus region, as frequently appearing polynyas along the fast-ice edge promote high rates of new ice production. New affirming results on a distinct relation to sea-ice area export rates and hence, the Transpolar Drift, are shown. This new high-resolution pan-Arctic data set can be further utilized and built upon in a variety of atmospheric and oceanographic applications, while still offering room for further improvements such as incorporating high-resolution atmospheric data sets and an optimized lead-detection.
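The coverage-correction (CC) used in the first of the three studies, an upscaling that depends only on the daily share of cloud-free MODIS coverage, can be sketched in a few lines. The sketch assumes thin ice is distributed similarly in the cloud-hidden part of the scene, which is what makes the simple rescaling defensible; the numbers below are invented, not taken from the thesis.

```python
def coverage_corrected_pola(observed_area_km2, coverage_fraction):
    """Upscale the polynya area (POLA) visible in the cloud-free share
    of a daily composite by the inverse of that share."""
    if not 0.0 < coverage_fraction <= 1.0:
        raise ValueError("coverage fraction must be in (0, 1]")
    return observed_area_km2 / coverage_fraction

# 850 km2 of thin ice seen, but clouds hid 30% of the scene:
print(coverage_corrected_pola(850.0, 0.7))  # ~1214.3 km2
```

The statistics-based Spatial Feature Reconstruction introduced in the second study replaces this global rescaling with an actual spatial interpolation of the cloud gaps, which is why it supersedes the CC approach for the pan-Arctic retrieval.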
Determining the exact position of a forest inventory plot—and hence the position of the sampled trees—is often hampered by poor Global Navigation Satellite System (GNSS) signal quality beneath the forest canopy. Inaccurate geo-references hamper the performance of models that aim to retrieve useful information from spatially high-resolution remote sensing data (e.g., species classification or timber volume estimation). This restriction is even more severe on the level of individual trees. The objective of this study was to develop a post-processing strategy to improve the positional accuracy of GNSS-measured sample-plot centers and a method to automatically match trees within a terrestrial sample plot to aerially detected trees. We propose a new method which uses a random forest classifier to estimate the matching probability of each pair of terrestrial-reference and aerially detected trees, which provides the opportunity to assess the reliability of the results. We investigated 133 sample plots of the Third German National Forest Inventory (BWI, 2011–2012) within the German federal state of Rhineland-Palatinate. For training and objective validation, synthetic forest stands were modeled using the Waldplaner 2.0 software. Our method achieved an overall accuracy of 82.7% for co-registration and 89.1% for tree matching. With our method, 60% of the investigated plots could be successfully relocated. The probabilities provided by the algorithm are an objective indicator of the reliability of a specific result, which could be incorporated into quantitative models to increase the performance of forest attribute estimations.
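Once pairwise matching probabilities are available, turning them into a one-to-one tree assignment can be sketched as a greedy matching over the probability matrix. In the study those probabilities come from the random forest classifier over tree-pair features (e.g. position offset, height difference); here they are invented, and the greedy rule is only one plausible assignment strategy, not necessarily the one the authors used.

```python
def greedy_match(prob, threshold=0.5):
    """Assign terrestrial trees (rows) to aerial trees (columns) by
    descending matching probability; pairs below the threshold and
    already-used trees are skipped."""
    pairs = []
    used_t, used_a = set(), set()
    candidates = sorted(
        ((p, t, a) for t, row in enumerate(prob) for a, p in enumerate(row)),
        reverse=True,
    )
    for p, t, a in candidates:
        if p < threshold:
            break
        if t not in used_t and a not in used_a:
            pairs.append((t, a, p))
            used_t.add(t)
            used_a.add(a)
    return pairs

prob = [
    [0.91, 0.10, 0.05],  # terrestrial tree 0 vs aerial trees 0..2
    [0.20, 0.40, 0.85],
    [0.15, 0.30, 0.25],  # tree 2: no candidate above the threshold
]
print(greedy_match(prob))
# [(0, 0, 0.91), (1, 2, 0.85)]
```

Leaving low-probability trees unmatched is what lets the per-pair probabilities double as a reliability indicator for downstream forest attribute models.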
Academic self-concept (ASC) is comprised of individual perceptions of one's own academic ability. In a cross-sectional quasi-representative sample of 3,779 German elementary school children in grades 1 to 4, we investigated (a) the structure of ASC, (b) ASC profile formation, an aspect of differentiation that is reflected in lower correlations between domain-specific ASCs with increasing grade level, (c) the impact of (internal) dimensional comparisons of one's own ability in different school subjects for profile formation of ASC, and (d) the role played by differences in school grades between subjects for these dimensional comparisons. The nested Marsh/Shavelson model, with general ASC at the apex and math, writing, and reading ASC as specific factors nested under general ASC, fitted the data at all grade levels. A first-order factor model with math, writing, reading, and general ASCs as correlated factors provided a good fit, too. ASC profile formation became apparent during the first two to three years of school. Dimensional comparisons across subjects contributed to ASC profile formation. School grades enhanced these comparisons, especially when achievement profiles were uneven. In part, findings depended on the assumed structural model of ASCs. Implications for further research are discussed with special regard to factors influencing and moderating dimensional comparisons.
Dysfunctional eating behavior is a major risk factor for developing all sorts of eating disorders, and food craving is a concept that may help to better understand why and how these disorders become chronic conditions through non-homeostatically driven mechanisms. As obesity affects people worldwide, cultural differences must be acknowledged to apply proper therapeutic strategies. In this work, we adapted the Food Craving Inventory (FCI) to the German population. We performed a factor analysis of an adaptation of the original FCI in a sample of 326 men and women and could replicate the factor structure of the FCI in a German population. The factor extraction procedure produced a solution that reproduces the four factors described in the original inventory. Our instrument presents high internal consistency, as well as significant correlations with measures of convergent and discriminant validity. The FCI-Deutsch (FCI-DE) is a valid instrument to assess craving for particular foods in Germany, and it could therefore prove useful in clinical and research practice in the field of obesity and eating behaviors.
Earth observation (EO) is a prerequisite for sustainable land use management, and the open-data Landsat mission is at the forefront of this development. However, increasing data volumes have led to a "digital divide", and consequently, it is key to develop methods that take over the most data-intensive processing steps and generate and provide analysis-ready, standardized, higher-level (Level 2 and Level 3) baseline products for enhanced uptake in environmental monitoring systems. Accordingly, the overarching research task of this dissertation was to develop such a framework, with a special emphasis on the as yet under-researched drylands of Southern Africa. A fully automatic and memory-resident radiometric preprocessing streamline (Level 2) was implemented. The method was applied to the complete Angolan, Zambian, Zimbabwean, Botswanan, and Namibian Landsat record, amounting to 58,731 images with a total data volume of nearly 15 TB. Cloud/shadow detection capabilities were improved for drylands. An integrated correction of atmospheric, topographic and bidirectional effects was implemented, based on radiative transfer theory with corrections for multiple scattering and adjacency effects, and including a multilayered toolset for estimating aerosol optical depth over persistent dark targets or by falling back on a spatio-temporal climatology. Topographic and bidirectional effects were reduced with a semi-empirical C-correction and a global set of correction parameters, respectively. Gridding and reprojection were already included to facilitate easy and efficient further processing. The selection of phenologically similar observations is a key monitoring requirement for multi-temporal analyses, and hence the generation of Level 3 products that realize phenological normalization on the pixel level was pursued.
As a prerequisite, coarse-resolution Land Surface Phenology (LSP) was derived in a first step, then spatially refined by fusing it with a small number of Level 2 images. For this purpose, a novel data fusion technique was developed, wherein a focal-filter-based approach employs multi-scale and source prediction proxies. Phenologically normalized composites (Level 3) were generated by coupling the target day (i.e. the main compositing criterion) to the input LSP. The approach was demonstrated by generating peak-, end- and minimum-of-season composites, and by comparing these with static composites (fixed target day). It was shown that the phenological normalization accounts for terrain- and land cover class-induced LSP differences, and the use of Level 2 inputs enables a wide range of monitoring options, among them the detection of within-state processes like forest degradation. In summary, the developed preprocessing framework is capable of generating several analysis-ready baseline EO satellite products. These datasets can be used for regional case studies, but may also be directly integrated into more operational monitoring systems, e.g. in support of the Reducing Emissions from Deforestation and Forest Degradation (REDD) incentive. In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of Trier University's products or services. Internal or personal use of this material is permitted. If interested in reprinting/republishing IEEE copyrighted material for advertising or promotional purposes or for creating new collective works for resale or redistribution, please go to http://www.ieee.org/publications_standards/publications/rights/rights_link.html to learn how to obtain a License from RightsLink.
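The semi-empirical C-correction named in the preprocessing abstract follows the standard form ρ_corrected = ρ · (cos θ_z + c) / (cos i + c), where ρ is the observed reflectance, θ_z the solar zenith angle, i the local illumination angle, and c a band-specific, empirically fitted parameter. A minimal sketch with an invented c value:

```python
import math

def c_correction(rho, cos_i, solar_zenith_deg, c):
    """Semi-empirical C-correction for topographic normalization.

    rho: observed surface reflectance
    cos_i: cosine of the local illumination angle (from the DEM)
    solar_zenith_deg: solar zenith angle in degrees
    c: band-specific C parameter (value below is illustrative only)
    """
    cos_sz = math.cos(math.radians(solar_zenith_deg))
    return rho * (cos_sz + c) / (cos_i + c)

# A sunlit slope (cos_i > cos_sz) is darkened toward the flat-terrain value:
print(round(c_correction(0.30, 0.95, 40.0, 0.8), 4))  # ~0.2685
```

The additive c term damps the pure cosine correction, which would otherwise over-brighten weakly illuminated slopes; in the dissertation's framework c is fitted per band over the whole scene archive rather than chosen by hand.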
Monetary Policy During Times of Crisis - Frictions and Non-Linearities in the Transmission Mechanism
(2017)
For a long time it was believed that monetary policy would be able to maintain price stability and foster economic growth during all phases of the business cycle. The era of the Great Moderation, often also called the Volcker-Greenspan period, beginning in the mid-1980s, was characterized by a decline in the volatility of output growth and inflation among the industrialized countries. The term itself was first used by Stock and Watson (2003). Economists have long studied what triggered the decline in volatility and have pointed out several main factors. An important research strand points to structural changes in the economy, such as a decline of volatility in the goods-producing sector through better inventory controls and developments in the financial sector and government spending (McConnell2000, Blanchard2001, Stock2003, Kim2004, Davis2008). While many believed that monetary policy was merely 'lucky' in its reaction to inflation and exogenous shocks (Stock2003, Primiceri2005, Sims2006, Gambetti2008), others reveal a more complex picture. Rule-based monetary policy (Taylor1993) that incorporates inflation targeting (Svensson1999) has been identified as a major source of inflation stabilization by increasing transparency (Clarida2000, Davis2008, Benati2009, Coibion2011). Apart from that, the mechanics of monetary policy transmission have changed. Giannone et al. (2008) compare the pre-Great Moderation era with the Great Moderation and find that the economy's reaction to monetary shocks has decreased. This finding is supported by Boivin et al. (2011). Similarly, Herrera and Pesavento (2009) show that monetary policy during the Volcker-Greenspan period was very effective in dampening the effects of exogenous oil price shocks on the economy, while this cannot be found for the period thereafter. Yet, the subprime crisis unexpectedly hit economies worldwide and ended the era of the Great Moderation.
Financial deregulation and innovation have given banks opportunities for excessive risk taking, weakened financial stability (Crotty2009, Calomiris2009) and led to the build-up of credit-driven asset price bubbles (SchularickTaylor2012). The Federal Reserve (FED), which was thought to be the omnipotent conductor of price stability and economic growth during the Great Moderation, failed to prevent a harsh crisis. Moreover, it intensified the bubble with low interest rates following the Dotcom crisis of the early 2000s and misjudged the impact of its interventions (Taylor2009, Obstfeld2009). New results give a more detailed answer to the question of latitude for monetary policy raised by Bernanke and suggest the existence of non-linearities in the transmission of monetary policy. Weise (1999), Garcia and Schaller (2002), Lo and Piger (2005), Mishkin (2009), Neuenkirch (2013) and Jannsen et al. (2015) find that monetary policy is more potent during times of financial distress and recessions. Its effectiveness during 'normal times' is much weaker or even insignificant. This prompts the question of whether these non-linearities limit central banks' ability to lean against bubbles and financial imbalances (White2009, Walsh2009, Boivin2010, Mishkin2011).
This dissertation looked at both design-based and model-based estimation for rare and clustered populations using the idea of the ACS design. The ACS design (Thompson, 2012, p. 319) starts with an initial sample that is selected by a probability sampling method. If any of the selected units meets a pre-specified condition, its neighboring units are added to the sample and observed. If any of the added units meets the pre-specified condition, its neighboring units are in turn added to the sample and observed. The procedure continues until there are no more units that meet the pre-specified condition. In this dissertation, the pre-specified condition is the detection of at least one animal in a selected unit. In the design-based estimation, three estimators were proposed under three specific design settings. The first design was a stratified strip ACS design that is suitable for aerial or ship surveys. This was a case study in estimating population totals of African elephants. In this case, units/quadrats were observed only once during an aerial survey. The Des Raj estimator (Raj, 1956) was modified to obtain an unbiased estimate of the population total. The design was evaluated using simulated data with different levels of rarity and clusteredness. The design was also evaluated on real data of African elephants obtained from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013. In this study, the order in which the samples were observed was maintained. Re-ordering the samples by making use of Murthy's estimator (Murthy, 1957) can produce more efficient estimates, which is hence a possible extension of this study. The computational cost resulting from the n! permutations in Murthy's estimator, however, needs to be taken into consideration. The second setting was one where there exists an auxiliary variable that is negatively correlated with the study variable. Murthy's estimator (Murthy, 1964) was modified.
Situations in which the modified estimator is preferable were given both in theory and in simulations using simulated and two real data sets. The study variable for the real data sets was the distribution and counts of oryx and wildebeest, obtained from an aerial census that was conducted in parts of Kenya and Tanzania in October (dry season) 2013. Temperature was the auxiliary variable for both study variables. Temperature data was obtained from the R package raster. The modified estimator provided more efficient estimates with lower bias compared to the original Murthy's estimator (Murthy, 1964). The modified estimator was also more efficient than the modified HH and the modified HT estimators of Thompson (2012, p. 319). In this study, one auxiliary variable is considered. A fruitful area for future research would be to incorporate multi-auxiliary information at the estimation phase of an ACS design. This could, in principle, be done by using, for instance, a multivariate extension of the product estimator (Singh, 1967) or the generalized regression estimator (Särndal et al., 1992). The third case under design-based estimation studied the conjoint use of the stopping rule (Gattone and Di Battista, 2011) and the without-replacement-of-clusters technique (Dryver and Thompson, 2007). Each of these two methods was proposed to reduce the sampling cost, though the use of the stopping rule results in biased estimates. Despite this bias, the new estimator resulted in a higher efficiency gain in comparison to the without-replacement-of-clusters design. It was also more efficient than the stratified design, which is known to reduce the final sample size when networks are truncated at stratum boundaries. The above evaluation was based on simulated and real data. The real data was the distribution and counts of hartebeest, elephants and oryx obtained in the same census as above. The bias introduced by the stopping rule has not been evaluated analytically.
This may not be straightforward, since the truncated network formed depends on the initial unit sampled (Gattone et al., 2016a). This, and the order of the bias, however, deserve further investigation, as they may help in understanding the effect of the increase in the initial sample size, together with the population characteristics, on the efficiency of the proposed estimator. Chapter four modeled data that was obtained using the stratified strip ACS (as described in sub-section (3.1)). This extended the model of Rapley and Welsh (2008) by modeling data that was obtained from a different design, by introducing an auxiliary variable, and by using the without-replacement-of-clusters mechanism. Ideally, model-based estimation does not depend on the design, or rather on how the sample was obtained. This is, however, not the case if the design is informative, such as the ACS design. In this case, the procedure that was used to obtain the sample was incorporated in the model. Both model-based and design-based simulations were conducted using artificial and real data. The study and the auxiliary variables for the real data were the distribution and counts of elephants collected during aerial censuses in parts of Kenya and Tanzania in October (dry season) and April (wet season) 2013, respectively. Areas of possible future research include predicting the population total of African elephants in all parks in Kenya. This can be achieved in an economical and reliable way by using the theory of SAE. Chapter five compared the different proposed strategies using the elephant data. Again, the study variable was the elephant data from October (dry season) 2013 and the auxiliary variable was the elephant data from April (wet season) 2013. The results show that the choice of a particular strategy depends on the characteristics of the population under study and on the level and direction of the correlation between the study and the auxiliary variable (if present).
One general area of the ACS design that still lags behind is the implementation of the design in the field, especially for animal populations. This is partly attributable to the challenges associated with field implementation, some of which were discussed in section 2.3. Green et al. (2010), however, provide new insights into undertaking the ACS design during an aerial survey, such as how the aircraft should turn while surveying neighboring units. A key point throughout the dissertation is the reduction of cost during a survey, which can be seen in the reduction of the number of units in the final sample (through the use of the stopping rule, the use of stratification and the truncation of networks at stratum boundaries) and in ensuring that units are observed only once (by using the without-replacement-of-clusters sampling technique). The cost of surveying edge units is assumed to be low, in which case the efficiency of the ACS design relative to the non-adaptive design is achieved (Thompson and Collins, 2002). This is, however, not the case in aerial surveys, as the aircraft flies at constant speed and height (Norton-Griffiths, 1978). Hence the cost of surveying an edge unit is the same as the cost of surveying a unit that meets the condition of interest. The without-replacement-of-clusters technique plays a greater role in reducing the cost of sampling in such surveys. Other key points that motivated the sections in the dissertation include gains in efficiency (in all sections) and the practicability of the designs in the specific setting. Even though the dissertation focused on animal populations, the methods can equally be applied to any population that is rare and clustered, such as in studies of forestry, plants, pollution, minerals and so on.
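The adaptive procedure described at the start of this abstract can be sketched in a few lines. This is a minimal illustration on a rectangular grid with 4-neighbourhoods, assuming the condition of interest is the detection of at least one animal (count > 0); it only performs the sample expansion, not the estimation.

```python
from collections import deque

def adaptive_cluster_sample(grid, initial_units):
    """Expand an initial sample of grid units adaptively.

    grid: 2D list of animal counts per unit.
    initial_units: list of (row, col) units selected by a probability design.
    Whenever a sampled unit meets the condition (count > 0), its four
    neighbours are added; the process repeats until no added unit
    meets the condition.
    """
    rows, cols = len(grid), len(grid[0])
    sampled = set()
    queue = deque(initial_units)
    while queue:
        r, c = queue.popleft()
        if (r, c) in sampled or not (0 <= r < rows and 0 <= c < cols):
            continue
        sampled.add((r, c))
        if grid[r][c] > 0:  # condition met: adaptively add the neighbourhood
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                queue.append((r + dr, c + dc))
    return sampled
```

An initial unit that hits a cluster pulls in the whole network plus its edge units, while an initial unit with zero count stays a singleton, which is exactly why the final sample size (and hence cost) of ACS is random.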
The development of our society has contributed to the increased occurrence of emerging substances (pesticides, pharmaceuticals, personal care products, etc.) in wastewater. Because of their potential hazard to ecosystems and humans, Wastewater Treatment Plants (WWTPs) need to adapt to better remove these compounds. Technology or policy development should, however, comply with sustainable development, e.g. based on Life Cycle Assessment (LCA) metrics. Nevertheless, the reliability or consistency of LCA results can sometimes be debatable. The main objective of this work was to explore how LCA can better support the implementation of innovative wastewater treatment options, in particular by including removal benefits. The method was applied to support solutions for pharmaceuticals elimination from wastewater regarding: (i) UV technology design, (ii) the choice of advanced technology and (iii) centralized or decentralized treatment policy. The assessment approach followed by previous authors, based on net impact calculation, seemed very promising for considering both the environmental effects induced by treatment plant operation and the environmental benefits obtained from pollutant removal. It was therefore applied to compare UV configuration types. LCA outcomes were consistent with the degradation kinetics analysis. For the comparison of advanced technologies and policy scenarios, the common practice (net impacts based on the EDIP method) was compared to other assessments to better consider elimination benefits. First, the USEtox consensus model was applied for the avoided (eco)toxicity impacts, in combination with the recent ReCiPe method for generated impacts. Then, an eco-efficiency indicator (EFI) was developed to weigh the treatment efforts (generated impacts based on the EDIP and ReCiPe methods) by the average removal efficiency (overcoming (eco)toxicity uncertainty issues).
In total, the four types of comparative assessment showed the same trends: (i) ozonation and activated carbon perform better than UV irradiation, and (ii) no clear advantage distinguishes the policy scenarios. It cannot, however, be concluded that advanced treatment of pharmaceuticals is unnecessary, because other criteria should be considered (risk assessment, bacterial resistance, etc.) and large uncertainties were embedded in the calculations. Indeed, a significant part of this work was dedicated to the discussion of the uncertainty and limitations of the LCA outcomes. At the inventory level, it was difficult to model technology operation at the development stage. For impact assessment, the newly developed characterization factors for pharmaceutical (eco)toxicity showed large uncertainties, mainly due to the lack of data and the quality of toxicity tests. The use of information made available under the REACH framework to develop CFs for detergent ingredients attempted to cope with this issue, but the benefits were limited due to the mismatch of information between REACH and the USEtox method. The highlighted uncertainties were treated with sensitivity analyses to understand their effects on the LCA results. This research work finally presents perspectives on the use of transparently generated data (technology inventory and (eco)toxicity factors) and on the further development of the EFI indicator. Also, emphasis is placed on increasing the reliability of LCA outcomes, in particular through the implementation of advanced techniques for uncertainty management. To conclude, innovative technology/product development (e.g. based on a circular economy approach) needs the involvement of all types of actors and the support of sustainability metrics.
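The idea of weighing treatment efforts by removal efficiency could, purely for illustration, take a ratio form such as the one below. This is a hypothetical sketch of the general eco-efficiency concept, not the EFI definition used in the thesis, and both argument names are assumptions.

```python
def eco_efficiency_indicator(removal_efficiency, generated_impact):
    """Hypothetical eco-efficiency ratio: average pollutant removal
    achieved per unit of environmental impact generated by the treatment.

    removal_efficiency: average fraction of pollutants removed (0..1).
    generated_impact: aggregated life-cycle impact score of operating
    the treatment (any positive impact unit).
    """
    if generated_impact <= 0:
        raise ValueError("generated impact must be positive")
    return removal_efficiency / generated_impact
```

Under this sketch, a technology is preferable when it removes more while generating less impact, which mirrors the comparison logic described above without requiring uncertain (eco)toxicity characterization of the removed pharmaceuticals.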
Educational assessment tends to rely on more or less standardized tests, teacher judgments, and observations. Although teachers spend approximately half of their professional conduct in assessment-related activities, most of them enter their professional life unprepared, as classroom assessment is often not part of their educational training. Since teacher judgments matter for the educational development of students, the judgments should be up to a high standard. The present dissertation comprises three studies focusing on the accuracy of teacher judgments (Study 1), the consequences of (mis-)judgment regarding teacher nomination for gifted programming (Study 2) and teacher recommendations for secondary school tracks (Study 3), and the individual student characteristics that impact and potentially bias teacher judgment (Studies 1 through 3). All studies were designed to contribute to a further understanding of the classroom assessment skills of teachers. Overall, the results implied that teacher judgment of cognitive ability was an important constant for teacher nominations and recommendations but lacked accuracy. Furthermore, teacher judgments of various traits and of school achievement were substantially related to social background variables, especially the parents' educational background. However, multivariate analysis showed social background variables to impact nomination and recommendation only marginally, if at all. All results indicated that differentiated but potentially biased teacher judgments impact far-reaching referral decisions directly, while the influence of social background on the referral decisions themselves seems to be mediated. Implications regarding further research practices and educational assessment strategies are discussed. The implications concerning the need for teachers to be educated in judgment and educational assessment are of particular interest and importance.
The main achievement of this thesis is an analysis of the accuracy of computations with Loader's algorithm for the binomial density. In later work, this analysis could be used for a theorem about the numerical accuracy of algorithms that compute rectangle probabilities for scan statistics of a multinomially distributed random variable. An example that illustrates the practical use of probabilities for scan statistics is the following, which arises in epidemiology: Let n patients arrive at a clinic in d = 365 days, each patient with probability 1/d on each of these d days and all patients independently of each other. Knowing the probability that there exist 3 adjacent days in which, together, more than k patients arrive helps to decide, after observing the data, whether there is a cluster that we would not suspect to have occurred randomly, but for which we suspect there must be a reason. Formally, this epidemiological example can be described by a multinomial model. As multinomially distributed random variables are examples of Markov increments, a fact already used implicitly by Corrado (2011) to compute the distribution function of the multinomial maximum, we can use a generalized version of Corrado's algorithm to compute the probability described in our example. To compute its result, the algorithm for rectangle probabilities for Markov increments always uses transition probabilities of the corresponding Markov chain. In the multinomial case, these transition probabilities are binomial probabilities. Therefore, we carry out an analysis of the accuracy of Loader's algorithm for the binomial density, which is used, for example, by the statistical software R. With the help of accuracy bounds for the binomial density, we would be able to derive accuracy bounds for the computation of rectangle probabilities for scan statistics of multinomially distributed random variables.
To figure out how sharp the derived accuracy bounds are, they can be compared in examples to rigorous upper and lower bounds obtained by interval-arithmetic computations.
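The epidemiological example above can also be approximated by simulation. The following is a minimal Monte Carlo sketch (not the Corrado-type Markov chain algorithm discussed in the thesis) estimating the probability that some window of adjacent days receives more than k patients in total; all parameter names are illustrative.

```python
import random

def scan_exceedance_prob(n, d, window, k, trials=20000, seed=1):
    """Monte Carlo estimate of P(max over `window` adjacent days > k)
    when n patients each pick one of d days uniformly and independently.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        counts = [0] * d
        for _ in range(n):
            counts[rng.randrange(d)] += 1
        # sliding-window sums over adjacent days
        s = sum(counts[:window])
        exceeded = s > k
        for i in range(window, d):
            s += counts[i] - counts[i - window]
            if s > k:
                exceeded = True
                break
        if exceeded:
            hits += 1
    return hits / trials
```

For the clinic example one would call `scan_exceedance_prob(n, 365, 3, k)`; an observed 3-day cluster with a small estimated probability is one we would not suspect to have occurred randomly.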
It is generally assumed that the temperature increase associated with global climate change will lead to increased thunderstorm intensity and associated heavy precipitation events. The present study investigates whether the frequency of thunderstorm occurrences will increase or decrease and how the spatial distribution will change under the A1B scenario. The region of interest is Central Europe, with a special focus on the Saar-Lor-Lux region (Saarland, Lorraine, Luxembourg) and Rhineland-Palatinate. Daily model data of the COSMO-CLM with a horizontal resolution of 4.5 km is used. The simulations were carried out for two different time slices: 1971-2000 (C20) and 2071-2100 (A1B). Thunderstorm indices are applied to detect thunderstorm-prone conditions and differences in their frequency of occurrence in the two thirty-year timespans. The indices used are CAPE (Convective Available Potential Energy), SLI (Surface Lifted Index), and TSP (Thunderstorm Severity Potential). The investigation of present and future thunderstorm-conducive conditions shows a significant increase in non-thunderstorm conditions. The regionally averaged thunderstorm frequencies will decrease in general; only in the Alps is a potential increase in thunderstorm occurrences and intensity found. The comparison between time slices of 10 and 30 years in length shows that the number of grid points with significant signals increases only slightly. In order to get a robust signal for severe thunderstorms, an extension to more than 75 years would be necessary.
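Of the indices named above, the Surface Lifted Index has the simplest form: SLI = T_env(500 hPa) - T_parcel(500 hPa), where the parcel temperature is that of a surface parcel lifted (dry-, then moist-adiabatically) to 500 hPa. A minimal sketch follows; the threshold of -2 °C is a common rule of thumb for thunderstorm-prone conditions, not a value taken from the study.

```python
def surface_lifted_index(t_env_500, t_parcel_500):
    """SLI in Kelvin or Celsius differences:
    environmental 500 hPa temperature minus the temperature of a
    surface parcel lifted adiabatically to 500 hPa.
    Negative values indicate instability (parcel warmer than environment).
    """
    return t_env_500 - t_parcel_500

def thunderstorm_prone(sli, threshold=-2.0):
    """Illustrative classification: SLI at or below the threshold
    flags thunderstorm-conducive conditions."""
    return sli <= threshold
```

Computing the lifted parcel temperature itself requires a full thermodynamic ascent and is omitted here; in practice it is taken from the model sounding.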
The Firepower of Work Craving: When Self-Control Is Burning under the Rubble of Self-Regulation
(2017)
Work craving theory addresses how work-addicted individuals direct great emotion-regulatory efforts toward weaving their addictive web of working. They crave work for two main emotional incentives: to overcompensate for low self-worth and to escape (i.e., reduce) negative affect, which is strategically achieved through neurotic perfectionism and compulsive working. Work-addicted individuals' strong persistence and self-discipline with respect to work-related activities suggest strong skills in volitional action control. However, their inability to disconnect from work implies low volitional skills. How can work-addicted individuals have poor and strong volitional skills at the same time? To answer this paradox, we elaborated on the relevance of two different volitional modes in work craving: self-regulation (self-maintenance) and self-control (goal maintenance). Four hypotheses were derived from Wojdylo's work craving theory and Kuhl's self-regulation theory: (H1) Work craving is associated with a combination of low self-regulation and high self-control. (H2) Work craving is associated with symptoms of psychological distress. (H3) Low self-regulation is associated with psychological distress symptoms. (H4) Work craving mediates the relationships between self-regulation deficits and psychological distress symptoms at high levels of self-control. Additionally, we aimed at supporting the discriminant validity of work craving with respect to work engagement by showing their different volitional underpinnings. The results of the two studies confirmed our hypotheses: whereas work craving was predicted by high self-control and low self-regulation and was associated with higher psychological distress, work engagement was predicted by high self-regulation and high self-control and was associated with lower symptoms of psychological distress. Furthermore, work styles mediated the relationship between volitional skills and symptoms of psychological distress.
Based on these new insights, several suggestions for prevention and therapeutic interventions for work-addicted individuals are proposed.
Phase-amplitude cross-frequency coupling is a mechanism thought to facilitate communication between neuronal ensembles. The mechanism could underlie the implementation of complex cognitive processes, like executive functions, in the brain. This thesis contributes to answering the question of whether phase-amplitude cross-frequency coupling, assessed via electroencephalography (EEG), is a mechanism by which executive functioning is implemented in the brain and whether an assumed performance effect of stress on executive functioning is reflected in phase-amplitude coupling strength. A large body of studies shows that stress can influence executive functioning, with essentially detrimental effects. In two independent studies, each comprising two core executive function tasks (flexibility and behavioural inhibition, as well as cognitive inhibition and working memory), beta-gamma phase-amplitude coupling was robustly detected in the left and right prefrontal hemispheres. No systematic pattern of coupling strength modulation by either task demands or acute stress was detected. Beta-gamma coupling might also be present in more basic attention processes. This is the first investigation of the relationship between stress, executive functions and phase-amplitude coupling; therefore, many aspects have not been explored yet, for example, studying phase precision instead of coupling strength as an indicator of phase-amplitude coupling modulations. Furthermore, the data were analysed in source space (independent component analysis); comparability to sensor space has still to be determined. These as well as other aspects should be investigated, given the promising finding of very robust and strong beta-gamma coupling for all executive functions. Additionally, this thesis tested the performance of two widely used phase-amplitude coupling measures (mean vector length and modulation index). Both measures are specific and sensitive to coupling strength and coupling width.
The simulation study also drew attention to several confounding factors which influence phase-amplitude coupling measures (e.g. data length, multimodality).
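The mean vector length measure evaluated here is commonly computed as the magnitude of the amplitude-weighted mean phase vector (Canolty-style); a minimal sketch of this standard form, taking precomputed phase and amplitude time series as input:

```python
import cmath
import math

def mean_vector_length(phases, amplitudes):
    """Mean vector length: |mean over t of A_t * exp(i * phi_t)|.

    phases: instantaneous phase of the low-frequency (e.g. beta) band, in radians.
    amplitudes: instantaneous amplitude envelope of the high-frequency
    (e.g. gamma) band, same length.
    Large values indicate that high-frequency amplitude is systematically
    higher at a preferred low-frequency phase.
    """
    z = sum(a * cmath.exp(1j * p) for p, a in zip(phases, amplitudes))
    return abs(z) / len(phases)
```

If the amplitude is constant across phases the vectors cancel and the measure tends to zero, whereas an amplitude locked to a preferred phase yields a clearly non-zero value; note that, as the thesis points out, the raw value also depends on confounds such as data length.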
Besides the well-known positive aspects of conservation tillage combined with mulching, a drawback may be the survival of phytopathogenic fungi like Fusarium species on plant residues. This may endanger the health of the following crop by increasing the infection risk for specific plant diseases. In infected plant organs, these pathogens are able to produce mycotoxins like deoxynivalenol (DON). Mycotoxins like DON persist during storage, are heat resistant and are of major concern for human and animal health after consumption of contaminated food and feed, respectively. Among fungivorous soil organisms, there are representatives of the soil fauna that are evidently antagonistic to a Fusarium infection and to the contamination with mycotoxins. Earthworms (Lumbricus terrestris), collembolans (Folsomia candida) and nematodes (Aphelenchoides saprophilus) provide a wide range of ecosystem services, including the stimulation of decomposition processes, which may result in the regulation of plant pathogens and the degradation of environmental contaminants. Several investigations under laboratory conditions and in the field were conducted to test the following hypotheses: (1) Fusarium-infected and DON-contaminated wheat straw provides a more attractive food substrate than non-infected control straw; (2) the introduced soil fauna reduce the biomass of F. culmorum and the content of DON in infected wheat straw under laboratory and field conditions; (3) the species interaction of the introduced soil fauna enhances the degradation of Fusarium biomass and the DON concentration in wheat straw; (4) the degradation efficiency of the soil fauna is affected by soil texture. The results of the present thesis showed that the degradation performance of the introduced soil fauna must be considered an important contribution to the biological control of plant diseases and environmental pollutants. As L. terrestris in particular proved to be the driver of the degradation process, earthworms contribute to a sustainable control of fungal pathogens like Fusarium and its mycotoxins in wheat straw, thus reducing the risk of plant diseases and environmental pollution as ecosystem services.
Shape optimization is of interest in many fields of application. In particular, shape optimization problems arise frequently in technological processes which are modelled by partial differential equations (PDEs). In many practical circumstances, the shape under investigation is parametrized by a finite number of parameters, which, on the one hand, allows the application of standard optimization approaches but, on the other hand, unnecessarily limits the space of reachable shapes. Shape calculus presents a way to circumvent this dilemma. However, so far shape optimization based on shape calculus has mainly been performed using gradient descent methods. One reason for this is the lack of symmetry of second-order shape derivatives, or shape Hessians. A major difference between shape optimization and the standard PDE-constrained optimization framework is the lack of a linear space structure on shape spaces. If one cannot use a linear space structure, then the next best structure is a Riemannian manifold structure, in which one works with Riemannian shape Hessians. These possess the often sought property of symmetry, characterize the well-posedness of optimization problems and define sufficient optimality conditions. In general, shape Hessians are used to accelerate gradient-based shape optimization methods. This thesis deals with shape optimization problems constrained by PDEs and embeds these problems in the framework of optimization on Riemannian manifolds in order to provide efficient techniques for PDE-constrained shape optimization problems on shape spaces. A Lagrange-Newton and a quasi-Newton technique in shape spaces for PDE-constrained shape optimization problems are formulated. These techniques are based on the Hadamard form of shape derivatives, i.e., on the form of integrals over the surface of the shape under investigation. It is often a very tedious, not to say painful, process to derive such surface expressions.
Along the way, volume formulations in the form of integrals over the entire domain appear as an intermediate step. This thesis couples volume integral formulations of shape derivatives with optimization strategies on shape spaces in order to establish efficient shape algorithms reducing analytical effort and programming work. In this context, a novel shape space is proposed.
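As a toy illustration of gradient descent over a finitely parametrized shape (the simple setting this thesis moves beyond), one can minimize the perimeter of an ellipse at fixed area; the optimum is a circle. All names, parameter values and the use of a numerical gradient are illustrative and unrelated to the thesis' shape-space methods.

```python
import math

def ellipse_perimeter(a, b):
    """Ramanujan's approximation to the perimeter of an ellipse
    with semi-axes a and b (exact for a circle)."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

def optimal_semi_axis(area=math.pi, steps=2000, lr=0.01):
    """One-parameter 'shape' optimization: the shape is an ellipse with
    semi-axes (a, area / (pi * a)), so the area constraint is built in.
    Plain gradient descent (with a central-difference gradient) on the
    perimeter drives a toward the circle, a = sqrt(area / pi)."""
    a = 2.0  # illustrative starting shape: an elongated ellipse
    eps = 1e-6
    for _ in range(steps):
        def f(x):
            return ellipse_perimeter(x, area / (math.pi * x))
        grad = (f(a + eps) - f(a - eps)) / (2 * eps)
        a -= lr * grad
    return a
```

With area pi, the iteration converges to a = b = 1, i.e. the unit circle; the point of the parametrized setting is precisely that only ellipses are reachable, which is the limitation shape calculus on shape spaces removes.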