At any given moment, our senses are assaulted with a flood of information from the environment around us. We need to pick our way through all this information in order to respond effectively to what is relevant to us. In most cases we are able to select the information relevant to our intentions from what is not relevant. However, what happens to the information that is not relevant to us? Is this irrelevant information completely ignored, so that it does not affect our actions? The literature suggests that even though we may ignore an irrelevant stimulus, it may still interfere with our actions. One of the ways in which irrelevant stimuli can affect actions is by retrieving a response with which they were associated. An irrelevant stimulus that is presented in close temporal contiguity with a relevant stimulus can be associated with the response made to the relevant stimulus, an observation termed distractor-response binding (Rothermund, Wentura, & De Houwer, 2005). The studies presented in this work take a closer look at such distractor-response bindings and the circumstances in which they occur. Specifically, the study reported in chapter 6 examined whether only an exact repetition of the distractor can retrieve the response with which it was associated, or whether even similar distractors may cause retrieval. The results suggested that even repeating a similar distractor caused retrieval, albeit less than an exact repetition. In chapter 7, the existence of bindings between a distractor and a response was tested beyond a perceptual level, to see whether they exist at an (abstract) conceptual level. Similar to perceptual repetition, distractor-based retrieval of the response was observed for the repetition of concepts. The study reported in chapter 8 of this work examined the influence of attention on the feature-response binding of irrelevant features. The results pointed towards stronger binding effects when attention was directed towards the irrelevant feature compared to when it was not. The study reported in chapter 9 looked at the processes underlying distractor-based retrieval and distractor inhibition. The data suggest that motor processes underlie distractor-based retrieval and cognitive processes underlie distractor inhibition. Finally, the findings of all four studies are also discussed in the context of learning.
Water-deficit stress, usually shortened to water or drought stress, is one of the most critical abiotic stressors limiting plant growth, crop yield, and quality in food production. Today, agriculture consumes about 80-90% of the global freshwater used by humans, and about two-thirds of this is used for crop irrigation. An increasing world population and a predicted rise of 1.0-2.5 °C in the annual mean global temperature as a result of climate change will further increase the demand for water in agriculture. Therefore, one of the most challenging tasks of our generation is to reduce the amount of water used per unit yield in order to satisfy the second UN Sustainable Development Goal and to ensure global food security. Precision agriculture offers new farming methods with the goal of improving the efficiency of crop production through a sustainable use of resources. Plant responses to water stress are complex and co-occur with other environmental stresses under natural conditions. In general, water stress causes physiological and biochemical changes in the plant that depend on the severity and the duration of the actual plant water deficit. Stomatal closure is one of the first responses to plant water stress, causing a decrease in plant transpiration and thus an increase in plant temperature. Prolonged or severe water stress leads to irreversible damage to the photosynthetic machinery and is associated with decreasing chlorophyll content and leaf structural changes (e.g., leaf rolling). Since a crop can already be irreversibly damaged by only mild water deficit, a pre-visual detection of water stress symptoms is essential to avoid yield loss. Remote sensing offers a non-destructive and spatio-temporal method for measuring numerous physiological, biochemical, and structural crop characteristics at different scales and thus is one of the key technologies used in precision agriculture. With respect to the detection of plant responses to water stress, the current state-of-the-art hyperspectral remote sensing imaging techniques are based on measurements of thermal infrared emission (TIR; 8-14 µm), visible, near- and shortwave infrared reflectance (VNIR/SWIR; 0.4-2.5 µm), and sun-induced fluorescence (SIF; 0.69 and 0.76 µm). It is, however, still unclear how sensitive these techniques are with respect to water stress detection. Therefore, the overall aim of this dissertation was to provide a comparative assessment of remotely sensed measures from the TIR, SIF, and VNIR/SWIR domains regarding their ability to detect plant responses to water stress at ground and airborne level. The main findings of this thesis are: (i) temperature-based indices (e.g., CWSI) were more sensitive for the detection of plant water stress than reflectance-based VNIR/SWIR indices (e.g., PRI) and SIF at both ground and airborne level; (ii) for the first time, spectral emissivity as measured by the new hyperspectral TIR instrument could be used to detect plant water stress at ground level. Based on these findings it can be stated that hyperspectral TIR remote sensing offers great potential for the detection of plant responses to water stress at ground and airborne level based on both TIR key variables, surface temperature and spectral emissivity.
However, the large-scale application of water stress detection based on hyperspectral TIR measures in precision agriculture still faces several problems: (i) missing thresholds of temperature-based indices (e.g., CWSI) for application in irrigation scheduling, (ii) lack of current TIR satellite missions with suitable spectral and spatial resolution, and (iii) lack of appropriate data processing schemes (including atmospheric correction and temperature-emissivity separation) for hyperspectral TIR remote sensing at airborne and satellite level.
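The CWSI referred to above is conventionally derived from canopy temperature normalized between a non-stressed (wet) and a non-transpiring (dry) baseline. The following is a minimal sketch of that standard formulation; the temperatures are purely illustrative, not values from the thesis:

```python
import numpy as np

def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index: near 0 at the well-watered (wet) baseline,
    near 1 at the non-transpiring (dry) baseline."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Hypothetical canopy temperatures from a TIR image (degrees Celsius)
t_canopy = np.array([24.1, 27.8, 31.5])
print(cwsi(t_canopy, t_wet=22.0, t_dry=34.0))  # values near 1 indicate stress
```

In irrigation scheduling, a threshold on this index would trigger watering, which is exactly where the missing-threshold problem noted above becomes practical.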
Educational researchers have intensively investigated students' academic self-concept (ASC) and self-efficacy (SE). Both constructs are part of the competence-related self-perceptions of students and are considered to support students' academic success and their career development in a positive manner (e.g., Abele-Brehm & Stief, 2004; Richardson, Abraham, & Bond, 2012; Schneider & Preckel, 2017). However, there is a lack of basic research on ASC and SE in higher education in general, and in undergraduate psychology courses in particular. Therefore, following the within-network and between-network approaches of construct validation (Byrne, 1984), the present dissertation comprises three empirical studies examining the structure (research question 1), measurement (research question 2), correlates (research question 3), and differentiation (research question 4) of ASC and SE in a total sample of N = 1243 psychology students. Concerning research question 1, results of confirmatory factor analyses (CFAs) implied that students' ASC and SE are domain-specific in the sense of multidimensionality, but they are also hierarchically structured, with a general factor at the apex, according to the nested Marsh/Shavelson model (NMS model; Brunner et al., 2010). Additionally, psychology students' SE to master specific psychological tasks in different areas of psychological application could be described by a two-dimensional model with six factors according to the Multitrait-Multimethod (MTMM) approach (Campbell & Fiske, 1959). With regard to research question 2, results revealed that the internal structure of ASC and SE could be validly assessed. However, the assessment of psychology students' SE should follow a task-specific measurement strategy. Results concerning research question 3 further showed that both constructs of psychology students' competence-related self-perceptions were positively correlated with achievement in undergraduate psychology courses if the specificity of the predictor (ASC, SE) corresponded to the measurement specificity of the criterion (achievement). Overall, ASC showed substantially stronger relations to achievement than SE. Moreover, there was evidence for negative paths (contrast effects) from achievement in one psychological domain on the ASC of another psychological domain, as postulated by the internal/external frame of reference (I/E) model (Marsh, 1986). Finally, building on research questions 1 to 3 (structure, measurement, and correlates of ASC and SE), psychology students' ASC and SE could be differentiated on an empirical level (research question 4). Implications for future research practices are discussed. Furthermore, practical implications for enhancing ASC and SE in higher education are proposed to support academic achievement and the career development of psychology students.
Digital libraries have become a central aspect of our lives. They provide us with immediate access to an amount of data that would have been unthinkable in the past. Computer support and the ability to aggregate data from different libraries enable small projects to maintain large digital collections on various topics. A central aspect of digital libraries is the metadata -- the information that describes the objects in the collection. Metadata are digital and can be processed and studied automatically. In recent years, several studies have considered different aspects of metadata. Many studies focus on finding defects in the data; specifically, locating errors related to the handling of personal names has drawn attention. In most cases the studies concentrate on the most recent metadata of a collection; for example, they look for errors in the collection as it is on day X. This is a reasonable approach for many applications. However, to answer questions such as when the errors were added to the collection, we need to consider the history of the metadata itself. In this work, we study how the history of metadata can be used to improve the understanding of a digital library. To this end, we consider how digital libraries handle and store their metadata. Based on this information we develop a taxonomy to describe the available historical data, that is, data on how the metadata records changed over time. We develop a system that identifies changes to metadata over time and groups them into semantically related blocks. We found that historical metadata is often unavailable. However, we were able to apply our system to a set of large real-world collections. A central part of this work is the identification and analysis of changes to metadata which corrected a defect in the collection. These corrections are the accumulated effort to ensure the data quality of a digital library. In this work, we present a system that automatically extracts corrections of defects from the set of all modifications. We present test collections containing more than 100,000 test cases which we created by extracting defects and their corrections from DBLP. These collections can be used to evaluate automatic approaches for error detection. Furthermore, we use these collections to study properties of defects. We concentrate on defects related to the person name problem. We show that many defects occur in situations where very little context information is available, which has major implications for automatic defect detection. We also show that properties of defects depend on the digital library in which they occur, and we briefly discuss how corrected defects can be used to detect hidden or future defects. Besides the study of defects, we show that historical metadata can be used to study the development of a digital library over time. In this work, we present different studies as examples of how historical metadata can be used. First, we describe the development of the DBLP collection over a period of 15 years. Specifically, we study how the coverage of different computer science subfields changed over time. We show that DBLP evolved from a specialized project to a collection that encompasses most parts of computer science. In another study we analyze the impact of user emails on defect corrections in DBLP. We show that these emails trigger a significant amount of error corrections. Based on these data we can draw conclusions on why users report a defective entry in DBLP.
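To illustrate the kind of change identification described above, here is a minimal sketch of a field-level diff between two versions of a metadata record; the record structure and field names are hypothetical, not those of the system built in this work:

```python
from typing import Any

def record_diff(old: dict[str, Any], new: dict[str, Any]) -> list[tuple]:
    """Identify field-level changes between two versions of a metadata record."""
    changes = []
    for field in old.keys() | new.keys():
        before, after = old.get(field), new.get(field)
        if before != after:
            kind = ("add" if before is None else
                    "delete" if after is None else "modify")
            changes.append((kind, field, before, after))
    return changes

# Hypothetical bibliographic record before and after a name correction
v1 = {"title": "On Metadata", "author": "J. Smiht", "year": "2010"}
v2 = {"title": "On Metadata", "author": "J. Smith", "year": "2010"}
print(record_diff(v1, v2))  # [('modify', 'author', 'J. Smiht', 'J. Smith')]
```

Grouping such field-level changes that occur together (e.g., within one edit transaction) then yields the semantically related blocks mentioned above.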
The search for relevant determinants of knowledge acquisition has a long tradition in educational research, with systematic analyses having started over a century ago. To date, a variety of relevant environmental and learner-related characteristics have been identified, providing a wide body of empirical evidence. However, there are still some gaps in the literature, which are highlighted in the current dissertation. The dissertation includes two meta-analyses, summarizing the evidence on the effectiveness of electrical brain stimulation and on the effects of prior knowledge on later learning outcomes, and one empirical study employing latent profile transition analysis to investigate changes in conceptual knowledge over time. The results from the three studies demonstrate how learning outcomes can be advanced by input from the environment and that they are highly related to the students' level of prior knowledge. It is concluded that environmental and learner-related variables affect both the biological and the cognitive processes underlying knowledge acquisition. Based on the findings from the three studies, methodological and practical implications are provided, followed by an outline of four recommendations for future research on knowledge acquisition.
This thesis is divided into three main parts: the description of the calibration problem, the numerical solution of this problem, and the connection to optimal stochastic control problems. Fitting model prices to given market prices leads to an abstract least-squares formulation as the calibration problem. The corresponding option price can be computed by solving a stochastic differential equation via the Monte-Carlo method, which seems to be preferred by most practitioners. Because the Monte-Carlo method is expensive in terms of computational effort and memory, more sophisticated stochastic predictor-corrector schemes are established in this thesis. The numerical advantage of these predictor-corrector schemes is presented and discussed. The adjoint method is applied to the calibration, and its theoretical advantage is discussed in detail. It is shown that the computational effort of gradient calculation via the adjoint method is independent of the number of calibration parameters. Numerical results confirm the theoretical results and summarize the computational advantage of the adjoint method. Furthermore, the connection to optimal stochastic control problems is established in this thesis.
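As a point of reference for the Monte-Carlo pricing step described above, here is a minimal sketch of a European call priced with a plain Euler-Maruyama scheme under Black-Scholes dynamics; the thesis's predictor-corrector schemes and calibration model are not reproduced, and all parameter values are illustrative:

```python
import numpy as np

def mc_euro_call(s0, strike, r, sigma, T, n_steps=100, n_paths=100_000, seed=0):
    """Price a European call by simulating dS = r*S dt + sigma*S dW
    with the Euler-Maruyama scheme and averaging discounted payoffs."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        s += r * s * dt + sigma * s * dw  # Euler-Maruyama step
    payoff = np.maximum(s - strike, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(mc_euro_call(s0=100, strike=100, r=0.05, sigma=0.2, T=1.0))  # approx. 10.45
```

A predictor-corrector scheme would refine each update inside the loop, predicting with an Euler step and then correcting with averaged drift and diffusion evaluations, trading a little extra work per step for better convergence behaviour.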
Surveys are commonly tailored to produce estimates of aggregate statistics with a desired level of precision. This may lead to very small sample sizes for subpopulations of interest, defined geographically or by content, which are not incorporated into the survey design. We refer to subpopulations where the sample size is too small to provide direct estimates with adequate precision as small areas or small domains. Despite the small sample sizes, reliable small area estimates are needed for economic and political decision making. Hence, model-based estimation techniques are used, which increase the effective sample size by borrowing strength from other areas to provide accurate information for small areas. The paragraph above introduced small area estimation as a field of survey statistics where two conflicting philosophies of statistical inference meet: the design-based and the model-based approach. While the first approach is well suited for the precise estimation of aggregate statistics, the latter furnishes reliable small area estimates. In most applications, estimates for both large and small domains based on the same sample are needed. This poses a challenge to the survey planner, as the sampling design has to reflect different and potentially conflicting requirements simultaneously. In order to enable efficient design-based estimates for large domains, the sampling design should incorporate information related to the variables of interest. This may be achieved using stratification or sampling with unequal probabilities. Many model-based small area techniques require an ignorable sampling design, such that after conditioning on the covariates the variable of interest does not contain further information about the sample membership. If this condition is not fulfilled, biased model-based estimates may result, as the model which holds for the sample differs from the one valid for the population. Hence, an optimisation of the sampling design without investigating the implications for model-based approaches will not be sufficient. Analogously, disregarding the design altogether and focussing only on the model is prone to failure as well. Instead, a profound knowledge of the interplay between the sample design and statistical modelling is a prerequisite for implementing an effective small area estimation strategy. In this work, we concentrate on two approaches to address this conflict. Our first approach takes the sampling design as given and can be used after the sample has been collected. It amounts to incorporating the survey design into the small area model to avoid biases stemming from informative sampling. Thus, once a model is validated for the sample, we know that it holds for the population as well. We derive such a procedure under a lognormal mixed model, which is a popular choice when the support of the dependent variable is limited to positive values. Besides, we propose a three-pillar strategy to select the additional variable accounting for the design, based on a graphical examination of the relationship, a comparison of the predictive accuracy of the choices, and a check regarding the normality assumptions. Our second approach to deal with the conflict is based on the notion that the design should allow applying a wide variety of analyses using the sample data. Thus, if the use of model-based estimation strategies can be anticipated before the sample is drawn, this should be reflected in the design.
The same applies to the estimation of national statistics using design-based approaches. Therefore, we propose to construct the design such that the sampling mechanism is non-informative but still allows for precise design-based estimates at an aggregate level.
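For concreteness, one common way to write the lognormal mixed model mentioned above is the following nested-error specification; this is a generic textbook form for illustration, and the exact model and design adjustment derived in this work may differ:

```latex
\log y_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_i + e_{ij},
\qquad u_i \sim \mathcal{N}(0,\sigma_u^2), \quad e_{ij} \sim \mathcal{N}(0,\sigma_e^2),
```

where y_ij > 0 is the variable of interest for unit j in area i, u_i is the area-level random effect, and e_ij the unit-level error. Accounting for an informative design then amounts to augmenting x_ij with a suitable design variable, which is what the three-pillar selection strategy above targets.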
In this thesis, we present a new approach for estimating the effects of wind turbines on a local bat population. We build an individual-based model (IBM) which simulates the movement behaviour of every single bat of the population, with its own preferences, foraging behaviour, and other species characteristics. This behaviour is averaged over a Monte-Carlo simulation, which gives us the mean behaviour of the population. The result is an occurrence map of the considered habitat, which tells us how often the bats, and therefore the considered bat population, frequent each region of this habitat. Hence, it is possible to estimate the crossing rate at the position of an existing or potential wind turbine. We compare this individual-based approach with a method based on partial differential equations (PDEs). This second approach requires less computational effort but, unfortunately, we lose information about the movement trajectories. Additionally, the PDE-based model only gives us a density profile, so we lose the information on how often each bat crosses specific points in the habitat in one night. In a next step we predict the average number of fatalities for each wind turbine in the habitat, depending on the type of the wind turbine and the behaviour of the considered bat species. This gives us the extra mortality caused by the wind turbines for the local population. This value is fed into a population model, and finally we can calculate whether the population still grows or whether there is already a decline in population size that leads to the extinction of the population. Using the combination of all these models, we are able to evaluate the conflict between wind turbines and bats and to predict its outcome. Furthermore, it is possible to find better positions for wind turbines such that the local bat population has a better chance to survive. Since bats tend to move in swarm formations under certain circumstances, we introduce a swarm simulation using partial integro-differential equations, for which we take a closer look at existence and uniqueness properties of solutions.
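The occurrence-map idea can be illustrated with a toy version of such an IBM: random-walking bats on a grid, with visit counts averaged over many simulated nights. Grid size, movement rule, and all numbers below are invented for illustration and are far simpler than the behavioural model of the thesis:

```python
import numpy as np

def occurrence_map(n_bats=50, n_nights=200, n_steps=500, grid=40, seed=1):
    """Toy individual-based model: every bat performs a random walk on a
    grid each night; visit counts averaged over Monte-Carlo runs (nights)
    yield an occurrence map approximating crossing rates per cell."""
    rng = np.random.default_rng(seed)
    visits = np.zeros((grid, grid))
    moves = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
    for _ in range(n_nights):
        pos = rng.integers(0, grid, size=(n_bats, 2))  # nightly start positions
        for _ in range(n_steps):
            pos = (pos + moves[rng.integers(0, 4, size=n_bats)]) % grid
            np.add.at(visits, (pos[:, 0], pos[:, 1]), 1)
    return visits / n_nights  # mean visits per night for each cell

vmap = occurrence_map()
print(vmap[20, 20])  # expected crossings per night at a hypothetical turbine cell
```

The value of a cell then serves as a crossing-rate estimate for a turbine placed there; in the real model, this feeds the fatality and population projections described above.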
Interaction between the Hypothalamic-Pituitary-Adrenal Axis and the Circadian Clock System in Humans
(2017)
The rotation of the Earth creates day and night cycles of 24 h. The endogenous circadian clocks sense these light/dark rhythms, and the master pacemaker, situated in the suprachiasmatic nucleus of the hypothalamus, entrains the body's activities according to this information. The circadian machinery is built from transcriptional/translational feedback loops generating the oscillations in all nucleated cells of the body. In addition, unexpected environmental changes, called stressors, also challenge living systems. A response to these stimuli is provided immediately via the autonomic nervous system and more slowly via the hypothalamus-pituitary-adrenal (HPA) axis. When the HPA axis is activated, circulating glucocorticoids are elevated and regulate organ activities in order to maintain survival of the organism. Both the clock and the stress systems are essential for continuity and interact with each other to keep internal homeostasis. The physiological interactions between the HPA axis and the circadian clock system have mainly been addressed in animal studies, which focus on the effects of stress and circadian disturbances on cardiovascular, psychiatric, and metabolic disorders. Although such studies offer the opportunity to test in the whole body, to apply techniques that would be unacceptable in humans, and to control and manipulate parameters to a high degree, the generalization of their results to humans is still debated. On the other hand, studies conducted with cell lines cannot really reflect the conditions occurring in a living organism. Thus, human studies are absolutely necessary to investigate the mechanisms involved in stress and circadian responses. The studies presented in this thesis were intended to determine the effects of cortisol, as an end-product of the HPA axis, on PERIOD (PER1, PER2 and PER3) transcripts as circadian clock genes in healthy humans. The expression levels of PERIOD genes were measured in whole blood under baseline conditions and after stress. The results demonstrated here provide a better understanding of the transcriptional programming regulated by pulsatile cortisol under standard conditions and of the short-term effects of cortisol increase on circadian clocks after acute stress. These findings also draw attention to inter-individual variations in the stress response as well as to non-circadian functions of PERIOD genes in the periphery, which need to be examined in detail in the future.
Automata theory is the study of abstract machines. It is a branch of theoretical computer science and discrete mathematics. The word automata (the plural of automaton) comes from a Greek word meaning "self-acting". Automata theory is closely related to formal language theory [99, 101]; the theory of formal languages constitutes the backbone of the field of science now generally known as theoretical computer science. This thesis aims to introduce a few types of automata and to study the classes of languages recognized by them. Chapter 1 is the road map, with introduction and preliminaries. In Chapter 2 we consider a few formal languages associated with graphs that have Eulerian trails. We place within the Chomsky hierarchy a few languages that combine other properties with the Eulerian property. In Chapter 3 we consider jumping finite automata, i.e., finite automata in which the input head, after reading and consuming a symbol, can jump to an arbitrary position of the remaining input. We characterize the class of languages described by jumping finite automata in terms of special shuffle expressions and survey other equivalent notions from the existing literature. We are also able to characterize some superclasses of this language class. In Chapter 4 we introduce boustrophedon finite automata, i.e., finite automata working on rectangular arrays (i.e., pictures) in a boustrophedon mode, and we also introduce returning finite automata, which read the input line after line but, unlike boustrophedon finite automata, do not alternate the reading direction, i.e., they always read from left to right, line after line. We provide close relationships with the well-established class of regular matrix (array) languages. We sketch possible applications to character recognition and kolam patterns. Chapter 5 deals with general boustrophedon finite automata and general returning finite automata that read with different scanning strategies. We show that all 32 different variants describe only two different classes of array languages. We also introduce Mealy machines working on pictures and show how these can be used in a modular design of picture processing devices. In Chapter 6 we compare three different types of regular grammars for array languages introduced in the literature -- regular matrix grammars, (regular : regular) array grammars, and isometric regular array grammars -- and variants thereof, focusing on hierarchical questions. We also refine the presentation of (regular : regular) array grammars in order to clarify the interrelations. In Chapter 7 we provide further directions of research with respect to the studies carried out in each of the chapters.
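To make the jumping mode concrete: because the head may jump to an arbitrary position after each read, a jumping finite automaton effectively consumes the letters of its input in some order of its choosing. The following sketch implements exactly that semantics for a small example automaton (a standard textbook example, not one from this thesis):

```python
from functools import lru_cache

# Hypothetical jumping finite automaton (JFA): transitions as a dict
# (state, symbol) -> set of successor states.
DELTA = {("q0", "a"): {"q1"}, ("q1", "b"): {"q0"}}
START, FINAL = "q0", {"q0"}

def jfa_accepts(word: str) -> bool:
    """A JFA accepts iff its letters can be consumed in SOME order along
    a run from START to a FINAL state; search over (state, remaining letters)."""
    @lru_cache(maxsize=None)
    def go(state, remaining):  # remaining: sorted tuple of unread letters
        if not remaining:
            return state in FINAL
        return any(
            go(nxt, remaining[:i] + remaining[i + 1:])
            for i, sym in enumerate(remaining)
            for nxt in DELTA.get((state, sym), ())
        )
    return go(START, tuple(sorted(word)))

print(jfa_accepts("abba"), jfa_accepts("aab"))  # True False
```

This example automaton accepts exactly the words with equally many a's and b's, a non-regular language, which illustrates how the jumping mode adds power beyond ordinary finite automata.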
The first part of this thesis offers a theoretical foundation for the analysis of Tolkien's texts. Each of the three fields of interest -- nostalgia, utopia, and the pastoral tradition -- is introduced in a separate chapter. Special attention is given to the interrelations of the three fields. Their history, meaning, and functions are briefly elaborated, and definitions applicable to their occurrences in fantasy texts are reached. In doing so, new categories and terms are proposed that enable a detailed analysis of the nostalgic, pastoral, and utopian properties of Tolkien's works. As nostalgia and utopia are important ingredients of pastoral writing, they are each introduced first and are finally related to a definition of the pastoral. The main part of this thesis applies the definitions and insights reached in the theoretical chapters to Tolkien's The Lord of the Rings and The Hobbit. This part is divided into three main sections. Again, the order of the chapters follows the line of argumentation. The first section contains the analysis of pastoral depictions in the two texts. Given the separation of the pastoral into different categories, which were outlined in the theoretical part, the chapters examine bucolic and georgic pastoral creatures and landscapes before turning to non-pastoral depictions, which are subdivided into the antipastoral and the unpastoral. A separate chapter looks at the positions and functions of the bucolic and georgic pastoral in the primary texts. This analysis is followed by a chapter on men's special position in Tolkien's mythology, as their depiction reveals their potential to be both pastoral and antipastoral. The second section of the analytical part is concerned with the role of nostalgia within pastoral culture. The focus is laid on the meaning and function of the different kinds of nostalgia, defined in the theoretical part, that are detectable in bucolic and georgic pastoral cultures. Finally, the analysis turns to the utopian potential of Tolkien's mythology. Again, the focus lies on the pastoral and non-pastoral creatures. Their utopian and dystopian visions are presented and contrasted. In this way, different kinds of utopian vision are detected and set in relation to the overall dystopian fate of Tolkien's fictional universe. Drawing on the results of this thesis and on Terry Gifford's ecocritical work, the final chapter argues that Tolkien's texts can be defined as modern pastorals. The connection between Tolkien's work and pastoral literature made explicit in the analysis is thus cemented in generic terms. The conclusion presents a summary of the central findings of this thesis and introduces questions for further study.
A phenomenon of recent decades is that digital marketplaces on the Internet are establishing themselves for a wide variety of products and services. Recently, it has become possible for private individuals to invest in young and innovative companies (so-called start-ups). Via Internet portals, potential investors can examine various start-ups and then directly invest in their chosen start-up. In return, investors receive a share in the firm's profit, while companies can use the raised capital to finance their projects. This new way of financing is called "Equity Crowdfunding" (ECF) or "Crowdinvesting". The aim of this dissertation is to provide empirical findings about the characteristics of ECF. In particular, it analyzes whether ECF is able to overcome geographic barriers, the interdependence of ECF and capital structure, and the risk of failure for funded start-ups as well as their chances of receiving follow-up funding from venture capitalists or business angels. The results of the first part of this dissertation show that investors in ECF prefer local companies. In particular, investors who invest larger amounts have a stronger tendency to invest in local start-ups. The second part of the dissertation provides first indications of the interdependencies between capital structure and ECF. The analysis makes clear that the capital structure is not a determinant for undertaking an ECF campaign. The third part of the dissertation analyzes the success of companies financed by ECF in a country comparison. The results show that after a successful ECF campaign, German companies have a higher chance of receiving follow-up funding from venture capitalists compared to British companies. The probability of survival, however, is slightly lower for German companies. The results provide relevant implications for theory and practice. The existing literature in the area of entrepreneurial finance is extended by insights into investor behavior, additions to capital structure theory, and a country comparison in ECF. In addition, implications are provided for various actors in practice.
Long-Term Memory Updating: The Reset-of-Encoding Hypothesis in List-Method Directed Forgetting
(2017)
People's memory for new information can be enhanced by cuing them to forget older information, as is shown in list-method directed forgetting (LMDF). In this task, people are cued to forget a previously studied list of items (list 1) and to learn a new list of items (list 2) instead. Such cuing typically enhances memory for the list 2 items and reduces memory for the list 1 items, which reflects effective long-term memory updating. This review focuses on the reset-of-encoding (ROE) hypothesis as a theoretical explanation of the list 2 enhancement effect in LMDF. The ROE hypothesis is based on the finding that encoding efficacy typically decreases with the number of encoded items and assumes that providing a forget cue after study of some items (e.g., list 1) resets the encoding process and makes encoding of subsequent items (e.g., early list 2 items) as effective as encoding of previously studied (e.g., early list 1) items. The review provides an overview of current evidence for the ROE hypothesis. The evidence stems from recent behavioral, neuroscientific, and modeling studies that examined LMDF at both the item and the list level. The findings support the view that ROE plays a critical role in the list 2 enhancement effect in LMDF. Alternative explanations of the effect and the generalizability of ROE to other experimental tasks are discussed.
Background: We evaluated depression and social isolation assessed at time of waitlisting as predictors of survival in heart transplant (HTx) recipients. Methods and Results: Between 2005 and 2006, 318 adult HTx candidates were enrolled in the Waiting for a New Heart Study, and 164 received transplantation. Patients were followed until February 2013. Psychosocial characteristics were assessed by questionnaires. Eurotransplant provided medical data at waitlisting, transplantation dates, and donor characteristics; hospitals reported medical data at HTx and date of death after HTx. During a median follow-up of 70 months (<1-93 months post-HTx), 56 (38%) of 148 transplanted patients with complete data died. Depression scores were unrelated to social isolation, and neither correlated with disease severity. Higher depression scores increased the risk of dying (hazard ratio = 1.07; 95% confidence interval, 1.01-1.15; P=0.032), which was moderated by social isolation scores (significant interaction term; hazard ratio = 0.985; 95% confidence interval, 0.973-0.998; P=0.022). These findings were maintained in multivariate models controlling for covariates (P values 0.020-0.039). Actuarial 1-year/5-year survival was best for patients with low depression who were not socially isolated at waitlisting (86% after 1 year, 79% after 5 years). Survival of those who were either depressed, or socially isolated, or both, was lower, especially 5 years posttransplant (56%, 60%, and 62%, respectively). Conclusions: Low depression in conjunction with social integration at time of waitlisting is related to enhanced chances of survival after HTx. Both factors should be considered for inclusion in standardized assessments and interventions for HTx candidates.
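Moderation of the kind reported here is commonly modeled by adding a depression x isolation interaction term to a Cox proportional-hazards regression. A minimal sketch on simulated data follows; the lifelines library, the variable names, and all numbers are illustrative assumptions, not the study's data or code:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
dep = rng.normal(size=n)           # standardized depression score
iso = rng.normal(size=n)           # standardized social isolation score
# Simulate survival times whose hazard rises with depression and falls
# with the depression x isolation interaction (signs as in the abstract).
hazard = np.exp(0.07 * dep - 0.015 * dep * iso)
time = rng.exponential(60 / hazard)
event = (time < 93).astype(int)    # administrative censoring at 93 months
df = pd.DataFrame({"months": np.minimum(time, 93), "died": event,
                   "depression": dep, "isolation": iso,
                   "dep_x_iso": dep * iso})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()  # exp(coef) column gives the hazard ratios
```

In this parameterization, an exp(coef) below 1 for the interaction term would correspond to the moderating effect reported above.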
Entrepreneurship is a process of discovering and exploiting opportunities, during which two crucial milestones emerge: at the very beginning, when entrepreneurs start their businesses, and at the end, when they determine the future of the business. This dissertation examines the establishment and exit of newly created as well as of acquired firms, in particular the behavior and performance of entrepreneurs at these two important stages of entrepreneurship. The first part of the dissertation investigates the impact of characteristics at the individual and at the firm level on an entrepreneur's selection of entry mode, between new venture start-up and business takeover. The second part of the dissertation compares firm performance across different entrepreneurship entry modes and then examines management succession issues that family firm owners have to confront. This study has four main findings. First, previous work experience in small firms, same-sector experience, and management experience affect an entrepreneur's choice of entry mode. Second, the choice of entry mode for hybrid entrepreneurs is associated with their characteristics, such as occupational experience, level of education, and gender, as well as with the characteristics of their firms, such as location. Third, business takeovers survive longer than new venture start-ups, and the two entry modes have different survival determinants. Fourth, a family firm's decision to recruit a family or a nonfamily manager is determined not only by a manager's abilities, but also by the relationship between the firm's economic and non-economic goals and the measurability of these goals. The findings of this study extend our knowledge of entrepreneurship entry modes by showing that new venture start-ups and business takeovers are two distinct entrepreneurship entry modes in terms of their founders' profiles, their survival rates, and their survival determinants. Moreover, this study contributes to the literature on top management hiring in family firms: it establishes a family firm's non-economic goals as another factor that impacts its hiring decision between a family and a nonfamily manager.
Why do some people become entrepreneurs while others stay in paid employment? Searching for a distinctive set of entrepreneurial skills that matches the profile of the entrepreneurial task, Lazear introduced a theoretical model featuring skill variety for entrepreneurs. He argues that because entrepreneurs perform many different tasks, they should be multi-skilled in various areas. First, this dissertation provides the reader with an overview of previous relevant research results on skill variety with regard to entrepreneurship. The majority of the studies discussed focus on the effects of skill variety. Most studies come to the conclusion that skill variety mainly affects the decision to become self-employed. Skill variety also favors entrepreneurial intentions. Less clear are the results with regard to the influence of skill variety on entrepreneurial success: measured on the basis of income and survival of the company, a negative or U-shaped correlation is shown. The empirical part of this dissertation tackles three research goals. First, it investigates whether a variety of early interests and activities in adolescence predicts subsequent variety in skills and knowledge. Second, the determinants of skill variety and of the variety of early interests and activities are investigated. Third, skill variety is tested as a mediator of the gender gap in entrepreneurial intentions. The dissertation employs structural equation modeling (SEM) using longitudinal data collected over ten years from Finnish secondary school students aged 16 to 26. As an indicator of skill variety, the number of functional areas in which the participant had prior educational or work experience is used. The results of the study suggest that a variety of early interests and activities leads to skill variety, which in turn leads to entrepreneurial intentions. Furthermore, the study shows that early variety is predicted by openness and an entrepreneurial personality profile. Skill variety is also encouraged by an entrepreneurial personality profile. From a gender perspective, there is indeed a gap in entrepreneurial intentions. While a positive correlation was found between the early variety of subjects and being female, there are negative correlations between being female and the other two variables, education-related and work-related skill variety; the negative effect of work-related skill variety is the strongest. The results of this dissertation are relevant for research, politics, educational institutions, and special entrepreneurship education programs. The results are also important for self-employed parents who plan the succession of the family business. Educational programs promoting entrepreneurship can be optimized on the basis of the results of this dissertation by making the transmission of a variety of skills a central goal. A focus on teenagers, as well as a preselection based on the personality profile of the participants, could also increase their success. Regarding the gender gap, state policies should aim to provide women with more incentives to acquire skill variety. For this purpose, education programs can be tailored specifically to women, and self-employment can be presented as an attractive alternative to dependent employment.
This study aims to estimate cotton yield at the field and regional level via the APSIM/OZCOT crop model, using an optimization-based recalibration approach built on a state variable of the cotton canopy, the leaf area index (LAI), derived from atmospherically corrected Landsat-8 OLI remote sensing images from 2014. First, local and global sensitivity analysis approaches were employed to test the sensitivity of cultivar, soil, and agronomic parameters to the dynamics of the LAI. These sensitivity analyses yielded a series of sensitive parameters. Then, the APSIM/OZCOT crop model was calibrated against observations over a two-year span (2006-2007) at the Aksu station, combined with these sensitive cultivar parameters and the current understanding of cotton cultivar parameters. Third, the relationship between the observed in-situ LAI and synchronous perpendicular vegetation indices derived from six Landsat-8 OLI images covering the entire growth stage was modelled to generate LAI maps in time and space. Finally, Particle Swarm Optimization (PSO) and a general-purpose optimization approach (based on the Nelder-Mead algorithm) were used to recalibrate four sensitive agronomic parameters (row spacing, sowing density per row, irrigation amount, and total fertilization) by minimizing the root-mean-square error (RMSE) between the LAI simulated by the APSIM/OZCOT model and the LAI retrieved from the Landsat-8 OLI remote sensing images. After the recalibration, the best simulated results compared with the observed cotton yield were obtained. The results showed that: (1) FRUDD, FLAI and DDISQ were the major cultivar parameters suitable for calibrating the cotton cultivar. (2) After the calibration, the simulated LAI performed well, with an RMSE and mean absolute error (MAE) of 0.45 and 0.33, respectively, in 2006, and 0.46 and 0.41, respectively, in 2007. The coefficient of determination between the observed and simulated LAI was 0.83 and 0.97 in 2006 and 2007, respectively. Pearson's correlation coefficient was 0.913 and 0.988 in 2006 and 2007, respectively, with a significant positive correlation between the simulated and observed LAI. The difference between the observed and simulated yield was 776.72 kg/ha and 259.98 kg/ha in 2006 and 2007, respectively. (3) Cotton cultivation in 2014 was mapped using three Landsat-8 OLI images (DOY 136, May; DOY 168, June; DOY 200, July), based on the phenological differences between cotton and other vegetation types. (4) The yield estimation after the assimilation closely approximated the field-observed values, and the coefficient of determination was as high as 0.82 after recalibration of the APSIM/OZCOT model for ten cotton fields. The difference between the observed and assimilated yields for the ten fields ranged from 18.2 to 939.7 kg/ha. The RMSE and MAE between the assimilated and observed yield were 417.5 and 303.1 kg/ha, respectively. These findings provide scientific evidence for the feasibility of coupling remote sensing and the APSIM/OZCOT model at the field level. (5) When upscaling from the field level to the regional level, both the assimilation algorithm and the assimilation scheme are especially important. Although the PSO method is very efficient, its computational cost is the shortcoming of the assimilation strategy at the regional scale. Comparisons between the PSO and the general-purpose optimization method (based on the Nelder-Mead algorithm) were carried out in terms of the RMSE, the LAI curve, and the computational time.
The general-purpose optimization method (based on the Nelder-Mead algorithm) was therefore used for the regional assimilation between remote sensing and the APSIM/OZCOT model. Meanwhile, the basic unit for regional assimilation was set to the cotton field rather than the pixel. Moreover, the crop growth simulation was divided into two phases (vegetative growth and reproductive growth) for regional assimilation. (6) The regional assimilation at the vegetative growth stage between the remote-sensing-derived and the APSIM/OZCOT-simulated LAI was implemented by adjusting two parameters: row spacing and sowing density per row. The results showed that the sowing density of cotton was higher in the southern part than in the northern part of the study area. The spatial pattern of cotton density was also consistent with the reclamation history from 2001 to 2013: cotton fields from early reclamation are mainly located in the southern part, while recent reclamation is located in the northern part. Poor soil quality, a lack of irrigation facilities, and woodland belts around cotton fields in the northern part caused the low density of cotton there. Regarding row spacing, the northern part exceeded the southern part due to the different agronomic modes of military and private companies. (7) The irrigation and fertilization amounts were both used as key parameters to be adjusted for regional assimilation during the reproductive growth period. The results showed that the irrigation per application ranged from 58.14 to 89.99 mm in the study area, with a spatial distribution that is higher in the northern part and lower in the southern part. The application of urea fertilization ranged from 500.35 to 1598.59 kg/ha, with a spatial distribution that is lower in the northern part and higher in the southern part. The greater fertilization applied in the southern study area aims to increase boll weight and number in pursuit of higher cotton yields. The RMSE during the second assimilation mostly fell in the range of 0.4-0.6 m²/m². The estimated cotton yield ranged from 1489 to 8895 kg/ha, and its spatial distribution is also higher in the southern than in the northern study area.
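The recalibration loop described above, minimizing the RMSE between simulated and retrieved LAI, can be sketched with a generic particle swarm optimizer. The crop model is replaced here by a hypothetical simulate_lai stand-in, and all bounds, observations, and PSO constants are illustrative rather than taken from this study:

```python
import numpy as np

def pso_minimize(loss, bounds, n_particles=30, n_iter=100, seed=0):
    """Minimal particle swarm optimizer: recalibrate parameters by
    minimizing a loss such as the RMSE between simulated and observed LAI."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))   # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([loss(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()]
    return gbest, pbest_val.min()

# Hypothetical stand-in for the crop model: simulate_lai(params) -> LAI series.
obs_lai = np.array([0.5, 1.8, 3.2, 3.9, 2.6])
simulate_lai = lambda p: p[0] * np.array([0.2, 0.7, 1.2, 1.5, 1.0]) + p[1]
rmse = lambda p: np.sqrt(np.mean((simulate_lai(p) - obs_lai) ** 2))
params, err = pso_minimize(rmse, bounds=[(0.0, 5.0), (-1.0, 1.0)])
print(params, err)
```

The Nelder-Mead alternative would replace pso_minimize with, e.g., scipy.optimize.minimize(rmse, x0, method="Nelder-Mead"), trading the swarm's global search for far fewer loss evaluations, which is the regional-scale trade-off discussed above.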
Background: Psychotherapy is successful for the majority of patients, but not for every patient. Hence, further knowledge is needed on how treatments should be adapted for those who do not profit or who deteriorate. In recent years, prediction tools as well as feedback interventions have been part of a trend towards more personalized approaches in psychotherapy. Research on psychometric prediction and feedback into ongoing treatment has the potential to enhance treatment outcomes, especially for patients with an increased risk of treatment failure or drop-out. Methods/design: The research project investigates, in a randomized controlled trial, the effectiveness as well as moderating and mediating factors of psychometric feedback to therapists. In the intended study, a total of 423 patients who applied for cognitive-behavioral therapy at the psychotherapy clinic of the University of Trier and suffer from a depressive and/or an anxiety disorder (SCID interviews) will be included. The patients will be randomly assigned to a therapist as well as to one of two groups (CG, IG2). An additional intervention group (IG1) will be generated from an existing archival data set via propensity score matching. Patients of the control group (CG; n = 85) will be monitored concerning psychological impairment, but therapists will not be provided with any feedback about the patients' assessments. In both intervention groups (IG1: n = 169; IG2: n = 169) the therapists are provided with feedback about the patients' self-evaluations in a computerized feedback portal. Therapists of the IG2 will additionally be provided with clinical support tools, which will be developed in this project on the basis of existing systems. Therapists will also be provided with a personalized treatment recommendation based on similar patients (nearest neighbors) at the beginning of treatment. Besides the general effectiveness of feedback and of the clinical support tools for negatively developing patients, further mediating and moderating variables of this feedback effect will be examined: treatment length, frequency of feedback use, therapist effects, therapists' experience, attitude towards feedback, as well as the congruence of therapist and patient evaluations of progress. Additional procedures will be implemented to assess treatment adherence as well as the reliability of diagnoses and to include them in the analyses. Discussion: The current trial tests a comprehensive feedback system which combines precision mental health predictions with routine outcome monitoring and feedback tools in routine outpatient psychotherapy. It adds to previous feedback research a stricter design, by investigating an additional repeated-measurement CG as well as a stricter control of treatment integrity. It also includes a structured clinical interview (SCID) and controls for comorbidity (within depression and anxiety). This study further investigates moderators (attitudes towards and use of the feedback system, diagnoses) and mediators (therapists' awareness of negative change and treatment length) in one study.
This paper describes the concept of the hyperspectral Earth-observing thermal infrared (TIR) satellite mission HiTeSEM (High-resolution Temperature and Spectral Emissivity Mapping). The scientific goal is to measure specific key variables from the biosphere, hydrosphere, pedosphere, and geosphere related to two global problems of significant societal relevance: food security and human health. The key variables comprise land and sea surface radiation temperature and emissivity, surface moisture, thermal inertia, evapotranspiration, soil minerals and grain size components, soil organic carbon, plant physiological variables, and heat fluxes. The retrieval of this information requires a TIR imaging system with adequate spatial and spectral resolutions and with day and night observation capability. Another challenge is the monitoring of temporally highly dynamic features like energy fluxes, which require an adequate revisit time. The suggested solution is a sensor-pointing concept that allows high revisit times for selected target regions (1-5 days at off-nadir). At the same time, global observations in the nadir direction are guaranteed with a lower temporal repeat cycle (>1 month). To account for the demand for high spatial resolution over complex targets, it is suggested to combine in one optic (1) a hyperspectral TIR system with ~75 bands at 7.2-12.5 µm (instrument NEDT 0.05-0.1 K) and a ground sampling distance (GSD) of 60 m, and (2) a panchromatic high-resolution TIR imager with two channels (8.0-10.25 µm and 10.25-12.5 µm) and a GSD of 20 m. The identified science case requires a good correlation of the instrument orbit with Sentinel-2 (maximum delay of 1-3 days) to combine data from the visible and near infrared (VNIR), the shortwave infrared (SWIR), and TIR spectral regions and to refine parameter retrieval.
Dry tropical forests undergo massive conversion and degradation processes. This also holds true for the extensive Miombo forests that cover large parts of Southern Africa. While the largest proportional area can be found in Angola, the country still struggles with food shortages, insufficient medical and educational supplies, as well as the ongoing reconstruction of infrastructure after 27 years of civil war. Especially in rural areas, the local population is therefore still heavily dependent on the consumption of natural resources and on subsistence agriculture. This leads, on the one hand, to large areas of Miombo forests being converted for cultivation purposes and, on the other hand, to degradation processes due to the selective use of forest resources. While forest conversion in south-central rural Angola has already been quantitatively described, information about forest degradation is not yet available, owing to the history of conflicts and the associated research difficulties, as well as the remote location of this area. We apply an annual time series approach using Landsat data in south-central Angola, not only to assess the current degradation status of the Miombo forests but also to derive past developments reaching back to times of armed conflict. We use the Disturbance Index, based on the tasseled cap transformation, to exclude external influences like the inter-annual variation of rainfall. Based on this time series, linear regression is calculated for forest areas unaffected by conversion, but also for the pre-conversion period of those areas that were used for cultivation purposes during the observation time. Metrics derived from the linear regression are used to classify the study area according to its dominant modification processes. We compare our results to MODIS latent integral trends and to further products to derive information on underlying drivers. Around 13% of the Miombo forests are affected by degradation processes, especially along streets, in villages, and close to existing agriculture. However, areas in presumably remote and dense forest are also affected to a significant extent. A comparison with MODIS-derived fire ignition data shows that these areas are most likely affected by recurring fires and less by selective timber extraction. We confirm that areas that are used for agriculture are more heavily disturbed by selective use beforehand than those that remain unaffected by conversion. The results are substantiated by the MODIS latent integral trends, and we also show that, due to its extent and location, the assessment of forest conversion alone is most likely not sufficient to provide good estimates of the loss of natural resources.
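The Disturbance Index used above is commonly computed from the tasseled cap brightness, greenness, and wetness components, each standardized against the statistics of undisturbed forest. The sketch below follows that standard definition; the arrays are random placeholders standing in for real Landsat tasseled cap bands:

```python
import numpy as np

def disturbance_index(brightness, greenness, wetness, forest_mask):
    """Tasseled-cap Disturbance Index (after Healey et al. 2005):
    standardize each component against undisturbed-forest statistics,
    then DI = B' - (G' + W'); disturbed pixels score high (bright, dry,
    low greenness)."""
    def z(band):
        ref = band[forest_mask]
        return (band - ref.mean()) / ref.std()
    return z(brightness) - (z(greenness) + z(wetness))

# Hypothetical tasseled-cap components of one Landsat scene
rng = np.random.default_rng(0)
b, g, w = (rng.normal(size=(100, 100)) for _ in range(3))
mask = np.zeros((100, 100), bool); mask[:50] = True  # known intact forest
di = disturbance_index(b, g, w, mask)
```

Fitting a linear regression to the annual DI values of each pixel, as described above, then yields the trend metrics used to separate degradation processes from stable forest.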
Numerous RCTs demonstrate that cognitive behavioral therapy (CBT) for depression is effective. However, these findings are not necessarily representative of CBT under routine care conditions. Routine care studies are usually not subjected to comparable standardization; for example, therapists often do not follow treatment manuals, and patients are less homogeneous with regard to their diagnoses and sociodemographic variables. Results on the transferability of findings from clinical trials to routine care are sparse and point in different directions. As RCT samples are selective due to a stringent application of inclusion/exclusion criteria, comparisons between routine care and clinical trials must be based on a consistent analytic strategy. The present work demonstrates the merits of propensity score matching (PSM), which offers a way to reduce bias by balancing two samples on a range of pretreatment differences. The objective of this dissertation is to investigate the transferability of findings from RCTs to routine care settings.
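The balancing step that PSM performs can be sketched as follows: estimate each patient's probability of belonging to the RCT sample from pretreatment covariates, then pair every RCT patient with the routine-care patient whose estimated probability is closest. The scikit-learn-based sketch below uses toy data and hypothetical variable names; it illustrates the technique, not the dissertation's implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_match(X, treated):
    """1:1 nearest-neighbour matching on the estimated propensity score:
    balance an RCT sample (treated=1) and a routine-care sample (treated=0)
    on pretreatment covariates X before comparing outcomes."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
    _, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
    return t_idx, c_idx[match.ravel()]  # matched pairs of sample indices

# Hypothetical pretreatment covariates (age, severity, ...) and sample indicator
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
treated = rng.integers(0, 2, 500)
pairs = propensity_match(X, treated)
```

Covariate balance between the matched groups would then be checked (e.g., via standardized mean differences) before comparing treatment outcomes.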
Avoiding aerial microfibre contamination of environmental samples is essential for reliable analyses when it comes to the detection of ubiquitous microplastics. Almost all laboratories have contamination problems, which are largely unavoidable without investment in clean-air devices. Our study therefore supplies an approach to assess the background microfibre contamination of samples in the laboratory under particle-free air conditions. We tested the aerial contamination of samples indoors, in a mobile laboratory, within a laboratory fume hood, and on a clean bench with particle filtration during the examination of a fish. The clean bench reduced aerial microfibre contamination in our laboratory by 96.5%, which highlights the value of suitable clean-air devices for valid microplastic pollution data. Our results indicate that microfibre pollution levels have been overestimated and that actual pollution levels may be many times lower. Accordingly, such clean-air devices are recommended for microplastic laboratory applications in future research work to significantly lower error rates.
Global human population growth is associated with many problems, such as food and water provision, political conflicts, spread of diseases, and environmental destruction. The mitigation of these problems is mirrored in several global conventions and programs, some of which, however, are conflicting. Here, we discuss the conflicts between biodiversity conservation and disease eradication. Numerous health programs aim at eradicating pathogens, and many focus on the eradication of vectors, such as mosquitos or other parasites. As a case study, we focus on the "Pan African Tsetse and Trypanosomiasis Eradication Campaign," which aims at eradicating a pathogen (Trypanosoma) as well as its vector, the entire group of tsetse flies (Glossinidae). As the distribution of tsetse flies largely overlaps with the African hotspots of freshwater biodiversity, we argue for a strong consideration of environmental issues when applying vector control measures, especially the aerial application of insecticides. Furthermore, we want to stimulate discussion on the value of species and on whether the full eradication of a pathogen or vector is justified at all. Finally, we call for a stronger harmonization of international conventions. Proper environmental impact assessments need to be conducted before control or eradication programs are carried out to minimize negative effects on biodiversity.
Flexibility and spatial mobility of labour are central characteristics of modern societies which contribute not only to higher overall economic growth but also to a reduction of interregional employment disparities. For these reasons, there is the political will in many countries to expand labour market areas, resulting especially in an overall increase in commuting. The picture of the various, unintended long-term consequences of commuting on individuals is, however, relatively unclear. Therefore, in recent years, the journey to work has gained high attention, especially in the study of health and well-being. Empirical analyses of how commuting may affect health and well-being that are based on longitudinal as well as European data are nevertheless rare. The principal aim of this thesis is, thus, to address this question with regard to Germany using data from the Socio-Economic Panel. Chapter 2 empirically investigates the causal impact of commuting on absence from work due to sickness-related reasons. Whereas an exogenous change in commuting distance does not affect the number of absence days of those individuals who commute short distances to work, it increases the number of absence days of those employees who commute middle (25–49 kilometres) or long distances (50 kilometres and more). Moreover, our results highlight that commuting may deteriorate an individual's health. However, this effect is not sufficient to explain the observed impact of commuting on absence from work. Chapter 3 explores the relationship between commuting distance and height-adjusted weight and sheds some light on the mechanisms through which commuting might affect individual body weight. We find no evidence that commuting leads to excess weight. Compensating health behaviour of commuters, especially healthy dietary habits, could explain the absence of a relationship between commuting and height-adjusted weight. In Chapter 4, a multivariate probit approach is used to estimate recursive systems of equations for commuting and health-related behaviours. Controlling for the potential endogeneity of commuting, the results show that long-distance commutes significantly decrease the propensity to engage in health-related activities. Furthermore, unobservable individual heterogeneity can influence both the decision to commute and healthy lifestyle choices. Chapter 5 investigates the relationship between commuting and several cognitive and affective components of subjective well-being. The results suggest that commuting is related to lower levels of satisfaction with family life and leisure time, which can largely be ascribed to changes in daily time use patterns influenced by the work commute.
In recent decades, the Arctic has been undergoing a wide range of fast environmental changes. The sea ice covering the Arctic Ocean not only reacts rapidly to these changes, but also influences and alters the physical properties of the atmospheric boundary layer and the underlying ocean on various scales. In that regard, polynyas, i.e. regions of open water and thin ice within the closed pack ice, play a key role as regions of enhanced atmosphere-ice-ocean interactions and extensive new ice formation during winter. Precise long-term monitoring and increased efforts to employ long-term and high-resolution satellite data are therefore of high interest for the polar scientific community. The retrieval of thin-ice thickness (TIT) fields from thermal infrared satellite data and atmospheric reanalysis, utilizing a one-dimensional energy balance model, allows for the estimation of the heat loss to the atmosphere and hence of ice-production rates. However, an extended application of this approach is inherently connected with severe challenges that originate predominantly from the disturbing influence of clouds and from necessary simplifications in the model set-up, which all need to be carefully considered and compensated for. The presented thesis addresses these challenges and demonstrates the applicability of thermal infrared TIT distributions for long-term polynya monitoring, as well as an accurate estimation of ice production in Arctic polynyas at a relatively high spatial resolution. Being written in a cumulative style, the thesis is subdivided into three parts that show the consequent evolution and improvement of the TIT retrieval, based on two regional studies (Storfjorden and North Water (NOW) polynya) and a final large-scale, pan-Arctic study. The first study on the Storfjorden polynya, situated in the Svalbard archipelago, represents the first long-term investigation of spatial and temporal polynya characteristics that is solely based on daily TIT fields derived from MODIS thermal infrared satellite data and ECMWF ERA-Interim atmospheric reanalysis data. Typical quantities such as the polynya area (POLA), the TIT distribution, frequencies of polynya events as well as the total ice production are derived and compared to previous remote sensing and modeling studies. The study includes a first basic approach that aims at compensating cloud-induced gaps in daily TIT composites. This coverage-correction (CC) is a mathematically simple upscaling procedure that depends solely on the daily percentage of available MODIS coverage and yields daily POLA with an error margin of 5 to 6%. The NOW polynya in northern Baffin Bay is the main focus region of the second study, which follows two main goals. First, a new statistics-based cloud interpolation scheme (Spatial Feature Reconstruction - SFR) as well as additional cloud-screening procedures are successfully adapted and implemented in the TIT retrieval for use in Arctic polynya regions. For a 13-yr period, results on polynya characteristics are compared to the CC approach. Furthermore, an investigation of highly variable ice-bridge dynamics in Nares Strait is presented. Second, an analysis of decadal changes of the NOW polynya is carried out, as the additional use of a suite of passive microwave sensors leads to an extended record of 37 consecutive winter seasons, thereby enabling detailed inter-sensor comparisons.
In the final study, the SFR-interpolated daily TIT composites are used to infer spatial and temporal characteristics of 17 circumpolar polynya regions in the Arctic for 2002/2003 to 2014/2015. All polynya regions combined cover an average thin-ice area of 226.6 ± 36.1 × 10³ km² during winter (November to March) and yield an average total wintertime accumulated ice production of about 1811 ± 293 km³. Regional differences in the derived ice production trends are noticeable. The Laptev Sea on the Siberian shelf is presented as a focus region, as frequently appearing polynyas along the fast-ice edge promote high rates of new ice production. New affirming results on a distinct relation to sea-ice area export rates and hence to the Transpolar Drift are shown. This new high-resolution pan-Arctic data set can be further utilized and built upon in a variety of atmospheric and oceanographic applications, while still offering room for further improvements, such as incorporating high-resolution atmospheric data sets and an optimized lead detection.
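The core inversion behind such a TIT retrieval can be reduced to a surface energy balance in which, for thin ice with an assumed linear temperature profile, the conductive heat flux through the ice equals the net heat loss to the atmosphere. A minimal sketch under these simplifications (no snow layer, constant conductivity; all parameter values assumed):

```python
def thin_ice_thickness(t_surface, t_freezing, q_atm, k_ice=2.03):
    """Invert the balance Q_atm = k_ice * (T_f - T_s) / h for the thickness h.

    t_surface : ice surface temperature from thermal infrared data [K]
    t_freezing: freezing temperature at the ice-ocean interface [K]
    q_atm     : net heat loss to the atmosphere [W m^-2], from reanalysis
    k_ice     : thermal conductivity of sea ice [W m^-1 K^-1] (assumed)
    """
    return k_ice * (t_freezing - t_surface) / q_atm

h = thin_ice_thickness(t_surface=253.0, t_freezing=271.35, q_atm=250.0)
```

Ice production then follows from accumulating the heat loss over thin-ice areas and dividing by the volumetric latent heat of fusion of sea ice.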
Determining the exact position of a forest inventory plot—and hence the position of the sampled trees—is often hampered by poor Global Navigation Satellite System (GNSS) signal quality beneath the forest canopy. Inaccurate geo-references hamper the performance of models that aim to retrieve useful information from spatially high-resolution remote sensing data (e.g., species classification or timber volume estimation). This restriction is even more severe on the level of individual trees. The objective of this study was to develop a post-processing strategy to improve the positional accuracy of GNSS-measured sample-plot centers and a method to automatically match trees within a terrestrial sample plot to aerially detected trees. We propose a new method which uses a random forest classifier to estimate the matching probability of each pair of a terrestrial-reference tree and an aerially detected tree, which gives the opportunity to assess the reliability of the results. We investigated 133 sample plots of the Third German National Forest Inventory (BWI, 2011–2012) within the German federal state of Rhineland-Palatinate. For training and objective validation, synthetic forest stands were modeled using the Waldplaner 2.0 software. Our method achieved an overall accuracy of 82.7% for co-registration and 89.1% for tree matching. With our method, 60% of the investigated plots could be successfully relocated. The probabilities provided by the algorithm are an objective indicator of the reliability of a specific result and could be incorporated into quantitative models to increase the performance of forest attribute estimations.
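A sketch of the probability-based matching step, using a random forest on assumed pair features (horizontal distance, height difference, crown diameter difference) and placeholder data; the real model was trained on the synthetic Waldplaner stands:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Placeholder features per terrestrial/aerial tree pair; dummy labels
# (1 = true match) stand in for the synthetic training stands.
X_train = rng.normal(size=(1000, 3))
y_train = (X_train[:, 0] < 0).astype(int)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

X_pairs = rng.normal(size=(20, 3))          # candidate pairs on a new plot
p_match = clf.predict_proba(X_pairs)[:, 1]  # matching probability per pair
```

Pairs can then be accepted greedily in order of decreasing probability, with a probability threshold serving as the reliability indicator described above.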
Academic self-concept (ASC) is comprised of individual perceptions of one's own academic ability. In a cross-sectional quasi-representative sample of 3,779 German elementary school children in grades 1 to 4, we investigated (a) the structure of ASC, (b) ASC profile formation, an aspect of differentiation that is reflected in lower correlations between domain-specific ASCs with increasing grade level, (c) the impact of (internal) dimensional comparisons of one's own ability in different school subjects for profile formation of ASC, and (d) the role played by differences in school grades between subjects for these dimensional comparisons. The nested Marsh/Shavelson model, with general ASC at the apex and math, writing, and reading ASC as specific factors nested under general ASC, fitted the data at all grade levels. A first-order factor model with math, writing, reading, and general ASCs as correlated factors provided a good fit, too. ASC profile formation became apparent during the first two to three years of school. Dimensional comparisons across subjects contributed to ASC profile formation. School grades enhanced these comparisons, especially when achievement profiles were uneven. In part, findings depended on the assumed structural model of ASCs. Implications for further research are discussed with special regard to factors influencing and moderating dimensional comparisons.
Dysfunctional eating behavior is a major risk factor for developing all sorts of eating disorders. Food craving is a concept that may help to better understand why and how these and other eating disorders become chronic conditions through non-homeostatically driven mechanisms. As obesity affects people worldwide, cultural differences must be acknowledged to apply proper therapeutic strategies. In this work, we adapted the Food Craving Inventory (FCI) to the German population. We performed a factor analysis of an adaptation of the original FCI in a sample of 326 men and women. We could replicate the factor structure of the FCI in a German population. The factor extraction procedure produced a factor solution that reproduces the four factors described in the original inventory, the FCI. Our instrument presents high internal consistency, as well as significant correlations with measures of convergent and discriminant validity. The FCI-Deutsch (FCI-DE) is a valid instrument to assess craving for particular foods in Germany, and it could therefore prove useful in clinical and research practice in the field of obesity and eating behaviors.
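As a sketch of the factor extraction step, a four-factor maximum likelihood factor analysis with varimax rotation on placeholder item data is shown below; the item count, rotation method and software are assumptions for illustration, not necessarily those of the original analysis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Placeholder ratings: 326 respondents x 28 food items on a 1-5 scale.
X = rng.integers(1, 6, size=(326, 28)).astype(float)

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(X)
loadings = fa.components_.T  # items x factors loading matrix
```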
Earth observation (EO) is a prerequisite for sustainable land use management, and the open-data Landsat mission is at the forefront of this development. However, increasing data volumes have led to a "digital divide", and consequently, it is key to develop methods that account for the most data-intensive processing steps and are then used for the generation and provision of analysis-ready, standardized, higher-level (Level 2 and Level 3) baseline products for enhanced uptake in environmental monitoring systems. Accordingly, the overarching research task of this dissertation was to develop such a framework, with a special emphasis on the as yet under-researched drylands of Southern Africa. A fully automatic and memory-resident radiometric preprocessing streamline (Level 2) was implemented. The method was applied to the complete Angolan, Zambian, Zimbabwean, Botswanan, and Namibian Landsat record, amounting to 58,731 images with a total data volume of nearly 15 TB. Cloud/shadow detection capabilities were improved for drylands. An integrated correction of atmospheric, topographic and bidirectional effects was implemented, based on radiative transfer theory with corrections for multiple scattering and adjacency effects, and including a multilayered toolset for estimating aerosol optical depth over persistent dark targets or by falling back on a spatio-temporal climatology. Topographic and bidirectional effects were reduced with a semi-empirical C-correction and a global set of correction parameters, respectively. Gridding and reprojection were already included to facilitate easy and efficient further processing. The selection of phenologically similar observations is a key monitoring requirement for multi-temporal analyses, and hence the generation of Level 3 products that realize phenological normalization at the pixel level was pursued. As a prerequisite, coarse resolution Land Surface Phenology (LSP) was derived in a first step, then spatially refined by fusing it with a small number of Level 2 images. For this purpose, a novel data fusion technique was developed, wherein a focal filter based approach employs multi-scale and source prediction proxies. Phenologically normalized composites (Level 3) were generated by coupling the target day (i.e. the main compositing criterion) to the input LSP. The approach was demonstrated by generating peak, end and minimum of season composites, and by comparing these with static composites (fixed target day). It was shown that the phenological normalization accounts for terrain- and land cover class-induced LSP differences, and the use of Level 2 inputs enables a wide range of monitoring options, among them the detection of within-state processes like forest degradation. In summary, the developed preprocessing framework is capable of generating several analysis-ready baseline EO satellite products. These datasets can be used for regional case studies, but may also be directly integrated into more operational monitoring systems, e.g. in support of the Reducing Emissions from Deforestation and Forest Degradation (REDD) incentive. In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of Trier University's products or services. Internal or personal use of this material is permitted.
If interested in reprinting/republishing IEEE copyrighted material for advertising or promotional purposes or for creating new collective works for resale or redistribution, please go to http://www.ieee.org/publications_standards/publications/rights/rights_link.html to learn how to obtain a License from RightsLink.
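For the topographic part of the correction chain, the semi-empirical C-correction mentioned above normalizes reflectance with a band-wise constant obtained from regressing reflectance against the cosine of the local illumination angle; a minimal sketch with assumed variable names:

```python
import numpy as np

def c_correction(rho, cos_i, cos_sza):
    """Semi-empirical C-correction of one band (Teillet et al., 1982).

    rho     : observed reflectance
    cos_i   : cosine of the local solar incidence angle (slope/aspect aware)
    cos_sza : cosine of the solar zenith angle (flat-terrain reference)
    """
    slope, intercept = np.polyfit(cos_i, rho, 1)  # rho = intercept + slope * cos_i
    c = intercept / slope                          # band-wise C constant
    return rho * (cos_sza + c) / (cos_i + c)
```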
Monetary Policy During Times of Crisis - Frictions and Non-Linearities in the Transmission Mechanism
(2017)
For a long time it was believed that monetary policy would be able to maintain price stability and foster economic growth during all phases of the business cycle. The era of the Great Moderation, often also called the Volcker-Greenspan period, beginning in the mid-1980s, was characterized by a decline in the volatility of output growth and inflation among the industrialized countries. The term itself was first used by Stock and Watson (2003). Economists have long studied what triggered the decline in volatility and have pointed out several main factors. An important research strand points to structural changes in the economy, such as a decline of volatility in the goods-producing sector through better inventory controls and developments in the financial sector and government spending (McConnell 2000; Blanchard 2001; Stock 2003; Kim 2004; Davis 2008). While many believed that monetary policy was only 'lucky' in terms of its reaction towards inflation and exogenous shocks (Stock 2003; Primiceri 2005; Sims 2006; Gambetti 2008), others reveal a more complex picture of the story. Rule-based monetary policy (Taylor 1993) that incorporates inflation targeting (Svensson 1999) has been identified as a major source of inflation stabilization by increasing transparency (Clarida 2000; Davis 2008; Benati 2009; Coibion 2011). Apart from that, the mechanics of monetary policy transmission have changed. Giannone et al. (2008) compare the pre-Great Moderation era with the Great Moderation and find that the economy's reaction towards monetary shocks has decreased. This finding is supported by Boivin et al. (2011). Similarly, Herrera and Pesavento (2009) show that monetary policy during the Volcker-Greenspan period was very effective in dampening the effects of exogenous oil price shocks on the economy, while this cannot be found for the period thereafter. Yet, the subprime crisis unexpectedly hit worldwide economies and ended the era of the Great Moderation. Financial deregulation and innovation gave banks opportunities for excessive risk taking, weakened financial stability (Crotty 2009; Calomiris 2009) and led to the build-up of credit-driven asset price bubbles (Schularick and Taylor 2012). The Federal Reserve (FED), which was thought to be the omnipotent conductor of price stability and economic growth during the Great Moderation, failed at preventing a harsh crisis. Even more, it intensified the bubble with low interest rates following the Dotcom crisis of the early 2000s and misjudged the impact of its interventions (Taylor 2009; Obstfeld 2009). New results give a more detailed explanation of the question of the latitude for monetary policy raised by Bernanke and suggest the existence of non-linearities in the transmission of monetary policy. Weise (1999), Garcia and Schaller (2002), Lo and Piger (2005), Mishkin (2009), Neuenkirch (2013) and Jannsen et al. (2015) find that monetary policy is more potent during times of financial distress and recessions. Its effectiveness during 'normal times' is much weaker or even insignificant. This prompts the question of whether these non-linearities limit central banks' ability to lean against bubbles and financial imbalances (White 2009; Walsh 2009; Boivin 2010; Mishkin 2011).
This dissertation looked at both design-based and model-based estimation for rare and clustered populations using the idea of the ACS design. The ACS design (Thompson, 2012, p. 319) starts with an initial sample that is selected by a probability sampling method. If any of the selected units meets a pre-specified condition, its neighboring units are added to the sample and observed. If any of the added units meets the pre-specified condition, its neighboring units are further added to the sample and observed. The procedure continues until there are no more units that meet the pre-specified condition. In this dissertation, the pre-specified condition is the detection of at least one animal in a selected unit. For design-based estimation, three estimators were proposed under three specific design settings. The first design was a stratified strip ACS design that is suitable for aerial or ship surveys. This was a case study in estimating population totals of African elephants. In this case, units/quadrats were observed only once during an aerial survey. The Des Raj estimator (Raj, 1956) was modified to obtain an unbiased estimate of the population total. The design was evaluated using simulated data with different levels of rarity and clusteredness. The design was also evaluated on real data of African elephants that were obtained from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013. In this study, the order in which the samples were observed was maintained. Re-ordering the samples by making use of Murthy's estimator (Murthy, 1957) can produce more efficient estimates and is hence a possible extension of this study. The computational cost resulting from the n! permutations in Murthy's estimator, however, needs to be taken into consideration. The second setting was when there exists an auxiliary variable that is negatively correlated with the study variable. Murthy's estimator (Murthy, 1964) was modified. Situations in which the modified estimator is preferable were given both in theory and in simulations using simulated and two real data sets. The study variables for the real data sets were the distribution and counts of oryx and wildebeest. These were obtained from an aerial census that was conducted in parts of Kenya and Tanzania in October (dry season) 2013. Temperature was the auxiliary variable for both study variables. Temperature data were obtained from the R package raster. The modified estimator provided more efficient estimates with lower bias compared to the original Murthy's estimator (Murthy, 1964). The modified estimator was also more efficient compared to the modified HH and the modified HT estimators of Thompson (2012, p. 319). In this study, one auxiliary variable was considered. A fruitful area for future research would be to incorporate multi-auxiliary information at the estimation phase of an ACS design. This could, in principle, be done by using, for instance, a multivariate extension of the product estimator (Singh, 1967) or the generalized regression estimator (Särndal et al., 1992). The third case under design-based estimation studied the conjoint use of the stopping rule (Gattone and Di Battista, 2011) and the without replacement of clusters design (Dryver and Thompson, 2007). Each of these two methods was proposed to reduce the sampling cost, though the use of the stopping rule results in biased estimates. Despite this bias, the new estimator resulted in a higher efficiency gain in comparison to the without replacement of clusters design.
It was also more efficient compared to the stratified design, which is known to reduce the final sample size when networks are truncated at stratum boundaries. The above evaluation was based on simulated and real data. The real data were the distribution and counts of hartebeest, elephants and oryx obtained in the same census as above. The bias introduced by the stopping rule has not been evaluated analytically. This may not be straightforward, since the truncated network formed depends on the initial unit sampled (Gattone et al., 2016a). This, and the order of the bias, however, deserve further investigation, as they may help in understanding the effect of the increase in the initial sample size, together with the population characteristics, on the efficiency of the proposed estimator. Chapter four modeled data that were obtained using the stratified strip ACS (as described in sub-section 3.1). This was an extension of the model of Rapley and Welsh (2008) through modeling data that were obtained from a different design, the introduction of an auxiliary variable and the use of the without replacement of clusters mechanism. Ideally, model-based estimation does not depend on the design, or rather on how the sample was obtained. This is, however, not the case if the design is informative, such as the ACS design. In this case, the procedure that was used to obtain the sample was incorporated in the model. Both model-based and design-based simulations were conducted using artificial and real data. The study and the auxiliary variable for the real data were the distribution and counts of elephants collected during aerial censuses in parts of Kenya and Tanzania in October (dry season) and April (wet season) 2013, respectively. Areas of possible future research include predicting the population total of African elephants in all parks in Kenya. This can be achieved in an economical and reliable way by using the theory of small area estimation (SAE). Chapter five compared the different proposed strategies using the elephant data. Again, the study variable was the elephant data from October (dry season) 2013 and the auxiliary variable was the elephant data from April (wet season) 2013. The results show that the choice of a particular strategy depends on the characteristics of the population under study and on the level and direction of the correlation between the study and the auxiliary variable (if present). One general area of the ACS design that still lags behind is the implementation of the design in the field, especially on animal populations. This is partly attributable to the challenges associated with field implementation, some of which were discussed in section 2.3. Green et al. (2010), however, provide new insights into undertaking the ACS design during an aerial survey, such as how the aircraft should turn while surveying neighboring units. A key point throughout the dissertation is the reduction of cost during a survey, which can be seen in the reduction of the number of units in the final sample (through the use of the stopping rule, the use of stratification and the truncation of networks at stratum boundaries) and in ensuring that units are observed only once (by using the without replacement of clusters sampling technique). The cost of surveying edge units is assumed to be low, in which case the efficiency of the ACS design relative to the non-adaptive design is achieved (Thompson and Collins, 2002). This is, however, not the case in aerial surveys, as the aircraft flies at constant speed and height (Norton-Griffiths, 1978).
Hence the cost of surveying an edge unit is the same as the cost of surveying a unit that meets the condition of interest. The without replacement of clusters technique plays a greater role in reducing the cost of sampling in such surveys. Other key points that motivated the sections of the dissertation include gains in efficiency (in all sections) and the practicability of the designs in their specific settings. Even though the dissertation focused on animal populations, the methods can just as well be applied to any population that is rare and clustered, such as in the study of forests, plants, pollution, minerals and so on.
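To make the ACS mechanics concrete, the sketch below simulates the design on a small grid and computes a modified Hansen-Hurwitz (HH) type estimate, in which each initially sampled unit contributes the mean of its network; grid size, cluster pattern and sample size are illustrative assumptions.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
SIDE = 20
N = SIDE * SIDE                        # grid of survey units
grid = np.zeros(N, int)
hot = rng.choice(N, 12, replace=False)
grid[hot] = rng.poisson(30, 12)        # rare, clustered animal counts

def network(start):
    """Units reachable from `start` through units meeting the condition
    y > 0 (4-neighbourhood); a non-satisfying unit is its own network."""
    if grid[start] == 0:
        return [start]
    seen, queue = {start}, deque([start])
    while queue:
        r, c = divmod(queue.popleft(), SIDE)
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            v = rr * SIDE + cc
            if 0 <= rr < SIDE and 0 <= cc < SIDE and v not in seen and grid[v] > 0:
                seen.add(v)
                queue.append(v)
    return sorted(seen)

n1 = 30                                      # initial SRS without replacement
init = rng.choice(N, n1, replace=False)
w = [grid[network(u)].mean() for u in init]  # network means
tau_hat = N / n1 * np.sum(w)                 # modified HH estimate of the total
```

Edge units (neighbours that do not meet the condition) are observed but excluded from the network means, which is exactly why their survey cost matters for the efficiency argument above.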
The development of our society has contributed to an increased occurrence of emerging substances (pesticides, pharmaceuticals, personal care products, etc.) in wastewater. Because of their potential hazard to ecosystems and humans, Wastewater Treatment Plants (WWTPs) need to adapt to better remove these compounds. Technology or policy development should, however, comply with sustainable development, e.g. based on Life Cycle Assessment (LCA) metrics. Nevertheless, the reliability or consistency of LCA results can sometimes be debatable. The main objective of this work was to explore how LCA can better support the implementation of innovative wastewater treatment options, in particular by including removal benefits. The method was applied to support solutions for pharmaceuticals elimination from wastewater, regarding: (i) UV technology design, (ii) the choice of advanced technology and (iii) centralized or decentralized treatment policy. The assessment approach followed by previous authors, based on net impact calculation, seemed very promising for considering both the environmental effects induced by treatment plant operation and the environmental benefits obtained from pollutant removal. It was therefore applied to compare UV configuration types. The LCA outcomes were consistent with degradation kinetics analysis. For the comparison of advanced technologies and policy scenarios, the common practice (net impacts based on the EDIP method) was compared to other assessments, to better consider elimination benefits. First, the USEtox consensus model was applied for the avoided (eco)toxicity impacts, in combination with the recent ReCiPe method for generated impacts. Then, an eco-efficiency indicator (EFI) was developed to weigh the treatment efforts (generated impacts based on the EDIP and ReCiPe methods) by the average removal efficiency (overcoming (eco)toxicity uncertainty issues). In total, the four types of comparative assessment showed the same trends: (i) ozonation and activated carbon perform better than UV irradiation, and (ii) no clear advantage distinguishes between the policy scenarios. It cannot, however, be concluded that advanced treatment of pharmaceuticals is unnecessary, because other criteria should be considered (risk assessment, bacterial resistance, etc.) and large uncertainties were embedded in the calculations. Indeed, a significant part of this work was dedicated to the discussion of uncertainty and the limitations of the LCA outcomes. At the inventory level, it was difficult to model technology operation at the development stage. For impact assessment, the newly developed characterization factors for pharmaceutical (eco)toxicity showed large uncertainties, mainly due to the lack of data and quality of toxicity tests. The use of information made available under the REACH framework to develop CFs for detergent ingredients tried to cope with this issue, but the benefits were limited due to the mismatch of information between REACH and the USEtox method. The highlighted uncertainties were treated with sensitivity analyses to understand their effects on the LCA results. This research work finally presents perspectives on the use of transparently generated data (technology inventory and (eco)toxicity factors) and the further development of the EFI indicator. Also, an emphasis is placed on increasing the reliability of LCA outcomes, in particular through the implementation of advanced techniques for uncertainty management. To conclude, innovative technology/product development (e.g.
based on a circular economy approach) needs the involvement of all types of actors and the support of sustainability metrics.
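The accounting logic can be condensed to a few lines; the EFI form below (generated impact divided by the average removal efficiency) is a plausible reading of the description given here, not the exact published formula, and all numbers are assumed.

```python
def net_impact(generated, avoided):
    # negative values indicate an overall environmental benefit
    return generated - avoided

def efi(generated, removal_efficiency):
    # treatment effort weighted by how much pollutant is actually removed;
    # lower values indicate a more eco-efficient option
    return generated / removal_efficiency

# Illustrative comparison of two treatment options (assumed values).
for name, gen, eff in [("ozonation", 120.0, 0.85), ("UV irradiation", 200.0, 0.55)]:
    print(f"{name}: EFI = {efi(gen, eff):.1f}")
```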
Educational assessment tends to rely on more or less standardized tests, teacher judgments, and observations. Although teachers spend approximately half of their professional conduct on assessment-related activities, most of them enter their professional life unprepared, as classroom assessment is often not part of their educational training. Since teacher judgments matter for the educational development of students, these judgments should be up to a high standard. The present dissertation comprises three studies focusing on the accuracy of teacher judgments (Study 1), the consequences of (mis-)judgment regarding teacher nomination for gifted programming (Study 2) and teacher recommendations for secondary school tracks (Study 3), and the individual student characteristics that impact and potentially bias teacher judgment (Studies 1 through 3). All studies were designed to contribute to a further understanding of the classroom assessment skills of teachers. Overall, the results implied that teacher judgment of cognitive ability was an important constant for teacher nominations and recommendations but lacked accuracy. Furthermore, teacher judgments of various traits and school achievement were substantially related to social background variables, especially the parents' educational background. However, multivariate analyses showed social background variables to impact nomination and recommendation only marginally, if at all. All results indicated that differentiated but potentially biased teacher judgments impact far-reaching referral decisions directly, while the influence of social background on the referral decisions themselves seems to be mediated. Implications regarding further research practices and educational assessment strategies are discussed. The implications regarding the need to educate teachers in judgment and educational assessment are of particular interest and importance.
The main achievement of this thesis is an analysis of the accuracy of computations with Loader's algorithm for the binomial density. In further work, this analysis could be used for a theorem about the numerical accuracy of algorithms that compute rectangle probabilities for scan statistics of a multinomially distributed random variable. An example that illustrates the practical use of probabilities for scan statistics is the following, which arises in epidemiology: Let n patients arrive at a clinic in d = 365 days, each of the patients with probability 1/d on each of these d days and all patients independently of each other. Knowing the probability that there exist 3 adjacent days in which together more than k patients arrive helps to decide, after observing data, whether there is a cluster that we would not expect to have occurred randomly and for which we should suspect a cause. Formally, this epidemiological example can be described by a multinomial model. As multinomially distributed random variables are examples of Markov increments, a fact already used implicitly by Corrado (2011) to compute the distribution function of the multinomial maximum, we can use a generalized version of Corrado's algorithm to compute the probability described in our example. To compute its result, the algorithm for rectangle probabilities for Markov increments always uses transition probabilities of the corresponding Markov chain. In the multinomial case, the transition probabilities of the corresponding Markov chain are binomial probabilities. Therefore, we carry out an analysis of the accuracy of Loader's algorithm for the binomial density, which is used, for example, by the statistical software R. With the help of accuracy bounds for the binomial density, we would be able to derive accuracy bounds for the computation of rectangle probabilities for scan statistics of multinomially distributed random variables. To assess how sharp the derived accuracy bounds are, they can be compared in examples to rigorous upper and lower bounds obtained by interval-arithmetic computations.
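For reference, the saddle-point form of the binomial density that Loader's algorithm evaluates can be sketched as follows. This simplified version computes the Stirling error via lgamma and the deviance term directly, whereas production code (e.g., in R) evaluates both with dedicated series expansions precisely to avoid the cancellation errors that the accuracy analysis is concerned with.

```python
import math

def stirlerr(n):
    # log(n!) - log(sqrt(2*pi*n) * (n/e)^n); R uses tabulated values and
    # a series for accuracy, lgamma suffices for this sketch
    return (math.lgamma(n + 1) - 0.5 * math.log(2 * math.pi * n)
            - n * math.log(n) + n)

def bd0(x, m):
    # deviance term x*log(x/m) + m - x; R evaluates it with a series
    # when x is close to m to avoid cancellation
    return x * math.log(x / m) + m - x

def dbinom_loader(k, n, p):
    """Binomial pmf in the saddle-point form of Loader (2000)."""
    if k == 0:
        return (1.0 - p) ** n
    if k == n:
        return p ** n
    lf = (stirlerr(n) - stirlerr(k) - stirlerr(n - k)
          - bd0(k, n * p) - bd0(n - k, n * (1.0 - p)))
    return math.exp(lf) * math.sqrt(n / (2.0 * math.pi * k * (n - k)))
```

The accuracy question studied here is exactly how far such floating-point evaluations can deviate from the true binomial probabilities that enter the Markov chain recursion.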
It is generally assumed that the temperature increase associated with global climate change will lead to increased thunderstorm intensity and associated heavy precipitation events. The present study investigates whether the frequency of thunderstorm occurrences will increase or decrease and how the spatial distribution will change under the A1B scenario. The region of interest is Central Europe, with a special focus on the Saar-Lor-Lux region (Saarland, Lorraine, Luxembourg) and Rhineland-Palatinate. Daily model data of the COSMO-CLM with a horizontal resolution of 4.5 km are used. The simulations were carried out for two different time slices: 1971–2000 (C20) and 2071–2100 (A1B). Thunderstorm indices are applied to detect thunderstorm-prone conditions and differences in their frequency of occurrence in the two thirty-year time spans. The indices used are CAPE (Convective Available Potential Energy), SLI (Surface Lifted Index), and TSP (Thunderstorm Severity Potential). The investigation of present and future thunderstorm-conducive conditions shows a significant increase of non-thunderstorm conditions. Regionally averaged thunderstorm frequencies will decrease in general; only in the Alps is a potential increase in thunderstorm occurrences and intensity found. The comparison between time slices of 10 and 30 years in length shows that the number of grid points with significant signals increases only slightly. In order to get a robust signal for severe thunderstorms, an extension to more than 75 years would be necessary.
The Firepower of Work Craving: When Self-Control Is Burning under the Rubble of Self-Regulation
(2017)
Work craving theory addresses how work-addicted individuals direct great emotion-regulatory efforts to weave their addictive web of working. They crave work for two main emotional incentives: to overcompensate low self-worth and to escape (i.e., reduce) negative affect, which is strategically achieved through neurotic perfectionism and compulsive working. Work-addicted individuals' strong persistence and self-discipline with respect to work-related activities suggest strong skills in volitional action control. However, their inability to disconnect from work implies low volitional skills. How can work-addicted individuals have poor and strong volitional skills at the same time? To answer this paradox, we elaborated on the relevance of two different volitional modes in work craving: self-regulation (self-maintenance) and self-control (goal maintenance). Four hypotheses were derived from Wojdylo's work craving theory and Kuhl's self-regulation theory: (H1) Work craving is associated with a combination of low self-regulation and high self-control. (H2) Work craving is associated with symptoms of psychological distress. (H3) Low self-regulation is associated with psychological distress symptoms. (H4) Work craving mediates the relationships between self-regulation deficits and psychological distress symptoms at high levels of self-control. Additionally, we aimed at supporting the discriminant validity of work craving with respect to work engagement by showing their different volitional underpinnings. Results of the two studies confirmed our hypotheses: whereas work craving was predicted by high self-control and low self-regulation and associated with higher psychological distress, work engagement was predicted by high self-regulation and high self-control and associated with lower symptoms of psychological distress. Furthermore, work styles mediated the relationship between volitional skills and symptoms of psychological distress. Based on these new insights, several suggestions for prevention and therapeutic interventions for work-addicted individuals are proposed.
Phase-amplitude cross-frequency coupling is a mechanism thought to facilitate communication between neuronal ensembles. The mechanism could underlie the implementation of complex cognitive processes, like executive functions, in the brain. This thesis contributes to answering the question of whether phase-amplitude cross-frequency coupling - assessed via electroencephalography (EEG) - is a mechanism by which executive functioning is implemented in the brain and whether an assumed performance effect of stress on executive functioning is reflected in phase-amplitude coupling strength. A huge body of studies shows that stress can influence executive functioning, in essence having detrimental effects. In two independent studies, each comprising two core executive function tasks (flexibility and behavioural inhibition as well as cognitive inhibition and working memory), beta-gamma phase-amplitude coupling was robustly detected in the left and right prefrontal hemispheres. No systematic pattern of coupling strength modulation by either task demands or acute stress was detected. Beta-gamma coupling might also be present in more basic attention processes. This is the first investigation of the relationship between stress, executive functions and phase-amplitude coupling. Therefore, many aspects have not been explored yet; for example, phase precision, instead of coupling strength, could be studied as an indicator of phase-amplitude coupling modulations. Furthermore, data were analysed in source space (independent component analysis); comparability to sensor space has still to be determined. These as well as other aspects should be investigated, given the promising finding of very robust and strong beta-gamma coupling for all executive functions. Additionally, this thesis tested the performance of two widely used phase-amplitude coupling measures (mean vector length and modulation index). Both measures are specific and sensitive to coupling strength and coupling width. The simulation study also drew attention to several confounding factors which influence phase-amplitude coupling measures (e.g., data length and multimodality).
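A minimal sketch of the mean vector length measure (Canolty et al., 2006) as it is typically computed from a single channel, with assumed beta and gamma bands; the thesis' own source-space pipeline is more involved.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def mean_vector_length(x, fs, phase_band=(13, 30), amp_band=(40, 80)):
    """Phase-amplitude coupling strength between the phase of the lower
    band (beta) and the amplitude envelope of the higher band (gamma)."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    # modulus of the mean composite vector; larger = stronger coupling
    return np.abs(np.mean(amp * np.exp(1j * phase)))
```

In practice, the raw value is compared against surrogate data (e.g., time-shifted amplitude series) to control for confounds such as the data-length effects the simulation study points to.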
Besides the well-known positive aspects of conservation tillage combined with mulching, a drawback may be the survival of phytopathogenic fungi like Fusarium species on plant residues. This may endanger the health of the following crop by increasing the infection risk for specific plant diseases. In infected plant organs, these pathogens are able to produce mycotoxins like deoxynivalenol (DON). Mycotoxins like DON persist during storage, are heat resistant and are of major concern for human and animal health after consumption of contaminated food and feed, respectively. Among fungivorous soil organisms, there are representatives of the soil fauna which are apparently antagonistic to a Fusarium infection and the contamination with mycotoxins. Earthworms (Lumbricus terrestris), collembolans (Folsomia candida) and nematodes (Aphelenchoides saprophilus) provide a wide range of ecosystem services, including the stimulation of decomposition processes, which may result in the regulation of plant pathogens and the degradation of environmental contaminants. Several investigations under laboratory conditions and in the field were conducted to test the following hypotheses: (1) Fusarium-infected and DON-contaminated wheat straw provides a more attractive food substrate than non-infected control straw; (2) the introduced soil fauna reduce the biomass of F. culmorum and the content of DON in infected wheat straw under laboratory and field conditions; (3) the species interaction of the introduced soil fauna enhances the degradation of Fusarium biomass and DON concentration in wheat straw; (4) the degradation efficiency of the soil fauna is affected by soil texture. The results of the present thesis pointed out that the degradation performance of the introduced soil fauna must be considered an important contribution to the biological control of plant diseases and environmental pollutants. As in particular L. terrestris was revealed to be the driver of the degradation process, earthworms contribute to a sustainable control of fungal pathogens like Fusarium and its mycotoxins in wheat straw, thus reducing the risk of plant diseases and environmental pollution as ecosystem services.
Shape optimization is of interest in many fields of application. In particular, shape optimization problems arise frequently in technological processes which are modelled by partial differential equations (PDEs). In many practical circumstances, the shape under investigation is parametrized by a finite number of parameters, which, on the one hand, allows the application of standard optimization approaches, but, on the other hand, unnecessarily limits the space of reachable shapes. Shape calculus presents a way to circumvent this dilemma. However, so far, shape optimization based on shape calculus has mainly been performed using gradient descent methods. One reason for this is the lack of symmetry of second order shape derivatives, or shape Hessians. A major difference between shape optimization and the standard PDE constrained optimization framework is the lack of a linear space structure on shape spaces. If one cannot use a linear space structure, then the next best structure is a Riemannian manifold structure, in which one works with Riemannian shape Hessians. They possess the often sought property of symmetry, characterize well-posedness of optimization problems and define sufficient optimality conditions. In general, shape Hessians are used to accelerate gradient-based shape optimization methods. This thesis deals with shape optimization problems constrained by PDEs and embeds these problems in the framework of optimization on Riemannian manifolds to provide efficient techniques for PDE constrained shape optimization problems on shape spaces. A Lagrange-Newton and a quasi-Newton technique in shape spaces for PDE constrained shape optimization problems are formulated. These techniques are based on the Hadamard form of shape derivatives, i.e., on the form of integrals over the surface of the shape under investigation. It is often a very tedious, not to say painful, process to derive such surface expressions. Along the way, volume formulations in the form of integrals over the entire domain appear as an intermediate step. This thesis couples volume integral formulations of shape derivatives with optimization strategies on shape spaces in order to establish efficient shape algorithms reducing analytical effort and programming work. In this context, a novel shape space is proposed.
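The Hadamard form referred to here expresses the shape derivative of an objective J at a shape Ω as a boundary integral against the normal component of the perturbation field V,

\[
  dJ(\Omega)[V] \,=\, \int_{\partial\Omega} g \, \langle V, n \rangle \, \mathrm{d}s ,
\]

with a scalar density g on the boundary and the outward unit normal n; the volume formulations coupled with shape-space optimization in this thesis are the corresponding integrals over the whole domain that arise before this final reduction to the boundary.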
Exposure to fine and ultra-fine environmental particles is still a problem of concern in many industrialized parts of the world, and the intensified use of nanotechnology may further increase exposure to small particles. For many years, air pollution has been recognized as a critical problem in western countries, which led to rigorous regulation of air quality and the introduction of strict guidelines. However, the upper thresholds for particulates in ambient air recommended by the World Health Organization are often exceeded several times over in newly industrialized countries. Such high levels of air pollution have the potential to induce adverse effects on human health. The response triggered by air pollutants is not limited to local effects in the respiratory system but is often systemic, resulting in endothelial dysfunction or atherosclerotic disease. The link between air pollution and cardiovascular disease is now accepted by the scientific community, but the underlying mechanisms responsible for the pro-atherogenic potential still need to be unraveled in detail. Based on the results from in vivo and in vitro studies, the production of reactive oxygen species due to exposure to particles is the most important mechanism to explain the observed adverse effects. However, the doses that were applied in many in vivo and in vitro studies are far beyond the range of what humans are exposed to, and there is a need for more realistic exposure studies. Complex in vitro coculture systems may be valuable tools to study particle-induced processes and to extrapolate the effects of particles on the lung. One of the objectives of this PhD thesis was the establishment and further improvement of a complex coculture system initially described by Alfaro-Moreno et al. [1]. The system is composed of an alveolar type-II cell line (A549), differentiated macrophage-like cells (THP-1), mast cells (HMC-1) and endothelial cells (EA.hy 926), seeded in a 3D orientation on a microporous membrane to mimic the cell response of the alveolar surface in vitro in conjunction with native aerosol exposure (Vitrocell™ chamber). The tetraculture system was carefully characterized to ensure its performance and the repeatability of results. The spatial distribution of the cells in the tetraculture was analyzed by confocal laser scanning microscopy (CLSM), showing a confluent layer of endothelial and epithelial cells on both sides of the Transwell™. Macrophage-like cells and mast cells can be found on top of the epithelial cells. The latter cells formed colonies under submerged conditions, which disappeared at the air-liquid interface (ALI). The Vitrocell™ aerosol exposure system did not significantly influence viability. Using this system, cells were exposed to an aerosol of 50 nm SiO2-Rhodamine nanoparticles (NPs) in PBS. The distribution of the NPs in the tetraculture after exposure was evaluated by CLSM. Fluorescence from internalized particles was detected in CD11b-positive THP-1 cells only. Furthermore, all cell lines were found to be able to respond to xenobiotic model compounds, such as benzo[a]pyrene (B[a]P) or 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), with the upregulation of CYP1 mRNA. With this tetraculture system, the response of the endothelial part of the alveolar barrier was studied in vitro in a still realistic exposure scenario representing the conditions of a polluted situation without direct exposure of the endothelial cells.
After exposure to diesel exhaust particulate matter (DEPM), the expression of different anti-oxidant target genes and inflammatory genes, such as NAD(P)H dehydrogenase quinone 1 (NQO1), superoxide dismutase 1 (SOD1) and heme oxygenase 1 (HMOX1), as well as the nuclear translocation of nuclear factor erythroid-derived 2 (Nrf2), was evaluated. In addition, the potential of DEPM to induce the upregulation of CYP1A1 mRNA in the endothelium was analyzed. DEPM exposure did not lead to an upregulation of the anti-oxidant or inflammatory target genes, but to a clear nuclear translocation of Nrf2. The endothelial cells also responded to the DEPM treatment with the upregulation of CYP1A1 mRNA and nuclear translocation of the aryl hydrocarbon receptor (AhR). Overall, DEPM triggered a response in the endothelial cells after indirect exposure of the tetraculture system to low doses of DEPM, underlining the sensitivity of ALI exposure systems. The use of the tetraculture together with the native aerosol exposure equipment may finally lead to a more realistic judgment regarding the hazard of new compounds and/or new nano-scaled materials in the future. For the first time, it was possible to study the response of the endothelial cells of the alveolar barrier in vitro in a realistic exposure scenario avoiding direct exposure of endothelial cells to high amounts of particulates.
The present work considers the normal approximation of the binomial distribution and yields estimates of the supremum distance between the distribution functions of the binomial and the corresponding standardized normal distribution. The type of the estimates corresponds to the classical Berry-Esseen theorem, in the special case that all random variables are identically Bernoulli distributed. In this case, we state the optimal constant for the Berry-Esseen theorem. In the proof of these estimates, several inequalities regarding the density as well as the distribution function of the binomial distribution are presented. Furthermore, in the estimates mentioned above, the distribution function is replaced by the probability of arbitrary, not only unbounded, intervals, and in this new situation we also present an upper bound.
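In the identically Bernoulli(p) distributed case considered here, the third absolute moment and the variance combine so that the classical Berry-Esseen bound takes the form

\[
  \sup_{x \in \mathbb{R}} \left| \, P\!\left( \frac{S_n - np}{\sqrt{npq}} \le x \right) - \Phi(x) \, \right|
  \,\le\, C \, \frac{p^2 + q^2}{\sqrt{npq}}, \qquad q = 1 - p ,
\]

where S_n is binomially distributed with parameters n and p, Φ is the standard normal distribution function, and the optimal value of the constant C for this case is the one stated in the work.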
The equity premium (Mehra and Prescott, 1985) is still a puzzle in the sense that there are no convincing explanations for its size. In this dissertation, we study this long-standing puzzle and several possible behavioral explanations. First, we apply the IRR methodology proposed by Fama and French (1999) to obtain large firm-level data on the equity premia for N = 28,256 companies in 54 countries around the world. Second, by using preference data from the INTRA study (Rieger et al., 2014), we can test the relevant risk factors together with time cognition to explain the equity premium. We document the failure of the Myopic Loss Aversion hypothesis of Benartzi and Thaler (1995) but provide rigorous empirical evidence to support the behavioral theory of ambiguity aversion as an account of the equity premium. The observations shed some light on the new approach of integrating risk and ambiguity (together with time preferences) into a more general model of uncertainty, in which both the risk premium and the ambiguity premium play roles in asset pricing models.
Cognitive performance is contingent upon multiple factors. Beyond the impact of environmental circumstances, the bodily state may hinder or promote cognitive processing. Afferent transmission from the viscera, for instance, is crucial not only for the genesis of affect and emotion, but further exerts significant influences on memory and attention. In particular, afferent cardiovascular feedback from baroreceptors demonstrated subcortical and cortical inhibition. Consequences for human cognition and behavior are the impairment of simple perception and sensorimotor functioning. Four studies are presented that investigate the modulatory impact of baro-afferent feedback on selective attention. The first study demonstrates that the modulation of sensory processing by baroreceptor activity applies to the processing of complex stimulus configurations. By the use of a visual masking task in which a target had to be selected against a visual mask, perceptual interference was reduced when target and mask were presented during the ventricular systole compared to the diastole. In study two, selection efficiency was systematically manipulated in a visual selection task in which a target letter was flanked by distracting stimuli. By comparing participants' performance under homogeneous and heterogeneous stimulus conditions, selection efficiency was assessed as a function of the cardiac cycle phase in which the targets and distractors were presented. The susceptibility of selection performance to the stimulus condition at hand was less pronounced during the ventricular systole compared to the diastole. Studies one and two therefore indicate that interference from irrelevant sensory input, resulting from temporally overlapping processing traces or from the simultaneous presentation of distractor stimuli, is reduced during phases of increased baro-afferent feedback. Study three experimentally manipulated baroreceptor activity by systematically varying the participant's body position while a sequential distractor priming task was completed. In this study, negative priming and distractor-response binding effects were obtained as indices of controlled and automatic distractor processing, respectively. It was found that only controlled distractor processing was affected by tonic increases in baroreceptor activity. In line with studies one and two, these results indicate that controlled selection processes are more efficient during enhanced baro-afferent feedback, observable in diminished aftereffects of controlled distractor processing. Due to previous findings that indicated baro-afferent transmission to affect central, rather than response-related processing stages, study four measured lateralized readiness potentials (LRPs) and reaction times (RTs), while participants, again, had to selectively respond to target stimuli that were surrounded by distractors. The impact of distractor inhibition on stimulus-related, but not on response-related LRPs suggests that in a sequential distractor priming task, the sensory representations of distractors, rather than motor responses, are targeted by inhibition. Together with the results from studies one through three and the finding of baroreceptor-mediated behavioral inhibition targeting central processing stages, study four corroborates the presumption that baro-afferent signal transmission modulates controlled processes involved in selective attention.
In sum, the work presented shows that visual selective attention benefits from increased baro-afferent feedback, as its effects are not confined to simple perception, but may facilitate the active suppression of neural activity related to sensory input from distractors. Hence, due to noise reduction, baroreceptor-mediated inhibition may promote effective selection in vision.
The efficacy and effectiveness of psychotherapeutic interventions have been proven time and again. We therefore know that, in general, evidence-based treatments work for the average patient. However, it has also repeatedly been shown that some patients do not profit from or even deteriorate during treatment. Patient-focused psychotherapy research takes these differences between patients into account by focusing on the individual patient. The aim of this research approach is to analyze individual treatment courses in order to evaluate when and under which circumstances a generally effective treatment works for an individual patient. The goal is to identify evidence-based clinical decision rules for the adaptation of treatment to prevent treatment failure. Patient-focused research has illustrated how different intake indicators and early change patterns predict the individual course of treatment, but they leave a lot of variance unexplained. The thesis at hand analyzed whether Ecological Momentary Assessment (EMA) strategies could be integrated into patient-focused psychotherapy research in order to improve treatment response prediction models. EMA is an electronically supported diary approach, in which multiple real-time assessments are conducted in participants' everyday lives. We applied EMA over a two-week period before treatment onset in a mixed sample of patients seeking outpatient treatment. The four daily measurements in the patients' everyday environment focused on assessing momentary affect and levels of rumination, perceived self-efficacy, social support and positive or negative life events since the previous assessment. The aim of this thesis project was threefold: First, to test the feasibility of EMA in a routine care outpatient setting. Second, to analyze the interrelation of different psychological processes within patients' everyday lives. Third and last, to test whether individual indicators of psychological processes during everyday life, which were assessed before treatment onset, could be used to improve prediction models of early treatment response. Results from Study I indicate good feasibility of EMA application during the waiting period for outpatient treatment. High average compliance rates over the entire assessment period and low average burdens perceived by the patients support good applicability. Technical challenges and the results of in-depth missing-data analyses are reported to guide future EMA applications in outpatient settings. Results from Study II shed further light on the rumination-affect link. We replicated results from earlier studies, which identified a negative association between state rumination and affect on a within-person level, and additionally showed a) that this finding holds for the majority but not every individual in a diverse patient sample with mixed Axis-I disorders, b) that rumination is linked to negative but also to positive affect and c) that dispositional rumination significantly affects the state rumination-affect association. The results provide exploratory evidence that rumination might be considered a transdiagnostic mechanism of psychological functioning and well-being. Results from Study III finally suggest that the integration of indicators derived from EMA applications before treatment onset can improve prediction models of early treatment response. Positive-negative affect ratios as well as fluctuations in negative affect measured during patients' daily lives allow the prediction of early treatment response.
Our results indicate that combining commonly applied intake predictors with EMA indicators of individual patients' daily experiences can improve treatment response prediction models. We therefore conclude that EMA can successfully be integrated into patient-focused research approaches in routine care settings to improve or optimize individual care.
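The abstract names the two EMA-derived predictors but not how they are computed. The following is a minimal sketch, assuming long-format EMA data with one row per prompt; the column names, the sample values, and the choice of within-person standard deviation as the fluctuation measure are illustrative assumptions, not details taken from the thesis.

```python
import pandas as pd

# Illustrative EMA data: one row per prompt (hypothetical columns)
ema = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2],
    "positive_affect": [3.2, 2.8, 3.5, 1.9, 2.1, 1.7],
    "negative_affect": [1.1, 1.4, 0.9, 2.8, 3.1, 2.6],
})

def ema_predictors(df):
    """Per-patient positive-negative affect ratio and negative-affect
    fluctuation (within-person SD), as candidate predictors of early
    treatment response."""
    g = df.groupby("patient_id")
    return pd.DataFrame({
        "pn_ratio": g["positive_affect"].mean() / g["negative_affect"].mean(),
        "na_fluctuation": g["negative_affect"].std(),
    })

print(ema_predictors(ema))
```

Indicators of this kind could then enter a regression alongside the usual intake predictors to model early treatment response.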
Matching problems with additional resource constraints are generalizations of the classical matching problem. The focus of this work is on matching problems with two types of additional resource constraints: the couple constrained matching problem and the level constrained matching problem. The first is a matching problem on which a set of additional equality constraints is imposed; each constraint demands that, for a given pair of edges, either both edges are in the matching or neither of them is. The second is a matching problem on which a single equality constraint is imposed; this constraint demands that an exact number of edges in the matching are so-called on-level edges. In a bipartite graph with fixed indices of the nodes, these are the edges whose end-nodes have the same index. As a central result concerning the couple constrained matching problem we prove that this problem is NP-hard, even on bipartite cycle graphs. Concerning the complexity of the level constrained perfect matching problem, we show that it is polynomially equivalent to three other combinatorial optimization problems from the literature. For the restricted perfect matching problem, one of these problems, we investigate how different combinations of fixed and variable parameters affect its complexity. Further, the complexity of the assignment problem with an additional equality constraint is investigated. In a central part of this work we bring couple constraints into connection with a level constraint. We introduce the couple and level constrained matching problem with on-level couples, a matching problem with a special case of couple constraints together with a level constraint imposed on it. We prove that the decision version of this problem is NP-complete. This shows that imposing the level constraint on a polynomially solvable problem can suffice to make it NP-hard. This work also deals with the polyhedral structure of resource constrained matching problems. For the polytope corresponding to the relaxation of the level constrained perfect matching problem we develop a characterization of its non-integral vertices. We prove that, for any given non-integral vertex of the polytope, a corresponding inequality separating this vertex from the convex hull of integral points can be found in polynomial time. Regarding the computation of solutions of resource constrained matching problems, two new algorithms are presented. We develop a polynomial approximation algorithm for the level constrained matching problem on level graphs, which returns solutions whose size is at most one less than the size of an optimal solution. We then describe the Objective Branching Algorithm, a new algorithm for exactly solving the perfect matching problem with an additional equality constraint. The algorithm exploits the fact that the weighted perfect matching problem without an additional side constraint is polynomially solvable. In the Appendix, experimental results of an implementation of the Objective Branching Algorithm are listed.
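To make the couple constraint concrete, here is a minimal brute-force sketch (exponential time, small instances only); it is not one of the algorithms developed in the thesis. The graph, the edge names, and the helper functions are illustrative assumptions.

```python
from itertools import combinations

def is_matching(edge_set):
    """A set of edges is a matching if no two edges share a node."""
    seen = set()
    for u, v in edge_set:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def max_couple_constrained_matching(edges, couples):
    """Enumerate edge subsets in decreasing size; the first valid
    subset is a maximum matching. `couples` is a list of edge pairs;
    each pair must be fully inside or fully outside the matching."""
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            M = set(subset)
            if not is_matching(M):
                continue
            if all((e in M) == (f in M) for e, f in couples):
                return M
    return set()

# Example: bipartite cycle on 6 nodes a0-b0-a1-b1-a2-b2-a0
edges = [("a0", "b0"), ("b0", "a1"), ("a1", "b1"),
         ("b1", "a2"), ("a2", "b2"), ("b2", "a0")]
couples = [(("a0", "b0"), ("a1", "b1"))]  # both in, or both out
print(max_couple_constrained_matching(edges, couples))
```

The NP-hardness result cited above explains why, unlike classical matching, no polynomial algorithm is expected for this problem even on bipartite cycle graphs.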
Globalization and the emergence of global value chains have not only changed the way we live, but also the way economists study international economics. These changes are visible in various areas and dimensions. This dissertation deals, mostly empirically, with some of the issues related to global value chains. It starts by critically examining the political economy forces determining the occurrence and the extent of trade liberalization conditions in World Bank lending agreements; the focal point is whether these are affected by the World Bank's most influential member countries. Afterwards, the thesis moves on to describe trade of the European Union member countries at each stage of the value chain. The description is based on a new classification of goods into parts, components, and final products, as well as a newly developed measure describing the average level of development of a country's trading partners. This descriptive exercise is followed by a critical examination of discrepancies between gross trade and trade in value added with respect to comparative advantage. A gravity model is employed to contrast results when studying the institutional determinants of comparative advantage. Finally, the thesis deals with determinants of regional location choices for foreign direct investment. The analysis is based on a theoretical new economic geography model and employs a newly developed index that accounts for the presence of potentially all suppliers and buyers at all stages of the value chain.
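Since the abstract only names the gravity model, the following is a minimal sketch of a log-linear gravity regression of the kind such analyses typically use; the variable names, the simulated data, and the institutional-quality proxy are illustrative assumptions, not the thesis's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated bilateral trade data (hypothetical column names)
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "log_gdp_o": rng.normal(12, 1, n),    # origin GDP (log)
    "log_gdp_d": rng.normal(12, 1, n),    # destination GDP (log)
    "log_dist": rng.normal(8, 0.5, n),    # bilateral distance (log)
    "rule_of_law": rng.normal(0, 1, n),   # institutional quality proxy
})
df["log_trade"] = (0.9 * df["log_gdp_o"] + 0.8 * df["log_gdp_d"]
                   - 1.1 * df["log_dist"] + 0.3 * df["rule_of_law"]
                   + rng.normal(0, 1, n))

# Log-linear gravity regression; in the thesis setting, re-estimating
# with gross trade vs. trade in value added as dependent variable
# would contrast the two measures of comparative advantage.
model = smf.ols("log_trade ~ log_gdp_o + log_gdp_d + log_dist + rule_of_law",
                data=df).fit()
print(model.params)
```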
Floods are hydrological extremes that have enormous environmental, social, and economic consequences. The objective of this thesis was to contribute to the implementation of a processing chain that integrates remote sensing information into hydraulic models. Specifically, the aim was to improve water elevation and discharge simulations by assimilating microwave remote sensing-derived flood information into hydraulic models. The first component of the proposed processing chain is a fully automated flood mapping algorithm that enables the automated, objective, and reliable extraction of the flood extent from Synthetic Aperture Radar images, providing accurate results in both rural and urban regions. The method operates with minimal data requirements and is efficient in terms of computational time. The map obtained with the developed algorithm is still subject to uncertainties, both introduced by the flood mapping algorithm and inherent in the image itself. In this work, particular attention was given to image uncertainty deriving from speckle. By bootstrapping the original satellite image pixels, several synthetic images were generated and provided as input to the developed flood mapping algorithm. The analysis performed on the resulting mapping products indicates that speckle uncertainty can be considered a negligible component of the total uncertainty. In the final step of the proposed processing chain, water elevations of a real event, obtained from satellite observations, were assimilated into a hydraulic model with an adapted version of the Particle Filter, modified to work with non-Gaussian distributions of observations. To deal with model structure error and possibly biased observations, a global and a local weight variant of the Particle Filter were tested; which variant is to be preferred depends on the level of confidence attributed to the observations or to the model. This study also highlighted the complementarity of remote sensing-derived and in-situ data sets. An accurate binary flood map is an invaluable product for different end users; deriving additional hydraulic information from this binary map, such as water elevations, enhances the value of the product itself. The derived data can be assimilated into hydraulic models that fill the gaps where, for technical reasons, Earth Observation data cannot provide information, also enabling a more accurate and reliable prediction of flooded areas.
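The abstract does not spell out the filter equations, so the following is a minimal sketch of a Particle Filter update with a non-Gaussian observation likelihood, contrasting the global and local weight variants. The Laplace likelihood, the function names, the ensemble size, and the scale parameter are illustrative assumptions, not the thesis's adapted filter.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_likelihood(obs, sim, scale=0.3):
    """Non-Gaussian (Laplace) observation likelihood per location."""
    return np.exp(-np.abs(obs - sim) / scale) / (2.0 * scale)

def particle_filter_update(particles, obs, local=False):
    """particles: (n_particles, n_locations) simulated water elevations.
    obs: (n_locations,) remotely sensed water elevations."""
    like = laplace_likelihood(obs[None, :], particles)  # (n, m)
    if local:
        # Local variant: one weight per particle and per location;
        # more robust when observations are locally biased.
        return like / like.sum(axis=0, keepdims=True)
    # Global variant: one scalar weight per particle over all
    # locations, then resampling of the ensemble.
    w = like.prod(axis=1)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy example: 100 particles, 5 observed cross-sections
particles = rng.normal(10.0, 1.0, size=(100, 5))
obs = np.full(5, 10.5)
posterior = particle_filter_update(particles, obs)
print(posterior.mean(axis=0))
```

The global variant trusts the observations as a whole; the local variant down-weights a particle only where it disagrees with the data, which matches the trade-off described above.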
In the first part of this work we generalize a method of building optimal confidence bounds provided in Buehler (1957) by specializing an exhaustive class of confidence regions inspired by Sterne (1954). The resulting confidence regions, also called Buehlerizations, are valid in general models and depend on a 'designated statistic' that can be chosen according to some desired monotonicity behaviour of the confidence region. For a fixed designated statistic, the family of confidence regions thus obtained, indexed by their confidence level, is nested. Buehlerizations furthermore have the optimality property of being the smallest (with respect to set inclusion) confidence regions that are increasing in their designated statistic. The theory is then applied to normal, binomial, and exponential samples. The second part deals with the statistical comparison of pairs of diagnostic tests and establishes relations (1) between the sets of lower confidence bounds, (2) between the sets of pairs of comparable lower confidence bounds, and (3) between the sets of admissible lower confidence bounds in various models for diverse parameters of interest.
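As a minimal numerical sketch of the binomial application mentioned above, the following computes a Buehler-type lower confidence bound with the observation itself as designated statistic, in which case it coincides with the classical Clopper-Pearson lower bound. The function name and the root-finding approach are illustrative, not taken from the thesis.

```python
from scipy.stats import binom
from scipy.optimize import brentq

def buehler_lower_bound(x, n, alpha=0.05):
    """Lower confidence bound for a binomial proportion with the
    observation x as designated statistic: the smallest p such that
    P_p(X >= x) exceeds alpha."""
    if x == 0:
        return 0.0
    # P_p(X >= x) = binom.sf(x - 1, n, p) is increasing in p,
    # so the bound is the root of binom.sf(x - 1, n, p) - alpha.
    return brentq(lambda p: binom.sf(x - 1, n, p) - alpha, 0.0, 1.0)

# One-sided 95% lower bound for 8 successes in 10 trials
print(buehler_lower_bound(8, 10))
```

Choosing a different designated statistic changes the monotonicity of the resulting bound, which is exactly the degree of freedom the Buehlerization construction exploits.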
1. The Discursive Construction of Black Masculinity: Intersections of Race, Gender, and Sexuality
1.1. The Plight of Black Men: A History of Lynchings and Castrations
1.2. The Discursive Construction of the Black Man as Other
1.3. Black Corporeality and the Scopic Regime of Racism
2. Ralph Ellison's 'Invisible Man'
2.1. Invisible Black Men: Between Emasculation and Hypermasculinity
2.2. Transcending Invisibility