The present study covers the period from the late ninth to the early sixteenth century. Within this period, the late thirteenth to mid-fourteenth centuries marked the decisive turning point, shaped more by attitudes and actions among the Christian majority than among Jewish agents. Our findings indicate an intensification of anti-Jewish tendencies, rooted in religious developments in Western Christendom. Depending on circumstances, however, these tendencies had a highly variable impact across time and space. The frequent religious and ecclesiastical reform movements of Western Europe offer cases in point. In the 'German' Empire north of the Alps, the monastic reforms of Saint Maximin and Gorze were by no means confined to the realm of monasticism; they were essential in shaping the historical circumstances in which the foundations of Ashkenazic Judaism were laid in the tenth and early eleventh centuries. The concept of 'honor' was used by leading ecclesiastics such as Bishop Rüdiger of Speyer in 1084 to justify the settlement of Jews, but also by civic authorities such as those of Regensburg later on. It is significant for the long-term tendency, therefore, that the late-medieval expulsions from cities like Trier, Cologne, and Regensburg were eventually also legitimized by reference to the idea of honor.
Digital technologies have become central to social interaction and to accessing goods and services. Development strategies and approaches to governance increasingly deploy self-labelled ‘smart’ technologies and systems at various spatial scales, often promoted as rectifying social and geographic inequalities and increasing economic and environmental efficiencies. They have been accompanied by similarly digitalized commercial and non-profit offers, particularly within the sharing economy. Concern has grown, however, over possible inequalities linked to their introduction. In this paper we critically analyse the contribution of sharing economies to more inclusive, socially equitable and spatially just transitions. Conceptually, the paper brings together literature on sharing economies, smart urbanism and just transitions. Drawing on an explorative database of sharing initiatives within the cross-border region of Luxembourg and Germany, we discuss aspects of sustainability as they relate to distributive justice through spatial accessibility, intended benefits, and their operationalization. The regional analysis shows the diversity of sharing models, how they are appropriated in different ways, and how intent and operationalization matter for potential benefits. The results emphasize the need for more fine-grained, qualitative research revealing who is, and is not, participating in and benefitting from sharing economies.
We study planned changes in protective routines after the COVID-19 pandemic: in a survey in Germany among >650 respondents, we find that the majority plans to use face masks in certain situations even after the end of the pandemic. We observe that this willingness is strongly related to the perception that there is something to be learned from East Asians’ handling of pandemics, even when controlling for perceived protection by wearing masks. Given strong empirical evidence that face masks help prevent the spread of respiratory diseases and given the considerable estimated health and economic costs of such diseases even pre-Corona, this would be a very positive side effect of the current crisis.
Knowledge acquisition comprises various processes, each with its own dedicated research domain; two examples are the relations between knowledge types and the influences of person-related variables. The transfer of knowledge is another crucial domain in educational research. In this dissertation, I investigated these three processes through secondary analyses. Secondary analyses accommodate the breadth of each field and allow for more general interpretations. The dissertation includes three meta-analyses: the first reports findings on the predictive relations between conceptual and procedural knowledge in mathematics in a cross-lagged panel model; the second focuses on the mediating effects of motivational constructs on the relationship between prior knowledge and knowledge after learning; the third deals with the effect of instructional methods in transfer interventions on knowledge transfer in school students. Together, these studies provide insights into the determinants and processes of knowledge acquisition and transfer: knowledge types are interrelated, motivation mediates the relation between prior and later knowledge, and interventions influence knowledge transfer. The results are discussed in terms of six key insights that build upon the three studies. Additionally, practical implications, as well as methodological and content-related ideas for further research, are provided.
The ability to acquire knowledge helps humans cope with the demands of their environment, and supporting knowledge acquisition is among the main goals of education. Empirical research in educational psychology has identified several processes through which prior knowledge affects learning. However, most studies have investigated cognitive mechanisms mediating between prior knowledge and learning and have neglected the possibility that motivational processes also mediate this influence. In addition, the impact of successful knowledge acquisition on patients’ health has not been comprehensively studied. This dissertation aims to close these knowledge gaps through three studies. The first study is a meta-analysis that examined motivation as a mediator of individual differences in knowledge before and after learning. The second study investigated in greater detail the extent to which motivation mediated the influence of prior knowledge on knowledge gains in a sample of university students. The third study is a second-order meta-analysis synthesizing the results of previous meta-analyses on the effects of patient education on several health outcomes. The findings show that (a) motivation mediates individual differences in knowledge before and after learning; (b) interest and academic self-concept stabilize individual differences in knowledge more than academic self-efficacy, intrinsic motivation, and extrinsic motivation; (c) test-oriented instruction closes knowledge gaps between students; (d) motivation can be independent of prior knowledge in high-aptitude students; (e) knowledge acquisition affects motivational and health-related outcomes; and (f) evidence on prior knowledge and motivation can help develop effective interventions in patient education. The results provide insights into the prerequisites, processes, and outcomes of knowledge acquisition.
Future research should address covariates of learning and environmental impacts for a better understanding of knowledge acquisition processes.
This doctoral thesis examines intergenerational knowledge transfer, its antecedents, and how participation in intergenerational knowledge transfer relates to the performance evaluation of employees. To answer these questions, the thesis builds on a literature review and quantitative research methods. A systematic literature study shows that empirical evidence on intergenerational knowledge transfer is limited. Building on prior literature, effects of various antecedents at the interpersonal and organizational level on intergenerational and intragenerational knowledge transfer are postulated. Drawing on a survey of 444 trainees and trainers, the thesis also demonstrates that interpersonal antecedents affect how trainees participate in intergenerational knowledge transfer with their trainers. These results support the relevance of interpersonal antecedents for intergenerational knowledge transfer, but also emphasize the implications of the roles assigned in knowledge transfer (i.e., whether one is a trainee or a trainer). Moreover, the results of an experimental vignette study reveal that participation in intergenerational knowledge transfer is linked to the performance evaluation of employees, yet depends on whether the employee is sharing or seeking knowledge. Overall, this doctoral thesis covers a multitude of antecedents of intergenerational knowledge transfer and shows how participation in it may be associated with the performance evaluation of employees.
The main objective of this thesis was to set up a framework for identifying land use changes in drylands and revealing their underlying drivers. The concept of describing land cover change processes in a framework of global change syndromes was introduced by Schellnhuber et al. (1997). In a first step, the syndrome approach was implemented for semi-natural areas of the Iberian Peninsula based on time series analysis of the MEDOKADS archive. In a subsequent study, the approach was expanded and adapted to other land cover strata. Furthermore, results of an analysis of the relationship between annual NDVI and rainfall data were incorporated to designate areas with a significant relationship, indicating that at least part of the variability found in the NDVI time series was caused by precipitation. Additionally, a first step was taken towards integrating socio-economic data into the analysis; population density changes between 1961 and 2008 were used to support the identification of processes related to land abandonment accompanied by the cessation of agricultural practices on the one hand and urbanization on the other. The main findings comprise three major land cover change processes caused by human activity: (i) shrub and woody vegetation encroachment in the wake of land abandonment in marginal areas, (ii) intensification of non-irrigated and irrigated, intensively used fertile regions, and (iii) urbanization trends along the coastline caused by migration and the growth of mass tourism. The abandonment of cultivated fields and grazing areas in marginal mountainous regions often leads to the encroachment of shrubs and woody vegetation in the course of succession or reforestation.
Whereas this cover change has positive effects on soil stabilization and carbon sequestration, the increase in biomass also has negative consequences for ecosystem goods and services; these include decreased water yield as a result of increased evapotranspiration, increasing fire risk, decreasing biodiversity due to landscape homogenization, and loss of aesthetic value. Arable land in intensively used fertile zones of Spain was further intensified, including the expansion of irrigated arable land. The intensification of agriculture has also generated land abandonment in these areas because, due to mechanization, fewer people are needed in the agricultural labour sector. Urbanization effects due to migration and the growth of the tourism sector were mapped along the eastern Mediterranean coast. Urban sprawl was only partly detectable by means of the MEDOKADS archive, as the changes caused by urbanization are often too subtle to be detected by data with a spatial resolution of 1 km. This is in line with a comparison of a Landsat TM time series and the NOAA AVHRR archive for a study area in Greece, which showed that small-scale changes cannot be detected with this approach, even though they might be of high relevance for the local management of resources. This underlines the fact that land degradation processes are multi-scale problems and that data at several spatial and temporal scales are mandatory to build a comprehensive dryland observation system. Further land cover processes related to a decrease in greenness did not play an important role in the observation period. Only few patches were identified, suggesting that no large-scale land degradation processes are taking place in the sense of a decline in primary productivity after disturbances.
Nevertheless, the detected land cover processes impact ecosystem functioning and, as the example of shrub encroachment shows, bear risks for the provision of goods and services, which can be regarded as land degradation in the sense of a decline of important ecosystem goods and services. This risk is not confined to the affected ecosystem itself but can also impact adjacent ecosystems through inter-linkages. In drylands, water availability is of major importance, and the management of water resources is an important political issue. In view of climate change, this topic will become even more important because aridity in Spain has increased within the last decades and is likely to increase further. In addition, the land cover changes detected by the syndrome approach could further augment water scarcity problems: whereas the water yield of marginal areas, which often serve as headwaters of rivers, decreases with increasing biomass, the water demand of agriculture and tourism is not expected to decline. In this context, it will be of major importance to evaluate the trade-offs between different land uses and to make decisions that maintain the future functioning of ecosystems for human well-being.
Digital libraries have become a central aspect of our lives. They provide immediate access to an amount of data that would have been unthinkable in the past. Computer support and the ability to aggregate data from different libraries enable even small projects to maintain large digital collections on various topics. A central aspect of digital libraries is the metadata -- the information that describes the objects in the collection. Metadata are digital and can be processed and studied automatically. In recent years, several studies have considered different aspects of metadata. Many focus on finding defects in the data; in particular, locating errors related to the handling of personal names has drawn attention. In most cases, these studies concentrate on the most recent metadata of a collection -- for example, they look for errors in the collection as it exists on day X. This is a reasonable approach for many applications. However, to answer questions such as when errors were added to the collection, we need to consider the history of the metadata itself. In this work, we study how the history of metadata can be used to improve the understanding of a digital library. To this end, we consider how digital libraries handle and store their metadata. Based on this information, we develop a taxonomy to describe the available historical data, that is, data on how metadata records changed over time. We develop a system that identifies changes to metadata over time and groups them into semantically related blocks. We found that historical metadata is often unavailable. However, we were able to apply our system to a set of large real-world collections. A central part of this work is the identification and analysis of changes to metadata which corrected a defect in the collection. These corrections are the accumulated effort to ensure the data quality of a digital library.
In this work, we present a system that automatically extracts corrections of defects from the set of all modifications. We present test collections containing more than 100,000 test cases, which we created by extracting defects and their corrections from DBLP. These collections can be used to evaluate automatic approaches to error detection. Furthermore, we use them to study properties of defects, concentrating on defects related to the person name problem. We show that many defects occur in situations where very little context information is available, which has major implications for automatic defect detection. We also show that properties of defects depend on the digital library in which they occur, and we briefly discuss how corrected defects can be used to detect hidden or future defects. Beyond the study of defects, we show that historical metadata can be used to study the development of a digital library over time, and we present several studies as examples. First, we describe the development of the DBLP collection over a period of 15 years; specifically, we study how the coverage of different computer science subfields changed over time. We show that DBLP evolved from a specialized project into a collection that encompasses most parts of computer science. In another study, we analyze the impact of user emails on defect corrections in DBLP. We show that these emails trigger a significant amount of error corrections. Based on these data, we can draw conclusions about why users report a defective entry in DBLP.
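The basic step of identifying changes between two versions of a metadata record can be illustrated with a minimal sketch. The flat field-to-value record layout and the function name `diff_record` are assumptions for illustration only; the system described above works on full version histories and groups changes into semantically related blocks, which this sketch does not attempt.

```python
# Minimal sketch: classify per-field changes between two versions of a
# metadata record. Record layout (flat dict of field -> value) is an
# illustrative assumption, not the actual storage model of the system.

def diff_record(old: dict, new: dict) -> dict:
    """Report which fields were added, removed, or modified."""
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "modified": sorted(f for f in old.keys() & new.keys()
                           if old[f] != new[f]),
    }

# Hypothetical record versions: an author-name correction plus field edits.
v1 = {"author": "J. Smith", "title": "On Metadata", "year": "2001"}
v2 = {"author": "John Smith", "title": "On Metadata", "pages": "1-10"}
print(diff_record(v1, v2))
# -> {'added': ['pages'], 'removed': ['year'], 'modified': ['author']}
```

A correction-extraction pipeline could then filter such diffs for patterns typical of defect repairs, e.g. small edits to the author field.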
Up-to-date information about the type and spatial distribution of forests is an essential element in both sustainable forest management and environmental monitoring and modelling. The OpenStreetMap (OSM) database contains vast amounts of spatial information on natural features, including forests (landuse=forest). The OSM data model includes descriptive tags for its contents, such as the leaf type of forest areas (e.g., leaf_type=broadleaved). Although the leaf type tag is common, the vast majority of forest areas are tagged with the leaf type mixed, amounting to 87% of the total landuse=forest area in the OSM database. These areas comprise an important information source for deriving and updating forest type maps. In order to leverage this information content, a methodology for the stratification of leaf types inside these areas has been developed, using image segmentation on aerial imagery and subsequent classification of leaf types. The presented methodology achieves an overall classification accuracy of 85% for the leaf types needleleaved and broadleaved in the selected forest areas. The resulting stratification demonstrates that, through approaches such as the one presented, the derivation of forest type maps from OSM would be feasible with an extended and improved methodology. It also suggests that an improved methodology might be able to provide updates of leaf type to the OSM database with contributor participation.
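The tag-based selection that such a methodology starts from can be sketched as follows. The element dicts below are a made-up stand-in for parsed OSM data; only the tag keys and values (`landuse=forest`, `leaf_type=...`) follow the OSM tagging scheme named in the abstract.

```python
# Hedged sketch: tally leaf_type values over OSM elements tagged
# landuse=forest, to find areas needing leaf-type stratification.
# The sample elements are invented for illustration.
from collections import Counter

def leaf_type_distribution(elements):
    """Count leaf_type tag values among landuse=forest elements."""
    counts = Counter()
    for tags in elements:
        if tags.get("landuse") == "forest":
            counts[tags.get("leaf_type", "untagged")] += 1
    return counts

sample = [
    {"landuse": "forest", "leaf_type": "mixed"},
    {"landuse": "forest", "leaf_type": "broadleaved"},
    {"landuse": "forest"},   # no leaf_type tag at all
    {"landuse": "meadow"},   # not a forest area, ignored
]
print(leaf_type_distribution(sample))
```

In a real workflow, the elements tagged mixed (or untagged) would be passed on to the segmentation and classification stage.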
Global human population growth is associated with many problems, such as food and water provision, political conflicts, the spread of diseases, and environmental destruction. The mitigation of these problems is mirrored in several global conventions and programs, some of which, however, are conflicting. Here, we discuss the conflicts between biodiversity conservation and disease eradication. Numerous health programs aim at eradicating pathogens, and many focus on the eradication of vectors, such as mosquitos or other parasites. As a case study, we focus on the "Pan African Tsetse and Trypanosomiasis Eradication Campaign," which aims at eradicating a pathogen (Trypanosoma) as well as its vector, the entire group of tsetse flies (Glossinidae). As the distribution of tsetse flies largely overlaps with the African hotspots of freshwater biodiversity, we argue for a strong consideration of environmental issues when applying vector control measures, especially the aerial application of insecticides. Furthermore, we want to stimulate discussions on the value of species and whether the full eradication of a pathogen or vector is justified at all. Finally, we call for a stronger harmonization of international conventions. Proper environmental impact assessments need to be conducted before control or eradication programs are carried out, to minimize negative effects on biodiversity.
The development of our society has contributed to the increased occurrence of emerging substances (pesticides, pharmaceuticals, personal care products, etc.) in wastewater. Because of their potential hazard to ecosystems and humans, Wastewater Treatment Plants (WWTPs) need to adapt to better remove these compounds. Technology and policy development should, however, comply with sustainable development, e.g. based on Life Cycle Assessment (LCA) metrics. Nevertheless, the reliability and consistency of LCA results can sometimes be debatable. The main objective of this work was to explore how LCA can better support the implementation of innovative wastewater treatment options, in particular by including removal benefits. The method was applied to support solutions for the elimination of pharmaceuticals from wastewater, regarding: (i) UV technology design, (ii) the choice of advanced technology, and (iii) centralized or decentralized treatment policy. The assessment approach followed by previous authors, based on net impact calculation, seemed very promising for considering both the environmental effects induced by treatment plant operation and the environmental benefits obtained from pollutant removal. It was therefore applied to compare UV configuration types, and the LCA outcomes were consistent with the degradation kinetics analysis. For the comparison of advanced technologies and policy scenarios, the common practice (net impacts based on the EDIP method) was compared to other assessments to better account for elimination benefits. First, the USEtox consensus model was applied for the avoided (eco)toxicity impacts, in combination with the more recent ReCiPe method for generated impacts. Then, an eco-efficiency indicator (EFI) was developed to weigh the treatment efforts (generated impacts based on the EDIP and ReCiPe methods) by the average removal efficiency, overcoming (eco)toxicity uncertainty issues.
In total, the four types of comparative assessment showed the same trends: (i) ozonation and activated carbon perform better than UV irradiation, and (ii) no policy scenario showed a clear advantage. It cannot, however, be concluded that advanced treatment of pharmaceuticals is unnecessary, because other criteria should be considered (risk assessment, bacterial resistance, etc.) and large uncertainties were embedded in the calculations. Indeed, a significant part of this work was dedicated to discussing the uncertainty and limitations of the LCA outcomes. At the inventory level, it was difficult to model technology operation at the development stage. For impact assessment, the newly developed characterization factors for pharmaceutical (eco)toxicity showed large uncertainties, mainly due to the lack of data and the quality of toxicity tests. The use of information made available under the REACH framework to develop CFs for detergent ingredients attempted to cope with this issue, but the benefits were limited due to the mismatch of information between REACH and the USEtox method. The highlighted uncertainties were treated with sensitivity analyses to understand their effects on the LCA results. This research work finally presents perspectives on the use of transparently generated data (technology inventory and (eco)toxicity factors) and on the further development of the EFI indicator. Emphasis is also placed on increasing the reliability of LCA outcomes, in particular through the implementation of advanced techniques for uncertainty management. To conclude, innovative technology and product development (e.g. based on a circular economy approach) needs the involvement of all types of actors and the support of sustainability metrics.
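The idea of weighing treatment efforts by removal efficiency can be illustrated with a toy ratio. The exact formula of the EFI is not reproduced in the abstract, so the ratio below and all input numbers are assumptions for illustration only, not the indicator as developed in the thesis.

```python
# Illustrative eco-efficiency style score: generated life-cycle impact
# weighed by average pollutant removal efficiency (lower is better).
# Formula and numbers are assumptions for illustration, not the EFI itself.

def eco_efficiency(generated_impact: float, removal_efficiencies: list) -> float:
    """Impact per unit of average removal; undefined if nothing is removed."""
    avg_removal = sum(removal_efficiencies) / len(removal_efficiencies)
    if avg_removal <= 0:
        raise ValueError("treatment removes nothing; indicator undefined")
    return generated_impact / avg_removal

# Hypothetical comparison: ozonation vs. UV, impacts in arbitrary units.
ozonation = eco_efficiency(2.0, [0.90, 0.80, 0.85])  # higher removal
uv = eco_efficiency(3.0, [0.60, 0.50, 0.55])         # lower removal
print(ozonation < uv)  # ozonation scores better under these made-up inputs
```

Such a ratio sidesteps (eco)toxicity characterization uncertainty by using measured removal efficiencies instead of avoided-impact estimates, which matches the motivation stated above.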
Flexibility and spatial mobility of labour are central characteristics of modern societies, contributing not only to higher overall economic growth but also to a reduction of interregional employment disparities. For these reasons, there is the political will in many countries to expand labour market areas, resulting especially in an overall increase in commuting. The picture of the various unintended long-term consequences of commuting for individuals is, however, relatively unclear. In recent years, the journey to work has therefore gained considerable attention, especially in the study of health and well-being. Empirical analyses based on longitudinal as well as European data on how commuting may affect health and well-being are nevertheless rare. The principal aim of this thesis is thus to address this question with regard to Germany, using data from the Socio-Economic Panel. Chapter 2 empirically investigates the causal impact of commuting on absence from work due to sickness. Whereas an exogenous change in commuting distance does not affect the number of absence days of individuals who commute short distances to work, it increases the number of absence days of employees who commute middle (25 to 49 kilometres) or long distances (50 kilometres and more). Moreover, our results highlight that commuting may deteriorate an individual's health; however, this effect is not sufficient to explain the observed impact of commuting on absence from work. Chapter 3 explores the relationship between commuting distance and height-adjusted weight and sheds some light on the mechanisms through which commuting might affect individual body weight. We find no evidence that commuting leads to excess weight. Compensating health behaviour of commuters, especially healthy dietary habits, could explain the absence of a relationship between commuting and height-adjusted weight.
In Chapter 4, a multivariate probit approach is used to estimate recursive systems of equations for commuting and health-related behaviours. Controlling for potential endogeneity of commuting, the results show that long distance commutes significantly decrease the propensity to engage in health-related activities. Furthermore, unobservable individual heterogeneity can influence both the decision to commute and healthy lifestyle choices. Chapter 5 investigates the relationship between commuting and several cognitive and affective components of subjective well-being. The results suggest that commuting is related to lower levels of satisfaction with family life and leisure time which can largely be ascribed to changes in daily time use patterns, influenced by the work commute.
Background: The growing production and use of engineered silver nanoparticles (AgNP) in industry and private households make increasing concentrations of AgNP in the environment unavoidable. Although the harmful effects of AgNP on pivotal bacteria-driven soil functions are already known, information about the impact of AgNP on soil bacterial community structure is scarce. Hence, the aim of this study was to reveal the long-term effects of AgNP on major soil bacterial phyla in a loamy soil. The study was conducted as a laboratory incubation experiment over a period of 1 year using a loamy soil and AgNP concentrations ranging from 0.01 to 1 mg AgNP/kg soil. Effects were quantified using taxon-specific 16S rRNA qPCR.
Results: Short-term exposure to AgNP at the environmentally relevant concentration of 0.01 mg AgNP/kg caused significant positive effects on Acidobacteria (44.0%), Actinobacteria (21.1%) and Bacteroidetes (14.6%), whereas the beta-Proteobacteria population was reduced by 14.2% relative to the control (p ≤ 0.05). After 1 year, exposure to 0.01 mg AgNP/kg diminished Acidobacteria (p = 0.007), Bacteroidetes (p = 0.005) and beta-Proteobacteria (p < 0.001) by 14.5, 10.1 and 13.9%, respectively. Actinobacteria and alpha-Proteobacteria were statistically unaffected by the AgNP treatments after 1 year of exposure. Furthermore, a statistically significant regression and correlation analysis of silver toxicity against exposure time confirmed loamy soils as a sink for silver nanoparticles and their concomitant silver ions.
Conclusions: Even very low concentrations of AgNP may impair autotrophic ammonia oxidation (nitrification), organic carbon transformation and chitin degradation in soils by exerting harmful effects on the bacterial phyla responsible for these functions.
Long-Term Memory Updating: The Reset-of-Encoding Hypothesis in List-Method Directed Forgetting
(2017)
People's memory for new information can be enhanced by cuing them to forget older information, as shown in list-method directed forgetting (LMDF). In this task, people are cued to forget a previously studied list of items (list 1) and to learn a new list of items (list 2) instead. Such cuing typically enhances memory for the list 2 items and reduces memory for the list 1 items, which reflects effective long-term memory updating. This review focuses on the reset-of-encoding (ROE) hypothesis as a theoretical explanation of the list 2 enhancement effect in LMDF. The ROE hypothesis is based on the finding that encoding efficacy typically decreases with the number of encoded items; it assumes that providing a forget cue after the study of some items (e.g., list 1) resets the encoding process and makes the encoding of subsequent items (e.g., early list 2 items) as effective as the encoding of previously studied (e.g., early list 1) items. The review provides an overview of current evidence for the ROE hypothesis, drawn from recent behavioral, neuroscientific, and modeling studies that examined LMDF at both the item and the list level. The findings support the view that ROE plays a critical role in the list 2 enhancement effect in LMDF. Alternative explanations of the effect and the generalizability of ROE to other experimental tasks are discussed.
Food waste is the origin of major social and environmental issues. In industrial societies, domestic households are the biggest contributors to this problem. But why do people waste food although they buy and value it? Answering this question is essential for designing effective interventions against food waste. So far, however, many interventions have not been based on theoretical knowledge. Integrating the food waste literature and ambivalence research, we propose that domestic food waste can be understood via the concept of ambivalence, the simultaneous presence of positive and negative associations towards the same attitude object. In support of this notion, we demonstrated in three pre-registered experiments that people experienced ambivalence towards non-perishable food products with expired best-before dates. The experience of ambivalence was in turn associated with an increased willingness to waste food. However, two informational interventions aiming to prevent people from experiencing ambivalence did not work as intended (Experiment 3). We hope that the outlined conceptualization inspires theory-driven research on why and when people dispose of food and on how to design effective interventions.
This thesis considers the general task of computing a partition of a set of given objects such that each set of the partition has a cardinality of at least a fixed number k. Among such partitions, which we call k-clusters, the objective is to find the k-cluster that minimises a certain cost derived from a given pairwise difference between objects that end up in the same set. As a first step, this thesis introduces a general problem, denoted by (||.||,f)-k-cluster, which models the task of finding a k-cluster of minimum cost given by an objective function computed with respect to specific choices for the cost functions f and ||.||. In particular, this thesis considers three different choices for f and three different choices for ||.||, resulting in a total of nine variants of the general problem. With the idea of using the concept of parameterised approximation, we first investigate the role of the lower bound on the cluster cardinalities and find that k is not a suitable parameter, due to NP-hardness remaining even when k is restricted to the constant 3. The reductions presented to show this hardness yield the even stronger result that polynomial-time approximations with some constant performance ratio for any of the nine variants of (||.||,f)-k-cluster require a restriction to instances in which the pairwise distance on the objects satisfies the triangle inequality. For this restriction to what we informally refer to as metric instances, constant-factor approximation algorithms for eight of the nine variants of (||.||,f)-k-cluster are presented. While two of these algorithms yield the provably best approximation ratio (assuming P != NP), others can only guarantee a performance that depends on the lower bound k. With the positive effect of the triangle inequality and applications to facility location in mind, we discuss the further restriction to the setting where the given objects are points in the Euclidean metric space.
Considering the computational hardness caused by high-dimensional input for other related problems (the curse of dimensionality), we check whether this is also the source of intractability for (||.||,f)-k-cluster. However, the problem remains NP-hard even when restricted to small constant dimensionality, which disproves this theory. We then use parameterisation to develop approximation algorithms for (||.||,f)-k-cluster without restriction to metric instances. In particular, we discuss structural parameters which reflect how much the given input differs from a metric. This idea results in parameterised approximation algorithms with parameters such as the number of conflicts (our name for pairs of objects for which the triangle inequality is violated) or the number of conflict vertices (objects involved in a conflict). The performance ratios of these parameterised approximations are in most cases identical to those of the approximations for metric instances. This shows that for most variants of (||.||,f)-k-cluster, efficient and reasonable solutions are also possible for non-metric instances.
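The structural parameters mentioned above have a direct computational reading. The following sketch (illustrative only, not the thesis's implementation) counts conflict pairs and conflict vertices in a given symmetric distance matrix:

```python
from itertools import combinations

def conflicts(d):
    """Count conflict pairs and conflict vertices in a symmetric
    distance matrix d: a pair {i, j} is a conflict if some detour via
    a third object k is shorter than the direct distance d[i][j]."""
    pairs, vertices = set(), set()
    n = len(d)
    for i, j in combinations(range(n), 2):
        if any(d[i][j] > d[i][k] + d[k][j]
               for k in range(n) if k not in (i, j)):
            pairs.add((i, j))
            vertices.update((i, j))
    return len(pairs), vertices
```

A metric instance yields `(0, set())`; positive counts measure how far an instance is from being metric, which is exactly what these parameterised approximations exploit.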
The study at hand deals with madness as it is represented in English Canadian fiction. The topic is particularly interesting and fruitful for analysis because the ways madness has been defined, understood, described, judged and handled differ quite profoundly from society to society and from era to era, and because the language, ideas and associations surrounding insanity are both strongly culture-relative and shifting; madness as a theme of myth and literature has therefore always been an excellent vehicle to mirror the assumptions and arguments, the aspirations and nostalgia, the beliefs and values, hopes and fears of its age and society. Thus, while the overall intent of this study is to elucidate some discernible patterns of structure and style which accompany the use of madness in Canadian literature, to investigate the varying sorts of portrayal and the conventions of presentation, to interpret the use of madness as a literary device and to highlight the different statements which are made, the continuity, variation, and changes in the theme of madness provide an informing principle in terms of certain Canadian experiences and perceptions. By examining madness as it is represented in Canadian literature and considering the respective explorations of the deranged mind within their historical context, I hope to demonstrate that literary interpretations of madness both reflect and question cultural, political, religious and psychological assumptions of their times and that certain symptoms or usages are characteristic of certain periods. Such an approach, it is hoped, might not only contribute towards an assessment of the wealth of associations which surround madness and the ambivalence with which it is viewed, but also shed some light on the Canadian imagination. As such, this study can be considered not only a history of literary madness, but also a history of Canadian society and the Canadian mind.
In recent years, the establishment of new makerspaces in Germany has increased significantly. The underlying phenomenon of the Maker Movement is a cultural and technological movement focused on making physical and digital products using open-source principles, collaborative production, and individual empowerment. Because of its potential to democratize the innovation and production process, empower individuals and communities, and enable innovators to solve problems at the local level, the Maker Movement has received considerable attention in recent years. Despite numerous indicators of its growth, little is known about the phenomenon and its individual members, especially in Germany. Initial research suggests that the Maker Movement holds great potential for innovation and entrepreneurship. However, there is still a gap in understanding how makers discover, evaluate and exploit entrepreneurial opportunities. Moreover, there is still controversy - both among policy makers and within the maker community itself - about the impact the Maker Movement has and can have on innovation and entrepreneurship in the future. This dissertation uses a mixed-methods approach to explore these questions. In addition to a quantitative analysis of maker characteristics, the results show that social impact, market size, and property rights have significant effects on the evaluation of entrepreneurial opportunities. The findings of this dissertation expand research in the field of the Maker Movement and offer multiple implications for practice. This dissertation provides the first quantitative data on makers in makerspaces in Germany, their characteristics and motivations. In particular, the relationship between the Maker Movement and entrepreneurship is explored in depth for the first time. This is complemented by the presentation of different identity profiles of the individuals involved.
In this way, policy-makers can develop a better understanding of the movement, its personalities and values, and consider them in initiatives and formats.
Service innovation has increasingly been acknowledged as a contributor to economic growth and well-being. Despite this increased relevance in practice, service innovation is still a developing research field. To advance the literature on service innovation, this work uses a qualitative study to analyze how firms manage service innovation activities in their organization differently. In addition, it evaluates the influence of top management commitment and corporate service innovativeness on a firm's service innovation capabilities and their implications for firm-level performance by conducting a quantitative study. Accordingly, the main overall research questions of this dissertation are: 1.) How and why do firms manage service innovation activities in their organization differently? 2.) What influence do top management commitment and corporate service innovativeness have on a firm's service innovation capabilities, and what are the implications for firm-level performance? To address the first research question, the way firms manage service innovation activities in their organization is investigated, as well as by whom and how service innovations are developed. Moreover, it is examined why firms implement their service innovation activities differently. To this end, a qualitative empirical study was conducted, comprising 22 semi-structured interviews with 15 firms in the construction, financial services, IT services, and logistics sectors. Addressing the second research question, the aim is to improve the understanding of the factors that enhance firm-level performance through service innovations. Deploying a dynamic capabilities perspective, a quantitative study is performed which underlines the importance of service innovation capabilities.
More specifically, a theoretical framework is developed that proposes a positive relationship of top management commitment and corporate service innovativeness with service innovation capabilities, and a positive relationship between service innovation capabilities and the firm-level performance indicators market performance, competitive advantage, and efficiency. A survey with two respondents each from 87 companies in the construction, financial services, IT services, and logistics sectors was conducted to test the proposed theoretical framework by applying partial least squares structural equation modeling (PLS-SEM).
Matching problems with additional resource constraints are generalizations of the classical matching problem. The focus of this work is on matching problems with two types of additional resource constraints: the couple constrained matching problem and the level constrained matching problem. The first is a matching problem with an additional set of equality constraints imposed on it. Each constraint demands that, for a given pair of edges, either both edges are in the matching or neither of them is. The second is a matching problem with a single additional equality constraint. This constraint demands that an exact number of edges in the matching are so-called on-level edges. In a bipartite graph with fixed indices of the nodes, these are the edges whose end-nodes have the same index. As a central result concerning the couple constrained matching problem, we prove that this problem is NP-hard, even on bipartite cycle graphs. Concerning the complexity of the level constrained perfect matching problem, we show that it is polynomially equivalent to three other combinatorial optimization problems from the literature. For different combinations of fixed and variable parameters of one of these problems, the restricted perfect matching problem, we investigate their effect on the complexity of the problem. Further, the complexity of the assignment problem with an additional equality constraint is investigated. In a central part of this work we bring couple constraints into connection with a level constraint. We introduce the couple and level constrained matching problem with on-level couples, which is a matching problem with a special case of couple constraints together with a level constraint imposed on it. We prove that the decision version of this problem is NP-complete. This shows that a level constraint can suffice to make a polynomially solvable problem NP-hard when imposed on it.
This work also deals with the polyhedral structure of resource constrained matching problems. For the polytope corresponding to the relaxation of the level constrained perfect matching problem we develop a characterization of its non-integral vertices. We prove that for any given non-integral vertex of the polytope a corresponding inequality which separates this vertex from the convex hull of integral points can be found in polynomial time. Regarding the calculation of solutions of resource constrained matching problems, two new algorithms are presented. We develop a polynomial approximation algorithm for the level constrained matching problem on level graphs, which returns solutions whose size is at most one less than the size of an optimal solution. We then describe the Objective Branching Algorithm, a new algorithm for exactly solving the perfect matching problem with an additional equality constraint. The algorithm makes use of the fact that the weighted perfect matching problem without an additional side constraint is polynomially solvable. In the Appendix, experimental results of an implementation of the Objective Branching Algorithm are listed.
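To make the role of the single equality constraint concrete, here is a hypothetical brute-force baseline (not the Objective Branching Algorithm itself, which instead exploits the polynomial solvability of unconstrained weighted perfect matching): it enumerates all perfect matchings of a tiny complete graph and keeps the cheapest one whose resource values sum to the budget exactly.

```python
def perfect_matchings(nodes):
    """Yield all perfect matchings of an even-sized node list."""
    if not nodes:
        yield []
        return
    u = nodes[0]
    for v in nodes[1:]:
        rest = [x for x in nodes if x not in (u, v)]
        for m in perfect_matchings(rest):
            yield [(u, v)] + m

def constrained_min_matching(w, r, budget):
    """Minimum-weight perfect matching subject to the side constraint
    that the matched edges' resource values (matrix r) sum to exactly
    `budget`. Brute force; feasible for tiny graphs only."""
    best = None
    for m in perfect_matchings(list(range(len(w)))):
        if sum(r[u][v] for u, v in m) == budget:
            cost = sum(w[u][v] for u, v in m)
            if best is None or cost < best[0]:
                best = (cost, m)
    return best
```

With r marking, say, on-level edges with 1 and all others with 0, the budget becomes the prescribed number of on-level edges from the level constraint.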
This work is concerned with the numerical solution of optimization problems that arise in the context of ground water modeling. Both ground water hydraulic and quality management problems are considered. The considered problems are discretized problems of optimal control that are governed by discretized partial differential equations. Aspects of special interest in this work are inaccurate function evaluations and the ensuing numerical treatment within an optimization algorithm. Methods for noisy functions are appropriate for the considered practical application. Also, block preconditioners are constructed and analyzed that exploit the structure of the underlying linear system. Specifically, KKT systems are considered, and the preconditioners are tested for use within Krylov subspace methods. The project was financed by the foundation Stiftung Rheinland-Pfalz für Innovation and carried out in joint work with TGU GmbH, a company of consulting engineers for ground water and water resources.
Even though in most cases time is a good metric to measure the cost of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data that is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load the data from memory. The topic of this thesis is a new metric to measure the cost of algorithms, called memory distance, which can be seen as an abstraction of the aspect just mentioned. We will show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we will show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Further, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory distance complexity classes.
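The thesis's precise definition of memory distance is not reproduced in this abstract; as an illustration of the underlying idea, the classical reuse distance of a memory-access trace captures the same cache-locality intuition:

```python
def reuse_distances(trace):
    """For each access in a memory-address trace, the number of
    distinct addresses touched since the previous access to the same
    address (None for a first access, i.e. a cold miss). Small values
    mean the data is likely still in the cache."""
    last_seen, out = {}, []
    for pos, addr in enumerate(trace):
        if addr in last_seen:
            out.append(len(set(trace[last_seen[addr] + 1:pos])))
        else:
            out.append(None)
        last_seen[addr] = pos
    return out
```

A cache of capacity C serves every access with reuse distance below C from fast memory, which is why such a metric can predict running-time behaviour that pure operation counts miss.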
Exposure to fine and ultra-fine environmental particles is still a problem of concern in many industrialized parts of the world, and the intensified use of nanotechnology may further increase exposure to small particles. Air pollution has been recognized as a critical problem in western countries for many years, which has led to rigorous regulation of air quality and the introduction of strict guidelines. However, the upper thresholds for particulates in ambient air recommended by the World Health Organization are often exceeded several times over in newly industrialized countries. Such high levels of air pollution have the potential to induce adverse effects on human health. The response triggered by air pollutants is not limited to local effects in the respiratory system but is often systemic, resulting in endothelial dysfunction or atherosclerotic disease. The link between air pollution and cardiovascular disease is now accepted by the scientific community, but the underlying mechanisms responsible for the pro-atherogenic potential still need to be unraveled in detail. Based on the results from in vivo and in vitro studies, the production of reactive oxygen species due to particle exposure is the most important mechanism for explaining the observed adverse effects. However, the doses applied in many in vivo and in vitro studies are far beyond the range of what humans are exposed to, and there is a need for more realistic exposure studies. Complex in vitro coculture systems may be valuable tools to study particle-induced processes and to extrapolate effects of particles on the lung. One of the objectives of this PhD thesis was the establishment and further improvement of a complex coculture system initially described by Alfaro-Moreno et al. [1].
The system is composed of an alveolar type-II cell line (A549), differentiated macrophage-like cells (THP-1), mast cells (HMC-1) and endothelial cells (EA.hy 926), seeded in a 3D orientation on a microporous membrane to mimic the cell response of the alveolar surface in vitro in conjunction with native aerosol exposure (Vitrocell™ chamber). The tetraculture system was carefully characterized to ensure its performance and the repeatability of results. The spatial distribution of the cells in the tetraculture was analyzed by confocal laser scanning microscopy (CLSM), showing a confluent layer of endothelial and epithelial cells on both sides of the Transwell™. Macrophage-like cells and mast cells can be found on top of the epithelial cells. The latter cells formed colonies under submerged conditions, which disappeared at the air-liquid interface (ALI). The Vitrocell™ aerosol exposure system did not significantly influence cell viability. Using this system, cells were exposed to an aerosol of 50 nm SiO2-rhodamine nanoparticles (NPs) in PBS. The distribution of the NPs in the tetraculture after exposure was evaluated by CLSM. Fluorescence from internalized particles was detected in CD11b-positive THP-1 cells only. Furthermore, all cell lines were found to be able to respond to xenobiotic model compounds, such as benzo[a]pyrene (B[a]P) or 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), with the upregulation of CYP1 mRNA. With this tetraculture system, the response of the endothelial part of the alveolar barrier was studied in vitro in a realistic exposure scenario representing the conditions of a polluted situation without direct exposure of endothelial cells.
After exposure to diesel exhaust particulate matter (DEPM), the expression of different anti-oxidant target genes and inflammatory genes such as NAD(P)H dehydrogenase quinone 1 (NQO1), superoxide dismutase 1 (SOD1) and heme oxygenase 1 (HMOX1), as well as the nuclear translocation of nuclear factor erythroid 2-related factor 2 (Nrf2), was evaluated. In addition, the potential of DEPM to induce the upregulation of CYP1A1 mRNA in the endothelium was analyzed. DEPM exposure did not lead to an upregulation of the anti-oxidant or inflammatory target genes, but it did lead to a clear nuclear translocation of Nrf2. The endothelial cells also responded to the DEPM treatment with the upregulation of CYP1A1 mRNA and nuclear translocation of the aryl hydrocarbon receptor (AhR). Overall, DEPM triggered a response in the endothelial cells after indirect exposure of the tetraculture system to low doses of DEPM, underlining the sensitivity of ALI exposure systems. The use of the tetraculture together with the native aerosol exposure equipment may finally lead to a more realistic judgment of the hazard of new compounds and/or new nano-scaled materials in the future. For the first time, it was possible to study the response of the endothelial cells of the alveolar barrier in vitro in a realistic exposure scenario while avoiding direct exposure of endothelial cells to high amounts of particulates.
Survey data can be viewed as incomplete or partially missing from a variety of perspectives and there are different ways of dealing with this kind of data in the prediction and the estimation of economic quantities. In this thesis, we present two selected research contexts in which the prediction or estimation of economic quantities is examined under incomplete survey data.
These contexts are first the investigation of composite estimators in the German Microcensus (Chapters 3 and 4) and second extensions of multivariate Fay-Herriot (MFH) models (Chapters 5 and 6), which are applied to small area problems.
Composite estimators are estimation methods that take into account the sample overlap in rotating panel surveys such as the German Microcensus in order to stabilise the estimation of the statistics of interest (e.g. employment statistics). Due to the partial sample overlaps, information from previous samples is only available for some of the respondents, so the data are partially missing.
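As an illustration of the composite idea (with hypothetical weights, not the Microcensus's actual estimator), a composite estimate can blend last wave's estimate, updated by the change observed on the overlapping respondents, with the current direct estimate:

```python
def composite_estimate(direct, prev, overlap_now, overlap_prev, alpha=0.5):
    """Composite estimator sketch for a rotating panel: the change
    measured on the overlapping respondents updates last wave's
    estimate `prev`, which is then blended with the current direct
    estimate. `alpha` is an illustrative compromise weight."""
    delta = (sum(overlap_now) / len(overlap_now)
             - sum(overlap_prev) / len(overlap_prev))
    return alpha * (prev + delta) + (1 - alpha) * direct
```

Because the update term only needs the overlapping part of the two samples, the non-overlapping (i.e. partially missing) respondents do not block the estimation; they simply contribute through the direct estimate alone.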
MFH models are model-based estimation methods that work with aggregated survey data in order to obtain more precise estimation results for small area problems compared to classical estimation methods. In these models, several variables of interest are modelled simultaneously. The survey estimates of these variables, which are used as input in the MFH models, are often partially missing. If the domains of interest are not explicitly accounted for in a sampling design, the sizes of the samples allocated to them can, by chance, be small. As a result, it can happen that either no estimates can be calculated at all or that the estimated values are not published by statistical offices because their variances are too large.
Industrial companies mainly aim to increase their profit, which is why they intend to reduce production costs without sacrificing quality. Furthermore, in the context of the 2020 energy targets, energy efficiency plays a crucial role. Mathematical modeling, simulation and optimization tools can contribute to the achievement of these industrial and environmental goals. The process of white wine fermentation holds a huge potential for saving energy. In this thesis, mathematical modeling, simulation and optimization tools are customized to the needs of this biochemical process and applied to it. Two different models are derived that represent the process as it can be observed in real experiments. One model takes the growth, division and death behavior of the single yeast cell into account. This is modeled by a partial integro-differential equation together with several additional ordinary integro-differential equations describing the development of the other substrates involved. The other model, described by ordinary differential equations, represents the growth and death behavior of the yeast concentration and the development of the other substrates involved. The more detailed model is investigated analytically and numerically: existence and uniqueness of solutions are studied and the process is simulated. These investigations initiate a discussion of the additional benefit of this model compared to the simpler one. For optimization, the process is described by the less detailed model. The process is identified by a parameter and state estimation problem. The energy and quality targets are formulated in the objective function of an optimal control or model predictive control problem that controls the fermentation temperature; that is, cooling during the process of wine fermentation is controlled. Parameter and state estimation with nonlinear economic model predictive control is applied in two experiments.
For the first experiment, the optimization problems are solved by multiple shooting with a backward differentiation formula method for the discretization of the problem and a sequential quadratic programming method with a line search strategy and a Broyden-Fletcher-Goldfarb-Shanno update for the solution of the constrained nonlinear optimization problems. Different rounding strategies are applied to the resulting post-fermentation control profile. Furthermore, a quality assurance test is performed. The outcomes of this experiment are remarkable energy savings and tasty wine. For the next experiment, some modifications are made, and the optimization problems are solved by using direct transcription via orthogonal collocation on finite elements for the discretization and an interior-point filter line-search method for the solution of the constrained nonlinear optimization problems. The second experiment verifies the results of the first experiment. This means that by the use of this novel control strategy energy conservation is ensured and production costs are reduced. From now on tasty white wine can be produced at a lower price and with a clearer conscience at the same time.
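The character of such fermentation dynamics can be illustrated with a toy Monod-type ODE model integrated by an explicit Euler step; the equations and all parameter values below are illustrative assumptions, not the models derived in the thesis:

```python
def simulate_fermentation(x0, s0, mu_max, k_s, y_xs, dt, steps):
    """Explicit-Euler integration of a toy Monod model:
        dX/dt = mu(S) * X,   dS/dt = -mu(S) * X / Y_xs,
    with mu(S) = mu_max * S / (K_s + S).
    X: yeast concentration, S: sugar; all values are illustrative."""
    x, s = x0, s0
    for _ in range(steps):
        mu = mu_max * s / (k_s + s)
        # tuple assignment: both derivatives use the old state
        x, s = x + mu * x * dt, max(s - mu * x / y_xs * dt, 0.0)
    return x, s
```

In a real optimal-control setting, mu_max would additionally depend on the controlled fermentation temperature, which is what makes cooling a control variable.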
Monetary Policy During Times of Crisis - Frictions and Non-Linearities in the Transmission Mechanism
(2017)
For a long time it was believed that monetary policy would be able to maintain price stability and foster economic growth during all phases of the business cycle. The era of the Great Moderation, often also called the Volcker-Greenspan period, beginning in the mid-1980s, was characterized by a decline in the volatility of output growth and inflation among the industrialized countries. The term itself was first used by Stock and Watson (2003). Economists have long studied what triggered the decline in volatility and pointed out several main factors. An important research strand points to structural changes in the economy, such as a decline of volatility in the goods-producing sector through better inventory controls and developments in the financial sector and government spending (McConnell 2000, Blanchard 2001, Stock 2003, Kim 2004, Davis 2008). While many believed that monetary policy was only 'lucky' in its reaction towards inflation and exogenous shocks (Stock 2003, Primiceri 2005, Sims 2006, Gambetti 2008), others reveal a more complex picture. Rule-based monetary policy (Taylor 1993) that incorporates inflation targeting (Svensson 1999) has been identified as a major source of inflation stabilization by increasing transparency (Clarida 2000, Davis 2008, Benati 2009, Coibion 2011). Apart from that, the mechanics of monetary policy transmission have changed. Giannone et al. (2008) compare the pre-Great Moderation era with the Great Moderation and find that the economy's reaction towards monetary shocks has decreased. This finding is supported by Boivin et al. (2011). Similarly, Herrera and Pesavento (2009) show that monetary policy during the Volcker-Greenspan period was very effective in dampening the effects of exogenous oil price shocks on the economy, while this cannot be found for the period thereafter. Yet, the subprime crisis unexpectedly hit economies worldwide and ended the era of the Great Moderation.
Financial deregulation and innovation gave banks opportunities for excessive risk taking, weakened financial stability (Crotty 2009, Calomiris 2009) and led to the build-up of credit-driven asset price bubbles (Schularick and Taylor 2012). The Federal Reserve (Fed), which was thought to be the omnipotent conductor of price stability and economic growth during the Great Moderation, failed to prevent a harsh crisis. Even more, it intensified the bubble with low interest rates following the dotcom crisis of the early 2000s and misjudged the impact of its interventions (Taylor 2009, Obstfeld 2009). New results give a more detailed answer to the question of the latitude for monetary policy raised by Bernanke and suggest the existence of non-linearities in the transmission of monetary policy. Weise (1999), Garcia and Schaller (2002), Lo and Piger (2005), Mishkin (2009), Neuenkirch (2013) and Jannsen et al. (2015) find that monetary policy is more potent during times of financial distress and recessions. Its effectiveness during 'normal times' is much weaker or even insignificant. This prompts the question of whether these non-linearities limit central banks' ability to lean against bubbles and financial imbalances (White 2009, Walsh 2009, Boivin 2010, Mishkin 2011).
The argan woodlands of South Morocco represent an open-canopy dryland forest with traditional silvopastoral usage that includes browsing by goats, sheep and camels, oil production as well as agricultural use. In the past, these forests have undergone extensive clearing, but are now protected by the state. However, the remaining argan woodlands are still under pressure from intensive grazing and illegal firewood collection. Although the argan-forest area seems to be decreasing overall due to large forest clearings for intensive agriculture, little quantitative data is available on the dynamics and overall state of the remaining argan forest. To determine how the argan woodlands in the High Atlas and the Anti-Atlas changed in tree-crown cover from 1972 to 2018, we used historical black-and-white HEXAGON satellite images as well as recent WorldView satellite images (see Part A of our study). Because tree shadows often cannot be separated from the tree crown on panchromatic satellite images, individual trees were mapped in three size categories to determine whether trees were unchanged, had decreased or increased in crown size, or had disappeared or newly grown. The current state of the argan trees was evaluated by mapping tree architectures in the field. Tree-cover changes varied highly between the test sites. Trees that remained unchanged between 1972 and 2018 were in the majority, while tree mortality and tree establishment were nearly even. Small unchanged trees made up 48.4% of all remaining trees; of these, 51% showed degraded tree architectures. 40% of small (re-)grown trees were so overbrowsed that they only appeared as bushes, while medium (3–7 m crown diameter) and large trees (>7 m) showed less degradation regardless of whether they had changed or not. Approaches such as grazing exclusion or cereal cultivation have a positive influence on tree architecture and reduce tree-cover decrease.
Although the woodland was found to be mostly unchanged from 1972 to 2018, the analysis of tree architecture reveals that many (mostly small) trees remained stable but in a degraded state. This stability might be the result of the small trees' high degradation status and shows the heavy pressure on the argan forest.
Arctic and Antarctic polynya systems are of high research interest since extensive new ice formation takes place in these regions. The monitoring of polynyas and of ice production is crucial with respect to the changing sea-ice regime. The thin-ice thickness (TIT) distribution within polynyas controls the amount of heat that is released to the atmosphere and therefore has an impact on the ice-production rates. This thesis presents an improved method to retrieve thermal-infrared thin-ice thickness distributions within polynyas. TIT with a spatial resolution of 1 km × 1 km is calculated using the MODIS ice-surface temperature and atmospheric model variables within the Laptev Sea polynya for the winter periods 2007/08 and 2008/09. The improvement of the algorithm focuses on the surface-energy flux parameterizations. Furthermore, a thorough sensitivity analysis is applied to quantify the uncertainty in the thin-ice thickness results. An absolute mean uncertainty of ±4.7 cm for ice below 20 cm thickness is calculated. Furthermore, the advantages and drawbacks of using different atmospheric data sets are investigated. Daily MODIS TIT composites are computed to fill the data gaps arising from clouds and shortwave radiation. The resulting maps cover on average 70 % of the Laptev Sea polynya. An intercomparison of MODIS and AMSR-E polynya data indicates that spatial resolution is essential for accurately deriving polynya characteristics. Monthly fast-ice masks are generated using the daily TIT composites. These fast-ice masks are implemented into the coupled sea-ice/ocean model FESOM. An evaluation of FESOM sea-ice concentrations shows that a prescribed high-resolution fast-ice mask is necessary for an accurate polynya location. However, for a more realistic simulation of other small-scale sea-ice features, further model improvements are required.
The retrieval of daily high-resolution MODIS TIT composites is an important step towards a more precise monitoring of thin sea ice and sea-ice production. Future work will address a combined remote-sensing and model-assimilation method to simulate fully covered thin-ice thickness maps that enable the retrieval of accurate ice-production values.
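The basic inversion behind thermal-infrared TIT retrieval can be sketched as a steady-state energy balance in which the conductive flux through the ice matches the net atmospheric heat loss; the formula and the conductivity constant below are a textbook simplification, not the thesis's full surface-energy flux parameterization:

```python
ICE_CONDUCTIVITY = 2.03  # W m^-1 K^-1, a textbook value for pure ice

def thin_ice_thickness(t_surface, t_freezing, q_net):
    """Steady-state energy-balance inversion: the conductive flux
    k * (T_f - T_s) / h through thin ice balances the net atmospheric
    heat loss q_net (W m^-2), so h = k * (T_f - T_s) / q_net.
    Temperatures in deg C; valid only for thin ice, where a linear
    in-ice temperature profile is a reasonable approximation."""
    return ICE_CONDUCTIVITY * (t_freezing - t_surface) / q_net
```

The sensitivity analysis mentioned above corresponds to propagating the uncertainties of t_surface and q_net (the satellite and atmospheric inputs) through this kind of relation.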
The midcingulate cortex has become the focus of scientific interest as it has been associated with a wide range of attentional phenomena. This survey found evidence indicating the relevance of gender and handedness for measures of regional cortical morphology. Although gender was also associated with structural variations in the neuroanatomy of the midcingulum bundle, handedness did not emerge as a significant factor in the analyses of white-matter characteristics. Hemispheric differences were found at the level of both gray and white matter. Turning to the functional implications of neuroanatomical variations and comparing subjects with a pronounced and a low degree of midcingulate folding, which indicates differential expansions of cytoarchitectural areas, behavioral and electrophysiological differences in the processing of interference became evident. A high degree of leftward midcingulate fissurization was associated with better behavioral performance, presumably caused by a more effective conflict-monitoring system triggering fast and automatic attentional filtering mechanisms. Subjects exhibiting a lower degree of midcingulate fissurization seem instead to rely on more effortful control processes. These results carry implications not only for neuronal representations of individual differences in attentional processes, but might also be of relevance for the refinement of models of mental disorders.
Background
The morphology of anuran larvae is suggested to differ between species with tadpoles living in standing (lentic) and running (lotic) waters. To explore which character combinations within the general tadpole morphospace are associated with these habitats, we studied categorical and metric larval data of 123 Madagascan anurans, one third of which come from lotic environments.
Results
Using univariate and multivariate statistics, we found that certain combinations of fin height, body musculature and eye size prevail either in larvae from lentic or lotic environments.
Conclusion
Evidence for adaptation to lotic conditions in larvae of Madagascan anurans is presented. While lentic tadpoles typically show narrow to moderate oral discs, small to medium-sized eyes, convex or moderately low fins and non-robust tail muscles, tadpoles from lotic environments typically show moderate to broad oral discs, medium to large-sized eyes, low fins and a robust tail muscle.
Objective: Attunement is a novel measure of nonverbal synchrony reflecting the duration of the present moment shared by two interaction partners. This study examined its association with early change in outpatient psychotherapy.
Methods: Automated video analysis based on motion energy analysis (MEA) and cross-correlation of the movement time-series of patient and therapist was conducted to calculate movement synchrony for N = 161 outpatients. Movement-based attunement was defined as the range of connected time lags with significant synchrony. Latent change classes in the HSCL-11 were identified with growth mixture modeling (GMM) and predicted by pre-treatment covariates and attunement using multilevel multinomial regression.
Results: GMM identified four latent classes: high impairment, no change (Class 1); high impairment, early response (Class 2); moderate impairment (Class 3); and low impairment (Class 4). Class 2 showed the strongest attunement, the largest early response, and the best outcome. Stronger attunement was associated with a higher likelihood of membership in Class 2 (b = 0.313, p = .007), Class 3 (b = 0.251, p = .033), and Class 4 (b = 0.275, p = .043) compared to Class 1. For highly impaired patients, the probability of no early change (Class 1) decreased and the probability of early response (Class 2) increased as a function of attunement.
Conclusions: Among patients with high impairment, stronger patient-therapist attunement was associated with early response, which predicted a better treatment outcome. Video-based assessment of attunement might provide new information for therapists not available from self-report questionnaires and support therapists in their clinical decision-making.
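The lagged cross-correlation idea behind movement-based attunement can be sketched in a few lines. The sketch below is a simplified illustration only: it uses an arbitrary fixed correlation threshold and counts the connected run of lags around lag 0, whereas the study derives significance from the MEA pipeline's statistical tests and its exact windowing differs:

```python
import numpy as np

def attunement_range(ts_a, ts_b, max_lag=30, threshold=0.2):
    """Count the connected run of time lags around lag 0 whose normalized
    cross-correlation exceeds `threshold` -- a stand-in for 'significant
    synchrony'.  ts_a, ts_b: 1-D movement time series (e.g. from MEA)."""
    a = (ts_a - ts_a.mean()) / ts_a.std()
    b = (ts_b - ts_b.mean()) / ts_b.std()
    n = len(a)
    corr = []
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:   # b leads a
            corr.append(np.mean(a[-lag:] * b[:n + lag]))
        else:         # a leads b (or simultaneous movement at lag 0)
            corr.append(np.mean(a[:n - lag] * b[lag:]))
    corr = np.array(corr)
    mid = max_lag  # index of lag 0
    if corr[mid] <= threshold:
        return 0
    lo = hi = mid
    while lo > 0 and corr[lo - 1] > threshold:
        lo -= 1
    while hi < len(corr) - 1 and corr[hi + 1] > threshold:
        hi += 1
    return hi - lo + 1  # width of the connected significant-lag range
```

A wider run of correlated lags corresponds to a longer shared "present moment" between the two interaction partners.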
In recent decades, the Arctic has been undergoing a wide range of fast environmental changes. The sea ice covering the Arctic Ocean not only reacts rapidly to these changes, but also influences and alters the physical properties of the atmospheric boundary layer and the underlying ocean on various scales. In that regard, polynyas, i.e. regions of open water and thin ice within the closed pack ice, play a key role as regions of enhanced atmosphere-ice-ocean interactions and extensive new ice formation during winter. Precise long-term monitoring and increased efforts to employ long-term and high-resolution satellite data are therefore of high interest for the polar scientific community. The retrieval of thin-ice thickness (TIT) fields from thermal infrared satellite data and atmospheric reanalysis, utilizing a one-dimensional energy balance model, allows for the estimation of the heat loss to the atmosphere and hence, ice-production rates. However, an extended application of this approach is inherently connected with severe challenges that originate predominantly from the disturbing influence of clouds and from necessary simplifications in the model set-up, all of which need to be carefully considered and compensated for. The presented thesis addresses these challenges and demonstrates the applicability of thermal infrared TIT distributions for long-term polynya monitoring, as well as an accurate estimation of ice production in Arctic polynyas at a relatively high spatial resolution. Being written in a cumulative style, the thesis is subdivided into three parts that show the successive evolution and improvement of the TIT retrieval, based on two regional studies (Storfjorden and North Water (NOW) polynya) and a final large-scale, pan-Arctic study.
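The core of an energy-balance TIT retrieval can be reduced to a one-line flux balance. The sketch below is a deliberately stripped-down illustration: all parameter values are textbook-style assumptions, and the real retrieval computes the atmospheric heat flux from reanalysis data and accounts for effects such as snow cover:

```python
def thin_ice_thickness(T_s, T_f=-1.8, k_ice=2.03, Q_atm=150.0):
    """Estimate thin-ice thickness h (m) from the surface temperature T_s
    (degC, e.g. from thermal infrared imagery), assuming a steady-state
    balance between the conductive flux through the ice,
    k_ice * (T_f - T_s) / h, and the net atmospheric heat loss Q_atm (W/m^2).
    T_f is the seawater freezing point, k_ice the thermal conductivity of
    sea ice (W/m/K); the values used here are illustrative defaults."""
    return k_ice * (T_f - T_s) / Q_atm
```

A colder surface under the same net heat loss implies thicker ice, since more insulation is needed to sustain the larger temperature difference.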
The first study on the Storfjorden polynya, situated in the Svalbard archipelago, represents the first long-term investigation of spatial and temporal polynya characteristics that is solely based on daily TIT fields derived from MODIS thermal infrared satellite data and ECMWF ERA-Interim atmospheric reanalysis data. Typical quantities such as polynya area (POLA), the TIT distribution, frequencies of polynya events as well as the total ice production are derived and compared to previous remote sensing and modeling studies. The study includes a first basic approach to compensate for cloud-induced gaps in daily TIT composites. This coverage correction (CC) is a mathematically simple upscaling procedure that depends solely on the daily percentage of available MODIS coverage and yields daily POLA with an error margin of 5 to 6 %. The NOW polynya in northern Baffin Bay is the main focus region of the second study, which pursues two main goals. First, a new statistics-based cloud interpolation scheme (Spatial Feature Reconstruction - SFR) as well as additional cloud-screening procedures are successfully adapted and implemented in the TIT retrieval for use in Arctic polynya regions. For a 13-yr period, results on polynya characteristics are compared to the CC approach. Furthermore, an investigation of highly variable ice-bridge dynamics in Nares Strait is presented. Second, an analysis of decadal changes of the NOW polynya is carried out, as the additional use of a suite of passive microwave sensors leads to an extended record of 37 consecutive winter seasons, thereby enabling detailed inter-sensor comparisons. In the final study, the SFR-interpolated daily TIT composites are used to infer spatial and temporal characteristics of 17 circumpolar polynya regions in the Arctic for 2002/2003 to 2014/2015.
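A coverage-correction of this kind can be sketched as a one-step upscaling. This is an illustrative reading of the CC idea (scale the observed area by the inverse of the cloud-free coverage fraction, assuming the cloud-covered part behaves like the visible part), not the thesis's exact procedure:

```python
def coverage_corrected_pola(observed_area, coverage_fraction):
    """Upscale the polynya area (POLA) observed in the cloud-free part of a
    scene to the full region.  coverage_fraction is the daily fraction
    (0..1] of the region with valid (cloud-free) MODIS data; the implicit
    assumption is that the hidden part behaves like the visible part."""
    if not 0.0 < coverage_fraction <= 1.0:
        raise ValueError("coverage_fraction must be in (0, 1]")
    return observed_area / coverage_fraction
```

With full coverage the correction is the identity; the smaller the cloud-free fraction, the larger (and more uncertain) the upscaling factor becomes.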
All polynya regions combined cover an average thin-ice area of 226.6 ± 36.1 × 10³ km² during winter (November to March) and yield an average total wintertime accumulated ice production of about 1811 ± 293 km³. Regional differences in derived ice-production trends are noticeable. The Laptev Sea on the Siberian shelf is presented as a focus region, as frequently appearing polynyas along the fast-ice edge promote high rates of new ice production. New affirming results on a distinct relation to sea-ice area export rates and hence, the Transpolar Drift, are shown. This new high-resolution pan-Arctic data set can be further utilized and built upon in a variety of atmospheric and oceanographic applications, while still offering room for further improvements such as incorporating high-resolution atmospheric data sets and an optimized lead detection.
Optimal control problems are optimization problems governed by ordinary or partial differential equations (PDEs). A general formulation is given by \min_{(y,u)} J(y,u) subject to e(y,u)=0, assuming that e_y^{-1} exists. It consists of three main elements: 1. The cost functional J that models the purpose of the control on the system. 2. The definition of a control function u that represents the influence of the environment on the system. 3. The set of differential equations e(y,u) modeling the controlled system, represented by the state function y:=y(u), which depends on u. These kinds of problems are well investigated and arise in many fields of application, for example robot control, control of biological processes, test drive simulation, and shape and topology optimization. In this thesis, an academic model problem of the form \min_{(y,u)} J(y,u):=\min_{(y,u)}\frac{1}{2}\|y-y_d\|^2_{L^2(\Omega)}+\frac{\alpha}{2}\|u\|^2_{L^2(\Omega)} subject to -\mathrm{div}(A\nabla y)+cy=f+u in \Omega, y=0 on \partial\Omega and u\in U_{ad} is considered. The objective is of tracking type with a given target function y_d and a regularization term with parameter \alpha. The control function u takes effect on the whole domain \Omega. The underlying partial differential equation is assumed to be uniformly elliptic. This problem belongs to the class of linear-quadratic elliptic control problems with distributed control. The existence and uniqueness of an optimal solution for problems of this type is well known. In a first step, following the paradigm 'first optimize, then discretize', the necessary and sufficient optimality conditions are derived by means of the adjoint equation, which results in a characterization of the optimal solution in the form of an optimality system. In a second step, the occurring differential operators are approximated by finite differences and the resulting discretized optimality system is solved with a collective smoothing multigrid method (CSMG).
In general, there are several optimization methods for solving the optimal control problem: an application of the implicit function theorem leads to so-called black-box approaches, where the PDE-constrained optimization problem is transformed into an unconstrained optimization problem and the reduced gradient of the reduced functional is computed via the adjoint approach. Other possibilities are Quasi-Newton methods, which approximate the Hessian by a low-rank update based on gradient evaluations, Krylov-Newton methods, or (reduced) SQP methods. The use of multigrid methods for optimization purposes is motivated by their optimal computational complexity, i.e. the number of required computer iterations scales linearly with the number of unknowns, and by their rate of convergence, which is independent of the grid size. Originally, multigrid methods are a class of algorithms for solving linear systems arising from the discretization of partial differential equations. The main part of this thesis is devoted to the investigation of the implementability and the efficiency of the CSMG on commodity graphics cards. GPUs (graphics processing units) are designed for highly parallelizable graphics computations and possess many cores of SIMD architecture, which are able to outperform the CPU with regard to computational power and memory bandwidth. Here they are considered as prototypes for prospective multi-core computers with several hundred cores. When using GPUs as stream processors, two major problems arise: data have to be transferred from the CPU main memory to the GPU main memory, which can be quite slow, and the size of the GPU main memory is limited. Furthermore, a remarkable speed-up compared to a CPU is only achieved when the stream processors are used to full capacity. Therefore, new algorithms for the solution of optimal control problems are designed in this thesis.
To this end, a nonoverlapping domain decomposition method is introduced which allows the exploitation of the computational power of many GPUs or CPUs in parallel. This algorithm is based on preliminary work for elliptic problems and is enhanced for the application to optimal control problems. For the domain decomposition into two subdomains, the linear system for the unknowns on the interface is solved with a Schur complement method by using a discrete approximation of the Steklov-Poincaré operator. For the academic optimal control problem, the arising capacitance matrix can be inverted analytically. On this basis, two different algorithms for the nonoverlapping domain decomposition in the case of many subdomains are proposed in this thesis: on the one hand a recursive approach, and on the other hand a simultaneous approach. Numerical tests compare the performance of the CSMG for the one-domain case and of the two approaches for the multi-domain case on a GPU and a CPU for different variants.
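For the linear-quadratic model problem, the 'first optimize, then discretize' route can be illustrated in one dimension (with A = I, c = 0, f = 0 and no control constraints): discretize the Laplacian by finite differences and solve the coupled optimality system for state y, adjoint p and control u = -p/α in one go. A dense direct solve stands in here for the CSMG iteration, so this sketch is illustrative only:

```python
import numpy as np

def solve_lq_control(n=100, alpha=1e-3):
    """Solve min 1/2||y - y_d||^2 + alpha/2||u||^2  s.t.  -y'' = u on (0,1),
    y(0) = y(1) = 0, via the discretized optimality system
        A y + (1/alpha) p = 0     (state equation with u = -p/alpha inserted)
        A p - y           = -y_d  (adjoint equation),
    where A is the finite-difference Laplacian on n interior grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    y_d = np.sin(np.pi * x)  # target state (chosen for illustration)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    I = np.eye(n)
    K = np.block([[A, I / alpha], [-I, A]])
    rhs = np.concatenate([np.zeros(n), -y_d])
    sol = np.linalg.solve(K, rhs)
    y, p = sol[:n], sol[n:]
    return x, y, -p / alpha  # grid, optimal state, optimal control
```

As α shrinks, the optimal state tracks y_d ever more closely at the price of a larger control; this trade-off is exactly the role of the regularization term in the objective.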
Background
In light of the current biodiversity crisis, DNA barcoding is developing into an essential tool to quantify state shifts in global ecosystems. Current barcoding protocols often rely on short amplicon sequences, which yield accurate identification of biological entities in a community but provide limited phylogenetic resolution across broad taxonomic scales. However, the phylogenetic structure of communities is an essential component of biodiversity. Consequently, a barcoding approach is required that unites robust taxonomic assignment power and high phylogenetic utility. A possible solution is offered by sequencing long ribosomal DNA (rDNA) amplicons on the MinION platform (Oxford Nanopore Technologies).
Findings
Using a dataset of various animal and plant species, with a focus on arthropods, we assemble a pipeline for long rDNA barcode analysis and introduce new software (MiniBar) to demultiplex dual-indexed Nanopore reads. We find excellent phylogenetic and taxonomic resolution offered by long rDNA sequences across broad taxonomic scales. We highlight the simplicity of our approach by field barcoding with a miniaturized, mobile laboratory in a remote rainforest. We also test the utility of long rDNA amplicons for the analysis of community diversity through metabarcoding and find that they recover highly skewed diversity estimates.
Conclusions
Sequencing dual-indexed, long rDNA amplicons on the MinION platform is a straightforward, cost-effective, portable, and universal approach for eukaryote DNA barcoding. Although bulk community analyses using long-amplicon approaches may introduce biases, the long rDNA amplicon approach represents a powerful tool for the accurate recovery of taxonomic and phylogenetic diversity across biological communities.
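The principle of dual-index demultiplexing, which MiniBar implements for Nanopore reads, can be sketched as follows. This toy version (hypothetical index sequences, naive substitution counting, no indel handling) only illustrates the assignment logic, not MiniBar's actual algorithm:

```python
def demultiplex(reads, index_pairs, max_mismatch=1):
    """Assign each read to the sample whose forward index matches the start
    of the read and whose reverse index matches its end, each within
    max_mismatch substitutions.  Reads matching no index pair stay
    unassigned."""
    def dist(a, b):
        # Hamming distance; real demultiplexers use edit distance for indels
        return sum(c1 != c2 for c1, c2 in zip(a, b))

    assigned = {}
    for read_id, seq in reads.items():
        for sample, (fwd, rev) in index_pairs.items():
            if (dist(seq[:len(fwd)], fwd) <= max_mismatch
                    and dist(seq[-len(rev):], rev) <= max_mismatch):
                assigned[read_id] = sample
                break
    return assigned
```

Requiring both indices to match is what makes dual indexing robust against index hopping compared to single-index designs.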
Mental processes are filters which intervene in the literary presentation of nature. This article will take the reader on a journey through literary landscapes, starting with Joseph Furphy and ending with Gerald Murnane, and will try to show the development of Australian literary landscape depiction. The investigation of this extensive topic will show that the perception of the Australian landscape as foreign and threatening is a coded expression of the protagonists' crisis of identity due to their estrangement from European cultural roots. Only a feeling of being at home enables the characters to perceive landscapes in a positive way and allows the author to depict intimate and familiar views of nature. This topic will be investigated with a range of novels to reveal the development of this theme from the turn of the nineteenth century (the time of Furphy's novel Such is Life) up to the present (i.e. novels by Malouf, Foster, Hall, Murnane).
1. The Discursive Construction of Black Masculinity: Intersections of Race, Gender, and Sexuality
1.1. The Plight of Black Men: A History of Lynchings and Castrations
1.2. The Discursive Construction of the Black Man as Other
1.3. Black Corporeality and the Scopic Regime of Racism
2. Ralph Ellison's 'Invisible Man'
2.1. Invisible Black Men: Between Emasculation and Hypermasculinity
2.2. Transcending Invisibility
During the last decade, anatomic and physiological neuroscience research has yielded extensive information on the physiological regulators of short-term satiety and of visceral and interoceptive sensation. Distinct neural circuits physiologically regulate the elements of food ingestion. The general aim of the current studies is to elucidate the peripheral neural pathways to the brain in healthy subjects to establish the groundwork for the study of the pathophysiology of bulimia nervosa (BN). We aimed to define the central activation pattern during non-nutritive gastric distension in humans, and to define the cognitive responses to this mechanical gastric distension. We estimated regional cerebral blood flow with 15O-water positron emission tomography during intragastric balloon inflation and deflation in 18 healthy young women of normal weight. The contrast of inflated minus deflated conditions in the exploratory analysis revealed activation in more than 20 brain regions. The analysis confirmed several well-known areas in the central nervous system that contribute to visceral processing: the inferior frontal cortex, representing a zone of convergence for food-related stimuli; the insula and operculum, referred to as "visceral cortex"; the anterior cingulate gyrus (and insula), processing affective information; and the brainstem, a site of vagal relay for visceral afferent stimuli. Brain activation in the left ventrolateral prefrontal cortex was reproducible. This area is well known for higher cognitive processing, especially of reward-related stimuli. The ventrolateral prefrontal cortex together with the insular regions may provide a link between the affective and rewarding components of eating and disordered eating as observed in BN and binge-eating obesity.
Gastric distension caused a significant rapid, reversible, and reproducible increase in the feelings of fullness, sleepiness, and gastric discomfort as well as a significant rapid, reversible, and reproducible decrease in the feeling of hunger. We showed that mechanical activation of the neurocircuitry involved in meal termination led to the described phenomena. The current brain activation studies of non-painful, proximal gastric distension could provide groundwork in the field of abnormal eating behavior by suggesting a link between visceral sensation and abnormal eating patterns. A potential treatment for disordered eating and obesity could alter the conscious and unconscious perception and interoceptive awareness of gastric distension contributing to meal termination.
The aim of the thesis was to investigate the role of the immune system in fibromyalgia (FM), as part of a dynamic co-regulation between different bodily systems. FM is a chronic musculoskeletal disorder characterized by widespread pain and specific tender points, combined with other symptoms including fatigue, sleep disturbances, morning stiffness and anxiety. The main goal of the work was to identify possible dysregulation of peripheral immune and endocrine parameters in patients with FM compared to matched healthy controls. Moreover, the possible relation between symptom complaints and the specific parameters measured was also evaluated. A first approach was to investigate possible differences between FM patients and controls in the expression of cytokines, as they have been implicated in the occurrence of several of the symptoms associated with FM. Furthermore, adhesion molecules which are involved in cell-to-cell communication and immune cell trafficking were also studied. The latter are known to be regulated by both cytokines and glucocorticoids (GCs) and their expression is often found altered in patients with immune dysregulation. It was expected that subjects with FM would have an increased production of proinflammatory cytokines and/or a reduced antiinflammatory cytokine production and that certain cytokines and/or adhesion molecules would be differently regulated by dexamethasone (DEX). Unstimulated blood was used in the analysis of adhesion molecule expression by flow cytometry while stimulated whole blood cell cultures were used in cytokine flow cytometry assays. Peripheral blood mononuclear cells (PBMCs) were also cultured and the supernatants collected to determine the concentration of cytokines by biochip protein array. In addition, serum samples were used in enzyme-linked immunosorbent assays (ELISA) for quantification of soluble adhesion molecules. L-selectin was found elevated on monocytes and neutrophils of FM patients. 
A bias toward lower IL-4 levels was observed in FM patients. Based on studies showing differences in glucocorticoid receptor (GR) affinity and disturbances associated with loss of hypothalamic-pituitary-adrenal (HPA) axis resiliency in FM, it was hypothesized that FM might be associated with abnormalities in glucocorticoid sensitivity. Total plasma cortisol and salivary free cortisol were quantified by ELISA and time-resolved fluorescence immunoassay, respectively. GR sensitivity, through DEX inhibition of IL-6 in stimulated whole blood, was evaluated after cytokine quantification by ELISA. The corticosteroid receptors, GR alpha and mineralocorticoid receptor (MR), as well as the glucocorticoid-induced leucine zipper (GILZ) and the FK506 binding protein 5 mRNA expression were assessed in PBMCs by real-time reverse transcription-polymerase chain reaction (RT-PCR). Furthermore, sequencing of RT-PCR products and/or genomic DNA was used for mutational analysis of the corticosteroid receptors. We observed lower basal plasma cortisol levels (borderline statistical significance) and a lower expression of corticosteroid receptors and GILZ in FM patients when compared to healthy controls. The minor allele of the MR single nucleotide polymorphism (SNP) rs5522 was found more often in FM patients than in controls. In addition, female carriers of this SNP seemed to have reduced salivary cortisol responses to a strong psychological stressor (Trier Social Stress Test) compared to non-carriers. FM patients carrying an MR intronic SNP (rs17484245), located before exon 3, showed significantly higher depression symptom scores than patient non-carriers. The thesis also includes a comprehensive analysis of the complexity of GR regulation and the role of alternative mRNA splicing.
It focuses on the differential expression of the untranslated GR first exons, their high sequence homology among different species, and how genetic determinants without apparent relevance may have implications in health and disease. In FM patients, GR exon 1-C expression was found to be lower, and a significant difference was observed when comparing GR 1-C expression between antidepressant-free patients and patients who had taken antidepressants until two weeks before sample collection. In summary, the study shows a slight disturbance of some components of the innate immune system of FM patients and suggests an enhanced adhesion and possible recruitment of leukocytes to inflammatory sites. The reduced expression of corticosteroid receptors, and possibly a reduced MR function, may be associated with an impaired function of the HPA axis in these patients. A hyporesponsiveness of the HPA axis under stress or disturbances of the stress response could make these patients more vulnerable to cytokines and inflammation, which, compounded by lower antiinflammatory mediators, may sustain some of the symptoms that contribute to the clinical picture of FM.
Background and rationale: Changing working conditions demand adaptation, resulting in higher stress levels in employees. In consequence, decreased productivity, increasing rates of sick leave, and cases of early retirement result in higher direct, indirect, and intangible costs. Aims of the Research Project: The aim of the study was to test the usefulness of a novel translational diagnostic tool, Neuropattern, for early detection, prevention, and personalized treatment of stress-related disorders. The trial was designed as a pilot study with a wait-list control group. Materials and Methods: In this study, 70 employees of the Forestry Department Rhineland-Palatinate, Germany, were enrolled. Subjects were block-randomized according to the functional group of their career field, and either underwent Neuropattern diagnostics immediately, or after a waiting period of three months. After the diagnostic assessment, their physicians received the Neuropattern Medical Report, including the diagnostic results and treatment recommendations. Participants were informed by the Neuropattern Patient Report and were eligible for an individualized Neuropattern Online Counseling account. Results: The application of Neuropattern diagnostics significantly improved mental health and health-related behavior, reduced perceived stress, emotional exhaustion, overcommitment, and possibly presenteeism. Additionally, Neuropattern sensitively detected functional changes in stress physiology at an early stage, thus allowing timely personalized interventions to prevent and treat stress pathology. Conclusion: The present study encouraged the application of Neuropattern diagnostics to early intervention in non-clinical populations. However, further research is required to determine the best operating conditions.
Many NP-hard optimization problems that originate from classical graph theory, such as the maximum stable set problem and the maximum clique problem, have been extensively studied over the past decades and involve the choice of a subset of edges or vertices. There usually exist combinatorial methods that can be applied to solve them directly in the graph.
The simplest method is to enumerate all feasible solutions and select the best. Unsurprisingly, this method is often very slow, so the task is to cleverly discard fruitless parts of the search space during the search. An alternative method to solve graph problems is to formulate integer linear programs, such that their solution yields an optimal solution to the original optimization problem in the graph. In order to solve integer linear programs, one can start by relaxing the integer constraints and then try to find inequalities for cutting off fractional extreme points. In the best case, it would be possible to describe the convex hull of the feasible region of the integer linear program with a set of inequalities. In general, giving a complete description of this convex hull is out of reach, even if it has a polynomial number of facets. Thus, one tries to strengthen the (weak) relaxation of the integer linear program as much as possible via strong inequalities that are valid for the convex hull of feasible integer points.
Many classes of valid inequalities are of exponential size. For instance, a graph can in general have exponentially many odd cycles, and therefore the number of odd cycle inequalities for the maximum stable set problem is exponential. It is sometimes possible to check in polynomial time whether some given point violates any of the exponentially many inequalities. This is indeed the case for the odd cycle inequalities for the maximum stable set problem. If a polynomial-time separation algorithm is known, there exists a formulation of polynomial size that contains a given point if and only if it does not violate one of the (potentially exponentially many) inequalities. This thesis can be divided into two parts. The first part, which is the main part, contains various new results. We present new extended formulations for several optimization problems, i.e. the maximum stable set problem, the nonconvex quadratic program with box
constraints and the p-median problem. In the second part we modify a very fast algorithm for finding a maximum clique in very large sparse graphs. We suggest three alternative versions of this algorithm, compare them to the original version, and discuss their strengths and weaknesses.
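As an illustration of polynomial-time separation, the classical odd-cycle separation for the stable set relaxation can be sketched via shortest paths in a two-layer auxiliary graph (the standard Grötschel-Lovász-Schrijver construction). The sketch assumes the edge inequalities x_u + x_v ≤ 1 already hold, so all auxiliary weights are nonnegative and Dijkstra applies:

```python
import heapq

def separate_odd_cycle(n, edges, x, eps=1e-9):
    """Look for an odd cycle C violating sum(x[v] for v in C) <= (|C|-1)/2.
    Each vertex v gets two copies (v, 0) and (v, 1); every edge (u, v) links
    copies of opposite parity with weight 1 - x[u] - x[v] (nonnegative when
    the edge inequalities hold).  A shortest (v,0)->(v,1) path of weight < 1
    is an odd closed walk witnessing a violated odd cycle inequality."""
    adj = {(v, s): [] for v in range(n) for s in (0, 1)}
    for u, v in edges:
        w = 1.0 - x[u] - x[v]
        for s in (0, 1):
            adj[(u, s)].append(((v, 1 - s), w))
            adj[(v, s)].append(((u, 1 - s), w))

    best = None  # (path weight, vertex list of the closed walk)
    for v in range(n):
        dist, prev = {(v, 0): 0.0}, {}
        pq = [(0.0, (v, 0))]
        while pq:  # Dijkstra from the even copy of v
            d, node = heapq.heappop(pq)
            if d > dist.get(node, float("inf")):
                continue
            for nxt, w in adj[node]:
                nd = d + w
                if nd < dist.get(nxt, float("inf")) - eps:
                    dist[nxt], prev[nxt] = nd, node
                    heapq.heappush(pq, (nd, nxt))
        d = dist.get((v, 1))
        if d is not None and d < 1.0 - eps and (best is None or d < best[0]):
            walk, node = [], (v, 1)
            while node != (v, 0):  # trace the closed walk back to the start
                walk.append(node[0])
                node = prev[node]
            best = (d, walk)
    return best  # None if no violated odd cycle inequality was found
```

On a triangle with x = (0.4, 0.4, 0.4) the inequality x_0 + x_1 + x_2 ≤ 1 is violated, and the auxiliary shortest path has weight 3 · 0.2 = 0.6 < 1. The returned closed walk need not be a simple cycle, but it always decomposes into cycles of which at least one is odd and violated.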
No Longer Printing the Legend: The Aporia of Heteronormativity in the American Western (1903-1969)
(2023)
This study critically investigates the U.S.-American Western and its construction of sexuality and gender, revealing that the heteronormative matrix that is upheld and defended in the genre is consistently preceded by the exploration of alternative sexualities and ways to think gender beyond the binary. The endeavor to naturalize heterosexuality seems to be baked into the formula of the U.S.-Western. However, as I show in this study, this endeavor relies on an aporia, because the U.S.-Western can only ever attempt to naturalize gender by constructing it first, hence inevitably and simultaneously constructing evidence that supports the opposite: the unnaturalness and contingency of gender and sexuality.
My study relies on the works of Raewyn Connell, Pierre Bourdieu, and Judith Butler, and amalgamates in its methodology established approaches from film and literary studies (i.e., close readings) with a Foucauldian understanding of discourse and discourse analysis, which allows me to relate individual texts to the cultural, socio-political, and economic contexts that invariably informed the production and reception of any filmic text. In an analysis of 14 U.S.-Westerns (excluding three excursions) that appeared between 1903 and 1969, I give ample and minute narrative and film-aesthetic evidence to reveal the complex and contradictory construction of gender and sexuality in the U.S.-Western, aiming to reveal both the normative power of those categories and their structural instability and inconsistency.
This study proves that the Western up until 1969 did not find a stable pattern to represent the gender binary. The U.S.-Western is not necessarily always looking to confirm or stabilize governing constructs of (gendered) power, but it invariably explores and negotiates their legitimacy. Heterosexuality and male hegemony are never natural, self-evident, incontestable, or preordained. Quite the contrary: the U.S.-Western repeatedly – and in a surprisingly diverse and versatile way – reveals the illogical constructedness of the heteronormative matrix.
My study therefore offers a fresh perspective on the genre and shows that the critical exploration and negotiation of the legitimacy of heteronormativity as a way to organize society is constitutive for the U.S.-Western. It is the inquiry – not necessarily the affirmation – of the legitimacy of this model that gives the U.S.-Western its ideological currency and significance as an artifact of U.S.-American popular culture.
Nonlocal operators are used in a wide variety of models and applications because many natural phenomena are driven by nonlocal dynamics. Nonlocal operators are integral operators allowing for interactions between two distinct points in space. The nonlocal models investigated in this thesis involve kernels that are assumed to have a finite range of nonlocal interactions. Kernels of this type are used in nonlocal elasticity and convection-diffusion models as well as in finance and image analysis. They are also of great interest within the mathematical theory, as they are asymptotically related to fractional and classical differential equations.
The results in this thesis can be grouped according to the following three aspects: modeling and analysis, discretization and optimization.
Mathematical models demonstrate their true usefulness when put into numerical practice. For computational purposes, it is important that the support of the kernel is clearly determined. Therefore, nonlocal interactions are typically assumed to occur within a Euclidean ball of finite radius. In this thesis we consider more general interaction sets, including norm-induced balls as special cases, and extend established results about well-posedness and asymptotic limits.
The discretization of integral equations is a challenging endeavor. In particular, kernels which are truncated by Euclidean balls require carefully designed quadrature rules for the implementation of efficient finite element codes. In this thesis we investigate the computational benefits of polyhedral interaction sets as well as of geometrically approximated interaction sets. In addition, we outline the computational advantages of sufficiently structured problem settings.
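The kind of truncated-kernel operator discussed here, and its asymptotic link to classical differential operators, can be illustrated in one dimension. The sketch below evaluates (Lu)(x) = ∫_{|z|<δ} (u(x) − u(x+z)) γ(z) dz by midpoint quadrature with the standard 1-D scaling γ(z) = 3/δ³, under which L reproduces −u'' exactly on quadratic functions; both the kernel choice and the quadrature are illustrative, not the thesis's finite element setting:

```python
import numpy as np

def nonlocal_op(u, x0, delta, n=2000):
    """Evaluate the nonlocal operator (L u)(x0) with a constant kernel
    gamma = 3/delta**3 supported on the interaction interval (-delta, delta),
    using a midpoint rule with n quadrature points.  The scaling is chosen
    so that int z^2 * gamma dz = 2, i.e. L u -> -u'' as delta -> 0."""
    z = (np.arange(n) + 0.5) * (2.0 * delta / n) - delta  # midpoints
    h = 2.0 * delta / n
    gamma = 3.0 / delta**3
    return np.sum((u(x0) - u(x0 + z)) * gamma) * h
```

For u(x) = x² the exact value is −u'' = −2 for any horizon δ, which makes a convenient sanity check for the quadrature.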
Shape optimization methods have proven useful for identifying interfaces in models governed by partial differential equations. Here we consider a class of shape optimization problems constrained by nonlocal equations which involve interface-dependent kernels. We derive the shape derivative associated with the nonlocal system model and solve the problem by established numerical techniques.
In this psycho-neuro-endocrine study, the molecular basis of different variants of steroid receptors as well as of highly conserved non-steroidal receptors was investigated. These nuclear receptors (NRs) are important key regulators of a wide variety of physiological and pathophysiological challenges ranging from inflammation and stress to complex behaviour and disease. NRs control gene transcription in a ligand-dependent manner and are embedded in the huge interaction network of the neuroendocrine and immune system. Two receptors, the glucocorticoid receptor (GR) and the chicken ovalbumin upstream promoter-transcription factor II (Coup-TFII), both expressed in the immune and nervous system, were investigated regarding possible splice variants and their implication in the control of gene transcription. Both NRs are known to interact and to modulate each other's target gene regulation. This study showed that both NRs have different splice variants that are expressed in a tissue-specific manner. The different 5'-alternative transcript variants of the human GR were identified in silico in other species, and evidence for a highly conserved and tightly controlled function was provided. Investigations of the N-terminal transactivation domain of the GR revealed a deletion suggesting an altered glucocorticoid-dependent transactivation profile. The newly identified alternative transcript variant of Coup-TFII leads to a DNA-binding-deficient Coup-TFII isoform that is highly expressed in the brain. This Coup-TFII isoform alters Coup-TFII target gene expression and is suggested to interact with GR via its ligand binding domain, resulting in an impaired GR target gene regulation in the nervous system.
This thesis demonstrated that NR variants are important for understanding the enormous regulatory potential of this receptor family and must be taken into account when developing therapeutic strategies for complex diseases such as stress-related and neurodegenerative disorders.
In this thesis, we present a new approach for estimating the effects of wind turbines on a local bat population. We build an individual-based model (IBM) which simulates the movement behaviour of every single bat in the population, each with its own preferences, foraging behaviour and other species characteristics. This behaviour is averaged by a Monte-Carlo simulation, which yields the average behaviour of the population. The result is an occurrence map of the considered habitat, which tells us how often the bats, and therefore the considered bat population, frequent each region of this habitat. Hence, it is possible to estimate the crossing rate at the position of an existing or potential wind turbine. We compare this individual-based approach with a method based on partial differential equations. This second approach requires less computational effort but, unfortunately, loses information about the movement trajectories. Additionally, the PDE-based model only gives us a density profile, so we lose the information of how often each bat crosses specific points in the habitat in one night. In the next step, we predict the average number of fatalities for each wind turbine in the habitat, depending on the type of the wind turbine and the behaviour of the considered bat species. This gives us the extra mortality caused by the wind turbines for the local population. This value is fed into a population model, and finally we can calculate whether the population still grows or whether its size is already declining, which would lead to the extinction of the population. Using the combination of all these models, we are able to evaluate the conflict between wind turbines and bats and to predict its outcome. Furthermore, it is possible to find better positions for wind turbines such that the local bat population has a better chance to survive.
Since bats tend to move in swarm formations under certain circumstances, we also introduce a swarm simulation using partial integro-differential equations. Here, we take a closer look at the existence and uniqueness of solutions.
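The IBM-with-Monte-Carlo-averaging idea described above can be illustrated with a toy sketch. The thesis does not state its movement rules or parameters, so everything here (grid size, unbiased random walk, step counts) is an illustrative assumption, not the actual model:

```python
import numpy as np

def occurrence_map(n_bats=50, n_runs=20, n_steps=200, grid=20, seed=0):
    """Toy Monte-Carlo IBM: each bat performs a random walk on a grid;
    cell visits are accumulated and averaged over runs into an occurrence map.
    All parameters are illustrative, not taken from the thesis."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((grid, grid))
    for _ in range(n_runs):                              # Monte-Carlo runs
        pos = rng.integers(0, grid, size=(n_bats, 2))    # initial roost cells
        for _ in range(n_steps):                         # one simulated night
            step = rng.integers(-1, 2, size=(n_bats, 2)) # unbiased random move
            pos = np.clip(pos + step, 0, grid - 1)       # stay inside habitat
            np.add.at(counts, (pos[:, 0], pos[:, 1]), 1) # record cell visits
    return counts / n_runs    # average visits per cell per night

occ = occurrence_map()
# occ[x, y] at a planned turbine cell estimates the nightly crossing rate there
```

A real model would replace the unbiased step with species-specific preferences (foraging grounds, roosts, landscape structure), but the averaging over Monte-Carlo runs and the resulting occurrence map work the same way.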
Today, obesity is recognized as a disease. Evidence suggests that genetic, environmental, psychological and other factors contribute to obesity, and growing evidence points to heredity as a strong determinant. The characterization of uncoupling proteins (UCPs) represents a major breakthrough towards understanding the molecular basis of energy expenditure and is therefore likely to have important implications for the cause and treatment of human obesity. UCPs are mitochondrial anion carriers that create a pathway allowing dissipation of the proton electrochemical gradient; when deregulated, they are key risk factors in the development of obesity and other eating disorders. In order to better understand the roles of UCP2 and UCP3, which are considered prime candidate genes in the pathogenesis of obesity, this study elucidated: (1) Genomic organization: the human UCP2 (UCP3) gene spans 8.7 kb (7.5 kb) distributed over 8 (7) exons. The three UCP genes may have evolved from a common ancestor or resulted from gene duplication events. Two mRNA transcripts are generated from the hUCP3 gene; the long and short forms differ by the presence or absence of 37 amino acid residues at the C-terminus. (2) Mutational analysis revealed a mutation in exon 4 of hUCP2 resulting in the substitution of an alanine by a valine at codon 55, and an insertion polymorphism in exon 8 consisting of a 45 bp repeat located 150 bp downstream of the stop codon in the 3'-UTR. The allele frequencies of both polymorphisms were not significantly elevated in a subgroup of children characterized by low resting metabolic rates (RMR). (3) Promoter analysis showed that the promoter region of hUCP2 lacks classical TATA and CAAT boxes. Functional characterization of the hUCP2 promoter showed that minimal promoter activity resides within 65 bp upstream of the transcriptional start site.
A strong cis-acting regulatory element, which significantly enhanced basal promoter activity, was identified 75 bp further upstream. The regulation of human UCP2 gene expression thus involves complex interactions among positive and negative regulatory elements. The 5′-flanking region of the hUCP3 gene was characterized and contains both TATA and CAAT boxes as well as consensus motifs for PPRE, TRE, CRE and the muscle-specific MyoD and MEF2 sites. Functional characterization identified a cis-acting negative regulatory element between -2983 and -982, while the region between -982 and -284 showed greatly increased basal promoter activity, suggesting the presence of a strong enhancer element. Promoter activity was particularly enhanced in the murine skeletal muscle cell line C2C12, reflecting the tissue-selective expression pattern of UCP3.
The parameterization of ocean/sea-ice/atmosphere interaction processes is a challenge for regional climate models (RCMs) of the Arctic, particularly for wintertime conditions, when small fractions of thin ice or open water cause strong modifications of the boundary layer. The treatment of sea ice and sub-grid flux parameterizations in RCMs is thus of crucial importance. However, verification data sets over sea ice for wintertime conditions are rare. In the present paper, data from the ship-based experiment Transarktika 2019, collected at the end of the Arctic winter over thick one-year ice, are presented. The data are used for the verification of the regional climate model COSMO-CLM (CCLM). In addition, Moderate Resolution Imaging Spectroradiometer (MODIS) data are used for the comparison with ice surface temperature (IST) simulations of the CCLM sea ice model. CCLM is used in forecast mode (nested in ERA5) for the Norwegian and Barents Seas with 5 km resolution and is run with different configurations of the sea ice model and sub-grid flux parameterizations. The use of a new set of parameterizations yields improved results in the comparisons with in-situ data. Comparisons with MODIS IST allow for verification over large areas and also show good performance of CCLM. The comparison with twice-daily radiosonde ascents during Transarktika 2019, hourly microwave water vapor measurements covering the first 5 km of the atmosphere and hourly temperature profiler data shows that CCLM represents the temperature, humidity and wind structure of the whole troposphere very well.
In 2014/2015, a one-year field campaign was carried out at the Tiksi observatory in the Laptev Sea area using Sound Detection and Ranging/Radio Acoustic Sounding System (SODAR/RASS) measurements to investigate the atmospheric boundary layer (ABL), with a focus on low-level jets (LLJs) during the winter season. In addition to SODAR/RASS-derived vertical profiles of temperature, wind speed and wind direction, a suite of complementary measurements at the Tiksi observatory was available. Data from a regional atmospheric model were used to put the local data into synoptic context. Two case studies of LLJ events are presented. The LLJ statistics for six months show that LLJs were present in about 23% of all profiles, with a mean jet speed and height of about 7 m/s and 240 m, respectively. LLJs exceeding 10 m/s occurred in 3.4% of all profiles. The main driving mechanism for LLJs seems to be baroclinicity, since no inertial oscillations were found. LLJs with heights below 200 m are likely influenced by local topography.
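Deriving LLJ statistics from vertical wind profiles requires a detection criterion. The exact criterion used in the campaign is not given in the abstract; a common choice in the literature is a low-altitude wind-speed maximum that exceeds the minimum above it by some threshold. A minimal sketch under these assumed thresholds:

```python
def detect_llj(heights, speeds, min_drop=2.0, max_height=600):
    """Flag a low-level jet in a vertical wind profile: a wind-speed
    maximum below `max_height` (m) that exceeds the minimum speed above
    it by at least `min_drop` (m/s). Both thresholds are illustrative
    assumptions, not the criterion used in the Tiksi study.
    Returns (jet height, jet speed) or None."""
    for i in range(1, len(speeds) - 1):
        if heights[i] > max_height:
            break
        # local wind-speed maximum at level i
        if speeds[i] >= speeds[i - 1] and speeds[i] > speeds[i + 1]:
            if speeds[i] - min(speeds[i + 1:]) >= min_drop:
                return heights[i], speeds[i]
    return None

# example profile with a jet of 8 m/s at 240 m (heights in m, speeds in m/s)
jet = detect_llj([50, 100, 240, 400, 600, 800],
                 [4.0, 6.5, 8.0, 5.5, 5.0, 6.0])
```

Applying such a function to every half-hourly SODAR profile and counting hits is what produces statistics like "LLJs in 23% of all profiles".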
The first part of this thesis offers a theoretical foundation for the analysis of Tolkien's texts. Each of the three fields of interest, nostalgia, utopia, and the pastoral tradition, is introduced in a separate chapter. Special attention is given to the interrelations of the three fields. Their history, meaning, and functions are briefly elaborated, and definitions applicable to their occurrences in fantasy texts are reached. In doing so, new categories and terms are proposed that enable a detailed analysis of the nostalgic, pastoral, and utopian properties of Tolkien's works. As nostalgia and utopia are important ingredients of pastoral writing, they are each introduced first and are finally related to a definition of the pastoral. The main part of this thesis applies the definitions and insights reached in the theoretical chapters to Tolkien's The Lord of the Rings and The Hobbit. This part is divided into three main sections. Again, the order of the chapters follows the line of argumentation. The first section contains the analysis of pastoral depictions in the two texts. Given the separation of the pastoral into different categories, which were outlined in the theoretical part, the chapters examine bucolic and georgic pastoral creatures and landscapes before turning to non-pastoral depictions, which are sub-divided into the antipastoral and the unpastoral. A separate chapter looks at the positions and functions of the bucolic and georgic pastoral in the primary texts. This analysis is followed by a chapter on men's special position in Tolkien's mythology, as their depiction reveals their potential to be both pastoral and antipastoral. The second section of the analytical part is concerned with the role of nostalgia within pastoral culture. The focus is laid on the meaning and function of the different kinds of nostalgia, defined in the theoretical part, that are detectable in bucolic and georgic pastoral cultures.
Finally, the analysis turns to the utopian potential of Tolkien's mythology. Again, the focus lies on the pastoral and non-pastoral creatures. Their utopian and dystopian visions are presented and contrasted. In this way, different kinds of utopian vision are detected and set in relation to the overall dystopian fate of Tolkien's fictional universe. Drawing on the results of this thesis and on Terry Gifford's ecocritical work, the final chapter argues that Tolkien's texts can be defined as modern pastorals. The connection between Tolkien's work and pastoral literature made explicit in the analysis is thus cemented in generic terms. The conclusion presents a summary of the central findings of this thesis and introduces questions for further study.