This report is based on a university-wide online survey on the status quo of research data management at Trier University. It is a first step towards identifying current and future demand for central services. New fields of action are to be recognized early, not least to give direction to strategy development. The respondents generally welcome the initiative to develop central IT and advisory services. They are willing to make their own research data available for reuse by others, provided that suitable tools supporting such a way of working are in place. However, the provision of raw data without accompanying documentation is viewed rather critically: the documentation effort involved in publishing data is seen as having an unfavourable cost-benefit ratio. Notably, data archiving is largely carried out in proprietary formats.
This paper provides an overview of five major shifts in urban water supply governance in relation to changing paradigms in the water sector as a whole and in water-related research: i) the municipal hydraulic paradigm in the Global North; ii) its travel to cities in the Global South; iii) the shift from government to governance; iv) the (private) utility model and v) its contestation. The articulation of each shift in the Ghanaian context is described, from the creation of the first water supply system during colonial times to the recent contestation against private corporate sector participation. Current challenges are outlined together with new pathways for researching urban water governance. The paper is based on a literature review conducted in 2015 and serves as a background study for further research within the WaterPower project.
Stakeholder Mapping
(2016)
This report presents the results of a stakeholder mapping exercise carried out in the WaterPower project. The mapping was conducted for the following main research areas of the project: water supply, land use planning and management, wetland management and climate change adaptation/disaster risk reduction. The report gives an overview of the stakeholders that play a role in these respective areas and identifies those who have concomitant responsibilities in different sectors. It represents the first step towards further involvement of stakeholders in the WaterPower project.
Earnings functions are an important tool in labor economics, as they allow researchers to test a variety of labor market theories. Most empirical research on earnings functions focuses on testing hypotheses about the sign and magnitude of the variables of interest. In contrast, little attention is paid to the explanatory power of the econometric models employed. Measures of explanatory power are of interest, however, for assessing how successful econometric models are at explaining the real world. Are researchers able to draw a complete picture of the determination of earnings, or is there room for further theories leading to alternative econometric models? This article seeks to answer that question with a large microeconometric data set from Germany. Using linear regression estimated by OLS, with R² and adjusted R² as measures of explanatory power, the results show that up to 60 percent of wage variation can be explained using only observable variables.
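The calculation the article describes, regressing log wages on observables and reading off R² and adjusted R², can be sketched with synthetic data (the coefficients, sample, and variable names below are illustrative, not taken from the study):

```python
import numpy as np

def fit_ols(X, y):
    """Fit y = X @ beta by ordinary least squares; report R^2 and adjusted R^2."""
    n, k = X.shape  # n observations, k regressors (including the intercept)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    # Adjusted R^2 penalizes additional regressors.
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k)
    return beta, r2, adj_r2

# Synthetic Mincer-style earnings data: log wage on schooling and experience.
rng = np.random.default_rng(0)
n = 1000
school = rng.normal(12, 2, n)
exper = rng.normal(20, 8, n)
log_wage = 0.8 + 0.09 * school + 0.02 * exper + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), school, exper])
beta, r2, adj_r2 = fit_ols(X, log_wage)
```

Adjusted R² corrects R² for the number of regressors, so adding uninformative variables does not inflate the reported explanatory power.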
With the advent of high-throughput sequencing (HTS), profiling immunoglobulin (IG) repertoires has become an essential part of immunological research. The dissection of IG repertoires promises to transform our understanding of adaptive immune system dynamics. Advances in sequencing technology now also allow the use of the Ion Torrent Personal Genome Machine (PGM) to cover the full length of IG mRNA transcripts. The applications of this benchtop-scale HTS platform range from the identification of new therapeutic antibodies to the deconvolution of malignant B cell tumors. In this thesis, the usability of the PGM is assessed for investigating the IG heavy chain (IGH) repertoires of animal models. First, an innovative bioinformatics approach is presented to identify antigen-driven IGH sequences from bulk-sequenced bone marrow samples of transgenic humanized rats expressing a human IG repertoire (OmniRat™). We show that these rats mount a convergent IGH CDR3 response towards measles virus hemagglutinin protein and tetanus toxoid, with high similarity to human counterparts. In the future, databases could contain all IGH CDR3 sequences with known specificity, allowing IG repertoire datasets to be mined for past antigen exposures and ultimately reconstructing the immunological history of an individual. Second, a unique molecular identifier (UID) based HTS approach and network property analysis are used to characterize the CLL-like CD5+ B cell expansion of A20BKO mice overexpressing a natural short splice variant of the CYLD gene (A20BKOsCYLDBOE). We determined that, in these mice, overexpression of sCYLD leads to an unmutated subvariant of CLL (U-CLL). Furthermore, we found that this short splice variant is also seen in human CLL patients, highlighting it as an important target for future investigations.
Third, the UID-based HTS approach is improved by adapting it to the PGM sequencing technology and applying a custom-made data processing pipeline that includes error detection against the ImMunoGeneTics (IMGT) database. In this way, we were able to obtain correct IGH sequences with over 99.5% confidence and correct CDR3 sequences with over 99.9% confidence. Taken together, the results, protocols, and sample processing strategies described in this thesis will improve the usability of animal models and the Ion Torrent PGM HTS platform in the field of IG repertoire research.
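The UID idea can be illustrated with a toy sketch: reads that share a UID derive from the same original mRNA molecule, so a per-position majority vote across them cancels random sequencing errors. The reads and the simple majority rule below are hypothetical; the thesis's actual pipeline additionally relies on IMGT-based error detection:

```python
from collections import Counter, defaultdict

def uid_consensus(reads):
    """Collapse reads sharing a unique molecular identifier (UID) into one
    consensus sequence by per-position majority vote, suppressing random
    sequencing errors that differ between copies of the same molecule."""
    groups = defaultdict(list)
    for uid, seq in reads:
        groups[uid].append(seq)
    consensus = {}
    for uid, seqs in groups.items():
        # Majority vote at each position across all reads carrying this UID.
        consensus[uid] = "".join(
            Counter(column).most_common(1)[0][0] for column in zip(*seqs)
        )
    return consensus

reads = [
    ("AAT", "GATTACA"),
    ("AAT", "GATTACA"),
    ("AAT", "GATTGCA"),  # one simulated sequencing error at position 5
    ("CCG", "TTAGGCA"),
]
cons = uid_consensus(reads)
```

With three reads for UID "AAT", the single error is outvoted and the consensus recovers the original sequence.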
Academic self-concept (ASC) comprises individual perceptions of one's own academic ability. In a cross-sectional quasi-representative sample of 3,779 German elementary school children in grades 1 to 4, we investigated (a) the structure of ASC, (b) ASC profile formation, an aspect of differentiation that is reflected in lower correlations between domain-specific ASCs with increasing grade level, (c) the impact of (internal) dimensional comparisons of one's own ability in different school subjects on profile formation of ASC, and (d) the role played by differences in school grades between subjects for these dimensional comparisons. The nested Marsh/Shavelson model, with general ASC at the apex and math, writing, and reading ASC as specific factors nested under general ASC, fitted the data at all grade levels. A first-order factor model with math, writing, reading, and general ASCs as correlated factors provided a good fit, too. ASC profile formation became apparent during the first two to three years of school. Dimensional comparisons across subjects contributed to ASC profile formation. School grades enhanced these comparisons, especially when achievement profiles were uneven. In part, the findings depended on the assumed structural model of ASCs. Implications for further research are discussed with special regard to factors influencing and moderating dimensional comparisons.
Background: We evaluated depression and social isolation assessed at the time of waitlisting as predictors of survival in heart transplant (HTx) recipients. Methods and Results: Between 2005 and 2006, 318 adult HTx candidates were enrolled in the Waiting for a New Heart Study, and 164 received transplantation. Patients were followed until February 2013. Psychosocial characteristics were assessed by questionnaires. Eurotransplant provided medical data at waitlisting, transplantation dates, and donor characteristics; hospitals reported medical data at HTx and date of death after HTx. During a median follow-up of 70 months (<1–93 months post-HTx), 56 (38%) of 148 transplanted patients with complete data died. Depression scores were unrelated to social isolation, and neither correlated with disease severity. Higher depression scores increased the risk of dying (hazard ratio = 1.07; 95% confidence interval, 1.01–1.15; P=0.032), which was moderated by social isolation scores (significant interaction term; hazard ratio = 0.985; 95% confidence interval, 0.973–0.998; P=0.022). These findings were maintained in multivariate models controlling for covariates (P values 0.020–0.039). Actuarial 1-year/5-year survival was best for patients with low depression who were not socially isolated at waitlisting (86% after 1 year, 79% after 5 years). Survival of those who were either depressed, or socially isolated, or both was lower, especially 5 years post-transplant (56%, 60%, and 62%, respectively). Conclusions: Low depression in conjunction with social integration at the time of waitlisting is related to enhanced chances of survival after HTx. Both factors should be considered for inclusion in standardized assessments and interventions for HTx candidates.
Numerous RCTs demonstrate that cognitive behavioral therapy (CBT) for depression is effective. However, these findings are not necessarily representative of CBT under routine care conditions. Routine care studies are usually not subjected to comparable standardizations: for example, therapists often do not follow treatment manuals, and patients are less homogeneous with regard to their diagnoses and sociodemographic variables. Results on the transferability of findings from clinical trials to routine care are sparse and point in different directions. As RCT samples are selective due to a stringent application of inclusion/exclusion criteria, comparisons between routine care and clinical trials must be based on a consistent analytic strategy. The present work demonstrates the merits of propensity score matching (PSM), which offers a way to reduce bias by balancing two samples on a range of pretreatment differences. The objective of this dissertation is to investigate the transferability of findings from RCTs to routine care settings.
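The core PSM step, estimating each patient's probability of group membership from pretreatment covariates and then pairing comparable cases, can be sketched as follows (a hand-rolled logistic regression and greedy 1:1 matching on simulated data; the variable names, caliper value, and single covariate are illustrative, not the dissertation's actual procedure):

```python
import numpy as np

def logistic_propensity(X, treated, lr=0.1, steps=2000):
    """Estimate propensity scores P(treated | X) with a small hand-rolled
    logistic regression (gradient ascent on the log-likelihood)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (treated - p) / len(X)
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def greedy_match(scores, treated, caliper=0.05):
    """1:1 greedy nearest-neighbour matching on the propensity score."""
    controls = [i for i in range(len(scores)) if not treated[i]]
    pairs = []
    for t in (i for i in range(len(scores)) if treated[i]):
        if not controls:
            break
        c = min(controls, key=lambda j: abs(scores[j] - scores[t]))
        if abs(scores[c] - scores[t]) <= caliper:
            pairs.append((t, c))
            controls.remove(c)  # match each control at most once
    return pairs

# Simulated data: symptom severity drives selection into the "RCT-like" group.
rng = np.random.default_rng(1)
n = 400
severity = rng.normal(0, 1, n)                               # pretreatment covariate
treated = (rng.random(n) < 1 / (1 + np.exp(-severity))).astype(float)
scores = logistic_propensity(severity.reshape(-1, 1), treated)
pairs = greedy_match(scores, treated.astype(bool))
```

After matching, covariate balance should be re-checked: the difference in mean severity between matched pairs is typically far smaller than the raw group difference, which is the bias reduction PSM aims at.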
This thesis examines the relationship between the linguistic sign and concepts. The lexicon, with its definitions of meaning, is the most obvious intersection between the linguistic and the conceptual system. The definition of meaning is treated as an empirical datum that can be described formally. The semantic analysis transforms the definition of meaning into a complex ordering structure. The method was developed from several theories of concepts, chiefly Raili Kauppi's concept theory and Formal Concept Analysis. As a result, the meanings of a lexicon yield a complex system of one- to n-place concepts. This conceptual system differs from familiar semantic networks in that it entirely dispenses with relations projected onto the system from outside, such as the so-called semantic relations. The only relations in this system are conceptual.
GIS – what can and what can’t it say about social relations in adaptation to urban flood risk?
(2017)
Urban flooding cannot be avoided entirely and in all areas, particularly in coastal cities; adaptation to the growing risk is therefore necessary. Knowledge of risk based on Geographical Information Systems (GIS) informs a location-based approach to adaptation to climate risk. It allows managing city-wide coordination of adaptation measures, reducing adverse impacts of local strategies on neighbouring areas to a minimum. Quantitative assessments dominate GIS applications in flood risk management, for instance to demonstrate the distribution of people and assets in a flood-prone area. Qualitative, participatory approaches to GIS are on the rise but have not yet been applied in the context of flooding. The overarching research question of this working paper is: what can GIS, and what can it not, say about social relations in adaptation to urban flood risk? The use of GIS in risk mapping has exposed environmental injustices. GIS applications further allow modelling future flood risk as a function of demographic and land use changes, and combining it with decision support systems (DSS). While such GIS applications provide invaluable information for urban planners steering adaptation, they fall short of revealing the social relations that shape individual and household adaptation decisions. The relevance of networked social relations in adaptation to flood risk has been demonstrated in case studies, and extensively in the literature on organizational learning and adaptation to change. The purpose of this literature review is to identify the types of social relations that shape adaptive capacities towards urban flood risk which cannot be identified in a conventional GIS application.
Dysfunctional eating behavior is a major risk factor for developing all sorts of eating disorders. Food craving is a concept that may help to better understand why and how these and other eating disorders become chronic conditions through non-homeostatically driven mechanisms. As obesity affects people worldwide, cultural differences must be acknowledged to apply proper therapeutic strategies. In this work, we adapted the Food Craving Inventory (FCI) to the German population. We performed a factor analysis of an adaptation of the original FCI in a sample of 326 men and women and could replicate the factor structure of the FCI in a German population: the factor extraction procedure produced a solution that reproduces the four factors described in the original inventory. Our instrument presents high internal consistency as well as significant correlations with measures of convergent and discriminant validity. The FCI-Deutsch (FCI-DE) is a valid instrument to assess craving for particular foods in Germany, and it could therefore prove useful in clinical and research practice in the field of obesity and eating behaviors.
Interaction between the Hypothalamic-Pituitary-Adrenal Axis and the Circadian Clock System in Humans
(2017)
Rotation of the Earth creates day and night cycles of 24 h. Endogenous circadian clocks sense these light/dark rhythms, and the master pacemaker situated in the suprachiasmatic nucleus of the hypothalamus entrains physical activities according to this information. The circadian machinery is built from transcriptional/translational feedback loops generating oscillations in all nucleated cells of the body. In addition, unexpected environmental changes, called stressors, also challenge living systems. A response to these stimuli is provided immediately via the autonomic nervous system and more slowly via the hypothalamic–pituitary–adrenal (HPA) axis. When the HPA axis is activated, circulating glucocorticoids are elevated and regulate organ activities in order to maintain survival of the organism. Both the clock and the stress systems are essential for continuity and interact with each other to keep internal homeostasis. The physiological interactions between the HPA axis and the circadian clock system have mainly been addressed in animal studies, which focus on the effects of stress and circadian disturbances on cardiovascular, psychiatric, and metabolic disorders. Although such studies make it possible to test in the whole organism, to apply techniques that would be unacceptable in humans, and to control and manipulate parameters to a high degree, the generalizability of the results to humans remains debated. On the other hand, studies conducted with cell lines cannot truly reflect the conditions occurring in a living organism. Thus, human studies are absolutely necessary to investigate the mechanisms involved in stress and circadian responses. The studies presented in this thesis were intended to determine the effects of cortisol, as an end product of the HPA axis, on PERIOD (PER1, PER2 and PER3) transcripts as circadian clock genes in healthy humans. The expression levels of PERIOD genes were measured in whole blood under baseline conditions and after stress.
The results presented here have provided a better understanding of the transcriptional programming regulated by pulsatile cortisol under standard conditions and of the short-term effects of cortisol increases on circadian clocks after acute stress. These findings also draw attention to inter-individual variations in the stress response as well as to non-circadian functions of PERIOD genes in the periphery, which need to be examined in detail in the future.
Long-Term Memory Updating: The Reset-of-Encoding Hypothesis in List-Method Directed Forgetting
(2017)
People's memory for new information can be enhanced by cuing them to forget older information, as is shown in list-method directed forgetting (LMDF). In this task, people are cued to forget a previously studied list of items (list 1) and to learn a new list of items (list 2) instead. Such cuing typically enhances memory for the list 2 items and reduces memory for the list 1 items, which reflects effective long-term memory updating. This review focuses on the reset-of-encoding (ROE) hypothesis as a theoretical explanation of the list 2 enhancement effect in LMDF. The ROE hypothesis is based on the finding that encoding efficacy typically decreases with the number of encoded items, and it assumes that providing a forget cue after study of some items (e.g., list 1) resets the encoding process and makes encoding of subsequent items (e.g., early list 2 items) as effective as encoding of previously studied (e.g., early list 1) items. The review provides an overview of current evidence for the ROE hypothesis, drawn from recent behavioral, neuroscientific, and modeling studies that examined LMDF at both the item and the list level. The findings support the view that ROE plays a critical role in the list 2 enhancement effect in LMDF. Alternative explanations of the effect and the generalizability of ROE to other experimental tasks are discussed.
Background: Psychotherapy is successful for the majority of patients, but not for every patient. Hence, further knowledge is needed on how treatments should be adapted for those who do not profit or who deteriorate. In recent years, prediction tools and feedback interventions have been part of a trend towards more personalized approaches in psychotherapy. Research on psychometric prediction and feedback into ongoing treatment has the potential to enhance treatment outcomes, especially for patients with an increased risk of treatment failure or drop-out. Methods/design: The research project investigates, in a randomized controlled trial, the effectiveness as well as moderating and mediating factors of psychometric feedback to therapists. A total of 423 patients who applied for cognitive-behavioral therapy at the psychotherapy clinic of the University of Trier and suffer from a depressive and/or an anxiety disorder (SCID interviews) will be included. The patients will be randomly assigned to one therapist as well as to one of two intervention groups (CG, IG2). An additional intervention group (IG1) will be generated from an existing archival data set via propensity score matching. Patients of the control group (CG; n = 85) will be monitored with regard to psychological impairment, but therapists will not be provided with any feedback about the patients' assessments. In both intervention groups (IG1: n = 169; IG2: n = 169), the therapists are provided with feedback about the patients' self-evaluations in a computerized feedback portal. Therapists of the IG2 will additionally be provided with clinical support tools, which will be developed in this project on the basis of existing systems. Therapists will also be provided with a personalized treatment recommendation, based on similar patients (nearest neighbors), at the beginning of treatment.
Besides the general effectiveness of feedback and the clinical support tools for negatively developing patients, further mediating and moderating variables of this feedback effect will be examined: treatment length, frequency of feedback use, therapist effects, therapists' experience, attitude towards feedback, and congruence of therapist and patient evaluations of progress. Additional procedures will be implemented to assess treatment adherence as well as the reliability of diagnoses and to include them in the analyses. Discussion: The current trial tests a comprehensive feedback system which combines precision mental health predictions with routine outcome monitoring and feedback tools in routine outpatient psychotherapy. It adds to previous feedback research a stricter design, by investigating another repeated-measurement CG as well as a stricter control of treatment integrity. It also includes a structured clinical interview (SCID) and controls for comorbidity (within depression and anxiety). This study further investigates moderators (attitudes towards and use of the feedback system, diagnoses) and mediators (therapists' awareness of negative change, treatment length) in one study.
In this thesis we study the optimization problem of optimally orienting orthotropic materials in the skin of three-dimensional shell structures. The goal of the optimization is to minimize the overall compliance of the structure, which corresponds to searching for the stiffest possible design. Both the mathematical and the mechanical foundations are compiled in compact form and, building on them, new extensions of pointwise-formulated optimization methods are developed and implemented, both gradient-based and based on mechanical principles. The presented methods are tested and compared on the example of an aircraft wing model of practically relevant problem size. Finally, the investigated methods are examined in their coupling with a topology optimization method based on the topological gradient.
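The optimization goal described above can be stated compactly in the standard compliance-minimization form (generic notation, not the thesis's own symbols):

```latex
\min_{\theta}\; c(\theta) = f^{\top} u(\theta)
\qquad \text{subject to} \qquad K(\theta)\, u(\theta) = f ,
```

where \(\theta(x)\) is the pointwise orientation angle of the orthotropic material in the shell, \(K(\theta)\) the resulting stiffness matrix, \(u\) the displacement vector, and \(f\) the load vector; minimizing the compliance \(c\) is equivalent to seeking the stiffest design under the given load.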
A three-month achievement-motivation intervention was carried out with pupils across seven grade levels at six primary and two secondary schools. The intervention comprised 25.5 hours and was based on a training programme which, in addition to didactic impulses for teachers, aimed above all at strengthening pupils with regard to self-perception, self-efficacy expectations, causal attribution of successes and failures, social relationships, and goal setting. The study's two underlying hypotheses state the expectation that, after completion of the intervention, first the achievement motivation and second the well-being (flourishing) of the pupils would increase sustainably. Data were collected at three measurement points (pre- and post-test, and a follow-up six months after the end of the intervention). Neither hypothesis was confirmed in the empirical evaluation (RM-ANOVA). Supplementary exploratory analyses (t-tests and cluster analyses) showed isolated tendencies in the direction of the hypotheses but are not conclusive. In light of these findings, a qualitative content analysis of the written feedback from the participating teachers was conducted after the study. Five success-critical factors were identified (teacher commitment, level of effort, role of the pupils, project organization, and content and methodology of the intervention), attention to which appears indispensable for the success of positive-psychology interventions in organizations. The findings of the qualitative content analysis finally lead to the conclusion that, owing to a lack of programme integrity, no statement can be made about the actual effectiveness of the training. The thesis ends with recommendations for the optimal design of positive-psychology interventions in educational organizations.
The search for relevant determinants of knowledge acquisition has a long tradition in educational research, with systematic analyses having started over a century ago. To date, a variety of relevant environmental and learner-related characteristics have been identified, providing a wide body of empirical evidence. However, there are still some gaps in the literature, which are highlighted in the current dissertation. The dissertation includes two meta-analyses, summarizing the evidence on the effectiveness of electrical brain stimulation and on the effects of prior knowledge on later learning outcomes, and one empirical study employing latent profile transition analysis to investigate changes in conceptual knowledge over time. The results from the three studies demonstrate how learning outcomes can be advanced by input from the environment and that they are highly related to the students' level of prior knowledge. It is concluded that environmental and learner-related variables impact both the biological and the cognitive processes underlying knowledge acquisition. Based on the findings from the three studies, methodological and practical implications are provided, followed by an outline of four recommendations for future research on knowledge acquisition.
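The meta-analytic step mentioned above, pooling study-level effects, can be sketched with inverse-variance (fixed-effect) weighting; the effect sizes and variances below are made up for illustration and are not the dissertation's data:

```python
import numpy as np

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled effect size,
    with its standard error and a 95% confidence interval."""
    w = 1.0 / np.asarray(variances)      # precision weights
    est = float(np.sum(w * effects) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    return est, se, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical study-level standardized mean differences and their variances.
d = [0.42, 0.30, 0.55, 0.18]
v = [0.02, 0.05, 0.04, 0.03]
est, se, ci = fixed_effect_pool(d, v)
```

In practice a random-effects model (e.g. DerSimonian-Laird) is often preferred when between-study heterogeneity is expected; the fixed-effect version above shows only the basic weighting.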
The first part of this thesis offers a theoretical foundation for the analysis of Tolkien's texts. Each of the three fields of interest, nostalgia, utopia, and the pastoral tradition, is introduced in a separate chapter. Special attention is given to the interrelations of the three fields. Their history, meaning, and functions are briefly elaborated, and definitions applicable to their occurrences in fantasy texts are reached. In doing so, new categories and terms are proposed that enable a detailed analysis of the nostalgic, pastoral, and utopian properties of Tolkien's works. As nostalgia and utopia are important ingredients of pastoral writing, they are each introduced first and are finally related to a definition of the pastoral. The main part of this thesis applies the definitions and insights reached in the theoretical chapters to Tolkien's The Lord of the Rings and The Hobbit. This part is divided into three main sections. Again, the order of the chapters follows the line of argumentation. The first section contains the analysis of pastoral depictions in the two texts. Given the separation of the pastoral into different categories, which were outlined in the theoretical part, the chapters examine bucolic and georgic pastoral creatures and landscapes before turning to non-pastoral depictions, which are subdivided into the antipastoral and the unpastoral. A separate chapter looks at the positions and functions of the bucolic and georgic pastoral in the primary texts. This analysis is followed by a chapter on men's special position in Tolkien's mythology, as their depiction reveals their potential to be both pastoral and antipastoral. The second section of the analytical part is concerned with the role of nostalgia within pastoral culture. The focus is laid on the meaning and function of the different kinds of nostalgia, defined in the theoretical part, that are detectable in bucolic and georgic pastoral cultures.
Finally, the analysis turns to the utopian potential of Tolkien's mythology. Again, the focus lies on the pastoral and non-pastoral creatures. Their utopian and dystopian visions are presented and contrasted. In this way, different kinds of utopian vision are detected and set in relation to the overall dystopian fate of Tolkien's fictional universe. Drawing on the results of this thesis and on Terry Gifford's ecocritical work, the final chapter argues that Tolkien's texts can be defined as modern pastorals. The connection between Tolkien's work and pastoral literature made explicit in the analysis is thus cemented in generic terms. The conclusion presents a summary of the central findings of this thesis and introduces questions for further study.
Exhaustion is a prominent, unspecific symptom with numerous accompanying symptoms (e.g. pain, sleep disturbances, irritability, dejection). Common concepts of exhaustion-related illnesses and syndromes are frequently criticized with regard to their discriminative power or structure. The causes of exhaustion are manifold, and treatment can only proceed after thorough differential diagnostics. Based on adaptation-related stress models, the development of exhaustion can be described and classified into three forms (I: reversible, II: predisposed, III: emotional-dysregulatory). Post-stress symptoms (e.g. "weekend migraine", "vacation infections") may represent a form of exhaustion caused by a central depletion of noradrenaline levels. The present thesis examined the reliability of the Neuropattern exhaustion scale as well as the relationships between exhaustion, stress, long-term health status, and post-stress symptoms. For this purpose, data were used from outpatients, inpatients, and employees who participated in a randomized clinical trial on Neuropattern diagnostics, supplemented by data from healthy individuals collected to establish a norm sample. The Neuropattern exhaustion scale proved to be a reliable and valid measure. It was an indicator of direct, indirect, and intangible health costs (e.g. increased visits to physicians and therapists, medication intake, inability to work, and reduced mental and physical well-being). Both stress and exhaustion predicted health status over the course of twelve months. Notably, the relationship between stress and long-term health status was mediated primarily by exhaustion.
Finally, the prevalence of post-stress symptoms was determined in healthy individuals (2.9%), outpatients (20%) and inpatients (34.7%). Here, too, the strongest predictor of post-stress symptoms was not stress but exhaustion. Models of the psychophysiological stress response can explain the development of exhaustion and improve the diagnosis and treatment of stress-related health disorders. The Neuropattern exhaustion scale presented here is a reliable instrument, well suited for practical use, which can be employed for the indication and validation of preventive and therapeutic measures. Depending on the form of exhaustion, various measures of regenerative, instrumental or cognitive stress management, dietary supplements and pharmacotherapy are available.
The aim of the present dissertation was the detailed and systematic exploration, description and analysis of relationally conditioned socio-medial inequalities among adolescent users of social network platforms (WhatsApp, Facebook, Snapchat, etc.). Within the qualitative study, a total of six problem-centred individual interviews and three problem-centred group discussions with adolescents aged 15 to 20, as well as one group discussion with educational professionals, were conducted and evaluated by means of content analysis. The present work focuses on the conditions and interactions between the individual characteristics of adolescent users (interests, motives, usage patterns, competencies) and their relational characteristics (relationships in online social networks), as well as the resulting social resources and risks. According to the findings of the dissertation, the relational characteristics contribute to the reproduction of social inequality structures in two respects: First, they influence access to and the evaluation of media-mediated information, since information on the platforms circulates predominantly within the mediatized relationship network. Second, the relational characteristics substantially co-determine which competencies and preferences adolescents acquire in dealing with social network platforms. Regarding media use and media effects, the following inequality-relevant findings can be stated for adolescents with a low level of education: delayed adoption of new platforms, more intensive use, lower usage competencies, increased attention-seeking, and a primarily entertainment-oriented use. In addition, they predominantly exhibit homogeneous network structures, which adversely affects their access to media-mediated information and its evaluation.
For adolescents with a high level of education, on the other hand, markedly more positive reinforcement effects of media use can be observed.
DNA methylation, through 5-methyl- and 5-hydroxymethylcytosine (5mC and 5hmC), is considered to be one of the principal interfaces between the genome and our environment, and it helps explain phenotypic variation in human populations. Initial reports of large differences in methylation level in genomic regulatory regions, coupled with clear gene expression data in both imprinted genes and malignant diseases, provided easily dissected molecular mechanisms for switching genes on or off. However, a more subtle process is becoming evident, in which small (<10%) changes to intermediate methylation levels are associated with complex disease phenotypes. This has resulted in two clear methylation paradigms. The latter "subtle change" paradigm is rapidly becoming the epigenetic hallmark of complex disease phenotypes, although we are currently hampered by a lack of data addressing the true biological significance and meaning of these small differences. The initial expectation of rapidly identifying mechanisms linking environmental exposure to a disease phenotype led to numerous observational/association studies being performed. Although this expectation remains unmet, there is now a growing body of literature on specific genes suggesting wide-ranging transcriptional and translational consequences of such subtle methylation changes. Data from the glucocorticoid receptor (NR3C1) have shown that a complex interplay between DNA methylation, extensive 5'UTR splicing and microvariability gives rise to the overall level and relative distribution of the total and N-terminal protein isoforms generated. Additionally, the presence of multiple AUG translation initiation codons throughout the complete, processed mRNA enables translational variability, thereby enhancing the translational isoforms and the resulting protein isoform diversity, and providing a clear link between small changes in DNA methylation and significant changes in protein isoforms and cellular locations.
Methylation changes in the NR3C1 CpG island alter NR3C1 transcription and eventually the protein isoforms in the tissues, resulting in subtle but visible physiological variability. This implies that external environmental stimuli act through subtle methylation changes, with transcriptional microvariability as the underlying mechanism, to fine-tune total NR3C1 protein levels. The ubiquitous distribution of genes with a structure similar to NR3C1, combined with an increasing number of studies linking subtle methylation changes in specific genes with wide-ranging transcriptional and translational consequences, suggested a more genome-wide spread of subtle DNA methylation changes and transcription variability. The subtle methylation paradigm and the biological relevance of such changes were supported by two epigenetic animal models, which linked small methylation changes to either psychopathological or immunological effects. The first model, rats subjected to maternal deprivation, showed long-term behavioural and stress response changes. A second model, exposing mice to early life infection with H1N1, illustrated long-term immunological effects. Both models displayed subtle changes within the methylome, indicating that early life adversity and early life viral infection "programmed" the CNS and the innate immune response, respectively, via subtle genome-wide DNA methylation changes. The research presented in this thesis investigated the ever-growing roles of DNA methylation; the physiological and functional relevance of small, subtle genome-wide DNA methylation changes, in particular for the CNS (MD model) and the immune system (early life viral infection model); and the evidence available, particularly from the glucocorticoid receptor, of the cascade of events initiated by such subtle methylation changes, as well as addressing the underlying question as to what represents a genuine, biologically significant difference in methylation.
Witnesses who have not observed a crime but only perceived it auditorily are referred to as earwitnesses. In criminal proceedings, earwitnesses are given the task of recognizing the perpetrator's voice in an acoustic line-up (voice line-up). Forensic practice shows that earwitnesses cope with this task with varying degrees of success, without a clear pattern emerging. Earwitness research, however, provides evidence that musical training improves the ability to recognize speakers.
The aim of this work is to examine whether the degree of musical perceptual competence allows a prognosis of speaker-recognition performance.
To test this, 60 participants took part both in a "musicality test" in the form of the Montreal Battery for the Evaluation of Amusia (MBEA) and in a target-present voice line-up. Regression models showed that the probability of correct speaker recognition rises with the MBEA test score. This test score permits a significant prognosis of speaker-recognition performance. The duration of musical training, also collected via questionnaires, does not allow a significant prognosis. The experiment also shows that the duration of musical training explains the musicality-test score only to a limited extent.
These observations lead to the conclusion that, when assessing earwitnesses, direct testing of musical perceptual ability is preferable to an inference based on musical-biographical information.
Flexibility and spatial mobility of labour are central characteristics of modern societies which contribute not only to higher overall economic growth but also to a reduction of interregional employment disparities. For these reasons, there is the political will in many countries to expand labour market areas, resulting especially in an overall increase in commuting. The picture of the various, unintended long-term consequences of commuting on individuals is, however, relatively unclear. Therefore, in recent years, the journey to work has gained considerable attention, especially in the study of health and well-being. Empirical analyses based on longitudinal as well as European data on how commuting may affect health and well-being are nevertheless rare. The principal aim of this thesis is, thus, to address this question with regard to Germany using data from the Socio-Economic Panel. Chapter 2 empirically investigates the causal impact of commuting on absence from work due to sickness-related reasons. Whereas an exogenous change in commuting distance does not affect the number of absence days of those individuals who commute short distances to work, it increases the number of absence days of those employees who commute middle (25–49 kilometres) or long distances (50 kilometres and more). Moreover, our results highlight that commuting may deteriorate an individual's health. However, this effect is not sufficient to explain the observed impact of commuting on absence from work. Chapter 3 explores the relationship between commuting distance and height-adjusted weight and sheds some light on the mechanisms through which commuting might affect individual body weight. We find no evidence that commuting leads to excess weight. Compensating health behaviour of commuters, especially healthy dietary habits, could explain the non-relationship of commuting and height-adjusted weight.
In Chapter 4, a multivariate probit approach is used to estimate recursive systems of equations for commuting and health-related behaviours. Controlling for potential endogeneity of commuting, the results show that long distance commutes significantly decrease the propensity to engage in health-related activities. Furthermore, unobservable individual heterogeneity can influence both the decision to commute and healthy lifestyle choices. Chapter 5 investigates the relationship between commuting and several cognitive and affective components of subjective well-being. The results suggest that commuting is related to lower levels of satisfaction with family life and leisure time which can largely be ascribed to changes in daily time use patterns, influenced by the work commute.
The Firepower of Work Craving: When Self-Control Is Burning under the Rubble of Self-Regulation
(2017)
Work craving theory addresses how work-addicted individuals direct great emotion-regulatory efforts to weave their addictive web of working. They crave work for two main emotional incentives: to overcompensate low self-worth and to escape (i.e., reduce) negative affect, which is strategically achieved through neurotic perfectionism and compulsive working. Work-addicted individuals' strong persistence and self-discipline with respect to work-related activities suggest strong skills in volitional action control. However, their inability to disconnect from work implies low volitional skills. How can work-addicted individuals have poor and strong volitional skills at the same time? To answer this paradox, we elaborated on the relevance of two different volitional modes in work craving: self-regulation (self-maintenance) and self-control (goal maintenance). Four hypotheses were derived from Wojdylo's work craving theory and Kuhl's self-regulation theory: (H1) Work craving is associated with a combination of low self-regulation and high self-control. (H2) Work craving is associated with symptoms of psychological distress. (H3) Low self-regulation is associated with psychological distress symptoms. (H4) Work craving mediates the relationships between self-regulation deficits and psychological distress symptoms at high levels of self-control. Additionally, we aimed at supporting the discriminant validity of work craving with respect to work engagement by showing their different volitional underpinnings. Results of the two studies confirmed our hypotheses: whereas work craving was predicted by high self-control and low self-regulation and associated with higher psychological distress, work engagement was predicted by high self-regulation and high self-control and associated with lower symptoms of psychological distress. Furthermore, work styles mediated the relationship between volitional skills and symptoms of psychological distress.
Based on these new insights, several suggestions for prevention and therapeutic interventions for work-addicted individuals are proposed.
Entrepreneurship is a process of discovering and exploiting opportunities, during which two crucial milestones emerge: in the very beginning, when entrepreneurs start their businesses, and in the end, when they determine the future of the business. This dissertation examines the establishment and exit of newly created as well as of acquired firms, in particular the behavior and performance of entrepreneurs at these two important stages of entrepreneurship. The first part of the dissertation investigates the impact of characteristics at the individual and at the firm level on an entrepreneur's selection of entry modes across new venture start-up and business takeover. The second part of the dissertation compares firm performance across different entrepreneurship entry modes and then examines management succession issues that family firm owners have to confront. This study has four main findings. First, previous work experience in small firms, same-sector experience, and management experience affect an entrepreneur's choice of entry modes. Second, the choice of entry mode for hybrid entrepreneurs is associated with their characteristics, such as occupational experience, level of education, and gender, as well as with the characteristics of their firms, such as location. Third, business takeovers survive longer than new venture start-ups, and both entry modes have different survival determinants. Fourth, the family firm's decision to recruit a family or a nonfamily manager is determined not only by a manager's abilities, but also by the relationship between the firm's economic and non-economic goals and the measurability of these goals. The findings of this study extend our knowledge on entrepreneurship entry modes by showing that new venture start-ups and business takeovers are two distinct entrepreneurship entry modes in terms of their founders' profiles, their survival rates and their survival determinants.
Moreover, this study contributes to the literature on top management hiring in family firms: it establishes the family firm's non-economic goals as another factor that impacts the family firm's hiring decision between a family and a nonfamily manager.
The subject of the present work is the investigation of the lexis of the late-medieval Luxembourg account books under the premise of urbanity. Since no established methodology was available by which the lexis could be divided into material relevant or irrelevant to the analysis, a dedicated methodology was developed within this work, drawing on concepts from linguistics and historical scholarship. On this basis, the study corpus was compiled from the account books of the city of Luxembourg, which survive almost without gaps for the years 1388–1500, with the aim of analysing specifically urban lexis. The analysis ultimately pursued a threefold objective: first, the investigation of the lexis with regard to the distribution of types and tokens in domains defined as specifically urban; second, the compilation of a glossary intended to serve as a text-philological tool for accessing the account books. In addition, the historical insights that could be gained through the respective vocabulary analyses were also discussed.
Why do some people become entrepreneurs while others stay in paid employment? Searching for a distinctive set of entrepreneurial skills that matches the profile of the entrepreneurial task, Lazear introduced a theoretical model featuring skill variety for entrepreneurs. He argues that because entrepreneurs perform many different tasks, they should be multi-skilled in various areas. First, this dissertation provides the reader with an overview of previous relevant research results on skill variety with regard to entrepreneurship. The majority of the studies discussed focus on the effects of skill variety. Most studies come to the conclusion that skill variety mainly affects the decision to become self-employed. Skill variety also favors entrepreneurial intentions. Less clear are the results with regard to the influence of skill variety on entrepreneurial success: measured on the basis of income and survival of the company, a negative or U-shaped correlation is shown. Within the empirical part of this dissertation, three research goals are tackled. First, this dissertation investigates whether a variety of early interests and activities in adolescence predicts subsequent variety in skills and knowledge. Second, the determinants of skill variety and of the variety of early interests and activities are investigated. Third, skill variety is tested as a mediator of the gender gap in entrepreneurial intentions. This dissertation employs structural equation modeling (SEM) using longitudinal data collected over ten years from Finnish secondary school students aged 16 to 26. The number of functional areas in which a participant had prior educational or work experience is used as the indicator of skill variety. The results of the study suggest that a variety of early interests and activities leads to skill variety, which in turn leads to entrepreneurial intentions.
Furthermore, the study shows that an early variety is predicted by openness and an entrepreneurial personality profile. Skill variety is also encouraged by an entrepreneurial personality profile. From a gender perspective, there is indeed a gap in entrepreneurial intentions. While a positive correlation was found between the early variety of subjects and being female, there are negative correlations between the other two variables, education- and work-related skill variety, and being female. The negative effect of work-related skill variety is the strongest. The results of this dissertation are relevant for research, politics, educational institutions and special entrepreneurship education programs. The results are also important for self-employed parents who plan the succession of the family business. Educational programs promoting entrepreneurship can be optimized on the basis of the results of this dissertation by making the transmission of a variety of skills a central goal. A focus on teenagers, as well as a preselection based on the personality profile of the participants, could also increase their success. Regarding the gender gap, state policies should aim to provide women with more incentives to acquire skill variety. For this purpose, education programs can be tailored specifically to women and self-employment can be presented as an attractive alternative to dependent employment.
A phenomenon of recent decades is that digital marketplaces on the Internet are establishing themselves for a wide variety of products and services. Recently, it has become possible for private individuals to invest in young and innovative companies (so-called "start-ups"). Via Internet portals, potential investors can examine various start-ups and then directly invest in their chosen start-up. In return, investors receive a share in the firm's profit, while companies can use the raised capital to finance their projects. This new way of financing is called "Equity Crowdfunding" (ECF) or "Crowdinvesting". The aim of this dissertation is to provide empirical findings about the characteristics of ECF. In particular, the question of whether ECF is able to overcome geographic barriers, the interdependence of ECF and capital structure, and the risk of failure for funded start-ups and their chances of receiving follow-up funding by venture capitalists or business angels will be analyzed. The results of the first part of this dissertation show that investors in ECF prefer local companies. In particular, investors who invest larger amounts have a stronger tendency to invest in local start-ups. The second part of the dissertation provides first indications of the interdependencies between capital structure and ECF. The analysis makes clear that the capital structure is not a determinant for undertaking an ECF campaign. The third part of the dissertation analyzes the success of companies financed by ECF in a country comparison. The results show that after a successful ECF campaign German companies have a higher chance of receiving follow-up funding by venture capitalists compared to British companies. The probability of survival, however, is slightly lower for German companies. The results provide relevant implications for theory and practice.
The existing literature in the area of entrepreneurial finance will be extended by insights into investor behavior, additions to the capital structure theory and a country comparison in ECF. In addition, implications are provided for various actors in practice.
This dissertation looked at both design-based and model-based estimation for rare and clustered populations using the idea of the ACS design. The ACS design (Thompson, 2012, p. 319) starts with an initial sample that is selected by a probability sampling method. If any of the selected units meets a pre-specified condition, its neighboring units are added to the sample and observed. If any of the added units meets the pre-specified condition, its neighboring units are further added to the sample and observed. The procedure continues until there are no more units that meet the pre-specified condition. In this dissertation, the pre-specified condition is the detection of at least one animal in a selected unit. For the design-based estimation, three estimators were proposed under three specific design settings. The first design was a stratified strip ACS design that is suitable for aerial or ship surveys. This was a case study in estimating population totals of African elephants. In this case, units (quadrats) were observed only once during an aerial survey. The Des Raj estimator (Raj, 1956) was modified to obtain an unbiased estimate of the population total. The design was evaluated using simulated data with different levels of rarity and clusteredness. The design was also evaluated on real data on African elephants obtained from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013. In this study, the order in which the samples were observed was maintained. Re-ordering the samples by making use of Murthy's estimator (Murthy, 1957) can produce more efficient estimates, and is hence a possible extension of this study. The computational cost resulting from the n! permutations in Murthy's estimator, however, needs to be taken into consideration. The second setting was one in which there exists an auxiliary variable that is negatively correlated with the study variable. Here, Murthy's estimator (Murthy, 1964) was modified.
Situations in which the modified estimator is preferable were given both in theory and in simulations using simulated data and two real data sets. The study variable for the real data sets was the distribution and counts of oryx and wildebeest, obtained from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013. Temperature was the auxiliary variable for both study variables; temperature data were obtained from the R package raster. The modified estimator provided more efficient estimates with lower bias compared to the original Murthy's estimator (Murthy, 1964). The modified estimator was also more efficient than the modified HH and modified HT estimators of Thompson (2012, p. 319). In this study, one auxiliary variable is considered. A fruitful area for future research would be to incorporate multi-auxiliary information at the estimation phase of an ACS design. This could, in principle, be done by using, for instance, a multivariate extension of the product estimator (Singh, 1967) or the generalized regression estimator (Särndal et al., 1992). The third case under design-based estimation studied the conjoint use of the stopping rule (Gattone and Di Battista, 2011) and of sampling without replacement of clusters (Dryver and Thompson, 2007). Each of these two methods was proposed to reduce the sampling cost, although the use of the stopping rule results in biased estimates. Despite this bias, the new estimator resulted in a higher efficiency gain in comparison to the without-replacement-of-clusters design. It was also more efficient than the stratified design, which is known to reduce the final sample size when networks are truncated at stratum boundaries. The above evaluation was based on simulated and real data. The real data were the distribution and counts of hartebeest, elephants and oryx obtained in the same census as above. The bias attributable to the stopping rule has not been evaluated analytically.
This may not be straightforward, since the truncated network formed depends on the initial unit sampled (Gattone et al., 2016a). This, and the order of the bias, however, deserve further investigation, as they may help in understanding the effect of an increase in the initial sample size, together with the population characteristics, on the efficiency of the proposed estimator. Chapter four modeled data that were obtained using the stratified strip ACS (as described in sub-section 3.1). This was an extension of the model of Rapley and Welsh (2008) through the modeling of data obtained from a different design, the introduction of an auxiliary variable and the use of the without-replacement-of-clusters mechanism. Ideally, model-based estimation does not depend on the design, or rather on how the sample was obtained. This is, however, not the case if the design is informative, such as the ACS design. In this case, the procedure that was used to obtain the sample was incorporated in the model. Both model-based and design-based simulations were conducted using artificial and real data. The study and auxiliary variables for the real data were the distribution and counts of elephants collected during aerial censuses in parts of Kenya and Tanzania in October (dry season) and April (wet season) 2013, respectively. Areas of possible future research include predicting the population total of African elephants in all parks in Kenya. This can be achieved in an economical and reliable way by using the theory of SAE. Chapter five compared the different proposed strategies using the elephant data. Again, the study variable was the elephant data from October (dry season) 2013 and the auxiliary variable was the elephant data from April (wet season) 2013. The results show that the choice of a particular strategy depends on the characteristics of the population under study and on the level and direction of the correlation between the study and the auxiliary variable (if present).
One general area of the ACS design that still lags behind is the implementation of the design in the field, especially on animal populations. This is partly attributable to the challenges associated with field implementation, some of which were discussed in section 2.3. Green et al. (2010), however, provide new insights into undertaking the ACS design during an aerial survey, such as how the aircraft should turn while surveying neighboring units. A key point throughout the dissertation is the reduction of cost during a survey, which can be seen in the reduction of the number of units in the final sample (through the use of the stopping rule, the use of stratification and the truncation of networks at stratum boundaries) and in ensuring that units are observed only once (by using the without-replacement-of-clusters sampling technique). The cost of surveying edge units is assumed to be low, in which case the efficiency of the ACS design relative to the non-adaptive design is achieved (Thompson and Collins, 2002). This is, however, not the case in aerial surveys, as the aircraft flies at constant speed and height (Norton-Griffiths, 1978); hence the cost of surveying an edge unit is the same as the cost of surveying a unit that meets the condition of interest. The without-replacement-of-clusters technique plays a greater role in reducing the cost of sampling in such surveys. Other key points that motivated the sections of the dissertation include gains in efficiency (in all sections) and the practicability of the designs in the specific settings. Even though the dissertation focused on animal populations, the methods can just as well be applied to any population that is rare and clustered, such as in the study of forestry, plants, pollution, minerals and so on.
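The adaptive expansion rule underlying the ACS design (an initial probability sample that grows by adding the four neighbours of every sampled unit containing at least one animal, until no newly added unit is occupied) can be sketched in a few lines. This is a minimal illustration only; the grid, counts, initial sample size and seed below are invented for the example and are not data or code from the dissertation.

```python
import random

def adaptive_cluster_sample(grid, n_initial, seed=0):
    """Adaptive cluster sampling (ACS) on a 2D grid of animal counts.

    Draws a simple random initial sample of units; whenever a sampled
    unit satisfies the condition (at least one animal), its four
    neighbours are added and observed, and the expansion repeats until
    no newly added unit satisfies the condition.  Returns the set of
    observed (row, col) units, including edge units.
    """
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    units = [(r, c) for r in range(rows) for c in range(cols)]
    observed = set(rng.sample(units, n_initial))
    # Occupied units in the initial sample trigger the adaptive expansion.
    frontier = [u for u in observed if grid[u[0]][u[1]] > 0]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in observed:
                observed.add((nr, nc))      # edge units are observed too
                if grid[nr][nc] > 0:        # only occupied units expand further
                    frontier.append((nr, nc))
    return observed

# A small illustrative population with a single cluster of animals.
grid = [[0, 0, 0, 0],
        [0, 3, 2, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
sample = adaptive_cluster_sample(grid, n_initial=3, seed=1)
```

Whenever the initial sample hits the occupied cluster, the whole network plus its edge units ends up in `sample`; otherwise the sample stays at its initial size, which is exactly why ACS pays off only for rare, clustered populations.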
This dissertation details how Zeami (ca. 1363–ca. 1443) understood his adoption of the heavenly woman dance within the historical conditions of the Muromachi period. He adopted the dance based on performances by the Ōmi troupe player Inuō in order to expand his own troupe’s repertoire to include a divinely powerful, feminine character. In the first chapter, I show how Zeami, informed by his success as a sexualized child in the service of the political elite (chigo), understood the relationship between performer and audience in gendered terms. In his treatises, he describes how a player must create a complementary relationship between patron and performer (feminine/masculine or yin/yang) that escalates to an ecstasy of successful communication between the two poles, resembling sexual union. Next, I look at how Zeami perceived Inuō’s relationships with patrons, the daimyo Sasaki Dōyo in chapter two and shogun Ashikaga Yoshimitsu in chapter three. Inuō was influenced by Dōyo’s masculine penchant for powerful, awe-inspiring art, but Zeami also recognized that Inuō was able to complement Dōyo’s masculinity with feminine elegance (kakari and yūgen). In his relationship with Yoshimitsu, Inuō used the performance of subversion, both in his public persona and in the aesthetic of his performances, to maintain a rebellious reputation appropriate within the climate of conflict among the martial elite. His play “Aoi no ue” draws on the aristocratic literary tradition of the Genji monogatari, giving Yoshimitsu the role of Prince Genji and confronting him with the consequences of betrayal in the form of a demonic, because jilted, Lady Rokujō. This performance challenged Zeami’s early notion that the extreme masculinity of demons and elegant femininity as exemplified by the aristocracy must be kept separate in character creation.
In the fourth chapter, I show how Zeami also combined dominance (masculinity) and submission (femininity) in the corporal capacity of a single player when he adopted the heavenly woman dance. The heavenly woman dance thus complemented not only the masculinity of his male patrons with femininity but also the political power of his patrons with another dominant power, which plays featuring the heavenly woman dance label divine rather than masculine.
This study starts from the thesis that a difference exists between Weber's concept of an interpretive (verstehende) sociology and the substantive study "The Protestant Ethic and the Spirit of Capitalism" (PE), in the form of a surplus contribution on the side of the PE. This assumption rests on the observation that the PE offers various perspectives on the emergence of meaningful orientations of action, and that strategies for rendering them plausible can likewise be identified. Such connections were apparently addressed by Weber only in passing in his methodological writings, and the relevant passages give the impression that the question of the historicity of meaningful orientations of action receives attention merely as a premise or as a justification for the necessity of an interpretive grasp of meaning. This observation determines the further course of the investigation and led to an argumentative structure that can be described as a three-step: a) A discussion of the explanatory profile of Weber's concept of an interpretive sociology, together with examples of the presumed surplus contributions on the side of the PE, first serves to explicate the identified problem more precisely (see Section I). Building on this, and in view of the current state of research, b) those argumentative connections of the substantive research which in Section I initially point to a surplus contribution on the side of the PE prove to be problematic, or still unclarified in their logical relation to Weber's methodology. Here, a comparative examination of proponents of unity theses (cf. Prewo 1979, Schluchter 1998, Collins 1986a) and proponents of difference theses (cf. v. Schelting 1934, Bendix 1964, Kalberg 2001) shows that the current state of discussion continues to be characterized by open questions and inconsistencies (see Section II). 
Implicit answers to these problems of the current state of discussion can be gained via c) a renewed reconstructive look at the connections and plausibilization strategies contained in the PE. Here the strategy is two-sided: for one part of the identified problems, it is of particular importance to gain systematic insight into the connections contained in the PE (see Section III). The results obtained in this way serve as the basis for an adequate reconstruction of the methodological implementation and make it possible to understand how Weber sought to explain the phenomena placed at the focus of the research (see Section IV).
The Kunstgewerbeschule Pforzheim occupies a special position among the educational institutions founded in the 19th century to promote the artistic development of the trades. Its curriculum and course of training were oriented primarily towards the needs of the jewellery industry, established in Pforzheim since 1767, which played a decisive role in the founding and support of the school. The dissertation analyzes the conditions that led to the founding of the Pforzheim Kunstgewerbeschule in 1877, as well as the quality and methods of the artistic-technical training offered there, taking contemporary educational ideals into account. It then assesses the school's reputation among contemporaries and works out the significance of this institution for the Pforzheim jewellery industry. The period under consideration extends from 1877, the year the school was founded, to 1911, the year of death of its first director, Alfred Waag. Contemporary reports and archival materials, as well as the school's continuously expanded collection of teaching materials, form the basis of the investigation. A large proportion of the model pieces and many of the books and pattern works acquired for the artistic training of the students have survived to this day in archives and museums, and they testify to the quality and progressiveness of the institution. Above all in the areas of design and technique, the Kunstgewerbeschule Pforzheim set standards. Under the school's influence, designs for the local jewellery industry emerged that were tailored specifically to serial production and thus stand as examples of a successful alliance of art, technology, and economic viability. 
The collaboration of local jewellery manufacturers with teachers or graduates of the school could be documented, as could the successful participation of various students in supra-regional competitions for jewellery designs. Thanks to source-based research, connections could be demonstrated between the models perceived as exemplary, the design work at the school, and the jewellery industrially produced in Pforzheim. The frequently voiced accusation that Pforzheim firms had above all copied foreign jewellery designs and produced them cheaply by mechanized manufacturing techniques fails to recognize the artistic ambition of an industry that, for the aesthetic and technical training of its workers and apprentices, brought into being a school of applied arts that endures to this day under the name Hochschule Pforzheim - Gestaltung, Technik, Wirtschaft und Recht.
Automata theory is the study of abstract machines. It is a theory in theoretical computer science and discrete mathematics (a subject of study in mathematics and computer science). The word automata (the plural of automaton) comes from a Greek word meaning "self-acting". Automata theory is closely related to formal language theory [99, 101]. The theory of formal languages constitutes the backbone of the field now generally known as theoretical computer science. This thesis introduces a few types of automata and studies the classes of languages recognized by them. Chapter 1 is the road map, with introduction and preliminaries. In Chapter 2 we consider a few formal languages associated with graphs that have Eulerian trails. We place in the Chomsky hierarchy a few languages whose words combine the Eulerian property with some other properties. In Chapter 3 we consider jumping finite automata, i.e., finite automata in which the input head, after reading and consuming a symbol, can jump to an arbitrary position of the remaining input. We characterize the class of languages described by jumping finite automata in terms of special shuffle expressions and survey other equivalent notions from the existing literature. We also characterize some superclasses of this language class. In Chapter 4 we introduce boustrophedon finite automata, i.e., finite automata working on rectangular arrays (i.e., pictures) in a boustrophedon mode, as well as returning finite automata, which read the input line after line but, unlike boustrophedon finite automata, do not alternate the reading direction, i.e., always read from left to right. We establish close relationships with the well-established class of regular matrix (array) languages. We sketch possible applications to character recognition and kolam patterns. Chapter 5 deals with general boustrophedon finite automata and general returning finite automata, which read with different scanning strategies. 
We show that all 32 different variants describe only two different classes of array languages. We also introduce Mealy machines working on pictures and show how these can be used in a modular design of picture-processing devices. In Chapter 6 we compare three different types of regular grammars for array languages introduced in the literature: regular matrix grammars, (regular : regular) array grammars, and isometric regular array grammars, together with variants thereof, focusing on hierarchical questions. We also refine the presentation of (regular : regular) array grammars in order to clarify the interrelations. In Chapter 7 we outline further directions of research arising from each of the chapters.
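The jumping mode described for Chapter 3 can be made concrete with a small sketch (illustrative only, not taken from the thesis): because the head may consume any remaining occurrence of a symbol, acceptance depends only on the multiset of unread symbols, so a breadth-first search over (state, remaining-multiset) pairs decides membership. The two-state automaton `delta` below is the classic example whose jumping language is all words with equally many a's and b's.

```python
from collections import Counter, deque

def jfa_accepts(delta, start, finals, word):
    """Decide whether a nondeterministic jumping finite automaton accepts
    `word`.  In jumping mode the head may consume ANY remaining occurrence
    of a symbol, so only the multiset (not the order) of the unread input
    matters; we search over (state, remaining-multiset) pairs.

    delta: dict mapping (state, symbol) -> iterable of successor states.
    """
    def freeze(counter):
        return frozenset(counter.items())

    initial = Counter(word)
    seen = {(start, freeze(initial))}
    queue = deque([(start, initial)])
    while queue:
        state, remaining = queue.popleft()
        if not remaining and state in finals:
            return True          # all input consumed in a final state
        for symbol in list(remaining):
            for nxt in delta.get((state, symbol), ()):
                rest = remaining.copy()
                rest[symbol] -= 1
                if rest[symbol] == 0:
                    del rest[symbol]
                key = (nxt, freeze(rest))
                if key not in seen:
                    seen.add(key)
                    queue.append((nxt, rest))
    return False

# Two states: q0 --a--> q1, q1 --b--> q0.  Read conventionally this
# accepts (ab)*; with jumps it accepts every word with |w|_a = |w|_b,
# e.g. "ba", which no conventional run of this automaton accepts.
delta = {("q0", "a"): {"q1"}, ("q1", "b"): {"q0"}}
```

The extra power of the jump is visible in the example: the same transition table accepts "ba" only because the head may skip the leading b and return to it later.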
Educational researchers have intensively investigated students' academic self-concept (ASC) and self-efficacy (SE). Both constructs are part of the competence-related self-perceptions of students and are considered to support students' academic success and their career development in a positive manner (e.g., Abele-Brehm & Stief, 2004; Richardson, Abraham, & Bond, 2012; Schneider & Preckel, 2017). However, there is a lack of basic research on ASC and SE in higher education in general, and in undergraduate psychology courses in particular. Therefore, following the within-network and between-network approaches of construct validation (Byrne, 1984), the present dissertation comprises three empirical studies examining the structure (research question 1), measurement (research question 2), correlates (research question 3), and differentiation (research question 4) of ASC and SE in a total sample of N = 1243 psychology students. Concerning research question 1, results of confirmatory factor analyses (CFAs) implied that students' ASC and SE are domain-specific in the sense of multidimensionality, but also hierarchically structured, with a general factor at the apex, according to the nested Marsh/Shavelson model (NMS model; Brunner et al., 2010). Additionally, psychology students' SE to master specific psychological tasks in different areas of psychological application could be described by a two-dimensional model with six factors according to the multitrait-multimethod (MTMM) approach (Campbell & Fiske, 1959). With regard to research question 2, results revealed that the internal structure of ASC and SE could be validly assessed. However, the assessment of psychology students' SE should follow a task-specific measurement strategy. 
Results for research question 3 further showed that both constructs of psychology students' competence-related self-perceptions were positively correlated with achievement in undergraduate psychology courses if the specificity of the predictor (ASC, SE) corresponded to the measurement specificity of the criterion (achievement). Overall, ASC showed substantially stronger relations to achievement than SE. Moreover, there was evidence for negative paths (contrast effects) from achievement in one psychological domain to the ASC of another psychological domain, as postulated by the internal/external frame of reference (I/E) model (Marsh, 1986). Finally, building on research questions 1 to 3 (structure, measurement, and correlates of ASC and SE), psychology students' ASC and SE could be differentiated on an empirical level (research question 4). Implications for future research practices are discussed. Furthermore, practical implications for enhancing ASC and SE in higher education are proposed to support the academic achievement and career development of psychology students.
Monetary Policy During Times of Crisis - Frictions and Non-Linearities in the Transmission Mechanism
(2017)
For a long time it was believed that monetary policy would be able to maintain price stability and foster economic growth during all phases of the business cycle. The era of the Great Moderation, often also called the Volcker-Greenspan period, beginning in the mid-1980s, was characterized by a decline in the volatility of output growth and inflation among the industrialized countries. The term itself was first used by Stock and Watson (2003). Economists have long studied what triggered the decline in volatility and have pointed out several main factors. An important research strand points to structural changes in the economy, such as a decline of volatility in the goods-producing sector through better inventory controls, and developments in the financial sector and government spending (McConnell2000, Blanchard2001, Stock2003, Kim2004, Davis2008). While many believed that monetary policy was merely 'lucky' in its reaction to inflation and exogenous shocks (Stock2003, Primiceri2005, Sims2006, Gambetti2008), others reveal a more complex picture. Rule-based monetary policy (Taylor1993) that incorporates inflation targeting (Svensson1999) has been identified as a major source of inflation stabilization by increasing transparency (Clarida2000, Davis2008, Benati2009, Coibion2011). Apart from that, the mechanics of monetary policy transmission have changed. Giannone et al. (2008) compare the pre-Great Moderation era with the Great Moderation and find that the economies' reaction to monetary shocks has decreased. This finding is supported by Boivin et al. (2011). Similarly, Herrera and Pesavento (2009) show that monetary policy during the Volcker-Greenspan period was very effective in dampening the effects of exogenous oil price shocks on the economy, while this cannot be found for the period thereafter. Yet the subprime crisis unexpectedly hit economies worldwide and ended the era of the Great Moderation. 
Financial deregulation and innovation have given banks opportunities for excessive risk taking, weakened financial stability (Crotty2009, Calomiris2009), and led to the build-up of credit-driven asset price bubbles (SchularickTaylor2012). The Federal Reserve (FED), which was thought to be the omnipotent conductor of price stability and economic growth during the Great Moderation, failed to prevent a harsh crisis. Even more, it intensified the bubble with low interest rates following the Dotcom crisis of the early 2000s and misjudged the impact of its interventions (Taylor2009, Obstfeld2009). New results give a more detailed answer to the question of latitude for monetary policy raised by Bernanke and suggest the existence of non-linearities in the transmission of monetary policy. Weise (1999), Garcia and Schaller (2002), Lo and Piger (2005), Mishkin (2009), Neuenkirch (2013), and Jannsen et al. (2015) find that monetary policy is more potent during times of financial distress and recessions. Its effectiveness during 'normal times' is much weaker or even insignificant. This prompts the question whether these non-linearities limit central banks' ability to lean against bubbles and financial imbalances (White2009, Walsh2009, Boivin2010, Mishkin2011).
Digital libraries have become a central aspect of our lives. They provide us with immediate access to an amount of data that would have been unthinkable in the past. Computer support and the ability to aggregate data from different libraries enable small projects to maintain large digital collections on various topics. A central aspect of digital libraries is the metadata -- the information that describes the objects in the collection. Metadata are digital and can be processed and studied automatically. In recent years, several studies have considered different aspects of metadata. Many studies focus on finding defects in the data. Specifically, locating errors related to the handling of personal names has drawn attention. In most cases, these studies concentrate on the most recent metadata of a collection; for example, they look for errors in the collection at day X. This is a reasonable approach for many applications. However, to answer questions such as when the errors were added to the collection, we need to consider the history of the metadata itself. In this work, we study how the history of metadata can be used to improve the understanding of a digital library. To this end, we consider how digital libraries handle and store their metadata. Based on this information, we develop a taxonomy to describe the available historical data, that is, data on how the metadata records changed over time. We develop a system that identifies changes to metadata over time and groups them into semantically related blocks. We found that historical metadata is often unavailable. However, we were able to apply our system to a set of large real-world collections. A central part of this work is the identification and analysis of changes to metadata which corrected a defect in the collection. These corrections are the accumulated effort to ensure the data quality of a digital library. 
In this work, we present a system that automatically extracts corrections of defects from the set of all modifications. We present test collections containing more than 100,000 test cases which we created by extracting defects and their corrections from DBLP. These collections can be used to evaluate automatic approaches for error detection. Furthermore, we use these collections to study properties of defects, concentrating on defects related to the person-name problem. We show that many defects occur in situations where very little context information is available. This has major implications for automatic defect detection. We also show that the properties of defects depend on the digital library in which they occur, and we briefly discuss how corrected defects can be used to detect hidden or future defects. Besides the study of defects, we show that historical metadata can be used to study the development of a digital library over time, and we present several studies as examples. First, we describe the development of the DBLP collection over a period of 15 years. Specifically, we study how the coverage of different computer science subfields changed over time. We show that DBLP evolved from a specialized project to a collection that encompasses most parts of computer science. In another study, we analyze the impact of user emails on defect corrections in DBLP. We show that these emails trigger a significant amount of error corrections. Based on these data, we can draw conclusions on why users report a defective entry in DBLP.
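The first step of such a system, extracting field-level modifications between two snapshots of a record, can be sketched as follows (a minimal illustration assuming records are flat dicts; the names `record_diff`, `before`, and `after` are hypothetical, and the actual system additionally groups related changes into semantic blocks):

```python
def record_diff(old, new):
    """Field-level diff between two snapshots of a metadata record.

    Records are modelled here simply as dicts mapping field names to
    values.  Returns a list of (field, kind, old_value, new_value)
    tuples, where kind is 'added', 'removed', or 'modified'.
    """
    changes = []
    for field in sorted(set(old) | set(new)):
        if field not in old:
            changes.append((field, "added", None, new[field]))
        elif field not in new:
            changes.append((field, "removed", old[field], None))
        elif old[field] != new[field]:
            changes.append((field, "modified", old[field], new[field]))
    return changes

# A person-name correction of the kind collected in the DBLP test sets:
before = {"author": "J. Smiht", "title": "On Digital Libraries", "year": "2001"}
after = {"author": "J. Smith", "title": "On Digital Libraries", "year": "2001"}
```

Running `record_diff(before, after)` yields a single 'modified' entry for the author field; sequences of such entries across snapshots are the raw material from which corrections are identified.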
Prenatal, postnatal, and currently occurring chronic stress exposure are significant risk factors for mental and physical impairments in adulthood. The aim of this dissertation is to analyze the influence of stress across the lifespan (prenatal, postnatal, and current stress exposure) on various exhaustion variables and depressiveness, and to determine possible mediating effects of currently occurring stress on associations between prenatal or postnatal stress and exhaustion or depressiveness. To examine this question, data were collected from chronically stressed teachers (N = 186; 67.7% female) without a diagnosed mental disorder, as well as from general-practice patients (N = 473; 59% female) and clinical inpatients (N = 284; 63.7% female) with at least one stress-related mental health disorder. Pre- and postnatal stress, subjective exhaustion, and depressiveness were assessed in all samples; current stress exposure and post-stress symptoms were assessed in the patient samples. In addition, conceptual endophenotypes were operationalized as a psychobiological measure of exhaustion in both patient samples, and overnight activity of the parasympathetic nervous system was operationalized as a measure of vagal recovery in the general-practice sample. For the teachers, univariate analyses of variance were used to test whether teachers with early-childhood adversity showed different exhaustion or depression scores than teachers without early-childhood adversity. In the patient samples, multiple and binary logistic regression models were used to test associations of prenatal, postnatal, and current stress with exhaustion, depressiveness, the conceptual endophenotypes of Neuropattern diagnostics, and overnight parasympathetic activity (general-practice patients only). 
Possible mediating effects of current stress exposure on associations of prenatal and postnatal stress with exhaustion, depressiveness, the conceptual endophenotypes, and overnight parasympathetic activity (general-practice patients only) were determined. Ad hoc, a possible moderating effect of prenatal stress on the association between current stress and overnight heart rate was additionally tested. In otherwise healthy teachers, prenatal stress was associated with a more pronounced effort-reward imbalance and higher emotional exhaustion. Postnatal stress was accompanied by higher scores for depressiveness, effort-reward imbalance, the MBI total scale, emotional exhaustion, and vital exhaustion. In both general-practice and clinical patients, current psychosocial stress and current impairment by life events were associated with depressiveness, exhaustion, and post-stress symptoms. In general-practice patients, current stress exposure predicted increased odds of noradrenergic hypoactivity and serotonergic hyperreactivity; in clinical patients, of noradrenergic hypoactivity. Furthermore, general-practice patients with high psychosocial stress showed elevated overnight parasympathetic activity. In general-practice patients, high prenatal stress was associated with perceived psychosocial stress, current life events, and post-stress symptoms. Prenatal stress was accompanied by reduced vagal activity. Postnatal stress was further associated with depressiveness, perceived psychosocial stress, current life events, exhaustion, and post-stress symptoms, as well as with increased odds of noradrenergic hypoactivity and with CRH hyperactivity. The associations of prenatal and postnatal stress with post-stress symptoms, exhaustion, depressiveness, and noradrenergic hypoactivity were significantly mediated by current stress exposure. 
The association between current stress and overnight parasympathetic activity was moderated by prenatal stress: at low to moderate, but not at high, prenatal stress exposure, high psychosocial stress was accompanied by elevated overnight parasympathetic activity. In clinical patients, no significant associations emerged between prenatal or postnatal stress and exhaustion or depressiveness. Prenatal stress can impair trophotropic functions and thereby increase vulnerability to exhaustion and depressiveness. Continued postnatal and current stress exposure increases a person's cumulative lifetime stress and contributes to psychobiological dysfunctions as well as to exhaustion and depressiveness.
This study is devoted to the relationship between art and television in Germany from the 1960s to the present, taking the social and artistic discourse into account. In the 1960s, the collaboration between artists and television began most promisingly with projects such as "Black Gate Cologne" or Gerry Schum's "Fernsehgalerie". In close cooperation with television officials, programmes were produced specifically for broadcast on television and were also aired as television art. After initial euphoria, however, the acceptance of and response to these projects ranged from modest to dismissive. Yet this did not lead to failure and a relocation of the art back to the museum or gallery as a site of presentation, but rather to a further development of television art that continues to the present day. Television art has adapted, in its performance and production context as well as in its choice of themes, to each era with its technical and communicative possibilities and to the social discourse on publicly relevant topics. Television art is always a mirror of current discourses in art and society. Previous research regarded television art as having failed and thus as no longer existent. The stigmatization of television as a pure entertainment and information medium meant that television art did not figure as a concept or as an art genre in public or scholarly discourse. The typological and content analysis has shown, however, that television art, in clear distinction from video art, still exists today.
Our research explored transcultural space in contemporary Québécois dramaturgy. Our work was based principally on Bertrand Westphal's concept of transgressivity [WESTPHAL: 2007] and the notion of transculturality proposed by Wolfgang Welsch [WELSCH: 1999].
Welsch's reflections inspired the establishment of the three major axes of our analysis, around which the superposed transcultural dimensions are articulated: the syncretic axis, the intimate axis, and the cosmopolitan axis. These axes determined the choice of our corpus, drawn from Québec's transcultural period between 1975 and 1995. The syncretic axis took shape from the presence of interconnected modern cultures, in which ways of life are not confined within national cultural borders. They "transgress" them and are found in other cultures. The intimate axis follows from the fact that individuals, the self or selves, are cultural hybrids, each individual being formed through multiple attachments. They interact with one another, thereby creating an internal transculturality. The cosmopolitan axis contains a dimension that represents numerous ways of living and diverse cultural lives that interpenetrate one another. They interact with each other, but also with spaces considered to lie outside the transcultural context.
We chose to develop our project around the theoretical premises of geocriticism. This led us to establish a specific analytical grid in order to discover how transcultural human space operates. The analysis was based solely on the dramatic text. Devices inspired by geocriticism revealed some essential characteristics of the superposed transcultural dimensions of Québécois diversity.
This paper describes the concept of the hyperspectral Earth-observing thermal infrared (TIR) satellite mission HiTeSEM (High-resolution Temperature and Spectral Emissivity Mapping). The scientific goal is to measure specific key variables from the biosphere, hydrosphere, pedosphere, and geosphere related to two global problems of significant societal relevance: food security and human health. The key variables comprise land and sea surface radiation temperature and emissivity, surface moisture, thermal inertia, evapotranspiration, soil minerals and grain-size components, soil organic carbon, plant physiological variables, and heat fluxes. The retrieval of this information requires a TIR imaging system with adequate spatial and spectral resolutions and with day-and-night observation capability. Another challenge is the monitoring of temporally highly dynamic features like energy fluxes, which require an adequate revisit time. The suggested solution is a sensor-pointing concept that allows high revisit times for selected target regions (1-5 days at off-nadir). At the same time, global observations in the nadir direction are guaranteed with a lower temporal repeat cycle (>1 month). To account for the demand for high spatial resolution for complex targets, it is suggested to combine in one optic (1) a hyperspectral TIR system with ~75 bands at 7.2-12.5 µm (instrument NEDT 0.05 K-0.1 K) and a ground sampling distance (GSD) of 60 m, and (2) a panchromatic high-resolution TIR imager with two channels (8.0-10.25 µm and 10.25-12.5 µm) and a GSD of 20 m. The identified science case requires a good correlation of the instrument orbit with Sentinel-2 (maximum delay of 1-3 days) to combine data from the visible and near-infrared (VNIR), shortwave infrared (SWIR), and TIR spectral regions and to refine parameter retrieval.
Dry tropical forests undergo massive conversion and degradation processes. This also holds true for the extensive Miombo forests that cover large parts of Southern Africa. While the largest proportional area can be found in Angola, the country still struggles with food shortages, insufficient medical and educational supplies, as well as the ongoing reconstruction of infrastructure after 27 years of civil war. Especially in rural areas, the local population is therefore still heavily dependent on the consumption of natural resources, as well as subsistence agriculture. This leads, on the one hand, to large areas of Miombo forest being converted for cultivation purposes, and on the other hand, to degradation processes due to the selective use of forest resources. While forest conversion in south-central rural Angola has already been quantitatively described, information about forest degradation is not yet available. This is due to the history of conflicts and the research difficulties connected with it, as well as the remote location of this area. We apply an annual time-series approach using Landsat data in south-central Angola not only to assess the current degradation status of the Miombo forests, but also to derive past developments reaching back to times of armed conflict. We use the Disturbance Index, based on the tasseled cap transformation, to exclude external influences like inter-annual variation of rainfall. Based on this time series, linear regression is calculated for forest areas unaffected by conversion, but also for the pre-conversion period of those areas that were used for cultivation purposes during the observation time. Metrics derived from the linear regression are used to classify the study area according to its dominant modification processes. We compare our results to MODIS latent integral trends and to further products to derive information on underlying drivers. 
Around 13% of the Miombo forests are affected by degradation processes, especially along streets, in villages, and close to existing agriculture. However, areas in presumably remote and dense forest areas are also affected to a significant extent. A comparison with MODIS derived fire ignition data shows that they are most likely affected by recurring fires and less by selective timber extraction. We confirm that areas that are used for agriculture are more heavily disturbed by selective use beforehand than those that remain unaffected by conversion. The results can be substantiated by the MODIS latent integral trends and we also show that due to extent and location, the assessment of forest conversion is most likely not sufficient to provide good estimates for the loss of natural resources.
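A Disturbance Index of this kind is commonly computed as DI = B' - (G' + W') (Healey et al., 2005), where B', G', and W' are the tasseled-cap brightness, greenness, and wetness standardized against undisturbed forest pixels, so that disturbed pixels (bright, low greenness and wetness) score high. The following minimal sketch (plain Python with flat per-pixel lists; all names hypothetical, and we only assume the study's index follows this standard formulation) illustrates the computation:

```python
from statistics import mean, pstdev

def disturbance_index(brightness, greenness, wetness, forest_idx):
    """Per-pixel Disturbance Index, DI = B' - (G' + W').

    Each tasseled-cap band is z-scored against the mean and standard
    deviation of a set of undisturbed-forest reference pixels, whose
    positions are given by `forest_idx`.  Bands are flat lists of
    tasseled-cap values (one entry per pixel).
    """
    def standardize(band):
        ref = [band[i] for i in forest_idx]
        mu, sigma = mean(ref), pstdev(ref)
        return [(v - mu) / sigma for v in band]

    b, g, w = (standardize(x) for x in (brightness, greenness, wetness))
    return [bi - (gi + wi) for bi, gi, wi in zip(b, g, w)]
```

By construction, the DI averages to zero over the forest reference pixels, while a bright, dry, sparsely vegetated pixel stands out with a strongly positive value; trends of this quantity over an annual time series are what the regression metrics summarize.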
Avoiding aerial microfibre contamination of environmental samples is essential for reliable analyses when it comes to the detection of ubiquitous microplastics. Almost all laboratories have contamination problems, which are largely unavoidable without investments in clean-air devices. Our study therefore supplies an approach to assess the background microfibre contamination of samples in the laboratory under particle-free air conditions. We tested the aerial contamination of samples indoors, in a mobile laboratory, within a laboratory fume hood, and on a clean bench with particle filtration during the examination of a fish. The clean bench reduced the aerial microfibre contamination in our laboratory by 96.5%. This highlights the value of suitable clean-air devices for valid microplastic pollution data. Our results indicate that microfibre pollution levels have been overestimated and that actual pollution levels may be many times lower. Accordingly, such clean-air devices are recommended for microplastic laboratory applications in future research work to significantly lower error rates.
This study aims to estimate the cotton yield at the field and regional level via the APSIM/OZCOT crop model, using an optimization-based recalibration approach built on a state variable of the cotton canopy, the leaf area index (LAI), derived from atmospherically corrected Landsat-8 OLI remote sensing images in 2014. First, local and global sensitivity analyses were employed to test the sensitivity of the dynamics of the LAI to cultivar, soil, and agronomic parameters, yielding a series of sensitive parameters. Then, the APSIM/OZCOT crop model was calibrated against observations over a two-year span (2006-2007) at the Aksu station, combining these sensitive cultivar parameters with the current understanding of cotton cultivar parameters. Third, the relationship between the observed in-situ LAI and synchronous perpendicular vegetation indices derived from six Landsat-8 OLI images covering the entire growth stage was modelled to generate LAI maps in time and space. Finally, Particle Swarm Optimization (PSO) and a general-purpose optimization approach (based on the Nelder-Mead algorithm) were used to recalibrate four sensitive agronomic parameters (row spacing, sowing density per row, irrigation amount, and total fertilization) by minimizing the root-mean-square error (RMSE) between the LAI simulated by the APSIM/OZCOT model and the LAI retrieved from the Landsat-8 OLI remote sensing images. After the recalibration, the simulated results agreed best with the observed cotton yield. The results showed that: (1) FRUDD, FLAI, and DDISQ were the major cultivar parameters suitable for calibrating the cotton cultivar. (2) After the calibration, the simulated LAI performed well, with an RMSE and mean absolute error (MAE) of 0.45 and 0.33, respectively, in 2006 and 0.46 and 0.41, respectively, in 2007. 
The coefficient of determination between the observed and simulated LAI was 0.83 in 2006 and 0.97 in 2007. The Pearson correlation coefficient was 0.913 and 0.988 in 2006 and 2007, respectively, with a significant positive correlation between the simulated and observed LAI. The difference between the observed and simulated yield was 776.72 kg/ha in 2006 and 259.98 kg/ha in 2007. (3) The cotton cultivation area in 2014 was mapped using three Landsat-8 OLI images - DOY 136 (May), DOY 168 (June) and DOY 200 (July) - based on the phenological differences between cotton and other vegetation types. (4) After recalibration of the APSIM/OZCOT model for ten cotton fields, the yield estimates after assimilation closely approximated the field-observed values, with a coefficient of determination as high as 0.82. The difference between the observed and assimilated yields for the ten fields ranged from 18.2 to 939.7 kg/ha. The RMSE and MAE between the assimilated and observed yield were 417.5 and 303.1 kg/ha, respectively. These findings provide scientific evidence for the feasibility of coupling remote sensing with the APSIM/OZCOT model at the field level. (5) When upscaling from the field to the regional level, both the assimilation algorithm and the assimilation scheme are especially important. Although the PSO method is effective, its computational cost is a shortcoming of this assimilation strategy at the regional scale. PSO and the general-purpose optimization method (based on the Nelder-Mead algorithm) were compared in terms of RMSE, LAI curves and computational time, and the latter was chosen for the regional assimilation between remote sensing and the APSIM/OZCOT model. The basic unit for regional assimilation was also determined to be the cotton field rather than the pixel.
Moreover, the crop growth simulation was divided into two phases (vegetative growth and reproductive growth) for regional assimilation. (6) The regional assimilation at the vegetative growth stage between the remote sensing derived and APSIM/OZCOT model-simulated LAI was implemented by adjusting two parameters: row spacing and sowing density per row. The results showed that the sowing density of cotton was higher in the southern part than in the northern part of the study area. The spatial pattern of cotton density was also consistent with the reclamation from 2001 to 2013: cotton fields from early reclamation were mainly located in the southern part, while recent reclamation was located in the northern part. Poor soil quality, a lack of irrigation facilities and the woodland belts of the cotton fields in the northern part caused the low cotton density there. Row spacing was larger in the northern part than in the southern part, owing to the different agronomic modes of military and private companies. (7) The irrigation and fertilization amounts were both used as key parameters to be adjusted during regional assimilation in the reproductive growth period. The results showed that the irrigation per application ranged from 58.14 to 89.99 mm in the study area, with higher amounts in the northern part and lower amounts in the southern part. The application of urea fertilizer ranged from 500.35 to 1598.59 kg/ha, with lower amounts in the northern part and higher amounts in the southern part. More fertilizer is applied in the southern study area to increase boll weight and number in pursuit of higher cotton yields. The RMSE during the second assimilation mostly fell within the range of 0.4-0.6 m²/m². The estimated cotton yield ranged from 1489 to 8895 kg/ha.
The spatial distribution of the estimated yield was also higher in the southern part than in the northern part of the study area.
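The recalibration step described above can be sketched as follows: the RMSE between model-simulated and remote-sensing-derived LAI is minimized over the agronomic parameters with the Nelder-Mead algorithm. The toy `simulate_lai` function below is a hypothetical stand-in for an APSIM/OZCOT run, not the actual crop model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for an APSIM/OZCOT run: maps the four agronomic
# parameters (row spacing, sowing density per row, irrigation, fertilization)
# to a simulated LAI time series. The real crop model is far more complex.
def simulate_lai(params, days):
    row_spacing, density, irrigation, fertilization = params
    peak = (0.02 * density + 0.01 * irrigation
            + 0.001 * fertilization - 0.5 * row_spacing)
    return peak * np.exp(-((days - 90) / 35.0) ** 2)  # bell-shaped LAI curve

days = np.arange(30, 150, 16)                  # acquisition dates (DOY), assumed
true_params = np.array([0.6, 12.0, 80.0, 900.0])
lai_obs = simulate_lai(true_params, days)      # stands in for Landsat-derived LAI

def rmse(params):
    return np.sqrt(np.mean((simulate_lai(params, days) - lai_obs) ** 2))

# Nelder-Mead recalibration of the four agronomic parameters by RMSE minimization.
result = minimize(rmse, x0=[0.8, 10.0, 60.0, 700.0], method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-8, "maxiter": 5000})
print(result.fun)  # RMSE after recalibration
```

Because several parameter combinations produce the same LAI curve in this toy model, the recovered parameters need not equal `true_params`; only the fitted LAI curve is constrained, which mirrors the equifinality issue any such assimilation scheme has to contend with.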
Global human population growth is associated with many problems, such as food and water provision, political conflicts, spread of diseases, and environmental destruction. The mitigation of these problems is mirrored in several global conventions and programs, some of which, however, are conflicting. Here, we discuss the conflicts between biodiversity conservation and disease eradication. Numerous health programs aim at eradicating pathogens, and many focus on the eradication of vectors, such as mosquitoes or other parasites. As a case study, we focus on the "Pan African Tsetse and Trypanosomiasis Eradication Campaign," which aims at eradicating a pathogen (Trypanosoma) as well as its vector, the entire group of tsetse flies (Glossinidae). As the distribution of tsetse flies largely overlaps with the African hotspots of freshwater biodiversity, we argue for a strong consideration of environmental issues when applying vector control measures, especially the aerial application of insecticides. Furthermore, we want to stimulate discussions on the value of species and whether full eradication of a pathogen or vector is justified at all. Finally, we call for a stronger harmonization of international conventions. Proper environmental impact assessments need to be conducted before control or eradication programs are carried out to minimize negative effects on biodiversity.
In recent decades, the Arctic has been undergoing a wide range of fast environmental changes. The sea ice covering the Arctic Ocean not only reacts rapidly to these changes, but also influences and alters the physical properties of the atmospheric boundary layer and the underlying ocean on various scales. In that regard, polynyas, i.e. regions of open water and thin ice within the closed pack ice, play a key role as regions of enhanced atmosphere-ice-ocean interactions and extensive new ice formation during winter. Precise long-term monitoring and increased efforts to employ long-term, high-resolution satellite data are therefore of high interest for the polar scientific community. The retrieval of thin-ice thickness (TIT) fields from thermal infrared satellite data and atmospheric reanalyses, utilizing a one-dimensional energy balance model, allows for the estimation of the heat loss to the atmosphere and hence of ice-production rates. However, an extended application of this approach is inherently connected with severe challenges that originate predominantly from the disturbing influence of clouds and from necessary simplifications in the model set-up, which all need to be carefully considered and compensated for. The presented thesis addresses these challenges and demonstrates the applicability of thermal infrared TIT distributions for long-term polynya monitoring, as well as an accurate estimation of ice production in Arctic polynyas at a relatively high spatial resolution. Written in a cumulative style, the thesis is subdivided into three parts that show the consequent evolution and improvement of the TIT retrieval, based on two regional studies (Storfjorden and North Water (NOW) polynya) and a final large-scale, pan-Arctic study.
The first study on the Storfjorden polynya, situated in the Svalbard archipelago, represents the first long-term investigation on spatial and temporal polynya characteristics that is solely based on daily TIT fields derived from MODIS thermal infrared satellite data and ECMWF ERA-Interim atmospheric reanalysis data. Typical quantities such as polynya area (POLA), the TIT distribution, frequencies of polynya events as well as the total ice production are derived and compared to previous remote sensing and modeling studies. The study includes a first basic approach that aims for a compensation of cloud-induced gaps in daily TIT composites. This coverage-correction (CC) is a mathematically simple upscaling procedure that depends solely on the daily percentage of available MODIS coverage and yields daily POLA with an error-margin of 5 to 6 %. The NOW polynya in northern Baffin Bay is the main focus region of the second study, which follows two main goals. First, a new statistics-based cloud interpolation scheme (Spatial Feature Reconstruction - SFR) as well as additional cloud-screening procedures are successfully adapted and implemented in the TIT retrieval for usage in Arctic polynya regions. For a 13-yr period, results on polynya characteristics are compared to the CC approach. Furthermore, an investigation on highly variable ice-bridge dynamics in Nares Strait is presented. Second, an analysis of decadal changes of the NOW polynya is carried out, as the additional use of a suite of passive microwave sensors leads to an extended record of 37 consecutive winter seasons, thereby enabling detailed inter-sensor comparisons. In the final study, the SFR-interpolated daily TIT composites are used to infer spatial and temporal characteristics of 17 circumpolar polynya regions in the Arctic for 2002/2003 to 2014/2015. 
All polynya regions combined cover an average thin-ice area of 226.6 ± 36.1 × 10³ km² during winter (November to March) and yield an average total wintertime accumulated ice production of about 1811 ± 293 km³. Regional differences in derived ice-production trends are noticeable. The Laptev Sea on the Siberian shelf is presented as a focus region, as frequently appearing polynyas along the fast-ice edge promote high rates of new ice production. New affirming results on a distinct relation to sea-ice area export rates, and hence to the Transpolar Drift, are shown. This new high-resolution pan-Arctic data set can be further utilized and built upon in a variety of atmospheric and oceanographic applications, while still offering room for further improvements, such as incorporating high-resolution atmospheric data sets and an optimized lead detection.
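The link between heat loss and ice production that underlies the TIT-based retrieval can be illustrated with a back-of-the-envelope calculation: assuming all heat lost to the atmosphere over thin ice is supplied by freezing at the ice base, the growth rate is the net heat loss divided by the product of ice density and latent heat of fusion. The constants below are illustrative textbook values, not the thesis' parameterization.

```python
# Illustrative constants (assumptions, not the thesis' actual parameters).
RHO_ICE = 910.0      # sea-ice density [kg/m^3]
L_FUSION = 3.34e5    # latent heat of fusion of ice [J/kg]

def ice_production_rate(net_heat_loss_w_m2):
    """Ice growth rate [m/s] implied by a net surface heat loss [W/m^2],
    assuming all heat lost to the atmosphere is supplied by freezing."""
    return net_heat_loss_w_m2 / (RHO_ICE * L_FUSION)

# Example: a polynya losing a net 400 W/m^2 on a winter day.
rate = ice_production_rate(400.0)
daily_growth_cm = rate * 86400 * 100
print(f"{daily_growth_cm:.1f} cm of new ice per day")
```

Summing such per-pixel daily growth over area and time is what yields the accumulated ice-production volumes quoted above.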
Determining the exact position of a forest inventory plot, and hence the position of the sampled trees, is often hampered by poor Global Navigation Satellite System (GNSS) signal quality beneath the forest canopy. Inaccurate geo-references hamper the performance of models that aim to retrieve useful information from spatially high-resolution remote sensing data (e.g., species classification or timber volume estimation). This restriction is even more severe at the level of individual trees. The objective of this study was to develop a post-processing strategy to improve the positional accuracy of GNSS-measured sample-plot centers and a method to automatically match the trees within a terrestrial sample plot to aerially detected trees. We propose a new method that uses a random forest classifier to estimate the matching probability of each pair of terrestrial-reference and aerially detected trees, which offers the opportunity to assess the reliability of the results. We investigated 133 sample plots of the Third German National Forest Inventory (BWI, 2011-2012) within the German federal state of Rhineland-Palatinate. For training and objective validation, synthetic forest stands were modeled using the Waldplaner 2.0 software. Our method achieved an overall accuracy of 82.7% for co-registration and 89.1% for tree matching. With our method, 60% of the investigated plots could be successfully relocated. The probabilities provided by the algorithm are an objective indicator of the reliability of a specific result and could be incorporated into quantitative models to increase the performance of forest attribute estimations.
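A minimal sketch of the tree-matching idea, assuming simple pair features such as horizontal distance and height difference (the study's actual feature set is not reproduced here): a random forest classifier is trained on labeled candidate pairs, and `predict_proba` supplies the matching probability that serves as the reliability indicator.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic training data standing in for the modeled forest stands:
# each row describes one candidate (terrestrial tree, aerially detected tree)
# pair by two assumed features.
n = 600
dist = rng.uniform(0, 10, n)       # horizontal distance between the pair [m]
dheight = rng.uniform(0, 8, n)     # height difference between the pair [m]
X = np.column_stack([dist, dheight])
# Assumed ground truth: nearby pairs with similar heights are true matches.
y = ((dist < 3.0) & (dheight < 2.0)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# predict_proba yields the matching probability for each candidate pair,
# an objective indicator of how reliable a specific assignment is.
pairs = np.array([[0.5, 0.3],    # nearby, similar height -> likely match
                  [9.0, 6.0]])   # far apart, dissimilar   -> unlikely match
probs = clf.predict_proba(pairs)[:, 1]
print(probs)
```

Thresholding these probabilities, or propagating them into downstream attribute models, is then a per-application choice.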
Earth observation (EO) is a prerequisite for sustainable land use management, and the open-data Landsat mission is at the forefront of this development. However, increasing data volumes have led to a "digital divide", and consequently, it is key to develop methods that take over the most data-intensive processing steps and are then used for the generation and provision of analysis-ready, standardized, higher-level (Level 2 and Level 3) baseline products for enhanced uptake in environmental monitoring systems. Accordingly, the overarching research task of this dissertation was to develop such a framework, with special emphasis on the as yet under-researched drylands of Southern Africa. A fully automatic and memory-resident radiometric preprocessing streamline (Level 2) was implemented. The method was applied to the complete Angolan, Zambian, Zimbabwean, Botswanan and Namibian Landsat record, amounting to 58,731 images with a total data volume of nearly 15 TB. Cloud/shadow detection capabilities were improved for drylands. An integrated correction of atmospheric, topographic and bidirectional effects was implemented, based on radiative transfer theory with corrections for multiple scattering and adjacency effects, and including a multilayered toolset for estimating aerosol optical depth over persistent dark targets or by falling back on a spatio-temporal climatology. Topographic and bidirectional effects were reduced with a semi-empirical C-correction and a global set of correction parameters, respectively. Gridding and reprojection were already included to facilitate easy and efficient further processing. The selection of phenologically similar observations is a key monitoring requirement for multi-temporal analyses; hence, the generation of Level 3 products that realize phenological normalization at the pixel level was pursued.
As a prerequisite, coarse-resolution Land Surface Phenology (LSP) was derived in a first step, then spatially refined by fusing it with a small number of Level 2 images. For this purpose, a novel data fusion technique was developed, wherein a focal-filter-based approach employs multi-scale and source prediction proxies. Phenologically normalized composites (Level 3) were generated by coupling the target day (i.e. the main compositing criterion) to the input LSP. The approach was demonstrated by generating peak-, end- and minimum-of-season composites, and by comparing these with static composites (fixed target day). It was shown that the phenological normalization accounts for terrain- and land-cover-class-induced LSP differences, and the use of Level 2 inputs enables a wide range of monitoring options, among them the detection of within-state processes like forest degradation. In summary, the developed preprocessing framework is capable of generating several analysis-ready baseline EO satellite products. These datasets can be used for regional case studies, but may also be directly integrated into more operational monitoring systems, e.g. in support of the Reducing Emissions from Deforestation and Forest Degradation (REDD) incentive. In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of Trier University's products or services. Internal or personal use of this material is permitted. If interested in reprinting/republishing IEEE copyrighted material for advertising or promotional purposes or for creating new collective works for resale or redistribution, please go to http://www.ieee.org/publications_standards/publications/rights/rights_link.html to learn how to obtain a License from RightsLink.
Dry tropical forests are facing massive conversion and degradation processes and are the most endangered forest type worldwide. One of the largest dry forest types is the Miombo forest, which stretches across the Southern African subcontinent, with its proportionally largest part found in Angola. The study site of this thesis is located in south-central Angola. The country still suffers from the consequences of 27 years of civil war (1975-2002), which created a unique socio-economic setting. The natural characteristics form a representative cross-section that proved ideal for studying underlying drivers as well as current and retrospective land use change dynamics. The major land change dynamic of the study area is the conversion of Miombo forests to cultivation areas, along with the modification of forest areas, i.e. degradation, due to the extraction of natural resources. With future predictions of population growth, climate change and large-scale investments, land pressure is expected to increase further. To fully understand the impacts of these dynamics, both conversion and modification of forest areas were assessed. Using the conceptual framework of ecosystem services, the predominant trade-off between food and timber in the study area was analyzed, including retrospective dynamics and impacts. This approach accounts for products that contribute directly or indirectly to human well-being. For this purpose, data from the Landsat archive from 1989 until 2013 was used in several approaches adapted to the study area. The objectives of these approaches were (I) to detect underlying drivers and the temporal and spatial extent of their impact, (II) to describe modification and conversion processes reaching from the times of armed conflict over the ceasefire to the post-war period, and (III) to provide an assessment of drivers and impacts in a comparative setting.
It could be shown that major underlying drivers of the conversion processes are resettlement dynamics as well as the location and quality of roads and settlements. Furthermore, forests that are selectively used for resource extraction have a higher chance of being converted into fields. Drivers of forest degradation are, on the one hand, also strongly connected to settlements and infrastructure and, on the other hand, to a large extent to fire dynamics that occur mostly in more remote and presumably undisturbed forest areas. The loss of woody biomass, as well as its slow recovery after the abandonment of fields, could be quantified and stands in stark contrast to the amount of food that needs to be cultivated. The results of the thesis support the fundamental understanding of drivers and impacts in the study area and can thus contribute to sustainable resource management.
This thesis focuses on joint projects between hotel companies and universities offering hotel-specific degree programmes. As a consequence of demographic developments and changing values, recruiting and retaining staff are becoming increasingly important and are turning into a competitive parameter in the hotel industry. Meeting this essential challenge calls for hotel businesses with a strong commitment to staff development. Many universities have launched new practice-oriented degree programmes in tourism, event or hotel management, both to counter the hotel industry's scepticism and to meet students' expectations. Many of these students would be potential apprentices who, when weighing their options, decided in favour of a degree programme. It is therefore important to develop practice-oriented study models with modern curricula for the changing expectations of applicants, in close cooperation with suitable institutions and education providers, and to position them successfully in the market. Accordingly, this thesis sets out to analyse adequate criteria and factors for the success of contractually agreed cooperations between hotel chains and universities and to derive recommendations for action from them. The large number of cooperations makes clear that a growing number of hotel groups recognise the need for the hotel industry to join forces with academic partners in recruiting, retaining and developing staff. Owing to the restrained marketing of many cooperation models, however, their visibility is limited, and so is their positive effect on the image of the hotel industry. At the same time, the educational landscape shows rising student numbers and a proliferation of degree programmes alongside a serious decline in the number of vocationally trained professionals.
Cooperation models are therefore a sensible instrument for responding to these market developments, although their importance is recognised primarily by companies with a strategic human resources policy. From this, a "typology of privileged educational partnerships" comprising ten cooperation types was developed. It reveals different intensities of the partnership's educational elements, as does an individualised "factor-phase model" that maps the process structure of cooperation development. Depending on how close the collaboration is, on the corporate and university philosophy, and on prior experience with cooperations, a cooperation model above all creates obligations and challenges in active design and reliable communication. A key role falls to the personally responsible coordinator, who is regarded as the guarantor of efficient organisation and professionalism. From this, the success factors were distilled into the ASP model: attractiveness, security and personality determine the success of a privileged educational partnership. It was also confirmed that the experience of the two partners in a cooperation must be compatible and that a clear target agreement fixing duties and tasks is required. High quality standards, transparency and process efficiency complement this and make clear that education, as part of a company's human resources policy, is both sensitive and demanding. Anchoring the partnership at a company's management level is decisive in order to signal, internally and externally, the importance of an educational alliance. Where learning and knowledge can be turned into economic advantages, education is interpreted even more strongly as the brand core of a good employer.
On this basis, the idea of personnel development is perfected through continuous employee education, and the "privileged educational partnership" approach lays the foundation for it. Developing young talent becomes a strategic means of retaining staff and avoiding costly vacancies; networks, moreover, secure expertise and strengthen the corporate image. Privileged educational partnerships present suitable models for keeping committed employees while preparing them for the next step in their careers. This thesis offers a contribution to the discussion on a better mutual understanding of the symbiosis between hotel chain and university in the field of education, as well as successful concept ideas for diverse network structures.
Soil organic matter (SOM) is a fundamental control on all biogeochemical processes and is closely linked to carbon cycles and the global climate. The current challenge in ecosystem research is to identify the bioindicators relevant for soil quality and to capture them with methods that can monitor the sustainable use of SOM at large scales and thus contribute to global earth observation programmes. Vis-NIR spectroscopy is a proven remote sensing technique for the assessment and monitoring of soils, although its potential for capturing biological and microbial soil parameters has so far been disputed. The aim of the present work was to investigate the quantity and quality of SOM in arable topsoils with different methods and varying spatio-temporal resolution, and then to evaluate the potential of non-invasive spectroscopic methods for capturing selected SOM parameters. To this end, a comprehensive local database of chemical, physical and biological soil parameters and associated soil spectra was first compiled for a geologically very heterogeneous region with a temperate climate in south-west Germany. On this basis, the potential of soil spectroscopy for capturing and estimating selected SOM parameters from field and in-situ data was examined. In addition, the potential for optimising the prediction models through statistical pre-processing of the spectral data was tested. For hyperspectral measurements taken in the laboratory, the predictive accuracy for common remotely sensed soil parameters (OC, N) could be improved by statistical optimisation techniques such as variable selection and wavelet transformation.
An additional data set comprising microbial/labile SOM parameters and field data was examined to assess whether soil spectra can be used for their prediction. Microbial biomass carbon (MBC), dissolved organic carbon (DOC), hot-water-extractable carbon (HWEC), chlorophyll α (Chl α) and phospholipid fatty acids (PLFAs) were considered. For MBC and DOC, a moderate predictive accuracy could be achieved depending on depth and season, allowing high and low concentrations to be distinguished. Predictions for OC and PLFAs (total PLFA content as well as the microbial groups of bacteria, fungi and algae) were not possible. The best predictive accuracy was achieved for the chlorophyll of green algae at the soil surface (0-1 cm soil depth), which, through its correlation with MBC, was presumably also responsible for the latter's good predictability. Estimates of total SOM content, derived via OC, were not possible, which is attributable to the high dynamics of the microbial SOM parameters at the soil surface. This limits the temporal representativeness of spectral measurements of the soil surface. For the field data, variable selection as a statistical optimisation technique led to only a slight improvement of the prediction models. The investigation into the origin of the organic constituents and their effects on SOM quantity and quality identified microbial necromass and the group of soil algae as two further potentially significant sources of SOM formation and persistence. Overall, the microbial contribution to SOM is rated higher than is commonly assumed.
The influence of microbial constituents could be demonstrated for SOM quantity, especially in the mineral-associated SOM fraction of arable topsoils, and for SOM quality with regard to the correlation between microbial carbohydrates and SOM stability. The exact quantification of these SOM parameters and their importance for SOM dynamics, as well as their predictability by spectroscopic methods, have not yet been fully resolved. Further studies are therefore needed for a conclusive assessment.
It is generally assumed that the temperature increase associated with global climate change will lead to increased thunderstorm intensity and associated heavy precipitation events. The present study investigates whether the frequency of thunderstorm occurrences will increase or decrease and how the spatial distribution will change under the A1B scenario. The region of interest is Central Europe, with a special focus on the Saar-Lor-Lux region (Saarland, Lorraine, Luxembourg) and Rhineland-Palatinate. Daily model data of the COSMO-CLM with a horizontal resolution of 4.5 km are used. The simulations were carried out for two different time slices: 1971-2000 (C20) and 2071-2100 (A1B). Thunderstorm indices are applied to detect thunderstorm-prone conditions and differences in their frequency of occurrence between the two thirty-year timespans. The indices used are CAPE (Convective Available Potential Energy), SLI (Surface Lifted Index) and TSP (Thunderstorm Severity Potential). The investigation of present and future thunderstorm-conducive conditions shows a significant increase in non-thunderstorm conditions. Regionally averaged thunderstorm frequencies will decrease in general; only in the Alps is a potential increase in thunderstorm occurrence and intensity found. The comparison between time slices of 10 and 30 years' length shows that the number of gridpoints with significant signals increases only slightly. To obtain a robust signal for severe thunderstorms, an extension to more than 75 years would be necessary.
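The CAPE index used above integrates parcel buoyancy over the positively buoyant layer. A minimal sketch of that integral, using synthetic parcel and environment temperature profiles rather than COSMO-CLM output:

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def cape(z, t_parcel, t_env):
    """CAPE [J/kg] as g times the integral of (Tp - Te)/Te over the
    positively buoyant layer, on a height grid z [m] (trapezoidal rule).
    Strictly this uses virtual temperatures; plain temperatures are an
    acceptable simplification for this illustration."""
    buoy = G * np.maximum(t_parcel - t_env, 0.0) / t_env
    return float(np.sum(0.5 * (buoy[1:] + buoy[:-1]) * np.diff(z)))

# Toy profiles: a standard-atmosphere environment and a parcel assumed
# to be 2 K warmer between 2 and 10 km (an idealized buoyant layer).
z = np.linspace(0, 12000, 500)
t_env = 288.0 - 0.0065 * z
t_parcel = t_env + np.where((z > 2000) & (z < 10000), 2.0, 0.0)
c = cape(z, t_parcel, t_env)
print(f"CAPE = {c:.0f} J/kg")
```

In the study's setting, the same integral would be evaluated per gridpoint and day from the model's vertical profiles, then thresholded to flag thunderstorm-prone conditions.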
Erosion by rain and wind irreversibly damages fertile soil substance, causes enormous ecological and socio-economic damage worldwide, and is one of the main concerns regarding ecosystem services and food security. The quantification of erosion rates is still highly speculative, and the lack of empirical data leads to large uncertainties in risk-analysis models. As a major source of these uncertainties, this thesis examines the processes by which wind influences water erosion and, in particular, the erosive power of wind-driven raindrops as opposed to windless drops, including different surface parameters. The research approach was experimental-empirical and comprised the development and formulation of the research hypotheses, the design and execution of experiments with a mobile wind-rainfall simulator, sample processing, and the analysis and interpretation of the data. The thesis is structured into the parts (1) "soil erosion experiments on wind-driven rain on autochthonous and near-natural soils", (2) "experiments on substrate particle transport by wind-driven drop impact" and (3) "synthesis of the field and laboratory tests". (1) Tests on autochthonous degraded soils in semi-arid southern Spain and on cohesionless sandy substrate were carried out to investigate and quantify the relative effects of wind-driven rain on surface runoff generation and erosion. In the vast majority of the trials, a clear increase in erosion rates was found, strongly confirming the research hypothesis that wind-driven rain is more erosive than windless rain. Alongside the strongly increased values, lower erosion values were also measured, which on the one hand demonstrated the exceptional relevance of surface structures, and thus of in-situ experiments, and on the other hand pointed to an increase in the variability of the erosion processes.
This variability appears to increase as more factors become involved. (2) A highly specialised experimental design was developed and deployed to carry out explicit measurements of drop-impact processes with and without wind influence. The erosion agents rain, wind and wind-driven rain were tested, along with three slopes, three roughnesses and two substrates. All measurements showed a clear wind-induced increase in erosion of up to two orders of magnitude compared with windless drop impact and wind alone. Through the increased transport mass and distance, wind-driven rain is confirmed as a major erosion factor and is thus a key parameter for quantifying global soil erosion, compiling sediment budgets and researching connectivity. The data are of excellent quality and suitable both for more sophisticated analysis methods (multivariate statistics) and for modelling approaches. (3) A synthesis of the field and laboratory experiments (including a hitherto unpublished set of trials), together with a statistical analysis, confirms wind-driven rain (WDR) as the outstanding factor that overrides all others. Bringing the two complementary groups of experiments together takes the research on wind-driven rain to the next level by placing the measurement results in an ecological context. A cautious projection to the landscape scale offers an insight into the risk assessment of soil erosion by wind-driven rain. It becomes clear that, particularly in connection with the rainstorm events occurring more frequently under climate change, wind-driven rain can have a catastrophic effect on soil erosion rates and urgently needs to be integrated into soil erosion modelling.
This publication, aimed primarily at researchers in the humanities, offers a short, practice-oriented introduction to research data management. It is conceived as a planning instrument for a research project and provides assistance in developing a digital research concept and drawing up a data management plan. Starting from an analysis of selected work situations (project planning and grant applications, source processing, publication, and archiving) and how they change in an increasingly digitally organized research practice, it discusses the connections between the research process and the data management process. A checklist in the form of a question catalogue and an annotated template for a data management plan assist with project planning and grant applications.
The dissertation deals with methods to improve design-based and model-assisted estimation techniques for surveys in a finite population framework. The focus is on the development of the statistical methodology as well as its implementation by means of tailor-made numerical optimization strategies. In that regard, the developed methods aim at computing statistics for several potentially conflicting variables of interest at aggregated and disaggregated levels of the population on the basis of one single survey. The work can be divided into two main research questions, which are briefly explained in the following sections.
First, an optimal multivariate allocation method is developed taking into account several stratification levels. This approach results in a multi-objective optimization problem due to the simultaneous consideration of several variables of interest. In preparation for the numerical solution, several scalarization and standardization techniques are presented, which represent the different preferences of potential users. In addition, it is shown that by solving the problem scalarized with a weighted sum for all combinations of weights, the entire Pareto frontier of the original problem can be generated. By exploiting the special structure of the problem, the scalarized problems can be efficiently solved by a semismooth Newton method. In order to apply this numerical method to other scalarization techniques as well, an alternative approach is suggested, which traces the problem back to the weighted sum case. To address regional estimation quality requirements at multiple stratification levels, the potential use of upper bounds for regional variances is integrated into the method. In addition to restrictions on regional estimates, the method enables the consideration of box-constraints for the stratum-specific sample sizes, allowing minimum and maximum stratum-specific sampling fractions to be defined.
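As a minimal sketch of the weighted-sum scalarization idea, consider a textbook bivariate stratified allocation: with a fixed total sample size and no further constraints, minimizing a weighted sum of the approximate stratified variances has a closed-form, Neyman-type solution. All names and numbers below are illustrative only; the thesis's actual method (semismooth Newton with box constraints and regional variance bounds) is considerably more general.

```python
import numpy as np

def weighted_sum_allocation(N, S, weights, n_total):
    """Allocate n_total sample units to H strata so that the weighted sum
    over J variables of the approximate stratified variances
    sum_h N_h^2 S_hj^2 / n_h is minimized. A Lagrange argument gives
    n_h proportional to sqrt(A_h), with A_h the per-stratum aggregate.
    N: (H,) stratum sizes, S: (H, J) stratum std. devs, weights: (J,)."""
    A = (N[:, None] ** 2 * S ** 2) @ weights  # per-stratum aggregate A_h
    share = np.sqrt(A) / np.sqrt(A).sum()
    return n_total * share

# Hypothetical example: 3 strata, 2 conflicting variables of interest.
N = np.array([1000.0, 2000.0, 500.0])
S = np.array([[2.0, 1.0],
              [1.0, 3.0],
              [4.0, 0.5]])
n = weighted_sum_allocation(N, S, np.array([0.5, 0.5]), 1000)
```

Sweeping the weight vector over the simplex and re-solving traces out the Pareto frontier mentioned above; box constraints break this closed form, which is where the numerical approach developed in the thesis comes in.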
In addition to the allocation method, a generalized calibration method is developed, which is supposed to achieve coherent and efficient estimates at different stratification levels. The developed calibration method takes into account a very large number of benchmarks at different stratification levels, which may be obtained from different sources such as registers, paradata or other surveys using different estimation techniques. In order to incorporate the heterogeneous quality and the multitude of benchmarks, a relaxation of selected benchmarks is proposed. In that regard, predefined tolerances are assigned to problematic benchmarks at low aggregation levels in order to avoid an exact fulfillment. In addition, the generalized calibration method allows the use of box-constraints for the correction weights in order to avoid an extremely high variation of the weights. Furthermore, a variance estimation by means of a rescaling bootstrap is presented.
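Schematically, the relaxed calibration problem combines exact benchmarks, tolerance-relaxed benchmarks, and box constraints on the correction of the design weights; the notation below is illustrative, not taken from the thesis:

```latex
% Generalized calibration with relaxed benchmarks and box constraints
% (schematic): d_i design weights, w_i calibrated weights, G a distance
% function, t_x exact benchmarks, t_z relaxed benchmarks with tolerance
% delta, and [L, U] bounds on the correction factors w_i / d_i.
\min_{w}\; \sum_{i \in s} G\!\left(\frac{w_i}{d_i}\right) d_i
\quad \text{s.t.} \quad
\sum_{i \in s} w_i x_i = t_x, \qquad
\Bigl|\,\sum_{i \in s} w_i z_i - t_z\Bigr| \le \delta, \qquad
L\, d_i \le w_i \le U\, d_i .
```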
Both developed methods are analyzed and compared with existing methods in extensive simulation studies on the basis of a realistic synthetic data set of all households in Germany. Due to the similar requirements and objectives, both methods can be successively applied to a single survey in order to combine their efficiency advantages. In addition, both methods can be solved in a time-efficient manner using very comparable optimization approaches. These are based on transformations of the optimality conditions. The dimension of the resulting system of equations is ultimately independent of the dimension of the original problem, which enables the application even for very large problem instances.
The subject of this dissertation is the history and manifestation of the national theatre in Japan: the transfer of a European cultural institution to Japan and the process of its implementation there, which began with Japan's modernization from the mid-19th century onward and ended only a hundred years later with the opening of the first national theatre in 1966. To this end, theatre-historical developments and changes in theatre production and architecture are examined with regard to the genesis of a Japanese national theatre. The result shows that, in its Japanese cultural translation and manifestation, the institution of the national theatre differs substantially in content, organizational structure, and production structure from the European counterparts that the country itself had taken as models. Only the shell of the European institution was culturally translated. The first national theatre in Japan manifests itself as a specifically Japanese variant of a national theatre, initiated and determined by the government within the framework of the law on the protection of cultural properties, whose task, under the management of state employees and civil servants, is the preservation of the traditional arts on stages designed for that purpose. There are no national theatre ensembles; productions are realized with actors from commercial theatre companies. The long duration of this genesis is rooted in the absence of state funding for theatre and in the rather reserved attitude of the theatre world towards a state-run theatre. The shell concept of the first national theatre, like its management by civil servants, served as the prototype for the five further national theatres opened in Japan by 2004, which, as theatres each dedicated to a single genre, accommodate the specifically Japanese diversity of theatre forms, also in their stage architecture.
Optimal Control of Partial Integro-Differential Equations and Analysis of the Gaussian Kernel
(2018)
An important field of applied mathematics is the simulation of complex financial, mechanical, chemical, physical or medical processes with mathematical models. In addition to the pure modeling of the processes, the simultaneous optimization of an objective function by changing the model parameters is often the actual goal. Models in fields such as finance, biology or medicine benefit from this optimization step.
While many processes can be modeled using an ordinary differential equation (ODE), partial differential equations (PDEs) are needed to optimize heat conduction and flow characteristics, the spreading of tumor cells in tissue, as well as option prices. A partial integro-differential equation (PIDE) is a partial differential equation involving an integral operator, e.g., the convolution of the unknown function with a given kernel function. PIDEs occur, for example, in models that simulate adhesive forces between cells or option prices with jumps.
In each of the two parts of this thesis, a certain PIDE is the main object of interest. In the first part, we study a semilinear PIDE-constrained optimal control problem with the aim to derive necessary optimality conditions. In the second, we analyze a linear PIDE that includes the convolution of the unknown function with the Gaussian kernel.
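As an illustrative instance of the setting in the second part, a linear PIDE with a Gaussian convolution term can be written as follows; this is a generic schematic form, not necessarily the exact equation studied in the thesis:

```latex
% Linear PIDE with a Gaussian convolution term (schematic):
\partial_t u(x,t) - \Delta u(x,t)
  = \int_{\mathbb{R}^n} k_\sigma(x-y)\, u(y,t)\, \mathrm{d}y ,
\qquad
k_\sigma(x) = \frac{1}{(2\pi\sigma^2)^{n/2}}
              \exp\!\left(-\frac{\lVert x \rVert^2}{2\sigma^2}\right).
```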
How do pupils experience the existing opportunities for democratic participation in lessons? How willing is the school community to support the development process? Which forms of resistance need to be taken into account, and where do previously untapped potentials and ideas for innovating school and teaching lie?
"How cool would it be if we pupils had our own TV channel?" Sophie enthuses. "Maybe rather a web channel, so our reports can be watched from anywhere," Max calls out. "In any case, it should deal with topics that matter to us pupils," the two agree. Together they ponder: Is that even legally possible? Who can help us with it? ...
Developing a school without a fixed agenda, without a rigid speakers' podium with scheduled contributions, where all members of the school community can freely decide what they want to do and when, and are heard with their ideas and concerns. Does that sound both interesting and a little crazy? Open Space as a tool for democratic school development makes it possible.
External capital plays an important role in financing entrepreneurial ventures, due to limited internal capital sources. Important external capital providers for entrepreneurial ventures are venture capitalists (VCs). VCs worldwide are often confronted with thousands of proposals from entrepreneurial ventures per year and must choose in which of these companies to invest. Not only do VCs finance companies at their early stages, but they also finance entrepreneurial companies in their later stages, when companies have secured their first market success. That is why this dissertation focuses on the decision-making behavior of VCs when investing in later-stage ventures. This dissertation uses both qualitative and quantitative research methods in order to answer how the decision-making behavior of VCs that invest in later-stage ventures can be described.
Based on qualitative interviews with 19 investment professionals, the first insight gained is that different decision criteria are applied for different stages of venture development. This is attributed to the different risks and goals of ventures at different stages, as well as the different types of information available. These decision criteria in the context of later-stage ventures contrast with results from studies that focus on early-stage ventures. Later-stage ventures possess meaningful information on financials (revenue growth and profitability), the established business model, and existing external investors that is not available for early-stage ventures; these aspects therefore constitute new decision criteria for this specific context.
Following this identification of the most relevant decision criteria for investors in the context of later-stage ventures, a conjoint study with 749 participants was carried out to understand the relative importance of these criteria. The results showed that investors attribute the highest importance to (1) revenue growth, (2) the value-added of products/services for customers, and (3) the management team's track record, demonstrating differences when compared to decision-making studies in the context of early-stage ventures.
Not only do the characteristics of a venture influence the decision to invest, additional indirect factors, such as individual characteristics or characteristics of the investment firm, can influence individual decisions. Relying on cognitive theory, this study investigated the influence of various individual characteristics on screening decisions and found that both investment experience and entrepreneurial experience have an influence on individual decision-making behavior. This study also examined whether goals, incentive structures, resources, and governance of the investment firm influence decision making in the context of later-stage ventures. This study particularly investigated two distinct types of investment firms, family offices and corporate venture capital funds (CVC), which have unique structures, goals, and incentive systems. Additional quantitative analysis showed that family offices put less focus on high-growth firms and whether reputable investors are present. They tend to focus more on the profitability of a later-stage venture in the initial screening. The analysis showed that CVCs place greater importance on product and business model characteristics than other investors. CVCs also favor later-stage ventures with lower revenue growth rates, indicating a preference for less risky investments. The results provide various insights for theory and practice.
Sample surveys are a widely used and cost-effective tool to gain information about a population under consideration. Nowadays, there is an increasing demand not only for information on the population level but also on the level of subpopulations. For some of these subpopulations of interest, however, very small subsample sizes might occur such that the application of traditional estimation methods is not expedient. In order to provide reliable information also for those so-called small areas, small area estimation (SAE) methods combine auxiliary information and the sample data via a statistical model.
The present thesis deals, among other aspects, with the development of highly flexible and realistic small area models. For this purpose, the penalized spline method is suitably modified, which allows the model parameters to be determined via the solution of an unconstrained optimization problem. Due to this optimization framework, shape constraints can be incorporated into the modeling process as additional linear inequality constraints on the optimization problem. This results in small area estimators that allow for both the utilization of the penalized spline method as a highly flexible modeling technique and the incorporation of arbitrary shape constraints on the underlying P-spline function.
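Schematically, the shape-constrained penalized spline fit described above can be written as a quadratic program; the notation is illustrative, not the thesis's:

```latex
% Penalized spline fit with linear shape constraints (schematic):
% B is the B-spline basis matrix, D a difference penalty matrix,
% and C encodes the shape constraint, e.g. C beta >= 0 with C taking
% first differences of beta to enforce monotonicity.
\min_{\beta}\; \lVert y - B\beta \rVert_2^2
  + \lambda\, \beta^{\top} D^{\top} D \beta
\quad \text{subject to} \quad C\beta \ge 0 .
```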
In order to incorporate multiple covariates, a tensor product approach is employed to extend the penalized spline method to multiple input variables. This leads to high-dimensional optimization problems for which naive solution algorithms yield an unjustifiable complexity in terms of runtime and memory requirements. By exploiting the underlying tensor structure, the present thesis provides computationally efficient solution algorithms for the considered optimization problems and the related memory-efficient, i.e. matrix-free, implementations. The crucial point thereby is the (repeated) application of a matrix-free conjugate gradient method, whose runtime is drastically reduced by a matrix-free multigrid preconditioner.
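The matrix-free idea can be illustrated with a small sketch: the operator is only ever applied to vectors and is never assembled as a matrix. The toy example below uses SciPy's conjugate gradient on a `LinearOperator`; it shows the principle only, not the thesis's tensor-based, multigrid-preconditioned solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toy normal-equations-like operator A = I + P^T P, which is symmetric
# positive definite; it is available only through matrix-vector
# products, so the full n-by-n matrix is never formed.
n = 200
P = np.random.default_rng(0).standard_normal((50, n))

def matvec(v):
    return v + P.T @ (P @ v)  # computes (I + P^T P) v matrix-free

A = LinearOperator((n, n), matvec=matvec)
b = np.ones(n)
x, info = cg(A, b)  # info == 0 signals convergence
```

A preconditioner would be passed the same way, as another `LinearOperator` that only implements its action on a vector.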
Die Vermittlung des Grauens
(2018)
In my two-part dissertation „Die Vermittlung des Grauens“ ("Conveying the Horror"), I examined the potential of multimedia technologies at original sites of victim commemoration in France and Germany that have been turned into museums. The primary question was whether the technical aids available today can meaningfully complement traditional educational work. Because of their strongly diverging musealizations, Forts Douaumont and Vaux in Verdun and the Alte Synagoge Essen seemed to me particularly suited for such an analysis. On site, I devoted myself on the one hand to the process of "making the past present at a largely unaltered memorial site through multimedia guides" and also went "searching for traces in the Alte Synagoge Essen", since the site's reconception has overwritten all reminders of the local events under National Socialism. These circumstances affect the authenticity of each site in different ways, to which the on-site concept must respond by providing an appropriate balance. To strengthen the sites' expressiveness, technologies such as audio and multimedia guides are particularly suitable today, and their potential was examined using the objects named above. Alongside these by now traditional measures, further presentation options have emerged over time, such as touchscreen, virtual reality and augmented reality technologies, QR codes, near-field communication (NFC), and so-called museum apps, which were also discussed. This is relevant not least because the interplay between formal educational institutions and informal places of learning is becoming ever more important in museum education. The great challenge here is presenting the content in an age-appropriate way.
This study examines to what extent a banking crisis and the ensuing potential liquidity shortage affect corporate cash holdings. Specifically, how do firms adjust their liquidity management prior to and during a banking crisis when they are restricted in their financing options? These restrictions might not result from firm-specific characteristics alone but also incorporate the effects of certain regulatory requirements. I analyse the real effects of indicators of a potential crisis and the occurrence of a crisis event on corporate cash holdings for both unregulated and regulated firms from 31 different countries. In contrast to existing studies, I perform this analysis on the basis of a long observation period (1997–2014 and 2003–2014, respectively), using multiple crisis indicators (early warning signals) and multiple crisis events. For regulated firms, this study makes use of a unique sample of country-specific regulatory information, collected by hand for 15 countries and converted into an ordinal scale based on the severity of the regulation. Regulated firms are selected from a single industry: Real Estate Investment Trusts. These firms invest in real estate properties and let these properties to third parties. Real Estate Investment Trusts that comply with the aforementioned regulations are exempt from income taxation and are penalized for a breach, which makes this industry particularly interesting for the analysis of capital structure decisions.
The results for regulated and unregulated firms are mostly inconclusive. I find no convincing evidence that the degree of regulation affects the level of cash holdings for regulated firms before and during a banking crisis. For unregulated firms, I find strong evidence that financially constrained firms have higher cash holdings than unconstrained firms. Further, there is no real evidence that either financially constrained firms or unconstrained firms increase their cash holdings when observing an early warning signal. In case of a banking crisis, the results differ for univariate tests and in panel regressions. In the univariate setting, I find evidence that both types of firms hold higher levels of cash during a banking crisis. In panel regressions, the effect is only evident for financially unconstrained firms from the US, and when controlling for financial stress, it is also apparent for financially constrained US firms. For firms from Europe, the results are predominantly inconclusive. For banking crises that are preceded by an early warning signal, there is only evidence for an increase in cash holdings for unconstrained US firms when controlling for financial stress.
In the present study, a non-motion-stabilized scanning Doppler lidar was operated on board RV Polarstern in the Arctic (June 2014) and Antarctic (December 2015–January 2016). This is the first time that such a system has measured on an icebreaker in the Antarctic. A method for motion correction of the data in post-processing is presented.
The wind calculation is based on velocity azimuth display (VAD) scans with eight directions that pass a quality control. Additionally, a method for an empirical signal-to-noise ratio (SNR) threshold is presented, which can be calculated for individual measurement set-ups. Lidar wind profiles are compared to a total of about 120 radiosonde profiles and also to wind measurements of the ship.
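The VAD retrieval can be sketched as a small least-squares fit: at fixed elevation, the radial velocity varies sinusoidally with azimuth under a homogeneous-wind assumption, and the wind components follow from the fit. The numbers below are synthetic, and the study's actual processing chain (motion correction, SNR screening, quality control) is of course more involved.

```python
import numpy as np

def vad_fit(phi, vr, elev):
    """Fit horizontal wind (u, v) and vertical wind w from radial
    velocities vr measured at azimuths phi (radians) and a fixed
    elevation angle elev, using the standard homogeneous-wind model
    v_r = u cos(el) sin(phi) + v cos(el) cos(phi) + w sin(el)."""
    A = np.column_stack([np.cos(elev) * np.sin(phi),
                         np.cos(elev) * np.cos(phi),
                         np.full_like(phi, np.sin(elev))])
    coeffs, *_ = np.linalg.lstsq(A, vr, rcond=None)
    return coeffs  # u, v, w

# Synthetic example: 8 azimuths, known wind u=5, v=-3, w=0.2 m/s.
phi = np.deg2rad(np.arange(0.0, 360.0, 45.0))
elev = np.deg2rad(75.0)
vr = (5.0 * np.cos(elev) * np.sin(phi)
      - 3.0 * np.cos(elev) * np.cos(phi)
      + 0.2 * np.sin(elev))
u, v, w = vad_fit(phi, vr, elev)
```

With eight equally spaced azimuths the design matrix has full rank, so the noiseless synthetic wind is recovered exactly.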
The performance of the lidar measurements in comparison with radio soundings generally shows a small root-mean-square deviation (bias) for wind speed of around 1 m s⁻¹ (0.1 m s⁻¹) and for wind direction of around 10° (1°). The post-processing of the non-motion-stabilized data shows comparably high quality to studies with motion-stabilized systems.
Two case studies show that a flexible change in SNR threshold can be beneficial in special situations. Further, the studies reveal that short-lived low-level jets in the atmospheric boundary layer can be captured by lidar measurements with a high temporal resolution, in contrast to routine radio soundings. The present study shows that a non-motion-stabilized Doppler lidar can be operated successfully on an icebreaker. It presents a processing chain including quality-control tests and error quantification, which is useful for further measurement campaigns.
Salivary alpha-amylase (sAA) influences the perception of taste and texture, features both relevant in acquiring food liking and, with time, food preference. However, no studies have yet investigated the relationship between basal activity levels of sAA and food preference. We collected saliva from 57 volunteers (63% women) whom we assessed in terms of their preference for different food items. These items were grouped into four categories according to their nutritional properties: high in starch, high in sugar, high glycaemic index, and high glycaemic load. Anthropometric markers of cardiovascular risk were also calculated. Our findings suggest that sAA influences food preference and body composition in women. Regression analysis showed that basal sAA activity is inversely associated with subjective but not self-reported behavioural preference for foods high in sugar. Additionally, sAA and subjective preference are associated with anthropometric markers of cardiovascular risk. We believe that this pilot study points to this enzyme as an interesting candidate to consider among the physiological factors that modulate eating behaviour.
Possibilities and opportunities of a democratic organization of the Luxembourg school system
(2018)
"Today, socialism and democracy are no longer mere questions reserved for political parties; they are vital questions. Schools and teachers will have to take them up (...) and give the people of tomorrow, through solid instruction, but above all through a formative education, the means to fulfil their principal mission: to govern together." – a Luxembourg school friend, 1920
"When reflecting on the prospects of school development, our action must be guided by the vision of a democratic school, a living school in a democratic state, and by the conception of the human being associated with it. In its design processes and in its everyday life, the school of tomorrow must itself be a practice of democracy." (Hartmut Wenzel)
How do pupils experience the current opportunities for democratic participation in teaching? Is the school community ready to carry the development process? What resistance must be taken into account, and where are the untapped potentials and ideas for innovating in school and teaching?
"Wouldn't it be really cool if pupils had their own TV channel?" Sophie enthuses. "Maybe rather a web TV channel, so our reports can be watched from anywhere in the world," Max exclaims. "In any case, it has to deal with topics that matter to pupils": on this point the two school students agree. Together they rack their brains: to begin with, is it legally possible? Who can help us?
Developing the school, without a fixed agenda, without a rostrum or scheduled interventions. All members of the school community can freely decide what they want to do and when they want to do it. Their ideas and wishes are heard. Does all this sound both interesting and a little crazy? With the Open Space method, a tool for democratic school development, it is possible.
Academic achievement is a central outcome in educational research, both in and outside higher education; it has direct effects on individuals' professional and financial prospects and yields a high individual and public return on investment. Theories comprise cognitive as well as non-cognitive influences on achievement. Two examples frequently investigated in empirical research are knowledge (as a cognitive determinant) and stress (as a non-cognitive determinant) of achievement. However, knowledge and stress are not stable, which raises questions as to how temporal dynamics in knowledge on the one hand and stress on the other contribute to achievement. To study these contributions in the present doctoral dissertation, I used meta-analysis, latent profile transition analysis, and latent state-trait analysis. The results support the idea of knowledge acquisition as a cumulative and long-term process that forms the basis for academic achievement, and of conceptual change as an important mechanism for the acquisition of knowledge in higher education. Moreover, the findings suggest that students' stress experiences in higher education are subject to stable, trait-like influences as well as situational and/or interactional, state-like influences, which are differentially related to achievement and health. The results imply that investigating the causal networks between knowledge, stress, and academic achievement is a promising strategy for better understanding academic achievement in higher education. For this purpose, future studies should use longitudinal designs, randomized controlled trials, and meta-analytical techniques. Potential practical applications include taking account of students' prior knowledge in higher education teaching and decreasing stress among higher education students.
This thesis addresses a complex question: how does the dynamic restructuring of linguistic structures take place under the influence of intralinguistic and extralinguistic parameters? The research focuses on the mechanism by which linguistic structure comes into being, regarded here as the sole mode of existence of language. The material of the investigation is the operationalization of the components of verbal word-formation processes in the German language. The choice of the verbal part of the vocabulary is due to the fact that this part of speech is a central element that consolidates the entire fabric of language. One of the key parameters is the frequency factor, which has so far not been given a unified status in language theory. The search for the origin of the power of this factor inevitably leads beyond the boundaries of the language system. Observations of the behaviour of the frequency factor in processes and structures of the most diverse nature suggest that we are dealing with a very complex phenomenon that is part of the general cognitive mechanism by which humans adapt to their environment. As such, it is also an inalienable aspect of semiosis, of the linguistic sign.
Early life adversity (ELA) poses a high risk for developing major health problems in adulthood including cardiovascular and infectious diseases and mental illness. However, the fact that ELA-associated disorders first become manifest many years after exposure raises questions about the mechanisms underlying their etiology. This thesis focuses on the impact of ELA on startle reflexivity, physiological stress reactivity and immunology in adulthood.
The first experiment investigated the impact of parental divorce on affective processing. A special block design of the affective startle modulation paradigm revealed blunted startle responsiveness during the presentation of aversive stimuli in participants with experience of parental divorce. Nurture context potentiated startle in these participants, suggesting that visual cues with childhood-related content activate protective behavioral responses. The findings provide evidence for the view that parental divorce leads to altered processing of affective context information in early adulthood.
A second investigation was conducted to examine the link between aging of the immune system and long-term consequences of ELA. In a cohort of healthy young adults, who were institutionalized early in life and subsequently adopted, higher levels of T cell senescence were observed compared to parent-reared controls. Furthermore, the results suggest that ELA increases the risk of cytomegalovirus infection in early childhood, thereby mediating the effect of ELA on T cell-specific immunosenescence.
The third study addresses the effect of ELA on stress reactivity. An extended version of the Cold Pressor Test combined with a cognitively challenging task revealed a blunted endocrine response in adults with a history of adoption, while cardiovascular stress reactivity was similar to that of control participants. This pattern of response separation may best be explained by a selective enhancement of central feedback sensitivity to glucocorticoids resulting from ELA, despite preserved cardiovascular/autonomic stress reactivity.
Reptiles belong to a taxonomic group characterized by increasing worldwide population declines. However, it has not been until comparatively recent years that public interest in these taxa has increased, and conservation measures are starting to show results. While many factors contribute to these declines, environmental pollution, especially in the form of pesticides, has increased strongly in the last few decades and is nowadays considered a main driver of reptile diversity loss. In light of the above, and given that reptiles are extremely underrepresented in ecotoxicological studies regarding the effects of plant protection products, this thesis aims at studying the impacts of pesticide exposure on reptiles, using the Common wall lizard (Podarcis muralis) as a model species. In a first approach, I evaluated the risk of pesticide exposure for reptile species within the European Union, as a means to detect species with above-average exposure probabilities and especially sensitive reptile orders. While helpful for detecting species at risk, a risk evaluation is only the first step towards addressing this problem. It is thus indispensable to identify the effects of pesticide exposure in wildlife. For this, the use of enzymatic biomarkers has become a popular method to study sub-individual responses and gain information on the mode of action of chemicals. However, current methodologies are very invasive. Thus, in a second step, I explored the use of buccal swabs as a minimally invasive method to detect changes in enzymatic biomarker activity in reptiles, as an indicator of pesticide uptake and effects at the sub-individual level. Finally, the last part of this thesis focuses on field data on pesticide exposure and its effects on wild reptiles. Here, a method to determine pesticide residues in food items of the Common wall lizard was established, as a means to generate data for future dietary risk assessments. Subsequently, a field study was conducted with the aim of describing actual effects of pesticide exposure on reptile populations at different levels.
A basic assumption of standard small area models is that the statistic of interest can be modelled through a linear mixed model with common model parameters for all areas in the study. The model can then be used to stabilize estimation. In some applications, however, there may be different subgroups of areas, with specific relationships between the response variable and auxiliary information. In this case, using a distinct model for each subgroup would be more appropriate than employing one model for all observations. If no suitable natural clustering variable exists, finite mixture regression models may represent a solution that "lets the data decide" how to partition areas into subgroups. In this framework, a set of two or more different models is specified, and the estimation of subgroup-specific model parameters is performed simultaneously with estimating subgroup identity, or the probability of subgroup identity, for each area. Finite mixture models thus offer a flexible approach to accounting for unobserved heterogeneity. Therefore, in this thesis, finite mixtures of small area models are proposed to account for the existence of latent subgroups of areas in small area estimation. More specifically, it is assumed that the statistic of interest is appropriately modelled by a mixture of K linear mixed models. Both mixtures of standard unit-level and standard area-level models are considered as special cases. The estimation of mixing proportions, area-specific probabilities of subgroup identity and the K sets of model parameters via the EM algorithm for mixtures of mixed models is described. Eventually, a finite mixture small area estimator is formulated as a weighted mean of predictions from model 1 to K, with weights given by the area-specific probabilities of subgroup identity.
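In formula form (schematic notation), the final estimator for area d is the probability-weighted mean of the K model-specific predictions:

```latex
% Finite mixture small area estimator (schematic): \hat{p}_{dk} is the
% estimated probability that area d belongs to subgroup k, and
% \hat{\theta}_d^{(k)} is the prediction from model k for area d.
\hat{\theta}_d \;=\; \sum_{k=1}^{K} \hat{p}_{dk}\, \hat{\theta}_d^{(k)},
\qquad \sum_{k=1}^{K} \hat{p}_{dk} = 1 .
```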
This dissertation deals with modelling the formation patterns of compounds in the English language. In order to construct a sound linguistic theory, we formulate seven hypotheses based on extensive English language material. We develop a control cycle that opens up possibilities for further research in this field. In our case, this investigation covers a limited domain, which can be regarded as an extension of the control cycle of Köhler (2005) (synergetic-linguistic modelling).
Foundation-owned enterprises are companies owned, wholly or in part, by a charitable or private foundation. The number of foundation-owned enterprises in Germany has risen markedly in recent years. Well-known German companies such as Aldi, Bosch, Bertelsmann, LIDL and Würth are owned by foundations. Some of them, such as Fresenius, ZF Friedrichshafen and Zeiss, are even listed on the stock exchange. Most foundation-owned enterprises come into being when company founders or entrepreneurial families transfer their company to a foundation instead of bequeathing or selling it.
The motives for this are manifold: family reasons (e.g. childlessness, avoiding family disputes), company-related reasons (e.g. a stable ownership structure enabling long-term planning), tax reasons (avoiding or reducing inheritance tax), or reasons tied to the founder personally (the possibility of shaping the company through the foundation even after one's own death). Because foundation-owned enterprises usually emerge from family businesses, research often does not distinguish between family businesses and foundation-owned enterprises. This dissertation therefore begins by using the three-circle model of family business to set out the differences between foundation-owned and family enterprises. The results show that only a very small number of foundation-owned enterprises closely resemble classic family businesses; most differ from family businesses considerably, in some cases very strongly. These findings make clear that foundation-owned enterprises should be treated as a separate field of research.
Since the group of foundation-owned enterprises is itself highly heterogeneous, performance differences within this group are examined next. For this purpose, data on 142 German foundation-owned enterprises for the years 2006-2016 were collected and analysed by linear regression. The results show significant differences between the various types: companies held by a charitable foundation perform significantly worse than companies whose shareholder is a private foundation.
The next step examines the group of listed foundation-owned enterprises. An event study is used to test how a foundation as owner of a listed company affects shareholder value. The results show that a reduction of a foundation's stake has a positive effect on shareholder value; the capital market accordingly values foundations negatively. Because the goals of foundation and company diverge, the link between the two harbours potential conflicts and challenges for the people involved. Using a qualitative, exploratory approach based on interviews, a model is developed that illustrates the potential conflicts in foundation-owned enterprises using the example of the dual foundation (Doppelstiftung).
In the final step, recommendations are developed in the form of a draft corporate governance code, intended to help (prospective) founders either avoid potential conflicts or resolve existing problems.
The results of this dissertation are relevant for theory and practice. From a theoretical perspective, their value lies in enabling researchers to distinguish more clearly between foundation-owned and family enterprises in the future, and the work advances the current state of research on foundation-owned enterprises. In addition, the dissertation offers prospective founders in particular an overview of the various design options and of the advantages and disadvantages these structures entail. The recommendations enable founders to recognise potential risks in advance and to avoid them.
Acute social and physical stress interact to influence social behavior: the role of social anxiety
(2018)
Stress has proven detrimental effects on physical and mental health. Owing to differing tasks and study designs, the reported direct consequences of acute stress are wide-ranging: some studies report prosocial effects, others report increases in antisocial behavior, and still others report no effect at all. To control for the specific effects of different stressors and to consider the role of social anxiety in stress-related social behavior, we investigated the effects of social versus physical stress on the behavior of male participants with different levels of social anxiety. In a randomized, controlled two-by-two design, we examined the impact of social and physical stress on behavior in healthy young men. Both physical and social stress significantly increased several subjective stress measures, with no interaction effect. Cortisol was significantly increased by physical stress, while heart rate was modulated by physical and social stress as well as their combination. Social anxiety modulated the subjective stress response but not the cortisol or heart rate response. With respect to behavior, our results show that social and physical stress interacted to modulate trust, trustworthiness, and sharing: while social stress or physical stress alone reduced prosocial behavior, a combination of the two stressor modalities could restore prosociality. Social stress alone reduced nonsocial risk behavior regardless of physical stress. Social anxiety was associated with higher subjective stress responses and higher levels of trust. Future studies will need to investigate further stressors and clarify their effects on social behavior in health and in social anxiety disorders.
The changing views on the evolutionary relationships of extant Salamandridae (Amphibia: Urodela)
(2018)
The phylogenetic relationships among members of the family Salamandridae have been repeatedly investigated over the last 90 years, with changing character and taxon sampling. We review the changing composition and the phylogenetic position of salamandrid genera and species groups and add a new phylogeny based exclusively on sequences of nuclear genes. Salamandrina often changed its position depending on the characters used. It was included several times in a clade together with the primitive newts (Echinotriton, Pleurodeles, Tylototriton) due to their seemingly ancestral morphology. The latter were often inferred as a monophyletic clade. Respective monophyly was almost consistently established in all molecular studies for true salamanders (Chioglossa, Lyciasalamandra, Mertensiella, Salamandra), modern Asian newts (Cynops, Laotriton, Pachytriton, Paramesotriton) and modern New World newts (Notophthalmus, Taricha). Reciprocal non-monophyly has been established through molecular studies for the European mountain newts (Calotriton, Euproctus) and the modern European newts (Ichthyosaura, Lissotriton, Neurergus, Ommatotriton, Triturus), since Calotriton was identified as the sister lineage of Triturus. In pre-molecular studies, their respective monophyly had almost always been assumed, mainly because of a complex courtship behaviour shared by their respective members. Our nuclear tree is nearly identical to a mito-genomic tree, with all but one node being highly supported. The major difference concerns the position of Calotriton, which is no longer nested within the modern European newts. This has implications for the evolution of courtship behaviour of European newts. Within modern European newts, Ichthyosaura and Lissotriton changed their position compared to the mito-genomic tree.
Previous molecular trees based on seemingly large nuclear data sets, but analysed together with mitochondrial data, did not reveal monophyly of modern European newts, since taxon sampling and nuclear gene coverage were too poor to obtain conclusive results. We therefore conclude that mitochondrial and nuclear data should be analysed on their own.
Species can show strong variation of local abundance across their ranges. Recent analyses suggested that variation in abundance can be related to environmental suitability, as the highest abundances are often observed in populations living in the most suitable areas. However, there is limited information on the mechanisms through which variation in environmental suitability determines abundance. We analysed populations of the microendemic salamander Hydromantes flavus and tested several hypotheses on potential relationships linking environmental suitability to population parameters. For multiple populations across the whole species range, we assessed suitability using species distribution models, and measured density, activity level, food intake and body condition index. In high-suitability sites, the density of salamanders was up to 30 times higher than in the least suitable ones. Variation in activity levels and population performance can explain such variation of abundance. In high-suitability sites, salamanders were active close to the surface and showed a low frequency of empty stomachs. Furthermore, when taking seasonal variation into account, body condition was better in the most suitable sites. Our results show that the strong relationship between environmental suitability and population abundance can be mediated by the variation of parameters strongly linked to individual performance and fitness.
The ideal of a vibrant democracy with engaged citizens prompts the search for ways to foster the willingness of coming generations to participate. Starting from the premise that well-being is a central motivator, and that it can be deliberately fostered in young people using concepts from positive psychology (Brohm & Endres, 2015), this article addresses two research questions: First, can a relationship between adolescents' well-being and their willingness to participate politically, or their political interest, be demonstrated empirically? Second, in which contexts and to what extent do adolescents experience well-being when engaging with political topics, actors, and processes?
Since previous nationwide and regional studies on youth, politics, and socio-political participation permit only limited conclusions about the five elements of well-being according to Seligman (2012), a written survey of 100 adolescents from the Saarland district of Saarlouis was conducted in preparation for this article.
A majority of respondents describe themselves as generally open-minded and interested, yet politics and its intermediaries only partially succeed in translating this potential into active political participation. Personal contacts with politically engaged people and first-hand experience in political associations, among other factors, prove beneficial and correlate with adolescents' political interest and their openness to political participation.
This working paper examines the topic of mobility and transport and identifies challenges for spatial development in the Greater Region. In particular, it focuses on the territorial distribution of cross-border commuter flows and their dependence on the car within the Greater Region, as well as on the influence of EU policy on the challenges of cross-border transport.
Surveys are commonly tailored to produce estimates of aggregate statistics with a desired level of precision. This may lead to very small sample sizes for subpopulations of interest, defined geographically or by content, which are not incorporated into the survey design. We refer to subpopulations where the sample size is too small to provide direct estimates with adequate precision as small areas or small domains. Despite the small sample sizes, reliable small area estimates are needed for economic and political decision making. Hence, model-based estimation techniques are used which increase the effective sample size by borrowing strength from other areas to provide accurate information for small areas. The paragraph above introduced small area estimation as a field of survey statistics where two conflicting philosophies of statistical inference meet: the design-based and the model-based approach. While the first approach is well suited for the precise estimation of aggregate statistics, the latter approach furnishes reliable small area estimates. In most applications, estimates for both large and small domains based on the same sample are needed. This poses a challenge to the survey planner, as the sampling design has to reflect different and potentially conflicting requirements simultaneously. In order to enable efficient design-based estimates for large domains, the sampling design should incorporate information related to the variables of interest. This may be achieved using stratification or sampling with unequal probabilities. Many model-based small area techniques require an ignorable sampling design such that after conditioning on the covariates the variable of interest does not contain further information about the sample membership. If this condition is not fulfilled, biased model-based estimates may result, as the model which holds for the sample is different from the one valid for the population. 
Hence, an optimisation of the sampling design without investigating the implications for model-based approaches will not be sufficient. Analogously, disregarding the design altogether and focussing only on the model is prone to failure as well. Instead, a profound knowledge of the interplay between the sampling design and statistical modelling is a prerequisite for implementing an effective small area estimation strategy. In this work, we concentrate on two approaches to address this conflict. Our first approach takes the sampling design as given and can be used after the sample has been collected. It amounts to incorporating the survey design into the small area model to avoid biases stemming from informative sampling. Thus, once a model is validated for the sample, we know that it holds for the population as well. We derive such a procedure under a lognormal mixed model, which is a popular choice when the support of the dependent variable is limited to positive values. Besides, we propose a three-pillar strategy to select the additional variable accounting for the design, based on a graphical examination of the relationship, a comparison of the predictive accuracy of the choices, and a check of the normality assumptions.

Our second approach to dealing with the conflict is based on the notion that the design should allow a wide variety of analyses of the sample data. Thus, if the use of model-based estimation strategies can be anticipated before the sample is drawn, this should be reflected in the design. The same applies to the estimation of national statistics using design-based approaches. Therefore, we propose to construct the design such that the sampling mechanism is non-informative but still allows for precise design-based estimates at an aggregate level.
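As a toy illustration of why informative sampling matters (not taken from this work; all numbers are invented), the following sketch draws a sample whose inclusion probabilities depend on the response itself. A naive model-based estimate of the population intercept is then biased, while reweighting by inverse inclusion probabilities (a Hájek-type design-based estimate) recovers it:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
x = rng.normal(size=N)
y = 2.0 + x + rng.normal(size=N)          # population model, true intercept 2

# Informative design: inclusion probabilities depend on y itself, so even
# after conditioning on x, y still carries information about sample membership.
pi_i = 0.02 / (1.0 + np.exp(-(y - 2.0)))  # roughly a 1% sampling fraction
sampled = rng.random(N) < pi_i

xs, ys = x[sampled], y[sampled]

# Naive model-based estimate of the intercept, ignoring the design:
naive_intercept = (ys - xs).mean()

# Hajek-type design-based estimate, reweighting by 1 / inclusion probability:
w = 1.0 / pi_i[sampled]
ht_intercept = (w * (ys - xs)).sum() / w.sum()

print(round(naive_intercept, 2), round(ht_intercept, 2))
```

Because units with large residuals are oversampled, the unweighted estimate overshoots the true intercept, while the design-weighted estimate stays close to it.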
The study combines methods from corpus linguistics and close reading to examine, on a representative single text of medium length, the relationship between the syntactic and the metrical level in the Middle High German rhymed couplet. It identifies regularly recurring patterns that always map the two levels onto each other in the same way. These regularities can be explained by the sound structures of the Middle High German lexicon, the syntactic construction patterns of phrases and sentences, and finally the requirements of the metrical scheme. Rhyme constraint, frequently invoked as an explanation, turns out on closer inspection to be a rather secondary influence on syntactic structure. Besides typical "normal cases", in which statistically frequent word stress patterns in common, simple word-order patterns integrate smoothly into the rhymed couplet in always the same way, recurring deviation variants can also be explained and described. The regularities identified are deterministic only to a small extent and in few cases; however, to account for the statistical patterns, it can be shown which advantages arise from certain variants and which difficulties from others, and how one variant can be substituted for another. What is thus described is the poet's space of design choices and the solutions he chose. Indirectly, this also yields a negative image of the syntax that is not subject to the constraints of the metrical scheme.
Given a compact set K in R^d, the theory of extension operators examines under which conditions on K the linear and continuous restriction operators r_n: E^n(R^d) → E^n(K), f ↦ (∂^α f|_K)_{|α|≤n}, n in N_0, and r: E(R^d) → E(K), f ↦ (∂^α f|_K)_{α in N_0^d}, have a linear and continuous right inverse. This inverse is called an extension operator, and the problem is known as Whitney's extension problem, named after Hassler Whitney. In this context, E^n(K) and E(K) denote the spaces of Whitney jets of order n and of infinite order, respectively, while E^n(R^d) and E(R^d) denote the spaces of n-times and infinitely often continuously partially differentiable functions on R^d. Whitney solved the question for finite order completely: it is always possible to construct a linear and continuous right inverse E_n for r_n. This work is concerned with the question of how the existence of a linear and continuous right inverse of r, fulfilling certain continuity estimates, can be characterized by properties of K. On E(K), we introduce a full real scale of generalized Whitney seminorms (|·|_{s,K})_{s≥0}, where |·|_{s,K} coincides with the classical Whitney seminorms for s in N_0. We also equip E(R^d) with a family (|·|_{s,L})_{s≥0} of such seminorms, where L is a compact set with K contained in the interior of L. This family of seminorms on E(R^d) suffices to characterize the continuity properties of an extension operator E, since we can assume without loss of generality that E(E(K)) in D^s(L).
In Chapter 2, we introduce basic concepts and summarize the classical results of Whitney and Stein.
In Chapter 3, we modify the classical construction of Whitney's operators E_n and show that |E_n(·)|_{s,L}≤C|·|_{s,K} for s in[n,n+1).
In Chapter 4, we generalize a result of Frerick, Jordá and Wengenroth and show that LMI(1) for K implies the existence of an extension operator E without loss of derivatives, i.e. it fulfils |E(·)|_{s,L}≤C|·|_{s,K} for all s≥0. We show that a large class of self-similar sets, which includes the Cantor set and the Sierpinski triangle, admits an extension operator without loss of derivatives.
In Chapter 5, we generalize a result of Frerick, Jordá and Wengenroth and show that WLMI(r) for r≥1 implies the existence of a tame linear extension operator E with a homogeneous loss of derivatives, i.e. |E(·)|_{s,L}≤C|·|_{(r+ε)s,K} for all s≥0 and all ε>0.
In the last chapter we characterize the existence of an extension operator having an arbitrary loss of derivatives by the existence of measures on K.
The main socio-ecological pressures in five wetlands in the Greater Accra Region were first identified and summarized by reviewing the relevant literature. As a second step, fieldwork was carried out in the region in 2016 to further examine the pressures identified in the literature. Most research on Ghana's wetlands was published around the year 2000, yet similar socio-ecological pressures persist today. Based on both the fieldwork observations and the literature review, these pressures were ranked using the IUCN pressures system analysis framework. It is suggested that further research should proceed by uncovering how trade-offs between ecosystems and quality of life can be defined.
This article describes the climatic situation of the Saar-Hunsrück Nature Park. In addition to placing the region within the large-scale climatic circulation, the main climate elements are described. Since the climate elements change with increasing elevation, altitude decisively determines the spatial structure of the individual climate elements in the lower Saar valley, in the Saar-Nahe uplands, and in the Hunsrück with the Osburger Hochwald, Schwarzwälder Hochwald, and Idarwald. The precipitation distribution clearly shows the windward effect in the western parts of the nature park and the decrease in precipitation totals towards the north-east. The spatial patterns of mean and maximum air temperature follow the topography, whereas minimum temperatures show a less differentiated picture. In the lower-lying parts of the nature park, 4-7 hot days occur in the long-term mean; in the higher elevations of the Hunsrück, only 1-3 days per year are observed. Above the 600 m contour line there are on average 110-130 frost days per year; in the south-western part of the nature park this number falls to 50 days per year. The mean number of days with snow cover across the area of the nature park lies between 10 and 90 days per year; as a result of regional climate change, it decreased by 3-15 days per year between the periods 1961-1990 and 1981-2010. Current sunshine duration averages 1500-1600 hours per year in the western part of the nature park; up to 1600 hours per year are reached in the south-eastern part.
Map treasures from Italy
(2018)
The discoveries of the early modern period and improved printing techniques led to an enormous upswing in cartography from the 16th century onwards. In Italy in particular, flourishing cartographic centres with an excellent reputation emerged, which within a short time made great advances in accuracy and clarity. Three representative examples are presented from the estate of the map collector Fritz Hellwig, bequeathed to the Trier University Library.
The A4 strategy attempts to answer the question of whether, in the modern media landscape, there are optimal and efficient communication strategies whose success does not depend on the size of the budget and which at the same time create strong customer loyalty and ensure an efficient selection of communication instruments. This is particularly important for SMEs operating in a complex and globalised market environment. Marketing communication is not merely an indispensable tool for increasing sales; today it also requires a well-thought-out strategy, because a successful interaction between the two partners (company and customers) will only take place if both an optimal company value and a customer value are generated. At the same time, the A4 strategy helps answer the questions relevant to optimal marketing communication: Who could buy? Who actually buys? What is the relevant information about those who actually buy? The answers to these questions reveal where and how potential customers can best be acquired and where and how existing customers can best be served. The concept offers a structured approach for developing a communication strategy appropriate to the situation. It is thus not a ready-made solution but a framework for a methodical decision-making process; it helps to make decisions appropriate to the situation and to take actions that implement these decisions consistently.
Early life adversity (ELA) is associated with a higher risk of disease in adulthood. Changes in the immune system have been proposed to underlie this association. Although higher levels of inflammation and immunosenescence have been reported, data on cell-specific immune effects are largely absent. In addition, stress systems and health behaviors are altered in ELA, which may contribute to the generation of the "ELA immune phenotype". In this thesis, we have investigated the ELA immune phenotype on a cellular level and asked whether it is an indirect consequence of changes in behavior or stress reactivity. To address these questions, the EpiPath cohort was established, consisting of 115 young adults with or without ELA. ELA participants had experienced separation from their parents in early childhood and were subsequently adopted, which is a standard model for ELA, whereas control participants grew up with their biological parents. At a first visit, blood samples were taken for analysis of epigenetic markers and immune parameters. A selection of the cohort underwent a standardized laboratory stress test (SLST). Endocrine, immune, and cardiovascular parameters were assessed at several time points before and after stress. At a second visit, participants underwent structured clinical interviews and filled out psychological questionnaires. We observed a higher number of activated T cells in ELA, measured by HLA-DR and CD25 expression. Neither cortisol levels nor health-risk behaviors explained the observed group differences. Besides a trend towards higher numbers of CCR4+CXCR3-CCR6+ CD4 T cells in ELA, relative numbers of immune cell subsets in circulation were similar between groups. No difference was observed in telomere length or in methylation levels of age-related CpGs in whole blood. However, we found a higher expression of senescence markers (CD57) on T cells in ELA. In addition, these cells had an increased cytolytic potential.
A mediation analysis demonstrated that cytomegalovirus infection, an important driving force of immunosenescence, largely accounted for the elevated CD57 expression. The psychological investigations revealed that after adoption, family conditions appeared to have been similar to those of the controls. However, ELA participants scored higher on a depression index and on chronic stress, and lower on self-esteem. Psychological, endocrine, and cardiovascular parameters responded significantly to the SLST but were largely similar between the two groups. Only in a smaller subset of groups matched for gender, BMI, and age did the cortisol response seem to be blunted in ELA participants. Although we found small differences in the methylation level of the GR promoter, GR sensitivity and mRNA expression levels of the GR, as well as expression of the GR target genes FKBP5 and GILZ, were similar between groups. Taken together, our data suggest an elevated state of immune activation in ELA, in which T cells in particular are affected. Furthermore, we found higher levels of T cell immunosenescence in ELA. Our data suggest that ELA may increase the risk of cytomegalovirus infection in early childhood, thereby mediating the effect of ELA on T-cell-specific immunosenescence. Importantly, we found no evidence of HPA axis dysregulation in participants exposed to ELA in the EpiPath cohort. Thus, the observed immune phenotype does not seem to be secondary to alterations in the stress system or to health-risk behaviors, but rather a primary effect of early life programming on immune cells. Longitudinal studies will be necessary to further dissect cause from effect in the development of the ELA immune phenotype.
There are large health, societal, and economic costs associated with attrition from psychological services. The recently emerged, innovative statistical tool of complex network analysis was used in the present proof-of-concept study to improve the prediction of attrition. Fifty-eight patients undergoing psychological treatment for mood or anxiety disorders were assessed using Ecological Momentary Assessments four times a day for two weeks before treatment (3,248 measurements). Multilevel vector autoregressive models were employed to compute dynamic symptom networks. Intake variables and network parameters (centrality measures) were used as predictors of dropout in machine-learning algorithms. Networks differed significantly between completers and dropouts. Among the intake variables, initial impairment and sex predicted dropout, explaining 6% of the variance. The network analysis identified four additional predictors: the expected force of being excited, the outstrength of experiencing social support, the betweenness of feeling nervous, and the instrength of being active. The final model with the two intake and four network variables explained 32% of the variance in dropout and identified 47 out of 58 patients correctly. The findings indicate that patients' dynamic network structures may improve the prediction of dropout. When implemented in routine care, such prediction models could identify patients at risk of attrition and inform personalized treatment recommendations.
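Centrality measures such as outstrength and instrength can be read directly off an estimated lagged-effect (VAR) coefficient matrix: outstrength sums the absolute outgoing edge weights of a node, instrength the incoming ones. A minimal sketch with a hypothetical four-symptom matrix (the values below are invented, not the study's estimates):

```python
import numpy as np

# Toy temporal network: B[i, j] is the lagged (VAR) effect of symptom j at
# time t-1 on symptom i at time t (hypothetical coefficients).
symptoms = ["excited", "social support", "nervous", "active"]
B = np.array([
    [ 0.30,  0.10, -0.20,  0.25],
    [ 0.05,  0.40,  0.00,  0.10],
    [-0.15,  0.00,  0.50, -0.05],
    [ 0.20,  0.05, -0.10,  0.35],
])

A = np.abs(B).copy()
np.fill_diagonal(A, 0.0)      # strength centrality usually excludes autoregressive loops

outstrength = A.sum(axis=0)   # how strongly each symptom drives the others (column sums)
instrength = A.sum(axis=1)    # how strongly each symptom is driven by the others (row sums)

for name, o, i in zip(symptoms, outstrength, instrength):
    print(f"{name:15s} out={o:.2f} in={i:.2f}")
```

In this toy matrix, "excited" and "active" have the largest outstrength, i.e. they would be the strongest drivers of the other symptoms over time.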
This working paper examines the topic of energy and identifies challenges for spatial development in the Greater Region. It discusses the concept of the energy transition (Energiewende) and focuses on energy systems and energy sources, in particular the expansion of wind power and energy generation from biomass, in the context of the development of fossil and nuclear energy sources in Germany and France.
The spatial development of cities and regions is shaped by trends such as climate change, demographic change, and structural change, which do not stop at administrative borders but determine the development of large areas. Moreover, border regions often exhibit functional and thematic interdependencies that extend across national borders, entailing regular exchange and mutual dependencies between border regions and their inhabitants. Coordinating cross-border spatial development is therefore crucial for forward-looking and sustainable spatial development. Given its great importance, this topic is examined from various perspectives by European researchers in the first issue of the thematic series Borders in Perspective.
This thesis considers the general task of computing a partition of a set of given objects such that each set of the partition has cardinality at least a fixed number k. Among such partitions, which we call k-clusters, the objective is to find the k-cluster which minimises a certain cost derived from a given pairwise difference between objects that end up in the same set. As a first step, this thesis introduces a general problem, denoted by (||.||,f)-k-cluster, which models the task of finding a k-cluster of minimum cost given by an objective function computed with respect to specific choices of the cost functions f and ||.||. In particular, this thesis considers three different choices for f and three different choices for ||.||, which results in a total of nine variants of the general problem. With the idea of using parameterised approximation, we first investigate the role of the lower bound on the cluster cardinalities and find that k is not a suitable parameter, as the problem remains NP-hard even when k is restricted to the constant 3. The reductions presented to show this hardness yield the even stronger result that polynomial-time approximations with any constant performance ratio for any of the nine variants of (||.||,f)-k-cluster require a restriction to instances for which the pairwise distance on the objects satisfies the triangle inequality. For this restriction, which we informally refer to as metric instances, constant-factor approximation algorithms for eight of the nine variants of (||.||,f)-k-cluster are presented. While two of these algorithms yield the provably best approximation ratio (assuming P != NP), others can only guarantee a performance which depends on the lower bound k. With the positive effect of the triangle inequality and applications to facility location in mind, we discuss the further restriction to the setting where the given objects are points in the Euclidean metric space.
Considering that, for other related problems, computational hardness is caused by high dimensionality of the input (the curse of dimensionality), we check whether this is also the source of intractability for (||.||,f)-k-cluster. However, NP-hardness remains even when restricted to small constant dimensionality, which disproves this theory. We then use parameterisation to develop approximation algorithms for (||.||,f)-k-cluster without restriction to metric instances. In particular, we discuss structural parameters which reflect how much the given input differs from a metric. This idea results in parameterised approximation algorithms with parameters such as the number of conflicts (our name for pairs of objects for which the triangle inequality is violated) or the number of conflict vertices (objects involved in a conflict). The performance ratios of these parameterised approximations are in most cases identical to those of the approximations for metric instances. This shows that for most variants of (||.||,f)-k-cluster, efficient and reasonable solutions are also possible for non-metric instances.
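The two structural parameters described above can be read directly off a distance matrix. A minimal sketch, assuming a symmetric matrix of pairwise distances given as nested lists; all function and variable names are illustrative and not taken from the thesis:

```python
from itertools import combinations

def count_conflicts(d):
    """Count conflicts (pairs {x, y} whose distance violates the triangle
    inequality via some third object z, i.e. d[x][y] > d[x][z] + d[z][y])
    and conflict vertices (objects involved in at least one conflict)."""
    n = len(d)
    conflicts = set()
    for x, y in combinations(range(n), 2):
        if any(d[x][y] > d[x][z] + d[z][y]
               for z in range(n) if z not in (x, y)):
            conflicts.add((x, y))
    conflict_vertices = {v for pair in conflicts for v in pair}
    return len(conflicts), len(conflict_vertices)

# A 4-object instance where the distance between objects 0 and 1
# violates the triangle inequality via object 2 (10 > 2 + 2):
d = [[0, 10, 2, 3],
     [10, 0, 2, 3],
     [2, 2, 0, 3],
     [3, 3, 3, 0]]

n_conflicts, n_conflict_vertices = count_conflicts(d)  # one conflict, two vertices
```

An instance is metric exactly when `count_conflicts` reports zero conflicts; the parameterised algorithms in the thesis measure distance from that case.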
This dissertation comparatively examines German and Japanese photographic war propaganda of the Second World War, based on the illustrated magazines with the highest circulation at the time: the "Illustrierter Beobachter" on the German side and "Shashin Shūhō" (Photographic Weekly) on the Japanese side. Drawing on the iconographic-iconological method of the art historian Erwin Panofsky while also referring to Charles Sanders Peirce's understanding of images, patterns in the pictorial depiction of children and adolescents are analysed in order to draw conclusions about commonalities and differences in the design of both countries' visual propaganda immediately before and during the Second World War (1939-1945), about general tendencies in the design of propaganda in the same period, and about the organisation and function of propaganda in radically nationalist states. At the same time, by including the recipients' perspective, the study raises the question of ambiguity and, along with it, of the propaganda's mode and degree of effectiveness. The investigation focuses on all published issues from the second halves of 1938 and 1943.
This working paper examines the topic of employment and economic development and elaborates the resulting challenges for the spatial development of the Greater Region. In particular, it focuses on the region's industrial history as well as on employment and cross-border work in the Greater Region.
This working paper examines the topic of demography and migration and elaborates the resulting challenges for the spatial development of the Greater Region. In particular, it focuses on cross-border residential mobility at the borders of the Grand Duchy, on population ageing, and on securing the provision of public health services in rural areas.
At any given moment, our senses are assaulted with a flood of information from the environment around us. We need to pick our way through all this information in order to respond effectively to what is relevant to us. In most cases, we are able to separate information relevant to our intentions from what is not relevant. However, what happens to the information that is not relevant to us? Is this irrelevant information completely ignored, so that it does not affect our actions? The literature suggests that even though we may ignore an irrelevant stimulus, it may still interfere with our actions. One of the ways in which irrelevant stimuli can affect actions is by retrieving a response with which they were associated. An irrelevant stimulus that is presented in close temporal contiguity with a relevant stimulus can become associated with the response made to the relevant stimulus, an observation termed distractor-response binding (Rothermund, Wentura, & De Houwer, 2005). The studies presented in this work take a closer look at such distractor-response bindings and the circumstances in which they occur. Specifically, the study reported in chapter 6 examined whether only an exact repetition of the distractor can retrieve the response with which it was associated, or whether even similar distractors may cause retrieval. The results suggested that even repeating a similar distractor caused retrieval, albeit less than an exact repetition. In chapter 7, the existence of bindings between a distractor and a response was tested beyond a perceptual level, to see whether they exist at an (abstract) conceptual level. Similar to perceptual repetition, distractor-based retrieval of the response was observed for the repetition of concepts. The study reported in chapter 8 of this work examined the influence of attention on the feature-response binding of irrelevant features.
The results pointed towards stronger binding effects when attention was directed towards the irrelevant feature compared to when it was not. The study in chapter 9 looked at the processes underlying distractor-based retrieval and distractor inhibition. The data suggest that motor processes underlie distractor-based retrieval, whereas cognitive processes underlie distractor inhibition. Finally, the findings of all four studies are also discussed in the context of learning.
We will consider discrete dynamical systems (X,T) which consist of a state space X and a linear operator T acting on X. Given a state x in X at time zero, its state at time n is determined by the n-th iteration T^n(x). We are interested in the long-term behaviour of this system, that means we want to know how the sequence (T^n (x))_(n in N) behaves for increasing n and x in X. In the first chapter, we will sum up the relevant definitions and results of linear dynamics. In particular, in topological dynamics the notions of hypercyclic, frequently hypercyclic and mixing operators will be presented. In the setting of measurable dynamics, the most important definitions will be those of weakly and strongly mixing operators. If U is an open set in the (extended) complex plane containing 0, we can define the Taylor shift operator on the space H(U) of functions f holomorphic in U as Tf(z) = (f(z)- f(0))/z if z is not equal to 0 and otherwise Tf(0) = f'(0). In the second chapter, we will start examining the Taylor shift on H(U) endowed with the topology of locally uniform convergence. Depending on the choice of U, we will study whether or not the Taylor shift is weakly or strongly mixing in the Gaussian sense. Next, we will consider Banach spaces of functions holomorphic on the unit disc D. The first section of this chapter will sum up the basic properties of Bergman and Hardy spaces in order to analyse the dynamical behaviour of the Taylor shift on these Banach spaces in the next part. In the third section, we study the space of Cauchy transforms of complex Borel measures on the unit circle first endowed with the quotient norm of the total variation and then with a weak-* topology. While the Taylor shift is not even hypercyclic in the first case, we show that it is mixing for the latter case. 
In Chapter 4, we will first introduce Bergman spaces A^p(U) for general open sets and provide approximation results which will be needed in the next chapter, where we examine the dynamical properties of the Taylor shift on these spaces. In particular, for 1<=p<2 we will find sufficient conditions for the Taylor shift to be weakly mixing or strongly mixing in the Gaussian sense. For p>=2, we consider specific Cauchy transforms in order to determine open sets U such that the Taylor shift is mixing on A^p(U). In both sections, we will illustrate the results with appropriate examples. Finally, we apply our results to universal Taylor series. The results of Chapter 5 about the Taylor shift allow us to consider the behaviour of the partial sums of the Taylor expansion of functions in general Bergman spaces outside their disc of convergence.
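On Taylor coefficients, the operator Tf(z) = (f(z) - f(0))/z defined above acts as a backward shift: if f(z) = sum of a_n z^n, then Tf has the coefficient sequence (a_1, a_2, ...). A minimal numerical sketch for truncated series, with illustrative function names not taken from the thesis:

```python
def taylor_shift(coeffs):
    """Taylor shift on a (truncated) coefficient sequence:
    drops a_0 and shifts every remaining coefficient down by one index."""
    return coeffs[1:]

def evaluate(coeffs, z):
    """Horner evaluation of the polynomial with the given coefficients."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * z + a
    return result

# f(z) = 1 + 2z + 3z^2, so Tf(z) = 2 + 3z.
f = [1.0, 2.0, 3.0]
Tf = taylor_shift(f)

# Check the defining identity Tf(z) = (f(z) - f(0))/z at a sample point.
z = 0.5
lhs = (evaluate(f, z) - f[0]) / z
rhs = evaluate(Tf, z)
```

Iterating `taylor_shift` computes T^n(f), whose decay (or growth) on a given coefficient space mirrors the long-term behaviour studied in the thesis.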
The forward effect of testing refers to the finding that retrieval practice of previously studied information increases retention of subsequently studied other information. It has recently been hypothesized that the forward effect (partly) reflects the result of a reset-of-encoding (ROE) process. The proposal is that encoding efficacy decreases as study material accumulates, but testing of previously studied information resets the encoding process and makes the encoding of the subsequently studied information as effective as the encoding of the previously studied information. The goal of the present study was to verify the ROE hypothesis at the item level. An experiment is reported that examined the effects of testing, in comparison to restudy, on items' serial position curves. Participants studied three lists of items in each condition. In the testing condition, participants were tested immediately on non-target lists 1 and 2, whereas in the restudy condition, they restudied lists 1 and 2. In both conditions, participants were tested immediately on target list 3. Influences of condition and items' serial learning position on list 3 recall were analyzed. The results showed the forward effect of testing and, furthermore, that this effect varies with items' serial list position. Early target list items at list primacy positions showed a larger enhancement effect than middle and late target list items at non-primacy positions. The results are consistent with the ROE hypothesis at the item level. The generalizability of the ROE hypothesis across different experimental tasks, such as the list-method directed-forgetting task, is discussed.