Refine
Year of publication
- 2017 (50)
Document Type
- Doctoral Thesis (35)
- Article (12)
- Working Paper (2)
- Book (1)
Keywords
- Fernerkundung (4)
- Landsat (3)
- Angola (2)
- Depression (2)
- Entrepreneurship (2)
- Erschöpfung (2)
- Gesundheit (2)
- Meaning (2)
- Psychotherapie (2)
- Remote Sensing (2)
Institute
- Geographie und Geowissenschaften (13)
- Psychologie (13)
- Wirtschaftswissenschaften (7)
- Fachbereich 2 (2)
- Informatik (2)
- Soziologie (2)
- Anglistik (1)
- Computerlinguistik und Digital Humanities (1)
- Fachbereich 1 (1)
- Fachbereich 6 (1)
Our research explored transcultural space in contemporary Québécois drama. Our work was based primarily on Bertrand Westphal's concept of transgressivity [WESTPHAL: 2007] and the notion of transculturality proposed by Wolfgang Welsch [WELSCH: 1999].
Welsch's reflections inspired the three main axes of our analysis, around which the superimposed transcultural dimensions are articulated: the syncretic axis, the intimate axis, and the cosmopolitan axis. These axes determined the choice of our corpus, drawn from Québec's transcultural period between 1975 and 1995. The syncretic axis emerges from the presence of interconnected modern cultures, in which ways of life are not confined to national cultural borders; they "transgress" them and reappear in other cultures. The intimate axis follows from the fact that individuals (the self or selves) are cultural hybrids, each individual being formed by multiple attachments. These interact with one another, creating an internal transculturality. The cosmopolitan axis contains a dimension representing the many ways of life and diverse cultural lives that interpenetrate one another. They interact among themselves, but also with spaces considered to lie outside the transcultural context.
We chose to develop our project around the theoretical premises of geocriticism. This led us to establish a specific analytical grid in order to uncover how transcultural human space operates. The analysis was based solely on the dramatic text. Devices inspired by geocriticism revealed several essential characteristics of the superimposed transcultural dimensions of Québec's diversity.
This dissertation details how Zeami (ca. 1363 - ca. 1443) understood his adoption of the heavenly woman dance within the historical conditions of the Muromachi period. He adopted the dance, based on performances by the Ōmi troupe player Inuō, in order to expand his own troupe's repertoire to include a divinely powerful, feminine character. In the first chapter, I show how Zeami, informed by his success as a sexualized child in the service of the political elite (chigo), understood the relationship between performer and audience in gendered terms. In his treatises, he describes how a player must create a complementary relationship between patron and performer (feminine/masculine or yin/yang) that escalates to an ecstasy of successful communication between the two poles, resembling sexual union. Next, I look at how Zeami perceived Inuō's relationships with patrons: the daimyo Sasaki Dōyo in chapter two and the shogun Ashikaga Yoshimitsu in chapter three. Inuō was influenced by Dōyo's masculine penchant for powerful, awe-inspiring art, but Zeami also recognized that Inuō was able to complement Dōyo's masculinity with feminine elegance (kakari and yūgen). In his relationship with Yoshimitsu, Inuō used the performance of subversion, both in his public persona and in the aesthetic of his performances, to maintain a rebellious reputation appropriate to the climate of conflict among the martial elite. His play "Aoi no ue" draws on the aristocratic literary tradition of the Genji monogatari, giving Yoshimitsu the role of Prince Genji and confronting him with the consequences of betrayal in the form of a demonic, because jilted, Lady Rokujō. This performance challenged Zeami's early notion that the extreme masculinity of demons and the elegant femininity exemplified by the aristocracy must be kept separate in character creation. In the fourth chapter, I show how Zeami also combined dominance (masculinity) and submission (femininity) in the corporal capacity of a single player when he adopted the heavenly woman dance. The heavenly woman dance thus complemented not only the masculinity of his male patrons with femininity but also the political power of his patrons with another dominant power, one that the plays featuring the heavenly woman dance label divine rather than masculine.
GIS – what can and what can’t it say about social relations in adaptation to urban flood risk?
(2017)
Urban flooding cannot be avoided entirely and in all areas, particularly in coastal cities; adaptation to the growing risk is therefore necessary. Knowledge of risk based on Geographical Information Systems (GIS) informs a location-based approach to adaptation to climate risk. It allows managing city-wide coordination of adaptation measures, reducing the adverse impacts of local strategies on neighbouring areas to a minimum. Quantitative assessments dominate GIS applications in flood risk management, for instance to demonstrate the distribution of people and assets in a flood-prone area. Qualitative, participatory approaches to GIS are on the rise but have not yet been applied in the context of flooding. The overarching research question of this working paper is: what can GIS, and what can it not, say about social relations in adaptation to urban flood risk? The use of GIS in risk mapping has exposed environmental injustices. GIS applications further allow modelling future flood risk as a function of demographic and land use changes, and combining it with decision support systems (DSS). While such GIS applications provide invaluable information for urban planners steering adaptation, they fall short of revealing the social relations that shape individual and household adaptation decisions. The relevance of networked social relations in adaptation to flood risk has been demonstrated in case studies, and extensively in the literature on organizational learning and adaptation to change. The purpose of this literature review is to identify the types of social relations that shape adaptive capacities towards urban flood risk and that cannot be identified in a conventional GIS application.
With the advent of high-throughput sequencing (HTS), profiling immunoglobulin (IG) repertoires has become an essential part of immunological research. The dissection of IG repertoires promises to transform our understanding of adaptive immune system dynamics. Advances in sequencing technology now also allow the use of the Ion Torrent Personal Genome Machine (PGM) to cover the full length of IG mRNA transcripts. The applications of this benchtop-scale HTS platform range from the identification of new therapeutic antibodies to the deconvolution of malignant B cell tumors. In this thesis, the usability of the PGM is assessed for investigating the IG heavy chain (IGH) repertoires of animal models. First, an innovative bioinformatics approach is presented to identify antigen-driven IGH sequences from bulk-sequenced bone marrow samples of transgenic humanized rats expressing a human IG repertoire (OmniRat™). We show that these rats mount a convergent IGH CDR3 response towards measles virus hemagglutinin protein and tetanus toxoid, with high similarity to human counterparts. In the future, databases could contain all IGH CDR3 sequences with known specificity, allowing IG repertoire datasets to be mined for past antigen exposures and ultimately reconstructing the immunological history of an individual. Second, a unique molecular identifier (UID) based HTS approach and network property analysis are used to characterize the CLL-like CD5+ B cell expansion of A20BKO mice overexpressing a natural short splice variant of the CYLD gene (A20BKOsCYLDBOE). We could determine that, in these mice, overexpression of sCYLD leads to an unmutated subvariant of CLL (U-CLL). Furthermore, we found that this short splice variant is also seen in human CLL patients, highlighting it as an important target for future investigations. Third, the UID-based HTS approach is improved by adapting it to the PGM sequencing technology and applying a custom-made data processing pipeline including ImMunoGeneTics (IMGT) database error detection. In this way, we were able to obtain correct IGH sequences with over 99.5% confidence and correct CDR3 sequences with over 99.9% confidence. Taken together, the results, protocols, and sample processing strategies described in this thesis will improve the usability of animal models and the Ion Torrent PGM HTS platform in the field of IG repertoire research.
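As an aside on the UID-based approach mentioned above: the core error-correction idea is to group reads by their unique molecular identifier and collapse each group into a per-position consensus. The following toy sketch illustrates that idea only; it is not the thesis pipeline, and the reads and UIDs shown are invented.

```python
# Illustrative sketch (not the thesis pipeline): collapse reads sharing a
# unique molecular identifier (UID) into a per-position majority consensus,
# the basic error-correction idea behind UID-based HTS approaches.
from collections import Counter, defaultdict

def consensus(reads):
    """Majority vote at each position across equal-length reads."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*reads))

def collapse_by_uid(tagged_reads):
    groups = defaultdict(list)
    for uid, seq in tagged_reads:
        groups[uid].append(seq)
    return {uid: consensus(seqs) for uid, seqs in groups.items()}

# Hypothetical toy data: three reads of one molecule, one with an error.
reads = [("ACGT", "TTGCA"), ("ACGT", "TTGCA"), ("ACGT", "TTGAA")]
print(collapse_by_uid(reads))  # {'ACGT': 'TTGCA'}
```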
Witnesses who have not observed a crime but only perceived it auditorily are referred to as earwitnesses. In criminal proceedings, earwitnesses are given the task of recognizing the perpetrator's voice in an acoustic line-up (voice line-up). Forensic practice shows that earwitnesses cope with this task with varying degrees of success, without any clear pattern emerging. Research on earwitnesses, however, suggests that musical training improves speaker recognition ability.
The aim of this thesis is to examine whether the degree of musical perceptual competence allows a prognosis of speaker recognition performance.
To test this, 60 participants took part both in a "musicality test" in the form of the Montreal Battery of Evaluation of Amusia (MBEA) and in a target-present voice line-up. Regression models showed that the probability of correct speaker recognition increases with higher MBEA scores. This test score permits a significant prediction of speaker recognition performance. In contrast, the duration of musical training, also collected via questionnaires, does not permit a significant prediction. The experiment also shows that the duration of musical training explains the musicality test score only to a limited extent.
These observations lead to the conclusion that, when assessing earwitnesses, direct testing of musical perceptual ability is preferable to inference based on musical-biographical information.
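The analysis described above is, in essence, a logistic regression of a binary recognition outcome on an MBEA score. A minimal sketch of such a model follows; the data file, column names, and the inclusion of training duration as a second predictor are illustrative assumptions, not the thesis code.

```python
# Hypothetical sketch of the kind of regression described above: predicting
# correct speaker identification (0/1) from an MBEA sum score. The CSV and
# column names are illustrative, not from the thesis.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("earwitness_experiment.csv")  # hypothetical data file
X = sm.add_constant(df[["mbea_score", "music_training_years"]])
model = sm.Logit(df["correct_identification"], X).fit()
print(model.summary())  # a positive, significant mbea_score coefficient
                        # would mirror the reported finding
```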
This study starts from the thesis that there is a difference between Weber's concept of an interpretive sociology (verstehende Soziologie) and the substantive study "The Protestant Ethic and the Spirit of Capitalism" (PE), in the form of a surplus achievement on the side of the PE. This assumption rests on the observation that the PE offers various perspectives on the emergence of meaningful orientations of action and that, at the same time, strategies for rendering them plausible can be identified. Weber apparently addressed such connections only in passing in his methodological writings, and the relevant passages give the impression that the question of the historicity of meaningful orientations of action receives attention merely as a premise, or as a justification for the necessity of an interpretive grasp of meaning. This observation determines the further course of the investigation and led to an argumentative structure that can be described as a three-step sequence: a) A discussion of the explanatory profile of Weber's concept of an interpretive sociology, together with examples of the surmised surplus achievements on the side of the PE, serves first to explicate the identified problem more precisely (see Section I). Building on this, and in view of the current state of research, b) those argumentative connections of the substantive research which in Section I initially point to a surplus achievement on the side of the PE prove problematic, or remain unclarified in their logical relation to Weber's methodology. Here, a comparative examination of proponents of unity theses (cf. Prewo 1979, Schluchter 1998, Collins 1986a) and proponents of difference theses (cf. v. Schelting 1934, Bendix 1964, Kalberg 2001) shows that the current state of the discussion continues to be characterized by open questions and inconsistencies (see Section II). Implicit answers to these problems can be gained through c) a renewed reconstructive look at the connections and plausibilization strategies contained in the PE. Here the strategy is two-sided: for one part of the identified problems, it is of particular importance to gain systematic insight into the connections contained in the PE (see Section III). The results obtained there serve as the basis for an adequate reconstruction of the methodological implementation and make it possible to understand how Weber sought to explain the phenomena placed at the focus of the research (see Section IV).
The subject of this study is an examination of the lexis of the late medieval Luxembourg account books under the premise of urbanity. Since no established methodology was available for dividing the lexis into material relevant or irrelevant to the analysis, a methodology of its own was developed within the study, drawing on concepts from linguistics and historical scholarship. On this basis, the research corpus was compiled from the account books of the city of Luxembourg, which survive almost without gaps for 1388-1500, with the aim of analyzing specifically urban lexis. The analysis pursued a threefold objective: first, examining the lexis with regard to the distribution of types and tokens across domains defined as specifically urban; second, compiling a glossary intended to serve as a text-philological tool for working through the account books. In addition, the study also addresses the historical insights gained through the respective vocabulary analyses.
The study is devoted to the relationship between art and television in Germany from the 1960s to the present, taking into account the social and artistic discourse. In the 1960s, the collaboration between artists and television began most promisingly with projects such as "Black Gate Cologne" or Gerry Schum's "Fernsehgalerie". In close cooperation with television executives, programs were produced specifically for broadcast and aired as television art. After an initial euphoria, however, the acceptance of and response to these projects ranged from modest to dismissive. Yet this did not lead to failure and a retreat of art into the museum or gallery as sites of presentation, but to a further development of television art up to the present day. Television art has adapted, in its performance and production context as well as in its choice of themes, to each era with its technical and communicative possibilities and to the social discourse on publicly relevant topics. Television art is always a mirror of the current discourses in art and society. Previous research regarded television art as having failed and therefore as no longer existent. The stigmatization of television as a medium of pure entertainment and information meant that television art did not figure, as a term or as an art genre, in public and scholarly discourse. The typological and content analysis has shown, however, that television art, clearly distinct from video art, also exists in the present.
The Kunstgewerbeschule Pforzheim occupies a special position among the educational institutions founded in the 19th century to promote the arts in the trades. Its curriculum and course of training were oriented primarily toward the needs of the jewelry industry, established in Pforzheim since 1767, which played a decisive role in the school's founding and support. The dissertation analyzes the conditions that led to the founding of the Kunstgewerbeschule Pforzheim in 1877, as well as the quality and methods of the artistic and technical training offered there, in light of contemporary educational ideals. It then assesses the school's reputation among contemporaries and works out the significance of this institution for Pforzheim's jewelry industry. The period under consideration extends from 1877, the year the school was founded, to 1911, the year its first director, Alfred Waag, died. Contemporary reports and archival materials, as well as the school's continuously expanded collection of teaching materials, form the basis of the investigation. A large part of the model pieces, and many of the books and pattern works acquired for the students' artistic training, survive to this day in archives and museums and testify to the quality and progressiveness of the institution. The Kunstgewerbeschule Pforzheim set standards above all in design and technique. Under the school's influence, designs for the local jewelry industry emerged that were tailored specifically to serial production and thus exemplify a successful alliance of art, technology, and economic efficiency. The collaboration of local jewelry manufacturers with teachers or graduates of the school could be documented, as could the successful participation of various students in supra-regional competitions for jewelry designs. Source-based research made it possible to demonstrate connections between the models regarded as exemplary, the design work at the school, and the jewelry industrially produced in Pforzheim. The frequently voiced accusation that Pforzheim firms mainly copied others' jewelry designs and produced them cheaply by mechanical manufacturing techniques fails to recognize the artistic ambition of an industry that, for the aesthetic and technical training of its workers and apprentices, founded a school of applied arts that survives to this day under the name Hochschule Pforzheim - Gestaltung, Technik, Wirtschaft und Recht.
A three-month intervention on achievement motivation was carried out with students in seven grade levels at six primary and two secondary schools. The intervention comprised 25.5 hours and was based on a training program which, in addition to didactic impulses for teachers, aimed above all to strengthen students with regard to self-perception, self-efficacy expectations, causal attribution of successes and failures, social relationships, and goal setting. The two underlying hypotheses of the study formulate the expectations that, after completion of the intervention, first the achievement motivation and second the well-being (flourishing) of the students would increase in a sustained manner. Data were collected at three measurement points (pre- and post-test, and a follow-up six months after the end of the intervention). Neither hypothesis was confirmed in the empirical evaluation (RM-ANOVA). Supplementary exploratory analyses (t-tests and cluster analyses) showed isolated tendencies in the direction of the hypotheses but are not conclusive. In light of these findings, a qualitative content analysis of the written feedback of the participating teachers was conducted after the study. Five factors critical to success were identified (teacher commitment, degree of effort, role of the students, project organization, and content and methodology of the intervention), whose consideration appears indispensable for the success of positive-psychology interventions in organizations. The findings of the qualitative content analysis finally lead to the conclusion that, owing to a lack of program integrity, no statement can be made about the actual effectiveness of the training. The thesis closes with recommendations for the optimal design of positive-psychology interventions in educational organizations.
Dry tropical forests are facing massive conversion and degradation processes and are the most endangered forest type worldwide. One of the largest dry forest types is the Miombo forest, which stretches across the Southern African subcontinent; its proportionally largest part lies in Angola. The study site of this thesis is located in south-central Angola. The country still suffers from the consequences of 27 years of civil war (1975-2002), which provides a unique socio-economic setting. The natural characteristics form a representative cross-section, which proved ideal for studying underlying drivers as well as current and retrospective land-use change dynamics. The major land-change dynamic in the study area is the conversion of Miombo forest to cultivation areas, along with the modification of forest areas, i.e. degradation, through the extraction of natural resources. Given predictions of population growth, climate change, and large-scale investments, land pressure is expected to increase further. To fully understand the impacts of these dynamics, both conversion and modification of forest areas were assessed. Using the conceptual framework of ecosystem services, the predominant trade-off between food and timber in the study area was analyzed, including retrospective dynamics and impacts. This approach accounts for products that contribute directly or indirectly to human well-being. For this purpose, data from the Landsat archive from 1989 to 2013 were applied in approaches adapted to the study area. The objectives of these approaches were (I) to detect underlying drivers and the temporal and spatial extent of their impact, (II) to describe modification and conversion processes that reach from times of armed conflict through the ceasefire and the post-war period, and (III) to provide an assessment of drivers and impacts in a comparative setting. It could be shown that major underlying drivers of the conversion processes are resettlement dynamics as well as the location and quality of roads and settlements. Furthermore, forests that are selectively used for resource extraction have a higher chance of being converted to fields. Drivers of forest degradation are, on the one hand, also strongly connected to settlements and infrastructure, but to a large extent also to fire dynamics, which occur mostly in more remote and presumably undisturbed forest areas. The loss of woody biomass, as well as its slow recovery after the abandonment of fields, could be quantified and stands in stark contrast to the amount of cultivated food that is needed. The results of the thesis support the fundamental understanding of drivers and impacts in the study area and can thus contribute to sustainable resource management.
Educational researchers have intensively investigated students' academic self-concept (ASC) and self-efficacy (SE). Both constructs are part of students' competence-related self-perceptions and are considered to support students' academic success and career development in a positive manner (e.g., Abele-Brehm & Stief, 2004; Richardson, Abraham, & Bond, 2012; Schneider & Preckel, 2017). However, there is a lack of basic research on ASC and SE in higher education in general, and in undergraduate psychology courses in particular. Therefore, following the within-network and between-network approaches of construct validation (Byrne, 1984), the present dissertation comprises three empirical studies examining the structure (research question 1), measurement (research question 2), correlates (research question 3), and differentiation (research question 4) of ASC and SE in a total sample of N = 1243 psychology students. Concerning research question 1, results of confirmatory factor analyses (CFAs) implied that students' ASC and SE are domain-specific in the sense of multidimensionality, but also hierarchically structured, with a general factor at the apex, in line with the nested Marsh/Shavelson model (NMS model; Brunner et al., 2010). Additionally, psychology students' SE to master specific psychological tasks in different areas of psychological application could be described by a 2-dimensional model with six factors according to the multitrait-multimethod (MTMM) approach (Campbell & Fiske, 1959). With regard to research question 2, results revealed that the internal structure of ASC and SE could be validly assessed; however, the assessment of psychology students' SE should follow a task-specific measurement strategy. Results for research question 3 further showed that both constructs of psychology students' competence-related self-perceptions were positively correlated with achievement in undergraduate psychology courses when the specificity of the predictor (ASC, SE) corresponded to the measurement specificity of the criterion (achievement). Overall, ASC showed substantially stronger relations to achievement than SE. Moreover, there was evidence for negative paths (contrast effects) from achievement in one psychological domain to ASC in another psychological domain, as postulated by the internal/external frame of reference (I/E) model (Marsh, 1986). Finally, building on research questions 1 to 3 (structure, measurement, and correlates of ASC and SE), psychology students' ASC and SE could be differentiated on an empirical level (research question 4). Implications for future research practices are discussed, and practical implications for enhancing ASC and SE in higher education are proposed to support the academic achievement and career development of psychology students.
Digital libraries have become a central aspect of our lives. They provide us with immediate access to an amount of data that would have been unthinkable in the past. Computer support and the ability to aggregate data from different libraries enable even small projects to maintain large digital collections on various topics. A central aspect of digital libraries is the metadata: the information that describes the objects in the collection. Metadata are digital and can be processed and studied automatically. In recent years, several studies have considered different aspects of metadata. Many focus on finding defects in the data; locating errors related to the handling of personal names has drawn particular attention. In most cases, these studies concentrate on the most recent metadata of a collection, for example looking for errors in the collection at day X. This is a reasonable approach for many applications. However, to answer questions such as when errors were added to the collection, we need to consider the history of the metadata itself. In this work, we study how the history of metadata can be used to improve the understanding of a digital library. To this end, we consider how digital libraries handle and store their metadata. Based on this information, we develop a taxonomy to describe the available historical data, that is, data on how the metadata records changed over time. We develop a system that identifies changes to metadata over time and groups them into semantically related blocks. We found that historical metadata is often unavailable; however, we were able to apply our system to a set of large real-world collections. A central part of this work is the identification and analysis of changes to metadata that corrected a defect in the collection. These corrections are the accumulated effort to ensure the data quality of a digital library. We present a system that automatically extracts corrections of defects from the set of all modifications, along with test collections containing more than 100,000 test cases, which we created by extracting defects and their corrections from DBLP. These collections can be used to evaluate automatic approaches to error detection. Furthermore, we use them to study properties of defects, concentrating on defects related to the person-name problem. We show that many defects occur in situations where very little context information is available, which has major implications for automatic defect detection. We also show that properties of defects depend on the digital library in which they occur, and we briefly discuss how corrected defects can be used to detect hidden or future defects. Besides the study of defects, we show that historical metadata can be used to study the development of a digital library over time, and we present several such studies as examples. First, we describe the development of the DBLP collection over a period of 15 years; specifically, we study how the coverage of different computer science subfields changed over time, showing that DBLP evolved from a specialized project into a collection that encompasses most parts of computer science. In another study, we analyze the impact of user emails on defect corrections in DBLP. We show that these emails trigger a significant number of error corrections. Based on these data, we can draw conclusions on why users report a defective entry in DBLP.
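The change-identification step described in this abstract can be pictured with a small sketch. The following is an illustrative field-level diff of two versions of a bibliographic record, under the assumption that records are flat key-value mappings; the record structure and example values are hypothetical, not taken from the thesis system.

```python
# Illustrative sketch (not the thesis system): compare two versions of a
# bibliographic metadata record and report field-level changes, the raw
# material from which semantically related change blocks could be grouped.
from typing import Dict, List, Tuple

Record = Dict[str, str]

def diff_record(old: Record, new: Record) -> List[Tuple[str, str, str]]:
    """Return (field, old_value, new_value) for every changed field;
    missing fields are reported with an empty string."""
    changes = []
    for field in sorted(set(old) | set(new)):
        before, after = old.get(field, ""), new.get(field, "")
        if before != after:
            changes.append((field, before, after))
    return changes

# Hypothetical example: a person-name correction between two versions.
v1 = {"title": "Automata Theory", "author": "J. Smtih", "year": "2016"}
v2 = {"title": "Automata Theory", "author": "J. Smith", "year": "2016"}
print(diff_record(v1, v2))  # [('author', 'J. Smtih', 'J. Smith')]
```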
The search for relevant determinants of knowledge acquisition has a long tradition in educational research, with systematic analyses having started over a century ago. To date, a variety of relevant environmental and learner-related characteristics have been identified, providing a wide body of empirical evidence. However, there are still gaps in the literature, which are highlighted in the current dissertation. The dissertation includes two meta-analyses, summarizing the evidence on the effectiveness of electrical brain stimulation and on the effects of prior knowledge on later learning outcomes, and one empirical study employing latent profile transition analysis to investigate changes in conceptual knowledge over time. The results of the three studies demonstrate how learning outcomes can be advanced by input from the environment and how strongly they are related to students' level of prior knowledge. It is concluded that environmental and learner-related variables impact both the biological and the cognitive processes underlying knowledge acquisition. Based on the findings of the three studies, methodological and practical implications are provided, followed by an outline of four recommendations for future research on knowledge acquisition.
Exhaustion is a prominent, unspecific symptom with a wide range of accompanying symptoms (e.g., pain, sleep disturbances, irritability, dejection). Common concepts of exhaustion-related disorders and syndromes are frequently criticized with regard to their discriminative power or structure. The causes of exhaustion are manifold, and treatment can only proceed on the basis of thorough differential diagnostics. Based on adaptation-related stress models, the development of exhaustion can be described and divided into three forms (I: reversible, II: predisposed, and III: emotionally dysregulated). Post-stress symptoms (e.g., "weekend migraine", "vacation infections") may represent a form of exhaustion caused by a central depletion of noradrenaline levels. The present work examined the reliability of the Neuropattern exhaustion scale as well as the relationship between exhaustion, stress, long-term health status, and post-stress symptoms. Data from outpatients, inpatients, and employees who took part in a randomized clinical trial on Neuropattern diagnostics were used, supplemented by data from healthy persons collected as a norm sample. The Neuropattern exhaustion scale proved to be a reliable and valid measure. It was an indicator of direct, indirect, and intangible health costs (e.g., increased visits to physicians and therapists, medication intake and inability to work, and reduced mental and physical well-being). Both stress and exhaustion predicted health status over the course of twelve months. Remarkably, the relationship between stress and long-term health status was mediated primarily by exhaustion. Finally, the prevalence of post-stress symptoms was determined in healthy persons (2.9%), outpatients (20%), and inpatients (34.7%). Here, too, the strongest predictor of post-stress symptoms was not stress but exhaustion. Models of the psychophysiological stress response can explain the development of exhaustion and improve the diagnostics and treatment of stress-related health disorders. The Neuropattern exhaustion scale presented here is a reliable instrument, well suited for practice, that can be used for the indication and validation of preventive and therapeutic measures. Depending on the form of exhaustion, various measures of regenerative, instrumental, or cognitive stress management, dietary supplements, and pharmacotherapy are available.
Prenatal, postnatal, and current chronic stress exposure are significant risk factors for mental and physical impairments in adulthood. The aim of this dissertation is to analyze the influence of stress across the life course (prenatal, postnatal, and current stress exposure) on various exhaustion variables and depressiveness, and to determine possible mediating effects of current stress on associations between prenatal or postnatal stress and exhaustion or depressiveness. To examine this question, data were collected from chronically stressed teachers (N = 186; 67.7% female) without a diagnosed mental illness, as well as from general-practice patients (N = 473; 59% female) and clinic inpatients (N = 284; 63.7% female) with at least one stress-related mental health disorder. Pre- and postnatal stress, subjective exhaustion, and depressiveness were assessed in all samples; current stress exposure and post-stress symptoms were assessed in the patient samples. In addition, conceptual endophenotypes were operationalized as a psychobiological measure of exhaustion in both patient samples, and overnight activity of the parasympathetic nervous system as a measure of vagal recovery in the general-practice sample. For the teachers, univariate analyses of variance were used to test whether teachers with early childhood adversity showed different exhaustion or depression scores than teachers without such adversity. In the patient samples, multiple and binary logistic regression models were used to test associations of prenatal, postnatal, and current stress with exhaustion, depressiveness, the conceptual endophenotypes of Neuropattern diagnostics, and overnight parasympathetic activity (general-practice patients only). Possible mediating effects of current stress exposure on associations between prenatal and postnatal stress and these outcomes were determined. Ad hoc, a possible moderating effect of prenatal stress on the association between current stress and overnight heart rate was additionally tested. In otherwise healthy teachers, prenatal stress was associated with a more pronounced effort-reward imbalance and higher emotional exhaustion. Postnatal stress was accompanied by higher scores for depressiveness, effort-reward imbalance, the MBI total scale, emotional exhaustion, and vital exhaustion. In both general-practice and clinic patients, current psychosocial burden and current impairment from life events were associated with depressiveness, exhaustion, and post-stress symptoms. In general-practice patients, current stress exposure predicted increased odds of noradrenaline hypoactivity and serotonin hyperreactivity; in clinic patients, of noradrenaline hypoactivity. Furthermore, general-practice patients with high psychosocial burden showed increased overnight parasympathetic activity. In general-practice patients, high prenatal stress was associated with perceived psychosocial burden, current life events, and post-stress symptoms; prenatal stress was also accompanied by reduced vagal activity.
Postnatal stress was further associated with depressiveness, perceived psychosocial burden, current life events, exhaustion, and post-stress symptoms, as well as with increased odds of noradrenaline hypoactivity and with CRH hyperactivity. The associations between prenatal or postnatal stress and post-stress symptoms, exhaustion, depressiveness, and noradrenaline hypoactivity were significantly mediated by current stress exposure. The association between current stress and overnight parasympathetic activity was moderated by prenatal stress: at low to moderate, but not at high, prenatal stress, high psychosocial burden was accompanied by increased overnight parasympathetic activity. In clinic patients, no significant associations emerged between prenatal or postnatal stress and exhaustion or depressiveness. Prenatal stress can impair trophotropic functions and thereby increase vulnerability to exhaustion and depressiveness. Continued postnatal and current stress exposure increases cumulative stress across a person's life course and contributes to psychobiological dysfunctions as well as to exhaustion and depressiveness.
This thesis focuses on joint projects between hotel companies and universities offering hotel-specific degree programs. As a result of demographic developments and changing values, recruiting and retaining staff are becoming increasingly important and are turning into a competitive parameter in the hotel industry. To meet this essential challenge, hotel operators with committed staff development are needed. Many universities have launched new, practice-oriented degree programs in tourism, event, or hotel management, both to counter the skepticism of the hotel industry and to meet students' expectations. Many of these students would be potential apprentices who, on balance, opted for the degree route instead. It is therefore important, in close cooperation with suitable institutions and education providers, to develop practice-oriented study models with modern course content for the changing expectations of applicants and to place them successfully on the market. This thesis therefore pursues the approach of analyzing adequate criteria and success factors for contractually agreed cooperations between hotel chains and universities and of deriving recommendations for action from them. The large number of cooperations makes clear that a growing number of hotel groups recognize the necessity of joining forces with academic partners in staff recruitment, retention, and development. Owing to the restrained marketing of many cooperation models, however, their visibility is limited, and so is their positive effect on the image of the hotel industry. At the same time, the educational landscape shows rising student numbers and a proliferation of degree programs alongside a serious decline in the number of vocationally trained workers. Cooperation models are therefore a sensible instrument for responding to these market developments, although their importance is recognized primarily by companies with a strategic HR policy. On this basis, a "typology of privileged educational partnerships" comprising ten cooperation types was developed. It reveals different intensities of the partnership's educational elements, as does an individualized "factor-phase model" that maps the process structure of cooperation development. Depending on the closeness of the collaboration, on corporate and university philosophy, and on prior experience with cooperations, obligations and challenges arise above all in the active shaping of, and reliable communication within, a cooperation model. A key role falls to the personally responsible coordinator, who is regarded as the guarantor of efficient organization and professionalism. From this, the success factors were distilled in the ASP model: attractiveness, security, and personality make up the success of a privileged educational partnership. It was also confirmed that the experience of the two partners in a cooperation must match and that a clear agreement on objectives, fixing duties and tasks, is required. High quality standards, transparency, and process efficiency complement this and make clear that the education sector, as part of a company's HR policy, is at once sensitive and demanding.
Anchoring the partnership at a company's management level is decisive in order to signal, internally and externally, the importance of an educational alliance. When economic advantages can be derived from learning and knowledge, education is interpreted even more strongly as the brand core of a good employer. On this basis, the idea of staff development is perfected through the approach of continuous employee education, and the solution of a "privileged educational partnership" lays the foundation for it. Developing young talent becomes a strategic means of retaining employees and avoiding cost-intensive vacancies; in addition, networks secure expertise and strengthen the company's image. Privileged educational partnerships present suitable models for retaining committed employees while preparing them for the next career step. The present study offers a contribution to the discussion for a better mutual understanding of a symbiosis of hotel chain and university in the education sector, as well as successful concept ideas for diverse network structures.
Interaction between the Hypothalamic-Pituitary-Adrenal Axis and the Circadian Clock System in Humans
(2017)
The rotation of the Earth creates 24-h cycles of day and night. Endogenous circadian clocks sense these light/dark rhythms, and the master pacemaker, situated in the suprachiasmatic nucleus of the hypothalamus, entrains physical activities according to this information. The circadian machinery is built from transcriptional/translational feedback loops generating oscillations in all nucleated cells of the body. In addition, unexpected environmental changes, called stressors, challenge living systems. A response to these stimuli is provided immediately via the autonomic nervous system and more slowly via the hypothalamic-pituitary-adrenal (HPA) axis. When the HPA axis is activated, circulating glucocorticoids are elevated and regulate organ activities in order to maintain the survival of the organism. Both the clock and the stress systems are essential for continuity and interact with each other to keep internal homeostasis. The physiological interactions between the HPA axis and the circadian clock system have mainly been addressed in animal studies, which focus on the effects of stress and circadian disturbances on cardiovascular, psychiatric, and metabolic disorders. Although such studies offer the opportunity to test in the whole organism, to apply invasive techniques, and to control and manipulate parameters to a high degree, the generalization of their results to humans remains a matter of debate. On the other hand, studies with cell lines cannot fully reflect the conditions occurring in a living organism. Thus, human studies are absolutely necessary to investigate the mechanisms involved in stress and circadian responses. The studies presented in this thesis were intended to determine the effects of cortisol, as an end product of the HPA axis, on PERIOD (PER1, PER2 and PER3) transcripts as circadian clock genes in healthy humans. The expression levels of PERIOD genes were measured in whole blood under baseline conditions and after stress. The results demonstrated here give a better understanding of transcriptional programming regulated by pulsatile cortisol under standard conditions and of the short-term effects of cortisol increase on circadian clocks after acute stress. These findings also draw attention to inter-individual variation in the stress response as well as to non-circadian functions of PERIOD genes in the periphery, which need to be examined in detail in the future.
Automata theory is the study of abstract machines. It is a theory in theoretical computer science and discrete mathematics. The word automata (the plural of automaton) comes from a Greek word meaning "self-acting". Automata theory is closely related to formal language theory [99, 101]; the theory of formal languages constitutes the backbone of the field now generally known as theoretical computer science. This thesis introduces a few types of automata and studies the classes of languages they recognize. Chapter 1 is the road map, with introduction and preliminaries. In Chapter 2 we consider a few formal languages associated with graphs that have Eulerian trails. We place in the Chomsky hierarchy a few languages that combine other properties with the Eulerian property. In Chapter 3 we consider jumping finite automata, i.e., finite automata in which the input head, after reading and consuming a symbol, can jump to an arbitrary position of the remaining input. We characterize the class of languages described by jumping finite automata in terms of special shuffle expressions and survey other equivalent notions from the existing literature. We are also able to characterize some superclasses of this language class. In Chapter 4 we introduce boustrophedon finite automata, i.e., finite automata working on rectangular-shaped arrays (i.e., pictures) in a boustrophedon mode, and we also introduce returning finite automata, which read the input line after line but do not alternate direction like boustrophedon finite automata, i.e., they always read from left to right, line after line. We provide close relationships with the well-established class of regular matrix (array) languages. We sketch possible applications to character recognition and kolam patterns. Chapter 5 deals with general boustrophedon finite automata and general returning finite automata, which read with different scanning strategies. We show that all 32 different variants describe only two different classes of array languages. We also introduce Mealy machines working on pictures and show how these can be used in a modular design of picture processing devices. In Chapter 6 we compare three different types of regular grammars for array languages introduced in the literature, namely regular matrix grammars, (regular : regular) array grammars, and isometric regular array grammars, together with variants thereof, focusing on hierarchical questions. We also refine the presentation of (regular : regular) array grammars in order to clarify the interrelations. In Chapter 7 we provide further directions of research with respect to the studies carried out in each of the chapters.
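To illustrate the jumping finite automata of Chapter 3: since the head may jump over parts of the remaining input, acceptance depends only on whether some order of symbol consumption drives the finite control to a final state. The brute-force sketch below makes this concrete for a small example; it illustrates the definition and is not code from the thesis.

```python
# Illustrative sketch (not from the thesis): acceptance for a jumping finite
# automaton. After consuming a symbol, the head may jump anywhere in the
# remaining input, so a word is accepted iff some consumption order of its
# letters drives the finite control to a final state.
from itertools import permutations

def jfa_accepts(word: str, delta, start, finals) -> bool:
    """Brute-force check: try every order of consuming the letters."""
    for order in set(permutations(word)):
        state = start
        for sym in order:
            state = delta.get((state, sym))
            if state is None:
                break
        if state in finals:
            return True
    return False

# Example JFA for { w : |w|_a = |w|_b }, a classic non-regular language
# accepted by jumping finite automata: consume one 'a' then one 'b', repeat.
delta = {("q0", "a"): "q1", ("q1", "b"): "q0"}
print(jfa_accepts("abba", delta, "q0", {"q0"}))  # True
print(jfa_accepts("aab", delta, "q0", {"q0"}))   # False
```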
The first part of this thesis offers a theoretical foundation for the analysis of Tolkien's texts. Each of the three fields of interest, nostalgia, utopia, and the pastoral tradition, is introduced in a separate chapter. Special attention is given to the interrelations of the three fields. Their history, meaning, and functions are briefly elaborated, and definitions applicable to their occurrences in fantasy texts are reached. In doing so, new categories and terms are proposed that enable a detailed analysis of the nostalgic, pastoral, and utopian properties of Tolkien's works. As nostalgia and utopia are important ingredients of pastoral writing, they are each introduced first and are finally related to a definition of the pastoral. The main part of this thesis applies the definitions and insights reached in the theoretical chapters to Tolkien's The Lord of the Rings and The Hobbit. This part is divided into three main sections. Again, the order of the chapters follows the line of argumentation. The first section contains the analysis of pastoral depictions in the two texts. Given the separation of the pastoral into different categories, which were outlined in the theoretical part, the chapters examine bucolic and georgic pastoral creatures and landscapes before turning to non-pastoral depictions, which are subdivided into the antipastoral and the unpastoral. A separate chapter looks at the positions and functions of the bucolic and georgic pastoral in the primary texts. This analysis is followed by a chapter on men's special position in Tolkien's mythology, as their depiction reveals their potential to be both pastoral and antipastoral. The second section of the analytical part is concerned with the role of nostalgia within pastoral culture. The focus is laid on the meaning and function of the different kinds of nostalgia, as defined in the theoretical part, detectable in bucolic and georgic pastoral cultures. Finally, the analysis turns to the utopian potential of Tolkien's mythology. Again, the focus lies on the pastoral and non-pastoral creatures. Their utopian and dystopian visions are presented and contrasted. In this way, different kinds of utopian vision are detected and set in relation to the overall dystopian fate of Tolkien's fictional universe. Drawing on the results of this thesis and on Terry Gifford's ecocritical work, the final chapter argues that Tolkien's texts can be defined as modern pastorals. The connection between Tolkien's work and pastoral literature made explicit in the analysis is thus cemented in generic terms. The conclusion presents a summary of the central findings of this thesis and introduces questions for further study.
This study examines the relationship between the linguistic sign and concepts. The lexicon, with its definitions of meaning, is the most evident intersection between the language system and the conceptual system. The definition of meaning is treated as an empirical datum that can be described formally. The meaning analysis transforms the definition of meaning into a complex ordering structure. The method was developed from various theories of concepts, principally Raili Kauppi's theory of concepts and Formal Concept Analysis. As a result, one obtains from the meanings of a lexicon a complex system of one- to n-place concepts. This conceptual system differs from the familiar semantic networks in that it entirely dispenses with relations projected onto the system from outside, such as the so-called semantic relations. The only relations in this system are conceptual.
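The step from meaning definitions to an ordered system of concepts can be pictured with a toy Formal Concept Analysis computation. The sketch below enumerates the formal concepts (extent/intent pairs) of a tiny made-up context; the objects and attributes are invented for illustration, and the code does not reproduce the study's own method.

```python
# Toy Formal Concept Analysis sketch (illustrative only): enumerate the
# formal concepts (extent, intent) of a small context, i.e., the kind of
# ordering structure a meaning analysis of a lexicon could yield.
from itertools import combinations

objects = {"sparrow": {"bird", "flies"},
           "penguin": {"bird"},
           "bat":     {"flies", "mammal"}}
attributes = set().union(*objects.values())

def common_attrs(objs):
    # Attributes shared by all given objects; all attributes for the empty set.
    return set.intersection(*(objects[o] for o in objs)) if objs else set(attributes)

def objects_with(attrs):
    # Objects possessing every attribute in attrs.
    return {o for o, a in objects.items() if attrs <= a}

concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(sorted(objects), r):
        intent = frozenset(common_attrs(set(objs)))
        extent = frozenset(objects_with(intent))
        concepts.add((extent, intent))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```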
A phenomenon of recent decades is that digital marketplaces on the Internet are establishing themselves for a wide variety of products and services. Recently, it has become possible for private individuals to invest in young and innovative companies (so-called start-ups). Via Internet portals, potential investors can examine various start-ups and then invest directly in their chosen start-up. In return, investors receive a share in the firm's profit, while companies can use the raised capital to finance their projects. This new way of financing is called "Equity Crowdfunding" (ECF) or "Crowdinvesting". The aim of this dissertation is to provide empirical findings on the characteristics of ECF. In particular, it analyzes whether ECF is able to overcome geographic barriers, the interdependence of ECF and capital structure, and the risk of failure for funded start-ups together with their chances of receiving follow-up funding from venture capitalists or business angels. The results of the first part of this dissertation show that investors in ECF prefer local companies; in particular, investors who invest larger amounts have a stronger tendency to invest in local start-ups. The second part provides first indications of the interdependencies between capital structure and ECF; the analysis makes clear that capital structure is not a determinant of undertaking an ECF campaign. The third part analyzes the success of companies financed by ECF in a country comparison. The results show that, after a successful ECF campaign, German companies have a higher chance of receiving follow-up funding from venture capitalists than British companies; the probability of survival, however, is slightly lower for German companies. The results provide relevant implications for theory and practice. The existing literature in the area of entrepreneurial finance is extended by insights into investor behavior, additions to capital structure theory, and a country comparison in ECF. In addition, implications are provided for various actors in practice.
Long-Term Memory Updating: The Reset-of-Encoding Hypothesis in List-Method Directed Forgetting
(2017)
People's memory for new information can be enhanced by cuing them to forget older information, as is shown in list-method directed forgetting (LMDF). In this task, people are cued to forget a previously studied list of items (list 1) and to learn a new list of items (list 2) instead. Such cuing typically enhances memory for the list 2 items and reduces memory for the list 1 items, which reflects effective long-term memory updating. This review focuses on the reset-of-encoding (ROE) hypothesis as a theoretical explanation of the list 2 enhancement effect in LMDF. The ROE hypothesis is based on the finding that encoding efficacy typically decreases with the number of encoded items, and it assumes that providing a forget cue after study of some items (e.g., list 1) resets the encoding process and makes encoding of subsequent items (e.g., early list 2 items) as effective as encoding of previously studied items (e.g., early list 1 items). The review provides an overview of the current evidence for the ROE hypothesis. The evidence arises from recent behavioral, neuroscientific, and modeling studies that examined LMDF at both the item and the list level. The findings support the view that ROE plays a critical role in the list 2 enhancement effect in LMDF. Alternative explanations of the effect and the generalizability of ROE to other experimental tasks are discussed.
Background: We evaluated depression and social isolation assessed at the time of waitlisting as predictors of survival in heart transplant (HTx) recipients. Methods and Results: Between 2005 and 2006, 318 adult HTx candidates were enrolled in the Waiting for a New Heart Study, and 164 received transplantation. Patients were followed until February 2013. Psychosocial characteristics were assessed by questionnaires. Eurotransplant provided medical data at waitlisting, transplantation dates, and donor characteristics; hospitals reported medical data at HTx and date of death after HTx. During a median follow-up of 70 months (<1-93 months post-HTx), 56 (38%) of 148 transplanted patients with complete data died. Depression scores were unrelated to social isolation, and neither correlated with disease severity. Higher depression scores increased the risk of dying (hazard ratio = 1.07; 95% confidence interval, 1.01-1.15; P=0.032), which was moderated by social isolation scores (significant interaction term; hazard ratio = 0.985; 95% confidence interval, 0.973-0.998; P=0.022). These findings were maintained in multivariate models controlling for covariates (P values 0.020-0.039). Actuarial 1-year/5-year survival was best for patients with low depression who were not socially isolated at waitlisting (86% after 1 year, 79% after 5 years). Survival of those who were either depressed, or socially isolated, or both, was lower, especially 5 years post-transplant (56%, 60%, and 62%, respectively). Conclusions: Low depression in conjunction with social integration at the time of waitlisting is related to enhanced chances of survival after HTx. Both factors should be considered for inclusion in standardized assessments and interventions for HTx candidates.
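For readers who want to see the shape of the reported moderation analysis: below is a minimal sketch of a Cox proportional-hazards model with a depression-by-isolation interaction term, using the lifelines library. The data file and column names are hypothetical; this is not the study's code.

```python
# Illustrative sketch (not the study's code): a Cox proportional-hazards
# model with a depression x social-isolation interaction, mirroring the
# structure of the moderation reported above. CSV and columns are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("htx_survival.csv")  # months, died, depression, isolation
df["dep_x_iso"] = df["depression"] * df["isolation"]

cph = CoxPHFitter()
cph.fit(df[["months", "died", "depression", "isolation", "dep_x_iso"]],
        duration_col="months", event_col="died")
cph.print_summary()  # hazard ratios per unit score; an interaction HR < 1
                     # would correspond to the moderation described above
```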
Entrepreneurship is a process of discovering and exploiting opportunities, during which two crucial milestones emerge: at the very beginning, when entrepreneurs start their businesses, and at the end, when they determine the future of the business. This dissertation examines the establishment and exit of newly created as well as acquired firms, in particular the behavior and performance of entrepreneurs at these two important stages of entrepreneurship. The first part of the dissertation investigates the impact of characteristics at the individual and the firm level on an entrepreneur's selection between the entry modes of new venture start-up and business takeover. The second part compares firm performance across the different entrepreneurship entry modes and then examines management succession issues that family firm owners have to confront. This study has four main findings. First, previous work experience in small firms, same-sector experience, and management experience affect an entrepreneur's choice of entry mode. Second, the choice of entry mode for hybrid entrepreneurs is associated with their characteristics, such as occupational experience, level of education, and gender, as well as with the characteristics of their firms, such as location. Third, business takeovers survive longer than new venture start-ups, and the two entry modes have different survival determinants. Fourth, the family firm's decision to recruit a family or a nonfamily manager is determined not only by a manager's abilities, but also by the relationship between the firm's economic and non-economic goals and the measurability of these goals. The findings of this study extend our knowledge of entrepreneurship entry modes by showing that new venture start-ups and business takeovers are two distinct entry modes in terms of their founders' profiles, their survival rates, and their survival determinants. Moreover, this study contributes to the literature on top management hiring in family firms: it establishes the family firm's non-economic goals as another factor that influences the firm's hiring decision between a family and a nonfamily manager.
Why do some people become entrepreneurs while others stay in paid employment? Searching for a distinctive set of entrepreneurial skills that matches the profile of the entrepreneurial task, Lazear introduced a theoretical model featuring skill variety for entrepreneurs. He argues that because entrepreneurs perform many different tasks, they should be multi-skilled in various areas. First, this dissertation provides the reader with an overview of previous relevant research on skill variety with regard to entrepreneurship. The majority of the studies discussed focus on the effects of skill variety. Most studies conclude that skill variety mainly affects the decision to become self-employed; skill variety also favors entrepreneurial intentions. Less clear are the results regarding the influence of skill variety on entrepreneurial success: measured by income and firm survival, a negative or U-shaped correlation is shown. The empirical part of this dissertation tackles three research goals. First, it investigates whether a variety of early interests and activities in adolescence predicts subsequent variety in skills and knowledge. Second, the determinants of skill variety and of the variety of early interests and activities are investigated. Third, skill variety is tested as a mediator of the gender gap in entrepreneurial intentions. The dissertation employs structural equation modeling (SEM) using longitudinal data collected over ten years from Finnish secondary school students aged 16 to 26. The number of functional areas in which a participant had prior educational or work experience is used as the indicator of skill variety. The results suggest that a variety of early interests and activities leads to skill variety, which in turn leads to entrepreneurial intentions. Furthermore, the study shows that early variety is predicted by openness and an entrepreneurial personality profile; skill variety is also encouraged by an entrepreneurial personality profile. From a gender perspective, there is indeed a gap in entrepreneurial intentions. While a positive correlation was found between the early variety of subjects and being female, there are negative correlations between the other two variables, education- and work-related skill variety, and being female, with the negative effect of work-related skill variety being the strongest. The results of this dissertation are relevant for research, politics, educational institutions, and special entrepreneurship education programs. They are also important for self-employed parents planning the succession of the family business. Educational programs promoting entrepreneurship can be optimized on the basis of these results by making the transmission of a variety of skills a central goal. Focusing on teenagers, as well as preselecting participants based on their personality profiles, could also increase such programs' success. Regarding the gender gap, state policies should aim to provide women with more incentives to acquire skill variety. For this purpose, education programs can be tailored specifically to women, and self-employment can be presented as an attractive alternative to dependent employment.
This study aims to estimate cotton yield at the field and regional level via the APSIM/OZCOT crop model, using an optimization-based recalibration approach built on a state variable of the cotton canopy - the leaf area index (LAI) - derived from atmospherically corrected Landsat-8 OLI remote sensing images from 2014. First, local and global sensitivity analyses were employed to test the sensitivity of cultivar, soil, and agronomic parameters to the dynamics of the LAI; these analyses yielded a set of sensitive parameters. Then, the APSIM/OZCOT crop model was calibrated against observations over a two-year span (2006-2007) at the Aksu station, combining these sensitive cultivar parameters with the current understanding of cotton cultivar parameters. Third, the relationship between the observed in-situ LAI and synchronous perpendicular vegetation indices derived from six Landsat-8 OLI images covering the entire growth stage was modelled to generate LAI maps in time and space. Finally, Particle Swarm Optimization (PSO) and a general-purpose optimization approach (based on the Nelder-Mead algorithm) were used to recalibrate four sensitive agronomic parameters (row spacing, sowing density per row, irrigation amount, and total fertilization) by minimizing the root-mean-square error (RMSE) between the LAI simulated by the APSIM/OZCOT model and the LAI retrieved from Landsat-8 OLI images. After this recalibration, the best simulated results compared with observed cotton yield were obtained. The results showed that: (1) FRUDD, FLAI, and DDISQ were the major cultivar parameters suitable for calibrating the cotton cultivar. (2) After the calibration, the simulated LAI performed well, with an RMSE and mean absolute error (MAE) of 0.45 and 0.33, respectively, in 2006 and 0.46 and 0.41, respectively, in 2007. The coefficient of determination between the observed and simulated LAI was 0.83 and 0.97 in 2006 and 2007, respectively. The Pearson correlation coefficient was 0.913 and 0.988 in 2006 and 2007, respectively, with a significant positive correlation between the simulated and observed LAI. The difference between the observed and simulated yield was 776.72 kg/ha and 259.98 kg/ha in 2006 and 2007, respectively. (3) The cotton cultivation area in 2014 was obtained using three Landsat-8 OLI images - DOY 136 (May), DOY 168 (June), and DOY 200 (July) - based on the phenological differences between cotton and other vegetation types. (4) The yield estimation after the assimilation closely approximated the field-observed values, with a coefficient of determination as high as 0.82 after recalibration of the APSIM/OZCOT model for ten cotton fields. The difference between the observed and assimilated yields for the ten fields ranged from 18.2 to 939.7 kg/ha. The RMSE and MAE between the assimilated and observed yield were 417.5 and 303.1 kg/ha, respectively. These findings provide scientific evidence for the feasibility of coupling remote sensing and the APSIM/OZCOT model at the field level. (5) When upscaling from the field level to the regional level, both the assimilation algorithm and the assimilation scheme are especially important. Although the PSO method is very efficient, its computational cost is the shortcoming of the assimilation strategy at the regional scale. The PSO and the general-purpose optimization method (based on the Nelder-Mead algorithm) were therefore compared in terms of RMSE, LAI curves, and computational time.
The general-purpose optimization method (based on the Nelder-Mead algorithm) was used for the regional assimilation between remote sensing and the APSIM/OZCOT model. The basic unit for regional assimilation was determined to be the cotton field rather than the pixel, and the crop growth simulation was divided into two phases (vegetative growth and reproductive growth) for regional assimilation. (6) The regional assimilation at the vegetative growth stage between the remote-sensing-derived and model-simulated LAI was implemented by adjusting two parameters: row spacing and sowing density per row. The results showed that the sowing density of cotton was higher in the southern part than in the northern part of the study area. The spatial pattern of cotton density was also consistent with the reclamation history from 2001 to 2013: cotton fields from early reclamation were mainly located in the southern part, while recent reclamation was located in the northern part. Poor soil quality, a lack of irrigation facilities, and woodland belts around cotton fields in the northern part caused the low cotton density there. Row spacing was larger in the northern part than in the southern part, due to the differing agronomic modes of military and private companies. (7) The irrigation and fertilization amounts were both used as key parameters to be adjusted for regional assimilation during the reproductive growth period. The results showed that irrigation per application ranged from 58.14 to 89.99 mm in the study area, with higher irrigation amounts in the northern part and lower amounts in the southern part. The application of urea fertilization ranged from 500.35 to 1598.59 kg/ha, with lower fertilization in the northern part and higher fertilization in the southern part. The heavier fertilization in the southern study area aims to increase boll weight and boll number in pursuit of higher cotton yields. The RMSE of the second assimilation mostly fell within the range of 0.4-0.6 m²/m². The estimated cotton yield ranged from 1489 to 8895 kg/ha, and its spatial distribution was likewise higher in the southern part than in the northern part of the study area.
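As a minimal illustration of the recalibration idea running through points (5)-(7), the sketch below minimizes the RMSE between a model-simulated and a "retrieved" LAI series with the Nelder-Mead simplex from SciPy; run_ozcot_lai is a hypothetical toy surrogate for an APSIM/OZCOT run, and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import minimize

days = np.arange(0, 150, 16)  # acquisition days of year, illustrative

def run_ozcot_lai(params):
    # Hypothetical toy surrogate for an APSIM/OZCOT run: a logistic LAI
    # curve whose peak scales with density, row spacing, and supply terms.
    row_sp, density, irrig, fert = params
    peak = 0.04 * density / row_sp * min(irrig / 80.0, 1.0) * min(fert / 1000.0, 1.0)
    return peak / (1.0 + np.exp(-(days - 75.0) / 12.0))

rng = np.random.default_rng(0)
# stand-in for the Landsat-derived LAI series (simulated "truth" plus noise)
lai_obs = run_ozcot_lai([0.76, 14.0, 85.0, 1100.0]) + rng.normal(0, 0.05, days.size)

def rmse(params):
    # objective: root-mean-square error between simulated and retrieved LAI
    return np.sqrt(np.mean((run_ozcot_lai(params) - lai_obs) ** 2))

x0 = np.array([0.76, 12.0, 70.0, 900.0])  # starting agronomic parameters
res = minimize(rmse, x0, method="Nelder-Mead")
print(res.x, res.fun)                     # recalibrated parameters, final RMSE
```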
The objective of this dissertation was the detailed and systematic exploration, description, and analysis of relationally conditioned socio-medial inequalities among adolescent users of social network platforms (WhatsApp, Facebook, Snapchat, etc.). In this qualitative study, six problem-centered individual interviews and three problem-centered group discussions with adolescents aged 15 to 20, as well as one group discussion with pedagogical professionals, were conducted and analyzed by content analysis. The work concentrates on the conditions and interactions between the individual characteristics of adolescent users (interests, motives, usage patterns, competencies) and their relational characteristics (relationships in online social networks), as well as the social resources and risks resulting from them. According to the dissertation's findings, the relational characteristics contribute to the reproduction of social inequality structures in two ways: first, they influence access to and the evaluation of media-transmitted information, since information on the platforms circulates mainly within the mediatized network of relationships; second, they substantially co-determine which competencies and preferences adolescents acquire in dealing with social network platforms. Regarding media use and its effects, the following inequality-relevant findings can be stated for adolescents with a low level of education: delayed adoption of new platforms, more intensive use, lower usage competencies, increased attention-seeking, and primarily entertainment-oriented use. They also exhibit predominantly homogeneous network structures, which adversely affects their access to and evaluation of media-transmitted information. For adolescents with a high level of education, by contrast, clearly more positive reinforcement effects of media use can be observed.
DNA methylation, through 5-methyl- and 5-hydroxymethylcytosine (5mC and 5hmC), is considered to be one of the principal interfaces between the genome and our environment, and it helps explain phenotypic variation in human populations. Initial reports of large differences in methylation level in genomic regulatory regions, coupled with clear gene expression data in both imprinted genes and malignant diseases, provided easily dissected molecular mechanisms for switching genes on or off. However, a more subtle process is becoming evident, where small (<10%) changes to intermediate methylation levels are associated with complex disease phenotypes. This has resulted in two clear methylation paradigms. The latter "subtle change" paradigm is rapidly becoming the epigenetic hallmark of complex disease phenotypes, although we are currently hampered by a lack of data addressing the true biological significance and meaning of these small differences. The initial expectation of rapidly identifying mechanisms linking environmental exposure to a disease phenotype led to numerous observational/association studies being performed. Although this expectation remains unmet, there is now a growing body of literature on specific genes suggesting wide-ranging transcriptional and translational consequences of such subtle methylation changes. Data from the glucocorticoid receptor (NR3C1) have shown that a complex interplay between DNA methylation, extensive 5′UTR splicing, and microvariability gives rise to the overall level and relative distribution of total and N-terminal protein isoforms generated. Additionally, the presence of multiple AUG translation initiation codons throughout the complete, processed mRNA enables translational variability, thereby enhancing the translational isoforms and the resulting protein isoform diversity, and providing a clear link between small changes in DNA methylation and significant changes in protein isoforms and cellular locations. Methylation changes in the NR3C1 CpG island alter NR3C1 transcription and eventually the protein isoforms in the tissues, resulting in subtle but visible physiological variability. This implies that external environmental stimuli act through subtle methylation changes, with transcriptional microvariability as the underlying mechanism, to fine-tune total NR3C1 protein levels. The ubiquitous distribution of genes with a structure similar to NR3C1, combined with an increasing number of studies linking subtle methylation changes in specific genes with wide-ranging transcriptional and translational consequences, suggests a more genome-wide spread of subtle DNA methylation changes and transcription variability. The subtle methylation paradigm and the biological relevance of such changes are supported by two epigenetic animal models, which linked small methylation changes to either psychopathological or immunological effects. The first model, rats subjected to maternal deprivation, showed long-term behavioural and stress-response changes. The second model, exposing mice to early-life infection with H1N1, illustrated long-term immunological effects. Both models displayed subtle changes within the methylome, indicating that early-life adversity and early-life viral infection "programmed" the CNS and the innate immune response, respectively, via subtle genome-wide DNA methylation changes.
The research presented in this thesis investigated the ever-growing roles of DNA methylation: the physiological and functional relevance of subtle, genome-wide DNA methylation changes, in particular for the CNS (maternal deprivation model) and the immune system (early-life viral infection model); and the evidence available, particularly from the glucocorticoid receptor (NR3C1), on the cascade of events initiated by such subtle methylation changes. It also addressed the underlying question of what represents a genuine, biologically significant difference in methylation.
Background: Psychotherapy is successful for the majority of patients, but not for every patient. Hence, further knowledge is needed on how treatments should be adapted for those who do not profit or who deteriorate. In recent years, prediction tools as well as feedback interventions have been part of a trend toward more personalized approaches in psychotherapy. Research on psychometric prediction and feedback into ongoing treatment has the potential to enhance treatment outcomes, especially for patients with an increased risk of treatment failure or drop-out. Methods/design: The research project investigates, in a randomized controlled trial, the effectiveness as well as the moderating and mediating factors of psychometric feedback to therapists. In the intended study, a total of 423 patients who applied for cognitive-behavioral therapy at the psychotherapy clinic of the University of Trier and suffer from a depressive and/or an anxiety disorder (SCID interviews) will be included. The patients will be randomly assigned to a therapist as well as to one of two groups (CG, IG2). An additional intervention group (IG1) will be generated from an existing archival data set via propensity score matching. Patients of the control group (CG; n = 85) will be monitored concerning psychological impairment, but therapists will not be provided with any feedback about the patients' assessments. In both intervention groups (IG1: n = 169; IG2: n = 169), the therapists are provided with feedback about the patients' self-evaluations in a computerized feedback portal. Therapists of the IG2 will additionally be provided with clinical support tools, which will be developed in this project on the basis of existing systems. Therapists will also be provided with a personalized treatment recommendation based on similar patients (nearest neighbors) at the beginning of treatment. Besides the general effectiveness of feedback and the clinical support tools for negatively developing patients, further mediating and moderating variables of this feedback effect will be examined: treatment length, frequency of feedback use, therapist effects, therapists' experience, attitude towards feedback, and congruence of therapist and patient evaluations of progress. Additional procedures will be implemented to assess treatment adherence as well as the reliability of diagnoses and to include them in the analyses. Discussion: The current trial tests a comprehensive feedback system that combines precision mental health predictions with routine outcome monitoring and feedback tools in routine outpatient psychotherapy. It adds to previous feedback research a stricter design by investigating another repeated-measurement CG as well as a stricter control of treatment integrity. It also includes a structured clinical interview (SCID) and controls for comorbidity (within depression and anxiety), and it investigates moderators (attitudes towards and use of the feedback system, diagnoses) and mediators (therapists' awareness of negative change and treatment length) in a single study.
This paper describes the concept of the hyperspectral Earth-observing thermal infrared (TIR) satellite mission HiTeSEM (High-resolution Temperature and Spectral Emissivity Mapping). The scientific goal is to measure specific key variables from the biosphere, hydrosphere, pedosphere, and geosphere related to two global problems of significant societal relevance: food security and human health. The key variables comprise land and sea surface radiation temperature and emissivity, surface moisture, thermal inertia, evapotranspiration, soil minerals and grain size components, soil organic carbon, plant physiological variables, and heat fluxes. The retrieval of this information requires a TIR imaging system with adequate spatial and spectral resolutions and with day and night observation capability. Another challenge is the monitoring of temporally highly dynamic features like energy fluxes, which require an adequate revisit time. The suggested solution is a sensor-pointing concept that allows high revisit times for selected target regions (1–5 days at off-nadir). At the same time, global observations in the nadir direction are guaranteed with a lower temporal repeat cycle (>1 month). To meet the demand for high spatial resolution over complex targets, it is suggested to combine in one optic (1) a hyperspectral TIR system with ~75 bands at 7.2–12.5 µm (instrument NEDT 0.05–0.1 K) and a ground sampling distance (GSD) of 60 m, and (2) a panchromatic high-resolution TIR imager with two channels (8.0–10.25 µm and 10.25–12.5 µm) and a GSD of 20 m. The identified science case requires a good correlation of the instrument orbit with Sentinel-2 (maximum delay of 1–3 days) to combine data from the visible and near infrared (VNIR), shortwave infrared (SWIR), and TIR spectral regions and to refine parameter retrieval.
Dry tropical forests undergo massive conversion and degradation processes. This also holds true for the extensive Miombo forests that cover large parts of Southern Africa. While the largest proportional area lies in Angola, the country still struggles with food shortages, insufficient medical and educational supplies, and the ongoing reconstruction of infrastructure after 27 years of civil war. Especially in rural areas, the local population is therefore still heavily dependent on the consumption of natural resources and on subsistence agriculture. This leads, on the one hand, to large areas of Miombo forest being converted for cultivation purposes and, on the other hand, to degradation processes due to the selective use of forest resources. While forest conversion in south-central rural Angola has already been quantitatively described, information about forest degradation is not yet available, owing to the history of conflict, the research difficulties connected with it, and the remote location of the area. We apply an annual time-series approach using Landsat data in south-central Angola not only to assess the current degradation status of the Miombo forests but also to derive past developments reaching back to times of armed conflict. We use the Disturbance Index, based on the tasseled cap transformation, to exclude external influences like inter-annual variation in rainfall. Based on this time series, linear regression is calculated for forest areas unaffected by conversion, but also for the pre-conversion period of those areas that were used for cultivation during the observation time. Metrics derived from the linear regression are used to classify the study area according to its dominant modification processes. We compare our results to MODIS latent integral trends and to further products to derive information on underlying drivers. Around 13% of the Miombo forests are affected by degradation processes, especially along streets, in villages, and close to existing agriculture. However, areas in presumably remote and dense forest are also affected to a significant extent. A comparison with MODIS-derived fire ignition data shows that these areas are most likely affected by recurring fires and less by selective timber extraction. We confirm that areas later used for agriculture are more heavily disturbed by selective use beforehand than those that remain unaffected by conversion. The results are substantiated by the MODIS latent integral trends, and we also show that, due to its extent and location, the assessment of forest conversion alone is most likely not sufficient to provide good estimates of the loss of natural resources.
Earnings functions are an important tool in labor economics, as they allow researchers to test a variety of labor market theories. Most empirical research on earnings functions focuses on testing hypotheses about the sign and magnitude of the variables of interest. In contrast, little attention is paid to the explanatory power of the econometric models employed. Measures of explanatory power are of interest, however, for assessing how successful econometric models are in explaining the real world. Are researchers able to draw a complete picture of the determination of earnings, or is there room for further theories leading to alternative econometric models? This article seeks to answer this question with a large microeconometric data set from Germany. Using linear regression estimated by OLS, with R² and adjusted R² as measures of explanatory power, the results show that up to 60 percent of wage variation can be explained using only observable variables.
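As a sketch of how explanatory power is read off such a regression, the following snippet fits a Mincer-type wage equation by OLS with statsmodels and reports R² and adjusted R²; the data are simulated for illustration and are not the German data set used in the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "educ": rng.integers(9, 19, n),    # years of schooling (simulated)
    "exper": rng.integers(0, 41, n),   # potential experience (simulated)
    "female": rng.integers(0, 2, n),
})
# simulated log wages, for illustration only
df["lnwage"] = (1.2 + 0.08 * df["educ"] + 0.03 * df["exper"]
                - 0.0005 * df["exper"] ** 2 - 0.15 * df["female"]
                + rng.normal(0, 0.4, n))

fit = smf.ols("lnwage ~ educ + exper + I(exper**2) + female", data=df).fit()
print(fit.rsquared, fit.rsquared_adj)  # share of wage variation explained
```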
Numerous RCTs demonstrate that cognitive behavioral therapy (CBT) for depression is effective. However, these findings are not necessarily representative of CBT under routine care conditions. Routine care studies are usually not subjected to comparable standardization; for example, therapists often do not follow treatment manuals, and patients are less homogeneous with regard to their diagnoses and sociodemographic variables. Results on the transferability of findings from clinical trials to routine care are sparse and point in different directions. As RCT samples are selective due to the stringent application of inclusion/exclusion criteria, comparisons between routine care and clinical trials must be based on a consistent analytic strategy. The present work demonstrates the merits of propensity score matching (PSM), which offers a way to reduce bias by balancing two samples on a range of pretreatment differences. The objective of this dissertation is to investigate the transferability of findings from RCTs to routine care settings.
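The following is a minimal sketch of the PSM idea described above, assuming a simple setting with a logistic-regression propensity model and 1-nearest-neighbor matching on the score; all data and variable names are invented for illustration and do not reproduce the dissertation's procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))     # pretreatment covariates (simulated)
group = rng.integers(0, 2, 500)   # 1 = RCT sample, 0 = routine care (simulated)

# propensity score: estimated probability of belonging to the RCT sample
ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]
treated, control = ps[group == 1], ps[group == 0]

# pair each RCT case with its nearest routine-care neighbor on the score
nn = NearestNeighbors(n_neighbors=1).fit(control.reshape(-1, 1))
dist, idx = nn.kneighbors(treated.reshape(-1, 1))
# idx[i] is the routine-care case matched to RCT case i
```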
Soil organic matter (SOM) is a fundamental control variable of all biogeochemical processes and is closely linked to carbon cycles and the global climate. The current challenge in ecosystem research is to identify the bioindicators relevant to soil quality and to capture them with methods that can monitor the sustainable use of SOM at large scales and thus contribute to global Earth observation programs. The remote sensing technique of Vis-NIR spectroscopy is an established method for soil assessment and monitoring, although its potential for capturing biological and microbial soil parameters is still disputed. The aim of the present work was the quantitative and qualitative investigation of the SOM of arable topsoils with different methods and varying spatio-temporal resolution, followed by an assessment of the potential of non-invasive spectroscopic methods for capturing selected SOM parameters. To this end, a comprehensive local database of chemical, physical, and biological soil parameters and corresponding soil spectra was first compiled for a highly heterogeneous geological region with a temperate climate in southwestern Germany. On this basis, the potential of soil spectroscopy for capturing and estimating selected SOM parameters from field and laboratory data was investigated. In addition, the potential for optimizing the prediction models through statistical preprocessing of the spectral data was tested. The predictive quality for common remotely sensed soil parameters (OC, N) from laboratory hyperspectral measurements could be improved by statistical optimization techniques such as variable selection and wavelet transformation. An additional data set with microbial/labile SOM parameters and field data was examined to assess whether soil spectra can be used for their prediction. Microbial biomass carbon (MBC), dissolved organic carbon (DOC), hot-water-extractable carbon (HWEC), chlorophyll α (Chl α), and phospholipid fatty acids (PLFAs) were considered. For MBC and DOC, a medium predictive quality could be achieved depending on depth and season, allowing high and low concentrations to be distinguished. Predictions for OC and PLFAs (total PLFA content as well as the microbial groups of bacteria, fungi, and algae) were not possible. The best predictive quality was achieved for the chlorophyll of green algae at the soil surface (0-1 cm soil depth), which, through its correlation with MBC, was presumably also responsible for the latter's good predictive quality. Estimates of the total SOM content, derived via OC, were not possible, which is attributable to the high dynamics of the microbial SOM parameters at the soil surface; this temporally limits the representativeness of spectral measurements of the soil surface. The statistical optimization technique of variable selection led to only a minor improvement of the prediction models for the field data. The investigation of the origin of the organic constituents and their effects on the quantity and quality of SOM identified microbial necromass and the group of soil algae as two further potentially significant sources for the formation and persistence of SOM.
Overall, the microbial contribution to SOM is rated higher than commonly assumed. The influence of microbial constituents was demonstrated for the quantity of SOM, especially in the mineral-associated fraction of arable topsoils, as well as for SOM quality with respect to the correlation between microbial carbohydrates and SOM stability. The exact quantification of these SOM parameters and their significance for SOM dynamics, as well as their predictability by spectroscopic methods, are not yet fully resolved. Further studies are therefore necessary for a conclusive assessment.
Avoiding aerial microfibre contamination of environmental samples is essential for reliable analyses when it comes to the detection of ubiquitous microplastics. Almost all laboratories have contamination problems that are largely unavoidable without investment in clean-air devices. Our study therefore supplies an approach to assess the background microfibre contamination of samples in the laboratory under particle-free air conditions. We tested aerial contamination of samples indoors, in a mobile laboratory, within a laboratory fume hood, and on a clean bench with particle filtration while a fish sample was being examined. The clean bench reduced aerial microfibre contamination in our laboratory by 96.5%, which highlights the value of suitable clean-air devices for valid microplastic pollution data. Our results indicate that reported microfibre pollution levels have been overestimated and that actual pollution levels may be many times lower. Accordingly, such clean-air devices are recommended for microplastic laboratory applications in future research to significantly lower error rates.
This publication, aimed primarily at researchers in the humanities, offers a brief, practice-oriented introduction to research data management. It is conceived as a planning instrument for a research project and provides assistance in developing a digital research concept and drawing up a data management plan. Starting from an analysis of selected work situations (project planning and grant applications, source processing, publication, and archiving) and how they change in an increasingly digitally organized research practice, the publication addresses the connections between the research process and the data management process. A checklist in the form of a question catalogue and an annotated template for a data management plan assist with project planning and grant applications.
Global human population growth is associated with many problems, such as food and water provision, political conflicts, spread of diseases, and environmental destruction. The mitigation of these problems is mirrored in several global conventions and programs, some of which, however, are conflicting. Here, we discuss the conflicts between biodiversity conservation and disease eradication. Numerous health programs aim at eradicating pathogens, and many focus on the eradication of vectors, such as mosquitos or other parasites. As a case study, we focus on the "Pan African Tsetse and Trypanosomiasis Eradication Campaign," which aims at eradicating a pathogen (Trypanosoma) as well as its vector, the entire group of tsetse flies (Glossinidae). As the distribution of tsetse flies largely overlaps with the African hotspots of freshwater biodiversity, we argue for a strong consideration of environmental issues when applying vector control measures, especially the aerial application of insecticides. Furthermore, we want to stimulate discussions on the value of species and whether full eradication of a pathogen or vector is justified at all. Finally, we call for a stronger harmonization of international conventions. Proper environmental impact assessments need to be conducted before control or eradication programs are carried out to minimize negative effects on biodiversity.
Flexibility and spatial mobility of labour are central characteristics of modern societies, contributing not only to higher overall economic growth but also to a reduction of interregional employment disparities. For these reasons, there is the political will in many countries to expand labour market areas, resulting especially in an overall increase in commuting. The picture of the various unintended long-term consequences of commuting for individuals is, however, relatively unclear. Therefore, in recent years, the journey to work has gained considerable attention, especially in the study of health and well-being. Empirical analyses of how commuting may affect health and well-being, based on longitudinal as well as European data, are nevertheless rare. The principal aim of this thesis is thus to address this question with regard to Germany, using data from the Socio-Economic Panel. Chapter 2 empirically investigates the causal impact of commuting on absence from work due to sickness-related reasons. Whereas an exogenous change in commuting distance does not affect the number of absence days of those individuals who commute short distances to work, it increases the number of absence days of those employees who commute middle (25–49 kilometres) or long distances (50 kilometres and more). Moreover, our results highlight that commuting may deteriorate an individual's health. However, this effect is not sufficient to explain the observed impact of commuting on absence from work. Chapter 3 explores the relationship between commuting distance and height-adjusted weight and sheds some light on the mechanisms through which commuting might affect individual body weight. We find no evidence that commuting leads to excess weight. Compensating health behaviour of commuters, especially healthy dietary habits, could explain the absence of a relationship between commuting and height-adjusted weight. In Chapter 4, a multivariate probit approach is used to estimate recursive systems of equations for commuting and health-related behaviours. Controlling for the potential endogeneity of commuting, the results show that long-distance commutes significantly decrease the propensity to engage in health-related activities. Furthermore, unobservable individual heterogeneity can influence both the decision to commute and healthy lifestyle choices. Chapter 5 investigates the relationship between commuting and several cognitive and affective components of subjective well-being. The results suggest that commuting is related to lower levels of satisfaction with family life and leisure time, which can largely be ascribed to changes in daily time-use patterns influenced by the work commute.
Erosion by rain and wind irreversibly damages fertile soil, causes enormous ecological and socio-economic damage worldwide, and is one of the main concerns regarding ecosystem services and food security. The quantification of erosion rates is still highly speculative, and the lack of empirical data leads to large uncertainties in risk-analysis models. As a major source of these uncertainties, this thesis examines the processes by which wind influences water erosion and, specifically, the erosive power of wind-driven raindrops as opposed to windless drops, including different surface parameters. The research approach was experimental-empirical and comprised the development and formulation of the research hypotheses, the design and execution of experiments with a mobile wind-and-rain simulator, sample processing and analysis, and the interpretation of the data. The thesis is structured in three parts: (1) soil erosion experiments on wind-driven rain on autochthonous and nature-like soils, (2) experiments on substrate particle transport by wind-driven drop impact, and (3) a synthesis of the field and laboratory tests. (1) Tests on autochthonous degraded soils in semi-arid southern Spain and on cohesionless sandy substrate were carried out to investigate and quantify the relative effects of wind-driven rain on surface runoff generation and erosion. In the vast majority of the experiments, an increase in erosion rates was clearly observed, strongly confirming the research hypothesis that wind-driven rain is more erosive than windless rain. Alongside the strongly increased values, lower erosion values were also measured, which on the one hand demonstrated the exceptional relevance of surface structures, and thus of in-situ experiments, and on the other hand pointed to an increase in the variability of the erosion processes. This variability appears to increase with the number of factors involved. (2) A highly specialized experimental design was developed and deployed to perform explicit measurements of drop-impact processes with and without wind influence. The erosion agents rain, wind, and wind-driven rain were tested, along with three slopes, three roughnesses, and two substrates. All measurements showed a clear wind-induced increase in erosion of up to two orders of magnitude compared with windless drop impact and wind alone. Through the increased transport quantity and distance, wind-driven rain is confirmed as a major erosion factor and is thus a key parameter in quantifying global soil erosion, establishing sediment budgets, and researching connectivity. The data are of excellent quality and suitable both for more sophisticated analysis methods (multivariate statistics) and for modelling approaches. (3) A synthesis of field and laboratory experiments (including a hitherto unpublished set of experiments), together with a statistical analysis, confirms wind-driven rain as the outstanding factor that overrides all others. Bringing the two complementary groups of experiments together takes the research on wind-driven rain to a further level by placing the measurement results in an ecological context.
A cautious projection to the landscape level provides insight into the risk assessment of soil erosion by wind-driven rain. It becomes clear that, particularly in connection with the rainstorm events occurring more frequently due to climate change, wind-driven rain can have catastrophic effects on soil erosion rates and urgently needs to be integrated into soil erosion modelling.
In recent decades, the Arctic has been undergoing a wide range of fast environmental changes. The sea ice covering the Arctic Ocean not only reacts rapidly to these changes but also influences and alters the physical properties of the atmospheric boundary layer and the underlying ocean on various scales. In that regard, polynyas, i.e., regions of open water and thin ice within the closed pack ice, play a key role as regions of enhanced atmosphere-ice-ocean interactions and extensive new ice formation during winter. Precise long-term monitoring and increased efforts to employ long-term and high-resolution satellite data are therefore of high interest to the polar scientific community. The retrieval of thin-ice thickness (TIT) fields from thermal infrared satellite data and atmospheric reanalysis, utilizing a one-dimensional energy balance model, allows for the estimation of the heat loss to the atmosphere and, hence, ice-production rates. However, an extended application of this approach is inherently connected with severe challenges that originate predominantly from the disturbing influence of clouds and from necessary simplifications in the model set-up, all of which need to be carefully considered and compensated for. The presented thesis addresses these challenges and demonstrates the applicability of thermal infrared TIT distributions for long-term polynya monitoring, as well as an accurate estimation of ice production in Arctic polynyas at a relatively high spatial resolution. Being written in a cumulative style, the thesis is subdivided into three parts that show the consequent evolution and improvement of the TIT retrieval, based on two regional studies (Storfjorden and North Water (NOW) polynya) and a final large-scale, pan-Arctic study. The first study, on the Storfjorden polynya in the Svalbard archipelago, represents the first long-term investigation of spatial and temporal polynya characteristics based solely on daily TIT fields derived from MODIS thermal infrared satellite data and ECMWF ERA-Interim atmospheric reanalysis data. Typical quantities such as polynya area (POLA), the TIT distribution, frequencies of polynya events, and total ice production are derived and compared to previous remote sensing and modeling studies. The study includes a first basic approach aiming to compensate for cloud-induced gaps in daily TIT composites. This coverage correction (CC) is a mathematically simple upscaling procedure that depends solely on the daily percentage of available MODIS coverage and yields daily POLA with an error margin of 5 to 6%. The NOW polynya in northern Baffin Bay is the main focus region of the second study, which follows two main goals. First, a new statistics-based cloud interpolation scheme (Spatial Feature Reconstruction, SFR) as well as additional cloud-screening procedures are successfully adapted and implemented in the TIT retrieval for use in Arctic polynya regions. For a 13-year period, results on polynya characteristics are compared to the CC approach. Furthermore, an investigation of the highly variable ice-bridge dynamics in Nares Strait is presented. Second, an analysis of decadal changes of the NOW polynya is carried out, as the additional use of a suite of passive microwave sensors leads to an extended record of 37 consecutive winter seasons, thereby enabling detailed inter-sensor comparisons.
In the final study, the SFR-interpolated daily TIT composites are used to infer spatial and temporal characteristics of 17 circumpolar polynya regions in the Arctic from 2002/2003 to 2014/2015. All polynya regions combined cover an average thin-ice area of 226.6 ± 36.1 × 10³ km² during winter (November to March) and yield an average total wintertime accumulated ice production of about 1811 ± 293 km³. Regional differences in the derived ice-production trends are noticeable. The Laptev Sea on the Siberian shelf is presented as a focus region, as frequently appearing polynyas along the fast-ice edge promote high rates of new ice production. New affirming results on a distinct relation to sea-ice area export rates and, hence, the Transpolar Drift are shown. This new high-resolution pan-Arctic data set can be further utilized and built upon in a variety of atmospheric and oceanographic applications, while still offering room for further improvements, such as incorporating high-resolution atmospheric data sets and an optimized lead detection.
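As a rough illustration of the one-dimensional energy-balance idea behind the TIT retrieval, the sketch below assumes a linear temperature profile through a thin ice slab whose conductive flux balances the net atmospheric heat loss; constants and inputs are illustrative only, and the actual retrieval involves many more terms (e.g., radiative fluxes and atmospheric reanalysis inputs).

```python
# Under a linear-gradient assumption, conduction through the slab equals the
# net atmospheric heat loss Q_atm, so h = k_ice * (T_freeze - T_surface) / Q_atm.
K_ICE = 2.03       # thermal conductivity of sea ice [W m-1 K-1], typical value
T_FREEZE = -1.86   # freezing temperature of sea water [deg C]

def thin_ice_thickness(t_surface_c, q_atm_w_m2):
    """Ice thickness [m] from a MODIS-like surface temperature and a
    reanalysis-derived net atmospheric heat flux, both assumed known."""
    return K_ICE * (T_FREEZE - t_surface_c) / q_atm_w_m2

print(thin_ice_thickness(-12.0, 350.0))  # ~0.06 m for a cold, windy scene
```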
Determining the exact position of a forest inventory plot—and hence the position of the sampled trees—is often hampered by poor Global Navigation Satellite System (GNSS) signal quality beneath the forest canopy. Inaccurate geo-references hamper the performance of models that aim to retrieve useful information from spatially high-resolution remote sensing data (e.g., species classification or timber volume estimation). This restriction is even more severe at the level of individual trees. The objective of this study was to develop a post-processing strategy to improve the positional accuracy of GNSS-measured sample-plot centers and a method to automatically match trees within a terrestrial sample plot to aerially detected trees. We propose a new method which uses a random forest classifier to estimate the matching probability of each pair of a terrestrial-reference tree and an aerially detected tree, which provides the opportunity to assess the reliability of the results. We investigated 133 sample plots of the Third German National Forest Inventory (BWI, 2011–2012) within the German federal state of Rhineland-Palatinate. For training and objective validation, synthetic forest stands were modeled using the Waldplaner 2.0 software. Our method achieved an overall accuracy of 82.7% for co-registration and 89.1% for tree matching. With our method, 60% of the investigated plots could be successfully relocated. The probabilities provided by the algorithm are an objective indicator of the reliability of a specific result and could be incorporated into quantitative models to increase the performance of forest attribute estimation.
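A minimal sketch of the pairing step, assuming hypothetical pair features and simulated labels rather than the study's actual feature set: a random forest (here via scikit-learn) scores candidate terrestrial/aerial tree pairs with a matching probability that can then be ranked or thresholded.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
# Toy pair features, e.g. [horizontal distance, height difference, crown overlap]
X_train = rng.normal(size=(400, 3))
y_train = (X_train[:, 0] < 0).astype(int)  # simulated match/non-match labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

X_pairs = rng.normal(size=(10, 3))         # candidate pairs on one plot
p_match = rf.predict_proba(X_pairs)[:, 1]  # matching probability per pair
best = p_match.argmax()                    # most plausible pairing
```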
Academic self-concept (ASC) comprises individual perceptions of one's own academic ability. In a cross-sectional, quasi-representative sample of 3,779 German elementary school children in grades 1 to 4, we investigated (a) the structure of ASC, (b) ASC profile formation, an aspect of differentiation that is reflected in lower correlations between domain-specific ASCs with increasing grade level, (c) the impact of (internal) dimensional comparisons of one's own ability in different school subjects on profile formation of ASC, and (d) the role played by differences in school grades between subjects in these dimensional comparisons. The nested Marsh/Shavelson model, with general ASC at the apex and math, writing, and reading ASC as specific factors nested under general ASC, fitted the data at all grade levels. A first-order factor model with math, writing, reading, and general ASCs as correlated factors provided a good fit, too. ASC profile formation became apparent during the first two to three years of school. Dimensional comparisons across subjects contributed to ASC profile formation. School grades enhanced these comparisons, especially when achievement profiles were uneven. In part, the findings depended on the assumed structural model of ASCs. Implications for further research are discussed with special regard to factors influencing and moderating dimensional comparisons.
Dysfunctional eating behavior is a major risk factor for developing all sorts of eating disorders, and food craving is a concept that may help us better understand why and how these disorders become chronic conditions through non-homeostatically driven mechanisms. As obesity affects people worldwide, cultural differences must be acknowledged to apply proper therapeutic strategies. In this work, we adapted the Food Craving Inventory (FCI) to the German population. We performed a factor analysis of an adaptation of the original FCI in a sample of 326 men and women. We could replicate the factor structure of the FCI in a German population: the factor extraction procedure produced a solution that reproduces the four factors described in the original inventory. Our instrument presents high internal consistency as well as significant correlations with measures of convergent and discriminant validity. The FCI-Deutsch (FCI-DE) is a valid instrument to assess craving for particular foods in Germany and could, therefore, prove useful in clinical and research practice in the field of obesity and eating behavior.
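As a sketch of the factor-analytic replication described above, the snippet below extracts four factors from a simulated item-response matrix using the factor_analyzer package; the number of items and all responses are invented, so it illustrates the procedure, not the FCI-DE results.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(4)
items = rng.normal(size=(326, 28))  # hypothetical: 326 respondents x 28 items

fa = FactorAnalyzer(n_factors=4, rotation="promax")  # oblique rotation
fa.fit(items)
print(fa.loadings_.round(2))        # item-factor loadings
print(fa.get_factor_variance())     # variance explained per factor
```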
Earth observation (EO) is a prerequisite for sustainable land use management, and the open-data Landsat mission is at the forefront of this development. However, increasing data volumes have led to a "digital divide"; consequently, it is key to develop methods that take over the most data-intensive processing steps and then generate and provide analysis-ready, standardized, higher-level (Level 2 and Level 3) baseline products for enhanced uptake in environmental monitoring systems. Accordingly, the overarching research task of this dissertation was to develop such a framework, with a special emphasis on the still under-researched drylands of Southern Africa. A fully automatic and memory-resident radiometric preprocessing streamline (Level 2) was implemented and applied to the complete Angolan, Zambian, Zimbabwean, Botswanan, and Namibian Landsat record, amounting to 58,731 images with a total data volume of nearly 15 TB. Cloud/shadow detection capabilities were improved for drylands. An integrated correction of atmospheric, topographic, and bidirectional effects was implemented, based on radiative transfer theory with corrections for multiple scattering and adjacency effects, and including a multilayered toolset for estimating aerosol optical depth over persistent dark targets or, as a fallback, from a spatio-temporal climatology. Topographic and bidirectional effects were reduced with a semi-empirical C-correction and a global set of correction parameters, respectively. Gridding and reprojection were already included to facilitate easy and efficient further processing. The selection of phenologically similar observations is a key monitoring requirement for multi-temporal analyses; hence, the generation of Level 3 products that realize phenological normalization at the pixel level was pursued. As a prerequisite, coarse-resolution Land Surface Phenology (LSP) was derived in a first step and then spatially refined by fusing it with a small number of Level 2 images. For this purpose, a novel data fusion technique was developed, wherein a focal-filter-based approach employs multi-scale and source prediction proxies. Phenologically normalized composites (Level 3) were generated by coupling the target day (i.e., the main compositing criterion) to the input LSP. The approach was demonstrated by generating peak-, end-, and minimum-of-season composites and by comparing these with static composites (fixed target day). It was shown that the phenological normalization accounts for terrain- and land cover class-induced LSP differences, and the use of Level 2 inputs enables a wide range of monitoring options, among them the detection of within-state processes like forest degradation. In summary, the developed preprocessing framework is capable of generating several analysis-ready baseline EO satellite products. These datasets can be used for regional case studies, but may also be directly integrated into more operational monitoring systems, e.g., in support of the Reducing Emissions from Deforestation and Forest Degradation (REDD) incentive.
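A minimal sketch of the pixel-level phenological normalization, under the simplifying assumption that compositing reduces to picking, per pixel, the acquisition closest to an LSP-derived target day; all arrays are simulated, and the actual Level 3 framework scores candidates by further criteria.

```python
import numpy as np

rng = np.random.default_rng(5)
days = np.array([16, 48, 80, 112, 144, 176])          # acquisition DOYs
stack = rng.normal(size=(days.size, 100, 100))        # Level 2 image stack
target_day = rng.integers(60, 160, size=(100, 100))   # per-pixel LSP target

# distance of each acquisition to the pixel-specific target day
dist = np.abs(days[:, None, None] - target_day[None, :, :])
best = dist.argmin(axis=0)                            # chosen acquisition index
composite = np.take_along_axis(stack, best[None], axis=0)[0]  # Level 3 composite
```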
In this thesis, we study the problem of optimizing the material orientation of orthotropic materials in the skin of three-dimensional shell structures. The goal of the optimization is to minimize the overall compliance of the structure, which corresponds to searching for a design that is as stiff as possible. Both the mathematical and the mechanical foundations are compiled in compact form, and on this basis new extensions of pointwise-formulated optimization methods, both gradient-based and based on mechanical principles, are developed and implemented. The presented methods are tested and compared on the model of an aircraft wing with a problem size relevant to practice. Finally, the investigated methods are examined in their coupling with a topology optimization method based on the topological gradient.
Monetary Policy During Times of Crisis - Frictions and Non-Linearities in the Transmission Mechanism (2017)
For a long time it was believed that monetary policy would be able to maintain price stability and foster economic growth during all phases of the business cycle. The era of the Great Moderation, often also called the Volcker-Greenspan period, beginning in the mid-1980s, was characterized by a decline in the volatility of output growth and inflation among the industrialized countries. The term itself was first used by Stock and Watson (2003). Economists have long studied what triggered the decline in volatility and have pointed out several main factors. An important research strand points to structural changes in the economy, such as a decline of volatility in the goods-producing sector through better inventory controls, and developments in the financial sector and government spending (McConnell2000, Blanchard2001, Stock2003, Kim2004, Davis2008). While many believed that monetary policy was merely 'lucky' in its reaction to inflation and exogenous shocks (Stock2003, Primiceri2005, Sims2006, Gambetti2008), others reveal a more complex picture. Rule-based monetary policy (Taylor1993) that incorporates inflation targeting (Svensson1999) has been identified as a major source of inflation stabilization by increasing transparency (Clarida2000, Davis2008, Benati2009, Coibion2011). Apart from that, the mechanics of monetary policy transmission have changed. Giannone et al. (2008) compare the pre-Great Moderation era with the Great Moderation and find that the economy's reaction to monetary shocks has decreased, a finding supported by Boivin et al. (2011). Similarly, Herrera and Pesavento (2009) show that monetary policy during the Volcker-Greenspan period was very effective in dampening the effects of exogenous oil price shocks on the economy, while this cannot be found for the period thereafter. Yet the subprime crisis unexpectedly hit worldwide economies and ended the era of the Great Moderation. Financial deregulation and innovation gave banks opportunities for excessive risk taking, weakened financial stability (Crotty2009, Calomiris2009), and led to the build-up of credit-driven asset price bubbles (SchularickTaylor2012). The Federal Reserve (FED), which was thought to be the omnipotent conductor of price stability and economic growth during the Great Moderation, failed to prevent a harsh crisis. Even more, it intensified the bubble with low interest rates following the Dotcom crisis of the early 2000s and misjudged the impact of its interventions (Taylor2009, Obstfeld2009). New results give a more detailed answer to the question of the latitude for monetary policy raised by Bernanke and suggest the existence of non-linearities in the transmission of monetary policy. Weise (1999), Garcia and Schaller (2002), Lo and Piger (2005), Mishkin (2009), Neuenkirch (2013), and Jannsen et al. (2015) find that monetary policy is more potent during times of financial distress and recessions, while its effectiveness during 'normal times' is much weaker or even insignificant. This prompts the question of whether these non-linearities limit central banks' ability to lean against bubbles and financial imbalances (White2009, Walsh2009, Boivin2010, Mishkin2011).
This dissertation looked at both design-based and model-based estimation for rare and clustered populations using the idea of the adaptive cluster sampling (ACS) design. The ACS design (Thompson, 2012, p. 319) starts with an initial sample selected by a probability sampling method. If any of the selected units meets a pre-specified condition, its neighboring units are added to the sample and observed; if any of the added units meets the condition, their neighbors are added in turn, and the procedure continues until no more units meet the condition (a minimal sketch of this expansion procedure is given at the end of this summary). In this dissertation, the pre-specified condition is the detection of at least one animal in a selected unit.
In the design-based estimation, three estimators were proposed under three specific design settings. The first design was a stratified strip ACS design suitable for aerial or ship surveys, applied in a case study estimating population totals of African elephants; here, units/quadrats were observed only once during an aerial survey. The Des Raj estimator (Raj, 1956) was modified to obtain an unbiased estimate of the population total. The design was evaluated using simulated data with different levels of rarity and clusteredness, as well as real data on African elephants from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013. In this study, the order in which the samples were observed was maintained. Re-ordering the samples by means of Murthy's estimator (Murthy, 1957) can produce more efficient estimates and is hence a possible extension of this study; the computational cost resulting from the n! permutations in Murthy's estimator, however, needs to be taken into consideration.
The second setting was one in which an auxiliary variable exists that is negatively correlated with the study variable. Murthy's estimator (Murthy, 1964) was modified, and situations in which the modified estimator is preferable were given both in theory and in simulations using simulated and two real data sets. The study variable for the real data sets was the distribution and counts of oryx and wildebeest, obtained from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013; temperature, obtained from the R package raster, was the auxiliary variable for both study variables. The modified estimator provided more efficient estimates with lower bias than the original Murthy's estimator (Murthy, 1964), and was also more efficient than the modified HH and HT estimators of Thompson (2012, p. 319). In this study, one auxiliary variable was considered; a fruitful area for future research would be to incorporate multi-auxiliary information at the estimation phase of an ACS design. This could, in principle, be done by using, for instance, a multivariate extension of the product estimator (Singh, 1967) or the generalized regression estimator (Särndal et al., 1992).
The third case under design-based estimation studied the conjoint use of the stopping rule (Gattone and Di Battista, 2011) and of sampling without replacement of clusters (Dryver and Thompson, 2007). Each of these two methods was proposed to reduce the sampling cost, though the use of the stopping rule results in biased estimates. Despite this bias, the new estimator yielded a higher efficiency gain than the without-replacement-of-clusters design, and was also more efficient than the stratified design, which is known to reduce the final sample size when networks are truncated at stratum boundaries. The above evaluation was based on simulated and real data; the real data were the distribution and counts of hartebeest, elephants and oryx obtained in the same census as above. The bias introduced by the stopping rule has not been evaluated analytically. This may not be straightforward, since the truncated network formed depends on the initial unit sampled (Gattone et al., 2016a); this, and the order of the bias, deserves further investigation, as it may help in understanding the effect of an increase in the initial sample size, together with the population characteristics, on the efficiency of the proposed estimator.
Chapter four modeled data obtained using the stratified strip ACS design (as described in sub-section 3.1). This was an extension of the model of Rapley and Welsh (2008), through modeling data obtained from a different design, the introduction of an auxiliary variable, and the use of the without-replacement-of-clusters mechanism. Ideally, model-based estimation does not depend on the design, that is, on how the sample was obtained. This is, however, not the case if the design is informative, as the ACS design is; the procedure used to obtain the sample was therefore incorporated in the model. Both model-based and design-based simulations were conducted using artificial and real data. The study and auxiliary variables for the real data were the distribution and counts of elephants collected during aerial censuses in parts of Kenya and Tanzania in October (dry season) and April (wet season) 2013, respectively. Areas of possible future research include predicting the population total of African elephants in all parks in Kenya, which could be achieved in an economical and reliable way by using the theory of small area estimation (SAE).
Chapter five compared the different proposed strategies using the elephant data; again, the study variable was the elephant data from October (dry season) 2013 and the auxiliary variable was the elephant data from April (wet season) 2013. The results show that the choice of a particular strategy depends on the characteristics of the population under study and on the level and direction of the correlation between the study and the auxiliary variable (if present). One general area of the ACS design that still lags behind is the implementation of the design in the field, especially for animal populations. This is partly attributable to the challenges associated with field implementation, some of which were discussed in section 2.3. Green et al. (2010), however, provide new insights into undertaking the ACS design during an aerial survey, such as how the aircraft should turn while surveying neighboring units. A key point throughout the dissertation is the reduction of cost during a survey, which can be seen in the reduction of the number of units in the final sample (through the use of the stopping rule, the use of stratification, and the truncation of networks at stratum boundaries) and in ensuring that units are observed only once (through the without-replacement-of-clusters technique). The cost of surveying edge units is assumed to be low, in which case the efficiency of the ACS design relative to the non-adaptive design is achieved (Thompson and Collins, 2002). This is, however, not the case in aerial surveys, as the aircraft flies at constant speed and height (Norton-Griffiths, 1978); hence the cost of surveying an edge unit is the same as the cost of surveying a unit that meets the condition of interest, and the without-replacement-of-clusters technique plays the greater role in reducing the cost of sampling in such surveys. Other key points that motivated the sections of the dissertation include gains in efficiency (in all sections) and the practicability of the designs in their specific settings. Even though the dissertation focused on animal populations, the methods can just as well be applied to any population that is rare and clustered, such as in the study of forestry, plants, pollution, minerals and so on.
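As a purely illustrative complement to the verbal description of the design above, the following minimal sketch implements the generic ACS expansion step on a gridded population. The grid, the four-cell neighborhood, and the condition "at least one animal detected" are assumptions made for the example; the stratified strip and without-replacement-of-clusters variants studied in the dissertation add further bookkeeping not shown here.

    import numpy as np
    from collections import deque

    def acs_sample(counts, initial_units, condition=lambda c: c > 0):
        """Adaptive cluster sampling on a 2D grid of animal counts.

        Starting from the initial units, any sampled unit meeting the
        condition has its 4-neighbors added to the sample; expansion
        repeats until no newly added unit meets the condition.
        """
        nrows, ncols = counts.shape
        sampled = set()
        queue = deque(initial_units)
        while queue:
            i, j = queue.popleft()
            if (i, j) in sampled:
                continue
            sampled.add((i, j))
            if condition(counts[i, j]):  # detection triggers neighborhood expansion
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < nrows and 0 <= nj < ncols and (ni, nj) not in sampled:
                        queue.append((ni, nj))
        return sampled

    rng = np.random.default_rng(1)
    counts = rng.poisson(0.05, size=(20, 20))      # rare, mostly-empty population
    initial = [tuple(u) for u in rng.integers(0, 20, size=(5, 2))]
    print(len(acs_sample(counts, initial)), "units in the final sample")

Units that enter the sample only because a neighbor met the condition, while not meeting it themselves, are the edge units whose survey cost is discussed above.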
It is generally assumed that the temperature increase associated with global climate change will lead to increased thunderstorm intensity and associated heavy precipitation events. The present study investigates whether the frequency of thunderstorm occurrences will increase or decrease, and how the spatial distribution will change, under the A1B scenario. The region of interest is Central Europe, with a special focus on the Saar-Lor-Lux region (Saarland, Lorraine, Luxembourg) and Rhineland-Palatinate. Daily model data of the COSMO-CLM with a horizontal resolution of 4.5 km are used. The simulations were carried out for two different time slices: 1971–2000 (C20) and 2071–2100 (A1B). Thunderstorm indices are applied to detect thunderstorm-prone conditions and differences in their frequency of occurrence between the two thirty-year time spans. The indices used are CAPE (Convective Available Potential Energy), SLI (Surface Lifted Index), and TSP (Thunderstorm Severity Potential). The investigation of present and future thunderstorm-conducive conditions shows a significant increase in non-thunderstorm conditions. Regionally averaged thunderstorm frequencies will decrease in general; only in the Alps is a potential increase in thunderstorm occurrence and intensity found. The comparison between time slices of 10 and 30 years' length shows that the number of grid points with significant signals increases only slightly; in order to obtain a robust signal for severe thunderstorms, an extension to more than 75 years would be necessary.
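For reference, two of these indices have broadly standard textbook definitions; the study's exact computation from the COSMO-CLM fields may differ in detail, and TSP, whose definition is less standardized, is omitted here:
\[
\mathrm{CAPE} = \int_{z_{\mathrm{LFC}}}^{z_{\mathrm{EL}}} g\,\frac{T_{v,\mathrm{parcel}} - T_{v,\mathrm{env}}}{T_{v,\mathrm{env}}}\,dz,
\qquad
\mathrm{SLI} = T_{\mathrm{env}}(500\,\mathrm{hPa}) - T_{\mathrm{parcel}}(500\,\mathrm{hPa}),
\]
where the parcel is lifted from the surface dry-adiabatically to saturation and pseudo-adiabatically above, $z_{\mathrm{LFC}}$ and $z_{\mathrm{EL}}$ denote the level of free convection and the equilibrium level, and $T_v$ is the virtual temperature. Large CAPE and strongly negative SLI indicate thunderstorm-prone conditions.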
The Firepower of Work Craving: When Self-Control Is Burning under the Rubble of Self-Regulation
(2017)
Work craving theory addresses how work-addicted individuals direct great emotion-regulatory efforts to weave their addictive web of working. They crave work for two main emotional incentives: to overcompensate for low self-worth and to escape (i.e., reduce) negative affect, which is strategically achieved through neurotic perfectionism and compulsive working. Work-addicted individuals' strong persistence and self-discipline with respect to work-related activities suggest strong skills in volitional action control. However, their inability to disconnect from work implies low volitional skills. How can work-addicted individuals have poor and strong volitional skills at the same time? To answer this paradox, we elaborated on the relevance of two different volitional modes in work craving: self-regulation (self-maintenance) and self-control (goal maintenance). Four hypotheses were derived from Wojdylo's work craving theory and Kuhl's self-regulation theory: (H1) work craving is associated with a combination of low self-regulation and high self-control; (H2) work craving is associated with symptoms of psychological distress; (H3) low self-regulation is associated with psychological distress symptoms; and (H4) work craving mediates the relationship between self-regulation deficits and psychological distress symptoms at high levels of self-control. Additionally, we aimed at supporting the discriminant validity of work craving with respect to work engagement by showing their different volitional underpinnings. The results of the two studies confirmed our hypotheses: whereas work craving was predicted by high self-control and low self-regulation and was associated with higher psychological distress, work engagement was predicted by high self-regulation and high self-control and was associated with lower symptoms of psychological distress. Furthermore, work styles mediated the relationship between volitional skills and symptoms of psychological distress. Based on these new insights, several suggestions for prevention and therapeutic interventions for work-addicted individuals are proposed.