Information in the pre-contractual phase – that is, information duties as well as the legal consequences of providing or failing to provide information – with respect to the sales contract and the choice of the optional instrument is regulated in many ways in the European Commission's proposal for a Common European Sales Law (CESL; COM(2011) 635). This thesis examines these rules, including their relation to the textual stages of European private law – model rules and consumer-protection EU directives – and measures them against economic framework conditions that demand efficient transactions and reveal the limits of the usefulness of (mandatory) information.
Starting from the principle of freedom of contract, each party bears the risk of being insufficiently informed, while the other side is obliged to provide information only in specific instances. Between businesses this remains the case under the CESL, but between business and consumer the relationship is reversed: there, differentiated by the situation in which the contract is concluded, comprehensive catalogues of information duties apply with respect to the sales contract. As a concept this is fundamentally sound; the duties serve consumer protection, in particular informed and transparent decision-making before the contract is concluded. In part, however, the duties go too far. The interference with the trader's freedom of contract caused by the duties and by the consequences of their breach cannot be fully justified by the goal of consumer protection. Because of the excess of information, the prescribed duties promote consumer protection only to a limited extent; they do not satisfy the standards of behavioural economics. It is therefore advisable to delete certain mandatory information items between traders and consumers altogether, to dispense with information not required in the specific case, to defer information that becomes relevant only after the conclusion of the contract to that point in time, and to present the remaining mandatory pre-contractual information in a way that consumers can process more easily. Information to be provided to a consumer should always be required to be clear and comprehensible, and the burden of proving its proper provision should generally rest on the trader.
Alongside the expressly prescribed information duties there are, regardless of consumer or trader status and of the buyer or seller role, strongly case-dependent information duties based on good faith, which are laid down in the law on defects of consent. Here the principle is realized that a lack of information is initially each party's own risk; legitimate expectations and the free formation of intent are protected. These duties also take the goal of efficiency into account and respect freedom of contract. Reliance on any information actually provided is further protected in that such information can co-determine the content of the contract – although in consumer contracts not comprehensively enough – and in that its inaccuracy is sanctioned.
The breach of any kind of information duty can in particular give rise to a claim for damages and – via the law on defects of consent – to the possibility of withdrawing from the contract. The interplay of the different mechanisms, however, leads to frictions and to gaps in the legal consequences of breaches of information duties. It is therefore advisable to create a claim for damages for every failure to provide information that is contrary to good faith; this would elevate the requirement of good faith into a genuine case-dependent information duty even outside the law on defects of consent.
Social enterprises pursue at least two goals: fulfilling their social or ecological mission and achieving financial targets. Tensions can arise between these goals. If an enterprise repeatedly resolves this tension in favour of its financial goals, mission drift occurs: the prioritization of financial goals crowds out the social mission. Although the phenomenon has been observed repeatedly in practice and described in individual case studies, there has been little research on mission drift so far. This thesis aims to close this research gap and to identify the triggers and drivers of mission drift in social enterprises. Particular attention is paid to behavioural-economic theories and the mixed-gamble logic. According to this logic, every decision simultaneously involves gains and losses, so decision-makers must weigh the fear of losses against the prospect of gains. The model is used to gain a new theoretical perspective on the trade-off between social and financial goals and on mission drift. A conjoint experiment generates data on the decision behaviour of social entrepreneurs, centred on the trade-off between social and financial goals in different scenarios (crisis and growth situations). From a purpose-built sample of 1,222 social enterprises in Germany, Austria, and Switzerland, 187 participants were recruited for the study. The results show that a crisis situation can trigger mission drift in social enterprises, because in this scenario the greatest weight is placed on financial goals. For a growth situation, by contrast, no such evidence was found.
In addition, further factors can reinforce the financial orientation, namely the founder identities of the social entrepreneurs, a high level of innovativeness of the enterprise, and certain stakeholders. The thesis closes with a detailed discussion of the results. Recommendations are given on how social enterprises can best remain true to their goals, and the limitations of the study and avenues for future research on mission drift are outlined.
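The conjoint logic described above can be illustrated with a minimal sketch: hypothetical profiles combine a decision scenario (crisis vs. growth) with a goal priority (financial vs. social), and dummy-coded least squares recovers the part-worths, including the crisis-by-financial-priority interaction. All ratings and attribute names below are invented for illustration; the actual experiment's design and estimation differ.

```python
import numpy as np

# Hypothetical conjoint data: each profile pairs a scenario with a goal
# priority; respondents rate their willingness to choose it (0-10).
profiles = [
    # (crisis_scenario, financial_priority, rating)
    (1, 1, 9.0), (1, 0, 4.0), (0, 1, 6.0), (0, 0, 6.5),
    (1, 1, 8.5), (1, 0, 3.5), (0, 1, 5.5), (0, 0, 7.0),
]
# Dummy-coded design matrix: intercept, main effects, interaction.
X = np.array([[1, c, f, c * f] for c, f, _ in profiles], dtype=float)
y = np.array([r for *_, r in profiles])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, b_crisis, b_financial, b_interaction = beta
print(f"crisis x financial-priority interaction: {b_interaction:.2f}")
```

A positive interaction term in such toy data corresponds to the pattern the thesis reports: financial goals receive extra weight specifically in the crisis scenario.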
Social entrepreneurship addresses social problems and economic challenges by using for-profit industry techniques and tools to build financially sound businesses that provide nonprofit services. Social entrepreneurial activity also contributes to the achievement of the sustainable development goals. However, owing to the complex, hybrid nature of such businesses, social entrepreneurial activity is typically supported by macro-level determinants. To expand our knowledge of how beneficial macro-level determinants can be, this work examines empirical evidence on their impact on social entrepreneurship. A further aim of this dissertation is to examine impacts at the micro level, since the growth ambitions of social and commercial entrepreneurs differ. Chapter 1 introduces the motivation for the research, the research question, and the structure of the work.
There is an ongoing debate about the origin and definition of social entrepreneurship, and the many phenomena grouped under the term have been examined theoretically in the previous literature. To establish the common consensus on the topic, Chapter 2 presents the theoretical foundations and a definition of social entrepreneurship. The literature shows that a variety of determinants at the micro and macro levels are essential for the emergence of social entrepreneurship as a distinctive business model (Hartog & Hoogendoorn, 2011; Stephan et al., 2015; Hoogendoorn, 2016). A venture based on a social mission cannot be created without the support of micro- and macro-level determinants. This work examines the determinants and consequences of social entrepreneurship from different methodological perspectives. Chapter 3 discusses the theoretical foundations of the micro- and macro-level determinants that influence social entrepreneurial activity. The purpose of replication in research is to confirm previously published results (Hubbard et al., 1998; Aguinis & Solarino, 2019). However, owing to a lack of data, insufficient methodological transparency, reluctance to publish, and limited interest among researchers, replication of existing studies is rarely promoted (Baker, 2016; Hedges & Schauer, 2019a). The importance of replication studies has been emphasized regularly in the business and management literature (Kerr et al., 2016; Camerer et al., 2016), yet studies that replicate reported results remain rare (Burman et al., 2010; Ryan & Tipu, 2022). Building on the research of Köhler and Cortina (2019), Chapter 4 of this work carries out an empirical study on this topic.
Given this focus, researchers have published a large body of research on the impact of micro- and macro-level determinants on social inclusion, although it is still unclear whether these studies accurately reflect reality. It is important to provide conceptual underpinnings to the field through a reassessment of published results (Bettis et al., 2016). The results of this reassessment make it abundantly clear that macro-level determinants support social entrepreneurship.
In keeping with this concern, which requires attention, Chapter 5 considers the reproducibility of previous results, particularly on the topic of social entrepreneurship. We replicated the results of Stephan et al. (2015) to establish the trend of reproducibility and to validate the specific conclusions they drew. The literal and constructive replication in this dissertation motivated further technical replication research on social entrepreneurship. Chapter 6 evaluates the fundamental characteristics that have proven to be key factors in the growth of social ventures, reviews literature that has specifically focused on the development of social entrepreneurship, and carries out an empirical analysis of factors directly related to ambitious growth in social entrepreneurship.
Numerous social entrepreneurial groups have been studied with respect to this association. Chapter 6 compares the growth ambitions of social and traditional (commercial) entrepreneurs as micro-level consequences, examining many characteristics of both groups' growth ambitions. Scholars have argued that the growth of social entrepreneurship differs from commercial entrepreneurial activity because the objectives differ (Lumpkin et al., 2013; Garrido-Skurkowicz et al., 2022). Qualitative research has been used to support the evidence on related topics; for example, Gupta et al. (2020) emphasized that research needs to focus on specific concepts of social entrepreneurship for the field to advance. This study therefore provides a quantitative, analysis-based assessment of the facts and data. For this purpose, a data set from the Global Entrepreneurship Monitor (GEM) 2015 covering 12,695 entrepreneurs from 38 countries was used, and a regression analysis was conducted to evaluate the influence of various characteristics of social and commercial entrepreneurship on economic growth in developing countries. Chapter 7 briefly outlines future directions and practical and theoretical implications.
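A regression of the kind described can be sketched as follows. The data are simulated and every variable name and effect size (`social`, `education`, the coefficient values) is an invented stand-in for the GEM variables, not a result from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Hypothetical micro-level data in the spirit of the GEM-based analysis:
# growth ambition regressed on entrepreneur type plus a simple control.
social = rng.integers(0, 2, n).astype(float)   # 1 = social entrepreneur
education = rng.normal(0, 1, n)                # standardized schooling
ambition = 2.0 - 0.6 * social + 0.3 * education + rng.normal(0, 1, n)

# OLS via least squares: intercept, type gap, education effect.
X = np.column_stack([np.ones(n), social, education])
beta, *_ = np.linalg.lstsq(X, ambition, rcond=None)
print(f"social-vs-commercial gap in growth ambition: {beta[1]:.2f}")
```

In the simulated data the negative coefficient on `social` mirrors the claim that social entrepreneurs report lower growth ambitions than commercial ones; the real analysis of course estimates this from the GEM sample.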
Both nationally and internationally, there are growing calls to digitalize processes. The heterogeneity and complexity of the resulting systems make participation difficult for ordinary user groups who lack, for example, programming expertise or a background in information technology. Smart contracts are a case in point: their programming is complex, and because they are directly tied to an underlying cryptocurrency, any error immediately translates into monetary loss. This thesis presents an alternative protocol for cyber-physical contracts that is particularly well suited to human interaction and can be understood by ordinary user groups. The focus is on the transparency of the agreements; neither a blockchain nor a digital currency based on one is used. The contract model of this thesis can accordingly be understood as a traceable link between two parties that securely connects their different systems and thereby promotes self-organization. This link can be established automatically with computer support or carried out manually. In contrast to smart contracts, processes can thus be digitalized step by step. The agreements themselves can be used for communication as well as for legally binding contracts. The thesis situates the new concept among related approaches such as Ricardian and smart contracts and defines goals for the protocol, which are realized in a reference implementation. Both the protocol and the implementation are described in detail and complemented by an extension of the application that enables users in regions without a direct Internet connection to participate in these contracts.
The evaluation further considers the legal framework, the mapping of the protocol onto smart contracts, and the performance of the implementation.
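As a rough illustration of the idea of a transparent, blockchain-free two-party agreement, the following toy sketch keeps the contract human-readable, fingerprints it with a hash, and has both parties sign the fingerprint. This is not the thesis's protocol: HMAC with shared secrets merely stands in for real public-key signatures, and all names are hypothetical.

```python
import hashlib
import hmac

def make_agreement(text, party_a, party_b):
    """Toy two-party agreement: the plain text stays readable for
    transparency, a SHA-256 digest fingerprints it, and each party
    'signs' the digest (HMAC as a stand-in for real signatures)."""
    digest = hashlib.sha256(text.encode()).hexdigest()
    return {
        "text": text,
        "sha256": digest,
        "signatures": {
            name: hmac.new(key, digest.encode(), "sha256").hexdigest()
            for name, key in (party_a, party_b)
        },
    }

def verify(record, name, key):
    """Recompute the digest and the party's signature; any edit to the
    contract text invalidates both."""
    digest = hashlib.sha256(record["text"].encode()).hexdigest()
    expected = hmac.new(key, digest.encode(), "sha256").hexdigest()
    return (digest == record["sha256"]
            and hmac.compare_digest(record["signatures"][name], expected))
```

A tampered text fails verification for every signer, which captures, in miniature, the traceability goal of the proposed contract model.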
Physically based, distributed rainfall-runoff models are standard tools for analyzing hydrological processes and are used to simulate the water system in detail, including spatial patterns and temporal dynamics of hydrological variables and processes (Davison et al., 2015; Ek and Holtslag, 2004). In general, catchment models are parameterized with spatial information on soil, vegetation, and topography. However, traditional approaches to evaluating hydrological model performance are usually based on discharge data alone. This may cloud model realism and hamper understanding of catchment behavior. It is necessary to evaluate model performance with respect to internal hydrological processes within the catchment and other components of the water balance, rather than only runoff at the catchment outlet. In particular, a considerable amount of the dynamics in a catchment occurs in processes related to interactions of water, soil, and vegetation. Evapotranspiration, for instance, is one of these key interactive elements, and the parameterization of soil and vegetation in water balance modeling strongly influences its simulation. Specifically, to parameterize water flow in the unsaturated soil zone, the functional relationships that describe soil water retention and hydraulic conductivity are important. To define these relationships, pedotransfer functions (PTFs) are commonly used in hydrological modeling. Choosing appropriate PTFs for the region under investigation is crucial for estimating the soil hydraulic parameters, but this choice is often made arbitrarily and without evaluating the spatial and temporal patterns of evapotranspiration, soil moisture, and the distribution and intensity of runoff processes.
This may ultimately lead to implausible modeling results and possibly to incorrect decisions in regional water management. Reliable evaluation approaches are therefore continually required to analyze the dynamics of the interacting hydrological processes and to predict future changes in the water cycle, which ultimately contributes to sustainable environmental planning and water management decisions.
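For context, the soil water retention relationship that PTFs parameterize is commonly the van Genuchten (1980) function; a minimal sketch is:

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content [-] at pressure head h [cm of suction],
    using the van Genuchten (1980) retention function with m = 1 - 1/n.
    theta_r/theta_s: residual/saturated water content, alpha [1/cm], n [-]."""
    m = 1.0 - 1.0 / n
    h = np.abs(np.asarray(h, dtype=float))
    se = (1.0 + (alpha * h) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se
```

For a loam, commonly tabulated values such as theta_r ≈ 0.078, theta_s ≈ 0.43, alpha ≈ 0.036 cm⁻¹, n ≈ 1.56 are often used; a PTF would estimate exactly these parameters from texture, bulk density, and organic matter, which is why the choice of PTF propagates into the simulated water balance.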
Remarkable endeavors have been made in the development of modeling tools that provide insights into current and future hydrological patterns at different scales and their impacts on water resources and climate change (Doell et al., 2014; Wood et al., 2011). However, a proper balance is needed between parameter identifiability and the model's ability to realistically represent the response of the natural system. Tackling this issue entails additional information, which usually has to be elaborately assembled, for instance by mapping the dominant runoff generation processes in the area of interest, by retrieving spatial patterns of soil moisture and evapotranspiration with remote sensing methods, and by evaluating at a scale commensurate with the hydrological model (Koch et al., 2022; Zink et al., 2018). The present work therefore aims to give insights into modeling approaches for simulating the water balance and to improve the soil and vegetation parameterization scheme of the hydrological model, with the goal of producing more reliable spatial and temporal patterns of evapotranspiration and runoff processes in the catchment.
An important contribution to the overall body of work is a book chapter included among the publications. It provides a comprehensive overview of the topic and valuable insights into understanding the water balance and its estimation methods.
The first paper evaluated the hydrological model's behavior with respect to the contribution of various sources of information. To this end, a multi-criteria evaluation metric including soft and hard data was used to define constraints on the outputs of the 1-D hydrological model WaSiM-ETH. Applying this metric, we could identify the optimal soil and vegetation parameter sets that resulted in a "behavioral" forest stand water balance model. It turned out that even if simulations of transpiration and soil water content are consistent with measured data, the dominant runoff generation processes or the total water balance might still be calculated wrongly. Only an evaluation scheme that draws on different sources of data and embraces an understanding of the local controls of water loss through soil and plants allowed us to exclude unrealistic modeling outputs. The results suggest that the generally accepted soil parameterization procedures that apply default parameter sets may need to be questioned.
The second paper tackles this model evaluation hindrance in a small catchment in Bavaria. A methodology was introduced to analyze the sensitivity of the catchment water balance model to the choice of pedotransfer function (PTF). By varying the underlying PTFs in a calibrated and validated model, we determined the resulting effects on the spatial distribution of soil hydraulic properties, the total water balance at the catchment outlet, and the spatial and temporal variation of the runoff components. The results revealed that the water distribution in the hydrologic system differs significantly among PTFs. Moreover, the simulated water balance components were highly sensitive to the spatial distribution of soil hydraulic properties. It was therefore suggested that the choice of PTFs in hydrological modeling should be carefully tested by checking whether the spatio-temporal distributions of simulated evapotranspiration and runoff generation processes are reasonably represented.
Following up on these suggestions, the third paper focuses on evaluating the hydrological model by improving the spatial representation of dominant runoff processes (DRPs). The study was carried out in a mesoscale catchment in southwestern Germany using the hydrological model WaSiM-ETH. To deal with the lack of spatial observations for rigorous spatial model evaluation, we used a reference soil hydrologic map available for the study area to discern the expected dominant runoff processes across a wide range of hydrological conditions. The model was parameterized with 11 PTFs and run with multiple synthetic rainfall events. To compare the simulated spatial patterns with the patterns derived from the digital soil map, a multiple-component spatial performance metric (SPAEF) was applied. The simulated DRPs showed large variability with regard to land use, topography, applied rainfall rates, and the different PTFs, which strongly influence rapid runoff generation under wet conditions.
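The SPAEF metric combines three pattern components; a compact sketch following its published definition (Koch et al., 2018: correlation, coefficient-of-variation ratio, and histogram overlap of the z-scored patterns) might look like this. It is an illustrative reimplementation, not the code used in the paper.

```python
import numpy as np

def spaef(obs, sim, bins=100):
    """Spatial efficiency metric: 1 indicates a perfect pattern match.
    alpha: Pearson correlation of the two patterns
    beta:  ratio of their coefficients of variation
    gamma: histogram overlap of the z-scored patterns"""
    obs, sim = np.ravel(obs), np.ravel(sim)
    alpha = np.corrcoef(obs, sim)[0, 1]
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    z_obs = (obs - obs.mean()) / obs.std()
    z_sim = (sim - sim.mean()) / sim.std()
    lo = min(z_obs.min(), z_sim.min())
    hi = max(z_obs.max(), z_sim.max())
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    gamma = np.minimum(h_obs, h_sim).sum() / h_obs.sum()
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```

Because all three components must be close to one simultaneously, SPAEF penalizes simulations that reproduce only the mean pattern but not its variability or distribution, which is what makes it suitable for comparing simulated DRP maps against a reference soil map.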
The three published manuscripts progressed toward model evaluation viewpoints that ultimately attain behavioral model outputs. This was achieved by obtaining information about the internal hydrological processes that lead to certain model behaviors, and about the function and sensitivity of some of the soil and vegetation parameters that primarily influence those internal processes in a catchment. Using this understanding of model reactions, and by setting multiple evaluation criteria, it was possible to identify which parameterization could lead to a behavioral model realization. This work contributes to solving some of the issues (e.g., spatial variability and modeling methods) identified among the 23 unsolved problems in hydrology in the 21st century (Blöschl et al., 2019). The results obtained here encourage further investigations toward a comprehensive model calibration procedure that considers multiple data sources simultaneously. This will enable new perspectives on current parameter estimation methods, which in essence focus on reproducing plausible spatio-temporal dynamics of the other hydrological processes within the watershed.
Differential equations are based on local interactions and yield solutions that necessarily possess a certain amount of regularity. Various natural phenomena, however, are not well described by such local models. An important class of models describing long-range interactions are the so-called nonlocal models, which are the subject of this work.
The nonlocal operators considered here are integral operators with a finite range of interaction and the resulting models can be applied to anomalous diffusion, mechanics and multiscale problems.
While the range of applications is vast, the applicability of nonlocal models can face problems such as the high computational and algorithmic complexity of fundamental tasks. One of them is the assembly of finite element discretizations of truncated, nonlocal operators.
The first contribution of this thesis is therefore an openly accessible, documented Python code that allows computing finite element approximations for nonlocal convection-diffusion problems with a truncated interaction horizon.
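To give a flavor of what assembling a truncated nonlocal operator involves, here is a toy 1-D collocation sketch (a simplified stand-in, not the thesis's actual finite element code): each grid point interacts with all neighbors within the horizon delta, weighted by a kernel.

```python
import numpy as np

def assemble_nonlocal_matrix(n, delta, kernel):
    """Assemble a collocation matrix for the truncated nonlocal operator
    (L u)(x_i) ~ sum_j h * kernel(|x_i - x_j|) * (u_j - u_i)
    on a uniform grid of n points in [0, 1]; interactions are limited
    to |x_i - x_j| <= delta (the interaction horizon)."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]                 # uniform quadrature weight
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            r = abs(x[i] - x[j])
            if 0 < r <= delta:
                k = kernel(r) * h
                A[i, j] += k        # coupling to the neighbor
                A[i, i] -= k        # difference structure: rows sum to zero
    return x, A
```

The truncation makes the matrix banded rather than dense, but the bandwidth grows with delta, which hints at why the assembly of such discretizations is computationally demanding compared to local differential operators.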
Another difficulty in the solution of nonlocal problems is that the discrete systems may be ill-conditioned which complicates the application of iterative solvers. Thus, the second contribution of this work is the construction and study of a domain decomposition type solver that is inspired by substructuring methods for differential equations. The numerical results are based on the abstract framework of nonlocal subdivisions which is introduced here and which can serve as a guideline for general nonlocal domain decomposition methods.
This dissertation focuses on the personality construct of action vs. state orientation. Derived from Personality Systems Interaction theory (PSI theory), state orientation is defined as a low ability to self-regulate emotions and is associated with many adverse consequences, especially under stress. Given the high prevalence of state orientation, it is important to investigate factors that help state-oriented people buffer these adverse consequences. Action orientation, in contrast, is defined as a high ability to self-regulate one's own emotions in a very specific way: through accessing the self. The present dissertation examines this theme in five studies with a total of N = 1251 participants spanning a wide age range and different populations (students as well as non-students from the coaching and therapy sector), applying different operationalisations to investigate self-access as a mediator or an outcome variable. Furthermore, it tests whether the popular technique of mindfulness, advertised as a potent remedy for bringing people closer to the self, really works for everybody. The findings show that the presumed remedy is rather harmful for state-oriented individuals. Finally, in an attempt to ameliorate these alienating effects, the dissertation proposes theory-driven, easy-to-apply ways of adapting mindfulness exercises.
Representation Learning techniques play a crucial role in a wide variety of Deep Learning applications. From Language Generation to Link Prediction on Graphs, learned numerical vector representations often build the foundation for numerous downstream tasks.
In Natural Language Processing, word embeddings are contextualised: a word's representation depends on its current context. This useful property reflects how words can have different meanings based on their neighboring words.
In Knowledge Graph Embedding (KGE) approaches, static vector representations are still the dominant approach. While this is sufficient for applications where the underlying Knowledge Graph (KG) mainly stores static information, it becomes a disadvantage when dynamic entity behavior needs to be modelled.
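One well-known static KGE model is TransE (Bordes et al., 2013), which assigns each entity and relation a single fixed vector and scores a triple by how closely head + relation lands on tail. The sketch below uses toy vectors; it illustrates exactly the limitation discussed here, since an entity's vector is identical in every situation.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score for a triple (head, relation, tail):
    the closer h + r is to t in embedding space, the higher
    (less negative) the score. Each vector is fixed, i.e. static."""
    return -np.linalg.norm(h + r - t)
```

Contextualised dynamic approaches, by contrast, would replace the fixed `h` with a representation conditioned on the entity's history and current situation before scoring.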
To address this issue, KGE approaches would need to model dynamic entities by incorporating situational and sequential context into the vector representations of entities. Analogous to contextualised word embeddings, this would allow entity embeddings to change depending on their history and current situational factors.
Therefore, this thesis describes how to transform static KGE approaches into contextualised dynamic approaches and how the specific characteristics of different dynamic scenarios need to be taken into consideration.
As a starting point, we conduct empirical studies that attempt to integrate sequential and situational context into static KG embeddings and investigate the limitations of the different approaches. In a second step, the identified limitations serve as guidance for developing a framework that enables KG embeddings to become truly dynamic, taking into account both the current situation and the past interactions of an entity. The two main contributions in this step are the introduction of the temporally contextualized Knowledge Graph formalism and the corresponding RETRA framework which realizes the contextualisation of entity embeddings.
Finally, we demonstrate how situational contextualisation can be realized even in static environments, where all object entities are passive at all times.
For this, we introduce a novel task that requires the combination of multiple context modalities and their integration with a KG based view on entity behavior.
This meta-scientific dissertation comprises three research articles that investigated the reproducibility of psychological research. Specifically, they focused on the reproducibility of eye-tracking research on the one hand, and on preregistration (i.e., the practice of publishing a study protocol before data collection or analysis) as one method to increase reproducibility on the other.
In Article I, it was demonstrated that eye-tracking data quality is influenced by both the eye-tracker used and the specific task being measured: in an extensive test battery, distinct strengths and weaknesses were identified across three devices (Tobii Pro X3-120, GP3 HD, EyeLink 1000+). Consequently, both the device and the specific task should be considered when designing new studies. Article II focused on the current perception of preregistration in the psychological research community and on future directions for improving this practice. The survey showed that many researchers intended to preregister their research in the future and had overall positive attitudes toward preregistration. However, various obstacles currently hindering preregistration were identified, which should be addressed to increase its adoption. These findings were supplemented by Article III, which took a closer look at one preregistration-specific tool: the PRP-QUANT Template. In a simulation trial and a survey, the template demonstrated high usability and emerged as a valuable resource to support researchers in using preregistration. Future revisions of the template could help to further facilitate this open science practice.
In this dissertation, the findings of the three articles are summarized and discussed regarding their implications and potential future steps that could be implemented to improve the reproducibility of psychological research.
In psychological science communication, plain language summaries (PLS; Kerwer et al., 2021) are becoming increasingly important. These are accessible, overview-style summaries that can potentially support laypeople's understanding and foster their trust in scientific research. This appears particularly relevant against the background of the replication crisis (Wingen et al., 2019) and of misinformation in online contexts (Swire-Thompson & Lazer, 2020). Two effects on trust, and their possible interaction, have so far received little attention in the context of PLS: on the one hand, the simple presentation of information (easiness effect; Scharrer et al., 2012), on the other, a maximally scientific style (scientificness effect; Thomm & Bromme, 2012). This dissertation aims to identify the precise components of both effects in the context of psychological PLS and to illuminate the influence of easiness and scientificness on trust. To this end, three articles on preregistered online studies with German-speaking samples are presented.
In the first article, various text elements of psychological PLS were systematically varied in two studies. Technical terms, information on operationalisation, statistics, and the degree of structuring significantly influenced the easiness of the PLS as reported by laypeople. Building on this, the second article varied the easiness and scientificness of four PLS derived from peer-reviewed papers and asked laypeople about their trust in the texts and their authors. Initially, only a positive influence of scientificness on trust emerged, while, contrary to the hypotheses, the easiness effect did not appear. Exploratory analyses, however, suggested a positive influence of laypeople's subjectively perceived easiness on their trust, as well as a significant interaction with perceived scientificness. These findings point to a mediating role of laypeople's subjective perception in both effects. In the final article, this hypothesis was tested using mediation analyses. Again, two PLS were presented, and both the scientificness of the text and that of the author were manipulated. The influence of higher scientificness on trust was mediated by the scientificness subjectively perceived by laypeople. In addition, cross-dimensional mediation effects were observed.
This work thus goes beyond existing research in clarifying the boundary conditions of the easiness and scientificness effects. Theoretical implications for future definitions of easiness and scientificness, as well as practical consequences regarding different target groups of science communication and the influence of PLS on laypeople's decision-making, are discussed.
Today, almost every modern computing device is equipped with multicore processors capable of efficient concurrent and parallel execution of threads. This processor feature can be leveraged by concurrent programming, which is a challenge for software developers for two reasons. First, it introduces a paradigm shift that requires a new way of thinking. Second, it can lead to issues that are unique to concurrent programs due to the non-deterministic, interleaved execution of threads. Consequently, debugging concurrency and related performance issues is a difficult and often tedious task. Developers still lack thread-aware programming tools that facilitate the understanding of concurrent programs. Ideally, these tools should be part of their daily working environment, which typically includes an Integrated Development Environment (IDE). In particular, the way source code is visually presented in traditional source-code editors does not convey much information on whether the source code is executed concurrently or in parallel in the first place.
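The non-deterministic interleaving described above can be made concrete with the classic lost-update example. The following minimal Python sketch (illustrative only; the dissertation itself targets Java and IntelliJ IDEA) shows the synchronized variant: each increment is a read-modify-write that the lock protects, so the final count is deterministic. Removing the lock makes the outcome depend on how the threads happen to interleave.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Increment the shared counter n times, guarding each
    read-modify-write with a lock so no updates are lost."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1  # without the lock, concurrent updates can be lost

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, the result is deterministic: 4 * 10_000 = 40_000.
```

Spotting by eye that `counter += 1` is executed by four threads at once is exactly the kind of information a plain source-code editor does not surface.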
With this dissertation, we pursue the main goal of facilitating and supporting the understanding and debugging of concurrent programs. To this end, we formulate and utilize a visualization paradigm that particularly includes the display of interactive glyph-based visualizations embedded in the source-code editor close to their corresponding artifacts (in-situ).
To facilitate the implementation of visualizations that comply with our paradigm as plugins for IDEs, we designed, implemented and evaluated a programming framework called CodeSparks. After presenting the design goals and the architecture of the framework, we demonstrate its versatility with a total of fourteen plugins realized by different developers using the CodeSparks framework (CodeSparks plugins). With focus group interviews, we empirically investigated how developers of the CodeSparks plugins experienced working with the framework. Based on the plugins, deliberate design decisions and the interview results, we discuss to what extent we achieved our design goals. We found that the framework is largely independent of the target programming language and that it supports the development of plugins for a wide range of source-code-related tasks while hiding most of the details of the underlying plugin development API.
In addition, we applied our visualization paradigm to thread-related runtime data from concurrent programs to foster the awareness of source code being executed concurrently or in parallel. As a result, we designed and developed two in-situ thread visualizations, namely ThreadRadar and ThreadFork, with the latter building on the former. Both thread visualizations are based on a debugging approach that combines statistical profiling, thread-aware runtime metrics, clustering of threads on the basis of these metrics, and finally interactive glyph-based in-situ visualizations. To address scalability issues of the ThreadRadar in terms of the space required and the number of displayable thread clusters, we designed a revised thread visualization. This revision also raised the question of how many thread clusters k should be computed in the first place. To this end, we conducted experiments with the clustering of threads for artifacts from a corpus of concurrent Java programs that includes real-world Java applications and concurrency bugs. We found that the maximum k on the one hand, and the optimal k according to four cluster validation indices on the other, rarely exceed three. However, occasionally thread clusterings with k > 3 are available and also optimal. Consequently, we revised both the clustering strategy and the visualization as parts of our debugging approach, which resulted in the ThreadFork visualization. Both in-situ thread visualizations, including their additional features that support the exploration of the thread data, are implemented in a tool called CodeSparks-JPT, i.e., as a CodeSparks plugin for IntelliJ IDEA.
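Selecting the number of thread clusters k via a cluster validation index can be illustrated with a self-contained sketch: a deterministic 1-D k-means over a single thread metric, scored with the silhouette index (one common validation index; the dissertation uses four, which are not reproduced here). The metric values below are invented for illustration.

```python
def kmeans_1d(values, k, iters=100):
    """Deterministic 1-D Lloyd's k-means: centroids start spread
    over the sorted data, then alternate assign/update steps."""
    data = sorted(values)
    centroids = [data[i * (len(data) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        new_labels = [min(range(k), key=lambda j: abs(v - centroids[j]))
                      for v in values]
        if new_labels == labels:
            break
        labels = new_labels
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:  # keep the old centroid if a cluster empties
                centroids[j] = sum(members) / len(members)
    return labels

def silhouette(values, labels):
    """Mean silhouette score; singleton clusters score 0."""
    clusters = {}
    for v, l in zip(values, labels):
        clusters.setdefault(l, []).append(v)
    scores = []
    for v, l in zip(values, labels):
        own = clusters[l]
        if len(own) == 1:
            scores.append(0.0)
            continue
        a = sum(abs(v - x) for x in own) / (len(own) - 1)  # cohesion
        b = min(sum(abs(v - x) for x in c) / len(c)        # separation
                for l2, c in clusters.items() if l2 != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

def choose_k(values, k_max=6):
    """Pick the k (>= 2) with the highest silhouette score."""
    best_k, best_score = 2, -1.0
    for k in range(2, min(k_max, len(values) - 1) + 1):
        score = silhouette(values, kmeans_1d(values, k))
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# Invented per-thread metric values forming three clear groups:
metrics = [0.9, 1.0, 1.1, 4.8, 5.0, 5.2, 8.9, 9.0, 9.1]
best_k = choose_k(metrics)
```

On this toy data the silhouette peaks at the three natural groups, consistent with the finding that the optimal k rarely exceeds three; real thread metrics are of course multi-dimensional and noisier.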
With various empirical studies, including anecdotal usage scenarios, a usability test, web surveys, hands-on sessions, questionnaires and interviews, we investigated quality aspects of the in-situ thread visualizations and their corresponding tools. First, in a demonstration study, we illustrated the usefulness of the ThreadRadar visualization in investigating and fixing concurrency bugs and a performance bug. This was confirmed by a subsequent usability test and interview, which also provided formative feedback. Second, we investigated the interpretability and readability of the ThreadFork glyphs as well as the effectiveness of the ThreadFork visualization through anonymous web surveys. While we found that the ThreadFork glyphs are correctly interpreted and readable, it remains unproven that the ThreadFork visualization effectively facilitates understanding the dynamic behavior of threads that concurrently executed portions of source code. Moreover, the overall usability of CodeSparks-JPT is perceived as "OK, but not acceptable" as the tool has issues with its learnability and memorability. However, all other usability aspects of CodeSparks-JPT that were examined are perceived as "above average" or "good".
Our work supports software-engineering researchers and practitioners in flexibly and swiftly developing novel glyph-based visualizations that are embedded in the source-code editor. Moreover, we provide in-situ thread visualizations that foster the awareness of source code being executed concurrently or in parallel. These in-situ thread visualizations can, for instance, be adapted, extended and used to analyze other use cases or to replicate the results. Through empirical studies, we have gradually shaped the design of the in-situ thread visualizations through data-driven decisions, and evaluated several quality aspects of the in-situ thread visualizations and the corresponding tools for their utility in understanding and debugging concurrent programs.
This contribution takes up the public debate on the legal and political response to hate, incitement, and antisemitism, which has gained intensity and urgency particularly since the Hamas terrorist attack of 7 October 2023. It examines criminal and civil law on the one hand and places a particular focus on public-law constellations on the other. In each of these fields, weaknesses and potentials of the law and of case law are identified, while the limits of state power are also made clear. Ultimately, this is a societal problem that, notwithstanding the necessity of state action, must be countered first and foremost through information and only secondarily through the law.
This thesis consists of four highly related chapters examining China’s rise in the aluminium industry. The first chapter addresses the conditions that allowed China, which first entered the market in the 1950s, to rise to world leadership in aluminium production. Although China was a latecomer, its re-entry into the market after the oil crises in the 1970s was a success and led to its ascent as the world’s largest aluminium producer by 2001. With an estimated production of 40.4 million tonnes in 2022, China represented almost 60% of the global output. Chapter 1 examines the factors underlying this success, such as the decline of international aluminium cartels, the introduction of innovative technology, the US granting China the MFN tariff status, Chinese-specific factors, and supportive government policies. Chapter 2 develops a mathematical model to analyze firms’ decisions in the short term. It examines how an incumbent with outdated technology and a new entrant with access to a new type of technology make strategic decisions, including the incumbent’s decision whether to deter entry, the production choice of firms, the optimal technology adoption rate of the newcomer, and cartel formation. Chapter 3 focuses on the adoption of new technology by firms upon market entry in four scenarios: firstly, a free market Cournot competition; secondly, a situation in which the government determines technology adoption rates; thirdly, a scenario in which the government controls both technology and production; and finally, a scenario where the government dictates technology adoption rates, production levels, and also the number of market participants. Chapter 4 applies the Spencer and Brander (1983) framework to examine strategic industrial policy. The model assumes that there are two exporting firms in two different countries that sell a product to a third country. 
We examine how the domestic firm is influenced by government intervention, such as the provision of a fixed-cost subsidy to improve its competitiveness relative to the foreign company. Chapter 4 initially investigates a scenario where only one government offers a fixed-cost subsidy, followed by an analysis of the case when both governments simultaneously provide financial help. Taken together, these chapters provide a comprehensive analysis of the strategic, technological, and political factors contributing to China’s leadership in the global aluminium industry.
Chapter 1: The Rise of China as a Latecomer in the Global Aluminium Industry
This chapter examines China’s remarkable transformation into a global leader in the aluminium industry, a sector in which the country accounted for approximately 58.9% of worldwide production in 2022. We examine how China, a latecomer to the aluminium industry that started off with labor-intensive technology in 1953, grew into the largest aluminium producer with some of the most advanced smelters in the world. This analysis identifies and discusses several opportunities that Chinese aluminium producers took advantage of. The first set of opportunities arose during the 1970s oil crises, which softened international competition and allowed China to acquire innovative smelting technology from Japan. The second set of opportunities started at about the same time, when China opened its economy in 1978. The substantial demand for aluminium in China is influenced by both external and internal factors. Externally, the US granted China MFN tariff status in 1980 and China entered the World Trade Organization (WTO) in 2001; both events contributed to a surge in Chinese aluminium consumption. Internally, China’s investment-led growth model further boosted its aluminium demand. Additional factors specific to China, such as low labor costs and the abundance of coal as an energy source, gave Chinese firms competitive advantages over international players. A further window of opportunity stems from Chinese government policies, including phasing out old technology, providing subsidies, and gradually opening the economy to enhance domestic competition before expanding globally. By describing these elements, the study provides insights into the dynamic interplay of external circumstances and internal strategies that contributed to the success of the Chinese aluminium industry.
Chapter 2: Technological Change and Strategic Choices for Incumbent and New Entrant
This chapter introduces an oligopoly model with two actors, an incumbent and a potential entrant, that compete in the same market. We assume that the two participants are located in different parts of the market: the incumbent is situated in area 1, whereas the potential entrant may venture into the other region, area 2. The incumbent moves in stage zero, where it can decide whether to deter the newcomer’s entry. A new type of technology becomes available in period one, when the newcomer may enter the market. In the short term, the incumbent is locked into the outdated technology, while the new entrant may choose to partially or completely adopt the latest technology. Our results suggest the following: Firstly, the incumbent only tries to deter the new entrant if a condition on the entry cost is met. Secondly, the new entrant is only interested in forming a cartel with the incumbent if a function of the ratio of the variable-cost parameter to the new technology’s fixed-cost parameter is sufficiently high. Thirdly, if the newcomer asks to form a cartel, the incumbent will always accept this request. Finally, we derive the newcomer’s optimal adoption rate of the new technology.
Chapter 3: Technological Adoption and Welfare in Cournot Oligopoly
This study examines the difference between the optimal technology adoption rates chosen by firms in a homogeneous Cournot oligopoly and those preferred by a benevolent government upon firms’ market entry. To address the question of whether the technology choices of firms and government coincide, we analyze several scenarios that differ in the extent of government intervention in the market. Our results suggest a relationship between the number of firms in the market and the impact of government intervention on technology adoption rates. Especially in situations with a small number of firms interested in entering the market, greater government influence tends to lead to higher technology adoption rates of firms. Conversely, in scenarios with a larger number of firms and a government that lacks control over the number of market players, the technology adoption rate of firms is highest when the government plays no role.
Chapter 4: International Technological Innovation and Industrial Strategies
Supporting domestic firms when they first enter the market may be seen as a favorable policy choice by governments around the world because of its ability to enhance the competitive advantage of domestic firms in non-cooperative competition against foreign enterprises (the infant-industry protection argument). This advantage may allow domestic firms to increase their market share and generate higher profits, thereby improving domestic welfare. This chapter utilizes the Spencer and Brander (1983) framework as a theoretical foundation to elucidate the effects of fixed-cost subsidies on firms’ production levels, technological innovation, and social welfare. The analysis examines two firms in different countries, each producing a homogeneous product that is sold in a third, separate country. We first examine the Cournot-Nash equilibrium in the absence of government intervention, then analyze a scenario where just one government provides a financial subsidy for its domestic firm, and finally consider a situation where both governments simultaneously provide financial assistance for their respective firms. Our results suggest that governments aiming to maximize social welfare by providing fixed-cost subsidies to their respective firms find themselves in a Chicken game scenario. Regarding technological innovation, subsidies lead to an increased technology adoption rate for recipient firms, regardless of whether one or both firms in a market receive support, compared to the situation without subsidies. The technology adoption rate of the recipient firm is higher than that of its rival when only the recipient firm benefits from the fixed-cost subsidy. The lowest technology adoption rate of a firm occurs when the firm does not receive a fixed-cost subsidy but its competitor does.
Furthermore, global welfare benefits the most when both exporting countries grant fixed-cost subsidies, and the welfare level with only one subsidizing country is still higher than when no country provides subsidies.
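The Cournot-Nash benchmark that Chapter 4 starts from can be made concrete under an assumed linear inverse demand P = a − b(q1 + q2) and constant marginal costs c1, c2 (all parameter values below are illustrative and not taken from the chapter, which works with fixed-cost subsidies and endogenous technology adoption on top of this core):

```python
def cournot_nash(a, b, c1, c2, iters=200):
    """Best-response iteration for a Cournot duopoly with inverse
    demand P = a - b*(q1 + q2) and constant marginal costs c1, c2.

    Firm i's best response to q_j is q_i = (a - c_i - b*q_j) / (2b);
    iterating the best responses converges to the Nash equilibrium
    q_i* = (a - 2*c_i + c_j) / (3b).
    """
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = max(0.0, (a - c1 - b * q2) / (2 * b))
        q2 = max(0.0, (a - c2 - b * q1) / (2 * b))
    return q1, q2

# Illustrative parameters: the lower-cost firm produces more.
q1, q2 = cournot_nash(a=120, b=1, c1=30, c2=60)
```

A subsidy that lowers a firm's effective costs shifts its best-response curve outward, which is the channel through which the strategic policy in the chapter operates.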
Allocating scarce resources efficiently is a major task in mechanism design. One of the most fundamental problems in mechanism design theory is the problem of selling a single indivisible item to bidders with private valuations for the item. In this setting, the classic Vickrey auction (Vickrey, 1961) describes a simple mechanism to implement a social welfare maximizing allocation.
The Vickrey auction for a single item asks every buyer to report its valuation and allocates the item to the highest bidder for a price of the second highest bid. This auction features some desirable properties, e.g., buyers cannot benefit from misreporting their true value for the item (incentive compatibility) and the auction can be executed in polynomial time.
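The second-price rule just described can be sketched in a few lines; the bidder names and values below are purely illustrative.

```python
def vickrey_auction(bids):
    """Single-item Vickrey (second-price) auction.

    bids: dict mapping bidder -> reported valuation.
    The highest bidder wins and pays the second-highest bid,
    which is what makes truthful reporting a dominant strategy.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid sets the price
    return winner, price

# Illustrative run: B has the highest bid (10) and pays A's bid of 7.
winner, price = vickrey_auction({"A": 7, "B": 10, "C": 5})
```

Sorting the m bids takes O(m log m) time, which illustrates the polynomial runtime mentioned above.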
However, when there is more than one item for sale and buyers' valuations for sets of items are not additive, or the set of feasible allocations is constrained, constructing mechanisms that implement efficient allocations and have polynomial runtime might be very challenging. Consider a single seller selling n heterogeneous indivisible items to several bidders. The Vickrey-Clarke-Groves auction generalizes the idea of the Vickrey auction to this multi-item setting. Naturally, every bidder has an intrinsic value for every subset of items. As in the Vickrey auction, bidders report their valuations (now, for every subset of items!). Then, the auctioneer computes a social welfare maximizing allocation according to the submitted bids and charges each buyer the social cost that its winning imposes on the rest of the buyers. (This is the analogue of charging the second-highest bid to the winning bidder in the single-item Vickrey auction.) It turns out that the Vickrey-Clarke-Groves auction is also incentive compatible, but it poses some problems: for n = 40, say, each bidder would have to submit 2^40 − 1 values (one value for each nonempty subset of the ground set). Thus, asking every bidder for its full valuation might be impossible due to time complexity issues. Therefore, even though the Vickrey-Clarke-Groves auction implements a social welfare maximizing allocation in this multi-item setting, it might be impractical, and there is a need for alternative approaches to implement social welfare maximizing allocations.
This dissertation presents the results of three independent research papers, all of which tackle the problem of implementing efficient allocations in different combinatorial settings.
The goal of dynamic microsimulations is to simulate the development of systems through the behavior of their individual components, enabling comprehensive scenario-based analyses. In economics and the social sciences, the focus is usually placed on populations consisting of persons and households. Since political and economic decisions are mostly made at the local level, small-area information is also required in order to derive targeted recommendations for action. This, in turn, confronts researchers with major challenges in the process of building regionalized simulation models. This process ranges from generating suitable base data sets, through capturing and implementing the dynamic components, to evaluating the results and quantifying uncertainties. This thesis describes and systematically analyzes selected components that are of particular relevance for regionalized microsimulations.
Chapter 2 first presents theoretical and methodological aspects of microsimulations in order to provide a comprehensive overview of the different types of dynamic modeling and the options for implementing them. The focus is on the fundamentals of capturing and simulating states and state changes, as well as the associated structural aspects of the simulation process.
Both for simulating state changes and for extending the data basis, logistic regression models are primarily used to capture population structures at the micro level and subsequently predict them on a probability basis. Estimation relies in particular on survey data, which, in addition to a limited sample size, usually allow no or only insufficient regional differentiation. As a result, the predicted probabilities can deviate considerably from known totals. To harmonize the predictions with these totals, methods for adjusting probabilities, so-called alignment methods, can be applied. Although various approaches are described in the literature, little is known about the effect of these procedures on model quality. To assess different techniques, Chapter 3 implements them in comprehensive simulation studies under various scenarios. It is shown that incorporating additional information into the modeling process yields clear improvements both in parameter estimation and in the prediction of probabilities. Moreover, this makes it possible to generate small-area probabilities even when regional identifiers are missing from the modeling data. In particular, maximizing the likelihood of the underlying regression model under the constraint that the known totals are respected performs very well in all simulation studies.
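One simple alignment method from the literature, a uniform shift on the logit scale, can be sketched as follows: predicted probabilities are shifted so that their sum matches a known total while their ordering is preserved. (The constrained maximum-likelihood approach favored in Chapter 3 instead imposes the totals during estimation; this sketch, with invented inputs, only illustrates the post-hoc variant.)

```python
import math

def align_probabilities(probs, target_total, tol=1e-10):
    """Align predicted probabilities to a known total by a uniform
    shift delta on the logit scale, found via bisection.

    After alignment, sum(p_i) equals target_total while the ranking
    of the original probabilities is preserved.
    """
    def shifted_sum(delta):
        # Apply the same logit shift to every probability and sum up.
        return sum(1.0 / (1.0 + math.exp(-(math.log(p / (1 - p)) + delta)))
                   for p in probs)

    lo, hi = -50.0, 50.0  # generous bracket for the shift
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if shifted_sum(mid) < target_total:
            lo = mid
        else:
            hi = mid
    delta = (lo + hi) / 2
    return [1.0 / (1.0 + math.exp(-(math.log(p / (1 - p)) + delta)))
            for p in probs]

# Invented example: raise three predicted probabilities so they sum to 1.5.
aligned = align_probabilities([0.1, 0.2, 0.4], 1.5)
```

Because shifted_sum is strictly increasing in delta, the bisection always finds the unique shift that hits the benchmark total.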
The implementation of regional mobility proves to be one of the most influential components of regionalized microsimulations. At the same time, migration receives no or only insufficient attention in many microsimulation models. Because of its direct influence on the entire population structure, ignoring migration leads to strong distortions even over a short simulation horizon. While for global models it suffices to integrate migration across national borders, regionalized models must also reproduce internal migration as comprehensively as possible. To this end, Chapter 4 develops concepts for migration modules that allow, on the one hand, independent simulation on regional subpopulations and, on the other hand, a comprehensive reproduction of migration movements within the entire population. To account for household structures and to ensure the plausibility of the data, an algorithm for calibrating household probabilities is proposed that enables compliance with benchmarks at the individual level. A retrospective evaluation of the simulated migration movements demonstrates the functionality of the migration concepts. In addition, by projecting the population into future periods, divergent developments of population counts under the different migration concepts are analyzed.
Capturing uncertainty poses a particular challenge in dynamic microsimulations. Owing to the complexity of the overall structure and the heterogeneity of the components, classical methods for measuring uncertainty are often no longer applicable. To quantify different influencing factors, Chapter 5 proposes variance-based sensitivity analyses, whose enormous flexibility also permits direct comparisons between very different components. Sensitivity analyses prove highly suitable not only for capturing uncertainty but also for directly analyzing different scenarios, especially for evaluating joint effects. Simulation studies illustrate their application in the concrete context of dynamic models. They show, on the one hand, that large differences arise with respect to different target values and simulation periods and, on the other hand, that the degree of regional differentiation must always be taken into account.
Chapter 6 summarizes the findings of this thesis and provides an outlook on future research potential.
This cumulative thesis encompasses three studies focusing on the Weddell Sea region in the Antarctic. The first study produces and evaluates a high-quality data set of wind measurements for this region. The second study produces and evaluates a 15-year regional climate simulation for the Weddell Sea region. The third study produces and evaluates a climatology of low-level jets (LLJs) from the simulation data set. The evaluations were carried out in the three attached publications, and the produced data sets are published online.
In 2015/2016, the RV Polarstern undertook an Antarctic expedition in the Weddell Sea. We operated a Doppler wind lidar on board during that time, running different scan patterns. The resulting data was evaluated, corrected, and processed, and we derived horizontal wind speed and direction for vertical profiles up to 2 km in height. The measurements cover 38 days with a temporal resolution of 10-15 minutes. A comparison with radio sounding data showed only minor differences.
The resulting data set was used alongside other measurements to evaluate temperature and wind in simulation data produced with the regional climate model CCLM for the period 2002 to 2016 for the Weddell Sea region. Only minor biases were found, except for a strong warm bias during winter near the surface of the Antarctic Plateau. We therefore adapted the model setup and were able to remove the bias in a second simulation.
This new simulation data was then used to derive a climatology of low-level jets (LLJs). Statistics of the occurrence frequency, height, and wind speed of LLJs in the Weddell Sea region are presented alongside other parameters. The last study also includes a further evaluation against measurements.
This study scrutinizes press photographs published during the first six weeks of the Russian War in Ukraine, beginning February 24th, 2022. Its objective is to shed light on the emotions evoked in Internet-savvy audiences. This empirical research aims to contribute to the understanding of emotional media effects that shape the attitudes and actions of ordinary citizens. The main research questions are: What kinds of empathic reactions are observed during the Q-sort study? Which visual patterns are relevant for which emotional evaluations and attributions? The assumption is that the evaluations and attributions of empathy are not random but follow specific patterns: the empathic reactions are based on visual patterns which, in turn, influence the type of empathic reaction. The specific categories for visual and emotional reaction patterns were identified in different methodological processes. Visual pattern categories were developed inductively, using the art-historical method of iconography-iconology to identify six distinct types of visual motifs in a final sample of 33 war photographs. The overarching categories for empathic reactions (empty empathy, vicarious traumatization, and witnessing) were applied deductively, building on E. Ann Kaplan's pivotal distinctions. The main result of this research is three novel categories that combine visual patterns with empathic reaction patterns. The labels for these categories are a direct result of the Q-factorial analysis, interpreted through the lens of iconography-iconology. An exploratory nine-scale forced-choice Q-sort study (Nstimuli = 33) was implemented, followed by self-report interviews with a total of 25 participants [F = 16 (64%), M = 9 (36%), Mage = 26.4 years]. Results from this exploratory research include motivational statements on the meanings of war photography from semi-structured post-sort interviews.
The major result of this study is three types of visual patterns (“factors”) that govern distinct empathic reactions in participants. Factor 1 is “veiled empathy”, with the highest empathy attributed to photos showing victims whose corpses or faces were veiled; additional features of “veiled empathy” are a strong anti-politician bias and a heightened awareness of potential visual manipulation. Factor 2 is “mirrored empathy”, with the highest empathy attributions to photos displaying human suffering openly. Factor 3 focused on the context and showed a proclivity for documentary-style photography; this pattern ranked photos without clear contextualization lower in empathy than photos displaying a fully contextualized setting. To the best of our knowledge, no previous study has empirically tested empathic reactions to war photography. In this respect, the study is novel, but also exploratory. Findings like the three patterns of visual empathy might be helpful for photo-selection processes in journalism, for political decision-making, for the promotion of relief efforts, and for coping strategies in civil society to deal with the potentially numbing or traumatizing visual legacy of the War in Ukraine.
Adaptation to climate change is a complex societal challenge and relates to governance questions in steering theory. Climate adaptation is characterized by cooperation between state and non-state actors, network-like structures, flexible steering mechanisms, and formal as well as informal coordination structures. Successful climate adaptation policy must take a wide variety of actor and interest constellations into account.
The aim of the present study is to analyze the actor and stakeholder network of Traben-Trarbach from a climate adaptation perspective. A particular focus lies on the regionally important sectors of viticulture and tourism, which are considered in an integrated way and in the context of municipal, regional, and supra-regional structures. The analysis maps the web of relationships, the reach and diversity of the network, and the composition of the actor landscape. In addition, key actors, potential multipliers, interdependencies between viticulture and tourism, and sources of information and knowledge were identified.
The results of the stakeholder analysis provide important indications of which actors should be involved in the governance of climate adaptation and which local conditions and relationships must be taken into account. The composition of the actors, in particular, has a decisive influence on the course and success of climate adaptation governance. The present stakeholder analysis thus creates an important basis for establishing a governance network for developing and testing climate adaptation measures in Traben-Trarbach and the Mosel region. It thereby serves the long-term anchoring of climate adaptation in the region and can also serve as inspiration for other municipalities facing challenges similar to those of Traben-Trarbach.
Official business surveys form the basis for national and regional business statistics and are thus of great importance for analysing the state and performance of the economy. However, both the heterogeneity of business data and their high dynamics pose a particular challenge to the feasibility of sampling and the quality of the resulting estimates. A widely used sampling frame for creating the design of an official business survey is an extract from an official business register. However, if this frame does not accurately represent the target population, frame errors arise. Amplified by the heterogeneity and dynamics of business populations, these errors can significantly affect the estimation quality and lead to inefficiencies and biases. This dissertation therefore deals with design-based methods for optimising business surveys with respect to different types of frame errors.
First, methods for adjusting the sampling design of business surveys are addressed. These approaches integrate auxiliary information about the expected structures of frame errors into the sampling design. The aim is to increase the number of sampled businesses that are subject to frame errors. The element-specific frame error probability is estimated based on auxiliary information about frame errors observed in previous samples. The approaches discussed consider different types of frame errors and can be incorporated into predefined designs with fixed strata.
As the second main pillar of this work, methods for adjusting weights to correct for frame errors during estimation are developed and investigated. As a result of frame errors, the assumptions under which the original design weights were determined based on the sampling design no longer hold. The developed methods correct the design weights, taking into account the errors identified for sampled elements. Case-number-based reweighting approaches, on the one hand, attempt to reconstruct the unknown sizes of the individual strata in the target population. In the context of weight smoothing methods, on the other hand, design weights are modelled and smoothed as a function of target or auxiliary variables. This serves to avoid inefficiencies in the estimation due to widely scattered weights or weak correlations between weights and target variables. In addition, possibilities of correcting frame errors by calibration weighting are elaborated. Especially when the sampling frame shows over- and/or undercoverage, the inclusion of external auxiliary information can provide a significant improvement of the estimation quality. For those methods whose quality cannot be measured using standard procedures, a variance estimation procedure based on a rescaling bootstrap is proposed. This enables an assessment of the estimation quality when using the methods in practice.
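One of the simplest forms such a weight correction can take is ratio calibration per stratum: the weights of units found to be valid are scaled so that they sum to an externally known count of the valid population. The sketch below is purely illustrative (the stratum sizes, data and calibration form are invented and are not the dissertation's methods):

```python
# Illustrative sketch (not the dissertation's exact methods): correcting
# design weights for frame overcoverage via simple per-stratum ratio
# calibration. All stratum counts and data below are hypothetical.

def design_weight(N_h, n_h):
    """Horvitz-Thompson design weight per sampled unit in stratum h."""
    return N_h / n_h

def calibrated_weights(sample, N_valid):
    """Scale weights of valid units so they sum to the known valid
    stratum size N_valid[h] (ratio calibration per stratum)."""
    out = []
    for h, units in sample.items():
        valid = [u for u in units if not u["frame_error"]]
        w_sum = sum(u["w"] for u in valid)
        factor = N_valid[h] / w_sum          # calibration factor g_h
        for u in valid:
            out.append({**u, "w_cal": u["w"] * factor})
    return out

# Hypothetical frame: register counts (with overcoverage) vs. known
# valid counts from an external source.
N_frame = {"A": 100, "B": 50}
N_valid = {"A": 70,  "B": 45}
n_h     = {"A": 5,   "B": 4}

sample = {
    "A": [{"y": 10, "frame_error": False}, {"y": 12, "frame_error": False},
          {"y": 0,  "frame_error": True},  {"y": 8,  "frame_error": False},
          {"y": 11, "frame_error": False}],
    "B": [{"y": 20, "frame_error": False}, {"y": 0, "frame_error": True},
          {"y": 22, "frame_error": False}, {"y": 18, "frame_error": False}],
}
for h, units in sample.items():
    for u in units:
        u["w"] = design_weight(N_frame[h], n_h[h])

cal = calibrated_weights(sample, N_valid)
total_naive = sum(u["w"] * u["y"]
                  for us in sample.values() for u in us if not u["frame_error"])
total_cal = sum(u["w_cal"] * u["y"] for u in cal)
print(round(total_naive, 1), round(total_cal, 1))  # 1570.0 1617.5
```

The naive estimate keeps the register-based weights and merely drops the erroneous units, while the calibrated estimate additionally repairs the implied stratum sizes.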
In the context of two extensive simulation studies, the methods presented in this dissertation are evaluated and compared with each other. First, in the environment of an experimental simulation, it is assessed which approaches are particularly suitable with regard to different data situations. In a second simulation study, which is based on the structural survey in the services sector, the applicability of the methods in practice is evaluated under realistic conditions.
Water-related regulating and provisioning ecosystem services (ESS) were investigated with regard to the discharge regime and groundwater recharge in the Palatinate Forest Biosphere Reserve in south-western Germany by means of hydrological modelling using the Soil and Water Assessment Tool (SWAT+). A holistic approach was pursued in which indicators of functional and structural ecological processes are assigned to the ESS. Potential risk factors for the deterioration of water-related forest ESS were analysed with respect to their effects on hydrological processes: soil compaction caused by heavy machinery during timber harvesting; damaged areas under regeneration, resulting either from silvicultural management practices or from windthrow, pests and calamities in the course of climate change; and climate change itself as a major stressor for forest ecosystems. For each of these factors, separate SWAT+ model scenarios were created and compared with the calibrated base model, which represented current catchment conditions based on field data. The simulations confirmed favourable conditions for groundwater recharge in the Palatinate Forest. Owing to the high infiltration capacity of the soil substrates derived from weathered bunter sandstone, together with the delaying and buffering influence of the tree canopy on precipitation, a significant mitigating effect on surface runoff generation and a pronounced spatial and temporal retention potential were simulated in the catchment. It was also found that increased precipitation amounts exceeding the infiltration capacity of the sandy soils lead to a rapid, flashy runoff response with pronounced surface runoff peaks.
The simulations revealed interactions between forest and water cycle as well as the hydrological impact of climate change, degraded soil functions and age-related stand structures associated with differences in canopy development. Future climate projections, simulated with bias-corrected REKLIES and EURO-CORDEX regional climate models (RCMs), predicted a higher evaporative demand and an extension of the growing season along with more frequent drought periods within the growing season. This shortened the period available for groundwater recharge and consequently led to a projected decline in the groundwater recharge rate by the middle of the century. Owing to the strong correlation with precipitation intensities and the duration of precipitation events, and notwithstanding all uncertainties in their prediction, surface runoff generation was projected to increase by the end of the century.
To simulate soil compaction, the dry bulk density of the soil and the SCS Curve Number in SWAT+ were adjusted according to data from machine-traffic experiments in the area. The favourable infiltration conditions and the relatively low susceptibility to compaction of the coarse-grained weathered bunter sandstone dominated the hydrological effects at catchment level, so that only moderate deteriorations of water-related ESS were indicated. The simulations further showed a clear influence of soil texture on the hydrological response after soil compaction on skid trails, supporting the assumption that the susceptibility of soils to compaction increases with the proportion of silt and clay particles. The road network increased surface runoff generation across the study area.
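For context, the standard SCS-CN relation behind such a Curve Number adjustment, in its textbook form (not a quotation of the SWAT+ source code):

```latex
% Direct runoff Q from precipitation P via the SCS curve number method:
\[
  Q = \frac{(P - I_a)^2}{P - I_a + S}, \quad P > I_a, \qquad
  S = \frac{25400}{CN} - 254 \ \text{[mm]}, \qquad I_a \approx 0.2\,S,
\]
% where I_a is the initial abstraction and S the potential maximum
% retention. A higher CN (e.g. after soil compaction) lowers S and thus
% raises the simulated surface runoff.
```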
Damaged areas under stand regeneration were simulated with an artificial model within a sub-catchment, assuming three-year-old tree saplings over a development period of 10 years, and compared with mature stands (30 to 80 years) with respect to specific water balance components. The simulation suggested that, in the absence of canopy cover, the hydrologically delaying effect of the stands is impaired, which favours the formation of surface runoff and promotes a slightly higher deep percolation. Hydrological differences between the closed canopy of the mature stands and young stands with near-open-field precipitation conditions were governed by the dominant factors of atmospheric evaporative demand, precipitation amounts and degree of canopy cover. The less developed the canopy of regenerating stands compared with mature stands, the higher the atmospheric evaporative demand and the lower the incoming precipitation, the greater the hydrological difference between the stand types.
Improvement measures for decentralised flood protection should therefore take into account critical source areas for runoff generation in forests (CSAs). The high sensitivity and vulnerability of forests to deteriorating ecosystem conditions suggest that preserving their complex fabric and intact interrelationships, particularly under the challenge of climate change, requires carefully adapted protection measures, efforts to identify CSAs, and the preservation and restoration of hydrological continuity in forest stands.
Traditional workflow management systems support process participants in fulfilling business tasks through guidance along a predefined workflow model.
Flexibility has gained a lot of attention in recent decades through a shift from mass production to customization. Various approaches to workflow flexibility exist that either require extensive knowledge acquisition and modelling effort or an active intervention during execution and re-modelling of deviating behaviour. Flexibility by deviation aims to compensate for both of these disadvantages by allowing alternative, unforeseen execution paths at run time without demanding that the process participant adapt the workflow model. However, the implementation of this approach has received little research attention so far.
This work proposes a novel approach to flexibility by deviation. The approach aims at supporting process participants during the execution of a workflow by suggesting work items based on predefined strategies or experiential knowledge, even in case of deviations. The developed concepts combine two established methods from the field of artificial intelligence: constraint satisfaction problem solving and process-oriented case-based reasoning. The approach mainly consists of a constraint-based workflow engine in combination with case-based deviation management. The declarative representation of workflows through constraints allows for implicit flexibility and a simple way to restore consistency in case of deviations. Furthermore, the combined model, integrating procedural with declarative structures through a transformation function, increases the capabilities for flexibility. For an adequate handling of deviations, the methodology of case-based reasoning fits well through its premise that similar problems have similar solutions. Thus, previously made experiences are transferred to the problems currently regarded, under the assumption that a similar deviation has been handled successfully in the past.
Necessary foundations from the field of workflow management with a focus on flexibility are presented first.
As formal foundation, a constraint-based workflow model was developed that allows for a declarative specification of foremost sequential dependencies of tasks. Procedural and declarative models can be combined in the approach, as a transformation function was specified that converts procedural workflow models to declarative constraints.
One main component of the approach is the constraint-based workflow engine that utilizes this declarative model as input for a constraint solving algorithm. This algorithm computes the worklist, which is proposed to the process participant during workflow execution. With predefined deviation handling strategies that determine how the constraint model is modified in order to restore consistency, the support is continuous even in case of deviations.
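A minimal sketch of such a worklist computation and one conceivable deviation-handling strategy (illustrative only; the thesis' engine, constraint model and strategies are richer than this, and all task names are invented):

```python
# Toy constraint-based worklist: precedence constraints (a, b) mean
# "a must be completed before b may start".

def worklist(tasks, constraints, done):
    """Tasks not yet done whose predecessors are all completed."""
    return sorted(t for t in tasks if t not in done
                  and all(a in done for (a, b) in constraints if b == t))

def drop_violated(constraints, done_sequence):
    """A simple deviation-handling strategy: discard constraints that the
    observed execution order already violates, restoring consistency."""
    pos = {t: i for i, t in enumerate(done_sequence)}
    return [(a, b) for (a, b) in constraints
            if not (a in pos and b in pos and pos[a] > pos[b])]

tasks = {"record", "assess", "notify", "fix", "close"}
constraints = [("record", "assess"), ("assess", "notify"),
               ("notify", "fix"), ("fix", "close")]

# Normal execution: after "record", only "assess" is enabled.
print(worklist(tasks, constraints, {"record"}))   # ['assess']

# Deviation: "notify" was executed before "assess"; the violated
# constraint is dropped and the engine can continue to propose work.
done_seq = ["record", "notify", "assess"]
relaxed = drop_violated(constraints, done_seq)
print(worklist(tasks, relaxed, set(done_seq)))    # ['fix']
```

The point of the sketch is the division of labour: the solver derives the worklist from the declarative model, and a deviation only triggers a local modification of that model rather than a re-modelling of the workflow.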
The second major component of the proposed approach constitutes the case-based deviation management, which aims at improving the support of process participants on the basis of experiential knowledge. For the retrieve phase, a sophisticated similarity measure was developed that integrates specific characteristics of deviating workflows and combines several sequence similarity measures. Two alternative methods for the reuse phase were developed, a null adaptation and a generative adaptation. The null adaptation simply proposes tasks from the most similar workflow as work items, whereas the generative adaptation modifies the constraint-based workflow model based on the most similar workflow in order to re-enable the constraint-based workflow engine to suggest work items.
The experimental evaluation of the approach consisted of a simulation of several types of process participants in the exemplary domain of deficiency management in construction. The results showed high utility values and promising potential for investigating the transfer to other domains and the applicability in practice, which is part of future work.
Concluding, the contributions are summarized and research perspectives are pointed out.
Energy transport networks are one of the most important infrastructures for the planned energy transition. They form the interface between energy producers and consumers, and their features make them good candidates for the tools that mathematical optimization can offer. Nevertheless, the operation of energy networks comes with two major challenges. First, the nonconvexity of the equations that model the physics in the network renders the resulting problems extremely hard to solve for large-scale networks. Second, the uncertainty associated with the behavior of the different agents involved, the production of energy, and the consumption of energy makes the resulting problems hard to solve if a representative description of uncertainty is to be considered.
In this cumulative dissertation we study adaptive refinement algorithms designed to cope with the nonconvexity and stochasticity of equations arising in energy networks. Adaptive refinement algorithms approximate the original problem by sequentially refining the model of a simpler optimization problem. More specifically, in this thesis, the focus of the adaptive algorithm is on adapting the discretization and description of a set of constraints.
In the first part of this thesis, we propose a generalization of the different adaptive refinement ideas that we study. We sequentially describe model catalogs, error measures, marking strategies, and switching strategies that are used to set up the adaptive refinement algorithm. Afterward, the effect of the adaptive refinement algorithm on two energy network applications is studied. The first application treats the stationary operation of district heating networks. Here, the strength of adaptive refinement algorithms for approximating the ordinary differential equation that describes the transport of energy is highlighted. We introduce the resulting nonlinear problem, consider network expansion, and obtain realistic controls by applying the adaptive refinement algorithm. The second application concerns quantile-constrained optimization problems and highlights the ability of the adaptive refinement algorithm to cope with large scenario sets via clustering. We introduce the resulting mixed-integer linear problem, discuss generic solution techniques, make the link with the generalized framework, and measure the impact of the proposed solution techniques.
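The generalized loop of model catalogs, error measures, marking strategies and switching strategies described above can be sketched in a toy setting (purely illustrative: here the "model catalog" is the family of piecewise-linear interpolants of a 1-D function, the error measure is the midpoint defect, and all names are made up):

```python
# Toy adaptive refinement: bisect the cell with the largest
# interpolation defect until the defect is below a tolerance.

def refine(f, a, b, tol, max_iter=50):
    """Adaptively bisect [a, b] until the piecewise-linear interpolant
    of f matches f at all cell midpoints within tol."""
    cells = [(a, b)]
    for _ in range(max_iter):
        def defect(cell):
            # Error measure: interpolation defect at the cell midpoint.
            lo, hi = cell
            mid = 0.5 * (lo + hi)
            return abs(0.5 * (f(lo) + f(hi)) - f(mid))
        worst = max(cells, key=defect)   # marking strategy: worst cell
        if defect(worst) <= tol:         # switching strategy: stop
            break
        lo, hi = worst
        mid = 0.5 * (lo + hi)
        cells.remove(worst)
        cells += [(lo, mid), (mid, hi)]  # refinement step
    return sorted(cells)

cells = refine(lambda x: x * x, 0.0, 1.0, tol=1e-3)
print(len(cells))  # 16
```

In the thesis the refined object is a set of constraints inside an optimization problem rather than a plain interpolant, but the loop structure — approximate, measure, mark, refine or stop — is the same.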
The second part of this thesis assembles the papers that inspired the contents of the first part of this thesis. Hence, they describe in detail the topics that are covered and will be referenced throughout the first part.
Despite the declining population, municipal housing policy measures and the pandemic, rents in Göttingen continue to rise. People on low incomes in particular still face great difficulties in finding affordable housing in Göttingen. In this housing atlas we trace the development of asking rents. We also show that the rental housing market is segmented into submarkets, and we provide approaches for identifying them. Our aim is to give those active in urban politics, and other interested readers, material with which to assess the city's housing policy.
THE NONLOCAL NEUMANN PROBLEM
(2023)
Instead of presuming only local interaction, we assume nonlocal interactions. By doing so, mass at a point in space does not only interact with an arbitrarily small neighborhood surrounding it, but it can also interact with mass far away. Thus, mass jumping from one point to another is also a possibility we can consider in our models. So, if we consider a region in space, this region interacts in a local model at most with its closure, while in a nonlocal model it may interact with the whole space. Therefore, in the formulation of nonlocal boundary value problems, enforcing boundary conditions on the topological boundary may not suffice. Furthermore, choosing the complement as the nonlocal boundary may work for Dirichlet boundary conditions, but in the case of Neumann boundary conditions this may lead to an overfitted model.
In this thesis, we introduce a nonlocal boundary and study the well-posedness of a nonlocal Neumann problem. We present sufficient assumptions which guarantee the existence of a weak solution. As in a local model, our weak formulation is derived from an integration by parts formula. However, we also study a different weak formulation where the nonlocal boundary conditions are incorporated into the nonlocal diffusion-convection operator.
After studying the well-posedness of our nonlocal Neumann problem, we consider some applications of this problem. For example, we take a look at a system of coupled Neumann problems and analyze the difference between a local coupled Neumann problem and a nonlocal one. Furthermore, we let our Neumann problem be the state equation of an optimal control problem, which we then study. We also add a time component to our Neumann problem and analyze the resulting nonlocal parabolic evolution equation.
As mentioned before, in a local model mass at a point in space only interacts with an arbitrarily small neighborhood surrounding it. We analyze what happens if we consider a family of nonlocal models in which the interaction horizon shrinks so that, in the limit, mass at a point in space again only interacts with an arbitrarily small neighborhood surrounding it.
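For orientation, one common form of such nonlocal objects found in the literature (the precise operator, kernel assumptions and nonlocal boundary set used in the thesis may differ):

```latex
% Nonlocal diffusion operator with interaction kernel \gamma on a domain \Omega:
\[
  \mathcal{L}u(x) \;=\; \int_{\mathbb{R}^n} \big(u(x) - u(y)\big)\,\gamma(x,y)\, dy ,
\]
% and a nonlocal analogue of the Neumann (flux) condition, imposed on a
% nonlocal boundary set \mathcal{N}(\Omega) instead of \partial\Omega:
\[
  \int_{\Omega} \big(u(x) - u(y)\big)\,\gamma(x,y)\, dy \;=\; g(x),
  \qquad x \in \mathcal{N}(\Omega).
\]
```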
Survey data can be viewed as incomplete or partially missing from a variety of perspectives and there are different ways of dealing with this kind of data in the prediction and the estimation of economic quantities. In this thesis, we present two selected research contexts in which the prediction or estimation of economic quantities is examined under incomplete survey data.
These contexts are first the investigation of composite estimators in the German Microcensus (Chapters 3 and 4) and second extensions of multivariate Fay-Herriot (MFH) models (Chapters 5 and 6), which are applied to small area problems.
Composite estimators are estimation methods that take into account the sample overlap in rotating panel surveys such as the German Microcensus in order to stabilise the estimation of the statistics of interest (e.g. employment statistics). Due to the partial sample overlaps, information from previous samples is only available for some of the respondents, so the data are partially missing.
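A composite estimator of the simplest textbook kind can be sketched as follows (a hedged illustration with invented numbers and smoothing constant, not the Microcensus estimator itself): the current direct estimate is blended with the previous composite estimate updated by a change measured on the overlapping sample part.

```python
# Toy composite estimator for a rotating panel (generic textbook form).

def composite(direct_now, composite_prev, overlap_now, overlap_prev, k=0.5):
    """Blend the current direct estimate with the previous composite
    estimate, updated by the change measured on the sample overlap.
    k controls how much weight the updated previous estimate gets."""
    change = overlap_now - overlap_prev
    return (1 - k) * direct_now + k * (composite_prev + change)

# Hypothetical employment totals (in thousands).
est = composite(direct_now=402.0, composite_prev=395.0,
                overlap_now=205.0, overlap_prev=200.0, k=0.5)
print(est)  # 401.0
```

Because the change is estimated only on respondents present in both waves, exactly the partially missing overlap information drives the stabilisation.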
MFH models are model-based estimation methods that work with aggregated survey data in order to obtain more precise estimation results for small area problems compared to classical estimation methods. In these models, several variables of interest are modelled simultaneously. The survey estimates of these variables, which are used as input in the MFH models, are often partially missing. If the domains of interest are not explicitly accounted for in a sampling design, the sizes of the samples allocated to them can, by chance, be small. As a result, it can happen that either no estimates can be calculated at all or that the estimated values are not published by statistical offices because their variances are too large.
Coastal erosion describes the displacement of land caused by destructive sea waves, currents or tides. Due to global climate change and associated phenomena such as melting polar ice caps and changing ocean current patterns, which result in rising sea levels or increased current velocities, the need for countermeasures is continuously increasing. Today, major efforts have been made to mitigate these effects using groins, breakwaters and various other structures.
This thesis develops a novel approach to this problem by applying shape optimization to the obstacles. For this reason, the results of this thesis always involve the following three distinct aspects:
The selected wave propagation model, i.e. the modeling of wave propagation towards the coastline, using various wave formulations ranging from steady to unsteady descriptions, described from the Lagrangian or Eulerian viewpoint with all its specialties. More precisely, in the Eulerian setting a steady Helmholtz equation in the form of a scattering problem is investigated first, followed by the shallow water equations in classical form, equipped with porosity, sediment portability and further subtleties. Secondly, in a Lagrangian framework the Lagrangian shallow water equations form the center of interest.
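For reference, the two Eulerian model classes mentioned above in their textbook forms (the porosity and sediment extensions treated in the thesis come on top of these):

```latex
% Steady Helmholtz equation for the wave field u with wave number k:
\[
  \Delta u + k^2 u = 0 \quad \text{in } \Omega,
\]
% and the shallow water equations for water height h and velocity v,
% with gravitational acceleration g and bathymetry b:
\[
  \partial_t h + \nabla \cdot (h v) = 0, \qquad
  \partial_t (h v) + \nabla \cdot \Big( h\, v \otimes v
    + \tfrac{1}{2}\, g h^2 I \Big) = -\, g h \nabla b .
\]
```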
The chosen discretization: depending on the nature and peculiarity of the constraining partial differential equation, we choose between finite elements in conjunction with a continuous or a discontinuous Galerkin method for investigations in the Eulerian description. In addition, the Lagrangian viewpoint lends itself to mesh-free, particle-based discretizations, where smoothed particle hydrodynamics is used.
The method for shape optimization with respect to the obstacle's shape over an appropriate cost function, constrained by the solution of the selected wave propagation model. In this sense, we rely on a differentiate-then-discretize approach for free-form shape optimization in the Eulerian set-up, and reverse the order in the Lagrangian computations.
Behavioural traces from interactions with digital technologies are diverse and abundant. Yet their capacity for theory-driven research has still to be established. In the present cumulative dissertation project, I deliberate on the caveats and potentials of digital behavioural trace data in behavioural and social science research. One use case is online radicalisation research. The three studies included set out to discern the state of the art of methods and constructs employed in radicalisation research at the intersection of traditional methods and digital behavioural trace data. Firstly, based on a systematic literature review of empirical work, I display the prevalence of digital behavioural trace data across different research strands and discern determinants and outcomes of radicalisation constructs. Secondly, I extract hypotheses and constructs from this literature review and integrate them into a framework from network theory. This graph of hypotheses, in turn, makes the relative importance of theoretical considerations explicit. One implication of visualising the assumptions in the field is to systematise bottlenecks for the analysis of digital behavioural trace data and to provide the grounds for the genesis of new hypotheses. Thirdly, I provide a proof of concept for integrating a theoretical framework from conspiracy theory research (as a specific form of radicalisation) with digital behavioural traces. I argue for marrying theoretical assumptions derived from temporal signals of posting behaviour with semantic meaning from textual content, resting on a framework from evolutionary psychology. In the light of these findings, I conclude by discussing important potential biases at different stages of the research cycle and practical implications.
No Longer Printing the Legend: The Aporia of Heteronormativity in the American Western (1903-1969)
(2023)
This study critically investigates the U.S.-American Western and its construction of sexuality and gender, revealing that the heteronormative matrix that is upheld and defended in the genre is consistently preceded by the exploration of alternative sexualities and of ways to think gender beyond the binary. The endeavor to naturalize heterosexuality seems to be baked into the formula of the U.S.-Western. However, as I show in this study, this endeavor relies on an aporia, because the U.S.-Western can only ever attempt to naturalize gender by constructing it first, hence inevitably and simultaneously constructing evidence that supports the opposite: the unnaturalness and contingency of gender and sexuality.
My study relies on the works of Raewyn Connell, Pierre Bourdieu, and Judith Butler, and amalgamates in its methodology established approaches from film and literary studies (i.e., close readings) with a Foucauldian understanding of discourse and discourse analysis, which allows me to relate individual texts to the cultural, socio-political and economic contexts that invariably informed the production and reception of any filmic text. In an analysis of 14 U.S.-Westerns (excluding three excursions) that appeared between 1903 and 1969, I give ample and minute narrative and film-aesthetic evidence to reveal the complex and contradictory construction of gender and sexuality in the U.S.-Western, aiming to reveal both the normative power of those categories and their structural instability and inconsistency.
This study proves that the Western up until 1969 did not find a stable pattern to represent the gender binary. The U.S.-Western is not necessarily always looking to confirm or stabilize governing constructs of (gendered) power; however, it invariably explores and negotiates their legitimacy. Heterosexuality and male hegemony are never natural, self-evident, incontestable, or preordained. Quite the contrary: the U.S.-Western repeatedly – and in a surprisingly diverse and versatile way – reveals the illogical constructedness of the heteronormative matrix.
My study therefore offers a fresh perspective on the genre and shows that the critical exploration and negotiation of the legitimacy of heteronormativity as a way to organize society is constitutive for the U.S.-Western. It is the inquiry – not necessarily the affirmation – of the legitimacy of this model that gives the U.S.-Western its ideological currency and significance as an artifact of U.S.-American popular culture.
Non-probability sampling is a topic of growing relevance, especially due to its occurrence in the context of new emerging data sources like web surveys and Big Data.
This thesis addresses statistical challenges arising from non-probability samples, where unknown or uncontrolled sampling mechanisms raise concerns in terms of data quality and representativity.
Various methods to quantify and reduce the potential selectivity and biases of non-probability samples in estimation and inference are discussed. The thesis introduces new forms of prediction and weighting methods, namely
a) semi-parametric artificial neural networks (ANNs) that integrate B-spline layers with optimal knot positioning in the general structure and fitting procedure of artificial neural networks, and
b) calibrated semi-parametric ANNs that determine weights for non-probability samples by integrating an ANN as response model with calibration constraints for totals, covariances and correlations.
Custom-made computational implementations are developed for fitting (calibrated) semi-parametric ANNs by means of stochastic gradient descent, BFGS and sequential quadratic programming algorithms.
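To illustrate the B-spline building block that such a layer rests on, here is a generic Cox-de Boor evaluation of a B-spline basis (an assumption-laden sketch: the knot vector, degree and the wiring into a network layer are invented for illustration and are not the thesis' implementation):

```python
# Generic Cox-de Boor recursion for B-spline basis functions.

def bspline_basis(i, d, knots, x):
    """Value of the i-th B-spline basis function of degree d at x."""
    if d == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + d] != knots[i]:
        left = (x - knots[i]) / (knots[i + d] - knots[i]) \
               * bspline_basis(i, d - 1, knots, x)
    if knots[i + d + 1] != knots[i + 1]:
        right = (knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, d - 1, knots, x)
    return left + right

# A degree-2 basis on a clamped knot vector; basis values like these
# could feed a linear output layer, giving a "B-spline layer" its
# flexibility while keeping the model semi-parametric.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
n_basis = len(knots) - 2 - 1      # len(knots) - degree - 1 = 5
features = [bspline_basis(i, 2, knots, 1.5) for i in range(n_basis)]
print([round(f, 3) for f in features])  # [0.0, 0.125, 0.75, 0.125, 0.0]
```

Note the partition-of-unity property: the basis values sum to one inside the knot range, which keeps the fitted spline locally controlled.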
The performance of all the discussed methods is evaluated and compared for a range of non-probability sampling scenarios in a Monte Carlo simulation study as well as in an application to a real non-probability sample, the WageIndicator web survey.
Potentials and limitations of the different methods for dealing with the challenges of non-probability sampling under various circumstances are highlighted. It is shown that the best strategy for using non-probability samples heavily depends on the particular selection mechanism, research interest and available auxiliary information.
Nevertheless, the findings show that existing as well as newly proposed methods can be used to ease or even fully counterbalance the issues of non-probability samples and highlight the conditions under which this is possible.
Modern decision making in the digital age is highly driven by the massive amount of data collected from different technologies and thus affects both individuals and economic businesses. The benefit of using these data and turning them into knowledge requires appropriate statistical models that describe the underlying observations well. Imposing a certain parametric statistical model goes along with the need to find optimal parameters such that the model describes the data best. This often results in challenging mathematical optimization problems with respect to the model's parameters, which potentially involve covariance matrices. Positive definiteness of covariance matrices is required for many advanced statistical models, and enforcing these constraints in standard Euclidean nonlinear optimization methods often results in a high computational effort. As Riemannian optimization techniques have proved efficient at handling difficult matrix-valued geometric constraints, we consider optimization over the manifold of positive definite matrices to estimate parameters of statistical models. The statistical models treated in this thesis assume that the underlying data sets used for parameter fitting have a clustering structure, which results in complex optimization problems. This motivates the use of the intrinsic geometric structure of the parameter space. In this thesis, we analyze the appropriateness of Riemannian optimization over the manifold of positive definite matrices for two advanced statistical models. We establish important problem-specific Riemannian characteristics of the two problems and demonstrate the importance of exploiting the Riemannian geometry of covariance matrices in numerical studies.
Even though substantial research on Cauchy transforms has been done, there are still many open questions. For example, in the case of representation theorems, i.e. the question when a function can be represented as a Cauchy transform, there is 'still no completely satisfactory answer' ([9], p. 84). There are characterizations for measures on the circle, as presented in the monograph [7], and for general compactly supported measures on the complex plane, as presented in [27]. However, there seems to exist no systematic treatment of the Cauchy transform as an operator on $L_p$ spaces and weighted $L_p$ spaces on the real axis.
This is the point where this thesis comes in: we are interested in developing several characterizations of the representability of a function by Cauchy transforms of $L_p$ functions. Moreover, we will attack the issue of integrability of Cauchy transforms of functions and measures, a topic which has only partly been explored (see [43]). We will develop different approaches involving Fourier transforms and potential theory and investigate sufficient conditions and characterizations.
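For orientation, one common normalization of the central objects (conventions differ across the cited literature, so this is a hedged sketch rather than the thesis' exact definitions):

```latex
% Cauchy transform of f on the real axis, for z \notin \mathbb{R}:
\[
  (Cf)(z) \;=\; \frac{1}{2\pi i}\int_{\mathbb{R}} \frac{f(t)}{t - z}\, dt .
\]
% With the Fourier convention \hat f(\xi) = \int_{\mathbb{R}} f(t)\, e^{-i t \xi}\, dt,
% one obtains for \operatorname{Im} z > 0 the link
\[
  (Cf)(z) \;=\; \frac{1}{2\pi}\int_{0}^{\infty} \hat f(\xi)\, e^{i z \xi}\, d\xi ,
\]
% which explains the close connection between Cauchy and
% Fourier(-Laplace) transforms used throughout.
```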
For our purposes, we shall need some notation and the concept of Hardy spaces which will be part of the preliminary Chapter 1. Moreover, we introduce Fourier transforms and their complex analogue, namely Fourier-Laplace transforms. This will be of extraordinary usage due to the close connection of Cauchy and Fourier(-Laplace) transforms.
In the second chapter we begin our research with a discussion of the Cauchy transformation on the classical (unweighted) $L_p$ spaces. We start with the boundary behavior of Cauchy transforms, including an adapted version of the Sokhotski-Plemelj formula. This result will turn out helpful for the determination of the image of the Cauchy transformation of $L_p(\R)$ for $p\in(1,\infty).$ The cases $p=1$ and $p=\infty$ play special roles here, which justifies treating them in separate sections. For $p=1$ we will involve the real Hardy space $H_{1}(\R),$ whereas the case $p=\infty$ shall be attacked by an approach incorporating intersections of Hardy spaces and certain subspaces of $L_{\infty}(\R).$
The third chapter prepares the ground for the study of the Cauchy transformation on subspaces of $L_{p}(\R).$ We give a short overview of the basic facts about Cauchy transforms of measures and then proceed to Cauchy transforms of functions with support in a closed set $X\subset\R.$ Our goal is to build up the main theory on which we can fall back in the subsequent chapters.
The fourth chapter deals with Cauchy transforms of functions and measures supported by an unbounded interval which is not the entire real axis. For convenience we restrict ourselves to the interval $[0,\infty).$ Bringing once again the Fourier-Laplace transform into play, we deduce complex characterizations for the Cauchy transforms of functions in $L_{2}(0,\infty).$ Moreover, we analyze the behavior of Cauchy transforms on several half-planes and use these results for a fairly general geometric characterization. In the second section of this chapter, we focus on Cauchy transforms of measures with support in $[0,\infty).$ In this context, we derive a reconstruction formula for these Cauchy transforms that holds under fairly general conditions, as well as results on their behavior on the left half-plane. We close this chapter with rather technical real-type conditions and characterizations for Cauchy transforms of functions in $L_p(0,\infty),$ based on an approach in [82].
The most common case of Cauchy transforms, those of compactly supported functions or measures, is the subject of Chapter 5. After complex and geometric characterizations originating from ideas similar to those in the fourth chapter, we adapt a functional-analytic approach from [27] to special measures, namely those with densities with respect to a given complex measure $\mu.$ The chapter closes with a study of the Cauchy transformation on weighted $L_p$ spaces. Here, we choose an ansatz via the finite Hilbert transform on $(-1,1).$
The sixth chapter is devoted to the integrability of Cauchy transforms. Since this topic has not yet received a comprehensive treatment in the literature, we start with an introduction to weighted Bergman spaces and general results on the interaction of the Cauchy transformation with these spaces. Afterwards, we combine the theory of Zen spaces with Cauchy transforms by once again using their connection with Fourier transforms. Here, we shall encounter general Paley-Wiener theorems of the recent past. Lastly, we attack the integrability of Cauchy transforms by means of potential theory. To this end, we derive a Fourier integral formula for the logarithmic energy in one and multiple dimensions and give applications to Fourier and hence Cauchy transforms.
Two appendices are annexed to this thesis. The first one covers important definitions and results from measure theory with a special focus on complex measures. The second appendix contains Cauchy transforms of frequently used measures and functions with detailed calculations.
The COVID-19 pandemic has affected schooling worldwide. In many places, schools closed for weeks or months, only part of the student body could be educated at any one time, or students were taught online. Previous research discloses the relevance of schooling for the development of cognitive abilities. We therefore compared the intelligence test performance of 424 German secondary school students in Grades 7 to 9 (42% female) tested after the first six months of the COVID-19 pandemic (i.e., 2020 sample) to the results of two highly comparable student samples tested in 2002 (n = 1506) and 2012 (n = 197). The results revealed substantially and significantly lower intelligence test scores in the 2020 sample than in both the 2002 and 2012 samples. We retested the 2020 sample after another full school year of COVID-19-affected schooling in 2021. We found mean-level changes of typical magnitude, with no signs of catching up to previous cohorts or further declines in cognitive performance. Perceived stress during the pandemic did not affect changes in intelligence test results between the two measurements.
COVID-19 was a harsh reminder that diseases are an aspect of human existence and mortality. It was also a live experiment in the formation and alteration of disease-related attitudes. Not only are these attitudes relevant to an individual’s self-protective behavior, but they also seem to be associated with social and political attitudes more broadly. One of these attitudes is social Darwinism, which holds that a pandemic benefits society by enabling nature “to weed out the weak”. In two countries (N = 300, N = 533), we introduce and provide evidence for the reliability, validity, and usefulness of the Disease-Related Social Darwinism (DRSD) Short Scale measuring this concept. Results indicate that DRSD is meaningfully related to other central political attitudes such as Social Dominance Orientation, authoritarianism, and neoliberalism. Importantly, the scale significantly predicted people’s protective behavior during the pandemic over and above general social Darwinism. Moreover, it significantly predicted conservative attitudes, even after controlling for Social Dominance Orientation.
This dissertation studies a novel type of branch-and-bound algorithm, which differs from classical branch-and-bound algorithms in that branching is performed by adding non-negative penalty terms to the objective function rather than by imposing additional constraints. The thesis proves the theoretical correctness of this algorithmic principle for several general classes of problems and evaluates the method on several concrete problem classes. For these problem classes (more precisely, monotone and non-monotone mixed-integer linear complementarity problems and mixed-integer linear programs), the thesis presents various problem-specific improvements and evaluates them numerically. Furthermore, the thesis compares the new method with several benchmark methods, with largely favorable results, and gives an outlook on further fields of application and open research questions.
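The branching idea can be illustrated with a toy sketch (my own minimal illustration under simplifying assumptions, not the dissertation's algorithm): for a one-dimensional convex objective minimized over the integers, each child node adds a non-negative penalty term to the objective instead of imposing the usual bound constraints x ≤ ⌊x*⌋ or x ≥ ⌈x*⌉. The penalized minimum remains a valid lower bound for the child, because the penalty vanishes on the child's feasible region.

```python
import math

def argmin_1d(f, lo, hi, iters=200):
    """Ternary search for the minimizer of a unimodal function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def penalty_bnb(f, lo, hi, rho=100.0, tol=1e-9):
    """Toy penalty-based branch-and-bound: minimize convex f(x) over integers in [lo, hi].

    Branching adds non-negative penalty terms to the node's objective instead of
    adding bound constraints; the penalized minimum is a valid lower bound since
    the penalty is zero on the child's feasible region.
    """
    best_val, best_x = math.inf, None
    stack = [f]                       # each node is a (penalized) objective function
    while stack:
        g = stack.pop()
        x = argmin_1d(g, lo, hi)      # solve the node's unconstrained relaxation
        val = g(x)
        if val >= best_val - tol:     # bound: node cannot beat the incumbent
            continue
        if abs(x - round(x)) < 1e-4:  # (near-)integral minimizer: feasible candidate
            cand = f(round(x))        # evaluate true objective at the integer point
            if cand < best_val:
                best_val, best_x = cand, round(x)
            continue
        fl = math.floor(x)
        # branch: penalize x > fl in one child, x < fl + 1 in the other
        stack.append(lambda t, g=g, fl=fl: g(t) + rho * max(0.0, t - fl))
        stack.append(lambda t, g=g, fl=fl: g(t) + rho * max(0.0, fl + 1 - t))
    return best_x, best_val
```

For instance, minimizing f(x) = (x − 2.6)² over the integers in [0, 5] returns x = 3; the child penalizing x > 2 is pruned because its penalized bound 0.36 exceeds the incumbent value 0.16.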
Intensively discussed strands of political science increasingly emphasize the importance of strategic capability for parties' successful conduct of election campaigns. The contradiction persists between the unified agency ascribed to parties under the conditions of the modern media society and their collective, heterogeneous diversity of interests and organization. Under these changing conditions, the parties' focus on vote maximization brings changes to their internal structures. Party scholars have thus long discussed the need for a fourth party type to succeed Kirchheimer's catch-all party (1965). Several of these approaches focus primarily on the parties' campaign orientation, while others target above all their increased need for strategy. The interactions with the requirements of the media society and the effects of societal change are likewise at the center of numerous studies. The work of Uwe Jun (2004), whose model of the professionalized media communication party also examines the organizational and programmatic aspects of party change, is a remarkable contribution to the party-change debate and, through its accompanying comparative case study, offers a practice-oriented assessment. The limited empirical relevance that Jun attests to his party type on the basis of his study of the SPD and New Labour between 1995 and 2005 is to be qualified in this thesis by demonstrating the party change of the two large German parties since reunification through an examination of their campaign capability. The plausibility of this fourth party type is tested in a longitudinal comparison of the Bundestag election campaigns of the SPD and CDU between 1990 and 2013.
In this way, the development of the strategic and campaign capability of the two major parties in the Bundestag election campaigns since 1990 is examined, and the results are compared and placed in the context of party change.
That parties, like their social and political environment, are undergoing change is beyond dispute and has long been a much-discussed subject of party research. The "decline debate", membership losses, non-voters and swing voters, disaffection with politics and parties, and the cartelization and institutionalization of parties are just some of the familiar catchwords in this context. Processes of individualization, globalization, and mediatization have changed the conditions under which parties must assert themselves. These changes in the external environment have a lasting effect on internal party life, organizational structures, and programmatic positions. Party research therefore began twenty years ago to discuss a typological successor to the catch-all party that takes this change into account. Various typological constructions, e.g. by Panebianco (1988), Katz and Mair (1995), or von Beyme (2000), capture important facets of the structural transformation of political parties and for the most part present plausible typological concepts that aptly characterize parties in their pursuit of votes and governmental power. Party research largely agrees that the era of the catch-all party has ended. As to its successor, however, none of the more recently proposed types has established itself as an authoritative model. On closer inspection, the features highlighted in the various proposals for a fourth party type (namely the professionalization of the party apparatus, the dominance of career politicians, étatization and cartel formation, and the fixation on the media) differ little from more recent model proposals and are therefore in need of supplementation rather than replacement.
The development-typological models, usually multidimensional, have set different emphases since the 1980s and offer many proposals for classification. One of the most recent approaches, by Uwe Jun (2004), which introduces the typological concept of the professionalized media communication party, makes clear that the discussion about the shape and characteristics of the fourth party type is still in full swing and open to further proposals; the "right" type has thus not yet been found. In his study, Jun remains committed to the central transformation questions concerning the design of party organization, ideological-programmatic orientation, and strategic-electoral voter orientation, and places these elements at the center of changing communication strategies. The component of structural strategic capability as the basis for developing such response strategies, hitherto somewhat neglected in party-typological work, is broached by Jun and is taken up and deepened in this thesis.
Notwithstanding the current party-change debate, the assumption that parties which increasingly submit to the operating logic of the mass media are able to meet its strategic demands durably through internal adaptation does not always seem accurate. The changes in communication strategies in response to society-wide transformation processes are at the heart of political actors' professionalization efforts, but remain limited in their effect. Although parties are aware of the need for (media) strategic capability and respond with professionalization, organizational and programmatic adaptation, and the formation of strategic centers, media-adequate strategic action is far from being a natural core competence of parties. Especially in campaign periods, which due to declining party attachment and increasing voter volatility have become the truly central moment of party democracy, media-adequate action becomes a key success factor. Strategic capability becomes the decisive prerequisite and, moreover, appears to be implemented more successfully by parties in these phases than in everyday politics. The electoral-strategic component receives little attention in Jun's typological construction and is therefore added as a complementary element in this thesis. Working hypothesis: the two major German parties look back on different histories of origin, which continue to shape the membership, issue, and organizational structures of the SPD and CDU and influence the parties' adaptation to a changing society.
Both parties attempt to respond to the changed social and political conditions and the resulting growing importance of political communication planning with a higher degree of strategic capability and communicative competence. This development has become all the more apparent since German reunification, as the binding power of the catch-all parties declined further after 1990, so that the parties increasingly see themselves forced to transform the "loosely coupled anarchies" into electorally strategic media communication parties. This fourth party type is characterized above all by the growing effort to achieve strategic capability, which is intended to improve the efficiency of the electoral orientation by means of organizational structures and programmatic adaptation. Overall, party-change research assumes that parties are increasingly converging; this study puts that assumption to the test. Taking different developmental paths into account, it can be presumed that the transformation processes of the SPD and CDU also proceed in different ways. While the SPD appears to have the greater need for strategy and the greater readiness to innovate, the CDU is presumed to have potentially more strategy-capable structures that facilitate the successful implementation of campaign strategies. Historical development and the aspect of historicity play a role in this context.
In addition, individual leading figures play a central role in intra-party transformation processes; for the emergence of strategy-capable structures they are often more important than institutionalized structures. The focus is on examining party change through the evolution of the parties' communication strategies in general and their strategic capability in election campaigns in particular, since these are treated as the central features of the fourth party type, following the professionalized media communication party (Jun 2004). Strategic capability is operationalized through the parties' handling of programmatic positions, organization, and external influences in election campaigns. The analysis examines both the actions of individual persons and the role of the party as an overall organization. The thesis consists of ten chapters in two blocks: a theoretical-conceptual part, which brings together the foundations and framework conditions central to the perspective of this thesis, and the subsequent examination of the conception and implementation of campaign communication since 1990. The field of political strategic capability recently introduced into the political science debate (Raschke/Tils 2007) has been linked, in extensive theoretical groundwork, to the implications of media communication and hence to the organizational and programmatic structural features of parties, but often without in-depth consideration of party change. That is what this thesis attempts. The discourse analysis of the concept of strategy in campaign situations is followed by a detailed presentation of the three operationalization parameters that lead to the determination of the party type.
The discussion of ideal-typical campaign models as a theoretical frame of reference for assessing the campaigns completes the theoretical-conceptual framework. The accounts of ideal-typical political strategy, often normative in the literature, are examined in the final part of the thesis for their practicability in everyday party politics, not only on the basis of isolated, unconnected events, but on the basis of election campaigns that recur periodically under comparable conditions. To this end, the initial situation and framework conditions of each campaign, as well as the previously outlined elements of professionalized campaigning, are presented for the SPD and CDU campaigns since 1990. From these juxtapositions, the longitudinal comparison of the strategic capability and communicative competence of the SPD and CDU is then derived.
The forensic application of phonetics relies on individuality in speech. In the forensic domain, individual patterns of verbal and paraverbal behavior are of interest which are readily available, measurable, consistent, and robust to disguise and to telephone transmission. This contribution is written from the perspective of the forensic phonetic practitioner and seeks to establish a more comprehensive concept of disfluency than previous studies have. A taxonomy of possible variables forming part of what can be termed disfluency behavior is outlined. It includes the “classical” fillers, but extends well beyond these, covering, among others, additional types of fillers as well as prolongations, but also the way in which fillers are combined with pauses. In the empirical section, the materials collected for an earlier study are re-examined and subjected to two different statistical procedures in an attempt to approach the issue of individuality. Recordings consist of several minutes of spontaneous speech by eight speakers on three different occasions. Beyond the established set of hesitation markers, additional aspects of disfluency behavior which fulfill the criteria outlined above are included in the analysis. The proportion of various types of disfluency markers is determined. Both statistical approaches suggest that these speakers can be distinguished at a level far above chance using the disfluency data. At the same time, the results show that it is difficult to pin down a single measure which characterizes the disfluency behavior of an individual speaker. The forensic implications of these findings are discussed.
Redox-driven biogeochemical cycling of iron plays an integral role in the complex process network of ecosystems, such as carbon cycling, the fate of nutrients and greenhouse gas emissions. We investigate Fe-(hydr)oxide (trans)formation pathways from rhyolitic tephra in acidic topsoils of South Patagonian Andosols to evaluate the ecological relevance of terrestrial iron cycling for this sensitive fjord ecosystem. Using bulk geochemical analyses combined with micrometer-scale-measurements on individual soil aggregates and tephra pumice, we document biotic and abiotic pathways of Fe released from the glassy tephra matrix and titanomagnetite phenocrysts. During successive redox cycles that are controlled by frequent hydrological perturbations under hyper-humid climate, (trans)formations of ferrihydrite-organic matter coprecipitates, maghemite and hematite are closely linked to tephra weathering and organic matter turnover. These Fe-(hydr)oxides nucleate after glass dissolution and complexation with organic ligands, through maghemitization or dissolution-(re)crystallization processes from metastable precursors. Ultimately, hematite represents the most thermodynamically stable Fe-(hydr)oxide formed under these conditions and physically accumulates at redox interfaces, whereas the ferrihydrite coprecipitates represent a so far underappreciated terrestrial source of bio-available iron for fjord bioproductivity. The insights into Fe-(hydr)oxide (trans)formation in Andosols have implications for a better understanding of biogeochemical cycling of iron in this unique Patagonian fjord ecosystem.
We use a novel sea-ice lead climatology for the winters of 2002/03 to 2020/21 based on satellite observations with 1 km2 spatial resolution to identify predominant patterns in Arctic wintertime sea-ice leads. The causes for the observed spatial and temporal variabilities are investigated using ocean surface current velocities and eddy kinetic energies from an ocean model (Finite Element Sea Ice–Ice-Shelf–Ocean Model, FESOM) and winds from a regional climate model (CCLM) and from ERA5 reanalysis. The presented investigation provides evidence for an influence of ocean bathymetry and associated currents on the mechanical weakening of sea ice and the accompanying occurrence of sea-ice leads with their characteristic spatial patterns. While the driving mechanisms for this observation are not yet understood in detail, the presented results can contribute to opening new hypotheses on ocean–sea-ice interactions. The individual contribution of ocean and atmosphere to regional lead dynamics is complex, and a deeper insight requires detailed mechanistic investigations in combination with considerations of coastal geometries. While the ocean influence on lead dynamics seems to act on a rather long-term scale (seasonal to interannual), the influence of wind appears to trigger sea-ice lead dynamics on shorter timescales of weeks to months and is largely controlled by individual events causing increased divergence. No significant pan-Arctic trends in wintertime leads can be observed.
Regional climate models are a valuable tool for the study of climate processes and climate change in polar regions, but the performance of the models has to be evaluated using experimental data. The regional climate model CCLM was used for simulations for the MOSAiC period with a horizontal resolution of 14 km (whole Arctic). CCLM was run in forecast mode (nested in ERA5) with a thermodynamic sea ice model. Sea ice concentration was taken from AMSR2 data (C15 run) and from a high-resolution data set (1 km) derived from MODIS data (C15MOD0 run). The model was evaluated using radiosonde data and data from different profiling systems, with a focus on the winter period (November–April). The comparison with radiosonde data showed very good agreement for temperature, humidity, and wind. A cold bias was present in the ABL for November and December, which was smaller for the C15MOD0 run. In contrast, there was a warm bias for lower levels in March and April, which was smaller for the C15 run. The effects of different sea ice parameterizations were limited to heights below 300 m. High-resolution lidar and radar wind profiles as well as temperature and integrated water vapor (IWV) data from microwave radiometers were used for comparisons with CCLM in case studies, which included low-level jets. Lidar wind profiles have many gaps but represent a valuable data set for model evaluation. Comparisons with IWV and temperature data of microwave radiometers show very good agreement.
Some of the largest firms in the DACH region (Germany, Austria, Switzerland), such as Aldi, Bosch, or Rolex, are (partially) owned by a foundation and/or a family office. Despite their growing importance, prior research has neglected to analyze the impact of these intermediaries on the firms they own. This dissertation closes this research gap by contributing, through four empirical quantitative studies, to a deeper understanding of two increasingly used family firm succession vehicles. The first study focuses on the heterogeneity of foundation-owned firms (FOFs) by applying a descriptive analysis to a sample of 169 German FOFs. The results indicate that the family, as a central stakeholder in a family foundation, fosters governance that promotes performance and growth. The second study examines the firm growth of 204 FOFs compared to matched non-FOFs from the DACH region. The findings suggest that FOFs grow significantly less in terms of sales but not with regard to employees. In addition, this negative effect appears to be stronger for the upper than for the middle or lower quantiles of the growth distribution. Study three adopts an agency perspective and investigates the acquisition behavior within the group of 164 FOFs. The results reveal that firms with charitable foundations as owners are more likely to undertake acquisitions and acquire targets that are geographically and culturally more distant than firms with a family foundation as owner. At the same time, they favor target companies from the same or related industries. Finally, the fourth study scrutinizes the capital structure of firms owned by single family offices (SFOs). Drawing on a hand-collected sample of 173 SFO-owned firms in the DACH region, the results show that SFO-owned firms display a higher long-term debt ratio than family-owned firms, indicating that SFO-owned firms follow trade-off theory, similar to private equity-owned firms.
Additional analyses show that this effect is stronger for SFOs that sold their original family firm. In conclusion, the outcomes of this dissertation furnish valuable research contributions and offer practical insights for families navigating such intermediaries or succession vehicles in the long term.
Family firms play a crucial role in the DACH region (Germany, Austria, Switzerland). They are characterized by a long tradition, a strong connection to the region, and a well-established network. However, family firms also face challenges, especially in finding a suitable successor. Wealthy entrepreneurial families are increasingly opting to establish single family offices (SFOs) as a solution to this challenge. An SFO takes on the management and protection of family wealth; its goal is to secure and grow the wealth over generations. In Germany alone, there are an estimated 350 to 450 SFOs, 70% of them established after the year 2000. However, research on SFOs is still in its early stages, particularly regarding their role as firm owners. This dissertation explores SFOs through four quantitative empirical studies. The first study provides a descriptive overview of 216 SFOs from the DACH region. Findings reveal that SFOs exhibit a preference for investing in established companies and real estate; notably, only about a third of SFOs invest in start-ups. Moreover, SFOs as a group are heterogeneous: categorizing them into three groups based on their relationship with the entrepreneurial family and the original family firm reveals significant differences in their asset allocation strategies. Subsequent studies in this dissertation leverage a hand-collected sample of 173 SFO-owned firms from the DACH region, matched with 684 family-owned firms from the same region. The second study, focusing on financial performance, indicates that SFO-owned firms tend to exhibit comparatively poorer financial performance than family-owned firms. However, when members of the SFO-owning family hold positions on the supervisory or executive board of the firm, there is a notable improvement.
The third study, concerning cash holdings, reveals that SFO-owned firms maintain a higher cash holding ratio compared to family-owned firms. Notably, this effect is magnified when the SFO has divested its initial family firms. Lastly, the fourth study regarding capital structure highlights that SFO-owned firms tend to display a higher long-term debt ratio than family-owned firms. This suggests that SFO-owned firms operate within a trade-off theory framework, like private equity-owned firms. Furthermore, this effect is stronger for SFOs that sold their original family firm. The outcomes of this research are poised to provide entrepreneurial families with a practical guide for effectively managing and leveraging SFOs as a strategic long-term instrument for succession and investment planning.
This thesis deals with REITs, their capital structure, and the effects that regulatory requirements might have on leverage. The data used result from a combination of Thomson Reuters data with hand-collected data on REIT status, regulatory information, and law variables. Overall, leverage is analysed across 20 countries for the years 2007 to 2018. Country-specific data, manually extracted from yearly EPRA reportings, are merged with company data in order to analyse the influence of different REIT restrictions on a firm's leverage.
Observing statistically significant differences in means between non-REITs and REITs motivates further investigation. My results show that variables beyond the traditional capital structure determinants impact the leverage of REITs. I find that explicit restrictions on leverage and on the distribution of profits have a significant effect on leverage decisions. This supports the notion that the restrictions from EPRA reportings are mandatory. I test various combinations of regulatory variables, which show significant effects on leverage both in isolation and in combination.
My main result is the following: firms that operate under regulation that specifies a maximum leverage ratio, in addition to mandatory high dividend distributions, have on average lower leverage ratios. Further, the existence of sanctions has a negative effect on REITs' leverage ratios, indicating that the regulation is binding. The analysis clearly shows that traditional capital structure determinants are of second-order relevance. This relationship highlights the impact of regulation on leverage and financing decisions. These effects are supported by further analysis. Results based on an event study show that REITs have statistically lower leverage ratios than non-REITs. A structural break model reveals the following effect: REITs increase their leverage ratios in the years prior to obtaining REIT status. The ex ante time frame is thus characterised by a bunkering and adaptation process, followed by the transformation at the event. Using an event study and a structural break model, the analysis highlights the dominance of country-specific regulation.