In psychological science communication, Plain Language Summaries (PLS; Kerwer et al., 2021) are becoming increasingly important. These are accessible, overview-style summaries that can potentially support laypeople's understanding and foster their trust in scientific research. This appears particularly relevant against the background of the replication crisis (Wingen et al., 2019) and of misinformation in online contexts (Swire-Thompson & Lazer, 2020). Two effects with positive impacts on trust, as well as their possible interaction, have so far received little attention in the context of PLS: on the one hand, the simple presentation of information (easiness effect; Scharrer et al., 2012), on the other hand, a maximally scientific style (scientificness effect; Thomm & Bromme, 2012). This dissertation aims to identify more precise components of both effects in the context of psychological PLS and to examine the influence of easiness and scientificness on trust. To this end, three articles reporting preregistered online studies with German-speaking samples are presented.
In the first article, two studies systematically varied different text elements of psychological PLS. Technical terms, information on operationalization, statistics, and the degree of structuring significantly influenced the easiness of the PLS as reported by laypeople. Building on this, the second article varied the easiness and scientificness of four PLS derived from peer-reviewed papers and asked laypeople about their trust in the texts and their authors. Initially, only a positive influence of scientificness on trust emerged, whereas, contrary to the hypotheses, the easiness effect did not appear. Exploratory analyses, however, suggested a positive influence of the easiness subjectively perceived by laypeople on their trust, as well as a significant interaction with perceived scientificness. These findings point to a mediating role of laypeople's subjective perception for both effects. In the final article, this hypothesis was tested using mediation analyses. Again, two PLS were presented, and both the scientificness of the text and that of the author were manipulated. The influence of higher scientificness on trust was mediated by the scientificness subjectively perceived by laypeople. In addition, cross-dimensional mediation effects were observed.
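As a hedged illustration of the kind of analysis used in the final article, the following minimal Python sketch estimates a simple indirect effect with a percentile bootstrap confidence interval; the variable names (manipulated scientificness, perceived scientificness, trust) and the simulated data are illustrative assumptions, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated illustrative data (assumption: not the study's actual data).
n = 500
x = rng.integers(0, 2, n).astype(float)      # manipulated scientificness (0/1)
m = 0.5 * x + rng.normal(size=n)             # subjectively perceived scientificness (mediator)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # trust (outcome)

def ols_slope(pred, outcome, covar=None):
    """Return the slope of `pred` in an OLS regression of `outcome`."""
    cols = [np.ones_like(pred), pred] + ([covar] if covar is not None else [])
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1]

def indirect_effect(x, m, y):
    a = ols_slope(x, m)           # path a: manipulation -> perception
    b = ols_slope(m, y, covar=x)  # path b: perception -> trust, controlling for x
    return a * b

# Percentile bootstrap CI for the indirect effect a*b.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
print("indirect effect:", round(indirect_effect(x, m, y), 3),
      "95% CI:", np.round(np.percentile(boot, [2.5, 97.5]), 3))
```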
This work thus goes beyond existing research in clarifying the boundary conditions of the easiness and scientificness effects. Theoretical implications for future definitions of easiness and scientificness, as well as practical consequences regarding different target groups of science communication and the influence of PLS on laypeople's decision making, are discussed.
This meta-scientific dissertation comprises three research articles that investigated the reproducibility of psychological research. Specifically, they focused on the reproducibility of eye-tracking research on the one hand, and on preregistration (i.e., the practice of publishing a study protocol before data collection or analysis) as one method to increase reproducibility on the other.
In Article I, it was demonstrated that eye-tracking data quality is influenced both by the eye-tracker used and by the specific task being measured. Distinct strengths and weaknesses were identified for three devices (Tobii Pro X3-120, GP3 HD, EyeLink 1000+) in an extensive test battery. Consequently, both the device and the specific task should be considered when designing new studies. Article II, meanwhile, focused on the current perception of preregistration in the psychological research community and on future directions for improving this practice. The survey showed that many researchers intended to preregister their research in the future and had overall positive attitudes toward preregistration. However, various obstacles that currently hinder preregistration were identified, which should be addressed to increase its adoption. These findings were supplemented by Article III, which took a closer look at one preregistration-specific tool: the PRP-QUANT Template. In a simulation trial and a survey, the template demonstrated high usability and emerged as a valuable resource to support researchers in using preregistration. Future revisions of the template could help to further facilitate this open science practice.
In this dissertation, the findings of the three articles are summarized and discussed regarding their implications and potential future steps that could be implemented to improve the reproducibility of psychological research.
Today, almost every modern computing device is equipped with a multicore processor capable of efficient concurrent and parallel execution of threads. This capability can be leveraged by concurrent programming, which is challenging for software developers for two reasons: first, it introduces a paradigm shift that requires a new way of thinking. Second, it can lead to issues that are unique to concurrent programs due to the non-deterministic, interleaved execution of threads. Consequently, debugging concurrency and related performance issues is a rather difficult and often tedious task. Developers still lack thread-aware programming tools that facilitate the understanding of concurrent programs. Ideally, these tools should be part of their daily working environment, which typically includes an Integrated Development Environment (IDE). In particular, the way source code is visually presented in traditional source-code editors does not convey much information on whether the source code is executed concurrently or in parallel in the first place.
With this dissertation, we pursue the main goal of facilitating and supporting the understanding and debugging of concurrent programs. To this end, we formulate and utilize a visualization paradigm that particularly includes the display of interactive glyph-based visualizations embedded in the source-code editor close to their corresponding artifacts (in-situ).
To facilitate the implementation of visualizations that comply with our paradigm as plugins for IDEs, we designed, implemented and evaluated a programming framework called CodeSparks. After presenting the design goals and the architecture of the framework, we demonstrate its versatility with a total of fourteen plugins realized by different developers using the CodeSparks framework (CodeSparks plugins). With focus group interviews, we empirically investigated how developers of the CodeSparks plugins experienced working with the framework. Based on the plugins, deliberate design decisions and the interview results, we discuss to what extent we achieved our design goals. We found that the framework is largely target programming-language independent and that it supports the development of plugins for a wide range of source-code-related tasks while hiding most of the details of the underlying plugin development API.
In addition, we applied our visualization paradigm to thread-related runtime data from concurrent programs to foster the awareness of source code being executed concurrently or in parallel. As a result, we developed and designed two in-situ thread visualizations, namely ThreadRadar and ThreadFork, with the latter building on the former. Both thread visualizations are based on a debugging approach, which combines statistical profiling, thread-aware runtime metrics, clustering of threads on the basis of these metrics, and finally interactive glyph-based in-situ visualizations. To address scalability issues of the ThreadRadar in terms of space required and the number of displayable thread clusters, we designed a revised thread visualization. This revision also involved the question of how many thread clusters k should be computed in the first place. To this end, we conducted experiments with the clustering of threads for artifacts from a corpus of concurrent Java programs that include real-world Java applications and concurrency bugs. We found that the maximum k on the one hand and the optimal k according to four cluster validation indices on the other hand rarely exceed three. However, occasionally thread clusterings with k > 3 are available and also optimal. Consequently, we revised both the clustering strategy and the visualization as parts of our debugging approach, which resulted in the ThreadFork visualization. Both in-situ thread visualizations, including their additional features that support the exploration of the thread data, are implemented in a tool called CodeSparks-JPT, i.e., as a CodeSparks plugin for IntelliJ IDEA.
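The clustering step of the debugging approach can be illustrated with a small, hedged Python sketch that groups threads by runtime metrics and chooses k via a validation index. The metric columns, the use of k-means, and the silhouette score as the index are assumptions for illustration; the actual clustering strategy and the four validation indices used in the thesis may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)

# Illustrative per-thread runtime metrics for one artifact (assumed columns,
# e.g. CPU time share and invocation count); not data from CodeSparks-JPT.
threads = rng.random((12, 2))

def cluster_threads(metrics, k_max=3):
    """Cluster threads and choose k by the silhouette score (one possible index)."""
    best_k, best_labels, best_score = 1, np.zeros(len(metrics), dtype=int), -1.0
    for k in range(2, min(k_max, len(metrics) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(metrics)
        score = silhouette_score(metrics, labels)
        if score > best_score:
            best_k, best_labels, best_score = k, labels, score
    return best_k, best_labels

k, labels = cluster_threads(threads)
print(f"chosen k = {k}, cluster sizes = {np.bincount(labels)}")
```

Capping k_max at three mirrors the empirical finding reported above that the optimal number of thread clusters rarely exceeds three.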
With various empirical studies, including anecdotal usage scenarios, a usability test, web surveys, hands-on sessions, questionnaires and interviews, we investigated quality aspects of the in-situ thread visualizations and their corresponding tools. First, by a demonstration study, we illustrated the usefulness of the ThreadRadar visualization in investigating and fixing concurrency bugs and a performance bug. This was confirmed by a subsequent usability test and interview, which also provided formative feedback. Second, we investigated the interpretability and readability of the ThreadFork glyphs as well as the effectiveness of the ThreadFork visualization through anonymous web surveys. While we have found that the ThreadFork glyphs are correctly interpreted and readable, it remains unproven that the ThreadFork visualization effectively facilitates understanding the dynamic behavior of threads that concurrently executed portions of source code. Moreover, the overall usability of CodeSparks-JPT is perceived as "OK, but not acceptable" as the tool has issues with its learnability and memorability. However, all other usability aspects of CodeSparks-JPT that were examined are perceived as "above average" or "good".
Our work supports software-engineering researchers and practitioners in flexibly and swiftly developing novel glyph-based visualizations that are embedded in the source-code editor. Moreover, we provide in-situ thread visualizations that foster the awareness of source code being executed concurrently or in parallel. These in-situ thread visualizations can, for instance, be adapted, extended and used to analyze other use cases or to replicate the results. Through empirical studies, we have gradually shaped the design of the in-situ thread visualizations through data-driven decisions, and evaluated several quality aspects of the in-situ thread visualizations and the corresponding tools for their utility in understanding and debugging concurrent programs.
Representation Learning techniques play a crucial role in a wide variety of Deep Learning applications. From Language Generation to Link Prediction on Graphs, learned numerical vector representations often build the foundation for numerous downstream tasks.
In Natural Language Processing, word embeddings are contextualized and depend on their current context. This useful property reflects how words can have different meanings based on their neighboring words.
In Knowledge Graph Embedding (KGE) approaches, static vector representations are still the dominant approach. While this is sufficient for applications where the underlying Knowledge Graph (KG) mainly stores static information, it becomes a disadvantage when dynamic entity behavior needs to be modelled.
To address this issue, KGE approaches would need to model dynamic entities by incorporating situational and sequential context into the vector representations of entities. Analogous to contextualised word embeddings, this would allow entity embeddings to change depending on their history and current situational factors.
Therefore, this thesis describes how static KGE approaches can be transformed into contextualised dynamic approaches and how the specific characteristics of different dynamic scenarios need to be taken into consideration.
As a starting point, we conduct empirical studies that attempt to integrate sequential and situational context into static KG embeddings and investigate the limitations of the different approaches. In a second step, the identified limitations serve as guidance for developing a framework that enables KG embeddings to become truly dynamic, taking into account both the current situation and the past interactions of an entity. The two main contributions in this step are the introduction of the temporally contextualized Knowledge Graph formalism and the corresponding RETRA framework which realizes the contextualisation of entity embeddings.
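The following minimal Python sketch is not the RETRA framework; it only illustrates, under assumed shapes and randomly initialized weights, the general idea of conditioning a static entity embedding on sequential history and situational context. All names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 8

# Static entity/relation embeddings as in classical KGE (illustrative values only).
static_entity = {"user_42": rng.normal(size=DIM), "cafe": rng.normal(size=DIM)}
relation = {"visits": rng.normal(size=DIM)}

# Assumed projection matrices; a learned model would train these.
W_static = rng.normal(size=(DIM, DIM)) * 0.1
W_context = rng.normal(size=(DIM, DIM)) * 0.1

def contextualise(entity, history, situation):
    """Combine a static embedding with sequential history and situational features."""
    context = np.mean(history + [situation], axis=0)  # naive context pooling
    return np.tanh(W_static @ static_entity[entity] + W_context @ context)

# History: embeddings of previous interactions; situation: e.g. time-of-day features.
history = [relation["visits"] + static_entity["cafe"]]
situation = rng.normal(size=DIM)
dynamic_user = contextualise("user_42", history, situation)
print(dynamic_user.round(2))
```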
Finally, we demonstrate how situational contextualisation can be realized even in static environments, where all object entities are passive at all times.
For this, we introduce a novel task that requires the combination of multiple context modalities and their integration with a KG based view on entity behavior.
This dissertation focusses on research into the personality construct of action vs. state orientation. Derived from the Personality-Systems-Interaction Theory (PSI Theory), state orientation is defined as a low ability to self-regulate emotions and associated with many adverse consequences – especially under stress. Because of the high prevalence of state orientation, it is a very important topic to investigate factors that help state-oriented people to buffer these adverse consequences. Action orientation, in contrast, is defined as a high ability to self-regulate own emotions in a very specific way: through accessing the self. The present dissertation demonstrates this theme in five studies, using a total of N = 1251 participants with a wide age range, encompassing different populations (students, non-student population (people from the coaching and therapy sector), applying different operationalisations to investigate self-access as a mediator or an outcome variable. Furthermore, it is tested whether the popular technique of mindfulness - that is advertised as a potent remedy for bringing people closer to the self -really works for everybody. The findings show that the presumed remedy is rather harmful for state-oriented individuals. Finally, an attempt to ameliorate these alienating effects, the present dissertation attempts to find theory-driven, and easy-to-apply solution how mindfulness exercises can be adapted.
Differential equations are based on local interactions and yield solutions that necessarily possess a certain amount of regularity. There are various natural phenomena that are not well described by such local models. An important class of models that describe long-range interactions are the so-called nonlocal models, which are the subject of this work.
The nonlocal operators considered here are integral operators with a finite range of interaction and the resulting models can be applied to anomalous diffusion, mechanics and multiscale problems.
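As a hedged illustration (notation assumed; the thesis may use a different convention), a truncated nonlocal diffusion operator with interaction horizon delta typically takes the form

```latex
% Generic truncated nonlocal diffusion operator with interaction horizon \delta:
\mathcal{L}_\delta u(x) = 2 \int_{B_\delta(x)} \bigl(u(y) - u(x)\bigr)\, \gamma(x, y)\, \mathrm{d}y,
\qquad \gamma(x, y) = 0 \quad \text{whenever } |x - y| > \delta,
```

where gamma is a nonnegative interaction kernel and B_delta(x) is the ball of radius delta around x; the finite horizon delta is what makes the operator truncated.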
While the range of applications is vast, the applicability of nonlocal models can face problems such as the high computational and algorithmic complexity of fundamental tasks. One of them is the assembly of finite element discretizations of truncated, nonlocal operators.
The first contribution of this thesis is therefore an openly accessible, documented Python code that computes finite element approximations for nonlocal convection-diffusion problems with a truncated interaction horizon.
Another difficulty in the solution of nonlocal problems is that the discrete systems may be ill-conditioned, which complicates the application of iterative solvers. Thus, the second contribution of this work is the construction and study of a domain-decomposition-type solver that is inspired by substructuring methods for differential equations. The numerical results are based on the abstract framework of nonlocal subdivisions, which is introduced here and which can serve as a guideline for general nonlocal domain decomposition methods.
Both nationally and internationally, there is a growing demand for the digitalization of processes. The heterogeneity and complexity of the resulting systems make participation difficult for regular user groups, for example those without programming expertise or a background in information technology. Smart contracts are a case in point: their programming is complex, and because they are directly tied to an underlying cryptocurrency, any error immediately translates into monetary loss. This thesis presents an alternative protocol for cyber-physical contracts that is particularly well suited to human interaction and can also be understood by regular user groups. The focus is on the transparency of the agreements; neither a blockchain nor a digital currency based on one is used. Accordingly, the contract model of this thesis can be understood as a traceable link between two parties that securely connects the different systems and thereby promotes self-organization. This link can be executed automatically with computer support or carried out manually. In contrast to smart contracts, processes can thus be digitalized step by step. The agreements themselves can be used for communication, but also for legally binding contracts. The thesis situates the new concept among related approaches such as Ricardian and smart contracts and defines goals for the protocol, which are implemented in the form of a reference implementation. Both the protocol and the implementation are described in detail and complemented by an extension of the application that enables users in regions without a direct Internet connection to participate in these contracts. Furthermore, the evaluation considers the legal framework, the transfer of the protocol to smart contracts, and the performance of the implementation.
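The following minimal Python sketch is not the protocol developed in this thesis; it merely illustrates, under assumed primitives (Ed25519 signatures from the `cryptography` package, hypothetical parties and terms), how a transparent two-party agreement can be made verifiable without a blockchain or cryptocurrency.

```python
import json
from hashlib import sha256
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each party holds its own key pair (hypothetical parties, not from the thesis).
alice_key, bob_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

# The agreement is plain, human-readable data: transparency comes first.
agreement = {"parties": ["Alice", "Bob"],
             "terms": "Bob mows Alice's lawn every Saturday in July.",
             "valid_until": "2024-07-31"}
digest = sha256(json.dumps(agreement, sort_keys=True).encode()).digest()

# Both parties sign the digest; the signed record can be exchanged and stored
# by either side and checked manually or automatically, with no blockchain involved.
record = {"agreement": agreement,
          "signatures": {"Alice": alice_key.sign(digest),
                         "Bob": bob_key.sign(digest)}}

# Verification raises InvalidSignature if the agreement was tampered with.
alice_key.public_key().verify(record["signatures"]["Alice"], digest)
bob_key.public_key().verify(record["signatures"]["Bob"], digest)
print("agreement verified for both parties")
```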
Chinese and Western research on the relationship between Chinese culture and the Catholic Church usually concentrates on the Catholic Church in China before the prohibition of Christianity. The distinctive perspective of this thesis is to examine the changes in the relationship between the two from the end of the Ming dynasty to the first half of the 20th century. Before the prohibition, Catholic missionaries approached the Confucian scholars and combined Catholic doctrine with Confucianism in order to exert influence in the upper strata of Chinese society. After the prohibition, Catholic missionaries paid far less attention to their relationship with Chinese culture than their predecessors of the 17th and 18th centuries had. Some missionaries and Chinese Catholics wanted to change this situation and jointly promoted the founding of Fu Jen University, which placed great value on Chinese culture and reflected the relationship between the Catholic Church and Chinese culture at the beginning of the 20th century. The professors of the Department of Chinese and History made the greatest contribution to research on Chinese culture at the university. Compared with other major universities in Beijing, where Chinese literature held a central position in the Chinese departments, Fu Jen University placed more emphasis on the Chinese language and characters. At the beginning of the 20th century, women gained the right to higher education under the influence of the global feminist movement. By 1920, however, the Catholic universities had fallen decades behind the Protestant and non-denominational universities with regard to women's higher education. Fu Jen University improved this situation by admitting a large number of female students and offering them a wide range of subjects, including Chinese and history. Overall, the university can be regarded as a link between Catholicism and Chinese culture in the first half of the 20th century. It played an important role not only in researching and disseminating Chinese culture but also in extending the influence of the Catholic Church at that time.
Physically based distributed rainfall-runoff models, as the standard analysis tools for hydrological processes, have been used to simulate the water system in detail, including spatial patterns and temporal dynamics of hydrological variables and processes (Davison et al., 2015; Ek and Holtslag, 2004). In general, catchment models are parameterized with spatial information on soil, vegetation and topography. However, traditional approaches for evaluating hydrological model performance are usually motivated with respect to discharge data alone. This may cloud model realism and hamper understanding of catchment behavior. It is necessary to evaluate model performance with respect to internal hydrological processes within the catchment area as well as other components of the water balance, rather than only runoff discharge at the catchment outlet. In particular, a considerable amount of the dynamics in a catchment occurs in processes related to the interactions of water, soil and vegetation. Evapotranspiration, for instance, is one of those key interactive elements, and the parameterization of soil and vegetation in water balance modeling strongly influences the simulation of evapotranspiration. Specifically, to parameterize water flow in the unsaturated soil zone, the functional relationships that describe the soil water retention and hydraulic conductivity characteristics are important. To define these functional relationships, Pedo-Transfer Functions (PTFs) are commonly used in hydrological modeling. Choosing appropriate PTFs for the region under investigation is a crucial task in estimating the soil hydraulic parameters, but this choice is often made arbitrarily in a hydrological model, without evaluating the spatial and temporal patterns of evapotranspiration and soil moisture or the distribution and intensity of runoff processes. This may ultimately lead to implausible modeling results and possibly to incorrect decisions in regional water management. Therefore, reliable evaluation approaches are continually required to analyze the dynamics of the interacting hydrological processes and to predict future changes in the water cycle, which eventually contributes to sustainable environmental planning and decisions in water management.
Remarkable endeavors have been made in the development of modelling tools that provide insights into current and future hydrological patterns at different scales and their impacts on water resources and climate change (Doell et al., 2014; Wood et al., 2011). However, there is a need to strike a proper balance between parameter identifiability and the model's ability to realistically represent the response of the natural system. Tackling this issue entails additional information, which usually has to be elaborately assembled, for instance by mapping the dominant runoff generation processes in the area of interest, by retrieving the spatial patterns of soil moisture and evapotranspiration with remote sensing methods, and by evaluating at a scale commensurate with the hydrological model (Koch et al., 2022; Zink et al., 2018). The present work therefore aims to give insights into modeling approaches for simulating the water balance and to improve the soil and vegetation parameterization scheme in the hydrological model, with the goal of producing more reliable spatial and temporal patterns of evapotranspiration and runoff processes in the catchment.
An important contribution to the overall body of work is a book chapter included among the publications. The book chapter provides a comprehensive overview of the topic and valuable insights into understanding the water balance and its estimation methods.
Moreover, the first paper aimed to evaluate the hydrological model's behavior with respect to the contribution of various sources of information. To do so, a multi-criteria evaluation metric including soft and hard data was used to define constraints on the outputs of the 1-D hydrological model WaSiM-ETH. Applying this evaluation metric, we could identify the optimal soil and vegetation parameter sets that resulted in a "behavioral" forest stand water balance model. It was found that even if simulations of transpiration and soil water content are consistent with measured data, the dominant runoff generation processes or the total water balance might still be calculated incorrectly. Therefore, only an evaluation scheme that draws on different sources of data and embraces an understanding of the local controls of water loss through soil and plants allowed us to exclude unrealistic modeling outputs. The results suggested that we may need to question the generally accepted soil parameterization procedures that apply default parameter sets.
The second paper addresses this model evaluation obstacle by moving down to a small-scale catchment in Bavaria. Here, a methodology was introduced to analyze the sensitivity of the catchment water balance model to the choice of Pedo-Transfer Functions (PTFs). By varying the underlying PTFs in a calibrated and validated model, we could determine the resulting effects on the spatial distribution of soil hydraulic properties, the total water balance at the catchment outlet, and the spatial and temporal variation of the runoff components. Results revealed that the distribution of water in the hydrologic system differs significantly among the various PTFs. Moreover, the simulations of water balance components showed high sensitivity to the spatial distribution of soil hydraulic properties. It was therefore suggested that the choice of PTFs in hydrological modeling should be carefully tested by examining whether the spatio-temporal distributions of simulated evapotranspiration and runoff generation processes are reasonably represented.
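To make the role of PTFs concrete, the following hedged Python sketch evaluates a van Genuchten retention curve from soil hydraulic parameters of the kind a PTF might estimate from texture data; the parameter values are illustrative assumptions, not the output of any PTF used in the papers.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction head h (m) according to the
    van Genuchten (1980) retention model."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

# Hypothetical parameters as a PTF could derive them for a sandy-loam-like soil
# (illustrative values only; alpha is given in 1/m).
params = dict(theta_r=0.065, theta_s=0.41, alpha=7.5, n=1.89)

heads = np.array([0.01, 0.1, 1.0, 10.0, 150.0])  # suction heads in m
print(np.round(van_genuchten_theta(heads, **params), 3))
```

Different PTFs would yield different parameter sets for the same soil map unit, which is exactly the sensitivity examined in the second paper.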
Following up on the suggestions of the previous studies, the third paper focuses on evaluating the hydrological model by improving the spatial representation of dominant runoff processes (DRPs). This was implemented in a mesoscale catchment in southwestern Germany using the hydrological model WaSiM-ETH. To deal with the lack of adequate spatial observations for a rigorous spatial model evaluation, we made use of a reference soil hydrologic map available for the study area to discern the expected dominant runoff processes across a wide range of hydrological conditions. The model was parameterized by applying 11 PTFs and run with multiple synthetic rainfall events. To compare the simulated spatial patterns to the patterns derived from the digital soil map, a multiple-component spatial performance metric (SPAEF) was applied. The simulated DRPs showed a large variability with regard to land use, topography, applied rainfall rates, and the different PTFs, which strongly influence rapid runoff generation under wet conditions.
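A hedged Python sketch of the SPAEF metric as it is commonly defined in the literature (pattern correlation, coefficient-of-variation ratio, and histogram overlap of z-scored patterns); the exact formulation and preprocessing used in the paper may differ, and the random arrays below only stand in for simulated and reference patterns.

```python
import numpy as np

def spaef(sim, obs, bins=20):
    """SPAtial EFficiency metric for comparing two spatial patterns (flattened arrays)."""
    alpha = np.corrcoef(sim, obs)[0, 1]                                 # pattern correlation
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))  # CV ratio
    z_sim = (sim - np.mean(sim)) / np.std(sim)                          # z-scores remove units
    z_obs = (obs - np.mean(obs)) / np.std(obs)
    h_sim, edges = np.histogram(z_sim, bins=bins)
    h_obs, _ = np.histogram(z_obs, bins=edges)
    gamma = np.minimum(h_sim, h_obs).sum() / h_obs.sum()                # histogram intersection
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

rng = np.random.default_rng(0)
obs_pattern = rng.random(1000)                        # stand-in for the reference pattern
sim_pattern = obs_pattern + rng.normal(0, 0.1, 1000)  # stand-in for a simulated pattern
print(round(spaef(sim_pattern, obs_pattern), 3))
```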
The three published manuscripts advance model evaluation approaches that ultimately yield behavioral model outputs. This was achieved by obtaining information about the internal hydrological processes that lead to certain model behaviors, and about the function and sensitivity of some of the soil and vegetation parameters that primarily influence those internal processes in a catchment. Using this understanding of model reactions, and by setting multiple evaluation criteria, it was possible to identify which parameterization could lead to a behavioral model realization. This work thereby contributes to solving some of the issues (e.g., spatial variability and modeling methods) identified among the 23 unsolved problems in hydrology in the 21st century (Blöschl et al., 2019). The results obtained in the present work encourage further investigation toward a comprehensive model calibration procedure that considers multiple data sources simultaneously. This will enable new perspectives on current parameter estimation methods, which essentially focus on reproducing plausible spatio-temporal dynamics of the other hydrological processes within the watershed.
Social enterprises pursue at least two goals: fulfilling their social or ecological mission and meeting financial targets. Tensions can arise between these goals. If, within this field of tension, an enterprise repeatedly decides in favor of the financial goals, mission drift occurs: the prioritization of financial goals crowds out the social mission. Although the phenomenon has been observed repeatedly in practice and described in individual case studies, there has so far been little research on mission drift. The focus of this thesis is to close this research gap and to identify the triggers and drivers of mission drift in social enterprises. Particular attention is paid to behavioral economic theories and the mixed-gamble logic. According to this logic, every decision simultaneously involves gains and losses, so decision makers must weigh the fear of losses against the prospect of gains. The model is used to obtain a new theoretical perspective on the trade-off between social and financial goals and on mission drift. A conjoint experiment is used to generate data on the decision behavior of social entrepreneurs, centering on the trade-off between social and financial goals in different scenarios (crisis and growth situations). Using a purpose-built sample of 1,222 social enterprises from Germany, Austria, and Switzerland, 187 participants were recruited for the study. The results show that a crisis situation can be a trigger of mission drift in social enterprises, because in this scenario the greatest weight is given to financial goals. For a growth situation, in contrast, no such evidence was found. In addition, further factors can reinforce the financial orientation, namely the founder identities of the social entrepreneurs, a high level of innovativeness of the enterprise, and certain stakeholders. The thesis closes with a detailed discussion of the results. Recommendations are given on how social enterprises can best remain true to their goals, and the limitations of the study and avenues for future research on mission drift are outlined.
Semantic-Aware Coordinated Multiple Views for the Interactive Analysis of Neural Activity Data
(2024)
Visualizing brain simulation data is in many respects a challenging task. First, the data used in brain simulations and the resulting datasets are heterogeneous, and insight is derived by relating all the different kinds of data to one another. Second, the analysis process changes rapidly while hypotheses about the results are being formed. Third, the scale of data entities in these heterogeneous datasets is manifold, reaching from single neurons to brain areas interconnecting millions. Fourth, the heterogeneous data consist of a variety of modalities, e.g. from time series data to connectivity data; from single parameters to sets of parameters spanning parameter spaces with multiple possible and biologically meaningful solutions; from geometrical data to hierarchies and textual descriptions, all on mostly different scales. Fifth, visualization includes finding suitable representations and providing real-time interaction while supporting varying analysis workflows. To this end, this thesis presents a scalable and flexible software architecture for visualizing, integrating and interacting with brain simulation data. The scalability and flexibility are achieved by interconnected services forming a series of Coordinated Multiple View (CMV) systems. Multiple use cases are presented, introducing views that leverage this architecture and extend its ecosystem, resulting in a Problem Solving Environment (PSE) from which custom-tailored CMV systems can be built. The construction of such CMV systems is assisted by semantic reasoning, hence the term semantic-aware CMVs.
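As a hedged, generic illustration of the coordination idea behind CMV systems (not the service architecture developed in this thesis), the following Python sketch shows views subscribing to a shared selection so that an interaction in one view updates all others; all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical in-process coordination hub; a real CMV system like the one
# described above typically coordinates distributed services instead.
@dataclass
class SelectionHub:
    subscribers: list[Callable[[set[int]], None]] = field(default_factory=list)

    def subscribe(self, on_change: Callable[[set[int]], None]) -> None:
        self.subscribers.append(on_change)

    def select(self, neuron_ids: set[int]) -> None:
        for notify in self.subscribers:  # broadcast the new selection to every view
            notify(neuron_ids)

hub = SelectionHub()
hub.subscribe(lambda ids: print(f"time-series view: plotting activity of {sorted(ids)}"))
hub.subscribe(lambda ids: print(f"connectivity view: highlighting {sorted(ids)}"))

# Brushing neurons in one view propagates the selection to every coordinated view.
hub.select({3, 14, 159})
```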
Information in the precontractual phase (that is, duties to inform as well as the legal consequences of providing or withholding information) with regard to the sales contract and the choice of the optional instrument is subject to manifold rules in the European Commission's proposal for a Common European Sales Law (CESL; COM(2011) 635). This thesis examines these rules, also in their relation to the textual layers of European private law (model rules and consumer-protection EU directives), and measures them against economic principles that demand the efficiency of transactions and reveal the limits of the usefulness of (mandatory) information.
Starting from the principle of freedom of contract, each party bears the risk of being insufficiently informed, while the other side is only selectively obliged to provide information. Between traders, this remains the case under the CESL, but between trader and consumer the relationship is reversed. There, differentiated by the situation in which the contract is concluded, comprehensive catalogues of information duties apply with regard to the sales contract. As a concept, this is fundamentally sensible; the duties serve consumer protection, in particular informedness and transparency before the decision on concluding the contract. In part, however, the duties go too far. The impairment of the trader's freedom of contract caused by the duties and by the consequences of their breach cannot be fully justified by the goal of consumer protection. Because of the excess of information, the prescribed duties promote consumer protection only to a limited extent; they do not satisfy standards of behavioral economics. It is therefore advisable, between traders and consumers, to delete certain mandatory information items entirely, to dispense with information that is not required in the specific case, to postpone until after the conclusion of the contract information that only becomes relevant then, and to present the remaining precontractual mandatory information in a way that consumers can process better. Information to be provided to a consumer should always be required to be clear and comprehensible, and the burden of proving its proper provision should generally lie with the trader.
In addition to the expressly prescribed information duties, and irrespective of consumer or trader status and of the role as buyer or seller, there are highly case-dependent information duties based on good faith, which are laid down in the law on defects of consent. Here the principle is realized that a lack of information is, in the first instance, each party's own risk; legitimate expectations and the free formation of intent are protected. These duties also take the goal of efficiency into account and respect freedom of contract. Reliance on any information actually provided is additionally protected in that such information can co-determine the content of the contract (although not comprehensively enough in consumer contracts) and in that its inaccuracy is sanctioned.
The breach of any kind of information duty can give rise in particular to a claim for damages and, via the law on defects of consent, to the possibility of withdrawing from the contract. The interplay of the different mechanisms, however, leads to frictions as well as to gaps in the legal consequences of breaches of information duties. It is therefore advisable to create a claim for damages for every failure to provide information that is contrary to good faith; in this way, the principle of good faith is upgraded, even outside the law on defects of consent, into a genuine case-dependent duty to inform.
Social entrepreneurship is a successful approach to solving social problems and economic challenges. It uses for-profit industry techniques and tools to build financially sound businesses that provide nonprofit services. Social entrepreneurial activities also contribute to achieving the sustainable development goals. However, due to the complex, hybrid nature of the business, social entrepreneurial activities are typically supported by macro-level determinants. To expand our knowledge of how beneficial macro-level determinants can be, this work examines empirical evidence about the impact of macro-level determinants on social entrepreneurship. Another aim of this dissertation is to examine the impact at the micro level, as the growth ambitions of social and commercial entrepreneurs differ. Chapter 1 presents the introduction, containing the motivation for the research, the research question, and the structure of the work.
There is an ongoing debate about the origin and definition of social entrepreneurship. The numerous phenomena of social entrepreneurship have therefore been examined theoretically in the previous literature. To determine the common consensus on the topic, Chapter 2 presents the theoretical foundations and definition of social entrepreneurship. The literature shows that a variety of determinants at the micro and macro levels are essential for the emergence of social entrepreneurship as a distinctive business model (Hartog & Hoogendoorn, 2011; Stephan et al., 2015; Hoogendoorn, 2016). It is impossible to create a society based on a social mission without the support of micro- and macro-level determinants. This work examines the determinants and consequences of social entrepreneurship from different methodological perspectives. The theoretical foundations of the micro- and macro-level determinants influencing social entrepreneurial activities are discussed in Chapter 3. The purpose of reproducibility in research is to confirm previously published results (Hubbard et al., 1998; Aguinis & Solarino, 2019). However, due to a lack of data, insufficient transparency of methodology, reluctance to publish, and a lack of interest among researchers, replication of existing research is rarely promoted (Baker, 2016; Hedges & Schauer, 2019a). Promoting replication studies has been regularly emphasized in the business and management literature (Kerr et al., 2016; Camerer et al., 2016). However, studies that replicate reported results remain rare (Burman et al., 2010; Ryan & Tipu, 2022). Based on the research of Köhler and Cortina (2019), an empirical study on this topic is carried out in Chapter 4 of this work.
Given this focus, researchers have published a large body of research on the impact of micro- and macro-level determinants on social inclusion, although it is still unclear whether these studies accurately reflect reality. It is important to provide conceptual underpinnings to the field through a reassessment of published results (Bettis et al., 2016). The results of their research make it abundantly clear that the macro-level determinants support social entrepreneurship.
In keeping with the more narrative approach, and because reproducibility is a crucial concern that requires attention, Chapter 5 considers the reproducibility of previous results, particularly on the topic of social entrepreneurship. We replicated the results of Stephan et al. (2015) to establish the trend of reproducibility and validate the specific conclusions they drew. The literal and constructive replication in the dissertation inspired us to explore technical replication research on social entrepreneurship. Chapter 6 evaluates the fundamental characteristics that have proven to be key factors in the growth of social ventures. The current debate reviews and references literature that has specifically focused on the development of social entrepreneurship. An empirical analysis of factors directly related to the ambitious growth of social entrepreneurship is also carried out.
Numerous social entrepreneurial groups have been studied concerning this association. Chapter 6 compares the growth ambitions of social and traditional (commercial) entrepreneurship as consequences at the micro level. This study examined many characteristics of social and commercial entrepreneurs' growth ambitions. Scholars have claimed to some extent that the growth of social entrepreneurship differs from commercial entrepreneurial activities due to differences in objectives (Lumpkin et al., 2013; Garrido-Skurkowicz et al., 2022). Qualitative research has been used in studies to support the evidence on related topics; for instance, Gupta et al. (2020) emphasized that research needs to focus on specific concepts of social entrepreneurship for the field to advance. Therefore, this study provides a quantitative, analysis-based assessment of facts and data. For this purpose, a data set from the Global Entrepreneurship Monitor (GEM) 2015 was used, which covers 12,695 entrepreneurs from 38 countries. Furthermore, this work conducted a regression analysis to evaluate the influence of various characteristics of social and commercial entrepreneurship on economic growth in developing countries. Chapter 7 briefly explains future directions and practical/theoretical implications.
Data fusions are becoming increasingly relevant in official statistics. The aim of a data fusion is to combine two or more data sources using statistical methods in order to be able to analyse different characteristics that were not jointly observed in one data source. Record linkage of official data sources using unique identifiers is often not possible due to methodological and legal restrictions. Appropriate data fusion methods are therefore of central importance in order to use the diverse data sources of official statistics more effectively and to be able to jointly analyse different characteristics. However, the literature lacks comprehensive evaluations of which fusion approaches provide promising results for which data constellations. Therefore, the central aim of this thesis is to evaluate a concrete plethora of possible fusion algorithms, which includes classical imputation approaches as well as statistical and machine learning methods, in selected data constellations.
To specify and identify these data contexts, data and imputation-related scenario types of a data fusion are introduced: Explicit scenarios, implicit scenarios and imputation scenarios. From these three scenario types, fusion scenarios that are particularly relevant for official statistics are selected as the basis for the simulations and evaluations. The explicit scenarios are the fulfilment or violation of the Conditional Independence Assumption (CIA) and varying sample sizes of the data to be matched. Both aspects are likely to have a direct, that is, explicit, effect on the performance of different fusion methods. The summed sample size of the data sources to be fused and the scale level of the variable to be imputed are considered as implicit scenarios. Both aspects suggest or exclude the applicability of certain fusion methods due to the nature of the data. The univariate or simultaneous, multivariate imputation solution and the imputation of artificially generated or previously observed values in the case of metric characteristics serve as imputation scenarios.
With regard to the concrete plethora of possible fusion algorithms, three classical imputation approaches are considered: Distance Hot Deck (DHD), the Regression Model (RM) and Predictive Mean Matching (PMM). With Decision Trees (DT) and Random Forest (RF), two prominent tree-based methods from the field of statistical learning are discussed in the context of data fusion. However, such prediction methods aim to predict individual values as accurately as possible, which can clash with the primary objective of data fusion, namely the reproduction of joint distributions. In addition, DT and RF only comprise univariate imputation solutions and, in the case of metric variables, artificially generated values are imputed instead of real observed values. Therefore, Predictive Value Matching (PVM) is introduced as a new, statistical learning-based nearest neighbour method, which could overcome the distributional disadvantages of DT and RF, offers a univariate and multivariate imputation solution and, in addition, imputes real and previously observed values for metric characteristics. All prediction methods can form the basis of the new PVM approach. In this thesis, PVM based on Decision Trees (PVM-DT) and Random Forest (PVM-RF) is considered.
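A hedged Python sketch of the Predictive Value Matching idea with a random forest backend (PVM-RF) for a single metric variable: the predictions serve only to find nearest donors, and the value actually imputed is one really observed in the donor data. This is an illustrative reading of the approach, not the thesis's implementation, and only the univariate case on simulated data is shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Donor study observes X and Y; recipient study observes only X (illustrative data).
X_donor = rng.normal(size=(300, 4))
y_donor = X_donor @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(size=300)
X_recipient = rng.normal(size=(100, 4))

# 1. Fit the prediction model on the donor data.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_donor, y_donor)

# 2. Predict Y for donors and recipients alike.
pred_donor = rf.predict(X_donor)
pred_recipient = rf.predict(X_recipient)

# 3. For each recipient, find the donor with the closest *predicted* value and
#    impute that donor's *observed* value (nearest-neighbour matching on predictions).
nearest_donor = np.abs(pred_recipient[:, None] - pred_donor[None, :]).argmin(axis=1)
y_imputed = y_donor[nearest_donor]
print(y_imputed[:5].round(2))
```

Because only observed donor values are imputed, the matched values retain realistic variability, which is the distributional advantage over imputing the raw model predictions.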
The underlying fusion methods are investigated in comprehensive simulations and evaluations. The evaluation of the various data fusion techniques focusses on the selected fusion scenarios. The basis for this is formed by two concrete and current use cases of data fusion in official statistics, the fusion of EU-SILC and the Household Budget Survey on the one hand and of the Tax Statistics and the Microcensus on the other. Both use cases show significant differences with regard to different fusion scenarios and thus serve the purpose of covering a variety of data constellations. Simulation designs are developed from both use cases, whereby the explicit scenarios in particular are incorporated into the simulations.
The results show that PVM-RF in particular is a promising and universal fusion approach under compliance with the CIA. This is because PVM-RF provides satisfactory results for both categorical and metric variables to be imputed and also offers a univariate and multivariate imputation solution, regardless of the scale level. PMM also represents an adequate fusion method, but only in relation to metric characteristics. The results also imply that the application of statistical learning methods is both an opportunity and a risk. In the case of CIA violation, potential correlation-related exaggeration effects of DT and RF, and in some cases also of RM, can be useful. In contrast, the other methods induce poor results if the CIA is violated. However, if the CIA is fulfilled, there is a risk that the prediction methods RM, DT and RF will overestimate correlations. The size ratios of the studies to be fused in turn have a rather minor influence on the performance of fusion methods. This is an important indication that the larger dataset does not necessarily have to serve as a donor study, as was previously the case.
The results of the simulations and evaluations provide concrete implications as to which data fusion methods should be used and considered under the selected data and imputation constellations. Science in general and official statistics in particular benefit from these implications. This is because they provide important indications for future data fusion projects in order to assess which specific data fusion method could provide adequate results along the data constellations analysed in this thesis. Furthermore, with PVM this thesis offers a promising methodological innovation for future data fusions and for imputation problems in general.
The central object of this study is the legal concept of the indigenat in the context of the Württemberg and Prussian state landscape. The indigenat can be defined as a right that determines its potential holders primarily through the principle of descent and that expresses a relationship between the right holder and a superordinate legal entity, whether of feudal, estate-based, state, federal, or imperial nature. The temporal focus of the study lies on the 19th century, but it also looks back to the early modern period, because change and continuity in the development of the indigenat emerge particularly clearly over such a long perspective. The central thesis of this work is that there is a close connection between, on the one hand, the form of assigning people to the state that emerged in the 19th century and is still familiar today, together with the rights arising from this relationship, and, on the other hand, the early modern indigenat. It can be shown that societies shielded their positions of political power against "foreign-born" persons, such as immigrants, by invoking ethnic provisions of indigenat law.
This thesis contains four parts that are all connected by their contributions to the Efficient Market Hypothesis and decision-making literature. Chapter two investigates how national stock market indices reacted to the news of national lockdown restrictions in the period from January to May 2020. The results show that lockdown restrictions led to different reactions in a sample of OECD and BRICS countries: there was a general negative effect resulting from the increase in lockdown restrictions, but the study finds strong evidence for underreaction during the lockdown announcement, followed by some overreaction that is corrected subsequently. This under-/overreaction pattern, however, is observed mostly during the first half of our time series, pointing to learning effects. Relaxation of the lockdown restrictions, on the other hand, had a positive effect on markets only during the second half of our sample, while for the first half of the sample, the effect was negative. The third chapter investigates gender differences in stock selection preferences on the Taiwan Stock Exchange. By utilizing trading data from the Taiwan Stock Exchange over a span of six years, it becomes possible to analyze trading behavior while minimizing the self-selection bias that is typically present in brokerage data. To study gender differences, this study uses firm-level data. The percentage of male traders in a company is the dependent variable, while the company's industry and fundamental/technical aspects serve as independent variables. The results show that the percentage of women trading a company rises with a company's age, market capitalization, systematic risk, and return. Men trade more frequently and show a preference for dividend-paying stocks and for industries with which they are more familiar. The fourth chapter investigates the relationship between regret and malicious and benign envy. The relationship is analyzed in two different studies. In experiment 1, subjects filled out psychological scales that measured regret, the two types of envy, core self-evaluation and the Big Five personality traits. In experiment 2, felt regret was measured in a hypothetical scenario, and the subject's felt regret was regressed on the other variables mentioned above. The two experiments revealed that there is a positive direct relationship between regret and benign envy. The relationship between regret and malicious envy, on the other hand, is mostly an artifact of core self-evaluation and personality influencing both malicious envy and regret. The relationship can be explained by the common action tendency of self-improvement for regret and benign envy. Chapter five discusses the differences in green finance regulation and implementation between the EU and China. China introduced the Green Silk Road, while the EU adopted the Green Deal and started working with its own green taxonomy. The first difference concerns the definition of green finance, particularly with regard to coal-fired power plants and, especially, the responsibility of nation states for emissions caused abroad. China is promoting fossil fuel projects abroad through its Belt and Road Initiative, whereas the EU's Green Deal does not permit such actions. Furthermore, there are policies in both the EU and China that create contradictory incentives for economic actors: on the one hand, the EU and China are improving the framework conditions for green financing while, on the other hand, still allowing the promotion of conventional fuels.
The role of central banks is also different between the EU and China. China’s central bank is actively working towards aligning the financial sector with green finance. A possible new role of the EU central bank or the priority financing of green sectors through political decision-making is still being debated.
When humans encounter attitude objects (e.g., other people, objects, or constructs), they evaluate them. Often, these evaluations are based on attitudes. Whereas most research focuses on univalent (i.e., only positive or only negative) attitude formation, little research exists on ambivalent (i.e., simultaneously positive and negative) attitude formation. Following a general introduction to ambivalence, I present three original manuscripts investigating ambivalent attitude formation. The first manuscript addresses ambivalent attitude formation from previously univalent attitudes. The results indicate that responding incongruently to a univalent attitude object leads to ambivalence measured via mouse tracking but not to ambivalence measured via self-report. The second manuscript addresses whether the same number of positive and negative statements presented block-wise in an impression formation task leads to ambivalence. The third manuscript also used an impression formation task and addresses whether randomly presenting the same number of positive and negative statements leads to ambivalence. Additionally, the effect of the block size of same-valence statements is investigated. The results of the last two manuscripts indicate that presenting all statements of one valence and then all statements of the opposite valence leads to ambivalence measured via self-report and mouse tracking. Finally, I discuss implications for attitude theory and research as well as future research directions.
Left ventricular assist devices (LVADs) have become a valuable treatment for patients with advanced heart failure. Women appear to be disadvantaged in the usage of LVADs and concerning clinical outcomes such as death and adverse events after LVAD implant. Contrary to typical clinical characteristics (e.g., disease severity), device-related factors such as the intended device strategy, bridge to a heart transplantation or destination therapy, are often not considered in research on gender differences. In addition, the relevance of pre-implant psychosocial risk factors, such as substance abuse and limited social support, for LVAD outcomes is currently unclear. Thus, the aim of this dissertation is to explore the role of pre-implant psychosocial risk factors for gender differences in clinical outcomes, accounting for clinical and device-related risk factors.
In the first article, gender differences in pre-implant characteristics of patients registered in the European Registry for Patients with Mechanical Circulatory Support (EUROMACS) were investigated. It was found that women and men differed in multiple pre-implant characteristics depending on device strategy. In the second article, gender differences in major clinical outcomes (i.e., death, heart transplant, device explant due to cardiac recovery, device replacement due to complications) were evaluated for patients with the device strategy destination therapy in the Interagency Registry for Mechanically Assisted Circulatory Support (INTERMACS). Additionally, the association of gender and psychosocial risk factors with the major outcomes was analyzed. Women had similar probabilities of dying on LVAD support and even higher probabilities of experiencing explant of the device due to cardiac recovery compared with men in the destination therapy subgroup. Pre-implant psychosocial risk factors were not associated with major outcomes. The third article focused on gender differences in 10 adverse events (e.g., device malfunction, bleeding) after LVAD implant in INTERMACS. The association of a psychosocial risk indicator with gender and adverse events after LVAD implantation was evaluated. Women were less likely to have psychosocial risk pre-implant but more likely to experience seven out of 10 adverse events compared with men. Pre-implant psychosocial risk was associated with adverse events, even suggesting a dose-response relationship. These associations appeared to be more pronounced in women.
In conclusion, women appear to have similar survival to men when accounting for device strategy. They have higher probabilities of recovery, but also higher probabilities of device replacement and adverse events compared with men. Regarding these adverse events, women may be more susceptible to psychosocial risk factors than men. The results of this dissertation illustrate the importance of gender-sensitive research and suggest considering device strategy when studying gender differences in LVAD recipients. Further research is warranted to elucidate the role of specific psychosocial risk factors that lead to higher probabilities of adverse events, in order to intervene early and improve patient care in both women and men.
Note: This is the second, revised edition of the dissertation.
For the first edition, see:
"https://ubt.opus.hbz-nrw.de/frontdoor/index/index/docId/2083".
The starting point of this study in political iconography, which centres on two state portraits of King Maximilian II of Bavaria, is the observation that the two portraits choose fundamentally different forms of staging. The first work, executed by Max Hailer, shows Maximilian II in the full Bavarian coronation regalia and takes up a traditional mode of representation in the state portrait. It was created in 1850, two years after Maximilian II's accession to the throne and thus after the revolutionary unrest of 1848/49. The second was painted by Joseph Bernhardt in 1857-1858 and first presented in 1858 on the occasion of the monarch's tenth jubilee on the throne. The staging changes in the second portrait: the Bavarian coronation regalia has given way to a general's uniform, as have further details still found in the first depiction: drapery and coat of arms are absent, and the customary Bavarian royal throne has been replaced by a different one. The constitution, after all the legal foundation of the Bavarian kingdom since 1818, is pushed into the background. The two state portraits of Maximilian II evidently mark the transition from the ruler portraits in full Bavarian coronation regalia of his grandfather Maximilian I and his father Ludwig I to a depiction in uniform with coronation mantle, as found with Napoleon III and Friedrich Wilhelm IV and as continued by his son Ludwig II. This raises the question of which factors led to this striking change in the staging of Maximilian II as King of Bavaria. The study pursues the thesis that both depictions are fundamentally designed around a reactionary policy directed against the revolution of 1848/49, with this reactionary character being further intensified in Maximilian II's portrait of 1858 compared with that of 1850. In addition, the domestic, historically oriented thrust of the first portrait changes, in the second depiction of the Bavarian monarch, into a progressive orientation towards foreign policy. Maximilian II's legitimation is no longer grounded, as in the former, in the history and rule of the Wittelsbach dynasty, but in his own achievements and his own reign. This change in the political message of the image rests both on the political changes and developments inside and outside Bavaria and on the development of the state portrait in the mid-nineteenth century. After only ten years, a changed message about Maximilian II's position and claim to power is thus conveyed.
Knowledge acquisition comprises various processes, each with its own dedicated research domain. Two examples are the relations between knowledge types and the influences of person-related variables. Furthermore, the transfer of knowledge is another crucial domain in educational research. In this dissertation, I investigated these three processes through secondary analyses. Secondary analyses do justice to the breadth of each field and allow for more general interpretations. The dissertation includes three meta-analyses: The first meta-analysis reports findings on the predictive relations between conceptual and procedural knowledge in mathematics in a cross-lagged panel model. The second meta-analysis focuses on the mediating effects of motivational constructs on the relationship between prior knowledge and knowledge after learning. The third meta-analysis deals with the effect of instructional methods in transfer interventions on knowledge transfer in school students. These three studies provide insights into the determinants and processes of knowledge acquisition and transfer. Knowledge types are interrelated, motivation mediates the relation between prior and later knowledge, and interventions influence knowledge transfer. The results are discussed by examining six key insights that build upon the three studies. Additionally, practical implications, as well as methodological and content-related ideas for further research, are provided.
There is no longer any doubt about the general effectiveness of psychotherapy. However, up to 40% of patients do not respond to treatment. Despite efforts to develop new treatments, overall effectiveness has not improved. Consequently, practice-oriented research has emerged to make research results more relevant to practitioners. Within this context, patient-focused research (PFR) focuses on the question of whether a particular treatment works for a specific patient. Finally, PFR gave rise to the precision mental health research movement that is trying to tailor treatments to individual patients by making data-driven and algorithm-based predictions. These predictions are intended to support therapists in their clinical decisions, such as the selection of treatment strategies and adaptation of treatment. The present work summarizes three studies that aim to generate different prediction models for treatment personalization that can be applied to practice. The goal of Study I was to develop a model for dropout prediction using data assessed prior to the first session (N = 2543). The usefulness of various machine learning (ML) algorithms and ensembles was assessed. The best model was an ensemble utilizing random forest and nearest neighbor modeling. It significantly outperformed generalized linear modeling, correctly identifying 63.4% of all cases and uncovering seven key predictors. The findings illustrated the potential of ML to enhance dropout predictions, but also highlighted that not all ML algorithms are equally suitable for this purpose. Study II utilized Study I’s findings to enhance the prediction of dropout rates. Data from the initial two sessions and observer ratings of therapist interventions and skills were employed to develop a model using an elastic net (EN) algorithm. The findings demonstrated that the model was significantly more effective at predicting dropout when using observer ratings with a Cohen’s d of up to .65 and more effective than the model in Study I, despite the smaller sample (N = 259). These results indicated that generating models could be improved by employing various data sources, which provide better foundations for model development. Finally, Study III generated a model to predict therapy outcome after a sudden gain (SG) in order to identify crucial predictors of the upward spiral. EN was used to generate the model using data from 794 cases that experienced a SG. A control group of the same size was also used to quantify and relativize the identified predictors by their general influence on therapy outcomes. The results indicated that there are seven key predictors that have varying effect sizes on therapy outcome, with Cohen's d ranging from 1.08 to 12.48. The findings suggested that a directive approach is more likely to lead to better outcomes after an SG, and that alliance ruptures can be effectively compensated for. However, these effects
were reversed in the control group. The results of the three studies are discussed regarding their usefulness to support clinical decision-making and their implications for the implementation of precision mental health.
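To make the modelling approach more concrete, the following is a minimal, hedged sketch of pre-treatment dropout prediction with a logistic elastic net in scikit-learn; it does not reproduce the models, features, or data of Studies I-III, and all variable names and values are illustrative placeholders.

```python
# Hedged sketch, not the models from Studies I-III: predicting dropout from
# pre-treatment intake variables with a logistic elastic net (scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                                   # illustrative intake variables
y = (X[:, 0] - 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)  # illustrative dropout label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Elastic net penalty: l1_ratio mixes lasso (1.0) and ridge (0.0) regularization.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, C=1.0, max_iter=5000),
)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("balanced accuracy:", round(balanced_accuracy_score(y_test, pred), 3))
# Non-zero coefficients point to the predictors retained by the elastic net.
coefs = model.named_steps["logisticregression"].coef_.ravel()
print("retained predictors:", np.flatnonzero(coefs != 0))
```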
The publication of statistical databases is subject to legal regulations, e.g. national statistical offices are only allowed to publish data if the data cannot be attributed to individuals. Achieving this privacy standard requires anonymizing the data prior to publication. However, data anonymization inevitably leads to a loss of information, which should be kept minimal. In this thesis, we analyze the anonymization method SAFE used in the German census in 2011 and we propose a novel integer programming-based anonymization method for nominal data.
In the first part of this thesis, we prove that a fundamental variant of the underlying SAFE optimization problem is NP-hard. This justifies the use of heuristic approaches for large data sets. In the second part, we propose a new anonymization method belonging to microaggregation methods, specifically designed for nominal data. This microaggregation method replaces rows in a microdata set with representative values to achieve k-anonymity, ensuring each data row is identical to at least k − 1 other rows. In addition to the overall dissimilarities of the data rows, the method accounts for errors in resulting frequency tables, which are of high interest for nominal data in practice. The method employs a typical two-step structure: initially partitioning the data set into clusters and subsequently replacing all cluster elements with representative values to achieve k-anonymity. For the partitioning step, we propose a column generation scheme followed by a heuristic to obtain an integer solution, which is based on the dual information. For the aggregation step, we present a mixed-integer problem formulation to find cluster representatives. To this end, we take errors in a subset of frequency tables into account. Furthermore, we show a reformulation of the problem to a minimum edge-weighted maximal clique problem in a multipartite graph, which allows for a different perspective on the problem. Moreover, we formulate a mixed-integer program, which combines the partitioning and the aggregation step and aims to minimize the sum of chi-squared errors in frequency tables.
Finally, an experimental study comparing the methods covered or developed in this work shows particularly strong results for the proposed method with respect to relative criteria, while SAFE shows its strength with respect to the maximum absolute error in frequency tables. We conclude that the inclusion of integer programming in the context of data anonymization is a promising direction to reduce the inevitable information loss inherent in anonymization, particularly for nominal data.
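As an illustration of the k-anonymity guarantee described above, the following is a small sketch of microaggregation on nominal data. It uses a simple greedy clustering with per-column modes as representatives rather than the column-generation and mixed-integer formulations developed in the thesis, and the toy data are purely illustrative.

```python
# Minimal sketch (not the thesis's method): microaggregation for k-anonymity on
# nominal data. Rows are greedily grouped into clusters of size >= k and every
# row is replaced by its cluster's per-column mode, so each published row is
# identical to at least k-1 others. Assumes len(rows) >= k.
from collections import Counter

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def microaggregate(rows, k):
    remaining = list(range(len(rows)))
    clusters = []
    while len(remaining) >= 2 * k:           # keep enough rows for the final cluster
        seed = remaining[0]
        remaining.sort(key=lambda i: hamming(rows[i], rows[seed]))
        clusters.append(remaining[:k])        # the k rows most similar to the seed
        remaining = remaining[k:]
    clusters.append(remaining)                # leftover rows (size between k and 2k-1)
    anonymized = list(rows)
    for cluster in clusters:
        # representative value per column: the mode within the cluster
        rep = tuple(Counter(rows[i][j] for i in cluster).most_common(1)[0][0]
                    for j in range(len(rows[0])))
        for i in cluster:
            anonymized[i] = rep
    return anonymized

data = [("f", "DE"), ("f", "DE"), ("m", "FR"), ("m", "DE"), ("f", "FR")]
print(microaggregate(data, k=2))
```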
Building Fortress Europe: Economic realism, China, and Europe’s investment screening mechanisms
(2023)
This thesis deals with the construction of investment screening mechanisms across the major economic powers in Europe and at the supranational level during the post-2015 period. The core puzzle at the heart of this research is how a protectionist tool such as investment screening could be erected so rapidly in a traditional bastion of economic liberalism such as Europe. Within a few years, Europe went from a position of being highly welcoming towards foreign investment to increasingly implementing controls on it, with the focus on China. How are we to understand this shift in Europe? I posit that Europe’s increasingly protectionist shift on inward investment can be fruitfully understood using an economic realist approach, where the introduction of investment screening can be seen as part of a process of ‘balancing’ China’s economic rise and reasserting European competitiveness. China has moved from being the ‘workshop of the world’ to becoming an innovation-driven economy at the global technological frontier. As China has become more competitive, Europe, still a global economic leader broadly situated at the technological frontier, has begun to sense a threat to its position, especially in the context of the fourth industrial revolution. A ‘balancing’ process has been set in motion, in which Europe seeks to halt and even reverse the narrowing competitiveness gap between it and China. The introduction of investment screening measures is part of this process.
While humans find it easy to process visual information from the real world, machines struggle with this task due to the unstructured and complex nature of the information. Computer vision (CV) is the approach of artificial intelligence that attempts to automatically analyze, interpret, and extract such information. Recent CV approaches mainly use deep learning (DL) due to its very high accuracy. DL extracts useful features from unstructured images in a training dataset to use them for specific real-world tasks. However, DL requires a large number of parameters, computational power, and meaningful training data, which can be noisy, sparse, and incomplete for specific domains. Furthermore, DL tends to learn correlations from the training data that do not occur in reality, making deep neural networks (DNNs) poorly generalizable and error-prone.
Therefore, the field of visual transfer learning is seeking methods that are less dependent on training data and are thus more applicable in the constantly changing world. One idea is to enrich DL with prior knowledge. Knowledge graphs (KG) serve as a powerful tool for this purpose because they can formalize and organize prior knowledge based on an underlying ontological schema. They contain symbolic operations such as logic, rules, and reasoning, and can be created, adapted, and interpreted by domain experts. Due to the abstraction potential of symbols, KGs provide good prerequisites for generalizing their knowledge. To take advantage of the generalization properties of KG and the ability of DL to learn from large-scale unstructured data, attempts have long been made to combine explicit graph and implicit vector representations. However, with the recent development of knowledge graph embedding methods, where a graph is transferred into a vector space, new perspectives for a combination in vector space are opening up.
In this work, we attempt to combine prior knowledge from a KG with DL to improve visual transfer learning using the following steps: First, we explore the potential benefits of using prior knowledge encoded in a KG for DL-based visual transfer learning. Second, we investigate approaches that already combine KG and DL and create a categorization based on their general idea of knowledge integration. Third, we propose a novel method for the specific category of using the knowledge graph as a trainer, where a DNN is trained to adapt to a representation given by prior knowledge of a KG. Fourth, we extend the proposed method by extracting relevant context in the form of a subgraph of the KG to investigate the relationship between prior knowledge and performance on a specific CV task. In summary, this work provides deep insights into the combination of KG and DL, with the goal of making DL approaches more generalizable, more efficient, and more interpretable through prior knowledge.
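A hedged sketch of the general "knowledge graph as a trainer" idea referred to above: an image encoder is trained so that its output matches fixed class embeddings assumed to come from a knowledge graph embedding method. The architecture, dimensions, and names are illustrative assumptions, not the method from the thesis.

```python
# Illustrative sketch: a DNN adapts to a representation given by prior knowledge,
# here modelled as fixed per-class embeddings from a KG embedding method.
import torch
import torch.nn as nn

num_classes, kg_dim = 10, 64
kg_class_embeddings = torch.randn(num_classes, kg_dim)   # placeholder for pre-trained KG embeddings

encoder = nn.Sequential(                                  # toy image encoder projecting into the KG space
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
    nn.Linear(256, kg_dim),
)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                                    # pull image features towards their class node embedding

def training_step(images, labels):
    optimizer.zero_grad()
    features = encoder(images)
    target = kg_class_embeddings[labels]                  # KG embedding of each image's class acts as the target
    loss = loss_fn(features, target)
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference time, a new image would be assigned to the class whose KG embedding is closest.
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, num_classes, (8,))
print(training_step(images, labels))
```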
Family firms play a crucial role in the DACH region (Germany, Austria, Switzerland). They are characterized by a long tradition, a strong connection to the region, and a well-established network. However, family firms also face challenges, especially in finding a suitable successor. Wealthy entrepreneurial families are increasingly opting to establish Single Family Offices (SFOs) as a solution to this challenge. An SFO takes on the management and protection of family wealth. Its goal is to secure and grow the wealth over generations. In Germany alone, there are an estimated 350 to 450 SFOs, with 70% of them being established after the year 2000. However, research on SFOs is still in its early stages, particularly regarding the role of SFOs as firm owners. This dissertation delves into an exploration of SFOs through four quantitative empirical studies. The first study provides a descriptive overview of 216 SFOs from the DACH-region. Findings reveal that SFOs exhibit a preference for investing in established companies and real estate. Notably, only about a third of SFOs engage in investments in start-ups. Moreover, SFOs as a group are heterogeneous. Categorizing them into three groups based on their relationship with the entrepreneurial family and the original family firm reveals significant differences in their asset allocation strategies. Subsequent studies in this dissertation leverage a hand-collected sample of 173 SFO-owned firms from the DACH region, meticulously matched with 684 family-owned firms from the same region. The second study focusing on financial performance indicates that SFO-owned firms tend to exhibit comparatively poorer financial performance than family-owned firms. However, when members of the SFO-owning family hold positions on the supervisory or executive board of the firm, there's a notable improvement. The third study, concerning cash holdings, reveals that SFO-owned firms maintain a higher cash holding ratio compared to family-owned firms. Notably, this effect is magnified when the SFO has divested its initial family firms. Lastly, the fourth study regarding capital structure highlights that SFO-owned firms tend to display a higher long-term debt ratio than family-owned firms. This suggests that SFO-owned firms operate within a trade-off theory framework, like private equity-owned firms. Furthermore, this effect is stronger for SFOs that sold their original family firm. The outcomes of this research are poised to provide entrepreneurial families with a practical guide for effectively managing and leveraging SFOs as a strategic long-term instrument for succession and investment planning.
Some of the largest firms in the DACH region (Germany, Austria, Switzerland) are (partially) owned by a foundation and/or a family office, such as Aldi, Bosch, or Rolex. Despite their growing importance, prior research neglected to analyze the impact of these intermediaries on the firms they own. This dissertation closes this research gap by contributing to a deeper understanding of two increasingly used family firm succession vehicles, through four empirical quantitative studies. The first study focuses on the heterogeneity in foundation-owned firms (FOFs) by applying a descriptive analysis to a sample of 169 German FOFs. The results indicate that the family as a central stakeholder in a family foundation fosters governance that promotes performance and growth. The second study examines the firm growth of 204 FOFs compared to matched non-FOFs from the DACH region. The findings suggest that FOFs grow significantly less in terms of sales but not with regard to employees. In addition, it seems that this negative effect is stronger for the upper than for the middle or lower quantiles of the growth distribution. Study three adopts an agency perspective and investigates the acquisition behavior within the group of 164 FOFs. The results reveal that firms with charitable foundations as owners are more likely to undertake acquisitions and acquire targets that are geographically and culturally more distant than firms with a family foundation as owner. At the same time, they favor target companies from the same or related industries. Finally, the fourth study scrutinizes the capital structure of firms owned by single family-offices (SFOs). Drawing on a hand-collected sample of 173 SFO-owned firms in the DACH region, the results show that SFO-owned firms display a higher long-term debt ratio than family-owned firms, indicating that SFO-owned firms follow trade-off theory, similar to private equity-owned firms. Additional analyses show that this effect is stronger for SFOs that sold their original family firm. In conclusion, the outcomes of this dissertation furnish valuable research contributions and offer practical insights for families navigating such intermediaries or succession vehicles in the long term.
This dissertation investigates motor memory. We pursue the question of whether analogues of the contextual and inhibitory effects known from declarative memory can be found there.
The first of three peer-reviewed articles addresses the general importance of external context features for motor memory retrieval. We varied two different sets of motor sequences along a large number of such features. Significantly different recall performance pointed to a context dependence of motor content. Recall performance varied along the serial output position. When the context was changed, recall performance remained nearly stable over the course of retrieval; when the context was retained, it quickly dropped significantly.
The two further peer-reviewed articles then turn to the inhibition of motor sequences. In the second article, we examined three sets of motor sequences, executed with different hands, for selective directed forgetting. The forgetting group showed this effect only when the same hand was used for sets two and three, so that the potential for interference between these lists was high. When this potential was low, because the two sets were executed with different hands, no selective directed forgetting occurred. This points to cognitive inhibition as the causal process.
Finally, in the third article, we investigate the effects of voluntary cognitive suppression of both memory retrieval and execution in a motor adaptation of the think/no-think (TNT) paradigm (Anderson & Green, 2001). When the sequences had initially been trained more intensively in Experiment 1, voluntarily suppressed (no-think) motor representations showed a marked slowing in their accessibility and, as a trend, also in their execution compared with baseline sequences. When the sequences were only moderately trained in Experiment 2, by contrast, they were also recalled more poorly and executed with marked slowing. Voluntary cognitive suppression can affect motor memory representations and their execution.
Our three articles confirm motor analogues of the context and inhibition effects known from declarative memory. We attribute selective directed forgetting of motor content clearly to inhibition and, beyond that, confirm effects of the voluntary suppression of motor memory representations.
Note: This is the first edition of the dissertation.
For the second, revised edition, see:
"https://ubt.opus.hbz-nrw.de/frontdoor/index/index/docId/2166".
The starting point of this study in political iconography, which centres on two state portraits of King Maximilian II of Bavaria, is the observation that the two portraits choose fundamentally different forms of staging. The first work, executed by Max Hailer, shows Maximilian II in the full Bavarian coronation regalia and takes up a traditional mode of representation in the state portrait. It was created in 1850, two years after Maximilian II's accession to the throne and thus after the revolutionary unrest of 1848/49. The second was painted by Joseph Bernhardt in 1857-1858 and first presented in 1858 on the occasion of the monarch's tenth jubilee on the throne. The staging changes in the second portrait: the Bavarian coronation regalia has given way to a general's uniform, as have further details still found in the first depiction: drapery and coat of arms are absent, and the customary Bavarian royal throne has been replaced by a different one. The constitution, after all the legal foundation of the Bavarian kingdom since 1818, is pushed into the background. The two state portraits of Maximilian II evidently mark the transition from the ruler portraits in full Bavarian coronation regalia of his grandfather Maximilian I and his father Ludwig I to a depiction in uniform with coronation mantle, as found with Napoleon III and Friedrich Wilhelm IV and as continued by his son Ludwig II. This raises the question of which factors led to this striking change in the staging of Maximilian II as King of Bavaria. The study pursues the thesis that both depictions are fundamentally designed around a reactionary policy directed against the revolution of 1848/49, with this reactionary character being further intensified in Maximilian II's portrait of 1858 compared with that of 1850. In addition, the domestic, historically oriented thrust of the first portrait changes, in the second depiction of the Bavarian monarch, into a progressive orientation towards foreign policy. Maximilian II's legitimation is no longer grounded, as in the former, in the history and rule of the Wittelsbach dynasty, but in his own achievements and his own reign. This change in the political message of the image rests both on the political changes and developments inside and outside Bavaria and on the development of the state portrait in the mid-nineteenth century. After only ten years, a changed message about Maximilian II's position and claim to power is thus conveyed.
Debates do not always lead to consensus. Even the presentation of evidence does not always convince the other side. This is evident not only in the history of science (cf. Ludwik Fleck, Bruno Latour) but also in the contemporary debate conducted across several disciplines under the label 'science wars' between realism and constructivism or relativism. Differences in their legitimations systematically reveal different understandings of reality and truth, which are constituted by basic assumptions that depend on the social position of the respective perspective. A sociology-of-knowledge approach makes it possible to analyse the (socio-)structural constitution of perspectivity, uncovering an epistemologically pre-structured revolving of mutually incommensurable contributions in the debate, which can serve as an explanation for unresolved debates in science, politics, and everyday life in general.
In its approach, the present study takes its orientation from the book 'Fear of Knowledge' ('Angst vor der Wahrheit') by Paul Boghossian as a contemporary representative of a New Realism. On the one hand, Boghossian's direct references are contrasted with the statements of the criticised perspectives (above all Latour and Goodman); on the other hand, further varieties of constructivism (cognition-theoretical constructivism following Maturana and Varela, sociological constructivism following Berger and Luckmann, the sociology of science exemplified by Bloor and Latour, Luhmann's systems theory, and post-constructivist positions) are presented along the dimensions 'understanding of knowledge', 'relevance of the subject', and 'stance towards a naturalistic foundation'. A systematic and mutual misinterpretation in the debate between realism and constructivism becomes visible. This is traced back to the existential boundedness of perspectives in the sense of Mannheim's sociology of knowledge. On the basis of a reconstruction of the early Mannheim's epistemology (1922: 'Strukturanalyse der Erkenntnistheorie'), the (socio-)structural constitution of epistemological elements of foundational sciences is worked out, which makes it possible to distinguish objectifications (and thus understandings of truth) according to thought style. These differences not only explain the incommensurability of heterogeneous perspectives in debates but also show that the encounters of the debaters are pre-structured. The course of a debate is socio-structurally determined. Finally, the present study discusses how the deadlocked situation of a debate can be counteracted and in what way a sociology-of-knowledge analysis can contribute to mutual understanding between debating parties.
This dissertation addresses the question of whether and how intersectionality, as an analytical perspective on literary texts, is a useful complement to ethnically organised literary fields. The question is examined through the analysis of three contemporary Chinese Canadian novels.
The introduction discusses the relevance of the fields of intersectionality and Asian Canadian literature. The following chapter offers a historical overview of Chinese Canadian immigration and deals with the literary production in detail. It shows that, although cultural goods also emerge to articulate relations of inequality based on ascribed ethnicity, a drive towards diversification can be identified within the literary community of Chinese Canadian authors. The third chapter is devoted to the concept of 'intersectionality' and, after situating the concept historically with its origins in Black feminism, presents intersectionality as a binding element between postcolonialism, diversity, and empowerment, concepts that are of particular relevance for the analysis of (Canadian) literature in this dissertation. The role of intersectionality in literary studies is then taken up. The subsequent exemplary analyses of Kim Fu's For Today I Am a Boy, Wayson Choy's The Jade Peony, and Yan Li's Lily in the Snow illustrate the preceding methodological considerations. Each of the three novels is first contextualised as Chinese Canadian, alongside considerations made so far that call this classification into question. A summary of the plot is followed by an intersectional analysis at the level of content, divided into the familial and the wider social sphere, since the mechanisms of hierarchy within these spheres differ or reinforce each other, as the analyses show. The formal analysis with an intersectional focus is then examined more closely in a separate subchapter. A third subchapter is devoted to an aspect specific to the respective novel that is of particular relevance in connection with an intersectional analysis. The dissertation closes with an overarching conclusion that summarises the most important findings of the analysis, together with further reflections on the implications of this dissertation, above all with regard to so-called Canadian 'master narratives', which have far-reaching contextual relevance for working with literary texts and which an intersectional literary approach may fruitfully complement in the future.
In recent years, the establishment of new makerspaces in Germany has increased significantly. The underlying phenomenon of the Maker Movement is a cultural and technological movement focused on making physical and digital products using open source principles, collaborative production, and individual empowerment. Because of its potential to democratize the innovation and production process, empower individuals and communities, and enable innovators to solve problems at the local level, the Maker Movement has received considerable attention in recent years. Despite numerous indicators, little is known about the phenomenon and its individual members, especially in Germany. Initial research suggests that the Maker Movement holds great potential for innovation and entrepreneurship. However, there is still a gap in understanding how Makers discover, evaluate and exploit entrepreneurial opportunities. Moreover, there is still controversy - both among policy makers and within the maker community itself - about the impact the maker movement has and can have on innovation and entrepreneurship in the future. This dissertation uses a mixed-methods approach to explore these questions. In addition to a quantitative analysis of maker characteristics, the results show that social impact, market size, and property rights have significant effects on the evaluation of entrepreneurial opportunities. The findings within this dissertation expand research in the field of the Maker Movement and offer multiple implications for practice. This dissertation provides the first quantitative data on makers in makerspaces in Germany, their characteristics and motivations. In particular, the relationship between the Maker Movement and entrepreneurship is explored in depth for the first time. This is complemented by the presentation of different identity profiles of the individuals involved. In this way, policy-makers can develop a better understanding of the movement, its personalities and values, and consider them in initiatives and formats.
This thesis deals with REITs, their capital structure and the effects on leverage that regulatory requirements might have. The data used results from a combination of Thomson Reuters data with hand-collected data regarding the REIT status, regulatory information and law variables. Overall, leverage is analysed across 20 countries in the years 2007 to 2018. Country specific data, manually extracted from yearly EPRA reportings, is merged with company data in order to analyse the influence of different REIT restrictions on a firm's leverage.
Observing statistically significant differences in means between NON-REITs and REITs motivates further investigation. My results show that variables beyond traditional capital structure determinants impact the leverage of REITs. I find that explicit restrictions on leverage and the distribution of profits have a significant effect on leverage decisions. This supports the notion that the restrictions from EPRA reportings are mandatory. I test for various combinations of regulatory variables that show significant effects on leverage both in isolation and in combination.
My main result is the following: firms that operate under regulation that specifies a maximum leverage ratio, in addition to mandatory high dividend distributions, have on average lower leverage ratios. Further, the existence of sanctions has a negative effect on REITs' leverage ratios, indicating that regulation is binding. The analysis clearly shows that traditional capital structure determinants are of second-order relevance. This relationship highlights the impact of regulation on leverage and financing decisions. These effects are supported by further analysis. Results based on an event study show that REITs have statistically lower leverage ratios compared to NON-REITs. Based on a structural break model, the following effect becomes apparent: REITs increase their leverage ratios in the years prior to obtaining REIT status. As a consequence, the ex ante time frame is characterised by a process of bunkering and adaptation, followed by the transformation at the event. Using an event study and a structural break model, the analysis highlights the dominance of country-specific regulation.
Striving for sustainable development by combating climate change and creating a more social world is one of the most pressing issues of our time. Growing legal requirements and customer expectations also require Mittelstand firms to address sustainability issues such as climate change. This dissertation contributes to a better understanding of sustainability in the Mittelstand context by examining different Mittelstand actors and the three dimensions of sustainability - social, economic, and environmental sustainability - in four quantitative studies. The first two studies focus on the social relevance and economic performance of hidden champions, a niche-market-leading subgroup of Mittelstand firms. At the regional level, the impact of 1,645 hidden champions located in Germany on various dimensions of regional development is examined. A higher concentration of hidden champions has a positive effect on regional employment, median income, and patents. At the firm level, analyses of a panel dataset of 4,677 German manufacturing firms, including 617 hidden champions, show that the latter have a higher return on assets than other Mittelstand firms. The following two chapters deal with environmental strategies and thus contribute to the exploration of the environmental dimension of sustainability. First, the consideration of climate aspects in investment decisions is compared using survey data from 468 European venture capital and private equity investors. While private equity firms respond to external stakeholders and portfolio performance and pursue an active ownership strategy, venture capital firms are motivated by product differentiation and make impact investments. Finally, based on survey data from 443 medium-sized manufacturing firms in Germany, 54% of which are family-owned, the impact of stakeholder pressures on their decarbonization strategies is analyzed. A distinction is made between symbolic (compensation of CO₂ emissions) and substantive (reduction of CO₂ emissions) decarbonization strategies. Stakeholder pressures lead to a proactive pursuit of decarbonization strategies, with internal and external stakeholders varying in their influence on symbolic and substantive decarbonization strategies, and with the relationship being influenced by family ownership.
The German Mittelstand is closely linked to the success of the German economy. Mittelstand firms, among them numerous Hidden Champions, contribute significantly to Germany’s economic performance, innovation, and export strength. However, advancing digitalization poses complex challenges for Mittelstand firms. To benefit from the manifold opportunities offered by digital technologies and to defend or even expand existing market positions, Mittelstand firms must transform themselves and their business models. This dissertation uses quantitative methods and contributes to a deeper understanding of the distinct needs and influencing factors of the digital transformation of Mittelstand firms. The results of the empirical analyses of a unique database of 525 mid-sized German manufacturing firms, comprising both firm-related information and survey data, show that organizational capabilities and characteristics significantly influence the digital transformation of Mittelstand firms. The results support the assumption that dynamic capabilities promote the digital transformation of such firms and underline the important role of ownership structure, especially regarding family influence, for the digital transformation of the business model and the pursuit of growth goals with digitalization. In addition to the digital transformation of German Mittelstand firms, this dissertation examines the economic success and regional impact of Hidden Champions and hence contributes to a better understanding of the Hidden Champion phenomenon. Using quantitative methods, it can be empirically shown that Hidden Champions outperform other mid-sized firms in financial terms and promote regional development. Consequently, the results of this dissertation provide valuable research contributions and offer various practical implications for firm managers and owners as well as policy makers.
Every action we perform, no matter how simple or complex, has a cognitive representation. It is commonly assumed that these are organized hierarchically. Thus, the representation of a complex action consists of multiple simpler actions. The representation of a simple action, in turn, consists of stimulus, response, and effect features. These are integrated into one representation upon the execution of an action and can be retrieved if a feature is repeated. Depending on whether retrieved features match or only partially match the current action episode, this might benefit or impair the execution of a subsequent action. This pattern of costs and benefits results in binding effects that indicate the strength of common representation between features. Binding effects occur also in more complex actions: Multiple simple actions seem to form representations on a higher level through the integration and retrieval of sequentially given responses, resulting in so-called response-response binding effects. This dissertation aimed to investigate what factors determine whether simple actions form more complex representations. The first line of research (Articles 1-3) focused on dissecting the internal structure of simple actions. Specifically, I investigated whether the spatial relation of stimuli, responses, or effects, that are part of two different simple actions, influenced whether these simple actions are represented as one more complex action. The second line of research (Articles 2, 4, and 5) investigated the role of context on the formation and strength of more complex action representations. Results suggest that spatial separation of responses as well as context might affect the strength of more complex action representations. In sum, findings help to specify assumptions on the structure of complex action representations. However, it may be important to distinguish factors that influence the strength and structure of action representations from factors that terminate action representations.
This thesis comprises four research papers on the economics of education and industrial relations, which contribute to the field of empirical economic research. All four papers focus on analysing how much time individuals spend on specific activities. The allocation of available time resources is a decision that individuals make throughout their lifetime. In this thesis, we consider individuals at different stages of their lives - students at school, university students, and dependent employees at the workplace.
Part I includes two research studies on students' behaviour in secondary and tertiary education.
Chapter 2 explores whether students who are relatively younger or older within the school year exhibit differential time allocation. Building on previous findings showing that relatively younger students perform worse in school, the study shows that relatively younger students are aware of their poor performance in school and feel more strain as a result. Nevertheless, there are no clear differences to be found in terms of time spent on homework, while relatively younger students spend more time watching television and less time on sports activities. Thus, the results suggest that the lower learning outcomes are not associated with different time allocations between school-related activities and non-school-related activities.
Chapter 3 analyses how individual ability and labour market prospects affect study behaviour. The theoretical modelling predicts that both determinants increase study effort. The empirical investigation is based on cross-sectional data from the National Educational Panel Study (NEPS) and includes thousands of students in Germany. The analyses show that more gifted students exhibit lower subjective effort levels and invest less time in self-study. In contrast, very good labour market prospects lead to more effort exerted by the student, both qualitatively and quantitatively. The potential endogeneity problem is taken into account by using regional unemployment data as an instrumental variable.
Part II includes two labour economic studies on determinants of overtime. Both studies belong to the field of industrial relations, as they focus on union membership on the one hand and the interplay of works councils and collective bargaining coverage on the other.
Chapter 4 shows that union members work less overtime than non-members do. The econometric approach takes the problem of unobserved heterogeneity into account but provides no evidence that this issue affects the results. Different channels that could lead to this relationship are analysed by examining relevant subgroups separately. For example, this effect of union membership can also be observed in establishments with works councils and for workers who are very likely to be covered by collective bargaining agreements. The study concludes that the observed effect is due to the fact that union membership can protect workers from increased working time demands by employers.
Chapter 5 builds on previous studies showing a negative effect of works councils on overtime. In addition to co-determination by works councils at the firm level, collective bargaining coverage is an important factor in the German industrial relations system. Corresponding data was not available in the SOEP for quite some time. Therefore, the study uses recent SOEP data, which also contains information on collective bargaining coverage. A cross-sectional analysis is conducted to examine the effects of works councils in establishments with and without collective bargaining coverage. Similar to studies analysing other outcome variables, the results show that the effect of works councils exists only for employees covered by a collective bargaining agreement.
Computer simulation has become established in a two-fold way: as a tool for planning, analyzing, and optimizing complex systems, but also as a method for the scientific investigation of theories and thus for the generation of knowledge. Generated results often serve as a basis for investment decisions, e.g., road construction and factory planning, or provide evidence for scientific theory-building processes. To ensure the generation of credible and reproducible results, it is indispensable to conduct systematic and methodologically sound simulation studies. A variety of procedure models exist that structure and predetermine the process of a study. As a result, experimenters are often required to repetitively but thoroughly carry out a large number of experiments. Moreover, the process is not sufficiently specified and many important design decisions still have to be made by the experimenter, which might result in an unintentional bias of the results.
To facilitate the conducting of simulation studies and to improve both replicability and reproducibility of the generated results, this thesis proposes a procedure model for carrying out Hypothesis-Driven Simulation Studies, an approach that assists the experimenter during the design, execution, and analysis of simulation experiments. In contrast to existing approaches, a formally specified hypothesis becomes the key element of the study so that each step of the study can be adapted and executed to directly contribute to the verification of the hypothesis. To this end, the FITS language is presented, which enables the specification of hypotheses as assumptions regarding the influence specific input values have on the observable behavior of the model. The proposed procedure model systematically designs relevant simulation experiments, runs, and iterations that must be executed to provide evidence for the verification of the hypothesis. Generated outputs are then aggregated for each defined performance measure to allow for the application of statistical hypothesis testing approaches. Hence, the proposed assistance only requires the experimenter to provide an executable simulation model and a corresponding hypothesis to conduct a sound simulation study. With respect to the implementation of the proposed assistance system, this thesis presents an abstract architecture and provides formal specifications of all required services.
To evaluate the concept of Hypothesis-Driven Simulation Studies, two case studies are presented from the manufacturing domain. The introduced approach is applied to a NetLogo simulation model of a four-tiered supply chain. Two scenarios as well as corresponding assumptions about the model behavior are presented to investigate conditions for the occurrence of the bullwhip effect. Starting from the formal specification of the hypothesis, each step of a Hypothesis-Driven Simulation Study is presented in detail, with specific design decisions outlined, and generated inter- mediate data as well as final results illustrated. With respect to the comparability of the results, a conventional simulation study is conducted which serves as reference data. The approach that is proposed in this thesis is beneficial for both practitioners and scientists. The presented assistance system allows for a more effortless and simplified execution of simulation experiments while the efficient generation of credible results is ensured.
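The workflow described above can be illustrated with a minimal sketch, assuming a toy queueing model in place of the supply chain scenario: a hypothesis about the influence of an input value is checked by running replications per input setting, aggregating a performance measure, and applying a statistical test to the aggregated outputs. This is not the FITS language or the assistance system itself; all names and values are illustrative.

```python
# Hedged illustration of a hypothesis-driven study: run replications per input
# setting, aggregate a performance measure, and test the hypothesis statistically.
import random
from statistics import mean, stdev
from math import sqrt

def simulate_queue(arrival_rate, runs=30, horizon=1000, seed=42):
    """Toy simulation model: returns the mean queue length per replication."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        queue, total = 0, 0
        for _ in range(horizon):
            queue += rng.random() < arrival_rate        # arrival
            queue -= queue > 0 and rng.random() < 0.5   # service
            total += queue
        results.append(total / horizon)
    return results

# Hypothesis: increasing the arrival rate increases the mean queue length.
low, high = simulate_queue(0.3), simulate_queue(0.45)

# Welch's t statistic over the aggregated performance measure.
t = (mean(high) - mean(low)) / sqrt(stdev(high) ** 2 / len(high) + stdev(low) ** 2 / len(low))
print(f"mean(low)={mean(low):.2f}, mean(high)={mean(high):.2f}, t={t:.2f}")
```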
The following dissertation contains three studies examining academic boredom development in five high-track German secondary schools (AVG-project data; Study 1: N = 1,432; Study 2: N = 1,861; Study 3: N = 1,428). The investigation period spanned 3.5 years, with four waves of measurement from grades 5 to 8 (T1: 5th grade, after transition to secondary school; T2: 5th grade, after mid-term evaluations; T3: 6th grade, after mid-term evaluations; T4: 8th grade, after mid-term evaluations). All three studies featured cross-sectional and longitudinal analyses, separating and comparing the subject domains of mathematics and German.
Study 1 investigated academic boredom’s factorial structure alongside correlational and reciprocal relations of different forms of boredom and academic self-concept. Analyses included reciprocal effects models and latent correlation analyses. Results indicated that boredom intensity, boredom due to underchallenge, and boredom due to overchallenge are separable, correlated factors. Evidence for reciprocal relations between boredom and academic self-concept was limited.
Study 2 examined the effectiveness and efficacy of full-time ability grouping as a boredom intervention directed at the intellectually gifted. Analyses included propensity score matching and latent growth curve modelling. Results pointed to limited effectiveness and efficacy of full-time ability grouping regarding boredom reduction.
Study 3 explored gender differences in academic boredom development, mediated by academic interest, academic self-concept, and previous academic achievement. Analyses included measurement invariance testing and multiple-indicator multiple-cause (MIMIC) models. Results showed one-sided gender differences, with boys reporting less favorable boredom development compared to girls, even beyond the inclusion of relevant mediators.
Findings from all three studies were embedded into the theoretical framework of control-value theory (Pekrun, 2006; 2019; Pekrun et al., 2023). Limitations, directions for future research, and practical implications were acknowledged and discussed.
Overall, this dissertation yielded important insights into boredom’s conceptual complexity. This concerned factorial structure, developmental trajectories, interrelations to other learning variables, individual differences, and domain specificities.
Keywords: Academic boredom, boredom intensity, boredom due to underchallenge, boredom due to overchallenge, ability grouping, gender differences, longitudinal data analysis, control-value theory
Startups are essential agents for the evolution of economies and the creative destruction of established market conditions for the benefit of a more effective and efficient economy. Their significance is manifested in their drive for innovation and technological advancements, their creation of new jobs, their contribution to economic growth, and their impact on increased competition and market efficiency. Because of their newness and smallness, startups often face limited access to external financial resources. Extant research on entrepreneurial finance examines the capital structure of startups, various funding tools, financing environments in certain regions, and investor selection criteria, among other topics. My dissertation contributes to this research area by examining venture debt, a funding instrument that is becoming increasingly important. Prior research on venture debt has only investigated the business model of venture debt, the concept of venture debt, the selection criteria of venture debt providers, and the role of patents in the venture debt provider’s selection process. Based on qualitative and quantitative methods, the dissertation outlines the emergence of venture debt in Europe as well as its impact on startups, in order to open up a better understanding of venture debt.
The results of the qualitative studies indicate that venture debt was formed on the basis of a ‘Kirznerian’ entrepreneurial opportunity and that venture debt affects startups both positively and negatively in their development via different impact mechanisms.
Based on these results, the dissertation analyzes the empirical impact of venture debt on a startup’s ability to acquire additional financial resources as well as the role of the reputation of venture debt providers. The results suggest that venture debt increases the likelihood of acquiring additional financial resources via subsequent funding rounds and trade sales. In addition, a higher venture debt provider reputation increases the likelihood of acquiring additional financial resources via IPOs.
This cumulative thesis encompasses three studies focusing on the Weddell Sea region in the Antarctic. The first study produces and evaluates a high-quality data set of wind measurements for this region. The second study produces and evaluates a 15-year regional climate simulation for the Weddell Sea region. The third study produces and evaluates a climatology of low-level jets (LLJs) from the simulation data set. The evaluations were carried out in the three attached publications, and the produced data sets are published online.
In 2015/2016, the RV Polarstern undertook an Antarctic expedition in the Weddell Sea. During that time, we operated a Doppler wind lidar on board, running different scan patterns. The resulting data were evaluated, corrected, and processed, and we derived horizontal wind speed and direction for vertical profiles of up to 2 km height. The measurements cover 38 days with a temporal resolution of 10-15 minutes. A comparison with radiosonde data showed only minor differences.
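As a small illustration of the kind of derivation mentioned above (assuming zonal and meridional wind components are already available), horizontal wind speed and meteorological wind direction can be computed as follows; the function name and values are illustrative, not taken from the processing chain of the thesis.

```python
# Illustrative only: horizontal wind speed and meteorological wind direction
# from zonal (u) and meridional (v) components of a lidar-derived wind profile.
import math

def horizontal_wind(u, v):
    speed = math.hypot(u, v)
    direction = math.degrees(math.atan2(-u, -v)) % 360  # direction the wind comes from, clockwise from north
    return speed, direction

print(horizontal_wind(u=5.0, v=-5.0))  # wind from the northwest: (7.07..., 315.0)
```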
The resulting data set was used alongside other measurements to evaluate temperature and wind in the simulation data. The simulation data were produced with the regional climate model CCLM for the period 2002 to 2016 for the Weddell Sea region. Only small biases were found, except for a strong warm bias during winter near the surface of the Antarctic Plateau. We therefore adapted the model setup and were able to remove the bias in a second simulation.
This new simulation data set was then used to derive a climatology of low-level jets (LLJs). Statistics of the occurrence frequency, height, and wind speed of LLJs for the Weddell Sea region are presented along with other parameters. A further evaluation against measurements was also performed in the last study.
Do Personality Traits, Trust and Fairness Shape the Stock-Investing Decisions of an Individual?
(2023)
This thesis comprises three projects, all of which are fundamentally connected to the choices that individuals make about stock investments. Differences in stock market participation (SMP) across countries are large and difficult to explain. The second chapter focuses on differences between Germany (low SMP) and East Asian countries (mostly high SMP). The study hypothesis is that cultural differences regarding social preferences and attitudes towards inequality lead to different attitudes towards stock markets and subsequently to different SMPs. Using a large-scale survey, it is found that these factors can, indeed, explain a substantial amount of the country differences that other known factors (financial literacy, risk preferences, etc.) could not. This suggests that social preferences should be given a more central role in programs that aim to enhance SMP in countries like Germany. The third chapter documents the importance of trust as well as herding for stock ownership decisions. The findings show that trust as a general concept has no significant contribution to stock investment intention. A thorough examination of general trust elements reveals that in-group and out-group trust have an impact on individual stock market investment. Higher out-group trust directly influences a person's decision to invest in stocks, whereas higher in-group trust increases herding attitudes in stock investment decisions and thus can potentially increase the likelihood of stock investments as well. The last chapter investigates the significance of personality traits in stock investing and home bias in portfolio selection. Findings show that personality traits do indeed have a significant impact on stock investment and portfolio allocation decisions. Although the magnitude and significance of these traits differ between the two groups of investors, inexperienced and experienced, conscientiousness and neuroticism play an important role in stock investments and preferences. Moreover, high conscientiousness scores increase the desire to invest in stocks and the portfolio allocation to risky assets like stocks, discouraging home bias in asset allocation. Regarding neuroticism, a higher level increases home bias in portfolio selection and decreases the willingness to invest in stocks as well as the portfolio share allocated to them. Finally, when an investor has no prior experience with portfolio selection, patriotism generates home bias. For experienced investors, a low neuroticism score and high conscientiousness and openness scores appear to be constant factors in the decision to invest in a well-diversified international portfolio.
Intensiv diskutierte Aspekte der Politikwissenschaft heben zunehmend die Bedeutung von Strategiefähigkeit zur erfolgreichen Durchführung von Wahlkämpfen für Parteien hervor. Der Widerspruch zwischen der mit den Implikationen der modernen Mediengesellschaft einhergehenden unterstellten Akteursfähigkeit der Parteien und ihrer kollektiven heterogenen Interessen- und Organisationsvielfalt bleibt dabei bestehen. Die Fokussierung der Parteien auf das Ziel der Stimmenmaximierung bringt unter den sich wandelnden Rahmenbedingungen Veränderungen der Binnenstrukturen mit sich. So diskutieren Parteienforscher seit Längerem die Notwendigkeit eines vierten Parteitypus als Nachfolger von Kirchheimers Volkspartei (1965). Verschiedene dieser Ansätze berücksichtigen primär die Wahlkampffokussierung der Parteien, während andere vor allem auf den gesteigerten Strategiebedarf abzielen. Auch die Wechselwirkungen mit den Erfordernissen der Mediengesellschaft sowie die Auswirkungen des gesellschaftlichen Wandels stehen im Vordergrund zahlreicher Untersuchungen. Die Arbeit von Uwe Jun (2004), der mit dem Modell der professionalisierten Medienkommunikationspartei auch die organisatorischen und programmatischen Transformationsaspekte des Parteiwandels beleuchtet, liefert einen bemerkenswerten Beitrag zur Party-Change-Debatte und bietet durch die angeschlossene vergleichende exemplarische Fallstudie eine praxisnahe Einordnung. Die geringe empirische Relevanz, die Jun seinem Parteityp anhand der Untersuchung von SPD und New Labour zwischen 1995 und 2005 bescheinigt, soll in dieser Arbeit relativiert werden, indem der Parteiwandel der deutschen Großparteien seit der Wiedervereinigung durch die Untersuchung ihrer Wahlkampffähigkeit aufgezeigt wird. Anhand eines längsschnittlichen Vergleichs der Bundestagswahlkämpfe von SPD und CDU zwischen 1990 und 2013 soll die Plausibilität dieses vierten Parteitypus überprüft werden. Hierzu wird die Entwicklung der Strategie- und Wahlkampffähigkeit beider Großparteien in den Bundestagswahlkämpfen seit 1990 untersucht, die Ergebnisse werden miteinander verglichen und in Bezug auf den Parteiwandel eingeordnet.
Dass sich Parteien genau wie ihre gesellschaftliche und politische Umwelt im Wandel befinden, ist nicht zu bestreiten und seit Langem viel diskutierter Gegenstand der Parteienforschung. „Niedergangsdiskussion“, Mitgliederschwund, Nicht- und Wechselwähler, Politik- und Parteienverdrossenheit, Kartellisierung und Institutionalisierung von Parteien sind nur einige der in diesem Kontext geläufigen Schlagwörter. Prozesse der Individualisierung, Globalisierung und Mediatisierung führen zu veränderten Rahmenbedingungen, unter denen Parteien sich behaupten müssen. Diese Veränderungen in der äußeren Umwelt wirken sich nachhaltig auf das parteipolitische Binnenleben, auf Organisationsstrukturen und Programmatik aus. Die Parteienforschung hat daher schon vor zwanzig Jahren begonnen, ein typologisches Nachfolgemodell der Volkspartei zu diskutieren, das diesen Wandel berücksichtigt. Verschiedene typologische Konstruktionen, z. B. von Panebianco (1988), Katz und Mair (1995) oder von Beyme (2000), erfassen wichtige Facetten des Strukturwandels politischer Parteien und stellen mehrheitlich plausible typologische Konzepte vor, die die Parteien in ihrem Streben nach Wählerstimmen und Regierungsmacht zutreffend charakterisieren. Die Parteienforschung stimmt bezüglich des Endes der Volksparteiära mehrheitlich überein. Bezüglich der Nachfolge konnte sich unter den neueren vorgeschlagenen Typen jedoch kein vierter Typ als verbindliches Leitmodell etablieren. Bei genauerer Betrachtung weichen die in den verschiedenen Ansätzen für einen vierten Parteitypus hervorgehobenen Merkmale (namentlich die Professionalisierung des Parteiapparates, die Berufspolitikerdominanz, Verstaatlichung und Kartellbildung sowie die Fixierung auf die Medien) wenig von jüngeren Modellvorschlägen ab und bedürfen daher eher einer Ergänzung. Die in der Regel mehrdimensionalen entwicklungstypologischen Verlaufstypen setzen seit den 1980er Jahren unterschiedliche Schwerpunkte und warten mit vielen Vorschlägen der Einordnung auf. Einer der jüngsten Ansätze von Uwe Jun aus dem Jahr 2004, der das typologische Konzept der professionalisierten Medienkommunikationspartei einführt, macht deutlich, dass die Diskussion um Gestalt und Ausprägungen des vierten Parteityps noch in vollem Gang und für weitere Vorschläge offen ist – der „richtige“ Typ also noch nicht gefunden wurde. Jun bleibt in seiner Untersuchung den zentralen Transformationsleitfragen nach der Ausgestaltung der Parteiorganisation, der ideologisch-programmatischen Orientierung und der strategisch-elektoralen Wählerorientierung verhaftet und setzt diese Elemente in den Fokus sich wandelnder Kommunikationsstrategien. Die bisher in parteitypologischen Arbeiten mitunter vernachlässigte Komponente der strukturellen Strategiefähigkeit als Grundlage zur Entwicklung ebensolcher Reaktionsstrategien wird bei Jun angestoßen und soll in dieser Arbeit aufgegriffen und vertieft werden.
Der aktuellen Party-Change-Diskussion zum Trotz scheint die Annahme, dass Parteien, die sich verstärkt der Handlungslogik der Massenmedien unterwerfen, deren strategischen Anforderungen durch interne Adaptionsverfahren auch dauerhaft gerecht zu werden vermögen, nicht immer zutreffend. Die Veränderungen der Kommunikationsstrategien als Reaktion auf gesamtgesellschaftliche Wandlungsprozesse stehen zwar im Zentrum der Professionalisierungsbemühungen der politischen Akteure, bleiben aber in ihrer Wirkung eingeschränkt. Wenngleich in den Parteien das Wissen um die Notwendigkeiten (medialer) Strategiefähigkeit besteht und die Parteien hierauf mit Professionalisierung, organisatorischen und programmatischen Anpassungsleistungen und der Herausbildung strategischer Zentren reagieren, so ist mediengerechtes strategisches Agieren noch lange keine natürliche Kernkompetenz der Parteien. Vor allem in Wahlkampfzeiten, die aufgrund abnehmender Parteibindungen und zunehmender Wählervolatilität für die Parteien zum eigentlich zentralen Moment der Parteiendemokratie werden, wird mediengerechtes Handeln zum wesentlichen Erfolgsfaktor. Strategiefähigkeit wird hierbei zur entscheidenden Voraussetzung und scheint zudem in diesen Phasen von den Parteien erfolgreicher umgesetzt zu werden als im normalen politischen Alltag. Die wahlstrategische Komponente findet in Juns typologischer Konstruktion wenig Beachtung und soll in dieser Arbeit daher als ergänzendes Element hinzugefügt werden. Arbeitshypothese: Die beiden deutschen Großparteien berufen sich auf unterschiedliche Entstehungsgeschichten, die sich bis in die Gegenwart auf die Mitglieder-, Issue- und Organisationsstrukturen von SPD und CDU auswirken und die Parteien in ihren Anpassungsleistungen an die sich wandelnde Gesellschaft beeinflussen. Beide Parteien versuchen, auf die veränderten sozialen und politischen Rahmenbedingungen und den daraus resultierenden Bedeutungszuwachs von politischer Kommunikationsplanung mit einem erhöhten Maß an Strategiefähigkeit und kommunikativer Kompetenz zu reagieren. Diese Entwicklung tritt seit der deutschen Wiedervereinigung umso stärker in Erscheinung, als nach 1990 die Bindekraft der Volksparteien nochmals nachließ, sodass die Parteien sich zunehmend gezwungen sehen, die „lose verkoppelten Anarchien“ in wahlstrategische Medienkommunikationsparteien zu transformieren. Diesen vierten Parteityp kennzeichnet vor allem die zunehmende Bemühung um Strategiefähigkeit, die mittels Organisationsstrukturen und programmatischer Anpassungsleistungen die Effizienz der elektoralen Ausrichtung verbessern soll. Insgesamt geht die Party-Change-Forschung davon aus, dass die Parteien sich zunehmend angleichen. Dies gilt es in dieser Studie zu überprüfen. Unter Berücksichtigung unterschiedlicher Entwicklungspfade kann vermutet werden, dass auch die Transformationsprozesse bei SPD und CDU in unterschiedlicher Weise verlaufen. Wenngleich die SPD über einen höheren Strategiebedarf und die größere Innovationsbereitschaft zu verfügen scheint, werden auf Seiten der Union potentiell strategiefähigere Strukturen vermutet, die die erfolgreiche Umsetzung von Wahlkampfstrategien erleichtern. Die historische Entwicklung und der Aspekt der Historizität spielen in diesem Kontext eine Rolle.
Zusätzlich spielen individuelle Führungspersönlichkeiten eine zentrale Rolle in innerparteilichen Transformationsprozessen; sie sind für die Ausprägung strategiefähiger Strukturen oftmals von größerer Bedeutung als institutionalisierte Strukturen. Im Vordergrund steht die Untersuchung des Parteiwandels anhand der Veränderung der Kommunikationsstrategien der Parteien im Allgemeinen sowie der Strategiefähigkeit in Wahlkämpfen im Besonderen, da diese als zentrale Merkmale für den vierten Parteityp in Anlehnung an die professionalisierte Medienkommunikationspartei (Jun 2004) gewertet werden sollen. Strategiefähigkeit soll dabei anhand der Kriterien des Umgangs der Parteien mit Programmatik, Organisation und externen Einflussfaktoren in Wahlkämpfen operationalisiert werden. Die Analyse untersucht sowohl das Handeln einzelner Personen wie auch die Rolle der Partei als Gesamtorganisation. Die Arbeit besteht aus zehn Kapiteln und gliedert sich in zwei Blöcke: einen theoretisch-konzeptionellen Teil, der die in der Perspektive dieser Arbeit zentralen Grundlagen und Rahmenbedingungen zusammenführt, sowie die sich daran anschließende Untersuchung der Konzeption und Implementation von Kommunikationskampagnen im Wahlkampf seit 1990. Das aktuell in die politikwissenschaftliche Diskussion eingebrachte Feld der politischen Strategiefähigkeit (Raschke/Tils 2007) wird in ausführlicher theoretischer Grundlegung bisher zwar mit den Implikationen der Medienkommunikation und damit einhergehend auch den organisatorischen und programmatischen Strukturmerkmalen der Parteien verknüpft, diese Verknüpfung erfolgte allerdings oft ohne vertiefte Berücksichtigung des Parteiwandels. Dies soll in diesem Beitrag daher versucht werden. Der Diskursanalyse des Strategiebegriffes in Wahlkampfsituationen folgt die detaillierte Darstellung der drei Operationalisierungsparameter, die in die Festlegung des Parteityps münden. Die Diskussion idealtypischer Wahlkampfmodelle als theoretischer Bezugsrahmen für die Bewertung der Wahlkampagnen ergänzt den theoretisch-konzeptionellen Bezugsrahmen. Die in der Literatur oftmals normativ gestalteten Darstellungen idealtypischer politischer Strategie sollen im letzten Teil der Arbeit auf ihre Umsetzbarkeit im parteipolitischen Alltag überprüft werden, und dies nicht nur anhand einzelner, miteinander nicht in Zusammenhang stehender Ereignisse, sondern anhand der sich periodisch unter vergleichbaren Bedingungen wiederholenden Wahlkämpfe. Dafür werden die jeweiligen Ausgangs- und Rahmenbedingungen der einzelnen Wahlkämpfe sowie die zuvor dargelegten Elemente professionalisierter Wahlkampagnen für die Kampagnen von SPD und CDU seit 1990 dargestellt. Aus diesen Gegenüberstellungen soll im Anschluss der längsschnittliche Vergleich der Strategiefähigkeit und Kommunikationskompetenz von SPD und CDU abgeleitet werden.
Das Ziel dynamischer Mikrosimulationen ist es, die Entwicklung von Systemen über das Verhalten der einzelnen enthaltenen Bestandteile zu simulieren, um umfassende szenariobasierte Analysen zu ermöglichen. Im Bereich der Wirtschafts- und Sozialwissenschaften wird der Fokus üblicherweise auf Populationen bestehend aus Personen und Haushalten gelegt. Da politische und wirtschaftliche Entscheidungsprozesse meist auf lokaler Ebene getroffen werden, bedarf es zudem kleinräumiger Informationen, um gezielte Handlungsempfehlungen ableiten zu können. Das stellt Forschende wiederum vor große Herausforderungen im Erstellungsprozess regionalisierter Simulationsmodelle. Dieser Prozess reicht von der Generierung geeigneter Ausgangsdatensätze über die Erfassung und Umsetzung der dynamischen Komponenten bis hin zur Auswertung der Ergebnisse und Quantifizierung von Unsicherheiten. Im Rahmen dieser Arbeit werden ausgewählte Komponenten, die für regionalisierte Mikrosimulationen von besonderer Relevanz sind, beschrieben und systematisch analysiert.
Zunächst werden in Kapitel 2 theoretische und methodische Aspekte von Mikrosimulationen vorgestellt, um einen umfassenden Überblick über verschiedene Arten und Möglichkeiten der Umsetzung dynamischer Modellierungen zu geben. Im Fokus stehen dabei die Grundlagen der Erfassung und Simulation von Zuständen und Zustandsänderungen sowie die damit verbundenen strukturellen Aspekte im Simulationsprozess.
Sowohl für die Simulation von Zustandsänderungen als auch für die Erweiterung der Datenbasis werden primär logistische Regressionsmodelle zur Erfassung und anschließenden wahrscheinlichkeitsbasierten Vorhersage der Bevölkerungsstrukturen auf Mikroebene herangezogen. Die Schätzung beruht insbesondere auf Stichprobendaten, die in der Regel neben einem eingeschränkten Stichprobenumfang keine oder nur unzureichende regionale Differenzierungen zulassen. Daher können bei der Vorhersage von Wahrscheinlichkeiten erhebliche Differenzen zu bekannten Totalwerten entstehen. Um eine Harmonisierung mit den Totalwerten zu erreichen, lassen sich Methoden zur Anpassung von Wahrscheinlichkeiten – sogenannte Alignmentmethoden – anwenden. In der Literatur werden zwar unterschiedliche Möglichkeiten beschrieben, über die Auswirkungen dieser Verfahren auf die Güte der Modelle ist jedoch kaum etwas bekannt. Zur Beurteilung verschiedener Techniken werden diese im Rahmen von Kapitel 3 in umfassenden Simulationsstudien unter verschiedenen Szenarien umgesetzt. Hierbei kann gezeigt werden, dass durch die Einbindung zusätzlicher Informationen im Modellierungsprozess deutliche Verbesserungen sowohl bei der Schätzung der Parameter als auch bei der Vorhersage der Wahrscheinlichkeiten erzielt werden können. Zudem lassen sich dadurch auch bei fehlenden regionalen Identifikatoren in den Modellierungsdaten kleinräumige Wahrscheinlichkeiten erzeugen. Insbesondere die Maximierung der Likelihood des zugrundeliegenden Regressionsmodells unter der Nebenbedingung, dass die bekannten Totalwerte eingehalten werden, weist in allen Simulationsstudien überaus gute Ergebnisse auf.
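Zur Veranschaulichung eine minimale Skizze einer einfachen Alignment-Variante (eine rein illustrative Annahme, nicht das in der Arbeit untersuchte Verfahren): die Logits aller vorhergesagten Wahrscheinlichkeiten werden um eine Konstante verschoben, bis ihre Summe einem bekannten Totalwert entspricht.

import numpy as np
from scipy.optimize import brentq

def align_probabilities(p, total):
    """Verschiebt die Logits der Wahrscheinlichkeiten p um eine Konstante delta,
    so dass die Summe der angepassten Wahrscheinlichkeiten dem bekannten
    Totalwert entspricht (einfaches Logit-Shift-Alignment)."""
    logits = np.log(p / (1.0 - p))
    gap = lambda delta: (1.0 / (1.0 + np.exp(-(logits + delta)))).sum() - total
    delta = brentq(gap, -20.0, 20.0)
    return 1.0 / (1.0 + np.exp(-(logits + delta)))

# Beispiel: fünf vorhergesagte Wahrscheinlichkeiten, bekannter Totalwert 3
p = np.array([0.2, 0.4, 0.5, 0.7, 0.9])
print(align_probabilities(p, total=3.0).sum())   # 3.0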
Als eine der einflussreichsten Komponenten in regionalisierten Mikrosimulationen erweist sich die Umsetzung regionaler Mobilität. Gleichzeitig finden Wanderungen in vielen Mikrosimulationsmodellen keine oder nur unzureichende Beachtung. Durch den unmittelbaren Einfluss auf die gesamte Bevölkerungsstruktur führt ein Ignorieren jedoch bereits bei einem kurzen Simulationshorizont zu starken Verzerrungen. Während für globale Modelle die Integration von Wanderungsbewegungen über Landesgrenzen ausreicht, müssen in regionalisierten Modellen auch Binnenwanderungsbewegungen möglichst umfassend nachgebildet werden. Zu diesem Zweck werden in Kapitel 4 Konzepte für Wanderungsmodule erstellt, die zum einen eine unabhängige Simulation auf regionalen Subpopulationen und zum anderen eine umfassende Nachbildung von Wanderungsbewegungen innerhalb der gesamten Population zulassen. Um eine Berücksichtigung von Haushaltsstrukturen zu ermöglichen und die Plausibilität der Daten zu gewährleisten, wird ein Algorithmus zur Kalibrierung von Haushaltswahrscheinlichkeiten vorgeschlagen, der die Einhaltung von Benchmarks auf Individualebene ermöglicht. Über die retrospektive Evaluation der simulierten Migrationsbewegungen wird die Funktionalität der Wanderungskonzepte verdeutlicht. Darüber hinaus werden über die Fortschreibung der Population in zukünftige Perioden divergente Entwicklungen der Einwohnerzahlen durch verschiedene Konzepte der Wanderungen analysiert.
Eine besondere Herausforderung in dynamischen Mikrosimulationen stellt die Erfassung von Unsicherheiten dar. Durch die Komplexität der gesamten Struktur und die Heterogenität der Komponenten ist die Anwendung klassischer Methoden zur Messung von Unsicherheiten oft nicht mehr möglich. Zur Quantifizierung verschiedener Einflussfaktoren werden in Kapitel 5 varianzbasierte Sensitivitätsanalysen vorgeschlagen, die aufgrund ihrer enormen Flexibilität auch direkte Vergleiche zwischen unterschiedlichsten Komponenten ermöglichen. Dabei erweisen sich Sensitivitätsanalysen nicht nur für die Erfassung von Unsicherheiten, sondern auch für die direkte Analyse verschiedener Szenarien, insbesondere zur Evaluation gemeinsamer Effekte, als überaus geeignet. In Simulationsstudien wird die Anwendung im konkreten Kontext dynamischer Modelle veranschaulicht. Dadurch wird deutlich, dass zum einen große Unterschiede hinsichtlich verschiedener Zielwerte und Simulationsperioden auftreten, zum anderen aber auch immer der Grad an regionaler Differenzierung berücksichtigt werden muss.
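Als Orientierung eine minimale Skizze eines Sobol'-Index erster Ordnung nach dem gängigen Pick-and-Freeze-Schema (Funktions- und Variablennamen sind illustrativ; die in der Arbeit verwendeten Sensitivitätsmaße können im Detail abweichen):

import numpy as np

def first_order_sobol(model, sampler, n=100000, d=3, seed=1):
    """Schätzt Sobol'-Indizes erster Ordnung S_i = V[E(Y|X_i)] / V(Y)
    mit dem üblichen Pick-and-Freeze-Schätzer (zwei unabhängige Stichproben A, B)."""
    rng = np.random.default_rng(seed)
    A, B = sampler(rng, n, d), sampler(rng, n, d)
    f_A, f_B = model(A), model(B)
    var_y = np.var(np.concatenate([f_A, f_B]), ddof=1)
    s = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # Spalte i aus B "einfrieren"
        s[i] = np.mean(f_B * (model(ABi) - f_A)) / var_y
    return s

# Beispiel: lineares Testmodell; erwartete Indizes ca. 0.76, 0.19, 0.05
uniform = lambda rng, n, d: rng.uniform(0.0, 1.0, size=(n, d))
linear = lambda X: 4.0 * X[:, 0] + 2.0 * X[:, 1] + 1.0 * X[:, 2]
print(first_order_sobol(linear, uniform))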
Kapitel 6 fasst die Erkenntnisse der vorliegenden Arbeit zusammen und gibt einen Ausblick auf zukünftige Forschungspotentiale.
Although a substantial amount of research on Cauchy transforms has been carried out, many questions remain open. For example, in the case of representation theorems, i.e. the question when a function can be represented as a Cauchy transform, there is 'still no completely satisfactory answer' ([9], p. 84). There are characterizations for measures on the circle as presented in the monograph [7] and for general compactly supported measures on the complex plane as presented in [27]. However, there seems to exist no systematic treatment of the Cauchy transform as an operator on $L_p$ spaces and weighted $L_p$ spaces on the real axis.
This is where this thesis comes in: we are interested in developing several characterizations for the representability of a function by Cauchy transforms of $L_p$ functions. Moreover, we will attack the issue of integrability of Cauchy transforms of functions and measures, a topic which is only partly explored (see [43]). We will develop different approaches involving Fourier transforms and potential theory and investigate sufficient conditions and characterizations.
For our purposes, we shall need some notation and the concept of Hardy spaces, which will be introduced in the preliminary Chapter 1. Moreover, we introduce Fourier transforms and their complex analogue, the Fourier-Laplace transform. This will prove extremely useful due to the close connection between Cauchy and Fourier(-Laplace) transforms.
In the second chapter we begin our research with a discussion of the Cauchy transformation on the classical (unweighted) $L_p$ spaces. We start with the boundary behavior of Cauchy transforms, including an adapted version of the Sokhotski-Plemelj formula. This result will turn out to be helpful for the determination of the image of $L_p(\R)$ under the Cauchy transformation for $p\in(1,\infty).$ The cases $p=1$ and $p=\infty$ play special roles here, which justifies treating them in separate sections. For $p=1$ we involve the real Hardy space $H_{1}(\R),$ whereas the case $p=\infty$ is attacked by an approach incorporating intersections of Hardy spaces and certain subspaces of $L_{\infty}(\R).$
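For orientation, the central objects take the following standard form (normalizations differ across the literature, so this is only the common textbook version, not necessarily the one adopted in the thesis): for $f\in L_p(\R)$ the Cauchy transform is
$$(Cf)(z)=\frac{1}{2\pi i}\int_{\R}\frac{f(t)}{t-z}\,dt,\qquad z\in\mathbb{C}\setminus\R,$$
and the Sokhotski-Plemelj formulae describe its boundary values,
$$\lim_{\varepsilon\to 0^{+}}(Cf)(x\pm i\varepsilon)=\pm\tfrac{1}{2}f(x)+\frac{1}{2\pi i}\,\mathrm{p.v.}\int_{\R}\frac{f(t)}{t-x}\,dt.$$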
The third chapter prepares the ground for the study of the Cauchy transformation on subspaces of $L_{p}(\R).$ We give a short overview of the basic facts about Cauchy transforms of measures and then proceed to Cauchy transforms of functions with support in a closed set $X\subset\R.$ Our goal is to build up the main theory on which we can fall back in the subsequent chapters.
The fourth chapter deals with Cauchy transforms of functions and measures supported on an unbounded interval which is not the entire real axis. For convenience we restrict ourselves to the interval $[0,\infty).$ Bringing the Fourier-Laplace transform into play once again, we deduce complex characterizations of the Cauchy transforms of functions in $L_{2}(0,\infty).$ Moreover, we analyze the behavior of Cauchy transforms on several half-planes and use these results for a fairly general geometric characterization. In the second section of this chapter, we focus on Cauchy transforms of measures with support in $[0,\infty).$ In this context, we derive a reconstruction formula for these Cauchy transforms, which holds under rather general conditions, as well as results on their behavior on the left half-plane. We close this chapter with rather technical real-type conditions and characterizations for Cauchy transforms of functions in $L_p(0,\infty),$ based on an approach in [82].
The most common case of Cauchy transforms, those of compactly supported functions or measures, is the subject of Chapter 5. After complex and geometric characterizations originating from ideas similar to those of the fourth chapter, we adapt a functional-analytic approach from [27] to special measures, namely those with densities with respect to a given complex measure $\mu.$ The chapter closes with a study of the Cauchy transformation on weighted $L_p$ spaces. Here, we choose an ansatz via the finite Hilbert transform on $(-1,1).$
The sixth chapter is devoted to the issue of integrability of Cauchy transforms. Since this topic has not yet received a comprehensive treatment in the literature, we start with an introduction to weighted Bergman spaces and general results on the action of the Cauchy transformation on these spaces. Afterwards, we combine the theory of Zen spaces with Cauchy transforms by using once again their connection with Fourier transforms. Here, we encounter recent general Paley-Wiener theorems. Lastly, we attack the issue of integrability of Cauchy transforms by means of potential theory. To this end, we derive a Fourier integral formula for the logarithmic energy in one and multiple dimensions and give applications to Fourier and hence Cauchy transforms.
Two appendices are annexed to this thesis. The first one covers important definitions and results from measure theory with a special focus on complex measures. The second appendix contains Cauchy transforms of frequently used measures and functions with detailed calculations.
Die Dissertation beschäftigt sich mit einer neuartigen Klasse von Branch-and-Bound-Algorithmen, deren Unterschied zu klassischen Branch-and-Bound-Algorithmen darin besteht, dass das Branching durch die Addition nicht-negativer Strafterme zur Zielfunktion erfolgt anstatt durch das Hinzufügen weiterer Nebenbedingungen. Die Arbeit zeigt die theoretische Korrektheit des Algorithmusprinzips für verschiedene allgemeine Problemklassen und evaluiert die Methode für verschiedene konkrete Problemklassen. Für diese Problemklassen, genauer monotone und nicht-monotone gemischt-ganzzahlige lineare Komplementaritätsprobleme sowie gemischt-ganzzahlige lineare Probleme, präsentiert die Arbeit verschiedene problemspezifische Verbesserungsmöglichkeiten und evaluiert diese numerisch. Weiterhin vergleicht die Arbeit die neue Methode mit verschiedenen Benchmark-Methoden, mit größtenteils guten Ergebnissen, und gibt einen Ausblick auf weitere Anwendungsgebiete und zu beantwortende Forschungsfragen.
Allocating scarce resources efficiently is a major task in mechanism design. One of the most fundamental problems in mechanism design theory is the problem of selling a single indivisible item to bidders with private valuations for the item. In this setting, the classic Vickrey auction (Vickrey, 1961) describes a simple mechanism to implement a social welfare maximizing allocation.
The Vickrey auction for a single item asks every buyer to report its valuation and allocates the item to the highest bidder for a price of the second highest bid. This auction features some desirable properties, e.g., buyers cannot benefit from misreporting their true value for the item (incentive compatibility) and the auction can be executed in polynomial time.
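A minimal sketch of this single-item rule (the tie-breaking order and the zero price for a lone bidder are illustrative assumptions not specified above):

def vickrey_auction(bids):
    """Single-item second-price (Vickrey) auction: the highest bidder wins
    and pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Example: B wins and pays the second-highest bid of 7.
print(vickrey_auction({"A": 5.0, "B": 9.0, "C": 7.0}))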
However, when there is more than one item for sale and buyers' valuations for sets of items are not additive, or the set of feasible allocations is constrained, then constructing mechanisms that implement efficient allocations and have polynomial runtime might be very challenging. Consider a single seller selling $n\in \N$ heterogeneous indivisible items to several bidders. The Vickrey-Clarke-Groves auction generalizes the idea of the Vickrey auction to this multi-item setting. Naturally, every bidder has an intrinsic value for every subset of items. As in the Vickrey auction, bidders report their valuations (now for every subset of items!). Then, the auctioneer computes a social welfare maximizing allocation according to the submitted bids and charges each winning buyer the social cost that its winning imposes on the rest of the buyers. (This is the analogue of charging the second-highest bid to the winning bidder in the single-item Vickrey auction.) It turns out that the Vickrey-Clarke-Groves auction is also incentive compatible, but it poses some problems: say for $n=40$, each bidder would have to submit $2^{40}-1$ values (one for each nonempty subset of the ground set). Thus, asking every bidder for its valuation might be impossible due to time complexity issues. Therefore, even though the Vickrey-Clarke-Groves auction implements a social welfare maximizing allocation in this multi-item setting, it might be impractical, and there is a need for alternative approaches to implement social welfare maximizing allocations.
This dissertation presents the results of three independent research papers, all of which tackle the problem of implementing efficient allocations in different combinatorial settings.
Energy transport networks are one of the most important infrastructures for the planned energy transition. They form the interface between energy producers and consumers, and their features make them good candidates for the tools that mathematical optimization can offer. Nevertheless, the operation of energy networks comes with two major challenges. First, the nonconvexity of the equations that model the physics in the network renders the resulting problems extremely hard to solve for large-scale networks. Second, the uncertainty associated with the behavior of the different agents involved, the production of energy, and the consumption of energy makes the resulting problems hard to solve if a representative description of uncertainty is to be considered.
In this cumulative dissertation we study adaptive refinement algorithms designed to cope with the nonconvexity and stochasticity of equations arising in energy networks. Adaptive refinement algorithms approximate the original problem by sequentially refining the model of a simpler optimization problem. More specifically, in this thesis, the focus of the adaptive algorithm is on adapting the discretization and description of a set of constraints.
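A highly simplified sketch of such a loop (the structure and names are generic assumptions, not the concrete algorithms developed in the appended papers):

def adaptive_refinement(model, solve, estimate_errors, refine, tol, max_iter=50):
    """Generic adaptive refinement loop: solve the current coarse model, estimate
    component-wise model errors at the solution, refine only the marked components
    (e.g. a finer discretization or a larger scenario set) and repeat."""
    for _ in range(max_iter):
        solution = solve(model)
        errors = estimate_errors(model, solution)
        marked = [i for i, e in enumerate(errors) if e > tol]   # simple threshold marking
        if not marked:
            return model, solution
        model = refine(model, marked)
    raise RuntimeError("tolerance not reached within max_iter refinements")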
In the first part of this thesis, we propose a generalization of the different adaptive refinement ideas that we study. We sequentially describe model catalogs, error measures, marking strategies, and switching strategies that are used to set up the adaptive refinement algorithm. Afterward, the effect of the adaptive refinement algorithm on two energy network applications is studied. The first application treats the stationary operation of district heating networks. Here, the strength of adaptive refinement algorithms for approximating the ordinary differential equation that describes the transport of energy is highlighted. We introduce the resulting nonlinear problem, consider network expansion, and obtain realistic controls by applying the adaptive refinement algorithm. The second application concerns quantile-constrained optimization problems and highlights the ability of the adaptive refinement algorithm to cope with large scenario sets via clustering. We introduce the resulting mixed-integer linear problem, discuss generic solution techniques, make the link with the generalized framework, and measure the impact of the proposed solution techniques.
The second part of this thesis assembles the papers that inspired the contents of the first part of this thesis. Hence, they describe in detail the topics that are covered and will be referenced throughout the first part.
The Nonlocal Neumann Problem
(2023)
Instead of presuming only local interaction, we assume nonlocal interactions. By doing so, mass at a point in space does not only interact with an arbitrarily small neighborhood surrounding it, but it can also interact with mass somewhere far, far away. Thus, mass jumping from one point to another is also a possibility we can consider in our models. So, if we consider a region in space, this region interacts in a local model at most with its closure, while in a nonlocal model this region may interact with the whole space. Therefore, in the formulation of nonlocal boundary value problems the enforcement of boundary conditions on the topological boundary may not suffice. Furthermore, choosing the complement as nonlocal boundary may work for Dirichlet boundary conditions, but in the case of Neumann boundary conditions this may lead to an overfitted model.
In this thesis, we introduce a nonlocal boundary and study the well-posedness of a nonlocal Neumann problem. We present sufficient assumptions which guarantee the existence of a weak solution. As in a local model, our weak formulation is derived from an integration by parts formula. However, we also study a different weak formulation where the nonlocal boundary conditions are incorporated into the nonlocal diffusion-convection operator.
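For orientation, in the nonlocal calculus literature operators and Neumann-type conditions of roughly the following shape are common (a generic prototype, not necessarily the exact operator and nonlocal boundary studied in this thesis): for a domain $\Omega\subset\R^{n}$ and a nonnegative kernel $\gamma$,
$$(\mathcal{L}u)(x)=\int_{\R^{n}}\big(u(x)-u(y)\big)\,\gamma(x,y)\,dy,\qquad x\in\Omega,$$
with a Neumann-type condition prescribing the nonlocal flux on a suitable nonlocal boundary $\Gamma$,
$$(\mathcal{N}u)(y)=\int_{\Omega}\big(u(y)-u(x)\big)\,\gamma(x,y)\,dx=g(y),\qquad y\in\Gamma.$$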
After studying the well-posedness of our nonlocal Neumann problem, we consider some applications of this problem. For example, we take a look at a system of coupled Neumann problems and analyze the difference between a local coupled Neumann problem and a nonlocal one. Furthermore, we let our Neumann problem be the state equation of an optimal control problem which we then study. We also add a time component to our Neumann problem and analyze this nonlocal parabolic evolution equation.
As mentioned before, in a local model mass at a point in space only interacts with an arbitrarily small neighborhood surrounding it. We analyze what happens if we consider a family of nonlocal models where the interaction shrinks so that, in the limit, mass at a point in space only interacts with an arbitrarily small neighborhood surrounding it.
Traditional workflow management systems support process participants in fulfilling business tasks through guidance along a predefined workflow model.
Flexibility has gained a lot of attention in recent decades through a shift from mass production to customization. Various approaches to workflow flexibility exist that either require extensive knowledge acquisition and modelling effort or an active intervention during execution and re-modelling of deviating behaviour. Flexibility by deviation aims to compensate for both of these disadvantages by allowing alternative, unforeseen execution paths at run time without requiring the process participant to adapt the workflow model. However, the implementation of this approach has received little research attention so far.
This work proposes a novel approach to flexibility by deviation. The approach aims at supporting process participants during the execution of a workflow by suggesting work items based on predefined strategies or experiential knowledge, even in case of deviations. The developed concepts combine two renowned methods from the field of artificial intelligence: constraint satisfaction problem solving and process-oriented case-based reasoning. The approach mainly consists of a constraint-based workflow engine in combination with a case-based deviation management. The declarative representation of workflows through constraints allows for implicit flexibility and a simple way to restore consistency in case of deviations. Furthermore, the combined model, integrating procedural with declarative structures through a transformation function, increases the capabilities for flexibility. For an adequate handling of deviations, the methodology of case-based reasoning fits perfectly through its premise that similar problems have similar solutions. Thus, previously made experiences are transferred to the problem currently regarded, under the assumption that a similar deviation has been handled successfully in the past.
Necessary foundations from the field of workflow management with a focus on flexibility are presented first.
As a formal foundation, a constraint-based workflow model was developed that allows for a declarative specification of primarily sequential dependencies between tasks. Procedural and declarative models can be combined in the approach, as a transformation function was specified that converts procedural workflow models into declarative constraints.
One main component of the approach is the constraint-based workflow engine that utilizes this declarative model as input for a constraint solving algorithm. This algorithm computes the worklist, which is proposed to the process participant during workflow execution. With predefined deviation handling strategies that determine how the constraint model is modified in order to restore consistency, the support is continuous even in case of deviations.
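As a drastically simplified stand-in for this constraint solving step (the task names and the pure precedence semantics are illustrative assumptions), the worklist computation can be pictured as follows:

def compute_worklist(tasks, precedence, completed):
    """Propose executable work items: a task is offered if it has not been completed
    yet and all of its predecessors, according to the sequential constraints,
    have already been completed."""
    completed = set(completed)
    predecessors = {t: set() for t in tasks}
    for before, after in precedence:
        predecessors[after].add(before)
    return [t for t in tasks if t not in completed and predecessors[t] <= completed]

# Example: after recording a defect, notifying the contractor and setting a deadline become available.
tasks = ["record defect", "notify contractor", "set deadline", "verify fix"]
precedence = [("record defect", "notify contractor"),
              ("record defect", "set deadline"),
              ("notify contractor", "verify fix")]
print(compute_worklist(tasks, precedence, completed={"record defect"}))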
The second major component of the proposed approach is the case-based deviation management, which aims at improving the support of process participants on the basis of experiential knowledge. For the retrieve phase, a sophisticated similarity measure was developed that integrates specific characteristics of deviating workflows and combines several sequence similarity measures. Two alternative methods were developed for the reuse phase, a null adaptation and a generative adaptation. The null adaptation simply proposes tasks from the most similar workflow as work items, whereas the generative adaptation modifies the constraint-based workflow model based on the most similar workflow in order to re-enable the constraint-based workflow engine to suggest work items.
The experimental evaluation of the approach consisted of a simulation of several types of process participants in the exemplary domain of deficiency management in construction. The results showed high utility values and a promising potential for investigating the transfer to other domains and the applicability in practice, which is part of future work.
Concluding, the contributions are summarized and research perspectives are pointed out.
Official business surveys form the basis for national and regional business statistics and are thus of great importance for analysing the state and performance of the economy. However, both the heterogeneity of business data and their high dynamics pose a particular challenge to the feasibility of sampling and the quality of the resulting estimates. A widely used sampling frame for creating the design of an official business survey is an extract from an official business register. However, if this frame does not accurately represent the target population, frame errors arise. Amplified by the heterogeneity and dynamics of business populations, these errors can significantly affect the estimation quality and lead to inefficiencies and biases. This dissertation therefore deals with design-based methods for optimising business surveys with respect to different types of frame errors.
First, methods for adjusting the sampling design of business surveys are addressed. These approaches integrate auxiliary information about the expected structures of frame errors into the sampling design. The aim is to increase the number of sampled businesses that are subject to frame errors. The element-specific frame error probability is estimated based on auxiliary information about frame errors observed in previous samples. The approaches discussed consider different types of frame errors and can be incorporated into predefined designs with fixed strata.
As the second main pillar of this work, methods for adjusting weights to correct for frame errors during estimation are developed and investigated. As a result of frame errors, the assumptions under which the original design weights were determined based on the sampling design no longer hold. The developed methods correct the design weights taking into account the errors identified for sampled elements. Case-number-based reweighting approaches, on the one hand, attempt to reconstruct the unknown size of the individual strata in the target population. In the context of weight smoothing methods, on the other hand, design weights are modelled and smoothed as a function of target or auxiliary variables. This serves to avoid inefficiencies in the estimation due to highly scattering weights or weak correlations between weights and target variables. In addition, possibilities of correcting frame errors by calibration weighting are elaborated. Especially when the sampling frame shows over- and/or undercoverage, the inclusion of external auxiliary information can provide a significant improvement of the estimation quality. For those methods whose quality cannot be measured using standard procedures, a procedure for estimating the variance based on a rescaling bootstrap is proposed. This enables an assessment of the estimation quality when using the methods in practice.
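As a point of reference for the reweighting ideas above, the simplest case-number-based correction is a post-stratification-type rescaling to known stratum sizes; the sketch below shows only this elementary baseline (the names are illustrative), not the smoothing or calibration estimators developed in this dissertation:

import numpy as np

def poststratified_weights(design_weights, strata, known_totals):
    """Rescale design weights so that, within each stratum, they sum to an
    externally known population count (elementary ratio/post-stratification adjustment)."""
    w = np.asarray(design_weights, dtype=float).copy()
    strata = np.asarray(strata)
    for s, total in known_totals.items():
        mask = strata == s
        w[mask] *= total / w[mask].sum()
    return w

# Example: weights in the under-covered stratum "B" are scaled up to its known size.
print(poststratified_weights([10, 10, 10, 10], ["A", "A", "B", "B"], {"A": 20, "B": 30}))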
In the context of two extensive simulation studies, the methods presented in this dissertation are evaluated and compared with each other. First, in the environment of an experimental simulation, it is assessed which approaches are particularly suitable with regard to different data situations. In a second simulation study, which is based on the structural survey in the services sector, the applicability of the methods in practice is evaluated under realistic conditions.
Wasserbezogene regulierende und versorgende Ökosystemdienstleistungen (ÖSDL) wurden im Hinblick auf das Abflussregime und die Grundwasserneubildung im Biosphärenreservat Pfälzerwald im Südwesten Deutschlands anhand hydrologischer Modellierung unter Verwendung des Soil and Water Assessment Tool (SWAT+) untersucht. Dabei wurde ein holistischer Ansatz verfolgt, wonach den ÖSDL Indikatoren für funktionale und strukturelle ökologische Prozesse zugeordnet werden. Potenzielle Risikofaktoren für die Verschlechterung von wasserbedingten ÖSDL des Waldes, wie Bodenverdichtung durch Befahren mit schweren Maschinen im Zuge von Holzerntearbeiten, Schadflächen mit Verjüngung, entweder durch waldbauliche Bewirtschaftungspraktiken oder durch Windwurf, Schädlinge und Kalamitäten im Zuge des Klimawandels, sowie der Klimawandel selbst als wesentlicher Stressor für Waldökosysteme wurden hinsichtlich ihrer Auswirkungen auf hydrologische Prozesse analysiert. Für jeden dieser Einflussfaktoren wurden separate SWAT+-Modellszenarien erstellt und mit dem kalibrierten Basismodell verglichen, das die aktuellen Wassereinzugsgebietsbedingungen basierend auf Felddaten repräsentierte. Die Simulationen bestätigten günstige Bedingungen für die Grundwasserneubildung im Pfälzerwald. Im Zusammenhang mit der hohen Versickerungskapazität der Bodensubstrate der Buntsandsteinverwitterung, sowie dem verzögernden und puffernden Einfluss der Baumkronen auf das Niederschlagswasser, wurde eine signifikante Minderungswirkung auf die Oberflächenabflussbildung und ein ausgeprägtes räumliches und zeitliches Rückhaltepotential im Einzugsgebiet simuliert. Dabei wurde festgestellt, dass erhöhte Niederschlagsmengen, die die Versickerungskapazität der sandigen Böden übersteigen, zu einer kurz geschlossenen Abflussreaktion mit ausgeprägten Oberflächenabflussspitzen führen. Die Simulationen zeigten Wechselwirkungen zwischen Wald und Wasserkreislauf sowie die hydrologische Wirksamkeit des Klimawandels, verschlechterter Bodenfunktionen und altersbezogener Bestandesstrukturen im Zusammenhang mit Unterschieden in der Baumkronenausprägung. Zukunfts-Klimaprojektionen, die mit BIAS-bereinigten REKLIES- und EURO-CORDEX-Regionalklimamodellen (RCM) simuliert wurden, prognostizierten einen höheren Verdunstungsbedarf und eine Verlängerung der Vegetationsperiode bei gleichzeitig häufiger auftretenden Dürreperioden innerhalb der Vegetationszeit, was eine Verkürzung der Periode für die Grundwasserneubildung induzierte, und folglich zu einem prognostizierten Rückgang der Grundwasserneubildungsrate bis zur Mitte des Jahrhunderts führte. Aufgrund der starken Korrelation mit Niederschlagsintensitäten und der Dauer von Niederschlagsereignissen, bei allen Unsicherheiten in ihrer Vorhersage, wurde für die Oberflächenabflussgenese eine Steigerung bis zum Ende des Jahrhunderts prognostiziert.
Für die Simulation der Bodenverdichtung wurden die Trockenrohdichte des Bodens und die SCS Curve Number in SWAT+ gemäß Daten aus Befahrungsversuchen im Gebiet angepasst. Die günstigen Infiltrationsbedingungen und die relativ geringe Anfälligkeit für Bodenverdichtung der grobkörnigen Buntsandsteinverwitterung dominierten die hydrologischen Auswirkungen auf Wassereinzugsgebietsebene, sodass lediglich moderate Verschlechterungen wasserbezogener ÖSDL angezeigt wurden. Die Simulationen zeigten weiterhin einen deutlichen Einfluss der Bodenart auf die hydrologische Reaktion nach Bodenverdichtung auf Rückegassen und stützen damit die Annahme, dass die Anfälligkeit von Böden gegenüber Verdichtung mit dem Anteil an Schluff- und Tonbodenpartikeln zunimmt. Eine erhöhte Oberflächenabflussgenese ergab sich durch das Wegenetz im Gesamtgebiet.
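Zur Einordnung die im SCS-Curve-Number-Verfahren übliche Abflussgleichung (Standardform in mm; die konkrete Umsetzung in SWAT+ kann im Detail abweichen):
$$Q=\frac{(P-I_a)^2}{P-I_a+S}\quad\text{für } P>I_a,\qquad S=\frac{25400}{CN}-254,\qquad I_a\approx 0{,}2\,S,$$
mit Direktabfluss $Q$, Niederschlag $P$, potenziellem maximalem Rückhalt $S$ und Anfangsverlust $I_a$; eine Erhöhung der Curve Number $CN$ verringert $S$ und erhöht damit den simulierten Oberflächenabfluss.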
Schadflächen mit Bestandesverjüngung wurden anhand eines artifiziellen Modells innerhalb eines Teileinzugsgebiets unter der Annahme von 3-jährigen Baumsetzlingen in einem Entwicklungszeitraum von 10 Jahren simuliert und hinsichtlich spezifischer Wasserhaushaltskomponenten mit Altbeständen (30 bis 80 Jahre) verglichen. Die Simulation ließ darauf schließen, dass bei fehlender Kronenüberschirmung die hydrologisch verzögernde Wirkung der Bestände beeinträchtigt wird, was die Entstehung von Oberflächenabfluss begünstigt und eine quantitativ geringfügig höhere Tiefensickerung fördert. Hydrologische Unterschiede zwischen dem geschlossenen Kronendach der Altbestände und Jungbeständen mit annähernden Freilandniederschlagsbedingungen wurden durch die dominierenden Faktoren atmosphärischer Verdunstungsanstoß, Niederschlagsmengen und Kronenüberschirmungsgrad bestimmt. Je weniger entwickelt das Kronendach von verjüngten Waldbeständen im Vergleich zu Altbeständen, je höher der atmosphärische Verdunstungsanstoß und je geringer die eingetragenen Niederschlagsmengen, desto größer war der hydrologische Unterschied zwischen den Bestandestypen.
Verbesserungsmaßnahmen für den dezentralen Hochwasserschutz sollten folglich kritische Bereiche für die Abflussbildung im Wald (CSA) berücksichtigen. Die hohe Sensibilität und Anfälligkeit der Wälder gegenüber Verschlechterungen der Ökosystembedingungen legen nahe, dass die Erhaltung des komplexen Gefüges und von intakten Wechselbeziehungen, insbesondere unter der gegebenen Herausforderung des Klimawandels, sorgfältig angepasste Schutzmaßnahmen, Anstrengungen bei der Identifizierung von CSA sowie die Erhaltung und Wiederherstellung der hydrologischen Kontinuität in Waldbeständen erfordern.
No Longer Printing the Legend: The Aporia of Heteronormativity in the American Western (1903-1969)
(2023)
This study critically investigates the U.S.-American Western and its construction of sexuality and gender, revealing that the heteronormative matrix that is upheld and defended in the genre is consistently preceded by the exploration of alternative sexualities and ways to think gender beyond the binary. The endeavor to naturalize heterosexuality seems to be baked into the formula of the U.S.-Western. However, as I show in this study, this endeavor relies on an aporia, because the U.S.-Western can only ever attempt to naturalize gender by constructing it first, and hence it inevitably and simultaneously constructs evidence that supports the opposite: the unnaturalness and contingency of gender and sexuality.
My study relies on the works of Raewyn Connell, Pierre Bourdieu, and Judith Butler, and amalgamates in its methodology established approaches from film and literary studies (i.e., close readings) with a Foucauldian understanding of discourse and discourse analysis, which allows me to relate individual texts to the cultural, socio-political and economic contexts that invariably informed the production and reception of any filmic text. In an analysis of 14 U.S.-Westerns (excluding three excursions) that appeared between 1903 and 1969, I give ample and minute narrative and film-aesthetic evidence to reveal the complex and contradictory construction of gender and sexuality in the U.S.-Western, aiming to reveal both the normative power of those categories and their structural instability and inconsistency.
This study proves that the Western up until 1969 did not find a stable pattern to represent the gender binary. The U.S.-Western is not necessarily always looking to confirm or stabilize governing constructs of (gendered) power; however, it without fail explores and negotiates their legitimacy. Heterosexuality and male hegemony are never natural, self-evident, incontestable, or preordained. Quite the contrary: the U.S.-Western repeatedly – and in a surprisingly diverse and versatile way – reveals the illogical constructedness of the heteronormative matrix.
My study therefore offers a fresh perspective on the genre and shows that the critical exploration and negotiation of the legitimacy of heteronormativity as a way to organize society is constitutive for the U.S.-Western. It is the inquiry – not necessarily the affirmation – of the legitimacy of this model that gives the U.S.-Western its ideological currency and significance as an artifact of U.S.-American popular culture.
Non-probability sampling is a topic of growing relevance, especially due to its occurrence in the context of new emerging data sources like web surveys and Big Data.
This thesis addresses statistical challenges arising from non-probability samples, where unknown or uncontrolled sampling mechanisms raise concerns in terms of data quality and representativity.
Various methods to quantify and reduce the potential selectivity and biases of non-probability samples in estimation and inference are discussed. The thesis introduces new forms of prediction and weighting methods, namely
a) semi-parametric artificial neural networks (ANNs) that integrate B-spline layers with optimal knot positioning in the general structure and fitting procedure of artificial neural networks, and
b) calibrated semi-parametric ANNs that determine weights for non-probability samples by integrating an ANN as response model with calibration constraints for totals, covariances and correlations.
Custom-made computational implementations are developed for fitting (calibrated) semi-parametric ANNs by means of stochastic gradient descent, BFGS and sequential quadratic programming algorithms.
The performance of all the discussed methods is evaluated and compared for a range of non-probability sampling scenarios in a Monte Carlo simulation study as well as in an application to a real non-probability sample, the WageIndicator web survey.
Potentials and limitations of the different methods for dealing with the challenges of non-probability sampling under various circumstances are highlighted. It is shown that the best strategy for using non-probability samples heavily depends on the particular selection mechanism, research interest and available auxiliary information.
Nevertheless, the findings show that existing as well as newly proposed methods can be used to ease or even fully counterbalance the issues of non-probability samples and highlight the conditions under which this is possible.
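As a point of reference for the weighting problem above, a much simpler baseline than the (calibrated) semi-parametric ANNs is inverse-propensity pseudo-weighting against a weighted reference sample; the sketch below shows only this baseline (all names are illustrative) and makes no claim about the estimators proposed in the thesis:

import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_pseudo_weights(x_nonprob, x_reference, reference_weights):
    """Fit a model for the probability of belonging to the non-probability sample
    (versus a weighted reference sample) and weight its units by the inverse odds."""
    X = np.vstack([x_nonprob, x_reference])
    y = np.concatenate([np.ones(len(x_nonprob)), np.zeros(len(x_reference))])
    w = np.concatenate([np.ones(len(x_nonprob)), np.asarray(reference_weights, dtype=float)])
    model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
    p = model.predict_proba(np.asarray(x_nonprob))[:, 1]
    return (1.0 - p) / p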
Modern decision making in the digital age is highly driven by the massive amount of data collected from different technologies and thus affects both individuals as well as economic businesses. The benefit of using these data and turning them into knowledge requires appropriate statistical models that describe the underlying observations well. Imposing a certain parametric statistical model goes along with the need of finding optimal parameters such that the model describes the data best. This often results in challenging mathematical optimization problems with respect to the model’s parameters which potentially involve covariance matrices. Positive definiteness of covariance matrices is required for many advanced statistical models, and these constraints must be imposed for standard Euclidean nonlinear optimization methods, which often results in a high computational effort. As Riemannian optimization techniques proved efficient to handle difficult matrix-valued geometric constraints, we consider optimization over the manifold of positive definite matrices to estimate parameters of statistical models. The statistical models treated in this thesis assume that the underlying data sets used for parameter fitting have a clustering structure, which results in complex optimization problems. This motivates the use of the intrinsic geometric structure of the parameter space. In this thesis, we analyze the appropriateness of Riemannian optimization over the manifold of positive definite matrices for two advanced statistical models. We establish important problem-specific Riemannian characteristics of the two problems and demonstrate the importance of exploiting the Riemannian geometry of covariance matrices based on numerical studies.
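To make the geometric idea concrete, a single gradient step on the manifold of symmetric positive definite matrices under the affine-invariant metric can be sketched as follows (a generic textbook update, not the problem-specific algorithms analyzed in the thesis):

import numpy as np
from scipy.linalg import expm

def spd_sqrt_and_invsqrt(X):
    """Matrix square root and inverse square root of an SPD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(X)
    return (vecs * np.sqrt(vals)) @ vecs.T, (vecs / np.sqrt(vals)) @ vecs.T

def riemannian_gradient_step(X, euclid_grad, step_size):
    """One descent step on the SPD manifold: the Riemannian gradient under the
    affine-invariant metric is X sym(grad) X, and the update follows the exponential
    map X_+ = X^(1/2) expm(-t X^(-1/2) rgrad X^(-1/2)) X^(1/2), which stays SPD."""
    sym = 0.5 * (euclid_grad + euclid_grad.T)
    rgrad = X @ sym @ X
    X_half, X_inv_half = spd_sqrt_and_invsqrt(X)
    return X_half @ expm(-step_size * X_inv_half @ rgrad @ X_inv_half) @ X_half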
Survey data can be viewed as incomplete or partially missing from a variety of perspectives and there are different ways of dealing with this kind of data in the prediction and the estimation of economic quantities. In this thesis, we present two selected research contexts in which the prediction or estimation of economic quantities is examined under incomplete survey data.
These contexts are first the investigation of composite estimators in the German Microcensus (Chapters 3 and 4) and second extensions of multivariate Fay-Herriot (MFH) models (Chapters 5 and 6), which are applied to small area problems.
Composite estimators are estimation methods that take into account the sample overlap in rotating panel surveys such as the German Microcensus in order to stabilise the estimation of the statistics of interest (e.g. employment statistics). Due to the partial sample overlaps, information from previous samples is only available for some of the respondents, so the data are partially missing.
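In generic textbook form, such a composite estimator for period $t$ combines the current direct estimate with an update of the previous composite estimate obtained from the overlapping part of the sample (the specific variants investigated in Chapters 3 and 4 may differ in detail):
$$\hat{\theta}^{\,C}_{t}=\alpha\,\hat{\theta}^{\,D}_{t}+(1-\alpha)\big(\hat{\theta}^{\,C}_{t-1}+\hat{\Delta}_{t}\big),\qquad 0\le\alpha\le 1,$$
where $\hat{\theta}^{\,D}_{t}$ denotes the direct estimate in period $t$ and $\hat{\Delta}_{t}$ an estimate of the change between periods based on the overlapping respondents.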
MFH models are model-based estimation methods that work with aggregated survey data in order to obtain more precise estimation results for small area problems compared to classical estimation methods. In these models, several variables of interest are modelled simultaneously. The survey estimates of these variables, which are used as input in the MFH models, are often partially missing. If the domains of interest are not explicitly accounted for in a sampling design, the sizes of the samples allocated to them can, by chance, be small. As a result, it can happen that either no estimates can be calculated at all or that the estimated values are not published by statistical offices because their variances are too large.
Coastal erosion describes the displacement of land caused by destructive sea waves, currents or tides. Due to global climate change and associated phenomena such as melting polar ice caps and changing current patterns of the oceans, which result in rising sea levels or increased current velocities, the need for countermeasures is continuously increasing. Today, major efforts have been made to mitigate these effects using groins, breakwaters and various other structures.
This thesis develops a novel approach to this problem by applying shape optimization to the obstacles. For this reason, the results of this thesis always involve the following three distinct aspects:
The selected wave propagation model, i.e. the modeling of wave propagation towards the coastline, using various wave formulations ranging from steady to unsteady descriptions, described from the Lagrangian or Eulerian viewpoint with all its specialties. More precisely, in the Eulerian setting first a steady Helmholtz equation in the form of a scattering problem is investigated, followed by the shallow water equations in classical form, equipped with porosity, sediment portability and further subtleties. Secondly, in a Lagrangian framework the Lagrangian shallow water equations form the center of interest.
The chosen discretization: depending on the nature and peculiarity of the constraining partial differential equation, we choose between finite elements in conjunction with a continuous Galerkin or a discontinuous Galerkin method for investigations in the Eulerian description. In addition, the Lagrangian viewpoint lends itself to mesh-free, particle-based discretizations, where smoothed particle hydrodynamics is used.
The method for shape optimization w.r.t. the obstacle’s shape over an appropriate cost function, constrained by the solution of the selected wave propagation model. In this sense, we rely on a differentiate-then-discretize approach for free-form shape optimization in the Eulerian set-up, and reverse the order in Lagrangian computations.
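For orientation, the classical shallow water equations referred to above read, in conservative form and without the porosity and sediment extensions mentioned (a standard textbook formulation, not the exact system used in this thesis):
$$\partial_t h+\nabla\cdot(h\mathbf{u})=0,\qquad \partial_t(h\mathbf{u})+\nabla\cdot\Big(h\,\mathbf{u}\otimes\mathbf{u}+\tfrac{1}{2}g\,h^{2}\,\mathbf{I}\Big)=-g\,h\,\nabla b,$$
with water height $h$, depth-averaged velocity $\mathbf{u}$, gravitational acceleration $g$ and bottom topography $b$.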
Behavioural traces from interactions with digital technologies are diverse and abundant. Yet, their capacity for theory-driven research is still to be established. In the present cumulative dissertation project, I deliberate the caveats and potentials of digital behavioural trace data in behavioural and social science research. One use case is online radicalisation research. The three studies included set out to discern the state of the art of methods and constructs employed in radicalisation research, at the intersection of traditional methods and digital behavioural trace data. Firstly, I display, based on a systematic literature review of empirical work, the prevalence of digital behavioural trace data across different research strands and discern determinants and outcomes of radicalisation constructs. Secondly, I extract, based on this literature review, hypotheses and constructs and integrate them into a framework from network theory. This graph of hypotheses, in turn, makes the relative importance of theoretical considerations explicit. One implication of visualising the assumptions in the field is to systematise bottlenecks for the analysis of digital behavioural trace data and to provide the grounds for the genesis of new hypotheses. Thirdly, I provide a proof-of-concept for incorporating a theoretical framework from conspiracy theory research (as a specific form of radicalisation) and digital behavioural traces. I argue for marrying theoretical assumptions derived from temporal signals of posting behaviour with semantic meaning from textual content, resting on a framework from evolutionary psychology. In the light of these findings, I conclude by discussing important potential biases at different stages of the research cycle and practical implications.
Issues in Price Measurement
(2022)
This thesis focuses on the issues in price measurement and consists of three chapters. Due to outdated weighting information, a Laspeyres-based consumer price index (CPI) is prone to accumulating upward bias. Therefore, chapter 1 introduces and examines simple and transparent revision approaches that retrospectively address the source of the bias. They provide a consistent long-run time series of the CPI and require no additional information. Furthermore, a coherent decomposition of the bias into the contributions of individual product groups is developed. In a case study, the approaches are applied to a Laspeyres-based CPI. The empirical results confirm the theoretical predictions. The proposed revision approaches are adoptable not only to most national CPIs but also to other price-level measures such as the producer price index or the import and export price indices.
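For reference, the Laspeyres price index underlying this discussion has the standard form
$$P^{L}_{0,t}=\frac{\sum_i p^{t}_{i}q^{0}_{i}}{\sum_i p^{0}_{i}q^{0}_{i}}=\sum_i w^{0}_{i}\,\frac{p^{t}_{i}}{p^{0}_{i}},\qquad w^{0}_{i}=\frac{p^{0}_{i}q^{0}_{i}}{\sum_j p^{0}_{j}q^{0}_{j}},$$
with prices $p$ and base-period quantities $q^{0}$. Because the weights $w^{0}_{i}$ are held fixed at the base period, substitution away from products with above-average price increases is not reflected, which is the source of the upward bias that the revision approaches address.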
Chapter 2 is dedicated to the measurement of import and export price indices. Such indices are complicated by the impact of exchange rates. These indices are usually also compiled using some Laspeyres-type index; therefore, substitution bias is an issue. The terms of trade (the ratio of the export and import price index) are therefore also likely to be distorted. The underlying substitution bias accumulates over time. The chapter applies a simple and transparent retroactive correction approach that addresses the source of the substitution bias and produces meaningful long-run time series of import and export price levels and, therefore, of the terms of trade. Furthermore, an empirical case study is conducted that demonstrates the efficacy and versatility of the correction approach.
Chapter 3 leaves the field of index revision and studies another issue in price measurement, namely the economic valuation, in monetary terms, of digital products that have zero market prices. This chapter explores different methods of economic valuation and pricing of free digital products and proposes an alternative way to calculate the economic value and a shadow price of free digital products: the Usage Cost Model (UCM). The goal of the chapter is, first of all, to formulate a theoretical framework and to introduce an alternative measure of the value of free digital products. An empirical application is also presented to show how the theoretical model works. Some conclusions on applicability are drawn at the end of the chapter.
Broadcast media such as television have spread rapidly worldwide in the last century. They provide viewers with access to new information and also represent a source of entertainment that unconsciously exposes them to different social norms and moral values. Although the potential impact of exposure to television content has been studied intensively in economic research in recent years, studies examining the long-term causal effects of media exposure are still rare. Therefore, Chapters 2 to 4 of this thesis contribute to a better understanding of the long-term effects of television exposure.
Chapter 2 empirically investigates whether access to reliable environmental information through television can influence individuals' environmental awareness and pro-environmental behavior. Analyzing exogenous variation in Western television reception in the German Democratic Republic shows that access to objective reporting on environmental pollution can enhance concerns regarding pollution and affect the likelihood of being active in environmental interest groups.
Chapter 3 utilizes the same natural experiment and explores the relationship between exposure to foreign mass media content and xenophobia. In contrast to the state television broadcaster in the German Democratic Republic, West German television regularly confronted its viewers with foreign (non-German) broadcasts. By applying multiple measures for xenophobic attitudes, our findings indicate a persistent mitigating impact of foreign media content on xenophobia.
Chapter 4 deals with another unique feature of West German television. In contrast to East German media, Western television programs regularly exposed their audience to unmarried and childless characters. The results suggest that exposure to different gender stereotypes contained in television programs can affect marriage, divorce, and birth rates. However, our findings indicate that mainly women were affected by the exposure to unmarried and childless characters.
Chapter 5 examines the influence of social media marketing on crowd participation in equity crowdfunding. By analyzing 26,883 investment decisions on three German equity crowdfunding platforms, our results show that startups can influence the success of their equity crowdfunding campaign through social media posts on Facebook and Twitter.
In Chapter 6, we incorporate the concept of habit formation into the theoretical literature on trade unions and contribute to a better understanding of how internal habit preferences influence trade union behavior. The results reveal that such internal reference points lead trade unions to raise wages over time, which in turn reduces employment. A numerical example illustrates that the wage effects and the decline in employment can be substantial.
Stress is considered a central health risk of the 21st century and is studied in research as a multidimensional construct at the psychological and biological level. While the subjective perception of stress need not be related to biological stress reactivity, the negative influence of stress-associated biological processes on well-being and health is well documented. Children of primary school age already show stress levels and health consequences comparable to those of adults, yet coping strategies are not fully developed at this age. Prevention programmes at primary school age are intended to support children in their developing stress-coping skills, with emotion-focused and problem-oriented approaches as well as social support potentially constituting important factors.
The introductory literature review evaluates previous stress prevention studies and shows that, although the effectiveness and applicability of multifactorial stress prevention programmes have been demonstrated by psychometric assessments, biological processes have so far not been measured in this research and have been disregarded.
The empirical investigation in Study 1 demonstrates the value of a multidimensional psychobiological perspective by including both psychometric measures and psychobiological processes of the stress response and examining the effects of stress prevention at these different levels. Two brief interventions were compared and their effects at psychophysiological levels (e.g., cortisol, α-amylase and heart rate) were tested in a pre-post design. A statistically significant decrease in psychophysiological stress reactivity as well as in stress-associated psychological symptoms demonstrated the multidimensional effectiveness of stress management trainings.
Study 2 was designed in the context of the Covid-19 pandemic. The children trained in Study 1 were compared with a control group regarding their stress levels by means of an online questionnaire survey. The results showed lower stress levels and more favourable coping strategies among trained children compared to the control group.
These results highlight the relevance of a multidimensional view of childhood stress. It was shown that stress prevention programmes act on the different levels of the stress response and can even have stress-protective effects in society-wide crisis situations. Future studies should evaluate stress prevention at primary school age psychophysiologically and assess its effects in longitudinal studies in order to improve the understanding of the underlying mechanisms.
The endemic argan stands in southern Morocco are the source of valuable argan oil but are heavily overexploited, for example through overgrazing or illegal firewood extraction. Reforestation measures exist but are often unsuccessful because the associated irrigation and protection contracts run for too short a period. Natural regeneration is hardly possible because kernels are collected almost completely; felling and dieback of trees reduce the crown-covered area, and the uncovered areas between the trees increase.
The development of the argan stands was examined for the period from 1972 to 2018 using historical and recent satellite images; most of the trees changed very little during this time. Condition assessments from 2018 showed that many of these trees grow only as shrubs due to overgrazing and woodcutting and are thus stable in a degraded state.
Despite the degradation of some trees, the soil beneath the trees shows the highest contents of soil organic matter and nutrients on the plots, while the contents are lowest between two trees. The influence of the tree on the soil extends beyond the crown: to the north through shading from the midday sun, to the east through wind transport of litter and soil particles, and downslope through wash of material.
Experimental methods applied under and between the argan trees provided insights into soil erosion. The hydraulic conductivity under trees is higher by a factor of 1.2-1.5 than between trees; surface runoff and soil loss are somewhat lower under trees and, for degraded trees, similar to the areas between trees. The different surface types were examined with a wind tunnel, showing that freshly ploughed areas in particular cause high wind-borne emissions, whereas areas with high stone cover are hardly affected by wind erosion.
Surface runoff from the different surface types is discharged into the receiving channels. The sediment dynamics in these wadis are mainly influenced by rainfall between the measurements, catchment area and wadi length, and hardly at all by the different land uses.
Through this multi-method approach, the argan landscape system could be analysed at different levels.
Climate fluctuations and the pyroclastic depositions from volcanic activity both influence ecosystem functioning and biogeochemical cycling in terrestrial and marine environments globally. These controlling factors are crucial for the evolution and fate of the pristine but fragile fjord ecosystem in the Magellanic moorlands (~53°S) of southernmost Patagonia, which is considered a critical hotspot for organic carbon burial and marine bioproductivity. At this active continental margin in the core zone of the southern westerly wind belt (SWW), frequent Plinian eruptions and the extremely variable, hyper-humid climate should have efficiently shaped ecosystem functioning and land-to-fjord mass transfer throughout the Late Holocene. However, a better understanding of the complex process network defining the biogeochemical cycling at this land-to-fjord continuum principally requires a detailed knowledge of substrate weathering and pedogenesis in the context of the extreme climate. Yet, research on soils, the ubiquitous presence of tephra and the associated chemical weathering, secondary mineral (trans)formation and organic matter (OM) turnover processes is rare in this remote region. This complicates an accurate reconstruction of the ecosystem's potentially sensitive response to past environmental impacts, including the dynamics of Late Holocene land-to-fjord fluxes as a function of volcanic activity and strong hydroclimate variability.
Against this background, this PhD thesis aims to disentangle the controlling factors that modulate the terrigenous element mobilization and export mechanisms in the hyper-humid Patagonian Andes and assesses their significance for fjord primary productivity over the past 4.5 kyrs BP. For the first time, distinct biogeochemical characteristics of the regional weathering system serve as a major criterion in paleoenvironmental reconstruction in the area. This approach includes broad-scale mineralogical and geochemical analyses of basement lithologies, four soil profiles, volcanic ash deposits, the non-karst stalagmite MA1 and two lacustrine sediment cores. In order to pay special attention to the possibly important temporal variations of pedosphere-atmosphere interaction and the ecological consequences initiated by volcanic eruptions, the novel data were evaluated together with previously published reconstructions of paleoclimate and paleoenvironmental conditions.
The devastating high tephra loading of a single eruption from Mt. Burney volcano (MB2 at 4.216 kyrs BP) sustainably transformed this vulnerable fjord ecosystem, while acidic peaty Andosols developed from ~2.5 kyrs BP onwards after recovery from millennium-scale acidification. The special setting is dominated by highly variable redox-pH conditions, profound volcanic ash weathering and intense OM turnover processes, which are closely linked and ultimately regulated by SWW-induced water-level fluctuations. Constant nutrient supply through sea spray deposition represents a further important control on peat accumulation and OM turnover dynamics. These extreme environmental conditions constrain the biogeochemical framework for an extended land-to-fjord export of leachates comprising various organic and inorganic colloids (i.e., Al-humus complexes and Fe-(hydr)oxides). Such tephra- and/or Andosol-sourced fluxes contain high proportions of terrigenous organic carbon (OCterr) and mobilized essential (micro)nutrients, e.g., bio-available Fe, that are beneficial for fjord bioproductivity. It can be assumed that this supply of bio-available Fe, produced by specific Fe-(hydr)oxide (trans)formation processes from tephra components, may persist for more than 6 kyrs and surpasses the contribution from basement rock weathering and glacial meltwaters. However, the land-to-fjord exports of OCterr and bio-available Fe occur mostly asynchronously and are determined by the frequency and duration of redox cycles in soils or are initiated by SWW-induced extreme weather events.
The verification of (crypto)tephra layers embedded in stalagmite MA1 enabled the accurate dating of three smaller Late Holocene eruptions from the Mt. Burney (MB3 at 2.291 kyrs BP and MB4 at 0.853 kyrs BP) and Aguilera (A1 at 2.978 kyrs BP) volcanoes. Beyond improving the regional tephrochronology, the obtained precise 230Th/U-ages allowed constraints on the ecological consequences caused by these Plinian eruptions. The deposition of these thin tephra layers should have entailed a very beneficial short-term stimulation of fjord bioproductivity with bio-available Fe and other (micro)nutrients, affecting the entire area between 52°S and 53°30'S. For such beneficial effects, the thickness of tephra deposited onto this highly vulnerable peatland ecosystem should be below a threshold of 1 cm.
The Late Holocene element mobilization and land-to-fjord transport was mainly controlled by (i) volcanic activity and tephra thickness, (ii) SWW-induced and southern hemispheric climate variability and (iii) the current state of the ecosystem. The influence of cascading climate and environmental impacts on OCterr and Fe-(hydr)oxide fluxes to the fjord can be categorized into four individual, partly overlapping scenarios. These different scenarios take into account the previously specified fundamental biogeochemical mechanisms and define frequently recurring patterns of ecosystem feedbacks governing the land-to-fjord mass transfer in the hyper-humid Patagonian Andes on the centennial scale. This PhD thesis provides first evidence for a primarily tephra-sourced, continuous and long-lasting (micro)nutrient fertilization for phytoplankton growth in South Patagonian fjords, which is ultimately modulated by variations in SWW intensity. It highlights the climate sensitivity of such critical land-to-fjord element transport and particularly emphasizes the important but so far underappreciated significance of volcanic ash inputs for biogeochemical cycles at active continental margins.
Let K be a compact subset of the complex plane. Then the family of polynomials P is dense in A(K), the space of all continuous functions on K that are holomorphic on the interior of K, endowed with the uniform norm, if and only if the complement of K is connected. This is the statement of Mergelyan's celebrated theorem.
There are, however, situations where not all polynomials are required to approximate every f ∈ A(K) but where there are strict subspaces of P that are still dense in A(K). If, for example, K is a singleton, then the subspace of all constant polynomials is dense in A(K). On the other hand, if 0 is an interior point of K, then no strict subspace of P can be dense in A(K).
In between these extreme cases, the situation is much more complicated. It turns out that it is mostly determined by the geometry of K and its location in the complex plane which subspaces of P are dense in A(K). In Chapter 1, we give an overview of the known results.
Our first main theorem, which we will give in Chapter 3, deals with the case where the origin is not an interior point of K. We will show that if K is a compact set with connected complement and if 0 is not an interior point of K, then any subspace Q ⊂ P which contains the constant functions and all but finitely many monomials is dense in A(K).
There is a close connection between lacunary approximation and the theory of universality. At the end of Chapter 3, we will illustrate this connection by applying the above result to prove the existence of certain universal power series. To be specific, if K is a compact set with connected complement, if 0 is a boundary point of K and if A_0(K) denotes the subspace of A(K) of those functions that satisfy f(0) = 0, then there exists an A_0(K)-universal formal power series s, where A_0(K)-universal means that the family of partial sums of s forms a dense subset of A_0(K).
In addition, we will show that no formal power series is simultaneously universal for all such K.
The condition on the subspace Q in the main result of Chapter 3 is quite restrictive, but this should not be too surprising: The result applies to the largest possible class of compact sets.
In Chapter 4, we impose a further restriction on the compact sets under consideration, and this will allow us to weaken the condition on the subspace Q. The result that we are going to give is similar to one of those presented in the first chapter, namely the one due to Anderson. In his article “Müntz-Szasz type approximation and the angular growth of lacunary integral functions”, he gives a criterion for a subspace Q of P to be dense in A(K) where K is entirely contained in some closed sector with vertex at the origin.
We will consider compact sets with connected complement that are -- with the possible exception of the origin -- entirely contained in some open sector with vertex at the origin. What we are going to show is that if K\{0} is contained in an open sector of opening angle 2α and if Λ is some subset of the nonnegative integers, then the span of {z → z^λ : λ ∈ Λ} is dense in A(K) whenever 0 ∈ Λ and some Müntz-type condition is satisfied.
Conversely, we will show that if a similar condition is not satisfied, then we can always find a compact set K with connected complement such that K\{0} is contained in some open sector of opening angle 2α and such that the span of {z → z^λ : λ ∈ Λ} fails to be dense in A(K).
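As orientation for the Müntz-type condition referred to above, the classical Müntz–Szász theorem on an interval may serve as the prototype; the precise sector condition of Chapter 4 is analogous in spirit but not reproduced here:

\[ \operatorname{span}\{\, x^{\lambda} : \lambda \in \Lambda \cup \{0\} \,\} \text{ is dense in } C[0,1] \quad\Longleftrightarrow\quad \sum_{\lambda \in \Lambda,\ \lambda > 0} \frac{1}{\lambda} = \infty. \]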
The present dissertation was developed to emphasize the importance of self-regulatory abilities and to derive novel opportunities to empower self-regulation. From the perspective of PSI (Personality Systems Interactions) theory (Kuhl, 2001), interindividual differences in self-regulation (action vs. state orientation) and their underlying mechanisms are examined in detail. Based on these insights, target-oriented interventions are derived, developed, and scientifically evaluated. The present work comprises a total of four studies which, on the one hand, highlight the advantages of good self-regulation (e.g., enacting difficult intentions under demands; relation with prosocial power motive enactment and well-being). On the other hand, mental contrasting (Oettingen et al., 2001), an established self-regulation method, is examined from a PSI perspective and evaluated as a method to support individuals who struggle with self-regulatory deficits. Further, derived from PSI theory's assumptions, I developed and evaluated a novel method (affective shifting) that aims to support individuals in overcoming self-regulatory deficits. Affective shifting thereby supports the decisive changes in positive affect needed for successful intention enactment (Baumann & Scheffer, 2010). The results of the present dissertation show that self-regulated changes between high and low positive affect are crucial for efficient intention enactment and that methods such as mental contrasting and affective shifting can empower self-regulation to support individuals in successfully closing the gap between intention and action.
Statistical matching offers a way to broaden the scope of analysis without increasing respondent burden and costs, which would result from conducting a new survey or adding variables to an existing one. Statistical matching aims at combining two datasets A and B that refer to the same target population in order to jointly analyse variables, say Y and Z, that were not initially observed together. The matching is performed based on matching variables X that correspond to common variables present in both datasets A and B. Furthermore, Y is only observed in B and Z is only observed in A. To overcome the fact that no joint information on X, Y and Z is available, statistical matching procedures have to rely on suitable assumptions. Therefore, to yield a theoretical foundation for statistical matching, most procedures rely on the conditional independence assumption (CIA), i.e. given X, Y is independent of Z.
The goal of this thesis is to encompass both the statistical matching process and the analysis of the matched dataset. More specifically, the aim is to estimate a linear regression model for Z given Y and possibly other covariates in data A. Since the validity of the assumptions underlying the matching process determines the validity of the obtained matched file, the accuracy of statistical inference is determined by the suitability of the assumptions. By putting the focus on these assumptions, this work proposes a systematic categorisation of approaches to statistical matching by relying on graphical representations in the form of directed acyclic graphs. These graphs are particularly useful in representing dependencies and independencies which are at the heart of the statistical matching problem. The proposed categorisation distinguishes between (a) joint modelling of the matching and the analysis (integrated approach), and (b) matching subsequently followed by statistical analysis of the matched dataset (classical approach). Whereas the classical approach relies on the CIA, implementations of the integrated approach are only valid if they converge, i.e. if the specified models are identifiable and, in the case of MCMC implementations, if the algorithm converges to a proper distribution.
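As a stand-in for the classical route, the following Python sketch matches two synthetic samples on the common variables via a predictive-mean-matching-style nearest-neighbour step and then runs the analysis regression on the matched file; the data-generating equations and variable names are illustrative assumptions, not the procedures evaluated in the thesis.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)

    # Synthetic population: X common, Y observed only in B, Z observed only in A.
    n = 2000
    X = rng.normal(size=(n, 2))
    Y = X @ np.array([1.0, -0.5]) + rng.normal(scale=0.5, size=n)
    Z = 2.0 * Y + X[:, 0] + rng.normal(scale=0.5, size=n)

    idx = rng.permutation(n)
    A, B = idx[:1000], idx[1000:]                  # two non-overlapping samples

    # Classical approach: predictive-mean-matching-style step based on X.
    model_y = LinearRegression().fit(X[B], Y[B])   # Y-model estimated in B
    yhat_A = model_y.predict(X[A])
    yhat_B = model_y.predict(X[B])

    nn = NearestNeighbors(n_neighbors=1).fit(yhat_B.reshape(-1, 1))
    _, donor = nn.kneighbors(yhat_A.reshape(-1, 1))
    Y_imputed = Y[B][donor.ravel()]                # live donor values, not predictions

    # Analysis step on the matched file: regression of Z on (imputed) Y.
    # With these synthetic data the CIA is deliberately violated (Z depends on Y
    # beyond X), so the matched-file slope is attenuated relative to the generating
    # value of 2.0 - the kind of failure the assumption discussion above is about.
    print(LinearRegression().fit(Y_imputed.reshape(-1, 1), Z[A]).coef_)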
In this thesis an implementation of the integrated approach is proposed, where the imputation step and the estimation step are jointly modelled through a fully Bayesian MCMC estimation. It is based on a linear regression model for Z given Y and accounts for both a linear regression model and a random effects model for Y. Furthermore, it is valid when the instrumental variable assumption (IVA) holds. The IVA corresponds to: (a) Z is independent of a subset X’ of X given Y and X*, where X* = X\X’ and (b) Y is correlated with X’ given X*. The proof that the joint Bayesian modelling of both the model for Z and the model for Y through an MCMC simulation converges to a proper distribution is provided in this thesis. In a first model-based simulation study, the proposed integrated Bayesian procedure is assessed with regard to the data situation, convergence issues, and underlying assumptions. Special interest lies in the investigation of the interplay of the Y and the Z model within the imputation process. It turns out that failure scenarios can be distinguished by comparing the CIA and the IVA in the completely observed dataset.
Finally, both approaches to statistical matching, i.e. the classical approach and the integrated approach, are subject to an extensive comparison in (1) a model-based simulation study and (2) a simulation study based on the AMELIA dataset, which is an openly available, very large synthetic dataset that is, by construction, similar to the EU-SILC survey. As an additional integrated approach, a Bayesian additive regression trees (BART) model is considered for modelling Y. These integrated procedures are compared to the classical approach, represented by predictive mean matching in the form of multiple imputation by chained equations. Suitably chosen, the first simulation framework offers the possibility to clarify aspects related to the underlying assumptions by comparing the IVA and the CIA and by evaluating the impact of the matching variables. Thus, within this simulation study two related aspects are of special interest: the assumptions underlying each method and the incorporation of additional matching variables. The simulation on the AMELIA dataset offers a close-to-reality framework with the advantage of knowing the whole setting, i.e. the whole data X, Y and Z. Special interest lies in investigating assumptions through adding and excluding auxiliary variables in order to enhance conditional independence and assess the sensitivity of the methods to this issue. Furthermore, the benefit of having an overlap of units in data A and B for which information on X, Y, Z is available is investigated. It turns out that the integrated approach yields better results than the classical approach when the CIA clearly does not hold. Moreover, even when the classical approach obtains unbiased results for the regression coefficient of Y in the model for Z, it is the method relying on BART that performs best over all coefficients.
Concluding, this work constitutes a major contribution to the clarification of assumptions essential to any statistical matching procedure. By introducing graphical models to identify existing approaches to statistical matching combined with the subsequent analysis of the matched dataset, it offers an extensive overview, categorisation and extension of theory and application. Furthermore, in a setting where none of the assumptions are testable (since X, Y and Z are not observed together), the integrated approach is a valuable asset by offering an alternative to the CIA.
Insects represent the most species-rich class of the animal kingdom, yet many species are threatened. Besides climate change, this is mainly due to the strong changes in agricultural land use over recent decades, which lead to habitat destruction and habitat fragmentation. The intensified management of favourable sites on the one hand, and the abandonment of unprofitable sites on the other, have severe consequences for insects adapted to extensively used cultivated land, which is particularly evident in the declining proportion of specialists. The Alps are a region that, owing to the small-scale coexistence of near-natural areas and anthropogenic cultivated land (along a large elevational gradient), plays an important role for biodiversity, especially as a habitat for specialists of all species groups. Here, too, changing agricultural land use poses a major problem, which is why sustainable protection of extensively used cultural habitats is needed. To clarify how sustainable mountain agriculture can be maintained in the future, the first chapter of this dissertation examines the regulatory frameworks of international, European, national and regional law. It shows that the multifunctional approach of the Alpine Convention and its Mountain Farming Protocol has only little normative specification and is therefore not sufficiently implemented in the EU Common Agricultural Policy or in national law; as a result, these instruments cannot adequately counteract negative developments in mountain agriculture. Beyond these legal foundations, however, the scientific basis for assessing the effects of changing agricultural land use on alpine and arctic animal species is also lacking. Studies of character species of these cultural landscapes are therefore required, and butterflies are suitable indicators owing to their sensitivity to environmental change. The second chapter of this dissertation therefore examines the two sister taxa Boloria pales and B. napaea, which are typical of arctic and/or alpine grassland. The previously unknown phylogeography of both species was investigated with two mitochondrial and two nuclear genes across the entire European range. In this context, the interspecific and intraspecific splits were analysed and dated, and the underlying dispersal patterns were deciphered. To identify specific adaptations to the species' arctic and alpine habitats and to correctly assess the consequences of changing agricultural use, several populations of both species were studied in the field. While B. pales can emerge throughout the entire alpine summer and shows protandrous structures, B. napaea, lacking protandry and with a shortened emergence window, is rather adapted to the shorter arctic summers. Although both species use the same nectar sources, differences in nectar preferences exist between the sexes owing to different requirements; intraspecific differences in dispersal behaviour were also found. Populations of both species can survive a short grazing period, although the timing of grazing matters: use towards the end of the emergence phase has a greater impact on the population.
In addition, a clear difference was found between areas with long-term grazing and areas without grazing. Besides a low population density, areas grazed year-round exert greater pressure to leave the habitat, and the flight distances covered there are also considerably larger.
A large part of the digital progress of recent decades rests on the innovative strength of young, aspiring companies. While these companies share a high degree of innovativeness, they also face a high need for financial resources in order to put their planned innovation and growth targets into practice. Since such companies can often show few or no assets, revenues or profitability, raising external capital is often difficult or impossible. This circumstance gave rise, in the middle of the twentieth century, to the business model of risk financing, so-called venture capital. Venture capitalists invest in promising young companies, support their growth and, after a fixed period, sell their shares, ideally at a multiple of their original value. Numerous young companies apply for investments from these venture capitalists, but only a very small number receive them. To identify the most promising companies, investors screen the applications against various criteria, so that numerous companies are already eliminated from the pool of potential investment targets in the first step of the application phase. Prior research discusses which criteria lead investors to invest. Building on this, this dissertation aims to gain a deeper understanding of which factors influence investors' decision-making. In particular, it examines how personal characteristics of the investors, as well as of the company founders, affect the investment decision. These investigations are complemented by an analysis of how founders' digital presence affects venture capitalists' decision-making. As its second objective, this dissertation seeks insights into the effects of a successful investment on the founder. In total, this dissertation comprises four studies, which are described in more detail below.
Chapter 2 examines how certain human capital characteristics of investors affect their decision behaviour. Based on preliminary interviews and literature reviews, seven criteria were identified that venture capital investors use in their decision-making. Subsequently, 229 investors took part in a conjoint experiment, which showed how important the respective criteria are for the decision. Of particular interest is how the importance of the criteria differs depending on the investors' human capital characteristics. It can be shown that the importance of the criteria varies with the investors' educational background and experience. For example, investors with a higher educational degree and investors with entrepreneurial experience place considerably more weight on the international scalability of the companies. The importance of the criteria also differs depending on the field of education: investors trained in the natural sciences, for instance, place a much stronger focus on the value added of the product or service. Moreover, investors with more investment experience rate the experience of the management team as considerably more important than investors with less investment experience. These results enable founders to target their applications for venture capital financing more precisely, for example by analysing the professional background of potential investors and adapting the application documents accordingly, e.g. by placing greater emphasis on particularly relevant criteria.
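A minimal sketch of how relative criterion importances can be derived from a rating-based conjoint design is given below; the attribute names, levels and the OLS-on-dummy-codes setup are illustrative assumptions, not the design used in the study.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)

    # Hypothetical conjoint profiles rated by one investor (attributes are placeholders).
    n_profiles = 64
    profiles = pd.DataFrame({
        "scalability": rng.choice(["low", "high"], n_profiles),
        "team_experience": rng.choice(["low", "high"], n_profiles),
        "value_added": rng.choice(["low", "high"], n_profiles),
    })
    # Simulated ratings with different part-worths per attribute plus noise.
    rating = (2.0 * (profiles["scalability"] == "high")
              + 1.0 * (profiles["team_experience"] == "high")
              + 0.5 * (profiles["value_added"] == "high")
              + rng.normal(scale=0.5, size=n_profiles))
    data = profiles.assign(rating=rating)

    # Part-worths via OLS on dummy-coded attribute levels.
    fit = smf.ols("rating ~ C(scalability) + C(team_experience) + C(value_added)", data).fit()
    ranges = fit.params.drop("Intercept").abs()   # range of part-worths per binary attribute
    importance = 100 * ranges / ranges.sum()      # relative importance in percent
    print(importance.round(1))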
The study presented in Chapter 3 uses the data of the same conjoint experiment as Chapter 2, but focuses on the difference between investors from the USA and investors from continental Europe. Subsamples were created in which 128 experiment participants are located in the USA and 302 in continental Europe. The analysis shows that US investors, compared to investors in continental Europe, place a significantly stronger focus on the companies' revenue growth, while continental European investors place a considerably stronger focus on the companies' international scalability. To better interpret the results, they were subsequently discussed with four American and seven European investors. The European investors confirmed the importance of high international scalability, given the sometimes small size of European countries and the resulting need to scale internationally quickly in order to achieve satisfactory growth rates. The comparatively lower focus on revenue growth in Europe was attributed to a lack of resources for rapid expansion, while the strong focus of US investors on revenue growth was attributed to the higher tendency towards an IPO in the USA, where high revenues serve as a value driver. The results of this chapter enable founders to align their applications more closely with the most important criteria of potential investors and thereby increase the probability of a successful investment decision. Furthermore, the results offer investors who participate in cross-border syndicated investments the opportunity to better understand the preferences of the other investors and to better align investment criteria with potential partners.
Chapter 4 examines whether certain character traits of the so-called Schumpeterian entrepreneur influence the probability of a second venture capital investment. To this end, messages posted by founders on Twitter as well as information on investment rounds available on the Crunchbase platform were used. In total, more than two million tweets from 3,313 founders were analysed with text analysis software. The results suggest that some traits typical of Schumpeterian founders increase the chances of a further investment, while others have no or negative effects. Founders who display strong optimism and their entrepreneurial vision on Twitter increase the chances of a second venture capital financing, whereas an excessive striving for achievement reduces them. These results are of high practical relevance for founders seeking venture capital, who can manage their digital identity in a more targeted way in order to increase the probability of a further investment.
Finally, Chapter 5 examines how founders' digital identity changes after they have received a successful venture capital investment. For this purpose, both Twitter data and Crunchbase data collected for the study in Chapter 4 were used. Using text analysis and panel data regressions, the tweets of 2,094 founders before and after receiving the investment were examined. It can be shown that receiving a venture capital investment increases founders' self-confidence, positive emotions, professionalisation and leadership qualities. At the same time, however, the authenticity of the messages written by the founders decreases. Using interaction effects, it can further be shown that the increase in self-confidence is positively moderated by the investor's reputation, while the size of the investment negatively moderates authenticity. These insights allow investors to better understand founders' development after a successful investment, enabling them to better monitor their founders' activities on social media platforms and, if necessary, to support them in adapting.
The studies presented in Chapters 2 to 5 thus contribute to a better understanding of decision-making in the venture capital process. They extend the current state of research with insights concerning the influence of both investor and founder characteristics and also show how the investment can affect the founder. The implications of the results, as well as limitations and avenues for future research, are described in more detail in Chapter 6. Since the methods and data used in this dissertation have only been employed in venture capital research, or indeed been available at all, for a few years, the dissertation offers a basis for further research.
For decades, academics and practitioners have aimed to understand whether and how (economic) events affect firm value. Optimally, these events occur exogenously, i.e. suddenly and unexpectedly, so that an accurate evaluation of the effects on firm value can be conducted. However, recent studies show that even the evaluation of exogenous events is often prone to many challenges that can lead to diverse interpretations, resulting in heated debates. Recently, there have been intense debates in particular on the impact of takeover defenses and of Covid-19 on firm value. The announcements of takeover defenses and the propagation of Covid-19 are exogenous events that occur worldwide and are economically important, but have been insufficiently examined. By answering open research questions, this dissertation aims to provide a greater understanding of the heterogeneous effects that exogenous events such as the announcements of takeover defenses and the propagation of Covid-19 have on firm value. In addition, this dissertation analyzes the influence of certain firm characteristics on the effects of these two exogenous events and identifies influencing factors that explain contradictory results in the existing literature and can thus reconcile different views.
In common shape optimization routines, deformations of the computational mesh usually suffer from decrease of mesh quality or even destruction of the mesh. To mitigate this, we propose a theoretical framework using so-called pre-shape spaces. This gives an opportunity for a unified theory of shape optimization, and of problems related to parameterization and mesh quality. With this, we stay in the free-form approach of shape optimization, in contrast to parameterized approaches that limit possible shapes. The concept of pre-shape derivatives is defined, and according structure and calculus theorems are derived, which generalize classical shape optimization and its calculus. Tangential and normal directions are featured in pre-shape derivatives, in contrast to classical shape derivatives featuring only normal directions on shapes. Techniques from classical shape optimization and calculus are shown to carry over to this framework, and are collected in generality for future reference.
A pre-shape parameterization tracking problem class for mesh quality is introduced, which is solvable by use of pre-shape derivatives. This class allows for non-uniform user prescribed adaptations of the shape and hold-all domain meshes. It acts as a regularizer for classical shape objectives. Existence of regularized solutions is guaranteed, and corresponding optimal pre-shapes are shown to correspond to optimal shapes of the original problem, which additionally achieve the user prescribed parameterization.
We present shape gradient system modifications, which allow simultaneous numerical shape optimization with mesh quality improvement. Further, consistency of modified pre-shape gradient systems is established. The computational burden of our approach is limited, since additional solution of possibly larger (non-)linear systems for regularized shape gradients is not necessary. We implement and compare these pre-shape gradient regularization approaches for a 2D problem, which is prone to mesh degeneration. As our approach does not depend on the choice of forms to represent shape gradients, we employ and compare weak linear elasticity and weak quasilinear p-Laplacian pre-shape gradient representations.
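For orientation, both gradient representations solve the same type of weak problem for a descent field V on a hold-all domain D; the generic forms below are textbook sketches under these assumptions, not the regularized pre-shape systems derived in the thesis:

\[ \int_D \sigma(V) : \varepsilon(W) \, dx \;=\; -\, dJ[W], \qquad \sigma(V) = \lambda \operatorname{tr}\big(\varepsilon(V)\big) I + 2\mu\, \varepsilon(V) \quad \text{(weak linear elasticity),} \]
\[ \int_D \big(\varepsilon_0 + |\nabla V|\big)^{p-2}\, \nabla V : \nabla W \, dx \;=\; -\, dJ[W] \quad \text{(weak quasilinear p-Laplacian with small regularization } \varepsilon_0 \ge 0\text{),} \]

for all admissible test fields W, where dJ denotes the (pre-)shape derivative and ε(V) the symmetrized gradient.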
We also introduce a Quasi-Newton-ADM inspired algorithm for mesh quality, which guarantees sufficient adaption of meshes to user specification during the routines. It is applicable in addition to simultaneous mesh regularization techniques.
Unrelated to mesh regularization techniques, we consider shape optimization problems constrained by elliptic variational inequalities of the first kind, so-called obstacle-type problems. In general, standard necessary optimality conditions cannot be formulated in a straightforward manner for such semi-smooth shape optimization problems. Under appropriate assumptions, we prove existence and convergence of adjoints for smooth regularizations of the VI-constraint. Moreover, we derive shape derivatives for the regularized problem and prove convergence to a limit object. Based on this analysis, an efficient optimization algorithm is devised and tested numerically.
All previous pre-shape regularization techniques are applied to a variational inequality constrained shape optimization problem, where we also create customized targets for increased mesh adaptation of changing embedded shapes and active set boundaries of the constraining variational inequality.
Hybrid modelling, in general, describes the combination of at least two different methods to solve one specific task. In this work, hybrid models denote an approach that combines sophisticated, well-studied mathematical methods with deep neural networks to solve parameter estimation tasks. To combine these two methods, the data structure of artificially generated acceleration data of an approximate vehicle model, the Quarter-Car-Model, is exploited. The acceleration of individual components within a coupled dynamical system can be described by a second-order ordinary differential equation, including velocity and displacement of coupled states, scaled by the spring and damping coefficients of the system. An appropriate numerical integration scheme can then be used to simulate discrete acceleration profiles of the Quarter-Car-Model with a random variation of the parameters of the system. Given explicit knowledge about the data structure, one can then investigate under which conditions it is possible to estimate the parameters of the dynamical system for a set of randomly generated data samples. We test whether neural networks are capable of solving parameter estimation problems in general, or whether they can be used to solve several sub-tasks that support a state-of-the-art parameter estimation method. Hybrid models are presented for parameter estimation under uncertainties, including for instance measurement noise or incompleteness of measurements, which combine knowledge about the data structure with several neural networks for robust parameter estimation within a dynamical system.
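A minimal sketch of the data-generating step described above is given below, assuming a standard two-mass quarter-car (sprung and unsprung mass with spring and damping coefficients drawn at random); the parameter ranges and the road excitation are illustrative choices, not those of the thesis.

    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(42)

    def quarter_car(t, y, m_s, m_u, k_s, c_s, k_t, road):
        """Two-mass quarter-car: y = [z_s, v_s, z_u, v_u]."""
        z_s, v_s, z_u, v_u = y
        z_r = road(t)
        a_s = (-k_s * (z_s - z_u) - c_s * (v_s - v_u)) / m_s
        a_u = (k_s * (z_s - z_u) + c_s * (v_s - v_u) - k_t * (z_u - z_r)) / m_u
        return [v_s, a_s, v_u, a_u]

    def simulate_sample(noise=0.05):
        # Random spring and damping coefficients around plausible nominal values.
        m_s, m_u = 300.0, 40.0                      # sprung / unsprung mass [kg]
        k_s = rng.uniform(15e3, 30e3)               # suspension stiffness [N/m]
        c_s = rng.uniform(1e3, 3e3)                 # damping coefficient [Ns/m]
        k_t = 200e3                                 # tyre stiffness [N/m]
        road = lambda t: 0.05 * np.sin(2 * np.pi * 1.5 * t)   # simple road excitation

        t_eval = np.linspace(0.0, 5.0, 500)
        sol = solve_ivp(quarter_car, (0.0, 5.0), [0, 0, 0, 0], t_eval=t_eval,
                        args=(m_s, m_u, k_s, c_s, k_t, road), rtol=1e-8)
        z_s, v_s, z_u, v_u = sol.y
        # Reconstruct the sprung-mass acceleration profile and add measurement noise.
        a_s = (-k_s * (z_s - z_u) - c_s * (v_s - v_u)) / m_s
        a_s += rng.normal(scale=noise * np.std(a_s), size=a_s.shape)
        return a_s, (k_s, c_s)                      # training pair: signal -> parameters

    profiles, params = zip(*(simulate_sample() for _ in range(8)))
    print(np.array(profiles).shape, params[0])

Such pairs of simulated acceleration profiles and their generating coefficients are exactly the kind of labelled data on which a network, or a hybrid combination of network and classical estimator, can be trained.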
In parallel with steadily growing societal challenges, social enterprises have gained considerably in importance over the past decade. Social enterprises pursue the goal of solving societal problems with entrepreneurial means. Since the focus of social enterprises is not primarily on maximising their own profits, they often have difficulties obtaining suitable corporate financing and realising their growth potential.
To gain a deeper understanding of the phenomenon of social enterprises, the first part of this dissertation uses two experiment-based studies to examine the decision behaviour of investors in social enterprises. Chapter 2 considers the decision behaviour of impact investors. The investment approach pursued by these investors, impact investing, goes beyond a pure orientation towards returns. Based on an experiment with 179 impact investors who made a total of 4,296 investment decisions, a conjoint study identifies their most important decision criteria when selecting social enterprises. Chapter 3 analyses social incubators as a further specific group of supporters of social enterprises. Based on the experiment, this chapter illustrates the incubators' motives and decision criteria when selecting social enterprises as well as the forms of non-financial support they offer. The results show, among other things, that the motives of social incubators in supporting social enterprises are societal, financial or reputation-related in nature.
The second part uses two quantitative empirical studies to examine the extent to which the registration of trademarks is suitable for measuring social innovation and is related to the financial and social growth of social start-ups. Chapter 4 discusses how trademark registrations can serve to measure social innovation. Based on a text analysis of the websites of 925 social enterprises (> 35,000 subpages), four dimensions of social innovation (innovation, impact, financial and scalability dimensions) are first identified. Building on this, the chapter considers how different trademark characteristics relate to the dimensions of social innovation. The results show that, in particular, the number of registered trademarks serves as an indicator of social innovation (all dimensions). The geographical scope of the registered trademarks also plays an important role. Building on the results of Chapter 4, Chapter 5 examines the influence of trademark registrations in early company phases on the further development of the hybrid outcomes of social start-ups. In detail, Chapter 5 argues that both trademark registration itself and its various characteristics relate differently to the social and economic outcomes of social start-ups. Using a dataset of 485 social enterprises, the analyses in Chapter 5 show that social start-ups with a registered trademark exhibit comparatively higher employee growth and make a greater contribution to society.
The results of this dissertation further extend research in the field of social entrepreneurship and offer numerous implications for practice. While Chapters 2 and 3 increase the understanding of the characteristics of non-financial and financial support organisations for social enterprises, Chapters 4 and 5 create a greater understanding of the significance of trademark applications for social enterprises.
The effects of various hormones on the social behaviour of men and women are not fully understood, since measuring these effects precisely and deriving causal relationships has long posed challenges for research. Studies that attempt to control for confounding aspects and to investigate hormonal or endocrine effects on social behaviour and social cognition are therefore all the more important. While studies have already shown effects of acute stress on social behaviour, the underlying neurobiological mechanisms are not fully known, as a purely pharmacological approach would be required to identify them; the few studies that have taken such an approach show contradictory findings. Previous investigations using psychosocial stressors, however, suggest prosocial tendencies after stress for both men and women. Moreover, investigations of female sex hormones and their influence on social behaviour and social cognition in women are particularly challenging because of hormonal fluctuations during the menstrual cycle and changes caused by the use of oral contraceptives. Studies that have examined both cycle phases and the effects of oral contraceptives already point to differences between the different phases as well as between women with a natural cycle and women taking oral contraceptives.
The theoretical part describes the foundations of the human stress response and the hormonal changes in female sex hormones. A chapter on the current state of research on the effects of acute stress on social behaviour and social cognition then provides an overview of the existing findings. The first empirical study, which examines the effects of hydrocortisone on social behaviour and emotion recognition, is then placed within this current body of findings and contributes to the less explored field of pharmacological studies. The second empirical study deals with the effects of female sex hormones on social behaviour and empathy, more precisely with how cycle phases and oral contraceptives (mediated via hormones) exert an influence in women. Finally, the effects of stress hormones in men, and the modulating properties of female sex hormones, cycle phases and oral contraceptives in women, are discussed with regard to social behaviour and social cognition.
This thesis focuses on threats as an experience of stress. Threats are distinguished from challenges and hindrances as another dimension of stress in challenge-hindrance models (CHM) of work stress (Tuckey et al., 2015). Multiple disciplines of psychology (e.g. stereotype, Fingerhut & Abdou, 2017; identity, Petriglieri, 2011) provide a variety of possible events that can trigger threats (e.g., failure experiences, social devaluation; Leary et al., 2009). However, systematic consideration of triggers, and thus an overview of when the danger of threats arises, has been lacking to date. The explanation of why events are appraised as threats is related to frustrated needs (e.g., Quested et al., 2011; Semmer et al., 2007), but empirical evidence is rare and needs can cover a wide range of content (e.g., relatedness, competence, power), depending on the need approach (e.g., Deci & Ryan, 2000; McClelland, 1961). This thesis aims to shed light on the triggers (when) and the need-based mechanism (why) of threats.
In the introduction, I introduce threats as a dimension of stress experience (cf. Tuckey et al., 2015) and give insights into the diverse field of threat triggers (the when of threats). Further, I explain threats in terms of a frustrated need for a positive self-view, before presenting specific needs as possible determinants in the threat mechanism (the why of threats). Study 1 is a literature review based on 122 papers from interdisciplinary threat research and provides a classification of five triggers and five needs identified in explanations and operationalizations of threats. In Study 2, the five triggers and needs are ecologically validated in interviews with police officers (n = 20), paramedics (n = 10), teachers (n = 10), and employees of the German federal employment agency (n = 8). The mediating role of needs in the relationship between triggers and threats is confirmed in a correlative survey design (N = 101 leaders working part-time, Study 3) and in a controlled laboratory experiment (N = 60 two-person student teams, Study 4). The thesis ends with a general discussion of the results of the four studies, providing theoretical and practical implications.
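The mediation logic tested in Study 3 (trigger, frustrated need, threat) can be sketched with simulated scores and a bootstrapped indirect effect; the variable names and effect sizes below are illustrative assumptions, not the study's data or results.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)

    # Simulated scores: trigger exposure X, need frustration M, threat appraisal Y.
    n = 101
    X = rng.normal(size=n)
    M = 0.5 * X + rng.normal(size=n)                    # path a
    Y = 0.4 * M + 0.1 * X + rng.normal(size=n)          # paths b and c'

    def indirect_effect(X, M, Y):
        a = sm.OLS(M, sm.add_constant(X)).fit().params[1]
        b = sm.OLS(Y, sm.add_constant(np.column_stack([M, X]))).fit().params[1]
        return a * b

    # Percentile bootstrap for the indirect effect a*b.
    boot = []
    for _ in range(2000):
        i = rng.integers(0, n, n)
        boot.append(indirect_effect(X[i], M[i], Y[i]))
    low, high = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect: {indirect_effect(X, M, Y):.3f}, 95% CI [{low:.3f}, {high:.3f}]")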
Forest inventories provide significant monitoring information on forest health, biodiversity, resilience against disturbance, as well as on biomass and timber harvesting potential. For this purpose, modern inventories increasingly exploit the advantages of airborne laser scanning (ALS) and terrestrial laser scanning (TLS).
Although tree crown detection and delineation using ALS can be seen as a mature discipline, the identification of individual stems is a rarely addressed task. In particular, the informative value of the stem attributes, especially the inclination characteristics, is hardly known. In addition, a lack of tools for the processing and fusion of forest-related data sources can be identified. The given thesis addresses these research gaps in four peer-reviewed papers, with a focus on the suitability of ALS data for the detection and analysis of tree stems.
In addition to providing a novel post-processing strategy for geo-referencing forest inventory plots, the thesis shows that ALS-based stem detections are very reliable and that their positions are accurate. In particular, the stems have proven suitable for studying prevailing trunk inclination angles and orientations; a species-specific down-slope inclination of the tree stems and a leeward orientation of conifers could be observed.
Agricultural monitoring is necessary. Since the beginning of the Holocene, human agricultural practices have been shaping the face of the earth, and today around one third of the ice-free land mass consists of cropland and pastures. While agriculture is necessary for our survival, its intensity has caused many negative externalities, such as enormous freshwater consumption, the loss of forests and biodiversity, greenhouse gas emissions, as well as soil erosion and degradation. Some of these externalities can potentially be ameliorated by careful allocation of crops and cropping practices, while at the same time the state of these crops has to be monitored in order to assess food security. Modern satellite-based earth observation can be an adequate tool to quantify the abundance of crop types, i.e., to produce spatially explicit crop type maps. The resources to do so, in terms of input data, reference data and classification algorithms, have been constantly improving over the past 60 years, and we now live in a time where fully operational satellites produce freely available imagery with often less than monthly revisit times at high spatial resolution. At the same time, classification models have been constantly evolving from distribution-based statistical algorithms, over machine learning, to the now ubiquitous deep learning.
In this environment, we used an explorative approach to advance the state of the art of crop classification. We conducted regional case studies focused on the study region of the Eifelkreis Bitburg-Prüm, aiming to develop validated crop classification toolchains. Because of their unique role in the regional agricultural system and because of their specific phenological characteristics, we focused solely on maize fields.
In the first case study, we generated reference data for the years 2009 and 2016 in the study region by drawing polygons based on high-resolution aerial imagery, and used these in conjunction with RapidEye imagery to produce high-resolution maize maps with a random forest classifier and a Gaussian blur filter. We were able to highlight the importance of careful residual analysis, especially in terms of autocorrelation. As a result, we were able to demonstrate that, in spite of the severe limitations introduced by the restricted acquisition windows due to cloud coverage, high-quality maps could be produced for both years, and the regional development of maize cultivation could be quantified.
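A minimal sketch of this classification step is given below. It is not the exact toolchain of the case study: the array shapes, band count and threshold are illustrative placeholders, and random arrays stand in for the RapidEye bands and the polygon-derived reference pixels. The sketch only shows the combination named above, a random forest classifier whose per-pixel maize probability is smoothed with a Gaussian blur before thresholding.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

# Illustrative placeholders: a 5-band RapidEye-like scene, polygon-derived
# labels (1 = maize, 0 = other) and a sparse mask of reference pixels.
rng = np.random.default_rng(0)
image = rng.random((5, 200, 200)).astype(np.float32)     # bands x rows x cols
labels = rng.integers(0, 2, size=(200, 200))
train_mask = rng.random((200, 200)) < 0.05

X = image.reshape(5, -1).T                               # pixels x bands
y = labels.ravel()

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X[train_mask.ravel()], y[train_mask.ravel()])

# Per-pixel maize probability, smoothed with a Gaussian blur before thresholding.
prob = clf.predict_proba(X)[:, 1].reshape(200, 200)
maize_map = gaussian_filter(prob, sigma=1.5) > 0.5
print("maize share:", maize_map.mean())
```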
In the second case study, we used these spatially explicit datasets to link the expansion of biogas-producing units with the extended maize cultivation in the area. In a next step, we overlaid the maize maps with soil and slope rasters in order to assess spatially explicit risks of soil compaction and erosion. Thus, we were able to highlight the potential role of remote sensing-based crop type classification in environmental protection by producing maps of potential soil hazards, which can be used by local stakeholders to reallocate certain crop types to locations with lower associated risk.
In our third case study, we used Sentinel-1 data as input imagery and official statistical records as maize reference data, and were able to produce consistent modeling input data for four consecutive years. Using these datasets, we could train and validate different models on spatially and temporally independent random subsets, with the goal of assessing model transferability. We were able to show that state-of-the-art deep learning models such as U-Net performed significantly better than conventional models like random forests when the model was validated in a different year or a different regional subset. We highlighted and discussed the implications for modeling robustness and the potential usefulness of deep learning models in building fully operational global crop classification models.
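The transferability assessment can be illustrated with a small sketch. The data layout and the stand-in classifier below are assumptions (a random forest replaces the U-Net to keep the example short); what the sketch shows is only the validation idea: train on some years or spatial blocks and evaluate on the held-out ones.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

# Illustrative data: per-pixel feature vectors (e.g. a Sentinel-1 time series),
# tagged with the year and the spatial block they come from.
rng = np.random.default_rng(1)
n = 4000
X = rng.normal(size=(n, 24))
y = rng.integers(0, 2, size=n)                 # 1 = maize according to records
year = rng.choice([2016, 2017, 2018, 2019], size=n)
block = rng.choice(["north", "south"], size=n)

def transfer_score(train_idx, test_idx):
    """Fit on one subset, evaluate on a disjoint subset."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    return f1_score(y[test_idx], clf.predict(X[test_idx]))

# Temporal transfer: train on three years, validate on the unseen year.
print("temporal:", transfer_score(year != 2019, year == 2019))
# Spatial transfer: train on one regional block, validate on the other.
print("spatial:", transfer_score(block == "north", block == "south"))
```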
We conclude that the first major barrier for global classification models is reference data. Since most research in this area is still conducted with local field surveys, and only a few countries have access to official agricultural records, more global cooperation is necessary to build harmonized and regionally stratified datasets. The second major barrier is the classification algorithm. While a lot of progress has been made in this area, the rapid appearance of new types of deep learning models shows great promise but has not yet consolidated. Considerable research is still necessary to determine which models perform best and most robustly while remaining transparent and usable by non-experts, so that they can be applied effortlessly by local and global stakeholders.
This thesis is concerned with two classes of optimization problems which stem
mainly from statistics: clustering problems and cardinality-constrained optimization problems. We are particularly interested in the development of computational techniques to exactly or heuristically solve instances of these two classes
of optimization problems.
The minimum sum-of-squares clustering (MSSC) problem is widely used
to find clusters within a set of data points. The problem is also known as
the $k$-means problem, since the most prominent heuristic to compute a feasible
point of this optimization problem is the $k$-means method. In many modern
applications, however, the clustering suffers from uncertain input data due to, e.g., unstructured measurement errors. The clustering result then represents a clustering of the erroneous measurements instead of recovering the true underlying clustering structure. We address this issue by
applying robust optimization techniques: we derive the strictly and $\Gamma$-robust
counterparts of the MSSC problem, which are as challenging to solve as the
original model. Moreover, we develop alternating direction methods to quickly
compute feasible points of good quality. Our experiments reveal that the more
conservative strictly robust model consistently provides better clustering solutions
than the nominal and the less conservative $\Gamma$-robust models.
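For readers unfamiliar with the nominal problem, the following Python sketch shows the plain k-means (Lloyd) heuristic, which computes a feasible, not necessarily optimal, point of the MSSC problem; the robust counterparts and the alternating direction methods of the thesis are not part of the sketch.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: a feasible (not necessarily optimal) MSSC point."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Update step: centers move to the mean of their cluster.
        new_centers = np.array([points[assign == j].mean(axis=0)
                                if np.any(assign == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    assign = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
    sse = ((points - centers[assign]) ** 2).sum()      # MSSC objective value
    return centers, assign, sse

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0, 3, 6)])
print("objective:", kmeans(pts, k=3)[2])
```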
In the context of clustering problems, however, using only a heuristic solution
comes with severe disadvantages regarding the interpretation of the clustering.
This motivates us to study globally optimal algorithms for the MSSC problem.
We note that although some algorithms have already been proposed for this
problem, it is still far from being “practically solved”. Therefore, we propose
mixed-integer programming techniques, which are mainly based on geometric
ideas and which can be incorporated in a
branch-and-cut based algorithm tailored
to the MSSC problem. Our numerical experiments show that these techniques
significantly improve the solution process of a
state-of-the-art MINLP solver
when applied to the problem.
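For reference, a common mixed-integer formulation of the MSSC problem reads as follows; this is the textbook assignment formulation, not necessarily the exact model or the cutting planes used in the thesis. Given points $p_1,\dots,p_n \in \mathbb{R}^d$ and $k$ clusters,
$$\min_{c,\,x}\ \sum_{i=1}^{n}\sum_{j=1}^{k} x_{ij}\,\lVert p_i - c_j\rVert_2^2 \quad \text{s.t.}\quad \sum_{j=1}^{k} x_{ij} = 1\ \ (i=1,\dots,n),\qquad x_{ij}\in\{0,1\},\ \ c_j\in\mathbb{R}^d,$$
where $x_{ij}=1$ assigns point $i$ to cluster $j$; the products of binary assignment variables and continuous center variables are what make the model a nonconvex MINLP.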
We then turn to the study of cardinality-constrained optimization problems.
We consider two prominent problems of this class: sparse portfolio optimization and sparse regression. In many modern applications, it is common
to consider problems with thousands of variables. Therefore, globally optimal
algorithms are not always computationally viable and the study of sophisticated
heuristics is very desirable. Since these problems have a discrete-continuous
structure, decomposition methods are particularly well suited. We then apply a
penalty alternating direction method that exploits this structure and provides very good feasible points in a reasonable amount of time. Our computational study shows that our methods are competitive with state-of-the-art solvers and heuristics.
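The discrete-continuous structure can be illustrated on the sparse regression case. The Python sketch below is not the penalty alternating direction method of the thesis; it is a simple heuristic, run on illustrative data, that alternates between the discrete part (choosing a support of at most k variables) and the continuous part (a least-squares refit on that support).

```python
import numpy as np

def sparse_ls(A, b, k, iters=50):
    """Heuristic for  min ||Ax - b||^2  s.t.  ||x||_0 <= k.

    Alternates between the discrete part (picking a support of size k after a
    gradient step) and the continuous part (least-squares refit on the support).
    """
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(n)
    support_prev = None
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - b))      # gradient step on the least-squares term
        support = np.sort(np.argsort(np.abs(x))[-k:])
        x = np.zeros(n)
        x[support], *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        if support_prev is not None and np.array_equal(support, support_prev):
            break                                # support has stabilised
        support_prev = support
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 30))
x_true = np.zeros(30)
x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.normal(size=100)
print("recovered support:", np.nonzero(sparse_ls(A, b, k=3))[0])
```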
Even though in most cases time is a good metric to measure the cost of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data which is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load the data from memory. The topic of this thesis is a new metric to measure the cost of algorithms, called memory distance, which can be seen as an abstraction of the aspect just mentioned. We show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Further, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory distance complexity classes.
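The idea behind such a locality metric can be illustrated with a small experiment. The counter below computes a reuse-distance-style quantity over an address trace; this is an assumption made for illustration and not necessarily the exact definition of memory distance used in the thesis. Comparing a row-major with a column-major traversal of the same matrix shows how two access patterns with identical operation counts differ sharply in locality.

```python
from collections import OrderedDict

def total_reuse_distance(trace):
    """Sum of reuse distances over an address trace.

    The reuse distance of an access is the number of distinct addresses touched
    since the previous access to the same address; first accesses contribute 0.
    Smaller totals indicate more cache-friendly behaviour. This is an
    illustrative locality metric, not necessarily the thesis's exact definition.
    """
    last_seen = OrderedDict()            # addresses ordered by recency of access
    total = 0
    for addr in trace:
        if addr in last_seen:
            keys = list(last_seen)
            total += len(keys) - keys.index(addr) - 1
            last_seen.move_to_end(addr)
        else:
            last_seen[addr] = None       # cold access, counted as distance 0
    return total

n = 64
row_major = [(i, j) for i in range(n) for j in range(n)]   # contiguous accesses
col_major = [(j, i) for i in range(n) for j in range(n)]   # stride-n accesses
to_line = lambda ij: (ij[0] * n + ij[1]) // 8              # 8 elements per cache line
print("row-major:", total_reuse_distance(map(to_line, row_major)))
print("col-major:", total_reuse_distance(map(to_line, col_major)))
```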
The Second Language Acquisition of English Non-Finite Complement Clauses – A Usage-Based Perspective
(2022)
One of the most essential hypotheses of usage-based theories and many constructionist approaches to language is that language acquisition entails the piecemeal learning of constructions on the basis of general cognitive mechanisms and exposure to the target language in use (Ellis 2002; Tomasello 2003). However, there is still a considerable lack of empirical research on the emergence and mental representation of constructions in second language (L2) acquisition. One crucial question that arises, for instance, is whether L2 learners’ knowledge of a construction corresponds to a native-like mapping of form and meaning and, if so, to what extent this representation is shaped by usage. For instance, it is unclear how learners ‘build’ constructional knowledge, i.e. which pieces of frequency-, form- and meaning-related information become relevant for the entrenchment and schematisation of an L2 construction.
To address these issues, the English catenative verb construction was used as a testbed phenomenon. This idiosyncratic complex construction consists of a catenative verb and a non-finite complement clause (see Huddleston & Pullum 2002), which is prototypically a gerund-participial (henceforth referred to as ‘target-ing’ construction) or a to-infinitival complement (‘target-to’ construction):
(1) She refused to do her homework.
(2) Laura kept reading love stories.
(3) *He avoids to listen to loud music.
This construction is particularly interesting because learners often show choices of a complement type different from those of native speakers (e.g. Gries & Wulff 2009; Martinez‐Garcia & Wulff 2012), as illustrated in (3), and because it is commonly claimed to be difficult to teach by explicit rules (see e.g. Petrovitz 2001).
By triangulating different types of usage data (corpus and elicited production data) and analysing them with multivariate statistical tests, the effects of different usage-related factors (e.g. frequency, proficiency level of the learner, semantic class of the verb, etc.) on the representation and development of the catenative verb construction and its subschemas (i.e. the target-to and target-ing constructions) were examined. In particular, it was assessed whether they can predict a native-like form-meaning pairing of a catenative verb and non-finite complement.
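As an illustration of such a multivariate model, the sketch below fits a toy logistic regression that predicts the choice between a to-infinitival and an -ing complement from a few usage-related predictors. The rows, predictor names and library choice are purely illustrative assumptions and do not reproduce the actual analyses of the thesis.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Illustrative rows: each observation is one use of a catenative verb by a learner.
data = pd.DataFrame({
    "verb_log_freq":  [5.2, 3.1, 4.0, 2.2, 4.8, 3.7],
    "proficiency":    ["A2", "C1", "B2", "A2", "C2", "B1"],
    "semantic_class": ["aspectual", "desire", "avoidance", "desire", "aspectual", "avoidance"],
    "target_to":      [0, 1, 0, 1, 0, 0],   # 1 = to-infinitival, 0 = -ing complement
})

features = ["verb_log_freq", "proficiency", "semantic_class"]
model = make_pipeline(
    ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["proficiency", "semantic_class"])],
        remainder="passthrough",
    ),
    LogisticRegression(max_iter=1000),
)
model.fit(data[features], data["target_to"])
print(model.predict_proba(data[features])[:, 1])   # predicted probability of a to-complement
```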
First, all studies were able to show a robust effect of frequency on the complement choice. Frequency not only leads to the entrenchment of high-frequency exemplars of the construction but is also found to motivate a taxonomic generalisation across related exemplars and the representation of a more abstract schema. Second, the results indicate that the target-to construction, due to its higher type and token frequency, has a higher degree of schematicity and productivity than the target-ing construction for the learners, which allows for analogical comparisons and pattern extension with less entrenched exemplars. This schema is likely to be overgeneralised to (less frequent) target-ing verbs because the learners perceive formal and semantic compatibility between the unknown/infrequent verb and this pattern.
Furthermore, the findings present evidence that less advanced learners (A2-B2) make more coarse-grained generalisations, which are centred around high-frequency and prototypical exemplars/low-scope patterns. In the case of high-proficiency learners (C1-C2), not only does the number of native-like complement choices increase, but relational information, such as the semantic subclasses of the verb, form-function contingency and other factors, also becomes relevant for a target-like choice. Thus, the results suggest that with increasing usage experience learners gradually develop a more fine-grained, interconnected representation of the catenative verb construction, which increasingly resembles the form-meaning mappings of native speakers.
Taken together, these insights highlight the importance for language learning and teaching environments to acknowledge that L2 knowledge is represented in the form of highly interconnected form-meaning pairings, i.e. constructions, that can be found on different levels of abstraction and complexity.
Due to the transition towards climate neutrality, energy markets are rapidly evolving. New technologies are developed that allow electricity from renewable energy sources to be stored or to be converted into other energy commodities. As a consequence, new players enter the markets and existing players gain more importance. Market equilibrium problems are capable of capturing these changes and therefore enable us to answer contemporary research questions with regard to energy market design and climate policy.
This cumulative dissertation is devoted to the study of different market equilibrium problems that address such emerging aspects in liberalized energy markets. In the first part, we review a well-studied competitive equilibrium model for energy commodity markets and extend this model by sector coupling, by temporal coupling, and by a more detailed representation of physical laws and technical requirements. Moreover, we summarize our main contributions of the last years with respect to analyzing the market equilibria of the resulting equilibrium problems.
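As a point of reference, the nominal single-commodity building block of such models can be written as a welfare maximization problem; this is a generic textbook template, not the exact model of the thesis. With inverse demand $p_t(\cdot)$, production costs $c_u(\cdot)$ and capacities $\bar y_u$,
$$\max_{d,\,y \ge 0}\; \sum_{t}\Big(\int_0^{d_t} p_t(\omega)\,\mathrm{d}\omega \;-\; \sum_{u} c_u(y_{u,t})\Big) \quad \text{s.t.} \quad \sum_{u} y_{u,t} = d_t, \quad y_{u,t} \le \bar y_u \;\; \forall u,t,$$
where the market prices arise as the dual variables of the balance constraints and, under convexity, the welfare-maximal solution coincides with a competitive equilibrium.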
For the extension regarding sector coupling, we derive sufficient conditions for ensuring uniqueness of the short-run equilibrium a priori and for verifying uniqueness of the long-run equilibrium a posteriori. Furthermore, we present illustrative examples showing that each of the derived conditions is indeed necessary to guarantee uniqueness in general.
For the extension regarding temporal coupling, we provide sufficient conditions for ensuring uniqueness of demand and production a priori. These conditions also imply uniqueness of the short-run equilibrium in the case of a single storage operator. However, in the case of multiple storage operators, examples illustrate that charging and discharging decisions are not unique in general. We conclude the equilibrium analysis with an a posteriori criterion for verifying uniqueness of a given short-run equilibrium. Since the computation of equilibria is much more challenging due to the temporal coupling, we briefly review why a tailored parallel and distributed alternating direction method of multipliers enables market equilibria to be computed efficiently.
For the extension regarding physical laws and technical requirements, we show that, in nonconvex settings, existence of an equilibrium is not guaranteed and that the fundamental welfare theorems therefore fail to hold. In addition, we argue that the welfare theorems can be re-established in a market design in which the system operator is committed to a welfare objective. For the case of a profit-maximizing system operator, we propose an algorithm that indicates existence of an equilibrium and that computes an equilibrium in the case of existence. Based on well-known instances from the literature on the gas and electricity sector, we demonstrate the broad applicability of our algorithm. Our computational results suggest that an equilibrium often exists for an application involving nonconvex but continuous stationary gas physics. In turn, integralities introduced due to the switchability of DC lines in DC electricity networks lead to many instances without an equilibrium. Finally, we state sufficient conditions under which the gas application has a unique equilibrium and the line switching application has finitely many.
In the second part, all preprints belonging to this cumulative dissertation are provided. These preprints, as well as two journal articles to which the author of this thesis contributed, are referenced within the extended summary in the first part and contain more details.
Algorithmen als Richter
(2022)
Human decision-making authority is being challenged by algorithmic decision-making systems. From a constitutional law perspective, this is particularly problematic in areas that concern state action. The judiciary holds a special position owing to the particular protection afforded by Art. 92 ff. GG. Lydia Wolff therefore asks what answers the Grundgesetz offers to digital change in this area and how an intrinsic value of human decisions in adjudication can be articulated in the face of technological change.
To this end, the work contributes to the constitutional concept of the judge and places this established concept in the context of new digital challenges posed by algorithmic competition.
This socio-pragmatic study investigates organisational conflict talk between superiors and subordinates in three medical dramas from China, Germany and the United States. It explores what types of sociolinguistic realities the medical dramas construct by ascribing linguistic behaviour to different status groups. The study adopts an enhanced analytical framework based on John Gumperz’ discourse strategies and Spencer-Oatey’s rapport management theory. This framework detaches directness from politeness, defines directness based on preference and polarity and explains the use of direct and indirect opposition strategies in context.
The findings reveal that the three hospital series draw on 21 opposition strategies which can be categorised into mitigating, intermediate and intensifying strategies. While the status identity of superiors is commonly characterised by a higher frequency of direct strategies than that of subordinates, both status groups manage conflict in a primarily direct manner across all three hospital shows. The high percentage of direct conflict management is related to the medical context, which is characterised by a focus on transactional goals, complex role obligations and potentially severe consequences of medical mistakes and delays. While the results reveal unexpected similarities between the three series with regard to the linguistic directness level, cross-cultural differences between the Chinese and the two Western series are obvious from particular sociopragmatic conventions. These conventions particularly include the use of humour, imperatives, vulgar language and incorporated verbal and para-verbal/multimodal opposition. Noteworthy differences also appear in the underlying patterns of strategy use. They show that the Chinese series promotes a greater tolerance of hierarchical structures and a partially closer social distance in asymmetrical professional relationships. These disparities are related to different perceptions of power distance, role relationships, face and harmony.
The findings challenge existing stereotypes of Chinese, US American and German conflict management styles and emphasise the context-specific nature of verbal conflict management in every culture. Although cinematic aspects affect the conflict management in the fictional data, the results largely comply with recent research on conflict talk in real-life workplaces. As such, the study contributes to intercultural trainings in medical contexts and provides an enhanced analytical framework for further cross-cultural studies on linguistic strategies.
Modellbildung und Umsetzung von Methoden zur energieeffizienten Nutzung von Containertechnologien
(2021)
The use of cloud software and of scaled web apps and web services has increased dramatically in recent years, leading to a growing number of high-performance cloud data centres. Besides improved services, this is also reflected in the worldwide electricity consumption of data centres, which currently amounts to slightly more than 1% of global consumption (roughly 200 TWh). Forecasts predict a massive increase in the electricity consumption of cloud data centres in the coming years. This development is driven by the acceleration of administration and development that results, among other things, from the use of containers. As the basis of millions of web apps and services, containers speed up the scaling, deployment, and updating of cloud services.
This thesis shows that, in addition to their many technical advantages, containers offer opportunities to reduce the energy consumption of cloud data centres that is currently wasted through inefficient configuration of containers and container runtime environments. Based on a survey and a review of relevant literature, likely problems in the use of containers are identified in a first step. In addition, the awareness of administrators and developers regarding the energy consumption of container software is assessed. Building on the results of the survey and the review, the components of the de facto standard Docker are examined using standard container scenarios. Subsequently, a model consisting of a measurement methodology, recommendations for an efficient configuration of containers, and tools is described. The measurement methodology is designed to be easy to apply and to support common data centre technologies. Moreover, the recommendations enable both developers and administrators to decide which Docker components should be used for energy-efficient operation, depending on the container deployment scenario, and which could be omitted. In terms of energy efficiency, the resulting containers can be deployed on servers as well as on PCs and embedded systems (as part of IoT and edge cloud), thus addressing not only the cloud problem described above.
The thesis also examines the behaviour of scaled web applications. Common orchestration tools define static scaling thresholds for applications, which in most cases are based on CPU utilisation. It is shown that this approach takes into account neither the actual availability nor the power consumption of the applications. The autoscaler of the open-source container orchestration tool Kubernetes is considered and extended by a newly developed tool. It becomes clear that scaling thresholds can be adjusted dynamically by evaluating common usage scenarios in advance, together with information on their power consumption and their availability under increasing load.
Finally, the generated model is examined empirically in three simulations intended to demonstrate its impact on the energy consumption of cloud data centres.
The dissertation demonstrates that the Court of Justice of the European Union (CJEU) restricts, in various ways, the Member States' discretion in shaping the transposition of directives within the meaning of Art. 288(3) TFEU, the most far-reaching form of transposition leeway provided for in a directive's content, and in doing so partly violates requirements of primary Union law. Where such violations are identified, the dissertation further proposes corrections of the Court's approaches to limiting the Member States' shaping discretion in directive transposition. The dissertation proceeds as follows: Starting from four guiding research questions raised in the introduction (Chapter 1), Chapter 2 presents the foundations of the directive as a form of legal act under Union law that are relevant to the study. Particular attention is paid to the treaty-based allocation of competences between the EU and its Member States in the cooperative two-stage process of directive legislation, and a restrictive interpretation of the term "Ziel" (result) within the meaning of Art. 288(3) TFEU is developed (a competence-content-defining, modified narrow concept of the directive's objective). In Chapter 3, the dissertation identifies the basic forms of Member State decision-making powers in directive transposition that occur in legislative practice and defines shaping discretion ("Ausgestaltungsermessen") as the most far-reaching form of such transposition leeway. Chapter 4 first identifies the CJEU's approaches to limiting the Member States' shaping discretion. It becomes apparent that the Court's case law not only limits the emergence of such discretion: an exemplary analysis of the CJEU's case law on Art. 4(2) subpara. 1 sentence 1 and sentence 2 lit. b of the EIA Directive 2011/92/EU and its predecessor provisions shows that, and how, the Court also limits the scope of the shaping discretion that exists under the wording of a directive relevant to its interpretation. The limiting approaches identified in this way are then assessed against the requirements of primary Union law, including the restrictive concept of the directive's objective within the meaning of Art. 288(3) TFEU developed in Chapter 2. Since some of the Court's limiting approaches prove incompatible with primary Union law, proposals are finally made for correcting this case law in conformity with Union law. Chapter 5 summarises the research findings in the form of thesis-style answers to the four guiding research questions raised in the introduction.
The present text was accepted as the framework paper of a cumulative dissertation at Trier University. It summarises, reflects on, and extends the theoretical discussion of the individual empirical contributions, each of which addresses one aspect of the overall phenomenon of the "innovation lab for supporting entrepreneurial learning and the development of social service innovations". The innovation lab is fundamentally understood as a measure of personnel development. In a thought experiment, the results are transferred to organisations of adult and continuing education.
What distinguishes this framework paper is the combination of a relational understanding of space with a learning-theoretical underpinning of the "innovation lab" from the perspective of organisational pedagogy and adult education. The results show the lab as a learning space set apart from working life, a semi-autonomously attached space in which learning processes on different levels take place and are initiated. The lab is discussed as a heterotopic (learning) space. Also new is the inclusion of a critical perspective that has so far been missing from the discourse on innovation labs: the lab is characterised as a precarious learning space. This work thus provides a fundamental elaboration of the lab as a learning space that opens up numerous avenues for further research.
The main focus of this work is to study the computational complexity of generalizations of the synchronization problem for deterministic finite automata (DFAs). This problem asks, for a given DFA, whether there exists a word w that maps every state of the automaton to one and the same state. We call such a word w a synchronizing word. A synchronizing word brings a system from an unknown configuration into a well-defined configuration and thereby resets the system.
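The basic notion can be made concrete with a few lines of Python. The sketch below only checks the plain definition of a synchronizing word on the classical Černý automaton; the generalizations studied in the thesis (regular constraints, state orders, stack conditions) are not modeled.

```python
def is_synchronizing_word(delta, states, word):
    """Check whether `word` maps every state of a DFA to one and the same state.

    delta: dict mapping (state, symbol) -> state (a total transition function).
    """
    image = set(states)
    for symbol in word:
        image = {delta[(q, symbol)] for q in image}
    return len(image) == 1

# Černý automaton C_4: 'a' cycles the states, 'b' maps state 3 to 0 and fixes
# the rest; its shortest synchronizing word has length (4 - 1)^2 = 9.
states = [0, 1, 2, 3]
delta = {(q, 'a'): (q + 1) % 4 for q in states}
delta.update({(q, 'b'): q for q in states})
delta[(3, 'b')] = 0

print(is_synchronizing_word(delta, states, "baaabaaab"))   # True
```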
We generalize this problem in four different ways.
First, we restrict the set of potential synchronizing words to a fixed regular language associated with the synchronization under regular constraint problem.
The motivation here is to control the structure of a synchronizing word so that, for instance, it first brings the system from an operate mode to a reset mode and then finally again into the operate mode.
The next generalization concerns the order of states in which a synchronizing word transitions the automaton. Here, a DFA A and a partial order R are given as input, and the question is whether there exists a word that synchronizes A and for which the induced state order is consistent with R. Thereby, we study different ways for a word to induce an order on the state set.
Then, we change our focus from DFAs to push-down automata and generalize the synchronization problem to push-down automata and, in subsequent work, to visibly push-down automata. Here, a synchronizing word still needs to map each state of the automaton to one state, but it further needs to fulfill some constraints on the stack. We study three different types of stack constraints where, after reading the synchronizing word, the stacks associated with each run in the automaton must be (1) empty, (2) identical, or (3) arbitrary.
We observe that the synchronization problem for general push-down automata is undecidable and study restricted sub-classes of push-down automata where the problem becomes decidable. For visibly push-down automata we even obtain efficient algorithms for some settings.
The second part of this work studies the intersection non-emptiness problem for DFAs. This problem is related to the question of whether a given DFA A can be synchronized into a state q, since the set of words synchronizing A into q can be seen as the intersection of the languages accepted by copies of A with different initial states and q as their final state.
For the intersection non-emptiness problem, we first study the complexity of this problem, which is PSPACE-complete in general, restricted to subclasses of DFAs associated with the two well-known Straubing-Thérien and Cohen-Brzozowski dot-depth hierarchies.
Finally, we study the problem whether a given minimal DFA A can be represented as the intersection of a finite set of smaller DFAs such that the language L(A) accepted by A is equal to the intersection of the languages accepted by the smaller DFAs. There, we focus on the subclass of permutation and commutative permutation DFAs and improve known complexity bounds.
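For illustration, the naive decision procedure underlying the intersection non-emptiness problem is a breadth-first search on the product automaton, sketched below in Python; the example automata are made up, and the dot-depth restrictions studied in the thesis are not reflected in the sketch.

```python
from collections import deque

def intersection_nonempty(dfas):
    """BFS on the product automaton of several complete DFAs over one alphabet.

    Each DFA is a tuple (delta, start, finals) with delta: (state, symbol) -> state.
    Returns True iff the intersection of the accepted languages is non-empty.
    """
    alphabet = {sym for delta, _, _ in dfas for (_, sym) in delta}
    start = tuple(s for _, s, _ in dfas)
    queue, seen = deque([start]), {start}
    while queue:
        states = queue.popleft()
        if all(q in finals for q, (_, _, finals) in zip(states, dfas)):
            return True                      # a word accepted by every DFA exists
        for sym in alphabet:
            succ = tuple(delta[(q, sym)] for q, (delta, _, _) in zip(states, dfas))
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return False

# Words with an even number of a's, intersected with words ending in b:
# the intersection is non-empty (e.g. "b").
even_a = ({(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}, 0, {0})
ends_b = ({(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}, 0, {1})
print(intersection_nonempty([even_a, ends_b]))   # True
```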
Surveys play a major role in studying social and behavioral phenomena that are difficult to
observe. Survey data provide insights into the determinants and consequences of human
behavior and social interactions. Many domains rely on high quality survey data for decision
making and policy implementation including politics, health, business, and the social
sciences. Given a certain research question in a specific context, finding the most appropriate
survey design to ensure data quality and keep fieldwork costs low at the same time is a
difficult task. The aim of examining survey research methodology is to provide the best
evidence to estimate the costs and errors of different survey design options. The goal of this
thesis is to support and optimize the accumulation and sustainable use of evidence in survey
methodology in four steps:
(1) Identifying the gaps in meta-analytic evidence in survey methodology by a systematic
review of the existing evidence along the dimensions of a central framework in the
field
(2) Filling in these gaps with two meta-analyses in the field of survey methodology, one
on response rates in psychological online surveys, the other on panel conditioning
effects for sensitive items
(3) Assessing the robustness and sufficiency of the results of the two meta-analyses
(4) Proposing a publication format for the accumulation and dissemination of meta-analytic evidence
The Eurosystem's Household Finance and Consumption Survey (HFCS) collects micro data on private households' balance sheets, income and consumption. It is a stylised fact that wealth is unequally distributed and that the wealthiest own a large share of total wealth. For sample surveys which aim at measuring wealth and its distribution, this is a considerable problem. To overcome it, some of the country surveys under the HFCS umbrella try to sample a disproportionately large share of households that are likely to be wealthy, a technique referred to as oversampling. Ignoring such types of complex survey designs in the estimation of regression models can lead to severe problems. This thesis first illustrates such problems using data from the first wave of the HFCS and canonical regression models from the field of household finance and gives a first guideline for HFCS data users regarding the use of replicate weight sets for variance estimation using a variant of the bootstrap. A further investigation of the issue necessitates a design-based Monte Carlo simulation study. To this end, the already existing large close-to-reality synthetic simulation population AMELIA is extended with synthetic wealth data. We discuss different approaches to the generation of synthetic micro data in the context of the extension of a synthetic simulation population that was originally based on a different data source. We propose an additional approach that is suitable for the generation of highly skewed synthetic micro data in such a setting using a multiply-imputed survey data set. After a description of the survey designs employed in the first wave of the HFCS, we then construct new survey designs for AMELIA that share core features of the HFCS survey designs. A design-based Monte Carlo simulation study shows that while more conservative approaches to oversampling do not pose problems for the estimation of regression models if sampling weights are properly accounted for, the same does not necessarily hold for more extreme oversampling approaches. This issue should be further analysed in future research.
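The replicate-weight idea can be sketched in a few lines. The construction of the replicate weights and the scaling of the variance below are illustrative assumptions (the HFCS prescribes its own replication scheme); the sketch only shows the mechanics: recompute the weighted estimator once per replicate weight set and take the spread of the replicate estimates around the full-sample estimate.

```python
import numpy as np

def replicate_variance(y, w, replicate_weights):
    """Variance of a weighted estimator from bootstrap replicate weight sets.

    y: outcome values, w: final survey weights,
    replicate_weights: (R, n) array with one weight vector per replicate.
    The scaling factor of the squared deviations depends on the replication
    scheme; the plain mean used here is a simplifying assumption.
    """
    estimate = np.average(y, weights=w)
    reps = np.array([np.average(y, weights=wr) for wr in replicate_weights])
    return estimate, np.mean((reps - estimate) ** 2)

rng = np.random.default_rng(0)
n, R = 500, 100
y = np.exp(rng.normal(10.0, 1.5, size=n))            # skewed, wealth-like outcome
w = rng.uniform(0.5, 3.0, size=n)                     # design weights
rep_w = w * rng.poisson(1.0, size=(R, n))             # crude bootstrap multipliers
est, var = replicate_variance(y, w, rep_w)
print("estimate:", est, "std. error:", var ** 0.5)
```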
This thesis investigated the possible uses of carbon footprints in large-scale catering. Both methodological aspects and challenges of their assessment were examined, and possible labelling formats were evaluated.
First, using university catering as an example, a complete carbon footprint study was carried out in accordance with DIN 14067 for six exemplary dishes (PCF) and, following DIN 14064, for the canteen operation as a whole (CCF). Taking into account the raw materials used and the energy demand, the weighted average emissions amounted to 1.8 kg CO2eq per plate (Mgew = 1.78 kg CO2eq; [0.22-3.36]). To simplify the data collection process, generalisation approaches for a simplified allocation of emissions in the catering sector were then evaluated and implemented in the form of an app-based calculation tool. It could be verified that the energy demand and the resulting emissions can be allocated to the number of dishes produced, irrespective of the composition of the dishes, while the output values remain sufficiently robust (deviation < 10%).
The study further showed that economies of scale occur at the investigated site with respect to the number of dishes produced and the electricity demand per dish; the two factors are strongly negatively correlated (r = -.78; p < .05). To verify these results, a data query on energy demand and production volumes in university canteens was conducted among all German student service organisations (Studierendenwerke, N = 57). From the data of 42 sites, projected total emissions of 174,275 tonnes CO2eq for the year 2018, caused by roughly 98 million dishes sold, were determined. In contrast to the single-site study, the economies of scale, i.e. decreasing electricity demand per plate with increasing production volume, could not be statistically confirmed in the Germany-wide data collection (r = -.29; p = .074).
Subsequently, possible label formats for carbon footprints were evaluated by discussing four prepared labels of different designs (absolute figures, contextualising, comparative, and evaluative) in six focus groups with a total of 17 participants aged between 20 and 31 (M = 25.12; SD = 3.31). The results showed a broad preference among participants for the display of absolute figures; for better interpretability, a label should additionally contain contextualising elements. Evaluative labels in the form of traffic-light symbols or smileys expressing different emotions were largely rejected. Based on these insights, two synthesised label proposals were developed.
The daily dose of health information: A psychological view on the health information seeking process
(2021)
The search for health information is becoming increasingly important in everyday life and is both socially and scientifically relevant. Previous studies have mainly focused on the design and communication of information; the perspective of the seeker, as well as individual differences in skills and abilities, has so far been neglected. A psychological perspective on the process of searching for health information would provide important starting points for promoting the general dissemination of relevant information and thus improving health behaviour and health status. Within the present dissertation, the process of seeking health information was therefore divided into sequential stages to identify relevant personality traits and skills. Accordingly, three studies are presented, each focusing on one stage of the process and empirically testing potentially crucial traits and skills: Study I investigates possible determinants of an intention for a comprehensive search for health information. Building an intention is considered the basic step of the search process.
Motivational dispositions and self-regulatory skills were related to each other in a structural equation model and empirically tested on the basis of theoretical considerations. The model showed an overall good fit, and specific direct and indirect effects of approach and avoidance motivation on the intention to seek comprehensively were found, which supports the theoretical assumptions. The results show that as early as the formation of intention, the psychological perspective reveals influential personality traits and skills. Study II deals with the subsequent step, the selection of information sources. The preference for basic characteristics of information sources (i.e., accessibility, expertise, and interaction) is related to health information literacy, as a collective term for relevant skills, and to intelligence as a personality trait. Furthermore, the study considers the influence of possible over- or underestimation of these characteristics. The results show not only different predictive contributions of health literacy and intelligence, but also the relevance of subjective and objective measurement.
Finally, Study III deals with the selection and evaluation of the health information previously found. The phenomenon of selective exposure is analysed, as it can be considered problematic in the health context. For this purpose, an experimental design was implemented in which a varying health threat was suggested to the participants. Relevant information was presented, and the selective choice of this information was assessed. Health literacy was tested as a moderator of the effect of the induced threat and perceived vulnerability, which trigger defence motives, on the degree of bias. The findings show the importance of considering these defence motives, which can cause a bias in the form of selective exposure. Furthermore, health literacy even seems to amplify this effect.
Results of the three studies are synthesized and discussed, general conclusions are drawn, and implications for further research are derived.
The ability to acquire knowledge helps humans to cope with the demands of the environment. Supporting knowledge acquisition processes is among the main goals of education. Empirical research in educational psychology has identified several processes through which prior knowledge affects learning. However, the majority of studies investigated cognitive mechanisms mediating between prior knowledge and learning and neglected that motivational processes might also mediate this influence. In addition, the impact of successful knowledge acquisition on patients’ health has not been comprehensively studied. This dissertation aims at closing knowledge gaps on these topics with the use of three studies. The first study is a meta-analysis that examined motivation as a mediator of individual differences in knowledge before and after learning. The second study investigated in greater detail the extent to which motivation mediated the influence of prior knowledge on knowledge gains in a sample of university students. The third study is a second-order meta-analysis synthesizing the results of previous meta-analyses on the effects of patient education on several health outcomes. The findings of this dissertation show that (a) motivation mediates individual differences in knowledge before and after learning; (b) interest and academic self-concept stabilize individual differences in knowledge more than academic self-efficacy, intrinsic motivation, and extrinsic motivation; (c) test-oriented instruction closes knowledge gaps between students; (d) students’ motivation can be independent of prior knowledge in high aptitude students; (e) knowledge acquisition affects motivational and health-related outcomes; and (f) evidence on prior knowledge and motivation can help develop effective interventions in patient education. The results of the dissertation provide insights into prerequisites, processes, and outcomes of knowledge acquisition. Future research should address covariates of learning and environmental impacts for a better understanding of knowledge acquisition processes.
Teamwork is ubiquitous in the modern workplace. However, it is still unclear whether various behavioral economic factors de- or increase team performance. Therefore, Chapters 2 to 4 of this thesis aim to shed light on three research questions that address different determinants of team performance.
Chapter 2 investigates the idea of an honest workplace environment as a positive determinant of performance. In a work group, two out of three co-workers can obtain a bonus in a dice game. Because the die roll is secret, misreporting it allows cheating without exposure. Contrary to claims on the importance of honesty at work, we do not observe a reduction in the performance of the third co-worker, who is an uninvolved bystander when cheating takes place.
Chapter 3 analyzes the effect of team size on performance in a workplace environment in which either two or three individuals perform a real-effort task. Our main result shows that the difference in team size is not harmful to task performance on average. In our discussion of potential mechanisms, we provide evidence on ongoing peer effects. It appears that peers are able to alleviate the potential free-rider problem emerging out of working in a larger team.
In Chapter 4, the role of perceived co-worker attractiveness for performance is analyzed. The results show that task performance is lower, the higher the perceived attractiveness of co-workers, but only in opposite-sex constellations.
The following Chapter 5 analyzes the effect of offering an additional payment option in a fundraising context. Chapter 6 focuses on privacy concerns of research participants.
In Chapter 5, we conduct a field experiment in which participants have the opportunity to donate for the continuation of an art exhibition either by cash or by cash and an additional cashless payment option (CPO). The treatment manipulation is completed by framing the act of giving either as a donation or as a pay-what-you-want contribution. Our results show that donors shy away from using the CPO in all treatment conditions. Despite that, there is no negative effect of the CPO on the frequency of financial support or its magnitude.
In Chapter 6, I conduct an experiment to test whether increased transparency of data processing affects data disclosure and whether the results change if it is indicated that the implementation of the GDPR happened involuntarily. I find that increased transparency raises the number of participants who do not disclose personal data by 21 percent. However, this is not the case in the involuntary-signal treatment, where the share of non-disclosures is relatively high in both conditions.
This thesis contributes to the economic literature on India and specifically focuses on investment project (IP) location choice. I study three topics that naturally arise in sequence: geographic concentration of investment projects, the determinants of the location choices, and the impact these choices have on project success.
In Chapter 2, I provide the analysis of geographic concentration of IPs. I find that investments were concentrated over the period of observation (1996–2015), although the degree of concentration was decreasing. Additionally, I analyze different subsamples of the data set by ownership (Indian private, Indian public and foreign) and project status (completed or dropped). Foreign projects in all industries are more concentrated than private and public, while for the latter categories I identify only minor differences in concentration levels. Additionally, I find that the location patterns of completed and dropped investments are similar to that of the overall distribution and the distributions of their respective industries with completed IPs being somewhat more concentrated.
In Chapter 3, I study the determinants of project location choices with the focus on an important highway upgrade, the Golden Quadrilateral (GQ). In line with the existing literature, the GQ construction is connected to higher levels of investment in the affected non-nodal GQ districts in 2002–2016. I also provide suggestive evidence on changes in firm behavior after the GQ construction: Firms located in the non-nodal GQ districts became less likely to invest in their neighbor districts after the GQ completion compared to firms located in districts unaffected by the GQ construction.
Finally, in Chapter 4, I investigate the characteristics of IPs that may contribute to discontinuation of their implementation by comparing completed investments to dropped ones, defined as abandoned, shelved, and stalled investments as identified on the date of the data download. Controlling for local and business cycle conditions, as well as various investor and project characteristics, I show that projects located in close proximity to the investor offices (i.e., in the same district) are more likely to achieve the completion stage than more remote projects.
While women's evolving contribution to entrepreneurship is irrefutable, in almost all nations, gender disparity is an existing reality of entrepreneurship. Social and economic outcomes make women entrepreneurship an important area for scholars and governments. In attempts to find reasons for this gender disparity, academic scholars evaluated various factors and recognised perceptual variables as having outstanding explanatory value in understanding women's entrepreneurship. To advance our knowledge of gender disparity in entrepreneurship, the present study explores the influence of entrepreneurial perceptual variables on women's entrepreneurship and considers the critical role of country-level institutional contexts on the women's entrepreneurial propensity. Therefore, this study examines the impact of perceptual variables in different nations. It also offers connections between entrepreneurial perceptions, women entrepreneurship, and institutional contexts as a critical topic for future studies.
Drawing on the importance of perceptual factors, this dissertation investigates whether and how their perception of entrepreneurial networks influences the individuals' decision to initiate a new venture. Prior scholars considered exposure to entrepreneurial role models as one of the most influential factors on the women's inclination towards entrepreneurship; thus, a systemized analysis makes it possible to identify existing research gaps related to this perception. Hence, to draw a clear picture of the relationship between entrepreneurial role models and entrepreneurship, this dissertation provides a systemized overview of prior studies. Subsequently, Chapter 2 structures the existing literature on entrepreneurial role models and reveals that past literature has focused on the different types of role models, the stage of life at which the exposure to role models occurs, and the context of the exposure. Current discourse argues that the women's lower access to entrepreneurial role models negatively influences their inclination towards entrepreneurship.
Additionally, although the research on women entrepreneurship has proliferated in recent years, little is known about how entrepreneurial perceptual variables form women's propensity towards entrepreneurship in various institutional contexts. The work of Koellinger et al. (2013), hereafter KMS, is one of the most influential papers that investigated the influence of perceptual variables, and it showed that a lower rate of women entrepreneurship is associated with a lower level of their entrepreneurial network, perceived entrepreneurial capability, and opportunity evaluation and with a higher fear of entrepreneurial failure. Thus, this dissertation replicates the work of KMS. Chapter 3 explicitly investigates the influence of the above perceptions on women's entrepreneurial propensity. This research has drawn data from the Global Entrepreneurship Monitor, a cross-national individual-level data set (2001-2006) covering 236,556 individuals across 17 countries. The results of this chapter suggest that gender disparities in entrepreneurial propensity are conditioned by differences in entrepreneurial perceptual variables. Women's lower levels of perceived entrepreneurial capability, entrepreneurial role models and opportunity evaluation and their higher fear of failure lead to lower entrepreneurial propensity.
To extend and generalise the relationship between perceptions and women's entrepreneurial propensity, two studies are conducted in Chapter 4 based on the replicated research. Extension 1 generalises the results of KMS by using the same analysis on more recent data. Accordingly, this research implemented the same analysis on 372,069 individuals across the same countries (2011-2016). The recent data show that although gender disparity became significantly weaker, the gender gap is still in men's favour. However, similarly to the replicated study, this research revealed that perceptual factors explain a larger part of the gender disparity. To strengthen prior empirical evidence, extension 2 conducted the same measures and analysis in a more global setting, utilising a sample of 1,029,863 individuals from 71 countries (2011-2016). With developing countries included, gender disparity in entrepreneurial propensity decreased significantly. The study revealed that the relative significance of the perceptions' influence differs across nations; however, perceptions have a worldwide effect. Moreover, this research found that the ratio of nascent women entrepreneurs in less developed countries to those in more developed nations is two. More precisely, a higher level of economic development negatively influences the impact of perceptions on women's entrepreneurial propensity.
Whereas prior scholars increasingly underlined the importance of perceptions in explaining a large part of gender disparities in entrepreneurship, most of the prior investigations focused on nascent (early-stage) entrepreneurship, and evidence on the relationship between perceptions and other types of self-employment, such as innovative entrepreneurship, is scant. Innovation is a confirmed key driver of a firm's sustainability, higher competitive capability, and growth. Therefore, Chapter 5 investigates the influence of perceptions on women's innovative entrepreneurship. The chapter points out that entrepreneurial perceptions are the main determinants of the women's decision to offer a new product or service. This chapter also finds that women's innovative entrepreneurship is associated with the country's specific economic setting.
Overall, by underlining the critical role of institutional contexts, this dissertation provides considerable insights into the interaction between perceptions and women entrepreneurship, and its results have implications for policymakers and practitioners, who may find it helpful to consider women entrepreneurship in systemized challenges. Formal and informal barriers affect women's entrepreneurial perceptions and can differ from one country to the other. In this sense, it is crucial to design operational plans to mitigate formal and stereotypical challenges, and thus, more women will be able to start a business, particularly in developing countries in which women significantly comprise a smaller portion of the labour markets. This type of policy could write the "rules of the game" such that these rules enhance the women's propensity towards entrepreneurship.
In many industries, and especially in large companies, supporting business processes with workflow management systems is everyday practice. The focus lies on controlling control-flow-oriented procedures, while processes centred on data, information, and knowledge are usually left out. Such knowledge-intensive processes (KiPs) are the subject of many current studies, which form an active field of research.
At the core of such KiPs is the knowledge contributed by the people involved, which substantially influences process execution and thereby enables the handling of complex and usually highly volatile processes. These are mostly decision-intensive processes, processes for knowledge acquisition, or processes that can lead to a multitude of different process flows.
This thesis develops and presents an approach dedicated to the modelling, visualisation, and execution of knowledge-intensive processes using semantic technologies. Flexibility, adaptivity, and goal orientation are defined as the central requirements for executing KiPs. Building on this, three central principles of process modelling are identified, which are taken up in the first research question: "Can the three principles be combined in a unified data-centric, declarative, semantic approach (referred to as ODD-BP), and can the central requirements of KiPs thereby be fulfilled?"
The foundation of ODD-BP is a metamodel that acts as a language construct and allows the intended process models to be defined. On top of this, a procedure based on inference rules is developed that enables process states to be inferred and thus renders a classical workflow engine unnecessary. In addition, a methodology is introduced that provides a tailored, adaptive process visualisation for every person involved in a process, so that, in addition to the degree of freedom offered by flexibility, well-founded process support can be provided during the execution of KiPs. All this takes place within a unified knowledge base that, on the one hand, forms the basis for complete semantic process modelling and, on the other hand, opens up the possibility of integrating expert knowledge. This expert knowledge can make an explicit contribution to the execution of knowledge-intensive processes and thus enable human-machine collaboration through symbolic AI technologies. The second research question addresses this aspect: "Can ontological knowledge be integrated into the ODD-BP approach in such a way that it contributes to process execution?"
The metamodel as well as the developed methods and procedures are realised in a prototypical, generic system that is in principle suitable for all application domains involving KiPs. To validate the ODD-BP approach, the use case of emergency call handling in dispatch centres is addressed. The evaluation shows how this knowledge-intensive procedure benefits from flexible, adaptive, and goal-oriented process execution. In addition, medical expert knowledge is integrated into the process flow, and it is demonstrated how this contributes to improved process outcomes.
Knowledge-intensive processes currently pose major challenges to companies and organisations across all industries and use cases, and research is devoted to finding practicable solutions. With ODD-BP, this thesis presents a promising approach that uses the possibilities of semantic technologies to enable closely interlinked collaboration between humans and machines in the execution of KiPs. The emergency call handling in dispatch centres chosen for the evaluation is moreover a highly relevant use case, since in an acute emergency decisions must be made within a very short time in order to avert far-reaching harm and save lives. By taking comprehensive amounts of data into account and exploiting available expert knowledge, a rapid assessment of the situation can be achieved with machine support, helping humans to make the right decisions.
Die vorliegende Studie untersucht die Besonderheiten diskursiver Strategien, sowie struktureller und sprachlicher Merkmale der japanischsprachigen Textsorte Literatur-Rezension. Auf der Grundlage der herausgearbeiteten textlinguistischen Merkmale werden Didaktisierungsbeispiele für den Fachtext-Leseunterricht entworfen. Ziel ist somit der Entwurf einer textlinguistischen Fundierung der (Fach-)textlektüre am Beispiel der Textsorte Rezension.
The material basis of the study comprises 45 reviews of new literary publications from the weekly review journal Tosho shinbun (The Book Newspaper) from the year 1999.
The criteria for the analysis are derived from a model of text-type knowledge by Fix 2006 (Part I). The analysis therefore focuses on aspects that can serve to build knowledge (schemata) about the text type and its conventions, as well as their structural and linguistic realisations. Overarching phenomena such as tense usage and the use of specialised language are also examined. A quantitative evaluation of the results allows conclusions about the didactic relevance of an observed phenomenon (Part II).
Criteria for text selection and for the progression of the didactic treatment of selected reviews are provided by an approach by Sandig 2000, who proposes a classification of texts based on prototype theory. According to this approach, certain communication patterns condense into mental text schemata whose form varies with the prototypicality of the text features.
Finally, the constructivist model of the reading process by Wolff 1990 provides the guidelines according to which the results of the text analysis are prepared for reading didactics on the basis of selected examples (Part III).
The appendix (Part IV) offers a compilation of expressions used in the reviews examined that relate to the description, interpretation, analysis, and evaluation of literary works.
The present work explores how theories of motivation can be used to enhance video game research. Currently, Flow Theory and Self-Determination Theory are the most common approaches in the field of Human-Computer Interaction. The dissertation provides an in-depth look into Motive Disposition Theory and how to utilize it to explain interindividual differences in motivation. Different players have different preferences and make different choices when playing games, and not every player experiences the same outcomes when playing the same game. I provide a short overview of the current state of research on motivation to play video games. Next, Motive Disposition Theory is applied in the context of digital games in four research papers, featuring seven studies with a total of 1197 participants. The constructs of explicit and implicit motives are explained in detail, focusing on the two social motives (i.e., affiliation and power). As dependent variables, behaviour, preferences, choices, and experiences are used in different game environments (i.e., Minecraft, League of Legends, and Pokémon). The four papers are followed by a general discussion of the seven studies and of Motive Disposition Theory. Finally, a short overview is provided of other theories of motivation and how they could be used to further our understanding of the motivation to play digital games in the future. This thesis proposes that 1) Motive Disposition Theory represents a valuable approach to understanding individual motivations within the context of digital games; 2) there is a variety of motivational theories that can and should be utilized by researchers in the field of Human-Computer Interaction to broaden the currently one-sided perspective on human motivation; 3) researchers should aim to align their choice of motivational theory with their research goals by choosing the theory that best describes the phenomenon in question and by carefully adjusting each study design to the theoretical assumptions of that theory.
Heterogeneity is part of everyday (school) life and is reflected in the learning groups that teachers encounter at school. Numerous publicly accessible documents, such as school laws, school regulations, and standards for teacher education, stipulate that the differing needs of students are to be met through differentiation. This dissertation investigates how within-class differentiation is implemented at the micro level in school practice, examining both how frequently differentiating measures are used and the contextual variables of within-class differentiation. Based on a sample of N = 295 teachers from various school types who teach German and/or English, it was shown, among other things, that within-class differentiation is generally not used (very) frequently, that some measures are used more often than others, and that differentiation is used less frequently at Gymnasien than at other school types. Contextual factors associated with the frequency of use include collegial cooperation in lesson planning and delivery, the perceived quality of teacher education with regard to dealing with heterogeneity, and the willingness to implement within-class differentiation; attitudes towards differentiation and (teacher) self-efficacy expectations are also related to the use of these measures. A post-hoc analysis further showed associations between the frequency of use, the teachers' personality, and the school type at which they teach. The results stem from school practice and therefore yield practical implications, such as suggestions for improving the quality of teacher education, which are spelled out in this thesis alongside directions for future research.
In her poems, Tawada constructs liminal speaking subjects – voices from the in-between – which disrupt entrenched binary thought processes. Synthesising relevant concepts from theories of such diverse fields as lyricology, performance studies, border studies, cultural and postcolonial studies, I develop ‘voice’ and ‘in-between space’ as the frameworks to approach Tawada’s multifaceted poetic output, from which I have chosen 29 poems and two verse novels for analysis. Based on the body speaking/writing, sensuality is central to Tawada’s use of voice, whereas the in-between space of cultures and languages serves as the basis for the liminal ‘exophonic’ voices in her work. In the context of cultural alterity, Tawada focuses on the function of language, both its effect on the body and its role in subject construction, while her feminist poetry follows the general development of feminist academia from emancipation to embodiment to queer representation. Her response to and transformation of écriture féminine in her verse novels transcends the concept of the body as the basis of identity, moving to literary and linguistic, plural self-construction instead. While few poems are overtly political, the speaker’s personal and contextual involvement in issues of social conflict reveals the poems’ potential to speak of, and to, the multiply identified citizens of a globalised world, who constantly negotiate physical as well as psychological borders.
Gewährleistung 4.0
(2021)
Is the existing legal system prepared for technological progress?
Gunnar Schilling examines this question with a view to the automated handling of warranty rights. He analyses the current legal framework and ultimately proposes a separate regulatory regime for automated sales contracts.