This meta-scientific dissertation comprises three research articles that investigated the reproducibility of psychological research. Specifically, they focused on the reproducibility of eye-tracking research on the one hand, and on preregistration (i.e., the practice of publishing a study protocol before data collection or analysis) as one method to increase reproducibility on the other.
In Article I, it was demonstrated that eye-tracking data quality is influenced by both the eye-tracker used and the specific task being measured. That is, distinct strengths and weaknesses were identified in three devices (Tobii Pro X3-120, GP3 HD, EyeLink 1000+) in an extensive test battery. Consequently, both the device and the specific task should be considered when designing new studies. Meanwhile, Article II focused on the current perception of preregistration in the psychological research community and future directions for improving this practice. The survey showed that many researchers intended to preregister their research in the future and had overall positive attitudes toward preregistration. However, various obstacles that currently hinder preregistration were identified, which should be addressed to increase its adoption. These findings were supplemented by Article III, which took a closer look at one preregistration-specific tool: the PRP-QUANT Template. In a simulation trial and a survey, the template demonstrated high usability and emerged as a valuable resource to support researchers in using preregistration. Future revisions of the template could help to further facilitate this open science practice.
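Eye-tracking data quality is typically quantified by measures such as spatial accuracy and precision. As a minimal illustration (the gaze samples below are invented, and Article I's test battery covers far more measures than these two), accuracy and RMS precision can be computed like this:

```python
import math

def accuracy_deg(samples, target):
    """Mean Euclidean offset (in degrees) between gaze samples and a fixation target."""
    return sum(math.dist(s, target) for s in samples) / len(samples)

def rms_precision_deg(samples):
    """RMS of successive sample-to-sample distances, a common precision measure."""
    d2 = [math.dist(a, b) ** 2 for a, b in zip(samples, samples[1:])]
    return math.sqrt(sum(d2) / len(d2))

# Hypothetical gaze samples (degrees of visual angle) around a target at (0, 0)
gaze = [(0.10, 0.02), (0.12, -0.01), (0.09, 0.03), (0.11, 0.00)]
print(accuracy_deg(gaze, (0.0, 0.0)))  # mean offset from target
print(rms_precision_deg(gaze))         # sample-to-sample noise
```

A device can score well on one measure and poorly on the other, which is one reason a single summary number cannot capture a tracker's suitability for a given task.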
In this dissertation, the findings of the three articles are summarized and discussed regarding their implications and potential future steps that could be implemented to improve the reproducibility of psychological research.
Aims: Fear of physical activity (PA) is discussed as a barrier to regular exercise in patients with heart failure (HF), but HF-specific theoretical concepts are lacking. This study examined associations of fear of PA, heart-focused anxiety and trait anxiety with clinical characteristics and self-reported PA in outpatients with chronic HF. It was also investigated whether personality-related coping styles for dealing with health threats impact fear of PA via symptom perception.
Methods and results: This cross-sectional study enrolled 185 HF outpatients from five hospitals (mean age 62 ± 11 years, mean ejection fraction 36.0 ± 12%, 24% women). Avoidance of PA, sports/exercise participation (yes/no) and the psychological characteristics were assessed by self-report. Fear of PA was assessed by the Fear of Activity in Situations–Heart Failure (FActS-HF15) questionnaire. In multivariable regression analyses, higher NYHA class (b = 0.26, p = 0.036) and a higher number of HF drugs including antidepressants (b = 0.25, p = 0.017) were independently associated with higher fear of PA, but not with heart-focused anxiety or trait anxiety. Of the three anxiety scores, only increased fear of PA was independently associated with more avoidance behavior regarding PA (b = 0.45, SE = 0.06, p < 0.001) and with increased odds of no sports/exercise participation (OR = 1.34, 95% CI 1.03–1.74, p = 0.028). Attention towards cardiac symptoms and symptom distress were positively associated with fear of PA (p < 0.001), which explained higher fear of PA in patients with a vigilant (directing attention towards health threats) coping style (p = 0.004).
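For readers less used to odds ratios: the reported OR = 1.34 means the odds of non-participation in sports/exercise multiply by 1.34 for each additional point of fear of PA. A small sketch of that arithmetic (the score differences chosen here are hypothetical):

```python
import math

OR = 1.34              # per 1-point increase in FActS-HF15, as reported above
b = math.log(OR)       # the corresponding logistic regression coefficient

def odds_multiplier(score_diff):
    """Factor by which the odds of non-participation change for a given score difference."""
    return math.exp(b * score_diff)

print(odds_multiplier(1))  # 1.34, by construction
print(odds_multiplier(5))  # 1.34**5: a 5-point higher score more than quadruples the odds
```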
Conclusions: Fear of PA assessed by the FActS-HF15 is a specific type of anxiety in patients with HF. Attention towards and being distressed by HF symptoms appear to play a central role in fear of PA, particularly in vigilant patients who habitually direct their attention towards health threats. These findings provide approaches for tailored interventions to reduce fear of PA and to increase PA in patients with HF.
In Luxembourg, external school mediators step in when conflicts arise in schools. The Service de médiation scolaire provides support in cases of risk of school drop-out and of conflicts related to the inclusion and integration of pupils with special educational needs or from immigrant backgrounds. Michèle Schilt spoke with the head of the service, Lis De Pina, about the work of school mediators.
Investment theory and related theoretical approaches suggest a dynamic interplay between crystallized intelligence (Gc), fluid intelligence (Gf), and investment traits like need for cognition (NFC). Although cross-sectional studies have found positive correlations between these constructs, longitudinal research testing all of their relations over time is scarce. In our pre-registered longitudinal study, we examined whether initial levels of crystallized intelligence, fluid intelligence, and need for cognition predicted changes in each other. We analyzed data from 341 German students in grades 7–9 who were assessed twice, one year apart. Using multi-process latent change score models, we found that changes in fluid intelligence were positively predicted by prior need for cognition, and changes in need for cognition were positively predicted by prior fluid intelligence. Changes in crystallized intelligence were not significantly predicted by prior Gf, prior NFC, or their interaction, contrary to theoretical assumptions. This pattern of results was largely replicated in a model including all constructs simultaneously. Our findings support the notion that intelligence and investment traits, particularly need for cognition, positively interact during cognitive development, although this interplay was unexpectedly limited to Gf.
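The core idea of predicting change from a baseline construct can be illustrated with a deliberately simplified sketch: compute observed change scores (T2 − T1) and regress them on the other construct at T1. This is not the multi-process latent change score model used in the study (which separates latent change from measurement error); all data below are invented:

```python
def simple_regression(x, y):
    """Closed-form intercept/slope of y = a + b*x (ordinary least squares)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical 2-wave data: need for cognition at T1, fluid intelligence at T1 and T2
nfc_t1 = [3.1, 2.4, 4.0, 3.6, 2.9, 3.8]
gf_t1  = [48, 45, 52, 50, 46, 51]
gf_t2  = [50, 45, 56, 53, 47, 55]

change = [t2 - t1 for t1, t2 in zip(gf_t1, gf_t2)]
a, b = simple_regression(nfc_t1, change)
print(b)  # positive slope: higher baseline NFC, larger Gf gain (in this toy data)
```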
This thesis contains four parts that are all connected by their contributions to the Efficient Market Hypothesis and decision-making literature. Chapter two investigates how national stock market indices reacted to the news of national lockdown restrictions in the period from January to May 2020. The results show that lockdown restrictions led to different reactions in a sample of OECD and BRICS countries: there was a general negative effect resulting from the increase in lockdown restrictions, but the study finds strong evidence for underreaction during the lockdown announcement, followed by some overreaction that is corrected subsequently. This under-/overreaction pattern, however, is observed mostly during the first half of our time series, pointing to learning effects. Relaxation of the lockdown restrictions, on the other hand, had a positive effect on markets only during the second half of our sample, while for the first half of the sample, the effect was negative. The third chapter investigates gender differences in stock selection preferences on the Taiwan Stock Exchange. By utilizing trading data from the Taiwan Stock Exchange over a span of six years, it becomes possible to analyze trading behavior while minimizing the self-selection bias that is typically present in brokerage data. To study gender differences, this study uses firm-level data: the percentage of male traders in a company is the dependent variable, while the company’s industry and fundamental/technical characteristics serve as independent variables. The results show that the percentage of women trading a company rises with the company’s age, market capitalization, systematic risk, and return. Men trade more frequently and show a preference for dividend-paying stocks and for industries with which they are more familiar. The fourth chapter investigates the relationship between regret and malicious and benign envy. The relationship is analyzed in two different studies.
In experiment 1, subjects filled out psychological scales measuring regret, the two types of envy, core self-evaluation and the Big Five personality traits. In experiment 2, felt regret was measured in a hypothetical scenario and regressed on the other variables mentioned above. The two experiments revealed a positive direct relationship between regret and benign envy. The relationship between regret and malicious envy, on the other hand, is mostly an artifact of core self-evaluation and personality influencing both malicious envy and regret. The former relationship can be explained by the common action tendency of self-improvement shared by regret and benign envy. Chapter five discusses the differences in green finance regulation and implementation between the EU and China. China introduced the Green Silk Road, while the EU adopted the Green Deal and started working with its own green taxonomy. The first difference concerns the definition of green finance, particularly with regard to coal-fired power plants and the responsibility of nation-states for emissions abroad. China is promoting fossil fuel projects abroad through its Belt and Road Initiative, whereas the EU’s Green Deal does not permit such actions. Furthermore, there are policies in both the EU and China that create contradictory incentives for economic actors: on the one hand, the EU and China are improving the framework conditions for green financing while, on the other hand, still allowing the promotion of conventional fuels. The role of central banks also differs between the EU and China. China’s central bank is actively working towards aligning the financial sector with green finance. A possible new role for the European Central Bank, or the priority financing of green sectors through political decision-making, is still being debated.
Data fusions are becoming increasingly relevant in official statistics. The aim of a data fusion is to combine two or more data sources using statistical methods in order to be able to analyse different characteristics that were not jointly observed in one data source. Record linkage of official data sources using unique identifiers is often not possible due to methodological and legal restrictions. Appropriate data fusion methods are therefore of central importance in order to use the diverse data sources of official statistics more effectively and to be able to jointly analyse different characteristics. However, the literature lacks comprehensive evaluations of which fusion approaches provide promising results for which data constellations. Therefore, the central aim of this thesis is to evaluate a concrete set of possible fusion algorithms, which includes classical imputation approaches as well as statistical and machine learning methods, in selected data constellations.
To specify and identify these data contexts, data and imputation-related scenario types of a data fusion are introduced: Explicit scenarios, implicit scenarios and imputation scenarios. From these three scenario types, fusion scenarios that are particularly relevant for official statistics are selected as the basis for the simulations and evaluations. The explicit scenarios are the fulfilment or violation of the Conditional Independence Assumption (CIA) and varying sample sizes of the data to be matched. Both aspects are likely to have a direct, that is, explicit, effect on the performance of different fusion methods. The summed sample size of the data sources to be fused and the scale level of the variable to be imputed are considered as implicit scenarios. Both aspects suggest or exclude the applicability of certain fusion methods due to the nature of the data. The univariate or simultaneous, multivariate imputation solution and the imputation of artificially generated or previously observed values in the case of metric characteristics serve as imputation scenarios.
With regard to the concrete set of possible fusion algorithms, three classical imputation approaches are considered: Distance Hot Deck (DHD), the Regression Model (RM) and Predictive Mean Matching (PMM). Decision Trees (DT) and Random Forest (RF), two prominent tree-based methods from the field of statistical learning, are discussed in the context of data fusion. However, such prediction methods aim to predict individual values as accurately as possible, which can clash with the primary objective of data fusion, namely the reproduction of joint distributions. In addition, DT and RF only comprise univariate imputation solutions and, in the case of metric variables, artificially generated values are imputed instead of real observed values. Therefore, Predictive Value Matching (PVM) is introduced as a new, statistical learning-based nearest neighbour method, which could overcome the distributional disadvantages of DT and RF, offers a univariate and multivariate imputation solution and, in addition, imputes real and previously observed values for metric characteristics. All prediction methods can form the basis of the new PVM approach. In this thesis, PVM based on Decision Trees (PVM-DT) and Random Forest (PVM-RF) is considered.
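The idea behind PVM can be sketched in a few lines: train a prediction model on the donor data, predict the target variable for donors and recipients alike, and then match each recipient to the donor with the closest predicted value, imputing that donor's observed value. The sketch below uses a trivial group-mean predictor in place of the thesis' decision trees and random forests, on invented data:

```python
def fit_mean_predictor(rows, x_key, y_key):
    """Toy stand-in for a learned model: predict y as the mean of y per x-group.
    In PVM proper, a decision tree or random forest plays this role."""
    groups = {}
    for r in rows:
        groups.setdefault(r[x_key], []).append(r[y_key])
    return {k: sum(v) / len(v) for k, v in groups.items()}

def pvm_impute(donors, recipients, x_key, y_key):
    """Predictive Value Matching: match on *predicted* y, but impute the donor's
    *observed* y, so only previously observed values enter the fused data."""
    model = fit_mean_predictor(donors, x_key, y_key)
    fused = []
    for r in recipients:
        pred_r = model[r[x_key]]
        # nearest donor in terms of predicted value
        donor = min(donors, key=lambda d: abs(model[d[x_key]] - pred_r))
        fused.append({**r, y_key: donor[y_key]})
    return fused

# Donor study observes (education, income); recipient study observes only education.
donors = [{"edu": 1, "income": 1800}, {"edu": 1, "income": 2200},
          {"edu": 2, "income": 3000}, {"edu": 2, "income": 3400}]
recipients = [{"edu": 1}, {"edu": 2}]
print(pvm_impute(donors, recipients, "edu", "income"))
```

Because the imputed values are drawn from observed donor records, the marginal distribution of the imputed variable stays realistic, which is exactly the distributional advantage over imputing model predictions directly.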
The underlying fusion methods are investigated in comprehensive simulations and evaluations. The evaluation of the various data fusion techniques focusses on the selected fusion scenarios. The basis for this is formed by two concrete and current use cases of data fusion in official statistics, the fusion of EU-SILC and the Household Budget Survey on the one hand and of the Tax Statistics and the Microcensus on the other. Both use cases show significant differences with regard to different fusion scenarios and thus serve the purpose of covering a variety of data constellations. Simulation designs are developed from both use cases, whereby the explicit scenarios in particular are incorporated into the simulations.
The results show that PVM-RF in particular is a promising and universal fusion approach when the CIA is fulfilled. This is because PVM-RF provides satisfactory results for both categorical and metric variables to be imputed and also offers a univariate and multivariate imputation solution, regardless of the scale level. PMM also represents an adequate fusion method, but only in relation to metric characteristics. The results also imply that the application of statistical learning methods is both an opportunity and a risk. In the case of CIA violation, potential correlation-related exaggeration effects of DT and RF, and in some cases also of RM, can be useful. In contrast, the other methods induce poor results if the CIA is violated. However, if the CIA is fulfilled, there is a risk that the prediction methods RM, DT and RF will overestimate correlations. The size ratios of the studies to be fused, in turn, have a rather minor influence on the performance of fusion methods. This is an important indication that the larger dataset does not necessarily have to serve as the donor study, as has previously been common practice.
The results of the simulations and evaluations provide concrete implications as to which data fusion methods should be used and considered under the selected data and imputation constellations. Science in general and official statistics in particular benefit from these implications. This is because they provide important indications for future data fusion projects in order to assess which specific data fusion method could provide adequate results along the data constellations analysed in this thesis. Furthermore, with PVM this thesis offers a promising methodological innovation for future data fusions and for imputation problems in general.
The French Enlightenment is a pivotal period in European intellectual and literary history, which can be studied through this dataset of French novels first published between 1751 and 1800. This collection contains 200 French novels in TEI/XML, encoded according to the ‘level-1 schema’ of the European Literary Text Collection (ELTeC), and carefully compiled to reflect the known historical publication of French novels in that period regarding publication year, gender of author and narrative form. The dataset is connected to a larger knowledge graph of 331,671 Resource Description Framework (RDF) triples built within the project ‘Mining and Modeling Text’ at Trier University, Germany (2019–2023).
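The TEI/XML encoding makes the collection straightforward to process programmatically. As a rough sketch (the header snippet below is invented, not taken from the dataset; only the TEI namespace itself is standard), basic metadata can be extracted with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Minimal, made-up TEI header in the spirit of an ELTeC level-1 file
tei = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title>Exemple de roman</title>
        <author>Anonyme</author>
      </titleStmt>
    </fileDesc>
  </teiHeader>
  <text><body><p>...</p></body></text>
</TEI>"""

ns = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(tei)
title = root.findtext("tei:teiHeader/tei:fileDesc/tei:titleStmt/tei:title", namespaces=ns)
author = root.findtext("tei:teiHeader/tei:fileDesc/tei:titleStmt/tei:author", namespaces=ns)
print(title, "-", author)
```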
This article takes up the public debate on the legal and policy response to hatred, incitement and antisemitism, which has gained in intensity and urgency particularly since the Hamas terror attack of 7 October 2023. It examines criminal and civil law on the one hand, and places a particular focus on public-law constellations on the other. In each of these areas, weaknesses and potentials of the law and of the courts are identified, while the limits of state power are also made clear. Ultimately, this is a societal problem which, despite the clear need for state action, must be countered first and foremost through information, and only secondarily through the law.
In machine learning, classification is the task of predicting a label for each point within a data set. Often, labels are known only for a subset of the points; this information is used to recognize patterns and make predictions about the points in the remainder of the set, referred to as the unlabeled set. This scenario falls in the field of supervised learning.
However, the number of labeled points may be limited, for example because labels are expensive to obtain. Moreover, the labeled subset may be biased, as in the case of self-selection in a survey. Consequently, the classification performance for unlabeled points may suffer. To improve the reliability of the results, semi-supervised learning exploits both the labeled and the unlabeled data. Moreover, in many cases, additional information about the size of each class is available from undisclosed sources.
This cumulative thesis presents different studies that incorporate this external cardinality information into three important algorithms for binary classification from the supervised context: support vector machines (SVM), classification trees, and random forests. From a mathematical point of view, we focus on mixed-integer programming (MIP) models for semi-supervised approaches that add a cardinality constraint for each class to each algorithm.
Furthermore, since the proposed MIP models are computationally challenging, we also present techniques that simplify the process of solving these problems. In the SVM setting, we introduce a re-clustering method and further computational techniques to reduce the computational cost. In the context of classification trees, we provide correct values for certain bounds that play a crucial role in solver performance. For the random forest model, we develop preprocessing techniques and an intuitive branching rule to reduce the solution time. For all three methods, our numerical results show that our approaches achieve better statistical performance for biased samples than the standard approach.
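The intuition behind the cardinality constraint can be sketched without the full MIP machinery: if an external source reveals how many unlabeled points belong to each class, any label assignment must respect those counts. The following greedy sketch (a relaxation for illustration, not the thesis' exact MIP formulation; the scores and counts are invented) assigns the positive class to the highest-scoring points only:

```python
def assign_with_cardinality(scores, n_positive):
    """Greedy stand-in for the cardinality constraint: the n_positive unlabeled
    points with the highest classifier scores get the positive class (+1), the
    rest get -1. The thesis instead solves this jointly with model fitting as
    an exact MIP."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    labels = [-1] * len(scores)
    for i in order[:n_positive]:
        labels[i] = 1
    return labels

# Hypothetical decision-function scores for 6 unlabeled points;
# the class-size information says exactly 2 of them are positive.
scores = [0.8, -0.3, 0.1, 0.9, -0.7, 0.2]
print(assign_with_cardinality(scores, 2))  # [1, -1, -1, 1, -1, -1]
```

Note how the constraint overrides the naive sign rule: the point with score 0.1 would be labeled positive by thresholding at zero, but the known class sizes force it into the negative class.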
What does it mean when the future of one’s life is exposed to the inscrutable will of an intangible other? And what are the possibilities of still asserting oneself when pushed to the limit? Nuancing the feelings of different actors in a detention centre and analysing how everyday moods, affects and violence intertwine, I explore how the randomly cruel and often-inexplicable logic of the contemporary deportation regime pushes migrants to their limits. Taking as my starting point the argument that deportation practices are effective because they operate on an affective level, I show how affective experiences manifest themselves bodily and how violent practices and discourses reverberate in bodies. I argue that ‘bodies under pressure’ are testimonies of racialised histories of exclusion, and I show how they become calls for social recognition. Exploring small, often-unintended acts of rebellion against exhausting deportation practices, I stress the existential necessity and social importance of including oneself in the realm of meaning.
Semantic-Aware Coordinated Multiple Views for the Interactive Analysis of Neural Activity Data
(2024)
Visualizing brain simulation data is in many respects a challenging task. First, the data used in brain simulations and the resulting datasets are heterogeneous, and insight is derived by relating all their different kinds. Second, the analysis process changes rapidly while hypotheses about the results are being formed. Third, the scale of data entities in these heterogeneous datasets is manifold, reaching from single neurons to brain areas interconnecting millions. Fourth, the heterogeneous data consists of a variety of modalities, e.g. from time-series data to connectivity data; from single parameters to sets of parameters spanning parameter spaces with multiple possible and biologically meaningful solutions; from geometrical data to hierarchies and textual descriptions, mostly on different scales. Fifth, visualizing includes finding suitable representations and providing real-time interaction while supporting varying analysis workflows. To this end, this thesis presents a scalable and flexible software architecture for visualizing, integrating and interacting with brain simulation data. Scalability and flexibility are achieved by interconnected services forming a series of Coordinated Multiple View (CMV) systems. Multiple use cases are presented, introducing views that leverage this architecture and extend its ecosystem, resulting in a Problem Solving Environment (PSE) from which custom-tailored CMV systems can be built. The construction of such CMV systems is assisted by semantic reasoning, hence the term semantic-aware CMVs.
Today, almost every modern computing device is equipped with multicore processors capable of efficient concurrent and parallel execution of threads. This processor feature can be leveraged by concurrent programming, which is a challenge for software developers for two reasons: first, it introduces a paradigm shift that requires a new way of thinking. Second, it can lead to issues that are unique to concurrent programs due to the non-deterministic, interleaved execution of threads. Consequently, debugging concurrency and related performance issues is a rather difficult and often tedious task. Developers still lack thread-aware programming tools that facilitate the understanding of concurrent programs. Ideally, these tools should be part of their daily working environment, which typically includes an Integrated Development Environment (IDE). In particular, the way source code is visually presented in traditional source-code editors does not convey much information on whether the source code is executed concurrently or in parallel in the first place.
With this dissertation, we pursue the main goal of facilitating and supporting the understanding and debugging of concurrent programs. To this end, we formulate and utilize a visualization paradigm that particularly includes the display of interactive glyph-based visualizations embedded in the source-code editor close to their corresponding artifacts (in-situ).
To facilitate the implementation of visualizations that comply with our paradigm as plugins for IDEs, we designed, implemented and evaluated a programming framework called CodeSparks. After presenting the design goals and the architecture of the framework, we demonstrate its versatility with a total of fourteen plugins realized by different developers using the CodeSparks framework (CodeSparks plugins). With focus group interviews, we empirically investigated how developers of the CodeSparks plugins experienced working with the framework. Based on the plugins, deliberate design decisions and the interview results, we discuss to what extent we achieved our design goals. We found that the framework is largely target programming-language independent and that it supports the development of plugins for a wide range of source-code-related tasks while hiding most of the details of the underlying plugin development API.
In addition, we applied our visualization paradigm to thread-related runtime data from concurrent programs to foster the awareness of source code being executed concurrently or in parallel. As a result, we developed and designed two in-situ thread visualizations, namely ThreadRadar and ThreadFork, with the latter building on the former. Both thread visualizations are based on a debugging approach, which combines statistical profiling, thread-aware runtime metrics, clustering of threads on the basis of these metrics, and finally interactive glyph-based in-situ visualizations. To address scalability issues of the ThreadRadar in terms of space required and the number of displayable thread clusters, we designed a revised thread visualization. This revision also involved the question of how many thread clusters k should be computed in the first place. To this end, we conducted experiments with the clustering of threads for artifacts from a corpus of concurrent Java programs that include real-world Java applications and concurrency bugs. We found that the maximum k on the one hand and the optimal k according to four cluster validation indices on the other hand rarely exceed three. However, occasionally thread clusterings with k > 3 are available and also optimal. Consequently, we revised both the clustering strategy and the visualization as parts of our debugging approach, which resulted in the ThreadFork visualization. Both in-situ thread visualizations, including their additional features that support the exploration of the thread data, are implemented in a tool called CodeSparks-JPT, i.e., as a CodeSparks plugin for IntelliJ IDEA.
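The question of how many thread clusters k to compute can be illustrated with a minimal sketch: cluster a one-dimensional thread metric for several candidate k and keep the k with the best validation score. This toy uses a hand-rolled k-means and the silhouette index (one of many cluster validation indices; the thesis uses four), on invented data:

```python
def kmeans_1d(xs, k, iters=20):
    """Tiny 1-D Lloyd's k-means, a toy stand-in for clustering thread metrics."""
    xs_sorted = sorted(xs)
    centers = [xs_sorted[round(i * (len(xs) - 1) / (k - 1))] for i in range(k)]
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in xs:
            nearest = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return [c for c in clusters if c]

def silhouette(clusters):
    """Mean silhouette width over all points (brute force, 1-D)."""
    scores = []
    for ci, c in enumerate(clusters):
        for x in c:
            if len(c) == 1:
                scores.append(0.0)  # singleton clusters score 0 by convention
                continue
            a = sum(abs(x - y) for y in c) / (len(c) - 1)   # own-cluster cohesion
            b = min(sum(abs(x - y) for y in o) / len(o)     # nearest other cluster
                    for cj, o in enumerate(clusters) if cj != ci)
            scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Hypothetical per-thread runtime metric with three clearly separated groups
metric = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8, 9.9, 10.1]
best_k = max(range(2, 5), key=lambda k: silhouette(kmeans_1d(metric, k)))
print(best_k)  # 3 on this data, matching the observation that optimal k rarely exceeds three
```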
With various empirical studies, including anecdotal usage scenarios, a usability test, web surveys, hands-on sessions, questionnaires and interviews, we investigated quality aspects of the in-situ thread visualizations and their corresponding tools. First, by a demonstration study, we illustrated the usefulness of the ThreadRadar visualization in investigating and fixing concurrency bugs and a performance bug. This was confirmed by a subsequent usability test and interview, which also provided formative feedback. Second, we investigated the interpretability and readability of the ThreadFork glyphs as well as the effectiveness of the ThreadFork visualization through anonymous web surveys. While we have found that the ThreadFork glyphs are correctly interpreted and readable, it remains unproven that the ThreadFork visualization effectively facilitates understanding the dynamic behavior of threads that concurrently executed portions of source code. Moreover, the overall usability of CodeSparks-JPT is perceived as "OK, but not acceptable" as the tool has issues with its learnability and memorability. However, all other usability aspects of CodeSparks-JPT that were examined are perceived as "above average" or "good".
Our work supports software-engineering researchers and practitioners in flexibly and swiftly developing novel glyph-based visualizations that are embedded in the source-code editor. Moreover, we provide in-situ thread visualizations that foster the awareness of source code being executed concurrently or in parallel. These in-situ thread visualizations can, for instance, be adapted, extended and used to analyze other use cases or to replicate the results. Through empirical studies, we have gradually shaped the design of the in-situ thread visualizations through data-driven decisions, and evaluated several quality aspects of the in-situ thread visualizations and the corresponding tools for their utility in understanding and debugging concurrent programs.
Physically-based distributed rainfall-runoff models, as the standard analysis tools for hydrological processes, have been used to simulate the water system in detail, including spatial patterns and temporal dynamics of hydrological variables and processes (Davison et al., 2015; Ek and Holtslag, 2004). In general, catchment models are parameterized with spatial information on soil, vegetation and topography. However, traditional approaches for evaluation of hydrological model performance are usually motivated with respect to discharge data alone. This may cloud model realism and hamper understanding of catchment behavior. It is necessary to evaluate model performance with respect to internal hydrological processes within the catchment area as well as other components of the water balance, rather than runoff discharge at the catchment outlet only. In particular, a considerable amount of the dynamics in a catchment occurs in processes related to interactions of water, soil and vegetation. Evapotranspiration, for instance, is one of those key interactive elements, and the parameterization of soil and vegetation in water balance modeling strongly influences the simulation of evapotranspiration. Specifically, to parameterize the water flow in the unsaturated soil zone, the functional relationships that describe the soil water retention and hydraulic conductivity characteristics are important. To define these functional relationships, Pedo-Transfer Functions (PTFs) are commonly used in hydrological modeling. Choosing the appropriate PTFs for the region under investigation is a crucial task in estimating the soil hydraulic parameters, but this choice in a hydrological model is often made arbitrarily and without evaluating the spatial and temporal patterns of evapotranspiration, soil moisture, and the distribution and intensity of runoff processes. This may ultimately lead to implausible modeling results and possibly to incorrect decisions in regional water management. Therefore, reliable evaluation approaches are continually required to analyze the dynamics of the interacting hydrological processes and to predict future changes in the water cycle, which eventually contributes to sustainable environmental planning and decisions in water management.
Remarkable endeavors have been made in the development of modelling tools that provide insights into current and future hydrological patterns at different scales and their impacts on water resources and climate change (Doell et al., 2014; Wood et al., 2011). However, there is a need to strike a proper balance between parameter identifiability and the model's ability to realistically represent the response of the natural system. Tackling this issue entails investigating additional information, which usually has to be elaborately assembled, for instance by mapping the dominant runoff generation processes in the area of interest, or by retrieving the spatial patterns of soil moisture and evapotranspiration using remote sensing methods, with evaluation at a scale commensurate with the hydrological model (Koch et al., 2022; Zink et al., 2018). The present work therefore aims to give insights into modeling approaches for simulating the water balance and to improve the soil and vegetation parameterization scheme in the hydrological model, with the goal of producing more reliable spatial and temporal patterns of evapotranspiration and runoff processes in the catchment.
An important contribution to the overall body of work is a book chapter included among the publications. The book chapter provides a comprehensive overview of the topic and valuable insights into understanding the water balance and its estimation methods.
The first paper evaluated the behavior of the hydrological model with respect to the contribution of various sources of information. To this end, a multi-criteria evaluation metric including soft and hard data was used to define constraints on the outputs of the 1-D hydrological model WaSiM-ETH. Applying this metric, we could identify the optimal soil and vegetation parameter sets that resulted in a “behavioral” forest stand water balance model. It was found that even when simulations of transpiration and soil water content are consistent with measured data, the dominant runoff generation processes or the total water balance might still be calculated wrongly. Only an evaluation scheme that draws on different sources of data and embraces an understanding of the local controls of water loss through soil and plants allowed us to exclude unrealistic modeling outputs. The results suggest that the generally accepted soil parameterization procedures that apply default parameter sets may need to be questioned.
The second paper tackles this evaluation problem at the scale of a small catchment in Bavaria. A methodology was introduced to analyze the sensitivity of the catchment water balance model to the choice of Pedo-Transfer Function (PTF). By varying the underlying PTFs in a calibrated and validated model, we could determine the resulting effects on the spatial distribution of soil hydraulic properties, the total water balance at the catchment outlet, and the spatial and temporal variation of the runoff components. The results revealed that the water distribution in the hydrologic system differs significantly among PTFs. Moreover, the simulated water balance components were highly sensitive to the spatial distribution of soil hydraulic properties. It was therefore suggested that the choice of PTF in hydrological modeling be carefully tested by checking whether the spatio-temporal distributions of simulated evapotranspiration and runoff generation processes are reasonably represented.
Following up on these suggestions, the third paper focuses on evaluating the hydrological model by improving the spatial representation of dominant runoff processes (DRPs). The study was carried out in a mesoscale catchment in southwestern Germany using the hydrological model WaSiM-ETH. To deal with the lack of spatial observations for a rigorous spatial model evaluation, we used a reference soil hydrologic map available for the study area to discern the expected dominant runoff processes across a wide range of hydrological conditions. The model was parameterized with 11 PTFs and run with multiple synthetic rainfall events. To compare the simulated spatial patterns to the patterns derived from the digital soil map, a multiple-component spatial performance metric (SPAEF) was applied. The simulated DRPs showed large variability with regard to land use, topography, applied rainfall rates, and the different PTFs, which strongly influence rapid runoff generation under wet conditions.
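The SPAEF metric is not spelled out in the abstract; as a rough sketch following the published formulation by Demirel, Koch and colleagues (2018), it combines the correlation of the two fields, the ratio of their coefficients of variation, and the overlap of the histograms of the z-scored fields. Bin count and the synthetic test fields below are arbitrary choices for illustration.

```python
import numpy as np

def spaef(obs, sim, bins=20):
    """Spatial efficiency metric (SPAEF), sketched after Demirel et al. (2018).

    alpha : Pearson correlation between the two fields
    beta  : ratio of the coefficients of variation (spread, bias-insensitive)
    gamma : overlap of the histograms of the z-scored fields
    SPAEF = 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2); 1 is a perfect match.
    """
    obs, sim = np.ravel(obs), np.ravel(sim)
    alpha = np.corrcoef(obs, sim)[0, 1]
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    z_obs = (obs - obs.mean()) / obs.std()
    z_sim = (sim - sim.mean()) / sim.std()
    lo, hi = min(z_obs.min(), z_sim.min()), max(z_obs.max(), z_sim.max())
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    gamma = np.minimum(h_obs, h_sim).sum() / h_obs.sum()
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

rng = np.random.default_rng(0)
field = rng.gamma(2.0, 2.0, size=(50, 50))  # synthetic "observed" pattern
print(spaef(field, field))                  # identical fields -> 1.0
print(spaef(field, field + rng.normal(0.0, 1.0, field.shape)))  # degraded match
```

Because all three components must equal one simultaneously, SPAEF penalizes a simulation that reproduces the mean pattern but not its spatial variability, which is exactly the failure mode discharge-only calibration cannot detect.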
The three published manuscripts advanced model evaluation approaches that ultimately yield behavioral model outputs. This was achieved by obtaining information about the internal hydrological processes that lead to certain model behaviors, and about the function and sensitivity of the soil and vegetation parameters that primarily influence those internal processes in a catchment. Using this understanding of model reactions and setting multiple evaluation criteria, it was possible to identify which parameterization leads to a behavioral model realization. This work contributes to solving some of the issues (e.g., spatial variability and modeling methods) identified among the 23 unsolved problems in hydrology in the 21st century (Blöschl et al., 2019). The results encourage further investigation toward a comprehensive model calibration procedure that considers multiple data sources simultaneously. This will enable new perspectives on current parameter estimation methods, which in essence focus on reproducing plausible spatio-temporal dynamics of the other hydrological processes within the watershed.
In the European Commission's proposal for a Common European Sales Law (Gemeinsames Europäisches Kaufrecht, GEK; KOM(2011) 635), information in the precontractual phase, that is, information duties as well as the legal consequences of providing or withholding information, received manifold regulation with respect to the sales contract and the choice of the optional instrument. The present work examines these rules, also in their relation to the textual layers (Textstufen) of European private law, namely model rules and consumer-protecting EU directives, and measures them against economic framework conditions that demand transactional efficiency and reveal the limits of the usefulness of (mandatory) information.
Starting from the principle of freedom of contract, each party bears the risk of being insufficiently informed, while the other side is obliged to provide information only selectively. Between businesses this remains the case under the GEK as well; between business and consumer, however, the relationship is reversed. There, differentiated according to the situation in which the contract is concluded, comprehensive catalogues of information duties apply with regard to the sales contract. As a concept this is fundamentally sound; the duties serve consumer protection, in particular informedness and transparency before the decision on concluding the contract. In part, however, the duties go too far. The encroachment on the trader's freedom of contract through the duties and the consequences of their breach cannot be fully justified by the goal of consumer protection. Because of the excess of information, the prescribed duties promote consumer protection only to a limited extent; they do not meet behavioural-economic standards. It is therefore advisable, between traders and consumers, to delete certain mandatory information items entirely, to dispense with information not needed in the specific case, to postpone information relevant only after the conclusion of the contract to that time, and to present the remaining mandatory precontractual information in a form the consumer can process more easily. Information to be given to a consumer should always be required to be clear and comprehensible; the burden of proving its proper provision should generally rest with the trader.
Alongside the expressly prescribed information duties, and regardless of consumer or trader status or of the buyer's or seller's role, there are strongly case-dependent duties to inform based on good faith, laid down in the law on defects of consent. Here the principle is realised that a lack of information is initially each party's own risk; legitimate reliance and the free formation of intent are protected. These duties also take account of the goal of efficiency and respect freedom of contract. Reliance on any information actually provided is further protected in that such information can help determine the content of the contract (although, in consumer contracts, not comprehensively enough) and in that its inaccuracy is sanctioned.
The breach of any kind of information duty can give rise in particular to a claim for damages and, via the law on defects of consent, to the possibility of withdrawing from the contract. The interplay of the different mechanisms, however, leads to frictions and to gaps in the legal consequences of breaches of information duties. It is therefore advisable to create a claim for damages for every failure to provide information contrary to good faith; this would upgrade the requirement of good faith into a genuine case-dependent information duty outside the law on defects of consent as well.
Left ventricular assist devices (LVADs) have become a valuable treatment for patients with advanced heart failure. Women appear to be disadvantaged in the use of LVADs and with regard to clinical outcomes such as death and adverse events after LVAD implantation. In contrast to typical clinical characteristics (e.g., disease severity), device-related factors such as the intended device strategy, bridge to heart transplantation or destination therapy, are often not considered in research on gender differences. In addition, the relevance of pre-implant psychosocial risk factors, such as substance abuse and limited social support, for LVAD outcomes is currently unclear. The aim of this dissertation is therefore to explore the role of pre-implant psychosocial risk factors in gender differences in clinical outcomes, accounting for clinical and device-related risk factors.
In the first article, gender differences in the pre-implant characteristics of patients registered in the European Registry for Patients with Mechanical Circulatory Support (EUROMACS) were investigated. Women and men were found to differ in multiple pre-implant characteristics depending on device strategy. In the second article, gender differences in major clinical outcomes (i.e., death, heart transplant, device explant due to cardiac recovery, device replacement due to complications) were evaluated for patients with the device strategy destination therapy in the Interagency Registry for Mechanically Assisted Circulatory Support (INTERMACS). Additionally, the association of gender and psychosocial risk factors with the major outcomes was analyzed. In the destination therapy subgroup, women had probabilities of dying on LVAD support similar to men's, and even higher probabilities of device explant due to cardiac recovery. Pre-implant psychosocial risk factors were not associated with the major outcomes. The third article focused on gender differences in 10 adverse events (e.g., device malfunction, bleeding) after LVAD implantation in INTERMACS. The association of a psychosocial risk indicator with gender and adverse events after LVAD implantation was evaluated. Women were less likely to have psychosocial risk pre-implant but more likely to experience seven of the 10 adverse events compared with men. Pre-implant psychosocial risk was associated with adverse events, even suggesting a dose-response relationship. These associations appeared to be more pronounced in women.
In conclusion, women appear to have survival similar to men when device strategy is accounted for. They have higher probabilities of recovery, but also higher probabilities of device replacement and adverse events compared with men. Regarding these adverse events, women may be more susceptible to psychosocial risk factors than men. The results of this dissertation illustrate the importance of gender-sensitive research and suggest considering device strategy when studying gender differences in LVAD recipients. Further research is warranted to elucidate the role of specific psychosocial risk factors that lead to higher probabilities of adverse events, in order to intervene early and improve patient care in both women and men.
Strategies of Comedy in the Internet Meme - Ambivalent Functions of an International Popular Culture
(2024)
Internet memes are a global popular medium, frequently and mostly uncritically received, and their comedy is usually analysed only fragmentarily, with a focus on particular aspects. The present work strives for as comprehensive an account of comedy in memes as possible, based on classical and modern categories of the comic. On the basis of a comprehensive and critical synthesis of the existing literature and a precise model of analysis, a well-founded discussion of memetic comedy, its functions, and its positive as well as problematic aspects can thus be conducted.
The turnover and stabilization of organic matter (OM) in soils depend on mass and energy fluxes. Understanding the energy content of soil organic matter (SOM) is therefore of crucial importance, but this has hardly been studied so far, especially in mineral soils. In this study, combustion calorimetry (bomb calorimetry) was applied to determine the energy content (combustion enthalpy, ΔCH) of various materials: litter inputs, forest floor layers (OL, OF, OH), and bulk soil and particulate organic matter (POM) from topsoils (0–5 cm). Samples were taken from 35-year-old monocultural stands of Douglas fir (Pseudotsuga menziesii), black pine (Pinus nigra), European beech (Fagus sylvatica), and red oak (Quercus rubra) grown under highly similar soil, landscape and boundary conditions. This allowed us to investigate the influence of the degree of transformation and litter quality on the ΔCH of SOM. Tree species fuel the soil C cycle with high-energy litter (38.9 ± 1.1 kJ g−1C) and fine root biomass (35.9 ± 1.1 kJ g−1C). As plant material is transformed to SOM, ΔCH decreases in the order: OL (36.8 ± 1.6 kJ g−1C) ≥ OF (35.9 ± 3.7 kJ g−1C) > OH (30.6 ± 7.0 kJ g−1C) > 0–5 cm bulk soil (22.9 ± 8.2 kJ g−1C). This indicates that the energy content of OM decreases with transformation and stabilization, as microorganisms extract energy from organic compounds for growth and maintenance, resulting in lower-energy bulk SOM. The POM fraction has a 1.6-fold higher ΔCH than the bulk SOM. Tree species significantly affect the ΔCH of SOM in the mineral soil, with the lowest values under beech (12.7 ± 3.4 kJ g−1C). The energy contents corresponded to stoichiometric and isotopic parameters as proxies for the degree of transformation. In conclusion, litter quality, in terms of elemental composition and energy content, defines the pathway and degree of the energy-driven, microbially mediated transformation and stabilization of SOM.
This thesis consists of four closely related chapters examining China’s rise in the aluminium industry. The first chapter addresses the conditions that allowed China, which first entered the market in the 1950s, to rise to world leadership in aluminium production. Although China was a latecomer, its re-entry into the market after the oil crises of the 1970s was a success and led to its ascent as the world’s largest aluminium producer by 2001. With an estimated production of 40.4 million tonnes in 2022, China represented almost 60% of global output. Chapter 1 examines the factors underlying this success, such as the decline of international aluminium cartels, the introduction of innovative technology, the US granting China most-favored-nation (MFN) tariff status, China-specific factors, and supportive government policies. Chapter 2 develops a mathematical model to analyze firms’ decisions in the short term. It examines how an incumbent with outdated technology and a new entrant with access to a new type of technology make strategic decisions, including the incumbent’s decision whether to deter entry, the production choices of the firms, the optimal technology adoption rate of the newcomer, and cartel formation. Chapter 3 focuses on the adoption of new technology by firms upon market entry in four scenarios: firstly, free-market Cournot competition; secondly, a situation in which the government determines technology adoption rates; thirdly, a scenario in which the government controls both technology and production; and finally, a scenario in which the government dictates technology adoption rates, production levels, and also the number of market participants. Chapter 4 applies the Spencer and Brander (1983) framework to examine strategic industrial policy. The model assumes two exporting firms in two different countries that sell a product to a third country.
We examine how the domestic firm is influenced by government intervention, such as the provision of a fixed-cost subsidy to improve its competitiveness relative to the foreign company. Chapter 4 first investigates a scenario in which only one government offers a fixed-cost subsidy, followed by an analysis of the case in which both governments simultaneously provide financial support. Taken together, these chapters provide a comprehensive analysis of the strategic, technological, and political factors contributing to China’s leadership in the global aluminium industry.
Chapter 1: The Rise of China as a Latecomer in the Global Aluminium Industry
This chapter examines China’s remarkable transformation into a global leader in the aluminium industry, a sector in which the country accounted for approximately 58.9% of worldwide production in 2022. We examine how China, a latecomer to the aluminium industry that started off with labor-intensive technology in 1953, grew into the largest aluminium producer with some of the most advanced smelters in the world. This analysis identifies and discusses several opportunities that Chinese aluminium producers took advantage of. The first set of opportunities arose during the 1970s oil crises, which softened international competition and allowed China to acquire innovative smelting technology from Japan. The second set of opportunities started at about the same time, when China opened its economy in 1978. The substantial demand for aluminium in China is influenced by both external and internal factors. Externally, the US granted China MFN tariff status in 1980 and China entered the World Trade Organization (WTO) in 2001; both events contributed to a surge in Chinese aluminium consumption. Internally, China’s investment-led growth model further boosted its aluminium demand. Additional factors specific to China, such as low labor costs and the abundance of coal as an energy source, give Chinese firms competitive advantages over international players. A further window of opportunity arose from Chinese government policies, including phasing out old technology, providing subsidies, and gradually opening the economy to strengthen domestic competition before expanding globally. By describing these elements, the study provides insights into the dynamic interplay of external circumstances and internal strategies that contributed to the success of the Chinese aluminium industry.
Chapter 2: Technological Change and Strategic Choices for Incumbent and New Entrant
This chapter introduces an oligopoly model with two actors, an incumbent and a potential entrant, competing in the same market. The two participants are located in different parts of the market: the incumbent is situated in area 1, whereas the potential entrant may venture into the other region, area 2. The incumbent moves in stage zero, where it can decide whether to deter the newcomer’s entry. A new type of technology becomes available in period one, when the newcomer may enter the market. In the short term, the incumbent is locked into the outdated technology, while the new entrant may choose to adopt the latest technology partially or completely. Our results suggest the following. Firstly, the incumbent only tries to deter the new entrant if a condition on the entry cost is met. Secondly, the new entrant is only interested in forming a cartel with the incumbent if a function of the ratio of the variable-cost parameter to the new technology’s fixed-cost parameter is sufficiently high. Thirdly, if the newcomer asks to form a cartel, the incumbent will always accept this request. Finally, we derive the optimal new-technology adoption rate for the newcomer.
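The abstract does not give the model's functional forms; purely as an illustration of the kind of trade-off involved, the toy computation below uses a linear Cournot duopoly in which the entrant's adoption rate lowers its marginal cost at a quadratic fixed cost. All functional forms and numbers are hypothetical, not the chapter's.

```python
import numpy as np

# A stylised sketch of the setting described above (parameter values made up):
# inverse demand p = a - b*(q1 + q2); the incumbent is stuck with marginal cost
# c_old, while the entrant chooses an adoption rate r in [0, 1] that lowers its
# marginal cost to c_old - delta*r at a quadratic fixed cost k*r**2.
a, b = 100.0, 1.0
c_old, delta, k = 40.0, 20.0, 600.0

def cournot_profits(c1, c2):
    """Cournot-Nash profits for the two firms given marginal costs c1, c2."""
    q1 = max((a - 2 * c1 + c2) / (3 * b), 0.0)
    q2 = max((a - 2 * c2 + c1) / (3 * b), 0.0)
    return b * q1 ** 2, b * q2 ** 2

def entrant_value(r):
    """Entrant's operating profit net of the technology fixed cost."""
    _, pi2 = cournot_profits(c_old, c_old - delta * r)
    return pi2 - k * r ** 2

# Grid search for the entrant's optimal adoption rate.
grid = np.linspace(0.0, 1.0, 1001)
r_star = grid[np.argmax([entrant_value(r) for r in grid])]
print(f"optimal adoption rate: {r_star:.3f}")
```

With these illustrative numbers the optimum is interior: the entrant adopts only part of the new technology because the marginal profit gain of a lower cost is eventually outweighed by the convex fixed cost, mirroring the partial-versus-complete adoption choice described in the chapter.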
Chapter 3: Technological Adoption and Welfare in Cournot Oligopoly
This study examines the difference between the optimal technology adoption rates chosen by firms in a homogeneous Cournot oligopoly and those preferred by a benevolent government upon the firms’ market entry. To address the question of whether the technology choices of firms and government are similar, we analyze several scenarios that differ in the extent of government intervention in the market. Our results suggest a relationship between the number of firms in the market and the impact of government intervention on technology adoption rates. In situations with few firms interested in entering the market, greater government influence tends to lead to higher technology adoption rates. Conversely, in scenarios with a larger number of firms and a government that lacks control over the number of market players, the technology adoption rate of firms is highest when the government plays no role.
Chapter 4: International Technological Innovation and Industrial Strategies
Supporting domestic firms when they first enter the market may be seen as a favorable policy choice by governments around the world owing to its ability to enhance the competitive advantage of domestic firms in non-cooperative competition against foreign enterprises (the infant-industry protection argument). This advantage may allow domestic firms to increase their market share and generate higher profits, thereby improving domestic welfare. This chapter uses the Spencer and Brander (1983) framework as a theoretical foundation to elucidate the effects of fixed-cost subsidies on firms’ production levels, technological innovation, and social welfare. The analysis examines two firms in different countries, each producing a homogeneous product that is sold in a third, separate country. We first examine the Cournot-Nash equilibrium in the absence of government intervention, then analyze a scenario in which just one government provides a financial subsidy for its domestic firm, and finally consider a situation in which both governments simultaneously provide financial assistance for their respective firms. Our results suggest that governments aiming to maximize social welfare by providing fixed-cost subsidies to their respective firms find themselves in a Chicken game. Regarding technological innovation, subsidies lead to an increased technology adoption rate for recipient firms, regardless of whether one or both firms in a market receive support, compared to the situation without subsidies. The technology adoption rate of the recipient firm is higher than that of its rival when only the recipient firm benefits from the fixed-cost subsidy. The lowest technology adoption rate occurs when a firm does not receive a fixed-cost subsidy but its competitor does.
Furthermore, global welfare benefits most when both exporting countries grant fixed-cost subsidies, and the welfare level with only one subsidizing country is higher than when no country provides subsidies.
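The Chicken-game structure mentioned above can be made concrete with a toy payoff table. The numbers are entirely hypothetical, cover only the two exporting governments' welfare, and omit the third country's consumer surplus, so they say nothing about the global-welfare ranking.

```python
from itertools import product

# Hypothetical welfare payoffs for the two exporting governments only:
# each chooses whether to grant a fixed-cost subsidy to its domestic firm.
actions = ("none", "subsidy")
payoff = {  # (home action, foreign action) -> (home welfare, foreign welfare)
    ("none", "none"): (10, 10),
    ("subsidy", "none"): (12, 8),
    ("none", "subsidy"): (8, 12),
    ("subsidy", "subsidy"): (6, 6),
}

def best_response(opponent_action, player):
    """Action maximizing a player's welfare against a fixed opponent move."""
    if player == 0:  # home government
        return max(actions, key=lambda act: payoff[(act, opponent_action)][0])
    return max(actions, key=lambda act: payoff[(opponent_action, act)][1])

# Pure-strategy Nash equilibria: each action is a best response to the other.
nash = [(a1, a2) for a1, a2 in product(actions, actions)
        if a1 == best_response(a2, 0) and a2 == best_response(a1, 1)]
print(nash)  # two asymmetric equilibria, the hallmark of a Chicken game
```

Each government prefers to subsidize if the other abstains, yet prefers to abstain if the other subsidizes, so the two asymmetric outcomes are the only pure equilibria; this is what distinguishes the Chicken structure from a Prisoner's Dilemma.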
Introduction: This study examined the sources and factors of resilience in Russian sexual and gender minorities. We hypothesized that, through their involvement in the lesbian, gay, bisexual, and transgender (LGBT) community (source of resilience), LGBT people establish friendships that provide them with social support (factor of resilience), which in turn should contribute to their mental health.
Method: The study sample consisted of 1,127 young and middle-aged LGBT adults (18 to 50 years) from Russia. We collected the data online and anonymously. Results: Partial mediation could be confirmed. LGBT people who were involved in “their” community reported more social support from friends, which partially mediated the positive association between community involvement and mental health. The mediation remained significant when we controlled for demographics and outness as potential covariates. Additional analyses showed that the present sample reported lower mental health but not less social support than Russian nonminority samples recruited in previous research.
Conclusion: Our study underlines the importance of the LGBT community in times of increasing stigmatization of sexual and gender minorities.
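The regression logic behind such a partial mediation (involvement X to support M to mental health Y, with a remaining direct path) can be sketched on simulated data. The variable names mirror the study's constructs, but every number below is made up and the effect sizes are not the study's.

```python
import numpy as np

# Simulated illustration of partial mediation: community involvement (x)
# -> social support from friends (m) -> mental health (y). All coefficients
# here are invented; only the sample size matches the study.
rng = np.random.default_rng(42)
n = 1127
x = rng.normal(size=n)                      # community involvement
m = 0.5 * x + rng.normal(size=n)            # mediator: social support
y = 0.3 * m + 0.2 * x + rng.normal(size=n)  # outcome: mental health

def ols_coefs(predictors, outcome):
    """Least-squares coefficients (intercept first) via numpy.linalg.lstsq."""
    design = np.column_stack([np.ones_like(outcome)] + list(predictors))
    return np.linalg.lstsq(design, outcome, rcond=None)[0]

c_total = ols_coefs([x], y)[1]   # total effect of x on y
a_path = ols_coefs([x], m)[1]    # x -> m
coefs = ols_coefs([m, x], y)
b_path, c_direct = coefs[1], coefs[2]  # m -> y and x -> y controlling for m
indirect = a_path * b_path
print(f"total={c_total:.2f} direct={c_direct:.2f} indirect={indirect:.2f}")
```

For OLS with a single predictor and a single mediator, the decomposition total = direct + indirect holds exactly in-sample; partial mediation corresponds to both the indirect and the direct path remaining substantively nonzero, as reported in the study.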
Differential equations are based on local interactions and yield solutions that necessarily possess a certain amount of regularity. Various natural phenomena, however, are not well described by local models. An important class of models that describe long-range interactions are the so-called nonlocal models, which are the subject of this work.
The nonlocal operators considered here are integral operators with a finite range of interaction and the resulting models can be applied to anomalous diffusion, mechanics and multiscale problems.
While the range of applications is vast, nonlocal models face difficulties such as the high computational and algorithmic complexity of fundamental tasks. One of these is the assembly of finite element discretizations of truncated nonlocal operators.
The first contribution of this thesis is therefore an openly accessible, documented Python code that computes finite element approximations for nonlocal convection-diffusion problems with a truncated interaction horizon.
Another difficulty in the solution of nonlocal problems is that the discrete systems may be ill-conditioned, which complicates the application of iterative solvers. The second contribution of this work is therefore the construction and study of a domain decomposition type solver inspired by substructuring methods for differential equations. The numerical results are based on the abstract framework of nonlocal subdivisions, which is introduced here and can serve as a guideline for general nonlocal domain decomposition methods.
Note: This is the second, revised edition of the dissertation.
For the first edition, see:
"https://ubt.opus.hbz-nrw.de/frontdoor/index/index/docId/2083".
The starting point of this politico-iconographical study, centred on two state portraits of King Maximilian II of Bavaria, is the observation that the two portraits choose fundamentally different forms of staging. The first, painted by Max Hailer, shows Maximilian II in the full Bavarian coronation regalia and takes up a traditional mode of representation in the state portrait. It was created in 1850, two years after Maximilian II's accession to the throne and thus after the revolutionary unrest of 1848/49. The second was painted by Joseph Bernhardt in 1857 to 1858 and first presented in 1858 on the monarch's tenth jubilee on the throne. The staging changes in the second portrait: the Bavarian coronation regalia has given way to a general's uniform, as have further details still found in the first depiction: drapery and coat of arms are absent, and the customary Bavarian royal throne chair has been replaced by another. Pushed into the background is the constitution, after all the legal foundation of the Bavarian kingdom since 1818. The two state portraits of Maximilian II evidently mark the transition from the ruler portraits in full Bavarian coronation regalia of his grandfather Maximilian I and his father Ludwig I to portraits in uniform with coronation mantle, as found with Napoleon III and Friedrich Wilhelm IV and as continued by his son Ludwig II. The question therefore arises which factors led to this striking change in the staging of Maximilian II as King of Bavaria. The study pursues the thesis that both depictions are fundamentally designed around a reactionary politics directed against the revolution of 1848/49, with this reactionary character intensified in Maximilian II's portrait of 1858 compared with that of 1850. Moreover, the domestically oriented, historical thrust of the first portrait turns, in the second depiction of the Bavarian monarch, into an outwardly oriented, progressive one.
Maximilian II's legitimation is no longer grounded, as in the first portrait, in the history and rule of the Wittelsbach dynasty, but in his own achievements and his own reign. This shift in the political message of the image rests both on the political changes and developments inside and outside Bavaria and on the development of the state portrait in the mid-nineteenth century. After only ten years, a changed message about Maximilian II's position and claim to power is thus sent out.
The central subject of this study is the legal concept of the Indigenat in the context of the Württemberg and Prussian state landscape. The Indigenat can be defined as a right that determines its potential holders chiefly through the principle of descent and expresses a relationship between the holder and a superordinate legal subject, whether of feudal or estate-based, state, federal, or imperial legal nature. The temporal focus of the study lies on the nineteenth century, but the early modern period is also considered in retrospect, because change and continuity in the development of the Indigenat emerge with particular clarity over such a long perspective. The central thesis of this work is that a close connection exists between, on the one hand, the form of assigning persons to the state that arose in the nineteenth century and remains familiar today, together with the rights that spring from this relationship, and, on the other hand, the early modern Indigenat. It can be shown that societies shielded their positions of political power against persons of "foreign descent", for instance immigrants, by invoking ethnic provisions of the law of the Indigenat.
Social entrepreneurship is a successful approach to solving social problems and economic challenges. It uses for-profit industry techniques and tools to build financially sound businesses that provide nonprofit services. Social entrepreneurial activities also contribute to achieving the sustainable development goals. However, due to the complex, hybrid nature of the business, social entrepreneurial activities typically depend on support from macro-level determinants. To expand our knowledge of how beneficial macro-level determinants can be, this work examines empirical evidence on their impact on social entrepreneurship. A further aim of this dissertation is to examine consequences at the micro level, since the growth ambitions of social and commercial entrepreneurs differ. Chapter 1 introduces the work, presenting the motivation for the research, the research question, and the structure of the thesis.
There is an ongoing debate about the origin and definition of social entrepreneurship, and the numerous phenomena subsumed under the term have been examined theoretically in the previous literature. To establish the common consensus on the topic, Chapter 2 presents the theoretical foundations and a definition of social entrepreneurship. The literature shows that a variety of determinants at the micro and macro levels are essential for the emergence of social entrepreneurship as a distinctive business model (Hartog & Hoogendoorn, 2011; Stephan et al., 2015; Hoogendoorn, 2016). A venture built around a social mission can hardly emerge without the support of micro- and macro-level determinants. This work examines the determinants and consequences of social entrepreneurship from different methodological perspectives; Chapter 3 discusses the theoretical foundations of the micro- and macro-level determinants influencing social entrepreneurial activities. The purpose of replication in research is to confirm previously published results (Hubbard et al., 1998; Aguinis & Solarino, 2019). However, owing to missing data, intransparent methodology, reluctance to publish, and a lack of interest among researchers, replications of existing studies remain scarce (Baker, 2016; Hedges & Schauer, 2019a). Promoting replication studies has been regularly emphasized in the business and management literature (Kerr et al., 2016; Camerer et al., 2016), yet studies that replicate reported results are rare (Burman et al., 2010; Ryan & Tipu, 2022). Based on the research of Köhler and Cortina (2019), Chapter 4 carries out an empirical study on this topic.
Given this focus, researchers have published a large body of research on the impact of micro- and macro-level determinants on social inclusion, although it remains unclear whether these studies accurately reflect reality. It is important to provide conceptual underpinnings to the field through a reassessment of published results (Bettis et al., 2016). The results of this research make it abundantly clear that macro-level determinants support social entrepreneurship.
In keeping with this more narrative approach, Chapter 5 considers the reproducibility of previous results, a crucial concern that requires attention, particularly on the topic of social entrepreneurship. We replicated the results of Stephan et al. (2015) to establish the trend of reproducibility and to validate the specific conclusions they drew. The literal and constructive replication in this dissertation inspired us to explore technical replication research on social entrepreneurship. Chapter 6 evaluates the fundamental characteristics that have proven to be key factors in the growth of social ventures. The discussion reviews and references literature that has specifically focused on the development of social entrepreneurship, and an empirical analysis of factors directly related to the growth ambitions of social entrepreneurs is also carried out.
Numerous social entrepreneurial groups have been studied with respect to this association. Chapter 6 compares the growth ambitions of social and traditional (commercial) entrepreneurs as micro-level consequences, examining many characteristics of social and commercial entrepreneurs' growth ambitions. Scholars have argued that the growth of social entrepreneurship differs from commercial entrepreneurial activity because the two pursue different objectives (Lumpkin et al., 2013; Garrido-Skurkowicz et al., 2022). Related work has largely relied on qualitative research; for example, Gupta et al. (2020) emphasized that research needs to focus on specific concepts of social entrepreneurship for the field to advance. This study therefore provides a quantitative, analysis-based assessment of facts and data. For this purpose, a dataset from the Global Entrepreneurship Monitor (GEM) 2015 was used, covering 12,695 entrepreneurs from 38 countries. Furthermore, a regression analysis was conducted to evaluate the influence of various characteristics of social and commercial entrepreneurship on economic growth in developing countries. Chapter 7 briefly outlines future directions and practical and theoretical implications.
In the present study, we tested whether processing information in the context of an ancestral survival scenario enhances episodic memory performance in older adults and in stroke patients. In an online study (Experiment 1), healthy young and older adults rated words according to their relevance to an ancestral survival scenario, and subsequent free recall performance was compared to a pleasantness judgment task and a moving scenario task in a within-subject design. The typical survival processing effect was replicated: Recall rates were highest in the survival task, followed by the moving and the pleasantness judgment task. Although older adults showed overall lower recall rates, there was no evidence for differences between the age groups in the condition effects. Experiment 2 was conducted in a neurological rehabilitation clinic with a sample of patients who had suffered a stroke within the past 5 months. On the group level, Experiment 2 revealed no significant difference in recall rates between the three conditions. However, when accounting for overall memory abilities and executive function, independently measured in standardized neuropsychological tests, patients showed a significant survival processing effect. Furthermore, only patients with high executive function scores benefitted from the scenario tasks, suggesting that intact executive function may be necessary for a mnemonic benefit. Taken together, our results support the idea that the survival processing task – a well-studied task in the field of experimental psychology – may be incorporated into a strategy to compensate for memory dysfunction.
Older adults who worry that their cognitive capabilities are declining, but who show no evidence of actual cognitive decline in neuropsychological tests, are at an increased risk of being diagnosed with dementia at a later time. Since neural markers may be more sensitive to early stages of cognitive decline, the present study examined whether event-related potential responses of feedback processing, elicited in a probabilistic learning task, differ between healthy older adults recruited from the community who did (subjective cognitive decline, SCD group) or did not (No-SCD group) report worry about their own cognition declining beyond normal age-related development. In the absence of group differences in learning from emotionally charged feedback in the probabilistic learning task, the amplitude of the feedback-related negativity (FRN) varied with feedback valence differently in the two groups: in the No-SCD group, the FRN was larger for positive than for negative feedback, while in the SCD group, FRN amplitude did not differ between positive and negative feedback. The P3b was enhanced for negative feedback in both groups, and group differences in P3b amplitude were not significant. Altered sensitivity in the neural processing of negative versus positive feedback may be a marker of SCD.
In psychological science communication, Plain Language Summaries (PLS; Kerwer et al., 2021) are becoming increasingly important. These are accessible, overview-style summaries that can potentially support lay readers' understanding and foster their trust in scientific research. This appears particularly relevant against the backdrop of the replication crisis (Wingen et al., 2019) and of misinformation in online contexts (Swire-Thompson & Lazer, 2020). Two effects on trust, along with their possible interaction, have so far received little attention in the context of PLS: first, the simple presentation of information (easiness effect; Scharrer et al., 2012), and second, a maximally scientific style (scientificness effect; Thomm & Bromme, 2012). This dissertation aims to identify more precisely the components of both effects in the context of psychological PLS and to shed light on the influence of easiness and scientificness on trust. To this end, three articles reporting preregistered online studies with German-speaking samples are presented.
In the first article, various text elements of psychological PLS were systematically varied across two studies. Technical terms, information on operationalization, statistics, and the degree of structuring each had a significant influence on the easiness of the PLS as reported by lay readers. Building on this, the second article varied the easiness and scientificness of four PLS derived from peer-reviewed papers and asked lay readers about their trust in the texts and the authors. Initially, only a positive influence of scientificness on trust emerged, while the easiness effect, contrary to the hypotheses, failed to appear. Exploratory analyses, however, suggested a positive influence of the easiness subjectively perceived by lay readers on their trust, as well as a significant interaction with perceived scientificness. These findings point to a mediating role of lay readers' subjective perception for both effects. In the final article, this hypothesis was tested via mediation analyses. Again, two PLS were presented, and both the scientificness of the text and that of the author were manipulated. The influence of higher scientificness on trust was mediated by the scientificness subjectively perceived by lay readers. In addition, cross-dimensional mediation effects were observed.
This work thus goes beyond existing research in clarifying the boundary conditions of the easiness and scientificness effects. Theoretical implications for the future definition of easiness and scientificness are discussed, as are practical consequences regarding the different target audiences of science communication and the influence of PLS on lay readers' decision-making.
Introduction: Across various cultural contexts, success in goal realization relates to individuals’ well-being. Moreover, commitment to and successful pursuance of goals are crucial when searching for a meaningful identity in adolescence. However, individuals’ goals differ in how much they match their implicit motive dispositions. We hypothesized that successful pursuance of affiliation goals positively relates to commitment-related dimensions of interpersonal identity development (domain: close friends) that, in turn, predict adolescents’ level of well-being. However, we further assumed that the links between goal success and identity commitment are particularly pronounced among adolescents who are characterized by a high implicit affiliation motive.
Methods: To scrutinize the generalizability of the assumed relationships, data were assessed among adolescents in individualistic (Germany) and collectivistic (Zambia) cultural contexts.
Results: Regardless of adolescents’ cultural background, we found that commitment-related dimensions of interpersonal identity development mediate the link between successful attainment of affiliation goals and well-being, particularly among adolescents with a pronounced implicit affiliation motive; that is, the strength of the implicit affiliation motive moderates the association
between goal success and identity commitment.
Conclusion: We discuss findings concerning universal effects of implicit motives on identity commitment and well-being.
Using validated stimulus material is crucial for ensuring research comparability and replicability. However, many databases rely solely on bidimensional valence ratings, ranging from negative to positive. While this material might be appropriate for certain studies, it does not reflect the complexity of attitudes and therefore might hamper the unambiguous interpretation of some study results. In fact, most databases cannot differentiate between neutral (i.e., neither positive nor negative) and ambivalent (i.e., simultaneously positive and negative) attitudes. Consequently, even presumably univalent (only positive or negative) stimuli cannot be clearly distinguished from ambivalent ones when selected via bipolar rating scales. In the present research, we introduce the Trier Univalence Neutrality Ambivalence (TUNA) database, which contains 304,262 validation ratings from heterogeneous samples of 3,232 participants, with at least 20 (M = 27.3, SD = 4.84) ratings per self-report scale per picture, for a variety of attitude objects on split semantic differential scales. As these scales measure positive and negative evaluations independently, the TUNA database allows univalence, neutrality, and ambivalence (i.e., potential ambivalence) to be distinguished. TUNA also goes beyond previous databases by validating the stimulus materials on affective outcomes such as experiences of conflict (i.e., felt ambivalence), arousal, anger, disgust, and empathy. The TUNA database consists of 796 pictures and is compatible with other popular databases. It focuses on food pictures in various forms (e.g., raw vs. cooked, non-processed vs. highly processed), but includes pictures of other objects that are typically used in research to study univalent (e.g., flowers) and ambivalent (e.g., money, cars) attitudes for comparison.
Furthermore, to facilitate stimulus selection, the TUNA database has an accompanying desktop app that allows easy stimulus selection via a multitude of filter options.
When humans encounter attitude objects (e.g., other people, objects, or constructs), they evaluate them. Often, these evaluations are based on attitudes. Whereas most research focuses on univalent (i.e., only positive or only negative) attitude formation, little research exists on ambivalent (i.e., simultaneously positive and negative) attitude formation. Following a general introduction into ambivalence, I present three original manuscripts investigating ambivalent attitude formation. The first manuscript addresses ambivalent attitude formation from previously univalent attitudes. The results indicate that responding to a univalent attitude object incongruently leads to ambivalence measured via mouse tracking but not ambivalence measured via self-report. The second manuscript addresses whether the same number of positive and negative statements presented block-wise in an impression formation task leads to ambivalence. The third manuscript also used an impression formation task and addresses the question of whether randomly presenting the same number of positive and negative statements leads to ambivalence. Additionally, the effect of block size of the same valent statements is investigated. The results of the last two manuscripts indicate that presenting all statements of one valence and then all statements of the opposite valence leads to ambivalence measured via self-report and mouse tracking. Finally, I discuss implications for attitude theory and research as well as future research directions.
This dissertation examines the relevance of regimes for stock markets. In three research articles, we cover the identification and predictability of regimes and their relationships to macroeconomic and financial variables in the United States.
The initial two chapters contribute to the debate on the predictability of stock markets. While various approaches can demonstrate in-sample predictability, their predictive power diminishes substantially in out-of-sample studies. Parameter instability and model uncertainty are the primary challenges. However, certain methods have demonstrated efficacy in addressing these issues. In Chapters 1 and 2, we present frameworks that combine these methods meaningfully. Chapter 3 focuses on the role of regimes in explaining macro-financial relationships and examines the state-dependent effects of macroeconomic expectations on cross-sectional stock returns. Although it is common to capture the variation in stock returns using factor models, their macroeconomic risk sources are unclear. According to macro-financial asset pricing, expectations about state variables may be viable candidates to explain these sources. We examine their usefulness in explaining factor premia and assess their suitability for pricing stock portfolios.
In summary, this dissertation improves our understanding of stock market regimes in three ways. First, we show that it is worthwhile to exploit the regime dependence of stock markets. Markov-switching models and their extensions are valuable tools for filtering stock market dynamics and identifying and predicting regimes in real time. Moreover, accounting for regime-dependent relationships helps to examine the dynamic impact of macroeconomic shocks on stock returns. Second, we emphasize the usefulness of macro-financial variables for the stock market. Regime identification and forecasting benefit from their inclusion. This is particularly true in periods of high uncertainty, when information processing in financial markets is less efficient. Finally, we recommend addressing parameter instability, estimation risk, and model uncertainty in empirical models. Because it is difficult to find a single approach that meets all of these challenges simultaneously, it is advisable to combine appropriate methods in a meaningful way. The framework should be as complex as necessary but as parsimonious as possible to mitigate additional estimation risk. This is especially recommended when working with financial market data, which typically have a low signal-to-noise ratio.
After publishing graphic pictorial satires on current domestic and foreign policy topics in the 1750s and 1760s, William Hogarth was himself mocked and defamed in numerous caricatures. Starting from this observation, the present dissertation asks what stance can be discerned in the artist's political prints and by what artistic means he expressed it. Analysis of the political iconography allows the topics and actors to be described. Using the method of reception aesthetics, supplemented by speech-act and image-act theory and by propaganda studies, their tendentious statements and manipulative intentions are deciphered.
In its affinity to the government, Hogarth's political art differs fundamentally from London's oppositional pictorial satire. This difference is reflected above all in the personal attacks with which contemporary satirists criticized Hogarth. Paul Sandby ("The Painter's March from Finchly", 1753) was the first to react, responding to Hogarth's depiction of the Jacobite rising of 1745, with which Hogarth had supplied a justification for the military reform sought by William Augustus, Duke of Cumberland ("March of the Guards to Finchley", 1751). For his Gin Act campaign ("Gin Lane" and "Beer Street", 1750/51), Hogarth expanded the pro-gin iconography of the 1730s (Anonymous: "The lamentable Fall of Madam Geneva", 1736; Anonymous: "To those melancholly Sufferers the Destillers […] The Funeral Procession of Madam Geneva", 1751) in order to argue for state regulation of the distilleries. Hogarth's opportunism shows in his publications on the Seven Years' War, with which he supported the policies of the respective governments under Thomas Pelham-Holles, Duke of Newcastle and William Pitt (the Elder) ("The Invasion", 1756) or John Stuart, Earl of Bute ("The Times Pl. 1", 1763). Ultimately, his advocacy for the unpopular Tory government and his criticism of William Pitt prompted Hogarth's denigration by Whig-loyal satire. After this rupture, both sides published defamatory portrait caricatures that pursued character assassination of the opponent through criminalization, deformation, and demonization (William Hogarth: "John Wilkes Esqr.", 1763; Anonymous: "Tit for Tat", 1763; Anonymous: "An Answer to the Print of John Wilkes Esqr. by WM Hogarth", 1763; Anonymous: "Pug the snarling cur chastised Or a Cure for the Mange", 1763).
The visual comparisons between Hogarth's political works and the reactions they provoked show that the difference lies not in the subject matter or the political iconography but in the direction of their political influence. Hogarth's loyalty to the government stands out in particular. Consequently, the received scholarly view of a fundamentally critical stance on Hogarth's part must be revised, since he demonstrably positioned himself conservatively and abetted government action and the elites' retention of power.
The present dissertation examines the propagandistic quality of Hogarth's works in comparison with those of his contemporary satirists and makes the differing political thrusts visible. Insight is provided by the use of artistic and caricatural means (the "how") for the purposes of burlesque (farce/parody) and ridicule (mockery/derision), up to outright agitation, both in Hogarth's works and in the caricatures directed against him. Since William Hogarth decisively shaped these stylistic devices and drove their development, this study subsumes them under the term Hogarthian Wit. With the help of the methods and concepts of propaganda studies, intention and purpose (the "what") can be described as image acts: while the works were fundamentally instances of bias that influenced public opinion on the basis of an ideology, their force increased sharply in the 1760s; enigmatic statements gave way to personal and open criticism of public figures, up to character assassination. In the process, the artists responded to one another, forming theses and antitheses. Hogarth's one-sided depictions were corrected and supplemented, and his political art was exposed as propaganda. Finally, he was accused of lies and slander. By indicting him, or by enacting punishment in effigie through secondary stigmatization, the works demanded a punitive judgment from the viewer. The artistic means employed include political iconography, stereotyped enemy images, and national constructions; devices of reception aesthetics such as juxtapositions and figures of reception and identification; and rhetorical and speech-act devices, up to perlocutions.
The works can be described as propaganda and thus as hierarchical communication employing manipulative visual strategies that served not only to influence public opinion but also to force political action. Tellingly, both sides used the same iconography and the same means of style, composition, and communication regardless of their political message, whereby the Hogarthian Wit was consolidated and continuously developed.
Social enterprises pursue at least two goals: fulfilling their social or ecological mission and meeting financial targets. Tensions can arise between these goals. If an enterprise caught in this tension repeatedly decides in favor of its financial goals, mission drift occurs: the prioritization of financial goals crowds out the social mission. Although this phenomenon has been observed repeatedly in practice and described in individual case studies, there has so far been little research on mission drift. The present work focuses on closing this research gap and generating its own findings on the triggers and drivers of mission drift in social enterprises. Particular attention is paid to behavioral-economic theories and the mixed-gamble logic. According to this logic, every decision simultaneously entails gains and losses, so decision-makers must weigh the fear of losses against the prospect of gains. The model is used to gain a new theoretical perspective on the trade-off between social and financial goals and thus on mission drift. A conjoint experiment generates data on the decision-making behavior of social entrepreneurs, centered on the trade-off between social and financial goals in different scenarios (crisis and growth situations). Using a purpose-built sample of 1,222 social enterprises from Germany, Austria, and Switzerland, 187 participants were recruited for the study. The results of this work show that a crisis situation can trigger mission drift in social enterprises, because in this scenario the greatest importance is attached to financial goals. For a growth situation, by contrast, no such evidence was found.
In addition, further factors can reinforce a financial orientation, namely the founder identities of the social entrepreneurs, a high level of innovativeness in the enterprise, and certain stakeholders. The work concludes with a detailed discussion of the results. Recommendations are given on how social enterprises can best remain true to their goals, and the study's limitations and avenues for future research on mission drift are outlined.
Job crafting is the behavior that employees engage in to create personally better fitting work environments, for example, by increasing challenging job demands. To better understand the driving forces behind employees' engagement in job crafting, we investigated implicit and explicit power motives. While implicit motives tend to operate at the unconscious level, explicit motives operate at the conscious level. We focused on power motives, as power is an agentic motive characterized by the need to influence one's environment. Although power is relevant to job crafting in its entirety, in this study we link it to increasing challenging job demands due to its relevance to job control, which falls under the umbrella of power. Using a cross-sectional design, we collected survey data from a sample of Lebanese nurses (N = 360) working in 18 different hospitals across the country. In both the implicit and explicit power motive measures, we focused on integrative power, which enables people to stay calm and integrate opposition. The results showed that explicit power predicted job crafting (H1) and that implicit power amplified this effect (H2). Furthermore, job crafting mediated the relationship between congruently high power motives and positive work-related outcomes (H3) that were interrelated (H4). Our findings unravel the driving forces behind one of the most important dimensions of job crafting and extend the benefits of motive congruence to work-related outcomes.
Powerful Feelings: Exploring Emotions in Conflicts with Children
(2024)
Emotions reflect our personal needs. In discussions about conflicts, and in mediation in particular, it is important not to focus solely on the moment a conflict arose, but also to uncover the needs and emotions that have shaped our actions, thoughts, and feelings. The material presented here shows how, as a teacher, you can address emotions and disputes with children in primary education.