This meta-scientific dissertation comprises three research articles that investigated the reproducibility of psychological research. Specifically, they focused on the reproducibility of eye-tracking research on the one hand and, on the other, on preregistration (i.e., the practice of publishing a study protocol before data collection or analysis) as one method to increase reproducibility.
Article I demonstrated that eye-tracking data quality is influenced both by the eye-tracker used and by the specific task being measured: distinct strengths and weaknesses were identified for three devices (Tobii Pro X3-120, GP3 HD, EyeLink 1000+) in an extensive test battery. Consequently, both the device and the specific task should be considered when designing new studies. Article II focused on the current perception of preregistration in the psychological research community and on future directions for improving this practice. The survey showed that many researchers intended to preregister their research in the future and held overall positive attitudes toward preregistration. However, various obstacles that currently hinder preregistration were identified, which should be addressed to increase its adoption. These findings were supplemented by Article III, which took a closer look at one preregistration-specific tool: the PRP-QUANT Template. In a simulation trial and a survey, the template demonstrated high usability and emerged as a valuable resource to support researchers in using preregistration. Future revisions of the template could help to further facilitate this open science practice.
In this dissertation, the findings of the three articles are summarized and discussed regarding their implications and potential future steps that could be implemented to improve the reproducibility of psychological research.
Aims: Fear of physical activity (PA) is discussed as a barrier to regular exercise in patients with heart failure (HF), but HF-specific theoretical concepts are lacking. This study examined associations of fear of PA, heart-focused anxiety and trait anxiety with clinical characteristics and self-reported PA in outpatients with chronic HF. It was also investigated whether personality-related coping styles for dealing with health threats impact fear of PA via symptom perception.
Methods and results: This cross-sectional study enrolled 185 HF outpatients from five hospitals (mean age 62 ± 11 years, mean ejection fraction 36.0 ± 12%, 24% women). Avoidance of PA, sports/exercise participation (yes/no) and the psychological characteristics were assessed by self-report. Fear of PA was assessed by the Fear of Activity in Situations–Heart Failure (FActS-HF15) questionnaire. In multivariable regression analyses, higher NYHA class (b = 0.26, p = 0.036) and a higher number of HF drugs including antidepressants (b = 0.25, p = 0.017) were independently associated with higher fear of PA, but not with heart-focused anxiety or trait anxiety. Of the three anxiety scores, only increased fear of PA was independently associated with more avoidance behavior regarding PA (b = 0.45, SE = 0.06, p < 0.001) and with increased odds of no sports/exercise participation (OR = 1.34, 95% CI 1.03–1.74, p = 0.028). Attention towards cardiac symptoms and symptom distress were positively associated with fear of PA (p < 0.001), which explained higher fear of PA in patients with a vigilant (directing attention towards health threats) coping style (p = 0.004).
Conclusions: Fear of PA assessed by the FActS-HF15 is a specific type of anxiety in patients with HF. Attention towards and distress caused by HF symptoms appear to play a central role in fear of PA, particularly in vigilant patients who habitually direct their attention towards health threats. These findings provide approaches for tailored interventions to reduce fear of PA and to increase PA in patients with HF.
In Luxembourg, external school mediators provide help when conflicts arise in schools. The Service de médiation scolaire offers support where there is a risk of early school leaving, as well as in conflicts related to the inclusion and integration of pupils with special educational needs or from immigrant backgrounds. Michèle Schilt spoke with the head of the service, Lis De Pina, about the work of school mediators.
Investment theory and related theoretical approaches suggest a dynamic interplay between crystallized intelligence, fluid intelligence, and investment traits like need for cognition. Although cross-sectional studies have found positive correlations between these constructs, longitudinal research testing all of their relations over time is scarce. In our pre-registered longitudinal study, we examined whether initial levels of crystallized intelligence, fluid intelligence (Gf), and need for cognition (NFC) predicted changes in each other. We analyzed data from 341 German students in grades 7–9 who were assessed twice, one year apart. Using multi-process latent change score models, we found that changes in fluid intelligence were positively predicted by prior need for cognition, and changes in need for cognition were positively predicted by prior fluid intelligence. Changes in crystallized intelligence were not significantly predicted by prior Gf, prior NFC, or their interaction, contrary to theoretical assumptions. This pattern of results was largely replicated in a model including all constructs simultaneously. Our findings support the notion that intelligence and investment traits, particularly need for cognition, positively interact during cognitive development, although this interplay was unexpectedly limited to Gf.
This thesis contains four parts that are all connected by their contributions to the Efficient Market Hypothesis and decision-making literature. Chapter two investigates how national stock market indices reacted to the news of national lockdown restrictions in the period from January to May 2020. The results show that lockdown restrictions led to different reactions in a sample of OECD and BRICS countries: there was a general negative effect resulting from the increase in lockdown restrictions, but the study finds strong evidence for underreaction during the lockdown announcement, followed by some overreaction that was corrected subsequently. This under-/overreaction pattern, however, is observed mostly during the first half of our time series, pointing to learning effects. Relaxation of the lockdown restrictions, on the other hand, had a positive effect on markets only during the second half of our sample, while for the first half of the sample, the effect was negative. The third chapter investigates gender differences in stock selection preferences on the Taiwan Stock Exchange. By utilizing trading data from the Taiwan Stock Exchange over a span of six years, it becomes possible to analyze trading behavior while minimizing the self-selection bias that is typically present in brokerage data. To study gender differences, this study uses firm-level data: the percentage of male traders in a company is the dependent variable, while the company's industry and fundamental/technical aspects serve as independent variables. The results show that the percentage of women trading a company rises with the company's age, market capitalization, systematic risk, and return. Men trade more frequently and show a preference for dividend-paying stocks and for industries with which they are more familiar. The fourth chapter investigates the relationship between regret and malicious and benign envy, analyzed in two studies. In experiment 1, subjects filled out psychological scales that measured regret, the two types of envy, core self-evaluation and the Big Five personality traits. In experiment 2, felt regret was measured in a hypothetical scenario and regressed on the other variables mentioned above. The two experiments revealed a positive direct relationship between regret and benign envy. The relationship between regret and malicious envy, on the other hand, is mostly an artifact of core self-evaluation and personality influencing both malicious envy and regret. The relationship can be explained by the common action tendency of self-improvement for regret and benign envy. Chapter five discusses the differences in green finance regulation and implementation between the EU and China. China introduced the Green Silk Road, while the EU adopted the Green Deal and started working with its own green taxonomy. The first difference concerns the definition of green finance, particularly with regard to coal-fired power plants and the responsibility of nation-states for emissions abroad: China promotes fossil fuel projects abroad through its Belt and Road Initiative, whereas the EU's Green Deal does not permit such actions. Furthermore, there are policies in both the EU and China that create contradictory incentives for economic actors. On the one hand, the EU and China are improving the framework conditions for green financing while, on the other hand, still allowing the promotion of conventional fuels.
The role of central banks also differs between the EU and China. China's central bank is actively working towards aligning the financial sector with green finance. A possible new role of the European Central Bank, or the priority financing of green sectors through political decision-making, is still being debated.
Data fusion is becoming increasingly relevant in official statistics. The aim of a data fusion is to combine two or more data sources using statistical methods in order to analyse characteristics that were not jointly observed in one data source. Record linkage of official data sources using unique identifiers is often not possible due to methodological and legal restrictions. Appropriate data fusion methods are therefore of central importance for using the diverse data sources of official statistics more effectively and for jointly analysing different characteristics. However, the literature lacks comprehensive evaluations of which fusion approaches provide promising results for which data constellations. Therefore, the central aim of this thesis is to evaluate a concrete set of possible fusion algorithms, comprising classical imputation approaches as well as statistical and machine learning methods, in selected data constellations.
To specify and identify these data contexts, data- and imputation-related scenario types of a data fusion are introduced: explicit scenarios, implicit scenarios and imputation scenarios. From these three scenario types, fusion scenarios that are particularly relevant for official statistics are selected as the basis for the simulations and evaluations. The explicit scenarios are the fulfilment or violation of the Conditional Independence Assumption (CIA) and varying sample sizes of the data to be matched; both aspects are likely to have a direct, that is, explicit, effect on the performance of different fusion methods. The summed sample size of the data sources to be fused and the scale level of the variable to be imputed are considered as implicit scenarios; both aspects suggest or exclude the applicability of certain fusion methods due to the nature of the data. The choices between a univariate and a simultaneous, multivariate imputation solution and, for metric characteristics, between imputing artificially generated and previously observed values serve as imputation scenarios.
With regard to this concrete set of fusion algorithms, three classical imputation approaches are considered: Distance Hot Deck (DHD), the Regression Model (RM) and Predictive Mean Matching (PMM). With Decision Trees (DT) and Random Forest (RF), two prominent tree-based methods from the field of statistical learning are discussed in the context of data fusion. However, such prediction methods aim to predict individual values as accurately as possible, which can clash with the primary objective of data fusion, namely the reproduction of joint distributions. In addition, DT and RF only comprise univariate imputation solutions and, in the case of metric variables, impute artificially generated values instead of real observed ones. Therefore, Predictive Value Matching (PVM) is introduced as a new, statistical-learning-based nearest neighbour method, which could overcome the distributional disadvantages of DT and RF, offers both a univariate and a multivariate imputation solution and, in addition, imputes real, previously observed values for metric characteristics. Any prediction method can form the basis of the new PVM approach; in this thesis, PVM based on Decision Trees (PVM-DT) and on Random Forest (PVM-RF) is considered.
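As a minimal sketch of the matching idea behind PVM, the following Python fragment illustrates PVM-RF for a metric target variable; the synthetic data, variable names and model settings are illustrative assumptions, not the implementation used in the thesis.

```python
# PVM-RF sketch: predict the target for donors and recipients, then impute
# each recipient with the *observed* value of the donor whose prediction is
# nearest, so that only real, previously observed values are imputed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_donor = rng.normal(size=(500, 4))                      # common variables X
y_donor = X_donor @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(size=500)
X_recipient = rng.normal(size=(200, 4))                  # recipients lack Y

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_donor, y_donor)
pred_donor = rf.predict(X_donor)
pred_recipient = rf.predict(X_recipient)

# nearest-neighbour match on the predicted values
match = np.abs(pred_recipient[:, None] - pred_donor[None, :]).argmin(axis=1)
y_imputed = y_donor[match]                               # observed donor values
```

Matching on predictions rather than imputing them directly is what distinguishes PVM from plain DT/RF imputation in this sketch: the imputed values retain the donor study's empirical distribution.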
The underlying fusion methods are investigated in comprehensive simulations and evaluations. The evaluation of the various data fusion techniques focusses on the selected fusion scenarios. The basis for this is formed by two concrete and current use cases of data fusion in official statistics, the fusion of EU-SILC and the Household Budget Survey on the one hand and of the Tax Statistics and the Microcensus on the other. Both use cases show significant differences with regard to different fusion scenarios and thus serve the purpose of covering a variety of data constellations. Simulation designs are developed from both use cases, whereby the explicit scenarios in particular are incorporated into the simulations.
The results show that PVM-RF in particular is a promising and universal fusion approach under compliance with the CIA. This is because PVM-RF provides satisfactory results for both categorical and metric variables to be imputed and also offers a univariate and multivariate imputation solution, regardless of the scale level. PMM also represents an adequate fusion method, but only in relation to metric characteristics. The results also imply that the application of statistical learning methods is both an opportunity and a risk. In the case of CIA violation, potential correlation-related exaggeration effects of DT and RF, and in some cases also of RM, can be useful. In contrast, the other methods induce poor results if the CIA is violated. However, if the CIA is fulfilled, there is a risk that the prediction methods RM, DT and RF will overestimate correlations. The size ratios of the studies to be fused in turn have a rather minor influence on the performance of fusion methods. This is an important indication that the larger dataset does not necessarily have to serve as a donor study, as was previously the case.
The results of the simulations and evaluations provide concrete implications as to which data fusion methods should be used and considered under the selected data and imputation constellations. Science in general and official statistics in particular benefit from these implications, as they provide important indications for future data fusion projects when assessing which specific data fusion method could provide adequate results for the data constellations analysed in this thesis. Furthermore, with PVM, this thesis offers a promising methodological innovation for future data fusions and for imputation problems in general.
The French Enlightenment is a pivotal period in European intellectual and literary history, which can be studied through this dataset of French novels first published between 1751 and 1800. The collection contains 200 French novels in TEI/XML, encoded according to the 'level-1 schema' of the European Literary Text Collection (ELTeC) and carefully compiled to reflect the known historical publication of French novels in that period with respect to publication year, author gender and narrative form. The dataset is connected to a larger knowledge graph of 331,671 Resource Description Framework (RDF) triples built within the project 'Mining and Modeling Text' at Trier University, Germany (2019–2023).
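As a brief, hedged example of working with such a collection, the following Python snippet parses one level-1 TEI file with lxml; the file path is hypothetical, while the namespace is the standard TEI namespace.

```python
# Sketch: read one ELTeC level-1 TEI/XML novel and extract basic content.
from lxml import etree

TEI = {"tei": "http://www.tei-c.org/ns/1.0"}
tree = etree.parse("ELTeC-fra/level1/FRA00101.xml")  # hypothetical local path
title = tree.findtext(".//tei:titleStmt/tei:title", namespaces=TEI)
paragraphs = tree.findall(".//tei:text//tei:p", namespaces=TEI)
print(title, len(paragraphs))
```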
This article takes up the public debate on how legal policy should deal with hate, incitement and antisemitism, a debate that has gained intensity and urgency particularly since the Hamas terrorist attack of 7 October 2023. It examines criminal and civil law on the one hand and places a particular focus on public-law constellations on the other. In each of these areas, weaknesses and potentials of the law and of the courts are identified, while the limits of state power are also made clear. Ultimately, this is a societal problem that, notwithstanding the necessity of state action, must be countered first and foremost through information and only secondarily through the law.
In machine learning, classification is the task of predicting a label for each point in a data set. When the classes of the points in a labeled subset are already known, this information is used to recognize patterns and make predictions about the points in the remainder of the set, referred to as the unlabeled set. This scenario falls within the field of supervised learning.
However, the number of labeled points can be limited, for example because it is expensive to obtain this information. Moreover, this subset may be biased, as in the case of self-selection in a survey. Consequently, the classification performance for unlabeled points may suffer. To improve the reliability of the results, semi-supervised learning tackles the setting of labeled and unlabeled data together. In addition, in many cases, information about the size of each class is available from undisclosed sources.
This cumulative thesis presents different studies that incorporate this external cardinality information into three important algorithms for binary classification in the supervised context: support vector machines (SVM), classification trees, and random forests. From a mathematical point of view, we focus on mixed-integer programming (MIP) models for semi-supervised approaches that add a cardinality constraint for each class to each of these algorithms.
Furthermore, since the proposed MIP models are computationally challenging, we also present techniques that simplify the process of solving these problems. In the SVM setting, we introduce a re-clustering method and further computational techniques to reduce the computational cost. In the context of classification trees, we provide correct values for certain bounds that play a crucial role for solver performance. For the random forest model, we develop preprocessing techniques and an intuitive branching rule to reduce the solution time. For all three methods, our numerical results show that our approaches achieve better statistical performance for biased samples than the standard approach.
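To illustrate the core idea of a cardinality constraint in a MIP, consider the following toy sketch in Python with PuLP: given classifier scores for unlabeled points and an externally known class size, binary label variables are chosen to maximize agreement with the scores. The scores and class size are assumptions for illustration; the thesis's actual SVM, tree and forest models are considerably more involved.

```python
# Toy sketch: assign binary labels to unlabeled points so that exactly
# k_pos points receive the positive class (the cardinality constraint),
# maximizing agreement with given classifier decision values.
import pulp

scores = [0.8, -0.2, 0.5, -0.9, 0.1]   # hypothetical decision values
k_pos = 2                               # class size known from an external source

prob = pulp.LpProblem("cardinality_constrained_labeling", pulp.LpMaximize)
y = [pulp.LpVariable(f"y_{i}", cat="Binary") for i in range(len(scores))]
prob += pulp.lpSum(s * y_i for s, y_i in zip(scores, y))   # objective
prob += pulp.lpSum(y) == k_pos                             # cardinality constraint
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(y_i.value()) for y_i in y])  # e.g. [1, 0, 1, 0, 0]
```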
What does it mean when the future of one’s life is exposed to the inscrutable will of an intangible other? And what are the possibilities of still asserting oneself when pushed to the limit? Nuancing the feelings of different actors in a detention centre and analysing how everyday moods, affects and violence intertwine, I explore how the randomly cruel and often-inexplicable logic of the contemporary deportation regime pushes migrants to their limits. Taking as my starting point the argument that deportation practices are effective because they operate on an affective level, I show how affective experiences manifest themselves bodily and how violent practices and discourses reverberate in bodies. I argue that ‘bodies under pressure’ are testimonies of racialised histories of exclusion, and I show how they become calls for social recognition. Exploring small, often-unintended acts of rebellion against exhausting deportation practices, I stress the existential necessity and social importance of including oneself in the realm of meaning.
Semantic-Aware Coordinated Multiple Views for the Interactive Analysis of Neural Activity Data
Visualizing brain simulation data is in many respects a challenging task. For one, the data used in brain simulations and the resulting datasets are heterogeneous, and insight is derived by relating all their different kinds. Second, the analysis process changes rapidly while hypotheses about the results are being formed. Third, the scale of data entities in these heterogeneous datasets is manifold, ranging from single neurons to brain areas interconnecting millions of them. Fourth, the heterogeneous data consists of a variety of modalities: from time series data to connectivity data, from single parameters to sets of parameters spanning parameter spaces with multiple possible and biologically meaningful solutions, from geometrical data to hierarchies and textual descriptions, all on mostly different scales. Fifth, visualizing includes finding suitable representations and providing real-time interaction while supporting varying analysis workflows. To this end, this thesis presents a scalable and flexible software architecture for visualizing, integrating and interacting with brain simulation data. The scalability and flexibility are achieved by interconnected services forming a series of Coordinated Multiple View (CMV) systems. Multiple use cases are presented, introducing views that leverage this architecture and extend its ecosystem, resulting in a Problem Solving Environment (PSE) from which custom-tailored CMV systems can be built. The construction of such CMV systems is assisted by semantic reasoning, hence the term semantic-aware CMVs.
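As a loose illustration of the coordination principle behind CMV systems (not the thesis's service architecture), the following Python sketch shows views subscribing to a shared selection model, so that an interaction in one view propagates to all others; all names are hypothetical.

```python
# Minimal CMV coordination sketch: views observe a shared selection model.
class SelectionModel:
    def __init__(self):
        self._views = []
        self.selected_neurons = set()

    def subscribe(self, view):
        self._views.append(view)

    def select(self, neuron_ids):
        # selecting in any view updates the shared state and notifies all views
        self.selected_neurons = set(neuron_ids)
        for view in self._views:
            view.on_selection_changed(self.selected_neurons)

class TimeSeriesView:
    def on_selection_changed(self, neurons):
        print(f"time-series view now plots {sorted(neurons)}")

class ConnectivityView:
    def on_selection_changed(self, neurons):
        print(f"connectivity view highlights {sorted(neurons)}")

model = SelectionModel()
model.subscribe(TimeSeriesView())
model.subscribe(ConnectivityView())
model.select([17, 42])  # one interaction, coordinated across all views
```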
Today, almost every modern computing device is equipped with multicore processors capable of efficient concurrent and parallel execution of threads. This processor feature can be leveraged by concurrent programming, which is a challenge for software developers for two reasons: first, it introduces a paradigm shift that requires a new way of thinking. Second, it can lead to issues that are unique to concurrent programs due to the non-deterministic, interleaved execution of threads. Consequently, debugging concurrency and related performance issues is a rather difficult and often tedious task. Developers still lack thread-aware programming tools that facilitate the understanding of concurrent programs. Ideally, these tools should be part of their daily working environment, which typically includes an Integrated Development Environment (IDE). In particular, the way source code is visually presented in traditional source-code editors does not convey much information on whether the source code is executed concurrently or in parallel in the first place.
With this dissertation, we pursue the main goal of facilitating and supporting the understanding and debugging of concurrent programs. To this end, we formulate and utilize a visualization paradigm that particularly includes the display of interactive glyph-based visualizations embedded in the source-code editor close to their corresponding artifacts (in-situ).
To facilitate the implementation of visualizations that comply with our paradigm as plugins for IDEs, we designed, implemented and evaluated a programming framework called CodeSparks. After presenting the design goals and the architecture of the framework, we demonstrate its versatility with a total of fourteen plugins realized by different developers using the CodeSparks framework (CodeSparks plugins). With focus group interviews, we empirically investigated how developers of the CodeSparks plugins experienced working with the framework. Based on the plugins, deliberate design decisions and the interview results, we discuss to what extent we achieved our design goals. We found that the framework is largely target programming-language independent and that it supports the development of plugins for a wide range of source-code-related tasks while hiding most of the details of the underlying plugin development API.
In addition, we applied our visualization paradigm to thread-related runtime data from concurrent programs to foster the awareness of source code being executed concurrently or in parallel. As a result, we developed and designed two in-situ thread visualizations, namely ThreadRadar and ThreadFork, with the latter building on the former. Both thread visualizations are based on a debugging approach, which combines statistical profiling, thread-aware runtime metrics, clustering of threads on the basis of these metrics, and finally interactive glyph-based in-situ visualizations. To address scalability issues of the ThreadRadar in terms of space required and the number of displayable thread clusters, we designed a revised thread visualization. This revision also involved the question of how many thread clusters k should be computed in the first place. To this end, we conducted experiments with the clustering of threads for artifacts from a corpus of concurrent Java programs that include real-world Java applications and concurrency bugs. We found that the maximum k on the one hand and the optimal k according to four cluster validation indices on the other hand rarely exceed three. However, occasionally thread clusterings with k > 3 are available and also optimal. Consequently, we revised both the clustering strategy and the visualization as parts of our debugging approach, which resulted in the ThreadFork visualization. Both in-situ thread visualizations, including their additional features that support the exploration of the thread data, are implemented in a tool called CodeSparks-JPT, i.e., as a CodeSparks plugin for IntelliJ IDEA.
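The clustering step can be pictured with the following hedged Python sketch, which groups threads by runtime metrics and selects k via the silhouette index as a stand-in for the four validation indices used in the studies; the metric values are synthetic.

```python
# Sketch: cluster threads by thread-aware runtime metrics and choose k
# via a cluster validation index (silhouette shown here as one example).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
metrics = rng.random((12, 2))   # rows: threads; columns: runtime metrics

best_k, best_score = 2, -1.0
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(metrics)
    score = silhouette_score(metrics, labels)
    if score > best_score:
        best_k, best_score = k, score
print(best_k)   # the thesis found the optimal k rarely exceeds three
```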
With various empirical studies, including anecdotal usage scenarios, a usability test, web surveys, hands-on sessions, questionnaires and interviews, we investigated quality aspects of the in-situ thread visualizations and their corresponding tools. First, by a demonstration study, we illustrated the usefulness of the ThreadRadar visualization in investigating and fixing concurrency bugs and a performance bug. This was confirmed by a subsequent usability test and interview, which also provided formative feedback. Second, we investigated the interpretability and readability of the ThreadFork glyphs as well as the effectiveness of the ThreadFork visualization through anonymous web surveys. While we have found that the ThreadFork glyphs are correctly interpreted and readable, it remains unproven that the ThreadFork visualization effectively facilitates understanding the dynamic behavior of threads that concurrently executed portions of source code. Moreover, the overall usability of CodeSparks-JPT is perceived as "OK, but not acceptable" as the tool has issues with its learnability and memorability. However, all other usability aspects of CodeSparks-JPT that were examined are perceived as "above average" or "good".
Our work supports software-engineering researchers and practitioners in flexibly and swiftly developing novel glyph-based visualizations that are embedded in the source-code editor. Moreover, we provide in-situ thread visualizations that foster the awareness of source code being executed concurrently or in parallel. These in-situ thread visualizations can, for instance, be adapted, extended and used to analyze other use cases or to replicate the results. Through empirical studies, we have gradually shaped the design of the in-situ thread visualizations through data-driven decisions, and evaluated several quality aspects of the in-situ thread visualizations and the corresponding tools for their utility in understanding and debugging concurrent programs.