The application of machine learning and deep learning methods to hydrological modelling has advanced significantly in recent years, offering alternatives to traditional conceptual and physically based approaches. Among the numerous available algorithms, long short-term memory (LSTM) networks have proven particularly useful for the task of streamflow modelling. This thesis provides a collection of publications that investigate the capabilities, limitations and interpretability of LSTM for the purpose of streamflow modelling and climate change impact assessment within the lowland Ems catchment in Northwest Germany.
In a comparative performance evaluation, LSTM and its predecessor, the recurrent neural network, demonstrate superior accuracy compared to the conceptual HBV model across various statistical performance metrics. However, performance declines during low-flow conditions in certain sub-catchments. The evaluation of the flow duration curve reveals that the ML models more effectively capture the water balance, while HBV better represents streamflow dynamics.
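The specific metrics used in the comparison are not listed in the abstract; as an illustration, one widely used streamflow metric is the Nash-Sutcliffe efficiency (NSE), which can be sketched as follows (assuming paired daily observed and simulated discharge series):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than predicting the long-term observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residual = np.sum((observed - simulated) ** 2)
    variance = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual / variance
```

A model that always predicts the observed mean scores exactly 0, which is why NSE is a natural benchmark for comparing data-driven and conceptual models on the same gauges.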
To enhance the interpretability of LSTM, six explainable artificial intelligence techniques were applied. These methods consistently identified seasonal patterns in the temporal relevance of hydroclimatic input data. In combination with an observed correlation between the internal LSTM states and catchment-scale soil moisture dynamics, the findings suggest that LSTM models are capable of implicitly learning the relevant hydrological processes.
Subsequently, the capabilities of LSTM to model climate change impact scenarios, particularly when they extend beyond historically observed climate conditions, are addressed. An ensemble of climate change projections is provided as hydroclimatic input to evaluate the performance of LSTMs and conceptual models. While all models reveal heterogeneous alterations in streamflow under future climate conditions, significant differences emerge based on the model type. Results provide evidence that LSTMs, in combination with the temperature-based Haude formula for estimating potential evaporation, perform inadequately under altered climatic regimes, raising concerns about their applicability in long-term projections. The study also indicates the potential need to incorporate physical constraints into LSTM architectures to ensure model robustness and hydrological plausibility beyond the historical training range.
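For context, the Haude approach estimates daily potential evaporation from the early-afternoon saturation deficit. The sketch below is an assumption-laden illustration, not the thesis's implementation: the Magnus constants and the mid-range monthly coefficient `f_month` are stated here as illustrative values (in practice the coefficient varies by month and land cover).

```python
import math

def haude_pet(temp_14h_c, rel_hum_14h_pct, f_month=0.28):
    """Illustrative Haude-type potential evaporation (mm/day) from
    2 p.m. air temperature (deg C) and relative humidity (%).
    f_month is an assumed mid-range monthly coefficient."""
    # Magnus-type saturation vapour pressure in hPa (assumed constants)
    e_sat = 6.11 * math.exp(17.62 * temp_14h_c / (243.12 + temp_14h_c))
    deficit = e_sat * (1.0 - rel_hum_14h_pct / 100.0)
    # Daily values are commonly capped at 7 mm in this scheme
    return min(f_month * deficit, 7.0)
```

Because the saturation deficit grows nonlinearly with temperature, a purely temperature/humidity-based formula like this can behave differently under a warmed climate than under the training period, which is one plausible source of the inadequate extrapolation behaviour reported above.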
Collectively, this thesis contributes important insights into the applicability and interpretability of LSTM models in streamflow modelling. Despite a physically realistic representation of soil moisture dynamics in the Ems catchment, no robust change signals for streamflow under climate change can be derived. These results underscore the potential of LSTM approaches for accurate streamflow simulation; at the same time, they demand that LSTM results always be critically questioned, particularly when the models are applied outside their training range.
Modelling of o-PO4 inputs into Saarland surface water bodies under dry-weather conditions
(2025)
The availability of ortho-phosphate (o-PO₄) contributes substantially to the eutrophication of rivers, jeopardizing the achievement of "good ecological status" under the EU Water Framework Directive. Since municipal wastewater treatment plants are central input sources, reducing o-PO₄ at this point is gaining importance. In addition to chemical phosphorus elimination, the fourth treatment stage in particular, although primarily designed to remove micropollutants, offers a synergy effect with potential phosphorus removal rates of up to 85%.
To assess the influence of such a treatment stage, a model was developed for selected Saarland surface water bodies (OWK) that represents the dry-weather case as the eutrophication-relevant scenario. A central component is a newly developed retention approach that accounts for biochemical and physical processes such as adsorption, sedimentation and biological assimilation. Based on the difference between the emission-side o-PO₄ load balance and the measured o-PO₄ content, reduction rates per metre of flow length were derived for each water body, and finally an equation was formulated to estimate retention as a function of catchment size. Validation shows sufficient model accuracy, although negative load differences in some waters point to additional inputs that cannot be clearly quantified, for example from agriculture or sewer losses.
The scenario analysis demonstrates that a fourth treatment stage does, in principle, contribute to reducing o-PO₄ at the monitoring sites. However, the applicable guide value is only undercut if all treatment plants in a water body are retrofitted, and even then only in some cases. The fourth treatment stage alone is therefore not a sufficient alternative to the measures of Saarland's third river basin management plan, but it can serve as a complementary strategy for reducing phosphorus inputs.
Spatial microsimulation is an important tool for integrating geographical information into the evaluation of public policies and the analysis of social phenomena in urban regions. These models simulate the behavior of, and interaction between, units of the region, such as individuals, households or firms, under specific conditions that may or may not involve projections over time. This requires a representative base data set for the respective units.
In this thesis, we focus on the geo-referencing step of the population in the construction of this data set, where we define the location of the individuals so that the allocation obtained is representative in relation to the population of the region. To do this, we consider the assignment of households to dwellings with specific coordinates by solving a maximum weight matching problem where side constraints are included so that the allocation obtained satisfies statistical structures intrinsic to the considered region.
The model of this problem represents each feasible assignment of a household to a dwelling as a binary variable, which results in billions of variables for medium-sized municipalities such as the city of Trier, Germany. Standard solvers for mixed-integer linear optimization therefore cannot solve it within reasonable time and memory limits. Hence, we develop two approaches capable of producing high-quality allocations using a reasonable amount of computational resources: one based on specific decomposition algorithms, and the other characterized by the application of an approximation algorithm within a Lagrangian relaxation of the side constraints.
We theoretically explore the allocations obtained by both approaches and perform an extensive computational study using synthetic data sets and real-world data sets associated with the city of Trier. The results show that the developed methods are able to obtain near-optimal solutions using significantly less memory and time than the solver Gurobi, which enables them to tackle significantly larger instances, with approximately 100 000 households and dwellings. Furthermore, the allocations obtained for the real-world data sets correspond to a realistic population distribution, which strengthens the practical applicability of our methods.
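The core of the problem described above, stripped of the statistical side constraints that make it hard at scale, is a maximum-weight perfect matching between households and dwellings. A minimal brute-force sketch (with hypothetical toy weights, not the thesis's real weighting scheme) illustrates the structure:

```python
import itertools
import random

random.seed(0)
n = 5
# weights[h][d]: how well household h fits dwelling d (toy data only)
weights = [[random.random() for _ in range(n)] for _ in range(n)]

# Brute-force maximum-weight perfect matching over all n! assignments.
# Fine for a toy instance, hopeless at the ~100 000-household scale the
# thesis targets, which is why decomposition and Lagrangian-relaxation
# approaches are needed there.
best_value, best_assignment = max(
    (sum(weights[h][d] for h, d in enumerate(perm)), perm)
    for perm in itertools.permutations(range(n))
)
```

Each binary variable of the full model corresponds to one (household, dwelling) pair here; the side constraints of the thesis additionally couple these variables so that the chosen matching reproduces regional statistics.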
Some research findings show that emotional states influence, or are associated with, cognitive domains. Building on these findings, two studies were designed. Study 1 examined the relationship between the valences of dispositional emotional states and the global self-evaluation of memory (metamemory) among pre-service teachers (N = 218). Dispositional states were measured with the German Positive and Negative Affect Schedule (PANAS) (Krohne, Egloff, Kohlmann & Tausch, 1996), and the global self-evaluation of memory with the German Squire Subjective Memory Questionnaire (SSMQ) (Wolf, 2017). It was hypothesized that positive valence, in contrast to negative valence, would be positively related to higher memory self-ratings. The results confirm the hypotheses. In Study 2, current valence was induced using the Open Affective Standardized Image Set (OASIS) (Kurdi, Lozano & Banaji, 2017) to examine changes in metamemory and actual memory performance among pre-service teachers (N = 44). It was hypothesized that positive valence would affect metamemory and memory performance positively, negative valence negatively, and neutral valence not at all. Further associations between metamemory and memory performance, as well as between induced valence and memory performance, were hypothesized. The measurement instruments from Study 1 remained the same. Memory performance was operationalized using a low-meaning syllable test after Ebbinghaus (1885). The results do not confirm the hypotheses. The emotion induction was unsuccessful; the results can therefore not be attributed to a change in valence. As in Study 1, an association between dispositional states and metamemory emerged.
Further exploratory results, particularly with regard to gender, are presented. The findings are relevant for the professionalization of pre-service teachers.
Three-Point Difference Schemes of High Order of Accuracy for Solving the Sturm-Liouville Problem
(2025)
The dissertation is devoted to the construction and justification of three-point difference schemes of high order of accuracy for solving the Sturm-Liouville problem. A new algorithmic realization of the exact three-point difference scheme on a non-uniform grid has been developed. We show that to compute the coefficients of the exact scheme at an arbitrary grid node, it is necessary to solve two auxiliary Cauchy problems for a system of three first-order linear ordinary differential equations. The coefficient stability of the exact three-point difference scheme is proved. If the Cauchy problems are solved numerically using any one-step method, we obtain a truncated three-point difference scheme. An accuracy estimate for the truncated schemes is obtained, and an algorithm for computing their solution is developed.
We also develop a new algorithmic realization of the exact three-point difference scheme for the Sturm-Liouville problem with singularities at the ends of the interval. As in the case of the classical Sturm-Liouville problem, finding the coefficients of the exact three-point difference scheme requires solving two auxiliary Cauchy problems for each grid node. The coefficient stability of the exact three-point difference scheme is proved. Since the Cauchy problems for the first and last grid nodes are singular, a Taylor series method is developed to solve them. An accuracy estimate for the truncated three-point difference schemes is obtained. To solve the difference scheme, Newton's iterative method is used.
Numerical experiments are presented which confirm the efficiency of the proposed approach.
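As a point of reference for the three-point structure discussed above, the classical second-order scheme for the model problem -u'' = λu on (0, π) with u(0) = u(π) = 0 (exact eigenvalues k²) can be sketched as follows. This is only the standard O(h²) baseline that high-order schemes improve upon, not the thesis's construction:

```python
import numpy as np

# Uniform grid with n interior points on (0, pi)
n = 200
h = np.pi / (n + 1)

# Tridiagonal matrix of the classical three-point scheme
# (-u_{i-1} + 2 u_i - u_{i+1}) / h^2 = lambda * u_i
main = np.full(n, 2.0) / h**2        # main diagonal
off = np.full(n - 1, -1.0) / h**2    # sub- and super-diagonal
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigvals = np.linalg.eigvalsh(A)      # ascending approximate eigenvalues
# eigvals[0] approximates 1, eigvals[1] approximates 4, with O(h^2) error
```

The discrete eigenvalues converge at rate O(h²), which motivates exact and high-order truncated schemes: they achieve much smaller errors on the same three-point stencil at the price of solving auxiliary Cauchy problems for the coefficients.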
Biodiversity is threatened by a wide range of anthropogenic activities. Monitoring offers critical insights into how and why biodiversity is changing, helping to identify effective measures for maintaining biodiversity and its ecosystem services. However, conventional biodiversity monitoring methods are labor-intensive, and standardized long-term monitoring series are scarce. DNA-based approaches like metabarcoding of environmental DNA (eDNA) promise rapid, cost-efficient, and highly resolved community data. At the same time, scientists are looking for alternative data sources that can compensate for the lack of long-term monitoring data to study past biodiversity changes. This work explores the potential of the German Environmental Specimen Bank (ESB), a pollution monitoring archive that appears particularly promising for retrospective biodiversity monitoring. Biota samples from different ecosystems across the country are collected and archived at an exceptional level of standardization. The sampled species act as natural eDNA samplers, accumulating genetic traces from surrounding organisms, and the cryogenic storage at the ESB preserves any eDNA in the samples in its original state. In this thesis, Chapter I serves as an introductory chapter, outlining the general opportunities and challenges of metabarcoding for assessing biodiversity. Chapter II focuses on primer design and tests the utility of ESB sampling species like mussels and macroalgae for characterizing the surrounding community. Both chapters form the basis for Chapters III to V, which report the use of ESB time series to uncover sample-associated communities and the changes they undergo. Chapter III illustrates the value of these time series by revealing the invasion trajectory of an alien barnacle into German coastal waters and linking the process to climate change.
Chapter IV forms the core of this thesis by presenting an expanded measurement of biodiversity change in ESB time series across different taxonomic groups and ecosystem types. Here, a gradual compositional change (turnover) is reported from bacterial, fungal, microeukaryotic, and metazoan communities tending to either spatial homogenization or differentiation. Observed trends are tested for significance using a dynamic model of community ecology based on the equilibrium theory of island biogeography. The model reveals significantly accelerated turnover rates across all taxonomic groups and ecosystems investigated, suggesting a common, anthropogenically induced driver of biodiversity change. Since these analyses most likely include DNA derived from dead as well as from living organisms, Chapter V aims to separate both groups by metabarcoding both DNA and less stable ribosomal RNA from mussel samples. Contrary to the hypothesis, RNA is detectable from both living endobionts and dietary taxa. However, it outcompetes DNA in detecting microeukaryotic biodiversity. In summary, this thesis demonstrates the outstanding potential of archived ESB samples for retrospective biodiversity monitoring, a resource that offers many further untapped opportunities for future biodiversity research at multiple scales.
In most textbooks optimal sample allocation is tailored to rather theoretical examples. However, in practice we often face large-scale surveys with conflicting objectives and many restrictions on the quality and cost at population and subpopulation levels. This multiobjectiveness results in a multitude of efficient sample allocations, each giving different weight to a single survey purpose. Additionally, since the input data to the allocation problem often relies on supplementary information derived from estimation, historical data, or expert knowledge, allocations might be inefficient when specified for sampling.
This doctoral thesis presents a framework for optimal allocation to standard sampling schemes that allows for specifying the tradeoff between different objectives and analyzing their sensitivity to other problem components, aiming to support a decision-maker in identifying a most preferred sample allocation. It dedicates a full chapter to each of the following core questions: 1) How to efficiently incorporate quality and cost constraints for large-scale surveys, say, for thousands of strata with hundreds of precision and cost constraints? 2) How to handle vector-valued objectives with their components addressing different, possibly conflicting survey purposes? 3) How to consider uncertainty in the input data?
The techniques presented can be used separately or in combination as a general problem-solving framework for constrained multivariate and multidomain, possibly uncertain, sample allocation. The main problem is formulated in a way that highlights the different components of optimal sample allocation and can be taken as a gateway to developing solution strategies for each of the questions above, while shifting the focus between different problem aspects. The first question is addressed through a conic quadratic reformulation, which can be efficiently solved for large problem instances using interior-point methods. Based on this, the second question is tackled using a weighted Chebyshev minimization, which provides insight into the sensitivity of the problem and enables a stepwise procedure for considering nonlinear decision functionals. Lastly, uncertainty in the input data is addressed through regularization, chance constraints and robust problem formulations.
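The "rather theoretical examples" from textbooks that this framework generalizes can be illustrated by classical Neyman allocation, which for a single objective and a fixed total sample size samples each stratum proportionally to its size times its standard deviation (toy numbers below are hypothetical):

```python
# Textbook Neyman allocation: for a fixed total sample size n, sampling
# stratum h proportionally to N_h * S_h minimizes the variance of the
# stratified mean estimator. The thesis moves far beyond this single-
# objective case: many precision/cost constraints, vector objectives,
# and uncertain inputs.
N = [5000, 3000, 2000]    # stratum population sizes (toy data)
S = [10.0, 20.0, 40.0]    # stratum standard deviations (toy data)
n = 600                   # total sample size

products = [Nh * Sh for Nh, Sh in zip(N, S)]
allocation = [n * p / sum(products) for p in products]
# The most variable stratum receives the largest share of the sample.
```

Real surveys add box constraints (minimum and maximum sample sizes per stratum), domain-level precision requirements and cost limits, which is exactly where closed-form rules like this break down and the conic and Chebyshev formulations described above take over.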
Building on Social Virtual Reality to Support Flexible Collaboration and Enrich Therapy Sessions
(2025)
Social virtual environments allow their users to meet and collaborate in a shared three-dimensional space, even when far apart from each other in the real world. Within these spaces, the appearance and interaction capabilities of both users and environments can be adapted and changed in a myriad of ways. To enable virtual environments to fulfill their potential of supporting a wide variety of collaboration use-cases, both the impacts of basic interaction design decisions and the individual needs of specific usage areas need to be explored further.
This thesis approaches this topic in two ways. First, the basic building blocks of collaboration in social virtual environments are explored by asking the question: "How can social virtual spaces that allow interaction beyond real-world constraints utilize the potential of mutual assistance and shared workflows between multiple users?". Going into further detail for a serious use-case in which direct collaborative interactions and their effect on the included users are especially important, it then explores the potential of collaborative virtual spaces in the therapy domain by asking "How can the potential of social virtual spaces be utilized to support and improve therapy encounters?"
With regards to the first research question, the thesis presents two theoretical frameworks detailing different aspects of supporting smooth and varied collaboration processes. In addition, several user studies on the topic of collaborative virtual interaction are described, focusing on the role that different users can play during shared interaction and the effects that this distribution of roles and responsibilities has on both the performance and experience of the involved user pairs.
The results presented for this first research question show that social virtual spaces have the potential to provide dedicated support for collaborative workflows. To enable users to adapt their working mode individually and as a team, interaction techniques should complement a team's natural interaction and communication. When presenting novel interactions to users, providing them with a way to support each other can ease their adaptation to these interactions. In these cases, the inclusion of all interested collaborators as active participants should be prioritized in order to let all users benefit from being immersed in a virtual environment.
Addressing the combination of social virtual spaces with therapy in relation to the second research question, this thesis presents the results of a series of interviews with practicing physio- and psychotherapists. Motivated by the recorded expert feedback, it also reports on two more detailed explorations of specific areas of interest. The work presented for the second research question demonstrates the promise of using virtual environments in both exercise- and conversation-based therapy practice. Investigating the potential of shared interactions, the exploration of virtual recordings and the adaptation of virtual appearances, the presented work uncovered several topic areas that could be further explored regarding their possible use in the treatment of patients.
Taken together, the six research articles presented in this thesis show both the value of supporting and understanding shared interactions in virtual spaces and their potential place in serious use-cases like the therapy domain. When introducing shared virtual environments to new user groups, the opportunity for mutual support through shared interaction techniques could be a crucial building block towards making virtual spaces both accessible and attractive to a variety of users.
The present dissertation deals with variable stress patterns in English complex adjectives such as celebratory, identifiable or imaginative. This variation is usually described in terms of retaining the stress from the embedded base (idéntify -> idéntifiable) or deviating from the stress of the embedded base (idéntify -> identifíable). While several accounts have explored this variation, none of them have been able to identify a plausible reason for why it occurs. Additionally, the role of individual speaker differences has been disregarded in the discussion. This dissertation therefore explores the empirically observable extent of the variation and investigates possible causes of it with a special focus on individual differences between speakers. It uses data from a complex online experiment that included five different tasks to assess speakers' stress production, perception, morphological processing, vocabulary size and other factors. It furthermore tests the predictions of previous accounts on the large set of authentic utterances from speakers collected using this online experiment. The data show that individual differences in vocabulary size between speakers are a significant predictor of a speaker's tendency to retain the stress of the embedded base.
The new millennium has been characterized by rising digitalization, the proliferation of shadow banking, and significant advancements in machine learning and natural language processing. These trends present both challenges and opportunities, which my dissertation addresses. This cumulative dissertation investigates critical aspects of financial stability, monetary policy, and the transition towards cashless economies through three distinct but interrelated studies.
The first paper examines the risk-taking channel of monetary policy transmission within the euro area, focusing on shadow banks. Through vector autoregressive models, it assesses the impact of conventional and unconventional monetary policy shocks on shadow banks' asset growth and risk asset ratios. The results indicate that lower interest rates lead to a portfolio reallocation towards riskier assets and a general expansion of assets in shadow banks. In the case of conventional monetary policy shocks, both effects last three times as long as in the case of unconventional monetary policy shocks. Country-specific as well as sector-specific estimations confirm these findings. This study bridges gaps in the existing literature, especially in the eurozone, by highlighting the significant role shadow banks play in monetary policy transmission, suggesting implications for financial regulation and stability.
The second paper explores the influence of financial stability considerations on US monetary policy, particularly during the Great Recession. Utilizing natural language processing and machine learning techniques on congressional hearings, this study constructs indicators for financial stability sentiment expressed by the Federal Reserve Chairs. Empirical analysis is conducted using Taylor-rule models, revealing that negative financial stability sentiment is associated with a more accommodative monetary policy stance, even before the Great Recession. This work provides new insights into the integration of financial stability concerns into monetary policy frameworks, demonstrating the need for a balanced approach to economic stability. The article suggests that under a dual mandate, such as that of the Federal Reserve, financial stability can, to some extent, already be factored into monetary policy deliberations.
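The Taylor-rule models mentioned above build on Taylor's (1993) baseline rule for the nominal policy rate; a minimal sketch of that starting point follows (the paper's estimated specification, which adds financial-stability sentiment indicators, is not reproduced here):

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Taylor's (1993) baseline rule for the nominal policy rate (in %).

    r_star is the equilibrium real rate and pi_target the inflation
    target; both 2% in Taylor's original calibration. Augmented rules,
    as in the study above, add further regressors such as sentiment."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap
```

With inflation on target and a closed output gap, the rule prescribes a 4% nominal rate; a negative financial-stability sentiment term, as estimated in the paper, would shift the prescribed rate downward relative to this benchmark.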
The third paper sheds new light on the ``cash paradox'' by uncovering factors of the cashless transition that have not been entirely understood so far. Using a comprehensive dataset across 65 countries, the study employs panel data models to explain the paradox (increasing demand for central bank money despite soaring digitalization), especially among technologically advanced countries such as Japan. Empirical evidence suggests that digitalization is not significantly associated with higher reliance on physical cash. The study uncovers a unique non-linear relationship between trust and cash usage (the ``Arch of Trust''), which holds after addressing potential endogeneity issues using 2SLS estimation. In contrast to widespread misinterpretations of Keynes' (1937) reasons for holding cash, the findings highlight that distrust is the key factor unlocking two distinct puzzles in economics, linking cash hoarding with ``missing'' funds on capital markets and the slower shift toward digital payments in low-trust societies. A key insight is the role of trust as a (social) insurance, cushion or safety net, dampening the perception of risk and reducing the precautionary and transactionary demand for physical cash while encouraging a shift towards riskier alternatives. This, in turn, is connected to a third puzzle, the ``paradox of prudence'': a shift from riskier investments to safer assets such as cash may be prudent at the individual level but risky for the overall economy, a concern for macroprudential policymakers. Additionally, the research highlights the critical role of culture in driving the global movement towards cashless economies: societies that are more self-expression-oriented (the main cultural dimension considered) and culturally closer to Sweden are associated with less cash-intensive economies. These insights are vital for macroprudential regulators as well as for policymakers designing payment systems and CBDCs in culturally diverse regions like the Eurozone.
Collectively, these papers contribute to a deeper understanding of monetary policy, financial stability, and the transition from cash-based to (nearly) cashless societies, offering significant theoretical and practical implications for academics, regulators and central bankers.