There is a wide range of methodologies for policy evaluation and socio-economic impact assessment. A fundamental distinction can be made between micro and macro approaches. In contrast to micro models, which focus on the micro-unit, macro models are used to analyze aggregate variables. The ability of microsimulation models to capture interactions occurring at the micro-level makes them particularly suitable for modeling complex real-world phenomena. The inclusion of a behavioral component into microsimulation models provides a framework for assessing the behavioral effects of policy changes.
The labor market is a primary area of interest for both economists and policy makers. The projection of labor-related variables is particularly important for assessing economic and social development needs, as it provides insight into the potential trajectory of these variables and can be used to design effective policy responses. As a result, the analysis of labor market behavior is a primary area of application for behavioral microsimulation models. Behavioral microsimulation models allow for the study of second-round effects, including changes in hours worked and participation rates resulting from policy reforms. It is important to note, however, that most microsimulation models do not consider the demand side of the labor market.
The combination of micro and macro models offers a possible solution, as it is a promising way to integrate the strengths of both model types. Of particular relevance is the combination of microsimulation models with general equilibrium models, especially computable general equilibrium (CGE) models. CGE models are classified as structural macroeconomic models, which are defined by their basis in economic theory. Another important category of macroeconomic models are time series models. This thesis examines the potential for linking micro and macro models. The different types of microsimulation models are presented, with special emphasis on discrete-time dynamic microsimulation models. The concept of behavioral microsimulation is introduced to demonstrate the integration of a behavioral element into microsimulation models. To this end, the concept of utility is introduced and the random utility approach is described in detail. In addition, a brief overview of macro models is given, with a focus on general equilibrium models and time series models. Various approaches for linking micro and macro models, which can be categorized as either sequential or integrated approaches, are presented. Furthermore, the concept of link variables, which play a central role in combining both models, is introduced. The focus is on the most complex sequential approach, i.e., the bi-directional linking of behavioral microsimulation models with general equilibrium macro models.
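In behavioral microsimulation, the random utility approach is commonly operationalized as a (multinomial) logit model of discrete choices such as hours worked. A minimal sketch, with hypothetical alternatives and utility values chosen purely for illustration:

```python
import numpy as np

def choice_probabilities(utilities):
    """Multinomial-logit choice probabilities from deterministic utilities.

    Under the random utility model U_j = V_j + eps_j with i.i.d. Gumbel
    errors, the probability of choosing alternative j is
    exp(V_j) / sum_k exp(V_k).
    """
    v = np.asarray(utilities, dtype=float)
    v = v - v.max()          # subtract the max for numerical stability
    e = np.exp(v)
    return e / e.sum()

# Hypothetical labour-supply choice: utilities of working 0, 20 or 40 hours
probs = choice_probabilities([0.2, 0.9, 0.5])
print(probs.round(3))
```

A policy reform that changes net incomes changes the deterministic utilities, and the resulting shift in these probabilities is exactly the behavioral (second-round) effect such models capture.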
In recent years, the use of drones has increased significantly. This is due, among other things, to their improved performance, good availability, and ease of use. This has enabled research applications that were previously impossible or prohibitively expensive. In research, a camera is frequently used as the data-recording sensor. Combined with a drone, areas can be overflown easily and inexpensively in order to explore, observe, or monitor them. Besides conventional cameras, multispectral cameras and lidar are also frequently used as sensors. Radar, by contrast, has seen hardly any use on small drones. The goal of this research was to investigate whether state-of-the-art radar technology can add value to remote sensing with small drones.
To this end, modern radar sensors from the automotive sector were selected. Both quadrocopters and a fixed-wing drone were used as platforms. MATLAB was used for the analysis, computation, and evaluation of the data. The first approach was based on a fixed-wing drone, which offers full access to its flight controls, so that special flight-control requirements can be taken into account. However, a fixed-wing drone cannot produce slow or even static aerial recordings, which are needed to gain experience with the radar data. For this reason, a radar measurement system was subsequently designed that can be operated independently of the drone. Together with a quadrocopter, static radar measurements could thus be carried out to confirm the usability of radar data in remote sensing. In this form, however, the measurement system could only be used for two-dimensional applications. The subsequent research investigated whether a radar sensor that measures only in two dimensions can be used to create three-dimensional recordings. A hut was chosen as the test object to be reconstructed from the radar data. For this purpose, an eleven-step data-processing pipeline was designed, with which the hut could be reconstructed to an accuracy of 0.6 meters. The final part of the research examined whether the accuracy of the measurement system could be increased in order to serve even more use cases. A new radar sensor with higher accuracy was employed. The research concentrated on removing the radar data's dependence on the inaccurate attitude sensor: the flight attitude was computed from the radar data themselves, which determines it more accurately than the attitude sensor alone. Only this makes it possible to actually exploit the higher accuracy of the new radar sensor.
With the results of this research and the radar sensors presented, remote sensing with small drones will in future have radar sensors at its disposal alongside the classical sensors. The measurement system and the findings of this work are already being used to investigate first specific applications in research projects. In addition, use cases outside remote sensing were identified. Further developments in autonomous driving will drive performance improvements in radar sensors, so that even better radar sensors will become available to remote sensing in the future.
In this thesis, the hedging behaviour of airlines from 2005 to 2019 is analysed using an unbalanced panel dataset of 78 airlines from 39 countries. The analysis focuses on financial and operational hedging as well as their influence on CO2 emissions and the development of CO2 emissions over time. Probit models with random effects and OLS models with fixed effects were used for the analysis.
The results regarding the relationship between leverage and financial hedging indicate a negative relationship between leverage and financial fuel hedging and a non-linear convex relationship for highly leveraged airlines, which is contrary to the theory of financial distress.
In addition, the study provides evidence that airlines using other types of derivatives, such as interest rate derivatives, engage in more fuel hedging.
In terms of operational hedging, the analysis suggests that operating a diversified fleet is a complement to, rather than a substitute for, financial hedging. With regard to alliance membership, the results do not show that alliance membership is a substitute for financial hedging, as members of alliances are more likely to engage in hedging transactions and to a greater extent.
The analysis shows that relative CO2 emissions fell over the period under review, but the same does not hold for absolute emissions. No general statement can be made about the influence of financial and operational hedging on CO2 emissions, as the results are mixed.
Circularity and circular business models in the wood industry: an empirical investigation
(2025)
The ecological state of the Earth is critical as a result of pollution, waste generation, and CO₂-driven climate change. At around 40%, the building and construction sector contributes substantially to global greenhouse gas emissions. Wood is considered a climate-friendly alternative to concrete and steel, but it, too, must be used sustainably. With reuse, the circular economy offers a forward-looking concept: roughly 45% of the wood recovered from building demolition is potentially usable as a raw material. This opens up alternative sources of raw materials and reduces waste.
Despite this potential, the circularity rate of the world economy currently stands at only 7.2%. Against this background, the dissertation investigates which competitive strategies and which organizational capabilities foster the development of circular business models. The focus is on the wood industry of the DACH region, which is historically shaped by sustainable forestry but has so far followed predominantly linear structures.
The thesis combines a theoretical foundation, a four-year literature review, expert interviews, and, at its core, a quantitative company survey (n = 200). From this, an activity-oriented scale for assessing the circularity of a business model was developed. Three perspectives were analyzed: capabilities, strategies, and stakeholders.
With respect to the capability perspective, dynamic capabilities were found to have a positive effect on the implementation of circularity. Within the strategy perspective, it became clear that innovation leadership has a positive effect on the implementation of the circular economy. Moreover, both innovation leadership and quality leadership have a positive indirect effect on the development of circular business models via dynamic capabilities. Within the stakeholder perspective, stakeholder pressure was found to act as a catalyst in combination with a green corporate image: the influence of stakeholders leads companies to translate a green image into a substantive implementation phase. Furthermore, stakeholder pressure emerged as a central driver of change: while the direct effects of dynamic capabilities decrease under pressure, their indirect effects on achieving circularity increase. Finally, recommendations for companies as well as scientific implications and future research opportunities are derived.
Case-Based Reasoning (CBR) is a symbolic Artificial Intelligence (AI) approach that has been successfully applied across various domains, including medical diagnosis, product configuration, and customer support, to solve problems based on experiential knowledge and analogy. A key aspect of CBR is its problem-solving procedure, where new solutions are created by referencing similar experiences, which makes CBR explainable and effective even with small amounts of data. However, one of the most significant challenges in CBR lies in defining and computing meaningful similarities between new and past problems, which heavily relies on domain-specific knowledge. This knowledge, typically only available through human experts, must be manually acquired, leading to what is commonly known as the knowledge-acquisition bottleneck.
One way to mitigate the knowledge-acquisition bottleneck is through a hybrid approach that combines the symbolic reasoning strengths of CBR with the learning capabilities of Deep Learning (DL), a sub-symbolic AI method. DL, which utilizes deep neural networks, has gained immense popularity due to its ability to automatically learn from raw data to solve complex AI problems such as object detection, question answering, and machine translation. While DL minimizes manual knowledge acquisition by automatically training models from data, it comes with its own limitations, such as requiring large datasets, and being difficult to explain, often functioning as a "black box". By bringing together the symbolic nature of CBR and the data-driven learning abilities of DL, a neuro-symbolic, hybrid AI approach can potentially overcome the limitations of both methods, resulting in systems that are both explainable and capable of learning from data.
The focus of this thesis is on integrating DL into the core task of similarity assessment within CBR, specifically in the domain of process management. Processes are fundamental to numerous industries and sectors, with process management techniques, particularly Business Process Management (BPM), being widely applied to optimize organizational workflows. Process-Oriented Case-Based Reasoning (POCBR) extends traditional CBR to handle procedural data, enabling applications such as adaptive manufacturing, where past processes are analyzed to find alternative solutions when problems arise. However, applying CBR to process management introduces additional complexity, as procedural cases are typically represented as semantically annotated graphs, increasing the knowledge-acquisition effort for both case modeling and similarity assessment.
The key contributions of this thesis are as follows: It presents a method for preparing procedural cases, represented as semantic graphs, to be used as input for neural networks. Handling such complex, structured data represents a significant challenge, particularly given the scarcity of available process data in most organizations. To overcome the issue of data scarcity, the thesis proposes data augmentation techniques to artificially expand the process datasets, enabling more effective training of DL models. Moreover, it explores several deep learning architectures and training setups for learning similarity measures between procedural cases in POCBR applications. This includes the use of experience-based Hyperparameter Optimization (HPO) methods to fine-tune the deep learning models.
Additionally, the thesis addresses the computational challenges posed by graph-based similarity assessments in CBR. The traditional method of determining similarity through subgraph isomorphism checks, which compare nodes and edges across graphs, is computationally expensive. To alleviate this issue, the hybrid approach seeks to use DL models to approximate these similarity calculations more efficiently, thus reducing the computational complexity involved in graph matching.
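To illustrate why approximating similarity is attractive here: even a crude proxy such as the Jaccard overlap of node-label multisets runs in linear time, whereas subgraph-isomorphism checks are NP-hard in general. The toy function below is not the thesis's learned measure, merely a contrast to exact graph matching (the process labels are hypothetical):

```python
from collections import Counter

def label_jaccard(nodes_a, nodes_b):
    """Cheap graph-similarity proxy: Jaccard overlap of node-label multisets.

    Linear-time, in contrast to subgraph-isomorphism checks that compare
    nodes and edges across graphs. Illustrative only; a learned DL-based
    measure can capture far richer structural and semantic information.
    """
    a, b = Counter(nodes_a), Counter(nodes_b)
    inter = sum((a & b).values())   # multiset intersection size
    union = sum((a | b).values())   # multiset union size
    return inter / union if union else 1.0

# Hypothetical semantic node labels of two manufacturing process graphs
print(label_jaccard(["mill", "drill", "polish"], ["mill", "drill", "coat"]))
```

The trade-off sketched here (speed versus fidelity of the similarity signal) is the same one the DL-based approximation addresses, with neural networks replacing the hand-picked proxy.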
The experimental evaluations of the corresponding contributions provide consistent results that indicate the benefits of using DL-based similarity measures and case retrieval methods in POCBR applications. The comparison with existing methods, e.g., based on subgraph isomorphism, shows several advantages but also some disadvantages of the compared methods. In summary, the methods and contributions outlined in this work enable more efficient and robust applications of hybrid CBR and DL in process management applications.
When natural phenomena and data-based relations are driven by dynamics which are not purely local, they cannot be described satisfactorily by partial differential equations. As a consequence, mathematical models governed by nonlocal operators are of interest. This thesis is concerned with nonlocal operators of the form
$\mathcal{L}u(x) = \mathrm{PV} \int_{\mathbb{R}^d} (u(x)-u(y)) \, K(x,\mathrm{d}y), \quad x \in \mathbb{R}^d,$
which are determined through a family of Borel measures $K=(K(x, \cdot))_{x \in \mathbb{R}^d}$ on $\mathbb{R}^d$ and which act on the vector space of Borel measurable functions $u: \mathbb{R}^d \rightarrow \mathbb{R}$. For a large class of families $K$, namely those where $K$ is a symmetric transition kernel satisfying a specific non-degeneracy condition, a variational theory for nonlocal equations of the type $\mathcal{L}u=f$ is established which builds upon tools from both measure theory and classical analysis. While measure theory is used to provide a nonlocal integration by parts formula that allows one to set up a reasonable variational formulation of the above equation depending on the particular boundary condition (Dirichlet, Robin, Neumann) considered, Hilbert space theory and fixed-point approaches are utilized to develop sufficient conditions for the existence of variational solutions. This theory is then applied to two specific realizations of $\mathcal{L}$ of interest before a weak maximum principle is established, which is finally used to study overlapping domain decomposition methods for the nonlocal and homogeneous Dirichlet problem.
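For symmetric $K$, a nonlocal integration by parts formula typically leads to a weak formulation of the following standard shape (a sketch with generic symbols; the precise function spaces and boundary terms depend on the boundary condition considered): find $u$ such that
$\mathcal{E}(u,v) := \frac{1}{2} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} (u(x)-u(y))(v(x)-v(y)) \, K(x,\mathrm{d}y) \, \mathrm{d}x = \int_{\Omega} f(x) v(x) \, \mathrm{d}x$
for all admissible test functions $v$, where $\Omega \subset \mathbb{R}^d$ is the domain of the equation. The symmetry of the kernel is what makes the bilinear form $\mathcal{E}$ symmetric, so that Hilbert space methods and fixed-point arguments become applicable.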
The application of machine learning and deep learning methods to hydrological modelling has advanced significantly in recent years, offering alternatives to traditional conceptual and physically based approaches. Within the numerous algorithms, long short-term memory (LSTM) networks have proven themselves particularly useful for the task of streamflow modelling. This thesis provides a collection of publications that investigate the capabilities, limitations and interpretability of LSTM for the purpose of streamflow modelling and climate change impact assessment within the lowland Ems catchment in Northwest Germany.
Within a comparative performance evaluation, LSTM and its predecessor, the recurrent neural network, demonstrate superior accuracy compared to the conceptual HBV model across various statistical performance metrics. However, a decline in performance was observed during low-flow conditions in certain sub-catchments. The evaluation of the flow duration curve revealed that the ML models more effectively capture the water balance, while HBV better represents streamflow dynamics.
To enhance the interpretability of LSTM, six explainable artificial intelligence techniques were applied. These methods consistently identified seasonal patterns in the temporal relevance of hydroclimatic input data. In combination with an observed correlation between the internal LSTM states and catchment-scale soil moisture dynamics, the findings suggest that LSTM models are capable of implicitly learning the relevant hydrological processes.
Next, the capability of LSTMs to model climate change impact scenarios, particularly when these extend beyond historically observed climate conditions, is addressed. An ensemble of climate change projections is provided as hydroclimatic input to evaluate the performance of LSTMs and conceptual models. While all models reveal heterogeneous alterations in streamflow under future climate conditions, significant differences emerge depending on the model type. The results provide evidence that LSTMs, in combination with the temperature-based Haude formula for estimating potential evaporation, perform inadequately under altered climatic regimes, raising concerns about their applicability in long-term projections. The study also indicates the potential need to incorporate physical constraints into LSTM architectures to ensure model robustness and hydrological plausibility beyond the historical training range.
Collectively, this thesis contributes important insights into the applicability and interpretability of LSTM models in streamflow modelling. Despite a physically realistic representation of the soil moisture dynamics of the Ems catchment, no robust change signals for streamflow under climate change can be derived. These results underscore the potential of LSTM approaches for accurate streamflow simulation; however, LSTM results must always be critically questioned, particularly when the models are applied outside their training range.
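For readers unfamiliar with the architecture, the internal states discussed above can be made concrete with a minimal NumPy sketch of a single LSTM cell (dimensions and parameters are arbitrary placeholders; the cell state c is the kind of internal quantity that was correlated with soil moisture dynamics):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step with input, forget, candidate and output gates.

    x: input vector (d,); h, c: hidden and cell state (n,);
    W: (4n, d), U: (4n, n), b: (4n,) are the stacked gate parameters.
    """
    n = h.size
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[:n]))        # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))     # forget gate
    g = np.tanh(z[2*n:3*n])             # candidate cell state
    o = 1 / (1 + np.exp(-z[3*n:]))      # output gate
    c_new = f * c + i * g               # cell state: long-term memory
    h_new = o * np.tanh(c_new)          # hidden state: per-step output
    return h_new, c_new

rng = np.random.default_rng(0)
d, n = 3, 5                             # e.g. 3 hydroclimatic inputs
W = rng.normal(size=(4*n, d))
U = rng.normal(size=(4*n, n))
b = np.zeros(4*n)
h = c = np.zeros(n)
for t in range(10):                     # unroll over a 10-step input sequence
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h.shape)
```

In a streamflow model, the final hidden state would feed a linear readout predicting discharge; the slowly varying cell state c is what explainability analyses can relate to catchment storages such as soil moisture.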
Bilevel problems are optimization problems for which parts of the variables
are constrained to be an optimal solution to another nested optimization
problem. This structure renders bilevel problems particularly well-suited for
modeling hierarchical decision-making processes. They are widely applicable
in areas such as energy markets, transportation systems, security planning,
and pricing. However, the hierarchical nature of these problems also makes
them inherently challenging to solve, both in theory and in practice.
In this thesis, we study different nonlinear problem settings for the
nested optimization problem. First, we focus on nonlinear but convex bilevel
problems with purely integer variables. We propose a solution algorithm that
uses a branch-and-cut framework with tailored cutting planes. We prove
correctness and finite termination of the method under suitable assumptions
and place it in the context of the existing literature. Moreover, we provide an
extensive numerical study to showcase the applicability of our method and
we compare it to the state-of-the-art approach for a less general setting on
suitable instances from the literature. Furthermore, we discuss challenges that
arise when we try to generalize our approach to the mixed-integer setting.
Next, we study mixed-integer bilevel problems for which the nested
problem has a nonconvex and quadratic objective function, linear constraints,
and continuous variables. We state and prove a complexity-theoretical hardness result for this
problem class and develop a lower and upper bounding scheme to solve
these problems. We prove correctness and finite termination of the proposed
method under suitable assumptions and test its applicability in a numerical
study.
Finally, we consider bilevel problems with continuous variables, where
the nested problem has a convex-quadratic objective function and linear
constraints. We reformulate them as single-level optimization problems using
necessary and sufficient optimality conditions for the nested problem. Then,
we explore the family of so-called P-split reformulations for this single-level
problem and test their applicability in a preliminary numerical study.
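For the last setting, the single-level reformulation via necessary and sufficient optimality conditions can be sketched with generic symbols (an illustration, not the thesis's exact notation): if the nested problem is $\min_y \frac{1}{2} y^\top Q y + c^\top y$ subject to $A y \leq b$ with $Q$ positive semidefinite, then $y$ is optimal if and only if there exists a multiplier $\lambda$ with
$Q y + c + A^\top \lambda = 0, \quad A y \leq b, \quad \lambda \geq 0, \quad \lambda^\top (b - A y) = 0.$
Replacing the nested problem by these KKT conditions yields a single-level problem whose remaining nonconvexity is the complementarity constraint, which reformulations such as the P-split family then address.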
Spatial microsimulation is an important tool for integrating geographical information into the evaluation of public policies and the analysis of social phenomena in urban regions. These models simulate the behavior of and interaction between units of the region, such as individuals, households or firms, under specific conditions that may or may not involve projections over time. This requires a representative base data set for the respective units.
In this thesis, we focus on the geo-referencing step of the population in the construction of this data set, where we define the location of the individuals so that the allocation obtained is representative in relation to the population of the region. To do this, we consider the assignment of households to dwellings with specific coordinates by solving a maximum weight matching problem where side constraints are included so that the allocation obtained satisfies statistical structures intrinsic to the considered region.
The model of this problem represents each feasible assignment of a household to a dwelling as a binary variable, which results in billions of variables for medium-sized municipalities such as the city of Trier, Germany. Standard solvers for mixed-integer linear optimization are therefore unable to solve it due to their high time and memory consumption. Hence, we develop two approaches capable of producing high-quality allocations with a reasonable amount of computational resources: one based on specific decomposition algorithms, the other characterized by the application of an approximation algorithm within a Lagrangian relaxation of the side constraints.
We theoretically explore the allocations obtained by both approaches and perform an extensive computational study using synthetic data sets and real-world data sets associated with the city of Trier. The results show that the developed methods are able to obtain near-optimal solutions using significantly less memory and time than the solver Gurobi, which enables them to tackle significantly larger instances, with approximately 100 000 households and dwellings. Furthermore, the allocations obtained for the real-world data sets correspond to a realistic population distribution, which strengthens the practical applicability of our methods.
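Stripped of the side constraints, the core of the geo-referencing step is a classical maximum weight matching (assignment) problem, which small instances can solve exactly, e.g. with the Hungarian method. A sketch with hypothetical weights (e.g. derived from how well household size fits dwelling size):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical utility of assigning household i (row) to dwelling j (column)
weights = np.array([[4.0, 1.0, 3.0],
                    [2.0, 0.0, 5.0],
                    [3.0, 2.0, 2.0]])

# Hungarian-method solver; maximize=True turns it into max-weight matching
rows, cols = linear_sum_assignment(weights, maximize=True)
print(list(zip(rows, cols)), weights[rows, cols].sum())
```

The thesis's instances, with statistical side constraints and billions of assignment variables, are far beyond what such an exact solver can handle directly, which is what motivates the decomposition and Lagrangian-relaxation approaches described above.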
United in diversity? Constructions of European identity in the German discourse since 1990
(2025)
The thesis examines the German discourse on European integration since 1990 from a discourse-linguistic perspective, understanding it as a space in which constructions of European identity are negotiated. Its starting point is the assumption that the institutional deepening and geographical enlargement of the EU are not to be understood solely as legally codified integration steps, but always carry identity-political dimensions as well. The aim of the study is to make visible the linguistic constitution of the EU as a reference system of identity politics and thereby to provide a discourse-linguistic complement to interdisciplinary integration research. On the basis of a diachronic corpus covering key stages of integration policy and phases of crisis, a mixed-methods approach is developed that combines corpus-driven procedures with the hermeneutic annotation of discourse-linguistic categories. The analysis covers not only lexical-semantic representations of Europe but, above all, basic discursive figures such as unity, diversity, self and other, and their connection to political attributions of meaning. The results show to what extent a stable identity-political point of reference to the EU has emerged in the German discourse, how normative ideals and functional rationalities overlap, and how European integration is linguistically negotiated between symbolic charging and strategic instrumentalization.
Extracellular enzymes in microbial communities play a central role in nutrient cycling and the degradation of (pollutant) substances in various natural and anthropogenic systems. Bound in aquatic biofilms, sludge aggregates, or even unbound at their interfaces, they are of great importance for both the environment and human health. In particular, in wastewater treatment plants and inland waters, hydrolytic activities influence the wide-reaching efficiency of nutrient removal and self-purification, thus contributing significantly to overall water quality.
The main goal of this dissertation project was to investigate the factors that influence enzymatic activity and the health of microbial communities in activated sludge and river systems, particularly in relation to anthropogenic influences and natural environmental conditions. The aim was to contribute to a better understanding of the sensitivity of our freshwater ecosystems and to support the long-term preservation of water quality and ecological stability. The development and optimization of appropriate methods, as well as their testing and applicability, were the focal points.
For this purpose, a fluorometric microplate assay was developed and adapted to determine both extracellular enzyme activities (EEAs) in activated sludge samples and in intact biofilms. Its suitability for field studies was subsequently tested. Inhibition and activity of selected hydrolases under different conditions were investigated to better understand the mechanisms and potential environmental risks posed by anthropogenic influences and seasonal fluctuations of hydrochemical and climatic parameters.
The first phase of the doctoral thesis involved studies on the inhibition of alkaline phosphatase in activated sludge by oxyanions. Using the fluorometric microplate assay, the inhibitory effect was sensitively detected over a pH range of 7.0 to 8.5. IC50 and IC20 concentrations were calculated from modeled dose-response functions. Vanadate and tungstate caused strong inhibitory effects, while molybdate inhibited the enzyme only moderately. Increasing pH reduced the inhibitory effect of tungstate and molybdate, whereas the inhibition by vanadate was not significantly affected by pH. In municipal wastewater, the concentrations of such metal ions are usually low, but industrial wastewater may carry pollutant loads that significantly impact the removal of phosphorus-containing compounds, and thus the efficiency of treatment plants.
In the second phase, an attempt was made to further adapt the developed methodology to investigate EEA and kinetics in intact freshwater biofilms. Four different types of bead materials (lava, glass, sintered quartz, and ceramics) fitting into a 96-well microplate were tested as carriers for biofilms on both the laboratory and field scale. The analysis included a total of seven hydrolases as representatives of key nutrient cycles such as phosphorus, carbon, and nitrogen: phosphatases, glucosidases, peptidases (two different types), and sulfatase. Experiments with increasing substrate concentrations led to classical kinetic profiles according to the Michaelis-Menten mechanism. This allowed for the prediction of the biofilm enzymes’ response to different substrate concentrations. Parameters such as Vmax and Km could be derived from the modeled curves.
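Kinetic parameters such as Vmax and Km are typically derived by fitting the Michaelis-Menten equation to rate measurements via nonlinear least squares. A sketch with synthetic data (the substrate concentrations, rates, and parameter values are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Reaction rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Hypothetical substrate concentrations and noisy measured rates
s = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
true_vmax, true_km = 12.0, 8.0
rng = np.random.default_rng(1)
v = michaelis_menten(s, true_vmax, true_km) + rng.normal(0, 0.1, s.size)

# Nonlinear least-squares fit of both kinetic parameters
(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[10.0, 5.0])
print(round(vmax, 1), round(km, 1))
```

Fitted in this way, Vmax characterizes the maximum enzymatic capacity of the biofilm and Km its substrate affinity, which is what allows the response to different substrate concentrations to be predicted.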
Ceramic beads are particularly suitable for long-term studies due to their high stability, while sintered quartz beads should be preferred for use in stagnant media (material loss occurs under turbulent conditions). Lava and glass beads, on the other hand, proved suboptimal for uniform biofilm development due to their surface properties. The potential use of this fast and sensitive test for ecotoxicological or even human-toxicological studies was demonstrated by the effects of caffeine on the activity of PDE. The result of this part of the research represents a powerful tool for assessing environmental pollution and monitoring water quality.
The high application potential was clearly highlighted in the final phase of the project. The goal here was to deepen the understanding of interactions between seasonal factors, anthropogenic influences, and biofilm processes in rivers by investigating EEA and biofilm parameters such as biomass and relating them to hydrochemical and climatic factors. Ceramic beads were exposed both upstream and downstream of a wastewater treatment plant discharge and sampled over a period of seven months. EEAs and biomass varied depending on season and location, with higher microbial activity observed upstream in winter. Winter conditions led to the dilution of most nutrients as well as an increase in dissolved oxygen. Nutrient concentrations analyzed downstream were significantly higher in summer. An accumulation of nutrients or pollutants during the summer months cannot be excluded, which may have led to a general reduction in enzyme activities.
Potential causes could be inhibitory effects on the enzymes or a reduced enzyme activity due to a sufficiently high nutrient supply. In general, the upstream sampling site showed more pronounced seasonal dynamics, with a significant proportion of the variance in biological parameters (activity and biomass) attributable to seasonal factors. A secondary component, likely reflecting the impact of the treatment plant discharge, explained another portion of the data variance. Regardless of the season, high correlations between biological parameters were observed upstream, while downstream the data were more decorrelated. This could be because the biofilms, under chronic stress, respond less dynamically to seasonal fluctuations.
This dissertation illustrates that, in addition to anthropogenic stress factors, seasonal fluctuations of hydrochemical and climatic parameters should also be considered in "stress downstream of the pipe" studies. The selected methods are recommended for explaining and accounting for the variance in the data, as they highlight the complex interplay between microbial enzymatic activity, environmental factors, and pollutants in the activated sludge of wastewater treatment plants and also in aquatic systems. The novel bead assay could pave the way for the future standardization of effect-oriented studies on intact aquatic biofilms.
Perennial crops eliminate soil disturbance and reduce the amount of synthetic chemicals applied to the soil, improving soil biodiversity and food web structure. Additionally, perennial cropping is characterised by year-round surface coverage, which benefits soil biota in terms of habitat and food sources. Perennial intermediate wheatgrass (Thinopyrum intermedium, IWG), domesticated and commercialised by The Land Institute in Kansas as Kernza®, serves as an example of such nature-based solutions. It develops an extensive root system with higher nutrient retention, possibly reducing nutrient runoff. It thereby follows a more resource-conservative strategy with improved belowground-oriented resource allocation in its root system. This may reduce the need for excessive fertiliser, as the crop has, among other things, a higher nitrogen efficiency.
IWG promoted the earthworm community and its diversity, more specifically the occurrence of epigeic species (litter inhabitants), since those species benefit from the increased soil coverage and the elimination of soil disturbance. As IWG creates a dense and extensive root system, as indicated by the increased occurrence of root-feeding nematodes, endogeic species (horizontal burrowers) are supported through the provision of a reliable food source. Nematode analysis characterised IWG as a mostly undisturbed system with a highly structured food web, expressed, for example, through the promotion of structure indicators that are sensitive to soil disturbance and are therefore supported under no-till management. The root microbiome is continuously shaped by the host as the crop regrows from the roots each vegetation period, creating a symbiotic relationship and a beneficial feedback loop for the crop. As a result, the root-endophytic microbiome under IWG had higher network complexity, connectivity, and stability compared to annual wheat. Regrowth from the roots requires increased nutrient and energy storage in IWG, which was indicated by increased starch values. Correspondingly, the longer residence time of the roots in the soil resulted in higher lignin values. Furthermore, the decomposition pathway was dominated by fungivorous nematodes, which may correspond to stimulated nutrient cycling and a heterogeneous resource environment, as seen in low-input systems.
Overall, perennial wheat cultivation improved soil biodiversity after an establishment period of only 3-6 years. As those benefits were present in all three countries, the varying soil and climate conditions do not seem to interfere with the positive effect of perennial wheat on the soil ecosystem, demonstrating wide transferability and adaptability of the crop to other sites as well. Enhanced complexity and connectivity of the food web in comparison to annual wheat may indicate resistance against abiotic stress, suggesting IWG cultivation as a viable option for sustainable and resilient agriculture. The improvement in nutrient cycling and the resource-efficient cultivation strategy of IWG could enable cultivation on marginal land where annual crop cultivation is not possible because the soils are susceptible to erosion and nutrient runoff. This opens up new possibilities for agricultural cultivation on previously unused land, thus contributing to food security in the future.
Modelling of o-PO4 inputs into surface water bodies of the Saarland under dry-weather conditions
(2025)
The availability of ortho-phosphate (o-PO₄) contributes substantially to the eutrophication of rivers and thus jeopardises the achievement of "good ecological status" under the EU Water Framework Directive. Since municipal wastewater treatment plants are central input sources, reducing o-PO₄ at this point is gaining importance. In addition to chemical phosphorus elimination, the fourth treatment stage in particular, although primarily designed to remove micropollutants, offers a synergy effect with potential phosphorus removal rates of up to 85%.
To assess the influence of such a treatment stage, a model was developed for selected surface water bodies (OWK) in the Saarland that represents the dry-weather case as the eutrophication-relevant scenario. A central component is a newly developed retention approach that accounts for biochemical and physical processes such as adsorption, sedimentation, and biological assimilation. Based on the difference between the emission-side load balance and the measured o-PO₄ load, reduction rates per metre of flow path were derived for each water body, and finally an equation was formulated to estimate retention as a function of catchment size. Validation shows sufficient model accuracy, although negative load differences in some water bodies point to additional inputs that cannot be clearly quantified, for example from agriculture or sewer losses.
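The derivation of a per-metre retention rate from the gap between the emission-side load balance and the measured in-stream load can be sketched as follows. The thesis's actual retention equation is not reproduced here; the first-order decay form and the numbers below are illustrative assumptions only.

```python
import math

def per_metre_retention(load_emitted, load_measured, flow_length_m):
    """Illustrative first-order retention coefficient k such that
    measured = emitted * exp(-k * L). Not the thesis's actual equation."""
    if load_measured >= load_emitted:
        # Negative load difference: additional, unquantified inputs dominate.
        return 0.0
    return math.log(load_emitted / load_measured) / flow_length_m

# Hypothetical example: 120 kg/d balanced emissions vs. 80 kg/d measured
# after 15 km of flow path.
k = per_metre_retention(120.0, 80.0, 15_000.0)
retained_fraction = 1.0 - math.exp(-k * 15_000.0)  # one third of the load retained
```

Under this assumption, a single coefficient per water body summarises retention, which can then be regressed on catchment size to obtain the kind of estimation equation the model uses.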
The scenario analysis demonstrates that a fourth treatment stage does in principle contribute to reducing o-PO₄ at the monitoring stations. However, the applicable guide value is only undercut if all treatment plants within a water body are upgraded, and even then only in some cases. The fourth treatment stage alone is therefore not a sufficient alternative to the measures of the Saarland's third river basin management plan, but it can serve as a complementary strategy for reducing phosphorus inputs.
Price indices play a vital role in economic measurement as they reflect price levels
and measure price fluctuations. Price level measures are used with macroeconomic
indicators to express them in real terms. These measures are also used to index wages,
rents, and pensions. Furthermore, they are used as a reference for monetary policy
conducted by central banks. Therefore, the provision of accurate price indices is one
of the most important goals of National Statistical Institutes (NSIs), and numerous
studies have been devoted to this goal.
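The deflation and indexation uses mentioned above amount to simple arithmetic, illustrated in the generic sketch below (textbook calculations, not taken from the dissertation; the example figures are hypothetical):

```python
def deflate(nominal, index, base=100.0):
    """Express a nominal value in real terms given a price index level."""
    return nominal * base / index

def implied_inflation(index_old, index_new):
    """Inflation rate implied by two consecutive price index levels."""
    return index_new / index_old - 1.0

# A nominal wage rises from 3000 to 3150 while the price index moves
# from 100 to 107: real purchasing power has actually fallen.
real_old = deflate(3000.0, 100.0)   # 3000.0
real_new = deflate(3150.0, 107.0)   # about 2943.9
inflation = implied_inflation(100.0, 107.0)  # about 0.07
```

Because indexed wages, rents, and pensions are scaled by exactly such ratios, even small biases in the index compound over time, which is why index accuracy is a central concern for NSIs.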
This cumulative dissertation also contributes to this goal. It contains four chapters,
each of which represents a separate research study. The first two studies are devoted
to the treatment of seasonal products using different price index methods; the first
is co-authored with Ken van Loon. The third study is dedicated to finding the most
accurate method for predicting the prices of missing products. The fourth study
focuses on the treatment of products using different price index methods when
products' quality characteristics are available.
Measuring the economic activity of a country requires high-quality data on businesses. In the case of Germany, such data are required not only at the national level, but also at the federal-state level and for different economic sectors. Important sources of high-quality business data are the business register and, among others, 14 business surveys conducted by the Federal Statistical Office of Germany. However, the quality requirements of the Federal Statistical Office conflict with the interests of the businesses themselves: for them, answering a survey's questionnaire is an additional cost factor, also known as response burden. A high response burden should be avoided, since it can have a negative impact on the quality of the businesses' responses to the surveys. Sample coordination can therefore be used as a method to control the distribution of response burden while securing high-quality data.
When applying already existing business survey coordination systems, developed by different statistical institutes, legal and administrative standards of German official statistics have to be taken into account. These standards consider different sampling fractions, rotation fractions, periodicity, and stratification of the aforementioned 14 business surveys. Therefore, the aim of this doctoral thesis is to check the existing business survey coordination systems for their applicability in the context of German official statistics and, if necessary, to modify them accordingly. These modifications include the introduction of individual burden indicators which aim to take the individual perception of response burden into account.
For this purpose, several synthetic data sets have been created to test the application of the modified versions of the different business survey coordination systems through Monte Carlo simulation studies. These data sets include a large panel data set reflecting the landscape of businesses in Rhineland-Palatinate and three smaller synthetic data sets. The latter have been created with the help of the R package BuSuCo, which has been developed within the scope of this thesis. The above-mentioned simulation studies are evaluated based on different measures of estimation quality as well as of the concentration and distribution of response burden.
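A standard building block of many survey coordination systems is selection via permanent random numbers (PRN). The sketch below illustrates this general idea only; it is neither the thesis's specific coordination system nor the BuSuCo API, and the population size and inclusion probabilities are made up.

```python
import random

# Each business keeps one fixed ("permanent") random number for its lifetime.
random.seed(42)
prn = {biz_id: random.random() for biz_id in range(1000)}

def select(prn, pi, shift=0.0):
    """Select units whose shifted PRN falls in [0, pi).
    Shifting the selection window between surveys spreads response burden
    (negative coordination); reusing the same window maximises overlap."""
    return {b for b, u in prn.items() if (u + shift) % 1.0 < pi}

survey_a = select(prn, 0.10)              # first survey, window [0, 0.1)
survey_b = select(prn, 0.10, shift=0.10)  # shifted window: disjoint from A
overlap = survey_a & survey_b             # empty: no business is burdened twice
```

Coordination systems used in practice add stratification, rotation fractions, and burden bookkeeping on top of this core mechanism, which is where the modifications discussed in the thesis come in.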
Income composition can have a significant impact on workers' well-being, productivity, and career paths. Wages often include a variety of components, such as unconditional bonuses, profit-sharing payments, and incentives based on the individual performance of employees. Each of these may influence employee labour outcomes differently, and the composition of pay may matter to managers when designing the salary package. At the same time, workers' employment choices and well-being are influenced by income outside the job, such as inheritances and lottery winnings, as well as by external factors like technological change. This dissertation includes five empirical studies that investigate these issues, yielding new insights on the role of monetary gifts, financial incentives, labour market institutions, and technology disruptions in affecting employees' labour and well-being outcomes.
Many developed countries, including Germany, face a steady rise in the share of
individuals obtaining higher education. While rising education itself bears a series
of advantages as extensively studied in previous literature, it is also conceptually
linked to a higher likelihood of working in an occupation that does not match
one’s formal qualifications. Previous studies have predominantly evaluated
how demographic or job‐related aspects correlate with the likelihood of being
educationally (mis)matched. However, they have largely ignored institutional
facets of the educational system or industrial organization. Moreover, little is
known about how private wealth affects educational mismatch or whether job
satisfaction is homogenously affected among individuals once such a mismatch
occurs. The five projects collected in this thesis aim to answer these open
questions in the literature for Germany, using data from the Socio‐Economic Panel
and employing different time intervals between 1984 and 2022.
Beginning with the educational system in early childhood, Chapter 2 evaluates
the impact of school‐starting age on the likelihood of over‐ and undereducation.
It exploits the exogenous variation in school‐entry rules across federal states
and years in Germany with regression discontinuity designs. The results report
a negative impact of school‐starting age on the likelihood of undereducation,
but no systematic relationship with overeducation.
Subsequently, Chapter 3 explores the variation in education costs by leveraging
the quasi‐experimental setting induced by the time‐limited introduction of tuition
fees in several German federal states between 2006 and 2014. The increase
in education costs among treated graduates results in a significantly higher
likelihood of overeducation, which endures even several years post‐graduation.
Chapter 4 focuses on the industrial relations system and examines the
correlation between trade union membership and the likelihood and extent of
educational (mis)match. The results reveal that trade union members report
significantly less overeducation at both the intensive and extensive margin
and also a higher likelihood of being matched compared to non‐members. Furthermore, the heterogeneity analysis provides evidence that this correlation
is driven by improved bargaining power instead of informational advantages.
Chapter 5 focuses on private wealth as a determinant of educational mismatch
by investigating the impact of a wealth shock through inheritances, lottery
winnings or gifts on the likelihood of over‐ and undereducation. Due to
the diminishing marginal returns of wages with increasing windfall gains, the
likelihood of undereducation is expected to decrease, while that of overeducation
is expected to increase. Empirically, these suppositions are supported for
overeducation, as its likelihood increases significantly after the windfall gain.
Further analyses reveal that this effect is driven by individuals switching
occupations while increasing their leisure time, and it materializes only for
medium to large windfall gains.
Contrary to the previous chapters, Chapter 6 focuses on educational mismatch,
more precisely on overeducation, as the independent variable. In particular, it
investigates the correlation between overeducation and job satisfaction. The
results align with the previously established negative correlation for private sector
employees exclusively. In contrast, interaction and subsample analyses reveal a
positive correlation for public sector employees. This link is driven by individuals
with a high degree of altruistic motivation and family orientation.
This dissertation examines how individuals unlock their personal power by investigating individual differences in self-regulation, in particular, how situational conditions interact with the personality dispositions of action versus state orientation. Action-oriented individuals are well able to regulate their affective states and to bridge the intention–behavior gap, showing initiative, implementing demanding intentions, and resisting temptations. State-oriented individuals, by contrast, often struggle to regulate affect and experience difficulties enacting intentions, especially under demanding conditions, tending to hesitate and ruminate. While extensive research has highlighted the advantages of action orientation across various domains such as education and health, this thesis challenges the prevailing one-sided perspective that presents action orientation as inherently superior and frames state orientation negatively. Drawing on Personality Systems Interactions theory, the dissertation adopts a dynamic view that understands these dispositions as context-sensitive rather than fixed. The central assumption is that action and state orientation each require different kinds of situational conditions to fully unlock their potential. Across six empirical studies (overall N = 1,067) using a multimethod approach that combines experimental and survey-based research in diverse populations and contextual settings, this dissertation examines (1) action and state orientation as distinct dispositions, (2) their dynamic interaction with situational factors, and (3) ways to support each in mobilizing personal power. Overall, the findings show that each disposition offers unique advantages - they simply require different situational conditions for their potential to unfold.
The role of implicit motives for affective, cognitive and behavioral processes has been a focal part of psychological research for decades. Yet, the majority of research in this field has concentrated on processes involving implicit motives in adulthood. The systematic investigation of developmental correlates of implicit motives remains largely uncharted. The studies cumulated in this thesis aim to add to the sparse research on implicit motives in childhood and adolescence. Specifically, the development of the implicit power motive in the transition from middle to late childhood as a function of parenting behavior (Chapter 4), the predictive value of the implicit achievement motive for objective swimming performance in children and adolescents (Chapter 5), and the role of motive congruence for successful goal realization in adolescent samples across two cultures (Chapter 6) were investigated. Results of Study 1 (Chapter 4) indicate a negative longitudinal association of authoritarian parenting with the implicit power motive in children that is moderated by children's perception of psychologically controlling parenting. Study 2 (Chapter 5) extends existing research on the assumed positive association between the implicit achievement motive and sports performance and demonstrates the moderating role of competitive anxiety on this association. Finally, Study 3 (Chapter 6) illustrates a moderating effect of implicit motives on the association between goal commitment and successful goal realization in German and Zambian adolescents; however, this effect was only observed in the domain of power motivation. Findings from all three studies are discussed in the context of the significance of implicit motives for psychological research.
Transnational protest movements continue to expose the enduring legacies of colonial exploitation and institutionalised racism within and beyond European cities. They foreground the systemic conditions under which Black lives are rendered disproportionately vulnerable to premature death. In doing so, they expose the enduring entanglements of racial capitalism, state violence and spatial exclusion. Through their ongoing political agitation these movements highlight the need for spatio-temporally situated and relationally embedded engagements with Black urban lives. My thesis responds to that call by examining place-making practices of enclosure and refusal throughout Black London’s post-World War II development.
Grounded in the ethnographic narrative of “being halfway while shooting”, I explore how Black lives are enclosed by institutional racism, how this enclosure is spatialised and how Black and differently racialised Londoners refuse these spatial enclosures through everyday and collective place-making practices. At the intersection of structural constraint and the desire to enact Black freedom in London, I specifically foreground the emergence of fugitive place-making practices.
Conceptually, I bring (critical) urban geography scholarship, Black studies and Black (British) Geographies scholarship into conversation. I develop "being halfway while shooting" as a relational concept that foregrounds the production of racialised urban knowledges, the multiplicity of Black enclosures, and the plurality of place-based strategies committed to refusal. I do so by stressing the relevance of Black fugitive thinking to account for the ongoing refusals that mark the relationship between Blackness and the British city. Methodologically, I adopt a research-activist ethnographic approach, grounded in my long-term engagement with a housing campaign in East London that organises around the housing needs of London's racialised and gendered urban poor. Using qualitative methods - archival research, interviews, (non-)participant observations, document and media analysis - I embed contemporary struggles into long and ongoing histories of racial-capitalist urban development as well as Black and multi-ethnic refusal.
The empirical chapters trace place-making practices of enclosure and refusal across London’s post-World War II urban development. By examining the aftermath of urban revolts and changing urban welfare regimes, I explore how racialised urban governance has been historically materialised in and through the city. At the same time, I foreground how within this racialised construction of the British city, Black and differently racialised Londoners continue to hold open the possibility of refusal through places in which communal care and self-determination can be enacted. I then turn to the struggle over housing in East London, showing how contemporary processes of racialised dehumanisation and ongoing displacement are both historically rooted and actively contested. In the final empirical chapter I accentuate the relevance of these findings for German-speaking critical urban geography debates.
The research shows that racial capitalist urbanism reproduces enclosures through practices of value extraction, spatial displacement, and the policing of Black subjectivities. In response, Black and differently racialised Londoners engage in fugitive place-making. Rooted in communal care, political organisation, collective education and cultural affirmation, these practices reassert Black presence and belonging. They offer an enduring mode of place-based refusal and the ongoing possibility to stay in the city differently. These findings not only demonstrate the academic significance of my research but also underscore the urgent need to support the place-making practices of Black and differently racialised urban communities, who continue to refuse the racialised enclosure of the British city from within.
From these empirical insights, I propose the concept of a fugitive sense of place - a theoretical lens that accounts for the racialised reproduction of urban space and the transformative place-making practices of those who refuse its logics. Rather than offering prescriptive policy recommendations, I call for a reorientation of urban geographical enquiry by centring Black spatial practices, knowledges and imaginations. Through the lens of “being halfway while shooting”, I argue for a rethinking of human habitation and urban theory through the lived experiences of Black survival and refusal. Attending to a fugitive sense of place, I propose new avenues for human geography research to explore how fugitive place-making practices reshape the meanings, conditions, and possibilities of urban life.
The topic of experiences has long been a focus of providers of services. This applies in particular to tourism, an industry whose products consist to a significant extent of experiences. In line with the prominence of the topic, above all in the areas of tourism product development and marketing, it has already been discussed widely in research.
Despite extensive publication activity, the actual state of knowledge on this topic is strikingly limited. One important problem is that the terminology in the field of experiences is not yet generally accepted and sharply delineated. In particular, a distinction must be drawn between momentary experiences (German: Erlebnisse) and formative experiences (Erfahrungen). The former occur during the process of perceiving a tourism service and form the basis for the latter, which shape perception and are considered in the overall context of the trip. This distinction is usually given too little attention, not only in the English-language literature, where both concepts are covered by the single term "experience", but also in the German-language literature. As a result, publications often claim to address momentary experiences while actually describing formative ones. This is problematic above all because it means studying a phenomenon whose basis is almost entirely unknown. Important questions for understanding momentary experiences, and thus also formative experiences, remain unanswered:
1) Which factors take effect in the genesis of experiences?
2) How do these factors interact?
3) How is the intensity of an experience determined?
4) How do experiences become strong enough to shape the consumption of a tourism service and thus, where applicable, to become formative experiences?
The present thesis answers these questions, thereby taking a first step towards closing a research gap of no small importance for tourism studies.
To understand experiences, the process of their genesis, and their evaluation by the guest, a triangulated, two-stage research process was designed and applied in a nature-tourism setting in the Vorpommersche Boddenlandschaft National Park. It is a mixed-methods approach:
1) Inductive-qualitative study based on Grounded Theory
a. Aim: identification of the effective components and their interplay, and generation of a model
b. Methods: covert observation and narrative interviews
c. Results: models of the genesis of momentary experiences and of formative experiences
2) Deductive-quantitative study
a. Aim: testing and refinement of the models generated in 1)
b. Methods: questionnaire-based quantitative survey and analysis using multivariate methods
c. Results: integration of the two models into a final model of experience genesis
The result of this procedure is an empirically derived and validated, detailed model of the genesis of experiences and of their evaluation by the person experiencing them with regard to their capacity to become formative experiences.
Beyond reconstructing and specifying this process, the thesis also clarifies the much-debated role of expectations and product satisfaction in the evaluation of experiences. It was shown empirically that experiences based on surprise, on the unexpected, were particularly resistant to disruptive factors, and that positive experiences, while certainly related to product satisfaction, manifest themselves above all in an at least temporarily increased life satisfaction. This identified the main criterion for evaluating experiences with regard to their capacity to become formative experiences.
For further research, the present thesis, with its final model of experience genesis, can serve as a solid starting point. Numerous factors in the model offer opportunities for further investigation, and the results should also be tested in other tourism contexts.
For tourism practice, the thesis offers numerous pointers. Generating experiences in a tourism context means more than merely meeting expectations: the most resilient experiences are those that manage to surprise the guest. A high-quality product that satisfies the guest is no more than a basic factor. An experience-based approach is truly successful only if it manages to increase the guest's life satisfaction.