Official business surveys form the basis for national and regional business statistics and are thus of great importance for analysing the state and performance of the economy. However, both the heterogeneity of business data and their high dynamics pose a particular challenge to the feasibility of sampling and the quality of the resulting estimates. A widely used sampling frame for creating the design of an official business survey is an extract from an official business register. However, if this frame does not accurately represent the target population, frame errors arise. Amplified by the heterogeneity and dynamics of business populations, these errors can significantly affect the estimation quality and lead to inefficiencies and biases. This dissertation therefore deals with design-based methods for optimising business surveys with respect to different types of frame errors.
First, methods for adjusting the sampling design of business surveys are addressed. These approaches integrate auxiliary information about the expected structures of frame errors into the sampling design. The aim is to increase the number of sampled businesses that are subject to frame errors. The element-specific frame error probability is estimated based on auxiliary information about frame errors observed in previous samples. The approaches discussed consider different types of frame errors and can be incorporated into predefined designs with fixed strata.
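To illustrate the basic idea, a minimal sketch under assumed data (not the dissertation's exact procedure): the element-specific frame-error probability can, for instance, be estimated from error indicators observed in a previous sample via a logistic regression on register variables, and within-stratum inclusion probabilities can then be tilted towards likely error cases.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch: estimate element-specific frame-error probabilities
# from a previous sample and make inclusion probabilities proportional to
# the estimated error risk. Variable names and the logistic model are
# assumptions for this example, not the dissertation's method.

rng = np.random.default_rng(1)

# Previous sample: register covariates (e.g. size class, turnover class)
# plus an indicator of whether a frame error was observed for the unit.
X_prev = rng.normal(size=(500, 2))
err_prev = (rng.random(500) < 1 / (1 + np.exp(-(X_prev @ np.array([1.0, -0.5]))))).astype(int)

model = LogisticRegression().fit(X_prev, err_prev)

# Current stratum: estimated error probabilities inform the design.
X_stratum = rng.normal(size=(200, 2))
p_err = model.predict_proba(X_stratum)[:, 1]

n_sample = 20
pi = n_sample * p_err / p_err.sum()   # inclusion probability proportional to error risk
pi = np.clip(pi, 0.01, 1.0)           # keep every unit selectable
print(pi[:5])
```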
As the second main pillar of this work, methods for adjusting weights to correct for frame errors during estimation are developed and investigated. As a result of frame errors, the assumptions under which the original design weights were determined based on the sampling design no longer hold. The developed methods correct the design weights taking into account the errors identified for sampled elements. Case-number-based reweighting approaches, on the one hand, attempt to reconstruct the unknown sizes of the individual strata in the target population. In the context of weight smoothing methods, on the other hand, design weights are modelled and smoothed as a function of target or auxiliary variables. This serves to avoid inefficiencies in the estimation due to highly dispersed weights or weak correlations between weights and target variables. In addition, possibilities of correcting frame errors by calibration weighting are elaborated. Especially when the sampling frame exhibits over- and/or undercoverage, the inclusion of external auxiliary information can significantly improve the estimation quality. For those methods whose quality cannot be measured using standard procedures, a variance estimation procedure based on a rescaling bootstrap is proposed. This enables an assessment of the estimation quality when the methods are used in practice.
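The calibration idea can be sketched as follows: a minimal illustration with iterative proportional fitting (raking), under assumed auxiliaries; the dissertation develops more general calibration and smoothing estimators.

```python
import numpy as np

# Minimal raking sketch (illustration only): adjust design weights so that
# weighted sample totals of auxiliary variables match known external
# population totals, which can correct for over-/undercoverage.

def rake(weights, X, totals, n_iter=100, tol=1e-10):
    """weights: design weights (n,); X: 0/1 auxiliary matrix (n, k);
    totals: known population totals for the k auxiliaries."""
    w = weights.astype(float).copy()
    for _ in range(n_iter):
        max_rel = 0.0
        for j in range(X.shape[1]):
            factor = totals[j] / (w @ X[:, j])
            w[X[:, j] == 1] *= factor          # scale units in category j
            max_rel = max(max_rel, abs(factor - 1))
        if max_rel < tol:                      # all totals matched
            break
    return w

rng = np.random.default_rng(7)
n = 1000
region = rng.integers(0, 2, n)                 # two regions as assumed auxiliaries
X = np.column_stack([region == 0, region == 1]).astype(float)
w = rake(np.full(n, 50.0), X, totals=np.array([30000.0, 22000.0]))
print(w @ X)                                   # matches the known totals
```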
In two extensive simulation studies, the methods presented in this dissertation are evaluated and compared with each other. First, in an experimental simulation environment, it is assessed which approaches are particularly suitable for different data situations. In a second simulation study, which is based on the structural survey in the services sector, the applicability of the methods in practice is evaluated under realistic conditions.
Water-related regulating and provisioning ecosystem services (ESS) were investigated with respect to the flow regime and groundwater recharge in the Pfälzerwald Biosphere Reserve in south-western Germany by means of hydrological modelling using the Soil and Water Assessment Tool (SWAT+). A holistic approach was pursued in which indicators of functional and structural ecological processes are assigned to the ESS. Potential risk factors for the deterioration of the forest's water-related ESS were analysed with regard to their effects on hydrological processes: soil compaction caused by heavy machinery during timber harvesting; damaged areas undergoing regeneration, whether resulting from silvicultural management practices or from windthrow, pests, and calamities in the course of climate change; and climate change itself as a major stressor for forest ecosystems. For each of these factors, separate SWAT+ model scenarios were created and compared with the calibrated baseline model, which represented the current catchment conditions based on field data. The simulations confirmed favourable conditions for groundwater recharge in the Pfälzerwald. In connection with the high infiltration capacity of the soil substrates derived from weathered Buntsandstein, as well as the delaying and buffering influence of the tree canopy on precipitation, a significant mitigating effect on surface runoff generation and a pronounced spatial and temporal retention potential in the catchment were simulated. It was found that increased precipitation amounts exceeding the infiltration capacity of the sandy soils lead to a short-circuited runoff response with pronounced surface runoff peaks. The simulations showed interactions between forest and water cycle, as well as the hydrological effectiveness of climate change, degraded soil functions, and age-related stand structures associated with differences in canopy development. Future climate projections, simulated with bias-corrected REKLIES and EURO-CORDEX regional climate models (RCMs), predicted a higher evaporative demand and an extension of the growing season, accompanied by more frequent drought periods within the vegetation period; this shortened the period available for groundwater recharge and consequently led to a projected decline in the groundwater recharge rate by the middle of the century. Owing to the strong correlation with precipitation intensities and the duration of precipitation events, and notwithstanding all uncertainties in their prediction, surface runoff generation was projected to increase by the end of the century.
To simulate soil compaction, the soil bulk density and the SCS curve number in SWAT+ were adjusted according to data from machine-traffic experiments in the area. The favourable infiltration conditions and the comparatively low susceptibility to compaction of the coarse-grained weathered Buntsandstein dominated the hydrological effects at catchment scale, so that only moderate deteriorations of water-related ESS were indicated. The simulations further showed a clear influence of soil texture on the hydrological response after soil compaction on skid trails, supporting the assumption that the susceptibility of soils to compaction increases with the proportion of silt and clay particles. The road network caused increased surface runoff generation across the entire area.
Damaged areas with stand regeneration were simulated using an artificial model within a sub-catchment, assuming three-year-old tree seedlings over a development period of 10 years, and were compared with mature stands (30 to 80 years) with respect to specific water balance components. The simulation suggested that, in the absence of canopy cover, the hydrologically delaying effect of the stands is impaired, which favours the formation of surface runoff and promotes a slightly higher deep percolation. Hydrological differences between the closed canopy of the mature stands and young stands with near-open-field precipitation conditions were determined by the dominant factors of atmospheric evaporative demand, precipitation amounts, and degree of canopy cover. The less developed the canopy of regenerated stands compared with mature stands, the higher the atmospheric evaporative demand, and the lower the incoming precipitation, the greater the hydrological difference between the stand types.
Improvement measures for decentralised flood protection should therefore take into account critical source areas for runoff formation in the forest (CSA). The high sensitivity and vulnerability of forests to deteriorating ecosystem conditions suggest that preserving this complex fabric and its intact interrelationships, particularly under the challenge posed by climate change, requires carefully adapted protection measures, efforts to identify CSAs, and the preservation and restoration of hydrological continuity in forest stands.
Traditional workflow management systems support process participants in fulfilling business tasks through guidance along a predefined workflow model.
Flexibility has gained a lot of attention in recent decades through the shift from mass production to customization. Various approaches to workflow flexibility exist that require either extensive knowledge acquisition and modelling effort or active intervention during execution and re-modelling of deviating behaviour. Flexibility by deviation aims to compensate for both of these disadvantages by allowing alternative, unforeseen execution paths at run time without requiring the process participant to adapt the workflow model. However, the implementation of this approach has received little research attention so far.
This work proposes a novel approach to flexibility by deviation. The approach aims at supporting process participants during the execution of a workflow by suggesting work items based on predefined strategies or experiential knowledge, even in the case of deviations. The developed concepts combine two well-known methods from the field of artificial intelligence: constraint satisfaction problem solving and process-oriented case-based reasoning. The approach mainly consists of a constraint-based workflow engine in combination with case-based deviation management. The declarative representation of workflows through constraints allows for implicit flexibility and a simple means of restoring consistency in case of deviations. Furthermore, the combined model, which integrates procedural with declarative structures through a transformation function, increases the capabilities for flexibility. Case-based reasoning, resting on the premise that similar problems have similar solutions, is well suited to handling deviations: previously made experiences are transferred to the problem currently at hand, under the assumption that a similar deviation has been handled successfully in the past.
Necessary foundations from the field of workflow management with a focus on flexibility are presented first.
As formal foundation, a constraint-based workflow model was developed that allows for a declarative specification of primarily sequential dependencies between tasks. Procedural and declarative models can be combined in the approach, as a transformation function was specified that converts procedural workflow models into declarative constraints.
One main component of the approach is the constraint-based workflow engine, which uses this declarative model as input for a constraint solving algorithm. This algorithm computes the worklist that is proposed to the process participant during workflow execution. With predefined deviation handling strategies that determine how the constraint model is modified in order to restore consistency, support remains continuous even in the case of deviations.
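A minimal sketch of the engine's core idea (illustrative only: the task names are invented, and the thesis's constraint model is richer than the pure precedence constraints assumed here): given sequential constraints and the trace executed so far, the worklist contains every open task whose predecessors have all been completed.

```python
# Sketch of a constraint-based worklist computation. "before" holds pairs
# (a, b) meaning "a must precede b"; a task is offered as a work item once
# all of its predecessors appear in the executed trace.

def worklist(tasks, before, trace):
    done = set(trace)
    open_tasks = tasks - done
    return {t for t in open_tasks
            if all(a in done for (a, b) in before if b == t)}

# Hypothetical tasks from a deficiency-management-style workflow:
tasks = {"record", "inspect", "notify", "fix", "approve"}
before = {("record", "inspect"), ("inspect", "notify"),
          ("inspect", "fix"), ("fix", "approve")}

print(worklist(tasks, before, ["record"]))             # {'inspect'}
print(worklist(tasks, before, ["record", "inspect"]))  # {'fix', 'notify'} (in some order)
```

A deviation handling strategy in this picture would, for instance, drop or relax a constraint violated by the observed trace so that the solver can again produce a consistent worklist.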
The second major component of the proposed approach constitutes the case-based deviation management, which aims at improving the support of process participants on the basis of experiential knowledge. For the retrieve phase, a sophisticated similarity measure was developed that integrates specific characteristics of deviating workflows and combines several sequence similarity measures. Two alternative methods for the reuse phase were developed, a null adaptation and a generative adaptation. The null adaptation simply proposes tasks from the most similar workflow as work items, whereas the generative adaptation modifies the constraint-based workflow model based on the most similar workflow in order to re-enable the constraint-based workflow engine to suggest work items.
The experimental evaluation of the approach consisted of a simulation of several types of process participants in the exemplary domain of deficiency management in construction. The results showed high utility values and promising potential for transferring the approach to other domains and for applying it in practice, which is part of future work.
In conclusion, the contributions are summarized and research perspectives are pointed out.
Energy transport networks are among the most important infrastructures for the planned energy transition. They form the interface between energy producers and consumers, and their features make them good candidates for the tools that mathematical optimization can offer. Nevertheless, the operation of energy networks comes with two major challenges. First, the nonconvexity of the equations that model the physics in the network renders the resulting problems extremely hard to solve for large-scale networks. Second, the uncertainty associated with the behavior of the different agents involved, the production of energy, and the consumption of energy makes the resulting problems hard to solve if a representative description of uncertainty is to be considered.
In this cumulative dissertation we study adaptive refinement algorithms designed to cope with the nonconvexity and stochasticity of equations arising in energy networks. Adaptive refinement algorithms approximate the original problem by sequentially refining the model of a simpler optimization problem. More specifically, in this thesis, the focus of the adaptive algorithm is on adapting the discretization and description of a set of constraints.
In the first part of this thesis, we propose a generalization of the different adaptive refinement ideas that we study. We sequentially describe model catalogs, error measures, marking strategies, and switching strategies that are used to set up the adaptive refinement algorithm. Afterward, the effect of the adaptive refinement algorithm on two energy network applications is studied. The first application treats the stationary operation of district heating networks. Here, the strength of adaptive refinement algorithms for approximating the ordinary differential equation that describes the transport of energy is highlighted. We introduce the resulting nonlinear problem, consider network expansion, and obtain realistic controls by applying the adaptive refinement algorithm. The second application concerns quantile-constrained optimization problems and highlights the ability of the adaptive refinement algorithm to cope with large scenario sets via clustering. We introduce the resulting mixed-integer linear problem, discuss generic solution techniques, make the link with the generalized framework, and measure the impact of the proposed solution techniques.
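To convey the structure of such an adaptive refinement algorithm, a hedged sketch follows; the function names and interfaces are placeholders, not the thesis's code. A coarse model is solved, errors are measured per constraint, the worst offenders are marked and refined, and the loop stops once a termination criterion is met.

```python
# Generic adaptive refinement loop (placeholder interfaces): solve a coarse
# model, measure constraint errors, mark the worst offenders, refine them,
# and repeat until the error tolerance is reached.

def adaptive_refinement(model, solve, error, refine, tol, max_iter=50):
    """model: current coarse optimization model; solve(model) -> candidate
    solution; error(model, solution) -> dict of per-constraint errors;
    refine(model, marked) -> refined model; tol: target accuracy."""
    solution = None
    for _ in range(max_iter):
        solution = solve(model)
        errors = error(model, solution)            # e.g. ODE discretization error per pipe,
                                                   # or clustering error per scenario group
        marked = [c for c, e in errors.items() if e > tol]   # marking strategy
        if not marked:                             # switching/termination criterion
            return solution
        model = refine(model, marked)              # finer discretization or split clusters
    return solution                                # best available after max_iter rounds
```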
The second part of this thesis assembles the papers that inspired the contents of the first part of this thesis. Hence, they describe in detail the topics that are covered and will be referenced throughout the first part.
The Nonlocal Neumann Problem
(2023)
Instead of presuming only local interaction, we assume nonlocal interactions. By doing so, mass at a point in space does not only interact with an arbitrarily small neighborhood surrounding it, but it can also interact with mass somewhere far, far away. Thus, mass jumping from one point to another is also a possibility we can consider in our models. If we consider a region in space, in a local model this region interacts at most with its closure, while in a nonlocal model it may interact with the whole space. Therefore, in the formulation of nonlocal boundary value problems, enforcing boundary conditions on the topological boundary may not suffice. Furthermore, choosing the complement as the nonlocal boundary may work for Dirichlet boundary conditions, but in the case of Neumann boundary conditions this may lead to an overfitted model.
In this thesis, we introduce a nonlocal boundary and study the well-posedness of a nonlocal Neumann problem. We present sufficient assumptions which guarantee the existence of a weak solution. As in the local model, our weak formulation is derived from an integration by parts formula. However, we also study a different weak formulation where the nonlocal boundary conditions are incorporated into the nonlocal diffusion-convection operator.
After studying the well-posedness of our nonlocal Neumann problem, we consider some applications of this problem. For example, we take a look at a system of coupled Neumann problems and analyze the difference between a local coupled Neumann problem and a nonlocal one. Furthermore, we let our Neumann problem be the state equation of an optimal control problem, which we then study. We also add a time component to our Neumann problem and analyze this nonlocal parabolic evolution equation.
As mentioned before, in a local model mass at a point in space only interacts with an arbitrarily small neighborhood surrounding it. We analyze what happens if we consider a family of nonlocal models in which the interaction horizon shrinks so that, in the limit, mass at a point in space again interacts only with an arbitrarily small neighborhood surrounding it.
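For orientation, in much of the nonlocal calculus literature the diffusion operator and a Neumann-type constraint take roughly the following form, where $\Omega$ is the domain and $\gamma$ a symmetric interaction kernel; the precise operator, kernel, and nonlocal boundary set used in this thesis may differ.

```latex
% A common formulation from the nonlocal calculus literature (illustrative;
% the thesis's own definitions may differ).
\[
  \mathcal{L}u(x) \;=\; \int_{\mathbb{R}^n} \big(u(x)-u(y)\big)\,\gamma(x,y)\,\mathrm{d}y ,
\]
\[
  \begin{cases}
    \mathcal{L}u = f & \text{in } \Omega,\\[2pt]
    \mathcal{N}u(x) := \displaystyle\int_{\Omega} \big(u(x)-u(y)\big)\,\gamma(x,y)\,\mathrm{d}y = g(x)
      & \text{for } x \text{ in the nonlocal boundary.}
  \end{cases}
\]
```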
Survey data can be viewed as incomplete or partially missing from a variety of perspectives, and there are different ways of dealing with this kind of data in the prediction and estimation of economic quantities. In this thesis, we present two selected research contexts in which the prediction or estimation of economic quantities is examined under incomplete survey data.
These contexts are, first, the investigation of composite estimators in the German Microcensus (Chapters 3 and 4) and, second, extensions of multivariate Fay-Herriot (MFH) models (Chapters 5 and 6), which are applied to small area problems.
Composite estimators are estimation methods that take into account the sample overlap in rotating panel surveys such as the German Microcensus in order to stabilise the estimation of the statistics of interest (e.g. employment statistics). Due to the partial sample overlaps, information from previous samples is only available for some of the respondents, so the data are partially missing.
MFH models are model-based estimation methods that work with aggregated survey data in order to obtain more precise estimation results for small area problems compared to classical estimation methods. In these models, several variables of interest are modelled simultaneously. The survey estimates of these variables, which are used as input in the MFH models, are often partially missing. If the domains of interest are not explicitly accounted for in a sampling design, the sizes of the samples allocated to them can, by chance, be small. As a result, it can happen that either no estimates can be calculated at all or that the estimated values are not published by statistical offices because their variances are too large.
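For orientation, the classical univariate Fay-Herriot model, of which MFH models are the multivariate generalization, can be sketched as follows; the notation is illustrative.

```latex
% Classical Fay-Herriot model for domain d (illustrative notation); MFH
% models stack several target variables per domain with covariance matrices
% in place of the scalar variances.
\[
  \hat{\theta}_d = \theta_d + e_d, \qquad
  e_d \sim \mathcal{N}\big(0,\, \sigma^2_{e,d}\big)
  \quad \text{(sampling error, variance treated as known)},
\]
\[
  \theta_d = x_d^\top \beta + u_d, \qquad
  u_d \sim \mathcal{N}\big(0,\, \sigma^2_u\big)
  \quad \text{(linking model with domain random effect)}.
\]
```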
Coastal erosion describes the displacement of land caused by destructive sea waves, currents, or tides. Due to global climate change and associated phenomena such as melting polar ice caps and changing ocean current patterns, which result in rising sea levels or increased current velocities, the need for countermeasures is continuously increasing. Today, major efforts have been made to mitigate these effects using groins, breakwaters, and various other structures.
This thesis develops a novel approach to this problem by applying shape optimization to the obstacles. For this reason, the results of this thesis always comprise the following three distinct aspects:
The selected wave propagation model, i.e. the modeling of wave propagation towards the coastline, using various wave formulations ranging from steady to unsteady descriptions, described from the Lagrangian or Eulerian viewpoint with all their specialties. More precisely, in the Eulerian setting, first a steady Helmholtz equation in the form of a scattering problem is investigated, followed by the shallow water equations in classical form, equipped with porosity, sediment transport, and further subtleties. Secondly, in a Lagrangian framework, the Lagrangian shallow water equations form the center of interest.
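For reference, the steady Helmholtz scattering problem mentioned above typically takes a form like the following; this is illustrative, and the exact boundary conditions in the thesis may differ.

```latex
% Steady Helmholtz equation with wave number k, as it typically appears in
% scattering problems (illustrative; boundary conditions vary).
\[
  -\Delta u - k^2 u = 0 \quad \text{in } \Omega,
\]
complemented by a condition on the obstacle boundary and a radiation
condition at infinity (e.g. the Sommerfeld radiation condition).
```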
The chosen discretization, i.e. depending on the nature and peculiarities of the constraining partial differential equation, we choose between finite elements in conjunction with continuous Galerkin and discontinuous Galerkin methods for investigations in the Eulerian description. In addition, the Lagrangian viewpoint lends itself to mesh-free, particle-based discretizations, where smoothed particle hydrodynamics is used.
The method for shape optimization with respect to the obstacle's shape over an appropriate cost function, constrained by the solution of the selected wave propagation model. In this sense, we rely on a differentiate-then-discretize approach for free-form shape optimization in the Eulerian set-up, and reverse the order in the Lagrangian computations.
Behavioural traces from interactions with digital technologies are diverse and abundant. Yet their capacity for theory-driven research has still to be established. In the present cumulative dissertation project, I deliberate the caveats and potentials of digital behavioural trace data in behavioural and social science research. One use case is online radicalisation research. The three studies included set out to discern the state of the art of methods and constructs employed in radicalisation research, at the intersection of traditional methods and digital behavioural trace data. Firstly, based on a systematic literature review of empirical work, I display the prevalence of digital behavioural trace data across different research strands and discern determinants and outcomes of radicalisation constructs. Secondly, I extract hypotheses and constructs from this literature review and integrate them into a framework from network theory. This graph of hypotheses, in turn, makes the relative importance of theoretical considerations explicit. One implication of visualising the assumptions in the field is to systematise bottlenecks for the analysis of digital behavioural trace data and to provide the grounds for the genesis of new hypotheses. Thirdly, I provide a proof of concept for combining a theoretical framework from conspiracy theory research (as a specific form of radicalisation) with digital behavioural traces. I argue for marrying theoretical assumptions derived from temporal signals of posting behaviour with semantic meaning from textual content, resting on a framework from evolutionary psychology. In the light of these findings, I conclude by discussing important potential biases at different stages of the research cycle and practical implications.
No Longer Printing the Legend: The Aporia of Heteronormativity in the American Western (1903-1969)
(2023)
This study critically investigates the U.S.-American Western and its construction of sexuality and gender, revealing that the heteronormative matrix that is upheld and defended in the genre is consistently preceded by the exploration of alternative sexualities and of ways to think gender beyond the binary. The endeavor to naturalize heterosexuality seems to be baked into the formula of the U.S.-Western. However, as I show in this study, this endeavor relies on an aporia, because the U.S.-Western can only ever attempt to naturalize gender by constructing it first, and hence inevitably and simultaneously constructs evidence that supports the opposite: the unnaturalness and contingency of gender and sexuality.
My study relies on the works of Raewyn Connell, Pierre Bourdieu, and Judith Butler, and amalgamates in its methodology established approaches from film and literary studies (i.e., close readings) with a Foucauldian understanding of discourse and discourse analysis, which allows me to relate individual texts to the cultural, socio-political, and economic contexts that invariably informed the production and reception of any filmic text. In an analysis of 14 U.S.-Westerns (excluding three excursions) that appeared between 1903 and 1969, I give ample and minute narrative and film-aesthetic evidence to reveal the complex and contradictory construction of gender and sexuality in the U.S.-Western, aiming to reveal both the normative power of those categories and their structural instability and inconsistency.
This study proves that the Western up until 1969 did not find a stable pattern to represent the gender binary. The U.S.-Western is not necessarily always looking to confirm or stabilize governing constructs of (gendered) power; however, it invariably explores and negotiates their legitimacy. Heterosexuality and male hegemony are never natural, self-evident, incontestable, or preordained. Quite the contrary: the U.S.-Western repeatedly, and in a surprisingly diverse and versatile way, reveals the illogical constructedness of the heteronormative matrix.
My study therefore offers a fresh perspective on the genre and shows that the critical exploration and negotiation of the legitimacy of heteronormativity as a way to organize society is constitutive for the U.S.-Western. It is the inquiry – not necessarily the affirmation – of the legitimacy of this model that gives the U.S.-Western its ideological currency and significance as an artifact of U.S.-American popular culture.
Non-probability sampling is a topic of growing relevance, especially due to its occurrence in the context of new emerging data sources like web surveys and Big Data.
This thesis addresses statistical challenges arising from non-probability samples, where unknown or uncontrolled sampling mechanisms raise concerns in terms of data quality and representativity.
Various methods to quantify and reduce the potential selectivity and biases of non-probability samples in estimation and inference are discussed. The thesis introduces new forms of prediction and weighting methods, namely
a) semi-parametric artificial neural networks (ANNs) that integrate B-spline layers with optimal knot positioning in the general structure and fitting procedure of artificial neural networks, and
b) calibrated semi-parametric ANNs that determine weights for non-probability samples by integrating an ANN as response model with calibration constraints for totals, covariances and correlations.
Custom-made computational implementations are developed for fitting (calibrated) semi-parametric ANNs by means of stochastic gradient descent, BFGS and sequential quadratic programming algorithms.
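As a rough illustration of the semi-parametric idea, here is a sketch under assumptions, not the thesis implementation: B-spline basis functions over a fixed knot vector can be evaluated via the Cox-de Boor recursion, and the resulting design matrix can serve as the input features of a B-spline layer inside an otherwise standard network; the thesis additionally treats the knot positions as parameters to be optimized during fitting.

```python
import numpy as np

# Evaluate all B-spline basis functions of a given degree at points x via
# the Cox-de Boor recursion. The knot vector and degree below are example
# choices, not the thesis's configuration.
def bspline_design_matrix(x, knots, degree):
    x = np.asarray(x, dtype=float)
    knots = np.asarray(knots, dtype=float)   # non-decreasing, repeated boundary knots
    # degree-0 basis: indicators of the half-open knot spans
    B = [((x >= knots[i]) & (x < knots[i + 1])).astype(float)
         for i in range(len(knots) - 1)]
    for d in range(1, degree + 1):
        B_next = []
        for i in range(len(knots) - d - 1):
            left = knots[i + d] - knots[i]
            right = knots[i + d + 1] - knots[i + 1]
            term = np.zeros_like(x)
            if left > 0:
                term += (x - knots[i]) / left * B[i]
            if right > 0:
                term += (knots[i + d + 1] - x) / right * B[i + 1]
            B_next.append(term)
        B = B_next
    return np.column_stack(B)                # shape: (len(x), n_basis)

# Cubic basis on [0, 1] with one interior knot at 0.5 (boundary knots repeated);
# the evaluation points stop short of 1.0 because the spans are half-open.
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
X = bspline_design_matrix(np.linspace(0, 0.99, 5), knots, degree=3)
print(X.shape)   # (5, 5): five basis functions evaluated at five points
```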
The performance of all the discussed methods is evaluated and compared for a range of non-probability sampling scenarios in a Monte Carlo simulation study as well as in an application to a real non-probability sample, the WageIndicator web survey.
Potentials and limitations of the different methods for dealing with the challenges of non-probability sampling under various circumstances are highlighted. It is shown that the best strategy for using non-probability samples heavily depends on the particular selection mechanism, research interest and available auxiliary information.
Nevertheless, the findings show that existing as well as newly proposed methods can be used to ease or even fully counterbalance the issues of non-probability samples and highlight the conditions under which this is possible.