Measurements of the atmospheric boundary layer (ABL) structure were performed for three years (October 2017–August 2020) at the Russian observatory “Ice Base Cape Baranova” (79.280° N, 101.620° E) using SODAR (Sound Detection And Ranging). These measurements were part of the YOPP (Year of Polar Prediction) project “Boundary layer measurements in the high Arctic” (CATS_BL) within the scope of a joint German–Russian project. In addition to SODAR-derived vertical profiles of wind speed and direction, a suite of complementary measurements at the observatory was available. ABL measurements were used for verification of the regional climate model COSMO-CLM (CCLM) with a 5 km resolution for 2017–2020. The CCLM was run with nesting in ERA5 data in a forecast mode for the measurement period. SODAR measurements were mostly limited to wind speeds <12 m/s since the signal was often lost for higher winds. The SODAR data showed a topographical channeling effect for the wind field in the lowest 100 m and some low-level jets (LLJs). The verification of the CCLM with near-surface data of the observatory showed good agreement for the wind and a negative bias for the 2 m temperature. The comparison with SODAR data showed a positive bias for the wind speed of about 1 m/s below 100 m, which increased to 1.5 m/s for higher levels. In contrast to the SODAR data, the CCLM data showed the frequent presence of LLJs associated with the topographic channeling in Shokalsky Strait. Although SODAR wind profiles are limited in range and have many gaps, they represent a valuable data set for model verification. However, a full picture of the ABL structure and the climatology of channeling events could be obtained only with the model data. The climatological evaluation showed that the wind field at Cape Baranova was not only influenced by direct topographic channeling under conditions of southerly winds through the Shokalsky Strait but also by channeling through a mountain gap for westerly winds. LLJs were detected in 37% of all profiles and most LLJs were associated with channeling, particularly LLJs with a jet speed ≥ 15 m/s (which were 29% of all LLJs). The analysis of the simulated 10 m wind field showed that the 99th percentile of the wind speed reached 18 m/s and clearly showed a dipole structure of channeled wind at both exits of Shokalsky Strait. The climatology of channeling events showed that this dipole structure was caused by the frequent occurrence of channeling at both exits. Channeling events lasting at least 12 h occurred on about 62 days per year at both exits of Shokalsky Strait.
The unrestrainable evolution of medical science and technology is drastically changing health care, enabling new medical procedures and remedies, which are increasingly intertwined with moral principles. Although a uniform European approach to assisted suicide is lacking, a common trend is developing: the boundary between euthanasia, assisted suicide and end-of-life care and the frontiers of legitimate medicine are becoming increasingly blurred. In Italy, Constitutional Court ruling no. 242/2019 declared the partial unconstitutionality of article 580 of the Italian Criminal Code, which prohibited assistance in suicide.
Specifically, the ruling excluded criminal liability for a person who, in the manner provided for in Articles 1 and 2 of the law of 22 December 2017, no. 219, “facilitates the execution of the intention to commit suicide, autonomously and freely formed, of a person kept alive by life-sustaining treatments and suffering from an irreversible pathology that is the source of physical or psychological suffering he or she deems intolerable, but who is fully capable of making free and informed decisions, provided that such conditions and methods of execution have been verified by a public structure of the national health service, following the opinion of the territorially competent ethics committee.” The present paper analyzes the legal regime of assisted suicide in Italy, the role of the rule of law, and the crucial boundary between the branches of government with regard to this delicate issue, and investigates current legal challenges and potential future legal tracks.
Energy transport networks are one of the most important infrastructures for the planned energy transition. They form the interface between energy producers and consumers, and their features make them good candidates for the tools that mathematical optimization can offer. Nevertheless, the operation of energy networks comes with two major challenges. First, the nonconvexity of the equations that model the physics in the network renders the resulting problems extremely hard to solve for large-scale networks. Second, the uncertainty associated with the behavior of the different agents involved, the production of energy, and the consumption of energy makes the resulting problems hard to solve if a representative description of uncertainty is to be considered.
In this cumulative dissertation we study adaptive refinement algorithms designed to cope with the nonconvexity and stochasticity of equations arising in energy networks. Adaptive refinement algorithms approximate the original problem by sequentially refining the model of a simpler optimization problem. More specifically, in this thesis, the focus of the adaptive algorithm is on adapting the discretization and description of a set of constraints.
In the first part of this thesis, we propose a generalization of the different adaptive refinement ideas that we study. We sequentially describe model catalogs, error measures, marking strategies, and switching strategies that are used to set up the adaptive refinement algorithm. Afterward, the effect of the adaptive refinement algorithm on two energy network applications is studied. The first application treats the stationary operation of district heating networks. Here, the strength of adaptive refinement algorithms for approximating the ordinary differential equation that describes the transport of energy is highlighted. We introduce the resulting nonlinear problem, consider network expansion, and obtain realistic controls by applying the adaptive refinement algorithm. The second application concerns quantile-constrained optimization problems and highlights the ability of the adaptive refinement algorithm to cope with large scenario sets via clustering. We introduce the resulting mixed-integer linear problem, discuss generic solution techniques, make the link with the generalized framework, and measure the impact of the proposed solution techniques.
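To make the general scheme more tangible, the following is a minimal, purely illustrative sketch of an adaptive refinement loop for the piecewise-linear approximation of a single nonlinear constraint function; the model catalogs, error measures, marking and switching strategies of the thesis are richer, and the function used here is a hypothetical stand-in.

```python
import numpy as np

def refine_breakpoints(f, a, b, tol, max_iter=50):
    """Adaptively refine a piecewise-linear model of f on [a, b]:
    estimate the error of each segment, mark the worst segments,
    and bisect them until the estimated error is below tol."""
    xs = np.array([a, b], dtype=float)
    for _ in range(max_iter):
        mids = 0.5 * (xs[:-1] + xs[1:])
        chord = 0.5 * (f(xs[:-1]) + f(xs[1:]))   # linear model evaluated at the midpoints
        err = np.abs(f(mids) - chord)            # error measure per segment
        if err.max() <= tol:                     # switching strategy: accept the model
            return xs
        marked = err >= 0.5 * err.max()          # marking strategy: worst segments
        xs = np.sort(np.concatenate([xs, mids[marked]]))
    return xs

# usage: approximate a pressure-loss-type relation q*|q| on [0, 2]
grid = refine_breakpoints(lambda q: q * np.abs(q), 0.0, 2.0, tol=1e-3)
print(len(grid), "breakpoints")
```

In the applications described above, each such refinement step would of course be followed by re-solving the resulting optimization model rather than merely re-evaluating the approximation.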
The second part of this thesis assembles the papers that inspired the contents of the first part of this thesis. Hence, they describe in detail the topics that are covered and will be referenced throughout the first part.
The Nonlocal Neumann Problem
(2023)
Instead of presuming only local interaction, we assume nonlocal interactions. By doing so, mass at a point in space does not only interact with an arbitrarily small neighborhood surrounding it, but it can also interact with mass somewhere far, far away. Thus, mass jumping from one point to another is also a possibility we can consider in our models. So, if we consider a region in space, this region interacts in a local model at most with its closure, while in a nonlocal model it may interact with the whole space. Therefore, in the formulation of nonlocal boundary value problems the enforcement of boundary conditions on the topological boundary may not suffice. Furthermore, choosing the complement as nonlocal boundary may work for Dirichlet boundary conditions, but in the case of Neumann boundary conditions this may lead to an overfitted model.
In this thesis, we introduce a nonlocal boundary and study the well-posedness of a nonlocal Neumann problem. We present sufficient assumptions which guarantee the existence of a weak solution. As in the local model, our weak formulation is derived from an integration by parts formula. However, we also study a different weak formulation where the nonlocal boundary conditions are incorporated into the nonlocal diffusion-convection operator.
After studying the well-posedness of our nonlocal Neumann problem, we consider some applications of this problem. For example, we take a look at a system of coupled Neumann problems and analyze the difference between a local coupled Neumann problem and a nonlocal one. Furthermore, we let our Neumann problem be the state equation of an optimal control problem, which we then study. We also add a time component to our Neumann problem and analyze this nonlocal parabolic evolution equation.
As mentioned before, in a local model mass at a point in space only interacts with an arbitrarily small neighborhood surrounding it. We analyze what happens if we consider a family of nonlocal models in which the interaction shrinks so that, in the limit, mass at a point in space only interacts with an arbitrarily small neighborhood surrounding it.
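For orientation, one common way such a problem is written in the nonlocal-calculus literature (sign and scaling conventions vary, and this is not necessarily the exact formulation used in the thesis) is: for a domain Ω, a nonlocal boundary Γ and a nonnegative, symmetric interaction kernel γ,

$$
-2\int_{\Omega\cup\Gamma}\big(u(\mathbf{y})-u(\mathbf{x})\big)\,\gamma(\mathbf{x},\mathbf{y})\,d\mathbf{y} = f(\mathbf{x}) \quad \text{for } \mathbf{x}\in\Omega,
$$
$$
\mathcal{N}u(\mathbf{x}) := -2\int_{\Omega}\big(u(\mathbf{y})-u(\mathbf{x})\big)\,\gamma(\mathbf{x},\mathbf{y})\,d\mathbf{y} = g(\mathbf{x}) \quad \text{for } \mathbf{x}\in\Gamma,
$$

where the second line plays the role of the Neumann (flux) condition on the nonlocal boundary, in analogy to the normal-derivative condition of the local problem.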
Traditional workflow management systems support process participants in fulfilling business tasks through guidance along a predefined workflow model.
Flexibility has gained a lot of attention in recent decades through a shift from mass production to customization. Various approaches to workflow flexibility exist that require either extensive knowledge acquisition and modelling effort or active intervention during execution and re-modelling of deviating behaviour. Flexibility by deviation aims to compensate for both of these disadvantages by allowing alternative, unforeseen execution paths at run time without requiring the process participant to adapt the workflow model. However, the implementation of this approach has so far received little research attention.
This work proposes a novel approach to flexibility by deviation. The approach aims at supporting process participants during the execution of a workflow by suggesting work items based on predefined strategies or experiential knowledge, even in case of deviations. The developed concepts combine two renowned methods from the field of artificial intelligence: constraint satisfaction problem solving and process-oriented case-based reasoning. The approach mainly consists of a constraint-based workflow engine in combination with a case-based deviation management. The declarative representation of workflows through constraints allows for implicit flexibility and a simple way to restore consistency in case of deviations. Furthermore, the combined model, integrating procedural with declarative structures through a transformation function, increases the capabilities for flexibility. The methodology of case-based reasoning is well suited to handling deviations, as it builds on the premise that similar problems have similar solutions. Thus, previously made experiences are transferred to the problem currently under consideration, under the assumption that a similar deviation has been handled successfully in the past.
Necessary foundations from the field of workflow management with a focus on flexibility are presented first.
As a formal foundation, a constraint-based workflow model was developed that allows for a declarative specification of, above all, sequential dependencies between tasks. Procedural and declarative models can be combined in the approach, as a transformation function was specified that converts procedural workflow models into declarative constraints.
One main component of the approach is the constraint-based workflow engine that utilizes this declarative model as input for a constraint solving algorithm. This algorithm computes the worklist, which is proposed to the process participant during workflow execution. With predefined deviation handling strategies that determine how the constraint model is modified in order to restore consistency, the support is continuous even in case of deviations.
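As a toy illustration of the worklist idea (not the engine developed in this work, which relies on a full constraint solver, richer constraint types and deviation handling strategies), one can think of a declarative model with "a before b" constraints and compute the currently enabled tasks as follows; the task names are hypothetical examples from the deficiency-management domain used later in the evaluation.

```python
def compute_worklist(tasks, before, executed):
    """Minimal illustration of a constraint-based worklist computation.

    tasks    : set of all task identifiers
    before   : set of (a, b) pairs meaning 'a must be executed before b'
    executed : list of tasks executed so far
    Returns the tasks that may be offered next without violating a constraint."""
    done = set(executed)
    worklist = []
    for t in sorted(tasks - done):
        # t is enabled if every task that must precede it has already been executed
        if all(a in done for (a, b) in before if b == t):
            worklist.append(t)
    return worklist

tasks = {"record_deficiency", "notify_contractor", "rework", "inspect", "close"}
before = {("record_deficiency", "notify_contractor"),
          ("notify_contractor", "rework"),
          ("rework", "inspect"),
          ("inspect", "close")}
print(compute_worklist(tasks, before, ["record_deficiency"]))
# -> ['notify_contractor']
```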
The second major component of the proposed approach constitutes the case-based deviation management, which aims at improving the support of process participants on the basis of experiential knowledge. For the retrieve phase, a sophisticated similarity measure was developed that integrates specific characteristics of deviating workflows and combines several sequence similarity measures. Two alternative methods for the reuse phase were developed, a null adaptation and a generative adaptation. The null adaptation simply proposes tasks from the most similar workflow as work items, whereas the generative adaptation modifies the constraint-based workflow model based on the most similar workflow in order to re-enable the constraint-based workflow engine to suggest work items.
The experimental evaluation of the approach consisted of a simulation of several types of process participants in the exemplary domain of deficiency management in construction. The results showed high utility values and a promising potential for investigating the transfer to other domains and the applicability in practice, which is part of future work.
In conclusion, the contributions are summarized and research perspectives are pointed out.
Official business surveys form the basis for national and regional business statistics and are thus of great importance for analysing the state and performance of the economy. However, both the heterogeneity of business data and their high dynamics pose a particular challenge to the feasibility of sampling and the quality of the resulting estimates. A widely used sampling frame for creating the design of an official business survey is an extract from an official business register. However, if this frame does not accurately represent the target population, frame errors arise. Amplified by the heterogeneity and dynamics of business populations, these errors can significantly affect the estimation quality and lead to inefficiencies and biases. This dissertation therefore deals with design-based methods for optimising business surveys with respect to different types of frame errors.
First, methods for adjusting the sampling design of business surveys are addressed. These approaches integrate auxiliary information about the expected structures of frame errors into the sampling design. The aim is to increase the number of sampled businesses that are subject to frame errors. The element-specific frame error probability is estimated based on auxiliary information about frame errors observed in previous samples. The approaches discussed consider different types of frame errors and can be incorporated into predefined designs with fixed strata.
As the second main pillar of this work, methods for adjusting weights to correct for frame errors during estimation are developed and investigated. As a result of frame errors, the assumptions under which the original design weights were determined based on the sampling design no longer hold. The developed methods correct the design weights taking into account the errors identified for sampled elements. Case-number-based reweighting approaches, on the one hand, attempt to reconstruct the unknown size of the individual strata in the target population. In the context of weight smoothing methods, on the other hand, design weights are modelled and smoothed as a function of target or auxiliary variables. This serves to avoid inefficiencies in the estimation due to strongly dispersed weights or weak correlations between weights and target variables. In addition, possibilities of correcting frame errors by calibration weighting are elaborated. Especially when the sampling frame shows over- and/or undercoverage, the inclusion of external auxiliary information can provide a significant improvement of the estimation quality. For those methods whose quality cannot be measured using standard procedures, a procedure for estimating the variance based on a rescaling bootstrap is proposed. This enables an assessment of the estimation quality when using the methods in practice.
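As background on the calibration idea mentioned above (the standard Deville–Särndal formulation, not the specific frame-error correction developed in this work): calibration seeks new weights w_i that stay close to the design weights d_i while reproducing known population totals t_x of auxiliary variables,

$$
\min_{w}\ \sum_{i\in s} d_i\, G\!\left(\frac{w_i}{d_i}\right) \quad \text{subject to} \quad \sum_{i\in s} w_i\, \mathbf{x}_i = \mathbf{t}_x ,
$$

where G is a distance function; with the chi-square distance G(r) = (r-1)^2/2 this yields the familiar generalized regression (GREG) weights.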
In the context of two extensive simulation studies, the methods presented in this dissertation are evaluated and compared with each other. First, in the environment of an experimental simulation, it is assessed which approaches are particularly suitable with regard to different data situations. In a second simulation study, which is based on the structural survey in the services sector, the applicability of the methods in practice is evaluated under realistic conditions.
Water-related regulating and provisioning ecosystem services (ESS) were investigated with respect to the flow regime and groundwater recharge in the Pfälzerwald Biosphere Reserve in southwestern Germany using hydrological modelling with the Soil and Water Assessment Tool (SWAT+). A holistic approach was followed in which indicators of functional and structural ecological processes are assigned to the ESS. Potential risk factors for the degradation of water-related forest ESS, such as soil compaction caused by heavy machinery during timber harvesting, damaged areas with regeneration (either from silvicultural management practices or from windthrow, pests and calamities in the course of climate change), and climate change itself as a major stressor for forest ecosystems, were analysed with respect to their effects on hydrological processes. For each of these factors, separate SWAT+ model scenarios were created and compared with the calibrated base model, which represented the current watershed conditions based on field data. The simulations confirmed favourable conditions for groundwater recharge in the Pfälzerwald. In connection with the high infiltration capacity of the soil substrates derived from Buntsandstein weathering, as well as the delaying and buffering influence of the tree canopy on precipitation, a significant mitigating effect on surface runoff generation and a pronounced spatial and temporal retention potential in the catchment were simulated. It was found that increased precipitation amounts exceeding the infiltration capacity of the sandy soils lead to a fast, short-circuited runoff response with pronounced surface runoff peaks. The simulations showed interactions between the forest and the water cycle as well as the hydrological effectiveness of climate change, degraded soil functions, and age-related stand structures in connection with differences in canopy characteristics. Future climate projections, simulated with bias-corrected REKLIES and EURO-CORDEX regional climate models (RCMs), predicted a higher evaporative demand and a prolongation of the growing season together with more frequent drought periods within the vegetation period, which induced a shortening of the period for groundwater recharge and consequently led to a projected decline in the groundwater recharge rate by the middle of the century. Owing to the strong correlation with precipitation intensities and the duration of precipitation events, and despite all uncertainties in their prediction, an increase in surface runoff generation was projected towards the end of the century.
To simulate soil compaction, the soil bulk density and the SCS curve number in SWAT+ were adjusted according to data from machine-traffic trials in the area. The favourable infiltration conditions and the relatively low susceptibility to compaction of the coarse-grained Buntsandstein weathering material dominated the hydrological effects at the watershed scale, so that only moderate degradations of water-related ESS were indicated. The simulations further showed a clear influence of soil texture on the hydrological response after soil compaction on skid trails, supporting the assumption that the susceptibility of soils to compaction increases with the proportion of silt and clay particles. An increased surface runoff generation resulted from the road network in the area as a whole.
Damaged areas with stand regeneration were simulated using an artificial model within a sub-catchment, assuming 3-year-old tree seedlings over a development period of 10 years, and were compared with mature stands (30 to 80 years old) with respect to specific water balance components. The simulation suggested that, in the absence of canopy cover, the hydrologically delaying effect of the stands is impaired, which favours the generation of surface runoff and promotes a slightly higher deep percolation in quantitative terms. Hydrological differences between the closed canopy of the mature stands and young stands with near open-field precipitation conditions were determined by the dominant factors of atmospheric evaporative demand, precipitation amounts and degree of canopy cover. The less developed the canopy of regenerated stands compared to mature stands, the higher the atmospheric evaporative demand and the lower the precipitation input, the greater the hydrological difference between the stand types.
Improvement measures for decentralised flood protection should therefore take into account critical source areas for runoff generation in forests (CSAs). The high sensitivity and vulnerability of forests to deteriorations in ecosystem conditions suggest that preserving this complex fabric and its intact interrelationships, especially under the given challenge of climate change, requires carefully adapted protection measures, efforts to identify CSAs, and the preservation and restoration of hydrological continuity in forest stands.
No Longer Printing the Legend: The Aporia of Heteronormativity in the American Western (1903-1969)
(2023)
This study critically investigates the U.S.-American Western and its construction of sexuality and gender, revealing that the heteronormative matrix that is upheld and defended in the genre is consistently preceded by the exploration of alternative sexualities and ways to think gender beyond the binary. The endeavor to naturalize heterosexuality seems to be baked into the formula of the U.S.-Western. However, as I show in this study, this endeavor relies on an aporia, because the U.S.-Western can only ever attempt to naturalize gender by constructing it first, and hence inevitably and simultaneously constructs evidence that supports the opposite: the unnaturalness and contingency of gender and sexuality.
My study relies on the works of Raewyn Connell, Pierre Bourdieu, and Judith Butler, and amalgamates in its methodology established approaches from film and literary studies (i.e., close readings) with a Foucauldian understanding of discourse and discourse analysis, which allows me to relate individual texts to the cultural, socio-political and economic contexts that invariably informed the production and reception of any filmic text. In an analysis of 14 U.S.-Westerns (excluding three excursions) that appeared between 1903 and 1969, I give ample and minute narrative and film-aesthetic evidence to reveal the complex and contradictory construction of gender and sexuality in the U.S.-Western, aiming to reveal both the normative power of those categories and their structural instability and inconsistency.
This study proves that the Western up until 1969 did not find a stable pattern to represent the gender binary. The U.S.-Western is not necessarily always looking to confirm or stabilize governing constructs of (gendered) power. However, it invariably explores and negotiates their legitimacy. Heterosexuality and male hegemony are never natural, self-evident, incontestable, or preordained. Quite the contrary: the U.S.-Western repeatedly – and in a surprisingly diverse and versatile way – reveals the illogical constructedness of the heteronormative matrix.
My study therefore offers a fresh perspective on the genre and shows that the critical exploration and negotiation of the legitimacy of heteronormativity as a way to organize society is constitutive for the U.S.-Western. It is the inquiry – not necessarily the affirmation – of the legitimacy of this model that gives the U.S.-Western its ideological currency and significance as an artifact of U.S.-American popular culture.
Non-probability sampling is a topic of growing relevance, especially due to its occurrence in the context of new emerging data sources like web surveys and Big Data.
This thesis addresses statistical challenges arising from non-probability samples, where unknown or uncontrolled sampling mechanisms raise concerns in terms of data quality and representativity.
Various methods to quantify and reduce the potential selectivity and biases of non-probability samples in estimation and inference are discussed. The thesis introduces new forms of prediction and weighting methods, namely
a) semi-parametric artificial neural networks (ANNs) that integrate B-spline layers with optimal knot positioning in the general structure and fitting procedure of artificial neural networks, and
b) calibrated semi-parametric ANNs that determine weights for non-probability samples by integrating an ANN as response model with calibration constraints for totals, covariances and correlations.
Custom-made computational implementations are developed for fitting (calibrated) semi-parametric ANNs by means of stochastic gradient descent, BFGS and sequential quadratic programming algorithms.
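As a rough illustration of the B-spline ingredient (not the implementation developed in the thesis: the knots are fixed here rather than optimally positioned, and the surrounding network and calibration machinery are omitted), the following evaluates a B-spline basis via the Cox–de Boor recursion; its outputs could serve as the activations of a spline layer inside an ANN.

```python
import numpy as np

def bspline_basis(x, knots, degree=3):
    """Evaluate all B-spline basis functions of the given degree at the points x
    via the Cox-de Boor recursion. `knots` must be non-decreasing and clamped
    (first and last value repeated degree+1 times)."""
    x = np.asarray(x, dtype=float)
    knots = np.asarray(knots, dtype=float)
    # shift points sitting exactly on the right boundary into the last interval
    x = np.where(x >= knots[-1], knots[-1] - 1e-12, x)
    # degree-0 basis: indicators of the knot intervals
    B = np.array([(x >= knots[i]) & (x < knots[i + 1])
                  for i in range(len(knots) - 1)], dtype=float).T
    for d in range(1, degree + 1):
        cols = []
        for i in range(len(knots) - d - 1):
            left = np.zeros_like(x)
            right = np.zeros_like(x)
            if knots[i + d] > knots[i]:
                left = (x - knots[i]) / (knots[i + d] - knots[i]) * B[:, i]
            if knots[i + d + 1] > knots[i + 1]:
                right = (knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1]) * B[:, i + 1]
            cols.append(left + right)
        B = np.stack(cols, axis=1)
    return B  # shape: (len(x), n_knots - degree - 1)

# usage: expand a covariate into cubic B-spline features on [0, 1]
interior = np.linspace(0, 1, 7)
knots = np.concatenate([[0, 0, 0], interior, [1, 1, 1]])
features = bspline_basis(np.random.default_rng(1).uniform(size=100), knots)
print(features.shape)   # (100, 9); rows sum to 1 (partition of unity)
```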
The performance of all the discussed methods is evaluated and compared for a range of non-probability sampling scenarios in a Monte Carlo simulation study as well as in an application to a real non-probability sample, the WageIndicator web survey.
Potentials and limitations of the different methods for dealing with the challenges of non-probability sampling under various circumstances are highlighted. It is shown that the best strategy for using non-probability samples heavily depends on the particular selection mechanism, research interest and available auxiliary information.
Nevertheless, the findings show that existing as well as newly proposed methods can be used to ease or even fully counterbalance the issues of non-probability samples and highlight the conditions under which this is possible.
Modern decision making in the digital age is highly driven by the massive amount of data collected from different technologies and thus affects both individuals and economic businesses. The benefit of using these data and turning them into knowledge requires appropriate statistical models that describe the underlying observations well. Imposing a certain parametric statistical model goes along with the need of finding optimal parameters such that the model describes the data best. This often results in challenging mathematical optimization problems with respect to the model's parameters, which potentially involve covariance matrices. Positive definiteness of covariance matrices is required for many advanced statistical models, and these constraints must be imposed explicitly in standard Euclidean nonlinear optimization methods, which often results in a high computational effort. As Riemannian optimization techniques have proved efficient at handling difficult matrix-valued geometric constraints, we consider optimization over the manifold of positive definite matrices to estimate parameters of statistical models. The statistical models treated in this thesis assume that the underlying data sets used for parameter fitting have a clustering structure, which results in complex optimization problems. This motivates the use of the intrinsic geometric structure of the parameter space. In this thesis, we analyze the appropriateness of Riemannian optimization over the manifold of positive definite matrices for two advanced statistical models. We establish important problem-specific Riemannian characteristics of the two problems and demonstrate the importance of exploiting the Riemannian geometry of covariance matrices based on numerical studies.
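The flavor of such methods can be conveyed with a small, self-contained sketch of Riemannian gradient descent on the manifold of symmetric positive definite matrices under the affine-invariant metric, applied to the geodesically convex Gaussian maximum-likelihood covariance problem; this toy example is for illustration only and is not one of the clustering-type models analyzed in the thesis.

```python
import numpy as np

def spd_sqrt_and_inv_sqrt(X):
    """Matrix square root and inverse square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    s = np.sqrt(w)
    return (V * s) @ V.T, (V / s) @ V.T

def sym_expm(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

def exp_map(X, xi):
    """Exponential map of the affine-invariant metric at X applied to a tangent vector xi."""
    Xs, Xis = spd_sqrt_and_inv_sqrt(X)
    return Xs @ sym_expm(Xis @ xi @ Xis) @ Xs

# toy problem: Gaussian ML covariance estimation, f(S) = log det(S) + tr(S^{-1} C)
rng = np.random.default_rng(0)
C = np.cov(rng.normal(size=(200, 5)), rowvar=False)   # sample covariance (the optimum)

S = np.eye(5)            # start at the identity (an SPD matrix)
step = 0.3
for _ in range(100):
    Sinv = np.linalg.inv(S)
    egrad = Sinv - Sinv @ C @ Sinv        # Euclidean gradient of f
    rgrad = S @ egrad @ S                 # Riemannian gradient (affine-invariant metric), equals S - C here
    S = exp_map(S, -step * rgrad)         # move along the geodesic

print(np.allclose(S, C, atol=1e-6))       # converges to the sample covariance
```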
Stress position in English words is well-known to correlate with both their morphological properties and their phonological organisation in terms of non-segmental, prosodic categories like syllable structure. While two generalisations capturing this correlation, directionality and stratification, are well established, the exact nature of the interaction of phonological and morphological factors in English stress assignment is a much debated issue in the literature. The present study investigates if and how directionality and stratification effects in English can be learned by means of Naive Discriminative Learning, a computational model that is trained using error-driven learning and that does not make any a-priori assumptions about the higher-level phonological organisation and morphological structure of words. Based on a series of simulation studies we show that neither directionality nor stratification need to be stipulated as a-priori properties of words or constraints in the lexicon. Stress can be learned solely on the basis of very flat word representations. Morphological stratification emerges as an effect of the model learning that informativity with regard to stress position is unevenly distributed across all trigrams constituting a word. Morphological affix classes like stress-preserving and stress-shifting affixes are, hence, not predefined classes but sets of trigrams that have similar informativity values with regard to stress position. Directionality, by contrast, emerges as spurious in our simulations; no syllable counting or recourse to abstract prosodic representations seems to be necessary to learn stress position in English.
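For readers unfamiliar with the learning model, the following is a minimal sketch of the error-driven (Rescorla-Wagner/Widrow-Hoff) update rule underlying Naive Discriminative Learning, with letter trigrams as cues and stress positions as outcomes; the tiny lexicon and the outcome coding are hypothetical and do not reproduce the simulations of the study.

```python
from collections import defaultdict

def trigrams(word):
    """Letter trigrams including word boundaries, e.g. '#ho', 'hon', ..., 'st#'."""
    w = f"#{word}#"
    return [w[i:i + 3] for i in range(len(w) - 2)]

def train_ndl(events, eta=0.01, lam=1.0, epochs=50):
    """Rescorla-Wagner updates: cues are trigrams, outcomes are stress positions."""
    weights = defaultdict(float)                  # (cue, outcome) -> association weight
    outcomes = sorted({o for _, o in events})
    for _ in range(epochs):
        for word, observed in events:
            cues = trigrams(word)
            for o in outcomes:
                activation = sum(weights[(c, o)] for c in cues)
                target = lam if o == observed else 0.0
                delta = eta * (target - activation)   # error-driven update
                for c in cues:
                    weights[(c, o)] += delta
    return weights, outcomes

def predict(weights, outcomes, word):
    cues = trigrams(word)
    return max(outcomes, key=lambda o: sum(weights[(c, o)] for c in cues))

# hypothetical toy lexicon: initial vs. second-syllable stress
events = [("honest", "initial"), ("rabbit", "initial"),
          ("agenda", "second"), ("agree", "second")]
weights, outcomes = train_ndl(events)
print(predict(weights, outcomes, "honest"))   # -> 'initial'
```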
This study scrutinizes press photographs published during the first 6 weeks of the Russian War in Ukraine, beginning February 24th, 2022. Its objective is to shed light on the emotions evoked in Internet-savvy audiences. This empirical research aims to contribute to the understanding of emotional media effects that shape attitudes and actions of ordinary citizens. The main research questions are: What kind of empathic reactions are observed during the Q-sort study? Which visual patterns are relevant for which emotional evaluations and attributions? The assumption is that the evaluations and attributions of empathy are not random, but follow specific patterns. The empathic reactions are based on visual patterns which, in turn, influence the type of empathic reaction. The identification of specific categories for visual and emotional reaction patterns was arrived at in different methodological processes. Visual pattern categories were developed inductively, using the art history method of iconography-iconology to identify six distinct types of visual motifs in a final sample of 33 war photographs. The overarching categories for empathic reactions (empty empathy, vicarious traumatization and witnessing) were applied deductively, building on E. Ann Kaplan's pivotal distinctions. The main result of this research is three novel categories that combine visual patterns with empathic reaction patterns. The labels for these categories are a direct result of the Q-factorial analysis, interpreted through the lens of iconography-iconology. An exploratory nine-scale forced-choice Q-sort study (N = 33 stimuli) was implemented, followed by self-report interviews with a total of 25 participants [16 female (64%), 9 male (36%), mean age = 26.4 years]. Results from this exploratory research include motivational statements on the meanings of war photography from semi-structured post-sort interviews. The major result of this study is three types of visual patterns ("factors") that govern distinct empathic reactions in participants: Factor 1 is "veiled empathy", with the highest empathy being attributed to photos showing victims whose corpses or faces were veiled. Additional features of "veiled empathy" are a strong anti-politician bias and a heightened awareness of potential visual manipulation. Factor 2 is "mirrored empathy", with the highest empathy attributions to photos displaying human suffering openly. Factor 3 focused on the context. It showed a proclivity for documentary-style photography. This pattern ranked photos without clear contextualization lower in empathy than those photos displaying the fully contextualized setting. To the best of our knowledge, no study has tested empathic reactions to war photography empirically. In this respect, the study is novel, but also exploratory. Findings like the three patterns of visual empathy might be helpful for photo selection processes in journalism, for political decision-making, for the promotion of relief efforts, and for coping strategies in civil society to deal with the potentially numbing or traumatizing visual legacy of the War in Ukraine.
Survey data can be viewed as incomplete or partially missing from a variety of perspectives and there are different ways of dealing with this kind of data in the prediction and the estimation of economic quantities. In this thesis, we present two selected research contexts in which the prediction or estimation of economic quantities is examined under incomplete survey data.
These contexts are first the investigation of composite estimators in the German Microcensus (Chapters 3 and 4) and second extensions of multivariate Fay-Herriot (MFH) models (Chapters 5 and 6), which are applied to small area problems.
Composite estimators are estimation methods that take into account the sample overlap in rotating panel surveys such as the German Microcensus in order to stabilise the estimation of the statistics of interest (e.g. employment statistics). Due to the partial sample overlaps, information from previous samples is only available for some of the respondents, so the data are partially missing.
MFH models are model-based estimation methods that work with aggregated survey data in order to obtain more precise estimation results for small area problems compared to classical estimation methods. In these models, several variables of interest are modelled simultaneously. The survey estimates of these variables, which are used as input in the MFH models, are often partially missing. If the domains of interest are not explicitly accounted for in a sampling design, the sizes of the samples allocated to them can, by chance, be small. As a result, it can happen that either no estimates can be calculated at all or that the estimated values are not published by statistical offices because their variances are too large.
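For orientation, the univariate Fay-Herriot model, of which the MFH models used here are the multivariate extension, can be written for domains d = 1, ..., D as

$$
\hat{\theta}_d = \theta_d + e_d, \qquad \theta_d = \mathbf{x}_d^{\top}\boldsymbol{\beta} + u_d,
$$

with independent sampling errors e_d ~ N(0, ψ_d) (the ψ_d treated as known) and area effects u_d ~ N(0, σ_u²); in the multivariate case, θ_d, e_d and u_d become vectors of the several variables of interest with corresponding covariance matrices.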
Coastal erosion describes the displacement of land caused by destructive sea waves, currents or tides. Due to global climate change and associated phenomena such as melting polar ice caps and changing ocean current patterns, which result in rising sea levels or increased current velocities, the need for countermeasures is continuously increasing. Today, major efforts have been made to mitigate these effects using groins, breakwaters and various other structures.
This thesis develops a novel approach to this problem by applying shape optimization to the obstacles. For this reason, the results of this thesis always comprise the following three distinct aspects:
The selected wave propagation model, i.e. the modeling of wave propagation towards the coastline, using various wave formulations ranging from steady to unsteady descriptions, described from the Lagrangian or Eulerian viewpoint with all its specialties. More precisely, in the Eulerian setting, first a steady Helmholtz equation in the form of a scattering problem is investigated, followed by the shallow water equations in classical form, equipped with porosity, sediment portability and further subtleties. Secondly, in a Lagrangian framework the Lagrangian shallow water equations form the center of interest.
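As background for the first of these formulations (stated here in a generic textbook form, not necessarily with the exact boundary conditions used in the thesis), a steady Helmholtz scattering problem around an obstacle Ω with wave number k reads

$$
\Delta u + k^{2}u = 0 \ \ \text{in } \mathbb{R}^{2}\setminus\overline{\Omega}, \qquad u = -u^{\mathrm{inc}} \ \ \text{on } \partial\Omega \ \ (\text{sound-soft obstacle}), \qquad \lim_{r\to\infty}\sqrt{r}\,\big(\partial_{r}u - \mathrm{i}ku\big)=0,
$$

where u is the scattered field, u^inc the incident wave, and the last condition is the Sommerfeld radiation condition ensuring outgoing waves.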
The chosen discretization: depending on the nature and peculiarities of the constraining partial differential equation, we choose between finite elements in conjunction with a continuous or discontinuous Galerkin method for investigations in the Eulerian description. In addition, the Lagrangian viewpoint lends itself to mesh-free, particle-based discretizations, where smoothed particle hydrodynamics is used.
The method for shape optimization with respect to the obstacle's shape over an appropriate cost function, constrained by the solution of the selected wave-propagation model. In this sense, we rely on a differentiate-then-discretize approach for free-form shape optimization in the Eulerian set-up, and reverse the order in Lagrangian computations.
In spite of the wide agreement among linguists as to the significance of spoken language data, actual speech data have not formed the basis of empirical work on English as much as one would think. The present paper is intended to contribute to changing this situation, on a theoretical and on a practical level. On a theoretical level, we discuss different research traditions within (English) linguistics. Whereas speech data have become increasingly important in various linguistic disciplines, major corpora of English developed within the corpus-linguistic community, carefully sampled to be representative of language usage, are usually restricted to orthographic transcriptions of spoken language. As a result, phonological phenomena have remained conspicuously understudied within traditional corpus linguistics. At the same time, work with current speech corpora often requires a considerable level of specialist knowledge and tailor-made solutions. On a practical level, we present a new feature of BNCweb (Hoffmann et al. 2008), a user-friendly interface to the British National Corpus, which gives users access to audio and phonemic transcriptions of more than five million words of spontaneous speech. With the help of a pilot study on the variability of intrusive r we illustrate the scope of the new possibilities.
This paper tested the ability of Mandarin learners of German, whose native language has lexical tone, to imitate pitch accent contrasts in German, an intonation language. In intonation languages, pitch accents do not convey lexical information; also, pitch accents are sparser than lexical tones as they only associate with prominent words in the utterance. We compared two kinds of German pitch-accent contrasts: (1) a “non-merger” contrast, which Mandarin listeners perceive as different and (2) a “merger” contrast, which sounds more similar to Mandarin listeners. Speakers of a tone language are generally very sensitive to pitch. Hypothesis 1 (H1) therefore stated that Mandarin learners produce the two kinds of contrasts similarly to native German speakers. However, the documented sensitivity to tonal contrasts, at the expense of processing phrase-level intonational contrasts, may generally hinder target-like production of intonational pitch accents in the L2 (Hypothesis 2, H2). Finally, cross-linguistic influence (CLI) predicts a difference in the realization of these two contrasts as well as improvement with higher proficiency (Hypothesis 3, H3). We used a delayed imitation paradigm, which is well-suited for assessing L2-phonetics and -phonology because it does not necessitate access to intonational meaning. We investigated the imitation of three kinds of accents, which were associated with the sentence-final noun in short wh-questions (e.g., Wer malt denn Mandalas, lit: “Who draws PRT mandalas?” “Who likes drawing mandalas?”). In Experiment 1, 28 native speakers of Mandarin participated (14 low- and 14 high-proficient). The learners’ productions of the two kinds of contrasts were analyzed using General Additive Mixed Models to evaluate differences in pitch accent contrasts over time, in comparison to the productions of native German participants from an earlier study in our lab. Results showed a more pronounced realization of the non-merger contrast compared to German natives and a less distinct realization of the merger contrast, with beneficial effects of proficiency, lending support to H3. Experiment 2 tested low-proficient Italian learners of German (whose L1 is an intonation language) to contextualize the Mandarin data and further investigate CLI. Italian learners realized the non-merger contrast more target-like than Mandarin learners, lending additional support to CLI (H3).
Three Kinds of Rising-Falling Contours in German wh-Questions: Evidence From Form and Function
(2022)
The intonational realization of utterances is generally characterized by regional as well as inter- and intra-speaker variability in f0. Category boundaries thus remain “fuzzy” and it is non-trivial how the (continuous) acoustic space maps onto (discrete) pitch accent categories. We focus on three types of rising-falling contours, which differ in the alignment of L(ow) and H(igh) tones with respect to the stressed syllable. Most intonational systems proposed for German have described two rising accent categories, e.g., L+H* and L*+H in the German ToBI system. L+H* has a high-pitched stressed syllable and a low leading tone aligned in the pre-tonic syllable; L*+H a low-pitched stressed syllable and a high trailing tone in the post-tonic syllable. There are indications of the existence of a third category, which lies between these two, with both L and H aligned within the stressed syllable, henceforth termed (LH)*. In the present paper, we empirically investigate the distinctiveness of three rising-falling contours [L+H*, (LH)*, and L*+H, all with a subsequent low boundary tone] in German wh-questions. We employ an approach that addresses both the form and the function of the contours, also taking regional variation into account. In Experiment 1 (form), we used a delayed imitation paradigm to test whether Northern and Southern German speakers can imitate the three rising-falling contours in wh-questions as distinct contours. In Experiment 2 (function), we used a free association task to investigate whether listeners interpret the pragmatic meaning of the three contours differently. Imitation results showed that German speakers—both from the North and the South—reproduced the three contours. There was a small but significant effect of regional variety such that contours produced by speakers from the North were slightly more distinct than those by speakers from the South. In the association task, listeners from both varieties attributed distinct meanings to the (LH)* accent as opposed to the two ToBI accents L+H* and L*+H. Combined evidence from form and function suggests that three distinct contours can be found in the acoustic and perceptual space of German rising-falling contours.
Behavioural traces from interactions with digital technologies are diverse and abundant. Yet, their capacity for theory-driven research is still to be established. In the present cumulative dissertation project, I deliberate on the caveats and potentials of digital behavioural trace data in behavioural and social science research. One use case is online radicalisation research. The three studies included set out to discern the state of the art of methods and constructs employed in radicalisation research, at the intersection of traditional methods and digital behavioural trace data. Firstly, based on a systematic literature review of empirical work, I display the prevalence of digital behavioural trace data across different research strands and discern determinants and outcomes of radicalisation constructs. Secondly, I extract hypotheses and constructs from this literature review and integrate them into a framework from network theory. This graph of hypotheses, in turn, makes the relative importance of theoretical considerations explicit. One implication of visualising the assumptions in the field is to systematise bottlenecks for the analysis of digital behavioural trace data and to provide the grounds for the genesis of new hypotheses. Thirdly, I provide a proof of concept for combining a theoretical framework from conspiracy theory research (as a specific form of radicalisation) with digital behavioural traces. I argue for marrying theoretical assumptions derived from temporal signals of posting behaviour with semantic meaning from textual content, resting on a framework from evolutionary psychology. In the light of these findings, I conclude by discussing important potential biases at different stages of the research cycle as well as practical implications.
Issues in Price Measurement
(2022)
This thesis focuses on issues in price measurement and consists of three chapters. Due to outdated weighting information, a Laspeyres-based consumer price index (CPI) is prone to accumulating upward bias. Therefore, chapter 1 introduces and examines simple and transparent revision approaches that retrospectively address the source of the bias. They provide a consistent long-run time series of the CPI and require no additional information. Furthermore, a coherent decomposition of the bias into the contributions of individual product groups is developed. In a case study, the approaches are applied to a Laspeyres-based CPI. The empirical results confirm the theoretical predictions. The proposed revision approaches are applicable not only to most national CPIs but also to other price-level measures such as the producer price index or the import and export price indices.
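For reference, the Laspeyres price index underlying such a CPI compares current prices with base-period prices using fixed base-period quantities,

$$
P_{L}^{0,t} \;=\; \frac{\sum_{i} p_{i}^{t} q_{i}^{0}}{\sum_{i} p_{i}^{0} q_{i}^{0}} \;=\; \sum_{i} w_{i}^{0}\,\frac{p_{i}^{t}}{p_{i}^{0}}, \qquad w_{i}^{0} = \frac{p_{i}^{0} q_{i}^{0}}{\sum_{j} p_{j}^{0} q_{j}^{0}},
$$

so that the expenditure weights w_i^0 remain frozen at the base period; as the weighting information becomes outdated, this fixed-weight structure is what allows the upward bias addressed by the revision approaches to accumulate.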
Chapter 2 is dedicated to the measurement of import and export price indices. Such indices are complicated by the impact of exchange rates. These indices are usually also compiled as some Laspeyres-type index; therefore, substitution bias is an issue, and the terms of trade (the ratio of the export and import price indices) are also likely to be distorted. The underlying substitution bias accumulates over time. The chapter applies a simple and transparent retroactive correction approach that addresses the source of the substitution bias and produces meaningful long-run time series of import and export price levels and, therefore, of the terms of trade. Furthermore, an empirical case study is conducted that demonstrates the efficacy and versatility of the correction approach.
Chapter 3 leaves the field of index revision and studies another issue in price measurement, namely the economic evaluation, in monetary terms, of digital products that have zero market prices. This chapter explores different methods of economic valuation and pricing of free digital products and proposes an alternative way to calculate the economic value and a shadow price of free digital products: the Usage Cost Model (UCM). The goal of the chapter is, first of all, to formulate a theoretical framework and to incorporate an alternative measure of the value of free digital products. However, an empirical application is also presented to show how the theoretical model works. Some conclusions on applicability are drawn at the end of the chapter.
Broadcast media such as television have spread rapidly worldwide in the last century. They provide viewers with access to new information and also represent a source of entertainment that unconsciously exposes them to different social norms and moral values. Although the potential impact of exposure to television content has been studied intensively in economic research in recent years, studies examining the long-term causal effects of media exposure are still rare. Therefore, Chapters 2 to 4 of this thesis contribute to a better understanding of the long-term effects of television exposure.
Chapter 2 empirically investigates whether access to reliable environmental information through television can influence individuals' environmental awareness and pro-environmental behavior. Analyzing exogenous variation in Western television reception in the German Democratic Republic shows that access to objective reporting on environmental pollution can enhance concerns regarding pollution and affect the likelihood of being active in environmental interest groups.
Chapter 3 utilizes the same natural experiment and explores the relationship between exposure to foreign mass media content and xenophobia. In contrast to the state television broadcaster in the German Democratic Republic, West German television regularly confronted its viewers with foreign (non-German) broadcasts. By applying multiple measures for xenophobic attitudes, our findings indicate a persistent mitigating impact of foreign media content on xenophobia.
Chapter 4 deals with another unique feature of West German television. In contrast to East German media, Western television programs regularly exposed their audience to unmarried and childless characters. The results suggest that exposure to different gender stereotypes contained in television programs can affect marriage, divorce, and birth rates. However, our findings indicate that mainly women were affected by the exposure to unmarried and childless characters.
Chapter 5 examines the influence of social media marketing on crowd participation in equity crowdfunding. By analyzing 26,883 investment decisions on three German equity crowdfunding platforms, our results show that startups can influence the success of their equity crowdfunding campaign through social media posts on Facebook and Twitter.
In Chapter 6, we incorporate the concept of habit formation into the theoretical literature on trade unions and contribute to a better understanding of how internal habit preferences influence trade union behavior. The results reveal that such internal reference points lead trade unions to raise wages over time, which in turn reduces employment. Conducting a numerical example illustrates that the wage effects and the decline in employment can be substantial.
Climate fluctuations and the pyroclastic depositions from volcanic activity both influence ecosystem functioning and biogeochemical cycling in terrestrial and marine environments globally. These controlling factors are crucial for the evolution and fate of the pristine but fragile fjord ecosystem in the Magellanic moorlands (~53°S) of southernmost Patagonia, which is considered a critical hotspot for organic carbon burial and marine bioproductivity. At this active continental margin in the core zone of the southern westerly wind belt (SWW), frequent Plinian eruptions and the extremely variable, hyper-humid climate should have efficiently shaped ecosystem functioning and land-to-fjord mass transfer throughout the Late Holocene. However, a better understanding of the complex process network defining the biogeochemical cycling at this land-to-fjord continuum principally requires a detailed knowledge of substrate weathering and pedogenesis in the context of the extreme climate. Yet, research on soils, the ubiquitous presence of tephra and the associated chemical weathering, secondary mineral (trans)formation and organic matter (OM) turnover processes is rare in this remote region. This complicates an accurate reconstruction of the ecosystem's potentially sensitive response to past environmental impacts, including the dynamics of Late Holocene land-to-fjord fluxes as a function of volcanic activity and strong hydroclimate variability.
Against this background, this PhD thesis aims to disentangle the controlling factors that modulate the terrigenous element mobilization and export mechanisms in the hyper-humid Patagonian Andes and assesses their significance for fjord primary productivity over the past 4.5 kyrs BP. For the first time, distinct biogeochemical characteristics of the regional weathering system serve as a major criterion in paleoenvironmental reconstruction in the area. This approach includes broad-scale mineralogical and geochemical analyses of basement lithologies, four soil profiles, volcanic ash deposits, the non-karst stalagmite MA1 and two lacustrine sediment cores. In order to pay special attention to the possibly important temporal variations of pedosphere-atmosphere interaction and ecological consequences initiated by volcanic eruptions, the novel data were evaluated together with previously published reconstructions of paleoclimate and paleoenvironmental conditions.
The devastating high tephra loading of a single eruption from Mt. Burney volcano (MB2 at 4.216 kyrs BP) lastingly transformed this vulnerable fjord ecosystem, while acidic peaty Andosols developed from ~2.5 kyrs BP onwards after the recovery from millennium-scale acidification. The special setting is dominated by highly variable redox-pH conditions, profound volcanic ash weathering and intense OM turnover processes, which are closely linked and ultimately regulated by SWW-induced water-level fluctuations. Constant nutrient supply through sea spray deposition represents a further important control on peat accumulation and OM turnover dynamics. These extreme environmental conditions constrain the biogeochemical framework for an extended land-to-fjord export of leachates comprising various organic and inorganic colloids (i.e., Al-humus complexes and Fe-(hydr)oxides). Such tephra- and/or Andosol-sourced flux contains high proportions of terrigenous organic carbon (OCterr) and mobilized essential (micro)nutrients, e.g., bio-available Fe, that are beneficial for fjord bioproductivity. It can be assumed that this supply of bio-available Fe produced by specific Fe-(hydr)oxide (trans)formation processes from tephra components may last for more than 6 kyrs and surpasses the contribution from basement rock weathering and glacial meltwaters. However, the land-to-fjord exports of OCterr and bio-available Fe occur mostly asynchronously and are determined by the frequency and duration of redox cycles in soils or are initiated by SWW-induced extreme weather events.
The verification of (crypto)tephra layers embedded in stalagmite MA1 enabled the accurate dating of three smaller Late Holocene eruptions from Mt. Burney (MB3 at 2.291 kyrs BP and MB4 at 0.853 kyrs BP) and Aguilera (A1 at 2.978 kyrs BP) volcanoes. Apart from the improvement of the regional tephrochronology, the obtained precise 230Th/U ages allowed constraints on the ecological consequences caused by these Plinian eruptions. The deposition of these thin tephra layers should have entailed a very beneficial short-term stimulation of fjord bioproductivity with bio-available Fe and other (micro)nutrients, affecting the entire area between 52°S and 53°30'S. For such beneficial effects, the thickness of tephra deposited onto this highly vulnerable peatland ecosystem should be below a threshold of 1 cm.
The Late Holocene element mobilization and land-to-fjord transport were mainly controlled by (i) volcanic activity and tephra thickness, (ii) SWW-induced and southern-hemispheric climate variability and (iii) the current state of the ecosystem. The influence of cascading climate and environmental impacts on OCterr and Fe-(hydr)oxide fluxes to the fjord can be categorized into four individual, partly overlapping scenarios. These different scenarios take into account the previously specified fundamental biogeochemical mechanisms and define frequently recurring patterns of ecosystem feedbacks governing the land-to-fjord mass transfer in the hyper-humid Patagonian Andes on the centennial scale. This PhD thesis provides first evidence of a primarily tephra-sourced, continuous and long-lasting (micro)nutrient fertilization for phytoplankton growth in South Patagonian fjords, which is ultimately modulated by variations in SWW intensity. It highlights the climate sensitivity of this critical land-to-fjord element transport and particularly emphasizes the important but so far underappreciated significance of volcanic ash inputs for biogeochemical cycles at active continental margins.
Up until May 2021, the post-election insecurity in Belarus had mostly been a national affair, but with Lukashenka's regime starting to retaliate against foreign actors, the crisis internationalised. This article follows the development of Belarus-Lithuania border dynamics between the 2020 Belarusian presidential election and the start of the 2022 Russian invasion of Ukraine. A qualitative content analysis of English-language articles published by the Lithuanian public broadcaster LRT shows that there were relatively few changes to the border dynamics in the period between 9 August 2020 and 26 May 2021. After 26 May 2021, the border dynamics changed significantly: the Belarusian regime started facilitating migration, and more than 4,200 irregular migrants crossed into Lithuania from Belarus in 2021. In response, Lithuania reinforced its border protection and tried to deal with the irregular migration flows. Calls for action were made, protests were held, and the country received international support.
Let K be a compact subset of the complex plane. Then the family of polynomials P is dense in A(K), the space of all continuous functions on K that are holomorphic on the interior of K, endowed with the uniform norm, if and only if the complement of K is connected. This is the statement of Mergelyan's celebrated theorem.
There are, however, situations where not all polynomials are required to approximate every f ∈ A(K) but where there are strict subspaces of P that are still dense in A(K). If, for example, K is a singleton, then the subspace of all constant polynomials is dense in A(K). On the other hand, if 0 is an interior point of K, then no strict subspace of P can be dense in A(K).
In between these extreme cases, the situation is much more complicated. It turns out that which subspaces of P are dense in A(K) is determined mostly by the geometry of K and its location in the complex plane. In Chapter 1, we give an overview of the known results.
Our first main theorem, which we will give in Chapter 3, deals with the case where the origin is not an interior point of K. We will show that if K is a compact set with connected complement and if 0 is not an interior point of K, then any subspace Q ⊂ P which contains the constant functions and all but finitely many monomials is dense in A(K).
There is a close connection between lacunary approximation and the theory of universality. At the end of Chapter 3, we will illustrate this connection by applying the above result to prove the existence of certain universal power series. To be specific, if K is a compact set with connected complement, if 0 is a boundary point of K and if A_0(K) denotes the subspace of A(K) of those functions that satisfy f(0) = 0, then there exists an A_0(K)-universal formal power series s, where A_0(K)-universal means that the family of partial sums of s forms a dense subset of A_0(K).
In addition, we will show that no formal power series is simultaneously universal for all such K.
The condition on the subspace Q in the main result of Chapter 3 is quite restrictive, but this should not be too surprising: The result applies to the largest possible class of compact sets.
In Chapter 4, we impose a further restriction on the compact sets under consideration, and this will allow us to weaken the condition on the subspace Q. The result that we are going to give is similar to one of those presented in the first chapter, namely the one due to Anderson. In his article “Müntz-Szasz type approximation and the angular growth of lacunary integral functions”, he gives a criterion for a subspace Q of P to be dense in A(K) where K is entirely contained in some closed sector with vertex at the origin.
We will consider compact sets with connected complement that are -- with the possible exception of the origin -- entirely contained in some open sector with vertex at the origin. What we are going to show is that if K\{0} is contained in an open sector of opening angle 2α and if Λ is some subset of the nonnegative integers, then the span of {z → z^λ : λ ∈ Λ} is dense in A(K) whenever 0 ∈ Λ and some Müntz-type condition is satisfied.
Conversely, we will show that if a similar condition is not satisfied, then we can always find a compact set K with connected complement such that K\{0} is contained in some open sector of opening angle 2α and such that the span of {z → z^λ : λ ∈ Λ} fails to be dense in A(K).
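For orientation only, the flavour of such a Müntz-type condition can be recalled via the classical Müntz–Szász theorem on a real interval; the precise sector condition used in Chapter 4, which involves the opening angle 2α, is a different (angular) variant and is not reproduced here:
% Classical Müntz--Szász theorem on [0,1], given as background; the angular
% analogue referred to above is assumed to be of a similar form.
\[
  \overline{\operatorname{span}}\bigl\{\,1,\; x^{\lambda_1},\; x^{\lambda_2},\; \dots\,\bigr\}
  \;=\; C[0,1]
  \quad\Longleftrightarrow\quad
  \sum_{k\ge 1} \frac{1}{\lambda_k} \;=\; \infty,
  \qquad 0<\lambda_1<\lambda_2<\dots
\]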
The present dissertation was developed to emphasize the importance of self-regulatory abilities and to derive novel opportunities to empower self-regulation. From the perspective of PSI (Personality Systems Interactions) theory (Kuhl, 2001), interindividual differences in self-regulation (action vs. state orientation) and their underlying mechanisms are examined in detail. Based on these insights, target-oriented interventions are derived, developed, and scientifically evaluated. The present work comprises a total of four studies which, on the one hand, highlight the advantages of good self-regulation (e.g., enacting difficult intentions under demands; relation with prosocial power motive enactment and well-being). On the other hand, mental contrasting (Oettingen et al., 2001), an established self-regulation method, is examined from a PSI perspective and evaluated as a method to support individuals who struggle with self-regulatory deficits. Further, derived from the assumptions of PSI theory, I developed and evaluated a novel method (affective shifting) that aims to support individuals in overcoming self-regulatory deficits. Affective shifting supports the decisive changes in positive affect required for successful intention enactment (Baumann & Scheffer, 2010). The results of the present dissertation show that self-regulated changes between high and low positive affect are crucial for efficient intention enactment and that methods such as mental contrasting and affective shifting can empower self-regulation and thus help individuals to successfully close the gap between intention and action.
Statistical matching offers a way to broaden the scope of analysis without increasing respondent burden and costs. These would result from conducting a new survey or adding variables to an existing one. Statistical matching aims at combining two datasets A and B referring to the same target population in order to analyse variables, say Y and Z, together, that initially were not jointly observed. The matching is performed based on matching variables X that correspond to common variables present in both datasets A and B. Furthermore, Y is only observed in B and Z is only observed in A. To overcome the fact that no joint information on X, Y and Z is available, statistical matching procedures have to rely on suitable assumptions. Therefore, to yield a theoretical foundation for statistical matching, most procedures rely on the conditional independence assumption (CIA), i.e. given X, Y is independent of Z.
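As a minimal illustration of this data situation and of a classical, CIA-based matching step (a nearest-neighbour hot-deck match on X, not the integrated Bayesian procedure developed later in this thesis), the following Python sketch uses purely synthetic data; all variable names and models are illustrative.
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Recipient file A observes (X, Z); donor file B observes (X, Y).
n_a, n_b = 500, 800
x_a = rng.normal(size=(n_a, 2))
x_b = rng.normal(size=(n_b, 2))
z_a = x_a @ np.array([1.0, -0.5]) + rng.normal(scale=0.5, size=n_a)
y_b = x_b @ np.array([0.8, 0.3]) + rng.normal(scale=0.5, size=n_b)

A = pd.DataFrame(x_a, columns=["x1", "x2"]).assign(z=z_a)
B = pd.DataFrame(x_b, columns=["x1", "x2"]).assign(y=y_b)

# Classical approach under the CIA: impute Y into A by taking the Y value of the
# nearest donor in B with respect to the matching variables X.
nn = NearestNeighbors(n_neighbors=1).fit(B[["x1", "x2"]])
_, idx = nn.kneighbors(A[["x1", "x2"]])
A["y_imputed"] = B["y"].to_numpy()[idx.ravel()]

# The matched file now allows a (CIA-dependent) regression of Z on Y.
print(np.polyfit(A["y_imputed"], A["z"], deg=1))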
The goal of this thesis is to encompass both the statistical matching process and the analysis of the matched dataset. More specifically, the aim is to estimate a linear regression model for Z given Y and possibly other covariates in data A. Since the validity of the assumptions underlying the matching process determines the validity of the obtained matched file, the accuracy of statistical inference is determined by the suitability of the assumptions. By putting the focus on these assumptions, this work proposes a systematic categorisation of approaches to statistical matching by relying on graphical representations in the form of directed acyclic graphs. These graphs are particularly useful in representing dependencies and independencies, which are at the heart of the statistical matching problem. The proposed categorisation distinguishes between (a) joint modelling of the matching and the analysis (integrated approach), and (b) matching subsequently followed by statistical analysis of the matched dataset (classical approach). Whereas the classical approach relies on the CIA, implementations of the integrated approach are only valid if they converge, i.e. if the specified models are identifiable and, in the case of MCMC implementations, if the algorithm converges to a proper distribution.
In this thesis an implementation of the integrated approach is proposed, where the imputation step and the estimation step are jointly modelled through a fully Bayesian MCMC estimation. It is based on a linear regression model for Z given Y and accounts for both a linear regression model and a random effects model for Y. Furthermore, it is valid when the instrumental variable assumption (IVA) holds. The IVA corresponds to: (a) Z is independent of a subset X' of X given Y and X*, where X* = X\X', and (b) Y is correlated with X' given X*. The proof that the joint Bayesian modelling of both the model for Z and the model for Y through an MCMC simulation converges to a proper distribution is provided in this thesis. In a first model-based simulation study, the proposed integrated Bayesian procedure is assessed with regard to the data situation, convergence issues, and underlying assumptions. Special interest lies in the investigation of the interplay of the Y and the Z model within the imputation process. It turns out that failure scenarios can be distinguished by comparing the CIA and the IVA in the completely observed dataset.
Finally, both approaches to statistical matching, i.e. the classical approach and the integrated approach, are subject to an extensive comparison in (1) a model-based simulation study and (2) a simulation study based on the AMELIA dataset, an openly available, very large synthetic dataset that is, by construction, similar to the EU-SILC survey. As an additional integrated approach, a Bayesian additive regression trees (BART) model is considered for modelling Y. These integrated procedures are compared to the classical approach, represented by predictive mean matching in the form of multiple imputation by chained equations. Suitably chosen, the first simulation framework offers the possibility to clarify aspects related to the underlying assumptions by comparing the IVA and the CIA and by evaluating the impact of the matching variables. Thus, within this simulation study two related aspects are of special interest: the assumptions underlying each method and the incorporation of additional matching variables. The simulation on the AMELIA dataset offers a close-to-reality framework with the advantage of knowing the whole setting, i.e. the whole data X, Y and Z. Special interest lies in investigating assumptions by adding and excluding auxiliary variables in order to enhance conditional independence and assess the sensitivity of the methods to this issue. Furthermore, the benefit of having an overlap of units in data A and B for which information on X, Y and Z is available is investigated. It turns out that the integrated approach yields better results than the classical approach when the CIA clearly does not hold. Moreover, even when the classical approach obtains unbiased results for the regression coefficient of Y in the model for Z, it is the method relying on BART that performs best over all coefficients.
Concluding, this work constitutes a major contribution to the clarification of assumptions essential to any statistical matching procedure. By introducing graphical models to identify existing approaches to statistical matching combined with the subsequent analysis of the matched dataset, it offers an extensive overview, categorisation and extension of theory and application. Furthermore, in a setting where none of the assumptions are testable (since X, Y and Z are not observed together), the integrated approach is a valuable asset by offering an alternative to the CIA.
The digital progress of recent decades rests to a large extent on the innovative capacity of young, emerging companies. While these companies are united by their high degree of innovativeness, they simultaneously face a high need for financial resources in order to put their planned innovation and growth goals into practice. Since such companies can often show few or no assets, revenues, or profitability, raising external capital is frequently difficult to impossible. From this circumstance, the business model of risk financing, so-called venture capital, emerged in the middle of the twentieth century. Venture capitalists invest in promising young companies, support them in their growth, and after a fixed period sell their shares, ideally at a multiple of their original value. Numerous young companies apply for investments from these venture capitalists, but only a very small number receive them. To identify the most promising companies, investors screen the applications against various criteria, so that many companies are already eliminated from the pool of potential investment targets in the first step of the application phase. Prior research discusses which criteria move investors to invest. Building on this, this dissertation pursues the goal of gaining a deeper understanding of the factors that influence investors' decision making. In particular, it examines how personal characteristics of the investors, as well as of the founders, affect the investment decision. These investigations are complemented by an analysis of how founders' digital appearance affects venture capitalists' decision making. As a second goal, this dissertation seeks insights into the consequences of a successful investment for the founder. In total, this dissertation comprises four studies, which are described in more detail below.
Chapter 2 examines how certain human capital characteristics of investors affect their decision behavior. With the help of preliminary interviews and literature research, a total of seven criteria were identified that venture capital investors use in their decision making. Subsequently, 229 investors took part in a conjoint experiment, which made it possible to show how important the respective criteria are for the decision. Of particular interest is how the importance of the criteria differs depending on the investors' human capital characteristics. It can be shown that the importance of the criteria varies with the investors' educational background and experience. For example, investors with a higher educational degree and investors with entrepreneurial experience place considerably more weight on the international scalability of the companies. The importance of the criteria also differs depending on the field of education: investors trained in the natural sciences, for instance, put a much stronger focus on the value added of the product or service. It can also be shown that investors with more investment experience rate the experience of the management team as considerably more important than investors with less investment experience. These results enable founders to target their applications for venture capital financing more precisely, for example by analyzing the professional background of potential investors and adapting the application documents accordingly, such as by placing greater emphasis on particularly relevant criteria.
The study presented in Chapter 3 draws on the data of the same conjoint experiment as Chapter 2, but focuses on the difference between investors from the USA and investors from continental Europe. For this purpose, subsamples were created in which 128 experiment participants are located in the USA and 302 in continental Europe. The analysis of the data shows that US investors, compared to investors in continental Europe, place a significantly stronger focus on the companies' revenue growth. Continental European investors, in turn, place a considerably stronger focus on the companies' international scalability. To better interpret the results of the analysis, they were subsequently discussed with four American and seven European investors. The European investors confirm the importance of high international scalability, owing to the sometimes small size of European countries and the resulting need to scale internationally quickly in order to achieve satisfactory growth rates. The comparatively lower focus on revenue growth in Europe was attributed to a lack of funds for rapid expansion. At the same time, the strong focus of US investors on revenue growth is explained by the higher tendency toward an IPO in the USA, where high revenues serve as a value driver. The results of this chapter enable founders to align their application more closely with the most important criteria of potential investors in order to increase the probability of a successful investment decision. Furthermore, the results offer investors who participate in cross-border syndicated investments the opportunity to better understand the preferences of the other investors and to align investment criteria with potential partners.
Chapter 4 examines whether certain character traits of the so-called Schumpeterian entrepreneur influence the probability of a second venture capital investment. For this purpose, messages posted by founders on Twitter as well as information on investment rounds available on the Crunchbase platform were used. In total, more than two million tweets from 3,313 founders were analyzed with the help of text analysis software. The results of the study suggest that some traits typical of Schumpeterian founders increase the chances of a further investment, while others have no or negative effects. Founders who display strong optimism and their entrepreneurial vision on Twitter increase their chances of a second venture capital financing, whereas these chances are reduced by an excessively strong striving for achievement. These results are of high practical relevance for founders seeking venture capital, who can thereby manage their digital identity in a more targeted way in order to increase the probability of a further investment.
Finally, Chapter 5 examines how the digital identity of founders changes after they have received a successful venture capital investment. For this purpose, both Twitter data and Crunchbase data collected for the study in Chapter 4 were used. Using text analysis and panel data regressions, the tweets of 2,094 founders before and after receiving the investment were examined. It can be shown that receiving a venture capital investment increases founders' self-confidence, positive emotions, professionalization, and leadership qualities. At the same time, however, the authenticity of the messages written by the founders decreases. Using interaction effects, it can further be shown that the increase in self-confidence is positively moderated by the investor's reputation, while the amount of the investment negatively moderates authenticity. These findings enable investors to better understand founders' development after a successful investment, putting them in a position to better monitor their founders' activities on social media platforms and, if necessary, to support them in adjusting these activities.
The studies presented in Chapters 2 to 5 thus contribute to a better understanding of decision making in the venture capital process. The current state of research is extended by findings concerning the influence of both investor and founder characteristics. It is also shown how the investment can affect the founders themselves. The implications of the results, as well as limitations and opportunities for future research, are described in more detail in Chapter 6. Since the methods and data used in this dissertation have only been used in the context of venture capital research for a few years, or have only recently become available at all, the dissertation lends itself as a basis for further research.
For decades, academics and practitioners have aimed to understand whether and how (economic) events affect firm value. Optimally, these events occur exogenously, i.e. suddenly and unexpectedly, so that an accurate evaluation of the effects on firm value can be conducted. However, recent studies show that even the evaluation of exogenous events is often prone to many challenges that can lead to diverse interpretations, resulting in heated debates. Recently, there have been intense debates in particular on the impact of takeover defenses and of Covid-19 on firm value. The announcements of takeover defenses and the propagation of Covid-19 are exogenous events that occur worldwide and are economically important, but have been insufficiently examined. By answering open research questions, this dissertation aims to provide a greater understanding of the heterogeneous effects that exogenous events such as the announcements of takeover defenses and the propagation of Covid-19 have on firm value. In addition, this dissertation analyzes the influence of certain firm characteristics on the effects of these two exogenous events and identifies influencing factors that explain contradictory results in the existing literature and can thus reconcile different views.
In common shape optimization routines, deformations of the computational mesh usually suffer from decrease of mesh quality or even destruction of the mesh. To mitigate this, we propose a theoretical framework using so-called pre-shape spaces. This gives an opportunity for a unified theory of shape optimization, and of problems related to parameterization and mesh quality. With this, we stay in the free-form approach of shape optimization, in contrast to parameterized approaches that limit possible shapes. The concept of pre-shape derivatives is defined, and corresponding structure and calculus theorems are derived, which generalize classical shape optimization and its calculus. Tangential and normal directions are featured in pre-shape derivatives, in contrast to classical shape derivatives featuring only normal directions on shapes. Techniques from classical shape optimization and calculus are shown to carry over to this framework, and are collected in generality for future reference.
A pre-shape parameterization tracking problem class for mesh quality is introduced, which is solvable by use of pre-shape derivatives. This class allows for non-uniform user prescribed adaptations of the shape and hold-all domain meshes. It acts as a regularizer for classical shape objectives. Existence of regularized solutions is guaranteed, and corresponding optimal pre-shapes are shown to correspond to optimal shapes of the original problem, which additionally achieve the user prescribed parameterization.
We present shape gradient system modifications, which allow simultaneous numerical shape optimization with mesh quality improvement. Further, consistency of modified pre-shape gradient systems is established. The computational burden of our approach is limited, since additional solution of possibly larger (non-)linear systems for regularized shape gradients is not necessary. We implement and compare these pre-shape gradient regularization approaches for a 2D problem, which is prone to mesh degeneration. As our approach does not depend on the choice of forms to represent shape gradients, we employ and compare weak linear elasticity and weak quasilinear p-Laplacian pre-shape gradient representations.
We also introduce a Quasi-Newton-ADM inspired algorithm for mesh quality, which guarantees sufficient adaption of meshes to user specification during the routines. It is applicable in addition to simultaneous mesh regularization techniques.
Unrelated to mesh regularization techniques, we consider shape optimization problems constrained by elliptic variational inequalities of the first kind, so-called obstacle-type problems. In general, standard necessary optimality conditions cannot be formulated in a straightforward manner for such semi-smooth shape optimization problems. Under appropriate assumptions, we prove existence and convergence of adjoints for smooth regularizations of the VI-constraint. Moreover, we derive shape derivatives for the regularized problem and prove convergence to a limit object. Based on this analysis, an efficient optimization algorithm is devised and tested numerically.
All previous pre-shape regularization techniques are applied to a variational inequality constrained shape optimization problem, where we also create customized targets for increased mesh adaptation of changing embedded shapes and active set boundaries of the constraining variational inequality.
Hybrid Modelling, in general, describes the combination of at least two different methods to solve one specific task. As far as this work is concerned, Hybrid Models describe an approach to combine sophisticated, well-studied mathematical methods with Deep Neural Networks to solve parameter estimation tasks. To combine these two methods, the data structure of artificially generated acceleration data of an approximate vehicle model, the Quarter-Car-Model, is exploited. The acceleration of individual components within a coupled dynamical system can be described as a second order ordinary differential equation, including velocity and displacement of coupled states, scaled by the spring and damping coefficients of the system. An appropriate numerical integration scheme can then be used to simulate discrete acceleration profiles of the Quarter-Car-Model with a random variation of the parameters of the system. Given explicit knowledge about the data structure, one can then investigate under which conditions it is possible to estimate the parameters of the dynamical system for a set of randomly generated data samples. We test whether Neural Networks are capable of solving parameter estimation problems in general, or whether they can be used to solve several sub-tasks which support a state-of-the-art parameter estimation method. Hybrid Models are presented for parameter estimation under uncertainties, including for instance measurement noise or incompleteness of measurements, which combine knowledge about the data structure and several Neural Networks for robust parameter estimation within a dynamical system.
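A rough sketch of this data-generation step might look as follows; the masses, stiffnesses, road excitation and sampling choices below are illustrative assumptions and not the exact Quarter-Car-Model configuration or parameter ranges used in this work.
import numpy as np
from scipy.integrate import solve_ivp

def quarter_car_rhs(t, y, m_s, m_u, k_s, c_s, k_t, road):
    # States: sprung-mass position/velocity and unsprung-mass position/velocity.
    x_s, v_s, x_u, v_u = y
    a_s = (-k_s * (x_s - x_u) - c_s * (v_s - v_u)) / m_s
    a_u = (k_s * (x_s - x_u) + c_s * (v_s - v_u) - k_t * (x_u - road(t))) / m_u
    return [v_s, a_s, v_u, a_u]

def simulate_acceleration(k_s, c_s, t_end=5.0, fs=200.0):
    """Discrete sprung-mass acceleration profile for given spring/damping values."""
    m_s, m_u, k_t = 300.0, 40.0, 180_000.0               # illustrative constants
    road = lambda t: 0.01 * np.sin(2 * np.pi * 1.5 * t)  # illustrative road profile
    t_eval = np.arange(0.0, t_end, 1.0 / fs)
    sol = solve_ivp(quarter_car_rhs, (0.0, t_end), [0.0, 0.0, 0.0, 0.0],
                    t_eval=t_eval, args=(m_s, m_u, k_s, c_s, k_t, road))
    x_s, v_s, x_u, v_u = sol.y
    return (-k_s * (x_s - x_u) - c_s * (v_s - v_u)) / m_s

# Randomly vary the system parameters to build labelled samples for a network.
rng = np.random.default_rng(1)
dataset = []
for _ in range(10):
    k_s = rng.uniform(10_000, 40_000)   # spring coefficient [N/m]
    c_s = rng.uniform(500, 3_000)       # damping coefficient [N s/m]
    dataset.append((k_s, c_s, simulate_acceleration(k_s, c_s)))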
In step with constantly growing societal challenges, social enterprises have gained greatly in importance over the past decade. Social enterprises pursue the goal of solving societal problems with entrepreneurial means. Since the focus of social enterprises is not primarily on maximizing their own profits, they often have trouble obtaining suitable corporate financing and realizing their growth potential.
To gain a deeper understanding of the phenomenon of social enterprises, the first part of this dissertation uses two experiment-based studies to examine the decision behavior of investors in social enterprises. Chapter 2 therefore considers the decision behavior of impact investors. The investment approach pursued by these investors, "impact investing", goes beyond a pure orientation toward returns. Based on an experiment with 179 impact investors who made a total of 4,296 investment decisions, a conjoint study identifies their most important decision criteria when selecting social enterprises. Chapter 3 analyzes another specific group of supporters of social enterprises, focusing on social incubators. Based on the experiment, this chapter illustrates the incubators' motives and decision criteria when selecting social enterprises as well as the forms of non-financial support they offer. Among other things, the results show that the motives of social incubators for supporting social enterprises are societal, financial, or reputational in nature.
Based on two quantitative empirical studies, the second part discusses the extent to which the registration of trademarks is suitable for measuring social innovation and is related to the financial and social growth of social start-ups. Chapter 4 discusses the extent to which trademark registrations can serve to measure social innovation. Based on a text analysis of the websites of 925 social enterprises (> 35,000 subpages), four dimensions of social innovation (innovation, impact, financial, and scalability dimensions) are identified in a first step. Building on this, the chapter considers how various trademark characteristics are related to these dimensions of social innovation. The results show that, in particular, the number of registered trademarks serves as an indicator of social innovation (all dimensions). The geographical scope of the registered trademarks also plays an important role. Building on the results of Chapter 4, Chapter 5 examines the influence of trademark registrations in early company phases on the further development of the hybrid outcomes of social start-ups. In detail, Chapter 5 argues that both trademark registration as such and its various characteristics relate differently to the social and economic outcomes of social start-ups. Using a dataset of 485 social enterprises, the analyses in Chapter 5 show that social start-ups with a registered trademark exhibit comparatively higher employee growth and make a larger societal contribution.
The results of this dissertation further expand research in the field of social entrepreneurship and offer numerous implications for practice. While Chapters 2 and 3 increase the understanding of the characteristics of non-financial and financial support organizations for social enterprises, Chapters 4 and 5 create a greater understanding of the importance of trademark applications for social enterprises.
The Covid-19 pandemic and the related border restrictions have had numerous social, economic and political consequences for border regions. The temporary border closures impacted not only the lives of borderlanders whose everyday practices are embedded in cross-border spaces, but also the functioning of institutional actors involved in cross-border activities. The aim here is to investigate the communication surrounding the pandemic and the reactions and (new) strategies of cross-border institutional actors in the context of (re)bordering. Applying the concept of resilience, this paper explores coping mechanisms and modes of adaptation as well as strategies developed to adjust to new circumstances. Against this backdrop, factors that enhanced or hindered the adaptation process were identified. The German-Polish borderland serves here as a case study, although it will be situated within a wider European context.
The paper aims to identify the changes in the barriers to cross-border educational projects, especially in the context of the COVID-19 pandemic. The research focused on European borderlands in which the level of maturity of cross-border cooperation differs (the Franco-German and Polish-Czech borderlands). The author utilised qualitative research methods (desk research, in-depth interviews, case studies). An exploratory study covered the barriers that existed before the pandemic and either remained stable or changed during it, as well as the new types of barriers that emerged during the pandemic. Within both borderlands, the identified barriers were similar in general; however, their intensity varied. The key difference was the approach to these barriers within each borderland. On the Franco-German border, cross-border cooperation is more complex and deeper, while on the Polish-Czech border it is more superficial and focused on specific issues only. These differences reveal the solutions that should be implemented to mitigate the impact of the pandemic on such projects within each borderland.
This thesis focuses on threats as an experience of stress. Threats are distinguished from challenges and hindrances as another dimension of stress in challenge-hindrance models (CHM) of work stress (Tuckey et al., 2015). Multiple disciplines of psychology (e.g. stereotype, Fingerhut & Abdou, 2017; identity, Petriglieri, 2011) provide a variety of possible events that can trigger threats (e.g., failure experiences, social devaluation; Leary et al., 2009). However, a systematic consideration of triggers, and thus an overview of when the danger of threats arises, has been lacking to date. The explanation of why events are appraised as threats is related to frustrated needs (e.g., Quested et al., 2011; Semmer et al., 2007), but empirical evidence is rare and needs can cover a wide range of content (e.g., relatedness, competence, power), depending on the need approach (e.g., Deci & Ryan, 2000; McClelland, 1961). This thesis aims to shed light on the triggers (when) and the need-based mechanism (why) of threats.
In the introduction, I present threats as a dimension of stress experience (cf. Tuckey et al., 2015) and give insights into the diverse field of threat triggers (the when of threats). Further, I explain threats in terms of a frustrated need for a positive self-view, before presenting specific needs as possible determinants in the threat mechanism (the why of threats). Study 1 is a literature review based on 122 papers from interdisciplinary threat research and provides a classification of five triggers and five needs identified in explanations and operationalizations of threats. In Study 2, the five triggers and needs are ecologically validated in interviews with police officers (n = 20), paramedics (n = 10), teachers (n = 10), and employees of the German federal employment agency (n = 8). The mediating role of needs in the relationship between triggers and threats is confirmed in a correlative survey design (N = 101 leaders working part-time, Study 3) and in a controlled laboratory experiment (N = 60 two-person student teams, Study 4). The thesis ends with a general discussion of the results of the four studies, providing theoretical and practical implications.
Forest inventories provide significant monitoring information on forest health, biodiversity, resilience against disturbance, as well as its biomass and timber harvesting potential. For this purpose, modern inventories increasingly exploit the advantages of airborne laser scanning (ALS) and terrestrial laser scanning (TLS).
Although tree crown detection and delineation using ALS can be seen as a mature discipline, the identification of individual stems is a rarely addressed task. In particular, the informative value of the stem attributes—especially the inclination characteristics—is hardly known. In addition, a lack of tools for the processing and fusion of forest-related data sources can be identified. The given thesis addresses these research gaps in four peer-reviewed papers, while a focus is set on the suitability of ALS data for the detection and analysis of tree stems.
In addition to providing a novel post-processing strategy for geo-referencing forest inventory plots, the thesis could show that ALS-based stem detections are very reliable and their positions are accurate. In particular, the stems have shown to be suited to study prevailing trunk inclination angles and orientations, while a species-specific down-slope inclination of the tree stems and a leeward orientation of conifers could be observed.
Agricultural monitoring is necessary. Since the beginning of the Holocene, human agricultural practices have been shaping the face of the earth, and today around one third of the ice-free land mass consists of cropland and pastures. While agriculture is necessary for our survival, its intensity has caused many negative externalities, such as enormous freshwater consumption, the loss of forests and biodiversity, greenhouse gas emissions as well as soil erosion and degradation. Some of these externalities can potentially be ameliorated by careful allocation of crops and cropping practices, while at the same time the state of these crops has to be monitored in order to assess food security. Modern satellite-based earth observation can be an adequate tool to quantify the abundance of crop types, i.e., to produce spatially explicit crop type maps. The resources to do so, in terms of input data, reference data and classification algorithms, have been constantly improving over the past 60 years, and we now live in a time where fully operational satellites produce freely available imagery with often less than monthly revisit times at high spatial resolution. At the same time, classification models have been constantly evolving, from distribution-based statistical algorithms over machine learning to the now ubiquitous deep learning.
In this environment, we used an explorative approach to advance the state of the art of crop classification. We conducted regional case studies, focused on the study region of the Eifelkreis Bitburg-Prüm, aiming to develop validated crop classification toolchains. Because of their unique role in the regional agricultural system and because of their specific phenologic characteristics, we focused solely on maize fields.
In the first case study, we generated reference data for the years 2009 and 2016 in the study region by drawing polygons based on high resolution aerial imagery, and used these in conjunction with RapidEye imagery to produce high resolution maize maps with a random forest classifier and a Gaussian blur filter. We were able to highlight the importance of careful residual analysis, especially in terms of autocorrelation. As an end result, we were able to prove that, in spite of the severe limitations introduced by the restricted acquisition windows due to cloud coverage, high quality maps could be produced for two years, and the regional development of maize cultivation could be quantified.
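A minimal sketch of such a pixel-based workflow (purely illustrative: random toy data in place of RapidEye bands, and a simplified post-processing step rather than the exact procedure of the case study):
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for a multispectral scene: 5 bands on a 200 x 200 pixel grid.
h, w, bands = 200, 200, 5
image = rng.normal(size=(h, w, bands))

# Labelled reference pixels digitised from aerial imagery (1 = maize, 0 = other).
n_ref = 2000
rows, cols = rng.integers(0, h, n_ref), rng.integers(0, w, n_ref)
labels = rng.integers(0, 2, n_ref)

rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(image[rows, cols, :], labels)

# Predict maize probability for every pixel, smooth spatially with a Gaussian blur
# to suppress salt-and-pepper noise, and threshold to obtain the maize map.
proba = rf.predict_proba(image.reshape(-1, bands))[:, 1].reshape(h, w)
maize_map = gaussian_filter(proba, sigma=1.5) > 0.5
print(maize_map.mean())   # maize share of the toy scene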
In the second case study, we used these spatially explicit datasets to link the expansion of biogas producing units with the extended maize cultivation in the area. In a next step, we overlaid the maize maps with soil and slope rasters in order to assess spatially explicit risks of soil compaction and erosion. Thus, we were able to highlight the potential role of remote sensing-based crop type classification in environmental protection, by producing maps of potential soil hazards, which can be used by local stakeholders to reallocate certain crop types to locations with less associated risk.
In our third case study, we used Sentinel-1 data as input imagery and official statistical records as maize reference data, and were able to produce consistent modeling input data for four consecutive years. Using these datasets, we could train and validate different models in spatially and temporally independent random subsets, with the goal of assessing model transferability. We were able to show that state-of-the-art deep learning models such as UNET performed significantly better than conventional models like random forests if the model was validated in a different year or a different regional subset. We highlighted and discussed the implications for modeling robustness, and the potential usefulness of deep learning models in building fully operational global crop classification models.
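The deep learning comparison itself is beyond a short sketch, but the year-wise hold-out logic used to assess temporal transferability can be illustrated with any classifier; the toy data and column names below are assumptions, not the Sentinel-1 features of the study.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Toy samples: features, a maize label, and the year / spatial block they belong to.
n = 5000
df = pd.DataFrame(rng.normal(size=(n, 6)), columns=[f"f{i}" for i in range(6)])
df["maize"] = rng.integers(0, 2, n)
df["year"] = rng.choice([2017, 2018, 2019, 2020], n)
df["block"] = rng.choice(["north", "south", "east", "west"], n)

features = [f"f{i}" for i in range(6)]

# Temporal transferability: train on three years, validate on the held-out year.
for holdout in (2017, 2018, 2019, 2020):
    train, test = df[df.year != holdout], df[df.year == holdout]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(train[features], train["maize"])
    print(holdout, round(f1_score(test["maize"], model.predict(test[features])), 3))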
We were able to conclude that the first major barrier for global classification models is the reference data. Since most research in this area is still conducted with local field surveys, and only few countries have access to official agricultural records, more global cooperation is necessary to build harmonized and regionally stratified datasets. The second major barrier is the classification algorithm. While a lot of progress has been made in this area, the current trend of many newly appearing types of deep learning models shows great promise but has not yet consolidated. A lot of research is still necessary to determine which models perform best and most robustly while remaining transparent and usable by non-experts, so that they can be applied effortlessly by local and global stakeholders.
This thesis is concerned with two classes of optimization problems which stem mainly from statistics: clustering problems and cardinality-constrained optimization problems. We are particularly interested in the development of computational techniques to exactly or heuristically solve instances of these two classes of optimization problems.
The minimum sum-of-squares clustering (MSSC) problem is widely used to find clusters within a set of data points. The problem is also known as the $k$-means problem, since the most prominent heuristic to compute a feasible point of this optimization problem is the $k$-means method. In many modern applications, however, the clustering suffers from uncertain input data due to, e.g., unstructured measurement errors. This is problematic because the clustering result then represents a clustering of the erroneous measurements instead of retrieving the true underlying clustering structure. We address this issue by applying robust optimization techniques: we derive the strictly and $\Gamma$-robust counterparts of the MSSC problem, which are as challenging to solve as the original model. Moreover, we develop alternating direction methods to quickly compute feasible points of good quality. Our experiments reveal that the more conservative strictly robust model consistently provides better clustering solutions than the nominal and the less conservative $\Gamma$-robust models.
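For reference, the nominal MSSC objective and the $k$-means heuristic mentioned above can be written down in a few lines; this is the standard, non-robust baseline, not the strictly or $\Gamma$-robust counterparts derived in the thesis.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Nominal data: three Gaussian blobs in the plane.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2))
               for c in [(0, 0), (3, 0), (0, 3)]])

def mssc_objective(X, centers, assignment):
    """Minimum sum-of-squares clustering objective for a given feasible point."""
    return float(np.sum((X - centers[assignment]) ** 2))

# The k-means method computes a feasible (generally only locally optimal) point.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(mssc_objective(X, km.cluster_centers_, km.labels_))
print(km.inertia_)   # sklearn reports the same objective value as inertia_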
In the context of clustering problems, however, using only a heuristic solution comes with severe disadvantages regarding the interpretation of the clustering. This motivates us to study globally optimal algorithms for the MSSC problem. We note that although some algorithms have already been proposed for this problem, it is still far from being “practically solved”. Therefore, we propose mixed-integer programming techniques, which are mainly based on geometric ideas and which can be incorporated in a branch-and-cut based algorithm tailored to the MSSC problem. Our numerical experiments show that these techniques significantly improve the solution process of a state-of-the-art MINLP solver when applied to the problem.
We then turn to the study of cardinality-constrained optimization problems. We consider two famous problem instances of this class: sparse portfolio optimization and sparse regression problems. In many modern applications, it is common to consider problems with thousands of variables. Therefore, globally optimal algorithms are not always computationally viable and the study of sophisticated heuristics is very desirable. Since these problems have a discrete-continuous structure, decomposition methods are particularly well suited. We then apply a penalty alternating direction method that exploits this structure and provides very good feasible points in a reasonable amount of time. Our computational study shows that our methods are competitive with state-of-the-art solvers and heuristics.
Even though time is in most cases a good metric for measuring the costs of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data which is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load the data from memory. The topic of this thesis is a new metric to measure the costs of algorithms, called memory distance, which can be seen as an abstraction of the aspect just mentioned. We will show that there are simple algorithms which show a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we will show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Further, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory distance complexity classes.
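A simple illustrative example of the phenomenon motivating this metric (not an example taken from the thesis): summing a large row-major matrix row by row versus column by column performs the same number of additions, yet the two traversal orders touch memory very differently, which typically shows up in measured running time (the exact gap depends on the machine and on NumPy overhead).
import time
import numpy as np

a = np.random.default_rng(0).normal(size=(3000, 3000))   # stored row-major (C order)

def sum_row_major(m):
    total = 0.0
    for i in range(m.shape[0]):      # contiguous rows: cache-friendly accesses
        total += m[i, :].sum()
    return total

def sum_col_major(m):
    total = 0.0
    for j in range(m.shape[1]):      # strided columns: cache-unfriendly accesses
        total += m[:, j].sum()
    return total

for f in (sum_row_major, sum_col_major):
    t0 = time.perf_counter()
    f(a)
    print(f.__name__, round(time.perf_counter() - t0, 3), "s")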
The Second Language Acquisition of English Non-Finite Complement Clauses – A Usage-Based Perspective
(2022)
One of the most essential hypotheses of usage-based theories and many constructionist approaches to language is that language learning entails the piecemeal learning of constructions on the basis of general cognitive mechanisms and exposure to the target language in use (Ellis 2002; Tomasello 2003). However, there is still a considerable lack of empirical research on the emergence and mental representation of constructions in second language (L2) acquisition. One crucial question that arises, for instance, is whether L2 learners’ knowledge of a construction corresponds to a native-like mapping of form and meaning and, if so, to what extent this representation is shaped by usage. For instance, it is unclear how learners ‘build’ constructional knowledge, i.e. which pieces of frequency-, form- and meaning-related information become relevant for the entrenchment and schematisation of an L2 construction.
To address these issues, the English catenative verb construction was used as a testbed phenomenon. This idiosyncratic complex construction is comprised of a catenative verb and a non-finite complement clause (see Huddleston & Pullum 2002), which is prototypically a gerund-participial (henceforth referred to as ‘target-ing’ construction) or a to-infinitival complement (‘target-to’ construction):
(1) She refused to do her homework.
(2) Laura kept reading love stories.
(3) *He avoids to listen to loud music.
This construction is particularly interesting because learners often show choices of a complement type different from those of native speakers (e.g. Gries & Wulff 2009; Martinez‐Garcia & Wulff 2012), as illustrated in (3), and because it is commonly claimed to be difficult to teach by explicit rules (see e.g. Petrovitz 2001).
By triangulating different types of usage data (corpus and elicited production data) and analysing these by multivariate statistical tests, the effects of different usage-related factors (e.g. frequency, proficiency level of the learner, semantic class of verb, etc.) on the representation and development of the catenative verb construction and its subschemas (i.e. target-to and target-ing construction) were examined. In particular, it was assessed whether they can predict a native-like form-meaning pairing of a catenative verb and non-finite complement.
First, all studies were able to show a robust effect of frequency on the complement choice. Frequency does not only lead to the entrenchment of high-frequency exemplars of the construction but is also found to motivate a taxonomic generalisation across related exemplars and the representation of a more abstract schema. Second, the results indicate that the target-to construction, due to its higher type and token frequency, has a higher degree of schematicity and productivity than the target-ing construction for the learners, which allows for analogical comparisons and pattern extension with less entrenched exemplars. This schema is likely to be overgeneralised to (less frequent) target-ing verbs because the learners perceive formal and semantic compatibility between the unknown/infrequent verb and this pattern.
Furthermore, the findings present evidence that less advanced learners (A2-B2) make more coarse-grained generalisations, which are centred around high-frequency and prototypical exemplars/low-scope patterns. In the case of high-proficiency learners (C1-C2), not only does the number of native-like complement choices increase, but relational information, such as the semantic subclasses of the verb, form-function contingency and other factors, also becomes relevant for a target-like choice. Thus, the results suggest that with increasing usage experience learners gradually develop a more fine-grained, interconnected representation of the catenative verb construction, which gains more resemblance to the form-meaning mappings of native speakers.
Taken together, these insights highlight the importance for language learning and teaching environments to acknowledge that L2 knowledge is represented in the form of highly interconnected form-meaning pairings, i.e. constructions, that can be found on different levels of abstraction and complexity.
Due to the transition towards climate neutrality, energy markets are rapidly evolving. New technologies are developed that allow electricity from renewable energy sources to be stored or to be converted into other energy commodities. As a consequence, new players enter the markets and existing players gain more importance. Market equilibrium problems are capable of capturing these changes and therefore enable us to answer contemporary research questions with regard to energy market design and climate policy.
This cumulative dissertation is devoted to the study of different market equilibrium problems that address such emerging aspects in liberalized energy markets. In the first part, we review a well-studied competitive equilibrium model for energy commodity markets and extend this model by sector coupling, by temporal coupling, and by a more detailed representation of physical laws and technical requirements. Moreover, we summarize our main contributions of the last years with respect to analyzing the market equilibria of the resulting equilibrium problems.
For the extension regarding sector coupling, we derive sufficient conditions for ensuring uniqueness of the short-run equilibrium a priori and for verifying uniqueness of the long-run equilibrium a posteriori. Furthermore, we present illustrative examples that each of the derived conditions is indeed necessary to guarantee uniqueness in general.
For the extension regarding temporal coupling, we provide sufficient conditions for ensuring uniqueness of demand and production a priori. These conditions also imply uniqueness of the short-run equilibrium in the case of a single storage operator. However, in the case of multiple storage operators, examples illustrate that charging and discharging decisions are not unique in general. We conclude the equilibrium analysis with an a posteriori criterion for verifying uniqueness of a given short-run equilibrium. Since the computation of equilibria is much more challenging due to the temporal coupling, we briefly review why a tailored parallel and distributed alternating direction method of multipliers makes it possible to compute market equilibria efficiently.
For the extension regarding physical laws and technical requirements, we show that, in nonconvex settings, existence of an equilibrium is not guaranteed and that the fundamental welfare theorems therefore fail to hold. In addition, we argue that the welfare theorems can be re-established in a market design in which the system operator is committed to a welfare objective. For the case of a profit-maximizing system operator, we propose an algorithm that indicates existence of an equilibrium and that computes an equilibrium in the case of existence. Based on well-known instances from the literature on the gas and electricity sector, we demonstrate the broad applicability of our algorithm. Our computational results suggest that an equilibrium often exists for an application involving nonconvex but continuous stationary gas physics. In turn, integralities introduced due to the switchability of DC lines in DC electricity networks lead to many instances without an equilibrium. Finally, we state sufficient conditions under which the gas application has a unique equilibrium and the line switching application has finitely many.
In the second part, all preprints belonging to this cumulative dissertation are provided. These preprints, as well as two journal articles to which the author of this thesis contributed, are referenced within the extended summary in the first part and contain more details.
The forward testing effect is an indirect benefit of retrieval practice. It refers to the finding that retrieval practice of previously studied information enhances learning and retention of subsequently studied other information in episodic memory tasks. Here, two experiments were conducted that investigated whether retrieval practice influences participants’ performance in other tasks, i.e., arithmetic tasks. Participants studied three lists of words in anticipation of a final recall test. In the testing condition, participants were immediately tested on lists 1 and 2 after study of each list, whereas in the restudy condition, they restudied lists 1 and 2 after initial study. Before and after study of list 3, participants did an arithmetic task. Finally, participants were tested on list 3, list 2, and list 1. Different arithmetic tasks were used in the two experiments. Participants did a modular arithmetic task in Experiment 1a and a single-digit multiplication task in Experiment 1b. The results of both experiments showed a forward testing effect with interim testing of lists 1 and 2 enhancing list 3 recall in the list 3 recall test, but no effects of recall testing of lists 1 and 2 for participants’ performance in the arithmetic tasks. The findings are discussed with respect to cognitive load theory and current theories of the forward testing effect.
Advances in eye tracking technology have enabled the development of interactive experimental setups to study social attention. Since these setups differ substantially from the eye tracker manufacturer’s test conditions, validation is essential with regard to the quality of gaze data and other factors potentially threatening the validity of this signal. In this study, we evaluated the impact of accuracy and areas of interest (AOIs) size on the classification of simulated gaze (fixation) data. We defined AOIs of different sizes using the Limited-Radius Voronoi-Tessellation (LRVT) method, and simulated gaze data for facial target points with varying accuracy. As hypothesized, we found that accuracy and AOI size had strong effects on gaze classification. In addition, these effects were not independent and differed in falsely classified gaze inside AOIs (Type I errors; false alarms) and falsely classified gaze outside the predefined AOIs (Type II errors; misses). Our results indicate that smaller AOIs generally minimize false classifications as long as accuracy is good enough. For studies with lower accuracy, Type II errors can still be compensated to some extent by using larger AOIs, but at the cost of more probable Type I errors. Proper estimation of accuracy is therefore essential for making informed decisions regarding the size of AOIs in eye tracking research.
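A simplified sketch of the gaze-classification step studied here, assuming circular AOIs in the spirit of a limited-radius Voronoi tessellation (each gaze sample is assigned to the nearest AOI centre, but only within a maximum radius); the coordinates, radius and noise model are illustrative and not those of the evaluation.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative AOI centres on a face (pixel coordinates) and a common radius limit.
aoi_names = ["left_eye", "right_eye", "mouth"]
aoi_centers = np.array([[420.0, 300.0], [580.0, 300.0], [500.0, 520.0]])
radius = 80.0   # px; the "limited radius" of the tessellation

def classify_gaze(points, centers, radius):
    """Index of the nearest AOI centre within `radius` for each point, else -1."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    inside = d[np.arange(len(points)), nearest] <= radius
    return np.where(inside, nearest, -1)

# Simulate fixations on the left eye with decreasing spatial accuracy (offset noise).
target = aoi_centers[0]
for accuracy_px in (10, 40, 90):
    gaze = target + rng.normal(scale=accuracy_px, size=(1000, 2))
    assigned = classify_gaze(gaze, aoi_centers, radius)
    print(accuracy_px, "px error:",
          round(float(np.mean(assigned == 0)), 2), "hits on", aoi_names[0])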
The temporal stability of psychological test scores is one prerequisite for their practical usability. This is especially true for intelligence test scores. In educational contexts, high stakes decisions with long-term consequences, such as placement in special education programs, are often based on intelligence test results. There are four different types of temporal stability: mean-level change, individual-level change, differential continuity, and ipsative continuity. We present statistical methods for investigating each type of stability. Where necessary, the methods were adapted for the specific challenges posed by intelligence research (e.g., controlling for general intelligence in lower order test scores). We provide step-by-step guidance for the application of the statistical methods and apply them to a real data set of 114 gifted students tested twice with a test-retest interval of 6 months.
• Four different types of stability need to be investigated for a full picture of temporal stability in psychological research
• Selection and adaption of the methods for the use in intelligence research
• Complete protocol of the implementation
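As an illustration of how three of the four stability types can be computed for a test-retest design (toy data, an assumed reliability, and no control for general intelligence, so this is not a reproduction of the protocol above):
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated IQ scores of the same students at two time points (6-month interval).
n = 114
t1 = rng.normal(loc=130, scale=8, size=n)
t2 = 0.85 * (t1 - 130) + 130 + rng.normal(scale=4, size=n)

# Mean-level change: paired t-test (did the group mean shift between occasions?).
t_stat, p_val = stats.ttest_rel(t1, t2)
print("mean change:", round(t2.mean() - t1.mean(), 2), "p =", round(p_val, 3))

# Differential continuity: test-retest correlation (is the rank order preserved?).
r, p_r = stats.pearsonr(t1, t2)
print("test-retest r:", round(r, 2))

# Individual-level change: reliable change index per student (assumed rtt and SD).
rtt, sd = 0.90, 8.0           # illustrative reliability and standard deviation
se_diff = sd * np.sqrt(2 * (1 - rtt))
rci = (t2 - t1) / se_diff
print("students with reliable change:", int(np.sum(np.abs(rci) > 1.96)))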
We examined the long-term relationship of psychosocial risk and health behaviors with clinical events in patients awaiting heart transplantation (HTx). Psychosocial characteristics (e.g., depression), health behaviors (e.g., dietary habits, smoking), medical factors (e.g., creatinine), and demographics (e.g., age, sex) were collected at the time of listing in 318 patients (82% male, mean age = 53 years) enrolled in the Waiting for a New Heart Study. Clinical events were death/delisting due to deterioration, high-urgency status transplantation (HU-HTx), elective transplantation, and delisting due to clinical improvement. Within 7 years of follow-up, 92 patients died or were delisted due to deterioration, 121 received HU-HTx, 43 received elective transplantation, and 39 were delisted due to improvement. Adjusting for demographic and medical characteristics, the results indicated that frequent consumption of healthy foods (i.e., foods high in unsaturated fats) and being physically active increased the likelihood of delisting due to improvement, while smoking and depressive symptoms were related to death/delisting due to clinical deterioration while awaiting HTx. In conclusion, psychosocial and behavioral characteristics are clearly associated with clinical outcomes in this population. Interventions that target psychosocial risk, smoking, dietary habits, and physical activity may be beneficial for patients with advanced heart failure waiting for a cardiac transplant.
The Belt and Road Initiative (BRI) has had a significant impact on China in political, economic, and cultural terms. This study focuses on the cultural domain, especially on scholarship students from the countries that signed bilateral cooperation agreements with China under the BRI. Using an integrated approach combining the difference-in-differences method and the gravity model, we explore the correlation between the BRI and the increasing number of international scholarship students funded by the Chinese government, as well as the determinants of students' decision to study in China. The panel data from 2010 to 2018 show that the launch of the BRI has had a positive impact on the number of scholarship students from BRI countries. The number of scholarship recipients from non-BRI countries also increased, but at a much slower rate than those from BRI countries. The sole exception is the United States, for which the numbers of both state-funded and self-funded students have trended downward.
The outbreak of the COVID-19 pandemic has also led to many conspiracy theories. While the origin of the pandemic in China led some, including former US president Donald Trump, to dub the pathogen “Chinese virus” and to support anti-Chinese conspiracy narratives, it caused Chinese state officials to openly support anti-US conspiracy theories about the “true” origin of the virus. In this article, we study whether nationalism, or more precisely uncritical patriotism, is related to belief in conspiracy theories among ordinary people. Based on group identity theory and motivated reasoning, we hypothesize that for the particular case of conspiracy theories related to the origin of COVID-19, such a relation should be stronger for Chinese than for German respondents. To test this hypothesis, we use survey data from Germany and China, including data from the Chinese community in Germany. We also look at relations to other factors, in particular media consumption and xenophobia.
Despite significant advances in terms of the adoption of formal Intellectual Property Rights (IPR) protection, enforcement of and compliance with IPR regulations remains a contested issue in one of the world's major contemporary economies—China. The present review seeks to offer insights into possible reasons for this discrepancy as well as possible paths of future development by reviewing prior literature on IPR in China. Specifically, it focuses on the public's perspective, which is a crucial determinant of the effectiveness of any IPR regime. It uncovers possible differences with public perspectives in other countries and points to mechanisms (e.g., political, economic, cultural, and institutional) that may foster transitions over time in both formal IPR regulation and in the public perception of and compliance with IPR in China. On this basis, the review advances suggestions for future research in order to improve scholars' understanding of the public's perspective of IPR in China, its antecedents and implications.
Similarity-based retrieval of semantic graphs is a core task of Process-Oriented Case-Based Reasoning (POCBR) with applications in real-world scenarios, e.g., in smart manufacturing. The involved similarity computation is usually complex and time-consuming, as it requires some kind of inexact graph matching. To tackle these problems, we present an approach to modeling similarity measures based on embedding semantic graphs via Graph Neural Networks (GNNs). To this end, we first examine how arbitrary semantic graphs, including node and edge types and their knowledge-rich semantic annotations, can be encoded in a numeric format that is usable by GNNs. Given this, the architecture of two generic graph embedding models from the literature is adapted to enable their usage as a similarity measure for similarity-based retrieval. One of the two models is optimized towards fast similarity prediction, while the other is optimized towards knowledge-intensive, more expressive predictions. The evaluation examines the quality and performance of these models in preselecting retrieval candidates and in approximating the ground-truth similarities of a graph-matching-based similarity measure for two semantic graph domains. The results show the great potential of the approach for use in a retrieval scenario, either as a preselection model or as an approximation of a graph similarity measure.
A model-based temperature adjustment scheme for wintertime sea-ice production retrievals from MODIS
(2022)
Knowledge of the wintertime sea-ice production in Arctic polynyas is an important requirement for estimations of the dense water formation, which drives vertical mixing in the upper ocean. Satellite-based techniques incorporating relatively high resolution thermal-infrared data from MODIS in combination with atmospheric reanalysis data have proven to be a strong tool to monitor large and regularly forming polynyas and to resolve narrow thin-ice areas (i.e., leads) along the shelf-breaks and across the entire Arctic Ocean. However, the selection of the atmospheric data sets has a large influence on derived polynya characteristics due to their impact on the calculation of the heat loss to the atmosphere, which is determined by the local thin-ice thickness. In order to overcome this methodical ambiguity, we present a MODIS-assisted temperature adjustment (MATA) algorithm that yields corrections of the 2 m air temperature and hence decreases differences between the atmospheric input data sets. The adjustment algorithm is based on atmospheric model simulations. We focus on the Laptev Sea region for detailed case studies on the developed algorithm and present time series of polynya characteristics in the winter season 2019/2020. It shows that the application of the empirically derived correction decreases the difference between different utilized atmospheric products significantly from 49% to 23%. Additional filter strategies are applied that aim at increasing the capability to include leads in the quasi-daily and persistence-filtered thin-ice thickness composites. More generally, the winter of 2019/2020 features high polynya activity in the eastern Arctic and less activity in the Canadian Arctic Archipelago, presumably as a result of the particularly strong polar vortex in early 2020.
This socio-pragmatic study investigates organisational conflict talk between superiors and subordinates in three medical dramas from China, Germany and the United States. It explores what types of sociolinguistic realities the medical dramas construct by ascribing linguistic behaviour to different status groups. The study adopts an enhanced analytical framework based on John Gumperz’ discourse strategies and Spencer-Oatey’s rapport management theory. This framework detaches directness from politeness, defines directness based on preference and polarity and explains the use of direct and indirect opposition strategies in context.
The findings reveal that the three hospital series draw on 21 opposition strategies which can be categorised into mitigating, intermediate and intensifying strategies. While the status identity of superiors is commonly characterised by a higher frequency of direct strategies than that of subordinates, both status groups manage conflict in a primarily direct manner across all three hospital shows. The high percentage of direct conflict management is related to the medical context, which is characterised by a focus on transactional goals, complex role obligations and potentially severe consequences of medical mistakes and delays. While the results reveal unexpected similarities between the three series with regard to the linguistic directness level, cross-cultural differences between the Chinese and the two Western series are obvious from particular sociopragmatic conventions. These conventions particularly include the use of humour, imperatives, vulgar language and incorporated verbal and para-verbal/multimodal opposition. Noteworthy differences also appear in the underlying patterns of strategy use. They show that the Chinese series promotes a greater tolerance of hierarchical structures and a partially closer social distance in asymmetrical professional relationships. These disparities are related to different perceptions of power distance, role relationships, face and harmony.
The findings challenge existing stereotypes of Chinese, US American and German conflict management styles and emphasise the context-specific nature of verbal conflict management in every culture. Although cinematic aspects affect the conflict management in the fictional data, the results largely comply with recent research on conflict talk in real-life workplaces. As such, the study contributes to intercultural training in medical contexts and provides an enhanced analytical framework for further cross-cultural studies on linguistic strategies.
Extension of an Open GEOBIA Framework for Spatially Explicit Forest Stratification with Sentinel-2
(2022)
Spatially explicit information about forest cover is fundamental for operational forest management and forest monitoring. Although open satellite-based earth observation data in a spatially high resolution (i.e., Sentinel-2, ≤10 m) can cover some information needs, spatially very high-resolution imagery (i.e., aerial imagery, ≤2 m) is needed to generate maps at a scale suitable for regional and local applications. In this study, we present the development, implementation, and evaluation of a Geographic Object-Based Image Analysis (GEOBIA) framework to stratify forests (needleleaved, broadleaved, non-forest) in Luxembourg. The framework is exclusively based on open data and free and open-source geospatial software. While aerial imagery is used to derive image objects with a 0.05 ha minimum size, Sentinel-2 scenes of 2020 are the basis for random forest classifications in different single-date and multi-temporal feature setups. These setups are compared with each other and used to evaluate the framework against classifications based on features derived from aerial imagery. The highest overall accuracies (89.3%) were achieved with classification on a Sentinel-2-based vegetation index time series (n = 8). Similar accuracies were achieved with classification based on two (88.9%) or three (89.1%) Sentinel-2 scenes in the greening phase of broadleaved forests. A classification based on color infrared aerial imagery and derived texture measures only achieved an accuracy of 74.5%. The integration of the texture measures into the Sentinel-2-based classification did not improve its accuracy. Our results indicate that high-resolution image objects can successfully be stratified based on lower spatial resolution Sentinel-2 single-date and multi-temporal features, and that those setups outperform classifications based on aerial imagery only. The conceptual framework of spatially high-resolution image objects enriched with features from lower resolution imagery facilitates the delivery of frequent and reliable updates due to higher spectral and temporal resolution. The framework additionally holds the potential to derive additional information layers (i.e., forest disturbance) as derivatives of the features attached to the image objects, thus providing up-to-date information on the state of observed forests.
We study planned changes in protective routines after the COVID-19 pandemic: in a survey in Germany among >650 respondents, we find that the majority plans to use face masks in certain situations even after the end of the pandemic. We observe that this willingness is strongly related to the perception that there is something to be learned from East Asians’ handling of pandemics, even when controlling for perceived protection by wearing masks. Given strong empirical evidence that face masks help prevent the spread of respiratory diseases and given the considerable estimated health and economic costs of such diseases even pre-Corona, this would be a very positive side effect of the current crisis.
Soil organic matter (SOM) is an indispensable component of terrestrial ecosystems. Soil organic carbon (SOC) dynamics are influenced by a number of well-known abiotic factors such as clay content, soil pH, or pedogenic oxides. These parameters interact with each other and vary in their influence on SOC depending on local conditions. To investigate the latter, we statistically assessed the dependence of SOC accumulation on parameters and parameter combinations that vary on a local scale depending on parent material, soil texture class, and land use. To this end, topsoils were sampled from arable and grassland sites in south-western Germany in four regions with different soil parent material. Principal component analysis (PCA) revealed a distinct clustering of data according to parent material and soil texture that varied largely between the local sampling regions, while land use explained PCA results only to a small extent. The PCA clusters were differentiated into total clusters that contain the entire dataset or major proportions of it and local clusters representing only a smaller part of the dataset. All clusters were analysed for the relationships between SOC concentrations (SOC %) and mineral-phase parameters in order to assess specific parameter combinations explaining SOC and its labile fractions, hot-water-extractable C (HWEC) and microbial biomass C (MBC). Analyses were focused on soil parameters that are known as possible predictors for the occurrence and stabilization of SOC (e.g. fine silt plus clay and pedogenic oxides). Regarding the total clusters, we found significant relationships, by bivariate models, between SOC, its labile fractions HWEC and MBC, and the applied predictors. However, partly low explained variances indicated the limited suitability of bivariate models. Hence, mixed-effect models were used to identify specific parameter combinations that significantly explain SOC and its labile fractions of the different clusters. Comparing measured and mixed-effect-model-predicted SOC values revealed acceptable to very good regression coefficients (R2 = 0.41–0.91) and low to acceptable root mean square errors (RMSE = 0.20 %–0.42 %). Thereby, the predictors and predictor combinations clearly differed between models obtained for the whole dataset and the different cluster groups. At a local scale, site-specific combinations of parameters explained the variability of organic carbon notably better, while the application of total models to local clusters resulted in less explained variance and a higher RMSE. Independently of that, the explained variance by marginal fixed effects decreased in the order SOC > HWEC > MBC, showing that labile fractions depend less on soil properties but presumably more on processes such as organic carbon input and turnover in soil.
The process of land degradation needs to be understood at various spatial and temporal scales in order to protect ecosystem services and communities directly dependent on it. This is especially true for regions in sub-Saharan Africa, where socio economic and political factors exacerbate ecological degradation. This study identifies spatially explicit land change dynamics in the Copperbelt province of Zambia in a local context using satellite vegetation index time series derived from the MODIS sensor. Three sets of parameters, namely, monthly series, annual peaking magnitude, and annual mean growing season were developed for the period 2000 to 2019. Trend was estimated by applying harmonic regression on monthly series and linear least square regression on annually aggregated series. Estimated spatial trends were further used as a basis to map endemic land change processes. Our observations were as follows: (a) 15% of the study area dominant in the east showed positive trends, (b) 3% of the study area dominant in the west showed negative trends, (c) natural regeneration in mosaic landscapes (post shifting cultivation) and land management in forest reserves were chiefly responsible for positive trends, and (d) degradation over intact miombo woodland and cultivation areas contributed to negative trends. Additionally, lower productivity over areas with semi-permanent agriculture and shift of new encroachment into woodlands from east to west of Copperbelt was observed. Pivot agriculture was not a main driver in land change. Although overall greening trends prevailed across the study site, the risk of intact woodlands being exposed to various disturbances remains high. The outcome of this study can provide insights about natural and assisted landscape restoration specifically addressing the miombo ecoregion.
Measurements of dust emissions and the modeling of dissipation dynamics and total values are related to great uncertainties. Agricultural activity, especially soil cultivation, may be an essential component to calculate and model local and regional dust dynamics and even connect to the global dust cycle. To budget total dust and to assess the impact of tillage, measurement of mobilized and transported dust is an essential but rare basis. In this study, a simple measurement concept with Modified Wilson and Cook samplers was applied for dust measurements on a small temporal and spatial scale on steep-slope vineyards in the Moselle area. Without mechanical impact, a mean horizontal flux of 0.01 g m−2 min−1 was measured, while row tillage produced a mean horizontal flux of 5.92 g m−2 min−1 of mobilized material and 4.18 g m−2 min−1 emitted dust from site (=soil loss). Compared on this singular-event basis, emissions during tillage operations generated 99.89% of total emitted dust from the site under low mean wind velocities. The results also indicate a differing impact of specific cultivation operations, mulching, and tillage tools as well as the additional influence of environmental conditions, with highest emissions on dry soil and with additional wind impact. The dust source function is strongly associated with cultivation operations, implying highly dynamic but also regular and thus predictable and projectable emission peaks of total suspended particles. Detailed knowledge of the effects of mechanical impulses and reliable quantification of the local dust emission inventory are a basis for analysis of risk potential and choice of adequate management options.
The larval stage of the European fire salamander (Salamandra salamandra) inhabits both lentic and lotic habitats. In the latter, they are constantly exposed to unidirectional water flow, which has been shown to cause downstream drift in a variety of taxa. In this study, a closed artificial creek, which allowed us to keep the water flow constant over time and, at the same time, to simulate flood events with predefined water quantities and durations, was used to examine the individual movement patterns of marked larval fire salamanders exposed to unidirectional flow. Movements were tracked by individually marking the larvae with VIAlpha tags and by using downstream and upstream traps. Most individuals showed stationarity, while downstream drift dominated the overall movement pattern. Upstream movements were rare and occurred only over small distances of about 30 cm; downstream drift distances exceeded 10 m (until the next downstream trap). The simulated flood events increased drift rates significantly, even several days after the flood simulation experiments. Drift probability increased with decreasing body size and decreasing nutritional status. Our results support the production hypothesis as an explanation for the movements of European fire salamander larvae within creeks.
Low-level jets (LLJs) are climatological features in polar regions. It is well known that katabatic winds over the slopes of the Antarctic ice sheet are associated with strong LLJs. Barrier winds occurring, e.g., along the Antarctic Peninsula may also show LLJ structures. A few observational studies show that LLJs occur over sea ice regions. We present a model-based climatology of the wind field, of low-level inversions and of LLJs in the Weddell Sea region of the Antarctic for the period 2002–2016. The sensitivity of the LLJ detection to the selection of the wind speed maximum is investigated. The common criterion of an anomaly of at least 2 m/s is extended to a relative criterion of wind speed decrease above and below the LLJ. The frequencies of LLJs are sensitive to the choice of the relative criterion, i.e., if the value for the relative decrease exceeds 15%. The LLJs are evaluated with respect to the frequency distributions of height, speed, directional shear and stability for different regions. LLJs are most frequent in the katabatic wind regime over the ice sheet and in barrier wind regions. During winter, katabatic LLJs occur with frequencies of more than 70% in many areas. Katabatic LLJs show a narrow range of heights (mostly below 200 m) and speeds (typically 10–20 m/s), while LLJs over the sea ice cover a broad range of speeds and heights. LLJs are associated with surface inversions or low-level lifted inversions. LLJs in the katabatic wind and barrier wind regions can last several days during winter. The duration of LLJs is sensitive to the LLJ definition criteria. We propose to use only the absolute criterion for model studies.
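As an illustration of the detection criteria described above, the sketch below flags a low-level jet in a single wind profile when a wind maximum exceeds the minima above and below it by an absolute anomaly of 2 m/s and, optionally, by a relative decrease; it is a simplified stand-in, not the study's implementation.

```python
import numpy as np

def detect_llj(heights, speeds, abs_anom=2.0, rel_dec=0.15, use_relative=True):
    """Flag a low-level jet in a vertical wind profile.

    A level is treated as an LLJ core if the wind speed decreases both above
    and below it by at least `abs_anom` (m/s) and, if `use_relative`, by at
    least the fraction `rel_dec` of the core speed.
    """
    speeds = np.asarray(speeds, dtype=float)
    for i in range(1, len(speeds) - 1):
        core = speeds[i]
        min_below = speeds[:i].min()
        min_above = speeds[i + 1:].min()
        abs_ok = (core - min_below >= abs_anom) and (core - min_above >= abs_anom)
        rel_ok = (core - min_below >= rel_dec * core) and (core - min_above >= rel_dec * core)
        if abs_ok and (rel_ok or not use_relative):
            return {"height_m": heights[i], "speed_ms": core}
    return None

# Hypothetical profile with a jet core of 14 m/s at 150 m height
print(detect_llj([50, 100, 150, 250, 400, 600],
                 [6.0, 10.0, 14.0, 9.0, 8.0, 7.5]))
# {'height_m': 150, 'speed_ms': 14.0}
```

Raising `rel_dec` makes the relative criterion stricter, which mirrors the sensitivity of LLJ frequencies to this threshold reported in the abstract.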
Digital technologies have become central to social interaction and accessing goods and services. Development strategies and approaches to governance have increasingly deployed self-labelled ‘smart’ technologies and systems at various spatial scales, often promoted as rectifying social and geographic inequalities and increasing economic and environmental efficiencies. These have also been accompanied by similarly digitalized commercial and non-profit offers, particularly within the sharing economy. Concern has grown, however, over possible inequalities linked to their introduction. In this paper we critically analyse the contribution of sharing economies to more inclusive, socially equitable and spatially just transitions. Conceptually, this paper brings together literature on sharing economies, smart urbanism and just transitions. Drawing on an explorative database of sharing initiatives within the cross-border region of Luxembourg and Germany, we discuss aspects of sustainability as they relate to distributive justice through spatial accessibility, intended benefits, and their operationalization. The regional analysis shows the diversity of sharing models, how they are appropriated in different ways and how intent and operationalization matter in terms of potential benefits. Results emphasize the need for more fine-grained, qualitative research revealing who is, and is not, participating and benefitting from sharing economies.
The present study examined associations between fathers’ masculinity orientation and their anticipated reaction toward their child’s coming out as lesbian or gay (LG). Participants were 134 German fathers (28 to 60 years) of a minor child. They were asked how they would personally react if, one day, their child disclosed their LG identity to them. As hypothesized, fathers with a stronger masculinity orientation (i.e., adherence to traditional male gender norms, such as independence, assertiveness, and physical strength) reported that they would be more likely to reject their LG child. This association was serially mediated by two factors: fathers’ general anti-LG attitudes (i.e., level of homophobia) and their emotional distress due to their child’s coming out (e.g., feelings of anger, shame, or sadness). The result pattern was independent of the child’s gender or age. The discussion centers on the problematic role of traditional masculinity when it comes to fathers’ acceptance of their non-heterosexual child.
Amphibian diversity in the Amazonian floating meadows: a Hanski core-satellite species system
(2021)
The Amazon catchment is the largest river basin on earth, and up to 30% of its waters flow across floodplains. In its open waters, floating plants known as floating meadows abound. They can act as vectors of dispersal for their associated fauna and, therefore, can be important for the spatial structure of communities. Here, we focus on amphibian diversity in the Amazonian floating meadows over large spatial scales. We recorded 50 amphibian species over 57 sites, covering around 7000 km along river courses. Using multi-site generalised dissimilarity modelling of zeta diversity, we tested Hanski's core-satellite hypothesis and identified the existence of two functional groups of species operating under different ecological processes in the floating meadows. ‘Core' species are associated with floating meadows, while ‘satellite' species are associated with adjacent environments, being only occasional or accidental occupants of the floating vegetation. At large scales, amphibian diversity in floating meadows is mostly determined by stochastic (i.e. random/neutral) processes, whereas at regional scales, climate and deterministic (i.e. niche-based) processes are central drivers. Compared with the turnover of ‘core' species, the turnover of ‘satellite' species increases much faster with distances and is also controlled by a wider range of climatic features. Distance is not a limiting factor for ‘core' species, suggesting that they have a stronger dispersal ability even over large distances. This is probably related to the existence of passive long-distance dispersal of individuals along rivers via vegetation rafts. In this sense, Amazonian rivers can facilitate dispersal, and this effect should be stronger for species associated with riverine habitats such as floating meadows.
Background
The morphology of anuran larvae is suggested to differ between species with tadpoles living in standing (lentic) and running (lotic) waters. To explore which character combinations within the general tadpole morphospace are associated with these habitats, we studied categorical and metric larval data of 123 Madagascan anurans (one third of which from lotic environments).
Results
Using univariate and multivariate statistics, we found that certain combinations of fin height, body musculature and eye size prevail either in larvae from lentic or lotic environments.
Conclusion
Evidence for adaptation to lotic conditions in larvae of Madagascan anurans is presented. While lentic tadpoles typically show narrow to moderate oral discs, small to medium sized eyes, convex or moderately low fins and non-robust tail muscles, tadpoles from lotic environments typically show moderate to broad oral discs, medium to big sized eyes, low fins and a robust tail muscle.
The main focus of this work is to study the computational complexity of generalizations of the synchronization problem for deterministic finite automata (DFA). This problem asks, for a given DFA, whether there exists a word w that maps every state of the automaton to one and the same state. We call such a word w a synchronizing word. A synchronizing word brings a system from an unknown configuration into a well-defined configuration and thereby resets the system.
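As a minimal illustration of the definition (not part of the thesis itself), the following sketch encodes a small hypothetical DFA and tests whether a given word is synchronizing by running it from every state:

```python
# Minimal sketch: a DFA as a transition dict {(state, symbol): state}.
# This toy automaton is hypothetical, chosen only to illustrate the definition.
delta = {
    (0, "a"): 1, (0, "b"): 0,
    (1, "a"): 2, (1, "b"): 0,
    (2, "a"): 2, (2, "b"): 2,
}
states = {0, 1, 2}

def run(state, word):
    """Follow the transitions of the DFA from `state` while reading `word`."""
    for symbol in word:
        state = delta[(state, symbol)]
    return state

def is_synchronizing(word):
    """A word w synchronizes the DFA if it maps every state to one single state."""
    return len({run(q, word) for q in states}) == 1

print(is_synchronizing("aa"))   # True: every state ends in state 2
print(is_synchronizing("b"))    # False: states 0 and 1 go to 0, state 2 stays in 2
```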
We generalize this problem in four different ways.
First, we restrict the set of potential synchronizing words to a fixed regular language associated with the synchronization under regular constraint problem.
The motivation here is to control the structure of a synchronizing word so that, for instance, it first brings the system from an operate mode to a reset mode and then finally again into the operate mode.
The next generalization concerns the order of states in which a synchronizing word transitions the automaton. Here, a DFA A and a partial order R is given as input and the question is whether there exists a word that synchronizes A and for which the induced state order is consistent with R. Thereby, we study different ways for a word to induce an order on the state set.
Then, we change our focus from DFAs to push-down automata and generalize the synchronization problem to push-down automata and, in the following work, to visibly push-down automata. Here, a synchronizing word still needs to map each state of the automaton to one state, but it further needs to fulfill some constraints on the stack. We study three different types of stack constraints where, after reading the synchronizing word, the stacks associated with each run in the automaton must be (1) empty, (2) identical, or (3) arbitrary.
We observe that the synchronization problem for general push-down automata is undecidable and study restricted sub-classes of push-down automata where the problem becomes decidable. For visibly push-down automata we even obtain efficient algorithms for some settings.
The second part of this work studies the intersection non-emptiness problem for DFAs. This problem is related to the question of whether a given DFA A can be synchronized into a state q, since the set of words synchronizing A into q is the intersection of the languages accepted by copies of A with different initial states and q as their final state.
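This reduction can be made concrete: one copy of A per state, each starting in a different state and accepting only in q, and a breadth-first search over the product state space to decide intersection non-emptiness. A hedged sketch, reusing the toy DFA from the earlier example:

```python
from collections import deque

# Same hypothetical 3-state DFA as in the sketch above.
delta = {
    (0, "a"): 1, (0, "b"): 0,
    (1, "a"): 2, (1, "b"): 0,
    (2, "a"): 2, (2, "b"): 2,
}
states, alphabet, target = (0, 1, 2), "ab", 2

# Copy i of the DFA starts in state i and accepts only in `target`; a word lies
# in the intersection of all copies' languages iff it synchronizes the DFA into
# `target`. Intersection non-emptiness is decided by BFS over the product space.
start = tuple(states)                        # one component per copy
accept = tuple(target for _ in states)
queue, seen, witness = deque([(start, "")]), {start}, None
while queue:
    vec, word = queue.popleft()
    if vec == accept:
        witness = word
        break
    for symbol in alphabet:
        nxt = tuple(delta[(q, symbol)] for q in vec)
        if nxt not in seen:
            seen.add(nxt)
            queue.append((nxt, word + symbol))

print(witness)   # "aa": every copy reaches the accepting state 2
```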
For the intersection non-emptiness problem, we first study the complexity of the, in general PSPACE-complete, problem restricted to subclasses of DFAs associated with the two well known Straubing-Thérien and Cohen-Brzozowski dot-depth hierarchies.
Finally, we study the problem whether a given minimal DFA A can be represented as the intersection of a finite set of smaller DFAs such that the language L(A) accepted by A is equal to the intersection of the languages accepted by the smaller DFAs. There, we focus on the subclass of permutation and commutative permutation DFAs and improve known complexity bounds.
The endemic argan tree (Argania spinosa) populations in southern Morocco are highly degraded due to overbrowsing, illegal firewood extraction and the expansion of intensive agriculture. Bare areas between the isolated trees increase due to limited regrowth; however, it is unknown if the trees influence the soil of the intertree areas. Hypothetically, spatial differences in soil parameters of the intertree area should result from the translocation of litter or soil particles (by runoff and erosion or wind drift) from canopy-covered areas to the intertree areas. In total, 385 soil samples were taken around the tree from the trunk along the tree drip line (within and outside the tree area) and the intertree area between two trees in four directions (upslope, downslope and in both directions parallel to the slope) up to 50 m distance from the tree. They were analysed for gravimetric soil water content, pH, electrical conductivity, percolation stability, total nitrogen content (TN), content of soil organic carbon (SOC) and C/N ratio. A total of 74 tension disc infiltrometer experiments were performed near the tree drip line, within and outside the tree area, to measure the unsaturated hydraulic conductivity. We found that the tree influence on its surrounding intertree area is limited, with, e.g., SOC and TN content decreasing significantly from tree trunk (4.4 % SOC and 0.3 % TN) to tree drip line (2.0 % SOC and 0.2 % TN). However, intertree areas near the tree drip line (1.3 % SOC and 0.2 % TN) differed significantly from intertree areas between two trees (1.0 % SOC and 0.1 % TN) yet only with a small effect. Trends for spatial patterns could be found in eastern and downslope directions due to wind drift and slope wash. Soil water content was highest in the north due to shade from the midday sun; the influence extended to the intertree areas. The unsaturated hydraulic conductivity also showed significant differences between areas within and outside the tree area near the tree drip line. This was the case on sites under different land usages (silvopastoral and agricultural), slope gradients or tree densities. Although only limited influence of the tree on its intertree area was found, the spatial pattern around the tree suggests that reforestation measures should be aimed around tree shelters in northern or eastern directions with higher soil water content or TN or SOC content to ensure seedling survival, along with measures to prevent overgrazing.
This paper mainly studies two topics: linear complementarity problems for modeling electricity market equilibria and optimization under uncertainty. We consider both perfectly competitive and Nash–Cournot models of electricity markets and study their robustifications using strict robustness and the Γ-approach. For three out of the four combinations of economic competition and robustification, we derive algorithmically tractable convex optimization counterparts that have a clear-cut economic interpretation. In the case of perfect competition, this result corresponds to the two classic welfare theorems, which also apply in both considered robust cases that again yield convex robustified problems. Using the mentioned counterparts, we can also prove the existence and, in some cases, uniqueness of robust equilibria. Surprisingly, it turns out that there is no such economically sensible counterpart for the case of Γ-robustifications of Nash–Cournot models. Thus, an analog of the welfare theorems does not hold in this case. Finally, we provide a computational case study that illustrates the different effects of the combination of economic competition and uncertainty modeling.
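For orientation, a linear complementarity problem in its standard form (this generic formulation is background knowledge, not the paper's specific model) asks for a vector z such that

```latex
\[
  z \ge 0, \qquad w = Mz + q \ge 0, \qquad z^{\top} w = 0,
\]
% where M is a given matrix and q a given vector; z and w are complementary.
```

Market equilibrium models of the kind studied here typically arise when the players' optimality conditions and the market-clearing constraints are stacked into the matrix M and the vector q.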
Institutional and cultural determinants of speed of government responses during COVID-19 pandemic
(2021)
This article examines institutional and cultural determinants of the speed of government responses during the COVID-19 pandemic. We define the speed as the marginal rate of stringency index change. Based on cross-country data, we find that collectivism is associated with a higher speed of government response. We also find a moderating role of trust in government, i.e., the association of individualism-collectivism with speed is stronger in countries with higher levels of trust in government. We do not find significant predictive power of democracy, media freedom and power distance on the speed of government responses.
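The speed measure defined here (the marginal rate of stringency index change) can be operationalized very simply; a minimal sketch with made-up daily stringency index values (the paper's exact operationalization may differ):

```python
import numpy as np

# Hypothetical daily stringency index values for one country (0-100 scale).
stringency = np.array([0, 0, 5, 11, 11, 28, 45, 62, 71, 71, 74])

# Speed as the marginal rate of change of the stringency index:
# the first difference per day.
speed = np.diff(stringency)
print(speed)         # day-to-day changes
print(speed.max())   # fastest single-day tightening, here 17 points per day
```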
The state-of-the-art finite element software Plaxis 3D was applied to a real-world study site, the Turaida castle mound, to investigate the slope stability of the mound and understand the mechanisms triggering landslides there. During the simulation, the stability of the castle mound was analysed and the most landslide-susceptible zones of the hillslopes were determined. The 3D finite-element stability analysis has significant advantages over conventional 2D limit-equilibrium methods, where the locations of 2D stability sections are arbitrarily selected. Two modelling scenarios of the slope stability were elaborated, considering deep-seated slides in bedrock and shallow landslides in the colluvial material of the slopes. The model shows that shallow slides in colluvium are more probable. In the finite-element model, slope failure occurs along the weakest zone in the colluvium, similarly to the situation observed in previous landslides at the study site. The physical basis of the model allows results to be obtained very close to natural conditions and delivers valuable insight into the triggering mechanisms of landslides.
Background: The body-oriented therapeutic approach Somatic Experiencing® (SE) treats posttraumatic symptoms by changing the interoceptive and proprioceptive sensations associated with the traumatic experience. Filling a gap in the landscape of trauma treatments, SE has recently attracted growing interest in research and therapeutic practice.
Objective: To date, there is no literature review of the effectiveness and key factors of SE. This review aims to summarize initial findings on the effectiveness of SE and to outline method-specific key factors of SE.
Method: To gain a first overview of the literature, we conducted a scoping review including studies until 13 August 2020. We identified 83 articles of which 16 fit inclusion criteria and were systematically analysed.
Results: Findings provide preliminary evidence for positive effects of SE on PTSD-related symptoms. Moreover, initial evidence suggests that SE has a positive impact on affective and somatic symptoms and measures of well-being in both traumatized and non-traumatized samples. Practitioners and clients identified resource-orientation and the use of touch as method-specific key factors of SE. Yet, an overall assessment of study quality as well as a Cochrane analysis of risk of bias indicate that the overall study quality is mixed.
Conclusions: The results concerning effectiveness and method-specific key factors of SE are promising, yet they require more support from unbiased RCT research. Future research should focus on filling this gap.
Intense, southward low-level winds are common in Nares Strait, between Ellesmere Island and northern Greenland. The steep topography along Nares Strait leads to channelling effects, resulting in an along-strait flow. This research study presents a 30-year climatology of the flow regime from simulations of the COSMO-CLM climate model. The simulations are available for the winter periods (November–April) 1987/88 to 2016/17, and thus, cover a period long enough to give robust long-term characteristics of Nares Strait. The horizontal resolution of 15 km is high enough to represent the complex terrain and the meteorological conditions realistically. The 30-year climatology shows that LLJs associated with gap flows are a climatological feature of Nares Strait. The maximum of the mean 10-m wind speed is around 12 m s-1 and is located at the southern exit of Smith Sound. The wind speed is strongly related to the pressure gradient. Single events reach wind speeds of 40 m s-1 in the daily mean. The LLJs are associated with gap flows within the narrowest parts of the strait under stably stratified conditions, with the main LLJ occurring at 100–250 m height. With increasing mountain Froude number, the LLJ wind speed and height increase. The frequency of strong wind events (>20 m s-1 in the daily mean) for the 10 m wind shows a strong interannual variability with an average of 15 events per winter. Channelled winds have a strong impact on the formation of the North Water polynya.
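The mountain Froude number mentioned above is commonly defined as follows; this generic form is given for orientation and is an assumption rather than the study's exact formulation:

```latex
\[
  Fr = \frac{U}{N\,h},
  \qquad
  N = \sqrt{\frac{g}{\theta}\,\frac{\partial \theta}{\partial z}} ,
\]
% U: upstream wind speed, h: barrier height, N: Brunt-Väisälä frequency.
```

Larger Fr thus corresponds to stronger flow or weaker stratification relative to the barrier height, consistent with the reported increase of LLJ speed and height with increasing Froude number.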
Introduction: In patients with common variable immunodeficiency (CVID), immunological response is compromised. Knowledge about COVID-19 in CVID patients is sparse. We here synthesize current research addressing the level of threat COVID-19 poses to CVID patients and the best-known treatments.
Method: Review of 14 publications.
Results: The number of CVID patients with moderate to severe (~29%) and critical infection courses (~10%), and the number of fatal cases (~13%), are increased compared to the general picture of COVID-19 infection. However, this might be an overestimate. Systematic cohort-wide studies are lacking, and asymptomatic or mild cases among CVID patients occur that can easily remain unnoticed. Regular immunoglobulin replacement therapy was administered in almost all patients, potentially explaining why the numbers of critical and fatal cases were not higher. In addition, the application of convalescent plasma was demonstrated to have positive effects.
Conclusions: COVID-19 poses an elevated threat to CVID patients. However, only systematic studies can provide robust information on the extent of this threat. Regular immunoglobulin replacement therapy is beneficial to combat COVID-19 in CVID patients, and the best treatment after infection includes the use of convalescent plasma in addition to common medication.
This intervention study explored the effects of a newly developed intergenerational encounter program on cross-generational age stereotyping (CGAS). Based on a biographical-narrative approach, participants (secondary school students and nursing home residents) were invited to share ideas about existential questions of life (e.g., about one’s core experiences, future plans, and personal values). To this end, the dyadic Life Story Interview (LSI) had been translated into a group format (the Life Story Encounter Program, LSEP), consisting of ten 90-minute sessions. Analyses verified that LSEP participants of both generations showed more favorable CGAS immediately after, but also 3 months after, the end of the program. Such change in CGAS was absent in a control group (no LSEP participation). The LSEP-driven short- and long-term effects on CGAS could be partially explained by two program benefits: the feeling of comfort with, and the experience of learning from, the other generation.
Food waste is the origin of major social and environmental issues. In industrial societies, domestic households are the biggest contributors to this problem. But why do people waste food although they buy and value it? Answering this question is mandatory to design effective interventions against food waste. So far, however, many interventions have not been based on theoretical knowledge. Integrating food waste literature and ambivalence research, we propose that domestic food waste can be understood via the concept of ambivalence—the simultaneous presence of positive and negative associations towards the same attitude object. In support of this notion, we demonstrated in three pre-registered experiments that people experienced ambivalence towards non-perishable food products with expired best before dates. The experience of ambivalence was in turn associated with an increased willingness to waste food. However, two informational interventions aiming to prevent people from experiencing ambivalence did not work as intended (Experiment 3). We hope that the outlined conceptualization inspires theory-driven research on why and when people dispose of food and on how to design effective interventions.
Background
Identifying pain-related response patterns and understanding functional mechanisms of symptom formation and recovery are important for improving treatment.
Objectives
We aimed to replicate pain-related avoidance-endurance response patterns associated with the Fear-Avoidance Model, and its extension, the Avoidance-Endurance Model, and examined their differences in secondary measures of stress, action control (i.e., dispositional action vs. state orientation), coping, and health.
Methods
Latent profile analysis (LPA) was conducted on self-report data from 536 patients with chronic non-specific low back pain at the beginning of an inpatient rehabilitation program. Measures of stress (i.e., pain, life stress) and action control were analyzed as covariates regarding their influence on the formation of different pain response profiles. Measures of coping and health were examined as dependent variables.
Results
Partially in line with our assumptions, we found three pain response profiles of distress-avoidance, eustress-endurance, and low-endurance responses that depend on the level of perceived stress and action control. Distress-avoidance responders emerged as the most burdened, dysfunctional patient group concerning measures of stress, action control, maladaptive coping, and health. Eustress-endurance responders showed one of the highest levels of action versus state orientation, as well as the highest levels of adaptive coping and physical activity. Low-endurance responders reported lower levels of stress as well as equal levels of action versus state orientation, maladaptive coping, and health compared to eustress-endurance responders, but equally low levels of adaptive coping and physical activity compared to distress-avoidance responders.
Conclusions
Apart from the partially supported assumptions of the Fear-Avoidance and Avoidance-Endurance Model, perceived stress and dispositional action versus state orientation may play a crucial role in the formation of pain-related avoidance-endurance response patterns that vary in degree of adaptiveness. Results suggest tailoring interventions based on behavioral and functional analysis of pain responses in order to more effectively improve patients' quality of life.
Evaluation of an eye tracking setup for studying visual attention in face-to-face conversations
(2021)
Many eye tracking studies use facial stimuli presented on a display to investigate attentional processing of social stimuli. To introduce a more realistic approach that allows interaction between two real people, we evaluated a new eye tracking setup in three independent studies in terms of data quality, short-term reliability and feasibility. Study 1 measured the robustness, precision and accuracy for calibration stimuli compared to a classical display-based setup. Study 2 used the identical measures with an independent study sample to compare the data quality for a photograph of a face (2D) and the face of the real person (3D). Study 3 evaluated data quality over the course of a real face-to-face conversation and examined the gaze behavior on the facial features of the conversation partner. Study 1 provides evidence that quality indices for the scene-based setup were comparable to those of a classical display-based setup. Average accuracy was better than 0.4° visual angle. Study 2 demonstrates that eye tracking quality is sufficient for 3D stimuli and robust against short interruptions without re-calibration. Study 3 confirms the long-term stability of tracking accuracy during a face-to-face interaction and demonstrates typical gaze patterns for facial features. Thus, the eye tracking setup presented here seems feasible for studying gaze behavior in dyadic face-to-face interactions. Eye tracking data obtained with this setup achieves an accuracy that is sufficient for investigating behavior such as eye contact in social interactions in a range of populations including clinical conditions, such as autism spectrum and social phobia.
Optimal mental workload plays a key role in driving performance. Thus, driver-assisting systems that automatically adapt to a driver's current mental workload via brain–computer interfacing might greatly contribute to traffic safety. To design economic brain–computer interfaces that do not compromise driver comfort, it is necessary to identify brain areas that are most sensitive to mental workload changes. In this study, we used functional near-infrared spectroscopy and subjective ratings to measure mental workload in two virtual driving environments with distinct demands. We found that demanding city environments induced both higher subjective workload ratings as well as higher bilateral middle frontal gyrus activation than less demanding country environments. A further analysis with higher spatial resolution revealed a center of activation in the right anterior dorsolateral prefrontal cortex. This area is highly involved in spatial working memory processing. Thus, a main component of drivers’ mental workload in complex surroundings might stem from the fact that large amounts of spatial information about the course of the road as well as other road users have to constantly be upheld, processed and updated. We propose that the right middle frontal gyrus might be a suitable region for the application of powerful small-area brain–computer interfaces.
Detection of Preferential Water Flow by Electrical Resistivity Tomography and Self-Potential Method
(2021)
This study explores the hydrogeological conditions of a landslide-prone hillslope in the Upper Mosel valley, Luxembourg. The investigation program included the monitoring of piezometer wells, hydrogeological field tests, analysis of drillcore records, and geophysical surveys. Monitoring and field testing in some of the observation wells indicated very pronounced preferential flow. Electrical resistivity tomography (ERT) and self-potential geophysical methods were employed in the study area for exploration of the morphology of preferential flowpaths. Possible signals associated with flowing groundwater in the subsurface were detected; however, they were diffusively spread over a relatively large zone, which did not allow for the determination of an exact morphology of the conduit. Analysis of drillcore records indicated that flowpaths are caused by the dissolution of thin gypsum interlayers in marls. For better understanding of the site’s hydrogeological settings, a 3D hydrogeological model was compiled. By applying different subsurface flow mechanisms, a hydrogeological model with thin, laterally extending flowpaths embedded in a porous media matrix showed the best correspondence with field observations. Simulated groundwater heads in a preferential flow conduit exactly corresponded with the observed heads in the piezometer wells. This study illustrates how hydrogeological monitoring and geophysical surveys in conjunction with the newest hydrogeological models allow for better conceptualization and parametrization of preferential flow.
Using a dendrochronological approach, we determined the resistance, recovery and resilience of the radial stem increment towards episodes of growth decline, and the accompanying variation of 13C discrimination against atmospheric CO2 (Δ13C) in tree rings of two palaeotropical pine species. These species co-occur in the mountain ranges of south–central Vietnam (1500–1600 m a.s.l.), but differ largely in their areas of distribution (Pinus kesiya from northeast India to the Philippines; P. dalatensis only in south and central Vietnam and in some isolated populations in Laos). For P. dalatensis, a robust growth chronology covering the past 290 years could be set up for the first time in the study region. For P. kesiya, the 140-year chronology constructed was the longest that could be established to date in that region for this species. In the first 40 years of the trees’ lives, the stem diameter increment was significantly larger in P. kesiya, but levelled off and even decreased after 100 years, whereas P. dalatensis exhibited a continuous growth up to an age of almost 300 years. Tree-ring growth of P. kesiya was negatively related to temperature in the wet months and season of the current year and in October (humid transition period) of the preceding year and to precipitation in August (monsoon season), but positively to precipitation in December (dry season) of the current year. The P. dalatensis chronologies exhibited no significant correlation with temperature or precipitation. Negative correlations between BAI and Δ13C indicate a lack of growth impairment by drought in both species. Regression analyses revealed a lower resilience of P. dalatensis upon episodes of growth decline compared to P. kesiya, but, contrary to our hypothesis, mean values of the three sensitivity parameters did not differ significantly between these species. Nevertheless, the vigorous growth of P. kesiya, which does not fall behind that of P. dalatensis even at the margin of its distribution area under below-optimum edaphic conditions, is indicative of a relatively high plasticity of this species towards environmental factors compared to P. dalatensis, which, in tendency, is less resilient upon environmental stress even in the “core” region of its occurrence.
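The resistance, recovery and resilience indices referred to here are commonly computed from the mean increment before (PreDr), during (Dr) and after (PostDr) a decline episode; the following widely used formulation is shown for orientation and may deviate from the exact definitions applied in the study:

```latex
\[
  \text{Resistance} = \frac{Dr}{PreDr}, \qquad
  \text{Recovery} = \frac{PostDr}{Dr}, \qquad
  \text{Resilience} = \frac{PostDr}{PreDr}.
\]
```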
In 2014/2015 a one-year field campaign at the Tiksi observatory in the Laptev Sea area was carried out using Sound Detection and Ranging/Radio Acoustic Sounding System (SODAR/RASS) measurements to investigate the atmospheric boundary layer (ABL) with a focus on low-level jets (LLJ) during the winter season. In addition to SODAR/RASS-derived vertical profiles of temperature, wind speed and direction, a suite of complementary measurements at the Tiksi observatory was available. Data of a regional atmospheric model were used to put the local data into the synoptic context. Two case studies of LLJ events are presented. The statistics of LLJs for six months show that in about 23% of all profiles LLJs were present with a mean jet speed and height of about 7 m/s and 240 m, respectively. In 3.4% of all profiles LLJs exceeding 10 m/s occurred. The main driving mechanism for LLJs seems to be the baroclinicity, since no inertial oscillations were found. LLJs with heights below 200 m are likely influenced by local topography.
Perennial energy crops (PECs) are increasingly used as feedstock to produce energy in an environmentally friendly way. Compared to traditional conversion strategies such as thermal use, sophisticated technologies such as biomethanation define different requirements for the feedstock. Whereas the first concept relies on dry, woody material, biomethanation requires a moist feedstock. Thus, over time, the spectrum of species used as PECs has widened. Moreover, harvest dates were adjusted to provide the feedstock at suitable moisture contents. It is well known that perennial, lignocellulose-based energy crops, compared to annual, sugar- and starch-based ones, offer ecological advantages such as, inter alia, improving biodiversity in the landscape, protecting soil against erosion, and protecting groundwater from nutrient inputs. However, one of the main arguments for PEC cultivation was their undemanding nature concerning external inputs. With respect to the broader spectrum of PEC species and changed harvest dates, the question arises whether the concept of PECs being low-input energy crops is still valid. This also implies the question of suitable growing conditions and sustainable management. The aims of this opinion paper were to classify different PECs according to their life-form strategy, to compare nutrient exports when harvested in different maturation stages, and to discuss the results in the context of sustainable PEC cultivation on marginal land. This study revealed that nutrient exports with yield biomass of PECs harvested in the green state are in the same range as those of annual energy crops and therewith several times higher than those of PECs harvested in the brown state or of woody short-rotation coppices. Thus, PECs cannot universally be claimed to be low-input energy crops. These results also imply consequences for the cultivation of PECs on marginal land. Finally, the question has to be raised whether the term PECs should prospectively be specified more precisely in written and spoken words.
Surveys play a major role in studying social and behavioral phenomena that are difficult to observe. Survey data provide insights into the determinants and consequences of human behavior and social interactions. Many domains rely on high-quality survey data for decision making and policy implementation, including politics, health, business, and the social sciences. Given a certain research question in a specific context, finding the most appropriate survey design to ensure data quality and keep fieldwork costs low at the same time is a difficult task. The aim of examining survey research methodology is to provide the best evidence to estimate the costs and errors of different survey design options. The goal of this thesis is to support and optimize the accumulation and sustainable use of evidence in survey methodology in four steps:
(1) Identifying the gaps in meta-analytic evidence in survey methodology by a systematic review of the existing evidence along the dimensions of a central framework in the field
(2) Filling in these gaps with two meta-analyses in the field of survey methodology, one on response rates in psychological online surveys, the other on panel conditioning effects for sensitive items
(3) Assessing the robustness and sufficiency of the results of the two meta-analyses
(4) Proposing a publication format for the accumulation and dissemination of meta-analytic evidence
Natural hazards are diverse and uneven in time and space; therefore, understanding their complexity is key to saving human lives and conserving natural ecosystems. Reducing the outputs obtained after each modelling analysis is key to presenting the results to stakeholders, land managers and policymakers. Thus, the main goal of this study was to present a method to synthesize three natural hazards in one multi-hazard map and to evaluate it for hazard management and land use planning. To test this methodology, we took as the study area the Gorganrood Watershed, located in the Golestan Province (Iran). First, an inventory map of three different types of hazards including floods, landslides, and gullies was prepared using field surveys and different official reports. To generate the susceptibility maps, a total of 17 geo-environmental factors were selected as predictors using the MaxEnt (Maximum Entropy) machine learning technique. The accuracy of the predictive models was evaluated by drawing receiver operating characteristic (ROC) curves and calculating the area under the ROC curve (AUCROC). The MaxEnt model not only performed very well in terms of goodness of fit, but also achieved significant results in predictive performance. The variable importance for the three studied types of hazards showed that river density, distance from streams, and elevation were the most important factors for floods, respectively. Lithological units, elevation, and annual mean rainfall were relevant for detecting landslides. On the other hand, annual mean rainfall, elevation, and lithological units were used for gully erosion mapping in this study area. Finally, by combining the flood, landslide, and gully erosion susceptibility maps, an integrated multi-hazard map was created. The results demonstrated that 60% of the area is subject to hazards, with the proportion of landslides reaching up to 21.2% of the whole territory. We conclude that this type of multi-hazard map may be a useful tool for local administrators to identify areas susceptible to hazards at large scales, as we demonstrated in this research.
The Eurosystem's Household Finance and Consumption Survey (HFCS) collects micro data on private households' balance sheets, income and consumption. It is a stylised fact that wealth is unequally distributed and that the wealthiest own a large share of total wealth. For sample surveys which aim at measuring wealth and its distribution, this is a considerable problem. To overcome it, some of the country surveys under the HFCS umbrella try to sample a disproportionately large share of households that are likely to be wealthy, a technique referred to as oversampling. Ignoring such types of complex survey designs in the estimation of regression models can lead to severe problems. This thesis first illustrates such problems using data from the first wave of the HFCS and canonical regression models from the field of household finance and gives a first guideline for HFCS data users regarding the use of replicate weight sets for variance estimation using a variant of the bootstrap. A further investigation of the issue necessitates a design-based Monte Carlo simulation study. To this end, the already existing large close-to-reality synthetic simulation population AMELIA is extended with synthetic wealth data. We discuss different approaches to the generation of synthetic micro data in the context of the extension of a synthetic simulation population that was originally based on a different data source. We propose an additional approach that is suitable for the generation of highly skewed synthetic micro data in such a setting using a multiply-imputed survey data set. After a description of the survey designs employed in the first wave of the HFCS, we then construct new survey designs for AMELIA that share core features of the HFCS survey designs. A design-based Monte Carlo simulation study shows that while more conservative approaches to oversampling do not pose problems for the estimation of regression models if sampling weights are properly accounted for, the same does not necessarily hold for more extreme oversampling approaches. This issue should be further analysed in future research.
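A minimal sketch of the replicate-weight bootstrap idea described above: the statistic of interest is recomputed with each replicate weight set, and the variance is estimated from the spread of these replicate estimates around the full-sample estimate. The data, weights and number of replicates below are synthetic stand-ins, not HFCS data.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for HFCS-style data: net wealth, a main sampling weight,
    # and R replicate weight sets (columns) -- all values are illustrative.
    n, n_rep = 500, 100
    wealth = np.exp(rng.normal(11, 1.5, size=n))          # heavily right-skewed
    weights = rng.uniform(100, 5000, size=n)
    rep_weights = weights[:, None] * rng.uniform(0.5, 1.5, size=(n, n_rep))

    def weighted_mean(y, w):
        return np.sum(w * y) / np.sum(w)

    theta = weighted_mean(wealth, weights)

    # Replicate estimates: recompute the statistic with each replicate weight set,
    # then estimate the variance from their spread around the full-sample estimate.
    theta_rep = np.array([weighted_mean(wealth, rep_weights[:, r]) for r in range(n_rep)])
    var_boot = np.mean((theta_rep - theta) ** 2)

    print(f"mean wealth = {theta:,.0f}, bootstrap SE = {np.sqrt(var_boot):,.0f}")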
The daily dose of health information: A psychological view on the health information seeking process
(2021)
The search for health information is becoming increasingly important in everyday life and is both socially and scientifically relevant. Previous studies have mainly focused on the design and communication of information. However, the view of the seeker as well as individual differences in skills and abilities have so far been a neglected topic. A psychological perspective on the process of searching for health information would provide important starting points for promoting the general dissemination of relevant information and thus improving health behaviour and health status. Within the present dissertation, the process of seeking health information was therefore divided into sequential stages to identify relevant personality traits and skills. Accordingly, three studies are presented that focus on one stage
of the process respectively and empirically test potential crucial traits and skills: Study I investigates possible determinants of an intention for a comprehensive search for health information. Building an intention is considered as the basic step of the search process.
Based on theoretical considerations, motivational dispositions and self-regulatory skills were related to each other in a structural equation model and tested empirically. The model showed a good overall fit, and specific direct and indirect effects of approach and avoidance motivation on the intention to seek comprehensively were found, which supports the theoretical assumptions. The results show that as early as the formation of intention, the psychological perspective reveals influential personality traits and skills. Study II deals with the subsequent step, the selection of information sources. The preference for basic characteristics of information sources (i.e., accessibility, expertise, and interaction) is related to health information literacy, as a collective term for relevant skills, and to intelligence as a personality trait. Furthermore, the study considers the influence of possible over- or underestimation of these characteristics. The results show not only a different predictive
contribution of health literacy and intelligence, but also the relevance of subjective and objective measurement.
Finally, Study III deals with the selection and evaluation of the health information previously found. The phenomenon of selective exposure is analysed, as this can be considered problematic in the health context. For this purpose, an experimental design was implemented in which a varying health threat was suggested to the participants. Relevant information was presented and the selective choice of this information was assessed. Health literacy was tested
as a moderator of the effect of the induced threat and perceived vulnerability, which trigger defence motives, on the degree of bias. The findings show the importance of considering these defence motives, which can cause a bias in the form of selective exposure. Furthermore, health literacy even seems to amplify this effect.
The results of the three studies are synthesized and discussed, general conclusions are drawn, and implications for further research are outlined.
The ability to acquire knowledge helps humans to cope with the demands of the environment. Supporting knowledge acquisition processes is among the main goals of education. Empirical research in educational psychology has identified several processes through which prior knowledge affects learning. However, the majority of studies investigated cognitive mechanisms mediating between prior knowledge and learning and neglected that motivational processes might also mediate this influence. In addition, the impact of successful knowledge acquisition on patients’ health has not been comprehensively studied. This dissertation aims to close these knowledge gaps by means of three studies. The first study is a meta-analysis that examined motivation as a mediator of individual differences in knowledge before and after learning. The second study investigated in greater detail the extent to which motivation mediated the influence of prior knowledge on knowledge gains in a sample of university students. The third study is a second-order meta-analysis synthesizing the results of previous meta-analyses on the effects of patient education on several health outcomes. The findings of this dissertation show that (a) motivation mediates individual differences in knowledge before and after learning; (b) interest and academic self-concept stabilize individual differences in knowledge more than academic self-efficacy, intrinsic motivation, and extrinsic motivation; (c) test-oriented instruction closes knowledge gaps between students; (d) students’ motivation can be independent of prior knowledge in high-aptitude students; (e) knowledge acquisition affects motivational and health-related outcomes; and (f) evidence on prior knowledge and motivation can help develop effective interventions in patient education. The results of the dissertation provide insights into prerequisites, processes, and outcomes of knowledge acquisition. Future research should address covariates of learning and environmental impacts for a better understanding of knowledge acquisition processes.
Teamwork is ubiquitous in the modern workplace. However, it is still unclear whether various behavioral economic factors decrease or increase team performance. Therefore, Chapters 2 to 4 of this thesis aim to shed light on three research questions that address different determinants of team performance.
Chapter 2 investigates the idea of an honest workplace environment as a positive determinant of performance. In a work group, two out of three co-workers can obtain a bonus in a dice game: by misreporting a secret die roll, they can cheat without exposure. Contrary to claims on the importance of honesty at work, we do not observe a reduction in the performance of the third co-worker, who is an uninvolved bystander when cheating takes place.
Chapter 3 analyzes the effect of team size on performance in a workplace environment in which either two or three individuals perform a real-effort task. Our main result shows that the difference in team size is not harmful to task performance on average. In our discussion of potential mechanisms, we provide evidence on ongoing peer effects. It appears that peers are able to alleviate the potential free-rider problem emerging out of working in a larger team.
In Chapter 4, the role of perceived co-worker attractiveness for performance is analyzed. The results show that the higher the perceived attractiveness of co-workers, the lower the task performance, but only in opposite-sex constellations.
The following Chapter 5 analyzes the effect of offering an additional payment option in a fundraising context. Chapter 6 focuses on privacy concerns of research participants.
In Chapter 5, we conduct a field experiment in which participants have the opportunity to donate for the continuation of an art exhibition either by cash only or by cash plus an additional cashless payment option (CPO). The treatment manipulation is complemented by framing the act of giving either as a donation or as a pay-what-you-want contribution. Our results show that donors shy away from using the CPO in all treatment conditions. Despite that, there is no negative effect of the CPO on the frequency of financial support or its magnitude.
In Chapter 6, I conduct an experiment to test whether increased transparency of data processing affects data disclosure and whether the results change if it is indicated that the implementation of the GDPR happened involuntarily. I find that increased transparency raises the number of participants who do not disclose personal data by 21 percent. However, this is not the case in the involuntary-signal treatment, where the share of non-disclosures is relatively high in both conditions.
This thesis contributes to the economic literature on India and specifically focuses on investment project (IP) location choice. I study three topics that naturally arise in sequence: geographic concentration of investment projects, the determinants of the location choices, and the impact these choices have on project success.
In Chapter 2, I provide the analysis of geographic concentration of IPs. I find that investments were concentrated over the period of observation (1996–2015), although the degree of concentration was decreasing. Additionally, I analyze different subsamples of the data set by ownership (Indian private, Indian public and foreign) and project status (completed or dropped). Foreign projects in all industries are more concentrated than private and public ones, while for the latter categories I identify only minor differences in concentration levels. Additionally, I find that the location patterns of completed and dropped investments are similar to those of the overall distribution and of their respective industries, with completed IPs being somewhat more concentrated.
In Chapter 3, I study the determinants of project location choices with the focus on an important highway upgrade, the Golden Quadrilateral (GQ). In line with the existing literature, the GQ construction is connected to higher levels of investment in the affected non-nodal GQ districts in 2002–2016. I also provide suggestive evidence on changes in firm behavior after the GQ construction: Firms located in the non-nodal GQ districts became less likely to invest in their neighbor districts after the GQ completion compared to firms located in districts unaffected by the GQ construction.
Finally, in Chapter 4, I investigate the characteristics of IPs that may contribute to discontinuation of their implementation by comparing completed investments to dropped ones, defined as abandoned, shelved, and stalled investments as identified on the date of the data download. Controlling for local and business cycle conditions, as well as various investor and project characteristics, I show that projects located in close proximity to the investor offices (i.e., in the same district) are more likely to achieve the completion stage than more remote projects.
While women's evolving contribution to entrepreneurship is irrefutable, in almost all nations, gender disparity is an existing reality of entrepreneurship. Social and economic outcomes make women entrepreneurship an important area for scholars and governments. In attempts to find reasons for this gender disparity, academic scholars evaluated various factors and recognised perceptual variables as having outstanding explanatory value in understanding women's entrepreneurship. To advance our knowledge of gender disparity in entrepreneurship, the present study explores the influence of entrepreneurial perceptual variables on women's entrepreneurship and considers the critical role of country-level institutional contexts on the women's entrepreneurial propensity. Therefore, this study examines the impact of perceptual variables in different nations. It also offers connections between entrepreneurial perceptions, women entrepreneurship, and institutional contexts as a critical topic for future studies.
Drawing on the importance of perceptual factors, this dissertation investigates whether and how their perception of entrepreneurial networks influences the individuals' decision to initiate a new venture. Prior scholars considered exposure to entrepreneurial role models as one of the most influential factors on the women's inclination towards entrepreneurship; thus, a systemized analysis makes it possible to identify existing research gaps related to this perception. Hence, to draw a clear picture of the relationship between entrepreneurial role models and entrepreneurship, this dissertation provides a systemized overview of prior studies. Subsequently, Chapter 2 structures the existing literature on entrepreneurial role models and reveals that past literature has focused on the different types of role models, the stage of life at which the exposure to role models occurs, and the context of the exposure. Current discourse argues that the women's lower access to entrepreneurial role models negatively influences their inclination towards entrepreneurship.
Additionally, although the research on women entrepreneurship has proliferated in recent years, little is known about how entrepreneurial perceptual variables form women's propensity towards entrepreneurship in various institutional contexts. The work of Koellinger et al. (2013), hereafter KMS, is one of the most influential papers that investigated the influence of perceptual variables, and it showed that a lower rate of women entrepreneurship is associated with a lower level of their entrepreneurial network, perceived entrepreneurial capability, and opportunity evaluation and with a higher fear of entrepreneurial failure. Thus, this dissertation replicates the work of KMS. Chapter 3 explicitly investigates the influence of the above perceptions on women's entrepreneurial propensity. This research has drawn data from the Global Entrepreneurship Monitor, a cross-national individual-level data set (2001-2006) covering 236,556 individuals across 17 countries. The results of this chapter suggest that gender disparities in entrepreneurial propensity are conditioned by differences in entrepreneurial perceptual variables. Women's lower levels of perceived entrepreneurial capability, entrepreneurial role models and opportunity evaluation and their higher fear of failure lead to lower entrepreneurial propensity.
To extend and generalise the relationship between perceptions and women’s entrepreneurial propensity, two studies based on the replicated research are conducted in Chapter 4. Extension 1 generalises the results of KMS by applying the same analysis to more recent data. Accordingly, this research implemented the same analysis on 372,069 individuals across the same countries (2011-2016). The recent data show that although gender disparity became significantly weaker, the gender gap is still in men’s favour. However, similarly to the replicated study, this research revealed that perceptual factors explain a larger part of the gender disparity. To strengthen prior empirical evidence, extension 2 applies the same measures and analysis in a more global setting, utilising a sample of 1,029,863 individuals from 71 countries (2011-2016). When developing countries are included, gender disparity in entrepreneurial propensity decreases significantly. The study revealed that the relative significance of the influence of perceptions differs significantly across nations; however, perceptions have a worldwide effect. Moreover, this research found that the ratio of nascent women entrepreneurs in less developed countries to those in more developed nations is 2. More precisely, a higher level of economic development negatively influences the impact of perceptions on women’s entrepreneurial propensity.
Whereas prior scholars increasingly underlined the importance of perceptions in explaining a large part of gender disparities in entrepreneurship, most of the prior investigations focused on nascent (early-stage) entrepreneurship, and evidence on the relationship between perceptions and other types of self-employment, such as innovative entrepreneurship, is scant. Innovation is a confirmed key driver of a firm's sustainability, higher competitive capability, and growth. Therefore, Chapter 5 investigates the influence of perceptions on women's innovative entrepreneurship. The chapter points out that entrepreneurial perceptions are the main determinants of the women's decision to offer a new product or service. This chapter also finds that women's innovative entrepreneurship is associated with the country's specific economic setting.
Overall, by underlining the critical role of institutional contexts, this dissertation provides considerable insights into the interaction between perceptions and women entrepreneurship, and its results have implications for policymakers and practitioners, who may find it helpful to view women entrepreneurship in terms of systemic challenges. Formal and informal barriers affect women’s entrepreneurial perceptions and can differ from one country to the other. In this sense, it is crucial to design operational plans to mitigate formal and stereotypical challenges so that more women will be able to start a business, particularly in developing countries, in which women make up a significantly smaller portion of the labour markets. Such policies could write the “rules of the game” such that these rules enhance women’s propensity towards entrepreneurship.
The present work explores how theories of motivation can be used to enhance video game research. Currently, Flow-Theory and Self-Determination Theory are the most common approaches in the field of Human-Computer Interaction. The dissertation provides an in-depth look into Motive Disposition Theory and how to utilize it to explain interindividual differences in motivation. Different players have different preferences and make different choices when playing games, and not every player experiences the same outcomes when playing the same game. I provide a short overview of the current state of the research on motivation to play video games. Next, Motive Disposition Theory is applied in the context of digital games in four different research papers, featuring seven studies, totaling 1197 participants. The constructs of explicit and implicit motives are explained in detail while focusing on the two social motives (i.e., affiliation and power). As dependent variables, behaviour, preferences, choices, and experiences are used in different game environments (i.e., Minecraft, League of Legends, and Pokémon). The four papers are followed by a general discussion about the seven studies and Motive Disposition Theory in general. Finally, a short overview is provided about other theories of motivation and how they could be used to further our understanding of the motivation to play digital games in the future. This thesis proposes that 1) Motive Disposition Theory represents a valuable approach to understand individual motivations within the context of digital games; 2) there is a variety of motivational theories that can and should be utilized by researchers in the field of Human-Computer Interaction to broaden the currently one-sided perspective on human motivation; 3) researchers should aim to align their choice of motivational theory with their research goals by choosing the theory that best describes the phenomenon in question and by carefully adjusting each study design to the theoretical assumptions of that theory.
In her poems, Tawada constructs liminal speaking subjects – voices from the in-between – which disrupt entrenched binary thought processes. Synthesising relevant concepts from theories of such diverse fields as lyricology, performance studies, border studies, cultural and postcolonial studies, I develop ‘voice’ and ‘in-between space’ as the frameworks to approach Tawada’s multifaceted poetic output, from which I have chosen 29 poems and two verse novels for analysis. Based on the body speaking/writing, sensuality is central to Tawada’s use of voice, whereas the in-between space of cultures and languages serves as the basis for the liminal ‘exophonic’ voices in her work. In the context of cultural alterity, Tawada focuses on the function of language, both its effect on the body and its role in subject construction, while her feminist poetry follows the general development of feminist academia from emancipation to embodiment to queer representation. Her response to and transformation of écriture féminine in her verse novels transcends the concept of the body as the basis of identity, moving to literary and linguistic, plural self-construction instead. While few poems are overtly political, the speaker’s personal and contextual involvement in issues of social conflict reveal the poems’ potential to speak of, and to, the multiply identified citizens of a globalised world, who constantly negotiate physical as well as psychological borders.
With the ongoing trend towards deep learning in the remote sensing community, classical pixel-based algorithms are often outperformed by convolution-based image segmentation algorithms. This performance has mostly been validated spatially, by splitting training and validation pixels for a given year. Although generalizing models temporally is potentially more difficult, it has been a recent trend to transfer models from one year to another and therefore to validate temporally. The study argues that it is always important to check both in order to generate models that are useful beyond the scope of the training data. It shows that convolutional neural networks have the potential to generalize better than pixel-based models, since they do not rely on phenological development alone but can also consider object geometry and texture. The UNET classifier achieved the highest F1 scores, averaging 0.61 on temporal validation samples and 0.77 on spatial validation samples. The theoretical risk of overfitting to geometry, i.e. simply memorizing the shapes of maize fields, was shown to be insignificant in practical applications. In conclusion, kernel-based convolutions can contribute substantially to making agricultural classification models more transferable, both to other regions and to other years.
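To make the distinction between spatial and temporal validation concrete, the following sketch trains a simple classifier (a random forest, used here only as a stand-in for the pixel-based and convolutional models of the study) and computes F1 scores once for a split by year and once for a split within years; all samples are synthetic.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(2)

    # Synthetic samples: spectral/temporal features, a binary maize label,
    # plus the acquisition year of each sample -- placeholder data only.
    X = rng.normal(size=(2000, 8))
    y = (X[:, 0] + rng.normal(scale=1.0, size=2000) > 0).astype(int)
    year = rng.choice([2018, 2019], size=2000)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)

    # Temporal validation: train on one year, validate on the other.
    train, test = year == 2018, year == 2019
    clf.fit(X[train], y[train])
    f1_temporal = f1_score(y[test], clf.predict(X[test]))

    # Spatial validation: random split within the same years (a simple proxy
    # for holding out spatially separate pixels).
    idx = rng.permutation(len(y))
    tr, te = idx[:1400], idx[1400:]
    clf.fit(X[tr], y[tr])
    f1_spatial = f1_score(y[te], clf.predict(X[te]))

    print(f"F1 temporal = {f1_temporal:.2f}, F1 spatial = {f1_spatial:.2f}")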
Many people are aware of the negative consequences of plastic use for the environment. Nevertheless, they use plastic due to its functionality. In the present paper, we hypothesized that this leads to the experience of ambivalence: the simultaneous existence of positive and negative evaluations of plastic. In two studies, we found that participants showed greater ambivalence toward plastic-packed food than toward unpacked food. Moreover, they rated plastic-packed food less favorably than unpacked food in their response evaluations. In Study 2, we tested whether one-sided (only positive vs. only negative) information interventions could effectively influence ambivalence. Results showed that ambivalence is resistant to (social) influence. Directions for future research were discussed.
Energy transition strategies in Germany have led to an expansion of energy crop cultivation in the landscape, with silage maize as the most valuable feedstock. The changes in traditional cropping systems, with increasing shares of maize, have raised concerns about the sustainability of agricultural feedstock production with regard to threats to soil health. However, spatially explicit data about silage maize cultivation are missing; thus, implications for soil cannot be estimated in a precise way. With this study, we firstly aimed to track the fields cultivated with maize based on remote sensing data. Secondly, available soil data were processed in a target-specific way to determine the site-specific vulnerability of the soils to erosion and compaction. The generated, spatially explicit data served as a basis for a differentiated analysis of the development of the agricultural biogas sector, the associated maize cultivation and its implications for soil health. In the study area, located in a low mountain range region in Western Germany, the number and capacity of biogas-producing units increased by 25 installations and 10,163 kW from 2009 to 2016. The remote sensing-based classification approach showed that the maize cultivation area was expanded by 16% from 7305 to 8447 hectares. Thus, maize cultivation accounted for about 20% of the arable land use, however with distinct local differences. A considerable share of about 30% of the maize cultivation took place on fields that show at least a high potential for soil erosion, exceeding 25 t soil ha−1 a−1. Furthermore, about 10% of the maize cultivation took place on fields that pedogenetically show an elevated risk of soil compaction. In order to reach more sustainable cultivation systems of feedstock for anaerobic digestion, changes in cultivated crops and management strategies are urgently required, particularly in view of the first signs of climate change. The presented approach can be modified regionally in order to develop site-adapted, sustainable bioenergy cropping systems.
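A minimal sketch of the overlay step described above: a classified maize mask is intersected with a soil-loss raster to compute the share of the maize area exceeding the 25 t ha−1 a−1 erosion threshold. Both rasters are synthetic placeholders, not the study’s data.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic rasters on a common grid: a boolean maize mask and the modelled
    # potential soil loss in t ha^-1 a^-1 -- placeholder values only.
    maize = rng.random((500, 500)) < 0.2
    soil_loss = rng.gamma(shape=2.0, scale=8.0, size=(500, 500))

    # Share of the maize area located on fields exceeding the erosion threshold.
    high_risk = soil_loss > 25.0
    share = np.count_nonzero(maize & high_risk) / np.count_nonzero(maize)
    print(f"maize area on high erosion-risk land: {share:.1%}")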
The parameterization of ocean/sea-ice/atmosphere interaction processes is a challenge for regional climate models (RCMs) of the Arctic, particularly for wintertime conditions, when small fractions of thin ice or open water cause strong modifications of the boundary layer. Thus, the treatment of sea ice and sub-grid flux parameterizations in RCMs is of crucial importance. However, verification data sets over sea ice for wintertime conditions are rare. In the present paper, data of the ship-based experiment Transarktika 2019 during the end of the Arctic winter for thick one-year ice conditions are presented. The data are used for the verification of the regional climate model COSMO-CLM (CCLM). In addition, Moderate Resolution Imaging Spectroradiometer (MODIS) data are used for the comparison with ice surface temperature (IST) simulations of the CCLM sea ice model. CCLM is used in a forecast mode (nested in ERA5) for the Norwegian and Barents Seas with 5 km resolution and is run with different configurations of the sea ice model and sub-grid flux parameterizations. The use of a new set of parameterizations yields improved results in the comparisons with in-situ data. Comparisons with MODIS IST allow for a verification over large areas and also show a good performance of CCLM. The comparison with twice-daily radiosonde ascents during Transarktika 2019, hourly microwave water vapor measurements of the lowest 5 km of the atmosphere and hourly temperature profiler data shows a very good representation of the temperature, humidity and wind structure of the whole troposphere by CCLM.
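As a small illustration of the verification metrics used in such comparisons, the following sketch computes bias and root-mean-square error between collocated simulated and observed ice surface temperatures; both series are synthetic, not CCLM or MODIS data.

    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic collocated series: observed (e.g., MODIS-like IST) and the
    # corresponding model-simulated values -- placeholder data only.
    obs = rng.normal(-25.0, 5.0, size=300)
    model = obs + rng.normal(0.5, 1.5, size=300)   # a small warm bias plus noise

    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    print(f"bias = {bias:.2f} K, RMSE = {rmse:.2f} K")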
Social innovation has become a widely discussed topic in politics, research funding programs, and business development. Recent European and US economic and science policies have set aside significant funds to generate and foster social innovation. In view of current challenges such as digitization, Work 4.0, inclusion or migrant integration, the question of how organizations can be empowered to develop new and innovative approaches and service models for social challenges is becoming increasingly urgent. This especially applies to organizations in the fields of education and social services. In education, implementing new ideas and concepts is usually discussed as educational reform, which mostly addresses changes in policy agendas with consequences for national and international education systems. The concept of social innovation, however, has a different starting point: the sources of new ideas and services are newly identified, emergent needs in society or needs that have been re-conceptualized. Such need-based perspectives might bring new impulses to the field of education. Therefore, this paper identifies important existing strands of social innovation research, which need to be considered in the emerging academic discourse on social innovation in education. Looking at social innovation through an education research lens reveals the close relation between learning, creativity, and innovation: individuals, teams, and even organizations learn and engage in creative problem solving to create new and innovative products and services. From an organizational education perspective, the questions arise of how social innovation emerges and, even more importantly, how the process of developing social innovation can be supported. After a brief introduction to the concept of social innovation, the paper therefore discusses the sites where social innovation emerges, the social innovators, approaches to fostering social innovation, and promoting and hindering factors for social innovation.
Designing a Randomized Trial with an Age Simulation Suit—Representing People with Health Impairments
(2020)
Due to demographic change, there is an increasing demand for professional care services which cannot be met by the available caregivers. To enable adequate care while relieving informal and formal caregivers, the independence of people with chronic diseases has to be preserved for as long as possible. Assistance approaches can be used that support the promotion of physical activity, which is a main predictor of independence. One challenge is to design and test such approaches without affecting the people in focus. In this paper, we propose a design for a randomized trial that enables the use of an age simulation suit to generate, with young and healthy participants, reference data representing people with health impairments. Therefore, we focus on situations of increased physical activity.
Digitalization primarily takes place in and through organizations. Despite this prominent role, however, the importance of organizational structure-building processes in the digital transformation is still underexposed in the discourse. The fact that ongoing digitalization is linked to an established phenomenon with its own logic is regularly not addressed, owing to the attraction of the semantics of the digital revolution. The digital revolution and the reordering of societal relationships, though, manifest themselves primarily in processes of reorganization. Structural automation processes in the ongoing digital transformation are limiting the scope for action, necessitating forms of structural structurelessness in organizations that cultivate opportunities for chance. Organizations realize their operations as a duality of structure and individual; the principle of organization is therefore based on the complementarity of structural formality and unpredictable informality. The paper discusses the topicality of the classical form of modern organization in the digital age and reflects on approaches to a contemporary design of spaces of opportunity. The reflexive handling of future openness is the central task of management and leadership in order to enable variation and innovation in organizations.
Primary focal hyperhidrosis (PFH, OMIM %144110) is a genetically influenced condition characterised by excessive sweating. Prevalence varies between 1.0–6.1% in the general population, depending on ethnicity. The aetiology of PFH remains unclear, but an autosomal dominant mode of inheritance, incomplete penetrance and variable phenotypes have been reported. In our study, nine pedigrees (50 affected, 53 non-affected individuals) were included. Clinical characterisation was performed at the German Hyperhidrosis Centre, Munich, using physiological and psychological questionnaires. Genome-wide parametric linkage analysis with GeneHunter was performed based on the Illumina genome-wide SNP arrays. Haplotypes were constructed using easyLINKAGE and visualised via HaploPainter. Whole-exome sequencing (WES) with 100x coverage was performed by next-generation sequencing in 31 selected members (24 affected, 7 non-affected) of our pedigrees. We identified four genome-wide significant loci, 1q41-1q42.3, 2p14-2p13.3, 2q21.2-2q23.3 and 15q26.3-15q26.3, for PFH. Three pedigrees map to a shared locus at 2q21.2-2q23.3, with a genome-wide significant LOD score of 3.45. The chromosomal region identified here overlaps with a locus at chromosome 2q22.1-2q31.1 reported previously. Three families support 1q41-1q42.3 (LOD = 3.69), two families share a region identical by descent at 2p14-2p13.3 (LOD = 3.15) and another two families at 15q26.3 (LOD = 3.01). Thus, our results point to considerable genetic heterogeneity. WES did not reveal any causative variants, suggesting that variants or mutations located outside the coding regions might be involved in the molecular pathogenesis of PFH. We suggest a strategy based on whole-genome or targeted next-generation sequencing to identify causative genes or variants for PFH.
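For reference, the parametric LOD scores reported above follow the standard definition used in linkage analysis,

    \mathrm{LOD}(\theta) = \log_{10} \frac{L(\theta)}{L(\theta = 0.5)},

where L(θ) is the pedigree likelihood at recombination fraction θ and L(θ = 0.5) the likelihood under the assumption of no linkage; a LOD score of 3 or more is conventionally interpreted as significant evidence for linkage.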
Laboratory landslide experiments enable the observation of specific properties of these natural hazards. However, such observations are limited by traditional techniques, namely the frequently used high-speed video analysis and wired sensors (e.g. displacement sensors). These techniques have the drawback that either only the surface and 2D profiles can be observed or the wires confine the motion behaviour. In contrast, an unconfined observation of the total spatiotemporal dynamics of landslides is needed for an adequate understanding of these natural hazards.
The present study introduces an autonomous and wireless probe to characterize motion features of single clasts within laboratory-scale landslides. The Smartstone probe is based on an inertial measurement unit (IMU) and records acceleration and rotation at a sampling rate of 100 Hz. The recording ranges are ±16 g (accelerometer) and ±2000∘ s−1 (gyroscope). The plastic tube housing is 55 mm long with a diameter of 10 mm. The probe is controlled, and data are read out via active radio frequency identification (active RFID) technology. Due to this technique, the probe works under low-power conditions, enabling the use of small button cell batteries and minimizing its size.
Using the Smartstone probe, the motion of single clasts (gravel size, median particle diameter d50 of 42 mm) within approx. 520 kg of a uniformly graded pebble material was observed in a laboratory experiment. Single pebbles were equipped with probes and placed embedded and superficially in or on the material. In a first analysis step, the data of one pebble are interpreted qualitatively, allowing for the determination of different transport modes, such as translation, rotation and saltation. In a second step, the motion is quantified by means of derived movement characteristics: the analysed pebble moves mainly in the vertical direction during the first motion phase with a maximal vertical velocity of approx. 1.7 m s−1. A strong acceleration peak of approx. 36 m s−2 is interpreted as a pronounced hit and leads to a complex rotational-motion pattern. In a third step, displacement is derived and amounts to approx. 1.0 m in the vertical direction. The deviation compared to laser distance measurements was approx. −10 %. Furthermore, a full 3D spatiotemporal trajectory of the pebble is reconstructed and visualized supporting the interpretations. Finally, it is demonstrated that multiple pebbles can be analysed simultaneously within one experiment. Compared to other observation methods Smartstone probes allow for the quantification of internal movement characteristics and, consequently, a motion sampling in landslide experiments.
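A minimal sketch of the third analysis step, deriving displacement from the recorded acceleration by double integration at the probe’s 100 Hz sampling rate; the acceleration signal below is synthetic, and in practice the sensor-frame data must first be rotated into the earth frame and corrected for gravity.

    import numpy as np

    fs = 100.0                      # Smartstone sampling rate in Hz
    t = np.arange(0, 2.0, 1.0 / fs)

    # Synthetic vertical acceleration in the earth frame (m s^-2): a free-fall-like
    # phase followed by a short deceleration -- placeholder signal, not probe data.
    acc = np.where(t < 0.5, -9.81, 0.0) + np.where((t >= 0.5) & (t < 0.6), 49.0, 0.0)

    # Double integration (cumulative trapezoidal rule) yields velocity and
    # vertical displacement from the acceleration samples.
    vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) / 2) / fs))
    disp = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) / 2) / fs))

    print(f"peak downward velocity: {vel.min():.2f} m/s")
    print(f"net vertical displacement: {disp[-1]:.2f} m")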
Currently, new business models created in the sharing economy differ considerably, and they differ in the formation of trust as well. Whether and how trust can be created is shown by a comparison of two examples that diverge in their founding philosophy. The chosen example of a community-based economy, Community Supported Agriculture (CSA), no longer trusts the capitalist system and therefore distances itself from it and creates its own environment, including a new business model. It is implemented within rather small groups, where trust is created by personal relations and face-to-face communication. By contrast, the example of a platform economy, the accommodation-provider company Airbnb, shows trust in the system and pushes technological innovations through the use of platform applications. It promotes trust and confidence in the progress of technology. For the conceptual analysis, the distinction between personal trust and system trust defined by Niklas Luhmann is adopted. The analysis describes two different modes of trust formation and how they foster distrust or improve trust. Grounded in these analyses, assumptions on the process of trust formation within varying models of the sharing economy are formulated, and a hypothesis about possible developments is introduced for further research.
The study analyzes the long-term trends (1998–2019) of concentrations of the air pollutants ozone (O3) and nitrogen oxides (NOx) as well as meteorological conditions at forest sites in German midrange mountains to evaluate changes in O3 uptake conditions for trees over time at a plot scale. O3 concentrations did not show significant trends over the course of 22 years, unlike NO2 and NO, whose concentrations decreased significantly since the end of the 1990s. Temporal analyses of meteorological parameters found increasing global radiation at all sites and decreasing precipitation, vapor pressure deficit (VPD), and wind speed at most sites (temperature did not show any trend). A principal component analysis revealed strong correlations between O3 concentrations and global radiation, VPD, and temperature. Examination of the atmospheric water balance, a key parameter for O3 uptake, identified some unusually hot and dry years (2003, 2011, 2018, and 2019). With the help of a soil water model, periods of plant water stress were detected. These periods were often in synchrony with periods of elevated daytime O3 concentrations and usually occurred in mid and late summer, but occasionally also in spring and early summer. This suggests that drought protects forests against O3 uptake and that, in humid years with moderate O3 concentrations, the O3 flux was higher than in dry years with higher O3 concentrations.
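As a simple illustration of the kind of trend assessment behind statements such as "O3 concentrations did not show significant trends", the following sketch fits an ordinary least-squares trend to synthetic annual means and reports its slope and p-value; the actual study may well have used a different trend test, and the series below is not measured data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Synthetic annual mean O3 concentrations (ppb) for 1998-2019 -- placeholders.
    years = np.arange(1998, 2020)
    o3 = 45 + rng.normal(0, 3, size=years.size)

    # Simple linear trend test: slope and p-value of an OLS fit against time.
    res = stats.linregress(years, o3)
    print(f"trend = {res.slope:.2f} ppb/yr, p = {res.pvalue:.2f}")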
Although gravitropism forces trees to grow vertically, stems have been shown to prefer specific orientations. Apart from wind deforming the tree shape, lateral light can result in prevailing inclination directions. In recent years a species-dependent interaction between gravitropism and phototropism, resulting in trunks leaning down-slope, has been confirmed, but terrestrial investigation of such factors is limited to small-scale surveys. Airborne laser scanning (ALS) offers the opportunity to investigate trees remotely. This study clarifies whether ALS-detected tree trunks can be used to identify prevailing trunk inclinations. In particular, the effects of topography, wind, soil properties and scan direction are investigated empirically using linear regression models. 299,000 significantly inclined stems were investigated. Species-specific prevailing trunk orientations could be observed. About 58% of the inclination and 19% of the orientation could be explained by the linear models, while tree species, tree height, aspect and slope were identified as significant factors. The models indicate that deciduous trees tend to lean down-slope, while conifers tend to lean leeward. This study has shown that ALS is suitable for investigating trunk orientation at larger scales. It provides empirical evidence for the effect of phototropism and wind on trunk orientation.
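A minimal sketch of a linear regression model of the kind described, explaining trunk inclination by species, tree height, aspect and slope; the stem data are synthetic placeholders, not ALS-derived values.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n = 1000

    # Synthetic stand-in for ALS-detected stems: species, height (m), terrain
    # aspect and slope (deg), and trunk inclination (deg) -- placeholder data.
    df = pd.DataFrame({
        "species": rng.choice(["beech", "spruce"], size=n),
        "height": rng.uniform(10, 40, size=n),
        "aspect": rng.uniform(0, 360, size=n),
        "slope": rng.uniform(0, 35, size=n),
    })
    df["inclination"] = (
        2.0
        + 0.05 * df["slope"] * (df["species"] == "beech")
        + 0.02 * df["height"]
        + rng.normal(0, 1, size=n)
    )

    # Linear model with species, height, aspect and slope as explanatory factors.
    model = smf.ols("inclination ~ species + height + aspect + slope", data=df).fit()
    print(model.params.round(3))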
Soil degradation due to erosion is a significant worldwide problem at different spatial (from pedon to watershed) and temporal scales. All stages and factors in the erosion process must be detected and evaluated to reduce this environmental issue and protect existing fertile soils and natural ecosystems. Laboratory studies using rainfall simulators allow single factors and interactive effects to be investigated under controlled conditions during extreme rainfall events. In this study, three main factors (rainfall intensity, inclination, and rainfall duration) were assessed to obtain empirical data for modeling water erosion during single rainfall events. Each factor was divided into three levels (−1, 0, +1), which were applied in different combinations using a rainfall simulator on beds (6 × 1 m) filled with soil from a study plot located in the arid Sistan region, Iran. The rainfall duration levels tested were 3, 5, and 7 min, the rainfall intensity levels were 30, 60, and 90 mm/h, and the inclination levels were 5, 15, and 25%. The results showed that the highest rainfall intensity tested (90 mm/h) for the longest duration (7 min) caused the highest runoff (62 mm³/s) and soil loss (1580 g/m²/h). Based on the empirical results, a quadratic function was the best mathematical model (R² = 0.90) for predicting runoff (Q) and soil loss. Single-factor analysis revealed that rainfall intensity was more influential for runoff production than changes in time and inclination, while rainfall duration was the most influential single factor for soil loss. Modeling and three-dimensional depictions of the data revealed that sediment production was high and runoff production lower at the beginning of the experiment, but this trend was reversed over time as the soil became saturated. These results indicate that avoiding the initial stage of erosion is critical, so all soil protection measures should be taken to reduce the impact at this stage. The final stages of erosion appeared too complicated to be modeled, because different factors showed differing effects on erosion.
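To illustrate the model fitting described above, the following sketch fits a second-order (quadratic) polynomial for runoff as a function of rainfall intensity, duration and inclination and reports the coefficient of determination; the experimental values are synthetic placeholders, not measurements from the rainfall simulator.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(7)

    # Synthetic experimental design: rainfall intensity (mm/h), duration (min),
    # inclination (%) and the measured runoff -- placeholder values only.
    X = np.column_stack([
        rng.choice([30, 60, 90], size=60),
        rng.choice([3, 5, 7], size=60),
        rng.choice([5, 15, 25], size=60),
    ])
    runoff = 0.005 * X[:, 0] ** 2 + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 2, size=60)

    # Quadratic (second-order polynomial) response surface fitted by least squares.
    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    model.fit(X, runoff)
    print("R^2 =", round(r2_score(runoff, model.predict(X)), 2))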