The main aim of "Her Idoll Selfe"? Shaping Identity in Early Modern Women's Self-Writings is to offer fresh readings of as yet little-read early modern women's texts. I look at a variety of texts that are either explicitly concerned with the constitution of the writer's self, such as the autobiographies by Lady Grace Mildmay and Martha Moulsworth, or in which the preoccupation with the self is of a more indirect nature, as in the mothers' advice books by Elizabeth Grymeston, Dorothy Leigh, Elizabeth Richardson or the anonymous M. R., or even in women's poetry, drama and religious verse. I situate the texts in the context of early modern discourses of femininity and subjectivity to pursue the question of how far it was possible for early modern women to achieve a sense of agency in spite of their culturally marginal position. In this, my readings aim to contribute to the ongoing critical process of decentring the early modern period. At the same time, I draw on contemporary theory as a methodological tool that can open up further dimensions of the texts, especially in places where the texts provide clues and parallels that lend themselves to a theoretical approach. Conversely, the texts themselves shed interesting light on feminist and poststructuralist theory and can serve as testing grounds for the current critical fascination with fragmentation and hybridity. Having outlined the theoretical and methodological framework of my study, I then analyse the women's writings with reference to a matrix of paradigmatic dimensions that encompass their most prominently recurring themes: the notion of writing the self, relationships between self and other, demarcations of private and public, the women's notorious preoccupation with self-loss and death, as well as the recurrent theme of the "golden meane". I suggest that this motif can provide the vital cue to early modern women's constitution of self.
The idea of a precarious "golden meane" links in with parallel discourses of moderation and balance at the time, but reinterprets them in a manner that can present a workable and innovative paradigm of subjectivity. Instead of subscribing to a model of decentred selfhood, early modern women's presentations of self suggest that a concluding but contested compromise is a workable strategy to achieve a form of selfhood that can responsibly be lived with.
Statistical matching offers a way to broaden the scope of analysis without the increased respondent burden and costs that would result from conducting a new survey or adding variables to an existing one. Statistical matching aims at combining two datasets A and B referring to the same target population in order to jointly analyse variables, say Y and Z, that were not initially observed together. The matching is performed based on matching variables X that correspond to common variables present in both datasets A and B. Furthermore, Y is only observed in B and Z is only observed in A. To overcome the fact that no joint information on X, Y and Z is available, statistical matching procedures have to rely on suitable assumptions. To provide a theoretical foundation for statistical matching, most procedures rely on the conditional independence assumption (CIA), i.e. given X, Y is independent of Z.
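As a hedged illustration of the basic matching step (a minimal sketch, not the thesis's own implementation), the following Python example matches two synthetic files A and B on a single common variable X via distance hot-deck matching. All data, sizes and names are invented for the example, and the resulting matched file is only trustworthy under the CIA.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: X is observed in both files,
# Y only in file B, Z only in file A (all values synthetic).
n_a, n_b = 500, 500
x_a = rng.normal(size=n_a)
z_a = 2.0 * x_a + rng.normal(scale=0.5, size=n_a)   # Z observed in A
x_b = rng.normal(size=n_b)
y_b = -1.0 * x_b + rng.normal(scale=0.5, size=n_b)  # Y observed in B

def hot_deck_match(x_recipient, x_donor, y_donor):
    """For each recipient record, donate Y from the donor record
    whose matching variable X is closest (distance hot deck)."""
    idx = np.abs(x_recipient[:, None] - x_donor[None, :]).argmin(axis=1)
    return y_donor[idx]

y_imputed = hot_deck_match(x_a, x_b, y_b)

# Here Y and Z are related only through X, so the CIA holds by
# construction and the imputed Y-Z correlation is close to the
# true one implied by the two regressions.
print(np.corrcoef(y_imputed, z_a)[0, 1])
```

When the CIA fails (i.e. Y and Z are dependent even given X), this procedure still produces a matched file, but the induced Y-Z association is silently wrong; that is exactly the untestable-assumption problem the thesis addresses.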
The goal of this thesis is to encompass both the statistical matching process and the analysis of the matched dataset. More specifically, the aim is to estimate a linear regression model for Z given Y and possibly other covariates in data A. Since the validity of the assumptions underlying the matching process determines the validity of the obtained matched file, the accuracy of statistical inference is determined by the suitability of the assumptions. By putting the focus on these assumptions, this work proposes a systematic categorisation of approaches to statistical matching by relying on graphical representations in the form of directed acyclic graphs. These graphs are particularly useful in representing dependencies and independencies, which are at the heart of the statistical matching problem. The proposed categorisation distinguishes between (a) joint modelling of the matching and the analysis (integrated approach), and (b) matching subsequently followed by statistical analysis of the matched dataset (classical approach). Whereas the classical approach relies on the CIA, implementations of the integrated approach are only valid if they converge, i.e. if the specified models are identifiable and, in the case of MCMC implementations, if the algorithm converges to a proper distribution.
In this thesis an implementation of the integrated approach is proposed, in which the imputation step and the estimation step are jointly modelled through a fully Bayesian MCMC estimation. It is based on a linear regression model for Z given Y and accounts for both a linear regression model and a random effects model for Y. Furthermore, it is valid when the instrumental variable assumption (IVA) holds. The IVA corresponds to: (a) Z is independent of a subset X' of X given Y and X*, where X* = X\X', and (b) Y is correlated with X' given X*. The proof that the joint Bayesian modelling of both the model for Z and the model for Y through an MCMC simulation converges to a proper distribution is provided in this thesis. In a first model-based simulation study, the proposed integrated Bayesian procedure is assessed with regard to the data situation, convergence issues, and underlying assumptions. Special interest lies in the investigation of the interplay of the Y and the Z model within the imputation process. It turns out that failure scenarios can be distinguished by comparing the CIA and the IVA in the completely observed dataset.
Finally, both approaches to statistical matching, i.e. the classical approach and the integrated approach, are subject to an extensive comparison in (1) a model-based simulation study and (2) a simulation study based on the AMELIA dataset, which is an openly available, very large synthetic dataset that is, by construction, similar to the EU-SILC survey. As an additional integrated approach, a Bayesian additive regression trees (BART) model is considered for modelling Y. These integrated procedures are compared to the classical approach, represented by predictive mean matching in the form of multiple imputation by chained equations. Suitably chosen, the first simulation framework offers the possibility to clarify aspects related to the underlying assumptions by comparing the IVA and the CIA and by evaluating the impact of the matching variables. Thus, within this simulation study two related aspects are of special interest: the assumptions underlying each method and the incorporation of additional matching variables. The simulation on the AMELIA dataset offers a close-to-reality framework with the advantage of knowing the whole setting, i.e. the whole data X, Y and Z. Special interest lies in investigating assumptions by adding and excluding auxiliary variables in order to enhance conditional independence and to assess the sensitivity of the methods to this issue. Furthermore, the benefit of having an overlap of units in data A and B for which information on X, Y and Z is available is investigated. It turns out that the integrated approach yields better results than the classical approach when the CIA clearly does not hold. Moreover, even when the classical approach obtains unbiased results for the regression coefficient of Y in the model for Z, it is the method relying on BART that performs best across all coefficients.
In conclusion, this work constitutes a major contribution to the clarification of assumptions essential to any statistical matching procedure. By introducing graphical models to characterise existing approaches to statistical matching, combined with the subsequent analysis of the matched dataset, it offers an extensive overview, categorisation and extension of theory and application. Furthermore, in a setting where none of the assumptions are testable (since X, Y and Z are not observed together), the integrated approach is a valuable asset, offering an alternative to the CIA.
Mankind has dramatically influenced the nitrogen (N) fluxes between soil, vegetation, water and atmosphere: the global N cycle. Increasing intensification of agricultural land use, caused by the growing demand for agricultural products, has had major impacts on ecosystems worldwide. In particular, nitrogenous gases such as ammonia (NH3) have increased, mainly due to industrial livestock farming. Countries with high N deposition rates require a variety of deposition measurements and effective N monitoring networks to assess N loads. Due to high costs, current "conventional" deposition measurement stations are not widespread and therefore provide only a patchy picture of the real extent of the prevailing N deposition status over large areas. One tool that allows quantification of the exposure and the effects of atmospheric N impacts on an ecosystem is the use of bioindicators. Due to their specific physiology and ecology, lichens and mosses in particular are suitable to reflect the atmospheric N input at the ecosystem level. The present doctoral project began by investigating the general ability of epiphytic lichens to qualify and quantify N deposition by analysing total N and δ15N in lichens along a gradient of different N emission sources and severities. The results showed that this was a viable monitoring method, and a grid-based monitoring system with nitrophytic lichens was set up in the western part of Germany. Finally, a critical appraisal of three different monitoring techniques (lichens, mosses and tree bark) was carried out to compare them with nationally relevant N deposition assessment programmes. In total, 1057 lichen samples, 348 tree bark samples, 153 moss samples and 24 deposition water samples were analysed in this dissertation at different investigation scales in Germany. The study identified the species-specific ability and tolerance of various epiphytic lichens to accumulate N.
Samples of tree bark were also collected, and their N accumulation ability was detected in connection with the increased intensity of agriculture and the presence of reduced N compounds (NHx) in the atmosphere. Nitrophytic lichens (Xanthoria parietina, Physcia spp.) have the strongest correlations with high agriculture-related N deposition. In addition, the main N sources were revealed with the help of δ15N values along a gradient of altitude and areas affected by different types of land use (NH3 density classes, livestock units and various deposition types). Furthermore, the first nationwide survey of Germany comparing lichens, mosses and tree bark samples as biomonitors for N deposition revealed that lichens are clearly the most meaningful monitor organisms in highly N-affected regions. Additionally, the study shows that dealing with different biomonitors is a difficult task due to their variety of N responses. The specific receptor surfaces of the indicators, and therefore their different strategies of N uptake, are responsible for the tissue N concentration of each organism group. It was also shown that the δ15N values depend on their N origin and the specific N transformations in each organism system, so that a direct comparison between atmosphere and ecosystems is not possible. In conclusion, biomonitors, and especially epiphytic lichens, may serve as alternatives to obtain a spatially representative picture of the N deposition conditions. Furthermore, bioindication with lichens is a cost-efficient alternative to physico-chemical measurements for comprehensively assessing the different prevailing N doses and sources of N pools on a regional scale. They can at least support on-site deposition instruments through the qualification and quantification of N deposition.
Time series archives of remotely sensed data offer many possibilities to observe and analyse dynamic environmental processes at the Earth's surface. Based on these hypertemporal archives, which offer continuous observations of vegetation indices, typically at repetition rates of one to two weeks, sets of phenological parameters or metrics can be derived. Examples of such parameters are the beginning and end of the annual growing period, as well as its length. Even though these parameters do not correspond exactly to conventional observations of phenological events, they nevertheless provide indications of the dynamic processes occurring in the biosphere. The development of robust algorithms for the derivation of phenological metrics can be challenging. Currently, such algorithms are most commonly based on digital filters or on Fourier analysis of time series. Polynomial spline models offer a useful alternative to existing methods. The possibilities of using spline models in the analytical description of time series are numerous, and their specific mathematical properties may help to avoid known problems of the more common methods for deriving phenological metrics. Based on a selection of polynomial spline models suitable for the analysis of remotely sensed time series of vegetation indices, a method to derive various phenological parameters from such time series was developed and implemented in this work. Using an example data set from an intensively used agricultural area showing highly dynamic variations in vegetation phenology, the newly developed method was verified by comparing the results of the spline-based approach with those of two alternative, well-established methods.
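A minimal sketch of the spline-based idea follows, using a synthetic NDVI curve and an illustrative 50%-of-amplitude threshold; the thesis's actual spline models, smoothing choices and metric definitions may differ.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical one-year NDVI series sampled every 8 days (values synthetic).
doy = np.arange(1, 366, 8)                       # day of year
ndvi = 0.2 + 0.5 * np.exp(-((doy - 180) / 60) ** 2)
ndvi += np.random.default_rng(1).normal(scale=0.02, size=doy.size)  # sensor noise

# Smoothing spline as an analytical description of the time series;
# the smoothing factor s is an illustrative choice, not a fixed rule.
spline = UnivariateSpline(doy, ndvi, k=3, s=0.02)

dense = np.arange(1, 366)
fit = spline(dense)

# Phenological metrics via a simple amplitude threshold (50% here):
amp_threshold = fit.min() + 0.5 * (fit.max() - fit.min())
above = dense[fit >= amp_threshold]
start_of_season, end_of_season = above.min(), above.max()
season_length = end_of_season - start_of_season
print(start_of_season, end_of_season, season_length)
```

Because the fitted spline is a piecewise polynomial, quantities such as the timing of the maximum or the steepest green-up can also be read off analytically from its derivative, which is one of the properties that makes splines attractive here.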
The subject of this thesis is a homological approach to the splitting theory of PLS-spaces, i.e. to the question for which topologically exact short sequences 0 -> X -> Y -> Z -> 0 of PLS-spaces X, Y, Z the right-hand map admits a right inverse. We show that the category (PLS) of PLS-spaces and continuous linear maps is an additive category in which every morphism admits a kernel and a cokernel, i.e. it is pre-abelian. However, we also show that it is neither quasi-abelian nor semi-abelian. As a foundation for our homological constructions we show the more general result that every pre-abelian category admits a largest exact structure in the sense of Quillen. In the pre-abelian category (PLS) this exact structure consists precisely of the topologically exact short sequences of PLS-spaces. Using a construction of Ext-functors due to Yoneda, we show that one can define, for each PLS-space A and every natural number k, the k-th abelian-group-valued covariant and contravariant Ext-functors acting on the category (PLS) of PLS-spaces, which induce for every topologically exact short sequence of PLS-spaces a long exact sequence of abelian groups and group morphisms. These functors are studied in detail and we establish a connection between the Ext-functors of PLS-spaces and the Ext-functors for LS-spaces. Through this connection we arrive at an analogue of a result for Fréchet spaces which connects the first derived functor of the projective limit with the first Ext-functor and also gives sufficient conditions for the vanishing of the higher Ext-functors. Finally, we show that Ext^k(E, F) = 0 for all k greater than or equal to 1 whenever E is a closed subspace and F is a Hausdorff quotient of the space of distributions, which generalizes a result of Wengenroth that is itself a generalization of results due to Domanski and Vogt.
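For orientation, the induced long exact sequence for the contravariant functors has the usual homological shape, sketched here in standard notation (writing L(-, A) for the abelian groups of continuous linear maps; this is the generic form such a sequence takes, not a quotation from the thesis):

```latex
0 \longrightarrow L(Z,A) \longrightarrow L(Y,A) \longrightarrow L(X,A)
  \longrightarrow \operatorname{Ext}^{1}(Z,A) \longrightarrow \operatorname{Ext}^{1}(Y,A)
  \longrightarrow \operatorname{Ext}^{1}(X,A)
  \longrightarrow \operatorname{Ext}^{2}(Z,A) \longrightarrow \cdots
```

The splitting question then becomes concrete: the right-hand map of a sequence admits a right inverse precisely when the sequence's class vanishes in the first Ext-group.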
The goal of this thesis is to transfer the logarithmic barrier approach, which has led to very efficient interior-point methods for convex optimization problems in recent years, to convex semi-infinite programming problems. Based on a reformulation of the constraints into a nondifferentiable form, this can be done directly for convex semi-infinite programming problems with nonempty compact sets of optimal solutions. However, by means of an involved max-term, this reformulation leads to nondifferentiable barrier problems, which can be solved with an extension of a bundle method of Kiwiel. This extension allows one to deal with the inexact objective values and subgradient information that occur due to the inexact evaluation of the maxima. Nevertheless, we are able to prove convergence results similar to those for the logarithmic barrier approach in finite optimization. In the further course of the thesis, the logarithmic barrier approach is coupled with the proximal point regularization technique in order to solve ill-posed convex semi-infinite programming problems as well. Moreover, this coupled algorithm generates sequences converging to an optimal solution of the given semi-infinite problem, whereas the pure logarithmic barrier approach only produces sequences whose accumulation points are such optimal solutions. If certain additional conditions are fulfilled, we are further able to prove convergence rate results, up to linear convergence of the iterates. Finally, besides hints for the implementation of the methods, we present numerous numerical results for model examples as well as applications in finance and digital filter design.
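The core mechanism of the logarithmic barrier approach can be illustrated on a toy finite convex program; this sketch deliberately omits the semi-infinite constraints, the nondifferentiable max-term reformulation and the bundle method that the thesis actually treats.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy finite convex program (illustrative only):
#   minimize x^2  subject to  x >= 1      (optimal solution x* = 1)
f = lambda x: x ** 2
constraint_slack = lambda x: x - 1.0      # must stay strictly positive

def barrier_solve(mu):
    """Minimize the log-barrier function f(x) - mu * log(slack(x))."""
    phi = lambda x: f(x) - mu * np.log(constraint_slack(x))
    res = minimize_scalar(phi, bounds=(1.0 + 1e-12, 10.0), method="bounded")
    return res.x

# Drive the barrier parameter mu to zero; the barrier minimizers form
# a path in the interior of the feasible set that approaches x*.
x = None
for mu in [1.0, 0.1, 0.01, 0.001, 1e-5]:
    x = barrier_solve(mu)
print(x)
```

Analytically, the barrier minimizer here satisfies 2x(x - 1) = mu, so x is roughly 1 + mu/2 for small mu, which is why shrinking mu recovers the constrained optimum from the interior.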
A matrix A is called completely positive if there exists an entrywise nonnegative matrix B such that A = BB^T. These matrices can be used to obtain convex reformulations of, for example, nonconvex quadratic or combinatorial problems. One of the main problems with completely positive matrices is checking whether a given matrix is completely positive. This is known to be NP-hard in general. For a given completely positive matrix A, it is nontrivial to find a cp-factorization A = BB^T with nonnegative B, since this factorization would provide a certificate for the matrix to be completely positive. But this factorization is not only important for the membership of the completely positive cone; it can also be used to recover the solution of the underlying quadratic or combinatorial problem. In addition, it is not a priori known how many columns are necessary to generate a cp-factorization for the given matrix. The minimal possible number of columns is called the cp-rank of A, and so far it is still an open question how to derive the cp-rank for a given matrix. Some facts on completely positive matrices and the cp-rank will be given in Chapter 2. Moreover, in Chapter 6, we will see a factorization algorithm which, for a given completely positive matrix A and a suitable starting point, computes the nonnegative factorization A = BB^T. The algorithm therefore returns a certificate for the matrix to be completely positive. As introduced in Chapter 3, the fundamental idea of the factorization algorithm is to start from an initial square factorization which is not necessarily entrywise nonnegative, and to extend this factorization to a matrix for which the number of columns is greater than or equal to the cp-rank of A. The goal is then to transform this generated factorization into a cp-factorization.
This problem can be formulated as a nonconvex feasibility problem, as shown in Section 4.1, and solved by a method based on alternating projections, as proven in Chapter 6. On the topic of alternating projections, a survey will be given in Chapter 5. Here we will see how to apply this technique to several types of sets such as subspaces, convex sets, manifolds and semialgebraic sets. Furthermore, we will see some known facts on the convergence rate of alternating projections between these types of sets. Considering more than two sets yields the so-called cyclic projections approach, for which some known facts for subspaces and convex sets will be shown. Moreover, we will see a new convergence result on cyclic projections among a sequence of manifolds in Section 5.4. In the context of cp-factorizations, a local convergence result for the introduced algorithm will be given. This result is based on the known convergence of alternating projections between semialgebraic sets. To obtain cp-factorizations with this first method, it is necessary to solve a second-order cone problem in every projection step, which is very costly. Therefore, in Section 6.2, we will see an additional heuristic extension, which improves the numerical performance of the algorithm. Extensive numerical tests in Chapter 7 will show that the factorization method is very fast in most instances. In addition, we will see how to derive a certificate for the matrix to be an element of the interior of the completely positive cone. As a further application, this method can be extended to find a symmetric nonnegative matrix factorization, where we consider an additional low-rank constraint. Here again, the method to derive factorizations for completely positive matrices can be used, albeit with some further adjustments, introduced in Section 8.1.
Moreover, we will see that even for the general case of deriving a nonnegative matrix factorization for a given rectangular matrix A, the key aspects of the completely positive factorization approach can be used. To this end, it becomes necessary to extend the idea of finding a completely positive factorization such that it can be used for rectangular matrices. This yields an applicable algorithm for nonnegative matrix factorization in Section 8.2. Numerical results for this approach will suggest that the presented algorithms and techniques to obtain completely positive matrix factorizations can be extended to general nonnegative factorization problems.
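The alternating projections technique at the core of these factorization methods can be illustrated on two simple convex sets; this is a hypothetical toy example, far simpler than the semialgebraic sets actually used for cp-factorizations.

```python
import numpy as np

# Alternating projections between two convex sets in R^2 (illustrative):
#   C1 = the line {x + y = 2},  C2 = the nonnegative orthant.
# Their intersection is the segment between (0, 2) and (2, 0).

def project_line(p):
    """Orthogonal projection onto the line {x + y = 2}."""
    n = np.array([1.0, 1.0])
    return p - ((p @ n - 2.0) / (n @ n)) * n

def project_orthant(p):
    """Projection onto {x >= 0, y >= 0} (componentwise clipping)."""
    return np.maximum(p, 0.0)

p = np.array([3.0, -4.0])          # arbitrary infeasible starting point
for _ in range(100):
    p = project_orthant(project_line(p))

# The iterates converge to a point in the intersection, here (2, 0).
print(p)
```

For convex sets this convergence is classical; the interest of the thesis lies in the much harder nonconvex setting, where only local convergence results of this flavor are available.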
This dissertation deals with a novel type of branch-and-bound algorithm, which differs from classical branch-and-bound algorithms in that branching is performed by adding nonnegative penalty terms to the objective function instead of adding further constraints. The thesis proves the theoretical correctness of the algorithmic principle for several general classes of problems and evaluates the method on several concrete problem classes. For these problem classes, namely monotone and non-monotone mixed-integer linear complementarity problems and mixed-integer linear problems, the thesis presents various problem-specific improvements and evaluates them numerically. Furthermore, the thesis compares the new method with several benchmark methods, with largely good results, and gives an outlook on further fields of application and open research questions.
Two areas were selected to represent major process regimes of Mediterranean rangelands. In the county of Lagadas (Greece), situated east of the city of Thessaloniki, livestock grazing with sheep and goats is a major factor of the rural economy. In suitable areas, it is complemented by agricultural use. The region of Ayora (Spain) is located west of the city of Valencia and is one of the regions most affected by fires in Spain. First, long time series of satellite data were compiled for both regions on the basis of Landsat sensors, reaching back to 1976 (Ayora) and 1984 (Lagadas) with one image per year. Using a rigorous processing scheme, the data were geometrically and radiometrically corrected. Specific attention was given to an exact sensor calibration and the radiometric intercalibration of Landsat-TM and -MSS. Proportional cover of photosynthetically active vegetation was identified as a suitable quantitative indicator for assessing the state of rangelands, and it was inferred for all data sets using Spectral Mixture Analysis (SMA). The extensive database procured this way made it possible to map fire events in the Ayora area based on sequential diachronic sets and to provide fire dates, perimeters and fire recurrence for each pixel. The increasing fire frequency in the past decades is in large part attributed to the accelerated abandonment of the area, which leads to an encroachment of shrublands and the accumulation of combustible biomass. On the basis of the fire mapping results, a spatial and temporal stratification of the data set allowed an assessment of plant recovery dynamics at the landscape level through linear trend analysis. The long history of fire events in the Mediterranean frequently leads to processes of auto-succession: following an initial dominance of herbaceous vegetation, this commonly results in plant communities similar to the ones present before the fire.
On a temporal axis, this results in typical exponential post-fire trajectories, which could also be shown in this study. The analysis of driving factors for post-fire dynamics confirmed the importance of aspect and slope. Locations with lower amounts of solar irradiation and favourable water supply yielded faster recovery rates and higher post-fire vegetation cover levels. In most cases, the vegetation cover levels observed before the fire were not reached within the post-fire observation period. In the area of Lagadas, linear trend analysis and additional statistical parameters were used to infer a degradation index, which could be used to illustrate a complex pattern of stability, regeneration and degradation of vegetation cover. These different processes and states are found in close proximity and are clearly determined by topography and elevation. A sequence of analyses showed that, in particular, steep, narrow valleys exhibit positive trends, while negative trends are more abundant in plain or gently undulating areas. Considering the local grazing regime, this spatial differentiation was related to the accessibility of specific locations. Subsequently, animal numbers at the community level were used to calculate efficient stocking rates and to assess the temporal development of their relation with vegetation cover. This calculation of temporal trajectories illustrated that only some communities show the expected negative relation. On the contrary, a positive relation or even changing relation patterns are observed. This signifies recent concentration and intensification processes in the grazing scheme, as a result of which animals are kept in sheds, where additional feedstuffs are provided. In these cases, free roaming of livestock animals is often confined to a few hours every day, which explains the shepherds' spatial preference for easily accessible areas.
Beyond these temporal trends, it was analysed whether the grazing pattern is equally reflected in a spatial trend. Making use of available geospatial information layers, the effort required to reach each location was expressed as a cost. Cost zones could then be defined, and woody vegetation cover as a grazing indicator could be inferred for the different zones. Animal sheds, which could be mapped from very high spatial resolution Quickbird image data, were employed as starting features for this piospheric analysis. The result was a clearly structured gradient showing increasing woody vegetation cover with increasing cost distance. On the basis of these two pilot studies, the elements of a monitoring and interpretation framework identified at the beginning of the work were evaluated and a formal interpretation scheme was presented.
The following dissertation contains three studies examining academic boredom development in five high-track German secondary schools (AVG-project data; Study 1: N = 1,432; Study 2: N = 1,861; Study 3: N = 1,428). The investigation period spanned 3.5 years, with four waves of measurement from grades 5 to 8 (T1: 5th grade, after transition to secondary school; T2: 5th grade, after mid-term evaluations; T3: 6th grade, after mid-term evaluations; T4: 8th grade, after mid-term evaluations). All three studies featured cross-sectional and longitudinal analyses, separating and comparing the subject domains of mathematics and German.
Study 1 provided an investigation of academic boredom's factorial structure alongside correlational and reciprocal relations between different forms of boredom and academic self-concept. Analyses included reciprocal effects models and latent correlation analyses. Results indicated the separability of boredom intensity, boredom due to underchallenge and boredom due to overchallenge as distinct, correlated factors. Evidence for reciprocal relations between boredom and academic self-concept was limited.
Study 2 examined the effectiveness and efficacy of full-time ability grouping as a boredom intervention directed at the intellectually gifted. Analyses included propensity score matching and latent growth curve modelling. Results pointed to limited effectiveness and efficacy of full-time ability grouping regarding boredom reduction.
Study 3 explored gender differences in academic boredom development, mediated by academic interest, academic self-concept, and previous academic achievement. Analyses included measurement invariance testing and multiple-indicator multiple-cause (MIMIC) models. Results showed one-sided gender differences, with boys reporting less favorable boredom development compared to girls, even beyond the inclusion of relevant mediators.
Findings from all three studies were embedded into the theoretical framework of control-value theory (Pekrun, 2006; 2019; Pekrun et al., 2023). Limitations, directions for future research, and practical implications were acknowledged and discussed.
Overall, this dissertation yielded important insights into boredom’s conceptual complexity. This concerned factorial structure, developmental trajectories, interrelations to other learning variables, individual differences, and domain specificities.
Keywords: Academic boredom, boredom intensity, boredom due to underchallenge, boredom due to overchallenge, ability grouping, gender differences, longitudinal data analysis, control-value theory
Energy transport networks are one of the most important infrastructures for the planned energy transition. They form the interface between energy producers and consumers, and their features make them good candidates for the tools that mathematical optimization can offer. Nevertheless, the operation of energy networks comes with two major challenges. First, the nonconvexity of the equations that model the physics in the network renders the resulting problems extremely hard to solve for large-scale networks. Second, the uncertainty associated with the behavior of the different agents involved, the production of energy, and the consumption of energy makes the resulting problems hard to solve if a representative description of uncertainty is to be considered.
In this cumulative dissertation we study adaptive refinement algorithms designed to cope with the nonconvexity and stochasticity of equations arising in energy networks. Adaptive refinement algorithms approximate the original problem by sequentially refining the model of a simpler optimization problem. More specifically, in this thesis, the focus of the adaptive algorithm is on adapting the discretization and description of a set of constraints.
In the first part of this thesis, we propose a generalization of the different adaptive refinement ideas that we study. We sequentially describe model catalogs, error measures, marking strategies, and switching strategies that are used to set up the adaptive refinement algorithm. Afterward, the effect of the adaptive refinement algorithm on two energy network applications is studied. The first application treats the stationary operation of district heating networks. Here, the strength of adaptive refinement algorithms for approximating the ordinary differential equation that describes the transport of energy is highlighted. We introduce the resulting nonlinear problem, consider network expansion, and obtain realistic controls by applying the adaptive refinement algorithm. The second application concerns quantile-constrained optimization problems and highlights the ability of the adaptive refinement algorithm to cope with large scenario sets via clustering. We introduce the resulting mixed-integer linear problem, discuss generic solution techniques, make the link with the generalized framework, and measure the impact of the proposed solution techniques.
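A hedged sketch of the generic adaptive refinement loop follows, with the model catalog, error measure and marking strategy reduced to their simplest instances (the constraint function, tolerance and refinement rule are illustrative, not the thesis's actual models):

```python
import numpy as np

# Approximate a nonlinear constraint y = f(x) on [0, 4] by a piecewise-
# linear interpolant, refining only where the model error is largest.
f = lambda x: np.sqrt(x)          # stand-in for, e.g., a pressure-loss law

breakpoints = [0.0, 4.0]          # coarse initial discretization

def max_error_interval(bp):
    """Error measure: return (error, index) of the interval whose
    midpoint deviates most from the linear model."""
    errs = []
    for i in range(len(bp) - 1):
        mid = 0.5 * (bp[i] + bp[i + 1])
        interp = 0.5 * (f(bp[i]) + f(bp[i + 1]))   # linear model at midpoint
        errs.append(abs(f(mid) - interp))
    i = int(np.argmax(errs))
    return errs[i], i

# Marking strategy: bisect the single worst interval until the error
# measure drops below a tolerance (switching strategies are omitted).
tol = 1e-3
err, i = max_error_interval(breakpoints)
while err > tol:
    breakpoints.insert(i + 1, 0.5 * (breakpoints[i] + breakpoints[i + 1]))
    err, i = max_error_interval(breakpoints)

print(len(breakpoints), err)
```

The payoff of such a loop is that the discretization is fine only where the physics demands it, which keeps the refined optimization problems tractable for large networks.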
The second part of this thesis assembles the papers that inspired the contents of the first part of this thesis. Hence, they describe in detail the topics that are covered and will be referenced throughout the first part.
Water-deficit stress, usually shortened to water stress or drought stress, is one of the most critical abiotic stressors limiting plant growth, crop yield, and quality in food production. Today, agriculture consumes about 80-90% of the global freshwater used by humans, and about two thirds of this is used for crop irrigation. An increasing world population and a predicted rise of 1.0-2.5 °C in the annual mean global temperature as a result of climate change will further increase the demand for water in agriculture. Therefore, one of the most challenging tasks of our generation is to reduce the amount of water used per unit yield in order to satisfy the second UN Sustainable Development Goal and to ensure global food security. Precision agriculture offers new farming methods with the goal of improving the efficiency of crop production through a sustainable use of resources. Plant responses to water stress are complex and co-occur with other environmental stresses under natural conditions. In general, water stress causes physiological and biochemical changes in the plant that depend on the severity and the duration of the actual water deficit. Stomatal closure is one of the first responses to water stress, causing a decrease in plant transpiration and thus an increase in plant temperature. Prolonged or severe water stress leads to irreversible damage to the photosynthetic machinery and is associated with decreasing chlorophyll content and leaf structural changes (e.g., leaf rolling). Since a crop can already be irreversibly damaged by only mild water deficit, pre-visual detection of water stress symptoms is essential to avoid yield loss. Remote sensing offers a non-destructive, spatio-temporal method for measuring numerous physiological, biochemical, and structural crop characteristics at different scales and is thus one of the key technologies used in precision agriculture.
With respect to the detection of plant responses to water stress, the current state-of-the-art hyperspectral remote sensing imaging techniques are based on measurements of thermal infrared emission (TIR; 8-14 µm), visible, near- and shortwave infrared reflectance (VNIR/SWIR; 0.4-2.5 µm), and sun-induced fluorescence (SIF; 0.69 and 0.76 µm). It is, however, still unclear how sensitive these techniques are with respect to water stress detection. Therefore, the overall aim of this dissertation was to provide a comparative assessment of remotely sensed measures from the TIR, SIF, and VNIR/SWIR domains for their ability to detect plant responses to water stress at ground and airborne level. The main findings of this thesis are: (i) temperature-based indices (e.g., CWSI) were most sensitive for the detection of plant water stress in comparison to reflectance-based VNIR/SWIR indices (e.g., PRI) and SIF at both ground and airborne level; (ii) for the first time, spectral emissivity as measured by the new hyperspectral TIR instrument could be used to detect plant water stress at ground level. Based on these findings it can be stated that hyperspectral TIR remote sensing offers great potential for the detection of plant responses to water stress at ground and airborne level based on both TIR key variables, surface temperature and spectral emissivity. However, the large-scale application of water stress detection based on hyperspectral TIR measures in precision agriculture faces several challenges: (i) missing thresholds of temperature-based indices (e.g., CWSI) for application in irrigation scheduling, (ii) the lack of current TIR satellite missions with suitable spectral and spatial resolution, and (iii) the lack of appropriate data processing schemes (including atmospheric correction and temperature-emissivity separation) for hyperspectral TIR remote sensing at airborne and satellite level.
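The temperature-based CWSI mentioned above is commonly computed by normalizing canopy temperature between a wet (non-stressed) and a dry (non-transpiring) reference temperature. A minimal sketch of this standard formulation, with illustrative variable names (the thesis's own processing chain is not reproduced here):

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index: 0 = well-watered, 1 = maximally stressed.

    t_canopy: measured canopy temperature
    t_wet:    wet (fully transpiring) reference temperature
    t_dry:    dry (non-transpiring) reference temperature
    """
    if t_dry <= t_wet:
        raise ValueError("dry reference must exceed wet reference")
    return (t_canopy - t_wet) / (t_dry - t_wet)
```

A canopy at the wet baseline yields 0, one at the dry baseline yields 1; irrigation-scheduling thresholds in between are exactly the missing piece noted in (i) above.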
Death is perceived as a severe threat to the self. Although it is certain that everyone has to die, people usually do not think about the finiteness of their life. Everything that reminds them of death is ignored or rationalized, and death-related thoughts and fears are pushed out of mind (TMT; Pyszczynski et al., 1999). However, people differ in their ability to regulate negative affect and to access their self-system (Kuhl, 2001). As death is assumed to arouse existential fears, the ability to regulate such fears is particularly important; in addition, higher self-access could be relevant for defending central personal values. This thesis aimed to demonstrate existential fears under mortality salience and the effects of self-regulation of affect under mortality salience. Two studies (Chapter 2) demonstrated implicit negative affect under mortality salience. An additional study (Chapter 3) shows the effects of self-regulation on implicit negative affect, whereas four studies in Chapter 4 display differences in self-access under mortality salience depending on people's ability to self-regulate negative affect.
This work addresses the algorithmic tractability of hard combinatorial problems. Essentially, we consider \NP-hard problems, for which no polynomial time algorithm is known. Several algorithmic approaches already exist which deal with this dilemma. Among them we find (randomized) approximation algorithms and heuristics. Even though in practice they often finish in reasonable time, they usually do not return an optimal solution. If we insist on optimality, only two methods remain for this purpose: exponential time algorithms and parameterized algorithms. In the first approach we seek to design algorithms that take exponentially many steps but are cleverer than some trivial algorithm (which simply enumerates all solution candidates). Typically, the naive enumerative approach yields an algorithm with run time $\Oh^*(2^n)$. So, the general task is to construct algorithms obeying a run time of the form $\Oh^*(c^n)$ where $c<2$. The second approach considers an additional parameter $k$ besides the input size $n$. This parameter should provide more information about the problem and capture a typical characteristic. The standard parameterization is to see $k$ as an upper (lower, resp.) bound on the solution size in case of a minimization (maximization, resp.) problem. A parameterized algorithm should then solve the problem in time $f(k)\cdot n^\beta$ where $\beta$ is a constant and $f$ is independent of $n$. In principle this method aims to confine the combinatorial difficulty of the problem to the parameter $k$ (if possible). The basic hypothesis is that $k$ is small with respect to the overall input size. In both fields a frequent standard technique is the design of branching algorithms. These algorithms solve the problem by traversing the solution space in a clever way.
They frequently select an entity of the input and create two new subproblems: one where this entity is considered part of the future solution and another where it is excluded from it. In both cases, fixing this entity may in turn fix other entities. If so, the number of solutions traversed is smaller than the whole solution space. The visited solutions can be arranged like a search tree. To estimate the run time of such algorithms, a method is needed to obtain tight upper bounds on the size of the search trees. In the field of exponential time algorithms a powerful technique called Measure&Conquer has been developed for this purpose. It has been applied successfully to many problems, especially to problems where other algorithmic attacks could not break the trivial run time upper bound. In the field of parameterized algorithms, on the other hand, Measure&Conquer is almost unknown. This work presents examples where the technique can be used in this field and points out which adaptations are necessary in order to apply it successfully. Further, exponential time algorithms for hard problems where Measure&Conquer is applied are presented. Another contribution is a formalization (and generalization) of the notion of a search tree. It is shown that for certain problems such a formalization is extremely useful.
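A minimal instance of the branching technique described above is the classical search-tree algorithm for Vertex Cover: pick an uncovered edge, and branch on which endpoint joins the cover. The sketch below illustrates the two-subproblem branching and the $f(k)$-bounded search tree (here of size $O(2^k)$); it does not perform the Measure&Conquer analysis itself.

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by its edge list has a vertex cover
    of size <= k. For any edge (u, v), at least one endpoint must be in
    every cover, so we branch into two subproblems, each with budget k-1."""
    if not edges:            # all edges covered
        return True
    if k == 0:               # budget exhausted but edges remain
        return False
    u, v = edges[0]          # pick an uncovered edge
    rest_u = [e for e in edges if u not in e]   # take u into the cover
    rest_v = [e for e in edges if v not in e]   # take v into the cover
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)
```

For example, a triangle needs two vertices in any cover, so the call with k=1 fails while k=2 succeeds.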
The publication of statistical databases is subject to legal regulations, e.g. national statistical offices are only allowed to publish data if the data cannot be attributed to individuals. Achieving this privacy standard requires anonymizing the data prior to publication. However, data anonymization inevitably leads to a loss of information, which should be kept minimal. In this thesis, we analyze the anonymization method SAFE used in the German census in 2011 and we propose a novel integer programming-based anonymization method for nominal data.
In the first part of this thesis, we prove that a fundamental variant of the underlying SAFE optimization problem is NP-hard, which justifies the use of heuristic approaches for large data sets. In the second part, we propose a new anonymization method belonging to the class of microaggregation methods, specifically designed for nominal data. This microaggregation method replaces rows in a microdata set with representative values to achieve k-anonymity, ensuring that each data row is identical to at least k − 1 other rows. In addition to the overall dissimilarities of the data rows, the method accounts for errors in the resulting frequency tables, which are of high practical interest for nominal data. The method employs a typical two-step structure: it first partitions the data set into clusters and subsequently replaces all cluster elements with representative values to achieve k-anonymity. For the partitioning step, we propose a column generation scheme followed by a heuristic based on dual information to obtain an integer solution. For the aggregation step, we present a mixed-integer problem formulation to find cluster representatives, taking errors in a subset of frequency tables into account. Furthermore, we show a reformulation of the problem as a minimum edge-weighted maximal clique problem in a multipartite graph, which allows for a different perspective on the problem. Moreover, we formulate a mixed-integer program that combines the partitioning and aggregation steps and aims to minimize the sum of chi-squared errors in frequency tables.
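For illustration, the k-anonymity target and the aggregation step can be sketched in a few lines. The column-wise mode used as the cluster representative here is a deliberately simple stand-in for the mixed-integer formulations developed in the thesis, and all names are illustrative.

```python
from collections import Counter

def is_k_anonymous(rows, k):
    """A data set is k-anonymous if every row occurs at least k times,
    i.e. each record is indistinguishable from at least k-1 others."""
    counts = Counter(map(tuple, rows))
    return all(c >= k for c in counts.values())

def aggregate(cluster):
    """Replace every row of a cluster by one representative row
    (here: the column-wise mode over the cluster)."""
    rep = tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))
    return [rep] * len(cluster)
```

After aggregation every cluster is internally identical, so a partition into clusters of size at least k yields a k-anonymous table; the quality of the result then hinges on the partitioning and the choice of representatives, which is exactly where the optimization models above come in.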
Finally, an experimental study comparing the methods covered or developed in this work shows particularly strong results for the proposed method with respect to relative criteria, while SAFE shows its strength with respect to the maximum absolute error in frequency tables. We conclude that the inclusion of integer programming in the context of data anonymization is a promising direction to reduce the inevitable information loss inherent in anonymization, particularly for nominal data.
This dissertation addresses the question of whether, and how, intersectionality as an analytical perspective on literary texts constitutes a useful complement to ethnically ordered literary fields. This question is examined through the analysis of three contemporary Chinese-Canadian novels.
The introduction discusses the relevance of the topics of intersectionality and Asian-Canadian literature. The following chapter offers a historical overview of Chinese-Canadian immigration and examines the literary production in detail. It is shown that, although cultural goods also emerge to articulate relations of inequality based on ascribed ethnicity, a drive toward diversification can be identified within the literary community of Chinese-Canadian authors. The third chapter is devoted to the term "intersectionality" and, after historically situating the concept with its origins in Black Feminism, presents intersectionality as a binding element between postcolonialism, diversity, and empowerment, concepts that are of particular relevance for the analysis of (Canadian) literature in this dissertation. Subsequently, the role of intersectionality in literary studies is taken up. The subsequent exemplary analyses of Kim Fu's For Today I Am a Boy, Wayson Choy's The Jade Peony, and Yan Li's Lily in the Snow illustrate the preceding methodological considerations. Each of the three novels is first contextualized as Chinese-Canadian, alongside previous considerations that call this classification into question. A summary of the plot is followed by an intersectional analysis at the level of content, divided into the familial and the wider social sphere, since the mechanisms of hierarchy within these spheres differ or mutually reinforce each other, as the analyses show. The formal analysis with an intersectional focus is then examined more closely in a separate subchapter.
A third subchapter is devoted to an aspect specific to the respective novel that is of particular relevance in connection with an intersectional analysis. The thesis closes with an overarching conclusion that summarizes the most important results of the analyses, together with further reflections on the implications of this dissertation, above all with regard to so-called Canadian "master narratives", which have far-reaching contextual relevance for working with literary texts and which an intersectional literary approach may profitably complement in the future.
A huge number of clinical studies and meta-analyses have shown that psychotherapy is effective on average. However, not every patient profits from psychotherapy and some patients even deteriorate in treatment. Due to this result and the restricted generalization of clinical studies to clinical practice, a more patient-focused research strategy has emerged. The question whether a particular treatment works for an individual case is the focus of this paradigm. The use of repeated assessments and the feedback of this information to therapists is a major ingredient of patient-focused research. Improving patient outcomes and reducing dropout rates by the use of psychometric feedback seems to be a promising path. Therapists seem to differ in the degree to which they make use of and profit from such feedback systems. This dissertation aims to better understand therapist differences in the context of patient-focused research and the impact of therapists on psychotherapy. Three different studies are included, which focus on different aspects within the field:
Study I (Chapter 5) investigated how therapists use psychometric feedback in their work with patients and how much therapists differ in their usage. Data from 72 therapists treating 648 patients were analyzed. It could be shown that therapists used the psychometric feedback for most of their patients. Substantial variance in the use of feedback (between 27% and 52%) was attributable to therapists. Therapists were more likely to use feedback when they reported being satisfied with the graphical information they received. The results therefore indicated that not only patient characteristics or treatment progress affected the use of feedback.
Study II (Chapter 6) picked up on the idea of analyzing systematic differences in therapists and applied it to the criterion of premature treatment termination (dropout). To answer the question whether therapist effects occur in terms of patients’ dropout rates, data from 707 patients treated by 66 therapists were investigated. It was shown that approximately six percent of variance in dropout rates could be attributed to therapists, even when initial impairment was controlled for. Other predictors of dropout were initial impairment, sex, education, personality styles, and treatment expectations.
Study III (Chapter 7) extends the dissertation by investigating the impact of a transfer from one therapist to another within ongoing treatments. Data from 124 patients who agreed to and experienced a transfer during their treatment were analyzed. A significant drop in patient-rated as well as therapist-rated alliance levels could be observed after a transfer. On average, there seemed to be no difficulties establishing a good therapeutic alliance with the new therapist, although differences between patients were observed. There was no increase in symptom severity due to therapy transfer. Various predictors of alliance and symptom development after transfer were investigated. Impacts on clinical practice were discussed.
Results of the three studies are discussed and general conclusions are drawn. Implications for future research as well as their utility for clinical practice and decision-making are presented.
Water-related regulating and provisioning ecosystem services (ESS) were investigated with respect to the flow regime and groundwater recharge in the Pfälzerwald Biosphere Reserve in southwestern Germany by means of hydrological modeling using the Soil and Water Assessment Tool (SWAT+). A holistic approach was pursued in which the ESS are assigned indicators of functional and structural ecological processes. Potential risk factors for the deterioration of the forest's water-related ESS were analyzed with regard to their effects on hydrological processes: soil compaction caused by heavy machinery during timber harvesting; damaged areas under regeneration, whether due to silvicultural management practices or to windthrow, pests, and calamities in the course of climate change; and climate change itself as a major stressor for forest ecosystems. For each of these factors, separate SWAT+ model scenarios were created and compared with the calibrated base model, which represented the current catchment conditions based on field data. The simulations confirmed favorable conditions for groundwater recharge in the Pfälzerwald. In connection with the high infiltration capacity of the soil substrates of the weathered Bunter sandstone, and the delaying and buffering influence of the tree canopies on precipitation, a significant mitigating effect on surface runoff generation and a pronounced spatial and temporal retention potential in the catchment were simulated. It was found that increased precipitation amounts exceeding the infiltration capacity of the sandy soils lead to a short-circuited runoff response with pronounced surface runoff peaks.
The simulations revealed interactions between forest and water cycle as well as the hydrological impact of climate change, degraded soil functions, and age-related stand structures in connection with differences in canopy development. Future climate projections, simulated with bias-corrected REKLIES and EURO-CORDEX regional climate models (RCMs), predicted a higher evaporative demand and an extension of the growing season, together with more frequent drought periods within the growing season. This induced a shortening of the groundwater recharge period and consequently led to a projected decline in the groundwater recharge rate by mid-century. Owing to the strong correlation with precipitation intensities and the duration of precipitation events, and despite all uncertainties in their prediction, an increase in surface runoff generation was projected through the end of the century.
To simulate soil compaction, the soil bulk density and the SCS curve number in SWAT+ were adjusted according to data from machine-traffic experiments in the area. The favorable infiltration conditions and the relatively low susceptibility to compaction of the coarse-grained weathered Bunter sandstone dominated the hydrological effects at the catchment scale, so that only moderate deterioration of water-related ESS was indicated. The simulations further showed a clear influence of soil texture on the hydrological response after soil compaction on skid trails, supporting the assumption that the susceptibility of soils to compaction increases with the proportion of silt and clay particles. The road network caused increased surface runoff generation across the entire area.
Damaged areas with stand regeneration were simulated within a subcatchment using an artificial model, assuming three-year-old tree seedlings over a development period of 10 years, and compared with mature stands (30 to 80 years) with respect to specific water balance components. The simulation suggested that, in the absence of canopy cover, the hydrologically delaying effect of the stands is impaired, which favors the generation of surface runoff and promotes quantitatively slightly higher deep percolation. Hydrological differences between the closed canopy of mature stands and young stands with near open-field precipitation conditions were determined by the dominant factors of atmospheric evaporative demand, precipitation amounts, and degree of canopy cover. The less developed the canopy of regenerated stands compared with mature stands, the higher the atmospheric evaporative demand, and the lower the incoming precipitation, the greater the hydrological difference between the stand types.
Improvement measures for decentralized flood protection should therefore take into account critical source areas for runoff generation in forests (CSAs). The high sensitivity and vulnerability of forests to deteriorating ecosystem conditions suggest that preserving their complex fabric and intact interrelationships, particularly under the challenge of climate change, requires carefully adapted protection measures, efforts to identify CSAs, and the preservation and restoration of hydrological continuity in forest stands.
This dissertation includes three research articles on the portfolio risks of private investors. In the first article, we analyze a large data set of Swiss private banking portfolios from a major bank with the unique feature that some of the portfolios were managed by the bank while others were advisory portfolios. To account for the heterogeneity of individual investors, we apply a mixture model and a cluster analysis. Our results suggest that there is indeed a substantial group of advised individual investors that outperforms the bank-managed portfolios, at least after fees. However, a simple passive strategy that invests in the MSCI World and a risk-free asset significantly outperforms both the better advisory portfolios and the bank-managed portfolios. The new EU regulation for financial products (UCITS IV) prescribes Value at Risk (VaR) as the benchmark for assessing the risk of structured products. The second article discusses the limitations of this approach and shows that, in theory, the expected return of structured products can be unbounded while the VaR requirement for the lowest risk class is still satisfied. Real-life examples of large returns within the lowest risk class are then provided. The results demonstrate that the new regulation could lead to new, seemingly safe products that hide large risks. Behavioral investors who choose products based only on their official risk classes and their expected returns will therefore invest in suboptimal products. To overcome these limitations, we suggest a new risk-return measure for financial products based on the martingale measure that could close such loopholes. Under the mean-VaR framework, the third article discusses the impact of the underlying's first four moments on the structured product. By expanding the expected return and the VaR of a structured product in terms of its underlying's moments, it is possible to investigate each moment's impact on both quantities simultaneously.
Results are tested by Monte Carlo simulation and historical simulation. The findings show that for the majority of structured products, underlyings with large positive skewness are preferred. The preferences for variance and for kurtosis are ambiguous.
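A historical-simulation VaR of the kind tested in the articles above can be sketched as follows; the index convention used here is one common empirical-quantile choice, not the UCITS-prescribed procedure, and the sample data are illustrative.

```python
def value_at_risk(returns, alpha=0.99):
    """Historical-simulation VaR at confidence level alpha: the loss
    threshold exceeded with probability at most 1 - alpha, reported as
    a positive number. Uses the empirical alpha-quantile of losses."""
    losses = sorted(-r for r in returns)          # losses, ascending
    idx = min(len(losses) - 1, int(alpha * len(losses)))
    return losses[idx]
```

Note the asymmetry exploited by the products discussed in the second article: VaR only looks at one quantile of the loss tail, so payoffs beyond that quantile (in either direction) leave the risk class unchanged.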
The overall objective of this thesis was to gain a deeper understanding of the antecedents, processes, and manifestations of uniqueness-driven consumer behavior. To achieve this goal, five studies were conducted in Germany and Switzerland with a total of 1048 participants across different demographic and socio-economic backgrounds. Two concepts were employed in all studies: consumer need for uniqueness (CNFU) and general uniqueness perception (GUP). CNFU (Tian, Bearden, & Hunter, 2001), a mainly US-based consumer research concept, measures the individual need, and thus the motivation, to acquire, use, and dispose of consumer goods in order to develop a unique image. GUP, adapted from the two-component theory of individuality (Kampmeier, 2001), represents a global and direct measure of self-ascribed uniqueness. Study #1 looked at the interrelation of the uniqueness-driven concepts. To this end, GUP and CNFU were employed as potential psychological factors that influence and predict uniqueness-driven consumer behavior. Different behavioral measures were used: the newly developed possession of individualized products (POIP), the newly developed products for uniqueness display (PFUD), and the already established uniqueness-enhancing behaviors (UEB). Analyses showed that CNFU mediates the relationship between GUP and the behavioral measures in a German-speaking setting. Thus, GUP (representing self-perception) was identified as the driver behind CNFU (representing motivation) and the actual consumer behavior. Study #2 examined further manifestations of uniqueness-driven consumer behavior. For this purpose, an extreme form of uniqueness-increasing behavior was researched: tattooing. The influence of GUP and CNFU on tattooing behavior was investigated using a sample drawn from a tattoo exhibition. To do so, a newly developed measure of the percentage of the body covered by tattoos was employed.
It was revealed that individuals with higher GUP and CNFU levels indeed have a higher degree of tattooing. Study #3 further explored the predictive possibilities and limitations of the GUP and CNFU concepts. On the one hand, study #3 specifically looked at the consumption of customized apparel products, as mass customization is said to become the standard of the century (Piller & Müller, 2004). It was shown that individuals with higher CNFU levels not only purchased more customized apparel products in the last six months, but also spent more money on them. On the other hand, uniqueness-enhancing activities (UEA), such as travel to exotic places or extreme sports, were investigated using a newly developed 30-item scale. It was revealed that CNFU partly mediates the GUP-UEA relationship, showing that CNFU indeed predicts a broad range of consumer behaviors and that GUP is the driver behind both the need and the behavior. Study #4 entered new terrain. In contrast to the previous three studies, it explored the so-termed "passive" side of uniqueness-seeking in the consumer context. Individuals might feel unique because companies treat them in a special way. Such unique customer treatment (UCT) involves activities like customer service or customer relationship management. Study #4 investigated whether individuals differ in their need for such treatment. Hence, the need for unique customer treatment (NFUCT) was introduced as a new uniqueness-driven consumer need, and its impact on customer loyalty was examined. Analyses revealed, for example, that individuals with high NFUCT levels receiving a high level of unique customer treatment showed the highest customer loyalty, whereas the lowest customer loyalty was found among individuals with high NFUCT levels receiving a low level of unique customer treatment. Study #5 mainly examined the processes behind uniqueness-driven consumer behavior.
Here, not only psychological but also situational influences were examined. This study investigated the impact of a non-personal, "indirect" uniqueness manipulation on the consumption of customized apparel products while simultaneously controlling for the influence of GUP and CNFU. To this end, two equal experimental groups were created. These groups then received an e-mail with either a "pro-individualism" campaign or a "pro-collectivism" campaign developed especially for study #5. The experiment revealed that individuals receiving a "pro-individualism" poster campaign telling participants that uniqueness is socially appropriate and desired were willing to spend more money on customization options than individuals receiving a "pro-collectivism" poster campaign. Hence, not only psychological antecedents such as GUP and CNFU influence uniqueness-driven consumer behavior, but also situational factors.
This thesis is divided into three main parts: the description of the calibration problem, the numerical solution of this problem, and the connection to optimal stochastic control problems. Fitting model prices to given market prices leads to an abstract least squares formulation of the calibration problem. The corresponding option price can be computed by solving a stochastic differential equation via the Monte Carlo method, which seems to be preferred by most practitioners. Because the Monte Carlo method is computationally expensive and memory-intensive, more sophisticated stochastic predictor-corrector schemes are established in this thesis. The numerical advantage of these predictor-corrector schemes is presented and discussed. The adjoint method is applied to the calibration. The theoretical advantage of the adjoint method is discussed in detail. It is shown that the computational effort of gradient calculation via the adjoint method is independent of the number of calibration parameters. Numerical results confirm the theoretical results and summarize the computational advantage of the adjoint method. Furthermore, the connection to optimal stochastic control problems is established in this thesis.
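As a minimal illustration of the Monte Carlo pricing step discussed above, the following sketch prices a European call by Euler-Maruyama simulation of geometric Brownian motion. The dynamics, parameters, and function name are illustrative assumptions, not the calibrated model or the predictor-corrector schemes from the thesis; the nested loops over paths and time steps make the computational cost that motivates those schemes tangible.

```python
import math
import random

def mc_call_price(s0, strike, r, sigma, T, n_steps, n_paths, seed=1):
    """Price a European call by Euler-Maruyama simulation of
    dS = r*S dt + sigma*S dW, discounting the mean payoff."""
    random.seed(seed)
    dt = T / n_steps
    payoff_sum = 0.0
    for _ in range(n_paths):
        s = s0
        for _ in range(n_steps):                 # one Euler step per dt
            s += r * s * dt + sigma * s * math.sqrt(dt) * random.gauss(0.0, 1.0)
        payoff_sum += max(s - strike, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths
```

With s0 = strike = 100, r = 0.05, sigma = 0.2, T = 1, the estimate should land near the Black-Scholes value of roughly 10.45, up to Monte Carlo noise and discretization bias; every gradient evaluation in a naive calibration would repeat this whole simulation per parameter, which is precisely what the adjoint method avoids.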
Our goal is to approximate energy forms on suitable fractals by discrete graph energies and certain metric measure spaces, using the notion of quasi-unitary equivalence. Quasi-unitary equivalence generalises the two concepts of unitary equivalence and norm resolvent convergence to the case of operators and energy forms defined in varying Hilbert spaces.
More precisely, we prove that the canonical sequence of discrete graph energies (associated with the fractal energy form) converges to the energy form (induced by a resistance form) on a finitely ramified fractal in the sense of quasi-unitary equivalence. Moreover, we allow a perturbation by magnetic potentials and we specify the corresponding errors.
This approach approximates the fractal from within (by an increasing sequence of finitely many points). The natural next step is to ask whether one can also approximate fractals from outside, i.e., by a suitable sequence of shrinking supersets. We partly answer this question by restricting ourselves to a very specific structure of the approximating sets, namely so-called graph-like manifolds that respect the structure of the fractals and of the underlying discrete graphs, respectively. Again, we show that the canonical (properly rescaled) energy forms on such a sequence of graph-like manifolds converge to the fractal energy form (in the sense of quasi-unitary equivalence).
From the quasi-unitary equivalence of energy forms, we conclude the convergence of the associated linear operators, convergence of the spectra and convergence of functions of the operators – thus essentially the same as in the case of the usual norm resolvent convergence.
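As a sketch of the key notion (one common formulation from the literature on operators in varying Hilbert spaces; the exact conditions and constants differ between versions and need not match the thesis): two non-negative operators $H$ in $\mathcal{H}$ and $\widetilde{H}$ in $\widetilde{\mathcal{H}}$ are $\delta$-quasi-unitarily equivalent if there is a bounded identification operator $J\colon \mathcal{H}\to\widetilde{\mathcal{H}}$ such that, roughly,

```latex
% Sketch (assumed formulation, not quoted from the thesis):
\|(\operatorname{id}_{\mathcal{H}} - J^{*}J)(H+1)^{-1}\| \le \delta, \qquad
\|(\operatorname{id}_{\widetilde{\mathcal{H}}} - JJ^{*})(\widetilde{H}+1)^{-1}\| \le \delta,
```
```latex
\|J(H+1)^{-1} - (\widetilde{H}+1)^{-1}J\| \le \delta .
```

For $\mathcal{H}=\widetilde{\mathcal{H}}$ and $J=\operatorname{id}$, the last condition reduces to closeness of resolvents, so letting $\delta\to 0$ along a sequence recovers norm resolvent convergence; for $\delta=0$ with $J$ unitary one recovers unitary equivalence.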
Stress-related disorders are continuously increasing, yet it is not clear whether stress also promotes breast cancer. This dissertation provides an analysis of the current state of research and focuses on the significance of pre-/postnatal stress factors and chronic stress. The derived hypotheses are empirically examined in breast cancer patients. The clinical study investigates the links between those factors and prognosis and outcome.
ASEAN and ASEAN Plus Three: Manifestations of Collective Identities in Southeast and East Asia?
(2003)
East Asia is a region undergoing vast structural changes. As the region moved closer together economically and politically following the breakdown of the bipolar world order and the ensuing expansion of intra-regional interdependencies, the states of the region faced the challenge of having to actively recast their mutual relations. At the same time, throughout the 1990s, the West became increasingly interested in trans- and inter-regional dialogue and cooperation with the emerging economies of East Asia. These developments gave rise to a "new regionalism", which eventually also triggered debates on Asian identities and the region's potential to integrate. Against this backdrop, this thesis analyzes to what extent both the Association of Southeast Asian Nations (ASEAN), which has been operative since 1967 and thus embodies the "old regionalism" of Southeast Asia, and the ASEAN Plus Three forum (APT: the ASEAN states plus China, Japan and South Korea), which came into existence in the aftermath of the Asian economic crisis of 1997, can be said to represent intergovernmental manifestations of specific collective identities in Southeast and East Asia, respectively. Based on profiles of the respective discursive, behavioral and motivational patterns as well as the integrative potential of ASEAN and APT, this study establishes to what extent the member states adhere to sustainable collective patterns of interaction, expectations and objectives, and assesses to what extent they can be said to form specific 'ingroups'. Four studies on collective norms, readiness to pool sovereignty, solidarity and attitudes vis-à-vis relevant third states show that ASEAN has evolved a certain degree of collective identity, though the Association's political relevance and coherence are frequently thwarted by changes in its external environment. A study on the cooperative and integrative potential of APT yields no manifest evidence of an ongoing or incipient pan-East Asian identity formation process.
This meta-scientific dissertation comprises three research articles that investigated the reproducibility of psychological research. Specifically, they focused on the reproducibility of eye-tracking research on the one hand, and studying preregistration (i.e., the practice of publishing a study protocol before data collection or analysis) as one method to increase reproducibility on the other hand.
In Article I, it was demonstrated that eye-tracking data quality is influenced by both the utilized eye-tracker and the specific task being measured. That is, distinct strengths and weaknesses were identified in three devices (Tobii Pro X3-120, GP3 HD, EyeLink 1000+) in an extensive test battery. Consequently, both the device and the specific task should be considered when designing new studies. Meanwhile, Article II focused on the current perception of preregistration in the psychological research community and on future directions for improving this practice. The survey showed that many researchers intended to preregister their research in the future and had overall positive attitudes toward preregistration. However, various obstacles currently hindering preregistration were identified, which should be addressed to increase its adoption. These findings were supplemented by Article III, which took a closer look at one preregistration-specific tool: the PRP-QUANT Template. In a simulation trial and a survey, the template demonstrated high usability and emerged as a valuable resource to support researchers in using preregistration. Future revisions of the template could help to further facilitate this open science practice.
In this dissertation, the findings of the three articles are summarized and discussed regarding their implications and potential future steps that could be implemented to improve the reproducibility of psychological research.
A sustainable development of forests and their ecosystem services requires the monitoring of the forests' state and changes as well as the prediction of their future development. To achieve the latter, eco-physiological forest growth models are usually applied. These models require calibration and validation with forestry reference data. This data includes forest structural parameters, such as tree height or stem diameter, which are easy to measure and can be used to estimate the core model parameters, i.e. the trees' biomass pools. The methods traditionally applied to derive the structural parameters are mainly manual and time-consuming. Hence, the in situ data acquisition is inefficient and limited in its ability to capture the vertical and horizontal variability in stand structure. Ground-based remote sensing bears the potential to overcome the limitations of the traditional methods. As they can be automated, ground-based remote sensing methods allow a much more efficient data acquisition and a larger spatial coverage. They are also able to capture forest structure in all three dimensions. Nevertheless, further research is at present required, in particular with respect to the practical integration of ground-based remote sensing data into forest growth models as well as regarding the factors influencing the structural parameter retrieval from this data. Therefore, the goal of this PhD thesis was to investigate the influencing factors of two ground-based remote sensing methods (terrestrial laser scanning and hemispherical photography), which have not or only scarcely been studied to date. In addition, the use of forest structural parameters derived from these methods for the calibration of a forest growth model was assessed. Both goals were achieved. The results of this thesis could contribute significantly to a comprehensive assessment of ground-based remote sensing and its potential to derive forest structural parameters.
However, the use of these methods to calibrate forest growth models proved to be limited. An optimized data sampling design is expected to eliminate the major limitations, though. Furthermore, the combination of ground-based, airborne, and satellite remote sensing sensors was suggested to provide an optimized framework for the general integration of remotely sensed data into forest growth models. This combination of remote sensing observations at different scales will contribute greatly to a modern forest management with the purpose of warranting a sustainable forest development even under growing economic and ecological pressures.
Dry tropical forests are facing massive conversion and degradation processes, and they are the most endangered forest type worldwide. One of the largest dry forest types is the Miombo forest, which stretches across the Southern African subcontinent; the proportionally largest part of this type can be found in Angola. The study site of this thesis is located in south-central Angola. The country still suffers from the consequences of the 27 years of civil war (1975-2002), which provide a unique socio-economic setting. The natural characteristics are a representative cross-section, which proved ideal for studying underlying drivers as well as current and retrospective land use change dynamics. The major land change dynamic of the study area is the conversion of Miombo forests to cultivation areas as well as the modification of forest areas, i.e. degradation, due to the extraction of natural resources. With future predictions of population growth, climate change and large-scale investments, land pressure is expected to further increase. To fully understand the impacts of these dynamics, both conversion and modification of forest areas were assessed. Using the conceptual framework of ecosystem services, the predominant trade-off between food and timber in the study area was analyzed, including retrospective dynamics and impacts. This approach accounts for products that contribute directly or indirectly to human well-being. For this purpose, data from the Landsat archive from 1989 to 2013 was applied in different approaches adapted to the study area. The objectives of these approaches were (I) to detect underlying drivers and the temporal and spatial extent of their impact, (II) to describe modification and conversion processes reaching from times of armed conflict over the ceasefire to the post-war period, and (III) to provide an assessment of drivers and impacts in a comparative setting.
It could be shown that major underlying drivers of the conversion processes are resettlement dynamics as well as the location and quality of streets and settlements. Furthermore, forests that are selectively used for resource extraction have a higher chance of being converted to fields. Drivers of forest degradation are, on the one hand, also strongly connected to settlements and infrastructure, but also, to a large extent, to fire dynamics that occur mostly in more remote and presumably undisturbed forest areas. The loss of woody biomass, as well as its slow recovery after the abandonment of fields, could be quantified and stands in stark contrast to the amount of cultivated food that is needed. The results of the thesis support the fundamental understanding of drivers and impacts in the study area and can thus contribute to sustainable resource management.
One mechanism underlying the acquisition of interpersonal attitudes is the formation of an association between a valenced unconditioned stimulus (US) and an affectively neutral conditioned stimulus (CS). However, a stimulus (e.g., a person) is not always and necessarily perceived to be unambiguously positive or negative. An individual can be negative regarding abstract (trait) information but at the same time display a positive (concrete) behavior. The present research deals with the question of whether the valence of abstract or concrete information about a US is encoded and subsequently transferred to an associated CS. The central assumptions are that the valence of the concrete information is more important for the evaluation of the US, whereas the abstract information is more important for the evaluation of the CS. The rationale behind these assumptions is that the US is a psychologically proximal stimulus because it elicits a more direct affective reaction. The CS, however, is psychologically more distal because it is merely associated with the US and is therefore only experienced indirectly. It is postulated that the associative relation between US and CS constitutes a dimension of psychological distance. In four studies, the valence of abstract and concrete information about a number of USs was manipulated. Within an evaluative learning paradigm, these stimuli were associated with affectively neutral CSs. As predicted, ambivalent USs were evaluated according to the valence of the concrete information. The evaluation of CSs, however, was influenced more strongly by the valence of the abstract information. Moreover, in a subsequent lexical decision task, participants were faster to categorize abstract (vs. concrete) stimuli when the stimuli were preceded by a CS prime as compared to a US prime. The results provide first evidence that perceived psychological distance influences the evaluations of US and CS in an associative evaluative learning paradigm.
Since the end of the British Empire, which had provided white Australians with points of view, attitudes and stereotypes of the world, including perceptions of their own role in it, rediscovering an international identity has been an Australian quest. Many turned to European roots; others to the Aboriginal landscape; Blanche d'Alpuget and Christopher J. Koch are two who have ventured into Asia for the culturally and spiritually regenerative materials necessary to redefine Australia in the post-colonial world. They have taken Eastern concepts of "self" and "soul" and forged them with the Australian obsession of fear and desire of contact with the "other" in a looking-glass of hybrid, Austral-Asian myth to reveal the true soul of Australian identity. Along with a brief historical and literary background to the triangular relationship between white Australia, Asia, and the West, this study's goal is to identify some of the Southeast Asian symbols, myths and literary structures which Koch and d'Alpuget integrate into the Western tradition. Central elements include: dichotomies of personality, righteousness, and virtue; the "Otherworld", where one may approach enlightenment, but at the risk of falling into self-delusion; archetypes of the Hindu divine feminine; the Eastern roots of Koch's themes of the "double man"; concepts of the forces of "light" and "dark"; the semiotics of time and meaning; and the central Eastern metaphor of the mirror by which Australia creates interdependent images of itself and of Asia.
This thesis presents a study of the visual change detection mechanism. This mechanism is thought to be responsible for the detection of sudden and unexpected changes in our visual environment. As the brain is a capacity-limited system and has to deal with a continuous stream of information from its surroundings, only a part of the vast amount of information can be completely processed and brought to conscious awareness. This information, which passes through attentional filters, is used for goal-directed behaviour. Therefore, the change detection mechanism is a very useful aid to cope with important information which is outside the focus of our attention. It is thought that a neural memory trace of repetitive visual information is stored. Each new information input is compared to this existing memory trace by a so-called change or mismatch detection system. Following a sudden change, the comparison process leads to a mismatch and the detection system elicits a warning signal, to which an orienting response can follow. This involves a change in the focus of attention towards this sudden environmental change, which can then be evaluated for potential danger and allows for a behavioural adaptation to the new situation. To this purpose a paradigm was developed combining a 2-choice response time task with a background mismatch detection task of which the subjects were not aware. This paradigm was implemented in an ERP and an fMRI study and was used to study the change detection mechanism and its relationship with impulsivity. In previous studies a change detection system for auditory information had already been established. As the brain is a very efficient system, it was thought to be unlikely that this change detection system is only available for the processing of auditory information.
Indeed, a modality-specific mismatch response at the sensory-specific occipital cortex and a more general response at the frontocentral midline, both resembling the components shown in auditory research, were found in the ERP study. Additionally, magnetic resonance imaging revealed a possible functional network of regions which responded specifically to the processing of a deviant. These regions included the occipital gyrus, premotor cortex, inferior frontal cortex, thalamus, insula, and parts of the cingulate cortex. The relationship between impulsivity measures and visual change detection was established in an additional study. More impulsive subjects showed less detection of deviant stimuli, most likely due to overly fast and imprecise information processing. In summary, the work presented in this thesis demonstrates that a visual mismatch negativity was established, that a number of regions could be associated with change detection, and that the relevance of change detection in information processing was shown.
This study examines to what extent a banking crisis and the ensuing potential liquidity shortage affect corporate cash holdings. Specifically, how do firms adjust their liquidity management prior to and during a banking crisis when they are restricted in their financing options? These restrictions might not only result from firm-specific characteristics but also incorporate the effects of certain regulatory requirements. I analyse the real effects of indicators of a potential crisis and the occurrence of a crisis event on corporate cash holdings for both unregulated and regulated firms from 31 different countries. In contrast to existing studies, I perform this analysis on the basis of a long observation period (1997 to 2014 and 2003 to 2014, respectively) using multiple crisis indicators (early warning signals) and multiple crisis events. For regulated firms, this study makes use of a unique sample of country-specific regulatory information, which is collected by hand for 15 countries and converted into an ordinal scale based on the severity of the regulation. Regulated firms are selected from a single industry: Real Estate Investment Trusts. These firms invest in real estate properties and let these properties to third parties. Real Estate Investment Trusts that comply with the aforementioned regulations are exempt from income taxation and are punished for a breach, which makes this industry particularly interesting for the analysis of capital structure decisions.
The results for regulated and unregulated firms are mostly inconclusive. I find no convincing evidence that the degree of regulation affects the level of cash holdings for regulated firms before and during a banking crisis. For unregulated firms, I find strong evidence that financially constrained firms have higher cash holdings than unconstrained firms. Further, there is no real evidence that either financially constrained firms or unconstrained firms increase their cash holdings when observing an early warning signal. In case of a banking crisis, the results differ for univariate tests and in panel regressions. In the univariate setting, I find evidence that both types of firms hold higher levels of cash during a banking crisis. In panel regressions, the effect is only evident for financially unconstrained firms from the US, and when controlling for financial stress, it is also apparent for financially constrained US firms. For firms from Europe, the results are predominantly inconclusive. For banking crises that are preceded by an early warning signal, there is only evidence for an increase in cash holdings for unconstrained US firms when controlling for financial stress.
For the first time, the German Census 2011 will be conducted via a new method: the register-based census. In contrast to a traditional census, in which all inhabitants are surveyed, the German government will mainly attempt to count individuals using population registers of administrative authorities, such as the municipalities and the Federal Employment Agency. Census data that cannot be collected from the registers, such as information on education, training, and occupation, will be collected by an interview-based sample survey. Moreover, the new method reduces citizens' obligations to provide information and helps reduce costs significantly. The use of sample surveys is limited if results with a detailed regional or subject-matter breakdown have to be prepared. Classical estimation methods are sometimes criticized, since estimation is often problematic for small samples. Fortunately, model-based small area estimators serve as an alternative. These methods help to increase the information, and hence the effective sample size. In the German Census 2011 it is possible to embed areas on a map in a geographical context. This may offer additional information, such as neighborhood relations or spatial interactions. Standard small area models, like Fay-Herriot or Battese-Harter-Fuller, do not account for such interactions explicitly. The aim of our work is to extend the classical models by integrating the spatial information explicitly into the model. In addition, the possible gain in efficiency will be analyzed.
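As a sketch of the models named above (standard area-level notation, assumed here rather than taken from the abstract): the Fay-Herriot model links a direct estimator $\hat\theta_d$ for area $d$ to area-level covariates, and one common spatial extension lets the area effects follow a simultaneous autoregressive (SAR) process over a proximity matrix $W$:

```latex
% Fay-Herriot area-level model (sketch; notation assumed)
\hat{\theta}_d = x_d^{\top}\beta + u_d + e_d, \qquad
u_d \sim \mathcal{N}(0,\sigma_u^2), \quad
e_d \sim \mathcal{N}(0,\psi_d), \quad d = 1,\dots,D.
```
```latex
% Spatial extension: SAR(1) structure on the vector of area effects u,
% with autocorrelation parameter \rho and row-standardised matrix W
u = \rho W u + \varepsilon, \qquad
\varepsilon \sim \mathcal{N}(0,\sigma_\varepsilon^2 I_D).
```

Here $\psi_d$ is the (assumed known) sampling variance of the direct estimator; for $\rho = 0$ the spatial model collapses to the classical Fay-Herriot model.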
This study examines the relationship between media content, its production, and its reception in Japanese popular culture with the example of the so-called yuri ("lily") genre, which centers on representations of intimate relationships between female characters. Based on contemporary genre theory, which posits that genres are not inherent properties of texts, the central question of this study is how the yuri genre is discursively produced in Japan. To examine this question, the study takes a variety of sources into consideration: Firstly, it discusses ten exemplary texts from the 1910s to 2010s that the Japanese discourse on the yuri genre deems the milestone texts of the genre's historical development (Hana monogatari, Otome no minato, Secret Love, Shiroi heya no futari, Bishōjo senshi Sailor Moon, Maria-sama ga miteru, Shōjo Sect, Aoi hana, Yuru yuri, and Yuri danshi). Secondly, interviews with ten editors working for Japanese manga magazines shed light on their assessment of the yuri genre. Finally, the results of an online survey among Japanese fans of the yuri genre, which returned 1,352 completed questionnaires, question previous assumptions about the fans and their reasons for liking the yuri genre. The central argument of this study is that the yuri genre is for the most part constructed not through assignments on the part of the genre's producers but through interpretations on the part of the genre's fans. The intimacy portrayed in the texts ranges from "friendship" to "love," and often the ideas of "innocence" and "beauty" are emphasized. Nevertheless, the formation of the yuri genre occurs outside the bounds of the texts, most importantly in fan works, i.e. derivative texts created by fans. The actual content of the originals merely serves as a starting point for these interpretations.
Located at the intersection of Japanese studies, cultural studies, media studies, and sociology, this study contributes to our understanding of contemporary Japanese popular culture by showing the mutual dependencies between media content, production, and reception. It provides a deeper look at these processes through first-hand accounts of both producers and fans of the yuri genre.
The implicit power motive is one of the most researched motives in motivational psychology—at least in adults. Children have rarely been subject to investigation and there are virtually no results on behavioral and affective correlates of the implicit power motive in children. As behavior and affect are important components of conceptual validation, the empirical data in this dissertation focused on identifying three correlates, namely resource control behavior (study 1), power stress (study 2), and persuasive behavior (study 3). In each study, the implicit power motive was measured via the Picture Story Exercise, using an adapted version for children. Children across samples were between 4 and 11 years old.
Results from study 1 and 2 showed that children’s power-related behavior corresponded with evidence from adult samples: children with a high implicit power motive secure attractive resources and show negative reactions to a thwarted attempt to exert influence. Study 3 contradicted existing evidence with adults in that children’s persuasive behavior was not associated with nonverbal, but with verbal strategies of persuasion. Despite this inconsistency, these results are, together with the validation of a child-friendly Picture Story Exercise version, an important step into further investigating and confirming the concept of the implicit power motive and how to measure it in children.
The influence of the dopamine agonist Ritalin® on performance in a card sorting task involving a monetary reward component was tested in 43 healthy male participants. It was investigated whether Ritalin® would have differential behavioral effects as a function of the participants' parental bonding experiences and the personality variable "Novelty Seeking". When activity and performance accuracy were stimulated by monetary reward, Ritalin® reduced activity in response to reward and added to the reward-induced increase in performance accuracy. However, performance accuracy after drug challenge was improved only in the low-care participants; in the high-care participants, it was impaired. This observation suggests that the successful therapeutic administration of Ritalin® in ADHD may be influenced by early-life parental care. Suggesting an association between the personality dimension of "Novelty Seeking" and the dopamine system, high "Novelty Seeking" scores correlated positively with sensitivity to the Ritalin® challenge.
This thesis focuses on threats as an experience of stress. Threats are distinguished from challenges and hindrances as another dimension of stress in challenge-hindrance models (CHM) of work stress (Tuckey et al., 2015). Multiple disciplines of psychology (e.g. stereotype, Fingerhut & Abdou, 2017; identity, Petriglieri, 2011) provide a variety of possible events that can trigger threats (e.g., failure experiences, social devaluation; Leary et al., 2009). However, a systematic consideration of triggers, and thus an overview of when the danger of threat arises, has been lacking to date. The explanation of why events are appraised as threats is related to frustrated needs (e.g., Quested et al., 2011; Semmer et al., 2007), but empirical evidence is rare and needs can cover a wide range of content (e.g., relatedness, competence, power), depending on the need approach (e.g., Deci & Ryan, 2000; McClelland, 1961). This thesis aims to shed light on the triggers (when) and the need-based mechanism (why) of threats.
In the introduction, I introduce threats as a dimension of stress experience (cf. Tuckey et al., 2015) and give insights into the diverse field of threat triggers (the when of threats). Further, I explain threats in terms of a frustrated need for a positive self-view, before presenting specific needs as possible determinants in the threat mechanism (the why of threats). Study 1 is a literature review based on 122 papers from interdisciplinary threat research and provides a classification of five triggers and five needs identified in explanations and operationalizations of threats. In Study 2, the five triggers and needs are ecologically validated in interviews with police officers (n = 20), paramedics (n = 10), teachers (n = 10), and employees of the German federal employment agency (n = 8). The mediating role of needs in the relationship between triggers and threats is confirmed in a correlative survey design (N = 101 leaders working part-time, Study 3) and in a controlled laboratory experiment (N = 60 two-person student teams, Study 4). The thesis ends with a general discussion of the results of the four studies, providing theoretical and practical implications.
The distractor-response binding effect (Frings & Rothermund, 2011; Frings, Rothermund, & Wentura, 2007; Rothermund, Wentura, & De Houwer, 2005) is based on the idea that irrelevant information is integrated with the response to the relevant stimuli in an episodic memory trace. The immediate re-encounter of any aspect of this saved episode, be it relevant or irrelevant, can lead to retrieval of the whole episode. As a consequence, the previously executed and now retrieved response may influence the response to the current relevant stimulus. That is, the current response may either be facilitated or impaired by the retrieved response, depending on whether it is compatible or incompatible with the currently demanded response. Previous research on this kind of episodic retrieval focused on its influence on action control. I examined whether distractor-response binding also plays a role in decision making in addition to action control. To this end I adapted the distractor-to-distractor priming paradigm (Frings et al., 2007) and conducted nine experiments in which participants had to decide as fast as possible which disease a fictional patient suffered from. To infer the correct diagnosis, two cues were presented; one did not give any hint for a disease (the irrelevant cue), whereas the other did (the relevant cue). Experiments 1a to 1c showed that the distractor-response binding effect is present in deterministic decision situations. Further, experiments 2a and 2b indicate that distractor-response binding also influences decisions under uncertainty. Finally, experiments 3a to 3d were conducted to test some constraints and underlying mechanisms of the distractor-response binding effect in decision making under uncertainty. In sum, these nine experiments provide strong evidence that distractor-response binding influences decision making.
Until today, the effects of many chlorinated hydrocarbons (e.g. DDT, PCBs) on specific organisms are still a subject of controversial discussion. This was also the case for potential endocrine effects that influence spermatogenesis, correlated with possible changes in a population's vitality. To clarify this situation, three questions were at the centre of attention: 1) Do the chemicals cause a specific harmful effect on the male reproductive tract? 2) Could particular chemical mixtures bind to and activate the human estrogen receptor (hER)? 3) Are certain life stages of an organism especially sensitive to the effects of chemicals, and can they therefore be established as a screening test system? The combined effects of DDT and Aroclor 1254 (A54) as single substances and in a 1:1 mixture were therefore investigated with regard to their estrogenic effectiveness on zebrafish (Brachydanio rerio). The concentrations of the pesticides and their mixture ranged between 0.05 µg/l and 500 µg/l, separated by a factor of 10. It turned out that the test concentration of 500 µg/l was too toxic to zebrafish in all cases. The experiment was followed up with four concentrations of DDT, A54 as well as their 1:1 mixture, again each separated by a factor of 10 and ranging between 0.05 µg/l and 50 µg/l. The bioaccumulation test over 8 days showed that the zebrafish accumulated the chemicals, but no equilibrium was reached, and the concentration of 0.05 µg/l was established as the No Observed Effect Concentration (NOEC). Building on these analyses, the investigation of the life cycle (LC) starting with fertilized eggs demonstrated a reduction in the rate of hatchability, in reproduction and in the length of the emerging fish. These reductions extended the duration of the life cycle stages (LCS), which consequently lasted longer than expected. Exposure time and concentration level of the tested chemicals accelerated the occurrence of these effects, which were more significant when the chemical mixtures were used.
To establish whether the parameters assessed were correlated with the male reproductive tract, the quality, quantity and life span of sperm were assessed using the methods of Leong (1988) and Shapiro et al. (1994). The sperm degeneration observed led us to investigate the spermatogenesis and the ultrastructure of the testes. This last experiment showed a significant reduction of the late stage of spermatogenesis and of the heterophagic vacuoles, which play an important role in spermatid maturation. It can therefore be concluded that DDT and A54 could act synergistically, cause disorders of the reproductive tract of male zebrafish and also influence their growth.
This study investigates the endemic centres of Indonesian animals and biodiversity across geographical gradients. It also evaluates the different lines that have been suggested for separating the Oriental and Australian faunal regions within Indonesia. The analyses mainly used the present-day distribution of terrestrial vertebrates, especially the smallest ranges of species and subspecies. The results show that faunal migration of Oriental and Australian lineages into the Indonesian Archipelago may have been occurring since the Palaeocene and, more importantly, that island drift might have facilitated such migration. These events caused a major reorganisation of island positions and forms, which in turn resulted in faunal extinctions around the mid-Pliocene. Some islands, especially in the Wallacea region, emerged very late and as a result now lack endemic forms. At least seven endemic centres can currently be recognised: Borneo, Java, Sumatra, Sulawesi, the North Moluccas, New Guinea and the Lesser Sundas/Banda Arcs. The affinities between these endemic centres reveal two clusters of islands in the Indonesian Archipelago, which in turn suggest shifts of the biogeographical lines in the region. Furthermore, climatic oscillations, eustatic sea level changes and fluctuations in vegetation during the Quaternary strongly affected the distribution pattern of animals. There was a phase of expansion of montane oak forests, grasslands and woodlands during the period 18,000-14,000 years ago in East Indonesia and 16,500-12,000 years ago in West Indonesia. This expansion led to increased isolation of rainforests and of the faunas adapted to them. These periods are also indicated by a lowering of the tree line, which allowed montane fauna to disperse across lower elevations. Around 8,000-9,000 years ago, the climate became warmer and slightly wetter.
The mid- to upper montane forests expanded to their full altitudinal range, while montane oak forest, grassland and woodland areas contracted. These climatic oscillations, eustatic sea level changes and fluctuations in vegetation in turn largely determined the formation of numerous sub-endemic centres, which today can be found within the main islands. At present, there are 14 sub-endemic centres on Borneo, 8 on Java, 16 on Sumatra, 14 on Sulawesi and 14 on New Guinea. From a conservation management point of view, the identification of such sub-endemic centres provides valuable information for protection efforts.
Besides the well-known positive aspects of conservation tillage combined with mulching, a drawback may be the survival of phytopathogenic fungi such as Fusarium species on plant residues. This may endanger the health of the following crop by increasing the infection risk for specific plant diseases. In infected plant organs, these pathogens are able to produce mycotoxins such as deoxynivalenol (DON). Mycotoxins like DON persist during storage, are heat resistant and are of major concern for human and animal health after consumption of contaminated food and feed, respectively. Among fungivorous soil organisms, there are representatives of the soil fauna that are evidently antagonistic to Fusarium infection and to contamination with mycotoxins. Earthworms (Lumbricus terrestris), collembolans (Folsomia candida) and nematodes (Aphelenchoides saprophilus) provide a wide range of ecosystem services, including the stimulation of decomposition processes, which may result in the regulation of plant pathogens and the degradation of environmental contaminants. Several investigations under laboratory conditions and in the field were conducted to test the following hypotheses: (1) Fusarium-infected and DON-contaminated wheat straw provides a more attractive food substrate than non-infected control straw; (2) the introduced soil fauna reduce the biomass of F. culmorum and the content of DON in infected wheat straw under laboratory and field conditions; (3) the interaction of the introduced soil fauna species enhances the degradation of Fusarium biomass and DON in wheat straw; (4) the degradation efficiency of the soil fauna is affected by soil texture. The results of the present thesis show that the degradation performance of the introduced soil fauna must be considered an important contribution to the biological control of plant diseases and environmental pollutants. As L. terrestris in particular proved to be the driver of the degradation process, earthworms contribute to a sustainable control of fungal pathogens such as Fusarium and its mycotoxins in wheat straw, thus reducing the risk of plant diseases and environmental pollution as an ecosystem service.
In the first part of this work we generalize a method of constructing optimal confidence bounds introduced in Buehler (1957) by specializing an exhaustive class of confidence regions inspired by Sterne (1954). The resulting confidence regions, also called Buehlerizations, are valid in general models and depend on a "designated statistic" that can be chosen according to some desired monotonicity behaviour of the confidence region. For a fixed designated statistic, the resulting family of confidence regions, indexed by their confidence level, is nested. Buehlerizations furthermore have the optimality property of being the smallest (w.r.t. set inclusion) confidence regions that are increasing in their designated statistic. The theory is finally applied to normal, binomial, and exponential samples. The second part deals with the statistical comparison of pairs of diagnostic tests and establishes relations (1) between the sets of lower confidence bounds, (2) between the sets of pairs of comparable lower confidence bounds, and (3) between the sets of admissible lower confidence bounds in various models for diverse parameters of interest.
Building Fortress Europe: Economic realism, China, and Europe’s investment screening mechanisms
(2023)
This thesis deals with the construction of investment screening mechanisms across the major economic powers in Europe and at the supranational level during the post-2015 period. The core puzzle at the heart of this research is how, in a traditional bastion of economic liberalism such as Europe, a protectionist tool such as investment screening could be erected so rapidly. Within a few years, Europe went from being highly welcoming towards foreign investment to increasingly implementing controls on it, with the focus on China. How are we to understand this shift? I posit that Europe’s increasingly protectionist stance on inward investment can be fruitfully understood through an economic realist approach, in which the introduction of investment screening is part of a process of ‘balancing’ China’s economic rise and reasserting European competitiveness. China has moved from being the ‘workshop of the world’ to becoming an innovation-driven economy at the global technological frontier. As China has become more competitive, Europe, still a global economic leader broadly situated at the technological frontier, has begun to sense a threat to its position, especially in the context of the fourth industrial revolution. A ‘balancing’ process has been set in motion, in which Europe seeks to halt and even reverse the narrowing competitiveness gap between it and China. The introduction of investment screening measures is part of this process.
Interoception - the perception of bodily processes - plays a crucial role in the subjective experience of emotion, consciousness and symptom genesis. As an alternative to interoceptive paradigms that depend on the participants' active cooperation, five studies are presented to show that startle methodology may be employed to study visceral afferent processing. Study 1 (38 volunteers) showed that startle responses to acoustic stimuli of 105 dB(A) intensity were smaller when elicited during the cardiac systole (R-wave +230 ms) as compared to the diastole (R +530 ms). In Study 2, 31 diabetic patients were divided into two groups with normal or diminished (< 6 ms/mmHg) baroreflex sensitivity (BRS) of heart rate control. Patients with normal BRS showed a startle inhibition during the cardiac systole, as was found for healthy volunteers. Diabetic patients with diminished BRS did not show this pattern. Because diminished BRS is an indicator of impaired baro-afferent signal transmission, we concluded that cardiac modulation of startle is associated with intact arterial baro-afferent feedback. Thus, pre-attentive startle methodology is suitable for studying visceral afferent processing.

Visceral and baro-afferent information has been found to be processed mainly in the right hemisphere. To explore whether cardiac modulation of startle eye blink is lateralized as well, in Study 3, 37 healthy volunteers received 160 unilateral acoustic startle stimuli presented to both ears, one at a time (R +0, 100, 230, 530 ms). Startle response magnitude was only diminished at R +230 ms and for left-ear presentation. This lateralization effect in the cardiac modulation of startle eye blink may reflect the previously described advantages of right-hemispheric brain structures in relaying viscero- and baro-afferent signals.

This lateralization effect implies that higher cognitive processes may also play a role in the cardiac modulation of startle.
To address this question, in Study 4, 25 volunteers first responded with button pushes as fast as possible (reaction time, RT) and second rated the perceived intensity of 60 acoustic startle stimuli (85, 95, or 105 dB; R +230, 530 ms). RT was divided into evaluation and motor response time. Increasing stimulus intensity enhanced startle eye blink, intensity ratings, and RT components. Eye blinks and intensity judgments were lower when startle was elicited at a latency of R +230 ms, but RT components were differentially affected. It is concluded that the cardiac cycle affects the attentive processing of acoustic startle stimuli.

Besides the arterial baroreceptors, the cardiopulmonary baroreceptors represent another important system of cardiovascular perception that may have similar effects on startle responsiveness. To clarify this issue, in Study 5, Lower Body Negative Pressure at gradients of 0, -10, -20, and -30 mmHg was applied to unload cardiopulmonary baroreceptors in 12 healthy males while acoustic startle stimuli were presented (R +230, 530 ms). Unloading of cardiopulmonary baroreceptors increased startle eye blink responsiveness. Furthermore, the effect of relative loading/unloading of arterial baroreceptors on startle eye blink responsiveness was replicated. These results demonstrate that the loading status of cardiopulmonary baroreceptors also has an impact on brainstem-based CNS processes.

Thus, the cardiac modulation of acoustic startle reflects baro-afferent signal transmission from multiple neural sources; it represents a pre-attentive method that is independent of active cooperation, but its modulatory effects also extend to higher cognitive, attentive processes.
We consider a linear regression model for which we assume that some of the observed variables are irrelevant for the prediction. Including the wrong variables in the statistical model can either lead to the problem of having too little information to properly estimate the statistic of interest, or having too much information and consequently describing fictitious connections. This thesis considers discrete optimization to conduct variable selection. In light of this, the subset selection regression method is analyzed. The approach has gained a lot of interest in recent years due to its promising predictive performance. A major challenge associated with subset selection regression is its computational difficulty. In this thesis, we propose several improvements to the efficiency of the method. Novel bounds on the coefficients of the subset selection regression are developed, which help to tighten the relaxation of the associated mixed-integer program, which relies on a Big-M formulation. Moreover, a novel mixed-integer linear formulation for the subset selection regression based on a bilevel optimization reformulation is proposed. Finally, it is shown that the perspective formulation of the subset selection regression is equivalent to a state-of-the-art binary formulation. We use this insight to develop novel bounds for the subset selection regression problem, which prove to be highly effective in combination with the proposed linear formulation.
In the second part of this thesis, we examine the statistical conception of the subset selection regression and conclude that it is misaligned with its intention. The subset selection regression uses the training error to decide which variables to select; it thus conducts the validation on the training data, which is often not a good estimate of the prediction error. Hence, it requires a predetermined cardinality bound. Instead, we propose to select variables with respect to the cross-validation value. The process is formulated as a mixed-integer program, with the sparsity itself becoming subject to the optimization. Usually, cross-validation is used to select the best model out of a few options; with the proposed program, the best model out of all possible models is selected. Since the cross-validation value is a much better estimate of the prediction error, the model can select the best sparsity itself.
The thesis concludes with an extensive simulation study, which provides evidence that discrete optimization can be used to produce highly valuable predictive models, with the cross-validation subset selection regression almost always producing the best results.
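The core idea of validating on held-out data rather than the training error can be sketched in a few lines. The example below is a deliberately simplified illustration (single-variable candidate models scored by leave-one-out cross-validation, not the thesis's mixed-integer formulation); the function names and toy data are invented:

```python
# Hedged sketch: pick a regression variable by cross-validation error
# instead of training error. Illustrative only, not the thesis's MIP approach.

def ols_fit(xs, ys):
    # simple linear regression y = a + b*x via closed-form least squares
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return a, b

def loo_cv_error(col, X, y):
    # leave-one-out cross-validation error for a single-variable model
    err = 0.0
    for i in range(len(y)):
        xs = [X[j][col] for j in range(len(y)) if j != i]
        ys = [y[j] for j in range(len(y)) if j != i]
        a, b = ols_fit(xs, ys)
        err += (y[i] - (a + b * X[i][col])) ** 2
    return err

# toy data: y depends on column 0 only; column 1 is irrelevant noise
X = [[i, (7 * i) % 5] for i in range(10)]
y = [2.0 * xi[0] + 1.0 for xi in X]

best = min(range(2), key=lambda c: loo_cv_error(c, X, y))
print(best)  # column 0 should win
```

Because the selection criterion is the held-out error, the irrelevant column loses even though adding it would always reduce the training error.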
Cognitive performance is contingent upon multiple factors. Beyond the impact of environmental circumstances, the bodily state may hinder or promote cognitive processing. Afferent transmission from the viscera, for instance, is crucial not only for the genesis of affect and emotion, but also exerts significant influences on memory and attention. In particular, afferent cardiovascular feedback from baroreceptors has been shown to produce subcortical and cortical inhibition. Consequences for human cognition and behavior are impairments of simple perception and sensorimotor functioning. Four studies are presented that investigate the modulatory impact of baro-afferent feedback on selective attention. The first study demonstrates that the modulation of sensory processing by baroreceptor activity applies to the processing of complex stimulus configurations. Using a visual masking task in which a target had to be selected against a visual mask, perceptual interference was reduced when target and mask were presented during the ventricular systole compared to the diastole. In study two, selection efficiency was systematically manipulated in a visual selection task in which a target letter was flanked by distracting stimuli. By comparing participants' performance under homogeneous and heterogeneous stimulus conditions, selection efficiency was assessed as a function of the cardiac cycle phase in which the targets and distractors were presented. The susceptibility of selection performance to the stimulus condition at hand was less pronounced during the ventricular systole compared to the diastole. Studies one and two therefore indicate that interference from irrelevant sensory input, resulting from temporally overlapping processing traces or from the simultaneous presentation of distractor stimuli, is reduced during phases of increased baro-afferent feedback.
Study three experimentally manipulated baroreceptor activity by systematically varying the participant's body position while a sequential distractor priming task was completed. In this study, negative priming and distractor-response binding effects were obtained as indices of controlled and automatic distractor processing, respectively. It was found that only controlled distractor processing was affected by tonic increases in baroreceptor activity. In line with studies one and two, these results indicate that controlled selection processes are more efficient during enhanced baro-afferent feedback, observable in diminished aftereffects of controlled distractor processing. Because previous findings indicated that baro-afferent transmission affects central rather than response-related processing stages, study four measured lateralized readiness potentials (LRPs) and reaction times (RTs) while participants, again, had to selectively respond to target stimuli that were surrounded by distractors. The impact of distractor inhibition on stimulus-related, but not on response-related LRPs suggests that in a sequential distractor priming task the sensory representations of distractors, rather than motor responses, are targeted by inhibition. Together with the results from studies one through three and the finding of baroreceptor-mediated behavioral inhibition targeting central processing stages, study four corroborates the presumption that baro-afferent signal transmission modulates controlled processes involved in selective attention. In sum, the work presented shows that visual selective attention benefits from increased baro-afferent feedback, as its effects are not confined to simple perception but may facilitate the active suppression of neural activity related to sensory input from distractors. Hence, through noise reduction, baroreceptor-mediated inhibition may promote effective selection in vision.
Objective: Only 20-25% of the variance in the two- to four-fold increased risk of developing breast cancer among women with family histories of the disease can be explained by known gene mutations. Other factors must exist. Here, a familial breast cancer model is proposed in which overestimation of risk, general distress, and cancer-specific distress constitute the type of background stress sufficient to increase unrelated acute stress reactivity in women at familial risk for breast cancer. Furthermore, these stress reactions are thought to be associated with central adiposity, an independent, well-established risk factor for breast cancer. Hence, stress, through its hormonal correlates and possible associations with central adiposity, may play a crucial role in the etiology of breast cancer in women at familial risk for the disease. Methods: Participants were 215 healthy working women with first-degree relatives diagnosed before age 50 (high familial risk) or after age 50 (low familial risk), or without breast cancer in first-degree relatives (no familial risk). Participants completed self-report measures of perceived lifetime breast cancer risk, intrusive thoughts and avoidance about breast cancer (Impact of Event Scale), negative affect (Profile of Mood States), and general distress (Brief Symptom Inventory). Anthropometric measurements were taken. Urine samples during work, home, and sleep periods were collected for the assessment of cortisol responses in the naturalistic setting, where work was conceptualized as the stressful time of the day. Results: A series of analyses indicated a gradient increase of cortisol levels in response to the work environment from no, to low, to high familial risk of breast cancer. When breast cancer intrusions were added to the model with familial risk status predicting work cortisol levels, significant intrusion effects emerged, rendering the familial risk group non-significant.
However, due to a lack of association between intrusions and cortisol in the low and high familial risk groups separately, as well as a significant difference between low and high familial risk on intrusions but not on work cortisol levels, full mediation of familial risk group effects on work cortisol by intrusions could not be established. A separate analysis indicated increased levels of central but not general adiposity in women at high familial risk of breast cancer compared to the low and no risk groups. There were no significant associations between central adiposity and cortisol excretion. Conclusion: A hyperactive hypothalamus-pituitary-adrenal axis with a more pronounced excretion of its end product cortisol, as well as elevated levels of central but not overall adiposity, in women at high familial risk for breast cancer may indicate an increased health risk that extends beyond the increased breast cancer risk for these women.
With the advent of high-throughput sequencing (HTS), profiling immunoglobulin (IG) repertoires has become an essential part of immunological research. The dissection of IG repertoires promises to transform our understanding of adaptive immune system dynamics. Advances in sequencing technology now also allow the use of the Ion Torrent Personal Genome Machine (PGM) to cover the full length of IG mRNA transcripts. The applications of this benchtop-scale HTS platform range from the identification of new therapeutic antibodies to the deconvolution of malignant B cell tumors. In this thesis, the usability of the PGM is assessed for investigating the IG heavy chain (IGH) repertoires of animal models. First, an innovative bioinformatics approach is presented to identify antigen-driven IGH sequences from bulk-sequenced bone marrow samples of transgenic humanized rats expressing a human IG repertoire (OmniRat™). We show that these rats mount a convergent IGH CDR3 response towards measles virus hemagglutinin protein and tetanus toxoid, with high similarity to their human counterparts. In the future, databases could contain all IGH CDR3 sequences with known specificity, allowing IG repertoire datasets to be mined for past antigen exposures and ultimately reconstructing the immunological history of an individual. Second, a unique molecular identifier (UID) based HTS approach and network property analysis are used to characterize the CLL-like CD5+ B cell expansion of A20BKO mice overexpressing a natural short splice variant of the CYLD gene (A20BKOsCYLDBOE). We determined that in these mice, overexpression of sCYLD leads to an unmutated subvariant of CLL (U-CLL). Furthermore, we found that this short splice variant also occurs in human CLL patients, highlighting it as an important target for future investigations.
Third, the UID-based HTS approach is improved by adapting it to the PGM sequencing technology and applying a custom-made data processing pipeline that includes error detection against the ImMunoGeneTics (IMGT) database. In this way, we were able to obtain correct IGH sequences with over 99.5% confidence and correct CDR3 sequences with over 99.9% confidence. Taken together, the results, protocols and sample processing strategies described in this thesis will improve the usability of animal models and the Ion Torrent PGM HTS platform in the field of IG repertoire research.
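The UID principle behind such error correction can be illustrated with a minimal sketch: reads sharing a unique molecular identifier originate from the same molecule, so a per-position majority vote across them yields an error-corrected consensus. This is a generic illustration, not the pipeline used in the thesis; the reads and UIDs below are invented:

```python
# Hedged sketch of UID-based consensus calling (illustrative, not the
# thesis's actual pipeline). Reads sharing a UID are grouped and a
# per-position majority vote removes isolated sequencing errors.
from collections import Counter, defaultdict

def uid_consensus(reads):
    # reads: list of (uid, sequence) pairs; sequences within a UID group
    # are assumed to be aligned and of equal length
    groups = defaultdict(list)
    for uid, seq in reads:
        groups[uid].append(seq)
    consensus = {}
    for uid, seqs in groups.items():
        # majority vote per column across all reads in the group
        cons = "".join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))
        consensus[uid] = cons
    return consensus

reads = [
    ("AAT", "ACGT"), ("AAT", "ACGT"), ("AAT", "ACGA"),  # one read carries an error
    ("GGC", "TTGA"), ("GGC", "TTGA"),
]
print(uid_consensus(reads))  # {'AAT': 'ACGT', 'GGC': 'TTGA'}
```

The single erroneous base in the third "AAT" read is outvoted by the two correct copies, which is the mechanism that drives per-base confidence well above the raw platform error rate.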
N-acetylation by N-acetyltransferase 1 (NAT1) is an important biotransformation pathway of the human skin, and it is involved in the deactivation of the arylamine and well-known contact allergen para-phenylenediamine (PPD). Here, NAT1 expression and activity were analyzed in antigen-presenting cells (monocyte-derived dendritic cells, MoDCs, a model for epidermal Langerhans cells) and human keratinocytes. The latter were used to study exogenous and endogenous modulation of NAT1 activity. Within this thesis, MoDCs were found to express metabolically active NAT1. Activities were between 23.4 and 26.6 nmol/mg/min and thus comparable to peripheral blood mononuclear cells. These data suggest that epidermal Langerhans cells contribute to the cutaneous N-acetylation capacity. Keratinocytes, which are known for their efficient N-acetylation, were analyzed in a comparative study using primary keratinocytes (NHEK) and different shipments of the immortalized keratinocyte cell line HaCaT, in order to investigate the ability of the cell line to model epidermal biotransformation. N-acetylation of the substrate para-aminobenzoic acid (PABA) was 3.4-fold higher in HaCaT compared to NHEK and varied between the HaCaT shipments (range 12.0-44.5 nmol/mg/min). Since B[a]P-induced cytochrome P450 1 (CYP1) activities were also higher in HaCaT compared to NHEK, the cell line can be considered an in vitro tool to qualitatively model epidermal metabolism with regard to NAT1 and CYP1. The HaCaT shipment with the highest NAT1 activity showed only minimal reduction of cell viability after treatment with PPD and was subsequently used to study interactions between NAT1 and PPD in keratinocytes. Treatment with PPD induced the expression of cyclooxygenases (COX) in HaCaT, but in parallel, PPD N-acetylation was found to saturate with increasing PPD concentration. This saturation explains the PPD-induced COX induction despite the high N-acetylation capacities.
A detailed analysis of the effect of PPD on NAT1 revealed that the saturation of PPD N-acetylation was caused by a PPD-induced decrease of NAT1 activity. This inhibition was found in HaCaT as well as in primary keratinocytes after treatment with PPD and PABA. Regarding the mechanism, reduced NAT1 protein levels and unaffected NAT1 mRNA expression after PPD treatment provided clear evidence of substrate-dependent NAT1 downregulation. These results expand the existing knowledge about substrate-dependent NAT1 downregulation to human epithelial skin cells and demonstrate that NAT1 activity in keratinocytes can be modulated by exogenous factors. Further analysis of HaCaT cells from different shipments revealed an accelerated progression through the cell cycle in HaCaT cells with high NAT1 activities. These findings suggest an association between NAT1 and proliferation in keratinocytes, as has been proposed earlier for tumor cells. In conclusion, the N-acetylation capacity of MoDCs as well as keratinocytes contributes to the overall N-acetylation capacity of human skin. NAT1 activity of keratinocytes, and consequently the detoxification capacity of human skin, can be modulated exogenously by the presence of NAT1 substrates and endogenously by the cell proliferation status of keratinocytes.
Chemical communication in the reproductive behaviour of Neotropical poison frogs (Dendrobatidae)
(2013)
Chemical communication is the evolutionarily oldest communication system in the animal kingdom, triggering intra- and interspecific interactions. It is initiated by the emitter releasing either a signal or a cue that causes a reaction in the receiving individual. Compared to other animals, there are relatively few studies on chemical communication in anurans. In this thesis, the impact of chemical communication on the behaviour of the poison frog Ranitomeya variabilis (Dendrobatidae) and its parental care performance was investigated. This species uses phytotelmata (small water bodies in plants) for both clutch and tadpole depositions. Since the tadpoles are cannibalistic, adult frogs not only avoid conspecifics when depositing their eggs but also transport their tadpoles individually into separate phytotelmata. The recognition of already occupied phytotelmata was shown to be due to chemical substances released by the conspecific tadpoles. In order to gain a deeper understanding of the ability of adult R. variabilis to recognize and avoid tadpoles in general, in-situ pool choice experiments were conducted, offering the frogs chemical substances of tadpoles of different species (Chapter I). It turned out that they were able to recognize all species and avoided their chemical substances for clutch depositions. For tadpole depositions, however, only dendrobatid tadpoles occurring in phytotelmata were avoided, while river-dwelling species were not. Additionally, the chemical substances of a treefrog tadpole (Hylidae) were recognized by R. variabilis. Yet they were not avoided but preferred for tadpole depositions; these tadpoles might thus be recognized as potential prey for the predatory poison frog larvae. One of the poison frog species avoided for both tadpole and clutch depositions was the phytotelmata-breeding Hyloxalus azureiventris. The chemical substances released by its tadpoles were analysed together with those of the R.
variabilis tadpoles (Chapter II). After finding a suitable solid-phase extraction sorbent (DSC-18), the active chemical compounds from the water of both tadpole species were extracted and fractionated. In order to determine which fractions triggered the avoidance behaviour of the frogs, in-situ bioassays were conducted. The biologically active compounds were found to differ between the two species. Since the avoidance of conspecific tadpoles is not advantageous to the releaser tadpoles (which lose a potential food resource), the chemicals released by them might be defined as chemical cues. However, since the avoidance of the heterospecific tadpoles turned out not to be a mere byproduct of the close evolutionary relationship between the two species, the chemical compounds released by H. azureiventris tadpoles might be defined as chemical signals (being advantageous to the releasing tadpoles) or, more specifically, as synomones: interspecifically acting chemicals that are advantageous for both emitter and receiver (since R. variabilis, too, avoids a competitive situation for its offspring). Another interspecific communication system investigated in this thesis was the avoidance of predator kairomones (Chapter III). Using chemical substances from damselfly larvae, it could be shown that R. variabilis was unable to recognize and avoid kairomones of these tadpole predators. However, when physically present, damselfly larvae were avoided by the frogs. For the recognition of conspecific tadpoles, in contrast, chemical substances were necessary, since purely visual artificial tadpole models were not avoided. Whether R. variabilis is also capable of chemically communicating with adult conspecifics was investigated by presenting chemical cues/signals of same-sex or opposite-sex conspecifics to the frogs (Chapter IV). It was expected that males would be attracted to chemical substances of females and repelled by those of conspecific males.
Instead, all individuals showed avoidance behaviour towards the conspecific chemicals. This was suggested to be an artefact due to confinement stress of the releaser animals, which emitted disturbance cues that triggered avoidance behaviour in their conspecifics. The knowledge gained about chemical communication in parental care was then used to further investigate a possible provisioning behaviour in R. variabilis. In-situ pool-choice experiments with chemical cues of conspecific tadpoles were carried out throughout the change from rainy to dry season (Chapter V). With a changepoint analysis, the exact seasonal change was determined and differences between the frogs' choices were analysed. It turned out that during the dry season R. variabilis does not avoid but prefers conspecific cues for tadpole depositions, which might be interpreted as a way of providing their tadpoles with food (i.e. younger tadpoles) in order to accelerate their development in the face of desiccation risk. That tadpoles were also occasionally fed with fertilized eggs could be shown in a comparative study, in which phytotelmata that contained a tadpole deposited by the frogs themselves received more clutch depositions than freshly erected artificial phytotelmata containing unfamiliar tadpoles (i.e. their chemical cues; Chapter VI). Home range calculations with ArcGIS showed that R. variabilis males had unexpectedly strong site fidelity, leading to the suggestion that they recognize their offspring by phytotelma location. However, in order to test whether R. variabilis is furthermore able to perform chemical offspring recognition, frogs were confronted in in-situ pool-choice experiments with chemical cues of single tadpoles found in their home ranges (Chapter VII). Genetic kinship analyses were conducted between the tadpoles emitting the chemical cues and those deposited together with or next to them.
The results, however, indicated that the frogs did not base the choice of depositing their offspring with or without another tadpole on relatedness, i.e. kin recognition by chemical cues could not be confirmed in R. variabilis.
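The changepoint analysis used in Chapter V to locate the seasonal shift in the frogs' pool choices can be illustrated with a minimal single-changepoint sketch. The thesis does not specify its method or data, so the mean-shift criterion, the function name and the "preference scores" below are purely hypothetical illustrations.

```python
def best_changepoint(series):
    """Find the index k that best splits `series` into two segments with
    different means, by minimizing the total within-segment sum of squared
    deviations (a simple brute-force mean-shift changepoint)."""
    n = len(series)
    best_k, best_cost = None, float("inf")
    for k in range(1, n):  # candidate split: [0, k) vs [k, n)
        left, right = series[:k], series[k:]
        cost = sum((x - sum(left) / len(left)) ** 2 for x in left) \
             + sum((x - sum(right) / len(right)) ** 2 for x in right)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Hypothetical daily "preference scores": avoidance (negative) during the
# rainy season, preference (positive) after the shift to the dry season.
scores = [-1.0, -0.8, -1.2, -0.9, 0.9, 1.1, 0.8, 1.0]
print(best_changepoint(scores))  # prints 4: the detected seasonal shift
```

In practice a library such as ruptures offers multi-changepoint variants of this idea; the brute-force version above only serves to show the principle of splitting a series where the segment means diverge most.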
High-resolution projections of the future climate are required to assess climate change realistically at a regional scale. This is particularly important for climate change impact studies, since global projections are much too coarse to represent local conditions adequately. A major concern is the change of extreme values in a warming climate, due to their severe impact on the natural environment, socio-economic systems and human health. Regional climate models (RCMs) are able to reproduce many of these local features. Current horizontal resolutions are about 18–25 km, which is still too coarse to directly resolve small-scale processes such as deep convection. For this reason, projections of a possible future climate were simulated in this study with the regional climate model COSMO-CLM at horizontal resolutions of 4.5 km and 1.3 km for the region of Saarland-Lorraine-Luxemburg and Rhineland-Palatinate for the first time. At a horizontal scale of about 1 km, deep convection is treated explicitly, which is expected to improve the simulation of convective summer precipitation in particular, and a better resolved orography is expected to improve near-surface fields such as the 2 m temperature. These simulations were performed as 10-year time-slice experiments for the present climate (1991–2000), the near future (2041–2050) and the end of the century (2091–2100). The climate change signals of the annual and seasonal means and the change of extremes are analysed with respect to precipitation and 2 m temperature, and a possible added value due to the increased resolution is investigated. To assess changes in extremes, extreme indices were applied and 10- and 20-year return levels were estimated with peak-over-threshold models. Since it is generally known that model output of RCMs should not be used directly for climate change impact studies, the precipitation and temperature fields were bias-corrected with several quantile-matching methods.
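The peak-over-threshold estimation of return levels mentioned above can be sketched as follows: exceedances over a fixed threshold are fitted to a generalized Pareto distribution (GPD), and the m-year return level follows from the standard GPD return-level formula. This is only an illustration, not the thesis's actual procedure — the method-of-moments fit stands in for the maximum-likelihood fits usually used, and the threshold and precipitation values are invented.

```python
from statistics import mean, variance

def gpd_return_level(exceedances, threshold, years, return_period):
    """Estimate the `return_period`-year return level from values exceeding
    `threshold`, observed over `years` years, via a GPD fitted by the
    method of moments (a simple stand-in for maximum-likelihood fitting)."""
    excess = [x - threshold for x in exceedances]
    m, v = mean(excess), variance(excess)
    shape = 0.5 * (1.0 - m * m / v)           # xi (method of moments)
    scale = 0.5 * m * (m * m / v + 1.0)       # sigma = m * (1 - xi)
    rate = len(excess) / years                # mean exceedances per year
    # Return level: z_m = u + (sigma/xi) * ((m * rate)**xi - 1)
    return threshold + scale / shape * ((return_period * rate) ** shape - 1.0)

# Hypothetical daily precipitation sums exceeding a 20 mm/day threshold
# within a 10-year time slice:
peaks = [22.0, 25.5, 31.0, 21.5, 28.0, 40.0, 24.0, 26.5, 23.0, 35.0]
z20 = gpd_return_level(peaks, threshold=20.0, years=10, return_period=20)
```

A negative fitted shape parameter, as with this sample, implies a bounded upper tail, so the estimated 20-year level stays close to the largest observed peak.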
Among them is a newly developed parametric method which includes an extension for extreme values and is hence expected to improve the correction. In addition, the impact of the bias correction on the climate change signals and on the extreme value statistics was investigated. The results reveal a significant warming of the annual mean by about +1.7 °C until 2041–2050 and +3.7 °C until 2091–2100, but considerably stronger signals of up to +5 °C in summer in the Rhine Valley. Furthermore, the daily variability increases by about +0.8 °C in summer but decreases by about −0.8 °C in winter. Consequently, hot extremes increase moderately until the middle of the century but strongly thereafter, in particular in the Rhine Valley. Cold extremes warm continuously in the complete domain over the next 100 years, but most strongly in mountainous areas. The change signals with regard to annual precipitation are of the order of ±10% but not significant. Significant, however, are a predicted increase of +32% of the seasonal precipitation in autumn until 2041–2050 and a decrease of −28% in summer until 2091–2100. No significant changes were found for days with intensities > 20 mm/day, but the results indicate that extremes with return periods ≤ 2 years increase, as do the frequency and duration of dry periods. The bias corrections amplified positive signals but dampened negative signals and considerably reduced the power of detection. Moreover, absolute values and frequencies of extremes were altered by the correction, but the change signals remained approximately constant. The new method outperformed other parametric methods, in particular with regard to extreme value correction and the related extreme indices and return levels. Although the bias correction removed systematic errors, it should be treated as an additional layer of uncertainty in climate change studies.
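The quantile-matching idea behind these bias corrections can be sketched in its simplest, empirical form: a model value is mapped to its quantile in the model's reference distribution, and the observed value at that same quantile is returned. This is only a generic illustration — the thesis's newly developed parametric method with its extreme-value extension is not reproduced here, and the temperature samples are invented.

```python
from bisect import bisect_right

def quantile_map(value, model_ref, obs_ref):
    """Empirical quantile mapping: locate `value`'s empirical quantile in
    the model reference sample, then return the observation at the same
    quantile (linear interpolation between sorted observed values)."""
    ms, os_ = sorted(model_ref), sorted(obs_ref)
    q = bisect_right(ms, value) / len(ms)      # empirical quantile in [0, 1]
    pos = q * (len(os_) - 1)                   # fractional index into obs
    lo = min(int(pos), len(os_) - 2)
    frac = pos - lo
    return os_[lo] * (1 - frac) + os_[lo + 1] * frac

# Hypothetical case: the model runs systematically about 2 K too warm.
obs   = [10.0, 11.0, 12.0, 13.0, 14.0]
model = [12.0, 13.0, 14.0, 15.0, 16.0]
corrected = quantile_map(14.0, model, obs)     # pulled down towards ~12
```

Because the mapping is built per quantile, it corrects the whole distribution rather than just the mean — which is also why, as noted above, it can alter the absolute values and frequencies of extremes while leaving change signals largely intact.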
Finally, the increased resolution of 1.3 km predominantly improved the representation of temperature fields and extremes in terms of spatial heterogeneity. The benefits for summer precipitation were not as clear due to a severe dry bias in summer, but it could be shown that in principle the onset and intensity of convection improve. This work demonstrates that climate change will have severe impacts in the investigated area and that extremes in particular may change considerably. An increased resolution thereby provides an added value to the results. These findings encourage further investigations for other variables, such as near-surface wind, which will become more feasible with growing computing resources. These analyses should, however, be repeated with longer time series, different RCMs and anthropogenic scenarios to determine the robustness and uncertainty of the results more extensively.