The main aim of "Her Idoll Selfe"? Shaping Identity in Early Modern Women's Self-Writings is to offer fresh readings of as yet little-read early modern women's texts. I look at a variety of texts that are either explicitly concerned with the constitution of the writer's self, such as the autobiographies by Lady Grace Mildmay and Martha Moulsworth, or in which the preoccupation with the self is of a more indirect nature, as in the mothers' advice books by Elizabeth Grymeston, Dorothy Leigh, Elizabeth Richardson or the anonymous M. R., or in women's poetry, drama and religious verse. I situate the texts in the context of early modern discourses of femininity and subjectivity to pursue the question to what extent it was possible for early modern women to achieve a sense of agency in spite of their culturally marginal position. In this, my readings aim to contribute to the ongoing critical process of decentring the early modern period. At the same time, I draw on contemporary theory as a methodological tool that can open up further dimensions of the texts, especially in places where the texts provide clues and parallels that lend themselves to a theoretical approach. Conversely, the texts themselves shed interesting light on feminist and poststructuralist theory and can serve as testing grounds for the current critical fascination with fragmentation and hybridity. Having outlined the theoretical and methodological framework of my study, I then analyse the women's writings with reference to a matrix of paradigmatic dimensions that encompass their most prominently recurring themes: the notion of writing the self, relationships between self and other, demarcations of private and public, the women's notorious preoccupation with self-loss and death, as well as the recurrent theme of the "golden meane". I suggest that this motif can provide the vital cue to early modern women's constitution of self. The idea of a precarious "golden meane" links in with parallel discourses of moderation and balance at the time, but reinterprets them in a manner that can present a workable and innovative paradigm of subjectivity. Instead of subscribing to a model of decentred selfhood, early modern women's presentations of self suggest that a concluding but contested compromise is a workable strategy to achieve a form of selfhood that can responsibly be lived with.
Statistical matching offers a way to broaden the scope of analysis without the increase in respondent burden and costs that would result from conducting a new survey or adding variables to an existing one. Statistical matching aims at combining two datasets A and B referring to the same target population in order to jointly analyse variables, say Y and Z, that were not initially observed together. The matching is performed based on matching variables X that correspond to common variables present in both datasets A and B, while Y is only observed in B and Z is only observed in A. To overcome the fact that no joint information on X, Y and Z is available, statistical matching procedures have to rely on suitable assumptions. As a theoretical foundation for statistical matching, most procedures therefore rely on the conditional independence assumption (CIA), i.e. given X, Y is independent of Z.
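As a minimal illustration of this data situation and of a classical, CIA-based matching step, consider the following sketch (hypothetical variable names and distributions; distance hot-deck is just one of several classical matching methods):

```python
import numpy as np

rng = np.random.default_rng(0)

# Data situation: X is observed in both files, Z only in A, Y only in B.
n_a, n_b = 500, 400
x_a = rng.normal(size=n_a)                # matching variable in file A
z_a = 2.0 * x_a + rng.normal(size=n_a)    # Z observed only in A
x_b = rng.normal(size=n_b)                # matching variable in file B
y_b = -1.0 * x_b + rng.normal(size=n_b)   # Y observed only in B

# Classical distance hot-deck under the CIA: for every record in A,
# take Y from the nearest donor in B with respect to X.
donor_idx = np.abs(x_a[:, None] - x_b[None, :]).argmin(axis=1)
y_imputed = y_b[donor_idx]

# The matched file now contains (X, Y, Z) jointly; any Y-Z association
# in it is induced purely through X, which is exactly what the CIA asserts.
matched = np.column_stack([x_a, y_imputed, z_a])
```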
The goal of this thesis is to encompass both the statistical matching process and the analysis of the matched dataset. More specifically, the aim is to estimate a linear regression model for Z given Y and possibly other covariates in data A. Since the validity of the matched file rests on the assumptions underlying the matching process, the accuracy of statistical inference is determined by the suitability of these assumptions. By putting the focus on these assumptions, this work proposes a systematic categorisation of approaches to statistical matching, relying on graphical representations in the form of directed acyclic graphs. These graphs are particularly useful for representing the dependencies and independencies that are at the heart of the statistical matching problem. The proposed categorisation distinguishes between (a) joint modelling of the matching and the analysis (integrated approach), and (b) matching followed by statistical analysis of the matched dataset (classical approach). Whereas the classical approach relies on the CIA, implementations of the integrated approach are only valid if they converge, i.e. if the specified models are identifiable and, in the case of MCMC implementations, if the algorithm converges to a proper distribution.
In this thesis an implementation of the integrated approach is proposed in which the imputation step and the estimation step are jointly modelled through a fully Bayesian MCMC estimation. It is based on a linear regression model for Z given Y and accounts for both a linear regression model and a random effects model for Y. Furthermore, it is valid when the instrumental variable assumption (IVA) holds. The IVA states that (a) Z is independent of a subset X' of X given Y and X*, where X* = X\X', and (b) Y is correlated with X' given X*. A proof that the joint Bayesian modelling of the model for Z and the model for Y through an MCMC simulation converges to a proper distribution is provided in this thesis. In a first model-based simulation study, the proposed integrated Bayesian procedure is assessed with regard to the data situation, convergence issues, and underlying assumptions. Special interest lies in the investigation of the interplay of the Y and Z models within the imputation process. It turns out that failure scenarios can be distinguished by comparing the CIA and the IVA in the completely observed dataset.
Finally, both approaches to statistical matching, i.e. the classical approach and the integrated approach, are subjected to an extensive comparison in (1) a model-based simulation study and (2) a simulation study based on the AMELIA dataset, an openly available, very large synthetic dataset that is, by construction, similar to the EU-SILC survey. As an additional integrated approach, a Bayesian additive regression trees (BART) model is considered for modelling Y. These integrated procedures are compared to the classical approach, represented by predictive mean matching in the form of multiple imputation by chained equations. Suitably chosen, the first simulation framework makes it possible to clarify aspects related to the underlying assumptions by comparing the IVA and the CIA and by evaluating the impact of the matching variables. Within this simulation study, two related aspects are therefore of special interest: the assumptions underlying each method and the incorporation of additional matching variables. The simulation on the AMELIA dataset offers a close-to-reality framework with the advantage that the whole setting, i.e. the complete data X, Y and Z, is known. Special interest lies in investigating the assumptions by adding and excluding auxiliary variables in order to enhance conditional independence, and in assessing the sensitivity of the methods to this issue. Furthermore, the benefit of having an overlap of units in data A and B for which information on X, Y and Z is available is investigated. It turns out that the integrated approach yields better results than the classical approach when the CIA clearly does not hold. Moreover, even when the classical approach obtains unbiased results for the regression coefficient of Y in the model for Z, the method relying on BART performs best over all coefficients.
In conclusion, this work constitutes a major contribution to the clarification of the assumptions essential to any statistical matching procedure. By introducing graphical models to describe existing approaches to statistical matching combined with the subsequent analysis of the matched dataset, it offers an extensive overview, categorisation and extension of theory and application. Furthermore, in a setting where none of the assumptions are testable (since X, Y and Z are not observed together), the integrated approach is a valuable asset in offering an alternative to the CIA.
Mankind has dramatically influenced the nitrogen (N) fluxes between soil, vegetation, water and atmosphere – the global N cycle. Increasing intensification of agricultural land use, caused by the growing demand for agricultural products, has had major impacts on ecosystems worldwide. Emissions of nitrogenous gases such as ammonia (NH3) in particular have increased, mainly due to industrial livestock farming. Countries with high N deposition rates require a variety of deposition measurements and effective N monitoring networks to assess N loads. Due to high costs, conventional deposition measurement stations are not widespread and therefore provide only a patchy picture of the real extent of the prevailing N deposition status over large areas. One tool that allows quantification of the exposure and the effects of atmospheric N impacts on an ecosystem is the use of bioindicators. Due to their specific physiology and ecology, lichens and mosses in particular are suitable for reflecting the atmospheric N input at ecosystem level. The present doctoral project began by investigating the general ability of epiphytic lichens to qualify and quantify N deposition by analysing total N content and δ15N in lichens along a gradient of different N emission sources and intensities. The results showed that this was a viable monitoring method, and a grid-based monitoring system with nitrophytic lichens was set up in the western part of Germany. Finally, a critical appraisal of three different monitoring techniques (lichens, mosses and tree bark) was carried out to compare them with nationally relevant N deposition assessment programmes. In total, 1057 lichen samples, 348 tree bark samples, 153 moss samples and 24 deposition water samples were analysed in this dissertation at different investigation scales in Germany. The study identified species-specific abilities and tolerances of various epiphytic lichens to accumulate N. Samples of tree bark were also collected; their N accumulation was related to the increased intensity of agriculture and to the presence of reduced N compounds (NHx) in the atmosphere. Nitrophytic lichens (Xanthoria parietina, Physcia spp.) showed the strongest correlations with high agriculture-related N deposition. In addition, the main N sources were revealed with the help of δ15N values along a gradient of altitude and areas affected by different types of land use (NH3 density classes, livestock units and various deposition types). Furthermore, the first nationwide survey of Germany to compare lichens, mosses and tree bark samples as biomonitors for N deposition revealed that lichens are clearly the most meaningful monitor organisms in highly N-affected regions. Additionally, the study shows that dealing with different biomonitors is a difficult task due to their variety of N responses. The specific receptor surfaces of the indicators, and therefore their different strategies of N uptake, are responsible for the tissue N concentration of each organism group. It was also shown that δ15N values depend on the N origin and the specific N transformations in each organism system, so that a direct comparison between atmosphere and ecosystems is not possible. In conclusion, biomonitors, and especially epiphytic lichens, may serve as alternatives for obtaining a spatially representative picture of N deposition conditions.
Furthermore, bioindication with lichens is a cost-efficient alternative to physico-chemical measurements for comprehensively assessing the prevailing N doses and sources of N pools on a regional scale. At the very least, lichens can support on-site deposition instruments in the qualification and quantification of N deposition.
Time series archives of remotely sensed data offer many possibilities to observe and analyse dynamic environmental processes at the Earth's surface. Based on these hypertemporal archives, which offer continuous observations of vegetation indices, typically at repetition rates of one to two weeks, sets of phenological parameters or metrics can be derived. Examples of such parameters are the beginning and end of the annual growing period, as well as its length. Even though these parameters do not correspond exactly to conventional observations of phenological events, they nevertheless provide indications of the dynamic processes occurring in the biosphere. The development of robust algorithms for the derivation of phenological metrics can be challenging. Currently, such algorithms are most commonly based on digital filters or the Fourier analysis of time series. Polynomial spline models offer a useful alternative to existing methods. The possibilities of using spline models in the analytical description of time series are numerous, and their specific mathematical properties may help to avoid known problems occurring with the more common methods for deriving phenological metrics. Based on a selection of different polynomial spline models suitable for the analysis of remotely sensed time series of vegetation indices, a method to derive various phenological parameters from such time series was developed and implemented in this work. Using an example data set from an intensively used agricultural area showing highly dynamic variations in vegetation phenology, the newly developed method was verified by comparing the results of the spline-based approach with the results of two alternative, well-established methods.
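The basic idea can be sketched in a few lines: fit a smoothing spline to an annual vegetation-index series and read phenological metrics off the fitted curve. This is only an illustrative sketch (synthetic NDVI data, an amplitude-threshold definition of season start and end; the thesis' spline models and metric definitions are more elaborate):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical annual NDVI time series (one observation every 16 days).
doy = np.arange(1, 366, 16).astype(float)
ndvi = 0.2 + 0.5 * np.exp(-((doy - 180.0) / 60.0) ** 2)        # idealised season
ndvi += np.random.default_rng(1).normal(0.0, 0.02, doy.size)   # sensor noise

# Smoothing spline as an analytical description of the time series.
spline = UnivariateSpline(doy, ndvi, k=3, s=0.01)

# One simple phenological metric: start/end of season as the days where
# the fitted curve crosses 50% of its seasonal amplitude.
grid = np.linspace(doy.min(), doy.max(), 1000)
fit = spline(grid)
threshold = fit.min() + 0.5 * (fit.max() - fit.min())
above = grid[fit >= threshold]
start_of_season, end_of_season = above.min(), above.max()
length_of_season = end_of_season - start_of_season
```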
The subject of this thesis is a homological approach to the splitting theory of PLS-spaces, i.e. to the question for which topologically exact short sequences 0 → X → Y → Z → 0 of PLS-spaces X, Y, Z the right-hand map admits a right inverse. We show that the category (PLS) of PLS-spaces and continuous linear maps is an additive category in which every morphism admits a kernel and a cokernel, i.e. it is pre-abelian. However, we also show that it is neither quasi-abelian nor semi-abelian. As a foundation for our homological constructions we show the more general result that every pre-abelian category admits a largest exact structure in the sense of Quillen. In the pre-abelian category (PLS) this exact structure consists precisely of the topologically exact short sequences of PLS-spaces. Using a construction of Ext-functors due to Yoneda, we show that one can define, for each PLS-space A and every natural number k, the k-th abelian-group-valued covariant and contravariant Ext-functors acting on the category (PLS), which induce for every topologically exact short sequence of PLS-spaces a long exact sequence of abelian groups and group morphisms. These functors are studied in detail and we establish a connection between the Ext-functors of PLS-spaces and the Ext-functors for LS-spaces. Through this connection we arrive at an analogue of a result for Fréchet spaces which connects the first derived functor of the projective limit with the first Ext-functor and also gives sufficient conditions for the vanishing of the higher Ext-functors. Finally, we show that Ext^k(E,F) = 0 for every k ≥ 1 whenever E is a closed subspace and F is a Hausdorff quotient of the space of distributions, which generalizes a result of Wengenroth that is itself a generalization of results due to Domanski and Vogt.
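For orientation, the long exact sequence referred to above has the familiar homological shape; schematically, for a fixed PLS-space A and a topologically exact sequence 0 → X → Y → Z → 0 (in LaTeX notation, with L(·,·) the space of continuous linear maps):

```latex
\[
0 \to L(Z,A) \to L(Y,A) \to L(X,A)
  \to \operatorname{Ext}^{1}(Z,A) \to \operatorname{Ext}^{1}(Y,A) \to \operatorname{Ext}^{1}(X,A)
  \to \operatorname{Ext}^{2}(Z,A) \to \cdots
\]
```

In particular, the splitting question reduces to exactness at L(X,X): the sequence splits precisely when the restriction map L(Y,X) → L(X,X) is surjective, which is guaranteed whenever Ext^1(Z,X) = 0.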
The goal of this thesis is to transfer the logarithmic barrier approach, which has led to very efficient interior-point methods for convex optimization problems in recent years, to convex semi-infinite programming problems. Based on a reformulation of the constraints into a nondifferentiable form, this can be done directly for convex semi-infinite programming problems with nonempty compact sets of optimal solutions. However, owing to the max-term involved, this reformulation leads to nondifferentiable barrier problems, which can be solved with an extension of a bundle method of Kiwiel. This extension makes it possible to deal with inexact objective values and subgradient information, which occur due to the inexact evaluation of the maxima. Nevertheless, we are able to prove convergence results similar to those for the logarithmic barrier approach in finite optimization. In the further course of the thesis, the logarithmic barrier approach is coupled with the proximal point regularization technique in order to solve ill-posed convex semi-infinite programming problems as well. Moreover, this coupled algorithm generates sequences converging to an optimal solution of the given semi-infinite problem, whereas the pure logarithmic barrier approach only produces sequences whose accumulation points are such optimal solutions. If certain additional conditions are fulfilled, we are further able to prove convergence rate results up to linear convergence of the iterates. Finally, besides hints for the implementation of the methods, we present numerous numerical results for model examples as well as applications in finance and digital filter design.
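To make the reformulation concrete: a convex SIP min f(x) s.t. g(x,t) ≤ 0 for all t in T becomes min f(x) s.t. G(x) := max over t in T of g(x,t) ≤ 0, and the barrier subproblems minimize f(x) − μ·log(−G(x)). The following is only a minimal sketch under simplifying assumptions (a toy problem, a grid approximation of the max-term as the source of inexactness, and a generic derivative-free minimizer standing in for the bundle method used in the thesis):

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex SIP:  min  (x1 - 1)^2 + (x2 - 1)^2
#                  s.t. x1 * t + x2 - t**2 <= 0   for all t in [0, 1].
T = np.linspace(0.0, 1.0, 200)   # grid => inexact evaluation of the max-term

def G(x):
    # Nondifferentiable max-term aggregating the infinitely many constraints.
    return np.max(x[0] * T + x[1] - T**2)

def barrier(x, mu):
    g = G(x)
    if g >= 0.0:                 # outside the strictly feasible region
        return np.inf
    return (x[0] - 1.0)**2 + (x[1] - 1.0)**2 - mu * np.log(-g)

# Classical barrier path-following: drive mu -> 0 from a strictly
# feasible starting point (here x = (-1, -1), where G(x) < 0).
x = np.array([-1.0, -1.0])
for mu in [1.0, 0.1, 0.01, 0.001]:
    res = minimize(lambda z: barrier(z, mu), x, method="Nelder-Mead")
    x = res.x
```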
A matrix A is called completely positive if there exists an entrywise nonnegative matrix B such that A = BB^T. These matrices can be used to obtain convex reformulations of, for example, nonconvex quadratic or combinatorial problems. One of the main problems with completely positive matrices is checking whether a given matrix is completely positive. This is known to be NP-hard in general.

For a given completely positive matrix A, it is nontrivial to find a cp-factorization A = BB^T with nonnegative B, since this factorization would provide a certificate for the matrix to be completely positive. This factorization is not only important for membership of the completely positive cone; it can also be used to recover the solution of the underlying quadratic or combinatorial problem. In addition, it is not a priori known how many columns are necessary to generate a cp-factorization for a given matrix. The minimal possible number of columns is called the cp-rank of A, and it is still an open question how to derive the cp-rank of a given matrix. Some facts on completely positive matrices and the cp-rank are given in Chapter 2. Moreover, in Chapter 6, we present a factorization algorithm which, for a given completely positive matrix A and a suitable starting point, computes the nonnegative factorization A = BB^T. The algorithm thereby returns a certificate for the matrix to be completely positive. As introduced in Chapter 3, the fundamental idea of the factorization algorithm is to start from an initial square factorization, which is not necessarily entrywise nonnegative, and to extend this factorization to a matrix whose number of columns is greater than or equal to the cp-rank of A. The goal is then to transform this generated factorization into a cp-factorization. This problem can be formulated as a nonconvex feasibility problem, as shown in Section 4.1, and solved by a method based on alternating projections, as proven in Chapter 6. On the topic of alternating projections, a survey is given in Chapter 5. Here we show how to apply this technique to several types of sets such as subspaces, convex sets, manifolds and semialgebraic sets. Furthermore, we present some known facts on the convergence rate of alternating projections between these types of sets. Considering more than two sets yields the so-called cyclic projections approach, for which some known facts for subspaces and convex sets are shown. Moreover, we present a new convergence result on cyclic projections among a sequence of manifolds in Section 5.4. In the context of cp-factorizations, a local convergence result for the introduced algorithm is given. This result is based on the known convergence of alternating projections between semialgebraic sets. To obtain cp-factorizations with this first method, it is necessary to solve a second-order cone problem in every projection step, which is very costly. Therefore, in Section 6.2, we introduce an additional heuristic extension, which improves the numerical performance of the algorithm. Extensive numerical tests in Chapter 7 show that the factorization method is very fast in most instances. In addition, we show how to derive a certificate for the matrix to be an element of the interior of the completely positive cone. As a further application, this method can be extended to find a symmetric nonnegative matrix factorization, where we consider an additional low-rank constraint.
Here again, the method to derive factorizations for completely positive matrices can be used, albeit with some further adjustments, introduced in Section 8.1. Moreover, we will see that even for the general case of deriving a nonnegative matrix factorization for a given rectangular matrix A, the key aspects of the completely positive factorization approach can be used. To this end, it becomes necessary to extend the idea of finding a completely positive factorization such that it can be used for rectangular matrices. This yields an applicable algorithm for nonnegative matrix factorization in Section 8.2. Numerical results for this approach will suggest that the presented algorithms and techniques to obtain completely positive matrix factorizations can be extended to general nonnegative factorization problems.
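The alternating-projections idea behind these factorization methods can be sketched in a few lines under strong simplifications (square factorization only, no column extension, and simple clipping plus an orthogonal Procrustes step instead of the second-order cone projections analysed in the thesis; this toy version may fail where the full algorithm succeeds):

```python
import numpy as np

def cp_factorization_sketch(A, max_iter=5000, tol=1e-10, seed=0):
    """Alternate between the set {B0 @ Q : Q orthogonal} (all square
    factorizations of A) and the cone of entrywise nonnegative matrices."""
    rng = np.random.default_rng(seed)
    B0 = np.linalg.cholesky(A)                    # initial factorization, A = B0 B0^T
    n = A.shape[0]
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))  # random orthogonal start
    B = B0 @ Q
    for _ in range(max_iter):
        C = np.maximum(B, 0.0)                    # projection onto nonnegative matrices
        U, _, Vt = np.linalg.svd(B0.T @ C)        # orthogonal Procrustes step:
        B = B0 @ (U @ Vt)                         # closest factorization of A to C
        if np.abs(np.minimum(B, 0.0)).max() < tol:
            return B                              # entrywise nonnegative: certificate
    return None                                   # no certificate (may need more columns)

# Example: a diagonally dominant doubly nonnegative matrix (known to be CP).
A = np.array([[4.0, 1.0, 1.0], [1.0, 3.0, 1.0], [1.0, 1.0, 5.0]])
B = cp_factorization_sketch(A)
if B is not None:
    assert np.allclose(B @ B.T, A)
```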
This dissertation is concerned with a novel type of branch-and-bound algorithm which differs from classical branch-and-bound algorithms in that branching is performed by adding nonnegative penalty terms to the objective function rather than by adding further constraints. The thesis establishes the theoretical correctness of this algorithmic principle for several general classes of problems and evaluates the method on several concrete problem classes. For these problem classes, namely monotone and non-monotone mixed-integer linear complementarity problems and mixed-integer linear programs, the thesis presents various problem-specific improvements and evaluates them numerically. Furthermore, the thesis compares the new method with several benchmark methods, with mostly good results, and gives an outlook on further fields of application and open research questions.
Two areas were selected to represent major process regimes of Mediterranean rangelands. In the County of Lagadas (Greece), situated east of the city of Thessaloniki, livestock grazing with sheep and goats is a major factor of the rural economy. In suitable areas, it is complemented by agricultural use. The region of Ayora (Spain) is located west of the city of Valencia and is one of the regions most affected by fires in Spain. First, long time series of satellite data were compiled for both regions on the basis of Landsat sensors, reaching back to 1976 (Ayora) and 1984 (Lagadas) with one image per year. Using a rigorous processing scheme, the data were geometrically and radiometrically corrected. Specific attention was given to an exact sensor calibration and the radiometric intercalibration of Landsat-TM and -MSS. Proportional cover of photosynthetically active vegetation was identified as a suitable quantitative indicator for assessing the state of rangelands and was inferred for all data sets using Spectral Mixture Analysis (SMA). The extensive database procured this way made it possible to map fire events in the Ayora area based on sequential diachronic sets and to provide fire dates, perimeters and fire recurrence for each pixel. The increasing fire frequency of the past decades is in large part attributed to the accelerated abandonment of the area, which leads to an encroachment of shrublands and the accumulation of combustible biomass. On the basis of the fire mapping results, a spatial and temporal stratification of the data set allowed plant recovery dynamics to be assessed at the landscape level through linear trend analysis. The long history of fire events in the Mediterranean frequently leads to processes of auto-succession: following an initial dominance of herbaceous vegetation, this commonly results in plant communities similar to those present before the fire. On a temporal axis, this produces typical exponential post-fire trajectories, which could also be shown in this study. The analysis of driving factors for post-fire dynamics confirmed the importance of aspect and slope. Locations with lower amounts of solar irradiation and favourable water supply yielded faster recovery rates and higher post-fire vegetation cover levels. In most cases, the vegetation cover levels observed before the fire were not reached within the post-fire observation period. In the area of Lagadas, linear trend analysis and additional statistical parameters were used to infer a degradation index, which illustrates a complex pattern of stability, regeneration and degradation of vegetation cover. These different processes and states are found in close proximity and are clearly determined by topography and elevation. A sequence of analyses showed that in particular steep, narrow valleys exhibit positive trends, while negative trends are more abundant in plain or gently undulating areas. Considering the local grazing regime, this spatial differentiation was related to the accessibility of specific locations. Subsequently, animal numbers at community level were used to calculate efficient stocking rates and to assess the temporal development of their relation with vegetation cover. The calculation of temporal trajectories illustrated that only some communities show the expected negative relation; on the contrary, a positive relation or even changing relation patterns are observed.
This signifies recent concentration and intensification processes in the grazing scheme, as a result of which animals are kept in sheds where additional feedstuffs are provided. In these cases, free roaming of livestock is often confined to a few hours every day, which explains the shepherds' spatial preference for easily accessible areas. Beyond these temporal trends, it was analysed whether the grazing pattern is equally reflected in a spatial trend. Making use of available geospatial information layers, the effort required to reach each location was expressed as a cost. Cost zones could then be defined and woody vegetation cover inferred as a grazing indicator for the different zones. Animal sheds, which could be mapped from very high spatial resolution Quickbird image data, were employed as starting features for this piospheric analysis. The result was a clearly structured gradient of increasing woody vegetation cover with increasing cost distance. On the basis of these two pilot studies, the elements of a monitoring and interpretation framework identified at the beginning of the work were evaluated and a formal interpretation scheme was presented.
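Spectral Mixture Analysis, as used above to derive vegetation cover fractions, models each pixel spectrum as a nonnegative linear combination of endmember spectra. A minimal sketch with entirely hypothetical endmember values (the study's endmembers and band set differ):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (rows: bands, columns: endmembers), e.g.
# photosynthetically active vegetation, bare soil, and shade.
E = np.array([
    [0.05, 0.25, 0.01],   # red
    [0.45, 0.30, 0.02],   # near infrared
    [0.25, 0.40, 0.02],   # shortwave infrared 1
    [0.15, 0.35, 0.01],   # shortwave infrared 2
])

pixel = np.array([0.18, 0.38, 0.30, 0.24])  # observed reflectance of one pixel

# Nonnegatively constrained least squares gives the cover fractions;
# normalising enforces the (approximate) sum-to-one constraint.
fractions, residual = nnls(E, pixel)
fractions /= fractions.sum()
```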
The following dissertation contains three studies examining academic boredom development in five high-track German secondary schools (AVG-project data; Study 1: N = 1,432; Study 2: N = 1,861; Study 3: N = 1,428). The investigation period spanned 3.5 years, with four waves of measurement from grades 5 to 8 (T1: 5th grade, after transition to secondary school; T2: 5th grade, after mid-term evaluations; T3: 6th grade, after mid-term evaluations; T4: 8th grade, after mid-term evaluations). All three studies featured cross-sectional and longitudinal analyses, separating and comparing the subject domains of mathematics and German.
Study 1 investigated the factorial structure of academic boredom alongside correlational and reciprocal relations between different forms of boredom and academic self-concept. Analyses included reciprocal effects models and latent correlation analyses. Results indicated that boredom intensity, boredom due to underchallenge, and boredom due to overchallenge are separable, correlated factors. Evidence for reciprocal relations between boredom and academic self-concept was limited.
Study 2 examined the effectiveness and efficacy of full-time ability grouping as a boredom intervention directed at the intellectually gifted. Analyses included propensity score matching and latent growth curve modelling. Results pointed to limited effectiveness and efficacy of full-time ability grouping with regard to boredom reduction.
Study 3 explored gender differences in academic boredom development, mediated by academic interest, academic self-concept, and previous academic achievement. Analyses included measurement invariance testing and multiple-indicator multiple-cause (MIMIC) models. Results showed one-sided gender differences, with boys reporting less favorable boredom development than girls, even beyond the inclusion of relevant mediators.
Findings from all three studies were embedded into the theoretical framework of control-value theory (Pekrun, 2006; 2019; Pekrun et al., 2023). Limitations, directions for future research, and practical implications were acknowledged and discussed.
Overall, this dissertation yielded important insights into boredom’s conceptual complexity. This concerned factorial structure, developmental trajectories, interrelations to other learning variables, individual differences, and domain specificities.
Keywords: Academic boredom, boredom intensity, boredom due to underchallenge, boredom due to overchallenge, ability grouping, gender differences, longitudinal data analysis, control-value theory
Energy transport networks are among the most important infrastructures for the planned energy transition. They form the interface between energy producers and consumers, and their features make them good candidates for the tools that mathematical optimization can offer. Nevertheless, the operation of energy networks comes with two major challenges. First, the nonconvexity of the equations that model the physics in the network renders the resulting problems extremely hard to solve for large-scale networks. Second, the uncertainty associated with the behavior of the different agents involved, the production of energy, and the consumption of energy makes the resulting problems hard to solve if a representative description of the uncertainty is to be taken into account.
In this cumulative dissertation we study adaptive refinement algorithms designed to cope with the nonconvexity and stochasticity of equations arising in energy networks. Adaptive refinement algorithms approximate the original problem by sequentially refining the model of a simpler optimization problem. More specifically, in this thesis, the focus of the adaptive algorithm is on adapting the discretization and description of a set of constraints.
In the first part of this thesis, we propose a generalization of the different adaptive refinement ideas that we study. We sequentially describe model catalogs, error measures, marking strategies, and switching strategies that are used to set up the adaptive refinement algorithm. Afterward, the effect of the adaptive refinement algorithm on two energy network applications is studied. The first application treats the stationary operation of district heating networks. Here, the strength of adaptive refinement algorithms for approximating the ordinary differential equation that describes the transport of energy is highlighted. We introduce the resulting nonlinear problem, consider network expansion, and obtain realistic controls by applying the adaptive refinement algorithm. The second application concerns quantile-constrained optimization problems and highlights the ability of the adaptive refinement algorithm to cope with large scenario sets via clustering. We introduce the resulting mixed-integer linear problem, discuss generic solution techniques, make the link with the generalized framework, and measure the impact of the proposed solution techniques.
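The interplay of model catalogs, error measures, marking strategies, and switching strategies can be illustrated with a deliberately small example: adaptively refining a piecewise-linear approximation of a nonlinear constraint function only where the current model is too coarse (the function, tolerance, and strategies below are illustrative, not taken from the thesis):

```python
import numpy as np

# Adaptive refinement of a piecewise-linear model of a nonlinear relation
# y = f(x): refine only where the current model's error is largest.
f = lambda x: x**2              # stand-in for nonlinear network physics
breakpoints = [0.0, 1.0]        # coarsest model in the catalog
tol = 1e-3

while True:
    xs = np.array(sorted(breakpoints))
    # Error measure: deviation of f from its linear interpolant per segment,
    # sampled at segment midpoints.
    mids = 0.5 * (xs[:-1] + xs[1:])
    interp = 0.5 * (f(xs[:-1]) + f(xs[1:]))   # interpolant value at midpoints
    errors = np.abs(f(mids) - interp)
    if errors.max() <= tol:
        break
    # Marking strategy: refine every segment whose error exceeds half the max.
    for m, e in zip(mids, errors):
        if e > 0.5 * errors.max():
            breakpoints.append(float(m))      # switching: add a breakpoint

print(f"{len(breakpoints)} breakpoints needed for tolerance {tol}")
```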
The second part of this thesis assembles the papers that inspired the contents of the first part of this thesis. Hence, they describe in detail the topics that are covered and will be referenced throughout the first part.
Water-deficit stress, usually shortened to water or drought stress, is one of the most critical abiotic stressors limiting plant growth, crop yield and quality in food production. Today, agriculture consumes about 80-90% of the global freshwater used by humans, and about two thirds of this are used for crop irrigation. An increasing world population and a predicted rise of 1.0-2.5 °C in the annual mean global temperature as a result of climate change will further increase the demand for water in agriculture. Therefore, one of the most challenging tasks of our generation is to reduce the amount of water used per unit yield in order to satisfy the second UN Sustainable Development Goal and to ensure global food security. Precision agriculture offers new farming methods aimed at improving the efficiency of crop production through a sustainable use of resources. Plant responses to water stress are complex and co-occur with other environmental stresses under natural conditions. In general, water stress causes physiological and biochemical changes in the plant that depend on the severity and duration of the actual plant water deficit. Stomatal closure is one of the first responses to plant water stress, causing a decrease in plant transpiration and thus an increase in plant temperature. Prolonged or severe water stress leads to irreversible damage to the photosynthetic machinery and is associated with decreasing chlorophyll content and leaf structural changes (e.g., leaf rolling). Since a crop can already be irreversibly damaged by only mild water deficit, pre-visual detection of water stress symptoms is essential to avoid yield loss. Remote sensing offers a non-destructive and spatio-temporal method for measuring numerous physiological, biochemical and structural crop characteristics at different scales and is thus one of the key technologies used in precision agriculture. With respect to the detection of plant responses to water stress, current state-of-the-art hyperspectral remote sensing imaging techniques are based on measurements of thermal infrared emission (TIR; 8-14 µm), visible, near- and shortwave infrared reflectance (VNIR/SWIR; 0.4-2.5 µm), and sun-induced fluorescence (SIF; 0.69 and 0.76 µm). It is, however, still unclear how sensitive these techniques are with respect to water stress detection. Therefore, the overall aim of this dissertation was to provide a comparative assessment of remotely sensed measures from the TIR, SIF, and VNIR/SWIR domains regarding their ability to detect plant responses to water stress at ground and airborne level. The main findings of this thesis are: (i) temperature-based indices (e.g., CWSI) were most sensitive for the detection of plant water stress in comparison to reflectance-based VNIR/SWIR indices (e.g., PRI) and SIF at both ground and airborne level; (ii) for the first time, spectral emissivity as measured by the new hyperspectral TIR instrument could be used to detect plant water stress at ground level. Based on these findings it can be stated that hyperspectral TIR remote sensing offers great potential for the detection of plant responses to water stress at ground and airborne level based on both TIR key variables, surface temperature and spectral emissivity.
However, the large-scale application of water stress detection based on hyperspectral TIR measures in precision agriculture will be challenged by several problems: (i) missing thresholds of temperature-based indices (e.g., CWSI) for the application in irrigation scheduling, (ii) lack of current TIR satellite missions with suitable spectral and spatial resolution, (iii) lack of appropriate data processing schemes (including atmosphere correction and temperature emissivity separation) for hyperspectral TIR remote sensing at airborne- and satellite level.
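For reference, the temperature-based index named above follows the classical normalization of canopy temperature between a fully transpiring and a non-transpiring reference surface. A minimal sketch (the reference temperatures here are invented and would in practice be measured or modelled; as noted above, operational thresholds remain an open issue):

```python
import numpy as np

def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index: 0 = fully transpiring, 1 = non-transpiring.

    t_wet and t_dry are the temperatures of a well-watered and of a
    non-transpiring reference surface under the same conditions."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Example: canopy at 28 C, references at 24 C (wet) and 36 C (dry).
print(cwsi(np.array([28.0]), 24.0, 36.0))   # -> [0.333...]
```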
Death is perceived as a severe threat to the self. Although it is certain that everyone has to die, people usually do not think about the finiteness of their life. Everything reminding them of death is ignored or rationalized, and death-related thoughts and fears are pushed out of mind (TMT; Pyszczynski et al., 1999). However, people differ in their ability to regulate negative affect and to access their self-system (Kuhl, 2001). As death is assumed to arouse existential fears, the ability to regulate such fears is particularly important, and higher self-access could be relevant in defending central personal values. This thesis aimed at demonstrating existential fears under mortality salience and the effects of self-regulation of affect under mortality salience. Two studies (Chapter 2) demonstrated implicit negative affect under mortality salience. An additional study (Chapter 3) shows the effects of self-regulation on implicit negative affect, whereas four studies in Chapter 4 revealed differences in self-access under mortality salience depending on people's ability to self-regulate negative affect.
This work addresses the algorithmic tractability of hard combinatorial problems. Basically, we consider NP-hard problems, for which we cannot expect to find polynomial-time algorithms. Several algorithmic approaches exist to deal with this dilemma, among them (randomized) approximation algorithms and heuristics. Even though in practice they often run in reasonable time, they usually do not return an optimal solution. If we insist on optimality, only two methods remain: exponential-time algorithms and parameterized algorithms. In the first approach we seek to design algorithms with exponential run time that are more clever than the trivial algorithm (which simply enumerates all solution candidates). Typically, the naive enumerative approach yields an algorithm with run time O*(2^n). So the general task is to construct algorithms with a run time of the form O*(c^n) with c < 2. The second approach considers an additional parameter k besides the input size n. This parameter should provide more information about the problem and capture a typical characteristic. The standard parameterization is to see k as an upper (resp. lower) bound on the solution size in case of a minimization (resp. maximization) problem. A parameterized algorithm should then solve the problem in time f(k)·n^β, where β is a constant and f is independent of n. In principle, this method aims to confine the combinatorial difficulty of the problem to the parameter k (if possible). The basic hypothesis is that k is small with respect to the overall input size. In both fields a frequent standard technique is the design of branching algorithms. These algorithms solve the problem by traversing the solution space in a clever way. They typically select an entity of the input and create two new subproblems: one where this entity is considered part of the future solution and another where it is excluded from it. In both cases, fixing this entity may fix further entities, so the number of candidate solutions traversed is smaller than the whole solution space. The visited solutions can be arranged as a search tree. To estimate the run time of such algorithms, a method is needed to obtain tight upper bounds on the size of these search trees. In the field of exponential-time algorithms, a powerful technique called Measure&Conquer has been developed for this purpose. It has been applied successfully to many problems, especially to problems where other algorithmic attacks could not break the trivial run time upper bound. In the field of parameterized algorithms, on the other hand, Measure&Conquer is almost unknown. This work presents examples where the technique can be used in this field and points out which adaptations are necessary to apply it successfully. Further, exponential-time algorithms for hard problems to which Measure&Conquer is applied are presented. Another contribution is a formalization (and generalization) of the notion of a search tree; it is shown that for certain problems such a formalization is extremely useful.
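As a concrete instance of such a branching algorithm with the standard parameterization, consider the textbook search-tree algorithm for k-Vertex Cover (the canonical illustration, not one of the problems treated in this work): every edge must be covered by one of its two endpoints, giving two branches per step and a search tree of size O(2^k).

```python
# Classic branching for k-Vertex Cover: search-tree size O(2^k).
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists."""
    edges = [e for e in edges]          # remaining uncovered edges
    if not edges:
        return set()                    # nothing left to cover
    if k == 0:
        return None                     # edges remain but budget is spent
    u, v = edges[0]                     # pick any uncovered edge (u, v);
    for w in (u, v):                    # any cover must contain u or v
        rest = [e for e in edges if w not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

# Example: a 5-cycle has a vertex cover of size 3 but none of size 2.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(vertex_cover(cycle, 2))   # None
print(vertex_cover(cycle, 3))   # some cover of size 3, e.g. {0, 1, 3}
```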
The publication of statistical databases is subject to legal regulations, e.g. national statistical offices are only allowed to publish data if the data cannot be attributed to individuals. Achieving this privacy standard requires anonymizing the data prior to publication. However, data anonymization inevitably leads to a loss of information, which should be kept minimal. In this thesis, we analyze the anonymization method SAFE used in the German census in 2011 and we propose a novel integer programming-based anonymization method for nominal data.
In the first part of this thesis, we prove that a fundamental variant of the underlying SAFE optimization problem is NP-hard. This justifies the use of heuristic approaches for large data sets. In the second part, we propose a new anonymization method belonging to microaggregation methods, specifically designed for nominal data. This microaggregation method replaces rows in a microdata set with representative values to achieve k-anonymity, ensuring each data row is identical to at least k − 1 other rows. In addition to the overall dissimilarities of the data rows, the method accounts for errors in resulting frequency tables, which are of high interest for nominal data in practice. The method employs a typical two-step structure: initially partitioning the data set into clusters and subsequently replacing all cluster elements with representative values to achieve k-anonymity. For the partitioning step, we propose a column generation scheme followed by a heuristic to obtain an integer solution, which is based on the dual information. For the aggregation step, we present a mixed-integer problem formulation to find cluster representatives. To this end, we take errors in a subset of frequency tables into account. Furthermore, we show a reformulation of the problem to a minimum edge-weighted maximal clique problem in a multipartite graph, which allows for a different perspective on the problem. Moreover, we formulate a mixed-integer program, which combines the partitioning and the aggregation step and aims to minimize the sum of chi-squared errors in frequency tables.
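To make the two-step structure concrete, here is a deliberately tiny toy version of microaggregation for nominal data (greedy partitioning and per-column modes; the thesis instead uses column generation for the partitioning step and mixed-integer programs for the aggregation step):

```python
from collections import Counter

# Toy two-step microaggregation for nominal data (illustrative only).
rows = [("m", "DE"), ("m", "DE"), ("f", "DE"),
        ("f", "FR"), ("f", "FR"), ("m", "FR")]
k = 3

# Step 1 (partitioning): greedily fill clusters of size k with similar rows
# (here: sorted order serves as a crude similarity proxy).
ordered = sorted(rows)
clusters = [ordered[i:i + k] for i in range(0, len(ordered), k)]

# Step 2 (aggregation): replace every row of a cluster by a representative,
# here the per-column mode, so each published row occurs >= k times.
anonymized = []
for cluster in clusters:
    representative = tuple(
        Counter(col).most_common(1)[0][0] for col in zip(*cluster)
    )
    anonymized.extend([representative] * len(cluster))
print(anonymized)
```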
Finally, an experimental study comparing the methods covered or developed in this work shows particularly strong results for the proposed method with respect to relative criteria, while SAFE shows its strength with respect to the maximum absolute error in frequency tables. We conclude that the inclusion of integer programming in the context of data anonymization is a promising direction to reduce the inevitable information loss inherent in anonymization, particularly for nominal data.
This dissertation addresses the question of whether and how intersectionality, as an analytical perspective on literary texts, constitutes a useful complement to ethnically ordered literary fields. This question is examined through the analysis of three contemporary Chinese-Canadian novels.
The introduction discusses the relevance of the two thematic fields, intersectionality and Asian-Canadian literature. The following chapter offers a historical overview of Chinese-Canadian immigration and discusses its literary production in detail. It shows that, although cultural goods also emerge to articulate relations of inequality based on ascribed ethnicity, a drive towards diversification can be identified within the literary community of Chinese-Canadian authors. The third chapter is devoted to the term "intersectionality" and, after situating the concept historically in its origins in Black feminism, presents intersectionality as a binding element between postcolonialism, diversity, and empowerment, concepts of particular relevance for the analysis of (Canadian) literature in this dissertation. The role of intersectionality in literary studies is then taken up. The subsequent exemplary analyses of Kim Fu's For Today I Am a Boy, Wayson Choy's The Jade Peony, and Yan Li's Lily in the Snow illustrate the preceding methodological considerations. Each of the three novels is first contextualized as Chinese-Canadian, alongside existing scholarship that questions this classification. After a plot summary, an intersectional analysis follows at the level of content, divided into the familial and the wider social sphere, since the mechanisms of hierarchy within these spheres differ or mutually reinforce each other, as the analyses show. The formal analysis with an intersectional focus is then examined more closely in a separate subchapter. A third subchapter is devoted to an aspect specific to each novel that is of particular relevance in connection with an intersectional analysis. The thesis closes with an overarching conclusion that summarizes the most important findings of the analyses, together with further reflections on the implications of this dissertation, above all with regard to so-called Canadian "master narratives", which have far-reaching contextual relevance for working with literary texts and which an intersectional literary approach may in future productively complement.
A huge number of clinical studies and meta-analyses have shown that psychotherapy is effective on average. However, not every patient benefits from psychotherapy, and some patients even deteriorate in treatment. Because of this and the limited generalizability of clinical studies to clinical practice, a more patient-focused research strategy has emerged, centered on the question whether a particular treatment works for an individual case. The use of repeated assessments and the feedback of this information to therapists is a major ingredient of patient-focused research. Improving patient outcomes and reducing dropout rates through psychometric feedback seems a promising path, yet therapists appear to differ in the degree to which they make use of and profit from such feedback systems. This dissertation aims to better understand therapist differences in the context of patient-focused research and the impact of therapists on psychotherapy. Three studies are included, each focusing on a different aspect of the field:
Study I (Chapter 5) investigated how therapists use psychometric feedback in their work with patients and how much therapists differ in their usage. Data from 72 therapists treating 648 patients were analyzed. It could be shown that therapists used the psychometric feedback for most of their patients. Substantial variance in the use of feedback (between 27% and 52%) was attributable to therapists. Therapists were more likely to use feedback when they reported being satisfied with the graphical information they received. The results therefore indicated that not only patient characteristics or treatment progress affected the use of feedback.
Study II (Chapter 6) picked up on the idea of analyzing systematic differences in therapists and applied it to the criterion of premature treatment termination (dropout). To answer the question whether therapist effects occur in terms of patients’ dropout rates, data from 707 patients treated by 66 therapists were investigated. It was shown that approximately six percent of variance in dropout rates could be attributed to therapists, even when initial impairment was controlled for. Other predictors of dropout were initial impairment, sex, education, personality styles, and treatment expectations.
Study III (Chapter 7) extends the dissertation by investigating the impact of a transfer from one therapist to another within ongoing treatments. Data from 124 patients who agreed to and experienced a transfer during their treatment were analyzed. A significant drop in patient-rated as well as therapist-rated alliance levels could be observed after a transfer. On average, there seemed to be no difficulties establishing a good therapeutic alliance with the new therapist, although differences between patients were observed. There was no increase in symptom severity due to therapy transfer. Various predictors of alliance and symptom development after transfer were investigated. Impacts on clinical practice were discussed.
Results of the three studies are discussed and general conclusions are drawn. Implications for future research as well as their utility for clinical practice and decision-making are presented.
Water-related regulating and provisioning ecosystem services (ESS) were investigated with respect to the flow regime and groundwater recharge in the Palatinate Forest Biosphere Reserve in southwestern Germany by means of hydrological modelling with the Soil and Water Assessment Tool (SWAT+). A holistic approach was followed in which indicators of functional and structural ecological processes are assigned to the ESS. Potential risk factors for the deterioration of water-related forest ESS were analysed with regard to their effects on hydrological processes: soil compaction caused by heavy machinery during timber harvesting; disturbed areas under regeneration, resulting either from silvicultural management practices or from windthrow, pests and calamities in the course of climate change; and climate change itself as a major stressor for forest ecosystems. For each of these factors, separate SWAT+ model scenarios were created and compared with the calibrated base model, which represented current watershed conditions based on field data. The simulations confirmed favourable conditions for groundwater recharge in the Palatinate Forest. Owing to the high infiltration capacity of the soils derived from weathered Bunter sandstone and the delaying and buffering influence of the tree canopy on precipitation, a significant mitigating effect on surface runoff generation and a pronounced spatial and temporal retention potential were simulated for the catchment. It was found that increased precipitation amounts exceeding the infiltration capacity of the sandy soils lead to a rapid runoff response with pronounced surface runoff peaks. The simulations revealed interactions between forest and water cycle as well as the hydrological effects of climate change, degraded soil functions and age-related stand structures linked to differences in canopy development. Future climate projections simulated with bias-corrected REKLIES and EURO-CORDEX regional climate models (RCMs) predicted a higher evaporative demand and a longer growing season combined with more frequent drought periods within the growing season, which shortens the period available for groundwater recharge and consequently led to a projected decline in groundwater recharge rates by the middle of the century. Owing to the strong correlation with precipitation intensities and the duration of precipitation events, and despite all uncertainties in their prediction, surface runoff generation was projected to increase by the end of the century.
For the simulation of soil compaction, the soil bulk density and the SCS curve number in SWAT+ were adjusted according to data from machine-traffic experiments in the area. The favourable infiltration conditions and the relatively low susceptibility to compaction of the coarse-grained weathered Bunter sandstone dominated the hydrological effects at watershed level, so that only moderate deteriorations of water-related ESS were indicated. The simulations further showed a clear influence of soil texture on the hydrological response after soil compaction on skid trails, supporting the assumption that the susceptibility of soils to compaction increases with the proportion of silt and clay particles. Increased surface runoff generation resulted from the road network across the study area.
Disturbed areas under stand regeneration were simulated with an artificial model within a subcatchment, assuming three-year-old tree seedlings over a development period of 10 years, and compared with mature stands (30 to 80 years) with respect to specific water balance components. The simulation suggested that, in the absence of canopy cover, the hydrologically delaying effect of the stands is impaired, which favours the generation of surface runoff and promotes slightly higher deep percolation. Hydrological differences between the closed canopy of mature stands and young stands with near-open-field precipitation conditions were determined by the dominant factors atmospheric evaporative demand, precipitation amount and degree of canopy cover. The less developed the canopy of regenerating stands compared to mature stands, the higher the atmospheric evaporative demand and the lower the incoming precipitation, the larger the hydrological difference between the stand types.
Improvement measures for decentralized flood protection should therefore take critical source areas of runoff generation in forests (CSA) into account. The high sensitivity and vulnerability of forests to deteriorating ecosystem conditions suggest that preserving their complex fabric and intact interrelations, particularly under the challenge of climate change, requires carefully adapted protection measures, efforts to identify CSA, and the preservation and restoration of hydrological continuity in forest stands.
This dissertation includes three research articles on the portfolio risks of private investors. In the first article, we analyze a large data set of Swiss private banking portfolios of a major bank, with the unique feature that some of the portfolios were managed by the bank while others were advisory portfolios. To account for the heterogeneity of individual investors, we apply a mixture model and a cluster analysis. Our results suggest that there is indeed a substantial group of advised individual investors that outperform the bank-managed portfolios, at least after fees. However, a simple passive strategy that invests in the MSCI World and a risk-free asset significantly outperforms both the better advisory portfolios and the bank-managed portfolios. The new EU regulation for financial products (UCITS IV) prescribes Value at Risk (VaR) as the benchmark for assessing the risk of structured products. The second article discusses the limitations of this approach and shows that, in theory, the expected return of structured products can be unbounded while the VaR requirement for the lowest risk class is still satisfied. Real-life examples of large returns within the lowest risk class are then provided. The results demonstrate that the new regulation could lead to new, seemingly safe products that hide large risks. Behavioral investors who choose products based only on their official risk classes and their expected returns will therefore invest in suboptimal products. To overcome these limitations, we suggest a new risk-return measure for financial products based on the martingale measure that could close such loopholes. Under the mean-VaR framework, the third article discusses the impact of the underlying's first four moments on the structured product. By expanding the expected return and the VaR of a structured product in terms of the underlying's moments, it is possible to investigate each moment's impact on them simultaneously. The results are tested by Monte Carlo simulation and historical simulation. The findings show that for the majority of structured products, underlyings with large positive skewness are preferred. The preferences for variance and kurtosis are ambiguous.
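The VaR benchmark discussed in the second article can be illustrated with a minimal historical-simulation estimator (hypothetical return data). Note that, as the article argues, this quantile says nothing about the size of outcomes beyond it, which is exactly the loophole that seemingly safe products can exploit:

```python
import numpy as np

def value_at_risk(returns, alpha=0.99):
    """Historical-simulation VaR: the loss threshold exceeded with
    probability 1 - alpha in the empirical return distribution."""
    return -np.quantile(returns, 1.0 - alpha)

# Hypothetical daily returns of a structured product.
rng = np.random.default_rng(42)
returns = rng.normal(0.0003, 0.01, size=2500)
print(f"99% one-day VaR: {value_at_risk(returns):.4f}")
```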
The overall objective of this thesis was to gain a deeper understanding of the antecedents, processes, and manifestations of uniqueness-driven consumer behavior. To achieve this goal, five studies were conducted in Germany and Switzerland with a total of 1048 participants across different demographic and socio-economic backgrounds. Two concepts were employed in all studies: Consumer need for uniqueness (CNFU) and general uniqueness perception (GUP). CNFU (Tian, Bearden, & Hunter, 2001), a mainly US-based consumer research concept, measures the individual need, and thus the motivation, to acquire, use, and dispose of consumer goods in order to develop a unique image. GUP, adapted from the two-component theory of individuality (Kampmeier, 2001), represents a global and direct measure of self-ascribed uniqueness. Study #1 looked at the interrelation of the uniqueness-driven concepts. To this end, GUP and CNFU were employed in the study as potential psychological factors that influence and predict uniqueness-driven consumer behavior. Different behavioral measures were used: The newly developed possession of individualized products (POIP), the newly developed products for uniqueness display (PFUD), and the already established uniqueness-enhancing behaviors (UEB). Analyses showed that CNFU mediates the relationship between GUP and the behavioral measures in a German-speaking setting. Thus, GUP (representing self-perception) was identified as the driver behind CNFU (representing motivation) and the actual consumer behavior. Study #2 examined further manifestations of uniqueness-driven consumer behavior. For this purpose, an extreme form of uniqueness-increasing behavior was researched: tattooing. The influence of GUP and CNFU on tattooing behavior was investigated using a sample derived from a tattoo exhibition. To do so, a newly developed measure to determine the percentage of the body covered by tattoos was employed. It was revealed that individuals with higher GUP and CNFU levels indeed have a higher degree of tattooing. Study #3 further explored the predictive possibilities and limitations of the GUP and CNFU concepts. On the one hand, study #3 specifically looked at the consumption of customized apparel products, as mass customization is said to become the standard of the century (Piller & Müller, 2004). It was shown that individuals with higher CNFU levels not only purchased more customized apparel products in the last six months, but also spent more money on them. On the other hand, uniqueness-enhancing activities (UEA), such as travel to exotic places or extreme sports, were investigated by using a newly developed 30-item scale. It was revealed that CNFU partly mediates the GUP and UEA relationship, proving that CNFU indeed predicts a broad range of consumer behaviors and that GUP is the driver behind the need and the behavior. Study #4 entered new terrain. In contrast to the previous three studies, it explored the so-termed "passive" side of uniqueness-seeking in the consumer context. Individuals might feel unique because business companies treat them in a special way. Such a unique customer treatment (UCT) involves activities like customer service or customer relationship management. Study #4 investigated whether individuals differ in their need for such a treatment. Hence, with the need for unique customer treatment (NFUCT), a new uniqueness-driven consumer need was introduced and its impact on customer loyalty examined.
Analyses, for example, revealed that individuals with high NFUCT levels receiving a high unique customer treatment (UCT) showed the highest customer loyalty, whereas the lowest customer loyalty was found among those individuals with high NFUCT levels receiving a low unique customer treatment (UCT). Study #5 mainly examined the processes behind uniqueness-driven consumer behavior. Here, not only psychological but also situational influences were examined. This study investigated the impact of a non-personal "indirect" uniqueness manipulation on the consumption of customized apparel products while simultaneously controlling for the influence of GUP and CNFU. To this end, two equivalent experimental groups were created, which then received an e-mail with either a "pro-individualism" campaign or a "pro-collectivism" campaign especially developed for study #5. The experiment revealed that individuals receiving the "pro-individualism" poster campaign, which told participants that uniqueness is socially appropriate and desired, were willing to spend more money on customization options compared to individuals receiving the "pro-collectivism" poster campaign. Hence, not only psychological antecedents such as GUP and CNFU influence uniqueness-driven consumer behavior, but also situational factors.
This thesis is divided into three main parts: the description of the calibration problem, the numerical solution of this problem, and the connection to optimal stochastic control problems. Fitting model prices to given market prices leads to an abstract least squares formulation as calibration problem. The corresponding option price can be computed by solving a stochastic differential equation via the Monte Carlo method, which seems to be preferred by most practitioners. Since the Monte Carlo method is computationally expensive and memory-intensive, more sophisticated stochastic predictor-corrector schemes are established in this thesis. The numerical advantage of these predictor-corrector schemes is presented and discussed. The adjoint method is applied to the calibration. The theoretical advantage of the adjoint method is discussed in detail. It is shown that the computational effort of gradient calculation via the adjoint method is independent of the number of calibration parameters. Numerical results confirm the theoretical results and summarize the computational advantage of the adjoint method. Furthermore, the connection to optimal stochastic control problems is established in this thesis.
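In generic notation (chosen here for illustration; the thesis's own formulation may differ), the least-squares calibration and the adjoint gradient computation can be sketched as follows, where E(y, θ) = 0 stands for the discretized pricing/state equation and the gradient costs only one extra linear adjoint solve, independent of dim θ:

```latex
% Abstract least-squares calibration to market prices C_i^mkt:
\min_{\theta\in\Theta} \; J(\theta)
  = \tfrac12 \sum_{i=1}^{m} \bigl( C_i(y(\theta)) - C_i^{\mathrm{mkt}} \bigr)^2
\quad\text{s.t.}\quad E\bigl(y(\theta),\theta\bigr) = 0 .
% Adjoint method: solve one linear adjoint equation for \lambda, then
% assemble the full gradient without any per-parameter solves:
\bigl(\partial_y E\bigr)^{\!\top} \lambda = -\,\nabla_y J , \qquad
\nabla_\theta J = \bigl(\partial_\theta E\bigr)^{\!\top} \lambda .
```

A finite-difference gradient would instead require one perturbed forward solve per calibration parameter, which is exactly the cost the adjoint approach avoids.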
Our goal is to approximate energy forms on suitable fractals by discrete graph energies and certain metric measure spaces, using the notion of quasi-unitary equivalence. Quasi-unitary equivalence generalises the two concepts of unitary equivalence and norm resolvent convergence to the case of operators and energy forms defined in varying Hilbert spaces.
More precisely, we prove that the canonical sequence of discrete graph energies (associated with the fractal energy form) converges to the energy form (induced by a resistance form) on a finitely ramified fractal in the sense of quasi-unitary equivalence. Moreover, we allow a perturbation by magnetic potentials and we specify the corresponding errors.
The aforementioned approach approximates the fractal from within (by an increasing sequence of finitely many points). The natural next step is the question whether one can also approximate fractals from outside, i.e., by a suitable sequence of shrinking supersets. We partly answer this question by restricting ourselves to a very specific structure of the approximating sets, namely so-called graph-like manifolds that respect the structure of the fractals and of the underlying discrete graphs, respectively. Again, we show that the canonical (properly rescaled) energy forms on such a sequence of graph-like manifolds converge to the fractal energy form (in the sense of quasi-unitary equivalence).
From the quasi-unitary equivalence of energy forms, we conclude the convergence of the associated linear operators, convergence of the spectra and convergence of functions of the operators – thus essentially the same as in the case of the usual norm resolvent convergence.
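For orientation, one simplified version of the conditions behind δ-quasi-unitary equivalence of two energy forms (E, H) and (Ẽ, H̃) reads roughly as follows; the precise constants and domains vary in the literature (cf. Post's framework), so this is a sketch rather than the thesis's exact definition:

```latex
% Identification operators J : H -> \tilde H and J' : \tilde H -> H,
% with form norm \|f\|_E^2 := \|f\|^2 + E(f) (analogously for \tilde E):
\lvert \langle Jf, u \rangle - \langle f, J'u \rangle \rvert
   \le \delta \, \lVert f \rVert \, \lVert u \rVert ,
\qquad
\lVert f - J'Jf \rVert \le \delta \, \lVert f \rVert_{E} ,
\qquad
\lVert u - JJ'u \rVert \le \delta \, \lVert u \rVert_{\tilde E} ,
% together with a compatibility condition on the forms themselves,
% where J_1, J_1' denote identification operators on the form domains:
\lvert \tilde E(J_1 f, u) - E(f, J_1' u) \rvert
   \le \delta \, \lVert f \rVert_{E} \, \lVert u \rVert_{\tilde E} .
```

Letting δ → 0 along a sequence of spaces then yields the convergence statements above, mirroring norm resolvent convergence in a fixed Hilbert space.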
Stress-related disorders are continuously increasing. It is not yet clear whether stress also promotes breast cancer. This dissertation provides an analysis of the current state of research and focuses on the significance of pre-/postnatal stress factors and chronic stress. The derived hypotheses are empirically examined in breast cancer patients. The clinical study investigates the links between these factors and prognosis and outcome.
ASEAN and ASEAN Plus Three: Manifestations of Collective Identities in Southeast and East Asia?
(2003)
East Asia is a region undergoing vast structural changes. As the region moved closer together economically and politically following the breakdown of the bipolar world order and the ensuing expansion of intra-regional interdependencies, the states of the region faced the challenge of having to actively recast their mutual relations. At the same time, throughout the 1990s, the West became increasingly interested in trans- and inter-regional dialogue and cooperation with the emerging economies of East Asia. These developments gave rise to a "new regionalism", which eventually also triggered debates on Asian identities and the region's potential to integrate. Against this backdrop, this thesis analyzes to what extent both the Association of Southeast Asian Nations (ASEAN), which has been operative since 1967 and thus embodies the "old regionalism" of Southeast Asia, and the ASEAN Plus Three forum (APT: the ASEAN states plus China, Japan and South Korea), which came into existence in the aftermath of the Asian economic crisis of 1997, can be said to represent intergovernmental manifestations of specific collective identities in Southeast and East Asia, respectively. Based on profiles of the respective discursive, behavioral and motivational patterns as well as the integrative potential of ASEAN and APT, this study establishes to what extent the member states adhere to sustainable collective patterns of interaction, expectations and objectives, and assesses to what extent they can be said to form specific 'ingroups'. Four studies on collective norms, readiness to pool sovereignty, solidarity and attitudes vis-à-vis relevant third states show that ASEAN has evolved a certain degree of collective identity, though the Association's political relevance and coherence are frequently thwarted by changes in its external environment. A study on the cooperative and integrative potential of APT yields no manifest evidence of an ongoing or incipient pan-East Asian identity formation process.
This meta-scientific dissertation comprises three research articles that investigated the reproducibility of psychological research. Specifically, they focused, on the one hand, on the reproducibility of eye-tracking research and, on the other hand, on preregistration (i.e., the practice of publishing a study protocol before data collection or analysis) as one method to increase reproducibility.
In Article I, it was demonstrated that eye-tracking data quality is influenced by both the eye-tracker utilized and the specific task being measured. That is, distinct strengths and weaknesses were identified in three devices (Tobii Pro X3-120, GP3 HD, EyeLink 1000+) in an extensive test battery. Consequently, both the device and the specific task should be considered when designing new studies. Meanwhile, Article II focused on the current perception of preregistration in the psychological research community and future directions for improving this practice. The survey showed that many researchers intended to preregister their research in the future and had overall positive attitudes toward preregistration. However, various obstacles were identified that currently hinder preregistration and that should be addressed to increase its adoption. These findings were supplemented by Article III, which took a closer look at one preregistration-specific tool: the PRP-QUANT Template. In a simulation trial and a survey, the template demonstrated high usability and emerged as a valuable resource to support researchers in using preregistration. Future revisions of the template could help to further facilitate this open science practice.
In this dissertation, the findings of the three articles are summarized and discussed regarding their implications and potential future steps that could be implemented to improve the reproducibility of psychological research.
A sustainable development of forests and their ecosystem services requires the monitoring of the forests' state and changes as well as the prediction of their future development. To achieve the latter, eco-physiological forest growth models are usually applied. These models require calibration and validation with forestry reference data. This data includes forest structural parameters such as tree height or stem diameter, which are easy to measure and can be used to estimate the core model parameters, i.e. the trees' biomass pools. The methods traditionally applied to derive the structural parameters are mainly manual and time-consuming. Hence, the in situ data acquisition is inefficient and limited in its ability to capture the vertical and horizontal variability in stand structure. Ground-based remote sensing bears the potential to overcome the limitations of the traditional methods. As they can be automated, ground-based remote sensing methods allow a much more efficient data acquisition and a larger spatial coverage. They are also able to capture forest structure in its three dimensions. Nevertheless, at present further research is required, in particular with respect to the practical integration of ground-based remote sensing data into forest growth models as well as regarding factors influencing the structural parameter retrieval from this data. Therefore, the goal of this PhD thesis was to investigate the influencing factors of two ground-based remote sensing methods (terrestrial laser scanning and hemispherical photography), which have not or only scarcely been studied to date. In addition, the use of forest structural parameters derived from these methods for the calibration of a forest growth model was assessed. Both goals were achieved. The results of this thesis could contribute significantly to a comprehensive assessment of ground-based remote sensing and its potential to derive the forest structural parameters. However, the use of these methods to calibrate forest growth models proved to be limited. An optimized data sampling design is expected to eliminate the major limitations, though. Furthermore, the combination of ground-based, airborne, and satellite remote sensing sensors was suggested to provide an optimized framework for the general integration of remotely sensed data into forest growth models. This combination of remote sensing observations at different scales will contribute greatly to a modern forest management with the purpose of warranting a sustainable forest development even under growing economic and ecological pressures.
Dry tropical forests are facing massive conversion and degradation processes, and they are the most endangered forest type worldwide. One of the largest dry forest types is the Miombo forest, which stretches across the Southern African subcontinent; the proportionally largest part of this type can be found in Angola. The study site of this thesis is located in south-central Angola. The country still suffers from the consequences of the 27 years of civil war (1975-2002), which provide a unique socio-economic setting. The natural characteristics represent a cross section that proved ideal for studying underlying drivers as well as current and retrospective land use change dynamics. The major land change dynamic of the study area is the conversion of Miombo forests to cultivation areas as well as the modification of forest areas, i.e. degradation, due to the extraction of natural resources. With future predictions of population growth, climate change and large-scale investments, land pressure is expected to increase further. To fully understand the impacts of these dynamics, both conversion and modification of forest areas were assessed. Using the conceptual framework of ecosystem services, the predominant trade-off between food and timber in the study area was analyzed, including retrospective dynamics and impacts. This approach accounts for products that contribute directly or indirectly to human well-being. For this purpose, data from the Landsat archive from 1989 to 2013 was applied in different approaches adapted to the study area. The objectives of these approaches were (I) to detect underlying drivers and the temporal and spatial extent of their impact, (II) to describe modification and conversion processes reaching from times of armed conflict over the ceasefire to the post-war period and (III) to provide an assessment of drivers and impacts in a comparative setting. It could be shown that major underlying drivers of the conversion processes are resettlement dynamics as well as the location and quality of streets and settlements. Furthermore, forests that are selectively used for resource extraction have a higher chance of being converted to a field. Drivers of forest degradation are, on the one hand, also strongly connected to settlements and infrastructure, but also, to a large extent, to fire dynamics that occur mostly in more remote and presumably undisturbed forest areas. The loss of woody biomass as well as its slow recovery after the abandonment of fields could be quantified and stands in stark contrast to the amount of potentially cultivated food that is needed. The results of the thesis support the fundamental understanding of drivers and impacts in the study area and can thus contribute to a sustainable resource management.
One mechanism underlying the acquisition of interpersonal attitudes is the formation of an association between a valenced unconditioned stimulus (US) and an affectively neutral conditioned stimulus (CS). However, a stimulus (e.g., a person) is not always and necessarily perceived to be unambiguously positive or negative. An individual can be negative regarding abstract (trait) information but at the same time display a positive (concrete) behavior. The present research deals with the question of whether the valence of abstract or concrete information about a US is encoded and subsequently transferred to an associated CS. The central assumptions are that the valence of the concrete information is more important for the evaluation of the US, whereas the abstract information is more important for the evaluation of the CS. The rationale behind these assumptions is that the US is a psychologically proximal stimulus because it elicits a more direct affective reaction. The CS, however, is psychologically more distal because it is merely associated with the US and is therefore only experienced indirectly. It is postulated that the associative relation between US and CS constitutes a dimension of psychological distance. In four studies, the valence of abstract and concrete information about a number of USs was manipulated. Within an evaluative learning paradigm, these stimuli were associated with affectively neutral CSs. As predicted, ambivalent USs were evaluated according to the valence of the concrete information. The evaluation of CSs, however, was influenced more strongly by the valence of the abstract information. Moreover, in a subsequent lexical decision task, participants were faster to categorize abstract (vs. concrete) stimuli when the stimuli were preceded by a CS prime as compared to a US prime. The results provide first evidence that perceived psychological distance influences the evaluations of US and CS in an associative evaluative learning paradigm.
Since the end of the British Empire, which had provided white Australians with points of view, attitudes and stereotypes of the world - including perceptions of their own role in it -, rediscovering an international identity has been an Australian quest. Many turned to European roots; others to the Aboriginal landscape; Blanche d'Alpuget and Christopher J. Koch are two who have ventured into Asia for the culturally and spiritually regenerative materials necessary to redefine Australia in the post-colonial world. They have taken Eastern concepts of "self" and "soul" and forged them with the Australian obsession with fear and desire of contact with the "other" in a looking-glass of hybrid, Austral-Asian myth to reveal the true soul of Australian identity. Along with a brief historical and literary background to the triangular relationship between white Australia, Asia, and the West, this study's goal is to identify some of the Southeast Asian symbols, myths and literary structures which Koch and d'Alpuget integrate into the Western tradition. Central elements include: dichotomies of personality, righteousness, and virtue; the "Otherworld", where one may approach enlightenment, but at the risk of falling into self-delusion; archetypes of the Hindu divine feminine; Eastern roots of Koch's themes of the "double man"; concepts of the forces of "light" and "dark"; the semiotics of time and meaning; and the central Eastern metaphor of the mirror by which Australia creates interdependent images of itself and of Asia.
This thesis presents a study of the visual change detection mechanism. This mechanism is thought to be responsible for the detection of sudden and unexpected changes in our visual environment. As the brain is a capacity-limited system and has to deal with a continuous stream of information from its surroundings, only a part of the vast amount of information can be completely processed and brought to conscious awareness. This information, which passes through attentional filters, is used for goal-directed behaviour. Therefore, the change detection mechanism is a very useful aid for coping with important information that lies outside the focus of our attention. It is thought that a neural memory trace of repetitive visual information is stored. Each new information input is compared to this existing memory trace by a so-called change or mismatch detection system. Following a sudden change, the comparison process leads to a mismatch and the detection system elicits a warning signal, which an orienting response can follow. This involves a change in the focus of attention towards this sudden environmental change, which can then be evaluated for potential danger and allows for a behavioural adaptation to the new situation. To this purpose, a paradigm was developed combining a 2-choice response time task with a background mismatch detection task of which the subjects were not aware. This paradigm was implemented in an ERP and an fMRI study and was used to study the change detection mechanism and its relationship with impulsivity. In previous studies, a change detection system for auditory information had already been established. As the brain is a very efficient system, it was thought to be unlikely that this change detection system is only available for the processing of auditory information. Indeed, a modality-specific mismatch response at the sensory-specific occipital cortex and a more general response at the frontocentral midline, both resembling the components shown in auditory research, were found in the ERP study. Additionally, magnetic resonance imaging revealed a possible functional network of regions which responded specifically to the processing of a deviant. These regions included the occipital gyrus, premotor cortex, inferior frontal cortex, thalamus, insula, and parts of the cingulate cortex. The relationship between impulsivity measures and visual change detection was established in an additional study. More impulsive subjects showed less detection of deviant stimuli, which was most likely due to overly fast and imprecise information processing. In summary, the work presented in this thesis establishes a visual mismatch negativity, associates a number of brain regions with change detection, and demonstrates the relevance of change detection in information processing.
This study examines to what extent a banking crisis and the ensuing potential liquidity shortage affect corporate cash holdings. Specifically, how do firms adjust their liquidity management prior to and during a banking crisis when they are restricted in their financing options? These restrictions might not only result from firm-specific characteristics but also incorporate the effects of certain regulatory requirements. I analyse the real effects of indicators of a potential crisis and the occurrence of a crisis event on corporate cash holdings for both unregulated and regulated firms from 31 different countries. In contrast to existing studies, I perform this analysis on the basis of a long observation period (1997 to 2014 and 2003 to 2014, respectively) using multiple crisis indicators (early warning signals) and multiple crisis events. For regulated firms, this study makes use of a unique sample of country-specific regulatory information, hand-collected for 15 countries and converted into an ordinal scale based on the severity of the regulation. Regulated firms are selected from a single industry: Real Estate Investment Trusts. These firms invest in real estate properties and let these properties to third parties. Real Estate Investment Trusts that comply with the aforementioned regulations are exempt from income taxation and are punished for a breach, which makes this industry particularly interesting for the analysis of capital structure decisions.
The results for regulated and unregulated firms are mostly inconclusive. I find no convincing evidence that the degree of regulation affects the level of cash holdings for regulated firms before and during a banking crisis. For unregulated firms, I find strong evidence that financially constrained firms have higher cash holdings than unconstrained firms. Further, there is no real evidence that either financially constrained firms or unconstrained firms increase their cash holdings when observing an early warning signal. In case of a banking crisis, the results differ for univariate tests and in panel regressions. In the univariate setting, I find evidence that both types of firms hold higher levels of cash during a banking crisis. In panel regressions, the effect is only evident for financially unconstrained firms from the US, and when controlling for financial stress, it is also apparent for financially constrained US firms. For firms from Europe, the results are predominantly inconclusive. For banking crises that are preceded by an early warning signal, there is only evidence for an increase in cash holdings for unconstrained US firms when controlling for financial stress.
For the first time, the German Census 2011 will be conducted via a new method: the register-based census. In contrast to a traditional census, where all inhabitants are surveyed, the German government will mainly attempt to count individuals using population registers of administrative authorities, such as the municipalities and the Federal Employment Agency. Census data that cannot be collected from the registers, such as information on education, training, and occupation, will be collected by an interview-based sample survey. Moreover, the new method reduces citizens' obligations to provide information and helps reduce costs significantly. The use of sample surveys is limited if results with a detailed regional or subject-matter breakdown have to be prepared. Classical estimation methods are sometimes criticized, since estimation is often problematic for small samples. Fortunately, model-based small area estimators serve as an alternative. These methods help to increase the information, and hence the effective sample size. In the German Census 2011 it is possible to embed areas on a map in a geographical context. This may offer additional information, such as neighborhood relations or spatial interactions. Standard small area models, like Fay-Herriot or Battese-Harter-Fuller, do not account for such interactions explicitly. The aim of our work is to extend the classical models by integrating the spatial information explicitly into the model. In addition, the possible gain in efficiency will be analyzed.
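For reference, the area-level Fay-Herriot model mentioned above, together with one common way of making the area effects spatial via a simultaneous autoregressive (SAR) process, can be written as follows (standard textbook notation, not necessarily the thesis's):

```latex
% Fay-Herriot: direct estimator = true area mean + sampling error,
% true area mean = regression on auxiliary data + random area effect:
\hat\theta_i = \theta_i + e_i , \qquad
\theta_i = x_i^{\top}\beta + v_i , \qquad
e_i \sim N(0, \psi_i), \quad v_i \sim N(0, \sigma_v^2) .
% Spatial extension: area effects follow a SAR(1) process with
% proximity matrix W and spatial correlation parameter \rho:
v = \rho\, W v + u , \qquad u \sim N(0, \sigma_u^2 I_m) .
```

The proximity matrix W encodes, for example, neighbourhood relations between areas on the census map, which is exactly the kind of geographical context the paragraph above refers to.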
This study examines the relationship between media content, its production, and its reception in Japanese popular culture with the example of the so-called yuri ("lily") genre, which centers on representations of intimate relationships between female characters. Based on contemporary genre theory, which posits that genres are not inherent properties of texts, the central question of this study is how the yuri genre is discursively produced in Japan. To examine this question, the study takes a variety of sources into consideration: Firstly, it discusses ten exemplary texts from the 1910s to the 2010s that in the Japanese discourse on the yuri genre are deemed the milestones of the genre's historical development (Hana monogatari, Otome no minato, Secret Love, Shiroi heya no futari, Bishōjo senshi Sailor Moon, Maria-sama ga miteru, Shōjo Sect, Aoi hana, Yuru yuri, and Yuri danshi). Secondly, interviews with ten editors working for Japanese manga magazines shed light on their assessment of the yuri genre. Finally, the results of an online survey among Japanese fans of the yuri genre, which returned 1,352 completed questionnaires, question previous assumptions about the fans and their reasons for liking the yuri genre. The central argument of this study is that the yuri genre is for the most part constructed not through assignments on the part of the genre's producers but through interpretations on the part of the genre's fans. The intimacy portrayed in the texts ranges from "friendship" to "love," and often the ideas of "innocence" and "beauty" are emphasized. Nevertheless, the formation of the yuri genre occurs outside the bounds of the texts, most importantly in fan works, i.e. derivative texts created by fans. The actual content of the originals merely serves as a starting point for these interpretations. Located at the intersection of Japanese studies, cultural studies, media studies, and sociology, this study contributes to our understanding of contemporary Japanese popular culture by showing the mutual dependencies between media content, production, and reception. It provides a deeper look at these processes through first-hand accounts of both producers and fans of the yuri genre.
The implicit power motive is one of the most researched motives in motivational psychology—at least in adults. Children have rarely been subject to investigation and there are virtually no results on behavioral and affective correlates of the implicit power motive in children. As behavior and affect are important components of conceptual validation, the empirical data in this dissertation focused on identifying three correlates, namely resource control behavior (study 1), power stress (study 2), and persuasive behavior (study 3). In each study, the implicit power motive was measured via the Picture Story Exercise, using an adapted version for children. Children across samples were between 4 and 11 years old.
Results from studies 1 and 2 showed that children's power-related behavior corresponded with evidence from adult samples: children with a high implicit power motive secure attractive resources and show negative reactions to a thwarted attempt to exert influence. Study 3 contradicted existing evidence with adults in that children's persuasive behavior was associated not with nonverbal but with verbal strategies of persuasion. Despite this inconsistency, these results are, together with the validation of a child-friendly Picture Story Exercise version, an important step toward further investigating and confirming the concept of the implicit power motive and how to measure it in children.
The influence of the dopamine agonist Ritalin® on performance in a card sorting task involving a monetary reward component was tested in 43 healthy male participants. It was investigated whether Ritalin® would have differential behavioral effects as a function of the participants' parental bonding experiences and the personality variable "Novelty Seeking". When activity and performance accuracy were stimulated by monetary reward, Ritalin® reduced activity in response to reward and added to the reward-induced increase in performance accuracy. However, performance accuracy after drug challenge was improved only in the low-care participants. In the high-care participants, it was, on the contrary, impaired. This observation suggests that the successful therapeutic administration of Ritalin® in ADHD may be influenced by early-life parental care. Suggesting an association between the personality dimension of "Novelty Seeking" and the dopamine system, high "Novelty Seeking" scores correlated positively with sensitivity to the Ritalin® challenge.
This thesis focuses on threats as an experience of stress. Threats are distinguished from challenges and hindrances as another dimension of stress in challenge-hindrance models (CHM) of work stress (Tuckey et al., 2015). Multiple disciplines of psychology (e.g., stereotype, Fingerhut & Abdou, 2017; identity, Petriglieri, 2011) provide a variety of possible events that can trigger threats (e.g., failure experiences, social devaluation; Leary et al., 2009). However, a systematic consideration of triggers, and thus an overview of when the danger of threats arises, has been lacking to date. The explanation of why events are appraised as threats is related to frustrated needs (e.g., Quested et al., 2011; Semmer et al., 2007), but empirical evidence is rare and needs can cover a wide range of content (e.g., relatedness, competence, power), depending on the need approach (e.g., Deci & Ryan, 2000; McClelland, 1961). This thesis aims to shed light on the triggers (when) and the need-based mechanism (why) of threats.
In the introduction, I introduce threats as a dimension of stress experience (cf. Tuckey et al., 2015) and give insights into the diverse field of threat triggers (the when of threats). Further, I explain threats in terms of a frustrated need for a positive self-view, before presenting specific needs as possible determinants in the threat mechanism (the why of threats). Study 1 represents a literature review based on 122 papers from interdisciplinary threat research and provides a classification of five triggers and five needs identified in explanations and operationalizations of threats. In Study 2, the five triggers and needs are ecologically validated in interviews with police officers (n = 20), paramedics (n = 10), teachers (n = 10), and employees of the German federal employment agency (n = 8). The mediating role of needs in the relationship between triggers and threats is confirmed in a correlative survey design (N = 101 leaders working part-time, Study 3) and in a controlled laboratory experiment (N = 60 two-person student teams, Study 4). The thesis ends with a general discussion of the results of the four studies, providing theoretical and practical implications.
The distractor-response binding effect (Frings & Rothermund, 2011; Frings, Rothermund, & Wentura, 2007; Rothermund, Wentura, & De Houwer, 2005) is based on the idea that irrelevant information is integrated with the response to the relevant stimuli in an episodic memory trace. The immediate re-encounter of any aspect of this saved episode - be it relevant or irrelevant - can lead to retrieval of the whole episode. As a consequence, the previously executed and now retrieved response may influence the response to the current relevant stimulus. That is, the current response may either be facilitated or impaired by the retrieved response, depending on whether it is compatible or incompatible with the currently demanded response. Previous research on this kind of episodic retrieval focused on its influence on action control. I examined whether distractor-response binding also plays a role in decision making in addition to action control. To this end, I adapted the distractor-to-distractor priming paradigm (Frings et al., 2007) and conducted nine experiments in which participants had to decide as fast as possible which disease a fictional patient suffered from. To infer the correct diagnosis, two cues were presented; one did not give any hint of a disease (the irrelevant cue), whereas the other did (the relevant cue). Experiments 1a to 1c showed that the distractor-response binding effect is present in deterministic decision situations. Further, experiments 2a and 2b indicate that distractor-response binding also influences decisions under uncertainty. Finally, experiments 3a to 3d were conducted to test some constraints and underlying mechanisms of the distractor-response binding effect in decision making under uncertainty. In sum, these nine experiments provide strong evidence that distractor-response binding influences decision making.
To this day, the effects of many chlorinated hydrocarbons (e.g. DDT, PCBs) on specific organisms remain a subject of controversial discussion. This is also the case for potential endocrine effects on spermatogenesis and correlated changes in population vitality. To clarify this situation, three questions were at the centre of attention: 1) Do the chemicals cause a specific harmful effect on the male reproductive tract? 2) Can particular chemical mixtures bind to and activate the human estrogen receptor (hER)? 3) Are certain life stages of an organism especially sensitive to the effects of chemicals and can they therefore be established as a screening test system? The combined effects of DDT and Aroclor 1254 (A54), as single substances and in a 1:1 mixture, were therefore investigated with regard to their estrogenic effectiveness on zebrafish (Brachydanio rerio). The concentrations of the pesticides and their mixture ranged between 0.05 µg/l and 500 µg/l, separated by a factor of 10. It turned out that the test concentration of 500 µg/l was too toxic to zebrafish in all cases. The experiment was followed up with four concentrations of DDT, A54 as well as their 1:1 mixture, again each separated by a factor of 10 and ranging between 0.05 µg/l and 50 µg/l. The bioaccumulation test over 8 days showed that the zebrafish accumulated the chemicals, but no equilibrium was reached, and the concentration of 0.05 µg/l was established as the No Observed Effect Concentration (NOEC). Building on these analyses, the investigation of the life cycle (LC), starting with fertilized eggs, demonstrated a reduction in hatchability, reproduction and length of the emerging fish. These reductions affected the duration of the life cycle stages (LCS), which consequently lasted longer than expected. Exposure time and concentration of the tested chemicals accelerated the occurrence of these effects, which were more pronounced when the chemical mixtures were used. To establish whether the parameters assessed were related to the male reproductive tract, the quality, quantity and life span of sperm were assessed using the methods of Leong (1988) and Shapiro et al. (1994). The sperm degeneration observed led us to investigate the spermatogenesis and the ultrastructure of the testes. This last experiment showed a significant reduction of the late stages of spermatogenesis and of the heterophagic vacuoles, which play an important role in spermatid maturation. It could therefore be concluded that DDT and A54 can act synergistically, cause disorders of the reproductive tract of male zebrafish, and also influence their growth.
This study investigates the endemic centres of Indonesian animals and biodiversity across geographical gradients. At the same time, it also evaluates the different lines suggested for separating the Oriental and Australian faunal regions in the Indonesian region. The analyses mainly used the present-day distribution of terrestrial vertebrates, especially the smallest ranges of species and subspecies. The results show that faunal migration of Oriental and Australian lineages to the Indonesian Archipelago may have been happening since the Palaeocene and, more importantly, that island drift might have facilitated such migration. These events caused a major reorganisation of island positions and island forms, which in turn resulted in faunal extinction around the mid-Pliocene. Some islands, especially in the Wallacea region, emerged very late and as a result nowadays lack endemic forms. Currently, at least seven endemic centres can be recognised, i.e. Borneo, Java, Sumatra, Sulawesi, the North Moluccas, New Guinea and the Lesser Sundas/Banda Arcs. The affinities between these endemic centres reveal that there are two clusters of islands in the Indonesian Archipelago. These clusters in turn suggest shifts of the biogeographical lines in the Indonesian Archipelago. Furthermore, oscillations in climate, eustatic sea level changes and fluctuations in vegetation in the Quaternary period strongly affected the distribution pattern of animals. There was a phase of expansion for montane oak forests, grasslands and woodlands during the period 18,000-14,000 years ago in East Indonesia and 16,500-12,000 years ago in West Indonesia. This expansion led to the increased isolation of rainforests and of the faunas adapted to them. These periods are also indicated by a lowering of the tree line, which enabled montane fauna to disperse across lower elevations. Around 8,000-9,000 years ago, the climate became warmer and slightly wetter. The mid- to upper montane forests expanded to their full altitudinal range, while montane oak forest, grassland and woodland areas contracted. The oscillations in climate, eustatic sea level changes and fluctuations in vegetation in turn largely determined the formation of numerous sub-endemic centres, which today can be found within the main islands. Currently, there are 14 sub-endemic centres on Borneo, 8 on Java, 16 on Sumatra, 14 on Sulawesi and 14 on New Guinea. From the conservation management point of view, the identification of such sub-endemic centres generates valuable information for protection efforts.
Besides the well-known positive aspects of conservation tillage combined with mulching, a drawback may be the survival of phytopathogenic fungi like Fusarium species on plant residues. This may endanger the health of the following crop by increasing the infection risk for specific plant diseases. In infected plant organs, these pathogens are able to produce mycotoxins like deoxynivalenol (DON). Mycotoxins like DON persist during storage, are heat resistant and are of major concern for human and animal health after consumption of contaminated food and feed, respectively. Among fungivorous soil organisms, there are representatives of the soil fauna which are evidently antagonistic to a Fusarium infection and the contamination with mycotoxins. Earthworms (Lumbricus terrestris), collembolans (Folsomia candida) and nematodes (Aphelenchoides saprophilus) provide a wide range of ecosystem services, including the stimulation of decomposition processes, which may result in the regulation of plant pathogens and the degradation of environmental contaminants. Several investigations under laboratory conditions and in the field were conducted to test the following hypotheses: (1) Fusarium-infected and DON-contaminated wheat straw provides a more attractive food substrate than non-infected control straw; (2) the introduced soil fauna reduce the biomass of F. culmorum and the content of DON in infected wheat straw under laboratory and field conditions; (3) the species interaction of the introduced soil fauna enhances the degradation of Fusarium biomass and the DON concentration in wheat straw; (4) the degradation efficiency of the soil fauna is affected by soil texture. The results of the present thesis show that the degradation performance of the introduced soil fauna must be considered an important contribution to the biological control of plant diseases and environmental pollutants. As L. terrestris in particular proved to be the driver of the degradation process, earthworms contribute to a sustainable control of fungal pathogens like Fusarium and its mycotoxins in wheat straw, thus reducing the risk of plant diseases and environmental pollution as an ecosystem service.
In the first part of this work we generalize a method of building optimal confidence bounds provided in Buehler (1957) by specializing an exhaustive class of confidence regions inspired by Sterne (1954). The resulting confidence regions, also called Buehlerizations, are valid in general models and depend on a "designated statistic" that can be chosen according to some desired monotonicity behaviour of the confidence region. For a fixed designated statistic, the thus obtained family of confidence regions indexed by their confidence level is nested. Buehlerizations furthermore have the optimality property of being the smallest (w.r.t. set inclusion) confidence regions that are increasing in their designated statistic. The theory is eventually applied to normal, binomial, and exponential samples. The second part deals with the statistical comparison of pairs of diagnostic tests and establishes relations (1) between the sets of lower confidence bounds, (2) between the sets of pairs of comparable lower confidence bounds, and (3) between the sets of admissible lower confidence bounds in various models for diverse parameters of interest.
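As a point of reference (in notation of my choosing, not necessarily the thesis's), the classical Buehler construction for a designated statistic T delivers, for discrete data, the smallest 1−α upper confidence limit for a parameter θ that is nondecreasing in T:

```latex
% Buehler 1-\alpha upper confidence limit based on designated statistic T:
u(t) = \sup \{\, \theta \in \Theta : P_\theta(T \le t) > \alpha \,\} .
% Among all upper limits that are nondecreasing in T, u attains the
% smallest values while keeping coverage probability at least 1-\alpha.
```

The choice of T is what fixes the monotonicity behaviour mentioned above; lower bounds arise symmetrically by reversing the inequality.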
Building Fortress Europe: Economic realism, China, and Europe’s investment screening mechanisms
(2023)
This thesis deals with the construction of investment screening mechanisms across the major economic powers in Europe and at the supranational level during the post-2015 period. The core puzzle at the heart of this research is how, in a traditional bastion of economic liberalism such as Europe, a protectionist tool such as investment screening could be erected so rapidly. Within a few years, Europe went from being highly welcoming towards foreign investment to increasingly implementing controls on it, with the focus on China. How are we to understand this shift in Europe? I posit that Europe’s increasingly protectionist shift on inward investment can be fruitfully understood using an economic realist approach, where the introduction of investment screening can be seen as part of a process of ‘balancing’ China’s economic rise and reasserting European competitiveness. China has moved from being the ‘workshop of the world’ to becoming an innovation-driven economy at the global technological frontier. As China has become more competitive, Europe, still a global economic leader broadly situated at the technological frontier, has begun to sense a threat to its position, especially in the context of the fourth industrial revolution. A ‘balancing’ process has been set in motion, in which Europe seeks to halt and even reverse the narrowing competitiveness gap between it and China. The introduction of investment screening measures is part of this process.
Interoception - the perception of bodily processes - plays a crucial role in the subjective experience of emotion, consciousness and symptom genesis. As an alternative to interoceptive paradigms that depend on the participants' active cooperation, five studies are presented to show that startle methodology may be employed to study visceral afferent processing. Study 1 (38 volunteers) showed that startle responses to acoustic stimuli of 105 dB(A) intensity were smaller when elicited during the cardiac systole (R-wave +230 ms) as compared to the diastole (R +530 ms). In Study 2, 31 diabetic patients were divided into two groups with normal or diminished (< 6 ms/mmHg) baroreflex sensitivity (BRS) of heart rate control. Patients with normal BRS showed a startle inhibition during the cardiac systole, as was found for healthy volunteers. Diabetic patients with diminished BRS did not show this pattern. Because diminished BRS is an indicator of impaired baro-afferent signal transmission, we concluded that cardiac modulation of startle is associated with intact arterial baro-afferent feedback. Thus, pre-attentive startle methodology is suitable for studying visceral afferent processing. Visceral and baro-afferent information has been found to be mainly processed in the right hemisphere. To explore whether cardiac modulation of startle eye blink is lateralized as well, in Study 3, 37 healthy volunteers received 160 unilateral acoustic startle stimuli presented to both ears, one at a time (R +0, 100, 230, 530 ms). Startle response magnitude was only diminished at R +230 ms and for left-ear presentation. This lateralization effect in the cardiac modulation of startle eye blink may reflect the previously described advantages of right-hemispheric brain structures in relaying viscero- and baro-afferent signal transmission. This lateralization effect implies that higher cognitive processes may also play a role in the cardiac modulation of startle. To address this question, in Study 4, 25 volunteers first responded with 'as fast as possible' button pushes (reaction time, RT) and then rated the perceived intensity of 60 acoustic startle stimuli (85, 95, or 105 dB; R +230, 530 ms). RT was divided into evaluation and motor response time. Increasing stimulus intensity enhanced startle eye blink, intensity ratings, and RT components. Eye blinks and intensity judgments were lower when startle was elicited at a latency of R +230 ms, but RT components were differentially affected. It is concluded that the cardiac cycle affects the attentive processing of acoustic startle stimuli. Beside the arterial baroreceptors, the cardiopulmonary baroreceptors represent another important system of cardiovascular perception that may have similar effects on startle responsiveness. To clarify this issue, in Study 5, Lower Body Negative Pressure at gradients of 0, -10, -20, and -30 mmHg was applied to unload cardiopulmonary baroreceptors in 12 healthy males, while acoustic startle stimuli were presented (R +230, 530 ms). Unloading of cardiopulmonary baroreceptors increased startle eye blink responsiveness. Furthermore, the effect of relative loading/unloading of arterial baroreceptors on startle eye blink responsiveness was replicated. These results demonstrate that the loading status of cardiopulmonary baroreceptors also has an impact on brainstem-based CNS processes.
Thus, the cardiac modulation of acoustic startle reflects baro-afferent signal transmission from multiple neural sources; it represents a pre-attentive method that is independent of active cooperation, but its modulatory effects also extend to higher cognitive, attentive processes.
We consider a linear regression model for which we assume that some of the observed variables are irrelevant for the prediction. Including the wrong variables in the statistical model can either lead to the problem of having too little information to properly estimate the statistic of interest, or having too much information and consequently describing fictitious connections. This thesis considers discrete optimization to conduct variable selection. In light of this, the subset selection regression method is analyzed. The approach has gained a lot of interest in recent years due to its promising predictive performance. A major challenge associated with the subset selection regression is its computational difficulty. In this thesis, we propose several improvements to the efficiency of the method. Novel bounds on the coefficients of the subset selection regression are developed, which help to tighten the relaxation of the associated mixed-integer program, which relies on a Big-M formulation. Moreover, a novel mixed-integer linear formulation for the subset selection regression based on a bilevel optimization reformulation is proposed. Finally, it is shown that the perspective formulation of the subset selection regression is equivalent to a state-of-the-art binary formulation. We use this insight to develop novel bounds for the subset selection regression problem, which prove to be highly effective in combination with the proposed linear formulation.
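The Big-M mixed-integer formulation referred to above is standard in the best-subset literature and can be sketched as follows (generic notation; the thesis develops tighter bounds M and alternative formulations on top of this):

```latex
% Subset selection regression as a mixed-integer quadratic program:
% z_j = 1 allows variable j into the model, k bounds the subset size.
\min_{\beta \in \mathbb{R}^p, \, z \in \{0,1\}^p}
   \; \lVert y - X\beta \rVert_2^2
\quad \text{s.t.} \quad
   -M z_j \le \beta_j \le M z_j \;\; (j = 1, \dots, p), \qquad
   \sum_{j=1}^{p} z_j \le k .
% The tighter the bound M on the coefficients, the stronger the
% continuous relaxation and the faster the branch-and-bound search.
```
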
In the second part of this thesis, we examine the statistical conception of the subset selection regression and conclude that it is misaligned with its intention. The subset selection regression uses the training error to decide which variables to select. The approach thus conducts the validation on the training data, which oftentimes is not a good estimate of the prediction error. Hence, it requires a predetermined cardinality bound. Instead, we propose to select variables with respect to the cross-validation value. The process is formulated as a mixed-integer program with the sparsity itself becoming part of the optimization. Usually, a cross-validation is used to select the best model out of a few options. With the proposed program, the best model out of all possible models is selected. Since the cross-validation is a much better estimate of the prediction error, the model can select the best sparsity itself.
The thesis is concluded with an extensive simulation study, which provides evidence that discrete optimization can be used to produce highly valuable predictive models, with the cross-validation subset selection regression almost always producing the best results.
Cognitive performance is contingent upon multiple factors. Beyond the impact of environmental circumstances, the bodily state may hinder or promote cognitive processing. Afferent transmission from the viscera, for instance, is crucial not only for the genesis of affect and emotion, but also exerts significant influences on memory and attention. In particular, afferent cardiovascular feedback from baroreceptors has been shown to produce subcortical and cortical inhibition. Consequences for human cognition and behavior are impairments of simple perception and sensorimotor functioning. Four studies are presented that investigate the modulatory impact of baro-afferent feedback on selective attention. The first study demonstrates that the modulation of sensory processing by baroreceptor activity applies to the processing of complex stimulus configurations. Using a visual masking task in which a target had to be selected against a visual mask, perceptual interference was reduced when target and mask were presented during the ventricular systole compared to the diastole. In study two, selection efficiency was systematically manipulated in a visual selection task in which a target letter was flanked by distracting stimuli. By comparing participants' performance under homogeneous and heterogeneous stimulus conditions, selection efficiency was assessed as a function of the cardiac cycle phase in which the targets and distractors were presented. The susceptibility of selection performance to the stimulus condition at hand was less pronounced during the ventricular systole compared to the diastole. Studies one and two therefore indicate that interference from irrelevant sensory input, resulting from temporally overlapping processing traces or from the simultaneous presentation of distractor stimuli, is reduced during phases of increased baro-afferent feedback. Study three experimentally manipulated baroreceptor activity by systematically varying the participant's body position while a sequential distractor priming task was completed. In this study, negative priming and distractor-response binding effects were obtained as indices of controlled and automatic distractor processing, respectively. It was found that only controlled distractor processing was affected by tonic increases in baroreceptor activity. In line with studies one and two, these results indicate that controlled selection processes are more efficient during enhanced baro-afferent feedback, observable in diminished aftereffects of controlled distractor processing. Due to previous findings indicating that baro-afferent transmission affects central rather than response-related processing stages, study four measured lateralized readiness potentials (LRPs) and reaction times (RTs) while participants, again, had to respond selectively to target stimuli that were surrounded by distractors. The impact of distractor inhibition on stimulus-related, but not on response-related LRPs suggests that in a sequential distractor priming task the sensory representations of distractors, rather than motor responses, are targeted by inhibition. Together with the results from studies one through three and the finding of baroreceptor-mediated behavioral inhibition targeting central processing stages, study four corroborates the presumption that baro-afferent signal transmission modulates controlled processes involved in selective attention.
In sum, the work presented shows that visual selective attention benefits from increased baro-afferent feedback, as its effects are not confined to simple perception but may facilitate the active suppression of neural activity related to sensory input from distractors. Hence, through noise reduction, baroreceptor-mediated inhibition may promote effective selection in vision.
Objective: Only 20-25% of the variance for the two to four-fold increased risk of developing breast cancer among women with family histories of the disease can be explained by known gene mutations. Other factors must exist. Here, a familial breast cancer model is proposed in which overestimation of risk, general distress, and cancer-specific distress constitute the type of background stress sufficient to increase unrelated acute stress reactivity in women at familial risk for breast cancer. Furthermore, these stress reactions are thought to be associated with central adiposity, an independent well-established risk factor for breast cancer. Hence, stress through its hormonal correlates and possible associations with central adiposity may play a crucial role in the etiology of breast cancer in women at familial risk for the disease. Methods: Participants were 215 healthy working women with first-degree relatives diagnosed before (high familial risk) or after age 50 (low familial risk), or without breast cancer in first-degree relatives (no familial risk). Participants completed self-report measures of perceived lifetime breast cancer risk, intrusive thoughts and avoidance about breast cancer (Impact of Event Scale), negative affect (Profile of Mood States), and general distress (Brief Symptom Inventory). Anthropometric measurements were taken. Urine samples during work, home, and sleep were collected for assessment of cortisol responses in the naturalistic setting where work was conceptualized as the stressful time of the day. Results: A series of analyses indicated a gradient increase of cortisol levels in response to the work environment from no, low, to high familial risk of breast cancer. When adding breast cancer intrusions to the model with familial risk status predicting work cortisol levels, significant intrusion effects emerged rendering the familial risk group non-significant. However, due to a lack of association between intrusions and cortisol in the low and high familial risk group separately, as well as a significant difference between low and high familial risk on intrusions, but not on work cortisol levels, full mediation of familial risk group effects on work cortisol by intrusions could not be established. A separate analysis indicated increased levels of central but not general adiposity in women at high familial risk of breast cancer compared to the low and no risk groups. There were no significant associations between central adiposity and cortisol excretion. Conclusion: A hyperactive hypothalamus-pituitary-adrenal axis with a more pronounced excretion of its end product cortisol, as well as elevated levels of central but not overall adiposity in women at high familial risk for breast cancer may indicate an increased health risk which expands beyond that of increased breast cancer risk for these women.
With the advent of high-throughput sequencing (HTS), profiling immunoglobulin (IG) repertoires has become an essential part of immunological research. The dissection of IG repertoires promises to transform our understanding of adaptive immune system dynamics. Advances in sequencing technology now also allow the use of the Ion Torrent Personal Genome Machine (PGM) to cover the full length of IG mRNA transcripts. The applications of this benchtop-scale HTS platform range from the identification of new therapeutic antibodies to the deconvolution of malignant B cell tumors. In the context of this thesis, the usability of the PGM is assessed to investigate the IG heavy chain (IGH) repertoires of animal models. First, an innovative bioinformatics approach is presented to identify antigen-driven IGH sequences from bulk-sequenced bone marrow samples of transgenic humanized rats expressing a human IG repertoire (OmniRatTM). We show that these rats mount a convergent IGH CDR3 response towards measles virus hemagglutinin protein and tetanus toxoid, with high similarity to human counterparts. In the future, databases could contain all IGH CDR3 sequences with known specificity to mine IG repertoire datasets for past antigen exposures, ultimately reconstructing the immunological history of an individual. Second, a unique molecular identifier (UID) based HTS approach and network property analysis is used to characterize the CLL-like CD5+ B cell expansion of A20BKO mice overexpressing a natural short splice variant of the CYLD gene (A20BKOsCYLDBOE). We could determine that in these mice overexpression of sCYLD leads to an unmutated subvariant of CLL (UCLL). Furthermore, we found that this short splice variant is also seen in human CLL patients, highlighting it as an important target for future investigations. Third, the UID based HTS approach is improved by adapting it to the PGM sequencing technology and applying a custom-made data processing pipeline including ImMunoGeneTics (IMGT) database error detection. In this way, we were able to obtain correct IGH sequences with over 99.5% confidence and correct CDR3 sequences with over 99.9% confidence. Taken together, the results, protocols and sample processing strategies described in this thesis will improve the usability of animal models and the Ion Torrent PGM HTS platform in the field of IG repertoire research.
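To illustrate the core idea behind UID-based error correction in generic terms (a hedged sketch with invented names, not the pipeline of the thesis): reads sharing a UID are assumed to stem from the same mRNA molecule, so random sequencing errors can be suppressed by a per-position majority vote.

```python
# Hedged sketch of UID consensus building: reads that share a unique
# molecular identifier are grouped and a per-position majority vote yields
# the consensus sequence. Assumes equal-length reads per UID group.
from collections import Counter, defaultdict

def uid_consensus(reads):
    """reads: iterable of (uid, sequence) pairs. Returns {uid: consensus}."""
    groups = defaultdict(list)
    for uid, seq in reads:
        groups[uid].append(seq)
    consensus = {}
    for uid, seqs in groups.items():
        # zip(*seqs) iterates over alignment columns; take the majority base
        consensus[uid] = "".join(
            Counter(col).most_common(1)[0][0] for col in zip(*seqs)
        )
    return consensus
```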
N-acetylation by N-acetyltransferase 1 (NAT1) is an important biotransformation pathway of the human skin and is involved in the deactivation of the arylamine and well-known contact allergen para-phenylenediamine (PPD). Here, NAT1 expression and activity were analyzed in antigen-presenting cells (monocyte-derived dendritic cells, MoDCs, a model for epidermal Langerhans cells) and human keratinocytes. The latter were used to study exogenous and endogenous modulations of NAT1 activity. Within this thesis, MoDCs were found to express metabolically active NAT1. Activities were between 23.4 and 26.6 nmol/mg/min and thus comparable to peripheral blood mononuclear cells. These data suggest that epidermal Langerhans cells contribute to the cutaneous N-acetylation capacity. Keratinocytes, which are known for their efficient N-acetylation, were analyzed in a comparative study using primary keratinocytes (NHEK) and different shipments of the immortalized keratinocyte cell line HaCaT, in order to investigate the ability of the cell line to model epidermal biotransformation. N-acetylation of the substrate para-aminobenzoic acid (PABA) was 3.4-fold higher in HaCaT compared to NHEK and varied between the HaCaT shipments (range 12.0–44.5 nmol/mg/min). Since B[a]P-induced cytochrome P450 1 (CYP1) activities were also higher in HaCaT compared to NHEK, the cell line can be considered an in vitro tool to qualitatively model epidermal metabolism with regard to NAT1 and CYP1. The HaCaT shipment with the highest NAT1 activity showed only minimal reduction of cell viability after treatment with PPD and was subsequently used to study interactions between NAT1 and PPD in keratinocytes. Treatment with PPD induced expression of cyclooxygenases (COX) in HaCaT, but in parallel, PPD N-acetylation was found to saturate with increasing PPD concentration. This saturation explains the presence of the PPD-induced COX induction despite the high N-acetylation capacities. A detailed analysis of the effect of PPD on NAT1 revealed that the saturation of PPD N-acetylation was caused by a PPD-induced decrease of NAT1 activity. This inhibition was found in HaCaT as well as in primary keratinocytes after treatment with PPD and PABA. Regarding the mechanism, reduced NAT1 protein levels and unaffected NAT1 mRNA expression after PPD treatment provided clear evidence of substrate-dependent NAT1 downregulation. These results expand the existing knowledge about substrate-dependent NAT1 downregulation to human epithelial skin cells and demonstrate that NAT1 activity in keratinocytes can be modulated by exogenous factors. Further analysis of HaCaT cells from different shipments revealed an accelerated progression through the cell cycle in HaCaT cells with high NAT1 activities. These findings suggest an association between NAT1 and proliferation in keratinocytes, as has been proposed earlier for tumor cells. In conclusion, the N-acetylation capacities of MoDCs as well as keratinocytes contribute to the overall N-acetylation capacity of human skin. NAT1 activity of keratinocytes, and consequently the detoxification capacity of human skin, can be modulated exogenously by the presence of NAT1 substrates and endogenously by the cell proliferation status of keratinocytes.
Chemical communication in the reproductive behaviour of Neotropical poison frogs (Dendrobatidae)
(2013)
Chemical communication is the evolutionarily oldest communication system in the animal kingdom and triggers intra- and interspecific interactions. It is initiated by the emitter releasing either a signal or a cue that causes a reaction in the receiving individual. Compared to other animals, there are relatively few studies on chemical communication in anurans. In this thesis the impact of chemical communication on the behaviour of the poison frog Ranitomeya variabilis (Dendrobatidae) and its parental care performance was investigated. This species uses phytotelmata (small water bodies in plants) for both clutch and tadpole depositions. Since tadpoles are cannibalistic, adult frogs not only avoid conspecifics when depositing their eggs but also transport their tadpoles individually into separate phytotelmata. The recognition of already occupied phytotelmata was shown to be due to chemical substances released by the conspecific tadpoles. In order to gain a deeper comprehension of the ability of adult R. variabilis to generally recognize and avoid tadpoles, in-situ pool choice experiments were conducted, offering the frogs chemical substances of tadpoles of different species (Chapter I). It turned out that the frogs were able to recognize all species and avoided their chemical substances for clutch depositions. For tadpole depositions, however, only dendrobatid tadpoles occurring in phytotelmata were avoided, while species living in rivers were not. Additionally, the chemical substances of a treefrog tadpole (Hylidae) were recognized by R. variabilis. Yet, they were not avoided but preferred for tadpole depositions; these tadpoles might thus be recognized as potential prey for the predatory poison frog larvae. One of the poison frog species that was avoided for both tadpole and clutch depositions was the phytotelmata-breeding Hyloxalus azureiventris. The chemical substances released by its tadpoles were analysed together with those of the R. variabilis tadpoles (Chapter II). After finding a suitable solid-phase extraction sorbent (DSC-18), the active chemical compounds from the water of both tadpole species were extracted and fractionated. In order to determine which fractions triggered the avoidance behaviour of the frogs, in-situ bioassays were conducted. It was found that the biologically active compounds differed between the two species. Since the avoidance of the conspecific tadpoles is not advantageous to the releaser tadpoles (which lose a potential food resource), the chemicals released by them might be defined as chemical cues. However, as it turned out that the avoidance of the heterospecific tadpoles was not triggered by a mere byproduct based on the close evolutionary relationship between the two species, the chemical compounds released by H. azureiventris tadpoles might be defined as chemical signals (being advantageous to the releasing tadpoles) or, more specifically, as synomones, interspecifically acting chemicals that are advantageous for both emitter and receiver (since R. variabilis also avoids a competition situation for its offspring). Another interspecific communication system investigated in this thesis was the avoidance of predator kairomones (Chapter III). Using chemical substances from damselfly larvae, it could be shown that R. variabilis was unable to recognize and avoid kairomones of these tadpole predators. However, when physically present, damselfly larvae were avoided by the frogs.
For the recognition of conspecific tadpoles, in contrast, chemical substances were necessary, since purely visual artificial tadpole models were not avoided. Whether R. variabilis is also capable of chemically communicating with adult conspecifics was investigated by presenting chemical cues/signals of same-sex or opposite-sex conspecifics to the frogs (Chapter IV). It was expected that males would be attracted to chemical substances of females and repelled by those of conspecific males. Instead, all individuals showed avoidance behaviour towards the conspecific chemicals. This was interpreted as an artefact of confinement stress of the releaser animals, which emitted disturbance cues that triggered avoidance behaviour in their conspecifics. The knowledge gained about chemical communication in parental care was then used to further investigate a possible provisioning behaviour in R. variabilis. In-situ pool-choice experiments with chemical cues of conspecific tadpoles were carried out throughout the change from rainy to dry season (Chapter V). With a changepoint analysis, the exact seasonal change was determined and differences between the frogs' choices were analysed. It turned out that R. variabilis does not avoid but prefers conspecific cues during the dry season for tadpole depositions, which might be interpreted as a way to provide their tadpoles with food (i.e. younger tadpoles) in order to accelerate their development when facing desiccation risk. That tadpoles were also occasionally fed with fertilized eggs could be shown in a comparative study, in which phytotelmata that contained a tadpole deposited by the frogs themselves received more clutch depositions than freshly erected artificial phytotelmata containing unfamiliar tadpoles (i.e. their chemical cues; Chapter VI). Home range calculations with ArcGIS showed that R. variabilis males exhibit unexpectedly strong site fidelity, leading to the suggestion that they recognize their offspring by phytotelmata location. In order to test whether R. variabilis is furthermore able to perform chemical offspring recognition, frogs were confronted in in-situ pool-choice experiments with chemical cues of single tadpoles that were found in their home ranges (Chapter VII). Genetic kinship analyses were conducted between the tadpoles emitting the chemical cues and those deposited together with or next to them. The results, however, indicated that frogs did not choose to deposit their offspring with or without another tadpole on the basis of relatedness, i.e. kin recognition by chemical cues could not be confirmed in R. variabilis.
High-resolution projections of the future climate are required to assess climate change realistically at a regional scale. This is particularly important for climate change impact studies, since global projections are much too coarse to represent local conditions adequately. A major concern is the change of extreme values in a warming climate, due to their severe impact on the natural environment, socio-economic systems and human health. Regional climate models (RCMs) are, however, able to reproduce many of those local features. Current horizontal resolutions are about 18–25 km, which is still too coarse to directly resolve small-scale processes such as deep convection. For this reason, projections of a possible future climate were simulated in this study with the regional climate model COSMO-CLM at horizontal resolutions of 4.5 km and 1.3 km for the region of Saarland-Lorraine-Luxemburg and Rhineland-Palatinate for the first time. At a horizontal scale of about 1 km deep convection is treated explicitly, which is expected to improve particularly the simulation of convective summer precipitation, and a better resolved orography is expected to improve near-surface fields such as the 2m temperature. These simulations were performed as 10-year time-slice experiments for the present climate (1991–2000), the near future (2041–2050) and the end of the century (2091–2100). The climate change signals of the annual and seasonal means and the change of extremes are analysed with respect to precipitation and 2m temperature, and a possible added value due to the increased resolution is investigated. To assess changes in extremes, extreme indices were applied and 10- and 20-year return levels were estimated by "peak-over-threshold" models. Since it is generally known that model output of RCMs should not be used directly for climate change impact studies, the precipitation and temperature fields were bias-corrected with several quantile-matching methods. Among them is a newly developed parametric method which includes an extension for extreme values and is hence expected to improve the correction. In addition, the impact of the bias correction on the climate change signals and on the extreme value statistics was investigated. The results reveal a significant warming of the annual mean by about +1.7 °C until 2041–2050 and +3.7 °C until 2091–2100, but considerably stronger signals of up to +5 °C in summer in the Rhine Valley. Furthermore, the daily variability increases by about +0.8 °C in summer but decreases by about −0.8 °C in winter. Consequently, hot extremes increase moderately until the middle of the century but strongly thereafter, in particular in the Rhine Valley. Cold extremes warm continuously in the complete domain over the next 100 years, but most strongly in mountainous areas. The change signals with regard to annual precipitation are of the order of ±10% but not significant. Significant, however, are a predicted increase of +32% of the seasonal precipitation in autumn until 2041–2050 and a decrease of −28% in summer until 2091–2100. No significant changes were found for days with intensities > 20 mm/day, but the results indicate that extremes with return periods ≤ 2 years increase, as do the frequency and duration of dry periods. The bias corrections amplified positive signals but dampened negative signals and considerably reduced the power of detection.
Moreover, absolute values and frequencies of extremes were altered by the correction, but the change signals remained approximately constant. The new method outperformed other parametric methods, in particular with regard to extreme value correction and related extreme indices and return levels. Although the bias correction removed systematic errors, it should be treated as an additional layer of uncertainty in climate change studies. Finally, the increased resolution of 1.3 km predominantly improved the representation of temperature fields and extremes in terms of spatial heterogeneity. The benefits for summer precipitation were not as clear due to a severe dry bias in summer, but it could be shown that in principle the onset and intensity of convection improve. This work demonstrates that climate change will have severe impacts in the investigation area and that extremes in particular may change considerably. An increased resolution thereby provides added value to the results. These findings encourage further investigations for other variables, such as near-surface wind, which will become more feasible with growing computing resources. These analyses should, however, be repeated with longer time series, different RCMs and anthropogenic scenarios to determine the robustness and uncertainty of these results more extensively.
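As a schematic illustration of the simplest, empirical form of quantile matching (the parametric method with an extreme-value extension developed in the thesis is not reproduced here; names are illustrative):

```python
# Hedged sketch of empirical quantile mapping: a simulated future value is
# replaced by the observed quantile corresponding to the quantile it occupies
# in the model's historical climate.
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Bias-correct model_fut via the model-vs-observation quantile relation
    estimated over the common historical period."""
    q = np.linspace(0.01, 0.99, 99)
    mq = np.quantile(model_hist, q)   # model quantiles (historical)
    oq = np.quantile(obs_hist, q)     # observed quantiles (historical)
    # piecewise-linear transfer function from model space to observation space
    return np.interp(model_fut, mq, oq)
```

A purely empirical transfer function cannot extrapolate beyond the historical range, which is exactly the tail problem a parametric extreme-value extension addresses.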
Climate change and habitat fragmentation modify the natural habitat of many wetland biota and lead to new compositions of biodiversity in these ecosystems. While the direct effects of climate are often well known, indirect effects due to biotic interactions remain poorly understood. The water meadow grasshopper, Chorthippus montanus, is a univoltine habitat specialist adapted to permanently moist habitats. Land use change and drainage have led to highly fragmented populations of this generally flightless species. In large parts of the Palaearctic, Ch. montanus occurs sympatrically with its widespread congener, the meadow grasshopper Chorthippus parallelus. Due to their close relationship and their similar songs, hybridization is likely to occur in syntopic populations. Such a species pair of a habitat specialist and a habitat generalist represents an ideal model system to examine the role of ongoing climate change and an accumulation of extreme climatic events for life history strategies, population dynamics and interspecific interactions. In Chapter I a laboratory experiment was conducted to identify the impact of environmental factors on intraspecific life-history traits of Ch. montanus. Like other Orthoptera species, Ch. montanus follows a converse temperature-size rule. In line with the dimorphic niche hypothesis, which states that sexual size dimorphism evolved in response to the different sexual reproductive roles, both sexes showed different responses to increasing density at lower temperatures. Males attained smaller body sizes at high densities, whereas females had a prolonged development time. This is the first evidence for a sex-specific phenotypic plasticity in Ch. montanus. Females benefit from the prolonged development as their reproductive success depends on the size and number of egg clutches they may produce. By contrast, the reproductive success of males depends on the chance to fertilize virgin females, which increases with faster development. This may become a disadvantage for Ch. montanus, as an intraspecific phenology shift may increase the hybridization risk with the sibling species. Despite the widespread assumption that hybridization between two sympatric species is rare due to complete reproductive barriers, the genetic analyses of 16 populations (Chapter II) provided evidence for a wide prevalence of hybridization between both species in the wild. As no complete admixture was found in the examined populations, it is assumed that hybridization only occurs in ecotones between wetlands and drier parts. Reproductive barriers (habitat isolation, behavior, phenology) seem to prevent the genetic swamping of Ch. montanus populations. Although a behavioral experiment showed that mate choice presents an important reproductive barrier between both species, the experiment also revealed that reproductive barriers can be altered by environmental change (e.g. increasing heterospecific frequency). Chapter III analyzes the impact of extreme climatic events on population dynamics and interspecific hybridization. A mark-recapture analysis combined with weather records over five years provides evidence that the embryonic development of Ch. montanus is vulnerable to extreme climatic events. Strong population declines in Ch. montanus lead to a disequilibrium between Ch. montanus and Ch. parallelus populations and increase the risk of hybridization. The highest hybridization risk was found in the first weeks of a season, when both species had an overlapping phenology.
Furthermore, hybrids were generally localized at the edge of the Ch. montanus distribution, where heterospecific encounter probabilities are higher. The hybridization rate reached up to 19.6%. The genetic analyses in Chapters II and III show that hybridization differentially affects specialists and generalists. While generalists may benefit from hybridization through increasing genetic diversity, such a positive correlation was not found for Ch. montanus. The results underline the importance of reproductive barriers for the coexistence of these sympatric species. However, climate change and other anthropogenic disturbances alter reproductive barriers and promote hybridization, which may threaten small populations by genetic displacement. As anthropogenic hybridization is recognized as a major threat to biodiversity, it should be considered in environmental law and policy. Chapter IV analyzes the role of hybrids and hybridization in three levels of law and the historical background of hybrids becoming a part of legal instruments. Due to legal uncertainties and the complexity of this topic, a legal assessment of hybrids is challenging and argues for species-specific approaches. Nonetheless, existing legal norms provide a suitable basis, but need to be specified. Finally, this chapter discusses different opportunities for the management of hybrids and hybridization from a conservation perspective, and their necessity.
Magnetic Resonance Imaging (MRI) and Electroencephalography (EEG) are tools used to investigate the functioning of the working brain in both human and animal studies. Both methods are increasingly combined in separate or simultaneous measurements under the assumption that this allows benefiting from their individual strengths while compensating their particular weaknesses. However, little attention has been paid to how statistical analysis strategies can influence the information that can be retrieved from a combined EEG-fMRI study. Two independent studies in healthy student volunteers were conducted in the context of emotion research to demonstrate two approaches of combining MRI and EEG data of the same participants. The first study (N = 20) applied a visual search paradigm and found that, without statistically combining the two measurements, the assumed effects were absent in both. The second study (N = 12) applied a novelty P300 paradigm and found that only the statistical combination of MRI and EEG measurements was able to disentangle the functional effects of brain areas involved in emotion processing. In conclusion, the observed results demonstrate that there are added benefits of statistically combining EEG-fMRI data acquisitions by assessing both the inferential statistical structure and the intra-individual correlations of the EEG and fMRI signals.
Comparing the results of the phylogeographies of the four species included in this thesis, several concordances were found, even though certain patterns are only represented in one or two species. In all cases, the findings for the studied species strongly support the existence of forests or forest-like ecosystems beyond the classic forest refugia in the Mediterranean areas (Iberian, Apennine and Balkan peninsulas) during glacial times. However, evidence of glacial refugial areas in Southeastern Europe, especially the Balkans, was found in this study as well. The analysed populations of Aposeris foetida, Melampyrum sylvaticum and Erebia euryale showed high genetic diversity values and mostly higher numbers of private fragments in this area, which is a strong indicator for centres of glacial survival during the Würm and, regarding the results of M. sylvaticum, even during the Riss ice age. Three of the analysed species (A. foetida, M. sylvaticum and Colias palaeno) supported a second main glacial refuge area located along the Northern Alps. Again, high genetic diversity values and the uniqueness of the populations living in this region today prove the importance of this area as a glacial centre of survival. These results confirm several recently published studies on forest species and strongly indicate the persistence of forest-like structures or even forests during the ice ages along the foothills of the Northern Alps. Additionally, the persistence of C. palaeno in this area supports the existence of peatlands north of the Alps, at least during the last glacial. The results of M. sylvaticum and E. euryale further indicate the vicinity of the Tatra Mountains as a core area for glacial survival. However, the genetic patterns found for E. euryale are ambiguous. Due to an intermediate position of two genetic lineages (originating in the Eastern Alps and Southeastern Europe), the Tatras could also reflect a postglacial mixture zone of those lineages. Moreover, the glacial and postglacial importance of this area for woodland species was accentuated, supporting other published phylogeographic studies. Besides the congruities among the results of the study species, some unique patterns, and therefore further potential glacial refugia, were also illuminated in this thesis. For instance, the calcicole species A. foetida most probably had further survival areas on both sides of the Dinaric Alps, supported by high genetic diversity values and a high number of private fragments found in Croatian populations. Furthermore, the surroundings of the German Uplands and the margin of the Southern Alps provided suitable conditions for glacial survival of M. sylvaticum, while the Eastern and Southeastern Alpine region most probably sheltered the Large Ringlet E. euryale during the ice ages. Additionally, this butterfly species survived at least the last glaciation along the foothills of the Massif Central, whose present populations showed a unique genetic lineage and measurably higher genetic diversity values than other populations of this species. Finally, a large and continuous Würm distribution south of the Fennoscandian glaciers in Central Europe is highly likely for C. palaeno, which might indicate extended peatland areas during the Würm glacial. With all the patterns found in this study, the understanding of the glacial persistence of forests, respectively forest-like structures, and peatlands during the Würm or even Riss glacial in Europe could be advanced.
The congruencies among the analysed woodland and bog species illustrate the importance and location of extra-Mediterranean refugia for European mountain forests and the glacial presence of Central European peatlands. Thus, previously postulated theories could be supported and further pieces could be added to the overall puzzle. The variety of the different survival centres once more clarifies that further phylogeographic studies on mountain forest species with different habitat requirements, and especially on peatland species, have to be implemented to get a clearer picture of the glacial history of these habitats.
Mechanical and Biological Treatment (MBT) generally aims to reduce the amount of solid waste and emissions in landfills and to enhance resource recovery. MBT technology has been studied in various countries in Europe and Asia, although the techniques of solid waste treatment differ distinctly between the study areas. A better understanding of MBT waste characteristics can lead to an optimization of the MBT technology. For sustainable waste management, it is essential to determine the characteristics of the final MBT waste, the effectiveness of the treatment system and the potential application of the final material regarding future utilization. This study aims to define and compare the characteristics of the final MBT materials in the following countries: Luxembourg (high-degree technology): Fridhaff in Diekirch/Erpeldange; Germany (well-regulated technology): Singhofen in the Rhein-Lahn district; Thailand (low-cost technology): Phitsanulok in Phitsanulok province. The three countries were chosen for this comparative study due to their distinctive implementations of MBT. The samples were taken from the composting heaps of the final treatment process prior to sending them to landfills, using a standard random sampling strategy from August 2008 onwards. The samples were reduced to manageable sizes before characterization by the quartering method. The samples were first analyzed for their size fractions on the day of collection. They were screened into three fractions by dry sieving: small size with a diameter of <10 mm, medium size with a diameter of 10-40 mm and large size with a diameter of >40 mm. These fractions were further analyzed for physical and chemical parameters such as particle size distribution (in total 12 size fractions), particle shape, porosity, composition, water content, water retention capacity and respiratory activity. The extracted eluate was analyzed for pH value, heavy metals (lead, cadmium and arsenic), chemical oxygen demand, ammonium, sulfate and chloride. In order to describe and evaluate the potential application of the small-size material as a final cover for landfills, the small-size samples were also tested for their geotechnical properties, namely compaction, permeability and shear strength. A detailed description of the treatment facilities and methods of the study areas is included in the results. The samples from all three countries are visibly smaller than waste without pretreatment; the maximum particle size is found to be less than 100 mm. The samples consist of dust to coarse fractions. The small size fraction with a diameter of <10 mm was highest in the sample from Germany (average 60% by weight), second highest in the sample from Luxembourg (average 43% by weight) and lowest in the sample from Thailand (average 15% by weight). The content of biodegradable material generally increased with decreasing particle size. Primary components are organics, plastics, fibrous materials and inert materials (glass and ceramics). The percentage of each component greatly depends on the MBT process of each country. Other important characteristics are a significantly reduced water content, reduced total organic carbon and reduced potential heavy metals. The geotechnical results show that the small fraction is highly compactable, has low permeability and adsorbs considerable amounts of water.
The utilization of the MBT material in this study shows a promising trend, as it proved to be a safe material with very low loadings and concentrations of chemical oxygen demand, ammonium and heavy metals. The organic part can be developed into a soil conditioner. The material is also suitable as a bio-filter layer in the final cover of a landfill or as a temporary cover during the MBT process. This study showed how to identify the most appropriate technology for municipal solid waste disposal through the study of waste characterization.
Educational researchers have intensively investigated students' academic self-concept (ASC) and self-efficacy (SE). Both constructs are part of the competence-related self-perceptions of students and are considered to support students' academic success and their career development in a positive manner (e.g., Abele-Brehm & Stief, 2004; Richardson, Abraham, & Bond, 2012; Schneider & Preckel, 2017). However, there is a lack of basic research on ASC and SE in higher education in general, and in undergraduate psychology courses in particular. Therefore, according to the within-network and between-network approaches of construct validation (Byrne, 1984), the present dissertation comprises three empirical studies examining the structure (research question 1), measurement (research question 2), correlates (research question 3), and differentiation (research question 4) of ASC and SE in a total sample of N = 1243 psychology students. Concerning research question 1, results of confirmatory factor analyses (CFAs) implied that students' ASC and SE are domain-specific in the sense of multidimensionality, but are also hierarchically structured, with a general factor at the apex, according to the nested Marsh/Shavelson model (NMS model, Brunner et al., 2010). Additionally, psychology students' SE to master specific psychological tasks in different areas of psychological application could be described by a 2-dimensional model with six factors according to the Multitrait-Multimethod (MTMM) approach (Campbell & Fiske, 1959). With regard to research question 2, results revealed that the internal structure of ASC and SE could be validly assessed. However, the assessment of psychology students' SE should follow a task-specific measurement strategy. Results for research question 3 further showed that both constructs of psychology students' competence-related self-perceptions were positively correlated with achievement in undergraduate psychology courses if the specificity of the predictor (ASC, SE) corresponded to the measurement specificity of the criterion (achievement). Overall, ASC showed substantially stronger relations to achievement than SE. Moreover, there was evidence for negative paths (contrast effects) from achievement in one psychological domain to the ASC of another psychological domain, as postulated by the internal/external frame of reference (I/E) model (Marsh, 1986). Finally, building on research questions 1 to 3 (structure, measurement, and correlates of ASC and SE), psychology students' ASC and SE could be differentiated on an empirical level (research question 4). Implications for future research practices are discussed. Furthermore, practical implications for enhancing ASC and SE in higher education are proposed to support the academic achievement and career development of psychology students.
Competitive analysis is a well-known method for analyzing online algorithms. Two online optimization problems, scheduling and list accessing, are considered in the thesis of Yida Zhu with respect to this method. For both problems, several existing online and offline algorithms are studied, and their performance is compared with that of the corresponding optimal offline algorithms. In particular, the list accessing algorithm BIT is carefully reviewed: the classical proof of its worst-case performance is simplified by exploiting knowledge about the optimal offline algorithm. With regard to average-case analysis, a new closed formula is developed that determines the performance of BIT on a specific class of instances. All algorithms considered in this thesis are also implemented in Julia, and their empirical performances are studied and compared with each other directly.
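For illustration, one standard formulation of BIT (due to Reingold, Westbrook and Sleator) keeps a random bit per list item; an access flips the item's bit and moves the item to the front exactly when the bit becomes 1. The following is a generic Python sketch of that formulation, not the Julia implementation of the thesis:

```python
# Hedged sketch of the randomized list-accessing algorithm BIT: each item
# carries a bit initialized uniformly at random; an access flips the item's
# bit and moves the item to the front iff the bit is now 1.
import random

class BitList:
    def __init__(self, items, seed=None):
        rng = random.Random(seed)
        self.items = list(items)
        self.bit = {x: rng.randint(0, 1) for x in self.items}

    def access(self, x):
        """Serve a request for x; return its access cost (1-based position)."""
        pos = self.items.index(x)
        self.bit[x] ^= 1                       # flip the item's bit
        if self.bit[x] == 1:                   # move to front on half the accesses
            self.items.insert(0, self.items.pop(pos))
        return pos + 1
```

The random bits make the adversary unable to predict when an item moves, which is the source of BIT's improved competitive ratio over deterministic move-to-front.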
One of the main tasks in mathematics is to answer the question whether an equation possesses a solution or not. In the 1940s, Thom and Glaeser studied a new type of equation given by the composition of functions. They raised the following question: for which functions Ψ does the equation F(Ψ)=f always have a solution? Of course this question only makes sense if the right-hand side f satisfies some a priori conditions, like being contained in the closure of the space of all compositions with Ψ, and it is easy to answer if F and f are continuous functions. Imposing further restrictions on these functions, especially on F, considerably complicates the search for an adequate solution. For smooth functions one can already find deep results by Bierstone and Milman which answer the question in the case of a real-analytic function Ψ. This work contains further results for a different class of functions, namely those Ψ that are smooth and injective. In the case of a function Ψ of a single real variable, the question can be fully answered, and we give three conditions that are both sufficient and necessary for the composition equation to always have a solution. Furthermore, one can unify these three conditions and show that they are equivalent to Ψ having a locally Hölder-continuous inverse. For injective functions Ψ of several real variables we give necessary conditions for the composition equation to be solvable; for instance, Ψ should satisfy some form of local distance estimate for the partial derivatives. Under the additional assumption of Whitney-regularity of the image of Ψ, we can give sufficient conditions for flat functions f on the critical set of Ψ to possess a solution F(Ψ)=f.
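To make the central criterion concrete, local Hölder continuity of the inverse means, stated here for orientation in the one-variable case:

```latex
% Local Hölder continuity of the inverse: on every compact subset K of the
% image of \Psi there exist constants C > 0 and \alpha \in (0,1] such that
\[
  \bigl|\Psi^{-1}(s) - \Psi^{-1}(t)\bigr| \;\le\; C\,|s - t|^{\alpha}
  \qquad \text{for all } s, t \in K .
\]
```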
In this thesis we focus on the development and investigation of methods for the computation of confluent hypergeometric functions. We point out the relations between these functions and parabolic boundary value problems and demonstrate applications to models of heat transfer and fluid dynamics. For the computation of confluent hypergeometric functions on compact (real or complex) intervals we consider a series expansion based on the Hadamard product of power series. It turns out that the partial sums of this expansion are easily computable and provide a better rate of convergence than the partial sums of the Taylor series. Regarding computational accuracy, the problem of cancellation errors is reduced considerably. Another important tool for the computation of confluent hypergeometric functions are recurrence formulae. Although easy to implement, such recurrence relations are numerically unstable, e.g. due to rounding errors. In order to circumvent these problems, a method for computing recurrence relations in the backward direction is applied. Furthermore, asymptotic expansions for arguments of large modulus are considered. From the numerical point of view, the determination of the number of terms used for the approximation is a crucial point. As an application we consider initial-boundary value problems with partial differential equations of parabolic type, where we use the method of eigenfunction expansion to determine an explicit form of the solution. In this case the arising eigenfunctions depend directly on the geometry of the considered domain. For certain domains with special geometry the eigenfunctions are of confluent hypergeometric type. Both a conductive heat transfer model and an application in fluid dynamics are considered. Finally, the application of several heat transfer models to certain sterilization processes in the food industry is discussed.
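For orientation, the plain Taylor series of Kummer's confluent hypergeometric function M(a,b,z) can be evaluated with a simple term recurrence. This is the classical expansion whose cancellation problems motivate the Hadamard-product approach above, not the method of the thesis:

```python
# Plain Taylor-series evaluation of Kummer's function
#   M(a, b, z) = sum_n (a)_n / (b)_n * z^n / n!,
# using the term recurrence t_{n+1} = t_n * (a+n)/(b+n) * z/(n+1).
# For large |z|, especially z < 0, the partial sums suffer exactly the
# cancellation errors that better expansions are designed to avoid.
def kummer_M(a, b, z, tol=1e-15, max_terms=10_000):
    term, total = 1.0, 1.0
    for n in range(max_terms):
        term *= (a + n) / (b + n) * z / (n + 1)
        total += term
        if abs(term) < tol * abs(total):
            break
    return total
```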
Differential equations yield solutions that necessarily contain a certain amount of regularity and are based on local interactions. There are various natural phenomena that are not well described by local models. An important class of models that describe long-range interactions are the so-called nonlocal models, which are the subject of this work.
The nonlocal operators considered here are integral operators with a finite range of interaction and the resulting models can be applied to anomalous diffusion, mechanics and multiscale problems.
While the range of applications is vast, the applicability of nonlocal models can face problems such as the high computational and algorithmic complexity of fundamental tasks. One of them is the assembly of finite element discretizations of truncated, nonlocal operators.
The first contribution of this thesis is therefore an openly accessible, documented Python code which allows the computation of finite element approximations for nonlocal convection-diffusion problems with a truncated interaction horizon.
Another difficulty in the solution of nonlocal problems is that the discrete systems may be ill-conditioned, which complicates the application of iterative solvers. Thus, the second contribution of this work is the construction and study of a domain decomposition type solver that is inspired by substructuring methods for differential equations. The numerical results are based on the abstract framework of nonlocal subdivisions, which is introduced here and which can serve as a guideline for general nonlocal domain decomposition methods.
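As a toy illustration of why such discretizations are computationally demanding, consider a one-dimensional collocation-type sketch with a constant kernel and truncated horizon (this is not the finite element Python code of the thesis; all names are ours):

```python
# Toy 1D truncated nonlocal diffusion operator on a uniform grid,
#   (L u)(x_i) = gamma * h * sum_{0 < |x_j - x_i| <= delta} (u(x_j) - u(x_i)),
# assembled as a dense matrix via a simple Riemann-sum quadrature.
import numpy as np

def assemble_nonlocal_1d(n, delta, gamma=1.0):
    """Matrix of the truncated nonlocal operator on n grid points in [0, 1]."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and abs(x[j] - x[i]) <= delta:
                A[i, j] += gamma * h   # interaction with neighbour j
                A[i, i] -= gamma * h   # mass leaving node i
    return A
```

The number of nonzeros per row grows with the ratio of the horizon delta to the mesh size h, which is precisely the source of the assembly and solver costs mentioned above.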
This thesis is concerned with two classes of optimization problems which stem mainly from statistics: clustering problems and cardinality-constrained optimization problems. We are particularly interested in the development of computational techniques to exactly or heuristically solve instances of these two classes of optimization problems.
The minimum sum-of-squares clustering (MSSC) problem is widely used to find clusters within a set of data points. The problem is also known as the $k$-means problem, since the most prominent heuristic to compute a feasible point of this optimization problem is the $k$-means method. In many modern applications, however, the clustering suffers from uncertain input data due to, e.g., unstructured measurement errors. The clustering result then represents a clustering of the erroneous measurements instead of retrieving the true underlying clustering structure. We address this issue by applying robust optimization techniques: we derive the strictly and $\Gamma$-robust counterparts of the MSSC problem, which are as challenging to solve as the original model. Moreover, we develop alternating direction methods to quickly compute feasible points of good quality. Our experiments reveal that the more conservative strictly robust model consistently provides better clustering solutions than the nominal and the less conservative $\Gamma$-robust models.
In the context of clustering problems, however, using only a heuristic solution comes with severe disadvantages regarding the interpretation of the clustering. This motivates us to study globally optimal algorithms for the MSSC problem. Although some algorithms have already been proposed for this problem, it is still far from being “practically solved”. Therefore, we propose mixed-integer programming techniques, which are mainly based on geometric ideas and which can be incorporated in a branch-and-cut based algorithm tailored to the MSSC problem. Our numerical experiments show that these techniques significantly improve the solution process of a state-of-the-art MINLP solver when applied to the problem.
We then turn to the study of cardinality-constrained optimization problems. We consider two famous problem instances of this class: sparse portfolio optimization and sparse regression problems. In many modern applications, it is common to consider problems with thousands of variables. Therefore, globally optimal algorithms are not always computationally viable and the study of sophisticated heuristics is very desirable. Since these problems have a discrete-continuous structure, decomposition methods are particularly well suited. We apply a penalty alternating direction method that exploits this structure and provides very good feasible points in a reasonable amount of time. Our computational study shows that our methods are competitive with state-of-the-art solvers and heuristics.
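For reference, the $k$-means heuristic mentioned above can be written as a minimal numpy sketch (nominal, non-robust version; illustrative only, not the code of the thesis):

```python
# Minimal sketch of the k-means (Lloyd) heuristic for the MSSC problem:
# alternate between assigning points to their nearest centre and recomputing
# each centre as the mean of its cluster.
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest centre per point
        d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # update step: centre = mean of assigned points (keep old if empty)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return centres, labels
```

Each of the two alternating steps is itself an optimization over one block of variables with the other fixed, which is the same discrete-continuous structure that the alternating direction methods above exploit.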
This dissertation deals with consistent estimates in household surveys. Household surveys are often drawn via cluster sampling, with households sampled at the first stage and persons selected at the second stage. The collected data provide information for estimation at both the person and the household level. Consistent estimates are desirable in the sense that the estimated household-level totals should coincide with the estimated totals obtained at the person level. Current practice in statistical offices is to use integrated weighting: consistent estimates are guaranteed by assigning equal weights to all persons within a household and to the household itself. However, due to the forced equality of weights, the individual patterns of persons are lost and the heterogeneity within households is not taken into account. In order to avoid the negative consequences of integrated weighting, we propose alternative weighting methods in the first part of this dissertation that ensure both consistent estimates and individual person weights within a household. The underlying idea is to limit the consistency conditions to variables that emerge in both the personal and household data sets. These common variables are included in the person-level and household-level estimators as additional auxiliary variables. This achieves consistency more directly, and only for the relevant variables, rather than indirectly by forcing equal weights on all persons within a household. Further decisive advantages of the proposed alternative weighting methods are that original individual auxiliaries are utilized rather than constructed aggregated ones, and that the variable selection process is more flexible because different auxiliary variables can be incorporated in the person-level estimator than in the household-level estimator.
In the second part of this dissertation, the variances of a person-level GREG estimator and an integrated estimator are compared in order to quantify the effects of the consistency requirements in the integrated weighting approach. One of the challenges is that the estimators to be compared are of different dimensions. The proposed solution is to decompose the variance of the integrated estimator into the variance of a reduced GREG estimator, whose underlying model is of the same dimensions as the person-level GREG estimator, plus a constructed term that captures the effects disregarded by the reduced model. Subsequently, further fields of application for the derived decomposition are proposed, such as the variable selection process in econometrics or survey statistics.
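For orientation, the GREG estimator referred to above has the standard textbook form (notation ours, added as a hedged reference, not the thesis' specific specification):

```latex
% Standard form of the generalized regression (GREG) estimator: the
% Horvitz–Thompson estimate of the y-total, adjusted by the known auxiliary
% totals t_x and an estimated regression coefficient \hat{B}.
\[
  \hat{t}_{y,\mathrm{GREG}}
  \;=\; \hat{t}_{y,\pi}
  \;+\; \bigl(\mathbf{t}_{x} - \hat{\mathbf{t}}_{x,\pi}\bigr)^{\top}\hat{\mathbf{B}},
  \qquad
  \hat{t}_{y,\pi} \;=\; \sum_{k \in s} \frac{y_k}{\pi_k},
\]
% where s is the sample and \pi_k the inclusion probability of unit k.
```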
In this thesis, we investigate the quantization problem of Gaussian measures on Banach spaces by means of constructive methods. That is, for a random variable X and a natural number N, we are searching for those N elements of the underlying Banach space which give the best approximation to X in the average sense. We particularly focus on centered Gaussians on the space of continuous functions on [0,1] equipped with the supremum norm, since in that case all known methods failed to achieve the optimal quantization rate for important Gaussian processes. By means of spline approximations and a scheme based on best approximations in the sense of the Kolmogorov n-width, we were able to attain the optimal rate of convergence to zero for these quantization problems. Moreover, we established a new upper bound for the quantization error which is based on a very simple criterion, the modulus of smoothness of the covariance function. Finally, we explicitly constructed such quantizers numerically.
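The quantity behind this problem, stated for orientation in its standard form (r-th mean error; notation ours):

```latex
% Optimal N-level quantization error of a random variable X with values in a
% Banach space (E, ||.||): the best average approximation of X by a codebook
% C of at most N points,
\[
  e_{N,r}(X) \;=\; \inf_{\substack{C \subset E \\ |C| \le N}}
  \Bigl( \mathbb{E}\, \min_{c \in C} \lVert X - c \rVert^{r} \Bigr)^{1/r},
\]
% here with E = C[0,1] equipped with the supremum norm.
```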
Many real-life phenomena, such as computer systems, communication networks, manufacturing systems, supermarket checkout lines as well as structural military systems, can be represented by means of queueing models. In queueing models, a controller may considerably improve the system's performance by reducing queue lengths, increasing the throughput or diminishing the overhead, whereas in the absence of a controller the system behavior may become quite erratic, exhibiting periods of high load and long queues followed by periods during which the servers remain idle. The theoretical foundations of controlled queueing systems are laid in the theory of Markov, semi-Markov and semi-regenerative decision processes. In this thesis, the essential work consists in designing controlled queueing models and investigating their optimal control properties for application in modern telecommunication systems, which have to satisfy growing demands for quality of service (QoS). For two types of optimization criteria (a model without penalties and one with set-up costs), a class of controlled queueing systems is defined. The general queue of this class is characterized by a Markov Additive Arrival Process and heterogeneous Phase-Type service time distributions. We show that for these queueing systems the structural properties of optimal control policies, e.g. monotonicity properties and threshold structure, are preserved. Moreover, we show that these systems possess specific properties, e.g. the dependence of optimal policies on the arrival and service statistics. In order to use controlled stochastic models in practice, a quick and effective method to find optimal policies is necessary. We present an iteration algorithm which can be successfully used to find an optimal solution in the case of a large state space.
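As a toy illustration of such an iteration algorithm, consider value iteration for a uniformized, discounted M/M/1-type queue with a choice between a slow and a fast service rate. This is far simpler than the Markov Additive/Phase-Type model of the thesis, and all parameters below are invented for the sketch:

```python
# Toy value iteration for a controlled M/M/1 queue: in each state (queue
# length x) choose between a slow, free service rate and a fast, costly one.
# The continuous-time chain is uniformized with constant Lam and discounted.
import numpy as np

lam, mus, use_cost = 0.5, (0.6, 1.2), (0.0, 2.0)  # arrivals; service rates; their costs
N, hold, gamma = 50, 1.0, 0.95                    # buffer size, holding cost, discount
Lam = lam + max(mus)                              # uniformization constant

V = np.zeros(N + 1)
for _ in range(100_000):
    Q = np.empty((N + 1, len(mus)))
    for x in range(N + 1):
        for a, (mu, c) in enumerate(zip(mus, use_cost)):
            p_up = lam / Lam                      # arrival (blocked at x = N)
            p_dn = mu / Lam if x > 0 else 0.0     # service completion
            p_st = 1.0 - p_up - p_dn              # fictitious self-transition
            Q[x, a] = (hold * x + c
                       + gamma * (p_up * V[min(x + 1, N)]
                                  + p_dn * V[max(x - 1, 0)]
                                  + p_st * V[x]))
    Vnew = Q.min(axis=1)
    if np.max(np.abs(Vnew - V)) < 1e-8:
        break
    V = Vnew
policy = Q.argmin(axis=1)
```

In such examples the minimizing action typically switches from the slow to the fast service rate beyond a certain queue length, mirroring the threshold structure established above.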
Many combinatorial optimization problems on finite graphs can be formulated as conic convex programs, e.g. the stable set problem, the maximum clique problem or the maximum cut problem. Especially NP-hard problems can be written as copositive programs. In this case the complexity is moved entirely into the copositivity constraint.
Copositive programming is a rather new topic in optimization. It deals with optimization over the so-called copositive cone, a superset of the positive semidefinite cone, for which the quadratic form x^T Ax has to be nonnegative only for nonnegative vectors x. Its dual cone is the cone of completely positive matrices, which comprises all matrices that can be decomposed as a sum of outer products of nonnegative vectors.
The related optimization problems are linear programs with matrix variables and cone constraints.
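In symbols (standard notation, added here for orientation), the two cones and the generic conic program read:

```latex
% Copositive cone and its dual, the completely positive cone:
\[
  \mathcal{COP}_n \;=\; \{\, A \in \mathcal{S}^n : x^{\top} A x \ge 0 \ \ \forall\, x \ge 0 \,\},
  \qquad
  \mathcal{CP}_n \;=\; \operatorname{conv}\{\, x x^{\top} : x \in \mathbb{R}^n_{\ge 0} \,\}.
\]
% Generic copositive program: a linear objective and linear constraints in a
% matrix variable, with the complexity hidden in the cone constraint.
\[
  \min_{X} \ \langle C, X \rangle
  \quad \text{s.t.} \quad \langle A_i, X \rangle = b_i \ \ (i = 1,\dots,m),
  \qquad X \in \mathcal{COP}_n .
\]
```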
However, some optimization problems can be formulated as combinatorial problems on infinite graphs. For example, the kissing number problem can be formulated as a stable set problem on a circle.
In this thesis we will discuss how the theory of copositive optimization can be lifted up to infinite dimension. For some special cases we will give applications in combinatorial optimization.
There is considerable evidence for an association between chronic dysregulation of the hypothalamus-pituitary-adrenal (HPA) axis, atrophy of the hippocampus (HC), and cognitive and mood changes in clinical populations and in aging. The present thesis investigated this relationship in young healthy male subjects. Special emphasis was put on measures of HC volume and function derived from structural and functional magnetic resonance imaging (MRI). Higher cortisol levels after awakening were observed in subjects with higher levels of depressive symptomatology. Larger HC volume was associated with higher cortisol levels after awakening and in response to acute stress, whereas cognitive performance was impaired in subjects with larger HC volumes. Hippocampal activation during picture encoding was reduced after stress induction, and positive associations between activation and cognitive performance observed before stress were no longer present afterwards. The present findings underscore the importance of structural and functional brain imaging for psychoneuroendocrinological research. The investigation of the association between cortisol levels and hippocampal integrity in young healthy subjects elicited unexpected results and adds to the understanding of HPA dysfunction and HC atrophy in clinical and aged populations.
The glucocorticoid (GC) cortisol, the main mediator of the hypothalamic-pituitary-adrenal axis, has many implications in metabolism, stress response and the immune system. GC function is mediated mainly via the glucocorticoid receptor (GR), which binds as a transcription factor to glucocorticoid response elements (GREs). GCs are strong immunosuppressants and are used to treat inflammatory and autoimmune diseases. Long-term usage can lead to several irreversible side effects, which makes improved understanding indispensable and warrants the adaptation of current drugs. Several large-scale gene expression studies have been performed to gain insight into GC signalling. Nevertheless, studies at the proteomic level had not yet been carried out. The effects of cortisol on monocytes and macrophages were studied in the THP-1 cell line using 2D fluorescence difference gel electrophoresis (2D DIGE) combined with MALDI-TOF mass spectrometry. More than 50 cortisol-modulated proteins were identified, which belonged to five functional groups: cytoskeleton, chaperones, immune response, metabolism, and transcription/translation. Multiple GREs were found in the promoters of their corresponding genes (+10 kb/-0.2 kb promoter regions including all alternative promoters available within the Database for Transcription Start Sites (DBTSS)). High-quality GREs were observed mainly in cortisol-modulated genes, corroborating the proteomics results. Differential regulation of selected immune response related proteins was confirmed by qPCR and immunoblotting. All immune response related proteins (MX1, IFIT3, SYWC, STAT3, PMSE2, PRS7) which were induced by LPS were suppressed by cortisol and belong mainly to classical interferon target genes. MX1 was selected for detailed expression analysis, since new isoforms had been identified by proteomics. FKBP51, known to be induced by cortisol, was identified as the most strongly differentially expressed protein and contained the highest number of strict GREs. Genomic analysis of five alternative FKBP5 promoter regions suggested GC inducibility of all transcripts. 2D DIGE combined with 2D immunoblotting revealed the existence of several previously unknown FKBP51 isoforms, possibly resulting from these transcripts. Additionally, multiple post-translational modifications were found, which could lead to different subcellular localization in monocytes and macrophages, as seen by confocal microscopy. Similar results were obtained for the different cellular subsets of human peripheral blood mononuclear cells (PBMCs). FKBP51 was found to be constitutively phosphorylated, with up to 8 phosphosites in CD19+ B lymphocytes. Differential co-immunoprecipitation for cytoplasm and nucleus allowed us to identify new potential interaction partners. Nuclear FKBP51 was found to interact with myosin 9, cytosolic FKBP51 with TRIM21 (synonym: Ro52, Sjögren's syndrome antigen). The GR was found to interact with THOC4 and YB1, two proteins implicated in mRNA processing and transcriptional regulation. We also applied proteomics to study rapid non-genomic effects of acute stress in a rat model. The nuclear proteome of the thymus was investigated after 15 min of restraint stress and compared to the non-stressed control. Most of the identified proteins were transcriptional regulators found to be enriched in the nucleus, probably to assist gene expression in an appropriate manner. The proteomic approach allowed us to further understand the cortisol-mediated response in monocytes/macrophages.
We identified several new target proteins, but we also found new protein variants and post-translational modifications which need further investigation. Detailed study of FKBP51 and the GR indicated a complex regulatory network, which opened a new field of research. We identified new variants of the anti-viral response protein MX1, displaying differential expression and phosphorylation in the cellular compartments. Further, proteomics allowed us to follow the very early effects of acute stress, which occur prior to gene expression. The nuclear thymocyte proteome of restraint-stressed rats revealed an active preparation for subsequent gene expression. Proteomics was successfully applied to study differential protein expression, to identify new protein variants and phosphorylation events, and to follow protein translocation. New aspects for future research in the field of cortisol-mediated immune modulation have been added.
Surveys play a major role in studying social and behavioral phenomena that are difficult to observe. Survey data provide insights into the determinants and consequences of human behavior and social interactions. Many domains rely on high-quality survey data for decision making and policy implementation, including politics, health, business, and the social sciences. Given a certain research question in a specific context, finding the most appropriate survey design to ensure data quality and keep fieldwork costs low at the same time is a difficult task. The aim of examining survey research methodology is to provide the best evidence to estimate the costs and errors of different survey design options. The goal of this thesis is to support and optimize the accumulation and sustainable use of evidence in survey methodology in four steps:
(1) Identifying the gaps in meta-analytic evidence in survey methodology by a systematic review of the existing evidence along the dimensions of a central framework in the field
(2) Filling in these gaps with two meta-analyses in the field of survey methodology, one on response rates in psychological online surveys, the other on panel conditioning effects for sensitive items
(3) Assessing the robustness and sufficiency of the results of the two meta-analyses
(4) Proposing a publication format for the accumulation and dissemination of meta-analytic evidence
Data fusion is becoming increasingly relevant in official statistics. The aim of a data fusion is to combine two or more data sources using statistical methods in order to be able to analyse different characteristics that were not jointly observed in one data source. Record linkage of official data sources using unique identifiers is often not possible due to methodological and legal restrictions. Appropriate data fusion methods are therefore of central importance in order to use the diverse data sources of official statistics more effectively and to be able to jointly analyse different characteristics. However, the literature lacks comprehensive evaluations of which fusion approaches provide promising results for which data constellations. Therefore, the central aim of this thesis is to evaluate a concrete set of candidate fusion algorithms, comprising classical imputation approaches as well as statistical and machine learning methods, in selected data constellations.
To specify and identify these data contexts, data and imputation-related scenario types of a data fusion are introduced: Explicit scenarios, implicit scenarios and imputation scenarios. From these three scenario types, fusion scenarios that are particularly relevant for official statistics are selected as the basis for the simulations and evaluations. The explicit scenarios are the fulfilment or violation of the Conditional Independence Assumption (CIA) and varying sample sizes of the data to be matched. Both aspects are likely to have a direct, that is, explicit, effect on the performance of different fusion methods. The summed sample size of the data sources to be fused and the scale level of the variable to be imputed are considered as implicit scenarios. Both aspects suggest or exclude the applicability of certain fusion methods due to the nature of the data. The univariate or simultaneous, multivariate imputation solution and the imputation of artificially generated or previously observed values in the case of metric characteristics serve as imputation scenarios.
With regard to the concrete set of candidate fusion algorithms, three classical imputation approaches are considered: Distance Hot Deck (DHD), the Regression Model (RM) and Predictive Mean Matching (PMM). With Decision Trees (DT) and Random Forest (RF), two prominent tree-based methods from the field of statistical learning are discussed in the context of data fusion. However, such prediction methods aim to predict individual values as accurately as possible, which can clash with the primary objective of data fusion, namely the reproduction of joint distributions. In addition, DT and RF only comprise univariate imputation solutions and, in the case of metric variables, artificially generated values are imputed instead of real observed values. Therefore, Predictive Value Matching (PVM) is introduced as a new, statistical-learning-based nearest neighbour method, which could overcome the distributional disadvantages of DT and RF, offers a univariate and a multivariate imputation solution and, in addition, imputes real, previously observed values for metric characteristics. All prediction methods can form the basis of the new PVM approach. In this thesis, PVM based on Decision Trees (PVM-DT) and Random Forest (PVM-RF) is considered.
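To make the matching idea concrete, the following is a minimal sketch of a PVM-style imputation using a random forest; the data, sizes and variable names are illustrative assumptions, not the implementation evaluated in the thesis.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Donor data A observes the common variables X and the target Z;
# recipient data B observes only X.
n_a, n_b, p = 500, 300, 4
X_a = rng.normal(size=(n_a, p))
Z_a = X_a @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(scale=0.5, size=n_a)
X_b = rng.normal(size=(n_b, p))

# 1) Fit a prediction model for Z on the donor data.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_a, Z_a)

# 2) Predict Z for donors and recipients.
z_hat_a = rf.predict(X_a)
z_hat_b = rf.predict(X_b)

# 3) Impute for each recipient the observed Z of the donor whose
#    predicted value is closest (nearest neighbour on predictions).
nearest = np.abs(z_hat_b[:, None] - z_hat_a[None, :]).argmin(axis=1)
Z_b_imputed = Z_a[nearest]

Because the imputed values are real donor observations matched on predictions, rather than the predictions themselves, such a scheme aims to preserve the distribution of Z better than direct prediction, which is the motivation given above.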
The underlying fusion methods are investigated in comprehensive simulations and evaluations. The evaluation of the various data fusion techniques focusses on the selected fusion scenarios. The basis for this is formed by two concrete and current use cases of data fusion in official statistics, the fusion of EU-SILC and the Household Budget Survey on the one hand and of the Tax Statistics and the Microcensus on the other. Both use cases show significant differences with regard to different fusion scenarios and thus serve the purpose of covering a variety of data constellations. Simulation designs are developed from both use cases, whereby the explicit scenarios in particular are incorporated into the simulations.
The results show that PVM-RF in particular is a promising and universal fusion approach under compliance with the CIA. This is because PVM-RF provides satisfactory results for both categorical and metric variables to be imputed and also offers a univariate and multivariate imputation solution, regardless of the scale level. PMM also represents an adequate fusion method, but only in relation to metric characteristics. The results also imply that the application of statistical learning methods is both an opportunity and a risk. In the case of CIA violation, potential correlation-related exaggeration effects of DT and RF, and in some cases also of RM, can be useful. In contrast, the other methods induce poor results if the CIA is violated. However, if the CIA is fulfilled, there is a risk that the prediction methods RM, DT and RF will overestimate correlations. The size ratios of the studies to be fused in turn have a rather minor influence on the performance of fusion methods. This is an important indication that the larger dataset does not necessarily have to serve as a donor study, as was previously the case.
The results of the simulations and evaluations provide concrete implications as to which data fusion methods should be used and considered under the selected data and imputation constellations. Science in general and official statistics in particular benefit from these implications. This is because they provide important indications for future data fusion projects in order to assess which specific data fusion method could provide adequate results along the data constellations analysed in this thesis. Furthermore, with PVM this thesis offers a promising methodological innovation for future data fusions and for imputation problems in general.
In order to classify smooth foliated manifolds, which are smooth manifolds equipped with a smooth foliation, we introduce the de Rham cohomologies of smooth foliated manifolds. These cohomologies are built in a similar way as the de Rham cohomologies of smooth manifolds. We develop some tools to compute these cohomologies. For example, we prove a Mayer-Vietoris theorem for foliated de Rham cohomology and show that these cohomologies are invariant under integrable homotopy. A generalization of a known Künneth formula, which relates the cohomologies of a product foliation with those of its factors, is discussed. In particular, this involves a splitting theory of sequences between Fréchet spaces and a theory of projective spectra. We also prove that the foliated de Rham cohomology is isomorphic to the Čech-de Rham cohomology and the Čech cohomology of leafwise constant functions of an underlying so-called good cover.
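For orientation, the foliated Mayer-Vietoris theorem mentioned above parallels the classical sequence in de Rham cohomology. In assumed notation (not taken from the thesis), with H_F denoting the foliated de Rham cohomology and M covered by open sets U and V, the sequence has the familiar shape:

\cdots \longrightarrow H^k_{\mathcal{F}}(M) \longrightarrow H^k_{\mathcal{F}}(U) \oplus H^k_{\mathcal{F}}(V) \longrightarrow H^k_{\mathcal{F}}(U \cap V) \longrightarrow H^{k+1}_{\mathcal{F}}(M) \longrightarrow \cdots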
This dissertation investigates corporate acquisition decisions that represent important corporate development activities for family and non-family firms. The main research objective is to generate insights into the subjective decision-making behavior of corporate decision-makers from family and non-family firms and their weighting of M&A decision criteria during the early pre-acquisition target screening and selection process. The main methodology chosen for the investigation of M&A decision-making preferences and the weighting of M&A decision criteria is a choice-based conjoint analysis. The overall sample consists of 304 decision-makers from 264 private and public family and non-family firms, mainly from Germany and the DACH region. In the first empirical part of the dissertation, the relative importance of strategic, organizational and financial M&A decision criteria for corporate acquirers in acquisition target screening is investigated. In addition, the author uses a cluster analysis to explore whether distinct decision-making patterns exist in acquisition target screening. In the second empirical part, the dissertation explores whether there are differences in investment preferences in acquisition target screening between family and non-family firms and within the group of family firms. With regard to the heterogeneity of family firms, the dissertation generated insights into how family-firm-specific characteristics like family management, the generational stage of the firm and non-economic goals such as transgenerational control intention influence the weighting of different M&A decision criteria in acquisition target screening. The dissertation contributes to strategic management research, specifically to the M&A literature, and to family business research. The results generate insights into the weighting of M&A decision criteria and facilitate a better understanding of corporate M&A decisions in family and non-family firms. The findings show that decision-making preferences (hence the weighting of M&A decision criteria) are influenced by characteristics of the individual decision-maker, the firm and the environment in which the firm operates.
Representation Learning techniques play a crucial role in a wide variety of Deep Learning applications. From Language Generation to Link Prediction on Graphs, learned numerical vector representations often build the foundation for numerous downstream tasks.
In Natural Language Processing, word embeddings are contextualized and depend on their current context. This useful property reflects how words can have different meanings based on their neighboring words.
In Knowledge Graph Embedding (KGE) approaches, static vector representations are still the dominant approach. While this is sufficient for applications where the underlying Knowledge Graph (KG) mainly stores static information, it becomes a disadvantage when dynamic entity behavior needs to be modelled.
To address this issue, KGE approaches would need to model dynamic entities by incorporating situational and sequential context into the vector representations of entities. Analogous to contextualised word embeddings, this would allow entity embeddings to change depending on their history and current situational factors.
Therefore, this thesis provides a description of how to transform static KGE approaches into contextualised dynamic approaches and of how the specific characteristics of different dynamic scenarios need to be taken into consideration.
As a starting point, we conduct empirical studies that attempt to integrate sequential and situational context into static KG embeddings and investigate the limitations of the different approaches. In a second step, the identified limitations serve as guidance for developing a framework that enables KG embeddings to become truly dynamic, taking into account both the current situation and the past interactions of an entity. The two main contributions in this step are the introduction of the temporally contextualized Knowledge Graph formalism and the corresponding RETRA framework which realizes the contextualisation of entity embeddings.
Finally, we demonstrate how situational contextualisation can be realized even in static environments, where all object entities are passive at all times.
For this, we introduce a novel task that requires the combination of multiple context modalities and their integration with a KG based view on entity behavior.
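To illustrate the general contrast between static and contextualised entity embeddings (a toy sketch with illustrative names, not the RETRA framework itself), a TransE-style score can be computed once with the static head embedding and once with a head embedding shifted by recent interaction context:

import numpy as np

rng = np.random.default_rng(1)
dim = 16
h, r, t = rng.normal(size=(3, dim))        # head, relation, tail embeddings

def transe_score(h_vec, r_vec, t_vec):
    # Static TransE plausibility: -||h + r - t||
    return -np.linalg.norm(h_vec + r_vec - t_vec)

# Sequential/situational context: embeddings of recent events of the entity.
context = rng.normal(size=(5, dim))
W = rng.normal(scale=0.1, size=(dim, dim)) # mixing matrix (learned in practice)

def contextualised_head(h_vec, context, W):
    # Shift the static embedding by a projection of the mean context.
    return h_vec + W @ context.mean(axis=0)

static_score = transe_score(h, r, t)
dynamic_score = transe_score(contextualised_head(h, context, W), r, t)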
Estimation, and therefore prediction -- both in traditional statistics and machine learning -- often encounters problems when performed on survey data, i.e. on data gathered from a random subset of a finite population. In addition to the stochastic generation of the data in the finite population (based on a superpopulation model), the subsetting represents a second randomization process and adds further noise to the estimation. The character and impact of the additional noise on the estimation procedure depend on the specific probability law for subsetting, i.e. the survey design. Especially when the design is complex or the population data is not generated by a Gaussian distribution, established methods must be re-thought. Both phenomena can be found in business surveys, and their combined occurrence poses challenges to the estimation.
This work introduces selected topics linked to relevant use cases of business surveys and discusses the role of survey design therein. First, consider micro-econometrics using business surveys: regression analysis under the peculiarities of non-normal data and complex survey design is discussed. The focus lies on mixed models, which are able to capture unobserved heterogeneity, e.g. between economic sectors, when the dependent variable is not conditionally normally distributed. An algorithm for survey-weighted model estimation in this setting is provided and applied to business data.
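As a toy illustration of how design weights enter estimation, the following sketch fits a survey-weighted linear model in closed form; it is a deliberately simplified linear instance with synthetic data, not the survey-weighted mixed-model algorithm developed in the thesis.

import numpy as np

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.3, size=n)
w = rng.uniform(1.0, 10.0, size=n)                     # survey design weights

# Weighted least squares: beta = (X' W X)^{-1} X' W y
XtW = X.T * w
beta_hat = np.linalg.solve(XtW @ X, XtW @ y)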
Second, in official statistics, the classical sampling randomization and estimators for finite population totals are relevant. The variance estimation of estimators for (finite) population totals plays a major role in this framework in order to decide on the reliability of survey data. When the survey design is complex and the number of variables for which an estimated total is required is large, generalized variance functions are popular for variance estimation. They make it possible to circumvent cumbersome theoretical design-based variance formulae or computer-intensive resampling. A synthesis of the superpopulation-based motivation and the survey framework is elaborated. To the author's knowledge, such a synthesis is studied for the first time both theoretically and empirically.
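One common GVF form models the relative variance of an estimated total as a function of the total itself, e.g. relvar(Y_hat) = a + b / Y_hat. The sketch below fits this curve across many items with synthetic data; it is an illustration of the general GVF idea only, not the synthesis elaborated in the thesis.

import numpy as np

rng = np.random.default_rng(3)
totals = rng.uniform(1e3, 1e6, size=50)        # estimated totals Y_hat
relvar = 0.001 + 50.0 / totals + rng.normal(scale=1e-5, size=50)

# Linear least squares in the regressor 1 / Y_hat.
A = np.column_stack([np.ones_like(totals), 1.0 / totals])
(a, b), *_ = np.linalg.lstsq(A, relvar, rcond=None)

def gvf_variance(y_hat):
    # Approximate design variance of a new total via the fitted GVF.
    return (a + b / y_hat) * y_hat**2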
Third, the self-organizing map -- an unsupervised machine learning algorithm for data visualization, clustering and even probability estimation -- is introduced. A link to Markov random fields is outlined, which to the author's knowledge has not yet been established, and a density estimator is derived. The latter is evaluated in terms of a Monte-Carlo simulation and then applied to real world business data.
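The core of the self-organizing map is a simple competitive learning loop: find the best matching unit for a sample and pull it and its grid neighbours towards the sample. A minimal sketch on synthetic 2D data follows (illustrative parameters, not the thesis's implementation):

import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(size=(1000, 2))

grid = 8                                       # 8 x 8 map
codebook = rng.normal(size=(grid, grid, 2))
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

n_iter = 5000
for it in range(n_iter):
    x = data[rng.integers(len(data))]
    frac = it / n_iter
    lr = 0.5 * (1.0 - frac)                    # decaying learning rate
    sigma = 0.5 + (grid / 2) * (1.0 - frac)    # decaying neighbourhood radius

    # Best matching unit (BMU): the codebook vector closest to x.
    d = ((codebook - x) ** 2).sum(axis=-1)
    bmu = np.unravel_index(d.argmin(), d.shape)

    # Gaussian neighbourhood on the grid, centred at the BMU.
    g = np.exp(-((coords - np.array(bmu)) ** 2).sum(axis=-1) / (2 * sigma**2))
    codebook += lr * g[..., None] * (x - codebook)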
This dissertation looked at both design-based and model-based estimation for rare and clustered populations using the idea of the ACS design. The ACS design (Thompson, 2012, p. 319) starts with an initial sample that is selected by a probability sampling method. If any of the selected units meets a pre-specified condition, its neighboring units are added to the sample and observed. If any of the added units meets the pre-specified condition, its neighboring units are further added to the sample and observed. The procedure continues until there are no more units that meet the pre-specified condition. In this dissertation, the pre-specified condition is the detection of at least one animal in a selected unit. In the design-based estimation, three estimators were proposed under three specific design settings. The first design was a stratified strip ACS design suitable for aerial or ship surveys. This was a case study in estimating population totals of African elephants. In this case, units/quadrats were observed only once during an aerial survey. The Des Raj estimator (Raj, 1956) was modified to obtain an unbiased estimate of the population total. The design was evaluated using simulated data with different levels of rarity and clusteredness. The design was also evaluated on real data on African elephants obtained from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013. In this study, the order in which the samples were observed was maintained. Re-ordering the samples by making use of Murthy's estimator (Murthy, 1957) can produce more efficient estimates, hence a possible extension of this study. The computational cost resulting from the n! permutations in Murthy's estimator, however, needs to be taken into consideration. The second setting was when there exists an auxiliary variable that is negatively correlated with the study variable. Murthy's estimator (Murthy, 1964) was modified. Situations in which the modified estimator is preferable were given both in theory and in simulations using simulated data and two real data sets. The study variables for the real data sets were the distribution and counts of oryx and wildebeest, obtained from an aerial census conducted in parts of Kenya and Tanzania in October (dry season) 2013. Temperature was the auxiliary variable for both study variables. Temperature data were obtained from the R package raster. The modified estimator provided more efficient estimates with lower bias compared to the original Murthy's estimator (Murthy, 1964). The modified estimator was also more efficient compared to the modified HH and the modified HT estimators of Thompson (2012, p. 319). In this study, one auxiliary variable is considered. A fruitful area for future research would be to incorporate multi-auxiliary information in the estimation phase of an ACS design. This could, in principle, be done by using, for instance, a multivariate extension of the product estimator (Singh, 1967) or by using the generalized regression estimator (Särndal et al., 1992). The third case under design-based estimation studied the joint use of the stopping rule (Gattone and Di Battista, 2011) and of sampling without replacement of clusters (Dryver and Thompson, 2007). Each of these two methods was proposed to reduce the sampling cost, though the use of the stopping rule results in biased estimates. Despite this bias, the new estimator resulted in higher efficiency gains in comparison to the without-replacement-of-clusters design.
It was also more efficient compared to the stratified design, which is known to reduce the final sample size when networks are truncated at stratum boundaries. The above evaluation was based on simulated and real data. The real data were the distribution and counts of hartebeest, elephants and oryx obtained in the same census as above. The bias introduced by the stopping rule has not been evaluated analytically. This may not be straightforward, since the truncated network formed depends on the initial unit sampled (Gattone et al., 2016a). This, and the order of the bias, however, deserve further investigation, as they may help in understanding the effect of the increase in the initial sample size, together with the population characteristics, on the efficiency of the proposed estimator. Chapter four modeled data that were obtained using the stratified strip ACS (as described in sub-section (3.1)). This was an extension of the model of Rapley and Welsh (2008) through modeling data obtained from a different design, the introduction of an auxiliary variable and the use of the without-replacement-of-clusters mechanism. Ideally, model-based estimation does not depend on the design, or rather on how the sample was obtained. This is, however, not the case if the design is informative, such as the ACS design. In this case, the procedure that was used to obtain the sample was incorporated in the model. Both model-based and design-based simulations were conducted using artificial and real data. The study and the auxiliary variables for the real data were the distribution and counts of elephants collected during aerial censuses in parts of Kenya and Tanzania in October (dry season) and April (wet season) 2013, respectively. Areas of possible future research include predicting the population total of African elephants in all parks in Kenya. This can be achieved in an economical and reliable way by using the theory of SAE. Chapter five compared the different proposed strategies using the elephant data. Again, the study variable was the elephant data from October (dry season) 2013 and the auxiliary variable was the elephant data from April (wet season) 2013. The results show that the choice of a particular strategy depends on the characteristics of the population under study and on the level and direction of the correlation between the study and the auxiliary variable (if present). One general area of the ACS design that still lags behind is the implementation of the design in the field, especially for animal populations. This is partly attributable to the challenges associated with field implementation, some of which were discussed in section 2.3. Green et al. (2010), however, provide new insights into undertaking the ACS design during an aerial survey, such as how the aircraft should turn while surveying neighboring units. A key point throughout the dissertation is the reduction of cost during a survey, which can be seen in the reduction of the number of units in the final sample (through the use of the stopping rule, the use of stratification and the truncation of networks at stratum boundaries) and in ensuring that units are observed only once (by using the without-replacement-of-clusters sampling technique). The cost of surveying edge units is assumed to be low, in which case the efficiency of the ACS design relative to the non-adaptive design is achieved (Thompson and Collins, 2002). This is, however, not the case in aerial surveys, as the aircraft flies at constant speed and height (Norton-Griffiths, 1978).
Hence the cost of surveying an edge unit is the same as the cost of surveying a unit that meets the condition of interest. The without-replacement-of-clusters technique plays a greater role in reducing the cost of sampling in such surveys. Other key points that motivated the sections of the dissertation include gains in efficiency (in all sections) and the practicability of the designs in the specific settings. Even though the dissertation focused on animal populations, the methods can just as well be implemented for any population that is rare and clustered, such as in the study of forestry, plants, pollution, minerals and so on.
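As an illustration of the ACS mechanism described at the start of this abstract (an initial probability sample of cells, followed by adaptive addition of neighbouring cells whenever a sampled cell contains at least one animal), here is a toy sketch on a synthetic grid; all sizes and parameters are assumptions for illustration.

import numpy as np
from collections import deque

rng = np.random.default_rng(5)
side = 20
counts = np.zeros((side, side), dtype=int)
for _ in range(4):                              # a rare, clustered population
    r, c = rng.integers(2, side - 2, size=2)
    counts[r-1:r+2, c-1:c+2] = rng.integers(1, 10, size=(3, 3))

flat = rng.choice(side * side, size=15, replace=False)  # initial SRS of cells
initial = [(i // side, i % side) for i in flat]

sampled, queue = set(initial), deque(initial)
while queue:
    r, c = queue.popleft()
    if counts[r, c] > 0:                        # condition met: adapt
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nr < side and 0 <= nc < side and (nr, nc) not in sampled:
                sampled.add((nr, nc))
                queue.append((nr, nc))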
Official business surveys form the basis for national and regional business statistics and are thus of great importance for analysing the state and performance of the economy. However, both the heterogeneity of business data and their high dynamics pose a particular challenge to the feasibility of sampling and the quality of the resulting estimates. A widely used sampling frame for creating the design of an official business survey is an extract from an official business register. However, if this frame does not accurately represent the target population, frame errors arise. Amplified by the heterogeneity and dynamics of business populations, these errors can significantly affect the estimation quality and lead to inefficiencies and biases. This dissertation therefore deals with design-based methods for optimising business surveys with respect to different types of frame errors.
First, methods for adjusting the sampling design of business surveys are addressed. These approaches integrate auxiliary information about the expected structures of frame errors into the sampling design. The aim is to increase the number of sampled businesses that are subject to frame errors. The element-specific frame error probability is estimated based on auxiliary information about frame errors observed in previous samples. The approaches discussed consider different types of frame errors and can be incorporated into predefined designs with fixed strata.
As the second main pillar of this work, methods for adjusting weights to correct for frame errors during estimation are developed and investigated. As a result of frame errors, the assumptions under which the original design weights were determined based on the sampling design no longer hold. The developed methods correct the design weights taking into account the errors identified for sampled elements. Case-number-based reweighting approaches, on the one hand, attempt to reconstruct the unknown size of the individual strata in the target population. In the context of weight smoothing methods, on the other hand, design weights are modelled and smoothed as a function of target or auxiliary variables. This serves to avoid inefficiencies in the estimation due to highly scattering weights or weak correlations between weights and target variables. In addition, possibilities of correcting frame errors by calibration weighting are elaborated. Especially when the sampling frame shows over- and/or undercoverage, the inclusion of external auxiliary information can provide a significant improvement of the estimation quality. For those methods whose quality cannot be measured using standard procedures, a procedure for estimating the variance based on a rescaling bootstrap is proposed. This enables an assessment of the estimation quality when using the methods in practice.
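For the rescaling bootstrap mentioned above, one widely used variant (Rao-Wu, with m_h = n_h - 1 resamples per stratum) rescales the design weights by n_h/(n_h - 1) times the number of times each unit is drawn; the bootstrap variance of the weighted total then approximates the design variance. The following is a minimal sketch with synthetic strata, not the procedure proposed in this dissertation.

import numpy as np

rng = np.random.default_rng(6)
strata = {  # stratum -> (values y, design weights w)
    h: (rng.normal(loc=10.0 * h, size=n), np.full(n, 50.0))
    for h, n in enumerate([30, 50, 80], start=1)
}

B = 1000
totals = np.empty(B)
for b in range(B):
    tot = 0.0
    for y, w in strata.values():
        n_h = len(y)
        idx = rng.integers(0, n_h, size=n_h - 1)   # resample with replacement
        r = np.bincount(idx, minlength=n_h)        # times each unit is drawn
        tot += np.sum(w * (n_h / (n_h - 1)) * r * y)
    totals[b] = tot

var_boot = totals.var(ddof=1)                      # bootstrap variance estimate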
In the context of two extensive simulation studies, the methods presented in this dissertation are evaluated and compared with each other. First, in the environment of an experimental simulation, it is assessed which approaches are particularly suitable with regard to different data situations. In a second simulation study, which is based on the structural survey in the services sector, the applicability of the methods in practice is evaluated under realistic conditions.
Forest inventories provide significant monitoring information on forest health, biodiversity and resilience against disturbance, as well as on forest biomass and timber harvesting potential. For this purpose, modern inventories increasingly exploit the advantages of airborne laser scanning (ALS) and terrestrial laser scanning (TLS). Although tree crown detection and delineation using ALS can be seen as a mature discipline, the identification of individual stems is a rarely addressed task. In particular, the informative value of the stem attributes, especially the inclination characteristics, is hardly known. In addition, a lack of tools for the processing and fusion of forest-related data sources can be identified. The given thesis addresses these research gaps in four peer-reviewed papers, with a focus on the suitability of ALS data for the detection and analysis of tree stems. In addition to providing a novel post-processing strategy for geo-referencing forest inventory plots, the thesis could show that ALS-based stem detections are very reliable and that their positions are accurate. In particular, the stems have been shown to be suited to studying prevailing trunk inclination angles and orientations, whereby a species-specific down-slope inclination of the tree stems and a leeward orientation of conifers could be observed.
Social entrepreneurship is a successful activity for solving social problems and economic challenges. Social entrepreneurship uses for-profit industry techniques and tools to build financially sound businesses that provide nonprofit services. Social entrepreneurial activities also contribute to the achievement of the sustainable development goals. However, due to the complex, hybrid nature of the business, social entrepreneurial activities are typically supported by macro-level determinants. To expand our knowledge of how beneficial macro-level determinants can be, this work examines empirical evidence about the impact of macro-level determinants on social entrepreneurship. Another aim of this dissertation is to examine the impact at the micro level, as the growth ambitions of social and commercial entrepreneurs differ. Chapter 1 presents the introduction, containing the motivation for the research, the research question, and the structure of the work.
There is an ongoing debate about the origin and definition of social entrepreneurship, and the numerous phenomena of social entrepreneurship have been examined theoretically in the previous literature. To determine the common consensus on the topic, Chapter 2 presents the theoretical foundations and definition of social entrepreneurship. The literature shows that a variety of determinants at the micro and macro levels are essential for the emergence of social entrepreneurship as a distinctive business model (Hartog & Hoogendoorn, 2011; Stephan et al., 2015; Hoogendoorn, 2016). It is impossible to create a society based on a social mission without the support of micro- and macro-level determinants. This work examines the determinants and consequences of social entrepreneurship from different methodological perspectives. The theoretical foundations of the micro- and macro-level determinants influencing social entrepreneurial activities are discussed in Chapter 3. The purpose of replication in research is to confirm previously published results (Hubbard et al., 1998; Aguinis & Solarino, 2019). However, due to a lack of data, a lack of methodological transparency, reluctance to publish, and a lack of interest from researchers, replication of existing research is rarely promoted (Baker, 2016; Hedges & Schauer, 2019a). Promoting replication studies has been regularly emphasized in the business and management literature (Kerr et al., 2016; Camerer et al., 2016). However, studies that replicate reported results are rare in previous research (Burman et al., 2010; Ryan & Tipu, 2022). Based on the research of Köhler and Cortina (2019), an empirical study on this topic is carried out in Chapter 4 of this work.
Given this focus, researchers have published a large body of research on the impact of micro- and macro-level determinants on social inclusion, although it is still unclear whether these studies accurately reflect reality. It is important to provide conceptual underpinnings to the field through a reassessment of published results (Bettis et al., 2016). The results of this research make it abundantly clear that macro-level determinants support social entrepreneurship.
Since reproducibility is a crucial concern and requires attention, Chapter 5 considered the reproducibility of previous results, particularly on the topic of social entrepreneurship. We replicated the results of Stephan et al. (2015) to establish the trend of reproducibility and validate the specific conclusions they drew. The literal and constructive replication in the dissertation inspired us to explore technical replication research on social entrepreneurship. Chapter 6 evaluates the fundamental characteristics that have proven to be key factors in the growth of social ventures. The current debate reviews and references literature that has specifically focused on the development of social entrepreneurship. An empirical analysis of factors directly related to the ambitious growth of social entrepreneurship is also carried out.
Numerous social entrepreneurial groups have been studied concerning this association. Chapter 6 compares the growth ambitions of social and traditional (commercial) entrepreneurship as consequences at the micro level. This study examined many characteristics of social and commercial entrepreneurs' growth ambitions. Scholars have claimed to some extent that the growth of social entrepreneurship differs from commercial entrepreneurial activities due to differences in objectives (Lumpkin et al., 2013; Garrido-Skurkowicz et al., 2022). Qualitative research has been used in studies to support the evidence on related topics; Gupta et al. (2020), for instance, emphasized that research needs to focus on specific concepts of social entrepreneurship for the field to advance. Therefore, this study provides a quantitative, analysis-based assessment of facts and data. For this purpose, a data set from the Global Entrepreneurship Monitor (GEM) 2015 was used, covering 12,695 entrepreneurs from 38 countries. Furthermore, this work conducted a regression analysis to evaluate the influence of various characteristics of social and commercial entrepreneurship on economic growth in developing countries. Chapter 7 briefly explains future directions and practical/theoretical implications.
It has been the overall aim of this research work to assess the potential of hyperspectral remote sensing data for the determination of forest attributes relevant to forest ecosystem simulation modeling and forest inventory purposes. A number of approaches for the determination of structural and chemical attributes from hyperspectral remote sensing have been applied to the collected data sets. Many of the methods found in the literature had up to now only been applied to broadband multispectral data or to vegetation canopies other than forests, were reported to work on the leaf level or with modelled data, were not validated with ground truth data, or were not systematically compared to other methods. Attributes that describe the properties of the forest canopy and that are potentially open to remote sensing were identified, appropriate methods for their retrieval were implemented, and field, laboratory and image data (HyMap sensor) were acquired over a number of forest plots. The study on structural attributes compared statistical and physical approaches. In the statistical section, linear predictive models between vegetation indices derived from HyMap data and field measurements of structural forest stand attributes were systematically evaluated. The study demonstrates that for hyperspectral image data, linear regression models can be applied to quantify leaf area index and crown volume with good accuracy. For broadband multispectral data, the accuracy was generally lower. The physically-based approach used the invertible forest reflectance model (INFORM), a combination of the well-established sub-models FLIM, SAIL and LIBERTY. The model was inverted with HyMap data using a neural network approach. In comparison to the statistical approach, it could be shown that the reflectance model inversion works equally well. In contrast to empirically derived prediction functions, which are generally limited to the local conditions at a certain point in time and to a specified sensor type, the calibrated reflectance model can be applied more easily to different optical remote sensing data acquired over central European forests. The study on chemical forest attributes evaluated the information content of HyMap data for the estimation of nitrogen, chlorophyll and water concentration. A number of needle samples of Norway spruce were analysed for their total chlorophyll, nitrogen and water concentrations. The chemical data were linked to needle spectra measured in the laboratory and canopy spectra measured by the HyMap sensor. Wavebands selected in the statistical models were often located in spectral regions that are known to be important for chlorophyll detection (red edge, green peak). Predictive models were applied to the HyMap image to compute maps of chlorophyll concentration and nitrogen concentration. Results of map overlay operations revealed coherence between total chlorophyll and zones of stand development stage, and between total chlorophyll and zones of soil type. Finally, it can be stated that hyperspectral remote sensing data generally contain more information relevant to the estimation of the forest attributes compared to multispectral data. Structural forest attributes, except biomass, can be determined with good accuracy from a hyperspectral sensor type like HyMap. Among the chemical attributes, chlorophyll concentration can be determined with good accuracy and nitrogen concentration with moderate accuracy.
For future research, additional dimensions have to be taken into account, for instance through the exploitation of multi-view-angle data. Additionally, existing forest canopy reflectance models should be further improved.
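As a pointer to how the statistical part of such an approach works, the following toy sketch fits a linear model from a vegetation index (NDVI, computed from red and near-infrared reflectance) to leaf area index; the data are synthetic stand-ins, not the HyMap or field measurements of the study.

import numpy as np

rng = np.random.default_rng(7)
red = rng.uniform(0.02, 0.10, size=40)    # red reflectance per plot
nir = rng.uniform(0.30, 0.55, size=40)    # near-infrared reflectance per plot
ndvi = (nir - red) / (nir + red)
lai = 1.0 + 8.0 * ndvi + rng.normal(scale=0.4, size=40)  # field-measured LAI

# Fit LAI = a + b * NDVI by least squares; apply to new pixels afterwards.
b, a = np.polyfit(ndvi, lai, 1)           # slope, intercept
lai_pred = a + b * ndvi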
The classic Capital Asset Pricing Model and portfolio theory suggest that investors hold the market portfolio to diversify idiosyncratic risks. The theory predicts that the expected return of assets is positive and reacts linearly to the overall market. In reality, however, we observe that investors often do not hold perfectly diversified portfolios. Empirical studies find that new factors influence the deviation from the theoretically optimal investment. In the first part of this work (Chapter 2) we study such an example, namely the influence of maximum daily returns on subsequent returns. Here we follow the ideas of Bali et al. (2011). The goal is to find cross-sectional relations between extremely positive returns and expected average returns. We take into account a larger number of markets worldwide. Bali et al. (2011) report, with respect to the U.S. market, a robust negative relation between MAX (the maximum daily return) and the expected return in the subsequent period. We substantially extend their database to a number of other countries and also take more recent data into account (until the end of 2009). From this we conclude that the relation between MAX and expected returns is not consistent across all countries. Moreover, we test the robustness of the results of Bali et al. (2011) in two time periods using the same data from CRSP. The results show that the effect of extremely positive returns is not stable over time. Indeed, we find a negative cross-sectional relation between extremely positive returns and average returns for the first half of the time series; however, we do not find significant effects for the second half. The main results of this chapter serve as a basis for an unpublished working paper, Yuan and Rieger (2014b). While in Chapter 2 we study factors that prevent optimal diversification, in Chapters 3 and 4 we consider situations where the optimal structure of diversification was previously unknown, namely diversification of options (or structured financial products). Financial derivatives are an important additional investment form with respect to diversification. Not only common call and put options, but also structured products enable investors to pursue a multitude of investment strategies to improve the risk-return profile. Since derivatives are becoming more and more important, the diversification of portfolios that include derivatives is of particular practical relevance. We investigate optimal diversification strategies in connection with underlying stocks for classical rational investors with constant relative risk aversion (CRRA). In particular, we apply a Monte Carlo method based on the Black-Scholes model and the Heston model for stochastic volatility to model the stock market processes and the pricing of the derivatives. Afterwards, we compare the benchmark portfolio, which consists of derivatives on single assets, with derivatives on the index of these assets. First, we compute the utility improvement of an investment in risk-free assets and plain-vanilla options for CRRA investors in various scenarios. Furthermore, we extend our analysis to several kinds of structured products, in particular capital protected notes (CPNs), discount certificates (DCs) and bonus certificates (BCs). We find that the decision of an investor between these two diversification strategies leads to remarkable differences. The difference in the utility improvement is influenced by the risk preferences of investors, stock prices and the properties of the derivatives in the portfolio.
The results are presented in Chapter 3 and form the basis for a yet unpublished working paper, Yuan and Rieger (2014a). To check, furthermore, whether the underlyings of structured products influence investors' decisions, we explicitly discuss the utility gain of a stock-based product and an index-based product for an investor whose preferences are described by cumulative prospect theory (CPT) (Chapter 4, compare Yuan (2014)). The goal is to investigate the dependence of structured products on their underlyings, with emphasis on the difference between index products and single-stock products, in particular with respect to loss aversion and mental accounting. We consider capital protected notes and discount certificates as examples, and model the stock prices and the index of these stocks via Monte Carlo simulations in the Black-Scholes framework. The results point out that market conditions, particularly the expected returns and volatility of the stocks, play a crucial role in determining the preferences of investors for stock-based CPNs and index-based CPNs. A median CPT investor prefers the index-based CPNs if the expected return is higher and the volatility is lower, while he prefers the stock-based CPNs in the opposite situation. We also show that index-based DCs are robustly more attractive than stock-based DCs for CPT investors.
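The Monte Carlo comparison described above can be sketched in a few lines: simulate correlated terminal stock prices under Black-Scholes dynamics, build the payoff of a call on a single stock versus a call on the equally weighted index, and compare expected CRRA utilities. All parameters below are illustrative assumptions, not the calibrations used in the thesis.

import numpy as np

rng = np.random.default_rng(8)
n_paths, n_stocks = 100_000, 5
S0, mu, sigma, T, rho = 100.0, 0.06, 0.25, 1.0, 0.3
gamma = 3.0                                    # CRRA risk aversion

# Correlated terminal prices via a one-factor Gaussian structure.
common = rng.normal(size=(n_paths, 1))
idio = rng.normal(size=(n_paths, n_stocks))
z = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * idio
ST = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

K = 100.0
payoff_single = np.maximum(ST[:, 0] - K, 0.0)        # call on one stock
payoff_index = np.maximum(ST.mean(axis=1) - K, 0.0)  # call on the index

def crra_utility(wealth):
    return (wealth ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

base = 100.0                                   # remaining (risk-free) wealth
eu_single = crra_utility(base + payoff_single).mean()
eu_index = crra_utility(base + payoff_index).mean()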
A big challenge for agriculture in the 21st century is the provision of food security to a fast-growing world population, which not only demands efficient utilisation of the available agricultural resources but also the development of new advancements in the mass production of food crops. Wheat is the third-largest food crop of the world, and Pakistan is the eighth-largest wheat-producing country globally. Rice is the second most important staple food of Pakistan after wheat, grown in all provinces of the country. Maize is the world's top-ranking food crop, followed by wheat and rice. The harvested products have to be stored in different types of storage structures on a small or large scale for food as well as seed purposes. In Pakistan, the harvested grains are stored for the whole year until the introduction of fresh produce in order to ensure a regular food supply throughout the year. However, it is this extended storage period that makes the commodity more vulnerable to insect attacks. Rhyzopertha dominica (Coleoptera: Bostrychidae), Cryptolestes ferrugineus (Coleoptera: Laemophloeidae), Tribolium castaneum (Coleoptera: Tenebrionidae) and Liposcelis spp. (Psocoptera: Liposcelididae) are among the major and most damaging insect pests of stored products all around the world. Various management strategies have been adopted against stored-grain insect pests, mostly relying upon a broad spectrum of insecticides, but the injudicious use of these chemicals has raised various environmental and human-health-related issues, which necessitates the safe use of the prevailing control measures and the evaluation of new and alternative control methods. The application of new chemical insecticides, microbial insecticides (particularly entomopathogenic fungi) and the use of inert dusts (diatomaceous earths) are believed to be among the potential alternatives to generally used insecticides in stored-grain insect management systems. In the current investigations, laboratory bioassays were conducted to evaluate the effects of combining Imidacloprid (a new-chemistry insecticide) with and without Protect-It (a diatomaceous earth formulation) against R. dominica, L. paeta, C. ferrugineus and T. castaneum on three different grain commodities (i.e. wheat, maize and rice); they revealed differences in adult mortality levels among the grains and insect species tested. Individually, Imidacloprid was more effective than Protect-It alone, and the highest numbers of dead adults were recorded in wheat. The insecticidal efficacy of the entomopathogenic fungus Beauveria bassiana (B. bassiana) with Protect-It and DEBBM was also assessed against all test insect species under laboratory conditions. The findings of these studies revealed that longer exposure periods and higher combined application rates of B. bassiana and DEs provided the highest mortality of the test insect species. The progeny emergence of each insect species was also greatly suppressed where the highest dose rates of the combined treatments were applied. The residual efficacy of all three control measures, Imidacloprid, B. bassiana and the DEBBM formulation, was also evaluated against all test insect species. The bioassays were carried out after grain treatment and monthly for 6 months. The results indicated that the adult mortality of each test insect species decreased over the six-month storage period, and that the integrated application of the test grain protectants enhanced mortality rates compared with the individual treatments. The maximum mortality was noted for the combined treatment of DEBBM with Imidacloprid.
Finally, the effectiveness of B. bassiana, DEBBM and Imidacloprid, applied alone as well as in combination, against all the above-mentioned test insect species was also evaluated under field conditions in trials conducted in four districts of Punjab, Pakistan. For each district, a significant difference was observed between treatments, with the combined treatments giving better control of the test species than the individual treatments. The lowest number of surviving adults and the minimum percentage of grain damage were observed for the DEBBM and Imidacloprid combination, but DEBBM with B. bassiana provided the best long-term protection compared with the remaining treatments.
Tropospheric ozone (O3) is known to have various detrimental effects on plants, such as visible leaf injury, reduced growth and premature senescence. Flux models offer the determination of the harmful ozone dose entering the plant through the stomata. This dose can then be related to the phytotoxic effects mentioned above to obtain dose-response relationships, which are a helpful tool for the formulation of abatement strategies for ozone precursors. Ozone flux models depend on the correct estimation of stomatal conductance (gs). Based on measurements of gs, an ozone flux model for two white clover clones (Trifolium repens L. cv Regal; NC-S (ozone-sensitive) and NC-R (ozone-resistant)) differing in their sensitivity to ozone was developed with the help of artificial neural networks (ANNs). White clover is an important species of various European grassland communities. The clover plants were exposed to ambient air at three sites in the Trier region (West Germany) during five consecutive growing seasons (1997 to 2001). The response parameters visible leaf injury and the biomass ratio of the NC-S/NC-R clones were regularly assessed. gs measurements of both clones functioned as output of the ANN-based gs model, while corresponding climate parameters (i.e. temperature, vapour pressure deficit (VPD) and photosynthetic active radiation (PAR)) and various ozone concentration indices were inputs. The development of the model was documented in detail and various model evaluation techniques (e.g. sensitivity analysis) were applied. The resulting gs model was used as a basis for ozone flux calculations, which were related to the above-mentioned response parameters. The results showed that the ANNs were capable of revealing and learning the complex relationship between gs and key meteorological parameters and ozone concentration indices. The dose-response relationships between ozone fluxes and visible leaf injury were reasonably strong, while those between ozone fluxes and the NC-S/NC-R biomass ratio were fairly weak. The results were discussed in detail with respect to the suitability of the chosen experimental methods and model type.
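The ANN-based conductance model described above can be sketched as a small multilayer perceptron mapping temperature, VPD, PAR and an ozone index to gs; the response surface and all numbers below are synthetic illustrations, not the clover measurements of the study.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
n = 800
temp = rng.uniform(5, 35, n)       # air temperature (deg C)
vpd = rng.uniform(0.2, 3.0, n)     # vapour pressure deficit (kPa)
par = rng.uniform(0, 2000, n)      # photosynthetic active radiation
o3 = rng.uniform(10, 90, n)        # ozone concentration index
X = np.column_stack([temp, vpd, par, o3])

# A made-up response surface with noise, for illustration only.
gs = 300 * np.tanh(par / 800) * np.exp(-0.4 * vpd) - 0.5 * o3 \
     + rng.normal(0, 10, n)

Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(Xs, gs)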
Educational assessment tends to rely on more or less standardized tests, teacher judgments, and observations. Although teachers spend approximately half of their professional conduct on assessment-related activities, most of them enter their professional life unprepared, as classroom assessment is often not part of their educational training. Since teacher judgments matter for the educational development of students, the judgments should be up to a high standard. The present dissertation comprises three studies focusing on the accuracy of teacher judgments (Study 1), consequences of (mis-)judgment regarding teacher nomination for gifted programming (Study 2) and teacher recommendations for secondary school tracks (Study 3), and individual student characteristics that impact and potentially bias teacher judgment (Studies 1 through 3). All studies were designed to contribute to a further understanding of the classroom assessment skills of teachers. Overall, the results implied that teacher judgment of cognitive ability was an important constant for teacher nominations and recommendations but lacked accuracy. Furthermore, teacher judgments of various traits and school achievement were substantially related to social background variables, especially the parents' educational background. However, multivariate analysis showed social background variables to impact nomination and recommendation only marginally, if at all. All results indicated that differentiated but potentially biased teacher judgments impact their far-reaching referral decisions directly, while the influence of social background on the referral decisions itself seems mediated. Implications regarding further research practices and educational assessment strategies are discussed. The implications regarding the need for teachers to be educated in judgment and educational assessment are of particular interest and importance.
My dissertation is concerned with contemporary (Anglo-)Canadian immigrant fiction and proposes an analytic grid with which it may be appreciated and compared more adequately. The general observation that the works of many Canadian immigrant writers are characterised by a focus on their respective home cultures as well as on their Canadian host culture serves as a starting point. Following the ground-breaking work of Northrop Frye, Margaret Atwood and David Staines, the categories of "there" and "here" are suggested in order to reflect this double encoding of Canadian immigrant literature. However, "here" and "there" are more than spatial configurations in that they represent a concern with issues of multiculturalism and postcolonialism, both of which are informed by an emphasis on difference and identity; and difference and identity are also what the narratives of M.G. Vassanji, Neil Bissoondath and Rohinton Mistry are preoccupied with. My study sets out to show two things: On the one hand, it attempts to exemplify the complexity and interrelatedness of "there" and "here" in a representative fashion. Hence, in their treatments of difference, M.G. Vassanji, Neil Bissoondath and Rohinton Mistry come up with comparable identity constructions "here" and "there" respectively. On the other hand, special attention is paid to the strategies by which Vassanji, Bissoondath and Mistry construct difference and corroborate their respective understandings of identity.
The German Mittelstand is closely linked to the success of the German economy. Mittelstand firms, among them numerous Hidden Champions, significantly contribute to Germany's economic performance, innovation, and export strength. However, advancing digitalization poses complex challenges for Mittelstand firms. To benefit from the manifold opportunities offered by digital technologies and to defend or even expand existing market positions, Mittelstand firms must transform themselves and their business models. This dissertation uses quantitative methods and contributes to a deeper understanding of the distinct needs and influencing factors of the digital transformation of Mittelstand firms. The results of the empirical analyses of a unique database of 525 mid-sized German manufacturing firms, comprising both firm-related information and survey data, show that organizational capabilities and characteristics significantly influence the digital transformation of Mittelstand firms. The results support the assumption that dynamic capabilities promote the digital transformation of such firms and underline the important role of ownership structure, especially regarding family influence, in the digital transformation of the business model and the pursuit of growth goals with digitalization. In addition to the digital transformation of German Mittelstand firms, this dissertation examines the economic success and regional impact of Hidden Champions and hence contributes to a better understanding of the Hidden Champion phenomenon. Using quantitative methods, it is empirically shown that Hidden Champions outperform other mid-sized firms in financial terms and promote regional development. Consequently, the results of this dissertation provide valuable research contributions and offer various practical implications for firm managers and owners as well as policy makers.
This work investigates the industrial applicability of graphics and stream processors in the field of fluid simulations. For this purpose, an explicit Runge-Kutta discontinuous Galerkin method of arbitrarily high order is implemented completely for the hardware architecture of GPUs. The same functionality is simultaneously realized for CPUs and compared to GPUs. Explicit time-stepping schemes as well as established implicit methods are considered for the CPU. This work aims at the simulation of inviscid, transonic flows over the ONERA M6 wing. The discontinuities which typically arise in hyperbolic equations are treated with an artificial viscosity approach. It is further investigated how this approach fits into the explicit time stepping and works together with the special architecture of the GPU. Since the treatment of artificial viscosity is close to the simulation of the Navier-Stokes equations, it is reviewed how GPU-accelerated methods could be applied for computing viscous flows. This work is based on a nodal discontinuous Galerkin approach for linear hyperbolic problems. Here, it is extended to non-linear problems, which makes the application of numerical quadrature obligatory. Moreover, the representation of complex geometries is realized using isoparametric mappings. Higher-order methods are typically very sensitive with respect to boundaries which are not properly resolved. For this purpose, an approach is presented which fits straight-sided DG meshes to curved geometries described by NURBS surfaces. The mesh is modeled as an elastic body and deformed according to the solution of closest-point problems in order to minimize the gap to the original spline surface. The sensitivity with respect to geometry representations is reviewed at the end of this work in the context of shape optimization. Here, the aerodynamic drag of the ONERA M6 wing is minimized according to the shape gradient, which is implicitly smoothed within the mesh deformation approach. In this context a comparison to the classical Laplace-Beltrami operator is made in a Stokes flow situation.
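The explicit time integration underlying such RKDG schemes reduces, per stage, to evaluations of the semi-discrete operator. A generic classical RK4 step for du/dt = L(u) looks as follows, where L is a stand-in right-hand side, not the DG spatial operator of this work.

import numpy as np

def rk4_step(L, u, dt):
    # One classical fourth-order Runge-Kutta step for du/dt = L(u).
    k1 = L(u)
    k2 = L(u + 0.5 * dt * k1)
    k3 = L(u + 0.5 * dt * k2)
    k4 = L(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy example: a linear, stable surrogate operator on a small state vector.
A = -0.5 * np.eye(4)
u = np.ones(4)
for _ in range(100):
    u = rk4_step(lambda v: A @ v, u, dt=0.01)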
This doctoral dissertation examines two authors of German descent who are representative of the development of Canadian literature and its regional focus on the prairies: Frederick Philip Grove (1879-1948) and Robert Kroetsch (*1927). Kroetsch, in his essays and talks, has repeatedly referred to Grove as one of his "literary ancestors". Although there exist monographs and numerous articles on both authors, the present study is the first-ever comparative approach. The study's main point of access is the motif of disguise and masquerade, which plays a central role in both authors' works. Even if critics have looked at the traditional motif (cf. Homer's Odyssey, or many Renaissance plays) in Kroetsch's writing sporadically, and have used it to examine Grove's biography, no approach has attempted a larger contextualization within and between both writers' oeuvres. According to Lloyd Davis, however, the motif can be seen as "representing the cultural dialogism, rather than any particular thesis, of selfhood" (Davis 16). Hence, it helps interrogate a topic that within Canada, the former colony and current multicultural immigrant society, had and has a specific relevance. As an analytical tool, the motif allows for highlighting both the similarities and the differences between the œuvres of Grove and Kroetsch as key figures of a (post)colonial literature of Western Canada on the one hand, and for general questions pertaining to the characterisation of figures, the definition of narrative positions and even of genres on the other hand. Following the preface, two theoretical chapters outline conceptions of identity and their deducible forms and functions of disguise and masquerade, including a discussion of John Richardson's Wacousta (1832), which is the first Canadian example of the motif's constitutive use. The second major section sketches, in two separate chapters, the poetics and mentalities (Mentalitätsgeschichte) of each writer within the context of their complete works by looking at biographical data as well as the critics' assessments. After immigrating to Manitoba in 1912, Grove soon became the first representative of a literary prairie realism. Before, he had faked his suicide in 1909 and stripped off his 'original' identity as the German translator (e.g., Wilde, Wells, Flaubert) as well as modestly successful poet and novelist Felix Paul Greve, to leave behind debts and a notorious lover and to reinvent himself in the New World. The protean role-plays of 'FPG', decoded only 23 years after his death, are manifested in his creation of literary characters, in a "collectivity of identities" (Cavell 12) or a number of metonymic personae that keep his critics busy to this day. Providing a different story, Kroetsch's family of German background immigrated to Canada in the mid-19th century. Kroetsch has been thematizing his native province, Alberta, just as much as general national dispositions or questings in the course of his literary career, now spanning five decades. His progressive and experimental writing has earned him, for instance, the label of "Mr Canadian Postmodern" by Linda Hutcheon (Canadian Postmodern 183).
Particularly important among his specifically postmodern instruments is the principle of archaeology as derived from Foucault and employed as both metaphor and method; further methodological tools are Barthes' theories on reading/writing as an erotic act, Bakhtin's notion of (the) carnival(ization of literature), and a great sensibility for the myths and oral traditions of the North American Natives. While the third section analyzes two of FPG's novels to illustrate his transfer, or literal translation, from a German to a Canadian cultural context, the fourth section represents this study's core with three one-to-one comparisons of the two writers' central prose texts. In spite of all affinities between the two authors, however, this section already indicates what section five further underlines: Kroetsch clearly transcends Grove's achievements (which ultimately reduce all his characters and texts to nothing but his own will- and wishful projections and identity-configurations); on the level of narrativity, genre and gender, Kroetsch not only goes far beyond parodying Grove, but proves to be an innovator whose mise-en-scène of the motif of disguise provides both more psychological depth and more relevance for socio-historical contexts. This comparative study has been informed by research in the Special Archives and Collections at the University of Manitoba (Grove Papers) and at the University of Calgary (Kroetsch Papers), by related talks at Lund, Belfast and Winnipeg, as well as by occasional quotations from an interview I conducted with Robert Kroetsch as early as 1996.
In this study, candidate loci for periodic catatonia (SCZD10, OMIM #605419) on chromosomes 15q15 and 22q13.33 have been fine-mapped and investigated. Previously, several studies found evidence for a major susceptibility locus on chromosome 15q15 and a further potential locus on 22q13.33, pointing to genetic heterogeneity. Fine mapping was done in our multiplex families through linkage and mutational analysis using genomic markers selected from public databases. Positional candidate genes such as SPRED1 and BRD1, as well as ultra-conserved elements, were investigated by direct sequencing in these families. The results narrow down the susceptibility locus on chromosome 15q14-15q15.1 to a region between markers D15S1042 and D15S968 and exclude SPRED1 and the ultra-conserved elements as susceptibility candidates. Fine mapping in two chromosome 22q13.33-linked families showed that the recombination events place the disease-causing gene in a telomeric ~577 kb interval; investigation of SNP rs138880 revealed an A-allele in the affected person, which excludes BRD1 and confirms MLC1 as the candidate gene for periodic catatonia.
The main purpose of this dissertation is to answer the following question: How will the emergence of the Euro influence the currency composition of the NICs' monetary reserves? Taiwan and Thailand are chosen as the subjects of investigation. There are two sorts of motives for central banks' reserve holdings, i.e., intervention-related motives and portfolio-related motives. The need for reserve holdings resulting from intervention-related motives is justified by the costs resulting from exchange rate instability. On the other hand, we use the Tobin-Markowitz model to justify the need for monetary reserves held for portfolio-related motives. The operational implication of this distinction is the separation of monetary reserves into two tranches corresponding to the different objectives. An analysis of a central bank's transaction balance is an analysis of money quality; such an analysis has to do with transaction costs and non-pecuniary rates of return. The evidence indicates that the Euro's emergence will not change the fact that the USD will continue to be the major currency of the transaction balances of the central banks in Taiwan and Thailand. In order to answer the question about the diversification of monetary reserves held as idle balances in the two NICs, we carry out a portfolio analysis based on the basic ideas of the Tobin-Markowitz model. This analysis shows that neither Taiwan nor Thailand can reduce risk at a given rate of return, or increase the rate of return at a given risk, by diversifying their idle-balance reserves from the USD to the Euro.
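The portfolio-theoretic argument can be made concrete with a standard two-currency mean-variance display; the following is a generic textbook sketch of the Tobin-Markowitz logic, not the dissertation's own calculation:

% A share w of reserves in Euro and 1-w in USD gives expected return and variance
\[
  \mu_p = w\,\mu_{EUR} + (1-w)\,\mu_{USD}, \qquad
  \sigma_p^2 = w^2 \sigma_{EUR}^2 + (1-w)^2 \sigma_{USD}^2
             + 2\,w(1-w)\,\rho\,\sigma_{EUR}\,\sigma_{USD}.
\]

Diversification lowers risk at a given return only if the correlation \rho between the two currencies' returns is small enough; if, for instance, \sigma_{EUR} = \sigma_{USD} = \sigma and \rho = 1, the variance equals \sigma^2 for every w and no risk reduction is possible, which echoes the conclusion reported above for Taiwan and Thailand.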
Do Personality Traits, Trust and Fairness Shape the Stock-Investing Decisions of an Individual?
(2023)
This thesis comprises three projects, all of which are fundamentally connected to the choices that individuals make about stock investments. Differences in stock market participation (SMP) across countries are large and difficult to explain. The second chapter focuses on differences between Germany (low SMP) and East Asian countries (mostly high SMP). The study hypothesis is that cultural differences regarding social preferences and attitudes towards inequality lead to different attitudes towards stock markets and subsequently to different SMPs. Using a large-scale survey, it is found that these factors can indeed explain a substantial amount of the country differences that other known factors (financial literacy, risk preferences, etc.) could not. This suggests that social preferences should be given a more central role in programs that aim to enhance SMP in countries like Germany. The third chapter documents the importance of trust as well as herding for stock ownership decisions. The findings show that trust as a general concept makes no significant contribution to stock investment intention. A thorough examination of the elements of general trust reveals that in-group and out-group trust have an impact on individual stock market investment. Higher out-group trust directly influences a person's decision to invest in stocks, whereas higher in-group trust increases herding attitudes in stock investment decisions and can thus potentially increase the likelihood of stock investments as well. The last chapter investigates the significance of personality traits for stock investing and home bias in portfolio selection. The findings show that personality traits do indeed have a significant impact on stock investment and portfolio allocation decisions. Although the magnitude and significance of the traits differ between inexperienced and experienced investors, conscientiousness and neuroticism play an important role in stock investments and preferences. Moreover, high conscientiousness scores increase the desire to invest in stocks and the portfolio allocation to risky assets like stocks, discouraging home bias in asset allocation. Regarding neuroticism, a higher level increases home bias in portfolio selection and decreases both the willingness to invest in stocks and the portfolio share allocated to them. Finally, when an investor has no prior experience with portfolio selection, patriotism generates home bias. For experienced investors, a low neuroticism score together with high conscientiousness and openness scores appears to be a constant factor in the decision to invest in a well-diversified international portfolio.
Numerous RCTs demonstrate that cognitive behavioral therapy (CBT) for depression is effective. However, these findings are not necessarily representative of CBT under routine care conditions. Routine care studies are usually not subjected to comparable standardization: therapists often do not follow treatment manuals, and patients are less homogeneous with regard to their diagnoses and sociodemographic variables. Results on the transferability of findings from clinical trials to routine care are sparse and point in different directions. As RCT samples are selective due to a stringent application of inclusion/exclusion criteria, comparisons between routine care and clinical trials must be based on a consistent analytic strategy. The present work demonstrates the merits of propensity score matching (PSM), which offers a way to reduce bias by balancing two samples on a range of pretreatment differences. The objective of this dissertation is to investigate the transferability of findings from RCTs to routine care settings.
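To make the matching idea concrete, the following Python sketch estimates propensity scores with a logistic regression and performs greedy 1:1 nearest-neighbour matching within a caliper. It is a minimal illustration under assumed variable names and caliper value, not the analysis pipeline of this dissertation:

# Minimal, hypothetical PSM sketch: balance an RCT sample and a routine-care
# sample on pretreatment covariates via the estimated propensity score.
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_score_match(X_rct, X_routine, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on the propensity score."""
    X = np.vstack([X_rct, X_routine])
    # Group indicator: 1 = RCT membership, 0 = routine care
    z = np.concatenate([np.ones(len(X_rct)), np.zeros(len(X_routine))])
    # Propensity score: estimated probability of RCT membership given covariates
    ps = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]
    ps_rct, ps_routine = ps[:len(X_rct)], ps[len(X_rct):]
    pairs, used = [], set()
    for i, p in enumerate(ps_rct):
        dist = np.abs(ps_routine - p)
        if used:
            dist[list(used)] = np.inf  # match without replacement
        j = int(np.argmin(dist))
        if dist[j] <= caliper:        # accept only matches within the caliper
            pairs.append((i, j))
            used.add(j)
    return pairs

After matching, outcomes are compared only within the matched pairs, so that observed pretreatment differences between the two settings no longer drive the comparison.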
This dissertation develops a rationale for how to use fossil data in solving biogeographical and ecological problems. It is argued that large amounts of high-quality fossil data can be used to document the evolutionary processes (the origin, development, formation and dynamics) of Arealsystems, which can be divided into six stages in North America: the Refugium Stage (before 15,000 years ago: > 15 ka), the Dispersal Stage (from 8,000 to 15,000 years ago: 8.0-15 ka), the Developing Stage (from 3,000 to 8,000 years ago: 3.0-8.0 ka), the Transitional Stage (from 1,000 to 3,000 years ago: 1-3 ka), the Primitive Stage (from 500 to 1,000 years ago: 0.5-1 ka) and the Human Disturbing Stage (during the last 500 years: < 0.5 ka). The division into these six stages is based on a geostatistical analysis of the FAUNMAP database, which contains 43,851 fossil records collected from 1860 to 1994 in North America. Fossil data are among the best materials to test the glacial refugia theory. Glacial refugia represent areas where flora and fauna were preserved during the glacial period; at present they are characterized by richness in species and endemic species. This means that these (endemic) species should have been distributed purely or primarily in these areas during the glacial period, so the refugia can be identified by fossil records of that period. If this is not the case, the richness in (endemic) species may not be the result of glacial refugia. By exploring where mammals lived during the Refugium Stage (> 15 ka), seven refugia in North America can be identified: the California Refugium, the Mexico Refugium, the Florida Refugium, the Appalachia Refugium, the Great Basin Refugium, the Rocky Mountain Refugium and the Great Lake Refugium. The first five refugia coincide well with De Lattin's dispersal centers recognized by biogeographical methods using data on modern distributions. The individuals of a species are not evenly distributed over its Arealsystem. Brown's Hot Spots Model shows that in most cases there is an enormous variation in abundance within the areal of a species: in a census, zero or only very few individuals occur at most sample locations, but tens or hundreds are found at a few sample sites. Locations where only a few individuals can be sampled in a survey are called "cool spots", and sites where tens or hundreds of individuals can be observed in a survey are called "hot spots". Many areas within the areal are uninhabited; these are called "holes". This model has direct implications for analyzing fossil data: hot spots have a much higher local population density than cool spots, so the chances of discovering fossil individuals of a species are much higher in sediments located in a "hot spot" area than in a "cool spot" area. Therefore, much higher MNIs (Minimum Numbers of Individuals) of the species should be found in fossil localities located in hot spot areas than in cool spot areas. There are only a few hot spots but many cool spots within the areal of a single hypothetical species; consequently, only a few fossil sites can provide very high MNIs, whereas most other sites can provide only very low MNIs. This prediction has been confirmed by an analysis of the 70 species in FAUNMAP with over 100 fossil records. The temporal and spatial variation in abundance can thus be reconstructed from the temporospatial distribution of the MNIs of a species over its Arealsystem.
Areas with no fossil records from the last few thousand years may be holes, sites with much higher MNIs may be hot spots, and locations with low MNIs may be cool spots. Although the hot spots of many species can remain unchanged in an area over thousands of years, our study shows that a large shift of hot spots occurred mainly around 1,500-1,000 years ago. There are three directions of movement: from the west side to the east side of the Rockies, from the East of the USA to the east side of the Rockies, and from the west side of the Rockies to the Southwest of the USA. The first two directions of shift are called the Lewis and Clark pattern, which can be verified with the observations made by Lewis and Clark during their expedition in 1805-1806. The historical process behind this pattern may well explain the 200-year-old puzzle, noted by modern ecologists and biogeographers, of why big game that was then abundant on the east side of the Rocky Mountains was rare on the west side. The third direction of shift is called the Bayham pattern. This pattern can be tested with the model of Late Holocene resource intensification first described by Frank E. Bayham; the historical process creating the Bayham pattern challenges the classic explanation of Late Holocene resource intensification. An environmental change model is proposed to account for the shift of hot spots. Implications of glacial refugia and hot spot areas for wildlife management and effective conservation are discussed, and suggestions are given for how paleontologists and zooarchaeologists can provide more valuable information for other disciplines in their future excavations and research.
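The mapping from MNI counts to hot spots, cool spots and holes can be illustrated with a small sketch. The following Python code classifies the sampled localities of one species by an assumed quantile threshold; the threshold and all values are hypothetical and are not taken from the FAUNMAP analysis:

# Hypothetical sketch: classify fossil localities of one species into hot
# spots, cool spots and holes from their MNI counts, following the logic of
# Brown's Hot Spots Model as summarized above.
import numpy as np

def classify_localities(mni_counts, hot_quantile=0.9):
    """mni_counts: MNI per sampled locality (0 = no find at that locality)."""
    mni = np.asarray(mni_counts, dtype=float)
    # Hot spots: localities in the top decile of positive MNI values (assumed cutoff)
    threshold = np.quantile(mni[mni > 0], hot_quantile)
    return np.where(mni == 0, "hole",
           np.where(mni >= threshold, "hot spot", "cool spot"))

print(classify_localities([0, 1, 2, 0, 150, 3, 80, 1]))
# -> ['hole' 'cool spot' 'cool spot' 'hole' 'hot spot' 'cool spot' 'cool spot' 'cool spot']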
We will consider discrete dynamical systems (X,T) which consist of a state space X and a linear operator T acting on X. Given a state x in X at time zero, its state at time n is determined by the n-th iterate T^n(x). We are interested in the long-term behaviour of this system, that is, we want to know how the sequence (T^n(x))_(n in N) behaves for increasing n and x in X. In the first chapter, we will sum up the relevant definitions and results of linear dynamics. In particular, in topological dynamics the notions of hypercyclic, frequently hypercyclic and mixing operators will be presented. In the setting of measurable dynamics, the most important definitions will be those of weakly and strongly mixing operators. If U is an open set in the (extended) complex plane containing 0, we can define the Taylor shift operator on the space H(U) of functions f holomorphic in U by Tf(z) = (f(z) - f(0))/z if z is not equal to 0 and Tf(0) = f'(0) otherwise (see the display below). In the second chapter, we will start examining the Taylor shift on H(U) endowed with the topology of locally uniform convergence. Depending on the choice of U, we will study whether or not the Taylor shift is weakly or strongly mixing in the Gaussian sense. Next, we will consider Banach spaces of functions holomorphic on the unit disc D. The first section of this chapter will sum up the basic properties of Bergman and Hardy spaces in order to analyse the dynamical behaviour of the Taylor shift on these Banach spaces in the next part. In the third section, we study the space of Cauchy transforms of complex Borel measures on the unit circle, first endowed with the quotient norm of the total variation and then with a weak-* topology. While the Taylor shift is not even hypercyclic in the first case, we show that it is mixing in the latter case. In Chapter 4, we will first introduce Bergman spaces A^p(U) for general open sets and provide approximation results which will be needed in the next chapter, where we examine the dynamical properties of the Taylor shift on these spaces. In particular, for 1<=p<2 we will find sufficient conditions for the Taylor shift to be weakly mixing or strongly mixing in the Gaussian sense. For p>=2, we consider specific Cauchy transforms in order to determine open sets U such that the Taylor shift is mixing on A^p(U). In both sections, we will illustrate the results with appropriate examples. Finally, we apply our results to universal Taylor series. The results of Chapter 5 about the Taylor shift allow us to consider the behaviour of the partial sums of the Taylor expansions of functions in general Bergman spaces outside their discs of convergence.
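For readability, the operator defined in words above can be stated in display form; the following LaTeX rendition is a direct transcription of that definition:

% Taylor shift operator on H(U), as defined in the abstract above
\[
  (Tf)(z) =
  \begin{cases}
    \dfrac{f(z) - f(0)}{z}, & z \in U \setminus \{0\},\\[1ex]
    f'(0), & z = 0.
  \end{cases}
\]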
The efficacy and effectiveness of psychotherapeutic interventions have been proven time and again. We therefore know that, in general, evidence-based treatments work for the average patient. However, it has also repeatedly been shown that some patients do not profit from treatment or even deteriorate during it. Patient-focused psychotherapy research takes these differences between patients into account by focusing on the individual patient. The aim of this research approach is to analyze individual treatment courses in order to evaluate when and under which circumstances a generally effective treatment works for an individual patient. The goal is to identify evidence-based clinical decision rules for the adaptation of treatment in order to prevent treatment failure. Patient-focused research has illustrated how different intake indicators and early change patterns predict the individual course of treatment, but they leave a lot of variance unexplained. The thesis at hand analyzed whether Ecological Momentary Assessment (EMA) strategies could be integrated into patient-focused psychotherapy research in order to improve treatment response prediction models. EMA is an electronically supported diary approach in which multiple real-time assessments are conducted in participants' everyday lives. We applied EMA over a two-week period before treatment onset in a mixed sample of patients seeking outpatient treatment. The four daily measurements in the patients' everyday environment focused on assessing momentary affect and levels of rumination, perceived self-efficacy, social support, and positive or negative life events since the previous assessment. The aim of this thesis project was threefold: first, to test the feasibility of EMA in a routine care outpatient setting; second, to analyze the interrelation of different psychological processes within patients' everyday lives; third and last, to test whether individual indicators of psychological processes during everyday life, assessed before treatment onset, could be used to improve prediction models of early treatment response. Results from Study I indicate good feasibility of EMA application during the waiting period for outpatient treatment. High average compliance rates over the entire assessment period and the low average burden perceived by the patients support good applicability. Technical challenges and the results of in-depth missingness analyses are reported to guide future EMA applications in outpatient settings. Results from Study II shed further light on the rumination-affect link. We replicated results from earlier studies, which identified a negative association between state rumination and affect on a within-person level, and additionally showed a) that this finding holds for the majority of, but not every, individual in a diverse patient sample with mixed Axis-I disorders, b) that rumination is linked not only to negative but also to positive affect, and c) that dispositional rumination significantly affects the state rumination-affect association. The results provide exploratory evidence that rumination might be considered a transdiagnostic mechanism of psychological functioning and well-being. Results from Study III finally suggest that the integration of indicators derived from EMA applications before treatment onset can improve prediction models of early treatment response: positive-negative affect ratios as well as fluctuations in negative affect measured during patients' daily lives allow the prediction of early treatment response.
Our results indicate that combining commonly applied intake predictors with EMA indicators of individual patients' daily experiences can improve treatment response prediction models. We therefore conclude that EMA can successfully be integrated into patient-focused research approaches in routine care settings to ameliorate and optimize individual care.
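The two EMA-derived predictors named above can be illustrated with a small sketch. The following Python code computes a positive-negative affect ratio and a negative-affect fluctuation index from a patient's diary entries; the mean squared successive difference is one common EMA instability measure and is an assumed operationalization here, as are all names and values:

# Illustrative sketch (not the authors' code): two EMA-based predictors,
# computed from a patient's pre-treatment diary ratings.
import numpy as np

def affect_ratio(pa, na):
    """Positive-negative affect ratio across all EMA prompts."""
    return np.mean(pa) / np.mean(na)

def na_fluctuation(na):
    """Fluctuation of negative affect as the mean squared successive
    difference (MSSD), a common EMA instability index (assumed here)."""
    diffs = np.diff(np.asarray(na, dtype=float))
    return np.mean(diffs ** 2)

# Hypothetical momentary positive/negative affect ratings from one patient
pa = [3.2, 3.5, 2.9, 3.8, 3.1]
na = [2.1, 1.8, 2.6, 1.5, 2.2]
print(affect_ratio(pa, na), na_fluctuation(na))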
Retirement, fertility and sexuality are three key life stage events that are embedded in the framework of population economics in this dissertation. Each topic has economic relevance. As retirement entry shifts the labour supply of experienced workers to zero, this issue is particularly relevant for employers, for retirees themselves, and for policymakers in charge of the design of the pension system. Giving birth has comprehensive economic relevance for women: parental leave and subsequent part-time work lead to a direct loss of income, while lower levels of employment, work experience, training and career opportunities result in indirect income losses. Sexuality has a decisive influence on the quality of partnerships, subjective well-being and happiness. Well-being and happiness, in turn, are key determinants not only in private life but also in the work domain, for example in the area of job performance. Furthermore, partnership quality determines the duration of a partnership, and partnerships in general enable the pooling of (financial) resources compared to being single. The contribution of this dissertation emerges from the integration of social and psychological concepts into economic analysis as well as from the application of economic theory to non-standard economic research topics. The results of the three chapters show that this multidisciplinary approach yields better predictions of human behaviour than the single disciplines on their own. The results of the first chapter show that both interpersonal conflict with superiors and the individual's health status play a significant role in retirement decisions. The chapter further contributes to the existing literature by showing the moderating role of health within retirement decision-making: all employees are more likely to retire when they are having conflicts with their superior, but among healthy employees the same conflict raises retirement intentions even more. Good health is thus a necessary, but not a sufficient, condition for continued working, possibly because conflicts with superiors weigh more heavily on retirement intentions when the worker is healthy. The key findings of the second chapter reveal a significant influence of religion on contraceptive and fertility-related decisions. A large part of the research on religion and fertility originates in evidence from the US; this chapter contrasts it with evidence from Germany. Additionally, the chapter contributes by integrating miscarriages and abortions rather than limiting the analysis to births, and it gains from rich prospective data on the fertility biographies of women. The third chapter provides theoretical insights on how to incorporate psychological variables into an economic framework which aims to analyse sexual well-being. According to this theory, personality may play a dual role by shaping a person's preferences for sex as well as the person's behaviour in a sexual relationship. The results of the econometric analysis reveal detrimental effects of neuroticism on sexual well-being, while conscientiousness seems to create a win-win situation for a couple. Extraversion and openness have ambiguous effects on romantic relationships, enhancing sexual well-being on the one hand but raising commitment problems on the other. Agreeable persons seem to gain sexual satisfaction even if they perform worse in sexual communication.
Broadcast media such as television have spread rapidly worldwide in the last century. They provide viewers with access to new information and also represent a source of entertainment that unconsciously exposes them to different social norms and moral values. Although the potential impact of exposure to television content has been studied intensively in economic research in recent years, studies examining the long-term causal effects of media exposure are still rare. Chapters 2 to 4 of this thesis therefore contribute to a better understanding of the long-term effects of television exposure.
Chapter 2 empirically investigates whether access to reliable environmental information through television can influence individuals' environmental awareness and pro-environmental behavior. Analyzing exogenous variation in Western television reception in the German Democratic Republic shows that access to objective reporting on environmental pollution can enhance concerns regarding pollution and affect the likelihood of being active in environmental interest groups.
Chapter 3 utilizes the same natural experiment and explores the relationship between exposure to foreign mass media content and xenophobia. In contrast to the state television broadcaster in the German Democratic Republic, West German television regularly confronted its viewers with foreign (non-German) broadcasts. Applying multiple measures of xenophobic attitudes, we find a persistent mitigating impact of foreign media content on xenophobia.
Chapter 4 deals with another unique feature of West German television. In contrast to East German media, Western television programs regularly exposed their audience to unmarried and childless characters. The results suggest that exposure to different gender stereotypes contained in television programs can affect marriage, divorce, and birth rates. However, our findings indicate that mainly women were affected by the exposure to unmarried and childless characters.
Chapter 5 examines the influence of social media marketing on crowd participation in equity crowdfunding. Analyzing 26,883 investment decisions on three German equity crowdfunding platforms, we show that startups can influence the success of their equity crowdfunding campaigns through social media posts on Facebook and Twitter.
In Chapter 6, we incorporate the concept of habit formation into the theoretical literature on trade unions and contribute to a better understanding of how internal habit preferences influence trade union behavior. The results reveal that such internal reference points lead trade unions to raise wages over time, which in turn reduces employment. A numerical example illustrates that the wage effects and the decline in employment can be substantial.
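One standard way to formalize internal habit preferences is sketched below; this is a generic textbook specification for illustration, not necessarily the chapter's own model:

% The union's period utility depends on the wage w_t relative to an internal
% reference point formed by past wages, with habit strength \gamma \in (0,1)
% and employment n_t (all notation assumed for illustration),
\[
  U_t = u\bigl(w_t - \gamma\, w_{t-1},\; n_t\bigr).
\]
% Because the reference point w_{t-1} ratchets up with every settlement, the
% union demands rising wages over time, which is the mechanism behind the
% employment decline described above.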
This dissertation focuses on the effective elements of e-marketing strategy in the tourism industry. As case studies, the research focuses on airlines, tour operators and chain hotels in Iran and Germany. It aims to show various possibilities for enhancing a company's e-marketing strategy and for performing e-marketing strategies successfully by recognizing the effective elements and their importance during the strategy design and implementation process. Given the nature of the research (explanatory, exploratory and applicable), the Delphi technique was chosen after study and consultation. As results, effective elements and their importance are obtained according to the Delphi and AHP methods. For example, the element "Tourists' Needs, Experience and Expectations", with an importance coefficient of %204, is the most remarkable element, and the "customer satisfaction" group of elements, with an average value of 5.54, is more important than the other groups according to the research results.
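The AHP weighting step mentioned above can be illustrated with a short sketch. The comparison matrix, criteria and values below are invented for illustration and do not come from this dissertation; the function ahp_weights is a hypothetical helper implementing the standard principal-eigenvector weighting:

# Hedged illustration of the AHP weighting step: derive priority weights
# from a reciprocal pairwise comparison matrix via its principal eigenvector.
import numpy as np

def ahp_weights(pairwise):
    """Return normalized priority weights from a reciprocal comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()  # normalization also fixes the sign

# Example: 3 criteria compared on Saaty's 1-9 scale (hypothetical values)
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(A))  # roughly [0.65, 0.23, 0.12]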
Imagery-based techniques have received increasing interest in psychotherapy research. Whereas their effectiveness has been shown for various psychological disorders, their underlying mechanisms remain unclear. Current research predominantly investigates intrapersonal processes, while interpersonal processes have received no attention to date. The aim of the current dissertation was to fill this lacuna. The three interrelated studies comprising this dissertation were the first to examine the effectiveness of imagery-based techniques in the treatment of test anxiety, relate physiological arousal to emotional processing, and investigate the association between physiological synchrony and multiple process measures.
Study I investigated the feasibility of a newly developed protocol, which integrates imagery-based and cognitive-behavioral components, for treating test anxiety in a sample of 31 students. The results indicated that the protocol was acceptable, feasible, and effective in the treatment of test anxiety. Additionally, the imagery-based component was positively associated with therapeutic bond, session evaluation, and emotional experience.
Study II shifted the focus from the effectiveness of imagery-based techniques to client-therapist physiological synchrony as a putative mechanism of change in the same sample. The results suggested that physiological synchrony was greater than chance during both the imagery-based and the cognitive-behavioral components. Synchrony varied at the session level during the imagery-based components and at both the session and the dyad level during the cognitive-behavioral components. Furthermore, physiological synchrony during the imagery-based segments was positively associated with therapeutic bond; no such association was found for the cognitive-behavioral components.
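One common way to quantify such client-therapist synchrony is a windowed cross-correlation of the two physiological signals, compared against a chance level computed from time-shifted surrogates. The following Python sketch illustrates this idea under assumed window sizes and an assumed surrogate scheme; it is not the study's exact procedure, and the function names are hypothetical:

# Hypothetical sketch: windowed cross-correlation synchrony of client and
# therapist electrodermal activity (EDA), with a shuffled chance level.
import numpy as np

def windowed_synchrony(client, therapist, win=60):
    """Mean absolute Pearson correlation over non-overlapping windows."""
    c, t = np.asarray(client, float), np.asarray(therapist, float)
    rs = []
    for start in range(0, len(c) - win + 1, win):
        cw, tw = c[start:start + win], t[start:start + win]
        if cw.std() > 0 and tw.std() > 0:
            rs.append(abs(np.corrcoef(cw, tw)[0, 1]))
    return np.mean(rs)

def chance_level(client, therapist, win=60, n_shuffles=100, seed=0):
    """Pseudo-synchrony from circularly shifted therapist signals."""
    rng = np.random.default_rng(seed)
    t = np.asarray(therapist, float)
    return np.mean([
        windowed_synchrony(client, np.roll(t, rng.integers(win, len(t) - win)), win)
        for _ in range(n_shuffles)
    ])

Observed synchrony exceeding the surrogate-based chance level is then taken as evidence of genuine coupling, which is the logic behind the "greater than chance" result reported above.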
Study III examined both intrapersonal (i.e., clients’ electrodermal activity) and interpersonal (i.e., client-therapist electrodermal activity synchrony) processes and their associations with emotional processing in a sample of 49 client-therapist dyads. The results suggested that higher client physiological arousal and a moderate level of physiological synchrony were associated with deeper emotional processing.
Taken together, the results highlight the effectiveness of imagery-based techniques in the treatment of test anxiety. Furthermore, the results of Studies II and III support the idea of physiological synchrony as a mechanism of change in imagery with and without rescripting. The current dissertation takes an important step towards optimizing process research within psychotherapy and contributes to a better understanding of the potency and mechanisms of change of imagery-based techniques. We hope that these studies’ implications will support everyday clinical practice.
Stress and pain are common experiences in human lives. Both the stress and the pain system have adaptive functions and try to protect the organism in case of harm and danger. Nevertheless, stress and pain are two of the most challenging problems for society and the health system. Chronic stress, as often seen in modern societies, has a strong impact on health and can lead to chronic stress disorders, which include a number of chronic pain syndromes. Pain, however, can also be regarded as a stressor itself, especially when we consider how much patients suffer from long-lasting pain and the impact of pain on quality of life. In this way, the effects of stress on pain can be reinforced. Learning processes such as classical conditioning also play an important role in the generation and manifestation of chronic pain symptoms, and these conditioning processes can in turn be influenced by stress. These facts illustrate the complex and various interactions between the pain and the stress systems. Both systems communicate permanently with each other and help to protect the organism and to maintain a homeostatic state. They have various channels of communication, for example mechanisms related to endogenous opioids, immune parameters, glucocorticoids and baroreflexes. However, an overactivation of the systems, for example caused by ongoing stress, can lead to severe health problems. It is therefore of great importance to understand these interactions and their underlying mechanisms. The present work deals with the relationship of stress and pain. A special focus is put on stress-related hypocortisolism and pain processing, stress-induced hypoalgesia via baroreceptor-related mechanisms, and stress-related cortisol effects on aversive conditioning (as a model of pain learning). This work is a contribution to the wide field of research that tries to understand the complex interactions of stress and pain. To demonstrate their variety, the selected studies highlight different aspects of these interactions. In the first chapter, I give a short introduction to the pain and the stress systems and their ways of interacting. Furthermore, I give a short summary of the studies presented in Chapters II to V and their background. The results and their implications for future research are discussed in the last part of the first chapter. Chronic pain syndromes have been associated with chronic stress and alterations of the HPA axis resulting in chronic hypocortisolism, but whether these alterations play a causal role in the pathophysiology of chronic pain remains unclear. The study described in Chapter II therefore investigated the effects of pharmacologically induced hypocortisolism on pain perception. Both the stress and the pain system are related to the cardiovascular system: an increase in blood pressure is part of the stress reaction and leads to reduced pain perception. When using pain tests, it is therefore important to keep in mind potential interference from activation of the cardiovascular system, especially when pain-inhibitory processes are investigated. For this reason, we compared two commonly and interchangeably used pain tests with regard to the autonomic reactions they trigger. This study is described in Chapter III. Chapters IV and V deal with the role of learning processes in pain and the related influences of stress. Processes of classical conditioning play an important role in symptom generation and manifestation. In both studies, aversive eyeblink conditioning was used as a model for pain learning.
In the study described in Chapter IV, we compared classical eyeblink conditioning in healthy volunteers with that in patients suffering from fibromyalgia, a chronic pain disorder. Differences in the HPA axis, as part of the stress system, were also taken into account. The study of Chapter V investigated effects of the very first stress reaction, particularly rapid non-genomic cortisol effects: healthy volunteers received an intravenous cortisol administration immediately before the eyeblink conditioning. Such rapid effects have so far only been demonstrated at the cellular level and in animal behavior. In general, the studies presented in this work give an impression of the broad variety of possible interactions between the pain and the stress system, and they contribute to our knowledge about these interactions. However, more research is needed to complete the picture.
In this thesis, global surrogate models for responses of expensive simulations are investigated. Computational fluid dynamics (CFD) has become an indispensable tool in the aircraft industry, but simulations of realistic aircraft configurations remain challenging and computationally expensive despite the sustained advances in computing power. With the demand for numerous simulations to describe the behavior of an output quantity over a design space, the need for surrogate models arises. They are easy to evaluate and approximate quantities of interest of a computer code: only a small number of evaluations of the simulation is stored, from which the behavior of the response over the whole input parameter domain is determined. The Kriging method is capable of interpolating highly nonlinear, deterministic functions based on scattered datasets. Using correlation functions, distinct sensitivities of the response with respect to the input parameters can be taken into account automatically. Kriging can be extended to incorporate not only evaluations of the simulation, but also gradient information, which is called gradient-enhanced Kriging. Adaptive sampling strategies can generate more efficient surrogate models. Contrary to traditional one-stage approaches, the surrogate model is built step by step: in every stage of an adaptive process, the current surrogate is assessed in order to determine new sample locations, where the response is evaluated and added to the existing set of samples. In this way, the sampling strategy learns about the behavior of the response and a problem-specific design is generated. Critical regions of the input parameter space are identified automatically and sampled more densely to reproduce the response's behavior correctly, and the number of required expensive simulations is decreased considerably. All these approaches treat the response itself more or less as the unknown output of a black box. A new approach is motivated by the assumption that, for a predefined problem class, the behavior of the response is not arbitrary, but rather related to other instances of the common problem class. In CFD, for example, responses of aerodynamic coefficients share structural similarities for different airfoil geometries. The goal is to identify these similarities in a database of responses via principal component analysis and to use them for a generic surrogate model. Characteristic structures of the problem class can be used to increase the approximation quality in new test cases. Traditional approaches still require a large number of response evaluations in order to achieve a globally high approximation quality. Validating the generic surrogate model on industrially relevant test cases shows that it generates efficient surrogates which are more accurate than common interpolations; practical, i.e. affordable, surrogates are thus already possible for moderate sample sizes. Until now, interpolation problems have been regarded as separate problems. The new approach innovatively uses the structural similarities of a common problem class for surrogate modeling, connecting concepts from response surface methods, variable-fidelity modeling, design of experiments, image registration and statistical shape analysis in an interdisciplinary way. Generic surrogate modeling is not restricted to aerodynamic simulation; it can be applied whenever expensive simulations can be assigned to a larger problem class in which structural similarities are expected.
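The Kriging idea described above can be illustrated with a minimal sketch. The following Python code implements ordinary Kriging with a Gaussian correlation model and a fixed hyperparameter theta; hyperparameter estimation, gradient enhancement and adaptive sampling are deliberately omitted, and the code is an illustration, not the thesis' implementation:

# Minimal ordinary-Kriging sketch with a Gaussian correlation model.
import numpy as np

def kriging_fit(X, y, theta=1.0, nugget=1e-10):
    """Fit a Kriging model with constant mean (ordinary Kriging)."""
    # Pairwise squared distances and Gaussian correlation matrix
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    R = np.exp(-theta * d2) + nugget * np.eye(len(X))
    Rinv = np.linalg.inv(R)
    ones = np.ones(len(X))
    mu = (ones @ Rinv @ y) / (ones @ Rinv @ ones)  # generalized LS mean
    return X, Rinv @ (y - mu), mu, theta

def kriging_predict(model, x):
    """Predict the response at a new point x."""
    X, alpha, mu, theta = model
    r = np.exp(-theta * np.sum((X - x) ** 2, axis=-1))
    return mu + r @ alpha

# Usage: interpolate a nonlinear 1-D response from 6 scattered samples
X = np.array([[0.0], [0.2], [0.4], [0.6], [0.8], [1.0]])
y = np.sin(2 * X[:, 0])
model = kriging_fit(X, y, theta=5.0)
print(kriging_predict(model, np.array([0.5])))  # should be close to sin(1.0) ≈ 0.84

Gradient-enhanced Kriging augments the correlation system with derivative information, and adaptive sampling would repeatedly call kriging_fit after inserting new samples where the current surrogate is judged least reliable.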
Large-scale non-parametric applied shape optimization for computational fluid dynamics is considered. Treating a shape optimization problem as a standard optimal control problem by means of a parameterization, the Lagrangian usually requires knowledge of the partial derivative of the shape parameterization and deformation chain with respect to the input parameters. For a variety of reasons, this mesh sensitivity Jacobian is usually quite problematic. For a sufficiently smooth boundary, the Hadamard theorem provides a gradient expression that exists on the surface alone, completely bypassing the mesh sensitivity Jacobian. Building upon this, the gradient computation becomes independent of the number of design parameters, and all surface mesh nodes are used as design unknowns in this work, effectively allowing a free morphing of shapes during optimization. Contrary to a parameterized shape optimization problem, where a smooth surface is usually created independently of the input parameters by construction, regularity is not preserved automatically in the non-parametric case. As part of this work, the shape Hessian is used in an approximate Newton method, also known as the Sobolev method or gradient smoothing, to ensure a certain regularity of the updates; thus a smooth shape is preserved while, at the same time, the one-shot optimization method is accelerated considerably. For PDE constrained shape optimization, the Hessian usually is a pseudo-differential operator. Fourier analysis is used to identify the operator symbol both analytically and discretely, and preconditioning the one-shot optimization with an appropriate Hessian symbol is shown to greatly accelerate the optimization. As the correct discretization of the Hadamard form usually requires evaluating certain surface quantities such as tangential divergence and curvature, special attention is also given to discrete differential geometry on triangulated surfaces for evaluating shape gradients and Hessians. The Hadamard formula and Hessian approximations are applied to a variety of flow situations. In addition to shape optimization of internal and external flows, the major focus lies on aerodynamic design such as optimizing two-dimensional airfoils and three-dimensional wings. Shock waves form when the local speed of sound is reached, and the gradient must be evaluated correctly at discontinuous states. To ensure proper shock resolution, an adaptive multi-level optimization of the ONERA M6 wing is conducted using more than 36,000 shape unknowns on a standard office workstation, demonstrating the applicability of the shape one-shot method to industry-size problems.
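The two central formulas referenced above can be sketched as follows; this is a hedged, generic rendering (the exact operators and parameters used in the thesis may differ):

% Hadamard form of the shape derivative: a density g living on the boundary
% \Gamma alone, paired with the normal component of the perturbation field V
\[
  dJ(\Omega)[V] = \int_{\Gamma} g\, (V \cdot n)\, \mathrm{d}s .
\]
% Sobolev smoothing (approximate Newton step): the raw gradient g is replaced
% by a smoothed representative \tilde g, with \Delta_\Gamma the Laplace-Beltrami
% operator and \varepsilon > 0 an assumed smoothing parameter,
\[
  (I - \varepsilon\, \Delta_{\Gamma})\, \tilde g = g .
\]

Because the boundary expression involves only surface data, no mesh sensitivity Jacobian is needed, and the smoothing step plays the role of an approximate Hessian inverse in the one-shot iteration.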
Shape optimization is of interest in many fields of application. In particular, shape optimization problems arise frequently in technological processes which are modelled by partial differential equations (PDEs). In many practical circumstances, the shape under investigation is parametrized by a finite number of parameters, which, on the one hand, allows the application of standard optimization approaches, but, on the other hand, unnecessarily limits the space of reachable shapes. Shape calculus presents a way to circumvent this dilemma. So far, however, shape optimization based on shape calculus has mainly been performed using gradient descent methods. One reason for this is the lack of symmetry of second-order shape derivatives, or shape Hessians. A major difference between shape optimization and the standard PDE constrained optimization framework is the lack of a linear space structure on shape spaces. If one cannot use a linear space structure, then the next best structure is a Riemannian manifold structure, in which one works with Riemannian shape Hessians. They possess the often sought property of symmetry, characterize well-posedness of optimization problems and define sufficient optimality conditions. In general, shape Hessians are used to accelerate gradient-based shape optimization methods. This thesis deals with shape optimization problems constrained by PDEs and embeds these problems in the framework of optimization on Riemannian manifolds in order to provide efficient techniques for PDE constrained shape optimization problems on shape spaces. A Lagrange-Newton and a quasi-Newton technique in shape spaces for PDE constrained shape optimization problems are formulated; a schematic Newton step is sketched below. These techniques are based on the Hadamard form of shape derivatives, i.e., on the form of integrals over the surface of the shape under investigation. It is often a very tedious, not to say painful, process to derive such surface expressions; along the way, volume formulations in the form of integrals over the entire domain appear as an intermediate step. This thesis couples volume integral formulations of shape derivatives with optimization strategies on shape spaces in order to establish efficient shape algorithms that reduce analytical effort and programming work. In this context, a novel shape space is proposed.
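As referenced above, a generic Riemannian Newton iteration on a shape manifold has the following schematic form (notation assumed; this is not the thesis' exact algorithm):

% Solve the Newton equation in the tangent space at the current shape
% \Omega_k, then map the update back to the manifold via a retraction R
\[
  \operatorname{Hess} J(\Omega_k)\,[\xi_k] = -\operatorname{grad} J(\Omega_k),
  \qquad
  \Omega_{k+1} = R_{\Omega_k}(\xi_k).
\]

The symmetry of the Riemannian shape Hessian is what makes this Newton equation well defined, and a quasi-Newton variant replaces the exact Hessian by a low-rank update built from successive gradients.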