Surveys are commonly tailored to produce estimates of aggregate statistics with a desired level of precision. This may lead to very small sample sizes for subpopulations of interest, defined geographically or by content, which are not incorporated into the survey design. We refer to subpopulations where the sample size is too small to provide direct estimates with adequate precision as small areas or small domains. Despite the small sample sizes, reliable small area estimates are needed for economic and political decision making. Hence, model-based estimation techniques are used which increase the effective sample size by borrowing strength from other areas to provide accurate information for small areas.

The paragraph above introduced small area estimation as a field of survey statistics where two conflicting philosophies of statistical inference meet: the design-based and the model-based approach. While the first approach is well suited for the precise estimation of aggregate statistics, the latter approach furnishes reliable small area estimates. In most applications, estimates for both large and small domains based on the same sample are needed. This poses a challenge to the survey planner, as the sampling design has to reflect different and potentially conflicting requirements simultaneously. In order to enable efficient design-based estimates for large domains, the sampling design should incorporate information related to the variables of interest. This may be achieved using stratification or sampling with unequal probabilities. Many model-based small area techniques require an ignorable sampling design such that after conditioning on the covariates the variable of interest does not contain further information about the sample membership. If this condition is not fulfilled, biased model-based estimates may result, as the model which holds for the sample is different from the one valid for the population.
Hence, an optimisation of the sampling design without investigating the implications for model-based approaches will not be sufficient. Analogously, disregarding the design altogether and focussing only on the model is prone to failure as well. Instead, a profound knowledge of the interplay between the sampling design and statistical modelling is a prerequisite for implementing an effective small area estimation strategy. In this work, we concentrate on two approaches to address this conflict. Our first approach takes the sampling design as given and can be used after the sample has been collected. It amounts to incorporating the survey design into the small area model to avoid biases stemming from informative sampling. Thus, once a model is validated for the sample, we know that it holds for the population as well. We derive such a procedure under a lognormal mixed model, which is a popular choice when the support of the dependent variable is limited to positive values. In addition, we propose a three-pillar strategy to select the additional variable accounting for the design, based on a graphical examination of the relationship, a comparison of the predictive accuracy of the choices and a check of the normality assumptions.

Our second approach to deal with the conflict is based on the notion that the design should allow applying a wide variety of analyses using the sample data. Thus, if the use of model-based estimation strategies can be anticipated before the sample is drawn, this should be reflected in the design. The same applies to the estimation of national statistics using design-based approaches. Therefore, we propose to construct the design such that the sampling mechanism is non-informative but allows for precise design-based estimates at an aggregate level.
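The bias caused by informative sampling can be illustrated with a small simulation (all numbers below are hypothetical and not taken from the thesis): when inclusion probabilities depend on the variable of interest itself, the unweighted sample mean is biased, while a design-weighted (Hájek) estimator recovers the population mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: a positive, skewed variable (e.g. income)
N = 100_000
y = rng.lognormal(mean=10.0, sigma=0.5, size=N)

# Informative design: inclusion probabilities proportional to y itself,
# so sample membership carries information about the variable of interest
pi = np.clip(0.005 * y / y.mean(), 0.0005, 0.05)
sampled = rng.random(N) < pi
y_s, pi_s = y[sampled], pi[sampled]

naive_mean = y_s.mean()                               # ignores the design
hajek_mean = np.sum(y_s / pi_s) / np.sum(1.0 / pi_s)  # design-weighted (Hajek)

print(f"population mean:     {y.mean():10.1f}")
print(f"naive sample mean:   {naive_mean:10.1f}  (biased upward)")
print(f"Hajek weighted mean: {hajek_mean:10.1f}")
```

Since high-income units are over-represented in the sample, the naive mean overshoots the population mean, while weighting each unit by the inverse of its inclusion probability removes the design effect.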
A huge number of clinical studies and meta-analyses have shown that psychotherapy is effective on average. However, not every patient profits from psychotherapy and some patients even deteriorate in treatment. Due to this result and the restricted generalization of clinical studies to clinical practice, a more patient-focused research strategy has emerged. The question whether a particular treatment works for an individual case is the focus of this paradigm. The use of repeated assessments and the feedback of this information to therapists is a major ingredient of patient-focused research. Improving patient outcomes and reducing dropout rates by the use of psychometric feedback seems to be a promising path. Therapists seem to differ in the degree to which they make use of and profit from such feedback systems. This dissertation aims to better understand therapist differences in the context of patient-focused research and the impact of therapists on psychotherapy. Three different studies are included, which focus on different aspects within the field:
Study I (Chapter 5) investigated how therapists use psychometric feedback in their work with patients and how much therapists differ in their usage. Data from 72 therapists treating 648 patients were analyzed. It could be shown that therapists used the psychometric feedback for most of their patients. Substantial variance in the use of feedback (between 27% and 52%) was attributable to therapists. Therapists were more likely to use feedback when they reported being satisfied with the graphical information they received. The results therefore indicated that not only patient characteristics or treatment progress affected the use of feedback.
Study II (Chapter 6) picked up on the idea of analyzing systematic differences in therapists and applied it to the criterion of premature treatment termination (dropout). To answer the question whether therapist effects occur in terms of patients’ dropout rates, data from 707 patients treated by 66 therapists were investigated. It was shown that approximately six percent of variance in dropout rates could be attributed to therapists, even when initial impairment was controlled for. Other predictors of dropout were initial impairment, sex, education, personality styles, and treatment expectations.
Study III (Chapter 7) extends the dissertation by investigating the impact of a transfer from one therapist to another within ongoing treatments. Data from 124 patients who agreed to and experienced a transfer during their treatment were analyzed. A significant drop in patient-rated as well as therapist-rated alliance levels could be observed after a transfer. On average, there seemed to be no difficulties establishing a good therapeutic alliance with the new therapist, although differences between patients were observed. There was no increase in symptom severity due to therapy transfer. Various predictors of alliance and symptom development after transfer were investigated. Impacts on clinical practice were discussed.
Results of the three studies are discussed and general conclusions are drawn. Implications for future research as well as their utility for clinical practice and decision-making are presented.
In order to investigate the psychobiological consequences of acute stress under laboratory conditions, a wide range of methods for socially evaluative stress induction have been developed. The present dissertation is concerned with evaluating a virtual reality (VR)-based adaptation of one of the most widely used of those methods, the Trier Social Stress Test (TSST). In the three empirical studies collected in this dissertation, we aimed to examine the efficacy and possible areas of application of the adaptation of this well-established psychosocial stressor in a virtual environment. We found that the TSST-VR reliably incites the activation of the major stress effector systems in the human body, albeit in a slightly less pronounced way than the original paradigm. Moreover, the experience of presence is discussed as one potential factor of influence in the origin of the psychophysiological stress response. Lastly, we present a use scenario for the TSST-VR in which we employed the method to investigate the effects of acute stress on emotion recognition performance. We conclude that, due to its advantages concerning versatility, standardization and economic administration, the paradigm harbors enormous potential not only for psychobiological research, but other applications such as clinical practice as well. Future studies should further explore the underlying effect mechanisms of stress in the virtual realm and the implementation of VR-based paradigms in different fields of application.
Optimal Control of Partial Integro-Differential Equations and Analysis of the Gaussian Kernel
(2018)
An important field of applied mathematics is the simulation of complex financial, mechanical, chemical, physical or medical processes with mathematical models. In addition to the pure modeling of the processes, the simultaneous optimization of an objective function by changing the model parameters is often the actual goal. Models in fields such as finance, biology or medicine benefit from this optimization step.
While many processes can be modeled using an ordinary differential equation (ODE), partial differential equations (PDEs) are needed to optimize heat conduction and flow characteristics, the spreading of tumor cells in tissue as well as option prices. A partial integro-differential equation (PIDE) is a partial differential equation involving an integral operator, e.g., the convolution of the unknown function with a given kernel function. PIDEs occur for example in models that simulate adhesive forces between cells or option prices with jumps.
In each of the two parts of this thesis, a certain PIDE is the main object of interest. In the first part, we study a semilinear PIDE-constrained optimal control problem with the aim to derive necessary optimality conditions. In the second, we analyze a linear PIDE that includes the convolution of the unknown function with the Gaussian kernel.
Competitive analysis is a well known method for analyzing online algorithms.
Two online optimization problems, scheduling and list accessing, are considered in the thesis of Yida Zhu with respect to this method.
For both problems, several existing online and offline algorithms are studied, and their performance is compared with that of the corresponding optimal offline algorithms.
In particular, the list accessing algorithm BIT is carefully reviewed.
The classical proof of its worst-case performance is simplified by exploiting knowledge about the optimal offline algorithm.
With regard to average-case analysis, a new closed formula is developed to determine the performance of BIT on a specific class of instances.
All algorithms considered in this thesis are also implemented in Julia.
Their empirical performances are studied and compared with each other directly.
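To illustrate the kind of algorithms studied here, the following is a minimal sketch of the randomized BIT rule next to move-to-front, written in Python rather than the Julia used in the thesis. The formulation of BIT used below (flip the item's random bit on access, move to front only when the bit becomes 1) is one common variant and may differ in detail from the thesis; the cost of an access is the 1-based position of the item.

```python
import random

def serve(requests, items, policy, seed=0):
    """Serve an access sequence on a self-organizing list.
    'MTF' always moves the accessed item to the front; 'BIT' flips the
    item's random bit on access and moves it to the front only when the
    bit becomes 1 (one common formulation of the BIT algorithm)."""
    rng = random.Random(seed)
    lst = list(items)
    bit = {x: rng.randrange(2) for x in lst}  # bits initialized uniformly at random
    cost = 0
    for x in requests:
        cost += lst.index(x) + 1  # cost = 1-based position of the accessed item
        move = True
        if policy == "BIT":
            bit[x] ^= 1
            move = bit[x] == 1
        if move:
            lst.remove(x)
            lst.insert(0, x)
    return cost

r = random.Random(1)
requests = [r.choice("abcde") for _ in range(200)]
print("MTF cost:", serve(requests, "abcde", "MTF"))
print("BIT cost:", serve(requests, "abcde", "BIT"))
```

The randomization in BIT is what lowers its competitive ratio below the deterministic lower bound of 2 that applies to MTF.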
This cumulative thesis encompasses three studies focusing on the Weddell Sea region in the Antarctic. The first study produces and evaluates a high-quality data set of wind measurements for this region. The second study produces and evaluates a 15-year regional climate simulation for the Weddell Sea region, and the third study derives and evaluates a climatology of low-level jets (LLJs) from the simulation data set. The evaluations were done in the three attached publications, and the produced data sets are published online.
In 2015/2016, the RV Polarstern undertook an Antarctic expedition in the Weddell Sea. During that time, we operated a Doppler wind lidar on board, running different scan patterns. The resulting data were evaluated, corrected and processed, and we derived horizontal wind speeds and directions for vertical profiles of up to 2 km height. The measurements cover 38 days with a temporal resolution of 10-15 minutes. A comparison with radiosonde data showed only minor differences.
The resulting data set was used alongside other measurements to evaluate temperature and wind in simulation data. The simulation data were produced with the regional climate model CCLM for the Weddell Sea region for the period from 2002 to 2016. Only small biases were found, except for a strong warm bias during winter near the surface of the Antarctic Plateau. We therefore adapted the model setup and were able to remove the bias in a second simulation.
This new simulation data set was then used to derive a climatology of low-level jets (LLJs). Statistics of the occurrence frequency, height and wind speed of LLJs in the Weddell Sea region are presented alongside other parameters. A further evaluation against measurements was also performed in the last study.
Interaction between the Hypothalamic-Pituitary-Adrenal Axis and the Circadian Clock System in Humans
(2017)
Rotation of the Earth creates day and night cycles of 24 h. The endogenous circadian clocks sense these light/dark rhythms, and the master pacemaker situated in the suprachiasmatic nucleus of the hypothalamus entrains physical activities according to this information. The circadian machinery is built from transcriptional/translational feedback loops generating oscillations in all nucleated cells of the body. In addition, unexpected environmental changes, called stressors, also challenge living systems. A response to these stimuli is provided immediately via the autonomic nervous system and more slowly via the hypothalamic-pituitary-adrenal (HPA) axis. When the HPA axis is activated, circulating glucocorticoids are elevated and regulate organ activities in order to maintain survival of the organism. Both the clock and the stress systems are essential for continuity and interact with each other to keep internal homeostasis. The physiological interactions between the HPA axis and the circadian clock system have mainly been addressed in animal studies, which focus on the effects of stress and circadian disturbances on cardiovascular, psychiatric and metabolic disorders. Although these studies offer the opportunity to test hypotheses in the whole organism, to apply techniques that would not be feasible in humans, and to control and manipulate parameters to a high degree, the generalization of their results to humans remains debatable. On the other hand, studies established with cell lines cannot fully reflect the conditions occurring in a living organism. Thus, human studies are absolutely necessary to investigate the mechanisms involved in stress and circadian responses. The studies presented in this thesis were intended to determine the effects of cortisol, as an end product of the HPA axis, on PERIOD (PER1, PER2 and PER3) transcripts as circadian clock genes in healthy humans. The expression levels of PERIOD genes were measured in whole blood under baseline conditions and after stress.
The results presented here have given a better understanding of the transcriptional programming regulated by pulsatile cortisol under standard conditions and of the short-term effects of a cortisol increase on circadian clocks after acute stress. These findings also draw attention to inter-individual variations in the stress response as well as to non-circadian functions of PERIOD genes in the periphery, which need to be examined in detail in the future.
The classic Capital Asset Pricing Model and portfolio theory suggest that investors hold the market portfolio to diversify idiosyncratic risks. The theory predicts that the expected return of an asset is positive and reacts linearly to the overall market. However, in reality, we observe that investors often do not hold perfectly diversified portfolios. Empirical studies find new factors that influence the deviation from the theoretically optimal investment. In the first part of this work (Chapter 2) we study such an example, namely the influence of maximum daily returns on subsequent returns. Here we follow the ideas of Bali et al. (2011). The goal is to find cross-sectional relations between extremely positive returns and expected average returns. We take a larger number of markets worldwide into account. Bali et al. (2011) report, with respect to the U.S. market, a robust negative relation between MAX (the maximum daily return) and the expected return in the subsequent period. We substantially extend their database to a number of other countries and also take more recent data into account (until the end of 2009). From this we conclude that the relation between MAX and expected returns is not consistent across countries. Moreover, we test the robustness of the results of Bali et al. (2011) in two time periods using the same data from CRSP. The results show that the effect of extremely positive returns is not stable over time. Indeed, we find a negative cross-sectional relation between extremely positive returns and average returns for the first half of the time series, but we do not find significant effects for the second half. The main results of this chapter serve as the basis for an unpublished working paper, Yuan and Rieger (2014b).
While in Chapter 2 we studied factors that prevent optimal diversification, in Chapters 3 and 4 we consider situations where the optimal structure of diversification was previously unknown, namely the diversification of options (or structured financial products). Financial derivatives are an important additional investment form with respect to diversification. Not only common call and put options but also structured products enable investors to pursue a multitude of investment strategies to improve the risk-return profile. Since derivatives are becoming more and more important, the diversification of portfolios that include derivatives is of particular practical relevance. We investigate optimal diversification strategies in connection with underlying stocks for classical rational investors with constant relative risk aversion (CRRA). In particular, we apply Monte Carlo methods based on the Black-Scholes model and the Heston model for stochastic volatility to model the stock market processes and the pricing of the derivatives. Afterwards, we compare a benchmark portfolio that consists of derivatives on single assets with one containing derivatives on the index of these assets. First we compute the utility improvement of an investment in risk-free assets and plain-vanilla options for CRRA investors in various scenarios. Furthermore, we extend our analysis to several kinds of structured products, in particular capital protected notes (CPNs), discount certificates (DCs) and bonus certificates (BCs). We find that the decision of an investor between these two diversification strategies leads to remarkable differences. The difference in the utility improvement is influenced by the risk preferences of investors, stock prices and the properties of the derivatives in the portfolio. The results are presented in Chapter 3 and are the basis for a yet unpublished working paper, Yuan and Rieger (2014a).
To check furthermore whether the underlyings of structured products influence the decisions of investors, we explicitly discuss the utility gain of a stock-based product and an index-based product for an investor whose preferences are described by cumulative prospect theory (CPT) (Chapter 4, compare Yuan (2014)). The goal is to investigate the dependence of structured products on their underlying, where we put emphasis on the difference between index products and single-stock products, in particular with respect to loss aversion and mental accounting. We consider capital protected notes and discount certificates as examples, and model the stock prices and the index of these stocks via Monte Carlo simulations in the Black-Scholes framework. The results show that market conditions, particularly the expected returns and the volatility of the stocks, play a crucial role in determining the preferences of investors for stock-based versus index-based CPNs. A median CPT investor prefers the index-based CPN if the expected return is higher and the volatility is lower, while he prefers the stock-based CPN in the opposite situation. We also show that index-based DCs are robustly more attractive than stock-based DCs for CPT investors.
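As a sketch of the simulation machinery referred to above, the following compares a Monte Carlo price of a plain-vanilla European call under the Black-Scholes model with the closed-form price. The parameters are illustrative; the actual portfolio and utility computations in the thesis are more involved.

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n_paths=200_000, seed=0):
    """Monte Carlo price: simulate terminal prices of a geometric Brownian
    motion under the risk-neutral measure and discount the mean payoff."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return exp(-r * T) * np.maximum(ST - K, 0.0).mean()

S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0
print("closed form:", bs_call(S0, K, r, sigma, T))
print("Monte Carlo:", mc_call(S0, K, r, sigma, T))
```

The same simulated terminal prices can then feed payoff functions of structured products (CPNs, DCs, BCs) and expected-utility calculations, which is where Monte Carlo becomes indispensable, since closed forms are rarely available for those payoffs.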
Do Personality Traits, Trust and Fairness Shape the Stock-Investing Decisions of an Individual?
(2023)
This thesis comprises three projects, all of which are fundamentally connected to the choices that individuals make about stock investments. Differences in stock market participation (SMP) across countries are large and difficult to explain. The second chapter focuses on differences between Germany (low SMP) and East Asian countries (mostly high SMP). The study hypothesis is that cultural differences regarding social preferences and attitudes towards inequality lead to different attitudes towards stock markets and subsequently to different SMPs. Using a large-scale survey, it is found that these factors can indeed explain a substantial amount of the country differences that other known factors (financial literacy, risk preferences, etc.) could not. This suggests that social preferences should be given a more central role in programs that aim to enhance SMP in countries like Germany. The third chapter documents the importance of trust as well as herding for stock ownership decisions. The findings show that trust as a general concept has no significant contribution to stock investment intention. A thorough examination of general trust elements reveals that in-group and out-group trust have an impact on individual stock market investment. Higher out-group trust directly influences a person's decision to invest in stocks, whereas higher in-group trust increases herding attitudes in stock investment decisions and can thus potentially increase the likelihood of stock investments as well. The last chapter investigates the significance of personality traits for stock investing and home bias in portfolio selection. The findings show that personality traits do indeed have a significant impact on stock investment and portfolio allocation decisions.
Although the magnitude and significance of the characteristics differ between the two groups of investors, inexperienced and experienced, conscientiousness and neuroticism play an important role in stock investments and preferences. Moreover, high conscientiousness scores increase the desire to invest in stocks and the portfolio allocation to risky assets like stocks, discouraging home bias in asset allocation. Regarding neuroticism, a higher level increases home bias in portfolio selection and decreases both the willingness to invest in stocks and the portfolio share allocated to them. Finally, when an investor has no prior experience with portfolio selection, patriotism generates home bias. For experienced investors, a low neuroticism score together with high conscientiousness and openness scores seemed to be a constant factor in the decision to invest in a well-diversified international portfolio.
Krylov subspace methods are often used to solve large-scale linear systems arising from optimization problems involving partial differential equations (PDEs). Appropriate preconditioning is vital for designing efficient iterative solvers of this type. This research consists of two parts. In the first part, we compare, both theoretically and numerically, two different kinds of preconditioners for a conjugate gradient (CG) solver applied to a partial integro-differential equation (PIDE) in finance. An analysis of the mesh independence and the rate of convergence of the CG solver is included. The knowledge gained from preconditioning the PIDE is applied to a related optimization problem. The second part aims at developing a new preconditioning technique by embedding reduced-order models of nonlinear PDEs, generated by proper orthogonal decomposition (POD), into deflated Krylov subspace algorithms for solving the corresponding optimization problems. Numerical results are reported for a series of test problems.
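A minimal sketch of the type of solver discussed here is preconditioned conjugate gradients with a simple Jacobi (diagonal) preconditioner; the actual preconditioners for the PIDE and the POD-based deflation technique in the thesis are considerably more sophisticated, and the test matrix below is purely illustrative.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradients for a symmetric positive definite A.
    M_inv(r) applies the inverse of the preconditioner to a residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# SPD test matrix: tridiagonal with a strongly varying diagonal,
# where Jacobi (diagonal) preconditioning pays off
n = 300
A = np.diag(np.linspace(3.0, 100.0, n)) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A).copy()
x_jac, it_jac = pcg(A, b, lambda r: r / d)  # Jacobi preconditioner
x_id, it_id = pcg(A, b, lambda r: r)        # unpreconditioned CG
print(f"iterations: Jacobi {it_jac}, none {it_id}")
```

The preconditioner is passed as a function applying M^(-1), which mirrors how more elaborate preconditioners (incomplete factorizations, reduced-order models) are used in practice: only their action on a residual is needed, never the matrix itself.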
This thesis discusses revue as a significantly inter-cultural genre in the history of global theatre. During the ‘modernisation’ period in Europe, America and Japan, most major urban cities experienced a boom in revue venues and performances. Few studies about revue have yet been done in theatre studies or in urban cultural studies. My thesis will attempt to reevaluate and redefine revue as a highly intercultural theatre genre by using the concept of liminality. In other words, the aim is to examine revue as a genre built on ‘modern composition of betweenness’, bridging seemingly opposing elements, such as the foreign and the domestic, the classic and the innovative, the traditional and the modern, the professional and the amateur, high and low culture, and the feminine and the masculine. The goal is to regard revue as a liminal genre constructed amidst the negotiations between these binaries, existing in a state of constant flux.
The purpose of this approach is to capture revue as a transitory phenomena in five dimensions: conceptual, spatial, temporal, categorical and physical. Over the course of six chapters, this
inter-disciplinary discussion will reveal the reasons why and the ways by which revue came to establish its prominent position in the Japanese theatre industry. The whole structure is also an attempt to provide plausible ways to apply sociological considerations to theatre studies.
Entrepreneurship is a process of discovering and exploiting opportunities, during which two crucial milestones emerge: in the very beginning when entrepreneurs start their businesses, and in the end when they determine the future of the business. This dissertation examines the establishment and exit of newly created as well as of acquired firms, in particular the behavior and performance of entrepreneurs at these two important stages of entrepreneurship. The first part of the dissertation investigates the impact of characteristics at the individual and at the firm level on an entrepreneur- selection of entry modes across new venture start-up and business takeover. The second part of the dissertation compares firm performance across different entrepreneurship entry modes and then examines management succession issues that family firm owners have to confront. This study has four main findings. First, previous work experience in small firms, same sector experience, and management experience affect an entrepreneur- choice of entry modes. Second, the choice of entry mode for hybrid entrepreneurs is associated with their characteristics, such as occupational experience, level of education, and gender, as well as with the characteristics of their firms, such as location. Third, business takeovers survive longer than new venture start-ups, and both entry modes have different survival determinants. Fourth, the family firm- decision of recruiting a family or a nonfamily manager is not only determined by a manager- abilities, but also by the relationship between the firm- economic and non-economic goals and the measurability of these goals. The findings of this study extend our knowledge on entrepreneurship entry modes by showing that new venture start-ups and business takeovers are two distinct entrepreneurship entry modes in terms of their founders" profiles, their survival rates and survival determinants. 
Moreover, this study contributes to the literature on top management hiring in family firms: it establishes family firm- non-economic goals as another factor that impacts the family firm- hiring decision between a family and a nonfamily manager.
Survey data can be viewed as incomplete or partially missing from a variety of perspectives and there are different ways of dealing with this kind of data in the prediction and the estimation of economic quantities. In this thesis, we present two selected research contexts in which the prediction or estimation of economic quantities is examined under incomplete survey data.
These contexts are first the investigation of composite estimators in the German Microcensus (Chapters 3 and 4) and second extensions of multivariate Fay-Herriot (MFH) models (Chapters 5 and 6), which are applied to small area problems.
Composite estimators are estimation methods that take into account the sample overlap in rotating panel surveys such as the German Microcensus in order to stabilise the estimation of the statistics of interest (e.g. employment statistics). Due to the partial sample overlaps, information from previous samples is only available for some of the respondents, so the data are partially missing.
MFH models are model-based estimation methods that work with aggregated survey data in order to obtain more precise estimation results for small area problems compared to classical estimation methods. In these models, several variables of interest are modelled simultaneously. The survey estimates of these variables, which are used as input in the MFH models, are often partially missing. If the domains of interest are not explicitly accounted for in a sampling design, the sizes of the samples allocated to them can, by chance, be small. As a result, it can happen that either no estimates can be calculated at all or that the estimated values are not published by statistical offices because their variances are too large.
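As background, the univariate Fay-Herriot model, the building block that the multivariate (MFH) models extend, can be sketched as follows. The data and the simple moment-based fitting step are illustrative assumptions and not taken from the thesis; each area's EBLUP is a shrinkage-weighted compromise between the direct estimate and a synthetic regression prediction.

```python
import numpy as np

def fay_herriot_eblup(y, X, psi, tol=1e-8, max_iter=200):
    """Univariate Fay-Herriot EBLUP. y: direct estimates per area,
    X: area-level covariates, psi: known sampling variances.
    Returns (eblup, shrinkage weights gamma)."""
    D, p = X.shape
    sigma2 = max(np.var(y) - psi.mean(), 0.01)  # crude starting value
    for _ in range(max_iter):
        w = 1.0 / (sigma2 + psi)
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        resid = y - X @ beta
        # moment step: the Fay-Herriot estimator solves sum(w * resid^2) = D - p
        new = max(sigma2 + (np.sum(w * resid**2) - (D - p)) / np.sum(w), 0.0)
        if abs(new - sigma2) < tol:
            sigma2 = new
            break
        sigma2 = new
    w = 1.0 / (sigma2 + psi)
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    gamma = sigma2 / (sigma2 + psi)  # weight on the direct estimate
    return gamma * y + (1.0 - gamma) * (X @ beta), gamma

# Illustrative data: 30 areas, one covariate, heteroscedastic sampling variances
rng = np.random.default_rng(42)
D = 30
X = np.column_stack([np.ones(D), rng.uniform(0, 1, D)])
psi = rng.uniform(0.3, 2.0, D)            # large psi = unreliable direct estimate
theta = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, D)
y = theta + rng.normal(0, np.sqrt(psi))   # direct estimates
eblup, gamma = fay_herriot_eblup(y, X, psi)
print("shrinkage weights between", gamma.min(), "and", gamma.max())
```

Areas with large sampling variances receive small gamma and are shrunk strongly toward the synthetic part, which is exactly the "borrowing strength" that makes the model-based estimates more precise than the direct ones.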
Numerous RCTs demonstrate that cognitive behavioral therapy (CBT) for depression is effective. However, these findings are not necessarily representative of CBT under routine care conditions. Routine care studies are usually not subject to comparable standardization: therapists often do not follow treatment manuals, and patients are less homogeneous with regard to their diagnoses and sociodemographic variables. Results on the transferability of findings from clinical trials to routine care are sparse and point in different directions. As RCT samples are selective due to a stringent application of inclusion/exclusion criteria, comparisons between routine care and clinical trials must be based on a consistent analytic strategy. The present work demonstrates the merits of propensity score matching (PSM), which offers solutions to reduce bias by balancing two samples based on a range of pretreatment differences. The objective of this dissertation is the investigation of the transferability of findings from RCTs to routine care settings.
Educational assessment tends to rely on more or less standardized tests, teacher judgments, and observations. Although teachers spend approximately half of their professional time on assessment-related activities, most of them enter their professional life unprepared, as classroom assessment is often not part of their educational training. Since teacher judgments matter for the educational development of students, these judgments should be held to a high standard. The present dissertation comprises three studies focusing on the accuracy of teacher judgments (Study 1), the consequences of (mis-)judgment for teacher nominations for gifted programming (Study 2) and teacher recommendations for secondary school tracks (Study 3), and the individual student characteristics that influence and potentially bias teacher judgment (Studies 1 through 3). All studies were designed to contribute to a further understanding of the classroom assessment skills of teachers. Overall, the results implied that teacher judgment of cognitive ability was an important constant for teacher nominations and recommendations but lacked accuracy. Furthermore, teacher judgments of various traits and of school achievement were substantially related to social background variables, especially the parents' educational background. However, multivariate analyses showed that social background variables impacted nomination and recommendation only marginally, if at all. All results indicated that differentiated but potentially biased teacher judgments impact far-reaching referral decisions directly, while the influence of social background on the referral decisions themselves seems to be mediated. Implications for further research practices and educational assessment strategies are discussed. The implications regarding the need to educate teachers on judgment and educational assessment are of particular interest and importance.
The main focus of this work is to study the computational complexity of generalizations of the synchronization problem for deterministic finite automata (DFAs). This problem asks, for a given DFA, whether there exists a word w that maps every state of the automaton to the same state. We call such a word w a synchronizing word. A synchronizing word brings a system from an unknown configuration into a well-defined configuration and thereby resets the system.
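As an illustration of the base problem, a shortest synchronizing word for a small DFA can be found by breadth-first search over subsets of states. The sketch below is a standard construction (exponential in the number of states, so only for tiny automata), applied here to the Černý automaton C_4, whose shortest reset word is known to have length (4-1)² = 9; the encoding of the transition function is invented for illustration.

```python
from collections import deque

def synchronizing_word(states, alphabet, delta):
    """Breadth-first search over subsets of states; returns a shortest
    synchronizing word, or None if the DFA is not synchronizable.
    Exponential in the number of states, so only for small automata."""
    start = frozenset(states)
    queue, seen = deque([(start, "")]), {start}
    while queue:
        subset, word = queue.popleft()
        if len(subset) == 1:   # all states collapsed: word synchronizes
            return word
        for a in alphabet:
            nxt = frozenset(delta[(q, a)] for q in subset)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + a))
    return None

# Cerny automaton C_4: 'a' rotates the states cyclically, 'b' maps state 0
# to state 1 and fixes all other states.
n = 4
delta = {}
for q in range(n):
    delta[(q, "a")] = (q + 1) % n
    delta[(q, "b")] = 1 if q == 0 else q
w = synchronizing_word(range(n), "ab", delta)
print(len(w))  # 9, matching the known bound (n - 1)**2
```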
We generalize this problem in four different ways.
First, we restrict the set of potential synchronizing words to a fixed regular language; this yields the synchronization under regular constraints problem.
The motivation here is to control the structure of a synchronizing word so that, for instance, it first brings the system from an operating mode into a reset mode and then finally back into the operating mode.
The next generalization concerns the order in which a synchronizing word traverses the states of the automaton. Here, a DFA A and a partial order R are given as input, and the question is whether there exists a word that synchronizes A and whose induced state order is consistent with R. We study different ways in which a word can induce an order on the state set.
Then we shift our focus from DFAs to push-down automata and generalize the synchronization problem to push-down automata and, in subsequent work, to visibly push-down automata. Here, a synchronizing word still needs to map each state of the automaton to the same state, but it must additionally fulfill constraints on the stack. We study three types of stack constraints, where after reading the synchronizing word the stacks associated with each run of the automaton must be (1) empty, (2) identical, or (3) arbitrary.
We observe that the synchronization problem for general push-down automata is undecidable and study restricted sub-classes of push-down automata where the problem becomes decidable. For visibly push-down automata we even obtain efficient algorithms for some settings.
The second part of this work studies the intersection non-emptiness problem for DFAs. This problem is related to the question of whether a given DFA A can be synchronized into a state q: the set of words synchronizing A into q is exactly the intersection of the languages accepted by the automata obtained by copying A with different initial states and q as their final state.
For the intersection non-emptiness problem, which is PSPACE-complete in general, we first study the complexity restricted to subclasses of DFAs associated with the two well-known Straubing-Thérien and Cohen-Brzozowski dot-depth hierarchies.
Finally, we study the problem whether a given minimal DFA A can be represented as the intersection of a finite set of smaller DFAs such that the language L(A) accepted by A is equal to the intersection of the languages accepted by the smaller DFAs. There, we focus on the subclass of permutation and commutative permutation DFAs and improve known complexity bounds.
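The reduction above rests on the product-automaton view of intersection non-emptiness. The following hedged sketch checks non-emptiness for several DFAs over a shared alphabet by BFS on their product; the tuple encoding of a DFA as (initial state, final states, transition function) is invented for illustration.

```python
from collections import deque

def intersection_nonempty(dfas, alphabet):
    """BFS on the product automaton of several complete DFAs over a shared
    alphabet. Each DFA is encoded as (initial_state, final_states, delta)
    with delta[(state, symbol)] -> state. The intersection of the accepted
    languages is non-empty iff a tuple of final states is reachable."""
    start = tuple(d[0] for d in dfas)
    queue, seen = deque([start]), {start}
    while queue:
        states = queue.popleft()
        if all(q in d[1] for q, d in zip(states, dfas)):
            return True
        for a in alphabet:
            nxt = tuple(d[2][(q, a)] for q, d in zip(states, dfas))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy DFAs over the unary alphabet {a}, counting occurrences of 'a':
toggle = {(0, "a"): 1, (1, "a"): 0}
mod3 = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 0}
d_odd = (0, {1}, toggle)   # accepts words with an odd number of a's
d_even = (0, {0}, toggle)  # accepts words with an even number of a's
d_mod3 = (0, {0}, mod3)    # accepts words whose length is divisible by 3

print(intersection_nonempty([d_odd, d_mod3], "a"))  # True  (e.g. "aaa")
print(intersection_nonempty([d_odd, d_even], "a"))  # False
```

The PSPACE-hardness of the general problem corresponds to the fact that the product state space is exponential in the number of input DFAs.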
Copositive programming is concerned with the problem of optimizing a linear function over the copositive cone, or its dual, the completely positive cone. It is an active field of research and has received a growing amount of attention in recent years. This is because many combinatorial as well as quadratic problems can be formulated as copositive optimization problems. The complexity of these problems is then moved entirely to the cone constraint, showing that general copositive programs are hard to solve. A better understanding of the copositive and the completely positive cone can therefore help in solving (certain classes of) quadratic problems. In this thesis, several aspects of copositive programming are considered. We start by studying the problem of computing the projection of a given matrix onto the copositive and the completely positive cone. These projections can be used to compute factorizations of completely positive matrices. As a second application, we use them to construct cutting planes to separate a matrix from the completely positive cone. Besides the cuts based on copositive projections, we will study another approach to separate a triangle-free doubly nonnegative matrix from the completely positive cone. A special focus is on copositive and completely positive programs that arise as reformulations of quadratic optimization problems. Among those we start by studying the standard quadratic optimization problem. We will show that for several classes of objective functions, the relaxation resulting from replacing the copositive or the completely positive cone in the conic reformulation by a tractable cone is exact. Based on these results, we develop two algorithms for solving standard quadratic optimization problems and discuss numerical results. The methods presented cannot immediately be adapted to general quadratic optimization problems. This is illustrated with examples.
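As a concrete instance of the conic reformulations discussed here, the standard quadratic optimization problem admits the classical completely positive reformulation; in the notation assumed below, e is the all-ones vector, E = ee^T, and C* denotes the completely positive cone:

```latex
\min_{x \in \mathbb{R}^n} \; x^{\top} Q x \quad \text{s.t.} \quad e^{\top} x = 1,\; x \ge 0
\qquad = \qquad
\min_{X} \; \langle Q, X \rangle \quad \text{s.t.} \quad \langle E, X \rangle = 1,\; X \in \mathcal{C}^{*} .
```

The right-hand problem is linear in X; the combinatorial hardness is shifted entirely into the cone-membership constraint, which is what motivates studying projections onto, and separation from, the completely positive cone.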
This dissertation is concerned with a novel type of branch-and-bound algorithm that differs from classical branch-and-bound algorithms in that branching is performed by adding non-negative penalty terms to the objective function instead of adding further constraints. The thesis proves the theoretical correctness of this algorithmic principle for several general classes of problems and evaluates the method for various concrete problem classes. For these problem classes, namely monotone and non-monotone mixed-integer linear complementarity problems and mixed-integer linear programs, the thesis presents several problem-specific improvements and evaluates them numerically. Furthermore, the new method is compared with several benchmark methods, with mostly good results, and an outlook on further areas of application and open research questions is given.
In this thesis, we investigate the quantization problem for Gaussian measures on Banach spaces by means of constructive methods. That is, for a random variable X and a natural number N, we search for the N elements of the underlying Banach space that best approximate X in the average sense. We particularly focus on centered Gaussians on the space of continuous functions on [0,1] equipped with the supremum norm, since in that case all previously known methods failed to achieve the optimal quantization rate for important Gaussian processes. By means of spline approximations and a scheme based on best approximations in the sense of the Kolmogorov n-width, we attain the optimal rate of convergence to zero for these quantization problems. Moreover, we establish a new upper bound for the quantization error, based on a very simple criterion: the modulus of smoothness of the covariance function. Finally, we explicitly construct these quantizers numerically.
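For reference, the quantity being minimized is the N-th quantization error; in the notation assumed here, for a Banach space (E, ||·||) and an E-valued random variable X,

```latex
e_N(X) \;=\; \inf_{\substack{C \subset E \\ |C| \le N}} \; \mathbb{E}\Big[ \min_{c \in C} \, \| X - c \| \Big],
```

and the quantization rate is the asymptotic order at which e_N(X) tends to zero as N grows; a constructive method must exhibit concrete codebooks C achieving that order.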
Geographic ranges of species and their determinants are of great interest in the field of biogeography and are often studied in terms of the species' ecological niches. In this context, the range of a species is defined by the accessibility of an area, abiotic factors and biotic interactions, which affect a species' distribution with different intensities across spatial scales. Parapatry describes a distributional pattern in which the ranges of two species meet along sharp range limits with narrow contact zones. Such parapatric range limits are determined by changing abiotic conditions along sharp environmental gradients or can result from interspecific resource competition. However, it has been shown that often the interplay of abiotic conditions and species interactions determines parapatry. The geographic ranges of the land salamanders Salamandra salamandra and S. atra narrowly overlap in the European Alps with only a few syntopic localities, and to date the cause of their parapatry is unknown. The goal of this thesis was thus to identify the importance of abiotic and biotic factors for their parapatric range limits at different spatial scales. On a broad spatial scale, the role of climate for the parapatric range limits of the species was investigated within three contact zones in Switzerland. Climatic conditions at the species' records were analysed, and species distribution modelling techniques were used to explore the species' climatic niches and to quantify the interspecific niche overlap. Furthermore, it was tested whether the parapatric range limit coincides with a strong climatic gradient. The results revealed distinct niches for the species as well as the presence of strong climatic gradients which could explain the parapatric range limits. Yet, there was a moderate interspecific niche overlap in all contact zones, indicating that the species may co-occur and interact with each other in areas where both find adequate conditions.
Comparison among contact zones revealed geographic variation in the species' niches as well as in the climatic conditions at their records, suggesting that the species can occur in a much wider range of conditions than they actually do. These findings imply that climate represents a main factor for the species' parapatric range limits. Yet, the interspecific niche overlap and the geographic variation provide indirect evidence that interspecific interactions may also affect their spatial distribution. To test whether competition restricts the species' ranges on the habitat scale and to understand the local syntopic co-occurrence of the salamanders within their contact zones, site-occupancy modelling was used. This approach allowed us to find the habitat predictors that best explain the species' local distribution. While the slope of the site positively affected the occupancy probability of S. salamandra, no tested predictor explained that of S. atra. Also, there was no effect of the occurrence of one species on the occupancy probability of the other, providing no evidence for competition. Should competition occur, it does not lead to spatial segregation of the species on this scale. Because biotic interactions most strongly affect the ranges of species on small spatial scales, the microhabitat conditions at locations of the species within syntopic contact zones were compared, and a null model analysis was applied to determine their niche overlap. Resource selection probability function models were used to assess the attributes that affect the species' habitat selection. The results revealed species-specific microhabitat preferences related to leaf litter cover and tree number, and showed that the species were active at different temperatures as well as at different times of the day. The high degree of diurnal activity of S. atra may be due to its preference for forest floor microhabitats that remain suitable for a long time during the day.
Besides, there was a great niche overlap for shelters, indicating that the species may compete for this resource. Differential habitat selection and the use of the available shelters at different times of the day may minimize species interactions and allow their local co-occurrence within contact zones. To identify whether a potential infection with the pathogenic chytrid fungus could serve as an alternative biotic explanation for the range margins of S. atra, several populations throughout its range were screened for infection. Since this pathogen has been detected mostly at lower altitudes of the Alps, it might confine the range of S. atra to higher elevations. However, because chytrid was not detected in any of the samples, the pathogen is unlikely to play a role in determining the species' range limits. Overall, these findings underline the complexity of the mechanisms that determine the range margins of parapatric species and provide an important basis for subsequent studies of the determinants of the parapatric distribution of the two salamander species.
Shape optimization is of interest in many fields of application. In particular, shape optimization problems arise frequently in technological processes which are modelled by partial differential equations (PDEs). In many practical circumstances, the shape under investigation is parametrized by a finite number of parameters, which, on the one hand, allows the application of standard optimization approaches but, on the other hand, unnecessarily limits the space of reachable shapes. Shape calculus presents a way to circumvent this dilemma. However, so far shape optimization based on shape calculus has mainly been performed using gradient descent methods. One reason for this is the lack of symmetry of second order shape derivatives, or shape Hessians. A major difference between shape optimization and the standard PDE constrained optimization framework is the lack of a linear space structure on shape spaces. If one cannot use a linear space structure, then the next best structure is a Riemannian manifold structure, in which one works with Riemannian shape Hessians. They possess the often-sought property of symmetry, characterize well-posedness of optimization problems and define sufficient optimality conditions. In general, shape Hessians are used to accelerate gradient-based shape optimization methods. This thesis deals with shape optimization problems constrained by PDEs and embeds these problems in the framework of optimization on Riemannian manifolds to provide efficient techniques for PDE constrained shape optimization problems on shape spaces. A Lagrange-Newton and a quasi-Newton technique in shape spaces for PDE constrained shape optimization problems are formulated. These techniques are based on the Hadamard form of shape derivatives, i.e., on the form of integrals over the surface of the shape under investigation. It is often a very tedious, not to say painful, process to derive such surface expressions.
Along the way, volume formulations in the form of integrals over the entire domain appear as an intermediate step. This thesis couples volume integral formulations of shape derivatives with optimization strategies on shape spaces in order to establish efficient shape algorithms reducing analytical effort and programming work. In this context, a novel shape space is proposed.
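Schematically, the two forms of the shape derivative discussed above are related as follows; here J is a shape functional on domains Ω with boundary Γ, V a deformation vector field, n the outer normal, and F and g problem-dependent integrands (the notation is assumed, not taken from the thesis):

```latex
dJ(\Omega)[V]
\;=\; \underbrace{\int_{\Omega} F(V)\, dx}_{\text{volume form}}
\;=\; \underbrace{\int_{\Gamma} g \, \langle V, n \rangle \, ds}_{\text{Hadamard (surface) form}},
```

where the surface form exists under suitable regularity by the Hadamard structure theorem. Working directly with the volume form avoids deriving the often tedious surface expression g.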
The overall objective of this thesis was to gain a deeper understanding of the antecedents, processes, and manifestations of uniqueness-driven consumer behavior. To achieve this goal, five studies were conducted in Germany and Switzerland with a total of 1048 participants across different demographic and socio-economic backgrounds. Two concepts were employed in all studies: consumer need for uniqueness (CNFU) and general uniqueness perception (GUP). CNFU (Tian, Bearden, & Hunter, 2001), a mainly US-based consumer research concept, measures the individual need, and thus the motivation, to acquire, use, and dispose of consumer goods in order to develop a unique image. GUP, adapted from the two-component theory of individuality (Kampmeier, 2001), represents a global and direct measure of self-ascribed uniqueness. Study #1 looked at the interrelation of the uniqueness-driven concepts. GUP and CNFU were employed as potential psychological factors that influence and predict uniqueness-driven consumer behavior. Different behavioral measures were used: the newly developed possession of individualized products (POIP), the newly developed products for uniqueness display (PFUD), and the already established uniqueness-enhancing behaviors (UEB). Analyses showed that CNFU mediates the relationship between GUP and the behavioral measures in a German-speaking setting. Thus, GUP (representing self-perception) was identified as the driver behind CNFU (representing motivation) and the actual consumer behavior. Study #2 examined further manifestations of uniqueness-driven consumer behavior. For this purpose, an extreme form of uniqueness-increasing behavior was researched: tattooing. The influence of GUP and CNFU on tattooing behavior was investigated using a sample recruited at a tattoo exhibition. To do so, a newly developed measure of the percentage of the body covered by tattoos was employed.
It was revealed that individuals with higher GUP and CNFU levels indeed have a higher degree of tattooing. Study #3 further explored the predictive possibilities and limitations of the GUP and CNFU concepts. On the one hand, study #3 specifically looked at the consumption of customized apparel products, as mass customization is said to become the standard of the century (Piller & Müller, 2004). It was shown that individuals with higher CNFU levels not only purchased more customized apparel products in the last six months, but also spent more money on them. On the other hand, uniqueness-enhancing activities (UEA), such as travel to exotic places or extreme sports, were investigated using a newly developed 30-item scale. It was revealed that CNFU partly mediates the GUP-UEA relationship, showing that CNFU indeed predicts a broad range of consumer behaviors and that GUP is the driver behind both the need and the behavior. Study #4 entered new terrain. In contrast to the previous three studies, it explored the so-termed "passive" side of uniqueness-seeking in the consumer context. Individuals might feel unique because companies treat them in a special way. Such unique customer treatment (UCT) involves activities like customer service or customer relationship management. Study #4 investigated whether individuals differ in their need for such treatment. Hence, with the need for unique customer treatment (NFUCT), a new uniqueness-driven consumer need was introduced and its impact on customer loyalty examined. Analyses revealed, for example, that individuals with high NFUCT levels receiving highly unique customer treatment showed the highest customer loyalty, whereas the lowest customer loyalty was found among individuals with high NFUCT levels receiving a low degree of unique customer treatment. Study #5 mainly examined the processes behind uniqueness-driven consumer behavior.
Here, not only psychological but also situational influences were examined. This study investigated the impact of a non-personal, "indirect" uniqueness manipulation on the consumption of customized apparel products while simultaneously controlling for the influence of GUP and CNFU. To this end, two equivalent experimental groups were created. These groups then received an e-mail with either a "pro-individualism" campaign or a "pro-collectivism" campaign developed especially for study #5. The experiment revealed that individuals receiving the "pro-individualism" campaign, which told participants that uniqueness is socially appropriate and desired, were willing to spend more money on customization options than individuals receiving the "pro-collectivism" campaign. Hence, not only psychological antecedents such as GUP and CNFU influence uniqueness-driven consumer behavior, but also situational factors.
Software and interactive systems that adapt their behavior to the user are often referred to as adaptive systems. These systems infer the user's goals, knowledge or preferences by observing the user's actions. A synopsis of 43 published studies demonstrated that only few of the existing systems have been evaluated empirically, and most studies failed to show an advantage of the user model. A new framework is proposed that categorizes existing studies and defines an evaluation procedure able to uncover failures and maladaptations in the user model. It consists of four layers: evaluation of input data, evaluation of inference, evaluation of the adaptation decision, and evaluation of the total interaction. As an example, the framework has been applied to the HTML-Tutor, an online course that adapts to the learners' knowledge. Several empirical studies are described that test the accuracy of the user models and explore the effects of adaptation to knowledge and prior knowledge, respectively. Generalization issues of the approach are discussed.
Legalisation cannot be fully explained by interest politics. If that were the case, attitudes towards legalisation would be expected to be based on objective interests, and actual policies in France and Germany would be expected to be more similar. Nor can it be explained by institutional agency, because there are no hints that states struggle with different normative traditions. Rather, political actors seek to make use of the structures that already exist to guarantee legitimacy for their actions. If the main concern of governmental actors really is to accumulate legitimacy, as stated in the introduction, then politicians have a good starting position in the case of the legalisation of illegal foreigners. Citizens' negative attitudes towards legalisation cannot be explained by imagined labour market competition; income effects play only a secondary role. The most important explanatory factor is the educational level of each individual. Objective interests do not trigger attitudes towards legalisation, but rather a basic mental predisposition for or against illegal immigrants who are eligible for legalisation. Politics concerning amnesties are thus not tied to an objectively given structure like the socio-economic composition of the electorate, but are open to political discretion. Attitudes on legalising illegal immigrants can be regarded as mediated by beliefs and perceptions, which can be used by political agents or altered by political developments. However, politicians must adhere to a national frame of legitimating strategies that cannot be neglected without consequences. It was evident in the cross-country comparison of political debates that there are national systems of reference that provide patterns of interpretation. Legalisation is seen and incorporated into immigration policy in a very specific way that differs from one country to the next.
In both countries investigated in this study, there are fundamental debates about which basic principles apply to legalisation and which of these should be held in higher esteem: a legal system able to work, humanitarian rights, practical considerations, etc. The results suggest that legalisation is "technicized" in France by describing it as an unusual but possible pragmatic instrument for the adjustment of the inefficient rule of law. In Germany, however, legalisation is discussed at a more normative level. Proponents of conservative immigration policies regard it as a substantial infringement on the rule of law, so that even defenders of a humanitarian solution for illegal immigrants are not able to challenge this view without significant political harm. But the arguments brought to bear in the debate on legalisation are not necessarily sound, because they are not irrefutable facts but instruments to generate legitimacy, and there are enough possibilities for arguing and persuading because socio-economic factors play a minor role. One of the most important arguments, the alleged pull effect of legalisation, has been subjected to an empirical investigation. In the political debate, it does not make any difference whether this effect is real or not, insofar as it is not contested by incontrovertible findings. In reality, the results suggest that amnesties indeed exert a small attracting influence on illegal immigration, which has been contested by immigration-friendly politicians in the French parliament. The effect, however, is not large; therefore, some conservative politicians may put too much stress on this argument. Moreover, one can see legalisation as an instrument to restore legitimacy that has slipped away from immigration politics because of a high number of illegally residing foreigners. This aspect explains some of the peculiarities in the French debate on legalisation, e.g.
the idea that the coherence of the law is secured by creating exceptional rules for legalising illegal immigrants. It has become clear that the politics of legalisation are susceptible to manipulation by introducing certain interpretations into the political debate, which become predominant and supersede other views. In this study, there are no signs of a systematic misuse of this constellation by any particular actor. However, the history of immigration policy is full of examples of symbolic politics in which a certain measure has been initiated while the actors were fully aware of its lack of effect. Legalisation has escaped this fate so far because it is a specific instrument that results from neglecting populist mechanisms rather than an example of a superficial measure. This result does not apply to policies concerning illegal immigration in general, both with regard to concealing a lack of control and to flexing the state's muscles.
The daily dose of health information: A psychological view on the health information seeking process
(2021)
The search for health information is becoming increasingly important in everyday life, as well as socially and scientifically relevant. Previous studies have mainly focused on the design and communication of information; the perspective of the seeker, as well as individual differences in skills and abilities, has so far been neglected. A psychological perspective on the process of searching for health information would provide important starting points for promoting the general dissemination of relevant information and thus improving health behaviour and health status. Within the present dissertation, the process of seeking health information was therefore divided into sequential stages in order to identify relevant personality traits and skills. Accordingly, three studies are presented, each focusing on one stage of the process and empirically testing potentially crucial traits and skills: Study I investigates possible determinants of the intention to search comprehensively for health information. Forming an intention is considered the basic step of the search process.
Motivational dispositions and self-regulatory skills were related to each other in a structural equation model and tested empirically on the basis of theoretical considerations. The model showed an overall good fit, and specific direct and indirect effects of approach and avoidance motivation on the intention to seek comprehensively were found, supporting the theoretical assumptions. The results show that, as early as the formation of the intention, the psychological perspective reveals influential personality traits and skills. Study II deals with the subsequent step, the selection of information sources. The preference for basic characteristics of information sources (i.e., accessibility, expertise, and interaction) is related to health information literacy, as a collective term for relevant skills, and to intelligence as a personality trait. Furthermore, the study considers the influence of possible over- or underestimation of these characteristics. The results show not only different predictive contributions of health literacy and intelligence, but also the relevance of both subjective and objective measurement.
Finally, Study III deals with the selection and evaluation of the health information previously found. The phenomenon of selective exposure is analysed, as it can be considered problematic in the health context. For this purpose, an experimental design was implemented in which a varying health threat was suggested to the participants. Relevant information was presented and the selective choice of this information was assessed. Health literacy was tested as a moderator of the effect of the induced threat and perceived vulnerability, which trigger defence motives, on the degree of bias. The findings show the importance of considering defence motives, which can cause a bias in the form of selective exposure. Furthermore, health literacy even seems to amplify this effect.
Results of the three studies are synthesized and discussed, general conclusions are drawn, and implications for further research are determined.
The presented research aims at providing a first empirical investigation of lexical structure in Chinese with appropriate quantitative methods. The research objects comprise individual properties of words (part of speech, polyfunctionality, polysemy, word length), the relationships between these properties (part of speech and polyfunctionality, polyfunctionality and polysemy, polysemy and word length), and the lexical structure composed of these properties. Some extant hypotheses in quantitative linguistics (QL), such as the distribution of polysemy and the relationship between word length and polysemy, are tested on Chinese data, which extends the applicability of these laws to a language not yet tested. Several original hypotheses, such as the distribution of polyfunctionality and the relationship between polyfunctionality and polysemy, are set up and inspected.
A big challenge for agriculture in the 21st century is providing food security for a fast-growing world population, which demands not only the efficient utilisation of the available agricultural resources but also new advances in the mass production of food crops. Wheat is the third largest food crop of the world, and Pakistan is the eighth largest wheat-producing country globally. Rice is the second most important staple food of Pakistan after wheat, grown in all provinces of the country. Maize is the world's top-ranking food crop, followed by wheat and rice. The harvested products have to be stored in different types of storage structures on a small or large scale for food as well as seed purposes. In Pakistan, the harvested grains are stored for the whole year until the introduction of fresh produce in order to ensure a regular food supply throughout the year. However, it is this extended storage period that makes the commodity more vulnerable to insect attacks. Rhyzopertha dominica (Coleoptera: Bostrychidae), Cryptolestes ferrugineus (Coleoptera: Laemophloeidae), Tribolium castaneum (Coleoptera: Tenebrionidae) and Liposcelis spp. (Psocoptera: Liposcelididae) are among the major and most damaging insect pests of stored products all around the world. Various management strategies have been adopted against stored grain insect pests, mostly relying on a broad spectrum of insecticides, but the injudicious use of these chemicals has raised various environmental and human health related issues, which necessitates the safe use of the prevailing control measures and the evaluation of new and alternative control methods. The application of new chemical insecticides, microbial insecticides (particularly entomopathogenic fungi) and inert dusts (diatomaceous earths) is considered among the potential alternatives to commonly used insecticides in stored grain insect management.
In the current investigations, laboratory bioassays conducted to evaluate the effects of combining Imidacloprid (new chemistry insecticide) with and without Protect-It (diatomaceous earth formulation) against R. dominica, L. paeta, C. ferrugineus and T. castaneum, on three different grain commodities (i.e. wheat, maize and rice) revealed differences in adult mortality levels among grains and insect species tested. Individually, Imidacloprid was more effective as compared with Protect-It alone and the highest numbers of dead adults were recorded in wheat. The insecticidal efficacy of B. bassiana with Protect-It and DEBBM was also assessed against all test insect species under laboratory conditions. The findings of these studies revealed that the more extended exposure period and the higher combined application rate of B. bassiana and DEs provided the highest mortality of the test insect species. The progeny emergence of each insect species was also greatly suppressed where the highest dose rates of the combined treatments were applied. The residual efficacy of all three control measures Imidacloprid, B. bassiana and DEBBM formulation was also evaluated against all test insect species. The bioassays were carried out after grain treatments and monthly for 6 months. The results indicated that the adult mortality of each test insect species was decreased within the six month storage period, and the integarted application of the test grain protectants enhanced the mortality rates than their alone treatments. The maximum mortality was noted in the combined treatment of DEBBM with Imidacloprid. At the end, the effectiveness of B. bassiana, DEBBM and Imidacloprid applied alone as well as in combinations, against all above mentioned test insect species was also evaluated under field conditions in trials conducted in four districts of Punjab, Pakistan. 
For each district, a significant difference was observed between treatments, with the combined treatments giving better control of the test species than the individual treatments. The lowest number of surviving adults and the lowest percentage of grain damage were observed for the DEBBM and Imidacloprid combination, but DEBBM with B. bassiana provided the best long-term protection compared with the remaining treatments.
Sample surveys are a widely used and cost-effective tool to gain information about a population under consideration. Nowadays, there is an increasing demand for information not only at the population level but also at the level of subpopulations. For some of these subpopulations of interest, however, very small subsample sizes might occur, such that the application of traditional estimation methods is not expedient. In order to provide reliable information also for those so-called small areas, small area estimation (SAE) methods combine auxiliary information and the sample data via a statistical model.
The present thesis deals, among other aspects, with the development of highly flexible and close-to-reality small area models. For this purpose, the penalized spline method is suitably modified, which allows the model parameters to be determined via the solution of an unconstrained optimization problem. Due to this optimization framework, the incorporation of shape constraints into the modeling process is achieved via additional linear inequality constraints on the optimization problem. This results in small area estimators that allow for both the utilization of the penalized spline method as a highly flexible modeling technique and the incorporation of arbitrary shape constraints on the underlying P-spline function.
In order to incorporate multiple covariates, a tensor product approach is employed to extend the penalized spline method to multiple input variables. This leads to high-dimensional optimization problems for which naive solution algorithms yield an unjustifiable complexity in terms of runtime and memory requirements. By exploiting the underlying tensor structure, the present thesis provides computationally efficient solution algorithms for the considered optimization problems and the related memory-efficient, i.e. matrix-free, implementations. The crucial point thereby is the (repeated) application of a matrix-free conjugate gradient method, whose runtime is drastically reduced by a matrix-free multigrid preconditioner.
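The matrix-free idea can be sketched in a few lines: the conjugate gradient method only needs the action v ↦ Av, never the entries of A. The sketch below (an illustration under simplified assumptions, not the thesis's implementation) applies CG to a one-dimensional Laplacian stencil, with a simple Jacobi preconditioner standing in for the multigrid preconditioner described above:

```python
import numpy as np

def cg_matrix_free(apply_A, b, M_inv=None, tol=1e-10, maxit=500):
    """Preconditioned CG that only needs the action v -> A v.

    apply_A : callable returning the product A @ v without forming A
    M_inv   : optional preconditioner action v -> M^{-1} v
    """
    if M_inv is None:
        M_inv = lambda v: v
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Laplacian stencil [-1, 2, -1] applied without storing the matrix
def apply_laplacian(v):
    w = 2.0 * v
    w[1:] -= v[:-1]
    w[:-1] -= v[1:]
    return w

n = 50
b = np.ones(n)
# Jacobi preconditioner: the stencil's diagonal is constant 2
x = cg_matrix_free(apply_laplacian, b, M_inv=lambda v: v / 2.0)
```

Only matrix-vector products and a few vectors of storage are required, which is what makes large tensor-product problems tractable in memory.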
Optimal control problems are optimization problems governed by ordinary or partial differential equations (PDEs). A general formulation is given by \min_{(y,u)} J(y,u) subject to e(y,u)=0, assuming that e_y^{-1} exists; it consists of three main elements: 1. The cost functional J that models the purpose of the control on the system. 2. The definition of a control function u that represents the influence of the environment on the system. 3. The set of differential equations e(y,u)=0 modeling the controlled system, represented by the state function y:=y(u), which depends on u. These kinds of problems are well investigated and arise in many fields of application, for example robot control, control of biological processes, test drive simulation, and shape and topology optimization. In this thesis, an academic model problem of the form \min_{(y,u)} J(y,u):=\min_{(y,u)}\frac{1}{2}\|y-y_d\|^2_{L^2(\Omega)}+\frac{\alpha}{2}\|u\|^2_{L^2(\Omega)} subject to -\mathrm{div}(A\nabla y)+cy=f+u in \Omega, y=0 on \partial\Omega and u\in U_{ad} is considered. The objective is of tracking type with a given target function y_d and a regularization term with parameter \alpha. The control function u acts on the whole domain \Omega. The underlying partial differential equation is assumed to be uniformly elliptic. This problem belongs to the class of linear-quadratic elliptic control problems with distributed control. The existence and uniqueness of an optimal solution for problems of this type is well known. In a first step, following the paradigm 'first optimize, then discretize', the necessary and sufficient optimality conditions are derived by means of the adjoint equation, which results in a characterization of the optimal solution in the form of an optimality system. In a second step, the occurring differential operators are approximated by finite differences, and the resulting discretized optimality system is solved with a collective smoothing multigrid method (CSMG). 
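For this linear-quadratic problem class, the optimality system obtained from the adjoint approach has a standard form (a sketch; A is assumed symmetric here so the adjoint operator coincides with the state operator, and the thesis's exact system may differ in details):

```latex
% State equation
-\mathrm{div}(A\nabla y) + c\,y = f + u \ \text{in } \Omega, \qquad y = 0 \ \text{on } \partial\Omega,
% Adjoint equation, driven by the tracking error
-\mathrm{div}(A\nabla p) + c\,p = y - y_d \ \text{in } \Omega, \qquad p = 0 \ \text{on } \partial\Omega,
% Optimality condition: pointwise projection onto the admissible set
u = P_{U_{ad}}\!\Bigl(-\tfrac{1}{\alpha}\,p\Bigr).
```

The three equations are fully coupled, which is why a collective smoother that updates (y, p, u) simultaneously at each grid point is a natural choice.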
In general, there are several optimization methods for solving the optimal control problem: an application of the implicit function theorem leads to so-called black-box approaches, where the PDE-constrained optimization problem is transformed into an unconstrained one and the reduced gradient of the reduced functional is computed via the adjoint approach. Other possibilities are quasi-Newton methods, which approximate the Hessian by a low-rank update based on gradient evaluations, Krylov-Newton methods, and (reduced) SQP methods. The use of multigrid methods for optimization purposes is motivated by their optimal computational complexity, i.e. the number of required computer operations scales linearly with the number of unknowns, and by their rate of convergence, which is independent of the grid size. Originally, multigrid methods are a class of algorithms for solving linear systems arising from the discretization of partial differential equations. The main part of this thesis is devoted to investigating the implementability and efficiency of the CSMG on commodity graphics cards. GPUs (graphics processing units) are designed for highly parallelizable graphics computations and possess many cores of SIMD architecture, which are able to outperform the CPU in terms of computational power and memory bandwidth. Here they are considered as prototypes for prospective multi-core computers with several hundred cores. When using GPUs as stream processors, two major problems arise: data have to be transferred from the CPU main memory to the GPU main memory, which can be quite slow, and the size of the GPU main memory is limited. Furthermore, a remarkable speed-up compared to a CPU is achieved only when the stream processors are used to full capacity. Therefore, new algorithms for the solution of optimal control problems are designed in this thesis. 
To this end, a nonoverlapping domain decomposition method is introduced which allows the computational power of many GPUs or CPUs to be exploited in parallel. This algorithm is based on preliminary work for elliptic problems and is enhanced for the application to optimal control problems. For the domain decomposition into two subdomains, the linear system for the unknowns on the interface is solved with a Schur complement method using a discrete approximation of the Steklov-Poincare operator. For the academic optimal control problem, the arising capacitance matrix can be inverted analytically. On this basis, two different algorithms for nonoverlapping domain decomposition in the case of many subdomains are proposed in this thesis: on the one hand a recursive approach, and on the other hand a simultaneous approach. Numerical tests compare the performance of the CSMG for the one-domain case and of the two approaches for the multi-domain case on a GPU and CPU for different variants.
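The interface solve described above can be illustrated on a generic 2x2 block system (a toy numeric sketch with assumed random data, not the thesis's discretized optimality system): eliminating the interior unknowns yields the Schur complement equation on the interface, after which the subdomain solves decouple and could run in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Block system [[A, B], [B^T, D]] [x1, x2]^T = [b1, b2]^T
# A : interior (subdomain) block, D : interface block, B : coupling.
# Diagonally dominant blocks are chosen so the toy system is well posed.
n1, n2 = 6, 3
A = np.diag(rng.uniform(2.0, 3.0, n1))
B = rng.uniform(-0.1, 0.1, (n1, n2))
D = np.diag(rng.uniform(2.0, 3.0, n2))
b1 = rng.standard_normal(n1)
b2 = rng.standard_normal(n2)

# Schur complement on the interface: S = D - B^T A^{-1} B
A_inv_B = np.linalg.solve(A, B)
S = D - B.T @ A_inv_B

# Solve the (small) interface problem first ...
x2 = np.linalg.solve(S, b2 - B.T @ np.linalg.solve(A, b1))
# ... then recover the interior unknowns (independent per subdomain)
x1 = np.linalg.solve(A, b1 - B @ x2)

# Verify against the monolithic solve
K = np.block([[A, B], [B.T, D]])
x_full = np.linalg.solve(K, np.concatenate([b1, b2]))
```

In the many-subdomain setting the same pattern repeats: only the comparatively small interface system couples the subdomains, which is what the recursive and simultaneous variants exploit.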
THE NONLOCAL NEUMANN PROBLEM
(2023)
Instead of presuming only local interaction, we assume nonlocal interactions. By doing so, mass
at a point in space does not only interact with an arbitrarily small neighborhood surrounding it,
but it can also interact with mass somewhere far, far away. Thus, mass jumping from one point to
another is also a possibility we can consider in our models. So, if we consider a region in space, this
region interacts in a local model at most with its closure, while in a nonlocal model it may interact with the whole space. Therefore, in the formulation of nonlocal boundary value problems
the enforcement of boundary conditions on the topological boundary may not suffice. Furthermore,
choosing the complement as nonlocal boundary may work for Dirichlet boundary conditions, but
in the case of Neumann boundary conditions this may lead to an overfitted model.
In this thesis, we introduce a nonlocal boundary and study the well-posedness of a nonlocal Neumann problem. We present sufficient assumptions which guarantee the existence of a weak solution.
As in a local model our weak formulation is derived from an integration by parts formula. However,
we also study a different weak formulation where the nonlocal boundary conditions are incorporated
into the nonlocal diffusion-convection operator.
After studying the well-posedness of our nonlocal Neumann problem, we consider some applications
of this problem. For example, we take a look at a system of coupled Neumann problems and analyze
the difference between a local coupled Neumann problem and a nonlocal one. Furthermore, we let
our Neumann problem be the state equation of an optimal control problem which we then study. We
also add a time component to our Neumann problem and analyze this nonlocal parabolic evolution
equation.
As mentioned before, in a local model mass at a point in space only interacts with an arbitrarily
small neighborhood surrounding it. We analyze what happens if we consider a family of nonlocal
models where the interaction shrinks so that, in the limit, mass at a point in space only interacts with
an arbitrarily small neighborhood surrounding it.
Nonlocal operators are used in a wide variety of models and applications, since many natural phenomena are driven by nonlocal dynamics. Nonlocal operators are integral operators allowing for interactions between two distinct points in space. The nonlocal models investigated in this thesis involve kernels that are assumed to have a finite range of nonlocal interactions. Kernels of this type are used in nonlocal elasticity and convection-diffusion models, as well as in finance and image analysis. Also within the mathematical theory they arouse great interest, as they are asymptotically related to fractional and classical differential equations.
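As a rough sketch of such an operator (the notation here is assumed for illustration, not taken from the thesis), a nonlocal diffusion operator with interaction horizon \delta can be written as

```latex
(\mathcal{L}u)(x) \;=\; \int_{B_\delta(x)} \bigl(u(x) - u(y)\bigr)\,\gamma(x,y)\,\mathrm{d}y,
\qquad x \in \Omega,
```

where the nonnegative kernel \gamma vanishes for |x-y| > \delta, so the Euclidean ball B_\delta(x) is the interaction set. Suitably rescaling \gamma and letting \delta \to 0 recovers a classical local diffusion operator, which is the asymptotic relation to classical differential equations mentioned above.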
The results in this thesis can be grouped according to the following three aspects: modeling and analysis, discretization and optimization.
Mathematical models demonstrate their true usefulness when put into numerical practice. For computational purposes, it is important that the support of the kernel is clearly determined. Therefore, nonlocal interactions are typically assumed to occur within a Euclidean ball of finite radius. In this thesis we consider more general interaction sets, including norm-induced balls as special cases, and extend established results about well-posedness and asymptotic limits.
The discretization of integral equations is a challenging endeavor. In particular, kernels which are truncated by Euclidean balls require carefully designed quadrature rules for the implementation of efficient finite element codes. In this thesis we investigate the computational benefits of polyhedral interaction sets as well as geometrically approximated interaction sets. In addition, we outline the computational advantages of sufficiently structured problem settings.
Shape optimization methods have proven useful for identifying interfaces in models governed by partial differential equations. Here we consider a class of shape optimization problems constrained by nonlocal equations which involve interface-dependent kernels. We derive the shape derivative associated with the nonlocal system model and solve the problem by established numerical techniques.
For the first time, the German Census 2011 will be conducted via a new method: the register-based census. In contrast to a traditional census, in which all inhabitants are surveyed, the German government will mainly attempt to count individuals using population registers of administrative authorities, such as the municipalities and the Federal Employment Agency. Census data that cannot be collected from the registers, such as information on education, training and occupation, will be collected by an interview-based sample survey. Moreover, the new method reduces citizens' obligations to provide information and helps reduce costs significantly. The use of sample surveys is limited if results with a detailed regional or subject-matter breakdown have to be prepared. Classical estimation methods are sometimes criticized, since estimation is often problematic for small samples. Fortunately, model-based small area estimators serve as an alternative. These methods help to increase the information, and hence the effective sample size. In the German Census 2011 it is possible to embed areas on a map in a geographical context. This may offer additional information, such as neighborhood relations or spatial interactions. Standard small area models, like Fay-Herriot or Battese-Harter-Fuller, do not account for such interactions explicitly. The aim of our work is to extend the classical models by integrating the spatial information explicitly into the model. In addition, the possible gain in efficiency will be analyzed.
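For orientation, the area-level Fay-Herriot model mentioned above combines a sampling model and a linking model; a common spatial extension (a sketch with assumed symbols, not necessarily the specification used in this work) lets the area effects follow a simultaneous autoregression over a neighborhood matrix W:

```latex
\hat\theta_i = \theta_i + e_i, \qquad e_i \sim N(0,\psi_i) \quad \text{(sampling model)},
\theta_i = x_i^{\top}\beta + v_i, \qquad v_i \sim N(0,\sigma_v^2) \quad \text{(linking model)},
v = \rho\, W v + \varepsilon \quad \text{(spatial extension, SAR form)}.
```

Here \hat\theta_i is the direct estimate for area i with known sampling variance \psi_i, and \rho measures the strength of the spatial interaction that the standard model ignores.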
This work studies typical mathematical challenges occurring in the modeling and simulation of manufacturing processes of paper or industrial textiles. In particular, we consider three topics: approximate models for the motion of small inertial particles in an incompressible Newtonian fluid, effective macroscopic approximations for a dilute particle suspension contained in a bounded domain accounting for a non-uniform particle distribution and particle inertia, and possibilities for a reduction of computational cost in the simulations of slender elastic fibers moving in a turbulent fluid flow.
We consider the full particle-fluid interface problem given in terms of the Navier-Stokes equations coupled to momentum equations of a small rigid body. By choosing an appropriate asymptotic scaling for the particle-fluid density ratio and using an asymptotic expansion for the solution components, we derive approximations of the original interface problem. The approximate systems differ according to the chosen scaling of the density ratio in their physical behavior allowing the characterization of different inertial regimes.
We extend the asymptotic approach to the case of many particles suspended in a Newtonian fluid. Under specific assumptions on the combination of particle size and particle number, we derive asymptotic approximations of this system. The approximate systems describe the particle motion, which allows a mean-field approach to be used in order to formulate the continuity equation for the particle probability density function. The coupling of the latter with the approximation of the fluid momentum equation then reveals a macroscopic suspension description which accounts for non-uniform particle distributions in space and for small particle inertia.
A slender fiber in a turbulent air flow can be modeled as a stochastic inextensible one-dimensionally parametrized Kirchhoff beam, i.e., by a stochastic partial differential algebraic equation. Its simulations involve the solution of large non-linear systems of equations by Newton's method. In order to decrease the computational time, we explore different methods for the estimation of the solution. Additionally, we apply smoothing techniques to the Wiener Process in order to regularize the stochastic force driving the fiber, exploring their respective impact on the solution and performance. We also explore the applicability of the Wiener chaos expansion as a solution technique for the simulation of the fiber dynamics.
In this thesis, in order to shed light on the biological function of the membrane-bound Glucocorticoid Receptor (mGR), proteomic changes induced by 15 min of in vivo acute stress and by short in vitro activation of the mGR were analyzed in T-lymphocytes. The numerous overlaps between the two datasets suggest that the mGR mediates physiologically relevant actions and participates in the early stress response, triggering rapid early priming events that pave the way for the slower genomic GC activities. In addition, a new commercially available method with suitable sensitivity to detect the human mGR is reported and the transcriptional origin of this protein investigated. Our results indicate that specific GR transcripts, containing exons 1C and 1D, are associated with the expression of this membrane isoform.
For decades, academics and practitioners have aimed to understand whether and how (economic) events affect firm value. Optimally, these events occur exogenously, i.e. suddenly and unexpectedly, so that an accurate evaluation of the effects on firm value can be conducted. However, recent studies show that even the evaluation of exogenous events is often prone to many challenges that can lead to diverse interpretations, resulting in heated debates. Recently, there have been intense debates in particular on the impact of takeover defenses and of Covid-19 on firm value. The announcements of takeover defenses and the propagation of Covid-19 are exogenous events that occur worldwide and are economically important, but they have been insufficiently examined. By answering open research questions, this dissertation aims to provide a greater understanding of the heterogeneous effects that exogenous events such as the announcements of takeover defenses and the propagation of Covid-19 have on firm value. In addition, this dissertation analyzes the influence of certain firm characteristics on the effects of these two exogenous events and identifies influencing factors that explain contradictory results in the existing literature and can thus reconcile different views.
Today obesity is recognized as a disease. Evidence suggests that obesity often has genetic, environmental, psychological and other causes. Growing evidence points to heredity as a strong determining factor of obesity. The characterization of uncoupling proteins (UCPs) represents a major breakthrough among genetic factors towards understanding the molecular basis of energy expenditure and is therefore likely to have important implications for the cause and treatment of human obesity. UCPs are mitochondrial anion carriers that create a pathway allowing dissipation of the proton electrochemical gradient; when deregulated, they are key risk factors in the development of obesity and other eating disorders. In order to better understand the roles of UCP2 and UCP3, which are considered prime candidate genes involved in the pathogenesis of obesity, this study elucidates: (1) Genomic organization: The human UCP2 (UCP3) gene spans 8.7 kb (7.5 kb) distributed over 8 (7) exons. The three UCP genes may have evolved from a common ancestor or result from gene duplication events. Two mRNA transcripts are generated from the hUCP3 gene; the long and short forms of hUCP3 differ by the presence or absence of 37 amino acid residues at the C-terminus. (2) Mutational analysis revealed a mutation in exon 4 of hUCP2 resulting in the substitution of an alanine by a valine at codon 55, and an insertion polymorphism in exon 8 consisting of a 45 bp repeat located 150 bp downstream of the stop codon in the 3'-UTR. The allele frequencies of both polymorphisms were not significantly elevated in a subgroup of children characterized by low resting metabolic rates (RMR). (3) Promoter analysis showed that the promoter region of hUCP2 lacks a classical TATA or CAAT box. Functional characterization of the hUCP2 promoter showed that minimal promoter activity was observed within 65 bp upstream of the transcriptional start site. 
75 bp further upstream, a strong cis-acting regulatory element was identified which significantly enhanced basal promoter activity. The regulation of human UCP2 gene expression involves complex interactions among positive and negative regulatory elements. The 5'-flanking region of the hUCP3 gene was also characterized; it contains both TATA and CAAT boxes as well as consensus motifs for PPRE, TRE, CRE and the muscle-specific MyoD and MEF2 sites. Functional characterization identified a cis-acting negative regulatory element between -2983 and -982, while the region between -982 and -284 showed greatly increased basal promoter activity, suggesting the presence of a strong enhancer element. Promoter activity was particularly enhanced in the murine skeletal muscle cell line C2C12, reflecting the tissue-selective expression pattern of UCP3.
Cortisol exhibits a typical ultradian and circadian rhythm, and disturbances in its secretory pattern have been described in stress-related pathology. The aim of this thesis was to dissect the underlying structure of cortisol pulsatility and to develop tools to investigate the effects of this pulsatility on immune cell trafficking and on the responsiveness of the neuroendocrine system and GR target genes to stress. Deconvolution modeling was set up as a tool for investigating the pulsatile secretion underlying the ultradian cortisol rhythm. This further allowed us to investigate the role of single cortisol pulses in immune cell trafficking and the role of induced cortisol pulses in the kinetics of GR target gene expression. The development of these three tools should allow future induction and investigation of the significance of single cortisol pulses for health and disease.
The skin is continuously challenged by environmental antigens that may penetrate and elicit skin sensitization, which can develop into allergic contact dermatitis. Medical treatment for allergic contact dermatitis is limited: in fact, only acute symptoms can be treated, and secondary prevention of the disease requires lifelong avoidance of the allergen(s). Therefore, screening the sensitization potential of substances used in commercially available products is indispensable to prevent such diseases. Risk assessment is currently deduced predominantly from data obtained by the murine local lymph node assay, but there is a need to develop methods capable of providing the same information without the use of animals, in view of legislative initiatives such as REACH (registration, evaluation, authorization of chemicals) as well as the 7th Amendment to the Cosmetics Directive (2003/15/EC). Therefore, a number of promising in silico and in vitro approaches are being developed to address this need. In vitro test systems using the response of dendritic cells, which are key players in the elicitation process of contact dermatitis, are established; but although these novel methods for hazard identification might find application in the context of screening, it is not clear whether they are useful for the purposes of risk assessment and risk management to predict allergenic potency. Therefore, it was investigated whether, on the one hand, in vitro generated dendritic cells from primary blood monocytes (MoDC) and, on the other hand, a continuous monocytic cell line, the THP-1 cells, suggested as a dendritic cell surrogate, react to a presumably weak allergen. Ascaridol, predicted as one of the possible causes of tea tree oil contact dermatitis, was studied and its effects in these two in vitro skin sensitization models were explored. 
Thus, the surface expression of CD86, HLADR, CD54 and CD40, which are known activation markers in both in vitro models, was measured via flow cytometry. For MoDC, augmented CD86 and HLADR surface expression in comparison to untreated cells was determined after 24 h exposure to ascaridol. Increased CD54 and CD40 surface expression was found only in some donors. After long-term incubation of 96 h, ascaridol-treated MoDC still up-regulated CD86, and additionally augmented CD40 expression was measured in all studied donors. Enhanced CD54 expression was determined for 50 percent of all investigated donors. Furthermore, CD80, CD83 and CD209 protein expression was up-regulated in MoDC after 96 h of ascaridol incubation. In addition, after 24 h ascaridol-treated MoDC showed an increased capacity to take up antigens, whereas after 96 h this capacity was lost and antigen uptake was reduced in comparison to non-treated MoDC. Moreover, the cytokine release of ascaridol-treated MoDC was measured after 24 h. Tumor necrosis factor (TNF)alpha, interleukin (IL)-1beta and IL-6 secretion was determined in some donors. Furthermore, IL-8 release was clearly increased after 24 h of ascaridol treatment. By the same token, THP-1 cells were analyzed after ascaridol treatment for several activation markers. We found a response pattern similar to that measured in MoDC. Ascaridol induced CD86 as well as CD54 expression after 24 h of incubation. Additionally, the impact of ascaridol on the phosphorylation of p38 mitogen-activated protein kinase, which had been shown by others to be involved in the increased expression of activation markers like CD86, was studied via Western blot analysis. Phosphorylation of p38 was determined after 15 min of ascaridol stimulation. Moreover, augmented CD40 and HLADR surface expression was measured in a dose-dependent manner after 24 h of ascaridol treatment. 
Also, similar to MoDC, enhanced IL-8 secretion after ascaridol stimulation was observed in THP-1 cells. Hence, for the first time it was shown that ascaridol has immuno-modulating effects. The data obtained from both in vitro systems, MoDC and THP-1 cells, identified ascaridol as a sensitizer. Although for both systems significant challenges remain to be overcome for potency assessment, ascaridol is presumed to be a weak sensitizer. Interestingly, ascaridol treatment of THP-1 cells also resulted in increased expression of CD184 and CCR2, two chemokine receptors expressed on monocytes. These data therefore encouraged the exploration of chemokine receptors as tools in skin sensitization prediction. Consequently, the combination of chemical assays with in vitro techniques may provide a useful surrogate for animal testing for skin sensitization. Due to continuously changing environmental conditions, it is necessary to regularly monitor and update the spectrum of sensitizers that elicit contact dermatitis. For this, both of the discussed in vitro test systems will become indispensable tools.
Even though substantial research on Cauchy transforms has been carried out, there are still a lot of open questions. For example, in the case of representation theorems, i.e. the question when a function can be represented as a Cauchy transform, there is 'still no completely satisfactory answer' ([9], p. 84). There are characterizations for measures on the circle as presented in the monograph [7] and for general compactly supported measures on the complex plane as presented in [27]. However, there seems to exist no systematic treatment of the Cauchy transform as an operator on $L_p$ spaces and weighted $L_p$ spaces on the real axis.
This is the point on which this thesis draws, and we are interested in developing several characterizations of the representability of a function by Cauchy transforms of $L_p$ functions. Moreover, we will attack the issue of integrability of Cauchy transforms of functions and measures, a topic which is only partly explored (see [43]). We will develop different approaches involving Fourier transforms and potential theory and investigate sufficient conditions and characterizations.
For our purposes, we shall need some notation and the concept of Hardy spaces, which will be part of the preliminary Chapter 1. Moreover, we introduce Fourier transforms and their complex analogue, namely Fourier-Laplace transforms. This will be of extraordinary use due to the close connection between Cauchy and Fourier(-Laplace) transforms.
In the second chapter we shall begin our research with a discussion of the Cauchy transformation on the classical (unweighted) $L_p$ spaces. We start with the boundary behavior of Cauchy transforms, including an adapted version of the Sokhotski-Plemelj formula. This result will turn out to be helpful for the determination of the image of $L_p(\R)$ under the Cauchy transformation for $p\in(1,\infty).$ The cases $p=1$ and $p=\infty$ play special roles here, which justifies treating them in separate sections. For $p=1$ we will involve the real Hardy space $H_{1}(\R)$, whereas the case $p=\infty$ shall be attacked by an approach incorporating intersections of Hardy spaces and certain subspaces of $L_{\infty}(\R).$
The third chapter prepares the ground for the study of the Cauchy transformation on subspaces of $L_{p}(\R).$ We shall give a short overview of the basic facts about Cauchy transforms of measures and then proceed to Cauchy transforms of functions with support in a closed set $X\subset\R.$ Our goal is to build up the main theory on which we can fall back in the subsequent chapters.
The fourth chapter deals with Cauchy transforms of functions and measures supported by an unbounded interval which is not the entire real axis. For convenience we restrict ourselves to the interval $[0,\infty).$ Bringing the Fourier-Laplace transform into play once again, we deduce complex characterizations of the Cauchy transforms of functions in $L_{2}(0,\infty).$ Moreover, we analyze the behavior of Cauchy transforms on several half-planes and use these results for a fairly general geometric characterization. In the second section of this chapter, we focus on Cauchy transforms of measures with support in $[0,\infty).$ In this context, we derive a reconstruction formula for these Cauchy transforms that holds under fairly general conditions, as well as results on the behavior on the left half-plane. We close this chapter with rather technical real-type conditions and characterizations for Cauchy transforms of functions in $L_p(0,\infty)$ based on an approach in [82].
The most common case of Cauchy transforms, those of compactly supported functions or measures, is the subject of Chapter 5. After complex and geometric characterizations originating from ideas similar to those in the fourth chapter, we adapt a functional-analytic approach from [27] to special measures, namely those with densities with respect to a given complex measure $\mu.$ The chapter closes with a study of the Cauchy transformation on weighted $L_p$ spaces. Here, we choose an ansatz via the finite Hilbert transform on $(-1,1).$
The sixth chapter is devoted to the issue of integrability of Cauchy transforms. Since this topic has not yet received a comprehensive treatment in the literature, we start with an introduction to weighted Bergman spaces and general results on the action of the Cauchy transformation in these spaces. Afterwards, we combine the theory of Zen spaces with Cauchy transforms by using once again their connection with Fourier transforms. Here, we shall encounter general Paley-Wiener theorems of the recent past. Lastly, we attack the issue of integrability of Cauchy transforms by means of potential theory. To this end, we derive a Fourier integral formula for the logarithmic energy in one and multiple dimensions and give applications to Fourier and hence Cauchy transforms.
Two appendices are annexed to this thesis. The first one covers important definitions and results from measure theory with a special focus on complex measures. The second appendix contains Cauchy transforms of frequently used measures and functions with detailed calculations.
Matching problems with additional resource constraints are generalizations of the classical matching problem. The focus of this work is on matching problems with two types of additional resource constraints: The couple constrained matching problem and the level constrained matching problem. The first one is a matching problem which has imposed a set of additional equality constraints. Each constraint demands that for a given pair of edges either both edges are in the matching or none of them is in the matching. The second one is a matching problem which has imposed a single equality constraint. This constraint demands that an exact number of edges in the matching are so-called on-level edges. In a bipartite graph with fixed indices of the nodes, these are the edges with end-nodes that have the same index. As a central result concerning the couple constrained matching problem we prove that this problem is NP-hard, even on bipartite cycle graphs. Concerning the complexity of the level constrained perfect matching problem we show that it is polynomially equivalent to three other combinatorial optimization problems from the literature. For different combinations of fixed and variable parameters of one of these problems, the restricted perfect matching problem, we investigate their effect on the complexity of the problem. Further, the complexity of the assignment problem with an additional equality constraint is investigated. In a central part of this work we bring couple constraints into connection with a level constraint. We introduce the couple and level constrained matching problem with on-level couples, which is a matching problem with a special case of couple constraints together with a level constraint imposed on it. We prove that the decision version of this problem is NP-complete. This shows that the level constraint can be sufficient for making a polynomially solvable problem NP-hard when being imposed on that problem. 
This work also deals with the polyhedral structure of resource constrained matching problems. For the polytope corresponding to the relaxation of the level constrained perfect matching problem we develop a characterization of its non-integral vertices. We prove that for any given non-integral vertex of the polytope a corresponding inequality which separates this vertex from the convex hull of integral points can be found in polynomial time. Regarding the calculation of solutions of resource constrained matching problems, two new algorithms are presented. We develop a polynomial approximation algorithm for the level constrained matching problem on level graphs, which returns solutions whose size is at most one less than the size of an optimal solution. We then describe the Objective Branching Algorithm, a new algorithm for exactly solving the perfect matching problem with an additional equality constraint. The algorithm makes use of the fact that the weighted perfect matching problem without an additional side constraint is polynomially solvable. In the Appendix, experimental results of an implementation of the Objective Branching Algorithm are listed.
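To make the level constraint concrete, here is a minimal Python sketch of checking it for a given matching; the function names are mine, not the thesis's, and edges are written as (left index, right index) pairs:

```python
def on_level_count(matching):
    """Count the on-level edges of a matching: edges (i, j) whose
    end-nodes carry the same index (i == j) in the bipartite graph."""
    return sum(1 for i, j in matching if i == j)

def satisfies_level_constraint(matching, k):
    """The level constraint demands that exactly k edges of the
    matching are on-level edges."""
    return on_level_count(matching) == k

# A perfect matching on a bipartite graph with node indices 0..3 per side;
# (0, 0) and (3, 3) are the on-level edges.
m = [(0, 0), (1, 2), (2, 1), (3, 3)]
```

Verifying a candidate matching is easy; the hardness results above concern finding a maximum (or perfect) matching subject to this constraint.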
This dissertation addresses the question of whether and how intersectionality, as an analytical perspective on literary texts, offers a useful complement to ethnically ordered literary fields. This question is examined through the analysis of three contemporary Chinese-Canadian novels.
The introduction discusses the relevance of the fields of intersectionality and Asian-Canadian literature. The following chapter offers a historical overview of Chinese-Canadian immigration and examines its literary production in detail. It shows that, although cultural goods also arise to articulate relations of inequality based on ascribed ethnicity, a drive towards diversification can be identified within the literary community of Chinese-Canadian authors. The third chapter is devoted to the term "intersectionality" and, after situating the concept historically with its origins in Black feminism, presents intersectionality as a binding element between postcolonialism, diversity and empowerment, concepts that are of particular relevance for the analysis of (Canadian) literature in this dissertation. The role of intersectionality in literary studies is then taken up. The subsequent exemplary analyses of Kim Fu's For Today I Am a Boy, Wayson Choy's The Jade Peony and Yan Li's Lily in the Snow illustrate the preceding methodological considerations. Each of the three novel analyses is preceded by a contextualisation of the respective work as Chinese-Canadian, along with previous considerations that call this classification into question. A summary of the plot is followed by an intersectional analysis at the level of content, divided into the familial and the wider social sphere, since the mechanisms of hierarchy within these spheres differ or reinforce one another, as the analyses show. A formal analysis with an intersectional focus is then examined more closely in a separate subchapter.
A third subchapter is devoted to an aspect specific to each novel that is of particular relevance to an intersectional analysis. The dissertation closes with an overarching conclusion that summarises the most important results of the analysis, together with further reflections on the implications of this dissertation, above all with regard to so-called Canadian "master narratives", which have far-reaching contextual relevance for work with literary texts and which an intersectional literary approach may profitably complement in the future.
We will consider discrete dynamical systems (X,T) which consist of a state space X and a linear operator T acting on X. Given a state x in X at time zero, its state at time n is determined by the n-th iterate T^n(x). We are interested in the long-term behaviour of this system, that is, we want to know how the sequence (T^n(x))_(n in N) behaves for increasing n and x in X. In the first chapter, we will sum up the relevant definitions and results of linear dynamics. In particular, in topological dynamics the notions of hypercyclic, frequently hypercyclic and mixing operators will be presented. In the setting of measurable dynamics, the most important definitions will be those of weakly and strongly mixing operators. If U is an open set in the (extended) complex plane containing 0, we can define the Taylor shift operator on the space H(U) of functions f holomorphic in U by Tf(z) = (f(z) - f(0))/z if z is not equal to 0 and Tf(0) = f'(0) otherwise. In the second chapter, we will start examining the Taylor shift on H(U) endowed with the topology of locally uniform convergence. Depending on the choice of U, we will study whether or not the Taylor shift is weakly or strongly mixing in the Gaussian sense. Next, we will consider Banach spaces of functions holomorphic on the unit disc D. The first section of this chapter will sum up the basic properties of Bergman and Hardy spaces in order to analyse the dynamical behaviour of the Taylor shift on these Banach spaces in the next part. In the third section, we study the space of Cauchy transforms of complex Borel measures on the unit circle, first endowed with the quotient norm of the total variation and then with a weak-* topology. While the Taylor shift is not even hypercyclic in the first case, we show that it is mixing in the latter case.
In Chapter 4, we will first introduce Bergman spaces A^p(U) for general open sets and provide approximation results which will be needed in the next chapter, where we examine the dynamical properties of the Taylor shift on these spaces. In particular, for 1<=p<2 we will find sufficient conditions for the Taylor shift to be weakly mixing or strongly mixing in the Gaussian sense. For p>=2, we consider specific Cauchy transforms in order to determine open sets U such that the Taylor shift is mixing on A^p(U). In both sections, we will illustrate the results with appropriate examples. Finally, we apply our results to universal Taylor series. The results of Chapter 5 about the Taylor shift allow us to consider the behaviour of the partial sums of the Taylor expansion of functions in general Bergman spaces outside the disc of convergence.
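For a function f(z) = a_0 + a_1 z + a_2 z^2 + ..., the Taylor shift acts on the coefficient sequence as a backward shift: Tf has Taylor coefficients a_1, a_2, ..., consistent with Tf(z) = (f(z) - f(0))/z and Tf(0) = f'(0). A minimal sketch on truncated coefficient lists:

```python
def taylor_shift(coeffs):
    """Backward shift on Taylor coefficients [a_0, a_1, a_2, ...]:
    Tf(z) = (f(z) - f(0))/z has coefficients [a_1, a_2, ...],
    and Tf(0) = a_1 = f'(0)."""
    return coeffs[1:]

# f(z) = 1 + 2z + 3z^2  ->  Tf(z) = 2 + 3z, so Tf(0) = f'(0) = 2
Tf = taylor_shift([1, 2, 3])
```

Iterating the shift discards one coefficient per step, which is why the long-term behaviour of (T^n f) is governed by the tail of the Taylor expansion.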
This socio-pragmatic study investigates organisational conflict talk between superiors and subordinates in three medical dramas from China, Germany and the United States. It explores what types of sociolinguistic realities the medical dramas construct by ascribing linguistic behaviour to different status groups. The study adopts an enhanced analytical framework based on John Gumperz’ discourse strategies and Spencer-Oatey’s rapport management theory. This framework detaches directness from politeness, defines directness based on preference and polarity and explains the use of direct and indirect opposition strategies in context.
The findings reveal that the three hospital series draw on 21 opposition strategies which can be categorised into mitigating, intermediate and intensifying strategies. While the status identity of superiors is commonly characterised by a higher frequency of direct strategies than that of subordinates, both status groups manage conflict in a primarily direct manner across all three hospital shows. The high percentage of direct conflict management is related to the medical context, which is characterised by a focus on transactional goals, complex role obligations and potentially severe consequences of medical mistakes and delays. While the results reveal unexpected similarities between the three series with regard to the linguistic directness level, cross-cultural differences between the Chinese and the two Western series are obvious from particular sociopragmatic conventions. These conventions particularly include the use of humour, imperatives, vulgar language and incorporated verbal and para-verbal/multimodal opposition. Noteworthy differences also appear in the underlying patterns of strategy use. They show that the Chinese series promotes a greater tolerance of hierarchical structures and a partially closer social distance in asymmetrical professional relationships. These disparities are related to different perceptions of power distance, role relationships, face and harmony.
The findings challenge existing stereotypes of Chinese, US American and German conflict management styles and emphasise the context-specific nature of verbal conflict management in every culture. Although cinematic aspects affect the conflict management in the fictional data, the results largely comply with recent research on conflict talk in real-life workplaces. As such, the study contributes to intercultural training in medical contexts and provides an enhanced analytical framework for further cross-cultural studies on linguistic strategies.
The main aim of "Her Idoll Selfe"? Shaping Identity in Early Modern Women's Self-Writings is to offer fresh readings of as yet little-read early modern women's texts. I look at a variety of texts that are either explicitly concerned with the constitution of the writer's self, such as the autobiographies by Lady Grace Mildmay and Martha Moulsworth, or in which the preoccupation with the self is of a more indirect nature, as in the mothers' advice books by Elizabeth Grymeston, Dorothy Leigh, Elizabeth Richardson or the anonymous M. R., or even in women's poetry, drama and religious verse. I situate the texts in the context of early modern discourses of femininity and subjectivity to pursue the question of to what extent it was possible for early modern women to achieve a sense of agency in spite of their culturally marginal position. In that, my readings aim to contribute to the ongoing critical process of decentring the early modern period. At the same time, I draw on contemporary theory as a methodological tool that can open up further dimensions of the texts, especially in places where the texts provide clues and parallels that lend themselves to a theoretical approach. Conversely, the texts themselves shed interesting light on feminist and poststructuralist theory and can serve as testing grounds for the current critical fascination with fragmentation and hybridity. Having outlined the theoretical and methodological framework of my study, I then analyse the women's writings with reference to a matrix of paradigmatic dimensions that encompass their most prominently recurring themes: the notion of writing the self, relationships between self and other, demarcations of private and public, the women's notorious preoccupation with self-loss and death, as well as the recurrent theme of the "golden meane". I suggest that this motif can provide the vital cue to early modern women's constitution of self.
The idea of a precarious "golden meane" links in with parallel discourses of moderation and balance at the time, but reinterprets them in a manner that can present a workable and innovative paradigm of subjectivity. Instead of subscribing to a model of decentred selfhood, early modern women's presentations of self suggest that a concluding but contested compromise is a workable strategy for achieving a form of selfhood that can responsibly be lived with.
This thesis contributes to the economic literature on India and specifically focuses on investment project (IP) location choice. I study three topics that naturally arise in sequence: geographic concentration of investment projects, the determinants of the location choices, and the impact these choices have on project success.
In Chapter 2, I provide an analysis of the geographic concentration of IPs. I find that investments were concentrated over the period of observation (1996–2015), although the degree of concentration was decreasing. Additionally, I analyze different subsamples of the data set by ownership (Indian private, Indian public and foreign) and project status (completed or dropped). Foreign projects in all industries are more concentrated than Indian private and public ones, while for the latter two categories I identify only minor differences in concentration levels. Additionally, I find that the location patterns of completed and dropped investments are similar to the overall distribution and to the distributions of their respective industries, with completed IPs being somewhat more concentrated.
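The chapter's exact concentration measure is not specified in this summary; a standard Herfindahl-style index over district shares illustrates the type of computation involved (the district counts below are made up for illustration, not taken from the data set):

```python
def herfindahl(counts):
    """Herfindahl index of geographic concentration: the sum of squared
    location shares. It equals 1/n for investments spread uniformly over
    n districts and 1.0 if all investments fall in a single district."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

# Investment-project counts per district (illustrative numbers):
dispersed = [25, 25, 25, 25]      # uniform across four districts
concentrated = [100, 0, 0, 0]     # everything in one district
```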
In Chapter 3, I study the determinants of project location choices with the focus on an important highway upgrade, the Golden Quadrilateral (GQ). In line with the existing literature, the GQ construction is connected to higher levels of investment in the affected non-nodal GQ districts in 2002–2016. I also provide suggestive evidence on changes in firm behavior after the GQ construction: Firms located in the non-nodal GQ districts became less likely to invest in their neighbor districts after the GQ completion compared to firms located in districts unaffected by the GQ construction.
Finally, in Chapter 4, I investigate the characteristics of IPs that may contribute to discontinuation of their implementation by comparing completed investments to dropped ones, defined as abandoned, shelved, and stalled investments as identified on the date of the data download. Controlling for local and business cycle conditions, as well as various investor and project characteristics, I show that projects located in close proximity to the investor offices (i.e., in the same district) are more likely to achieve the completion stage than more remote projects.
Cortisol is a stress hormone that acts on the central nervous system in order to support adaptation and time-adjusted coping processes. Whereas previous research has focused on slow emerging, genomic effects of cortisol likely mediated by protein synthesis, there is only limited knowledge about rapid, non-genomic cortisol effects on in vivo neuronal cell activity in humans. Three independent placebo-controlled studies in healthy men were conducted to test effects of 4 mg cortisol on central nervous system activity, occurring within 15 minutes after intravenous administration. Two of the studies (N = 26; N = 9) used continuous arterial spin labeling as a magnetic resonance imaging sequence, and found rapid bilateral thalamic perfusion decrements. The third study (N = 14) revealed rapid cortisol-induced changes in global signal strength and map complexity of the electroencephalogram. The observed changes in neuronal functioning suggest that cortisol may act on the thalamic relay of non-relevant background as well as on task specific sensory information in order to facilitate the adaptation to stress challenges. In conclusion, these results are the first to coherently suggest that a physiologically plausible amount of cortisol profoundly affects functioning and perfusion of the human CNS in vivo by a rapid, non-genomic mechanism.
Design and structural optimization has become a very important field in industrial applications over the last years. For economic and ecological reasons, the efficient use of material is of high industrial interest. Therefore, computational tools based on optimization theory have been developed and studied over the last decades. In this work, different structural optimization methods are considered. Special attention lies on the applicability to three-dimensional, large-scale, multiphysics problems arising from different areas of industry. Based on the theory of PDE-constrained optimization, descent methods in structural optimization require knowledge of the (partial) derivatives with respect to shape or topology variations. Therefore, shape and topology sensitivity analysis is introduced, and the connection between both sensitivities is given by the Topological-Shape Sensitivity Method. This method leads to a systematic procedure for computing the topological derivative in terms of the shape sensitivity. Because boundaries move during structural optimization, different interface tracking techniques are presented. If the topology of the domain is preserved during the optimization process, explicit interface tracking techniques, combined with mesh deformation, are used to capture the interface. These techniques fit the requirements of classical shape optimization very well. Otherwise, if the optimal topology is unknown, an implicit representation of the interface is advantageous. In this case, the level set method is combined with the concept of the topological derivative to deal with topological perturbations. The resulting methods are applied to different industrial problems. On the one hand, interface shape optimization is considered for solid bodies subject to a transient heat-up phase governed by both linear elasticity and thermal stresses.
Therefore, the shape calculus is applied to coupled heat and elasticity problems, and a generalized compliance objective function is studied. The resulting thermo-elastic shape optimization scheme is used for compliance reduction of realistic hotplates. On the other hand, structural optimization based on the topological derivative is investigated for three-dimensional elasticity problems. In order to comply with typical volume constraints, a one-shot augmented Lagrangian method is proposed. Additionally, a multiphase optimization approach based on mesh refinement is used to reduce the computational costs, and the method is illustrated on classical minimum compliance problems. Finally, the topology optimization algorithm is applied to aero-elastic problems and numerical results are presented.
Hydrodynamic processes play a fundamental role in the distribution of salt within mangrove-fringed estuaries and mangrove forests. In this thesis, two hydrodynamic processes and their ecological implications were examined. (1) Passive Irrigation and Functional Morphology of Crustacean Burrows in Rhizophora Forests. The mangrove Rhizophora excludes more than 90% of the seawater salt at water intake at the roots. By means of conductivity methods and resin casting, it was found that crustacean burrows play a key role in the removal of excess salt from the root zone. Salt diffuses from the roots into the burrows and is efficiently flushed from the burrows by rainwater infiltration and tidal irrigation. The burrows contribute significantly to favourable conditions for the growth of Rhizophora trees. (2) Trapping of Mangrove Propagules due to Density-driven Secondary Circulation in Tropical Estuaries. In North East Australian estuaries, mangrove propagules are carried upstream by density-driven axial surface convergences. Propagules accumulate in hydrodynamic traps upstream of suitable habitat, where they remain trapped at least for the entire tropical dry season. Axial convergences may provide an efficient barrier to propagule exchange across estuaries. In such estuaries, mangrove populations can be regarded as floristically isolated, not unlike island communities, even though the populations lie on a continuous coastline. This effect may contribute to the disjunct distribution observed in some mangrove species. The outcomes of this work contribute to the understanding of the importance of salt as a growth- and habitat-restricting factor in the mangrove environment.
During the last decade, anatomic and physiological neuroscience research has yielded extensive information on the physiological regulators of short-term satiety, visceral and interoceptive sensation. Distinct neural circuits regulate the elements of food ingestion physiologically. The general aim of the current studies is to elucidate the peripheral neural pathways to the brain in healthy subjects to establish the groundwork for the study of the pathophysiology of bulimia nervosa (BN). We aimed to define the central activation pattern during non-nutritive gastric distension in humans, and aimed to define the cognitive responses to this mechanical gastric distension. We estimated regional cerebral blood flow with 15O-water positron emission tomography during intragastric balloon inflation and deflation in 18 healthy young women of normal weight. The contrast between inflated minus deflated in the exploratory analysis revealed activation in more than 20 brain regions. The analysis confirmed several well known areas in the central nervous system that contribute to visceral processing: the inferior frontal cortex, representing a zone of convergence for food related stimuli; the insula and operculum referred to as "visceral cortex"; the anterior cingulate gyrus (and insula), processing affective information; and the brainstem, a site of vagal relay for visceral afferent stimuli. Brain activation in the left ventrolateral prefrontal cortex was reproducible. This area is well known for higher cognitive processing, especially reward-related stimuli. The ventrolateral prefrontal cortex with the insular regions may provide a link between the affective and rewarding components of eating and disordered eating as observed in BN and binge-eating obesity. 
Gastric distension caused a significant rapid, reversible, and reproducible increase in the feelings of fullness, sleepiness, and gastric discomfort as well as a significant rapid, reversible, and reproducible decrease in the feeling of hunger. We showed that mechanical activation of the neurocircuitry involved in meal termination led to the described phenomena. The current brain activation studies of non-painful, proximal gastric distension could provide groundwork in the field of abnormal eating behavior by suggesting a link between visceral sensation and abnormal eating patterns. A potential treatment for disordered eating and obesity could alter the conscious and unconscious perception and interoceptive awareness of gastric distension contributing to meal termination.
The main research question of this thesis was how to set up a framework that allows for the identification of land use changes in drylands and reveals their underlying drivers. The concept of describing land cover change processes in a framework of global change syndromes was introduced by Schellnhuber et al. (1997). In a first step, the syndrome approach was implemented for semi-natural areas of the Iberian Peninsula based on time series analysis of the MEDOKADS archive. In a subsequent study the approach was expanded and adapted to other land cover strata. Furthermore, results of an analysis of the relationship between annual NDVI and rainfall data were incorporated to designate areas showing a significant relationship, indicating that at least part of the variability found in the NDVI time series was caused by precipitation. Additionally, a first step was taken towards the integration of socio-economic data into the analysis; population density changes between 1961 and 2008 were utilized to support the identification of processes related to land abandonment accompanied by the cessation of agricultural practices on the one hand and urbanization on the other. The main findings of the studies comprise three major land cover change processes caused by human interaction: (i) shrub and woody vegetation encroachment in the wake of land abandonment of marginal areas, (ii) intensification of non-irrigated and irrigated, intensively used fertile regions and (iii) urbanization trends along the coastline caused by migration and the increase of mass tourism. The abandonment of cultivated fields and grazing areas in marginal mountainous areas often leads to the encroachment of shrubs and woody vegetation in the course of succession or reforestation.
Whereas this cover change has positive effects concerning soil stabilization and carbon sequestration, the increase in biomass also involves negative consequences for ecosystem goods and services; these include decreased water yield as a result of increased evapotranspiration, increasing fire risk, decreasing biodiversity due to landscape homogenization, and loss of aesthetic value. Arable land in intensively used fertile zones of Spain was further intensified, including the expansion of irrigated arable land. The intensification of agriculture has also generated land abandonment in these areas because fewer people are needed in the agricultural labour sector due to mechanization. Urbanization effects due to migration and the growth of the tourism sector were mapped along the eastern Mediterranean coast. Urban sprawl was only partly detectable by means of the MEDOKADS archive, as the changes of urbanization are often too subtle to be detected by data with a spatial resolution of 1 km². This is in line with a comparison of a Landsat TM time series and the NOAA AVHRR archive for a study area in Greece, which showed that small-scale changes cannot be detected with this approach, even though they might be of high relevance for the local management of resources. This underlines the fact that land degradation processes are multi-scale problems and that data at several spatial and temporal scales are mandatory to build a comprehensive dryland observation system. Further land cover processes related to a decrease in greenness did not play an important role in the observation period. Thus, only few patches were identified, suggesting that no large-scale land degradation processes are taking place in the sense of a decline of primary productivity after disturbances.
Nevertheless, the land cover processes detected impact ecosystem functioning and, as the example of shrub encroachment shows, bear risks for the provision of goods and services, which can be regarded as land degradation in the sense of a decline of important ecosystem goods and services. This risk is not confined to the affected ecosystem itself but can also impact adjacent ecosystems due to inter-linkages. In drylands, water availability is of major importance and the management of water resources is an important political issue. In view of climate change this topic will become even more important, because aridity in Spain has increased over recent decades and is likely to increase further. In addition, the land cover changes detected by the syndrome approach could even augment water scarcity problems. Whereas the water yield of marginal areas, which often serve as headwaters of rivers, decreases with increasing biomass, the water demand of agriculture and tourism is not expected to decline. In this context it will be of major importance to evaluate the trade-offs between different land uses and to take decisions that maintain the future functioning of the ecosystems for human well-being.
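The NDVI-rainfall screening mentioned above boils down to correlating two annual series per pixel; a minimal sketch (the numbers are illustrative, and the thesis may use a different significance test):

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length annual series,
    e.g. yearly NDVI and yearly rainfall for one pixel."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Pixels whose NDVI closely tracks rainfall are flagged, so that
# rainfall-driven variability is not mistaken for land cover change.
ndvi = [0.30, 0.42, 0.25, 0.50, 0.38]
rain = [310, 480, 250, 560, 400]   # mm/year, illustrative
```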
The human brain is characterised by two apparently symmetrical cerebral hemispheres. However, the functions attributed to each half of the brain are very distinct with a relative specialisation of the left hemisphere for language processing. Most laterality research has been performed on a behavioural level, using techniques such as visual half-field presentation. The visual half-field technique involves the presentation of stimuli in the left or right visual field for a very short time (about 200 ms). During the presentation of lateralized stimuli, the gaze of the participants is fixated on a centrally presented fixation cross. This technique takes advantage of the anatomy of the visual pathway as the temporal hemiretinae project ipsilateral, while the nasal hemiretinae project contralateral. Thus, stimuli presented in the left or right visual field are initially processed in the contralateral hemisphere. Language organisation can also be directly investigated using functional magnetic resonance imaging (fMRI). Both behavioural and neuroimaging studies showed that about 95% of right-handed men have a left hemispheric specialisation for language. In contrast, data on language organisation in women are ambiguous. It is supposed that this ambivalent picture might be associated with changes in gonadal steroid levels in blood during the menstrual cycle. However, gonadal steroid effects are complex and their role in functional cerebral lateralization is still open to discussion. The aim of this PhD project was to investigate, using fMRI: (1) the processing of linguistic information initially received in the specialised, non-specialised or both hemispheres; (2) linking the associated brain activation pattern with progesterone levels during the menstrual cycle. 
Firstly, brain activation was measured in 16 right-handed, healthy males during the processing of different components of language (orthography, phonology and semantics) after reception in the left, right or both hemispheres. Secondly, to investigate changes in language organisation during the menstrual cycle, we conducted an event-related fMRI study during semantic and phonological processing, also using visual half-field and central presentation of linguistic stimuli. Our results revealed a higher BOLD signal intensity change in the visual cortex contralateral to the visual field of stimulus presentation compared to the ipsilateral visual cortex, reflecting the crossing of the visual pathways. We also found support for the hypothesis that the superiority of word recognition in the left VWFA is the result of a reduced activity in the right VWFA under left hemispheric control. Further, linguistic information received in the subdominant right hemisphere is transferred interhemispherically to the left hemisphere for phonological processing. Semantic processing, in contrast, occurs in both the specialised and the non-specialised hemisphere. For the group of women, data analysis revealed that during semantic processing, salivary progesterone levels correlated positively with brain activity in the left superior frontal gyrus, the left middle and inferior occipital gyri and the bilateral fusiform gyrus. In contrast, the brain activation pattern for phonological processing did not change significantly across the menstrual cycle. In conclusion, the effect of progesterone levels on brain activity is task- and region-specific.
Today, the use of complex circuit designs in computers, multimedia applications and communication devices is widespread and still increasing. At the same time, due to Moore's Law we do not expect to see an end to the growth in the complexity of digital circuits. The decreasing ability of common validation techniques, such as simulation, to assure the correctness of a circuit design increases the need for formal verification techniques. Formal verification delivers a mathematical proof that a given implementation of a design fulfills its specification. One of the basic data structures in formal verification, widely used in recent years, is the Ordered Binary Decision Diagram (OBDD), introduced by R. Bryant in 1986. The topic of this thesis is the integration of structural high-level information into OBDD-based formal verification of sequential systems. This work consists of three major parts, covering different layers of formal verification applications: At the application layer, an assertion checking methodology integrated into the verification flow of the high-level design and verification tool Protocol Compiler is presented. At the algorithmic layer, new approaches for partitioning the transition relations of complex finite state machines are introduced, which significantly improve the performance of OBDD-based sequential verification. Finally, at the data structure level, dynamic variable reordering techniques are described that drastically reduce the time required for reordering without a trade-off in OBDD size. Overall, this work demonstrates how a tighter integration of applications by using structural information can significantly improve the efficiency of formal verification applications in an industrial setting.
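As a flavour of how OBDDs work, a reduced OBDD can be built by Shannon expansion with a unique table that shares identical nodes. The sketch below is a textbook construction (exponential without a computed table), not the thesis's implementation:

```python
def build_bdd(f, order, unique=None, env=None):
    """Build a reduced OBDD for the boolean function f over the fixed
    variable order `order` by Shannon expansion.  A node is a tuple
    (var, low, high); terminals are False/True.  The unique table
    implements the OBDD reduction rules: tests with identical
    cofactors are skipped, and isomorphic nodes are shared."""
    unique = {} if unique is None else unique
    env = {} if env is None else env
    if not order:
        return f(env)                        # terminal: True or False
    v, rest = order[0], order[1:]
    low = build_bdd(f, rest, unique, {**env, v: False})
    high = build_bdd(f, rest, unique, {**env, v: True})
    if low == high:                          # redundant-test elimination
        return low
    return unique.setdefault((v, low, high), (v, low, high))

def evaluate(node, env):
    """Follow low/high edges according to the assignment env."""
    while isinstance(node, tuple):
        v, low, high = node
        node = high if env[v] else low
    return node

# Three-variable parity under the order x1 < x2 < x3:
parity = lambda e: e["x1"] ^ e["x2"] ^ e["x3"]
root = build_bdd(parity, ["x1", "x2", "x3"])
```

For a fixed variable order the reduced OBDD is canonical, which is what makes equivalence checking between an implementation and its specification a structural comparison.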
This thesis presents a study of tsunami deposits created by the 2004 Indian Ocean tsunami at the Thai Andaman coast. The outcomes of the study are the characteristics of the tsunami deposit for a paleo-tsunami database, the identification of major sediment layers in the tsunami deposit, and the reconstruction of tsunami run-ups from the deposit characteristics for a coastal development program. The investigations of the tsunami deposit were made almost 3 years after the event. Field investigations characterize the tsunami deposit as a distinct sediment layer of gray sand of variable thickness, deposited with an erosional base on a pre-existing soil. The best location for the observation of the recent tsunami deposit is the area located about 50-200 m inland from the coastline. In most cases, the deposit layer is normally graded. In some cases, the deposit contains rip-up clasts of muddy soils and/or organic matter. The tsunami deposits are compared with three deposits from coastal sub-environments. The mean grain-size and standard deviation of the deposits show that the shoreface deposits are fine to very fine sand, poorly to moderately well sorted; the swash zone deposits are coarse to fine sand, poorly to well sorted; the berm/dune deposits are medium to fine sand, poorly to well sorted; and the tsunami deposits are coarse to very fine sand, poorly to moderately well sorted. Plots of deposit mean grain-size versus sorting indicate that the tsunami deposits are composed of shoreface deposits, swash zone deposits and berm/dune deposits alike. The vertical variation of the texture of the tsunami deposit shows that the mean grain-size fines upward and landward. The analysis and interpretation of the run-up numbers from the characteristics of the tsunami deposits yield three run-ups for the 2004 Indian Ocean tsunami at the Thai Andaman coast. This corresponds to field observations from eye-witness reports and the affirmations of local people.
The total deposition is a major transportation pattern of onshore tsunami sediments. The sediments must fine in the direction of transport. In general, the major origins of the sediment are the swash zone and the berm/dune zone, where coarse to medium sand is the significant material; the minor origin of tsunami sediment is the shoreface, where the significant material is fine to very fine sand. Only in areas with a flat-sloped shoreface is the shoreface the major origin of tsunami sediment. The thicknesses, mean grain-sizes, and standard deviations of the tsunami deposits are used to evaluate the influences of coastal morphology on the sediment characteristics. The evaluations show that the tsunami-affected areas were attacked by waves of variable energy. Wave energies at areas affected directly by the tsunami waves are higher than at indirectly affected areas. Tsunami wave energy is highly dissipated at areas with a steep-sloped shoreface. In the same way, tsunami run-up energy is highly dissipated at areas with a steep onshore slope. A channel parallel to the coastline decreases the run-up velocity and slightly dissipates run-up energy. Roads and ponds highly influence the characteristics of tsunami deposit and tsunami run-up: a road obstructs the run-up velocity and dissipates run-up energy; a pond decreases the run-up velocity and dissipates run-up energy. The characteristics of the tsunami deposit can be interpreted to reconstruct the characteristics of the tsunami run-up, such as the run-up height and the flow velocity. The model of Soulsby et al. (2007) is applied to reconstruct the tsunami run-up at the study areas. The input parameters are the sediment grain-size and the sediment inundation distance. Ao Kheuy beach and Khuk Khak beach, Phang Nga province, Thailand, are the areas selected for reconstructing the tsunami run-up. The evaluated run-up heights are 4.2-4.9 m at Ao Kheuy beach and 5.4-9.4 m at Khuk Khak beach.
The evaluated run-up velocities are 12.8-19.2 m/s (maximum) and 0.2-1.9 m/s (mean) at the coastline and onshore, respectively. Hence, a reasonably good agreement between the evaluated and observed run-up is found. Tsunami run-up height and velocity can be used for coastal development and risk management in the tsunami-affected areas. The case studies from the Thai Andaman coast suggest that the area from the coastline to about 70-140 m inland was flooded by high-velocity (high-energy) run-ups, and that those run-up energies were dissipated there. That area ought to be a non-residential area or a physical protection construction area (flood barrier, forest planting, etc.).
Economic growth theory analyses which factors affect economic growth and how growth can be sustained. A popular neoclassical growth model is the Ramsey-Cass-Koopmans model, which aims to determine how much of its income a nation or an economy should save in order to maximize its welfare. In this thesis, we present and analyze an extended capital accumulation equation of a spatial version of the Ramsey model, balancing diffusive and agglomerative effects. We model capital mobility in space via a nonlocal diffusion operator which allows for jumps of the capital stock from one location to another. Moreover, this operator smooths out heterogeneities in the factor distributions more slowly, which generates more realistic behavior of capital flows. In addition, we introduce an endogenous productivity-production operator which depends on time and on the capital distribution in space. This operator models the technological progress of the economy. The resulting mathematical model is an optimal control problem governed by a semilinear parabolic integro-differential equation with initial and volume constraints, which are a nonlocal analog of local boundary conditions, and box constraints on the state and the control variables. In this thesis, we consider this problem on a bounded and an unbounded spatial domain, in both cases with a finite time horizon. We derive existence results for weak solutions of the capital accumulation equations in both settings and we prove the existence of a Ramsey equilibrium in the unbounded case. Moreover, we solve the optimal control problem numerically and discuss the results in the economic context.
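For orientation, the textbook (local, non-spatial) Ramsey-Cass-Koopmans problem that the thesis generalises can be sketched as follows; this is the standard form, not the thesis' nonlocal spatial extension:

```latex
% Classical Ramsey-Cass-Koopmans problem: choose consumption c(t) to
% maximise discounted utility subject to capital accumulation.
\max_{c(\cdot)} \int_0^\infty e^{-\rho t}\, U\big(c(t)\big)\, dt
\quad \text{s.t.} \quad
\dot{k}(t) = f\big(k(t)\big) - \delta k(t) - c(t), \qquad k(0) = k_0,
```

where $U$ is the utility function, $f$ the production function, $\rho$ the discount rate and $\delta$ the depreciation rate. The thesis replaces the purely local capital dynamics by a nonlocal (jump) diffusion term over the spatial domain and lets productivity inside $f$ depend on time and on the capital distribution in space.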
Hybrid Modelling, in general, describes the combination of at least two different methods to solve one specific task. As far as this work is concerned, Hybrid Models describe an approach to combine sophisticated, well-studied mathematical methods with Deep Neural Networks to solve parameter estimation tasks. To combine these two methods, the data structure of artificially generated acceleration data of an approximate vehicle model, the Quarter-Car-Model, is exploited. The acceleration of individual components within a coupled dynamical system can be described by a second-order ordinary differential equation, including the velocity and displacement of coupled states, scaled by the spring and damping coefficients of the system. An appropriate numerical integration scheme can then be used to simulate discrete acceleration profiles of the Quarter-Car-Model with a random variation of the parameters of the system. Given explicit knowledge about the data structure, one can then investigate under which conditions it is possible to estimate the parameters of the dynamical system for a set of randomly generated data samples. We test whether Neural Networks are capable of solving parameter estimation problems in general, or whether they can be used to solve several sub-tasks which support a state-of-the-art parameter estimation method. Hybrid Models are presented for parameter estimation under uncertainties, including for instance measurement noise or incompleteness of measurements, which combine knowledge about the data structure and several Neural Networks for robust parameter estimation within a dynamical system.
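A minimal sketch, with made-up parameter values, of how such training data might be generated: the quarter-car model couples a sprung and an unsprung mass through a spring-damper pair, and a simple time integrator yields discrete acceleration profiles for randomly drawn coefficients.

```python
import math, random

# Quarter-car model sketch (illustrative parameter values, not the thesis'
# setup): sprung mass m_s and unsprung mass m_u are coupled by a suspension
# spring k_s and damper c_s; the tyre is a spring k_t excited by a
# sinusoidal road profile z_r(t).

def simulate(k_s, c_s, m_s=300.0, m_u=40.0, k_t=200_000.0,
             dt=1e-3, steps=2000):
    """Semi-implicit Euler integration; returns the sprung-mass acceleration."""
    x_s = v_s = x_u = v_u = 0.0
    acc = []
    for n in range(steps):
        z_r = 0.02 * math.sin(2.0 * math.pi * 2.0 * n * dt)  # road input
        a_s = (-k_s * (x_s - x_u) - c_s * (v_s - v_u)) / m_s
        a_u = (k_s * (x_s - x_u) + c_s * (v_s - v_u)
               - k_t * (x_u - z_r)) / m_u
        v_s += dt * a_s           # update velocities first (symplectic),
        v_u += dt * a_u
        x_s += dt * v_s           # then positions with the new velocities
        x_u += dt * v_u
        acc.append(a_s)
    return acc

# Generate samples with randomly varied spring/damper coefficients, the
# kind of artificial data set the thesis feeds to its networks.
random.seed(0)
samples = [(k, c, simulate(k, c))
           for k, c in ((random.uniform(15e3, 25e3),
                         random.uniform(1e3, 2e3)) for _ in range(3))]
print(len(samples), len(samples[0][2]))  # 3 2000
```

A parameter estimation task then amounts to recovering (k_s, c_s) from the acceleration trace alone, with the true values available as supervision labels.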
Service innovation has increasingly been acknowledged as contributing to economic growth and well-being. Despite this increased relevance in practice, service innovation is a developing research field. To advance the literature on service innovation, this work uses a qualitative study to analyze how firms manage service innovation activities in their organization differently. In addition, it evaluates the influence of top management commitment and corporate service innovativeness on the service innovation capabilities of a firm and their implications for firm-level performance by conducting a quantitative study. Accordingly, the main overall research questions of this dissertation are: 1.) How and why do firms manage service innovation activities in their organization differently? 2.) What influence do top management commitment and corporate service innovativeness have on the service innovation capabilities of a firm and what are the implications for firm-level performance? To respond to the first research question, the way firms manage service innovation activities in their organization is investigated, as well as by whom and how service innovations are developed. Moreover, it is examined why firms implement their service innovation activities differently. To achieve this, a qualitative empirical study was conducted which included 22 semi-structured interviews with 15 firms in the sectors of construction, financial services, IT services, and logistics. Addressing the second research question, the aim is to improve the understanding of factors that enhance firm-level performance through service innovations. Deploying a dynamic capabilities perspective, a quantitative study is performed which underlines the importance of service innovation capabilities.
More specifically, a theoretical framework is developed that proposes a positive relationship of top management commitment and corporate service innovativeness with service innovation capabilities and a positive relationship between service innovation capabilities and the firm-level performance indicators market performance, competitive advantage, and efficiency. A survey with double respondents from 87 companies from the sectors construction, financial services, IT services, and logistics was conducted to test the proposed theoretical framework by applying partial least squares structural equation modeling (PLS-SEM).
This thesis presents a study of the visual change detection mechanism. This mechanism is thought to be responsible for the detection of sudden and unexpected changes in our visual environment. As the brain is a capacity-limited system and has to deal with a continuous stream of information from its surroundings, only a part of the vast amount of information can be completely processed and brought to conscious awareness. This information, which passes through attentional filters, is used for goal-directed behaviour. Therefore, the change detection mechanism is a very useful aid to cope with important information which is outside the focus of our attention. It is thought that a neural memory trace of repetitive visual information is stored. Each new information input is compared to this existing memory trace by a so-called change or mismatch detection system. Following a sudden change, the comparison process leads to a mismatch and the detection system elicits a warning signal, which an orienting response can follow. This involves a change in the focus of attention towards this sudden environmental change, which can then be evaluated for potential danger and allows for a behavioural adaptation to the new situation. For this purpose a paradigm was developed combining a 2-choice response time task with a background mismatch detection task of which the subjects were not aware. This paradigm was implemented in an ERP and an fMRI study and was used to study the change detection mechanism and its relationship with impulsivity. In previous studies a change detection system for auditory information had already been established. As the brain is a very efficient system, it was thought to be unlikely that this change detection system is only available for the processing of auditory information.
Indeed, a modality-specific mismatch response at the sensory-specific occipital cortex and a more general response at the frontocentral midline, both resembling the components shown in auditory research, were found in the ERP study. Additionally, magnetic resonance imaging revealed a possible functional network of regions which responded specifically to the processing of a deviant. These regions included the occipital gyrus, premotor cortex, inferior frontal cortex, thalamus, insula, and parts of the cingulate cortex. The relationship between impulsivity measures and visual change detection was established in an additional study. More impulsive subjects showed less detection of deviant stimuli, which was most likely due to too fast and imprecise information processing. In summary, the work presented in this thesis established a visual mismatch negativity, associated a number of regions with change detection, and demonstrated the relevance of change detection in information processing.
At any given moment, our senses are assaulted with a flood of information from the environment around us. We need to pick our way through all this information in order to be able to respond effectively to what is relevant to us. In most cases we are able to select information relevant to our intentions from what is not relevant. However, what happens to the information that is not relevant to us? Is this irrelevant information completely ignored so that it does not affect our actions? The literature suggests that even though we may ignore an irrelevant stimulus, it may still interfere with our actions. One of the ways in which irrelevant stimuli can affect actions is by retrieving a response with which they were associated. An irrelevant stimulus that is presented in close temporal contiguity with a relevant stimulus can be associated with the response made to the relevant stimulus, an observation termed distractor-response binding (Rothermund, Wentura, & De Houwer, 2005). The studies presented in this work take a closer look at such distractor-response bindings and the circumstances in which they occur. Specifically, the study reported in chapter 6 examined whether only an exact repetition of the distractor can retrieve the response with which it was associated, or whether even similar distractors may cause retrieval. The results suggested that even repeating a similar distractor caused retrieval, albeit less than an exact repetition. In chapter 7, the existence of bindings between a distractor and a response was tested beyond a perceptual level, to see whether they exist at an (abstract) conceptual level. Similar to perceptual repetition, distractor-based retrieval of the response was observed for the repetition of concepts. The study reported in chapter 8 of this work examined the influence of attention on the feature-response binding of irrelevant features.
The results pointed towards stronger binding effects when attention was directed towards the irrelevant feature compared to when it was not. The study in chapter 9 looked at the processes underlying distractor-based retrieval and distractor inhibition. The data suggest that motor processes underlie distractor-based retrieval and cognitive processes underlie distractor inhibition. Finally, the findings of all four studies are also discussed in the context of learning.
The search for relevant determinants of knowledge acquisition has a long tradition in educational research, with systematic analyses having started over a century ago. To date, a variety of relevant environmental and learner-related characteristics have been identified, providing a wide body of empirical evidence. However, there are still some gaps in the literature, which are highlighted in the current dissertation. The dissertation includes two meta-analyses summarizing the evidence on the effectiveness of electrical brain stimulation and the effects of prior knowledge on later learning outcomes, and one empirical study employing latent profile transition analysis to investigate the changes in conceptual knowledge over time. The results from the three studies demonstrate how learning outcomes can be advanced by input from the environment and that they are highly related to the students' level of prior knowledge. It is concluded that environmental and learner-related variables impact both the biological and cognitive processes underlying knowledge acquisition. Based on the findings from the three studies, methodological and practical implications are provided, followed by an outline of four recommendations for future research on knowledge acquisition.
Our goal is to approximate energy forms on suitable fractals by discrete graph energies and certain metric measure spaces, using the notion of quasi-unitary equivalence. Quasi-unitary equivalence generalises the two concepts of unitary equivalence and norm resolvent convergence to the case of operators and energy forms defined in varying Hilbert spaces.
More precisely, we prove that the canonical sequence of discrete graph energies (associated with the fractal energy form) converges to the energy form (induced by a resistance form) on a finitely ramified fractal in the sense of quasi-unitary equivalence. Moreover, we allow a perturbation by magnetic potentials and we specify the corresponding errors.
This aforementioned approach is an approximation of the fractal from within (by an increasing sequence of finitely many points). The natural step that follows this realisation is the question whether one can also approximate fractals from outside, i.e., by a suitable sequence of shrinking supersets. We partly answer this question by restricting ourselves to a very specific structure of the approximating sets, namely so-called graph-like manifolds that respect the structure of the fractals and of the underlying discrete graphs, respectively. Again, we show that the canonical (properly rescaled) energy forms on such a sequence of graph-like manifolds converge to the fractal energy form (in the sense of quasi-unitary equivalence).
From the quasi-unitary equivalence of energy forms, we conclude the convergence of the associated linear operators, convergence of the spectra and convergence of functions of the operators – thus essentially the same as in the case of the usual norm resolvent convergence.
The ability to acquire knowledge helps humans to cope with the demands of the environment. Supporting knowledge acquisition processes is among the main goals of education. Empirical research in educational psychology has identified several processes through which prior knowledge affects learning. However, the majority of studies investigated cognitive mechanisms mediating between prior knowledge and learning and neglected that motivational processes might also mediate this influence. In addition, the impact of successful knowledge acquisition on patients' health has not been comprehensively studied. This dissertation aims at closing knowledge gaps on these topics with three studies. The first study is a meta-analysis that examined motivation as a mediator of individual differences in knowledge before and after learning. The second study investigated in greater detail the extent to which motivation mediated the influence of prior knowledge on knowledge gains in a sample of university students. The third study is a second-order meta-analysis synthesizing the results of previous meta-analyses on the effects of patient education on several health outcomes. The findings of this dissertation show that (a) motivation mediates individual differences in knowledge before and after learning; (b) interest and academic self-concept stabilize individual differences in knowledge more than academic self-efficacy, intrinsic motivation, and extrinsic motivation; (c) test-oriented instruction closes knowledge gaps between students; (d) students' motivation can be independent of prior knowledge in high-aptitude students; (e) knowledge acquisition affects motivational and health-related outcomes; and (f) evidence on prior knowledge and motivation can help develop effective interventions in patient education. The results of the dissertation provide insights into prerequisites, processes, and outcomes of knowledge acquisition.
Future research should address covariates of learning and environmental impacts for a better understanding of knowledge acquisition processes.
The subject of this thesis is a homological approach to the splitting theory of PLS-spaces, i.e. to the question for which topologically exact short sequences 0->X->Y->Z->0 of PLS-spaces X,Y,Z the right-hand map admits a right inverse. We show that the category (PLS) of PLS-spaces and continuous linear maps is an additive category in which every morphism admits a kernel and a cokernel, i.e. it is pre-abelian. However, we also show that it is neither quasi-abelian nor semi-abelian. As a foundation for our homological constructions we show the more general result that every pre-abelian category admits a largest exact structure in the sense of Quillen. In the pre-abelian category (PLS) this exact structure consists precisely of the topologically exact short sequences of PLS-spaces. Using a construction of Ext-functors due to Yoneda, we show that one can define for each PLS-space A and every natural number k the k-th abelian-group-valued covariant and contravariant Ext-functors acting on the category (PLS) of PLS-spaces, which induce for every topologically exact short sequence of PLS-spaces a long exact sequence of abelian groups and group morphisms. These functors are studied in detail and we establish a connection between the Ext-functors of PLS-spaces and the Ext-functors for LS-spaces. Through this connection we arrive at an analogue of a result for Fréchet spaces which connects the first derived functor of the projective limit with the first Ext-functor and also gives sufficient conditions for the vanishing of the higher Ext-functors. Finally, we show that Ext^k(E,F) = 0 for every k greater than or equal to 1, whenever E is a closed subspace and F is a Hausdorff quotient of the space of distributions, which generalizes a result of Wengenroth that is itself a generalization of results due to Domanski and Vogt.
This work investigates the industrial applicability of graphics and stream processors in the field of fluid simulations. For this purpose, an explicit Runge-Kutta discontinuous Galerkin method of arbitrarily high order is implemented completely for the hardware architecture of GPUs. The same functionality is simultaneously realized for CPUs and compared to GPUs. Explicit time steppings as well as established implicit methods are under consideration for the CPU. This work aims at the simulation of inviscid, transonic flows over the ONERA M6 wing. The discontinuities which typically arise in hyperbolic equations are treated with an artificial viscosity approach. It is further investigated how this approach fits into the explicit time stepping and works together with the special architecture of the GPU. Since the treatment of artificial viscosity is close to the simulation of the Navier-Stokes equations, it is reviewed how GPU-accelerated methods could be applied for computing viscous flows. This work is based on a nodal discontinuous Galerkin approach for linear hyperbolic problems. Here, it is extended to non-linear problems, which makes the application of numerical quadrature obligatory. Moreover, the representation of complex geometries is realized using isoparametric mappings. Higher order methods are typically very sensitive with respect to boundaries which are not properly resolved. For this purpose, an approach is presented which fits straight-sided DG meshes to curved geometries which are described by NURBS surfaces. The mesh is modeled as an elastic body and deformed according to the solution of closest point problems in order to minimize the gap to the original spline surface. The sensitivity with respect to geometry representations is reviewed at the end of this work in the context of shape optimization.
Here, the aerodynamic drag of the ONERA M6 wing is minimized according to the shape gradient which is implicitly smoothed within the mesh deformation approach. In this context a comparison to the classical Laplace-Beltrami operator is made in a Stokes flow situation.
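As an illustration of the explicit time stepping underlying such solvers, here is the classical four-stage Runge-Kutta scheme applied to a toy first-order upwind semi-discretisation of linear advection; this stand-in spatial operator is far simpler than the nodal DG right-hand side of the thesis, but the per-step structure, four evaluations of a spatial operator plus a weighted update, is what maps so naturally onto GPU hardware.

```python
import math

# Explicit classical RK4 time stepping for the semi-discrete system
# du/dt = L(u). The operator L is a toy first-order upwind
# discretisation of u_t + u_x = 0 on a periodic grid.
N, dx, dt = 100, 1.0 / 100, 0.5 / 100  # CFL number 0.5

def L(u):
    # periodic upwind difference: u[-1] wraps around in Python
    return [-(u[i] - u[i - 1]) / dx for i in range(len(u))]

def rk4_step(u):
    k1 = L(u)
    k2 = L([u[i] + 0.5 * dt * k1[i] for i in range(N)])
    k3 = L([u[i] + 0.5 * dt * k2[i] for i in range(N)])
    k4 = L([u[i] + dt * k3[i] for i in range(N)])
    return [u[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(N)]

u = [math.sin(2 * math.pi * i * dx) for i in range(N)]
for _ in range(100):            # advance to t = 0.5
    u = rk4_step(u)
mass = sum(u) * dx              # upwinding is dissipative but conserves mass
print(abs(mass) < 1e-12)        # True
```

Each stage is an independent, data-parallel sweep over the grid, which is why explicit schemes like this are the natural fit for GPUs, in contrast to the implicit methods considered for the CPU.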
Issues in Price Measurement
(2022)
This thesis focuses on the issues in price measurement and consists of three chapters. Due to outdated weighting information, a Laspeyres-based consumer price index (CPI) is prone to accumulating upward bias. Therefore, chapter 1 introduces and examines simple and transparent revision approaches that retrospectively address the source of the bias. They provide a consistent long-run time series of the CPI and require no additional information. Furthermore, a coherent decomposition of the bias into the contributions of individual product groups is developed. In a case study, the approaches are applied to a Laspeyres-based CPI. The empirical results confirm the theoretical predictions. The proposed revision approaches are adoptable not only to most national CPIs but also to other price-level measures such as the producer price index or the import and export price indices.
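The mechanics behind the bias can be illustrated with hypothetical numbers: a Laspeyres index keeps base-period quantities fixed, so once consumers substitute away from goods whose prices rise, it overstates inflation relative to an index that reflects the substitution.

```python
# Toy Laspeyres price index (hypothetical numbers): fixed base-period
# quantity weights are kept even after consumers substitute away from
# goods whose prices rise -- the source of the upward bias that the
# revision approaches of chapter 1 address retrospectively.

def laspeyres(p0, pt, q0):
    """Price index as the ratio of base-quantity baskets, scaled to 100."""
    return 100.0 * sum(pt[i] * q0[i] for i in range(len(p0))) \
                 / sum(p0[i] * q0[i] for i in range(len(p0)))

p0 = [2.0, 5.0]   # base prices of two product groups
pt = [4.0, 5.0]   # good 0 doubles in price, good 1 stays flat
q0 = [10.0, 4.0]  # base-period quantities (the outdated weights)
qt = [5.0, 6.0]   # current quantities: consumers substituted to good 1

cpi_laspeyres = laspeyres(p0, pt, q0)
# A Paasche index uses current quantities and comes out lower here:
cpi_paasche = 100.0 * sum(pt[i] * qt[i] for i in range(2)) \
                    / sum(p0[i] * qt[i] for i in range(2))
print(round(cpi_laspeyres, 1), round(cpi_paasche, 1))  # 150.0 125.0
```

The gap between the two numbers is the substitution effect; a retrospective revision with updated weights moves the long-run series towards the lower figure.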
Chapter 2 is dedicated to the measurement of import and export price indices. Such indices are complicated by the impact of exchange rates. These indices are usually also compiled as some Laspeyres-type index; therefore, substitution bias is an issue. The terms of trade (the ratio of the export and import price indices) are therefore also likely to be distorted. The underlying substitution bias accumulates over time. This chapter applies a simple and transparent retroactive correction approach that addresses the source of the substitution bias and produces meaningful long-run time series of import and export price levels and, therefore, of the terms of trade. Furthermore, an empirical case study is conducted that demonstrates the efficacy and versatility of the correction approach.
Chapter 3 leaves the field of index revision and studies another issue in price measurement, namely, the economic evaluation of digital products in monetary terms that have zero market prices. This chapter explores different methods of economic valuation and pricing of free digital products and proposes an alternative way to calculate the economic value and a shadow price of free digital products: the Usage Cost Model (UCM). The goal of the chapter is, first of all, to formulate a theoretical framework and incorporate an alternative measure of the value of free digital products. However, an empirical application is also made to show the work of the theoretical model. Some conclusions on applicability are drawn at the end of the chapter.
Modern decision making in the digital age is highly driven by the massive amount of data collected from different technologies and thus affects both individuals and economic businesses. The benefit of using these data and turning them into knowledge requires appropriate statistical models that describe the underlying observations well. Imposing a certain parametric statistical model goes along with the need to find optimal parameters such that the model describes the data best. This often results in challenging mathematical optimization problems with respect to the model's parameters, which potentially involve covariance matrices. Positive definiteness of covariance matrices is required for many advanced statistical models, and this constraint must be imposed explicitly in standard Euclidean nonlinear optimization methods, which often results in high computational effort. As Riemannian optimization techniques have proved efficient at handling difficult matrix-valued geometric constraints, we consider optimization over the manifold of positive definite matrices to estimate the parameters of statistical models. The statistical models treated in this thesis assume that the underlying data sets used for parameter fitting have a clustering structure, which results in complex optimization problems. This motivates the use of the intrinsic geometric structure of the parameter space. In this thesis, we analyze the appropriateness of Riemannian optimization over the manifold of positive definite matrices for two advanced statistical models. We establish important problem-specific Riemannian characteristics of the two problems and demonstrate the importance of exploiting the Riemannian geometry of covariance matrices based on numerical studies.
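A toy sketch of the idea in the simplest possible case: for 1x1 matrices the SPD manifold is just the positive real line with the affine-invariant metric, and a Riemannian gradient step via the exponential map respects positivity without any explicit constraint. The objective function and step size below are illustrative, not taken from the thesis.

```python
import math

# Riemannian gradient descent on SPD(1), the positive reals with the
# affine-invariant metric. The positivity constraint is handled by the
# geometry itself: the exponential map Exp_x(v) = x * exp(v / x) never
# leaves the manifold, so no projection or barrier term is needed.
# Illustrative objective: squared geodesic distance f(x) = (log x - log a)^2.

def riemannian_descent(a, x0, lr=0.25, iters=60):
    x = x0
    for _ in range(iters):
        egrad = 2.0 * (math.log(x) - math.log(a)) / x  # Euclidean gradient
        rgrad = x * x * egrad                          # metric conversion: x G x
        x = x * math.exp(-lr * rgrad / x)              # step along Exp_x(-lr*rgrad)
        assert x > 0.0                                 # always stays on the manifold
    return x

x_star = riemannian_descent(a=2.0, x0=10.0)
print(round(x_star, 6))  # 2.0 -- converges to the target a
```

For full d x d covariance matrices the same recipe applies with matrix square roots and matrix exponentials in place of the scalar operations; libraries such as pymanopt implement these primitives.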
Every action we perform, no matter how simple or complex, has a cognitive representation. It is commonly assumed that these are organized hierarchically. Thus, the representation of a complex action consists of multiple simpler actions. The representation of a simple action, in turn, consists of stimulus, response, and effect features. These are integrated into one representation upon the execution of an action and can be retrieved if a feature is repeated. Depending on whether retrieved features match or only partially match the current action episode, this might benefit or impair the execution of a subsequent action. This pattern of costs and benefits results in binding effects that indicate the strength of common representation between features. Binding effects occur also in more complex actions: Multiple simple actions seem to form representations on a higher level through the integration and retrieval of sequentially given responses, resulting in so-called response-response binding effects. This dissertation aimed to investigate what factors determine whether simple actions form more complex representations. The first line of research (Articles 1-3) focused on dissecting the internal structure of simple actions. Specifically, I investigated whether the spatial relation of stimuli, responses, or effects, that are part of two different simple actions, influenced whether these simple actions are represented as one more complex action. The second line of research (Articles 2, 4, and 5) investigated the role of context on the formation and strength of more complex action representations. Results suggest that spatial separation of responses as well as context might affect the strength of more complex action representations. In sum, findings help to specify assumptions on the structure of complex action representations. 
However, it may be important to distinguish factors that influence the strength and structure of action representations from factors that terminate action representations.
In addition to the well-recognised effects of both genes and adult environment, it is now broadly accepted that adverse conditions during pregnancy contribute to the development of mental and somatic disorders in the offspring, such as cardiovascular disorders, endocrinological disorders, metabolic disorders, schizophrenia, anxious and depressive behaviour and attention deficit hyperactivity disorder (ADHD). Early life events may have a long-lasting impact on tissue structure and function, and these effects appear to underlie the developmental origins of vulnerability to chronic diseases. The assumption that prenatal adversity, such as maternal emotional states during pregnancy, may have adverse effects on the developing infant is not new. Corresponding references can be found in an ancient Indian text (ca. 1050 BC), in biblical texts and in documents originating in the Middle Ages. Even Hippocrates noted possible effects of maternal emotional states on the developing fetus. Since the mid-1950s, research examining the effects of maternal psychosocial stress during pregnancy has appeared in the literature, and extensive research in this field has been conducted since the early 1990s. Thus, the relationship between early life events and long-term health outcomes was already postulated over 20 years ago. David Barker and colleagues demonstrated that children of lower birth weight - which represents a crude marker of an adverse intrauterine environment - were at increased risk of high blood pressure, cardiovascular disorders, and type-2 diabetes later in life. These provocative findings led to a large amount of subsequent research, initially focussing on the role of undernutrition in determining fetal outcomes. The phenomenon of prenatal influences that determine in part the risk of suffering from chronic disease later in life has been named the "fetal origins of health and disease" paradigm.
The concept of "prenatal programming" has now been extended to many other domains, such as the effects of prenatal maternal stress, prenatal tobacco exposure, alcohol intake, medication, toxins, as well as maternal infection and diseases. During the process of prenatal programming, environmental agents are transmitted across the placenta and act on specific fetal tissues during sensitive periods of development. Thus, developmental trajectories are changed and the organisation and function of tissue structures and organ systems are altered. The biological purpose of such "early life programming" may lie in evolutionary advantages: the offspring adapts its development to the expected extrauterine environment, which is forecast from the cues available during fetal life. If the fetus receives signals of a challenging environment, e.g. due to maternal stress hormones or maternal undernutrition, its survival may be promoted by developmental adaptation processes. However, if the expected environment does not match the actual environment, maladaptation and later disease risk may result. For example, a possible indicator of a "response ready" trait, such as hyperactivity/inattention, may have been advantageous in an adverse ancient environment. However, it is a disadvantage when the postnatal environment demands the opposite skills, such as attention and concentration, e.g. in the classroom, in order to achieve academic success. Borderline personality disorder (BPD) is a prevalent psychiatric disorder, characterized by impulsivity, affective instability, dysfunctional interpersonal relationships and identity disturbance. Although many studies report different risk factors, the exact etiologic mechanisms are not yet understood. In addition to the well-recognised effects of genetic components and adverse childhood experiences, BPD may potentially be co-determined by further environmental influences acting very early in life: during the pre- and perinatal period.
There are several hints that may suggest possible prenatal programming processes in BPD. For example, patients with BPD are characterized by elevated stress sensitivity and reactivity and dysfunctions of the neuroendocrine stress system, such as the hypothalamic-pituitary-adrenal (HPA) axis. Furthermore, patients with BPD show a broad range of somatic comorbidities, especially those disorders for which prenatal programming processes have been described. During infancy and childhood, BPD patients already show behavioural and emotional abnormalities as well as pronounced temperamental traits, such as impulsivity, emotional dysregulation and inattention, that may potentially be co-determined by prenatal programming processes. Such temperamental traits - similar to those seen in patients with ADHD - have been described to be associated with low birthweight, which indicates a suboptimal intrauterine environment. Moreover, the functional and structural alterations in the central nervous system (CNS) in patients with BPD might also be mediated in part by prenatal agents, such as prenatal tobacco exposure. Prenatal adversity may thus constitute a further, additional component in the multifactorial genesis of BPD. The association between BPD and prenatal risk factors has not yet been studied in detail. We are not aware of any other study that assessed pre- and perinatal risk factors, such as maternal psychosocial stress, smoking, alcohol intake, obstetric complications and lack of breastfeeding, in patients with BPD.
In this thesis we focus on the development and investigation of methods for the computation of confluent hypergeometric functions. We point out the relations between these functions and parabolic boundary value problems and demonstrate applications to models of heat transfer and fluid dynamics. For the computation of confluent hypergeometric functions on compact (real or complex) intervals we consider a series expansion based on the Hadamard product of power series. It turns out that the partial sums of this expansion are easily computable and provide a better rate of convergence in comparison to the partial sums of the Taylor series. Regarding computational accuracy, the problem of cancellation errors is reduced considerably. Another important tool for the computation of confluent hypergeometric functions is the use of recurrence formulae. Although easy to implement, such recurrence relations are numerically unstable, e.g. due to rounding errors. In order to circumvent these problems, a method for computing recurrence relations in the backward direction is applied. Furthermore, asymptotic expansions for arguments of large modulus are considered. From the numerical point of view, the determination of the number of terms used for the approximation is a crucial point. As an application we consider initial-boundary value problems with partial differential equations of parabolic type, where we use the method of eigenfunction expansion in order to determine an explicit form of the solution. In this case the arising eigenfunctions depend directly on the geometry of the considered domain. For certain domains with special geometry the eigenfunctions are of confluent hypergeometric type. Both a conductive heat transfer model and an application in fluid dynamics are considered. Finally, the application of several heat transfer models to certain sterilization processes in the food industry is discussed.
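As a minimal illustration of the series approach discussed above (not the thesis's Hadamard-product expansion), the following Python sketch computes the Kummer confluent hypergeometric function M(a, b, z) from partial sums of its Taylor series; the tolerance and term limit are illustrative choices.

```python
from math import exp

def kummer_m(a, b, z, tol=1e-12, max_terms=500):
    """Approximate the confluent hypergeometric function M(a, b, z)
    by partial sums of its Taylor series:
        M(a, b, z) = sum_{k>=0} (a)_k / (b)_k * z^k / k!
    where (x)_k denotes the Pochhammer symbol (rising factorial)."""
    term = 1.0          # k = 0 term
    total = 1.0
    for k in range(max_terms):
        # Update the term via the ratio of consecutive coefficients.
        term *= (a + k) * z / ((b + k) * (k + 1))
        total += term
        if abs(term) < tol * abs(total):
            return total
    return total

# Sanity check against a closed form: M(1, 1, z) = exp(z).
print(kummer_m(1.0, 1.0, 2.5))  # close to exp(2.5)
```

For large negative real z, the leading terms of this series grow before they decay and alternate in sign, which is precisely the cancellation problem that the expansion studied in the thesis mitigates.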
Memory consists of multiple anatomically and functionally distinct systems. Animal studies suggest that stress modulates multiple memory systems in a manner that favors nucleus caudatus-based stimulus-response learning at the expense of hippocampus-based spatial learning. The present work aimed (i) to translate these findings to humans, (ii) to determine the involvement of the stress hormone cortisol in this effect, and (iii) to assess whether the use of stimulus-response and spatial strategies is a long-lasting person characteristic. To address these issues we developed a new paradigm that differentiates the use of spatial and stimulus-response learning in humans. Our findings indicate that (i) psychosocial stress (Trier Social Stress Test) modulates the use of spatial and stimulus-response learning in humans, (ii) cortisol plays a key role in this modulatory effect of stress, and (iii) the use of spatial and stimulus-response learning is affected by situational rather than long-lasting person factors.
In this thesis, we study the convergence behavior of an efficient optimization method used for the identification of parameters of underdetermined systems. The research is motivated by optimization problems arising from the estimation of parameters in neural networks as well as in option pricing models. In the first application, we are concerned with neural networks used to forecast stock market indices. Since neural networks are able to describe extremely complex nonlinear structures, they are used to improve the modelling of the nonlinear dependencies occurring in financial markets. Applying neural networks to the forecasting of economic indicators, we are confronted with a nonlinear least squares problem of large dimension. Furthermore, in this application the number of parameters of the neural network to be determined is usually much larger than the number of patterns which are available for the determination of the unknowns. Hence, the residual function of our least squares problem is underdetermined. In option pricing, an important but usually unknown parameter is the volatility of the underlying asset of the option. Assuming that the underlying asset follows a one-factor continuous diffusion model with nonconstant drift and volatility terms, the value of a European call option satisfies a parabolic initial value problem with the volatility function appearing in one of the coefficients of the parabolic differential equation. Using this system equation, the estimation of the volatility function is described by a nonlinear least squares problem. Since the adaptation of the volatility function is based only on a small number of observed market data, these problems are naturally ill-posed. For the solution of these large-scale underdetermined nonlinear least squares problems we use a fully iterative inexact Gauss-Newton algorithm.
We show how the structure of a neural network as well as that of the European call price model can be exploited using iterative methods. Moreover, we present theoretical statements for the convergence of the inexact Gauss-Newton algorithm applied to the less examined case of underdetermined nonlinear least squares problems. Finally, we present numerical results for the application of neural networks to the forecasting of stock market indices as well as for the construction of the volatility function in European option pricing models. In the case of the latter application, we discretize the parabolic differential equation using a finite difference scheme and elucidate convergence problems of the discrete scheme when the initial condition is not everywhere differentiable.
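The core idea of a Gauss-Newton step for an underdetermined residual (fewer residuals than parameters) can be sketched as follows; the toy residual below is invented for illustration and is not the neural-network or option-pricing problem of the thesis, and the update shown is the minimum-norm (pseudoinverse) solution of the linearized system for a single residual.

```python
def gauss_newton_minnorm(residual, jacobian, x, iters=20):
    """Gauss-Newton iteration for one underdetermined residual:
    the update is the minimum-norm solution of the linearized system,
    dx = -J^T r / (J J^T) for a 1 x n Jacobian J."""
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        jj = sum(j * j for j in J)       # J J^T (a scalar here)
        if jj == 0.0:
            break
        step = -r / jj
        x = [xi + step * j for xi, j in zip(x, J)]
    return x

# Toy problem: one equation, two unknowns (a circle of radius 2),
# so the solution set is a whole curve rather than a single point.
res = lambda x: x[0] ** 2 + x[1] ** 2 - 4.0
jac = lambda x: [2.0 * x[0], 2.0 * x[1]]

x_hat = gauss_newton_minnorm(res, jac, [1.0, 0.5])
print(x_hat, res(x_hat))  # residual is driven (near) to zero
```

Which point on the solution curve the iteration reaches depends on the starting value, which is exactly why regularity and convergence statements for the underdetermined case need separate treatment.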
The present thesis addresses the validity of Binge Eating Disorder (BED) as well as underlying mechanisms of BED from three different angles. Three studies provide data discriminating obesity with BED from obesity without BED. Study 1 demonstrates differences between obese individuals with and without BED regarding eating in the natural environment, psychiatric comorbidity, negative affect as well as self-reported tendencies in eating behavior. Evidence for possible psychological mechanisms explaining the increased intake of BED individuals in the natural environment was given by analyzing associations of negative affect, emotional eating, restrained eating and caloric intake in obese BED individuals compared to non-BED (NBED) controls. Study 2 demonstrated stress-induced changes in the eating behavior of obese individuals with BED. The impact of a psychosocial stressor, the Trier Social Stress Test (TSST, Kirschbaum, Pirke, & Hellhammer, 1993), on behavioral patterns of eating behavior in the laboratory was investigated. Special attention was given to stress-induced changes in variables that reflect mechanisms of appetite regulation in obese BED individuals compared to controls. To further explore by which mechanisms stress might trigger binge eating, Study 3 investigated differences in stress-induced cortisol secretion after a socially evaluated cold pressor test (SECPT, Schwabe, Haddad, & Schachinger, 2008) in obese BED as compared to obese NBED individuals.
The present work considers the normal approximation of the binomial distribution and yields estimates of the supremum distance between the distribution functions of the binomial and the corresponding standardized normal distribution. The estimates are of the type appearing in the classical Berry-Esseen theorem, in the special case that all random variables are identically Bernoulli distributed. In this case we state the optimal constant for the Berry-Esseen theorem. In the proof of these estimates several inequalities regarding the density as well as the distribution function of the binomial distribution are presented. Furthermore, in the estimates mentioned above the distribution function is replaced by the probability of arbitrary, not only unbounded, intervals, and in this new situation we also present an upper bound.
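The supremum distance studied here can be evaluated numerically for given n and p; the following Python sketch does so by comparing the binomial distribution function with the standard normal at its jump points (a generic numerical illustration, not the method of proof used in the work).

```python
from math import erf, sqrt, comb

def normal_cdf(x):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def binom_normal_sup_distance(n, p):
    """sup_x |F_n(x) - Phi((x - np)/sigma)| for X ~ Bin(n, p).
    Since F_n is a step function, the supremum is attained at the
    jump points k = 0, ..., n (checking both one-sided limits)."""
    sigma = sqrt(n * p * (1.0 - p))
    d = 0.0
    cdf = 0.0
    for k in range(n + 1):
        pk = comb(n, k) * p ** k * (1.0 - p) ** (n - k)
        phi = normal_cdf((k - n * p) / sigma)
        d = max(d, abs(cdf - phi))        # left limit at the jump
        cdf += pk
        d = max(d, abs(cdf - phi))        # value at the jump
    return d

print(binom_normal_sup_distance(100, 0.5))
```

Consistent with the Berry-Esseen rate, the computed distance shrinks roughly like n^(-1/2) as n grows while p stays fixed.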
Interoception - the perception of bodily processes - plays a crucial role in the subjective experience of emotion, consciousness and symptom genesis. As an alternative to interoceptive paradigms that depend on the participants' active cooperation, five studies are presented to show that startle methodology may be employed to study visceral afferent processing. Study 1 (38 volunteers) showed that startle responses to acoustic stimuli of 105 dB(A) intensity were smaller when elicited during the cardiac systole (R-wave +230 ms) as compared to the diastole (R +530 ms). In Study 2, 31 diabetic patients were divided into two groups with normal or diminished (< 6 ms/mmHg) baroreflex sensitivity (BRS) of heart rate control. Patients with normal BRS showed a startle inhibition during the cardiac systole, as was found for healthy volunteers. Diabetic patients with diminished BRS did not show this pattern. Because diminished BRS is an indicator of impaired baro-afferent signal transmission, we concluded that cardiac modulation of startle is associated with intact arterial baro-afferent feedback. Thus, pre-attentive startle methodology is suitable for studying visceral afferent processing. Visceral and baro-afferent information has been found to be mainly processed in the right hemisphere. To explore whether cardiac modulation of startle eye blink is lateralized as well, in Study 3, 37 healthy volunteers received 160 unilateral acoustic startle stimuli presented to both ears, one at a time (R +0, 100, 230, 530 ms). Startle response magnitude was only diminished at R +230 ms and for left-ear presentation. This lateralization effect in the cardiac modulation of startle eye blink may reflect the previously described advantages of right-hemispheric brain structures in relaying viscero- and baro-afferent signal transmission. This lateralization effect implies that higher cognitive processes may also play a role in the cardiac modulation of startle.
To address this question, in Study 4, 25 volunteers first responded with 'as fast as possible' button pushes (reaction time, RT) and then rated the perceived intensity of 60 acoustic startle stimuli (85, 95, or 105 dB; R +230, 530 ms). RT was divided into evaluation and motor response time. Increasing stimulus intensity enhanced startle eye blink, intensity ratings, and RT components. Eye blinks and intensity judgments were lower when startle was elicited at a latency of R +230 ms, but RT components were differentially affected. It is concluded that the cardiac cycle affects the attentive processing of acoustic startle stimuli. Besides the arterial baroreceptors, the cardiopulmonary baroreceptors represent another important system of cardiovascular perception that may have similar effects on startle responsiveness. To clarify this issue, in Study 5, Lower Body Negative Pressure at gradients of 0, -10, -20, and -30 mmHg was applied to unload cardiopulmonary baroreceptors in 12 healthy males, while acoustic startle stimuli were presented (R +230, 530 ms). Unloading of cardiopulmonary baroreceptors increased startle eye blink responsiveness. Furthermore, the effect of relative loading/unloading of arterial baroreceptors on startle eye blink responsiveness was replicated. These results demonstrate that the loading status of cardiopulmonary baroreceptors also has an impact on brainstem-based CNS processes. Thus, the cardiac modulation of acoustic startle can reflect baro-afferent signal transmission from multiple neural sources; it represents a pre-attentive method that is independent of active cooperation, but its modulatory effects also reach higher cognitive, attentive processes.
This thesis comprises four research papers on the economics of education and industrial relations, which contribute to the field of empirical economic research. All of the corresponding papers focus on analysing how much time individuals spend on specific activities. The allocation of available time resources is a decision that individuals make throughout their lifetime. In this thesis, we consider individuals at different stages of their lives - students at school, university students, and dependent employees at the workplace.
Part I includes two research studies on students' behaviour in secondary and tertiary education.
Chapter 2 explores whether students who are relatively younger or older within the school year exhibit differential time allocation. Building on previous findings showing that relatively younger students perform worse in school, the study shows that relatively younger students are aware of their poor performance in school and feel more strain as a result. Nevertheless, there are no clear differences to be found in terms of time spent on homework, while relatively younger students spend more time watching television and less time on sports activities. Thus, the results suggest that the lower learning outcomes are not associated with different time allocations between school-related activities and non-school-related activities.
Chapter 3 analyses how individual ability and labour market prospects affect study behaviour. The theoretical modelling predicts that both determinants increase study effort. The empirical investigation is based on cross-sectional data from the National Educational Panel Study (NEPS) and includes thousands of students in Germany. The analyses show that more gifted students exhibit lower subjective effort levels and invest less time in self-study. In contrast, very good labour market prospects lead to more effort exerted by the student, both qualitatively and quantitatively. The potential endogeneity problem is taken into account by using regional unemployment data as an instrumental variable.
Part II includes two labour economic studies on determinants of overtime. Both studies belong to the field of industrial relations, as they focus on union membership on the one hand and the interplay of works councils and collective bargaining coverage on the other.
Chapter 4 shows that union members work less overtime than non-members do. The econometric approach takes the problem of unobserved heterogeneity into account but provides no evidence that this issue affects the results. Different channels that could lead to this relationship are analysed by examining relevant subgroups separately. For example, this effect of union membership can also be observed in establishments with works councils and for workers who are very likely to be covered by collective bargaining agreements. The study concludes that the observed effect is due to the fact that union membership can protect workers from corresponding increased working time demands by employers.
Chapter 5 builds on previous studies showing a negative effect of works councils on overtime. In addition to co-determination by works councils at the firm level, collective bargaining coverage is an important factor in the German industrial relations system. Corresponding data was not available in the SOEP for quite some time. Therefore, the study uses recent SOEP data, which also contains information on collective bargaining coverage. A cross-sectional analysis is conducted to examine the effects of works councils in establishments with and without collective bargaining coverage. Similar to studies analysing other outcome variables, the results show that the effect of works councils exists only for employees covered by a collective bargaining agreement.
Chemical communication in the reproductive behaviour of Neotropical poison frogs (Dendrobatidae)
(2013)
Chemical communication is the evolutionarily oldest communication system in the animal kingdom that triggers intra- and interspecific interactions. It is initiated by the emitter releasing either a signal or a cue that causes a reaction in the receiving individual. Compared to other animals, there are relatively few studies regarding chemical communication in anurans. In this thesis the impact of chemical communication on the behaviour of the poison frog Ranitomeya variabilis (Dendrobatidae) and its parental care performance was investigated. This species uses phytotelmata (small water bodies in plants) for both clutch and tadpole depositions. Since tadpoles are cannibalistic, adult frogs not only avoid conspecifics when depositing their eggs but also transport their tadpoles individually into separate phytotelmata. The recognition of already occupied phytotelmata was shown to be due to chemical substances released by the conspecific tadpoles. In order to gain a deeper understanding of the ability of adult R. variabilis to recognize and avoid tadpoles in general, in-situ pool choice experiments were conducted, offering chemical substances of tadpoles of different species to the frogs (Chapter I). It turned out that they were able to recognize all species and avoid their chemical substances for clutch depositions. However, for tadpole depositions only dendrobatid tadpoles occurring in phytotelmata were avoided, while those species living in rivers were not. Additionally, the chemical substances of a treefrog tadpole (Hylidae) were recognized by R. variabilis. Yet, they were not avoided but preferred for tadpole depositions; thus these tadpoles might be recognized as potential prey for the predatory poison frog larvae. One of the poison frog species which was avoided for both tadpole and clutch depositions was the phytotelmata-breeding Hyloxalus azureiventris. The chemical substances released by its tadpoles were analysed together with those of the R.
variabilis tadpoles (Chapter II). After finding a suitable solid-phase extraction sorbent (DSC-18), the active chemical compounds from the water of both tadpole species were extracted and fractionated. In order to determine which fractions triggered the avoidance behaviour of the frogs, in-situ bioassays were conducted. It was found that the biologically active compounds differed between the two species. Since the avoidance of the conspecific tadpoles is not advantageous to the releaser tadpoles (losing a potential food resource), the chemicals released by them might be defined as chemical cues. However, as it turned out that the avoidance of the heterospecific tadpoles was not triggered by a mere byproduct based on the close evolutionary relationship between the two species, the chemical compounds released by H. azureiventris tadpoles might be defined as chemical signals (being advantageous to the releasing tadpoles) or, more specifically, as synomones, interspecifically acting chemicals that are advantageous for both emitter and receiver (since R. variabilis also avoids a competition situation for its offspring). Another interspecific communication system investigated in this thesis was the avoidance of predator kairomones (Chapter III). Using chemical substances from damselfly larvae, it could be shown that R. variabilis was unable to recognize and avoid kairomones of these tadpole predators. However, when physically present, damselfly larvae were avoided by the frogs. For the recognition of conspecific tadpoles, in contrast, chemical substances were necessary, since purely visual artificial tadpole models were not avoided. Whether R. variabilis is also capable of chemically communicating with adult conspecifics was investigated by presenting chemical cues/signals of same-sex or opposite-sex conspecifics to the frogs (Chapter IV). It was hypothesized that males would be attracted to chemical substances of females and repelled by those of conspecific males.
Instead, all individuals showed avoidance behaviour towards the conspecific chemicals. This was suggested to be an artefact due to confinement stress of the releaser animals, emitting disturbance cues that triggered avoidance behaviour in their conspecifics. The knowledge gained about chemical communication in parental care thus far was used to further investigate a possible provisioning behaviour in R. variabilis. In-situ pool-choice experiments with chemical cues of conspecific tadpoles were carried out throughout the change from rainy to dry season (Chapter V). With a changepoint analysis, the exact seasonal change was determined and differences between the frogs' choices were analysed. It turned out that R. variabilis does not avoid but prefers conspecific cues for tadpole depositions during the dry season, which might be interpreted as a way to provide their tadpoles with food (i.e. younger tadpoles) in order to accelerate their development when facing desiccation risk. That tadpoles were also occasionally fed with fertilized eggs could be shown in a comparative study, where phytotelmata that contained a tadpole deposited by the frogs themselves received more clutch depositions than freshly erected artificial phytotelmata containing unfamiliar tadpoles (i.e. their chemical cues; Chapter VI). Conducting home range calculations with ArcGIS, it turned out that R. variabilis males showed unexpectedly strong site fidelity, leading to the suggestion that they recognize their offspring by phytotelmata location. However, in order to test whether R. variabilis is furthermore able to perform chemical offspring recognition, frogs were confronted in in-situ pool-choice experiments with chemical cues of single tadpoles that were found in their home ranges (Chapter VII). Genetic kinship analyses were conducted between those tadpoles emitting the chemical cues and those deposited together with or next to them.
The results, however, indicated that frogs did not choose to deposit their offspring with or without another tadpole due to relatedness, i.e. kin recognition by chemical cues could not be confirmed in R. variabilis.
In this psycho-neuro-endocrine study the molecular basis of different variants of steroid receptors as well as of highly conserved non-steroidal receptors was investigated. These nuclear receptors (NRs) are important key regulators of a wide variety of physiological and pathophysiological challenges ranging from inflammation and stress to complex behaviour and disease. NRs control gene transcription in a ligand-dependent manner and are embedded in the huge interaction network of the neuroendocrine and immune systems. Two receptors, the glucocorticoid receptor (GR) and the chicken ovalbumin upstream promoter-transcription factor II (Coup-TFII), both expressed in the immune and nervous system, were investigated regarding possible splice variants and their implication in the control of gene transcription. Both NRs are known to interact with and modulate each other's target gene regulation. In this study it was shown that both NRs have different splice variants that are expressed in a tissue-specific manner. The different 5'-alternative transcript variants of the human GR were identified in silico in other species, and evidence for a highly conserved and tightly controlled function was provided. Investigations of the N-terminal transactivation domain of the GR revealed a deletion suggesting an altered glucocorticoid-dependent transactivation profile. The newly identified alternative transcript variant of Coup-TFII leads to a DNA-binding-deficient Coup-TFII isoform that is highly expressed in the brain. This Coup-TFII isoform alters Coup-TFII target gene expression and is suggested to interact with GR via its ligand binding domain, resulting in an impaired GR target gene regulation in the nervous system.
In this thesis it was demonstrated that NR variants are important for the understanding of the enormous regulatory potential of this receptor family and have to be taken into account in the development of therapeutic strategies for complex diseases such as stress-related and neurodegenerative disorders.
We are living in a connected world, surrounded by interwoven technical systems. Since they pervade more and more aspects of our everyday lives, a thorough understanding of the structure and dynamics of these systems is becoming increasingly important. However - rather than being blueprinted and constructed at the drawing board - many technical infrastructures, such as the Internet's global router network, the World Wide Web, large-scale Peer-to-Peer systems or the power grid, evolve in a distributed fashion, beyond the control of a central instance and influenced by various surrounding conditions and interdependencies. Hence, due to this increase in complexity, making statements about the structure and behavior of tomorrow's networked systems is becoming increasingly complicated. A number of failures have shown that complex structures can emerge unintentionally that resemble those which can be observed in biological, physical and social systems. In this dissertation, we investigate how such complex phenomena can be controlled and actively used. For this, we review methodologies stemming from the field of random and complex networks, which are being used for the study of natural, social and technical systems, thus delivering insights into their structure and dynamics. A particularly interesting finding is the fact that the efficiency, dependability and adaptivity of natural systems can be related to rather simple local interactions between a large number of elements. We review a number of interesting findings about the formation of complex structures and collective dynamics and investigate how these are applicable in the design and operation of large-scale networked computing systems. A particular focus of this dissertation is applications of principles and methods stemming from the study of complex networks in distributed computing systems that are based on overlay networks.
Here we argue that the alterability of the (virtual) connectivity in such systems, largely independent of physical limitations, facilitates a design that is based on analogies between complex network structures and phenomena studied in statistical physics. Based on results about the properties of scale-free networks, we present a simple membership protocol by which scale-free overlay networks with an adjustable degree distribution exponent can be created in a distributed fashion. With this protocol we further exemplify how phase transition phenomena - as occur frequently in the domain of statistical physics - can actively be used to quickly adapt macroscopic statistical network parameters which are known to massively influence the stability and performance of networked systems. In the case considered in this dissertation, the adaptation of the degree distribution exponent of a random, scale-free overlay allows - within critical regions - a change of relevant structural and dynamical properties. As such, the proposed scheme makes it possible to make sound statements about the relation between the local behavior of individual nodes and large-scale properties of the resulting complex network structures. For systems in which the degree distribution exponent cannot easily be derived, for example from local protocol parameters, we further present a distributed, probabilistic mechanism which can be used to monitor a network's degree distribution exponent and thus to reason about important structural qualities. Finally, the dissertation shifts its focus towards the study of complex, non-linear dynamics in networked systems. We consider a message-based protocol which - based on the Kuramoto model for coupled oscillators - achieves a stable, global synchronization of periodic heartbeat events. The protocol's performance and stability are evaluated in different network topologies.
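As a loose illustration of how scale-free structures can emerge from simple local attachment rules (not the dissertation's actual membership protocol), the following Python sketch grows a network by preferential attachment, the classical mechanism that produces a power-law degree distribution and pronounced hubs.

```python
import random

def preferential_attachment(n, m, seed=1):
    """Grow an undirected network: each new node links to m existing
    nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # Start from a small fully connected seed network of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # Each node appears once per incident edge, so uniform sampling
    # from this list is degree-biased sampling of nodes.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for v in chosen:
            edges.append((new, v))
            targets.extend((new, v))
    return edges

edges = preferential_attachment(2000, 2)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# Hubs emerge: the maximum degree far exceeds the mean degree.
print(max(degree.values()), sum(degree.values()) / len(degree))
```

Tuning the attachment rule (e.g. making the attachment probability a nonlinear function of degree) is one simple local lever for shifting the degree distribution exponent, which is the kind of macroscopic parameter the membership protocol in the dissertation adjusts.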
We further argue that - based on existing findings about the interrelation between spectral network properties and the dynamics of coupled oscillators - the proposed protocol makes it possible to monitor structural properties of networked computing systems. An important aspect of this dissertation is its interdisciplinary approach towards a sensible and constructive handling of complex structures and collective dynamics in networked systems. The associated investigation of distributed systems from the perspective of non-linear dynamics and statistical physics highlights interesting parallels to both biological and physical systems. This foreshadows systems whose structures and dynamics can be analyzed and understood in the conceptual frameworks of statistical physics and complex systems.
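The Kuramoto dynamics underlying the heartbeat-synchronization protocol can be sketched in a few lines; this toy Python version uses Euler integration on a complete graph with invented parameter values, and measures global synchrony via the standard order parameter r.

```python
import math, random

def kuramoto_order(n=20, coupling=4.0, dt=0.02, steps=3000, seed=7):
    """Simulate d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)
    on a complete graph and return the final order parameter
    r = |(1/n) * sum_j exp(i * theta_j)|, where r -> 1 means full synchrony."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.uniform(-0.5, 0.5) for _ in range(n)]   # natural frequencies
    for _ in range(steps):
        new = []
        for i in range(n):
            drive = sum(math.sin(theta[j] - theta[i]) for j in range(n))
            new.append(theta[i] + dt * (omega[i] + coupling / n * drive))
        theta = new
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

print(kuramoto_order())  # strong coupling: r close to 1
```

Above the critical coupling strength the oscillators lock to a common rhythm despite their differing natural frequencies; on sparser topologies the onset of synchrony depends on spectral network properties, which is what makes the protocol usable as a structural monitor.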
Traditionally, random sample surveys are designed so that national statistics can be estimated reliably with adequate precision. For this purpose, primarily design-based, model-assisted estimation methods are used, which rely largely on asymptotic properties. For smaller sample sizes, as encountered in small areas (domains or subpopulations), these estimation methods are hardly suitable, which is why special model-based small area estimation techniques have been developed for this application. The latter may be biased, but often have a smaller mean squared error of estimation than design-based estimators. Model-assisted and model-based methods have in common that they rely on statistical models, albeit to different degrees. Model-assisted methods are usually constructed so that the contribution of the model is small for very large sample sizes (and even vanishes in the limit). In model-based methods, the model always plays a central role, regardless of the sample size. These considerations illustrate that the assumed model, or more precisely, the quality of the modelling, is of crucial importance for the quality of the small area statistics. If the empirical data cannot be described by a suitable model and estimated with the corresponding methods, massive biases and/or inefficient estimates may result.
Die vorliegende Arbeit beschäftigt sich mit der zentralen Frage der Robustheit von Small Area-Schätzverfahren. Als robust werden statistische Methoden dann bezeichnet, wenn sie eine beschränkte Einflussfunktion und einen möglichst hohen Bruchpunkt haben. Vereinfacht gesprochen zeichnen sich robuste Verfahren dadurch aus, dass sie nur unwesentlich durch Ausreisser und andere Anomalien in den Daten beeinflusst werden. Die Untersuchung zur Robustheit konzentriert sich auf die folgenden Modelle bzw. Schätzmethoden:
i) modellbasierte Schätzer für das Fay-Herriot-Modell (Fay und Herrot, 1979, J. Amer. Statist. Assoc.) und das elementare Unit-Level-Modell (vgl. Battese et al., 1988, J. Amer. Statist. Assoc.).
ii) direkte, Modell-unterstützte Schätzer unter der Annahme eines linearen Regressionsmodells.
Das Unit-Level-Modell zur Mittelwertschätzung beruht auf einem linearen gemischten Gauss'schen Modell (engl. mixed linear model, MLM) mit blockdiagonaler Kovarianzmatrix. Im Gegensatz zu bspw. einem multiplen linearen Regressionsmodell, besitzen MLM-Modelle keine nennenswerten Invarianzeigenschaften, so dass eine Kontamination der abhängigen Variablen unvermeidbar zu verzerrten Parameterschätzungen führt. Für die Maximum-Likelihood-Methode kann die resultierende Verzerrung nahezu beliebig groß werden. Aus diesem Grund haben Richardson und Welsh (1995, Biometrics) die robusten Schätzmethoden RML 1 und RML 2 entwickelt, die bei kontaminierten Daten nur eine geringe Verzerrung aufweisen und wesentlich effizienter sind als die Maximum-Likelihood-Methode. Eine Abwandlung von Methode RML 2 wurde Sinha und Rao (2009, Canad. J. Statist.) für die robuste Schätzung von Unit-Level-Modellen vorgeschlagen. Allerdings erweisen sich die gebräuchlichen numerischen Verfahren zur Berechnung der RML-2-Methode (dies gilt auch für den Vorschlag von Sinha und Rao) als notorisch unzuverlässig. In dieser Arbeit werden zuerst die Konvergenzprobleme der bestehenden Verfahren erörtert und anschließend ein numerisches Verfahren vorgeschlagen, das sich durch wesentlich bessere numerische Eigenschaften auszeichnet. Schließlich wird das vorgeschlagene Schätzverfahren im Rahmen einer Simulationsstudie untersucht und anhand eines empirischen Beispiels zur Schätzung von oberirdischer Biomasse in norwegischen Kommunen illustriert.
Das Modell von Fay-Herriot kann als Spezialfall eines MLM mit blockdiagonaler Kovarianzmatrix aufgefasst werden, obwohl die Varianzen des Zufallseffekts für die Small Areas nicht geschätzt werden müssen, sondern als bereits bekannte Größen betrachtet werden. Diese Eigenschaft kann man sich nun zunutze machen, um die von Sinha und Rao (2009) vorgeschlagene Robustifizierung des Unit-Level-Modells direkt auf das Fay-Herriot Model zu übertragen. In der vorliegenden Arbeit wird jedoch ein alternativer Vorschlag erarbeitet, der von der folgenden Beobachtung ausgeht: Fay und Herriot (1979) haben ihr Modell als Verallgemeinerung des James-Stein-Schätzers motiviert, wobei sie sich einen empirischen Bayes-Ansatz zunutze machen. Wir greifen diese Motivation des Problems auf und formulieren ein analoges robustes Bayes'sches Verfahren. Wählt man nun in der robusten Bayes'schen Problemformulierung die ungünstigste Verteilung (engl. least favorable distribution) von Huber (1964, Ann. Math. Statist.) als A-priori-Verteilung für die Lokationswerte der Small Areas, dann resultiert als Bayes-Schätzer [=Schätzer mit dem kleinsten Bayes-Risk] die Limited-Translation-Rule (LTR) von Efron und Morris (1971, J. Amer. Statist. Assoc.). Im Kontext der frequentistischen Statistik kann die Limited-Translation-Rule nicht verwendet werden, weil sie (als Bayes-Schätzer) auf unbekannten Parametern beruht. Die unbekannten Parameter können jedoch nach dem empirischen Bayes-Ansatz an der Randverteilung der abhängigen Variablen geschätzt werden. Hierbei gilt es zu beachten (und dies wurde in der Literatur vernachlässigt), dass die Randverteilung unter der ungünstigsten A-priori-Verteilung nicht einer Normalverteilung entspricht, sondern durch die ungünstigste Verteilung nach Huber (1964) beschrieben wird. 
Es ist nun nicht weiter erstaunlich, dass es sich bei den Maximum-Likelihood-Schätzern von Regressionskoeffizienten und Modellvarianz unter der Randverteilung um M-Schätzer mit der Huber'schen psi-Funktion handelt.
Unsere theoriegeleitete Herleitung von robusten Schätzern zum Fay-Herriot-Modell zeigt auf, dass bei kontaminierten Daten die geschätzte LTR (mit Parameterschätzungen nach der M-Schätzmethodik) optimal ist und, dass die LTR ein integraler Bestandteil der Schätzmethodik ist (und nicht als ``Zusatz'' o.Ä. zu betrachten ist, wie dies andernorts getan wird). Die vorgeschlagenen M-Schätzer sind robust bei Vorliegen von atypischen Small Areas (Ausreissern), wie dies auch die Simulations- und Fallstudien zeigen. Um auch Robustheit bei Vorkommen von einflussreichen Beobachtungen in den unabhängigen Variablen zu erzielen, wurden verallgemeinerte M-Schätzer (engl. generalized M-estimator) für das Fay-Herriot-Modell entwickelt.
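As a generic illustration of the M-estimation machinery underlying these robust methods (a minimal sketch with synthetic data, not the thesis's own implementation), the Huber psi-function and an IRLS location M-estimator with MAD scale can be written as:

```python
import numpy as np

def huber_psi(u, c=1.345):
    """Huber's psi-function: identity for small residuals, clipped at +/- c."""
    return np.clip(u, -c, c)

def m_estimate_location(y, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimator of location, MAD scale, solved by IRLS iteration."""
    mu = np.median(y)
    scale = 1.4826 * np.median(np.abs(y - mu))  # MAD, consistent at the normal
    for _ in range(max_iter):
        u = (y - mu) / scale
        w = np.ones_like(u)                     # IRLS weights psi(u)/u
        nz = u != 0
        w[nz] = huber_psi(u[nz], c) / u[nz]
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol * scale:
            break
        mu = mu_new
    return mu

# synthetic data: 95 clean observations around 10, five gross outliers at 60
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(10.0, 1.0, 95), np.full(5, 60.0)])
print(np.mean(y), m_estimate_location(y))
```

The sample mean is dragged towards the outliers, while the M-estimate stays close to the bulk of the data, illustrating the bounded influence property the abstract refers to.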
The Role of Dopamine and Acetylcholine as Modulators of Selective Attention and Response Speed
(2015)
The principles of top-down and bottom-up processing are essential to cognitive psychology. At their broadest, most general definition, they denote that processing can be driven either by the salience of the stimulus input or by individual goals and strategies. Selective top-down attention, specifically, consists in the deliberate prioritizing of stimuli that are deemed goal-relevant, while selective bottom-up attention relies on the automatic allocation of attention to salient stimuli (Connor, Egeth, & Yantis, 2004; Schneider, Schote, Meyer, & Frings, 2014). Variations within neurotransmitter systems can modulate cognitive performance in a domain-specific fashion (Greenwood, Fossella, & Parasuraman, 2005). Noudoost and Moore (2011a) proposed that the influence of the dopaminergic neurotransmitter system on selective top-down attention might be greater than the influence of this system on selective bottom-up attention; likewise, they assumed that the cholinergic neurotransmitter system might be more important for selective bottom-up than top-down attention. To test this hypothesis, naturally occurring variations within the two neurotransmitter systems were assessed. Five polymorphisms were selected; two of the dopaminergic system (the COMT Val158Met polymorphism and the DAT1 polymorphism) and three of the cholinergic system (the CHRNA4 rs1044396 polymorphism, the CHRNA5 rs3841324 polymorphism, and the CHRNA5 rs16969968 polymorphism). It was tested whether these polymorphisms modulated the performance in tasks of selective top-down attention (a Stroop task and a Negative priming task) and in a task of selective bottom-up attention (a Posner-Cuing task). Indeed, the dopaminergic polymorphisms influenced selective top-down attention, but exerted no effects on bottom-up attention. This aligned with the hypothesis proposed by Noudoost and Moore (2011a). In contrast, the cholinergic polymorphisms were not found to modulate selective bottom-up attention. 
The three cholinergic polymorphisms, however, affected the general response speed in the Stroop task, Negative priming task, and Posner-Cuing task (irrespective of attentional processing). In sum, the findings of this study provide strong indications that the dopaminergic system modulates selective top-down attention, while the cholinergic system is highly relevant for the general speed of information processing.
Dry tropical forests are facing massive conversion and degradation processes, and they are the most endangered forest type worldwide. One of the largest dry forest types is the Miombo forest, which stretches across the southern African subcontinent, with the proportionally largest share found in Angola. The study site of this thesis is located in south-central Angola. The country still suffers from the consequences of 27 years of civil war (1975-2002), which provide a unique socio-economic setting. The natural characteristics form a representative cross-section that proved ideal for studying underlying drivers as well as current and retrospective land use change dynamics. The major land change dynamic of the study area is the conversion of Miombo forests to cultivation areas, as well as the modification of forest areas, i.e. degradation, due to the extraction of natural resources. With predicted population growth, climate change and large-scale investments, land pressure is expected to increase further. To fully understand the impacts of these dynamics, both conversion and modification of forest areas were assessed. Using the conceptual framework of ecosystem services, the predominant trade-off between food and timber in the study area was analyzed, including retrospective dynamics and impacts. This approach accounts for products that contribute directly or indirectly to human well-being. For this purpose, data from the Landsat archive from 1989 until 2013 were used in different approaches adapted to the study area. The objectives of these approaches were (I) to detect underlying drivers and the temporal and spatial extent of their impact, (II) to describe modification and conversion processes reaching from times of armed conflict over the ceasefire to the post-war period, and (III) to provide an assessment of drivers and impacts in a comparative setting.
It could be shown that the major underlying drivers of the conversion processes are resettlement dynamics as well as the location and quality of streets and settlements. Furthermore, forests that are selectively used for resource extraction have a higher chance of being converted to fields. Drivers of forest degradation are, on the one hand, also strongly connected to settlements and infrastructure, but, on the other hand, to a large extent to fire dynamics that occur mostly in more remote and presumably undisturbed forest areas. The loss of woody biomass as well as its slow recovery after the abandonment of fields could be quantified and stands in stark contrast to the amount of food that needs to be cultivated. The results of the thesis support the fundamental understanding of drivers and impacts in the study area and can thus contribute to sustainable resource management.
A phenomenon of recent decades is that digital marketplaces on the Internet are establishing themselves for a wide variety of products and services. Recently, it has become possible for private individuals to invest in young and innovative companies (so-called "start-ups"). Via Internet portals, potential investors can examine various start-ups and then directly invest in their chosen start-up. In return, investors receive a share in the firm's profit, while companies can use the raised capital to finance their projects. This new way of financing is called "Equity Crowdfunding" (ECF) or "Crowdinvesting". The aim of this dissertation is to provide empirical findings on the characteristics of ECF. In particular, it analyzes whether ECF is able to overcome geographic barriers, the interdependence of ECF and capital structure, and the risk of failure of funded start-ups as well as their chances of receiving follow-up funding from venture capitalists or business angels. The results of the first part of this dissertation show that investors in ECF prefer local companies. In particular, investors who invest larger amounts have a stronger tendency to invest in local start-ups. The second part of the dissertation provides first indications of the interdependencies between capital structure and ECF. The analysis makes clear that the capital structure is not a determinant of undertaking an ECF campaign. The third part of the dissertation analyzes the success of companies financed by ECF in a country comparison. The results show that, after a successful ECF campaign, German companies have a higher chance of receiving follow-up funding from venture capitalists than British companies. The probability of survival, however, is slightly lower for German companies. The results provide relevant implications for theory and practice.
The dissertation extends the existing literature in the area of entrepreneurial finance with insights into investor behavior, additions to capital structure theory, and a country comparison of ECF. In addition, implications are provided for various actors in practice.
This doctoral thesis examines intergenerational knowledge transfer, its antecedents, and how participation in intergenerational knowledge transfer is related to the performance evaluation of employees. To answer these questions, this doctoral thesis builds on a literature review and quantitative research methods. A systematic literature study shows that empirical evidence on intergenerational knowledge transfer is limited. Building on prior literature, effects of various antecedents at the interpersonal and organizational level on intergenerational and intragenerational knowledge transfer are postulated. By surveying 444 trainees and trainers, this doctoral thesis also demonstrates that interpersonal antecedents influence how trainees participate in intergenerational knowledge transfer with their trainers. The results of this study thereby support the relevance of interpersonal antecedents for intergenerational knowledge transfer, yet also emphasize the implications attached to the assigned roles in knowledge transfer (i.e., whether one is a trainee or a trainer). Moreover, the results of an experimental vignette study reveal that participation in intergenerational knowledge transfer is linked to the performance evaluation of employees, yet is sensitive to whether the employee is sharing or seeking knowledge. Overall, this doctoral thesis provides insights into this topic by covering a multitude of antecedents of intergenerational knowledge transfer, as well as how participation in intergenerational knowledge transfer may be associated with the performance evaluation of employees.
Large scale non-parametric applied shape optimization for computational fluid dynamics is considered. Treating a shape optimization problem as a standard optimal control problem by means of a parameterization, the Lagrangian usually requires knowledge of the partial derivative of the shape parameterization and deformation chain with respect to the input parameters. For a variety of reasons, this mesh sensitivity Jacobian is usually quite problematic. For a sufficiently smooth boundary, the Hadamard theorem provides a gradient expression that exists on the surface alone, completely bypassing the mesh sensitivity Jacobian. Building upon this, the gradient computation becomes independent of the number of design parameters, and all surface mesh nodes are used as design unknowns in this work, effectively allowing a free morphing of shapes during optimization. Contrary to a parameterized shape optimization problem, where a smooth surface is usually created independently of the input parameters by construction, regularity is not preserved automatically in the non-parametric case. As part of this work, the shape Hessian is used in an approximate Newton method, also known as the Sobolev method or gradient smoothing, to ensure a certain regularity of the updates: a smooth shape is preserved, while at the same time the one-shot optimization method is also accelerated considerably. For PDE constrained shape optimization, the Hessian usually is a pseudo-differential operator. Fourier analysis is used to identify the operator symbol both analytically and discretely. Preconditioning the one-shot optimization by an appropriate Hessian symbol is shown to greatly accelerate the optimization. As the correct discretization of the Hadamard form usually requires evaluating certain surface quantities such as tangential divergence and curvature, special attention is also given to discrete differential geometry on triangulated surfaces for evaluating shape gradients and Hessians.
The Hadamard formula and Hessian approximations are applied to a variety of flow situations. In addition to shape optimization of internal and external flows, the major focus lies on aerodynamic design, such as optimizing two-dimensional airfoils and three-dimensional wings. Shock waves form when the local speed of sound is reached, and the gradient must be evaluated correctly at discontinuous states. To ensure proper shock resolution, an adaptive multi-level optimization of the Onera M6 wing is conducted using more than 36,000 shape unknowns on a standard office workstation, demonstrating the applicability of the one-shot shape optimization method to industry-size problems.
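The gradient-smoothing (Sobolev) step described above can be illustrated with a minimal sketch: a noisy raw shape gradient on a closed curve is regularized by solving a (I - eps*Laplacian) system. The one-dimensional periodic discretization and the parameter eps are illustrative choices, not those of the thesis:

```python
import numpy as np

def sobolev_smooth(grad, eps=10.0):
    """Smooth a raw (L2) shape gradient by solving (I - eps*Laplacian) g_s = g
    on a closed (periodic) curve discretized with n nodes."""
    n = len(grad)
    # periodic 1D Laplacian (second-difference matrix, zero row sums)
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = 1.0
    A = np.eye(n) - eps * L
    return np.linalg.solve(A, grad)

# noisy raw gradient on a closed curve parameterized by theta
n = 200
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
raw = np.sin(theta) + 0.5 * rng.standard_normal(n)
smooth = sobolev_smooth(raw)
```

Because the Laplacian has zero row sums, the smoothing preserves the total (constant component) of the gradient while damping high-frequency oscillations, which is the regularity property the abstract attributes to the Sobolev method.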
This work is concerned with arbitrage bounds for prices of contingent claims under transaction costs, while disregarding other conceivable market frictions. Assumptions on the underlying market are kept as weak as convenient for the deduction of meaningful results that make good economic sense. In discrete time we also allow for underlying price processes with uncountable state space. In continuous time the underlying price process is modeled by a semimartingale. For the most part we could avoid any stronger assumptions. The main problems addressed in this work are the modeling of (proportional) transaction costs, Fundamental Theorems of Asset Pricing under transaction costs, dual characterizations of arbitrage bounds under transaction costs, quantile hedging under transaction costs, and alternatives to the Black-Scholes model in continuous time (under transaction costs). The results apply to stock and currency markets.
Fostering positive and realistic self-concepts of individuals is a major goal in education worldwide (Trautwein & Möller, 2016). Individuals spend most of their childhood and adolescence in school. Thus, schools are important contexts for individuals to develop positive self-perceptions such as self-concepts. In order to enhance positive self-concepts in educational settings and in general, it is indispensable to have a comprehensive knowledge about the development and structure of self-concepts and their determinants. To date, extensive empirical and theoretical work on antecedents and change processes of self-concept has been conducted. However, several research gaps still exist, and several of these are the focus of the present dissertation. Specifically, these research gaps encompass (a) the development of multiple self-concepts from multiple perspectives regarding stability and change, (b) the direction of longitudinal interplay between self-concept facets over the entire time period from childhood to late adolescence, and (c) the evidence that a recently developed structural model of academic self-concept (nested Marsh/Shavelson model [Brunner et al., 2010]) fits the data in elementary school students, (d) the investigation of structural changes in academic self-concept profile formation within this model, (e) the investigation of dimensional comparison processes as determinants of academic self-concept profile formation in elementary school students within the internal/external frame of reference model (I/E model; Marsh, 1986), (f) the test of moderating variables for dimensional comparison processes in elementary school, (g) the test of the key assumptions of the I/E model that effects of dimensional comparisons depend to a large degree on the existence of achievement differences between subjects, and (h) the generalizability of the findings regarding the I/E model over different statistical analytic methods. 
Thus, the aim of the present dissertation is to contribute to closing these gaps with three studies. Data from German students enrolled in elementary through secondary school were gathered in three projects comprising the developmental time span from childhood to adolescence (ages 6 to 20). Three vital self-concept areas in childhood and adolescence were investigated: general self-concept (i.e., self-esteem), academic self-concepts (general, math, reading, writing, native language), and social self-concepts (of acceptance and assertion). In all studies, data were analyzed within a latent variable framework. Findings are discussed with respect to the research aims of acquiring more comprehensive knowledge on the structure and development of significant self-concepts in childhood and adolescence and their determinants. In addition, theoretical and practical implications derived from the findings of the present studies are outlined. Strengths and limitations of the present dissertation are discussed. Finally, an outlook for future research on self-concepts is given.
The demand for reliable statistics has been growing over the past decades, because more and more political and economic decisions are based on statistics, e.g. in regional planning, allocation of funds or business decisions. Therefore, it has become increasingly important to develop and obtain precise regional indicators as well as disaggregated values in order to compare regions or specific groups. In general, surveys provide the information for these indicators only for larger areas like countries or administrative divisions. In practice, however, it is more interesting to obtain indicators for specific subdivisions, such as the NUTS 2 or NUTS 3 levels. The Nomenclature of Units for Territorial Statistics (NUTS) is a hierarchical system of the European Union used in statistics to refer to subdivisions of countries. In many cases, sample information on such detailed levels is not available. Thus, there are projects such as the European Census, which aim to provide precise numbers at the NUTS 3 or even community level. The European Census is conducted in 2011 in Germany and Switzerland, among other countries. Most of the participating countries use sample and register information in combined form for the estimation process. The classical estimation methods for small areas or subgroups, such as the Horvitz-Thompson (HT) estimator or the generalized regression (GREG) estimator, suffer from small area-specific sample sizes, which cause high variances of the estimates. The application of small area methods, for instance the empirical best linear unbiased predictor (EBLUP), reduces the variance of the estimates by including auxiliary information to increase the effective sample size. These estimation methods lead to higher accuracy of the variables of interest. Small area estimation is also used in the context of business data. For example, when estimating the revenues of specific subgroups, such as on the NACE 3 or NACE 4 levels, small sample sizes can occur.
The Nomenclature statistique des activités économiques dans la Communauté européenne (NACE) is a system of the European Union that defines an industry standard classification. Besides small sample sizes, business data have further special characteristics. The main challenge is that business data have skewed distributions with a few large companies and many small businesses. For instance, in the automotive industry in Germany, there are many small suppliers but only a few large original equipment manufacturers (OEMs). Altogether, highly influential units and outliers can be observed in business statistics. These extreme values in connection with small sample sizes cause severe problems when standard small area models are applied. These models are generally based on the normality assumption, which does not hold in the presence of outliers. One way to address these peculiarities is to apply outlier-robust small area methods. The availability of adequate covariates is important for the accuracy of the small area methods described above. In business data, however, auxiliary variables are hardly available at the population level. One of several reasons is that in Germany many enterprises are not reflected in business registers due to truncation limits. Furthermore, only listed enterprises or companies that exceed specific thresholds are obligated to publish their results. This limits the number of potential auxiliary variables for the estimation. Even though there are issues with available covariates, business data often include spatial dependencies which can be used to enhance small area methods. Next to spatial information based on geographic characteristics, group-specific similarities like related industries based on NACE codes can be used. For instance, enterprises from the same NACE 2 level, e.g. sector 47 retail trade, behave more similarly than two companies from different NACE 2 levels, e.g. sector 05 mining of coal and sector 64 financial services. This spatial correlation can be incorporated by extending the general linear mixed model through the integration of spatially correlated random effects. In business data, outliers as well as geographic or content-wise spatial dependencies between areas or domains are closely linked. The coincidence of these two factors and the resulting consequences have not been fully covered in the relevant literature. The only approach that combines robust small area methods with spatial dependencies is the M-quantile geographically weighted regression model. In the context of EBLUP-based small area models, the combination of robust and spatial methods has not been considered yet. Therefore, this thesis provides a theoretical approach to this scientific and practical problem and shows its relevance in an empirical study.
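A minimal sketch of the plain (non-robust, non-spatial) EBLUP-type shrinkage estimator for the Fay-Herriot area-level model that these extensions build on may help fix ideas. The random-effect variance is taken as given here (in practice it is estimated, e.g. by REML), and all data are invented for illustration:

```python
import numpy as np

def fay_herriot_eblup(y, X, D, sigma2_u):
    """EBLUP-type estimator for the Fay-Herriot model
    y_i = x_i' beta + u_i + e_i, with known sampling variances D_i
    and random-effect variance sigma2_u (assumed given here)."""
    V = sigma2_u + D                  # marginal variances
    W = 1.0 / V
    XtW = X.T * W                     # X' diag(W)
    beta = np.linalg.solve(XtW @ X, XtW @ y)   # weighted least squares
    gamma = sigma2_u / V              # area-specific shrinkage weights
    synthetic = X @ beta
    return gamma * y + (1.0 - gamma) * synthetic

# toy example: 10 areas, intercept plus one covariate
rng = np.random.default_rng(2)
m = 10
x = rng.uniform(0, 1, m)
X = np.column_stack([np.ones(m), x])
theta = 1.0 + 2.0 * x + rng.normal(0, 0.5, m)   # true area means (sigma2_u = 0.25)
D = rng.uniform(0.1, 1.0, m)                    # known sampling variances
y = theta + rng.normal(0, np.sqrt(D))           # direct estimates
est = fay_herriot_eblup(y, X, D, 0.25)
```

Each estimate is a convex combination of the direct estimate and the regression-synthetic estimate, with more shrinkage towards the model where the sampling variance D_i is large.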
In order to classify smooth foliated manifolds, which are smooth manifolds equipped with a smooth foliation, we introduce the de Rham cohomologies of smooth foliated manifolds. These cohomologies are built in a similar way to the de Rham cohomologies of smooth manifolds. We develop some tools to compute these cohomologies. For example, we prove a Mayer-Vietoris theorem for foliated de Rham cohomology and show that these cohomologies are invariant under integrable homotopy. A generalization of a known Künneth formula, which relates the cohomologies of a product foliation to those of its factors, is discussed. In particular, this involves a splitting theory of sequences between Fréchet spaces and a theory of projective spectra. We also prove that the foliated de Rham cohomology is isomorphic to the Čech-de Rham cohomology and to the Čech cohomology of leafwise constant functions of an underlying so-called good cover.
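For orientation, the classical Mayer-Vietoris sequence of ordinary de Rham cohomology for an open cover {U, V} of a manifold reads as follows; the thesis establishes a foliated analogue of a sequence of this shape:

```latex
\cdots \longrightarrow H^k(U \cup V) \longrightarrow H^k(U) \oplus H^k(V)
\longrightarrow H^k(U \cap V) \longrightarrow H^{k+1}(U \cup V) \longrightarrow \cdots
```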
It has been the overall aim of this research work to assess the potential of hyperspectral remote sensing data for the determination of forest attributes relevant to forest ecosystem simulation modeling and forest inventory purposes. A number of approaches for the determination of structural and chemical attributes from hyperspectral remote sensing were applied to the collected data sets. Many of the methods found in the literature had so far only been applied to broadband multispectral data, applied to vegetation canopies other than forests, reported to work at the leaf level or with modeled data, not validated with ground truth data, or not systematically compared to other methods. Attributes that describe the properties of the forest canopy and that are potentially accessible to remote sensing were identified, appropriate methods for their retrieval were implemented, and field, laboratory and image data (HyMap sensor) were acquired over a number of forest plots. The study on structural attributes compared statistical and physical approaches. In the statistical section, linear predictive models between vegetation indices derived from HyMap data and field measurements of structural forest stand attributes were systematically evaluated. The study demonstrates that for hyperspectral image data, linear regression models can be applied to quantify leaf area index and crown volume with good accuracy. For broadband multispectral data, the accuracy was generally lower. The physically-based approach used the invertible forest reflectance model (INFORM), a combination of the well-established sub-models FLIM, SAIL and LIBERTY. The model was inverted with HyMap data using a neural network approach. In comparison to the statistical approach, it could be shown that the reflectance model inversion works equally well.
In contrast to empirically derived prediction functions, which are generally limited to the local conditions at a certain point in time and to a specified sensor type, the calibrated reflectance model can be applied more easily to different optical remote sensing data acquired over central European forests. The study on chemical forest attributes evaluated the information content of HyMap data for the estimation of nitrogen, chlorophyll and water concentration. A number of needle samples of Norway spruce were analysed for their total chlorophyll, nitrogen and water concentrations. The chemical data were linked to needle spectra measured in the laboratory and canopy spectra measured by the HyMap sensor. Wavebands selected in the statistical models were often located in spectral regions that are known to be important for chlorophyll detection (red edge, green peak). Predictive models were applied to the HyMap image to compute maps of chlorophyll concentration and nitrogen concentration. Results of map overlay operations revealed coherence between total chlorophyll and zones of stand development stage, and between total chlorophyll and zones of soil type. Finally, it can be stated that hyperspectral remote sensing data generally contain more information relevant to the estimation of the forest attributes than multispectral data. Structural forest attributes, except biomass, can be determined with good accuracy from a hyperspectral sensor type like HyMap. Among the chemical attributes, chlorophyll concentration can be determined with good accuracy and nitrogen concentration with moderate accuracy. For future research, additional dimensions have to be taken into account, for instance through the exploitation of multi-view angle data. Additionally, existing forest canopy reflectance models should be further improved.
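The statistical approach of regressing a field-measured attribute on an image-derived vegetation index can be sketched as follows. The NDVI serves as a stand-in index and all reflectance and LAI numbers are invented for illustration; the thesis evaluates many more indices and attributes:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# hypothetical calibration data: field-measured LAI vs. plot reflectances
nir = np.array([0.42, 0.48, 0.51, 0.55, 0.60, 0.63])
red = np.array([0.08, 0.07, 0.06, 0.05, 0.045, 0.04])
lai = np.array([2.1, 2.9, 3.4, 4.0, 4.8, 5.2])    # invented field measurements

vi = ndvi(nir, red)
slope, intercept = np.polyfit(vi, lai, 1)          # linear model LAI = a*VI + b
pred = slope * vi + intercept
r2 = 1 - np.sum((lai - pred) ** 2) / np.sum((lai - np.mean(lai)) ** 2)
```

The fitted line can then be applied pixel-wise to an index image to produce an attribute map, which is the basic workflow behind the linear predictive models mentioned above.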
Coastal erosion describes the displacement of land caused by destructive sea waves, currents or tides. Due to global climate change and associated phenomena such as melting polar ice caps and changing ocean current patterns, which result in rising sea levels or increased current velocities, the need for countermeasures is continuously increasing. Today, major efforts have been made to mitigate these effects using groins, breakwaters and various other structures.
This thesis develops a novel approach to this problem by applying shape optimization to the obstacles. For this reason, the results of this thesis always comprise the following three distinct aspects:
The selected wave propagation model, i.e. the modeling of wave propagation towards the coastline, using various wave formulations ranging from steady to unsteady descriptions, described from the Lagrangian or Eulerian viewpoint with all its specialties. More precisely, in the Eulerian setting, first a steady Helmholtz equation in the form of a scattering problem is investigated, followed by the shallow water equations in classical form, equipped with porosity, sediment portability and further subtleties. Secondly, in the Lagrangian framework, the Lagrangian shallow water equations form the center of interest.
The chosen discretization, i.e. depending on the nature and peculiarities of the constraining partial differential equation, we choose between finite elements in conjunction with continuous Galerkin and discontinuous Galerkin methods for investigations in the Eulerian description. In addition, the Lagrangian viewpoint lends itself to mesh-free, particle-based discretizations, where smoothed particle hydrodynamics is used.
The method for shape optimization with respect to the obstacle's shape over an appropriate cost function, constrained by the solution of the selected wave propagation model. In this sense, we rely on a differentiate-then-discretize approach for free-form shape optimization in the Eulerian set-up, and reverse the order in the Lagrangian computations.
Recently, optimization has become an integral part of the aerodynamic design process chain. However, because of uncertainties with respect to the flight conditions and geometrical uncertainties, a design optimized by a traditional design optimization method seeking only optimality may not achieve its expected performance. Robust optimization deals with optimal designs, which are robust with respect to small (or even large) perturbations of the optimization setpoint conditions. The resulting optimization tasks become much more complex than the usual single setpoint case, so that efficient and fast algorithms need to be developed in order to identify, quantize and include the uncertainties in the overall optimization procedure. In this thesis, a novel approach towards stochastic distributed aleatory uncertainties for the specific application of optimal aerodynamic design under uncertainties is presented. In order to include the uncertainties in the optimization, robust formulations of the general aerodynamic design optimization problem based on probabilistic models of the uncertainties are discussed. Three classes of formulations, the worst-case, the chance-constrained and the semi-infinite formulation, of the aerodynamic shape optimization problem are identified. Since the worst-case formulation may lead to overly conservative designs, the focus of this thesis is on the chance-constrained and semi-infinite formulation. A key issue is then to propagate the input uncertainties through the systems to obtain statistics of quantities of interest, which are used as a measure of robustness in both robust counterparts of the deterministic optimization problem. Due to the highly nonlinear underlying design problem, uncertainty quantification methods are used in order to approximate and consequently simplify the problem to a solvable optimization task. 
Computationally demanding evaluations of high dimensional integrals resulting from the direct approximation of statistics as well as from uncertainty quantification approximations arise. To overcome the curse of dimensionality, sparse grid methods in combination with adaptive refinement strategies are applied. The reduction of the number of discretization points is an important issue in the context of robust design, since the computational effort of the numerical quadrature comes up in every iteration of the optimization algorithm. In order to efficiently solve the resulting optimization problems, algorithmic approaches based on multiple-setpoint ideas in combination with one-shot methods are presented. A parallelization approach is provided to overcome the amount of additional computational effort involved by multiple-setpoint optimization problems. Finally, the developed methods are applied to 2D and 3D Euler and Navier-Stokes test cases verifying their industrial usability and reliability. Numerical results of robust aerodynamic shape optimization under uncertain flight conditions as well as geometrical uncertainties are presented. Further, uncertainty quantification methods are used to investigate the influence of geometrical uncertainties on quantities of interest in a 3D test case. The results demonstrate the significant effect of uncertainties in the context of aerodynamic design and thus the need for robust design to ensure a good performance in real life conditions. The thesis proposes a general framework for robust aerodynamic design attacking the additional computational complexity of the treatment of uncertainties, thus making robust design in this sense possible.
Fast and Slow Effects of Cortisol on Several Functions of the Central Nervous System in Humans
(2014)
Cortisol is one of the key substances released during stress to restore homeostasis. Our knowledge of the impact of this glucocorticoid on cognition and behavior in humans is, however, still limited. Two modes of action of cortisol are known, a rapid, nongenomic and a slow, genomic mode. Both mechanisms appear to be involved in mediating the various effects of stress on cognition. Here, three experiments are presented that investigated fast and slow effects of cortisol on several functions of the human brain. The first experiment investigated the interaction between insulin and slow, genomic cortisol effects on resting regional cerebral blood flow (rCBF) in 48 young men. A bilateral, locally distinct increase in rCBF in the insular cortex was observed 37 to 58 minutes after intranasal insulin admission. Cortisol did not influence rCBF, neither alone nor in interaction with insulin. This finding suggests that cortisol does not influence resting cerebral blood flow within a genomic timeframe. The second experiment examined fast cortisol effects on memory retrieval. 40 participants (20 of them female) learned associations between neutral male faces and social descriptions and were tested for recall one week later. Cortisol administered intravenously 8 minutes before retrieval influenced recall performance in an inverted U-shaped dose-response relationship. This study demonstrates a rapid, presumably nongenomic cortisol effect on memory retrieval in humans. The third experiment studied rapid cortisol effects on early multisensory integration. 24 male participants were tested twice in a focused cross-modal choice reaction time paradigm, once after cortisol and once after placebo infusion. Cortisol acutely enhanced the integration of visual targets and startling auditory distractors, when both stimuli appeared in the same sensory hemi-field. The rapidity of effect onset strongly suggests that cortisol changes multisensory integration by a nongenomic mechanism. 
The work presented in this thesis highlights the essential role of cortisol as a fast acting agent during the stress response. Both the second and the third experiment provide new evidence of nongenomic cortisol effects on human cognition and behavior. Future studies should continue to investigate the impact of rapid cortisol effects on the functioning of the human brain.
Theoretical and empirical research assumes a negative development of student achievement motivation over the course of their school careers (i.e., mean-level declines of achievement motivation). However, the exact magnitude of this motivational change remains elusive and it is unclear whether different motivational constructs show similar developmental trends. Furthermore, it is unknown whether motivational declines are related to a particular school stage (i.e., elementary, middle, or high school) or the school transition, and which additional changes are associated with motivational decreases (e.g., changes in student achievement). Finally, previous research has remained inconsistent regarding the question whether ability grouping of students helps prevent motivational declines or results in additional motivational “costs” for students.
This dissertation presents three articles that were designed to address these research questions. In Article 1, a meta-analysis based on 107 independent longitudinal studies investigated student mean-level changes in self-esteem, academic self-concept, academic self-efficacy, intrinsic motivation, and achievement goals from first to 13th grade. Article 2 comprised two longitudinal studies with German adolescents (Study: n = 745 students assessed in four waves in grades 5-7; Study 2: n = 1420 students assessed in four waves in grades 5-8). Both longitudinal studies investigated the separate and the joint development of achievement goals, interest, and achievement in math. In Article 3, a longitudinal study (n = 296 high-ability students assessed in four waves in grades 5-7) investigated the effects of full-time ability grouping on student development of academic self-concept and achievement in math.
The meta-analysis revealed significant decreases in math and language academic self-concept, intrinsic motivation, and mastery and performance-approach goals, whereas no significant changes in self-esteem, general academic self-concept, academic self-efficacy, and performance-avoidance goals were found. Interestingly, motivational declines were not related to school stage or school transition. In Article 2, decreases in interest and mastery, performance-approach, and performance-avoidance goals were indicated by both longitudinal studies. Development of mastery and performance-approach goals was positively related or unrelated to development in interest and achievement, whereas development of performance-avoidance goals was negatively related or unrelated to development of interest and achievement. Finally, the longitudinal study in Article 3 revealed no significant change in student academic self-concept in math over time. Ability grouping showed no positive or negative effects on student academic self-concept. However, high-ability students that were grouped together demonstrated greater gains in their achievement than high-ability students in regular classes.
This thesis contains four parts that are all connected by their contributions to the Efficient Market Hypothesis and decision-making literature. Chapter two investigates how national stock market indices reacted to the news of national lockdown restrictions in the period from January to May 2020. The results show that lockdown restrictions led to different reactions in a sample of OECD and BRICS countries: there was a general negative effect resulting from the increase in lockdown restrictions, but the study finds strong evidence for underreaction during the lockdown announcement, followed by some overreaction that is corrected subsequently. This under-/overreaction pattern, however, is observed mostly during the first half of our time series, pointing to learning effects. Relaxation of the lockdown restrictions, on the other hand, had a positive effect on markets only during the second half of our sample, while for the first half of the sample, the effect was negative. The third chapter investigates the gender differences in stock selection preferences on the Taiwan Stock Exchange. By utilizing trading data from the Taiwan Stock Exchange over a span of six years, it becomes possible to analyze trading behavior while minimizing the self-selection bias that is typically present in brokerage data. To study gender differences, this study uses firm-level data. The percentage of male traders in a company is the dependent variable, while the company’s industry and fundamental/technical aspects serve as independent variables. The results show that the percentage of women trading a company rises with a company’s age, market capitalization, a company’s systematic risk, and return. Men trade more frequently and show a preference for dividend-paying stocks and for industries with which they are more familiar. The fourth chapter investigated the relationship between regret and malicious and benign envy. The relationship is analyzed in two different studies. 
In experiment 1, subjects had to fill out psychological scales that measured regret, the two types of envy, core self-evaluation and the big 5 personality traits. In experiment 2, felt regret is measured in a hypothetical scenario, and the subject’s felt regret was regressed on the other variables mentioned above. The two experiments revealed that there is a positive direct relationship between regret and benign envy. The relationship between regret and malicious envy, on the other hand, is mostly an artifact of core self-evaluation and personality influencing both malicious envy and regret. The relationship can be explained by the common action tendency of self-improvement for regret and benign envy. Chapter five discusses the differences in green finance regulation and implementation between the EU and China. China introduced the Green Silk Road, while the EU adopted the Green Deal and started working with its own green taxonomy. The first difference comes from the definition of green finance, particularly with regard to coal-fired power plants. Especially the responsibility of nation-states’ emissions abroad. China is promoting fossil fuel projects abroad through its Belt and Road Initiative, but the EU’s Green Deal does not permit such actions. Furthermore, there are policies in both the EU and China that create contradictory incentives for economic actors. On the one hand, the EU and China are improving the framework conditions for green financing while, on the other hand, still allowing the promotion of conventional fuels. The role of central banks is also different between the EU and China. China’s central bank is actively working towards aligning the financial sector with green finance. A possible new role of the EU central bank or the priority financing of green sectors through political decision-making is still being debated.
Industrial companies mainly aim for increasing their profit. That is why they intend to reduce production costs without sacrificing the quality. Furthermore, in the context of the 2020 energy targets, energy efficiency plays a crucial role. Mathematical modeling, simulation and optimization tools can contribute to the achievement of these industrial and environmental goals. For the process of white wine fermentation, there exists a huge potential for saving energy. In this thesis mathematical modeling, simulation and optimization tools are customized to the needs of this biochemical process and applied to it. Two different models are derived that represent the process as it can be observed in real experiments. One model takes the growth, division and death behavior of the single yeast cell into account. This is modeled by a partial integro-differential equation and additional multiple ordinary integro-differential equations showing the development of the other substrates involved. The other model, described by ordinary differential equations, represents the growth and death behavior of the yeast concentration and development of the other substrates involved. The more detailed model is investigated analytically and numerically. Thereby existence and uniqueness of solutions are studied and the process is simulated. These investigations initiate a discussion regarding the value of the additional benefit of this model compared to the simpler one. For optimization, the process is described by the less detailed model. The process is identified by a parameter and state estimation problem. The energy and quality targets are formulated in the objective function of an optimal control or model predictive control problem controlling the fermentation temperature. This means that cooling during the process of wine fermentation is controlled. Parameter and state estimation with nonlinear economic model predictive control is applied in two experiments. 
For the first experiment, the optimization problems are solved by multiple shooting with a backward differentiation formula method for the discretization of the problem and a sequential quadratic programming method with a line search strategy and a Broyden-Fletcher-Goldfarb-Shanno update for the solution of the constrained nonlinear optimization problems. Different rounding strategies are applied to the resulting post-fermentation control profile. Furthermore, a quality assurance test is performed. The outcomes of this experiment are remarkable energy savings and tasty wine. For the next experiment, some modifications are made, and the optimization problems are solved by using direct transcription via orthogonal collocation on finite elements for the discretization and an interior-point filter line-search method for the solution of the constrained nonlinear optimization problems. The second experiment verifies the results of the first experiment. This means that by the use of this novel control strategy energy conservation is ensured and production costs are reduced. From now on tasty white wine can be produced at a lower price and with a clearer conscience at the same time.
The main objective of the present thesis was to investigate whether antibody effects observed in earlier in vitro studies can translate into the protection against chemical carcinogenesis in vivo as the basis of an immunoprophylactic approach against carcinogens. As model for chemical carcinogenesis, we selected B[a]P the prototype polycyclic aromatic hydrocarbon (PAH), an environmental pollutant emanating from both natural and anthropogenic sources. Many in vivo models conveniently use high doses of carcinogens mostly given as single bolus, which provides simple surrogate readouts, but poorly reflects chronic exposure to the low concentrations found in the environment. In addition, these concentrations cannot be matched with equimolar antibody concentrations obtained by immunisation. However, low B[a]P concentrations do not permit to directly measure chemical carcinogenesis. Therefore, in the present thesis, the pharmacokinetic, metabolism and B[a]P mediated immunotoxicity were chosen as experimental read-outs. B[a]P conjugate vaccines based on ovalbumin, tetanus toxoid and diphtheria toxoid (DT) as carrier proteins were developed to actively immunise mice against B[a]P. B[a]P-DT conjugate induced the most robust immune response. The antibodies reacted not only with B[a]P but also with the proximate carcinogen 7,8-diol-B[a]P. Antibodies modulated the bioavailability of B[a]P and its metabolic activation in a dose-dependent manner by sequestration in the blood. In order to further improve the vaccination, we replaced the protein carrier by promiscuous T-helper cell epitopes to induce higher antibody titer with increased specificity for the B[a]P hapten. We hypothesised that a reduction of B cell binding sites on the carrier, compared to whole protein carrier, should favour the activation of B cells recognising the hapten instead of the carrier protein. 
An internal processing of the carrier and cleavage of the B[a]P-BA and subsequent presentation of the carrier peptide by MHC II molecules to T cell receptor should induce a B cell dependent immune response by activating B cells capable to recognise B[a]P. We demonstrated that a vaccination against B[a]P using promiscuous T-helper cell epitopes as a carrier is feasible and some tested peptide conjugates were more immunogenic as whole protein conjugates with increased specificity. We showed that vaccination against B[a]P reduces immunotoxicity. B[a]P suppressed the proliferative response of both T and B cells after a sub-acute administration, an effect that was completely reversed by vaccination. In immunized mice the immunotoxic effect of B[a]P on IFN-γ, Il-12, TNF-ï¡ production and B cell activation was restored. In addition, specific antibodies inhibited the induction of Cyp1a1 by B[a]P in lymphocytes and Cyp1b1 in the liver, enzymes that are known to convert the procarcinogen B[a]P to the ultimate DNA-adduct forming metabolite, a major risk factor of chemical carcinogenesis. In order to replace Freund adjuvant and to improve the immunisation strategy in terms of antibody quantity and quality, several adjuvants that are potentially compatible with their use in humans were tested. In combination with Freund adjuvant, the conjugate-vaccine induced high levels of B[a]P-specific antibodies. We showed that all adjuvants tested induced specific antibodies against B[a]P and 7,8-diol-B[a]P, its carcinogenic metabolite. The highest antibody levels were obtained with Quil A, MF-59 and Alum. Biological activity in terms of enhanced retention of B[a]P was confirmed in mice immunised with Quil A, Montanide, Alum and MF-59. Our findings demonstrate that a vaccination against B[a]P is feasible in combination with adjuvants licensed in humans. 
Based on these results and with the current understanding of the mechanisms of chemical carcinogenesis of the ubiquitous carcinogen B[a]P and of the effects of specific antibodies, an immunoprophylactic approach against chemical carcinogenesis is absolutely warranted. Nevertheless, the direct effects of B[a]P-specific antibodies on the different stages of carcinogenesis (e.g. adduct formation) and whether these effects may translate into long-term protective effect against tumourigenesis needs to be proven in further experiments.
No Longer Printing the Legend: The Aporia of Heteronormativity in the American Western (1903-1969)
(2023)
This study critically investigates the U.S.-American Western and its construction of sexuality and gender, revealing that the heteronormative matrix that is upheld and defended in the genre is consistently preceded by the exploration of alternative sexualities and ways to think gender beyond the binary. The endeavor to naturalize heterosexuality seems to be baked in the formula of the U.S.-Western. However, as I show in this study, this endeavor relies on an aporia, because the U.S.-Western can only ever attempt to naturalize gender by constructing it first, hence inevitably and simultaneously construct evidence that supports the opposite: the unnaturalness and contingency of gender and sexuality.
My study relies on the works of Raewyn Connell, Pierre Bourdieu, and Judith Butler, and amalgamates in its methodology established approaches from film and literary studies (i.e., close readings) with a Foucaultian understanding of discourse and discourse analysis, which allows me to relate individual texts to cultural, socio-political and economical contexts that invariably informed the production and reception of any filmic text. In an analysis of 14 U.S.-Westerns (excluding three excursions) that appeared between 1903 and 1969 I give ample and minute narrative and film-aesthetical evidence to reveal the complex and contradictory construction of gender and sexuality in the U.S.-Western, aiming to reveal both the normative power of those categories and its structural instability and inconsistency.
This study proofs that the Western up until 1969 did not find a stable pattern to represent the gender binary. The U.S.-Western is not necessarily always looking to confirm or stabilize governing constructs of (gendered) power. However, it without fail explores and negotiates its legitimacy. Heterosexuality and male hegemony are never natural, self-evident, incontestable, or preordained. Quite conversely: the U.S.-Western repeatedly – and in a surprisingly diverse and versatile way – reveals the illogical constructedness of the heteronormative matrix.
My study therefore offers a fresh perspective on the genre and shows that the critical exploration and negotiation of the legitimacy of heteronormativity as a way to organize society is constitutive for the U.S.-Western. It is the inquiry – not necessarily the affirmation – of the legitimacy of this model that gives the U.S.-Western its ideological currency and significance as an artifact of U.S.-American popular culture.
Data fusions are becoming increasingly relevant in official statistics. The aim of a data fusion is to combine two or more data sources using statistical methods in order to be able to analyse different characteristics that were not jointly observed in one data source. Record linkage of official data sources using unique identifiers is often not possible due to methodological and legal restrictions. Appropriate data fusion methods are therefore of central importance in order to use the diverse data sources of official statistics more effectively and to be able to jointly analyse different characteristics. However, the literature lacks comprehensive evaluations of which fusion approaches provide promising results for which data constellations. Therefore, the central aim of this thesis is to evaluate a concrete plethora of possible fusion algorithms, which includes classical imputation approaches as well as statistical and machine learning methods, in selected data constellations.
To specify and identify these data contexts, data and imputation-related scenario types of a data fusion are introduced: Explicit scenarios, implicit scenarios and imputation scenarios. From these three scenario types, fusion scenarios that are particularly relevant for official statistics are selected as the basis for the simulations and evaluations. The explicit scenarios are the fulfilment or violation of the Conditional Independence Assumption (CIA) and varying sample sizes of the data to be matched. Both aspects are likely to have a direct, that is, explicit, effect on the performance of different fusion methods. The summed sample size of the data sources to be fused and the scale level of the variable to be imputed are considered as implicit scenarios. Both aspects suggest or exclude the applicability of certain fusion methods due to the nature of the data. The univariate or simultaneous, multivariate imputation solution and the imputation of artificially generated or previously observed values in the case of metric characteristics serve as imputation scenarios.
With regard to the concrete plethora of possible fusion algorithms, three classical imputation approaches are considered: Distance Hot Deck (DHD), the Regression Model (RM) and Predictive Mean Matching (PMM). With Decision Trees (DT) and Random Forest (RF), two prominent tree-based methods from the field of statistical learning are discussed in the context of data fusion. However, such prediction methods aim to predict individual values as accurately as possible, which can clash with the primary objective of data fusion, namely the reproduction of joint distributions. In addition, DT and RF only comprise univariate imputation solutions and, in the case of metric variables, artificially generated values are imputed instead of real observed values. Therefore, Predictive Value Matching (PVM) is introduced as a new, statistical learning-based nearest neighbour method, which could overcome the distributional disadvantages of DT and RF, offers a univariate and multivariate imputation solution and, in addition, imputes real and previously observed values for metric characteristics. All prediction methods can form the basis of the new PVM approach. In this thesis, PVM based on Decision Trees (PVM-DT) and Random Forest (PVM-RF) is considered.
The underlying fusion methods are investigated in comprehensive simulations and evaluations. The evaluation of the various data fusion techniques focusses on the selected fusion scenarios. The basis for this is formed by two concrete and current use cases of data fusion in official statistics, the fusion of EU-SILC and the Household Budget Survey on the one hand and of the Tax Statistics and the Microcensus on the other. Both use cases show significant differences with regard to different fusion scenarios and thus serve the purpose of covering a variety of data constellations. Simulation designs are developed from both use cases, whereby the explicit scenarios in particular are incorporated into the simulations.
The results show that PVM-RF in particular is a promising and universal fusion approach under compliance with the CIA. This is because PVM-RF provides satisfactory results for both categorical and metric variables to be imputed and also offers a univariate and multivariate imputation solution, regardless of the scale level. PMM also represents an adequate fusion method, but only in relation to metric characteristics. The results also imply that the application of statistical learning methods is both an opportunity and a risk. In the case of CIA violation, potential correlation-related exaggeration effects of DT and RF, and in some cases also of RM, can be useful. In contrast, the other methods induce poor results if the CIA is violated. However, if the CIA is fulfilled, there is a risk that the prediction methods RM, DT and RF will overestimate correlations. The size ratios of the studies to be fused in turn have a rather minor influence on the performance of fusion methods. This is an important indication that the larger dataset does not necessarily have to serve as a donor study, as was previously the case.
The results of the simulations and evaluations provide concrete implications as to which data fusion methods should be used and considered under the selected data and imputation constellations. Science in general and official statistics in particular benefit from these implications. This is because they provide important indications for future data fusion projects in order to assess which specific data fusion method could provide adequate results along the data constellations analysed in this thesis. Furthermore, with PVM this thesis offers a promising methodological innovation for future data fusions and for imputation problems in general.
Recent non-comparative studies diverge in their assessments of the extent to which German and Japanese post-Cold War foreign policies are characterized by continuity or change. While the majority of analyses on Germany find overall continuity in policies and guiding principles, prominent works on Japan see the country undergoing drastic and fundamental change. Using an explicitly comparative framework for analysis based on a role theoretical approach, this study reevaluates the question of change and continuity in the two countries" regional foreign policies, focusing on the time period from 1990 to 2010. Through a qualitative content analysis of key foreign policy speeches, this dissertation traces and compares German and Japanese national role conceptions (NRCs) by identifying policymakers" perceived duties and responsibilities of their country in international politics. Furthermore, it investigates actual foreign policy behavior in two case studies about German and Japanese policies on missile defense and on textbook disputes. The dissertation examines whether the NRCs identified in the content analysis are useful to understand and explain each country- particular conduct. Both qualitative content analysis and case studies demonstrate the influence of normative and ideational variables in foreign policymaking. Incremental adaptations in foreign policy preferences can be found in Germany as well as Japan, but they are anchored in established normative guidelines and represent attempts to harmonize existing preferences with the conditions of the post-Cold War era. The dissertation argues that scholars have overstated and misconstrued the changes underway by asserting that Japan is undergoing a sweeping transformation in its foreign policy.
Hardware bugs can be extremely expensive, financially. Because microprocessors and integrated circuits have become omnipresent in our daily live and also because of their continously growing complexity, research is driven towards methods and tools that are supposed to provide higher reliability of hardware designs and their implementations. Over the last decade Ordered Binary Decision Diagrams (OBDDs) have been well proven to serve as a data structure for the representation of combinatorial or sequential circuits. Their conciseness and their efficient algorithmic properties are responsible for their huge success in formal verification. But, due to Shannon's counting argument, OBDDs can not always guarantee the concise representation of a given design. In this thesis, Parity Ordered Binary Decision Diagrams are presented, which are a true extension of OBDDs. In addition to the regular branching nodes of an OBDD, functional nodes representing a parity operation are integrated into the data structure, thus resulting in Parity-OBDDs. Parity-OBDDs are more powerful than OBDDs are, but, they are no longer a canonical representation. Besides theoretical aspects of Parity-OBDDs, algorithms for their efficient manipulation are the main focus of this thesis. Furthermore, an analysis on the factors that influence the Parity-OBDD representation size gives way for the development of heuristic algorithms for their minimization. The results of these analyses as well as the efficiency of the data structure are also supported by experiments. Finally, the algorithmic concept of Parity-OBDDs is extended to Mod-p-Decision Diagrams (Mod-p-DDs) for the representation of functions that are defined over an arbitrary finite domain.