### Refine

#### Year of publication

- 2010 (14)

#### Document Type

- Doctoral Thesis (14)

#### Language

- English (14)

#### Keywords

- Stress (4)
- Physiologische Psychologie (3)
- Aerodynamic Design (2)
- Hydrocortison (2)
- Numerische Strömungssimulation (2)
- One-Shot (2)
- Partielle Differentialgleichung (2)
- Sequentielle quadratische Optimierung (2)
- Shape Optimization (2)
- stress (2)

#### Institute

- Mathematik (4)
- Psychologie (4)
- Geographie und Geowissenschaften (2)
- Informatik (2)
- Wirtschaftswissenschaften (2)

For the first time, the German Census 2011 will be conducted via a new method: the register-based census. In contrast to a traditional census, in which all inhabitants are surveyed, the German government will mainly attempt to count individuals using the population registers of administrative authorities, such as the municipalities and the Federal Employment Agency. Census data that cannot be collected from the registers, such as information on education, training, and occupation, will be collected by an interview-based sample survey. Moreover, the new method reduces citizens' obligations to provide information and helps to reduce costs significantly. The use of sample surveys is limited, however, when results with a detailed regional or subject-matter breakdown have to be prepared. Classical estimation methods are sometimes criticized, since estimation is often problematic for small samples. Fortunately, model-based small area estimators serve as an alternative: these methods increase the available information, and hence the effective sample size. In the German Census 2011 it is possible to embed areas on a map in a geographical context, which may offer additional information, such as neighborhood relations or spatial interactions. Standard small area models, like the Fay-Herriot or the Battese-Harter-Fuller model, do not account for such interactions explicitly. The aim of our work is to extend the classical models by integrating the spatial information explicitly into the model. In addition, the possible gain in efficiency will be analyzed.
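For orientation, the kind of extension sketched above can be written down in the standard textbook notation (this is the generic spatially extended Fay-Herriot model, not necessarily the exact specification of the thesis):

```latex
% Area-level Fay-Herriot model for area i with direct estimator \hat{\theta}_i:
\hat{\theta}_i = x_i^{\top}\beta + v_i + e_i,
\qquad v_i \sim N(0, \sigma_v^2), \quad e_i \sim N(0, \psi_i),
% A common spatial extension replaces the independent area effects v_i by a
% simultaneous autoregressive (SAR) process with proximity matrix W:
v = \rho W v + u
\;\;\Longleftrightarrow\;\;
v = (I - \rho W)^{-1} u,
\qquad u \sim N(0, \sigma_u^2 I),
```

where $W$ is a (row-standardized) neighborhood matrix encoding which areas border each other and $\rho$ measures the strength of the spatial interaction; for $\rho = 0$ the classical model is recovered.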

Stress and pain are common experiences in human lives. Both the stress and the pain system have adaptive functions and try to protect the organism in case of harm and danger. At the same time, stress and pain are two of the most challenging problems for society and the health system. Chronic stress, as often seen in modern societies, has a strong impact on health and can lead to chronic stress disorders. These disorders also include a number of chronic pain syndromes. However, pain can also be regarded as a stressor itself, especially when we consider how much patients suffer from long-lasting pain and the impact of pain on quality of life. In this way, the effects of stress on pain can be reinforced. Learning processes such as classical conditioning also play an important role in the generation and manifestation of chronic pain symptoms, and classical conditioning can itself be influenced by stress. These facts illustrate the complex and varied interactions between the pain and the stress systems. Both systems communicate permanently with each other and help to protect the organism and to maintain a homeostatic state. They have various channels of communication, for example mechanisms related to endogenous opioids, immune parameters, glucocorticoids and baroreflexes. But an overactivation of the systems, for example caused by ongoing stress, can lead to severe health problems. Therefore, it is of great importance to understand these interactions and their underlying mechanisms. The present work deals with the relationship between stress and pain. A special focus is put on stress-related hypocortisolism and pain processing, stress-induced hypoalgesia via baroreceptor-related mechanisms, and stress-related cortisol effects on aversive conditioning (as a model of pain learning). This work is a contribution to the wide field of research that tries to understand the complex interactions of stress and pain.
To demonstrate the variety, the selected studies highlight different aspects of these interactions. In the first chapter I will give a short introduction to the pain and the stress systems and their ways of interacting. Furthermore, I will give a short summary of the studies presented in Chapters II to V and their background. The results and their implications for future research will be discussed in the last part of the first chapter. Chronic pain syndromes have been associated with chronic stress and alterations of the HPA axis resulting in chronic hypocortisolism, but whether these alterations play a causal role in the pathophysiology of chronic pain remains unclear. Thus, the study described in Chapter II investigated the effects of pharmacologically induced hypocortisolism on pain perception. Both the stress and the pain system are related to the cardiovascular system: an increase in blood pressure is part of the stress reaction and leads to reduced pain perception. It is therefore important, when using pain tests, to keep in mind potential interference from activation of the cardiovascular system, especially when pain-inhibitory processes are investigated. For this reason we compared two commonly and interchangeably used pain tests with regard to the autonomic reactions they trigger. This study is described in Chapter III. Chapters IV and V deal with the role of learning processes in pain and the related influences of stress. Processes of classical conditioning play an important role in symptom generation and manifestation. In both studies, aversive eyeblink conditioning was used as a model of pain learning. In the study described in Chapter IV we compared classical eyeblink conditioning in healthy volunteers with that in patients suffering from fibromyalgia, a chronic pain disorder. Differences in the HPA axis, as part of the stress system, were also taken into account.
The study of Chapter V investigated effects of the very first phase of the stress reaction, in particular rapid non-genomic cortisol effects; such rapid effects had so far only been demonstrated at the cellular level and in animal behavior. Healthy volunteers received an intravenous cortisol administration immediately before the eyeblink conditioning. In general, the studies presented in this work may give an impression of the broad variety of possible interactions between the pain and the stress system. Furthermore, they contribute to our knowledge about these interactions. However, more research is needed to complete the picture.

The subject of this thesis is a homological approach to the splitting theory of PLS-spaces, i.e. to the question for which topologically exact short sequences 0->X->Y->Z->0 of PLS-spaces X, Y, Z the right-hand map admits a right inverse. We show that the category (PLS) of PLS-spaces and continuous linear maps is an additive category in which every morphism admits a kernel and a cokernel, i.e. it is pre-abelian. However, we also show that it is neither quasi-abelian nor semi-abelian. As a foundation for our homological constructions we prove the more general result that every pre-abelian category admits a largest exact structure in the sense of Quillen. In the pre-abelian category (PLS) this exact structure consists precisely of the topologically exact short sequences of PLS-spaces. Using a construction of Ext-functors due to Yoneda, we show that one can define, for each PLS-space A and every natural number k, the k-th abelian-group-valued covariant and contravariant Ext-functors acting on the category (PLS) of PLS-spaces, which induce for every topologically exact short sequence of PLS-spaces a long exact sequence of abelian groups and group morphisms. These functors are studied in detail and we establish a connection between the Ext-functors of PLS-spaces and the Ext-functors for LS-spaces. Through this connection we arrive at an analogue of a result for Fréchet spaces which connects the first derived functor of the projective limit with the first Ext-functor and also gives sufficient conditions for the vanishing of the higher Ext-functors. Finally, we show that Ext^k(E,F) = 0 for all k greater than or equal to 1 whenever E is a closed subspace and F is a Hausdorff quotient of the space of distributions, which generalizes a result of Wengenroth that is itself a generalization of results due to Domanski and Vogt.
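As a reminder of the objects involved (standard homological algebra, not specific to this thesis), the contravariant long exact sequence induced by a topologically exact sequence 0->X->Y->Z->0 and a PLS-space A has the usual Yoneda-Ext shape:

```latex
0 \to \operatorname{Hom}(Z,A) \to \operatorname{Hom}(Y,A) \to \operatorname{Hom}(X,A)
  \to \operatorname{Ext}^{1}(Z,A) \to \operatorname{Ext}^{1}(Y,A) \to \operatorname{Ext}^{1}(X,A)
  \to \operatorname{Ext}^{2}(Z,A) \to \cdots
```

In particular, taking A = X, the sequence splits whenever $\operatorname{Ext}^{1}(Z,X) = 0$, which is how the vanishing results translate back into splitting statements.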

Legalisation cannot be fully explained by interest politics. If that were the case, attitudes towards legalisation would be expected to be based on objective interests, and actual policies in France and Germany would be expected to be more similar. Nor can it be explained by institutional agency, because there are no hints that states struggle with different normative traditions. Rather, political actors seek to make use of the structures that already exist to guarantee legitimacy for their actions. If the main concern of governmental actors really is to accumulate legitimacy, as stated in the introduction, then politicians have a good starting position in the case of the legalisation of illegal foreigners. Citizens' negative attitudes towards legalisation cannot be explained by imagined labour market competition; income effects play only a secondary role. The most important explanatory factor is the educational level of each individual. It is not objective interests that trigger attitudes towards legalisation, but rather a basic mental predisposition for or against illegal immigrants who are eligible for legalisation. Politics concerning amnesties is thus not tied to an objectively given structure like the socio-economic composition of the electorate, but open to political discretion. Attitudes on legalising illegal immigrants can be regarded as being mediated by beliefs and perceptions, which can be used by political agents or altered by political developments. However, politicians must adhere to a national frame of legitimating strategies that cannot be neglected without consequences. It was evident in the cross-country comparison of political debates that there are national systems of reference that provide patterns of interpretation. Legalisation is seen and incorporated into immigration policy in a very specific way that differs from one country to the next.
In both countries investigated in this study, there are fundamental debates about which basic principles apply to legalisation and which of these should be held in higher esteem: a functioning legal system, humanitarian rights, practical considerations, etc. The results suggest that legalisation is "technicized" in France by describing it as an unusual but possible pragmatic instrument for the adjustment of the inefficient rule of law. In Germany, however, legalisation is discussed on a more normative level. Proponents of conservative immigration policies regard it as a substantial infringement on the rule of law, so that even defenders of a humanitarian solution for illegal immigrants are not able to challenge this view without significant political harm. But the arguments brought to bear in the debate on legalisation are not necessarily sound, because they are not irrefutable facts but instruments to generate legitimacy, and there are ample possibilities for arguing and persuading because socio-economic factors play a minor role. One of the most important arguments, the alleged pull effect of legalisation, has been subjected to an empirical investigation. In the political debate, it does not make any difference whether this effect is real or not, insofar as it is not contested by incontrovertible findings. In reality, the results suggest that amnesties indeed exert a small attracting influence on illegal immigration, which has been contested by immigration-friendly politicians in the French parliament. The effect, however, is not large; therefore, some conservative politicians may put too much stress on this argument. Moreover, one can see legalisation as an instrument to restore legitimacy that has slipped away from immigration politics because of a high number of illegally residing foreigners. This aspect explains some of the peculiarities in the French debate on legalisation, e.g.
the idea that the coherence of the law is secured by creating exceptional rules for legalising illegal immigrants. It has become clear that the politics of legalisation are susceptible to manipulation by introducing certain interpretations into the political debate, which become predominant and supersede other views. In this study, there are no signs of a systematic misuse of this constellation by any particular actor. However, the history of immigration policy is full of examples of symbolic politics in which a certain measure has been initiated while the actors are fully aware of its lack of effect. Legalisation has escaped this fate so far because it is a specific instrument that results from disregarding populist mechanisms rather than an example of a superficial measure. This result does not apply to policies concerning illegal immigration in general, both with regard to concealing a lack of control and flexing the state's muscles.

This thesis introduces a calibration problem for financial market models based on a Monte Carlo approximation of the option payoff and a discretization of the underlying stochastic differential equation. It is desirable to benefit from fast deterministic optimization methods to solve this problem. To achieve this goal, possible non-differentiabilities are smoothed out with an appropriately chosen twice continuously differentiable polynomial. On the basis of the calibration problem derived in this way, this work is essentially concerned with two issues. First, the question arises whether a computed solution of the approximating problem, derived by applying Monte Carlo, discretizing the SDE and preserving differentiability, is an approximation of a solution of the true problem. Unfortunately, this does not hold in general but is linked to certain assumptions. It turns out that uniform convergence of the approximated objective function and its gradient to the true objective and gradient can be shown under typical assumptions, for instance Lipschitz continuity of the SDE coefficients. This uniform convergence then allows convergence of the solutions to be shown in the sense of a first-order critical point. Furthermore, an order of this convergence in relation to the number of simulations, the step size of the SDE discretization and the parameter controlling the smooth approximation of the non-differentiabilities will be derived. Additionally, the uniqueness of a solution of the stochastic differential equation will be analyzed in detail. Secondly, the Monte Carlo method converges only very slowly. The numerical results in this thesis show that the Monte Carlo based calibration is indeed feasible as far as the computed solution is concerned, but the required computation time is too long for practical applications. Thus, techniques to speed up the calibration are strongly desired.
As already mentioned above, the gradient of the objective is a starting point for improving efficiency. Due to its simplicity, finite differencing is a frequently chosen method to calculate the required derivatives. However, finite differencing is well known to be very slow and, moreover, it turns out that severe instabilities may occur during optimization, which can lead to a breakdown of the algorithm before convergence has been reached. A sensitivity equation is certainly an improvement in this respect, but unfortunately suffers from the same computational effort as the finite difference method. Thus, an adjoint-based gradient calculation is the method of choice, as it combines the exactness of the derivative with a reduced computational effort. Furthermore, several other techniques that enhance the efficiency of the calibration algorithm will be introduced throughout this thesis. A multi-layer method is very effective in the case that the chosen initial value is not already close to the solution. Variance reduction techniques help to increase the accuracy of the Monte Carlo estimator and thus allow for fewer simulations. Storing, instead of regenerating, the random numbers required for the Brownian increments in the SDE is efficient, since deterministic optimization methods require the identical random sequence in each function evaluation anyway. Finally, Monte Carlo is very well suited to parallelization, which is carried out on several central processing units (CPUs).
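The C² smoothing of the payoff kink mentioned above can be illustrated with a small sketch. The quartic below is my own illustrative choice, not necessarily the polynomial used in the thesis: it replaces max(x, 0) on a band [-δ, δ] while matching the value, first and second derivative of the outer branches at ±δ, so the resulting payoff is twice continuously differentiable.

```python
def smoothed_max(x, delta):
    """C^2 approximation of max(x, 0): outside [-delta, delta] it is exact;
    inside, a quartic whose value, first and second derivative match the
    outer branches at the band edges +-delta."""
    if x <= -delta:
        return 0.0
    if x >= delta:
        return float(x)
    d = delta
    # p(x) = -x^4/(16 d^3) + 3 x^2/(8 d) + x/2 + 3 d/16
    return -x**4 / (16 * d**3) + 3 * x**2 / (8 * d) + x / 2 + 3 * d / 16

def smoothed_call_payoff(S, K, delta=1e-2):
    """Smoothed European call payoff (S - K)^+, differentiable in the spot S."""
    return smoothed_max(S - K, delta)
```

The maximum error against the true payoff is 3δ/16, attained at the kink, so the smoothed objective converges uniformly to the true one as δ goes to 0, which is the property the convergence analysis above relies on.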

Stress is a common phenomenon for animals living in the wild, but also for humans in modern societies. Originally, the body's stress response is an adaptive reaction to a possibly life-threatening situation, and it has been shown to affect energy distribution and metabolism, thereby increasing the chance of survival. However, stress has also been shown to affect mating behaviour and reproductive strategies in animals and humans. This work deals with the effect of stress on reproductive behavior. Up to now, research has focused only on the effects of stress on reproduction in general. The effects of stress on reproduction may be looked at from two points of view. First, stress affects reproductive functioning through endocrine (e.g. glucocorticoid) actions on the reproductive system. However, stress can also influence reproductive behavior, i.e. mate choice and mating preferences. Animals and humans do not mate randomly, but exhibit preferences towards mating partners. One factor by which animals and humans choose their mating partners is similarity vs. dissimilarity: similar mates usually carry more of one's own genes, and cooperation between similar mates is, at least theoretically, less hampered by diverging behaviors. By mating with dissimilar mates, on the other hand, one may acquire new qualities for oneself, but also for one's offspring, useful for coping with environmental challenges. In humans we usually find a preference for similar mates; due to the high costs of breeding, variables like cooperation and life-long partnerships may play a greater role than the acquisition of new qualities. The present work focuses on stress effects on the mating preferences of humans and will give a first answer to the question of whether stress may affect our preference for similar mates. Stress and mating preferences are at the centre of this work. Thus, in the first chapter I will give an introduction to stress and mating preferences and link these topics to each other.
Furthermore, I will give a short summary of the studies described in Chapter II - Chapter IV and close the chapter with a general discussion of the findings and directions for further research on stress and mating preferences. Human mating behavior is complex, and many aspects of it may relate not to biology but to social conventions and education. This work will not focus on those aspects but rather on the cognitive and affective processing of erotic and sexually relevant stimuli, since we assume that these aspects of mating behaviour are likely related to psychobiological stress mechanisms. Therefore, a paradigm is needed that measures such aspects of mating preferences in humans. The studies presented in Chapter II and Chapter III were performed in order to develop such a paradigm. In these studies we show that affective startle modulation may be used to indicate differences in sexual approach motivation towards potential mating partners with different levels of similarity to the participant. In Chapter IV, I will describe a study that aimed to investigate the effects of stress on human mating preferences. We showed that stress reverses human mating preferences: while unstressed individuals show a preference for similar mates, stressed individuals seem to prefer dissimilar mates. Overall, the studies presented in this work showed that affective startle modulation can be employed to measure mating preferences in humans and that these mating preferences are influenced by stress.

We are living in a connected world, surrounded by interwoven technical systems. Since these systems pervade more and more aspects of our everyday lives, a thorough understanding of their structure and dynamics is becoming increasingly important. However - rather than being blueprinted and constructed at the drawing board - many technical infrastructures, like for example the Internet's global router network, the World Wide Web, large-scale Peer-to-Peer systems or the power grid, evolve in a distributed fashion, beyond the control of a central instance and influenced by various surrounding conditions and interdependencies. Hence, due to this increase in complexity, making statements about the structure and behavior of tomorrow's networked systems is becoming increasingly complicated. A number of failures have shown that complex structures can emerge unintentionally that resemble those observed in biological, physical and social systems. In this dissertation, we investigate how such complex phenomena can be controlled and actively used. For this, we review methodologies stemming from the field of random and complex networks, which are being used for the study of natural, social and technical systems, delivering insights into their structure and dynamics. A particularly interesting finding is the fact that the efficiency, dependability and adaptivity of natural systems can be traced back to rather simple local interactions between a large number of elements. We review a number of interesting findings about the formation of complex structures and collective dynamics and investigate how these are applicable to the design and operation of large-scale networked computing systems. A particular focus of this dissertation is on applications of principles and methods stemming from the study of complex networks in distributed computing systems that are based on overlay networks.
Here we argue how the fact that the (virtual) connectivity in such systems is alterable and largely independent of physical limitations facilitates a design that is based on analogies between complex network structures and phenomena studied in statistical physics. Based on results about the properties of scale-free networks, we present a simple membership protocol by which scale-free overlay networks with an adjustable degree distribution exponent can be created in a distributed fashion. With this protocol we further exemplify how phase transition phenomena - as they occur frequently in the domain of statistical physics - can actively be used to quickly adapt macroscopic statistical network parameters which are known to massively influence the stability and performance of networked systems. In the case considered in this dissertation, the adaptation of the degree distribution exponent of a random, scale-free overlay allows - within critical regions - a change of relevant structural and dynamical properties. As such, the proposed scheme allows sound statements to be made about the relation between the local behavior of individual nodes and large-scale properties of the resulting complex network structures. For systems in which the degree distribution exponent cannot easily be derived, for example from local protocol parameters, we further present a distributed, probabilistic mechanism which can be used to monitor a network's degree distribution exponent and thus to reason about important structural qualities. Finally, the dissertation shifts its focus towards the study of complex, non-linear dynamics in networked systems. We consider a message-based protocol which - based on the Kuramoto model for coupled oscillators - achieves a stable, global synchronization of periodic heartbeat events. The protocol's performance and stability are evaluated in different network topologies.
We further argue that - based on existing findings about the interrelation between spectral network properties and the dynamics of coupled oscillators - the proposed protocol makes it possible to monitor structural properties of networked computing systems. An important aspect of this dissertation is its interdisciplinary approach towards a sensible and constructive handling of complex structures and collective dynamics in networked systems. The associated investigation of distributed systems from the perspective of non-linear dynamics and statistical physics highlights interesting parallels to both biological and physical systems. This foreshadows systems whose structures and dynamics can be analyzed and understood in the conceptual frameworks of statistical physics and complex systems.
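The Kuramoto dynamics underlying the heartbeat-synchronization protocol can be sketched in a few lines. This is a toy simulation of the plain Kuramoto model on a graph, not the message-based protocol of the dissertation; graph, coupling strength and frequency spread are illustrative choices.

```python
import cmath
import math
import random

def kuramoto_step(theta, omega, adj, K, dt):
    """One explicit Euler step of the Kuramoto model on a graph:
    d(theta_i)/dt = omega_i + (K / deg_i) * sum_{j in adj[i]} sin(theta_j - theta_i)."""
    new_theta = []
    for i, th in enumerate(theta):
        coupling = sum(math.sin(theta[j] - th) for j in adj[i])
        deg = max(len(adj[i]), 1)
        new_theta.append(th + dt * (omega[i] + (K / deg) * coupling))
    return new_theta

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r close to 1 means the
    oscillators are (almost) in phase."""
    z = sum(cmath.exp(1j * th) for th in theta) / len(theta)
    return abs(z)

random.seed(0)
n = 20
adj = [[j for j in range(n) if j != i] for i in range(n)]  # complete graph
theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
omega = [random.gauss(0.0, 0.1) for _ in range(n)]  # similar natural frequencies
r_start = order_parameter(theta)
for _ in range(2000):
    theta = kuramoto_step(theta, omega, adj, K=2.0, dt=0.05)
r_end = order_parameter(theta)  # well above the incoherent starting value
```

With coupling well above the critical value, the order parameter climbs from its incoherent starting level towards 1; monitoring how quickly it does so is precisely the kind of observable that links the dynamics back to spectral network properties.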

Aggression is one of the most researched topics in psychology. This is understandable, since aggressive behavior does a lot of harm to individuals and groups. A lot is already known about the biology of aggression, but one system that seems to be of vital importance in animals has largely been overlooked: the hypothalamic-pituitary-adrenal (HPA) axis. Menno Kruk and József Haller and their research teams developed rodent models of adaptive, normal, and abnormal aggressive behavior. They found acute HPA axis (re)activity, but also chronic basal levels, to be causally relevant in the elicitation and escalation of aggressive behavior. As a mediating variable, changes in the processing of relevant social information are proposed, although this could not be tested in animals. In humans, not a lot of research has been done, but there is evidence for associations of both acute and basal cortisol levels with (abnormal) aggression. However, few of these studies have been experimental in nature. Our aim was to add to the understanding of the role of both basal chronic levels of HPA axis activity and acute levels in the formation of aggressive behavior. Therefore, we conducted two experiments, both with healthy student samples. In both studies we induced aggression with a well-validated paradigm from social psychology: the Taylor Aggression Paradigm. Half of the subjects, however, only went through a non-provoking control condition. We measured basal trait levels of HPA axis activity on three days prior to the experiment. We took several cortisol samples before, during, and after the task. After the induction of aggression, we measured the behavioral and electrophysiological brain response to relevant social stimuli, i.e., emotional facial expressions embedded in an emotional Stroop task. In the second study, we pharmacologically manipulated cortisol levels 60 min before the beginning of the experiment.
To do so, half of the subjects were administered 20 mg of hydrocortisone, which elevates circulating cortisol levels (cortisol group); the other half were administered a placebo (placebo group). Results showed that acute HPA axis activity is indeed relevant for aggressive behavior. In Study 1 we found a difference in cortisol levels after the aggression induction in the provoked group compared to the non-provoked group (i.e., a heightened reactivity of the HPA axis). However, this could not be replicated in Study 2. Furthermore, the pharmacological elevation of cortisol levels led to an increase in aggressive behavior in women compared to the placebo group. There were no effects in men, so that while men were significantly more aggressive than women in the placebo group, they were equally aggressive in the cortisol group. Furthermore, there was an interaction of cortisol treatment with block of the Taylor Aggression Paradigm, in that the cortisol group was significantly more aggressive in the third block of the task. Concerning basal HPA axis activity, we found an effect on aggressive behavior in both studies, albeit more consistently in women and in the provoked and non-provoked groups. However, the effect was not apparent in the cortisol group. After the aggressive encounter, information processing patterns were changed in the provoked compared to the non-provoked group for all facial expressions, especially anger. These results indicate that the HPA axis plays an important role in the formation of aggressive behavior in humans as well. Importantly, different changes within the system, be they basal or acute, are associated with the same outcome in this task. More studies are needed, however, to better understand the role that each plays in different kinds of aggressive behavior, and the role information processing plays as a possible mediating variable. This extensive knowledge is necessary for better behavioral interventions.

Recently, optimization has become an integral part of the aerodynamic design process chain. However, because of uncertainties with respect to the flight conditions as well as geometrical uncertainties, a design optimized by a traditional design optimization method seeking only optimality may not achieve its expected performance. Robust optimization deals with optimal designs that are robust with respect to small (or even large) perturbations of the optimization setpoint conditions. The resulting optimization tasks become much more complex than the usual single-setpoint case, so that efficient and fast algorithms need to be developed in order to identify, quantify and include the uncertainties in the overall optimization procedure. In this thesis, a novel approach towards stochastic, distributed, aleatory uncertainties for the specific application of optimal aerodynamic design under uncertainty is presented. In order to include the uncertainties in the optimization, robust formulations of the general aerodynamic design optimization problem based on probabilistic models of the uncertainties are discussed. Three classes of formulations of the aerodynamic shape optimization problem are identified: the worst-case, the chance-constrained and the semi-infinite formulation. Since the worst-case formulation may lead to overly conservative designs, the focus of this thesis is on the chance-constrained and semi-infinite formulations. A key issue is then to propagate the input uncertainties through the system to obtain statistics of the quantities of interest, which are used as a measure of robustness in both robust counterparts of the deterministic optimization problem. Due to the highly nonlinear underlying design problem, uncertainty quantification methods are used in order to approximate and consequently simplify the problem to a solvable optimization task.
Computationally demanding evaluations of high-dimensional integrals arise from the direct approximation of statistics as well as from uncertainty quantification approximations. To overcome the curse of dimensionality, sparse grid methods in combination with adaptive refinement strategies are applied. The reduction of the number of discretization points is an important issue in the context of robust design, since the computational effort of the numerical quadrature is incurred in every iteration of the optimization algorithm. In order to efficiently solve the resulting optimization problems, algorithmic approaches based on multiple-setpoint ideas in combination with one-shot methods are presented. A parallelization approach is provided to cope with the additional computational effort introduced by multiple-setpoint optimization problems. Finally, the developed methods are applied to 2D and 3D Euler and Navier-Stokes test cases, verifying their industrial usability and reliability. Numerical results of robust aerodynamic shape optimization under uncertain flight conditions as well as geometrical uncertainties are presented. Further, uncertainty quantification methods are used to investigate the influence of geometrical uncertainties on quantities of interest in a 3D test case. The results demonstrate the significant effect of uncertainties in the context of aerodynamic design and thus the need for robust design to ensure good performance under real-life conditions. The thesis proposes a general framework for robust aerodynamic design that attacks the additional computational complexity of the treatment of uncertainties, thus making robust design in this sense possible.
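The two robust counterparts favored above have a compact generic form (textbook notation with symbols of my own choosing, not copied from the thesis): design variable $y$, uncertain parameter $s$ with given distribution, objective $f$ and constraint $g$.

```latex
% Chance-constrained formulation: optimize the expectation, require the
% constraint to hold with probability at least 1 - \varepsilon
\min_{y}\; \mathbb{E}_{s}\bigl[f(y,s)\bigr]
\quad \text{s.t.} \quad
\mathbb{P}\bigl(g(y,s) \le 0\bigr) \ge 1 - \varepsilon .
% Semi-infinite formulation: require the constraint for every realization
% in a prescribed uncertainty set S
\min_{y}\; \mathbb{E}_{s}\bigl[f(y,s)\bigr]
\quad \text{s.t.} \quad
g(y,s) \le 0 \quad \forall\, s \in S .
```

Both counterparts replace the single deterministic constraint by a statistic over the uncertainty, which is why propagating the input uncertainties through the flow solver, and the sparse-grid quadrature needed to do so cheaply, sits at the heart of every optimization iteration.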

In this thesis, three studies investigating the impact of stress on the protective startle eye-blink reflex are reported. In the first study, a decrease in prepulse inhibition of the startle reflex was observed after intravenous administration of low-dose cortisol. In the second study, a decrease in startle reflex magnitude was observed after pharmacological suppression of endogenous cortisol production. In the third study, a higher startle reflex magnitude was observed at reduced arterial and central venous blood pressure. These results can be interpreted in terms of an adaptation to hostile environments.

Large-scale non-parametric applied shape optimization for computational fluid dynamics is considered. When a shape optimization problem is treated as a standard optimal control problem by means of a parameterization, the Lagrangian usually requires knowledge of the partial derivative of the shape parameterization and deformation chain with respect to the input parameters. For a variety of reasons, this mesh sensitivity Jacobian is usually quite problematic. For a sufficiently smooth boundary, the Hadamard theorem provides a gradient expression that exists on the surface alone, completely bypassing the mesh sensitivity Jacobian. Building upon this, the gradient computation becomes independent of the number of design parameters, and all surface mesh nodes are used as design unknowns in this work, effectively allowing a free morphing of shapes during optimization. In contrast to a parameterized shape optimization problem, where a smooth surface is usually created independently of the input parameters by construction, regularity is not preserved automatically in the non-parametric case. As part of this work, the shape Hessian is used in an approximate Newton method, also known as the Sobolev method or gradient smoothing, to ensure a certain regularity of the updates; thus a smooth shape is preserved while at the same time the one-shot optimization method is accelerated considerably. For PDE-constrained shape optimization, the Hessian is usually a pseudo-differential operator. Fourier analysis is used to identify the operator symbol both analytically and discretely. Preconditioning the one-shot optimization by an appropriate Hessian symbol is shown to greatly accelerate the optimization. Since the correct discretization of the Hadamard form usually requires evaluating certain surface quantities such as tangential divergence and curvature, special attention is also given to discrete differential geometry on triangulated surfaces for evaluating shape gradients and Hessians.
The Hadamard formula and Hessian approximations are applied to a variety of flow situations. In addition to shape optimization of internal and external flows, the major focus lies on aerodynamic design, such as optimizing two-dimensional airfoils and three-dimensional wings. Shock waves form when the local speed of sound is reached, and the gradient must be evaluated correctly at discontinuous states. To ensure proper shock resolution, an adaptive multi-level optimization of the Onera M6 wing is conducted using more than 36,000 shape unknowns on a standard office workstation, demonstrating the applicability of the shape one-shot method to industry-size problems.
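The gradient-smoothing idea behind the Sobolev method can be sketched in one dimension (a minimal illustration on a closed curve with a periodic finite-difference Laplacian; the thesis works with surface operators and Hessian symbols, not this toy setup):

```python
import numpy as np

def sobolev_smooth(grad, eps=1.0):
    """Sobolev gradient smoothing sketch: solve (I - eps*Lap) g_s = g
    so that high-frequency components of the raw shape gradient are
    damped and the resulting shape update stays smooth."""
    n = len(grad)
    # second-difference Laplacian on a periodic 1D "surface"
    lap = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    lap[0, -1] = lap[-1, 0] = 1.0  # periodic closure
    return np.linalg.solve(np.eye(n) - eps * lap, grad)
```

Because the operator acts as identity on constants and strictly damps every oscillatory mode, a noisy raw gradient comes out with smaller total variation while its mean component passes through unchanged, which is exactly the regularity-preserving effect described above.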

This work addresses the algorithmic tractability of hard combinatorial problems. Basically, we consider \NP-hard problems, for which no polynomial time algorithm is known. Several algorithmic approaches already exist to deal with this dilemma, among them (randomized) approximation algorithms and heuristics. Even though in practice they often run in reasonable time, they usually do not return an optimal solution. If we insist on optimality, only two methods remain: exponential time algorithms and parameterized algorithms. In the first approach, we seek to design algorithms consuming exponentially many steps that are more clever than the trivial algorithm (which simply enumerates all solution candidates). Typically, the naive enumerative approach yields an algorithm with run time $\Oh^*(2^n)$. So, the general task is to construct algorithms obeying a run time of the form $\Oh^*(c^n)$ with $c<2$. The second approach considers an additional parameter $k$ besides the input size $n$. This parameter should provide more information about the problem and capture a typical characteristic. The standard parameterization is to see $k$ as an upper (lower, resp.) bound on the solution size in case of a minimization (maximization, resp.) problem. A parameterized algorithm should then solve the problem in time $f(k)\cdot n^\beta$, where $\beta$ is a constant and $f$ is independent of $n$. In principle, this method aims to restrict the combinatorial difficulty of the problem to the parameter $k$ (if possible). The basic hypothesis is that $k$ is small with respect to the overall input size. In both fields, a frequent standard technique is the design of branching algorithms. These algorithms solve the problem by traversing the solution space in a clever way.
They frequently select an entity of the input and create two new subproblems: one where this entity is considered part of the future solution and another where it is excluded. In both cases, fixing this entity may fix other entities as well. If so, the number of traversed candidate solutions is smaller than the whole solution space. The visited solutions can be arranged like a search tree. To estimate the run time of such algorithms, a method is needed to obtain tight upper bounds on the size of the search trees. In the field of exponential time algorithms, a powerful technique called Measure&Conquer has been developed for this purpose. It has been applied successfully to many problems, especially to problems where other algorithmic attacks could not break the trivial run time upper bound. In the field of parameterized algorithms, on the other hand, Measure&Conquer is almost unknown. This work presents examples where the technique can be used in that field, and points out what adaptations have to be made in order to apply it successfully. Further, exponential time algorithms for hard problems to which Measure&Conquer is applied are presented. Another aspect is that a formalization (and generalization) of the notion of a search tree is given. It is shown that for certain problems such a formalization is extremely useful.
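A standard textbook instance of such a branching algorithm (given here as a generic illustration, not as an algorithm from this thesis) is the bounded search tree for $k$-Vertex Cover under the standard parameterization:

```python
def vertex_cover(edges, k):
    """Bounded search tree for k-Vertex Cover.

    Pick any remaining edge (u, v); at least one endpoint must belong
    to the cover, so branch on taking u and on taking v.  The search
    tree has depth <= k and branching factor 2, giving a parameterized
    algorithm with run time O*(2^k)."""
    if not edges:
        return set()   # every edge is covered
    if k == 0:
        return None    # budget exhausted but uncovered edges remain
    u, v = edges[0]
    for w in (u, v):
        rest = [e for e in edges if w not in e]  # edges covered by w vanish
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None        # no cover of size <= k exists
```

Fixing one endpoint removes all edges incident to it, so other choices are implicitly fixed along each branch; Measure&Conquer-style analyses refine exactly this kind of run time bound by measuring how much such a branching step really shrinks the instance.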

N-acetylation by N-acetyltransferase 1 (NAT1) is an important biotransformation pathway of the human skin, involved in the deactivation of the arylamine and well-known contact allergen para-phenylenediamine (PPD). Here, NAT1 expression and activity were analyzed in antigen-presenting cells (monocyte-derived dendritic cells, MoDCs, a model for epidermal Langerhans cells) and human keratinocytes. The latter were used to study exogenous and endogenous modulations of NAT1 activity. Within this thesis, MoDCs were found to express metabolically active NAT1. Activities were between 23.4 and 26.6 nmol/mg/min and thus comparable to peripheral blood mononuclear cells. These data suggest that epidermal Langerhans cells contribute to the cutaneous N-acetylation capacity. Keratinocytes, which are known for their efficient N-acetylation, were analyzed in a comparative study using primary keratinocytes (NHEK) and different shipments of the immortalized keratinocyte cell line HaCaT, in order to investigate the ability of the cell line to model epidermal biotransformation. N-acetylation of the substrate para-aminobenzoic acid (PABA) was 3.4-fold higher in HaCaT compared to NHEK and varied between the HaCaT shipments (range 12.0–44.5 nmol/mg/min). Since B[a]P-induced cytochrome P450 1 (CYP1) activities were also higher in HaCaT compared to NHEK, the cell line can be considered an in vitro tool to qualitatively model epidermal metabolism with regard to NAT1 and CYP1. The HaCaT shipment with the highest NAT1 activity showed only minimal reduction of cell viability after treatment with PPD and was subsequently used to study interactions between NAT1 and PPD in keratinocytes. Treatment with PPD induced expression of cyclooxygenases (COX) in HaCaT, but in parallel, PPD N-acetylation was found to saturate with increasing PPD concentration. This saturation explains the observed PPD-induced COX induction despite the high N-acetylation capacities.
A detailed analysis of the effect of PPD on NAT1 revealed that the saturation of PPD N-acetylation was caused by a PPD-induced decrease of NAT1 activity. This inhibition was found in HaCaT as well as in primary keratinocytes after treatment with PPD and PABA. Regarding the mechanism, reduced NAT1 protein levels together with unaffected NAT1 mRNA expression after PPD treatment provided clear evidence of substrate-dependent NAT1 downregulation. These results expand the existing knowledge about substrate-dependent NAT1 downregulation to human epithelial skin cells and demonstrate that NAT1 activity in keratinocytes can be modulated by exogenous factors. Further analysis of HaCaT cells from different shipments revealed an accelerated progression through the cell cycle in HaCaT cells with high NAT1 activities. These findings suggest an association between NAT1 and proliferation in keratinocytes, as has been proposed earlier for tumor cells. In conclusion, the N-acetylation capacities of MoDCs as well as keratinocytes contribute to the overall N-acetylation capacity of human skin. NAT1 activity of keratinocytes, and consequently the detoxification capacity of human skin, can be modulated exogenously by the presence of NAT1 substrates and endogenously by the cell proliferation status of keratinocytes.

Mechanical and Biological Treatment (MBT) generally aims to reduce the amount of solid waste and emissions in landfills and to enhance recovery. MBT technology has been studied in various countries in Europe and Asia, and techniques of solid waste treatment differ distinctly between the study areas. A better understanding of MBT waste characteristics can lead to an optimization of the MBT technology. For sustainable waste management, it is essential to determine the characteristics of the final MBT waste, the effectiveness of the treatment system, and the potential application of the final material regarding future utilization. This study aims to define and compare the characteristics of the final MBT materials in three countries: Luxembourg (using a highly advanced technology, at Fridhaff in Diekirch/Erpeldange), Germany (using a well-regulated technology, at Singhofen in the Rhein-Lahn district) and Thailand (using a low-cost technology, at Phitsanulok in Phitsanulok province). The three countries were chosen for this comparative study due to their distinctive approaches to MBT implementation. The samples were taken from the composting heaps of the final treatment process prior to sending them to landfills, using a standard random sampling strategy from August 2008 onwards. The samples were reduced to manageable sizes before characterization; this size reduction was achieved by the quartering method. The samples were first analyzed for the size fraction on the day of collection. They were screened into three fractions by dry sieving: small, with a diameter of <10 mm; medium, with a diameter of 10-40 mm; and large, with a diameter of >40 mm. These fractions were further analyzed for physical and chemical parameters such as particle size distribution (divided into 12 size fractions), particle shape, porosity, composition, water content, water retention capacity and respiratory activity.
The extracted eluate was analyzed for pH value, heavy metals (lead, cadmium and arsenic), chemical oxygen demand, ammonium, sulfate and chloride. In order to describe and evaluate the potential application of the small-size material as a final cover of landfills, the small-size samples were also tested for geotechnical properties: compaction, permeability and shear strength. A detailed description of the treatment facilities and methods of the study areas is included in the results. The samples from the three countries are visibly smaller than waste without pretreatment; the maximum particle size is found to be less than 100 mm, and the samples range from dust to coarse fractions. The share of the small fraction (<10 mm diameter) was highest in the sample from Germany (average 60% by weight), second highest in the sample from Luxembourg (average 43% by weight) and lowest in the sample from Thailand (average 15% by weight). The content of biodegradable material generally increased with decreasing particle size. Primary components are organics, plastics, fibrous materials and inert materials (glass and ceramics); the percentage of each component greatly depends on the MBT process of each country. Other important characteristics are significantly reduced water content, reduced total organic carbon and a reduced potential of heavy metals. The geotechnical results show that the small fraction compacts well, has a low permeability and contains a large amount of water-adsorbing material. The utilization of MBT material in this study shows a good trend, as it proved to be a safe material with very low loadings and concentrations of chemical oxygen demand, ammonium, and heavy metals. The organic part can be developed into a soil conditioner. The material is also suitable as a bio-filter layer in the final cover of a landfill or as a temporary cover during the MBT process.
This study showed how the most appropriate technology for municipal solid waste disposal can be identified through waste characterization.