The world's second-oldest system of judicial review of national legislation emerged through court practice in the very first years after the adoption of the Constitution of Norway in 1814. The review is exercised by the ordinary courts at all levels, with the single Supreme Court as the last instance. No specialized constitutional court has been established. The independence of the judiciary is generally recognized as high. But what degree of legitimacy should judges, appointed to handle ordinary judicial business, enjoy when exercising a basically political function like reviewing and possibly setting aside acts of Parliament?
Sample surveys are a widely used and cost-effective tool to gain information about a population under consideration. Nowadays, there is an increasing demand for information not only at the population level but also at the level of subpopulations. For some of these subpopulations of interest, however, very small subsample sizes might occur, so that the application of traditional estimation methods is not expedient. In order to provide reliable information also for those so-called small areas, small area estimation (SAE) methods combine auxiliary information and the sample data via a statistical model.
The present thesis deals, among other aspects, with the development of highly flexible and realistic small area models. For this purpose, the penalized spline method is suitably modified, which allows the model parameters to be determined by solving an unconstrained optimization problem. Within this optimization framework, shape constraints are incorporated into the modeling process as additional linear inequality constraints on the optimization problem. This results in small area estimators that allow for both the utilization of the penalized spline method as a highly flexible modeling technique and the incorporation of arbitrary shape constraints on the underlying P-spline function.
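In simplified single-covariate form, the shape-constrained estimation problem described here can be summarized as a quadratic program; the notation below (basis matrix B, difference-penalty matrix D_q, constraint matrix C) is generic and not taken from the thesis:

```latex
\min_{\beta \in \mathbb{R}^K} \; \| y - B\beta \|_2^2 + \lambda\, \beta^\top D_q^\top D_q \beta
\quad \text{subject to} \quad C\beta \ge 0,
```

where B contains the B-spline basis evaluations, D_q is a q-th order difference matrix penalizing roughness, and C encodes shape constraints such as monotonicity (e.g., first-order differences of the coefficients) as linear inequalities.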
In order to incorporate multiple covariates, a tensor product approach is employed to extend the penalized spline method to multiple input variables. This leads to high-dimensional optimization problems for which naive solution algorithms yield unjustifiable runtime and memory requirements. By exploiting the underlying tensor structure, the present thesis provides computationally efficient solution algorithms for the considered optimization problems, together with the related memory-efficient, i.e., matrix-free, implementations. The crucial ingredient is the (repeated) application of a matrix-free conjugate gradient method, whose runtime is drastically reduced by a matrix-free multigrid preconditioner.
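As an illustration of the matrix-free idea, the following sketch runs a conjugate gradient solver that accesses a Kronecker-structured operator only through its action; it omits the multigrid preconditioner that the thesis identifies as essential, and the operator is a generic 2D tensor-product example rather than the actual P-spline system:

```python
import numpy as np

def tridiag(n):
    """SPD second-difference matrix, a stand-in for a univariate penalty block."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def cg_matrix_free(apply_A, b, tol=1e-8, max_iter=2000):
    """Conjugate gradient for A x = b where the SPD matrix A is available
    only through its action apply_A(x); A itself is never formed."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Tensor-product operator A = kron(I, A1) + kron(A2, I), applied matrix-free:
n1, n2 = 50, 60
A1, A2 = tridiag(n1), tridiag(n2)
def apply_A(x):
    X = x.reshape(n2, n1)
    return (X @ A1.T + A2 @ X).ravel()
b = np.random.default_rng(0).standard_normal(n1 * n2)
x = cg_matrix_free(apply_A, b)
print(np.linalg.norm(apply_A(x) - b))   # small residual
```

The full (n1*n2) x (n1*n2) matrix never appears in memory; each operator application costs only two small matrix products.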
In this thesis, we consider the solution of high-dimensional optimization problems with an underlying low-rank tensor structure. Due to the exponentially increasing computational complexity in the number of dimensions—the so-called curse of dimensionality—they present a considerable computational challenge and become infeasible even for moderate problem sizes.
Multilinear algebra and tensor numerical methods have a wide range of applications in the fields of data science and scientific computing. Due to the typically large problem sizes in practical settings, efficient methods, which exploit low-rank structures, are essential. In this thesis, we consider an application each in both of these fields.
Tensor completion, i.e., the imputation of unknown values in partially known multiway data, is an important problem that appears in statistics, mathematical imaging science and data science. Under the assumption of redundancy in the underlying data, this is a well-defined problem, and methods of mathematical optimization can be applied to it.
Due to the fact that tensors of fixed rank form a Riemannian submanifold of the ambient high-dimensional tensor space, Riemannian optimization is a natural framework for these problems, which is both mathematically rigorous and computationally efficient.
We present a novel Riemannian trust-region scheme, which compares favourably with the state of the art on selected application cases and outperforms known methods on some test problems.
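A bare-bones impression of optimization on the fixed-rank manifold, here for the matrix (order-2 tensor) case: a gradient step on the observed entries followed by a retraction via truncated SVD. This is a crude stand-in for the Riemannian trust-region scheme of the thesis, not the method itself:

```python
import numpy as np

def completion_step(X, M, mask, rank):
    """One step of min 0.5*||P_mask(X - M)||^2 over rank-`rank` matrices:
    ambient gradient step (step size 1), then retraction by truncated SVD."""
    Y = X - mask * (X - M)
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]   # back to the manifold

rng = np.random.default_rng(1)
M = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))  # rank-3 truth
mask = (rng.random(M.shape) < 0.4).astype(float)                 # 40% observed
X = np.zeros_like(M)
for _ in range(300):
    X = completion_step(X, M, mask, rank=3)
print(np.linalg.norm(X - M) / np.linalg.norm(M))   # small relative error
```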
Optimization problems governed by partial differential equations form an area of scientific computing which has applications in a variety of fields, ranging from physics to financial mathematics. Due to the inherent high dimensionality of optimization problems arising from discretized differential equations, these problems present computational challenges, especially in the case of three or more dimensions. An even more challenging class of optimization problems has operators of integral instead of differential type in the constraint. These operators are nonlocal and therefore lead to large, dense discrete systems of equations. We present a novel solution method, based on separation of spatial dimensions and provably low-rank approximation of the nonlocal operator. Our approach allows the solution of multidimensional problems with a complexity which is only slightly larger than linear in the univariate grid size; this improves the state of the art for a particular test problem by at least two orders of magnitude.
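The effect of separating the spatial dimensions can be sketched as follows: if the discretized nonlocal operator admits a low-rank separated form, it can be applied with one small matrix per dimension instead of one dense matrix on the full grid. The rank-R form below is a generic assumption for illustration:

```python
import numpy as np

n, R = 100, 4                       # univariate grid size, separation rank
rng = np.random.default_rng(2)
K1 = [rng.standard_normal((n, n)) for _ in range(R)]   # factors, dimension 1
K2 = [rng.standard_normal((n, n)) for _ in range(R)]   # factors, dimension 2
U = rng.standard_normal((n, n))     # grid function u(x1, x2)

# Dense application of sum_r kron(K2_r, K1_r) costs O(n^4) time and memory;
# the separated form costs O(R n^3) time and O(R n^2) memory:
V = sum(K2[r] @ U @ K1[r].T for r in range(R))

# sanity check against the dense operator on a small subgrid
m = 8
dense = sum(np.kron(K2[r][:m, :m], K1[r][:m, :m]) for r in range(R))
Vm = sum(K2[r][:m, :m] @ U[:m, :m] @ K1[r][:m, :m].T for r in range(R))
assert np.allclose(dense @ U[:m, :m].ravel(), Vm.ravel())
```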
The dissertation deals with methods to improve design-based and model-assisted estimation techniques for surveys in a finite population framework. The focus is on the development of the statistical methodology as well as on its implementation by means of tailor-made numerical optimization strategies. In that regard, the developed methods aim at computing statistics for several potentially conflicting variables of interest at aggregated and disaggregated levels of the population on the basis of one single survey. The work can be divided into two main research questions, which are briefly explained in the following sections.
First, an optimal multivariate allocation method is developed taking into account several stratification levels. This approach results in a multi-objective optimization problem due to the simultaneous consideration of several variables of interest. In preparation for the numerical solution, several scalarization and standardization techniques are presented, which represent the different preferences of potential users. In addition, it is shown that by solving the problem scalarized with a weighted sum for all combinations of weights, the entire Pareto frontier of the original problem can be generated. By exploiting the special structure of the problem, the scalarized problems can be efficiently solved by a semismooth Newton method. In order to apply this numerical method to other scalarization techniques as well, an alternative approach is suggested, which traces the problem back to the weighted sum case. To address regional estimation quality requirements at multiple stratification levels, the potential use of upper bounds for regional variances is integrated into the method. In addition to restrictions on regional estimates, the method enables the consideration of box-constraints for the stratum-specific sample sizes, allowing minimum and maximum stratum-specific sampling fractions to be defined.
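For the weighted-sum scalarization, the solution has a closed Neyman-type form in the simplest setting (one stratification level, variance of the stratified estimator without finite population correction, and none of the box-constraints or regional bounds treated in the thesis); sweeping the weights then traces the Pareto frontier. A sketch under these simplifying assumptions:

```python
import numpy as np

def weighted_sum_allocation(N, S, w, n_total):
    """Neyman-type allocation minimizing sum_j w_j * sum_h N_h^2 S_hj^2 / n_h
    subject to sum_h n_h = n_total (no fpc, no box constraints).
    N: (H,) stratum sizes; S: (H, J) stratum std devs; w: (J,) weights."""
    S_combined = np.sqrt((S**2) @ w)     # per-stratum combined spread
    a = N * S_combined
    return n_total * a / a.sum()

N = np.array([5000.0, 3000.0, 2000.0])
S = np.array([[10.0, 1.0], [4.0, 8.0], [6.0, 3.0]])   # two conflicting variables
for lam in np.linspace(0.0, 1.0, 5):
    n_h = weighted_sum_allocation(N, S, np.array([lam, 1.0 - lam]), 500.0)
    variances = ((N**2 * S.T**2) / n_h).sum(axis=1)   # objective per variable
    print(np.round(n_h, 1), np.round(variances))
```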
In addition to the allocation method, a generalized calibration method is developed, which is designed to achieve coherent and efficient estimates at different stratification levels. The developed calibration method takes into account a very large number of benchmarks at different stratification levels, which may be obtained from different sources such as registers, paradata or other surveys using different estimation techniques. In order to accommodate the heterogeneous quality and the multitude of benchmarks, a relaxation of selected benchmarks is proposed. In that regard, predefined tolerances are assigned to problematic benchmarks at low aggregation levels so that they do not have to be met exactly. In addition, the generalized calibration method allows the use of box-constraints for the correction weights in order to avoid an extremely high variation of the weights. Furthermore, a variance estimation by means of a rescaling bootstrap is presented.
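The basic calibration idea, before adding the relaxation and box-constraints developed in the thesis, can be illustrated with classical raking to two sets of marginal benchmarks (iterative proportional fitting); the category labels below are invented for the example:

```python
import numpy as np

def rake(d, rows, cols, row_totals, col_totals, iters=50):
    """Adjust design weights d so that weighted category counts match two
    sets of marginal benchmarks (classical raking)."""
    w = d.astype(float).copy()
    for _ in range(iters):
        for k, t in enumerate(row_totals):   # match first set of margins
            m = rows == k
            w[m] *= t / w[m].sum()
        for k, t in enumerate(col_totals):   # match second set of margins
            m = cols == k
            w[m] *= t / w[m].sum()
    return w

rng = np.random.default_rng(3)
rows = rng.integers(0, 2, 400)               # e.g., sex (2 categories)
cols = rng.integers(0, 3, 400)               # e.g., region (3 categories)
d = np.full(400, 25.0)                       # design weights, population 10,000
w = rake(d, rows, cols, [5200.0, 4800.0], [3000.0, 3500.0, 3500.0])
print(w.min(), w.max())   # unbounded here; the thesis adds box-constraints
```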
Both developed methods are analyzed and compared with existing methods in extensive simulation studies on the basis of a realistic synthetic data set of all households in Germany. Due to the similar requirements and objectives, both methods can be successively applied to a single survey in order to combine their efficiency advantages. In addition, both methods can be solved in a time-efficient manner using very comparable optimization approaches. These are based on transformations of the optimality conditions. The dimension of the resulting system of equations is ultimately independent of the dimension of the original problem, which enables the application even for very large problem instances.
Optimal Control of Partial Integro-Differential Equations and Analysis of the Gaussian Kernel
(2018)
An important field of applied mathematics is the simulation of complex financial, mechanical, chemical, physical or medical processes with mathematical models. In addition to the pure modeling of the processes, the simultaneous optimization of an objective function by changing the model parameters is often the actual goal. Models in fields such as finance, biology or medicine benefit from this optimization step.
While many processes can be modeled using an ordinary differential equation (ODE), partial differential equations (PDEs) are needed to optimize heat conduction and flow characteristics, the spreading of tumor cells in tissue as well as option prices. A partial integro-differential equation (PIDE) is a partial differential equation involving an integral operator, e.g., the convolution of the unknown function with a given kernel function. PIDEs occur, for example, in models that simulate adhesive forces between cells or option prices with jumps.
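The convolution term of such a PIDE can be approximated on a uniform grid in a few lines; the sample function and parameters below are arbitrary:

```python
import numpy as np

# Discrete approximation of (k * u)(x) = integral of k(x - y) u(y) dy
# with a Gaussian kernel k, as in the integral term of a PIDE.
n, L, sigma = 400, 10.0, 0.5
x = np.linspace(-L, L, n)
h = x[1] - x[0]
u = np.exp(-x**2)                                        # sample function
k = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
conv = np.convolve(u, k, mode="same") * h                # quadrature-weighted
```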
In each of the two parts of this thesis, a certain PIDE is the main object of interest. In the first part, we study a semilinear PIDE-constrained optimal control problem with the aim to derive necessary optimality conditions. In the second, we analyze a linear PIDE that includes the convolution of the unknown function with the Gaussian kernel.
Recently, optimization has become an integral part of the aerodynamic design process chain. However, because of uncertainties with respect to the flight conditions and geometrical uncertainties, a design optimized by a traditional design optimization method seeking only optimality may not achieve its expected performance. Robust optimization deals with optimal designs which are robust with respect to small (or even large) perturbations of the optimization setpoint conditions. The resulting optimization tasks become much more complex than the usual single-setpoint case, so that efficient and fast algorithms need to be developed in order to identify, quantify and include the uncertainties in the overall optimization procedure. In this thesis, a novel approach towards stochastically distributed aleatory uncertainties for the specific application of optimal aerodynamic design under uncertainties is presented. In order to include the uncertainties in the optimization, robust formulations of the general aerodynamic design optimization problem based on probabilistic models of the uncertainties are discussed. Three classes of formulations, the worst-case, the chance-constrained and the semi-infinite formulation, of the aerodynamic shape optimization problem are identified. Since the worst-case formulation may lead to overly conservative designs, the focus of this thesis is on the chance-constrained and semi-infinite formulation. A key issue is then to propagate the input uncertainties through the systems to obtain statistics of quantities of interest, which are used as a measure of robustness in both robust counterparts of the deterministic optimization problem. Due to the highly nonlinear underlying design problem, uncertainty quantification methods are used in order to approximate and consequently simplify the problem to a solvable optimization task. Computationally demanding evaluations of high-dimensional integrals resulting from the direct approximation of statistics as well as from uncertainty quantification approximations arise. To overcome the curse of dimensionality, sparse grid methods in combination with adaptive refinement strategies are applied. The reduction of the number of discretization points is an important issue in the context of robust design, since the computational effort of the numerical quadrature comes up in every iteration of the optimization algorithm. In order to efficiently solve the resulting optimization problems, algorithmic approaches based on multiple-setpoint ideas in combination with one-shot methods are presented. A parallelization approach is provided to overcome the additional computational effort involved in multiple-setpoint optimization problems. Finally, the developed methods are applied to 2D and 3D Euler and Navier-Stokes test cases verifying their industrial usability and reliability. Numerical results of robust aerodynamic shape optimization under uncertain flight conditions as well as geometrical uncertainties are presented. Further, uncertainty quantification methods are used to investigate the influence of geometrical uncertainties on quantities of interest in a 3D test case. The results demonstrate the significant effect of uncertainties in the context of aerodynamic design and thus the need for robust design to ensure a good performance in real-life conditions.
The thesis proposes a general framework for robust aerodynamic design attacking the additional computational complexity of the treatment of uncertainties, thus making robust design in this sense possible.
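A chance constraint of the kind discussed above requires the probability of constraint satisfaction under the uncertain setpoint. A plain Monte Carlo sketch (the thesis uses adaptive sparse-grid quadrature instead, and the `drag` surrogate below is entirely hypothetical):

```python
import numpy as np

def drag(design, xi):
    """Hypothetical cheap surrogate for an expensive CFD evaluation;
    xi perturbs the flight condition (e.g., the Mach number)."""
    return 0.01 + design**2 + 0.05 * xi**2

def chance_constraint_holds(design, samples, limit, level=0.95):
    """Estimate P[drag(design, xi) <= limit] >= level by Monte Carlo."""
    vals = np.array([drag(design, xi) for xi in samples])
    return np.mean(vals <= limit) >= level

rng = np.random.default_rng(4)
samples = rng.normal(0.0, 0.02, size=2000)   # sampled setpoint perturbations
print(chance_constraint_holds(design=0.1, samples=samples, limit=0.025))
```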
Automata theory is the study of abstract machines. It is a theory in theoretical computer science and discrete mathematics. The word automata (the plural of automaton) comes from a Greek word meaning "self-acting". Automata theory is closely related to formal language theory [99, 101]; the theory of formal languages constitutes the backbone of the field now generally known as theoretical computer science. This thesis aims to introduce a few types of automata and to study the classes of languages recognized by them. Chapter 1 is the road map, with introduction and preliminaries. In Chapter 2 we consider a few formal languages associated with graphs that have Eulerian trails, and we place in the Chomsky hierarchy a few languages that combine further properties with the Eulerian property. In Chapter 3 we consider jumping finite automata, i.e., finite automata in which the input head, after reading and consuming a symbol, can jump to an arbitrary position of the remaining input. We characterize the class of languages described by jumping finite automata in terms of special shuffle expressions and survey other equivalent notions from the existing literature. We also characterize some superclasses of this language class. In Chapter 4 we introduce boustrophedon finite automata, i.e., finite automata working on rectangular arrays (i.e., pictures) in a boustrophedon mode, and returning finite automata, which read the input line after line but, unlike boustrophedon finite automata, do not change direction, i.e., they always read from left to right, line after line. We provide close relationships with the well-established class of regular matrix (array) languages and sketch possible applications to character recognition and kolam patterns. Chapter 5 deals with general boustrophedon finite automata and general returning finite automata that read with different scanning strategies; we show that all 32 different variants describe only two different classes of array languages. We also introduce Mealy machines working on pictures and show how these can be used in a modular design of picture-processing devices. In Chapter 6 we compare three different types of regular grammars for array languages introduced in the literature, namely regular matrix grammars, (regular : regular) array grammars, and isometric regular array grammars, together with variants thereof, focusing on hierarchical questions. We also refine the presentation of (regular : regular) array grammars in order to clarify the interrelations. Chapter 7 provides further directions of research arising from the studies in the individual chapters.
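Since a jumping finite automaton consumes its input in an arbitrary order, acceptance depends only on the multiset of input symbols. A small search-based acceptance check for a toy JFA (the dictionary encoding of the transition relation is an ad hoc choice for this sketch):

```python
from collections import Counter
from functools import lru_cache

# Toy JFA accepting words with equally many a's and b's: a non-regular
# language, illustrating that JFAs exceed ordinary finite automata.
delta = {("q0", "a"): {"q1"}, ("q1", "b"): {"q0"}}
start, finals = "q0", {"q0"}

def jfa_accepts(word):
    @lru_cache(maxsize=None)
    def reach(state, remaining):
        if not remaining:
            return state in finals
        rem = dict(remaining)
        for sym in list(rem):
            for nxt in delta.get((state, sym), ()):
                nxt_rem = dict(rem)
                nxt_rem[sym] -= 1
                if nxt_rem[sym] == 0:
                    del nxt_rem[sym]
                if reach(nxt, tuple(sorted(nxt_rem.items()))):
                    return True
        return False
    return reach(start, tuple(sorted(Counter(word).items())))

print(jfa_accepts("abba"), jfa_accepts("aab"))   # True False
```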
Every action we perform, no matter how simple or complex, has a cognitive representation. It is commonly assumed that these are organized hierarchically. Thus, the representation of a complex action consists of multiple simpler actions. The representation of a simple action, in turn, consists of stimulus, response, and effect features. These are integrated into one representation upon the execution of an action and can be retrieved if a feature is repeated. Depending on whether retrieved features match or only partially match the current action episode, this might benefit or impair the execution of a subsequent action. This pattern of costs and benefits results in binding effects that indicate the strength of common representation between features. Binding effects occur also in more complex actions: Multiple simple actions seem to form representations on a higher level through the integration and retrieval of sequentially given responses, resulting in so-called response-response binding effects. This dissertation aimed to investigate what factors determine whether simple actions form more complex representations. The first line of research (Articles 1-3) focused on dissecting the internal structure of simple actions. Specifically, I investigated whether the spatial relation of stimuli, responses, or effects, that are part of two different simple actions, influenced whether these simple actions are represented as one more complex action. The second line of research (Articles 2, 4, and 5) investigated the role of context on the formation and strength of more complex action representations. Results suggest that spatial separation of responses as well as context might affect the strength of more complex action representations. In sum, findings help to specify assumptions on the structure of complex action representations. However, it may be important to distinguish factors that influence the strength and structure of action representations from factors that terminate action representations.
This dissertation is dedicated to the analysis of the stability of portfolio risk and the impact of European regulation introducing risk-based classifications for investment funds.
The first paper examines the relationship between portfolio size and the stability of mutual fund risk measures, presenting evidence for economies of scale in risk management. In a unique sample of 338 fund portfolios, we find that the volatility of risk numbers decreases for larger funds. This finding holds for dispersion as well as for tail risk measures. Further analyses across asset classes provide evidence for the robustness of the effect for balanced and fixed income portfolios. However, a size effect did not emerge for equity funds, suggesting that equity fund managers simply scale their strategy up as they grow. Analyses of the differences in risk stability between tail risk measures and volatilities reveal that smaller funds show higher discrepancies in that respect. In contrast to the majority of prior studies, which rely on ex-post time series risk numbers, this study contributes to the literature by using ex-ante risk numbers based on the actual assets and de facto portfolio data.
The second paper examines the influence of European legislation regarding risk classification of mutual funds. We conduct analyses on a set of worldwide equity indices and find that a strategy based on the long-term volatility, as imposed by the Synthetic Risk Reward Indicator (SRRI), would lead to substantial variations in exposures, ranging from short phases of very high leverage to long periods of underinvestment that would be required to keep the risk classes. In some cases, funds will be forced to migrate to higher risk classes due to limited means to reduce volatilities after crisis events. In other cases they might have to migrate to lower risk classes or increase their leverage to ridiculous amounts. Overall, we find that if the SRRI creates a binding mechanism for fund managers, it will create substantial interference with the core investment strategy and may incur substantial deviations from it. Furthermore, due to the forced migrations, the SRRI degenerates into a passive indicator.
The third paper examines the impact of this volatility-based fund classification on portfolio performance. Using historical data on equity indices, we initially find that a strategy based on long-term portfolio volatility, as imposed by the Synthetic Risk Reward Indicator (SRRI), yields better Sharpe Ratios (SRs) and Buy-and-Hold Returns (BHRs) for the investment strategies matching the risk classes. Accounting for the Fama-French factors reveals no significant alphas for the vast majority of the strategies. In our simulation study, where volatility was modelled through a GJR(1,1) model, we find no significant difference in mean returns, but significantly lower SRs for the volatility-based strategies. These results were confirmed in robustness checks using alternative models and timeframes. Overall, we present evidence suggesting that neither the higher leverage induced by the SRRI nor the potential protection in downside markets pays off on a risk-adjusted basis.
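For concreteness, the SRRI maps a fund's annualized volatility (computed from five years of weekly returns) to one of seven classes; the bucket boundaries below follow the CESR guidelines as commonly cited and should be checked against the original guideline document before reuse:

```python
import numpy as np

UPPER = [0.5, 2.0, 5.0, 10.0, 15.0, 25.0]   # class upper bounds in percent; class 7 above 25%

def srri_class(weekly_returns):
    """Map annualized volatility of weekly returns to an SRRI class 1-7."""
    vol = np.std(weekly_returns, ddof=1) * np.sqrt(52) * 100.0
    return int(np.searchsorted(UPPER, vol)) + 1

r = np.random.default_rng(5).normal(0.001, 0.02, size=260)   # 5 years, weekly
print(srri_class(r))   # about 14% annualized volatility, hence class 5
```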
This work is concerned with two kinds of objects: regular expressions and finite automata. These formalisms describe regular languages, i.e., sets of strings that share a comparatively simple structure. Such languages - and, in turn, expressions and automata - are used in the description of textual patterns, workflow and dependence modeling, or formal verification. Testing words for membership in any given such language can be implemented using a fixed, i.e., finite, amount of memory, which is conveyed by the phrase "finite automaton". In this aspect they differ from more general classes, which require potentially unbounded memory but have the potential to model less regular, i.e., more involved, objects. Other than expressions and automata, there are several further formalisms to describe regular languages. These formalisms are all equivalent, and conversions among them are well known. However, expressions and automata are arguably the notions which are used most frequently: regular expressions come naturally to humans as a way to express patterns, while finite automata translate immediately into efficient data structures. This raises interest in methods to translate between the two notions efficiently. In particular, the direction from expressions to automata, or from human input to machine representation, is of great practical relevance. Probably the most frequent application that involves regular expressions and finite automata is pattern matching in static text and streaming data. Common tools to locate instances of a pattern in a text are the grep application or its (many) derivatives, as well as awk, sed and lex. Notice that these programs accept slightly more general patterns, namely "POSIX expressions". Concerning streaming data, regular expressions are nowadays used to specify filter rules in routing hardware. These applications have in common that an input pattern is specified in the form of a regular expression, while the execution applies a finite automaton. As it turns out, the effort that is necessary to describe a regular language, i.e., the size of the descriptor, varies with the chosen representation. For example, in the case of regular expressions and finite automata, it is rather easy to see that any regular expression can be converted to a finite automaton whose size is linear in that of the expression. For the converse direction, however, it is known that there are regular languages for which the size of the smallest describing expression is exponential in the size of the smallest describing automaton. This brings us to the subject at the core of the present work: we investigate conversions between expressions and automata and take a closer look at the properties that exert an influence on the relative sizes of these objects. We refer to the aspects involved in these considerations under the titular term of Relative Descriptional Complexity.
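The linear-size direction from expressions to automata mentioned above is realized by Thompson's classical construction. A compact sketch for expressions over single letters with union, concatenation, and star, given in pre-parsed tuple form to avoid writing a parser:

```python
# Thompson construction: an epsilon-NFA of size linear in the expression.
# Expressions: 'a' (letter), ('|', e1, e2), ('.', e1, e2), ('*', e).

def thompson(expr):
    trans, counter = {}, [0]
    def new_state():
        counter[0] += 1
        return counter[0] - 1
    def add(s, a, t):
        trans.setdefault((s, a), set()).add(t)   # a = None means epsilon
    def build(e):
        start, end = new_state(), new_state()
        if isinstance(e, str):
            add(start, e, end)
        elif e[0] == '|':
            for sub in e[1:]:
                s, t = build(sub)
                add(start, None, s); add(t, None, end)
        elif e[0] == '.':
            s1, t1 = build(e[1]); s2, t2 = build(e[2])
            add(start, None, s1); add(t1, None, s2); add(t2, None, end)
        elif e[0] == '*':
            s, t = build(e[1])
            add(start, None, s); add(t, None, end)
            add(start, None, end); add(t, None, s)
        return start, end
    start, end = build(expr)
    return trans, start, end

def accepts(trans, start, end, word):
    def closure(states):
        stack, seen = list(states), set(states)
        while stack:
            q = stack.pop()
            for r in trans.get((q, None), ()):
                if r not in seen:
                    seen.add(r); stack.append(r)
        return seen
    cur = closure({start})
    for a in word:
        cur = closure({r for q in cur for r in trans.get((q, a), ())})
    return end in cur

nfa = thompson(('.', ('*', ('|', 'a', 'b')), 'a'))   # (a|b)*a
print(accepts(*nfa, "abba"), accepts(*nfa, "abb"))   # True False
```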
On the Influence of Ignored Stimuli: Generalization and Application of Distractor-Response Binding.
(2011)
In selection tasks where target stimuli are accompanied by distractors, responses, target stimuli, and distractor stimuli can be encoded together as one episode in memory. Subsequent repetition of any aspect of such an episode can lead to the retrieval of the whole episode, including the response. Thus, repeating a distractor can retrieve responses given to previous targets; this mechanism was labeled distractor-response binding and has been evidenced in several visual setups. Three experiments of the present thesis implemented a priming paradigm with an identification task to generalize this mechanism to auditory and tactile stimuli as well as to stimulus concepts. In four further experiments, the possible effect of distractor-response binding on drivers' reactions was investigated. The same paradigm was implemented using more complex stimuli, foot responses, go/no-go responses, and a dual-task setup with head-up and head-down displays. The results indicate that distractor-response binding effects occur with auditory and tactile stimuli and that the process is mediated by a conceptual representation of the distractor stimuli. Distractor-response binding effects also emerged for stimuli, responses, and framework conditions likely to occur in a driving situation. It can be concluded that the effect of distractor-response binding needs to be taken into account in the design of local danger warnings in driver assistance systems.
This thesis deals with economic aspects of employees' sickness. In addition to the classical case of sickness absence, in which an employee is completely unable to work and hence stays at home, there is the case of sickness presenteeism, in which the employee comes to work despite being sick. Accordingly, the thesis at hand covers research on both sickness states, absence and presenteeism. The first section covers sickness absence and labour market institutions. Chapter 2 presents theoretical and empirical evidence that differences in the social norm against benefit fraud, so-called benefit morale, can explain cross-country diversity in the generosity of statutory sick pay entitlements between developed countries. In our political economy model, a stricter benefit morale reduces the absence rate, with counteracting effects on the politically set sick pay replacement rate. On the one hand, less absence caused by a stricter norm makes the tax-financed insurance cheaper, leading to the usual demand-side effect and hence to more generous sick pay entitlements. On the other hand, being less likely to be absent due to a stricter norm, voters prefer a smaller fee over more insurance. We document both effects in a sample of 31 developed countries, capturing the years from 1981 to 2010. In Chapter 3 we investigate the relationship between the existence of works councils and illness-related absence and its consequences for plants. Using individual data from the German Socio-Economic Panel (SOEP), we find that the existence of a works council is positively correlated with the incidence and the annual duration of absence. Additionally, linked employer-employee data (LIAB) suggests that employers are more likely to expect personnel problems due to absence in plants with a works council. In western Germany, we find significant effects using a difference-in-differences approach, which can be causally interpreted. The second part of this thesis covers two studies on sickness presenteeism. In Chapter 4, we empirically investigate the determinants of the annual duration of sickness presenteeism using the European Working Conditions Survey (EWCS). Work autonomy, workload and tenure are positively related to the number of sickness presenteeism days, while a good working environment comes with less presenteeism. In Chapter 5 we theoretically and empirically analyze sickness absence and presenteeism behaviour with a focus on their interdependence. Specifically, we ask whether work-related factors lead to a substitutive, a complementary or no relationship between sickness absence and presenteeism. In other words, we want to know whether changes in absence and presenteeism behaviour induced by work-related characteristics point in opposite directions (substitutive), in the same direction (complementary), or whether they only affect either one of the two sickness states (no relationship). Our theoretical model shows that the relationship between sickness absence and presenteeism with regard to work-related characteristics is not necessarily of a substitutive nature. Instead, a complementary or no relationship can emerge as well. Turning to the empirical investigation, we find that only one out of 16 work-related factors, namely the supervisor status, leads to a substitutive relationship between absence and presenteeism. Few of the other determinants are complements, while the large majority is related either to sickness absence or to presenteeism.
The main achievement of this thesis is an analysis of the accuracy of computations with Loader's algorithm for the binomial density. This analysis could later be used to establish a theorem about the numerical accuracy of algorithms that compute rectangle probabilities for scan statistics of a multinomially distributed random variable. An example that illustrates the practical use of probabilities for scan statistics is the following, which arises in epidemiology: Let n patients arrive at a clinic in d = 365 days, each patient arriving with probability 1/d on each of these d days, independently of all other patients. Knowing the probability that there exist 3 adjacent days in which, in total, more than k patients arrive helps to decide, after observing data, whether there is a cluster which we would not expect to have occurred randomly and for which we suspect there must be a reason. Formally, this epidemiological example can be described by a multinomial model. Since multinomially distributed random variables are examples of Markov increments, a fact already used implicitly by Corrado (2011) to compute the distribution function of the multinomial maximum, we can use a generalized version of Corrado's algorithm to compute the probability described in our example. To compute its result, the algorithm for rectangle probabilities for Markov increments always uses transition probabilities of the corresponding Markov chain. In the multinomial case, these transition probabilities are binomial probabilities. Therefore, we analyze the accuracy of Loader's algorithm for the binomial density, which is used, for example, by the statistical software R. With the help of accuracy bounds for the binomial density, we would be able to derive accuracy bounds for the computation of rectangle probabilities for scan statistics of multinomially distributed random variables. To assess how sharp the derived accuracy bounds are, they can be compared in examples with rigorous upper and lower bounds obtained by interval-arithmetic computations.
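The epidemiological example can be checked directly by simulation; a Monte Carlo sketch (the exact computation via Markov-increment recursions discussed above is the thesis's actual subject):

```python
import numpy as np

def scan_exceedance_prob(n, d, window, k, reps=10000, seed=0):
    """Monte Carlo estimate of the probability that some `window` adjacent
    days receive more than k of the n patients in total, with patients
    spread uniformly and independently over d days (multinomial model)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        counts = rng.multinomial(n, np.full(d, 1.0 / d))
        csum = np.concatenate(([0], np.cumsum(counts)))
        window_sums = csum[window:] - csum[:-window]   # all adjacent windows
        hits += window_sums.max() > k
    return hits / reps

print(scan_exceedance_prob(n=100, d=365, window=3, k=5))
```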
Traditionally, random sample surveys are designed so that national statistics can be estimated reliably with adequate precision. For this purpose, design-based, model-assisted estimation methods are primarily used, which rest largely on asymptotic properties. For smaller sample sizes, as encountered for small areas (domains or subpopulations), these estimation methods are not well suited, which is why special model-based small area estimation methods have been developed for this application. The latter may exhibit bias, but often have a smaller mean squared error of estimation than design-based estimators. Model-assisted and model-based methods have in common that they rest on statistical models, although to different degrees. Model-assisted methods are usually constructed so that the contribution of the model is small for very large sample sizes (and vanishes in the limit). In model-based methods, the model always plays a central role, regardless of the sample size. These considerations illustrate that the assumed model, or more precisely the quality of the modeling, is of decisive importance for the quality of the small area statistics. If the empirical data cannot be described by a suitable model and estimated with the corresponding methods, massive biases and/or inefficient estimates can result.
The present work addresses the central question of the robustness of small area estimation methods. Statistical methods are called robust if they have a bounded influence function and a breakdown point that is as high as possible. Put simply, robust methods are distinguished by being only marginally affected by outliers and other anomalies in the data. The robustness study concentrates on the following models and estimation methods:
i) model-based estimators for the Fay-Herriot model (Fay and Herriot, 1979, J. Amer. Statist. Assoc.) and the basic unit-level model (cf. Battese et al., 1988, J. Amer. Statist. Assoc.);
ii) direct, model-assisted estimators under the assumption of a linear regression model.
The unit-level model for mean estimation rests on a linear mixed Gaussian model (mixed linear model, MLM) with block-diagonal covariance matrix. In contrast to, for example, a multiple linear regression model, MLMs possess no noteworthy invariance properties, so that contamination of the dependent variable unavoidably leads to biased parameter estimates. For the maximum likelihood method, the resulting bias can become almost arbitrarily large. For this reason, Richardson and Welsh (1995, Biometrics) developed the robust estimation methods RML 1 and RML 2, which exhibit only a small bias for contaminated data and are considerably more efficient than the maximum likelihood method. A variant of RML 2 was proposed by Sinha and Rao (2009, Canad. J. Statist.) for the robust estimation of unit-level models. However, the numerical procedures commonly used to compute the RML 2 method (this also applies to the proposal of Sinha and Rao) prove to be notoriously unreliable. In this work, the convergence problems of the existing procedures are first discussed, and then a numerical method with considerably better numerical properties is proposed. Finally, the proposed estimation method is examined in a simulation study and illustrated with an empirical example on the estimation of above-ground biomass in Norwegian municipalities.
The Fay-Herriot model can be regarded as a special case of an MLM with block-diagonal covariance matrix, although the variances of the random effects for the small areas need not be estimated but are treated as known quantities. This property can be exploited to transfer the robustification of the unit-level model proposed by Sinha and Rao (2009) directly to the Fay-Herriot model. In the present work, however, an alternative proposal is developed, starting from the following observation: Fay and Herriot (1979) motivated their model as a generalization of the James-Stein estimator, making use of an empirical Bayes approach. We take up this motivation of the problem and formulate an analogous robust Bayesian procedure. If, in the robust Bayesian problem formulation, the least favorable distribution of Huber (1964, Ann. Math. Statist.) is chosen as the prior distribution for the location values of the small areas, the resulting Bayes estimator [= the estimator with the smallest Bayes risk] is the limited translation rule (LTR) of Efron and Morris (1971, J. Amer. Statist. Assoc.). In the context of frequentist statistics, the limited translation rule cannot be used because it (as a Bayes estimator) depends on unknown parameters. The unknown parameters can, however, be estimated following the empirical Bayes approach from the marginal distribution of the dependent variables. It must be noted here (and this has been neglected in the literature) that the marginal distribution under the least favorable prior is not a normal distribution but is described by Huber's (1964) least favorable distribution. It is then not surprising that the maximum likelihood estimators of the regression coefficients and the model variance under the marginal distribution are M-estimators with the Huber psi-function.
Our theory-driven derivation of robust estimators for the Fay-Herriot model shows that, for contaminated data, the estimated LTR (with parameters estimated by the M-estimation methodology) is optimal, and that the LTR is an integral part of the estimation methodology (and is not to be regarded as an "add-on" or the like, as is done elsewhere). The proposed M-estimators are robust in the presence of atypical small areas (outliers), as the simulation and case studies also show. To achieve robustness also in the presence of influential observations in the independent variables, generalized M-estimators for the Fay-Herriot model were developed.
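Huber's psi-function mentioned above, and the location M-estimator it induces, can be stated in a few lines (a generic illustration, not the Fay-Herriot implementation of this work):

```python
import numpy as np

def huber_psi(x, k=1.345):
    """Huber's psi-function: identity in the center, clipped in the tails."""
    return np.clip(x, -k, k)

def m_location(y, k=1.345, iters=50):
    """Location M-estimator with Huber psi, scale fixed by the normalized MAD,
    computed by iteratively reweighted means."""
    mu = np.median(y)
    s = 1.4826 * np.median(np.abs(y - mu))       # robust scale estimate
    for _ in range(iters):
        r = (y - mu) / s
        w = np.where(r == 0.0, 1.0, huber_psi(r, k) / r)   # Huber weights
        mu = np.sum(w * y) / np.sum(w)
    return mu

y = np.concatenate([np.random.default_rng(6).normal(0, 1, 95), [50.0] * 5])
print(np.mean(y), m_location(y))   # the M-estimate shrugs off the 5 outliers
```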
In this thesis we study structure-preserving model reduction methods for the efficient and reliable approximation of dynamical systems. A major focus is the approximation of a nonlinear flow problem on networks, which can, e.g., be used to describe gas network systems. Our proposed approximation framework guarantees so-called port-Hamiltonian structure and is general enough to be realizable by projection-based model order reduction combined with complexity reduction. We divide the discussion of the flow problem into two parts, one concerned with the linear damped wave equation and the other one with the general nonlinear flow problem on networks.
The study around the linear damped wave equation relies on a Galerkin framework, which allows for convenient network generalizations. Notable contributions of this part are the profound analysis of the algebraic setting after space discretization in relation to the infinite-dimensional setting and its implications for model reduction. In particular, this includes the discussion of differential-algebraic structures associated with the network character of our problem and the derivation of compatibility conditions related to fundamental physical properties. Amongst the different model reduction techniques, we consider the moment matching method to be a particularly well-suited choice in our framework.
The Galerkin framework is then appropriately extended to our general nonlinear flow problem. Crucial supplementary concepts are required for the analysis, such as the partial Legendre transform and a more careful discussion of the underlying energy-based modeling. The preservation of the port-Hamiltonian structure after the model-order-reduction and complexity-reduction steps represents a major focus of this work. Similarly to the analysis of the model order reduction, compatibility conditions play a crucial role in the analysis of our complexity reduction, which relies on a quadrature-type ansatz. Furthermore, energy-stable time-discretization schemes are derived for our port-Hamiltonian approximations, as structure-preserving methods from the literature are not applicable due to our rather unconventional parametrization of the solution.
Apart from the port-Hamiltonian approximation of the flow problem, another topic of this thesis is the derivation of a new extension of moment matching methods from linear systems to quadratic-bilinear systems. Most system-theoretic reduction methods for nonlinear systems rely on multivariate frequency representations. Our approach instead uses univariate frequency representations tailored towards user-defined families of inputs. Then moment matching corresponds to a one-dimensional interpolation problem rather than to a multi-dimensional interpolation as for the multivariate approaches, i.e., it involves fewer interpolation frequencies to be chosen. The notion of signal-generator-driven systems, variational expansions of the resulting autonomous systems as well as the derivation of convenient tensor-structured approximation conditions are the main ingredients of this part. Notably, our approach allows for the incorporation of general input relations in the state equations, not only affine-linear ones as in existing system-theoretic methods.
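For the linear case, one-sided moment matching can be sketched via a rational Krylov projection at a single shift s0 (a generic textbook construction, not the signal-generator-driven approach of this thesis):

```python
import numpy as np

def moment_matching_rom(A, B, C, s0, r):
    """Order-r reduced model matching the first r moments of
    H(s) = C (sI - A)^{-1} B around the shift s0 (Galerkin projection
    onto a rational Krylov subspace, with Gram-Schmidt orthogonalization)."""
    n = A.shape[0]
    v = np.linalg.solve(s0 * np.eye(n) - A, B)
    V = [v / np.linalg.norm(v)]
    for _ in range(r - 1):
        w = np.linalg.solve(s0 * np.eye(n) - A, V[-1])
        for v in V:
            w = w - (v @ w) * v
        V.append(w / np.linalg.norm(w))
    V = np.stack(V, axis=1)
    return V.T @ A @ V, V.T @ B, C @ V

rng = np.random.default_rng(7)
n, r, s0 = 100, 8, 1.0
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B, C = rng.standard_normal(n), rng.standard_normal(n)
Ar, Br, Cr = moment_matching_rom(A, B, C, s0, r)
H = lambda s: C @ np.linalg.solve(s * np.eye(n) - A, B)
Hr = lambda s: Cr @ np.linalg.solve(s * np.eye(r) - Ar, Br)
print(abs(H(1.01) - Hr(1.01)))   # transfer functions agree near s0
```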
This work studies typical mathematical challenges occurring in the modeling and simulation of manufacturing processes of paper or industrial textiles. In particular, we consider three topics: approximate models for the motion of small inertial particles in an incompressible Newtonian fluid, effective macroscopic approximations for a dilute particle suspension contained in a bounded domain accounting for a non-uniform particle distribution and particle inertia, and possibilities for a reduction of computational cost in the simulations of slender elastic fibers moving in a turbulent fluid flow.
We consider the full particle-fluid interface problem given in terms of the Navier-Stokes equations coupled to momentum equations of a small rigid body. By choosing an appropriate asymptotic scaling for the particle-fluid density ratio and using an asymptotic expansion for the solution components, we derive approximations of the original interface problem. The approximate systems differ according to the chosen scaling of the density ratio in their physical behavior allowing the characterization of different inertial regimes.
We extend the asymptotic approach to the case of many particles suspended in a Newtonian fluid. Under specific assumptions on the combination of particle size and particle number, we derive asymptotic approximations of this system. The approximate systems describe the particle motion, which allows a mean-field approach to be used in order to formulate the continuity equation for the particle probability density function. The coupling of the latter with the approximation of the fluid momentum equation then reveals a macroscopic suspension description which accounts for non-uniform particle distributions in space and for small particle inertia.
A slender fiber in a turbulent air flow can be modeled as a stochastic inextensible one-dimensionally parametrized Kirchhoff beam, i.e., by a stochastic partial differential algebraic equation. Its simulations involve the solution of large nonlinear systems of equations by Newton's method. In order to decrease the computational time, we explore different methods for the estimation of the solution. Additionally, we apply smoothing techniques to the Wiener process in order to regularize the stochastic force driving the fiber, exploring their respective impact on the solution and on performance. We also explore the applicability of the Wiener chaos expansion as a solution technique for the simulation of the fiber dynamics.
Let K be a compact subset of the complex plane. Then the family of polynomials P is dense in A(K), the space of all continuous functions on K that are holomorphic on the interior of K, endowed with the uniform norm, if and only if the complement of K is connected. This is the statement of Mergelyan's celebrated theorem.
There are, however, situations where not all polynomials are required to approximate every f ∈ A(K) but where there are strict subspaces of P that are still dense in A(K). If, for example, K is a singleton, then the subspace of all constant polynomials is dense in A(K). On the other hand, if 0 is an interior point of K, then no strict subspace of P can be dense in A(K).
In between these extreme cases, the situation is much more complicated. It turns out that it is mostly determined by the geometry of K and its location in the complex plane which subspaces of P are dense in A(K). In Chapter 1, we give an overview of the known results.
Our first main theorem, which we will give in Chapter 3, deals with the case where the origin is not an interior point of K. We will show that if K is a compact set with connected complement and if 0 is not an interior point of K, then any subspace Q ⊂ P which contains the constant functions and all but finitely many monomials is dense in A(K).
There is a close connection between lacunary approximation and the theory of universality. At the end of Chapter 3, we will illustrate this connection by applying the above result to prove the existence of certain universal power series. To be specific, if K is a compact set with connected complement, if 0 is a boundary point of K and if A_0(K) denotes the subspace of A(K) of those functions that satisfy f(0) = 0, then there exists an A_0(K)-universal formal power series s, where A_0(K)-universal means that the family of partial sums of s forms a dense subset of A_0(K).
In addition, we will show that no formal power series is simultaneously universal for all such K.
The condition on the subspace Q in the main result of Chapter 3 is quite restrictive, but this should not be too surprising: The result applies to the largest possible class of compact sets.
In Chapter 4, we impose a further restriction on the compact sets under consideration, and this will allow us to weaken the condition on the subspace Q. The result that we are going to give is similar to one of those presented in the first chapter, namely the one due to Anderson. In his article “Müntz-Szasz type approximation and the angular growth of lacunary integral functions”, he gives a criterion for a subspace Q of P to be dense in A(K) where K is entirely contained in some closed sector with vertex at the origin.
We will consider compact sets with connected complement that are -- with the possible exception of the origin -- entirely contained in some open sector with vertex at the origin. What we are going to show is that if K\{0} is contained in an open sector of opening angle 2α and if Λ is some subset of the nonnegative integers, then the span of {z → z^λ : λ ∈ Λ} is dense in A(K) whenever 0 ∈ Λ and some Müntz-type condition is satisfied.
Conversely, we will show that if a similar condition is not satisfied, then we can always find a compact set K with connected complement such that K\{0} is contained in some open sector of opening angle 2α and such that the span of {z → z^λ : λ ∈ Λ} fails to be dense in A(K).
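For orientation, the classical Müntz theorem on the unit interval, which results of the above type refine, characterizes density of monomial spans as follows (for exponents 0 = λ_0 < λ_1 < λ_2 < ...):

```latex
\overline{\operatorname{span}}\{\, x^{\lambda_k} : k \ge 0 \,\} = C[0,1]
\iff
\sum_{k \ge 1} \frac{1}{\lambda_k} = \infty .
```

The conditions in Chapter 4 are of the same flavor but additionally involve the opening angle 2α of the sector.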
The first part of this thesis offers a theoretical foundation for the analysis of Tolkien's texts. Each of the three fields of interest, nostalgia, utopia, and the pastoral tradition, is introduced in a separate chapter. Special attention is given to the interrelations of the three fields. Their history, meaning, and functions are briefly elaborated, and definitions applicable to their occurrences in fantasy texts are reached. In doing so, new categories and terms are proposed that enable a detailed analysis of the nostalgic, pastoral, and utopian properties of Tolkien's works. As nostalgia and utopia are important ingredients of pastoral writing, they are each introduced first and are finally related to a definition of the pastoral. The main part of this thesis applies the definitions and insights reached in the theoretical chapters to Tolkien's The Lord of the Rings and The Hobbit. This part is divided into three main sections. Again, the order of the chapters follows the line of argumentation. The first section contains the analysis of pastoral depictions in the two texts. Given the separation of the pastoral into different categories, which were outlined in the theoretical part, the chapters examine bucolic and georgic pastoral creatures and landscapes before turning to non-pastoral depictions, which are sub-divided into the antipastoral and the unpastoral. A separate chapter looks at the bucolic and georgic pastorals' positions and functions in the primary texts. This analysis is followed by a chapter on men's special position in Tolkien's mythology, as their depiction reveals their potential to be both pastoral and antipastoral. The second section of the analytical part is concerned with the role of nostalgia within pastoral culture. The focus is laid on the meaning and function of the different kinds of nostalgia, defined in the theoretical part, that are detectable in bucolic and georgic pastoral cultures. Finally, the analysis turns to the utopian potential of Tolkien's mythology. Again, the focus lies on the pastoral and non-pastoral creatures. Their utopian and dystopian visions are presented and contrasted. This way, different kinds of utopian vision are detected and set in relation to the overall dystopian fate of Tolkien's fictional universe. Drawing on the results of this thesis and on Terry Gifford's ecocritical work, the final chapter argues that Tolkien's texts can be defined as modern pastorals. The connection between Tolkien's work and pastoral literature made explicit in the analysis is thus cemented in generic terms. The conclusion presents a summary of the central findings of this thesis and introduces questions for further study.
In 2014/2015 a one-year field campaign at the Tiksi observatory in the Laptev Sea area was carried out using Sound Detection and Ranging/Radio Acoustic Sounding System (SODAR/RASS) measurements to investigate the atmospheric boundary layer (ABL) with a focus on low-level jets (LLJ) during the winter season. In addition to SODAR/RASS-derived vertical profiles of temperature, wind speed and direction, a suite of complementary measurements at the Tiksi observatory was available. Data of a regional atmospheric model were used to put the local data into the synoptic context. Two case studies of LLJ events are presented. The statistics of LLJs for six months show that in about 23% of all profiles LLJs were present with a mean jet speed and height of about 7 m/s and 240 m, respectively. In 3.4% of all profiles LLJs exceeding 10 m/s occurred. The main driving mechanism for LLJs seems to be the baroclinicity, since no inertial oscillations were found. LLJs with heights below 200 m are likely influenced by local topography.
The parameterization of ocean/sea-ice/atmosphere interaction processes is a challenge for regional climate models (RCMs) of the Arctic, particularly for wintertime conditions, when small fractions of thin ice or open water cause strong modifications of the boundary layer. Thus, the treatment of sea ice and sub-grid flux parameterizations in RCMs is of crucial importance. However, verification data sets over sea ice for wintertime conditions are rare. In the present paper, data from the ship-based experiment Transarktika 2019, carried out at the end of the Arctic winter under thick one-year ice conditions, are presented. The data are used for the verification of the regional climate model COSMO-CLM (CCLM). In addition, Moderate Resolution Imaging Spectroradiometer (MODIS) data are used for the comparison with ice surface temperature (IST) simulations of the CCLM sea ice model. CCLM is used in forecast mode (nested in ERA5) for the Norwegian and Barents Seas with 5 km resolution and is run with different configurations of the sea ice model and sub-grid flux parameterizations. The use of a new set of parameterizations yields improved results in the comparisons with in-situ data. Comparisons with MODIS IST allow for a verification over large areas and also show a good performance of CCLM. The comparison with twice-daily radiosonde ascents during Transarktika 2019, hourly microwave water vapor measurements of the first 5 km of the atmosphere and hourly temperature profiler data shows a very good representation of the temperature, humidity and wind structure of the whole troposphere by CCLM.
Today obesity is recognized as a disease. Evidence suggests that obesity often has genetic, environmental, psychological and other causes, and growing evidence points to heredity as a strong determining factor. The characterization of uncoupling proteins (UCPs) represents a major breakthrough towards understanding the molecular basis of energy expenditure and is therefore likely to have important implications for the cause and treatment of human obesity. UCPs are mitochondrial anion carriers that create a pathway allowing dissipation of the proton electrochemical gradient; when deregulated, they are key risk factors in the development of obesity and other eating disorders. In order to better understand the roles of UCP2 and UCP3, which are considered prime candidate genes in the pathogenesis of obesity, this study elucidates (1) genomic organization: The human UCP2 (UCP3) gene spans 8.7 kb (7.5 kb), distributed over 8 (7) exons. The three UCP genes may have evolved from a common ancestor or may result from gene duplication events. Two mRNA transcripts are generated from the hUCP3 gene; the long and short forms of hUCP3 differ by the presence or absence of 37 amino acid residues at the C-terminus. (2) Mutational analysis revealed a mutation in exon 4 of hUCP2 resulting in the substitution of an alanine by a valine at codon 55, and an insertion polymorphism in exon 8 consisting of a 45 bp repeat located 150 bp downstream of the stop codon in the 3'-UTR. The allele frequencies of both polymorphisms were not significantly elevated in a subgroup of children characterized by low resting metabolic rates (RMR). (3) Promoter analysis showed that the promoter region of hUCP2 lacks a classical TATA or CAAT box. Functional characterization of the hUCP2 promoter showed that minimal promoter activity was observed within 65 bp upstream of the transcriptional start site; 75 bp further upstream, a strong cis-acting regulatory element was identified which significantly enhanced basal promoter activity. The regulation of human UCP2 gene expression involves complex interactions among positive and negative regulatory elements. The 5'-flanking region of the hUCP3 gene was also characterized; it contains both TATA and CAAT boxes as well as consensus motifs for PPRE, TRE, CRE and the muscle-specific MyoD and MEF2 sites. Functional characterization identified a cis-acting negative regulatory element between -2983 and -982, while the region between -982 and -284 showed greatly increased basal promoter activity, suggesting the presence of a strong enhancer element. Promoter activity was particularly enhanced in the murine skeletal muscle cell line C2C12, reflecting the tissue-selective expression pattern of UCP3.
In this thesis, we present a new approach for estimating the effects of wind turbines on a local bat population. We build an individual-based model (IBM) which simulates the movement behaviour of every single bat of the population with its own preferences, foraging behaviour and other species characteristics. This behaviour is averaged by a Monte Carlo simulation, which gives us the mean behaviour of the population. The result is an occurrence map of the considered habitat, which tells us how often the bats, and therefore the considered bat population, frequent each region of this habitat. Hence, it is possible to estimate the crossing rate at the position of an existing or potential wind turbine. We compare this individual-based approach with a method based on partial differential equations. This second approach requires less computational effort, but we lose information about the movement trajectories at the same time. Additionally, the PDE-based model only gives us a density profile; hence, we lose the information on how often each bat crosses specific points in the habitat in one night. In the next step we predict the average number of fatalities for each wind turbine in the habitat, depending on the type of the wind turbine and the behaviour of the considered bat species. This gives us the extra mortality caused by the wind turbines for the local population. This value is used in a population model, and finally we can calculate whether the population still grows or whether there is already a decline in population size that leads to the extinction of the population. Using the combination of all these models, we are able to evaluate the conflict between wind turbines and bats and to predict the outcome of this conflict. Furthermore, it is possible to find better positions for wind turbines such that the local bat population has a better chance to survive. Since bats tend to move in swarm formations under certain circumstances, we introduce swarm simulation using partial integro-differential equations. Thereby, we take a closer look at existence and uniqueness properties of solutions.
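The occurrence-map idea can be sketched with a plain random walk averaged over Monte Carlo runs; this toy stand-in omits the species-specific preferences and foraging behaviour of the full individual-based model:

```python
import numpy as np

def occurrence_map(n_bats, steps, grid=50, seed=0):
    """Average number of visits per cell of a square habitat grid by
    random-walking individuals starting from a central roost."""
    rng = np.random.default_rng(seed)
    visits = np.zeros((grid, grid))
    moves = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
    for _ in range(n_bats):
        pos = np.array([grid // 2, grid // 2])          # roost in the center
        for _ in range(steps):
            pos = np.clip(pos + moves[rng.integers(4)], 0, grid - 1)
            visits[pos[0], pos[1]] += 1
    return visits / n_bats

vmap = occurrence_map(n_bats=200, steps=1000)
print(vmap[10, 40])   # expected nightly crossings of a candidate turbine cell
```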
In this psycho-neuro-endocrine study, the molecular basis of different variants of steroid receptors as well as of highly conserved non-steroidal receptors was investigated. These nuclear receptors (NRs) are important key regulators of a wide variety of physiological and pathophysiological challenges, ranging from inflammation and stress to complex behaviour and disease. NRs control gene transcription in a ligand-dependent manner and are embedded in the huge interaction network of the neuroendocrine and immune systems. Two receptors, the glucocorticoid receptor (GR) and the chicken ovalbumin upstream promoter-transcription factor II (Coup-TFII), both expressed in the immune and nervous systems, were investigated regarding possible splice variants and their implication in the control of gene transcription. Both NRs are known to interact and to modulate each other's target gene regulation. In this study it could be shown that both NRs have different splice variants that are expressed in a tissue-specific manner. The different 5'-alternative transcript variants of the human GR were identified in silico in other species, providing evidence for a highly conserved and tightly controlled function. Investigations of the N-terminal transactivation domain of the GR revealed a deletion suggesting an altered glucocorticoid-dependent transactivation profile. The newly identified alternative transcript variant of Coup-TFII leads to a DNA-binding-deficient Coup-TFII isoform that is highly expressed in the brain. This Coup-TFII isoform alters Coup-TFII target gene expression and is suggested to interact with the GR via its ligand binding domain, resulting in an impaired GR target gene regulation in the nervous system. In this thesis it was demonstrated that NR variants are important for the understanding of the enormous regulatory potential of this receptor family and have to be taken into account in the development of therapeutic strategies for complex diseases such as stress-related and neurodegenerative disorders.
Nonlocal operators are used in a wide variety of models and applications, since many natural phenomena are driven by nonlocal dynamics. Nonlocal operators are integral operators that allow for interactions between two distinct points in space. The nonlocal models investigated in this thesis involve kernels that are assumed to have a finite range of nonlocal interactions. Kernels of this type are used in nonlocal elasticity and convection-diffusion models as well as in finance and image analysis. They also attract great interest within the mathematical theory, as they are asymptotically related to fractional and classical differential equations.
The results in this thesis can be grouped according to the following three aspects: modeling and analysis, discretization, and optimization.
Mathematical models demonstrate their true usefulness when put into numerical practice. For computational purposes, it is important that the support of the kernel is clearly determined. Therefore, nonlocal interactions are typically assumed to occur within a Euclidean ball of finite radius. In this thesis we consider more general interaction sets, including norm-induced balls as special cases, and extend established results about well-posedness and asymptotic limits.
The discretization of integral equations is a challenging endeavor. In particular, kernels which are truncated by Euclidean balls require carefully designed quadrature rules for the implementation of efficient finite element codes. In this thesis we investigate the computational benefits of polyhedral interaction sets as well as geometrically approximated interaction sets. In addition, we outline the computational advantages of sufficiently structured problem settings.
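As a rough illustration of how a finite interaction horizon enters such computations, the following sketch (a naive midpoint rule, not the carefully designed quadrature discussed above; all choices are invented for illustration) evaluates a one-dimensional nonlocal diffusion operator with a constant kernel truncated at horizon delta. The scaling gamma = 3/delta^3 is chosen so that the operator approaches the classical second derivative as delta tends to zero:

```python
import numpy as np

# (L u)(x_i) = sum over |x_j - x_i| <= delta of (u(x_j) - u(x_i)) * gamma * h
def nonlocal_operator(u, h, delta):
    n = len(u)
    r = int(round(delta / h))       # number of interacting grid neighbours
    gamma = 3.0 / delta**3          # 1D scaling: L -> u'' as delta -> 0
    Lu = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        Lu[i] = gamma * h * np.sum(u[lo:hi] - u[i])
    return Lu

x = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]
u = np.sin(np.pi * x)
Lu = nonlocal_operator(u, h, delta=0.05)
# for smooth u and small delta, Lu approximates u'' = -pi^2 sin(pi x)
print(Lu[100], -np.pi**2 * u[100])
```

Precisely the cells cut by the boundary of the interaction ball are where such a naive rule loses accuracy, which is what motivates the carefully designed quadrature rules and geometrically approximated interaction sets studied in this thesis.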
Shape optimization methods have proven useful for identifying interfaces in models governed by partial differential equations. Here we consider a class of shape optimization problems constrained by nonlocal equations which involve interface-dependent kernels. We derive the shape derivative associated with the nonlocal system and solve the problem by established numerical techniques.
No Longer Printing the Legend: The Aporia of Heteronormativity in the American Western (1903-1969)
(2023)
This study critically investigates the U.S.-American Western and its construction of sexuality and gender, revealing that the heteronormative matrix that is upheld and defended in the genre is consistently preceded by the exploration of alternative sexualities and of ways to think gender beyond the binary. The endeavor to naturalize heterosexuality seems to be baked into the formula of the U.S.-Western. However, as I show in this study, this endeavor relies on an aporia, because the U.S.-Western can only ever attempt to naturalize gender by constructing it first, hence inevitably and simultaneously constructing evidence that supports the opposite: the unnaturalness and contingency of gender and sexuality.
My study relies on the works of Raewyn Connell, Pierre Bourdieu, and Judith Butler, and amalgamates in its methodology established approaches from film and literary studies (i.e., close readings) with a Foucauldian understanding of discourse and discourse analysis, which allows me to relate individual texts to the cultural, socio-political and economic contexts that invariably informed the production and reception of any filmic text. In an analysis of 14 U.S.-Westerns (excluding three excursions) that appeared between 1903 and 1969, I give ample and minute narrative and film-aesthetic evidence to reveal the complex and contradictory construction of gender and sexuality in the U.S.-Western, aiming to reveal both the normative power of those categories and their structural instability and inconsistency.
This study proves that the Western up until 1969 did not find a stable pattern to represent the gender binary. The U.S.-Western is not necessarily always looking to confirm or stabilize governing constructs of (gendered) power; however, it invariably explores and negotiates their legitimacy. Heterosexuality and male hegemony are never natural, self-evident, incontestable, or preordained. Quite the contrary: the U.S.-Western repeatedly – and in a surprisingly diverse and versatile way – reveals the illogical constructedness of the heteronormative matrix.
My study therefore offers a fresh perspective on the genre and shows that the critical exploration and negotiation of the legitimacy of heteronormativity as a way to organize society is constitutive for the U.S.-Western. It is the inquiry – not necessarily the affirmation – of the legitimacy of this model that gives the U.S.-Western its ideological currency and significance as an artifact of U.S.-American popular culture.
Many NP-hard optimization problems that originate from classical graph theory, such as the maximum stable set problem and the maximum clique problem, have been extensively studied over the past decades and involve the choice of a subset of edges or vertices. There usually exist combinatorial methods that can be applied to solve them directly in the graph.
The simplest method is to enumerate all feasible solutions and select the best. Unsurprisingly, this method is often very slow, so the task is to cleverly discard fruitless parts of the search space during the search. An alternative method for solving graph problems is to formulate integer linear programs, such that their solution yields an optimal solution to the original optimization problem in the graph. In order to solve integer linear programs, one can start with relaxing the integrality constraints and then try to find inequalities that cut off fractional extreme points. In the best case, it would be possible to describe the convex hull of the feasible region of the integer linear program with a set of inequalities. In general, giving a complete description of this convex hull is out of reach, even if it has a polynomial number of facets. Thus, one tries to strengthen the (weak) relaxation of the integer linear program as much as possible via strong inequalities that are valid for the convex hull of feasible integer points.
Many classes of valid inequalities are of exponential size. For instance, a graph can in general have exponentially many odd cycles, and therefore the number of odd cycle inequalities for the maximum stable set problem is exponential. It is sometimes nevertheless possible to check in polynomial time whether a given point violates any of the exponentially many inequalities. This is indeed the case for the odd cycle inequalities for the maximum stable set problem. If a polynomial-time separation algorithm is known, there exists a formulation of polynomial size that contains a given point if and only if it does not violate one of the (potentially exponentially many) inequalities. This thesis can be divided into two parts. The first part is the main part and contains various new results. We present new extended formulations for several optimization problems, namely the maximum stable set problem, the nonconvex quadratic program with box
constraints, and the p-median problem. In the second part we modify a very fast algorithm for finding a maximum clique in very large sparse graphs. We propose three alternative versions of this algorithm, compare them to the original version, and discuss their strengths and weaknesses.
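To illustrate the separation idea mentioned above for a concrete class, the following sketch implements the classical polynomial-time separation of the odd cycle inequalities for the stable set polytope via shortest paths in the bipartite double cover (a textbook construction in the spirit of Grötschel, Lovász and Schrijver; not code from this thesis):

```python
import networkx as nx

# An odd cycle C is violated by a fractional point x, i.e.
# sum_{v in C} x_v > (|C|-1)/2, iff sum_{uv in C} (1 - x_u - x_v) < 1.
# Shortest paths in the bipartite double cover of G find the minimizing
# odd closed walk (which contains a violated odd cycle) in poly time.
def separate_odd_cycle(G, x):
    H = nx.Graph()
    for u, v in G.edges():
        w = max(0.0, 1.0 - x[u] - x[v])   # nonnegative edge weights
        H.add_edge((u, 0), (v, 1), weight=w)
        H.add_edge((u, 1), (v, 0), weight=w)
    best = None
    for v in G.nodes():
        try:
            d = nx.dijkstra_path_length(H, (v, 0), (v, 1))
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            continue
        if d < 1.0 and (best is None or d < best[0]):
            path = nx.dijkstra_path(H, (v, 0), (v, 1))
            best = (d, [node for node, side in path])
    return best  # (witness weight, odd closed walk) or None if x is feasible

# the fractional point x_v = 0.5 on a 5-cycle violates its odd cycle inequality
C5 = nx.cycle_graph(5)
print(separate_odd_cycle(C5, {v: 0.5 for v in C5.nodes()}))
```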
Background and rationale: Changing working conditions demand adaptation, resulting in higher stress levels in employees. In consequence, decreased productivity, increasing rates of sick leave, and cases of early retirement result in higher direct, indirect, and intangible costs. Aims of the Research Project: The aim of the study was to test the usefulness of a novel translational diagnostic tool, Neuropattern, for early detection, prevention, and personalized treatment of stress-related disorders. The trial was designed as a pilot study with a wait-list control group. Materials and Methods: In this study, 70 employees of the Forestry Department Rhineland-Palatinate, Germany, were enrolled. Subjects were block-randomized according to the functional group of their career field and underwent Neuropattern diagnostics either immediately or after a waiting period of three months. After the diagnostic assessment, their physicians received the Neuropattern Medical Report, including the diagnostic results and treatment recommendations. Participants were informed by the Neuropattern Patient Report and were eligible for an individualized Neuropattern Online Counseling account. Results: The application of Neuropattern diagnostics significantly improved mental health and health-related behavior, and reduced perceived stress, emotional exhaustion, overcommitment and, possibly, presenteeism. Additionally, Neuropattern sensitively detected functional changes in stress physiology at an early stage, thus allowing timely personalized interventions to prevent and treat stress pathology. Conclusion: The present study encourages the application of Neuropattern diagnostics to early intervention in non-clinical populations. However, further research is required to determine the best operating conditions.
The aim of the thesis was to investigate the role of the immune system in fibromyalgia (FM), as part of a dynamic co-regulation between different bodily systems. FM is a chronic musculoskeletal disorder characterized by widespread pain and specific tender points, combined with other symptoms including fatigue, sleep disturbances, morning stiffness and anxiety. The main goal of the work was to identify possible dysregulations of peripheral immune and endocrine parameters in patients with FM compared to matched healthy controls. Moreover, the possible relation between symptom complaints and the specific parameters measured was also evaluated. A first approach was to investigate possible differences between FM patients and controls in the expression of cytokines, as cytokines have been implicated in the occurrence of several of the symptoms associated with FM. Furthermore, adhesion molecules, which are involved in cell-to-cell communication and immune cell trafficking, were also studied. The latter are known to be regulated by both cytokines and glucocorticoids (GCs), and their expression is often found to be altered in patients with immune dysregulation. It was expected that subjects with FM would have an increased production of proinflammatory cytokines and/or a reduced production of antiinflammatory cytokines, and that certain cytokines and/or adhesion molecules would be differently regulated by dexamethasone (DEX). Unstimulated blood was used in the analysis of adhesion molecule expression by flow cytometry, while stimulated whole blood cell cultures were used in cytokine flow cytometry assays. Peripheral blood mononuclear cells (PBMCs) were also cultured and the supernatants collected to determine the concentration of cytokines by biochip protein array. In addition, serum samples were used in enzyme-linked immunosorbent assays (ELISA) for the quantification of soluble adhesion molecules. L-selectin was found to be elevated on monocytes and neutrophils of FM patients. A bias toward lower IL-4 levels was observed in FM patients. Based on studies showing differences in glucocorticoid receptor (GR) affinity and disturbances associated with a loss of hypothalamic-pituitary-adrenal (HPA) axis resiliency in FM, it was hypothesized that FM would be associated with abnormalities in glucocorticoid sensitivity. Total plasma cortisol and salivary free cortisol were quantified by ELISA and time-resolved fluorescence immunoassay, respectively. GR sensitivity, assessed through DEX inhibition of IL-6 in stimulated whole blood, was evaluated after cytokine quantification by ELISA. The mRNA expression of the corticosteroid receptors GR alpha and mineralocorticoid receptor (MR), as well as of the glucocorticoid-induced leucine zipper (GILZ) and the FK506 binding protein 5, was assessed in PBMCs by real-time reverse transcription-polymerase chain reaction (RT-PCR). Furthermore, sequencing of RT-PCR products and/or genomic DNA was used for the mutational analysis of the corticosteroid receptors. We observed lower basal plasma cortisol levels (of borderline statistical significance) and a lower expression of corticosteroid receptors and GILZ in FM patients when compared to healthy controls. The minor allele of the MR single nucleotide polymorphism (SNP) rs5522 was found more often in FM patients than in controls. In addition, female carriers of this SNP seemed to have reduced salivary cortisol responses to a strong psychological stressor (Trier Social Stress Test) compared to non-carriers.
FM patients carrying an MR intronic SNP (rs17484245), located before exon 3, showed significantly higher scores for depression symptoms compared to patient non-carriers. The thesis also includes a comprehensive analysis of the complexity of GR regulation and of the role of alternative mRNA splicing. It focuses on the differential expression of the untranslated GR first exons, their high sequence homology among different species, and how genetic determinants without apparent relevance may have implications in health and disease. In FM patients, GR exon 1-C expression was found to be lower, and a significant difference was observed when comparing GR 1-C expression between antidepressant-free patients and patients who had taken antidepressants until two weeks before sample collection. In summary, the study shows a slight disturbance of some components of the innate immune system of FM patients and suggests an enhanced adhesion and possible recruitment of leukocytes to inflammatory sites. The reduced expression of corticosteroid receptors, and possibly the reduced MR function, may be associated with an impaired function of the HPA axis in these patients. A hyporesponsiveness of the HPA axis under stress or disturbances of the stress response could make these patients more vulnerable to cytokines and inflammation, which, compounded by lower levels of antiinflammatory mediators, may sustain some of the symptoms that contribute to the clinical picture of FM.
During the last decade, anatomic and physiological neuroscience research has yielded extensive information on the physiological regulators of short-term satiety and of visceral and interoceptive sensation. Distinct neural circuits physiologically regulate the elements of food ingestion. The general aim of the current studies is to elucidate the peripheral neural pathways to the brain in healthy subjects in order to establish the groundwork for the study of the pathophysiology of bulimia nervosa (BN). We aimed to define the central activation pattern during non-nutritive gastric distension in humans, as well as the cognitive responses to this mechanical gastric distension. We estimated regional cerebral blood flow with 15O-water positron emission tomography during intragastric balloon inflation and deflation in 18 healthy young women of normal weight. The contrast of inflated versus deflated conditions in the exploratory analysis revealed activation in more than 20 brain regions. The analysis confirmed several well-known areas in the central nervous system that contribute to visceral processing: the inferior frontal cortex, representing a zone of convergence for food-related stimuli; the insula and operculum, referred to as "visceral cortex"; the anterior cingulate gyrus (and insula), processing affective information; and the brainstem, a site of vagal relay for visceral afferent stimuli. Brain activation in the left ventrolateral prefrontal cortex was reproducible. This area is well known for higher cognitive processing, especially of reward-related stimuli. The ventrolateral prefrontal cortex together with the insular regions may provide a link between the affective and rewarding components of eating and disordered eating as observed in BN and binge-eating obesity. Gastric distension caused a significant rapid, reversible, and reproducible increase in the feelings of fullness, sleepiness, and gastric discomfort as well as a significant rapid, reversible, and reproducible decrease in the feeling of hunger. We showed that mechanical activation of the neurocircuitry involved in meal termination led to the described phenomena. The current brain activation studies of non-painful, proximal gastric distension could provide groundwork in the field of abnormal eating behavior by suggesting a link between visceral sensation and abnormal eating patterns. A potential treatment for disordered eating and obesity could alter the conscious and unconscious perception and interoceptive awareness of gastric distension contributing to meal termination.
1. The Discursive Construction of Black Masculinity: Intersections of Race, Gender, and Sexuality
1.1. The Plight of Black Men: A History of Lynchings and Castrations
1.2. The Discursive Construction of the Black Man as Other
1.3. Black Corporeality and the Scopic Regime of Racism
2. Ralph Ellison's 'Invisible Man'
2.1. Invisible Black Men: Between Emasculation and Hypermasculinity
2.2. Transcending Invisibility
Mental processes are filters which intervene in the literary presentation of nature. This article will take you on a journey through literary landscapes, starting from Joseph Furphy and ending with Gerald Murnane. It will try to show the development of Australian literary landscape depiction. The investigation of this extensive topic will show that the perception of the Australian landscape as foreign and threatening is a coded expression of the protagonists' crisis of identity due to their estrangement from European cultural roots. Only a feeling of being at home enables the characters to perceive landscapes in a positive way and allows the author to depict intimate and familiar views of nature. This topic will be investigated with a range of novels to reveal the development of this theme from the turn of the twentieth century (the time of Furphy's novel Such is Life) up to the present (i.e. novels by Malouf, Foster, Hall, Murnane).
Background
In light of the current biodiversity crisis, DNA barcoding is developing into an essential tool to quantify state shifts in global ecosystems. Current barcoding protocols often rely on short amplicon sequences, which yield accurate identification of biological entities in a community but provide limited phylogenetic resolution across broad taxonomic scales. However, the phylogenetic structure of communities is an essential component of biodiversity. Consequently, a barcoding approach is required that unites robust taxonomic assignment power and high phylogenetic utility. A possible solution is offered by sequencing long ribosomal DNA (rDNA) amplicons on the MinION platform (Oxford Nanopore Technologies).
Findings
Using a dataset of various animal and plant species, with a focus on arthropods, we assemble a pipeline for long rDNA barcode analysis and introduce a new software tool (MiniBar) to demultiplex dual-indexed Nanopore reads. We find excellent phylogenetic and taxonomic resolution offered by long rDNA sequences across broad taxonomic scales. We highlight the simplicity of our approach by field barcoding with a miniaturized, mobile laboratory in a remote rainforest. We also test the utility of long rDNA amplicons for the analysis of community diversity through metabarcoding and find that they recover highly skewed diversity estimates.
Conclusions
Sequencing dual-indexed, long rDNA amplicons on the MinION platform is a straightforward, cost-effective, portable, and universal approach for eukaryote DNA barcoding. Although bulk community analyses using long-amplicon approaches may introduce biases, the long rDNA amplicon approach represents a powerful tool for enabling the accurate recovery of taxonomic and phylogenetic diversity across biological communities.
Optimal control problems are optimization problems governed by ordinary or partial differential equations (PDEs). A general formulation is given by \min_{(y,u)} J(y,u) subject to e(y,u)=0, assuming that e_y^{-1} exists. The formulation consists of three main elements: 1. the cost functional J that models the purpose of the control on the system; 2. the definition of a control function u that represents the influence of the environment on the system; 3. the set of differential equations e(y,u)=0 modeling the controlled system, represented by the state function y:=y(u), which depends on u. These kinds of problems are well investigated and arise in many fields of application, for example robot control, control of biological processes, test drive simulation, and shape and topology optimization. In this thesis, an academic model problem of the form \min_{(y,u)} J(y,u) := \frac{1}{2}\|y-y_d\|^2_{L^2(\Omega)} + \frac{\alpha}{2}\|u\|^2_{L^2(\Omega)} subject to -\operatorname{div}(A\nabla y) + cy = f + u in \Omega, y = 0 on \partial\Omega and u \in U_{ad} is considered. The objective is of tracking type with a given target function y_d and a regularization term with parameter \alpha. The control function u acts on the whole domain \Omega. The underlying partial differential equation is assumed to be uniformly elliptic. This problem belongs to the class of linear-quadratic elliptic control problems with distributed control. The existence and uniqueness of an optimal solution for problems of this type is well known. In a first step, following the paradigm 'first optimize, then discretize', the necessary and sufficient optimality conditions are derived by means of the adjoint equation, which leads to a characterization of the optimal solution in the form of an optimality system. In a second step, the occurring differential operators are approximated by finite differences, and the resulting discretized optimality system is solved with a collective smoothing multigrid method (CSMG). In general, there are several optimization methods for solving the optimal control problem: an application of the implicit function theorem leads to so-called black-box approaches, where the PDE-constrained optimization problem is transformed into an unconstrained optimization problem and the reduced gradient of the reduced functional is computed via the adjoint approach. Other possibilities are quasi-Newton methods, which approximate the Hessian by a low-rank update based on gradient evaluations, Krylov-Newton methods, or (reduced) SQP methods. The use of multigrid methods for optimization purposes is motivated by their optimal computational complexity: the number of required operations scales linearly with the number of unknowns, and the rate of convergence is independent of the grid size. Originally, multigrid methods were developed as a class of algorithms for solving linear systems arising from the discretization of partial differential equations. The main part of this thesis is devoted to the investigation of the implementability and the efficiency of the CSMG on commodity graphics cards. GPUs (graphics processing units) are designed for highly parallelizable graphics computations and possess many cores of SIMD architecture, which are able to outperform the CPU regarding computational power and memory bandwidth. Here they are considered as a prototype for prospective multi-core computers with several hundreds of cores.
When using GPUs as stream processors, two major problems arise: data have to be transferred from the CPU main memory to the GPU main memory, which can be quite slow, and the size of the GPU main memory is limited. Furthermore, a remarkable speed-up compared to a CPU is only achieved when the stream processors are used to full capacity. Therefore, new algorithms for the solution of optimal control problems are designed in this thesis. To this end, a nonoverlapping domain decomposition method is introduced which allows exploiting the computational power of many GPUs or CPUs in parallel. This algorithm is based on preliminary work for elliptic problems and is enhanced for the application to optimal control problems. For the domain decomposition into two subdomains, the linear system for the unknowns on the interface is solved with a Schur complement method, using a discrete approximation of the Steklov-Poincare operator. For the academic optimal control problem, the arising capacitance matrix can be inverted analytically. On this basis, two different algorithms for the nonoverlapping domain decomposition with many subdomains are proposed in this thesis: on the one hand a recursive approach, and on the other hand a simultaneous approach. Numerical tests compare the performance of the CSMG for the one-domain case and of the two approaches for the multi-domain case on GPUs and CPUs for different variants.
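The collective smoothing idea at the core of the CSMG can be illustrated in one space dimension. The following sketch (a simplified stand-alone iteration, not the GPU implementation of the thesis; discretization, parameters and data are invented for illustration) eliminates the control via the gradient condition alpha*u + p = 0 and relaxes the coupled state/adjoint system by solving a small 2x2 system per grid point:

```python
import numpy as np

# Optimality system of  min 1/2||y - y_d||^2 + alpha/2||u||^2
# subject to  -y'' + c*y = f + u,  y(0) = y(1) = 0,
# with the control eliminated via  alpha*u + p = 0 (so u = -p/alpha).
# A collective Gauss-Seidel sweep updates the pair (y_i, p_i) jointly.
def collective_gs_sweep(y, p, f, y_d, h, c, alpha):
    a = 2.0 / h**2 + c
    det = a * a + 1.0 / alpha          # determinant of the 2x2 point system
    for i in range(1, len(y) - 1):
        r1 = f[i] + (y[i - 1] + y[i + 1]) / h**2      # state equation rhs
        r2 = (p[i - 1] + p[i + 1]) / h**2 - y_d[i]    # adjoint equation rhs
        # solve  [[a, 1/alpha], [-1, a]] (y_i, p_i)^T = (r1, r2)^T
        y[i] = (a * r1 - r2 / alpha) / det
        p[i] = (r1 + a * r2) / det

n, c, alpha = 33, 1.0, 1e-2
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
y_d = np.sin(np.pi * x)                # target state
f = np.zeros(n)
y, p = np.zeros(n), np.zeros(n)
for sweep in range(2000):              # smoother misused as a solver here
    collective_gs_sweep(y, p, f, y_d, h, c, alpha)
u = -p / alpha                         # recover the optimal control
print("tracking error:", np.sqrt(h) * np.linalg.norm(y - y_d))
```

Used stand-alone, as here, the sweep converges about as slowly as Gauss-Seidel for the Poisson equation; wrapping it as the smoother inside a multigrid cycle is what gives the CSMG its grid-independent convergence rate.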
In recent decades, the Arctic has been undergoing a wide range of fast environmental changes. The sea ice covering the Arctic Ocean not only reacts rapidly to these changes, but also influences and alters the physical properties of the atmospheric boundary layer and the underlying ocean on various scales. In that regard, polynyas, i.e. regions of open water and thin ice within the closed pack ice, play a key role as regions of enhanced atmosphere-ice-ocean interactions and extensive new ice formation during winter. Precise long-term monitoring and increased efforts to employ long-term and high-resolution satellite data are therefore of high interest for the polar scientific community. The retrieval of thin-ice thickness (TIT) fields from thermal infrared satellite data and atmospheric reanalysis, utilizing a one-dimensional energy balance model, allows for the estimation of the heat loss to the atmosphere and hence of ice-production rates. However, an extended application of this approach is inherently connected with severe challenges that originate predominantly from the disturbing influence of clouds and from necessary simplifications in the model set-up, which all need to be carefully considered and compensated for. The presented thesis addresses these challenges and demonstrates the applicability of thermal infrared TIT distributions for long-term polynya monitoring, as well as an accurate estimation of ice production in Arctic polynyas at a relatively high spatial resolution. Being written in a cumulative style, the thesis is subdivided into three parts that show the consequent evolution and improvement of the TIT retrieval, based on two regional studies (Storfjorden and North Water (NOW) polynya) and a final large-scale, pan-Arctic study. The first study on the Storfjorden polynya, situated in the Svalbard archipelago, represents the first long-term investigation of spatial and temporal polynya characteristics that is solely based on daily TIT fields derived from MODIS thermal infrared satellite data and ECMWF ERA-Interim atmospheric reanalysis data. Typical quantities such as the polynya area (POLA), the TIT distribution, frequencies of polynya events as well as the total ice production are derived and compared to previous remote sensing and modeling studies. The study includes a first basic approach that aims at a compensation of cloud-induced gaps in daily TIT composites. This coverage-correction (CC) is a mathematically simple upscaling procedure that depends solely on the daily percentage of available MODIS coverage and yields daily POLA with an error margin of 5 to 6 %. The NOW polynya in northern Baffin Bay is the main focus region of the second study, which follows two main goals. First, a new statistics-based cloud interpolation scheme (Spatial Feature Reconstruction - SFR) as well as additional cloud-screening procedures are successfully adapted and implemented in the TIT retrieval for usage in Arctic polynya regions. For a 13-yr period, results on polynya characteristics are compared to the CC approach. Furthermore, an investigation of the highly variable ice-bridge dynamics in Nares Strait is presented. Second, an analysis of decadal changes of the NOW polynya is carried out, as the additional use of a suite of passive microwave sensors leads to an extended record of 37 consecutive winter seasons, thereby enabling detailed inter-sensor comparisons.
In the final study, the SFR-interpolated daily TIT composites are used to infer spatial and temporal characteristics of 17 circumpolar polynya regions in the Arctic for 2002/2003 to 2014/2015. All polynya regions combined cover an average thin-ice area of 226.6 ± 36.1 × 10³ km² during winter (November to March) and yield an average total wintertime accumulated ice production of about 1811 ± 293 km³. Regional differences in the derived ice-production trends are noticeable. The Laptev Sea on the Siberian shelf is presented as a focus region, as frequently appearing polynyas along the fast-ice edge promote high rates of new ice production. New affirming results on a distinct relation to sea-ice area export rates, and hence to the Transpolar Drift, are shown. This new high-resolution pan-Arctic data set can be further utilized and built upon in a variety of atmospheric and oceanographic applications, while still offering room for further improvements such as incorporating high-resolution atmospheric data sets and an optimized lead detection.
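The physical core of such a TIT retrieval can be condensed into a few lines (a deliberately stripped-down sketch with textbook bulk coefficients: latent heat, shortwave radiation and snow are neglected, and none of the values below are taken from these studies). In winter, the conductive heat flux through the thin-ice slab balances the heat loss to the atmosphere, so the thickness follows from the observed surface temperature:

```python
import numpy as np

# h = k_ice * (T_freeze - T_surf) / Q_atm, with Q_atm the net longwave
# plus sensible heat loss computed from satellite surface temperature
# and reanalysis air data. All coefficients are assumed textbook values.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant [W m-2 K-4]
K_ICE = 2.03           # thermal conductivity of sea ice [W m-1 K-1]
T_FREEZE = 271.35      # freezing point of sea water [K]
RHO_AIR, CP_AIR = 1.3, 1004.0
C_S = 1.5e-3           # bulk transfer coefficient for sensible heat (assumed)

def thin_ice_thickness(T_surf, T_air, wind, lw_down, emissivity=0.99):
    q_lw = emissivity * (SIGMA * T_surf**4 - lw_down)          # net longwave loss
    q_sens = RHO_AIR * CP_AIR * C_S * wind * (T_surf - T_air)  # sensible heat loss
    q_atm = q_lw + q_sens
    if q_atm <= 0:
        return np.nan                  # no net heat loss: retrieval invalid
    return K_ICE * (T_FREEZE - T_surf) / q_atm

# cold, windy polynya scene: MODIS surface temp 255 K, reanalysis air temp 245 K
print("TIT [m]:", thin_ice_thickness(T_surf=255.0, T_air=245.0,
                                     wind=8.0, lw_down=180.0))
```

The operational retrieval adds latent heat and shortwave terms, careful stability corrections, and the cloud screening and interpolation schemes described above.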
Objective: Attunement is a novel measure of nonverbal synchrony reflecting the duration of the present moment shared by two interaction partners. This study examined its association with early change in outpatient psychotherapy.
Methods: Automated video analysis based on motion energy analysis (MEA) and cross-correlation of the movement time-series of patient and therapist was conducted to calculate movement synchrony for N = 161 outpatients. Movement-based attunement was defined as the range of connected time lags with significant synchrony. Latent change classes in the HSCL-11 were identified with growth mixture modeling (GMM) and predicted by pre-treatment covariates and attunement using multilevel multinomial regression.
Results: GMM identified four latent classes: high impairment, no change (Class 1); high impairment, early response (Class 2); moderate impairment (Class 3); and low impairment (Class 4). Class 2 showed the strongest attunement, the largest early response, and the best outcome. Stronger attunement was associated with a higher likelihood of membership in Class 2 (b = 0.313, p = .007), Class 3 (b = 0.251, p = .033), and Class 4 (b = 0.275, p = .043) compared to Class 1. For highly impaired patients, the probability of no early change (Class 1) decreased and the probability of early response (Class 2) increased as a function of attunement.
Conclusions: Among patients with high impairment, stronger patient-therapist attunement was associated with early response, which predicted a better treatment outcome. Video-based assessment of attunement might provide new information for therapists not available from self-report questionnaires and support therapists in their clinical decision-making.
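The computation behind such a synchrony measure can be sketched as follows (toy signals and a simple surrogate significance test; this is not the motion energy analysis software used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Cross-correlate two motion-energy time series over a range of lags,
# flag lags whose correlation exceeds a surrogate threshold, and report
# the longest connected run of significant lags (the intuition behind
# the movement-based attunement measure).
def lagged_corr(a, b, lag):
    if lag >= 0:
        x, y = a[lag:], b[:len(b) - lag]
    else:
        x, y = a[:len(a) + lag], b[-lag:]
    return np.corrcoef(x, y)[0, 1]

def attunement(a, b, max_lag=50, n_surrogates=200, alpha=0.05):
    lags = range(-max_lag, max_lag + 1)
    obs = np.array([lagged_corr(a, b, l) for l in lags])
    # surrogates: destroy the temporal alignment by large random shifts
    null = np.array([lagged_corr(np.roll(a, rng.integers(200, len(a) - 200)), b, 0)
                     for _ in range(n_surrogates)])
    sig = obs > np.quantile(null, 1.0 - alpha)
    best = run = 0
    for s in sig:                       # longest connected significant band
        run = run + 1 if s else 0
        best = max(best, run)
    return best

# toy dyad: the "therapist" moves, the "patient" follows 12 samples later
lead = np.convolve(rng.standard_normal(3050), np.ones(51) / 51, mode="valid")
follow = np.roll(lead, 12) + 0.05 * rng.standard_normal(len(lead))
print("connected significant lags:", attunement(lead, follow))
```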
Background
The morphology of anuran larvae is suggested to differ between species with tadpoles living in standing (lentic) and running (lotic) waters. To explore which character combinations within the general tadpole morphospace are associated with these habitats, we studied categorical and metric larval data of 123 Madagascan anurans, one third of which are from lotic environments.
Results
Using univariate and multivariate statistics, we found that certain combinations of fin height, body musculature and eye size prevail in larvae from either lentic or lotic environments.
Conclusion
Evidence for adaptation to lotic conditions in larvae of Madagascan anurans is presented. While lentic tadpoles typically show narrow to moderate oral discs, small to medium-sized eyes, convex or moderately low fins and non-robust tail muscles, tadpoles from lotic environments typically show moderate to broad oral discs, medium-sized to large eyes, low fins and a robust tail muscle.
The midcingulate cortex has become a focus of scientific interest as it has been associated with a wide range of attentional phenomena. This survey found evidence indicating the relevance of gender and handedness for measures of regional cortical morphology. Although gender was associated with structural variations concerning the neuroanatomy of the midcingulum bundle as well, handedness did not emerge as a significant factor in the analyses of white matter characteristics. Hemispheric differences were found at the level of both gray and white matter. Turning to the functional implications of neuroanatomical variations and comparing subjects with a pronounced and a low degree of midcingulate folding, which indicates differential expansions of cytoarchitectural areas, behavioral and electrophysiological differences in the processing of interference became evident. A high degree of leftward midcingulate fissurization was associated with better behavioral performance, presumably caused by a more effective conflict-monitoring system triggering fast and automatic attentional filtering mechanisms. Subjects exhibiting a lower degree of midcingulate fissurization seem instead to rely on more effortful control processes. These results carry implications not only for the neuronal representation of individual differences in attentional processes, but might also be of relevance for the refinement of models of mental disorders.
Arctic and Antarctic polynya systems are of high research interest since extensive new ice formation takes place in these regions. The monitoring of polynyas and of ice production is crucial with respect to the changing sea-ice regime. The thin-ice thickness (TIT) distribution within polynyas controls the amount of heat that is released to the atmosphere and therefore has an impact on ice-production rates. This thesis presents an improved method to retrieve thermal-infrared thin-ice thickness distributions within polynyas. TIT with a spatial resolution of 1 km × 1 km is calculated using the MODIS ice-surface temperature and atmospheric model variables within the Laptev Sea polynya for the winter periods 2007/08 and 2008/09. The improvement of the algorithm focuses on the surface-energy flux parameterizations. Furthermore, a thorough sensitivity analysis is applied to quantify the uncertainty in the thin-ice thickness results. An absolute mean uncertainty of ±4.7 cm for ice below 20 cm of thickness is calculated. Furthermore, advantages and drawbacks of using different atmospheric data sets are investigated. Daily MODIS TIT composites are computed to fill the data gaps arising from clouds and shortwave radiation. The resulting maps cover on average 70 % of the Laptev Sea polynya. An intercomparison of MODIS and AMSR-E polynya data indicates that the spatial resolution is essential for accurately deriving polynya characteristics. Monthly fast-ice masks are generated using the daily TIT composites. These fast-ice masks are implemented into the coupled sea-ice/ocean model FESOM. An evaluation of FESOM sea-ice concentrations shows that a prescribed high-resolution fast-ice mask is necessary for an accurate polynya location. However, for a more realistic simulation of other small-scale sea-ice features, further model improvements are required. The retrieval of daily high-resolution MODIS TIT composites is an important step towards a more precise monitoring of thin sea ice and sea-ice production. Future work will address a combined remote sensing and model assimilation method to simulate fully covered thin-ice thickness maps that enable the retrieval of accurate ice-production values.
The argan woodlands of South Morocco represent an open-canopy dryland forest with traditional silvopastoral usage that includes browsing by goats, sheep and camels, oil production as well as agricultural use. In the past, these forests have undergone extensive clearing, but are now protected by the state. However, the remaining argan woodlands are still under pressure from intensive grazing and illegal firewood collection. Although the argan-forest area seems to be decreasing overall due to large forest clearings for intensive agriculture, little quantitative data is available on the dynamics and overall state of the remaining argan forest. To determine how the argan woodlands in the High Atlas and the Anti-Atlas had changed in tree-crown cover from 1972 to 2018, we used historical black-and-white HEXAGON satellite images as well as recent WorldView satellite images (see Part A of our study). Because tree shadows can often not be separated from the tree crown on panchromatic satellite images, individual trees were mapped in three size categories to determine whether trees were unchanged, had decreased or increased in crown size, or had disappeared or newly grown. The current state of the argan trees was evaluated by mapping tree architectures in the field. Tree-cover changes varied highly between the test sites. Trees that remained unchanged between 1972 and 2018 were in the majority, while tree mortality and tree establishment were nearly even. Small unchanged trees made up 48.4% of all remaining trees; of these, 51% showed degraded tree architectures. 40% of small (re-)grown trees were so overbrowsed that they only appeared as bushes, while medium (3–7 m crown diameter) and large trees (>7 m) showed less degradation regardless of whether they had changed or not. Approaches like grazing exclusion or cereal cultivation had a positive influence on tree architecture and led to less tree-cover decrease. Although the woodland was found to be mostly unchanged between 1972 and 2018, the analysis of tree architecture reveals that many (mostly small) trees remained stable but in a degraded state. This stability might be the result of the small trees' high degradation status and shows the heavy pressure on the argan forest.
Monetary Policy During Times of Crisis - Frictions and Non-Linearities in the Transmission Mechanism
(2017)
For a long time it was believed that monetary policy would be able to maintain price stability and foster economic growth during all phases of the business cycle. The era of the Great Moderation, often also called the Volcker-Greenspan period, beginning in the mid 1980s, was characterized by a decline in the volatility of output growth and inflation among the industrialized countries. The term itself was first used by Stock and Watson (2003). Economists have long studied what triggered the decline in volatility and have pointed out several main factors. An important research strand points to structural changes in the economy, such as a decline of volatility in the goods-producing sector through better inventory controls and developments in the financial sector and government spending (McConnell2000, Blanchard2001, Stock2003, Kim2004, Davis2008). While many believed that monetary policy was only 'lucky' in terms of its reaction towards inflation and exogenous shocks (Stock2003, Primiceri2005, Sims2006, Gambetti2008), others reveal a more complex picture of the story. Rule-based monetary policy (Taylor1993) that incorporates inflation targeting (Svensson1999) has been identified as a major source of inflation stabilization by increasing transparency (Clarida2000, Davis2008, Benati2009, Coibion2011). Apart from that, the mechanics of monetary policy transmission have changed. Giannone et al. (2008) compare the pre-Great Moderation era with the Great Moderation and find that the economy's reaction towards monetary shocks has decreased. This finding is supported by Boivin et al. (2011). Similarly, Herrera and Pesavento (2009) show that monetary policy during the Volcker-Greenspan period was very effective in dampening the effects of exogenous oil price shocks on the economy, while this cannot be found for the period thereafter. Yet the subprime crisis unexpectedly hit worldwide economies and ended the era of the Great Moderation. Financial deregulation and innovation had given banks opportunities for excessive risk taking, weakened financial stability (Crotty2009, Calomiris2009) and led to the build-up of credit-driven asset price bubbles (SchularickTaylor2012). The Federal Reserve (FED), which was thought to be the omnipotent conductor of price stability and economic growth during the Great Moderation, failed to prevent a harsh crisis. Even more, it intensified the bubble with low interest rates following the Dotcom crisis of the early 2000s and misjudged the impact of its interventions (Taylor2009, Obstfeld2009). New results give a more detailed answer to the question of the latitude for monetary policy raised by Bernanke and suggest the existence of non-linearities in the transmission of monetary policy. Weise (1999), Garcia and Schaller (2002), Lo and Piger (2005), Mishkin (2009), Neuenkirch (2013) and Jannsen et al. (2015) find that monetary policy is more potent during times of financial distress and recessions. Its effectiveness during 'normal times' is much weaker or even insignificant. This prompts the question whether these non-linearities limit central banks' ability to lean against bubbles and financial imbalances (White2009, Walsh2009, Boivin2010, Mishkin2011).
Industrial companies mainly aim at increasing their profit. That is why they intend to reduce production costs without sacrificing quality. Furthermore, in the context of the 2020 energy targets, energy efficiency plays a crucial role. Mathematical modeling, simulation and optimization tools can contribute to the achievement of these industrial and environmental goals. For the process of white wine fermentation, there exists a huge potential for saving energy. In this thesis, mathematical modeling, simulation and optimization tools are customized to the needs of this biochemical process and applied to it. Two different models are derived that represent the process as it can be observed in real experiments. One model takes the growth, division and death behavior of the individual yeast cell into account; it consists of a partial integro-differential equation together with several ordinary integro-differential equations describing the development of the other substrates involved. The other model, described by ordinary differential equations, represents the growth and death behavior of the yeast concentration and the development of the other substrates involved. The more detailed model is investigated analytically and numerically: existence and uniqueness of solutions are studied and the process is simulated. These investigations initiate a discussion regarding the added benefit of this model compared to the simpler one. For optimization, the process is described by the less detailed model. The process is identified by a parameter and state estimation problem. The energy and quality targets are formulated in the objective function of an optimal control or model predictive control problem that controls the fermentation temperature; this means that the cooling during the process of wine fermentation is controlled. Parameter and state estimation with nonlinear economic model predictive control is applied in two experiments. For the first experiment, the optimization problems are solved by multiple shooting with a backward differentiation formula method for the discretization of the problem and a sequential quadratic programming method with a line search strategy and a Broyden-Fletcher-Goldfarb-Shanno update for the solution of the constrained nonlinear optimization problems. Different rounding strategies are applied to the resulting post-fermentation control profile. Furthermore, a quality assurance test is performed. The outcomes of this experiment are remarkable energy savings and tasty wine. For the next experiment, some modifications are made, and the optimization problems are solved by using direct transcription via orthogonal collocation on finite elements for the discretization and an interior-point filter line-search method for the solution of the constrained nonlinear optimization problems. The second experiment verifies the results of the first. This means that the use of this novel control strategy ensures energy conservation and reduces production costs. From now on, tasty white wine can be produced at a lower price and with a clearer conscience at the same time.
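To convey the flavor of the simpler model class, the following toy system (all equations and parameter values are invented for illustration; this is not the thesis model) simulates yeast growth, sugar consumption and ethanol formation under two constant temperature profiles:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy fermentation ODEs: yeast X grows on sugar S (Monod kinetics),
# produces ethanol E, and dies at high ethanol levels; the growth rate
# depends on the controlled fermentation temperature T.
def fermentation(t, z, T):
    X, S, E = z
    mu_max = 0.1 * np.exp(0.06 * (T - 15.0))   # growth speeds up with T (assumed)
    k_s, y_xs, y_es = 20.0, 0.1, 0.45          # invented yield/affinity constants
    k_d = 0.002 * E                            # ethanol-dependent death rate
    mu = mu_max * S / (k_s + S)
    dX = (mu - k_d) * X
    dS = -mu * X / y_xs
    dE = y_es * (-dS)
    return [dX, dS, dE]

z0 = [0.2, 200.0, 0.0]                         # g/L: yeast, sugar, ethanol
for T in (12.0, 18.0):                         # two constant temperature profiles
    sol = solve_ivp(fermentation, (0.0, 400.0), z0, args=(T,), max_step=1.0)
    print(f"T = {T} degC: residual sugar = {sol.y[1, -1]:.1f} g/L, "
          f"ethanol = {sol.y[2, -1]:.1f} g/L")
```

Repeated forward simulations of this kind are the basic building block on which the parameter and state estimation and the predictive control of the cooling temperature are built.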
Survey data can be viewed as incomplete or partially missing from a variety of perspectives, and there are different ways of dealing with such data in the prediction and estimation of economic quantities. In this thesis, we present two selected research contexts in which the prediction or estimation of economic quantities is examined under incomplete survey data.
These contexts are first the investigation of composite estimators in the German Microcensus (Chapters 3 and 4) and second extensions of multivariate Fay-Herriot (MFH) models (Chapters 5 and 6), which are applied to small area problems.
Composite estimators are estimation methods that take into account the sample overlap in rotating panel surveys such as the German Microcensus in order to stabilise the estimation of the statistics of interest (e.g. employment statistics). Due to the partial sample overlaps, information from previous samples is only available for some of the respondents, so the data are partially missing.
MFH models are model-based estimation methods that work with aggregated survey data in order to obtain more precise estimation results for small area problems compared to classical estimation methods. In these models, several variables of interest are modelled simultaneously. The survey estimates of these variables, which are used as input in the MFH models, are often partially missing. If the domains of interest are not explicitly accounted for in a sampling design, the sizes of the samples allocated to them can, by chance, be small. As a result, it can happen either that no estimates can be calculated at all or that the estimated values are not published by statistical offices because their variances are too large.
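For illustration, the univariate Fay-Herriot model that the MFH models generalize can be sketched in a few lines (a standard textbook construction with a deliberately crude moment estimator for the variance, run on simulated data; these are not the estimators developed in this thesis):

```python
import numpy as np

# Fay-Herriot: direct estimates y_i with known sampling variances D_i
# are shrunk toward a synthetic regression estimate x_i' beta.
def fay_herriot(y, X, D):
    # 1) crude moment estimator for the random-effect variance sigma_u^2
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols
    sigma2_u = max(0.0, (resid @ resid - D.sum()) / (len(y) - X.shape[1]))
    # 2) GLS estimate of beta with weights 1 / (sigma_u^2 + D_i)
    w = 1.0 / (sigma2_u + D)
    Xw = X * w[:, None]
    beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    # 3) EBLUP: shrinkage between direct and synthetic estimator
    gamma = sigma2_u / (sigma2_u + D)
    return gamma * y + (1.0 - gamma) * (X @ beta), gamma

rng = np.random.default_rng(1)
m = 40                                         # number of small areas
X = np.column_stack([np.ones(m), rng.normal(size=m)])
theta = X @ np.array([2.0, 1.0]) + rng.normal(0, 0.5, m)   # true area means
D = rng.uniform(0.1, 2.0, m)                   # known sampling variances
y = theta + rng.normal(0, np.sqrt(D))          # direct (survey) estimates
eblup, gamma = fay_herriot(y, X, D)
print("mean shrinkage weight:", gamma.mean())
print("MSE direct:", np.mean((y - theta)**2),
      " MSE EBLUP:", np.mean((eblup - theta)**2))
```

The MFH extensions replace the scalar shrinkage by a matrix-valued one that borrows strength across several correlated target variables at once, which is precisely where partially missing input estimates become an issue.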
Exposure to fine and ultra-fine environmental particles is still a problem of concern in many industrialized parts of the world, and the intensified use of nanotechnology may further increase exposure to small particles. Air pollution has been recognized for many years as a critical problem in Western countries, which has led to rigorous regulation of air quality and the introduction of strict guidelines. However, the upper thresholds for particulates in ambient air recommended by the World Health Organization are often exceeded several times over in newly industrialized countries. Such high levels of air pollution have the potential to induce adverse effects on human health. The response triggered by air pollutants is not limited to local effects in the respiratory system but is often systemic, resulting in endothelial dysfunction or atherosclerotic disease. The link between air pollution and cardiovascular disease is now accepted by the scientific community, but the underlying mechanisms responsible for the pro-atherogenic potential still need to be unraveled in detail. Based on the results from in vivo and in vitro studies, the production of reactive oxygen species due to exposure to particles is the most important mechanism to explain the observed adverse effects. However, the doses that were applied in many in vivo and in vitro studies are far beyond the range of what humans are exposed to, and there is a need for more realistic exposure studies. Complex in vitro coculture systems may be valuable tools to study particle-induced processes and to extrapolate effects of particles on the lung. One of the objectives of this PhD thesis was the establishment and further improvement of a complex coculture system initially described by Alfaro-Moreno et al. [1]. The system is composed of an alveolar type-II cell line (A549), differentiated macrophage-like cells (THP-1), mast cells (HMC-1) and endothelial cells (EA.hy 926), seeded in a 3D orientation on a microporous membrane to mimic the cell response of the alveolar surface in vitro in conjunction with native aerosol exposure (Vitrocell™ chamber). The tetraculture system was carefully characterized to ensure its performance and the repeatability of results. The spatial distribution of the cells in the tetraculture was analyzed by confocal laser scanning microscopy (CLSM), showing a confluent layer of endothelial and epithelial cells on both sides of the Transwell™. Macrophage-like cells and mast cells can be found on top of the epithelial cells. The latter cells formed colonies under submerged conditions, which disappeared at the air-liquid interface (ALI). The Vitrocell™ aerosol exposure system did not significantly influence viability. Using this system, cells were exposed to an aerosol of 50 nm SiO2-Rhodamine nanoparticles (NPs) in PBS. The distribution of the NPs in the tetraculture after exposure was evaluated by CLSM. Fluorescence from internalized particles was detected in CD11b-positive THP-1 cells only. Furthermore, all cell lines were found to be able to respond to xenobiotic model compounds, such as benzo[a]pyrene (B[a]P) or 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), with the upregulation of CYP1 mRNA. With this tetraculture system, the response of the endothelial part of the alveolar barrier was studied in vitro in a still realistic exposure scenario representing the conditions of a polluted situation without direct exposure of endothelial cells.
After exposure to diesel exhaust particulate matter (DEPM), the expression of different anti-oxidant target genes and inflammatory genes such as NAD(P)H dehydrogenase quinone 1 (NQO1), superoxide dismutase 1 (SOD1) and heme oxygenase 1 (HMOX1), as well as the nuclear translocation of nuclear factor erythroid-derived 2 (Nrf2), was evaluated. In addition, the potential of DEPM to induce the upregulation of CYP1A1 mRNA in the endothelium was analyzed. DEPM exposure did not lead to an upregulation of the anti-oxidant or inflammatory target genes, but to a clear nuclear translocation of Nrf2. The endothelial cells also responded to the DEPM treatment with the upregulation of CYP1A1 mRNA and nuclear translocation of the aryl hydrocarbon receptor (AhR). Overall, DEPM triggered a response in the endothelial cells after indirect exposure of the tetraculture system to low doses of DEPM, underlining the sensitivity of ALI exposure systems. The use of the tetraculture together with the native aerosol exposure equipment may finally lead to a more realistic judgment regarding the hazard of new compounds and/or new nano-scaled materials in the future. For the first time, it was possible to study the response of the endothelial cells of the alveolar barrier in vitro in a realistic exposure scenario, avoiding direct exposure of endothelial cells to high amounts of particulates.
Even though in most cases time is a good metric to measure the costs of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data which is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load the data from memory. The topic of this thesis is a new metric to measure the costs of algorithms, called memory distance, which can be seen as an abstraction of the aspect just mentioned. We show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Further, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory distance complexity classes.
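The discrepancy the thesis starts from is easy to reproduce (a small illustrative experiment, not taken from the thesis):

```python
import time
import numpy as np

# Both sweeps sum the same n*n elements, so classical operation counting
# assigns them identical cost. The row sweep walks contiguous memory
# (cache-friendly); the column sweep strides through memory
# (cache-hostile) and thus has a much larger memory distance.
n = 3000
A = np.random.rand(n, n)                 # C order: rows are contiguous

t0 = time.perf_counter()
s_rows = sum(A[i, :].sum() for i in range(n))
t1 = time.perf_counter()
s_cols = sum(A[:, j].sum() for j in range(n))
t2 = time.perf_counter()

print(f"row sweep:    {t1 - t0:.3f} s")
print(f"column sweep: {t2 - t1:.3f} s  (same arithmetic, worse locality)")
```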
This work is concerned with the numerical solution of optimization problems that arise in the context of ground water modeling. Both ground water hydraulic and quality management problems are considered. The considered problems are discretized problems of optimal control that are governed by discretized partial differential equations. Aspects of special interest in this work are inaccurate function evaluations and the ensuing numerical treatment within an optimization algorithm. Methods for noisy functions are appropriate for the considered practical application. Also, block preconditioners are constructed and analyzed that exploit the structure of the underlying linear system. Specifically, KKT systems are considered, and the preconditioners are tested for use within Krylov subspace methods. The project was financed by the foundation Stiftung Rheinland-Pfalz für Innovation and carried out in joint work with TGU GmbH, a company of consulting engineers for ground water and water resources.
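To illustrate the kind of construction involved, the following sketch builds a generic block-diagonal preconditioner for a synthetic saddle-point (KKT) system and uses it inside MINRES (an illustrative setup with invented matrices and a cheap Schur complement surrogate; these are not the preconditioners developed in this project):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres, LinearOperator, splu

# KKT system [[A, B^T], [B, 0]]: A is SPD (e.g. a discrete diffusion
# operator), B a constraint matrix; the system itself is indefinite.
n, m = 400, 40
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * (n + 1)
B = sp.random(m, n, density=0.05, random_state=0, format="csc") + sp.eye(m, n, format="csc")
K = sp.bmat([[A, B.T], [B, None]], format="csc")
rhs = np.ones(n + m)

# Block-diagonal preconditioner diag(A, S_hat), where the exact Schur
# complement B A^{-1} B^T is replaced by the surrogate B diag(A)^{-1} B^T.
A_lu = splu(A)
S_hat = (B @ sp.diags(1.0 / A.diagonal()) @ B.T).tocsc()
S_lu = splu(S_hat)

def apply_prec(r):
    return np.concatenate([A_lu.solve(r[:n]), S_lu.solve(r[n:])])

M = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, rhs, M=M, maxiter=500)
print("converged:", info == 0, " residual:", np.linalg.norm(K @ x - rhs))
```

The design question studied in such work is precisely how good the Schur complement surrogate must be, and how robust the resulting Krylov iteration remains when the function evaluations entering the system are noisy.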
Matching problems with additional resource constraints are generalizations of the classical matching problem. The focus of this work is on matching problems with two types of additional resource constraints: the couple constrained matching problem and the level constrained matching problem. The first is a matching problem with an additional set of equality constraints imposed on it. Each constraint demands that, for a given pair of edges, either both edges are in the matching or neither of them is. The second is a matching problem with a single additional equality constraint. This constraint demands that an exact number of edges in the matching are so-called on-level edges. In a bipartite graph with fixed indices of the nodes, these are the edges whose end-nodes have the same index. As a central result concerning the couple constrained matching problem, we prove that this problem is NP-hard, even on bipartite cycle graphs. Concerning the complexity of the level constrained perfect matching problem, we show that it is polynomially equivalent to three other combinatorial optimization problems from the literature. For different combinations of fixed and variable parameters of one of these problems, the restricted perfect matching problem, we investigate their effect on the complexity of the problem. Further, the complexity of the assignment problem with an additional equality constraint is investigated. In a central part of this work we bring couple constraints into connection with a level constraint. We introduce the couple and level constrained matching problem with on-level couples, which is a matching problem with a special case of couple constraints together with a level constraint imposed on it. We prove that the decision version of this problem is NP-complete. This shows that the level constraint can be sufficient to make a polynomially solvable problem NP-hard when imposed on that problem. This work also deals with the polyhedral structure of resource constrained matching problems. For the polytope corresponding to the relaxation of the level constrained perfect matching problem, we develop a characterization of its non-integral vertices. We prove that for any given non-integral vertex of the polytope, a corresponding inequality which separates this vertex from the convex hull of integral points can be found in polynomial time. Regarding the calculation of solutions of resource constrained matching problems, two new algorithms are presented. We develop a polynomial approximation algorithm for the level constrained matching problem on level graphs, which returns solutions whose size is at most one less than the size of an optimal solution. We then describe the Objective Branching Algorithm, a new algorithm for exactly solving the perfect matching problem with an additional equality constraint. The algorithm makes use of the fact that the weighted perfect matching problem without an additional side constraint is polynomially solvable. In the Appendix, experimental results of an implementation of the Objective Branching Algorithm are listed.
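To illustrate why the polynomial solvability of the unconstrained problem is so useful here, the following sketch attacks a small instance with a generic Lagrangian-style parametric search (explicitly not the Objective Branching Algorithm of this thesis; the multiplier search can miss the target count due to a duality gap, which is exactly where a branching step would become necessary):

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(3)

# Perfect matching with the side constraint "exactly k on-level edges":
# perturb on-level edge weights by a multiplier lam and bisect on lam,
# calling the polynomial weighted perfect matching oracle each time.
n, k = 8, 2
G = nx.complete_bipartite_graph(n, n)
for u, v in G.edges():
    G[u][v]["w"] = rng.uniform(1.0, 10.0)
    G[u][v]["on_level"] = (v - n == u)     # edge between equal-index nodes

def best_matching(lam):
    for u, v in G.edges():
        G[u][v]["weight"] = G[u][v]["w"] + (lam if G[u][v]["on_level"] else 0.0)
    M = nx.max_weight_matching(G, maxcardinality=True)
    n_on = sum(G[u][v]["on_level"] for u, v in M)
    value = sum(G[u][v]["w"] for u, v in M)
    return n_on, value

lo, hi = -20.0, 20.0                        # bisection on the multiplier:
for _ in range(60):                         # larger lam rewards on-level edges
    lam = 0.5 * (lo + hi)
    n_on, value = best_matching(lam)
    if n_on == k:
        break
    lo, hi = (lam, hi) if n_on < k else (lo, lam)
print(f"on-level edges: {n_on} (target {k}), matching weight: {value:.2f}")
```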
Service innovation has increasingly gained acknowledgement as a contributor to economic growth and well-being. Despite this relevance in practice, service innovation is still a developing research field. To advance the literature on service innovation, this work analyzes in a qualitative study how firms manage service innovation activities in their organization differently. In addition, it evaluates in a quantitative study the influence of top management commitment and corporate service innovativeness on the service innovation capabilities of a firm and their implications for firm-level performance. Accordingly, the two overall research questions of this dissertation are: 1.) How and why do firms manage service innovation activities in their organization differently? 2.) What influence do top management commitment and corporate service innovativeness have on the service innovation capabilities of a firm, and what are the implications for firm-level performance? To address the first research question, it is investigated how firms manage service innovation activities in their organization and by whom and how service innovations are developed. Moreover, it is examined why firms implement their service innovation activities differently. To this end, a qualitative empirical study was conducted comprising 22 semi-structured interviews with 15 firms in the sectors of construction, financial services, IT services, and logistics. Addressing the second research question, the aim is to improve the understanding of factors that enhance firm-level performance through service innovations. Deploying a dynamic capabilities perspective, a quantitative study is performed which underlines the importance of service innovation capabilities. More specifically, a theoretical framework is developed that proposes a positive relationship of top management commitment and corporate service innovativeness with service innovation capabilities, and a positive relationship between service innovation capabilities and the firm-level performance indicators market performance, competitive advantage, and efficiency. A dual-respondent survey of 87 companies from the sectors construction, financial services, IT services, and logistics was conducted to test the proposed theoretical framework by applying partial least squares structural equation modeling (PLS-SEM).
In recent years, the establishment of new makerspaces in Germany has increased significantly. The underlying phenomenon of the Maker Movement is a cultural and technological movement focused on making physical and digital products using open-source principles, collaborative production, and individual empowerment. Because of its potential to democratize the innovation and production process, to empower individuals and communities, and to enable innovators to solve problems at the local level, the Maker Movement has received considerable attention in recent years. Despite numerous indicators of its growth, little is known about the phenomenon and its individual members, especially in Germany. Initial research suggests that the Maker Movement holds great potential for innovation and entrepreneurship. However, there is still a gap in understanding how makers discover, evaluate and exploit entrepreneurial opportunities. Moreover, there is still controversy, both among policymakers and within the maker community itself, about the impact the Maker Movement has and can have on innovation and entrepreneurship in the future. This dissertation uses a mixed-methods approach to explore these questions. In addition to a quantitative analysis of maker characteristics, the results show that social impact, market size, and property rights have significant effects on the evaluation of entrepreneurial opportunities. The findings of this dissertation expand research on the Maker Movement and offer multiple implications for practice. The dissertation provides the first quantitative data on makers in makerspaces in Germany, their characteristics and motivations. In particular, the relationship between the Maker Movement and entrepreneurship is explored in depth for the first time. This is complemented by the presentation of different identity profiles of the individuals involved. In this way, policymakers can develop a better understanding of the movement, its personalities and values, and consider them in initiatives and formats.
The study at hand deals with madness as it is represented in English Canadian fiction. The topic is particularly interesting and fruitful for analysis because the ways madness has been defined, understood, described, judged and handled differ quite profoundly from society to society and from era to era, and because the language, ideas and associations surrounding insanity are both strongly culture-relative and shifting; madness as a theme of myth and literature has therefore always been an excellent vehicle to mirror the assumptions and arguments, the aspirations and nostalgia, the beliefs and values, hopes and fears of its age and society. Thus, while the overall intent of this study is to elucidate some discernible patterns of structure and style which accompany the use of madness in Canadian literature, to investigate the varying sorts of portrayal and the conventions of presentation, to interpret the use of madness as a literary device and to highlight the different statements which are made, the continuity, variation, and changes in the theme of madness provide an informing principle in terms of certain Canadian experiences and perceptions. By examining madness as it presents itself in Canadian literature and considering the respective explorations of the deranged mind within their historical context, I hope to demonstrate that literary interpretations of madness both reflect and question cultural, political, religious and psychological assumptions of their times and that certain symptoms or usages are characteristic of certain periods. Such an approach, it is hoped, might not only contribute towards an assessment of the wealth of associations which surround madness and the ambivalence with which it is viewed, but also shed some light on the Canadian imagination. As such, this study can be considered not only a history of literary madness, but also a history of Canadian society and the Canadian mind.
This thesis considers the general task of computing a partition of a set of given objects such that each set of the partition has a cardinality of at least a fixed number k. Among such partitions, which we call k-clusters, the objective is to find the k-cluster which minimises a certain cost derived from a given pairwise difference between objects which end up in the same set. As a first step, this thesis introduces a general problem, denoted by (||.||,f)-k-cluster, which models the task of finding a k-cluster of minimum cost given by an objective function computed with respect to specific choices for the cost functions f and ||.||. In particular, this thesis considers three different choices for f and three different choices for ||.||, which results in a total of nine variants of the general problem. Especially with the idea of using the concept of parameterised approximation, we first investigate the role of the lower bound on the cluster cardinalities and find that k is not a suitable parameter, due to remaining NP-hardness even for the restriction to the constant 3. The reductions presented to show this hardness yield the even stronger result that polynomial-time approximations with some constant performance ratio for any of the nine variants of (||.||,f)-k-cluster require a restriction to instances for which the pairwise distance on the objects satisfies the triangle inequality. For this restriction to what we informally refer to as metric instances, constant-factor approximation algorithms for eight of the nine variants of (||.||,f)-k-cluster are presented. While two of these algorithms yield the provably best approximation ratio (assuming P != NP), others can only guarantee a performance which depends on the lower bound k. With the positive effect of the triangle inequality and applications to facility location in mind, we discuss the further restriction to the setting where the given objects are points in the Euclidean metric space. Considering the computational hardness caused by high dimensionality of the input for other related problems (curse of dimensionality), we check whether this is also the source of intractability for (||.||,f)-k-cluster. However, NP-hardness remains even when restricted to small constant dimensionality, which disproves this hypothesis. We then use parameterisation to develop approximation algorithms for (||.||,f)-k-cluster without restriction to metric instances. In particular, we discuss structural parameters which reflect how much the given input differs from a metric. This idea results in parameterised approximation algorithms with parameters such as the number of conflicts (our name for pairs of objects for which the triangle inequality is violated) or the number of conflict vertices (objects involved in a conflict). The performance ratios of these parameterised approximations are in most cases identical to those of the approximations for metric instances. This shows that for most variants of (||.||,f)-k-cluster efficient and reasonable solutions are also possible for non-metric instances.
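As a concrete illustration of one possible variant (with the hypothetical choices of f as the sum of pairwise distances within a set and ||.|| as the sum over sets), the following exhaustive reference implementation finds a minimum-cost k-cluster on a toy instance. It is exponential in the number of objects and only meant to pin down the objective, not to compete with the approximation algorithms developed in the thesis.

```python
# Toy reference implementation (not from the thesis): exhaustively find a
# k-cluster, i.e. a partition into sets of cardinality >= k, minimising the
# total pairwise distance inside the sets. Tiny inputs only.
from itertools import combinations

def partitions(items):
    """Yield all set partitions of the list `items`."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]

def cost(block, d):
    return sum(d[a][b] for a, b in combinations(block, 2))

def best_k_cluster(d, k):
    best, best_cost = None, float("inf")
    for part in partitions(list(range(len(d)))):
        if all(len(block) >= k for block in part):
            c = sum(cost(block, d) for block in part)
            if c < best_cost:
                best, best_cost = part, c
    return best, best_cost

# symmetric toy distance matrix for 6 objects (not metric in general)
d = [[0, 1, 9, 9, 9, 9],
     [1, 0, 9, 9, 9, 9],
     [9, 9, 0, 1, 2, 9],
     [9, 9, 1, 0, 1, 9],
     [9, 9, 2, 1, 0, 9],
     [9, 9, 9, 9, 9, 0]]
print(best_k_cluster(d, k=2))   # optimal partition and its cost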
Food waste is the origin of major social and environmental issues. In industrial societies, domestic households are the biggest contributors to this problem. But why do people waste food although they buy and value it? Answering this question is essential for designing effective interventions against food waste. So far, however, many interventions have not been based on theoretical knowledge. Integrating the food waste literature and ambivalence research, we propose that domestic food waste can be understood via the concept of ambivalence: the simultaneous presence of positive and negative associations towards the same attitude object. In support of this notion, we demonstrated in three pre-registered experiments that people experienced ambivalence towards non-perishable food products with expired best-before dates. The experience of ambivalence was in turn associated with an increased willingness to waste food. However, two informational interventions aiming to prevent people from experiencing ambivalence did not work as intended (Experiment 3). We hope that the outlined conceptualization inspires theory-driven research on why and when people dispose of food and on how to design effective interventions.
Long-Term Memory Updating: The Reset-of-Encoding Hypothesis in List-Method Directed Forgetting
(2017)
People's memory for new information can be enhanced by cuing them to forget older information, as is shown in list-method directed forgetting (LMDF). In this task, people are cued to forget a previously studied list of items (list 1) and to learn a new list of items (list 2) instead. Such cuing typically enhances memory for the list 2 items and reduces memory for the list 1 items, which reflects effective long-term memory updating. This review focuses on the reset-of-encoding (ROE) hypothesis as a theoretical explanation of the list 2 enhancement effect in LMDF. The ROE hypothesis is based on the finding that encoding efficacy typically decreases with the number of encoded items, and it assumes that providing a forget cue after study of some items (e.g., list 1) resets the encoding process and makes encoding of subsequent items (e.g., early list 2 items) as effective as encoding of previously studied (e.g., early list 1) items. The review provides an overview of current evidence for the ROE hypothesis. The evidence arose from recent behavioral, neuroscientific, and modeling studies that examined LMDF at both the item and the list level. The findings support the view that ROE plays a critical role in the list 2 enhancement effect in LMDF. Alternative explanations of the effect and the generalizability of ROE to other experimental tasks are discussed.
Background: The growing production and use of engineered silver nanoparticles (AgNP) in industry and private households make increasing concentrations of AgNP in the environment unavoidable. Although the harmful effects of AgNP on pivotal bacteria-driven soil functions are already known, information about the impact of AgNP on the soil bacterial community structure is scarce. Hence, the aim of this study was to reveal the long-term effects of AgNP on major soil bacterial phyla in a loamy soil. The study was conducted as a laboratory incubation experiment over a period of 1 year using a loamy soil and AgNP concentrations ranging from 0.01 to 1 mg AgNP/kg soil. Effects were quantified using taxon-specific 16S rRNA qPCR.
Results: Short-term exposure to AgNP at the environmentally relevant concentration of 0.01 mg AgNP/kg caused significant positive effects on Acidobacteria (44.0%), Actinobacteria (21.1%) and Bacteroidetes (14.6%), whereas the beta-Proteobacteria population was reduced by 14.2% relative to the control (p ≤ 0.05). After 1 year of exposure, 0.01 mg AgNP/kg diminished Acidobacteria (p = 0.007), Bacteroidetes (p = 0.005) and beta-Proteobacteria (p < 0.001) by 14.5, 10.1 and 13.9%, respectively. Actino- and alpha-Proteobacteria were statistically unaffected by AgNP treatments after 1 year of exposure. Furthermore, a statistically significant regression and correlation analysis of silver toxicity against exposure time confirmed loamy soils as a sink for silver nanoparticles and their concomitant silver ions.
Conclusions: Even very low concentrations of AgNP may impair autotrophic ammonia oxidation (nitrification), organic carbon transformation and chitin degradation in soils by exerting harmful effects on the bacterial phyla responsible for these functions.
Flexibility and spatial mobility of labour are central characteristics of modern societies which contribute not only to higher overall economic growth but also to a reduction of interregional employment disparities. For these reasons, there is the political will in many countries to expand labour market areas, resulting especially in an overall increase in commuting. The picture of the various unintended long-term consequences of commuting on individuals is, however, relatively unclear. Therefore, in recent years, the journey to work has gained high attention, especially in the study of health and well-being. Empirical analyses of how commuting may affect health and well-being based on longitudinal as well as European data are nevertheless rare. The principal aim of this thesis is thus to address this question with regard to Germany using data from the Socio-Economic Panel. Chapter 2 empirically investigates the causal impact of commuting on absence from work due to sickness-related reasons. Whereas an exogenous change in commuting distance does not affect the number of absence days of those individuals who commute short distances to work, it increases the number of absence days of those employees who commute middle (25–49 kilometres) or long distances (50 kilometres and more). Moreover, our results highlight that commuting may deteriorate an individual's health. However, this effect is not sufficient to explain the observed impact of commuting on absence from work. Chapter 3 explores the relationship between commuting distance and height-adjusted weight and sheds some light on the mechanisms through which commuting might affect individual body weight. We find no evidence that commuting leads to excess weight. Compensating health behaviour of commuters, especially healthy dietary habits, could explain the absence of a relationship between commuting and height-adjusted weight. In Chapter 4, a multivariate probit approach is used to estimate recursive systems of equations for commuting and health-related behaviours. Controlling for the potential endogeneity of commuting, the results show that long-distance commutes significantly decrease the propensity to engage in health-related activities. Furthermore, unobservable individual heterogeneity can influence both the decision to commute and healthy lifestyle choices. Chapter 5 investigates the relationship between commuting and several cognitive and affective components of subjective well-being. The results suggest that commuting is related to lower levels of satisfaction with family life and leisure time, which can largely be ascribed to changes in daily time-use patterns influenced by the work commute.
The development of our society has contributed to an increased occurrence of emerging substances (pesticides, pharmaceuticals, personal care products, etc.) in wastewater. Because of their potential hazard to ecosystems and humans, Wastewater Treatment Plants (WWTPs) need to adapt to better remove these compounds. Technology or policy development should, however, comply with sustainable development, e.g. based on Life Cycle Assessment (LCA) metrics. Nevertheless, the reliability or consistency of LCA results can sometimes be debatable. The main objective of this work was to explore how LCA can better support the implementation of innovative wastewater treatment options, in particular by including removal benefits. The method was applied to support solutions for pharmaceuticals elimination from wastewater regarding (i) UV technology design, (ii) the choice of advanced technology and (iii) centralized or decentralized treatment policy. The assessment approach followed by previous authors, based on net impact calculation, seemed very promising for considering both the environmental effects induced by treatment plant operation and the environmental benefits obtained from pollutant removal. It was therefore applied to compare UV configuration types, and the LCA outcomes were consistent with degradation kinetics analysis. For the comparison of advanced technologies and policy scenarios, the common practice (net impacts based on the EDIP method) was compared to other assessments in order to better consider elimination benefits. First, the USEtox consensus model was applied for the avoided (eco)toxicity impacts, in combination with the recent ReCiPe method for generated impacts. Then, an eco-efficiency indicator (EFI) was developed to weigh the treatment efforts (generated impacts based on the EDIP and ReCiPe methods) by the average removal efficiency, thereby overcoming (eco)toxicity uncertainty issues. In total, the four types of comparative assessment showed the same trends: (i) ozonation and activated carbon perform better than UV irradiation, and (ii) no clear advantage emerged between the policy scenarios. However, it cannot be concluded that advanced treatment of pharmaceuticals is unnecessary, because other criteria should be considered (risk assessment, bacterial resistance, etc.) and large uncertainties were embedded in the calculations. Indeed, a significant part of this work was dedicated to discussing the uncertainty and limitations of the LCA outcomes. At the inventory level, it was difficult to model technology operation at the development stage. For impact assessment, the newly developed characterization factors for pharmaceuticals' (eco)toxicity showed large uncertainties, mainly due to the lack of data and quality of toxicity tests. The use of information made available under the REACH framework to develop characterization factors for detergent ingredients tried to cope with this issue, but the benefits were limited due to the mismatch of information between REACH and the USEtox method. The highlighted uncertainties were treated with sensitivity analyses to understand their effects on the LCA results. This research work finally presents perspectives on the use of transparently generated data (technology inventory and (eco)toxicity factors) and on the further development of the EFI indicator. An accent is also placed on increasing the reliability of LCA outcomes, in particular through the implementation of advanced techniques for uncertainty management. To conclude, innovative technology/product development (e.g. based on a circular economy approach) needs the involvement of all types of actors and the support of sustainability metrics.
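The net-impact logic and the eco-efficiency idea can be illustrated numerically; the numbers below and the exact form of the EFI-style ratio are illustrative assumptions, not values or formulas taken from the study.

```python
# Toy calculation of the net-impact logic: treatment generates impacts
# through energy/material use but avoids (eco)toxicity impacts by removing
# pharmaceuticals. All figures are invented for illustration.
generated_impact = 12.0    # impact points from operating the treatment
avoided_impact = 20.0      # impact points avoided by pollutant removal
net_impact = generated_impact - avoided_impact
print(f"net impact: {net_impact:+.1f} (negative = net environmental benefit)")

# an eco-efficiency style ratio: treatment effort per unit of removal,
# sidestepping uncertain (eco)toxicity characterisation factors
removal_efficiency = 0.8   # assumed average fraction of pharmaceuticals removed
efi_like = generated_impact / removal_efficiency
print(f"EFI-like indicator: {efi_like:.1f} impact points per unit removal")
```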
Global human population growth is associated with many problems, such as food and water provision, political conflicts, spread of diseases, and environmental destruction. The mitigation of these problems is mirrored in several global conventions and programs, some of which, however, are conflicting. Here, we discuss the conflicts between biodiversity conservation and disease eradication. Numerous health programs aim at eradicating pathogens, and many focus on the eradication of vectors, such as mosquitos or other parasites. As a case study, we focus on the "Pan African Tsetse and Trypanosomiasis Eradication Campaign," which aims at eradicating a pathogen (Trypanosoma) as well as its vector, the entire group of tsetse flies (Glossinidae). As the distribution of tsetse flies largely overlaps with the African hotspots of freshwater biodiversity, we argue for a strong consideration of environmental issues when applying vector control measures, especially the aerial applications of insecticides. Furthermore, we want to stimulate discussions on the value of species and whether full eradication of a pathogen or vector is justified at all. Finally, we call for a stronger harmonization of international conventions. Proper environmental impact assessments need to be conducted before control or eradication programs are carried out to minimize negative effects on biodiversity.
Up-to-date information about the type and spatial distribution of forests is an essential element in both sustainable forest management and environmental monitoring and modelling. The OpenStreetMap (OSM) database contains vast amounts of spatial information on natural features, including forests (landuse=forest). The OSM data model includes descriptive tags for its contents, e.g., the leaf type for forest areas (leaf_type=broadleaved). Although the leaf type tag is common, the vast majority of forest areas are tagged with the leaf type mixed, amounting to 87% of the total area of landuse=forest in the OSM database. These areas comprise an important information source for deriving and updating forest type maps. In order to leverage this information content, a methodology for the stratification of leaf types inside these areas has been developed, using image segmentation on aerial imagery and subsequent classification of leaf types. The presented methodology achieves an overall classification accuracy of 85% for the leaf types needleleaved and broadleaved in the selected forest areas. The resulting stratification demonstrates that, through approaches such as the one presented, the derivation of forest type maps from OSM would be feasible with an extended and improved methodology. It also suggests that an improved methodology might be able to provide updates of leaf type to the OSM database with contributor participation.
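A strongly simplified sketch of the classification step might look as follows, assuming per-segment spectral and texture features have already been extracted from the aerial imagery; the segmentation itself, the real feature set and the study's classifier settings are not reproduced here, and the data below are synthetic.

```python
# Sketch only: classify image segments as broadleaved vs. needleleaved from
# precomputed per-segment features (synthetic stand-ins for band statistics,
# texture measures, etc.).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((500, 8))      # one feature row per image segment
y = rng.integers(0, 2, 500)            # 0 = broadleaved, 1 = needleleaved

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f}")

# segments predicted inside an OSM area tagged leaf_type=mixed could then
# be aggregated to stratify that area into broadleaved/needleleaved parts
clf.fit(X, y)
new_segments = rng.standard_normal((10, 8))
print(clf.predict(new_segments))
```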
Digital libraries have become a central aspect of our lives. They provide us with immediate access to an amount of data that would have been unthinkable in the past. Support by computers and the ability to aggregate data from different libraries enable small projects to maintain large digital collections on various topics. A central aspect of digital libraries is the metadata, i.e., the information that describes the objects in the collection. Metadata are digital and can be processed and studied automatically. In recent years, several studies have considered different aspects of metadata. Many studies focus on finding defects in the data; specifically, locating errors related to the handling of personal names has drawn attention. In most cases these studies concentrate on the most recent metadata of a collection, for example looking for errors in the collection at day X. This is a reasonable approach for many applications. However, to answer questions such as when an error was added to the collection, we need to consider the history of the metadata itself. In this work, we study how the history of metadata can be used to improve the understanding of a digital library. To this end, we consider how digital libraries handle and store their metadata. Based on this information we develop a taxonomy to describe available historical data, i.e., data on how the metadata records changed over time. We develop a system that identifies changes to metadata over time and groups them in semantically related blocks. We found that historical metadata are often unavailable; however, we were able to apply our system to a set of large real-world collections. A central part of this work is the identification and analysis of changes to metadata which corrected a defect in the collection. These corrections are the accumulated effort to ensure the data quality of a digital library. We present a system that automatically extracts corrections of defects from the set of all modifications. We present test collections containing more than 100,000 test cases which we created by extracting defects and their corrections from DBLP. These collections can be used to evaluate automatic approaches for error detection. Furthermore, we use these collections to study properties of defects, concentrating on defects related to the person name problem. We show that many defects occur in situations where very little context information is available, which has major implications for automatic defect detection. We also show that properties of defects depend on the digital library in which they occur, and we briefly discuss how corrected defects can be used to detect hidden or future defects. Besides the study of defects, we show that historical metadata can be used to study the development of a digital library over time, and we present several such studies as examples. First, we describe the development of the DBLP collection over a period of 15 years; specifically, we study how the coverage of different computer science subfields changed over time and show that DBLP evolved from a specialized project to a collection that encompasses most parts of computer science. In another study we analyze the impact of user emails on defect corrections in DBLP. We show that these emails trigger a significant amount of error corrections, and based on these data we can draw conclusions on why users report a defective entry in DBLP.
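The core idea of extracting field-level changes from successive metadata snapshots can be sketched in a few lines; the record structure and names here are illustrative, not the system built in this work.

```python
# Minimal sketch: diff two snapshots of one metadata record into field-level
# changes, e.g. to later classify a change as a (person name) correction.
from dataclasses import dataclass

@dataclass
class Change:
    key: str          # record identifier
    field: str        # metadata field that changed
    old: str
    new: str

def diff_records(old: dict, new: dict, key: str):
    """Return the field-level changes between two versions of one record."""
    changes = []
    for field in sorted(set(old) | set(new)):
        before, after = old.get(field), new.get(field)
        if before != after:
            changes.append(Change(key, field, before, after))
    return changes

# e.g. a corrected person name between two snapshots of a publication record
v1 = {"author": "J. Smtih", "title": "On Metadata", "year": "2009"}
v2 = {"author": "J. Smith", "title": "On Metadata", "year": "2009"}
for c in diff_records(v1, v2, key="rec/hypothetical/Smith09"):
    print(f"{c.key}: {c.field}: {c.old!r} -> {c.new!r}")
```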
The main research question of this thesis was to set up a framework that allows for the identification of land use changes in drylands and reveals their underlying drivers. The concept of describing land cover change processes in a framework of global change syndromes was introduced by Schellnhuber et al. (1997). In a first step, the syndrome approach was implemented for semi-natural areas of the Iberian Peninsula based on time series analysis of the MEDOKADS archive. In the subsequent study the approach was expanded and adapted to other land cover strata. Furthermore, results of an analysis of the relationship between annual NDVI and rainfall data were incorporated to designate areas that show a significant relationship, indicating that at least part of the variability found in the NDVI time series was caused by precipitation. Additionally, a first step was taken towards the integration of socio-economic data into the analysis; population density changes between 1961 and 2008 were utilized to support the identification of processes related to land abandonment accompanied by the cessation of agricultural practices on the one hand and urbanization on the other. The main findings of the studies comprise three major land cover change processes caused by human interaction: (i) shrub and woody vegetation encroachment in the wake of land abandonment of marginal areas, (ii) intensification of non-irrigated and irrigated, intensively used fertile regions and (iii) urbanization trends along the coastline caused by migration and the increase of mass tourism. Land abandonment of cultivated fields and the abandonment of grazing areas in marginal mountainous areas often lead to the encroachment of shrubs and woody vegetation in the course of succession or reforestation. Whereas this cover change has positive effects concerning soil stabilization and carbon sequestration, the increase of biomass also involves negative consequences for ecosystem goods and services; these include decreased water yield as a result of increased evapotranspiration, increasing fire risk, decreasing biodiversity due to landscape homogenization, and loss of aesthetic value. Arable land in intensively used fertile zones of Spain was further intensified, including the expansion of irrigated arable land. The intensification of agriculture has also generated land abandonment in these areas because fewer people are needed in the agricultural labour sector due to mechanization. Urbanization effects due to migration and the growth of the tourism sector were mapped along the eastern Mediterranean coast. Urban sprawl was only partly detectable by means of the MEDOKADS archive, as the changes of urbanization are often too subtle to be detected by data with a spatial resolution of 1 km. This is in line with a comparison of a Landsat TM time series and the NOAA AVHRR archive for a study area in Greece, which showed that small-scale changes cannot be detected with this approach, even though they might be of high relevance for the local management of resources. This underlines the fact that land degradation processes are multi-scale problems and that data of several spatial and temporal scales are mandatory to build a comprehensive dryland observation system. Further land cover processes related to a decrease of greenness did not play an important role in the observation period. Thus, only few patches were identified, suggesting that no large-scale land degradation processes are taking place in the sense of a decline of primary productivity after disturbances.
Nevertheless, the land cover processes detected impact ecosystem functioning and, as the example of shrub encroachment shows, bear risks for the provision of goods and services, which can be valued as land degradation in the sense of a decline of important ecosystem goods and services. This risk is not confined to the affected ecosystem itself but can also impact adjacent ecosystems due to inter-linkages. In drylands, water availability is of major importance and the management of water resources is an important political issue. In view of climate change this topic will become even more important, because aridity in Spain has increased within the last decades and is likely to increase further. In addition, the land cover changes detected by the syndrome approach could even augment water scarcity problems. Whereas the water yield of marginal areas, which often serve as headwaters of rivers, decreases with increasing biomass, the water demand of agriculture and tourism is not expected to decline. In this context it will be of major importance to evaluate the trade-offs between different land uses and to take decisions that maintain the future functioning of the ecosystems for human well-being.
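The NDVI-rainfall screening mentioned above can be sketched as a per-pixel correlation test; the numbers below are synthetic, whereas the actual study worked on the MEDOKADS NDVI time series.

```python
# Toy sketch of the NDVI-rainfall screening: flag pixels whose annual NDVI
# correlates significantly with annual rainfall, i.e. pixels whose greenness
# variability is plausibly precipitation-driven. Synthetic data only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
years = 20
rain = rng.gamma(shape=4.0, scale=100.0, size=years)   # annual rainfall (mm)
# one rainfall-driven pixel and one rainfall-independent pixel
ndvi_driven = 0.001 * rain + rng.normal(0, 0.02, years)
ndvi_other = rng.normal(0.3, 0.05, years)

for name, ndvi in [("driven", ndvi_driven), ("other", ndvi_other)]:
    r, p = pearsonr(rain, ndvi)
    flag = "significant" if p < 0.05 else "not significant"
    print(f"pixel {name}: r = {r:+.2f}, p = {p:.3f} ({flag})")
```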
This doctoral thesis examines intergenerational knowledge transfer, its antecedents, as well as how participation in intergenerational knowledge transfer is related to the performance evaluation of employees. To answer these questions, this doctoral thesis builds on a literature review and quantitative research methods. A systematic literature study shows that empirical evidence on intergenerational knowledge transfer is limited. Building on prior literature, effects of various antecedents at the interpersonal and organizational level on intergenerational and intragenerational knowledge transfer are postulated. By surveying 444 trainees and trainers, this doctoral thesis also demonstrates that interpersonal antecedents affect how trainees participate in intergenerational knowledge transfer with their trainers. The results of this study thereby support the notion that interpersonal antecedents are relevant for intergenerational knowledge transfer, yet they also emphasize the implications attached to the assigned roles in knowledge transfer (i.e., whether one is a trainee or a trainer). Moreover, the results of an experimental vignette study reveal that participation in intergenerational knowledge transfer is linked to the performance evaluation of employees, yet depends on whether the employee is sharing or seeking knowledge. Overall, this doctoral thesis provides insights into this topic by covering a multitude of antecedents of intergenerational knowledge transfer, as well as how participation in intergenerational knowledge transfer may be associated with the performance evaluation of employees.
The ability to acquire knowledge helps humans to cope with the demands of the environment. Supporting knowledge acquisition processes is among the main goals of education. Empirical research in educational psychology has identified several processes through which prior knowledge affects learning. However, the majority of studies investigated cognitive mechanisms mediating between prior knowledge and learning and neglected that motivational processes might also mediate this influence. In addition, the impact of successful knowledge acquisition on patients' health has not been comprehensively studied. This dissertation aims at closing knowledge gaps on these topics with three studies. The first study is a meta-analysis that examined motivation as a mediator of individual differences in knowledge before and after learning. The second study investigated in greater detail the extent to which motivation mediated the influence of prior knowledge on knowledge gains in a sample of university students. The third study is a second-order meta-analysis synthesizing the results of previous meta-analyses on the effects of patient education on several health outcomes. The findings of this dissertation show that (a) motivation mediates individual differences in knowledge before and after learning; (b) interest and academic self-concept stabilize individual differences in knowledge more than academic self-efficacy, intrinsic motivation, and extrinsic motivation; (c) test-oriented instruction closes knowledge gaps between students; (d) students' motivation can be independent of prior knowledge in high-aptitude students; (e) knowledge acquisition affects motivational and health-related outcomes; and (f) evidence on prior knowledge and motivation can help develop effective interventions in patient education. The results of the dissertation provide insights into prerequisites, processes, and outcomes of knowledge acquisition. Future research should address covariates of learning and environmental impacts for a better understanding of knowledge acquisition processes.
Knowledge acquisition comprises various processes, each with its own dedicated research domain. Two examples are the relations between knowledge types and the influences of person-related variables. Furthermore, the transfer of knowledge is another crucial domain in educational research. I investigated these three processes through secondary analyses in this dissertation. Secondary analyses accommodate the breadth of each field and allow for more general interpretations. The dissertation includes three meta-analyses: The first meta-analysis reports findings on the predictive relations between conceptual and procedural knowledge in mathematics in a cross-lagged panel model. The second meta-analysis focuses on the mediating effects of motivational constructs on the relationship between prior knowledge and knowledge after learning. The third meta-analysis deals with the effect of instructional methods in transfer interventions on knowledge transfer in school students. These three studies provide insights into the determinants and processes of knowledge acquisition and transfer. Knowledge types are interrelated; motivation mediates the relation between prior and later knowledge; and interventions influence knowledge transfer. The results are discussed by examining six key insights that build upon the three studies. Additionally, practical implications, as well as methodological and content-related ideas for further research, are provided.
We study planned changes in protective routines after the COVID-19 pandemic: in a survey in Germany among >650 respondents, we find that the majority plans to use face masks in certain situations even after the end of the pandemic. We observe that this willingness is strongly related to the perception that there is something to be learned from East Asians’ handling of pandemics, even when controlling for perceived protection by wearing masks. Given strong empirical evidence that face masks help prevent the spread of respiratory diseases and given the considerable estimated health and economic costs of such diseases even pre-Corona, this would be a very positive side effect of the current crisis.
Digital technologies have become central to social interaction and accessing goods and services. Development strategies and approaches to governance have increasingly deployed self-labelled 'smart' technologies and systems at various spatial scales, often promoted as rectifying social and geographic inequalities and increasing economic and environmental efficiencies. These have also been accompanied by similarly digitalized commercial and non-profit offers, particularly within the sharing economy. Concern has grown, however, over possible inequalities linked to their introduction. In this paper we critically analyse the sharing economy's contribution to more inclusive, socially equitable and spatially just transitions. Conceptually, this paper brings together literature on sharing economies, smart urbanism and just transitions. Drawing on an explorative database of sharing initiatives within the cross-border region of Luxembourg and Germany, we discuss aspects of sustainability as they relate to distributive justice through spatial accessibility, intended benefits, and their operationalization. The regional analysis shows the diversity of sharing models, how they are appropriated in different ways, and how intent and operationalization matter in terms of potential benefits. Results emphasize the need for more fine-grained, qualitative research revealing who is, and is not, participating in and benefitting from sharing economies.
The present study covers the period from the late ninth to the early sixteenth century. Within this period, the late thirteenth to mid-fourteenth centuries marked the decisive turning point, shaped more by attitudes and actions among the Christian majority than among Jewish agents. Our findings indicate an intensification of anti-Jewish tendencies, rooted in religious developments in Western Christendom. Depending on circumstances, however, these tendencies had a widely varying impact across time and space. The frequent religious and ecclesiastical reform movements of Western Europe offer cases in point. In the 'German' Empire north of the Alps, the monastic reforms of Saint Maximin and Gorze were by no means confined to the realm of monasticism; they were essential in shaping the historical circumstances in which the foundations of Ashkenazic Judaism were laid in the tenth and early eleventh centuries. The concept of 'honor' was used by leading ecclesiastics such as Bishop Rüdiger of Speyer in 1084 to justify the settlement of Jews, but also, later on, by civic authorities such as those of Regensburg. It is significant for the long-term tendency, therefore, that the late-medieval expulsions from cities like Trier, Cologne, and Regensburg were eventually also legitimized by reference to the idea of honor.
Issues in Price Measurement
(2022)
This thesis focuses on issues in price measurement and consists of three chapters. Due to outdated weighting information, a Laspeyres-based consumer price index (CPI) is prone to accumulating upward bias. Therefore, chapter 1 introduces and examines simple and transparent revision approaches that retrospectively address the source of the bias. They provide a consistent long-run time series of the CPI and require no additional information. Furthermore, a coherent decomposition of the bias into the contributions of individual product groups is developed. In a case study, the approaches are applied to a Laspeyres-based CPI. The empirical results confirm the theoretical predictions. The proposed revision approaches are applicable not only to most national CPIs but also to other price-level measures such as the producer price index or the import and export price indices.
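The source of the bias can be made concrete with a stylized two-good example (the numbers are invented, and the chapter's actual revision formulas are not reproduced here): holding quantity weights fixed at the base period, the Laspeyres index ignores substitution away from goods whose prices rose, and therefore overstates the price increase relative to an index using updated weights.

```python
# Stylized illustration of Laspeyres upward bias with outdated weights.
p0 = {"bread": 1.0, "rice": 1.0}     # base-period prices
q0 = {"bread": 10,  "rice": 10}      # base-period quantities (old weights)
p1 = {"bread": 2.0, "rice": 1.0}     # bread doubled in price
q1 = {"bread": 4,   "rice": 16}      # consumers substituted towards rice

laspeyres = sum(p1[g] * q0[g] for g in p0) / sum(p0[g] * q0[g] for g in p0)
paasche   = sum(p1[g] * q1[g] for g in p0) / sum(p0[g] * q1[g] for g in p0)

print(f"Laspeyres (old weights): {laspeyres:.3f}")   # 1.500
print(f"Paasche (new weights):   {paasche:.3f}")     # 1.200
```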
Chapter 2 is dedicated to the measurement of import and export price indices, which are complicated by the impact of exchange rates. These indices are usually also compiled as Laspeyres-type indices; therefore, substitution bias is an issue, and the terms of trade (the ratio of the export and import price indices) are likely to be distorted as well. The underlying substitution bias accumulates over time. This chapter applies a simple and transparent retroactive correction approach that addresses the source of the substitution bias and produces meaningful long-run time series of import and export price levels and, therefore, of the terms of trade. Furthermore, an empirical case study demonstrates the efficacy and versatility of the correction approach.
Chapter 3 leaves the field of index revision and studies another issue in price measurement, namely the economic evaluation, in monetary terms, of digital products that have zero market prices. The chapter explores different methods for the economic valuation and pricing of free digital products and proposes an alternative way to calculate their economic value and a shadow price: the Usage Cost Model (UCM). The goal of the chapter is, first of all, to formulate a theoretical framework and to incorporate an alternative measure of the value of free digital products. An empirical application is also presented to demonstrate how the theoretical model works, and some conclusions on applicability are drawn at the end of the chapter.
Entrepreneurship has become an essential phenomenon all over the world because it is a major driving force behind the economic growth and development of a country. It is widely accepted that entrepreneurship development in a country creates new jobs, promotes healthy competition through innovation, and benefits the social well-being of individuals and societies. Policymakers in both developed and developing countries focus on entrepreneurship because it helps to alleviate impediments to economic development and social welfare. Therefore, policymakers and academic researchers consider the promotion of entrepreneurship essential for the economy, and research-based support is needed for the further development of entrepreneurship activities.
The impact of entrepreneurial activities on economic and social development varies from country to country, because the level of entrepreneurship activities varies from one region or country to another. To understand these variations, policymakers have investigated the determinants of entrepreneurship at different levels, such as the individual, industry, and country levels. Moreover, entrepreneurial behavior is influenced by various personal and environmental factors; these personal-level factors, however, cannot be separated from the surrounding environment.
The link between religion and entrepreneurship is well established and can be traced back to Weber (1930). Researchers have analyzed the relationship between religion and entrepreneurship from various perspectives, and the research on religion and entrepreneurship is diversified and scattered across disciplines. This dissertation seeks to explain the link between religion and entrepreneurship, specifically between the Islamic religion and entrepreneurship. Structurally, this dissertation comprises three parts. The first part consists of two chapters that discuss the definition and theories of entrepreneurship (Chapter 2) and the theoretical relationship between religion and entrepreneurship (Chapter 3).
The second part of this dissertation (Chapter 4) provides an overview of the field with the purpose of gaining a better understanding of its current state of knowledge and of bridging the different views and perspectives. To provide this overview, a systematic literature search was conducted, leading to a descriptive overview of the field based on 270 articles published in 163 journals. Subsequently, bibliometric methods are used to identify thematic clusters, the most influential authors and articles, and how they are connected.
The third part of this dissertation (Chapter 5) empirically evaluates the influence of Islamic values and Islamic religious practices on entrepreneurship intentions within the Islamic community. Using the theory of planned behavior as a theoretical lens, we also take into account that the relationship between religion and entrepreneurial intentions can be mediated by an individual's attitude towards entrepreneurship. A self-administered questionnaire was used to collect responses from a sample of 1,895 Pakistani university students. Structural equation modeling was used to perform a nuanced assessment of the relationship between Islamic values and practices and entrepreneurship intentions and to account for the mediating effect of attitude towards entrepreneurship.
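The mediation logic of the hypothesized model can be illustrated with a simplified sketch. Note that the dissertation estimates the model with structural equation modeling, whereas the toy example below uses plain OLS regressions on synthetic data, with the classic decomposition into an indirect path a*b (values -> attitude -> intention) and a direct path c'.

```python
# Simplified mediation sketch (synthetic data, OLS instead of SEM):
# does attitude mediate the effect of religious values on intention?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
values = rng.standard_normal(n)                      # predictor (X)
attitude = 0.5 * values + rng.standard_normal(n)     # mediator (M)
intention = 0.6 * attitude + rng.standard_normal(n)  # outcome (Y)

# path a: X -> M
a = sm.OLS(attitude, sm.add_constant(values)).fit().params[1]
# paths b and c': regress Y on X and M jointly
X_mat = sm.add_constant(np.column_stack([values, attitude]))
res = sm.OLS(intention, X_mat).fit()
c_prime, b = res.params[1], res.params[2]

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```

In a fully mediated setup like this synthetic one, the indirect effect a*b is substantial while the direct effect c' is close to zero, which mirrors the full mediation reported for religious practices in the dissertation.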
Research on religion and entrepreneurship has increased sharply during the last years and is scattered across various academic disciplines and fields. The analysis identifies and characterizes the most important publications, journals, and authors in the area and maps the analyzed religions and regions. The comprehensive overview of previous studies allows us to identify research gaps and to derive avenues for future research in a substantiated way. Moreover, this dissertation helps researchers to understand the field in its entirety, to identify relevant articles, and to uncover parallels and differences across religions and regions. In addition, the study reveals a lack of empirical research on specific religions and specific regions, which scholars can take into consideration when conducting empirical research.
Furthermore, the empirical analysis of the influence of Islamic religious values and practices shows that Islamic values serve as a guiding principle in shaping people's attitudes towards entrepreneurship in an Islamic community; they had an indirect influence on entrepreneurship intention through attitude. Similarly, the relationship between Islamic religious practices and the entrepreneurship intentions of students was fully mediated by the attitude towards entrepreneurship. This dissertation thus contributes to prior research on entrepreneurship in Islamic communities by applying a more fine-grained approach to capturing the link between religion and entrepreneurship. Moreover, it contributes to the literature on entrepreneurship intentions by showing that the influence of religion on entrepreneurship intentions operates mainly through religious values and practices, which shape the attitude towards entrepreneurship and thereby influence entrepreneurship intentions in religious communities. Entrepreneurship research has placed increasing emphasis on assessing the influence of a diverse set of contextual factors; this dissertation introduces Islamic values and Islamic religious practices as critical contextual factors that shape entrepreneurship in countries characterized by the Islamic religion.
At any given moment, our senses are assaulted with a flood of information from the environment around us. We need to pick our way through all this information in order to be able to respond effectively to what is relevant to us. In most cases we are able to separate information relevant to our intentions from what is not relevant. However, what happens to the information that is not relevant to us? Is this irrelevant information completely ignored so that it does not affect our actions? The literature suggests that even though we may ignore an irrelevant stimulus, it may still interfere with our actions. One of the ways in which irrelevant stimuli can affect actions is by retrieving a response with which they were associated. An irrelevant stimulus that is presented in close temporal contiguity with a relevant stimulus can become associated with the response made to the relevant stimulus, an observation termed distractor-response binding (Rothermund, Wentura, & De Houwer, 2005). The studies presented in this work take a closer look at such distractor-response bindings and the circumstances in which they occur. Specifically, the study reported in chapter 6 examined whether only an exact repetition of the distractor can retrieve the response with which it was associated, or whether even similar distractors may cause retrieval. The results suggested that even repeating a similar distractor caused retrieval, albeit less than an exact repetition. In chapter 7, the existence of bindings between a distractor and a response was tested beyond a perceptual level, to see whether such bindings exist at an (abstract) conceptual level. Similar to perceptual repetition, distractor-based retrieval of the response was observed for the repetition of concepts. The study reported in chapter 8 examined the influence of attention on the feature-response binding of irrelevant features. The results pointed towards stronger binding effects when attention was directed towards the irrelevant feature than when it was not. The study in chapter 9 looked at the processes underlying distractor-based retrieval and distractor inhibition. The data suggest that motor processes underlie distractor-based retrieval and cognitive processes underlie distractor inhibition. Finally, the findings of all four studies are discussed in the context of learning.
Redox-driven biogeochemical cycling of iron plays an integral role in the complex process network of ecosystems, such as carbon cycling, the fate of nutrients and greenhouse gas emissions. We investigate Fe-(hydr)oxide (trans)formation pathways from rhyolitic tephra in acidic topsoils of South Patagonian Andosols to evaluate the ecological relevance of terrestrial iron cycling for this sensitive fjord ecosystem. Using bulk geochemical analyses combined with micrometer-scale-measurements on individual soil aggregates and tephra pumice, we document biotic and abiotic pathways of Fe released from the glassy tephra matrix and titanomagnetite phenocrysts. During successive redox cycles that are controlled by frequent hydrological perturbations under hyper-humid climate, (trans)formations of ferrihydrite-organic matter coprecipitates, maghemite and hematite are closely linked to tephra weathering and organic matter turnover. These Fe-(hydr)oxides nucleate after glass dissolution and complexation with organic ligands, through maghemitization or dissolution-(re)crystallization processes from metastable precursors. Ultimately, hematite represents the most thermodynamically stable Fe-(hydr)oxide formed under these conditions and physically accumulates at redox interfaces, whereas the ferrihydrite coprecipitates represent a so far underappreciated terrestrial source of bio-available iron for fjord bioproductivity. The insights into Fe-(hydr)oxide (trans)formation in Andosols have implications for a better understanding of biogeochemical cycling of iron in this unique Patagonian fjord ecosystem.
Rodent studies have shown that the mineralocorticoid receptor (MR) is a candidate gene for the investigation of cognitive functions comparable to human executive functions. The present work addresses the question of whether polymorphisms in the MR gene can act as a "probe" to explain part of the interindividual variance in human executive functions. For this purpose, 72 healthy young participants were assigned to four equally sized groups according to their MR genotype for two common MR polymorphisms. They were investigated in an electroencephalogram (EEG) test session, completing two cognitive tests while delivering saliva samples for subsequent cortisol measurement. The two tests chosen for the assessment of executive functions were the Attention Network Task (ANT) and a modified version of the Wisconsin Card Sorting Test (WCST). Chapter 1 of the present work reports the rationale for the empirical approach, which builds on the broad theoretical background presented in Chapter 2. In the third chapter, the statistical analysis of the behavioral data (i.e., reaction times and accuracy/error rates) is presented. No association with MR polymorphisms was found for the reaction times in either test. For the accuracy rate, differences between genotype groups were found for the ANT and the WCST, indicating an association of MR polymorphisms with accuracy in the Alertness and Executive Control networks of the ANT and with the detection of an intradimensional shift in the WCST. Data acquisition and the results of the EEG analyses are presented in Chapter 4. The results show that groups differing in MR genotype exhibit different activity over prefrontal motor areas while responding to the ANT. Those group differences were again prominent for the Alertness and Executive Control networks. A tendency towards further significant group differences was found for activity at frontopolar positions during extradimensional rule switching. Chapter 5 summarizes the findings for the analysis of salivary free cortisol, showing a tendency towards an association between MR polymorphisms and a mildly stimulated hypothalamic-pituitary-adrenal (HPA) axis during the test situation. The results of the different measures are integrated and discussed in Chapter 6 in the light of novel findings on the functionality of the chosen MR polymorphisms. Finally, Chapter 7 gives an outlook on the methodology and constraints of future research strategies to further describe the role of the MR in human cognitive function.
This thesis contributes to the economic literature on India and specifically focuses on investment project (IP) location choice. I study three topics that naturally arise in sequence: geographic concentration of investment projects, the determinants of the location choices, and the impact these choices have on project success.
In Chapter 2, I provide an analysis of the geographic concentration of IPs. I find that investments were concentrated over the period of observation (1996–2015), although the degree of concentration was decreasing. Additionally, I analyze different subsamples of the data set by ownership (Indian private, Indian public and foreign) and project status (completed or dropped). Foreign projects in all industries are more concentrated than private and public ones, while for the latter two categories I identify only minor differences in concentration levels. Additionally, I find that the location patterns of completed and dropped investments are similar to the overall distribution and the distributions of their respective industries, with completed IPs being somewhat more concentrated.
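One standard way to quantify such geographic concentration, not necessarily the exact measure used in this chapter, is a Herfindahl-type index over district shares of total investment; the district names and figures below are invented for illustration.

```python
# Illustrative Herfindahl-type spatial concentration of investment across
# districts: 1/N for a uniform distribution, 1.0 if one district holds all.
def spatial_hhi(shares):
    """Sum of squared regional shares of total investment."""
    return sum(s ** 2 for s in shares)

investment_by_district = {"A": 50.0, "B": 30.0, "C": 15.0, "D": 5.0}
total = sum(investment_by_district.values())
shares = [v / total for v in investment_by_district.values()]
print(f"HHI = {spatial_hhi(shares):.3f}")   # 0.25 would mean uniform over 4
```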
In Chapter 3, I study the determinants of project location choices with a focus on an important highway upgrade, the Golden Quadrilateral (GQ). In line with the existing literature, the GQ construction is associated with higher levels of investment in the affected non-nodal GQ districts in 2002–2016. I also provide suggestive evidence on changes in firm behavior after the GQ construction: firms located in the non-nodal GQ districts became less likely to invest in their neighbor districts after the GQ completion compared to firms located in districts unaffected by the GQ construction.
Finally, in Chapter 4, I investigate the characteristics of IPs that may contribute to discontinuation of their implementation by comparing completed investments to dropped ones, defined as abandoned, shelved, and stalled investments as identified on the date of the data download. Controlling for local and business cycle conditions, as well as various investor and project characteristics, I show that projects located in close proximity to the investor offices (i.e., in the same district) are more likely to achieve the completion stage than more remote projects.
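Schematically, the completion analysis amounts to a binary response model of the following kind; the variable names, coefficients and data below are invented for illustration and do not reproduce the chapter's specification.

```python
# Schematic sketch: logit of project completion on proximity to the investor
# office plus a size control, on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
same_district = rng.integers(0, 2, n)        # project near investor office?
log_cost = rng.normal(3.0, 1.0, n)           # project size control
latent = 0.8 * same_district - 0.3 * log_cost + rng.logistic(size=n)
completed = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([same_district, log_cost]))
res = sm.Logit(completed, X).fit(disp=0)
print(res.summary(xname=["const", "same_district", "log_cost"]))
```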
During pregnancy, one in eight women is treated with glucocorticoids. Glucocorticoids inhibit cell division but are assumed to accelerate the differentiation of cells. In this review, animal models for the development of the human fetal and neonatal hypothalamic-pituitary-adrenal (HPA) axis are investigated. It is shown that during pregnancy in humans, as in most of the animal models investigated here, a stress hyporesponsive period (SHRP) is present. In this period, the fetus faces reduced glucocorticoid concentrations, owing to low or absent fetal glucocorticoid synthesis and to reduced exposure to maternal glucocorticoids. During that phase, sensitive maturational processes in the brain are assumed to take place, which could be inhibited by high glucocorticoid concentrations. In the SHRP, the species-specific maximal brain growth spurt and the neurogenesis of the somatosensory cortex take place. The latter is critical for the development of social and communication skills and the secure attachment of mother and child. Glucocorticoid treatment during pregnancy needs to be further investigated, especially during this vulnerable SHRP. The hypothalamus and the pituitary stimulate the adrenal glucocorticoid production. On the other hand, glucocorticoids can inhibit the synthesis of corticotropin-releasing hormone (CRH) in the hypothalamus and of adrenocorticotropic hormone (ACTH) in the pituitary. Alterations in this negative feedback are assumed to be involved, among others, in the development of fibromyalgia, diabetes and factors of the metabolic syndrome. In this work it is shown that the fetal cortisol surge at the end of gestation is at least partially due to reduced glucocorticoid negative feedback. It is also assumed that androgens are involved in the control of fetal glucocorticoid synthesis. Glucocorticoids seem to prevent masculinization of the female fetus by androgens during sexual gonadal development. In this work, a negative interaction of glucocorticoids and androgens is detectable.
The subject of this publication is torture as an instrument of interrogation in criminal proceedings from a legal history point of view. The paper at hand is the continuation of Volume I (published in 2014 as number 68 of the Legal Policy Forum).
Volume II covers the following historical periods: the Late Middle Ages and the Early Modern Age, the latter ending with the 18th century, the so-called Century of Enlightenment, which marks the actual beginning of the Modern Age in criminal law and criminal procedure law.
The paper ends with critical remarks on the predominant view that torture's reign of terror in the former Inquisitionsprozess was merely the inevitable consequence of the unreasonable law of evidence applicable at that time.
The subject of this publication is torture as an instrument of interrogation in criminal proceedings from a legal history point of view. The author distinguishes between torturing the accused on the one hand and, on the other hand, torture as an instrument to force a witness's incriminating testimony against third parties (in German: Zeugenfolter), torture as a means to avert dangers (lifesaving torture), torture as an additional cruelty in the accused's punishment (in German: Straffolter), and corporal punishment for lying in court. Only the first manifestation, namely torturing the accused in order to extort a confession, is the actual subject of this paper.
Interaction between the Hypothalamic-Pituitary-Adrenal Axis and the Circadian Clock System in Humans
(2017)
The rotation of the Earth creates 24-h cycles of day and night. The endogenous circadian clocks sense these light/dark rhythms, and the master pacemaker, situated in the suprachiasmatic nucleus of the hypothalamus, entrains bodily activities according to this information. The circadian machinery is built from transcriptional/translational feedback loops generating oscillations in all nucleated cells of the body. In addition, unexpected environmental changes, called stressors, also challenge living systems. A response to these stimuli is provided immediately via the autonomic nervous system and more slowly via the hypothalamic-pituitary-adrenal (HPA) axis. When the HPA axis is activated, circulating glucocorticoids are elevated and regulate organ activities in order to maintain the survival of the organism. Both the clock and the stress systems are essential for continuity and interact with each other to keep internal homeostasis. The physiological interactions between the HPA axis and the circadian clock system have mainly been addressed in animal studies, which focus on the effects of stress and circadian disturbances on cardiovascular, psychiatric and metabolic disorders. Although these studies offer the opportunity to test in the whole organism, to apply techniques that would not be acceptable in humans, and to control and manipulate parameters to a high degree, the generalization of their results to humans is still debated. On the other hand, studies based on cell lines cannot really reflect the conditions occurring in a living organism. Thus, human studies are absolutely necessary to investigate the mechanisms involved in stress and circadian responses. The studies presented in this thesis were intended to determine the effects of cortisol, as an end-product of the HPA axis, on PERIOD (PER1, PER2 and PER3) transcripts, i.e. circadian clock genes, in healthy humans. The expression levels of PERIOD genes were measured in whole blood under baseline conditions and after stress. The results presented here give a better understanding of the transcriptional programming regulated by pulsatile cortisol under standard conditions and of the short-term effects of cortisol increases on circadian clocks after acute stress. These findings also draw attention to inter-individual variations in the stress response as well as to non-circadian functions of PERIOD genes in the periphery, which need to be examined in detail in the future.
Despite significant advances in terms of the adoption of formal Intellectual Property Rights (IPR) protection, enforcement of and compliance with IPR regulations remain contested issues in one of the world's major contemporary economies—China. The present review seeks to offer insights into possible reasons for this discrepancy as well as possible paths of future development by reviewing the prior literature on IPR in China. Specifically, it focuses on the public's perspective, which is a crucial determinant of the effectiveness of any IPR regime. It uncovers possible differences from public perspectives in other countries and points to mechanisms (e.g., political, economic, cultural, and institutional) that may foster transitions over time both in formal IPR regulation and in the public perception of and compliance with IPR in China. On this basis, the review advances suggestions for future research in order to improve scholars' understanding of the public's perspective on IPR in China, its antecedents and its implications.
Floods are hydrological extremes that have enormous environmental, social and economic consequences. The objective of this thesis was to contribute to the implementation of a processing chain that integrates remote sensing information into hydraulic models. Specifically, the aim was to improve water elevation and discharge simulations by assimilating microwave remote sensing-derived flood information into hydraulic models. The first component of the proposed processing chain is a fully automated flood mapping algorithm that enables the automated, objective, and reliable extraction of the flood extent from Synthetic Aperture Radar (SAR) images, providing accurate results in both rural and urban regions. The method operates with minimal data requirements and is efficient in terms of computational time. The map obtained with the developed algorithm is still subject to uncertainties, both introduced by the flood mapping algorithm and inherent in the image itself. In this work, particular attention was given to image uncertainty deriving from speckle. By bootstrapping the original satellite image pixels, several synthetic images were generated and provided as input to the developed flood mapping algorithm. The analysis performed on the resulting mapping products showed that speckle uncertainty can be considered a negligible component of the total uncertainty. In the final step of the proposed processing chain, water elevations for a real event, obtained from satellite observations, were assimilated into a hydraulic model with an adapted version of the Particle Filter, modified to work with non-Gaussian distributions of observations. To deal with model structure error and possibly biased observations, a global and a local weighting variant of the Particle Filter were tested. Which variant is to be preferred depends on the level of confidence attributed to the observations or to the model. This study also highlighted the complementarity of remote sensing-derived and in-situ data sets. An accurate binary flood map is an invaluable product for different end users. However, deriving additional hydraulic information, such as water elevations, from this binary map enhances the value of the product itself. The derived data can be assimilated into hydraulic models that fill the gaps where, for technical reasons, Earth Observation data cannot provide information, also enabling a more accurate and reliable prediction of flooded areas.
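The pixel-bootstrap idea described above can be sketched in a few lines. In this hypothetical illustration, `flood_mapper` stands in for the automated SAR flood-mapping algorithm, and the global resampling scheme is an assumption; the thesis's actual procedure is not reproduced here.

```python
import numpy as np

def bootstrap_flood_frequency(sar_image, flood_mapper, n_boot=100, seed=0):
    """Per-pixel flood frequency across bootstrap replicates of a SAR image.

    `flood_mapper` is assumed to take a backscatter image and return a
    binary flood mask of the same shape.
    """
    rng = np.random.default_rng(seed)
    flat = sar_image.ravel()
    masks = []
    for _ in range(n_boot):
        # Resample pixel values with replacement to create a synthetic
        # image sharing the empirical (speckle-affected) statistics.
        synthetic = rng.choice(flat, size=flat.size).reshape(sar_image.shape)
        masks.append(flood_mapper(synthetic))
    # Low variability across replicates indicates that speckle contributes
    # little to the total mapping uncertainty, as found in this work.
    return np.mean(masks, axis=0)
```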
Institutional and cultural determinants of speed of government responses during COVID-19 pandemic
(2021)
This article examines institutional and cultural determinants of the speed of government responses during the COVID-19 pandemic. We define speed as the marginal rate of change of the stringency index. Based on cross-country data, we find that collectivism is associated with a higher speed of government response. We also find a moderating role of trust in government, i.e., the association of individualism-collectivism with speed is stronger in countries with higher levels of trust in government. We do not find significant predictive power of democracy, media freedom or power distance on the speed of government responses.
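As a minimal sketch of this definition, the speed measure can be computed as the day-over-day first difference of the stringency index within each country. The data layout below is hypothetical; the article's exact specification is not reproduced here.

```python
import pandas as pd

# Hypothetical daily stringency-index values for one country.
df = pd.DataFrame({
    "country": ["A"] * 5,
    "date": pd.date_range("2020-03-01", periods=5),
    "stringency": [10.0, 10.0, 25.0, 40.0, 45.0],
})

# Speed as the marginal rate of change of the stringency index:
# the first difference over time within each country.
df = df.sort_values(["country", "date"])
df["speed"] = df.groupby("country")["stringency"].diff()
print(df)
```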
In my paper I will talk about the mutual influences between female spectators and the programming practices of Imperial Germany's cinema. I will focus on the period of the transition from the short film programme of the "cinema of attractions" to the dominance of the long feature film, i.e. from 1906 to 1918. I will ask how the presence of women in the cinema (the place where they first entered the public sphere) influenced the practice of programming. I will thus deal with the relatively new topic of the programme (and its structural changes) as a mode of exhibition, and I will try to connect this to the role the female audience played in shaping this format: how does the female audience affect the changes in programme patterns, the modification of genres and their meaning within the structure of the programme, and does it finally bring about a change in the mode of reception? And, on the other hand, how does the cinematographic programme represent and influence female identity and women's wishes and needs? One must ask for the reasons why the early cinema, which was characterised by diversity concerning class, gender and cultural issues and which built a kind of alternative public sphere, was displaced by an institutionalised, state-monitored and nationalised German cinema. Taking into account that this change of film forms was not a teleological evolution, "gender" might be a more useful and insightful category than "class" to explain the changes in the programme and the essence of early cinema. First I am going to present the main ideas of my project, then I'll talk about the composition of the audience and the relation between audience and programme; after that I'll make some remarks on the reform movement and what the reformers thought about women in the cinema and about the programming practices; and, as a last part, we'll have a look at what actually happened in the cinema and at the programme of the year 1911/1912 and how this programme catered to the interests of the female audience. I'll conclude with a short outlook on the changes that occurred during WW I.
Influence of Ozone and Drought on Tree Growth under Field Conditions in a 22 Year Time Series
(2022)
Studying the effects of surface ozone (O3) and water stress on tree growth is important for planning sustainable forest management and for forest ecology. In the present study, a 22-year time series (1998–2019) of basal area increment (BAI) and fructification severity of European beech (Fagus sylvatica L.) and Norway spruce (Picea abies (L.) H.Karst.) at five forest sites in Western Germany (Rhineland-Palatinate) was investigated to evaluate how growth correlates with drought and with stomatal O3 fluxes (PODY) computed with an hourly threshold of uptake (Y) to represent the detoxification capacity of trees (POD1, with Y = 1 nmol O3 m−2 s−1). Between 1998 and 2019, POD1 declined over time by an average of 0.31 mmol m−2 year−1. The BAI showed no significant trend at any site except Leisel, where a slight decline was observed over time (−0.37 cm2 per year, p < 0.05). A random forest analysis showed that the soil water content and the daytime mean O3 concentration were the best predictors of BAI at all sites. The highest mean score of fructification was observed during the dry years, while low or no fructification was observed in the most humid years. Combined effects of drought and O3 pollution largely drive the tree growth decline of European beech and Norway spruce.
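The PODY metric mentioned above accumulates only the hourly stomatal flux exceeding the threshold Y. A minimal sketch under common flux-based assumptions (hourly mean fluxes in nmol O3 m−2 s−1; the study's exact flux parameterization is not reproduced here):

```python
import numpy as np

def pod_y(hourly_stomatal_flux, y=1.0):
    """Phytotoxic Ozone Dose above a threshold Y (PODY), in mmol O3 m-2.

    `hourly_stomatal_flux` holds hourly mean stomatal O3 fluxes in
    nmol m-2 s-1; y=1.0 yields POD1 as used in the study.
    """
    flux = np.asarray(hourly_stomatal_flux, dtype=float)
    # Only the flux exceeding the detoxification threshold accumulates.
    excess = np.clip(flux - y, 0.0, None)
    # Hourly means -> integrate over 3600 s and convert nmol to mmol.
    return excess.sum() * 3600.0 * 1e-6

print(pod_y([0.5, 1.2, 3.0, 2.4]))  # small synthetic example
```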
Hardware bugs can be extremely expensive, financially. Because microprocessors and integrated circuits have become omnipresent in our daily lives, and because of their continuously growing complexity, research is driven towards methods and tools that provide higher reliability of hardware designs and their implementations. Over the last decade, Ordered Binary Decision Diagrams (OBDDs) have proven to serve well as a data structure for the representation of combinational or sequential circuits. Their conciseness and their efficient algorithmic properties are responsible for their huge success in formal verification. But, due to Shannon's counting argument, OBDDs cannot always guarantee a concise representation of a given design. In this thesis, Parity Ordered Binary Decision Diagrams (Parity-OBDDs) are presented, a true extension of OBDDs: in addition to the regular branching nodes of an OBDD, functional nodes representing a parity operation are integrated into the data structure. Parity-OBDDs are more powerful than OBDDs, but they are no longer a canonical representation. Besides theoretical aspects of Parity-OBDDs, algorithms for their efficient manipulation are the main focus of this thesis. Furthermore, an analysis of the factors that influence the Parity-OBDD representation size paves the way for the development of heuristic algorithms for their minimization. The results of these analyses as well as the efficiency of the data structure are also supported by experiments. Finally, the algorithmic concept of Parity-OBDDs is extended to Mod-p Decision Diagrams (Mod-p-DDs) for the representation of functions that are defined over an arbitrary finite domain.
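To make the idea of functional parity nodes concrete, here is a hypothetical minimal model (not the thesis's implementation): besides ordinary branching nodes, a parity node represents the XOR of the functions of its two successors.

```python
from dataclasses import dataclass
from typing import Dict, Union

@dataclass
class Leaf:
    value: bool

@dataclass
class Branch:
    var: int          # index into the fixed variable order
    low: "Node"       # successor for var = 0
    high: "Node"      # successor for var = 1

@dataclass
class Parity:
    left: "Node"      # the node represents left XOR right
    right: "Node"

Node = Union[Leaf, Branch, Parity]

def evaluate(node: Node, assignment: Dict[int, bool]) -> bool:
    """Evaluate the Boolean function represented by a Parity-OBDD node."""
    if isinstance(node, Leaf):
        return node.value
    if isinstance(node, Branch):
        return evaluate(node.high if assignment[node.var] else node.low, assignment)
    # Parity node: XOR of the two sub-functions.
    return evaluate(node.left, assignment) != evaluate(node.right, assignment)

# f = x0 XOR x1, expressed with an explicit parity node:
x0 = Branch(0, Leaf(False), Leaf(True))
x1 = Branch(1, Leaf(False), Leaf(True))
f = Parity(x0, x1)
assert evaluate(f, {0: True, 1: False}) is True
```

The same function is of course also expressible by branching nodes alone; the point of parity nodes is that for some functions they allow far more concise diagrams, at the price of losing canonicity.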
This thesis is focused on improving the knowledge of a group of threatened species, the European cave salamanders (genus Hydromantes). There are three main sections gathering studies dealing with different topics: Ecology (first part), Life traits (second part) and Monitoring methodologies (third part). The first part starts with the study of the response of Hydromantes to the variation of climatic conditions, analysing 15 different localities throughout a full year (CHAPTER I; published in PEERJ in August 2015). After that, the focus moves to identifying the operative temperature that these salamanders experience, including how their bodies respond to variations of environmental temperature. This study was conducted using one of the most advanced tools, an infrared thermocamera, which gave the opportunity to perform detailed observations of the salamanders' bodies (CHAPTER II; published in JOURNAL OF THERMAL BIOLOGY in June 2016). In the next chapter we use the previous results to analyse the ecological niche of all eight Hydromantes species. The study mostly underlines the mismatch between macro- and microscale analyses of the ecological niche, showing a weak conservatism of ecological niches within the evolution of the species (CHAPTER III; unpublished manuscript). We then focus on hybrids, which occur within the natural distribution of the mainland species. Here, we analyse whether the ecological niche of hybrids diverges from those of the parental species, thus evaluating the adaptive capacity of hybrids (CHAPTER IV; unpublished manuscript). Considering that hybrids may represent a potential threat to the parental species (in terms of genetic erosion and competition), we produced the first ecological study of an allochthonous mixed population of Hydromantes, analysing population structure, ecological requirements and diet. The interest in this particular population mostly stems from the fact that its members come from all three mainland Hydromantes species, and thus it may represent a potential source of new hybrids (CHAPTER V; accepted in AMPHIBIA-REPTILIA in October 2017). The focus then moves to how bioclimatic parameters affect species within their distributional range. Using the microendemic H. flavus as a model species, we analyse the relationship between environmental suitability and local abundance of the species, also focusing on all intermediate dynamics which provide useful information on the spatial variation of individual fitness (CHAPTER VI; submitted to SCIENTIFIC REPORTS in November 2017). The first part ends with an analysis of the interaction between Hydromantes and Batracobdella algira leeches, the only known ectoparasite of European cave salamanders. Considering that the effect of leeches on their hosts is potentially detrimental, we investigated whether these ectoparasites may represent a further threat to Hydromantes (CHAPTER VII; submitted to INTERNATIONAL JOURNAL FOR PARASITOLOGY: PARASITES AND WILDLIFE in November 2017). The second part is related to the reproduction of Hydromantes. In the first study we analyse the breeding behaviour of several females belonging to a single population, identifying differences and similarities occurring in cohorting females (CHAPTER VIII; published in NORTH-WESTERN JOURNAL OF ZOOLOGY in December 2015).
In the second study we gather information from all Hydromantes species, analysing the size and development of breeding females and identifying a relationship between breeding time and climatic conditions (CHAPTER IX; submitted to SALAMANDRA in June 2017). In the last part of this thesis, we analyse two potential methods for monitoring Hydromantes populations. In the first study we evaluate the efficiency of the marking method involving Alpha tags (CHAPTER X; published in SALAMANDRA in October 2017). In the second study we focus on evaluating N-mixture models as a methodology for estimating abundance in wild populations (CHAPTER XI; submitted to BIODIVERSITY & CONSERVATION in October 2017).
There is no longer any doubt about the general effectiveness of psychotherapy. However, up to 40% of patients do not respond to treatment. Despite efforts to develop new treatments, overall effectiveness has not improved. Consequently, practice-oriented research has emerged to make research results more relevant to practitioners. Within this context, patient-focused research (PFR) focuses on the question of whether a particular treatment works for a specific patient. PFR in turn gave rise to the precision mental health research movement, which tries to tailor treatments to individual patients by making data-driven and algorithm-based predictions. These predictions are intended to support therapists in their clinical decisions, such as the selection of treatment strategies and the adaptation of treatment. The present work summarizes three studies that aim to generate different prediction models for treatment personalization that can be applied in practice. The goal of Study I was to develop a model for dropout prediction using data assessed prior to the first session (N = 2543). The usefulness of various machine learning (ML) algorithms and ensembles was assessed. The best model was an ensemble utilizing random forest and nearest neighbor modeling. It significantly outperformed generalized linear modeling, correctly identifying 63.4% of all cases and uncovering seven key predictors. The findings illustrate the potential of ML to enhance dropout predictions, but also highlight that not all ML algorithms are equally suitable for this purpose. Study II utilized Study I's findings to enhance the prediction of dropout rates. Data from the initial two sessions and observer ratings of therapist interventions and skills were employed to develop a model using an elastic net (EN) algorithm. The findings demonstrated that the model was significantly more effective at predicting dropout when using observer ratings, with a Cohen's d of up to .65, and more effective than the model in Study I, despite the smaller sample (N = 259). These results indicate that model generation can be improved by employing various data sources, which provide better foundations for model development. Finally, Study III generated a model to predict therapy outcome after a sudden gain (SG) in order to identify crucial predictors of the upward spiral. EN was used to generate the model using data from 794 cases that experienced an SG. A control group of the same size was used to quantify and relativize the identified predictors by their general influence on therapy outcomes. The results indicated seven key predictors with varying effect sizes on therapy outcome, with Cohen's d ranging from 1.08 to 12.48. The findings suggest that a directive approach is more likely to lead to better outcomes after an SG and that alliance ruptures can be effectively compensated for. However, these effects were reversed in the control group. The results of the three studies are discussed with regard to their usefulness in supporting clinical decision-making and their implications for the implementation of precision mental health.
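A minimal sketch of the kind of ensemble highlighted in Study I (a soft-voting combination of a random forest and a nearest-neighbor model) is shown below. All data, feature counts, and settings are synthetic placeholders, not the studies' actual specification.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for intake data: X = pre-treatment features,
# y = dropout (1) vs. completion (0).
X, y = make_classification(n_samples=2543, n_features=20, weights=[0.7], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Soft-voting ensemble of a random forest and a (scaled) k-NN model.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=15))),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print(f"test accuracy: {ensemble.score(X_test, y_test):.3f}")
```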
There is ample evidence for the impact of acute glucocorticoid treatment on hippocampus-dependent explicit learning and memory (memory for facts and events), but few studies have investigated the effect of glucocorticoids on implicit learning and memory. We conducted three studies with different methodologies to investigate the effect of glucocorticoids on different forms of implicit learning. In Study 1, we investigated the effect of cortisol depletion on short-term habituation in 49 healthy subjects. 25 participants received oral metyrapone (1500 mg) to suppress endogenous cortisol production, while 24 controls received oral placebo. Eye blink electromyogram (EMG) responses to 105 dB acoustic startle stimuli were assessed. Effective endogenous cortisol suppression had no effect on short-term habituation of the startle reflex, but startle eye blink responses were significantly increased in the metyrapone group. The latter finding is in line with previous human studies, which have shown that excess cortisol, sufficient to fully occupy central nervous system (CNS) corticosteroid receptors, may reduce the startle eye blink. This effect may be mediated by CNS mechanisms controlling cortisol feedback. In Study 2, we investigated delay and trace eyeblink conditioning in a patient group with relative hypocortisolism (30 patients with fibromyalgia syndrome/FMS) compared to 20 healthy control subjects. Conditioned eyeblink response probability was assessed by EMG. Morning cortisol levels, ratings of depression, anxiety and psychosomatic complaints, as well as general symptomatology and psychological distress were assessed. Compared to healthy controls, FMS patients showed lower morning cortisol levels; trace eyeblink conditioning was facilitated whereas delay eyeblink conditioning was reduced. Cortisol measures correlated significantly only with trace eyeblink conditioning. Our results are in line with studies of pharmacologically induced hyper- and hypocortisolism, which affected trace eyeblink conditioning. We suggest that endocrine mechanisms affecting hippocampus-mediated forms of associative learning may play a role in the generation of symptoms in these patients. In Study 3, we investigated the effect of excess cortisol on implicit sequence learning in healthy subjects. Oral cortisol (30 mg) was given to 29 participants, whereas 31 control subjects received placebo. All volunteers performed a 5-choice serial reaction time task (SRTT). The reaction speed of every button press was determined, and difference scores were calculated as a measure of learning. Compared to the control group, we found delayed learning in the cortisol group at the very beginning of the task. This study is the first human investigation indicating impaired implicit memory function after exogenous administration of the stress hormone cortisol. Our findings support a previous neuroimaging study, which suggested that the medial temporal lobe (including the hippocampus) is also active in implicit sequence learning, but our results may also depend on the engagement of other brain structures.
Energy transition strategies in Germany have led to an expansion of energy crop cultivation in the landscape, with silage maize as the most valuable feedstock. The changes in the traditional cropping systems, with increasing shares of maize, raised concerns about the sustainability of agricultural feedstock production with regard to threats to soil health. However, spatially explicit data about silage maize cultivation are missing; thus, implications for soil cannot be estimated in a precise way. With this study, we firstly aimed to track the fields cultivated with maize based on remote sensing data. Secondly, available soil data were processed target-specifically to determine the site-specific vulnerability of the soils to erosion and compaction. The generated, spatially explicit data served as the basis for a differentiated analysis of the development of the agricultural biogas sector, the associated maize cultivation and its implications for soil health. In the study area, located in a low mountain range region in Western Germany, the number and capacity of biogas-producing units increased by 25 installations and 10,163 kW from 2009 to 2016. The remote sensing-based classification approach showed that the maize cultivation area was expanded by 16%, from 7305 to 8447 hectares. Thus, maize cultivation accounted for about 20% of the arable land use, however with distinct local differences. Significant shares of about 30% of the maize cultivation took place on fields that show at least a high potential for soil erosion, exceeding 25 t soil ha−1 a−1. Furthermore, about 10% of the maize cultivation took place on fields that pedogenetically show an elevated risk of soil compaction. In order to reach more sustainable cultivation systems for anaerobic digestion feedstock, changes in cultivated crops and management strategies are urgently required, particularly in view of the first signs of climate change. The presented approach can be modified regionally in order to develop site-adapted, sustainable bioenergy cropping systems.
Allocating scarce resources efficiently is a major task in mechanism design. One of the most fundamental problems in mechanism design theory is the problem of selling a single indivisible item to bidders with private valuations for the item. In this setting, the classic Vickrey auction (Vickrey, 1961) describes a simple mechanism to implement a social welfare maximizing allocation.
The Vickrey auction for a single item asks every buyer to report its valuation and allocates the item to the highest bidder for a price of the second highest bid. This auction features some desirable properties, e.g., buyers cannot benefit from misreporting their true value for the item (incentive compatibility) and the auction can be executed in polynomial time.
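A compact sketch of the single-item rule (names and types are illustrative):

```python
def vickrey_auction(bids: dict) -> tuple:
    """Sealed-bid second-price auction: the highest bidder wins the
    item and pays the second-highest bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    # The winner's payment depends only on the others' bids, which is
    # why truthful bidding is a dominant strategy.
    return winner, bids[runner_up]

winner, price = vickrey_auction({"alice": 10.0, "bob": 7.5, "carol": 9.0})
print(winner, price)  # alice 9.0
```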
However, when there is more than one item for sale and buyers' valuations for sets of items are not additive, or the set of feasible allocations is constrained, then constructing mechanisms that implement efficient allocations and run in polynomial time can be very challenging. Consider a single seller selling n heterogeneous indivisible items to several bidders. The Vickrey-Clarke-Groves (VCG) auction generalizes the idea of the Vickrey auction to this multi-item setting. Naturally, every bidder has an intrinsic value for every subset of items. As in the Vickrey auction, bidders report their valuations (now, one for every subset of items!). Then, the auctioneer computes a social welfare maximizing allocation according to the submitted bids and charges each buyer the social cost that his winning imposes on the rest of the buyers. (This is the analogue of charging the second-highest bid to the winning bidder in the single-item Vickrey auction.) It turns out that the VCG auction is also incentive compatible, but it poses some problems: say for n = 40, every bidder would have to submit 2^40 − 1 values (one for each nonempty subset of the ground set). Thus, asking every bidder for its complete valuation may be impossible due to time complexity issues. Therefore, even though the VCG auction implements a social welfare maximizing allocation in this multi-item setting, it can be impractical, and there is a need for alternative approaches to implement social welfare maximizing allocations.
This dissertation represents the results of three independent research papers all of them tackling the problem of implementing efficient allocations in different combinatorial settings.
Background: Increasing exposure to engineered inorganic nanoparticles is currently taking place in both terrestrial and aquatic ecosystems worldwide. Although harmful effects of AgNP on the soil bacterial community are already known, information about the impact of the factors functionalization, concentration, exposure time, and soil texture on the expression of AgNP effects is still rare. Hence, in this study, three soils of different grain size were exposed for up to 90 days to bare and functionalized AgNP in concentrations ranging from 0.01 to 1.00 mg/kg soil dry weight. Effects on the soil microbial community were quantified by various biological parameters, including 16S rRNA gene, photometric, and fluorescence analyses.
Results: Multivariate data analysis revealed significant effects of AgNP exposure for all factors and factor combinations investigated. In the unifactorial ANOVA, the analysis of individual factors (silver species, concentration, exposure time, soil texture) explained the largest part of the variance relative to the error variance. In-depth analysis of factor combinations explained the variance even better. For the biological parameters assessed in this study, the combination of soil texture and silver species and the combination of soil texture and exposure time were the two most relevant factor combinations. The factor AgNP concentration contributed less to the effect expression than silver species, exposure time and the physico-chemical composition of the soil.
Conclusions: The factors functionalization, concentration, exposure time, and soil texture significantly impacted the expression of AgNP effects on the soil microbial community. Long-term exposure scenarios in particular are strongly needed for a reliable environmental impact assessment of AgNP exposure in various soil types.
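For illustration, a factorial analysis of this kind can be set up as follows; the factor names and the synthetic data are placeholders, not the study's measurements or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic stand-in for the factorial design: each row is one
# microbial response measurement under a factor combination.
rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "species": rng.choice(["bare", "functionalized"], n),   # silver species
    "texture": rng.choice(["sand", "silt", "clay"], n),     # soil texture
    "days": rng.choice([1, 28, 90], n),                     # exposure time
})
df["response"] = rng.normal(size=n) + (df["species"] == "bare") * 0.5

# Main effects plus the texture-by-species interaction named above.
model = ols("response ~ C(species) * C(texture) + C(days)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```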
In this thesis, three studies investigating the impact of stress on the protective startle eye blink reflex are reported. In the first study, a decrease in prepulse inhibition of the startle reflex was observed after intravenous low-dose cortisol application. In the second study, a decrease in the magnitude of the startle reflex was observed after pharmacological suppression of endogenous cortisol production. In the third study, a higher startle reflex magnitude was observed at reduced arterial and central venous blood pressure. These results can be interpreted in terms of an adaptation to hostile environments.
Early life adversity (ELA) poses a high risk for developing major health problems in adulthood including cardiovascular and infectious diseases and mental illness. However, the fact that ELA-associated disorders first become manifest many years after exposure raises questions about the mechanisms underlying their etiology. This thesis focuses on the impact of ELA on startle reflexivity, physiological stress reactivity and immunology in adulthood.
The first experiment investigated the impact of parental divorce on affective processing. A special block design of the affective startle modulation paradigm revealed blunted startle responsiveness during the presentation of aversive stimuli in participants with experience of parental divorce. Nurture context potentiated startle in these participants, suggesting that visual cues with childhood-related content activate protective behavioral responses. The findings support the view that parental divorce leads to altered processing of affective context information in early adulthood.
A second investigation was conducted to examine the link between aging of the immune system and long-term consequences of ELA. In a cohort of healthy young adults, who were institutionalized early in life and subsequently adopted, higher levels of T cell senescence were observed compared to parent-reared controls. Furthermore, the results suggest that ELA increases the risk of cytomegalovirus infection in early childhood, thereby mediating the effect of ELA on T cell-specific immunosenescence.
The third study addresses the effect of ELA on stress reactivity. An extended version of the Cold Pressor Test combined with a cognitively challenging task revealed a blunted endocrine response in adults with a history of adoption, while cardiovascular stress reactivity was similar to that of control participants. This pattern of response separation may best be explained by a selective enhancement of central feedback sensitivity to glucocorticoids resulting from ELA, in spite of preserved cardiovascular/autonomic stress reactivity.
The main objective of the present thesis was to investigate whether antibody effects observed in earlier in vitro studies can translate into protection against chemical carcinogenesis in vivo, as the basis of an immunoprophylactic approach against carcinogens. As a model for chemical carcinogenesis, we selected B[a]P, the prototypical polycyclic aromatic hydrocarbon (PAH), an environmental pollutant emanating from both natural and anthropogenic sources. Many in vivo models conveniently use high doses of carcinogens, mostly given as a single bolus, which provides simple surrogate readouts but poorly reflects chronic exposure to the low concentrations found in the environment. In addition, these concentrations cannot be matched with equimolar antibody concentrations obtained by immunisation. However, low B[a]P concentrations do not permit direct measurement of chemical carcinogenesis. Therefore, in the present thesis, pharmacokinetics, metabolism and B[a]P-mediated immunotoxicity were chosen as experimental readouts. B[a]P conjugate vaccines based on ovalbumin, tetanus toxoid and diphtheria toxoid (DT) as carrier proteins were developed to actively immunise mice against B[a]P. The B[a]P-DT conjugate induced the most robust immune response. The antibodies reacted not only with B[a]P but also with the proximate carcinogen 7,8-diol-B[a]P. Antibodies modulated the bioavailability of B[a]P and its metabolic activation in a dose-dependent manner by sequestration in the blood. In order to further improve the vaccination, we replaced the protein carrier by promiscuous T-helper cell epitopes to induce higher antibody titers with increased specificity for the B[a]P hapten. We hypothesised that a reduction of B cell binding sites on the carrier, compared to the whole protein carrier, should favour the activation of B cells recognising the hapten instead of the carrier protein. Internal processing of the carrier, cleavage of the B[a]P-BA and subsequent presentation of the carrier peptide by MHC II molecules to the T cell receptor should induce a B cell-dependent immune response by activating B cells capable of recognising B[a]P. We demonstrated that vaccination against B[a]P using promiscuous T-helper cell epitopes as a carrier is feasible, and some of the tested peptide conjugates were more immunogenic than whole protein conjugates, with increased specificity. We showed that vaccination against B[a]P reduces immunotoxicity. B[a]P suppressed the proliferative response of both T and B cells after sub-acute administration, an effect that was completely reversed by vaccination. In immunised mice, the immunotoxic effect of B[a]P on IFN-γ, IL-12 and TNF-α production and on B cell activation was reversed. In addition, specific antibodies inhibited the induction of Cyp1a1 by B[a]P in lymphocytes and of Cyp1b1 in the liver, enzymes that are known to convert the procarcinogen B[a]P to the ultimate DNA adduct-forming metabolite, a major risk factor of chemical carcinogenesis. In order to replace Freund's adjuvant and to improve the immunisation strategy in terms of antibody quantity and quality, several adjuvants that are potentially compatible with use in humans were tested. In combination with Freund's adjuvant, the conjugate vaccine induced high levels of B[a]P-specific antibodies. We showed that all adjuvants tested induced specific antibodies against B[a]P and its carcinogenic metabolite 7,8-diol-B[a]P. The highest antibody levels were obtained with Quil A, MF-59 and Alum.
Biological activity in terms of enhanced retention of B[a]P was confirmed in mice immunised with Quil A, Montanide, Alum and MF-59. Our findings demonstrate that vaccination against B[a]P is feasible in combination with adjuvants licensed for use in humans. Based on these results, and given the current understanding of the mechanisms of chemical carcinogenesis of the ubiquitous carcinogen B[a]P and of the effects of specific antibodies, an immunoprophylactic approach against chemical carcinogenesis is absolutely warranted. Nevertheless, the direct effects of B[a]P-specific antibodies on the different stages of carcinogenesis (e.g. adduct formation), and whether these effects translate into a long-term protective effect against tumourigenesis, need to be proven in further experiments.
Legalisation cannot be fully explained by interest politics. If that were the case, attitudes towards legalisation would be expected to be based on objective interests, and actual policies in France and Germany would be expected to be more similar. Nor can it be explained by institutional agency, because there are no hints that states struggle with different normative traditions. Rather, political actors seek to make use of the structures that already exist to guarantee legitimacy for their actions. If the main concern of governmental actors really is to accumulate legitimacy, as stated in the introduction, then politicians have a good starting position in the case of the legalisation of illegal foreigners. Citizens' negative attitudes towards legalisation cannot be explained by imagined labour market competition; income effects play only a secondary role. The most important explanatory factor is the educational level of each individual. Objective interests do not trigger attitudes towards legalisation, but rather a basic mental predisposition for or against illegal immigrants who are eligible for legalisation. Politics concerning amnesties are thus not tied to an objectively given structure like the socio-economic composition of the electorate, but are open to political discretion. Attitudes on legalising illegal immigrants can be regarded as being mediated by beliefs and perceptions, which can be used by political agents or altered by political developments. However, politicians must adhere to a national frame of legitimating strategies that cannot be neglected without consequences. It was evident in the cross-country comparison of political debates that there are national systems of reference that provide patterns of interpretation. Legalisation is seen and incorporated into immigration policy in a very specific way that differs from one country to the next. In both countries investigated in this study, there are fundamental debates about which basic principles apply to legalisation and which of these should be held in higher esteem: a legal system able to work, humanitarian rights, practical considerations, etc. The results suggest that legalisation is "technicized" in France by describing it as an unusual but possible pragmatic instrument for the adjustment of the inefficient rule of law. In Germany, however, legalisation is discussed at a more normative level. Proponents of conservative immigration policies regard it as a substantial infringement on the rule of law, so that even defenders of a humanitarian solution for illegal immigrants are not able to challenge this view without significant political harm. But the arguments brought to bear in the debate on legalisation are not necessarily sound, because they are not irrefutable facts but instruments to generate legitimacy, and there are enough possibilities for arguing and persuading because socio-economic factors play a minor role. One of the most important arguments, the alleged pull effect of legalisation, has been subjected to an empirical investigation. In the political debate, it does not make any difference whether this is true or not, insofar as it is not contested by incontrovertible findings. In reality, the results suggest that amnesties indeed exert a small attracting influence on illegal immigration, which has been contested by immigration-friendly politicians in the French parliament. The effect, however, is not large; therefore, some conservative politicians may put too much stress on this argument.
Moreover, one can see legalisation as an instrument to restore the legitimacy that has slipped away from immigration politics because of a high number of illegally residing foreigners. This aspect explains some of the peculiarities in the French debate on legalisation, e.g. the idea that the coherence of the law is secured by creating exceptional rules for legalising illegal immigrants. It has become clear that the politics of legalisation are susceptible to manipulation by introducing certain interpretations into the political debate, which become predominant and supersede other views. In this study, there are no signs of a systematic misuse of this constellation by any particular actor. However, the history of immigration policy is full of examples of symbolic politics in which a certain measure has been initiated while the actors were totally aware of its lack of effect. Legalisation has escaped this fate so far because it is a specific instrument that is the result of neglecting populist mechanisms rather than an example of a superficial measure. This result does not apply to policies concerning illegal immigration in general, both with regard to concealing a lack of control and to flexing the state's muscles.
In recent decades, Border Studies have gained in importance and have seen noticeable development. This manifests itself in increased institutionalization, a differentiation of the areas of research interest and a conceptual reorientation towards examining processes. So far, however, little attention has been paid to questions about the (inter)disciplinary self-perception and methodological foundations of Border Studies and the associated consequences for research activities. This thematic issue addresses these desiderata and brings together articles that deal with their (inter)disciplinary foundations as well as with method(olog)ical and practical research questions. The authors also provide sound insights into a disparate field of work, disclose practical research strategies, and present methodologically sophisticated systematizations.
The dissertation comprises three published articles on which the development of a theoretical model of motivational and self-regulatory determinants of the intention to comprehensively search for health information is based. The first article focuses on building a solid theoretical foundation as to the nature of a comprehensive search for health information and on enabling its integration into a broader conceptual framework. Based on subjective source perceptions, a taxonomy of health information sources was developed. The aim of this taxonomy was to identify the most fundamental source characteristics to provide a point of reference when relating to the target objects of a comprehensive search. Three basic source characteristics were identified: expertise, interaction and accessibility. The second article reports on the development and evaluation of an instrument measuring the goals individuals have when seeking health information: the ‘Goals Associated with Health Information Seeking’ (GAINS) questionnaire. Two goal categories (coping focus and regulatory focus) were theoretically derived, based on which four goals (understanding, action planning, hope and reassurance) were classified. The final version of the questionnaire comprises four scales representing the goals, with four items per scale (sixteen items in total). The psychometric properties of the GAINS were analyzed in three independent samples, and the questionnaire was found to be reliable and sufficiently valid as well as suitable for a patient sample. It was concluded that the GAINS makes it possible to evaluate the goals of health information seeking (HIS), which are likely to inform the intention of how to organize the search for health information. The third article describes the final development and a first empirical evaluation of a model of motivational and self-regulatory determinants of an intentionally comprehensive search for health information. Based on the insights and implications of the previous two articles and an additional rigorous theoretical investigation, the model included approach and avoidance motivation, emotion regulation, HIS self-efficacy, problem- and emotion-focused coping goals, and the intention to seek comprehensively (as outcome variable). The model was analyzed via structural equation modeling in a sample of university students. Model fit was good, and hypotheses with regard to specific direct and indirect effects were confirmed. Lastly, the findings of all three articles are synthesized, the final model is presented and discussed with regard to its strengths and weaknesses, and implications for further research are outlined.
Computer simulation has become established in a two-fold way: as a tool for planning, analyzing, and optimizing complex systems, and as a method for the scientific investigation of theories and thus for the generation of knowledge. Generated results often serve as a basis for investment decisions, e.g., road construction and factory planning, or provide evidence for scientific theory-building processes. To ensure the generation of credible and reproducible results, it is indispensable to conduct systematic and methodologically sound simulation studies. A variety of procedure models exist that structure and predetermine the process of a study. As a result, experimenters are often required to repetitively but thoroughly carry out a large number of experiments. Moreover, the process is not sufficiently specified, and many important design decisions are still left to the experimenter, which might result in an unintentional bias of the results.
To facilitate the conduct of simulation studies and to improve both the replicability and the reproducibility of the generated results, this thesis proposes a procedure model for carrying out Hypothesis-Driven Simulation Studies, an approach that assists the experimenter during the design, execution, and analysis of simulation experiments. In contrast to existing approaches, a formally specified hypothesis becomes the key element of the study, so that each step of the study can be adapted and executed to contribute directly to the verification of the hypothesis. To this end, the FITS language is presented, which enables the specification of hypotheses as assumptions regarding the influence specific input values have on the observable behavior of the model. The proposed procedure model systematically designs the relevant simulation experiments, runs, and iterations that must be executed to provide evidence for the verification of the hypothesis. Generated outputs are then aggregated for each defined performance measure to allow for the application of statistical hypothesis testing approaches. Hence, the proposed assistance only requires the experimenter to provide an executable simulation model and a corresponding hypothesis in order to conduct a sound simulation study. With respect to the implementation of the proposed assistance system, this thesis presents an abstract architecture and provides formal specifications of all required services.
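The core loop of such a study can be sketched as follows; the model stub and the one-sided t-test are illustrative assumptions, not the FITS tooling itself.

```python
import numpy as np
from scipy import stats

def run_model(input_value, rng):
    # Hypothetical stand-in for an executable simulation model that
    # returns one observation of a performance measure per run.
    return rng.normal(loc=10.0 + 0.5 * input_value, scale=2.0)

def test_hypothesis(input_value, m0, n_runs=30, alpha=0.05, seed=0):
    """Check the hypothesis 'the mean performance measure stays below m0
    for this input value' via replications and a one-sided t-test."""
    rng = np.random.default_rng(seed)
    outputs = np.array([run_model(input_value, rng) for _ in range(n_runs)])
    # H0: mean >= m0  vs.  H1: mean < m0
    _, p = stats.ttest_1samp(outputs, popmean=m0, alternative="less")
    return p < alpha, p

supported, p = test_hypothesis(input_value=2.0, m0=12.0)
print(f"hypothesis supported: {supported} (p = {p:.3f})")
```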
To evaluate the concept of Hypothesis-Driven Simulation Studies, two case studies from the manufacturing domain are presented. The introduced approach is applied to a NetLogo simulation model of a four-tiered supply chain. Two scenarios as well as corresponding assumptions about the model behavior are presented to investigate conditions for the occurrence of the bullwhip effect. Starting from the formal specification of the hypothesis, each step of a Hypothesis-Driven Simulation Study is presented in detail, with specific design decisions outlined and generated intermediate data as well as final results illustrated. With respect to the comparability of the results, a conventional simulation study is conducted which serves as reference data. The approach proposed in this thesis is beneficial for both practitioners and scientists. The presented assistance system allows for a more effortless and simplified execution of simulation experiments, while the efficient generation of credible results is ensured.
Hypothalamic-pituitary-adrenal (HPA) axis-related genetic variants influence the stress response
(2019)
The physiological stress system includes the hypothalamic-pituitary-adrenal (HPA) axis and the sympathetic-adrenal-medullary system (SAM). Parameters representing these systems, such as cortisol, blood pressure or heart rate, define the physiological reaction in response to a stressor. The main objective of the studies described in this thesis was to understand the role of HPA-related genetic factors in these two systems. Genetic factors represent one of the components causing individual variations in physiological stress parameters. Five genes involved in the functioning of the HPA axis with regard to stress responses are examined in this thesis: corticotropin-releasing hormone (CRH), the glucocorticoid receptor (GR), the mineralocorticoid receptor (MR), the 5-hydroxytryptamine-transporter-linked polymorphic region (5-HTTLPR) in the serotonin transporter (5-HTT), and the brain-derived neurotrophic factor (BDNF) gene. Two hundred and thirty-two healthy participants were genotyped. The influence of genetic factors on physiological parameters, such as post-awakening cortisol and blood pressure, was assessed, as well as their influence on stress reactivity in response to a socially evaluated cold pressor test (SeCPT). Three studies tested the HPA-related genes on three different levels. The first study examined the influences of genotypes and haplotypes of these five genes on physiological as well as psychological stress indicators (Chapter 2). The second study examined the effects of GR variants (genotypes and haplotypes) and promoter methylation levels on both the SAM system and the HPA axis stress reactivity (Chapter 3). The third study comprised the characterization of CRH promoter haplotypes in an in vitro study and the association of the CRH promoter with stress indicators in vivo (Chapter 4).
The subject of this thesis is hypercyclic, mixing, and chaotic C0-semigroups on Banach spaces. After introducing the relevant notions and giving some examples, the so-called hypercyclicity criterion and its relation to weak mixing are treated. Some new equivalent formulations of the criterion are given, which are used to derive a very short proof of the well-known fact that a C0-semigroup is weakly mixing if and only if each of its operators is. Moreover, it is proved that under certain "regularity conditions" every hypercyclic C0-semigroup is weakly mixing. Furthermore, it is shown that for a hypercyclic C0-semigroup there is always a dense set of hypercyclic vectors having infinitely differentiable trajectories. Chaotic C0-semigroups are also considered. It is proved that they are always weakly mixing and that in certain cases chaoticity is already implied by the existence of a single periodic point. Moreover, it is shown that strongly elliptic differential operators on bounded C^1-domains never generate chaotic C0-semigroups. A thorough investigation of transitivity, weak mixing, and mixing of weighted composition operators follows, and complete characterisations of these properties are derived. These results are then used to completely characterise hypercyclicity, weak mixing, and mixing of C0-semigroups generated by first-order partial differential operators. Moreover, a characterisation of chaos for these C0-semigroups is obtained. All these results are achieved on spaces of p-integrable functions as well as on spaces of continuous functions and are illustrated by various concrete examples.
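For orientation, the standard notions used above read as follows (a sketch of the usual definitions, with notation assumed rather than taken from the thesis):

```latex
Let $(T_t)_{t\ge 0}$ be a $C_0$-semigroup on a separable Banach space $X$.
\begin{itemize}
  \item $(T_t)$ is \emph{hypercyclic} if there exists $x \in X$ whose orbit
        $\{T_t x : t \ge 0\}$ is dense in $X$.
  \item $(T_t)$ is \emph{weakly mixing} if the direct sum
        $(T_t \oplus T_t)$ is topologically transitive on $X \oplus X$.
  \item $(T_t)$ is \emph{mixing} if for all nonempty open sets
        $U, V \subseteq X$ there is $t_0 \ge 0$ such that
        $T_t(U) \cap V \neq \emptyset$ for all $t \ge t_0$.
  \item $(T_t)$ is \emph{chaotic} if it is hypercyclic and its set of
        periodic points is dense in $X$.
\end{itemize}
```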
Hydrodynamic processes play a fundamental role in the distribution of salt within mangrove-fringed estuaries and mangrove forests. In this thesis, two hydrodynamic processes and their ecological implications were examined. (1) Passive Irrigation and Functional Morphology of Crustacean Burrows in Rhizophora-forests. The mangrove Rhizophora excludes more than 90% of the seawater salt at water intake at the roots. By means of conductivity methods and resin casting, it was found that crustacean burrows play a key role in the removal of excess salt from the root zone. Salt diffuses from the roots into the burrows, and is efficiently flushed from the burrows by rainwater infiltration and tidal irrigation. The burrows contribute significantly to favourable conditions for the growth of Rhizophora trees. (2) Trapping of Mangrove Propagules due to Density-driven Secondary Circulation in Tropical Estuaries. In North East Australian estuaries, mangrove propagules are drifted upstream by density-driven axial surface convergences. Propagules accumulate in hydrodynamic traps upstream from suitable habitat, where they are trapped at least for the entire tropical dry season. Axial convergences may provide an efficient barrier for propagule exchange across estuaries. In such estuaries, mangrove populations can be regarded as floristically isolated, not unlike island communities, even though the populations lie on a continuous coastline. This effect may contribute to the disjunct distribution observed in some mangrove species. The outcomes of this work contribute to the understanding of the importance of salt as a growth and habitat-restricting factor in the mangrove environment.